Mastering Enterprise BI: Extend the Data Source with Named Calculations, Pt. I

Monday Mar 13th 2006 by William Pearson

Analysis Services 2005 introduces yet another flexible capability: Named Calculations within the Data Source View. Join Architect Bill Pearson in a hands-on exploration of another means for supporting agings and other cube structures by creating a "virtual" formula column in an underlying source table.

About the Series ...

This article is a member of the series Introduction to MSSQL Server Analysis Services. The series is designed to provide hands-on application of the fundamentals of MS SQL Server Analysis Services, with each installment progressively presenting features and techniques designed to meet specific real-world needs. For more information on the series, please see my initial article, Creating Our First Cube.

Note: To follow along with the steps we undertake, the following components, samples and tools are recommended, and should be installed according to the respective documentation that accompanies MSSQL Server 2005:

  • Microsoft SQL Server 2005 Database Engine

  • Microsoft SQL Server 2005 Analysis Services

  • Microsoft SQL Server 2005 Integration Services

  • Business Intelligence Development Studio

  • Microsoft SQL Server 2005 sample databases

  • The Analysis Services Tutorial sample project and other samples that are available with the installation of the above.

To successfully replicate the steps of the article, you also need to have:

  • Membership within one of the following:

    • the Administrators local group on the Analysis Services computer

    • the Server role in the instance of Analysis Services

  • Read permissions within any SQL Server 2005 sample databases we access within our practice session, as appropriate.

Note: Current Service Pack updates are assumed for the operating system, MSSQL Server 2005 ("MSSQL Server"), MSSQL Server 2005 Analysis Services ("Analysis Services"), MSSQL Server 2005 Reporting Services ("Reporting Services") and the related Books Online and Samples. Images are from a Windows 2003 Server environment, but the steps performed in the articles, together with the views that result, will be quite similar within any environment that supports MSSQL Server 2005 and its component applications.

About the Mastering Enterprise BI Articles ...

Having implemented, and developed within, most of the major enterprise BI applications for over fourteen years, and having developed an appreciation for the marriage of ease of use and analytical power through my background in Accounting and Finance, I have come to appreciate the leadership roles Cognos and other vendors have played in the evolution of OLAP and enterprise reporting. As I have stated repeatedly, however, I have become convinced that the components of the Microsoft integrated BI solution (including MSSQL Server, Analysis Services, and Reporting Services) will commoditize business intelligence. It is therefore easy to see why a natural area of specialization for me has become the conversion of Cognos (and other) enterprise BI to the Microsoft solution. In addition to converting formerly dominant enterprise Business Intelligence systems, such as Cognos, Business Objects, Crystal, and others, to the Reporting Services architecture, I regularly conduct strategy sessions about these conversions with large organizations in a diverse range of industries – the interest grows daily as awareness of the solution becomes pervasive. Indeed, the five-to-six-plus figures that many can shave from their annual IT budgets represent a compelling sweetener to examining this incredible toolset.

The purpose of the Mastering Enterprise BI subset of my Introduction to MSSQL Server Analysis Services series is to focus on techniques for implementing features in Analysis Services that parallel those found in the more "mature" enterprise OLAP packages. In many cases, which I try to outline in my articles at appropriate junctures, the functionality of the OLAP solutions within well-established, but expensive, packages, such as Cognos PowerPlay Transformer and Cognos PowerPlay, can be met – often exceeded – in most respects by the Analysis Services / Reporting Services combination – at a tiny fraction of the cost. The vacuum of documentation comparing components of the Microsoft BI solution to their counterparts among the dominant enterprise BI vendors, to date, represents a serious "undersell" of both Analysis Services and Reporting Services, particularly from an OLAP reporting perspective. I hope to contribute to making this arena more accessible to everyone, and to share my implementation and conversion experiences as the series evolves – and, within the context of the Mastering Enterprise BI articles, to demonstrate that the ease of replicating popular enterprise BI features in Analysis Services will be yet another reason that the Microsoft solution will commoditize Business Intelligence.

For more information about the Mastering Enterprise BI articles, see the section entitled "About the Mastering Enterprise BI Articles" in my article Relative Time Periods in an Analysis Services Cube, Part I.

Introduction

In past articles of this series, I presented ways of "extending" the data source tables underlying our Analysis Services 2000 cubes through the use of SQL expressions within the Member Key and Member Name columns (in the case of dimension structures), and in the Source column (in the case of measures). In some scenarios (such as one that I presented in my article Mastering Enterprise BI: Create Aging "Buckets" in a Cube, where I proposed the use of an IIF / CASE approach to build the dimensional structure needed to support aging buckets into a sample cube), we got a glimpse of how cumbersome the use of SQL expressions within these rather limited selectors could become, even when the approach delivered the desired results in our business environments. Among many overall improvements and added conveniences in the design arena, Analysis Services 2005 offers us far more flexibility in this area, as well.

As I stated in Introduction to MSSQL Server Analysis Services: Introducing Data Source Views, the advent of the Data Source View represents a significant design and development enhancement within Analysis Services. Forming a central, unified view of the metadata within our Analysis Services Project, the Data Source View contains the logical model of the schema used by database objects, including cubes, dimensions, and so forth. A Data Source View can be built to represent one or more Data Sources, allowing us to integrate data from multiple data stores within a single cube, or even dimension. The Data Source View serves as an abstract layer: the Analysis Services database objects are not bound directly to the underlying physical objects within the supporting data stores, but are bound, instead, to the logical objects within the Data Source View.

As we also noted in Introducing Data Source Views, as well as in other articles, one of many advantages that the Data Source View layer offers is that it can contain logical objects, such as queries, relationships, and calculated columns, that do not exist within (and, indeed, are entirely separate from) the underlying data sources. This factor alone offers a great deal of power in scenarios where, for whatever reason, we cannot create these objects within the data sources upon which we are constructing our Analysis Services Projects. A great example of such a logical object is the Named Calculation.

In this two-part article, we will examine Named Calculations, and get hands-on exposure to the process of adding them to a basic cube we construct within the new Business Intelligence Development Studio. We will overview the creation of Named Calculations, and discuss ways in which they can offer flexibility in general cube design and development. As a part of our examination of the steps, we will:

  • Prepare Analysis Services, and our environment, by creating an Analysis Services Project to house our development steps, and to serve as a platform for the design of a quick cube model, within which to perform subsequent procedures in our session (Part I);
  • Create a Data Source containing the information Analysis Services needs to connect to a database (Part I);
  • Create a Data Source View containing schema information (Part I);
  • Add examples of Named Calculations within the Data Source View (Part I);
  • Create a Cube containing data from our sample relational tables (Part II);
  • Create a Dimension based upon two of our Named Calculations to support "aging buckets" (Part II);
  • Deploy our Analysis Services Solution (Part II);
  • Browse the Cube, focusing on the new aging dimension structures (Part II).

Introducing Named Calculations

Overview and Discussion

As many of us are already aware, the Business Intelligence Development Studio provides the environment and the tools that we need to design and develop business intelligence solutions based upon Analysis Services 2005.

We will create an Analysis Services Project within the Business Intelligence Development Studio to house our development efforts. As we have noted in the past, the pre-assembled Analysis Services Project that makes its home within the Studio assists us in organizing and managing the numerous objects that we will need to support our efforts to create and deploy our Analysis Services database.

We will leverage the Cube Wizard in this article to quickly design and create a cube, allowing us to focus on the subject matter of the article with minimal peripheral distraction. The Cube Wizard not only simplifies the design and creation of our cubes, as it did within Analysis Services 2000; the Analysis Services 2005 Cube Wizard is also more powerful, leveraging IntelliCube technology to examine and classify many of the attributes of our data. Analysis Services can determine, for example, prospective fact tables, dimensions, hierarchies, levels and other structural members of our cubes from a given database schema at which it is pointed. Regardless of whether we make a habit of using the wizard in our cube development efforts, it certainly provides a way to rapidly generate a cube, if only to eliminate part of the repetitive work involved in creating a "starting point" model, which we can then "prune and groom" to more precisely meet the business requirements of our employers and customers.

Considerations and Comments

For purposes of the practice exercises within this series, we will be working with samples that are provided with MSSQL Server 2005 Analysis Services. The samples with which we are concerned include, predominantly, the Adventure Works DW Analysis Services database (with member objects). The Adventure Works DW database and companion samples are not installed by default in MSSQL Server 2005. The samples can be installed during Setup, or at any time after MSSQL Server has been installed.

The topics "Running Setup to Install AdventureWorks Sample Databases and Samples" in SQL Server Setup Help or "Installing AdventureWorks Sample Databases and Samples" in the Books Online (both of which are included on the installation CD(s), and are available from www.Microsoft.com and other sources, as well), provide guidance on samples installation. Important information regarding the rights / privileges required to accomplish samples installation, as well as to access the samples once installed, is included in these references.

Hands-On Procedure

We will begin by creating a new project within the Business Intelligence Development Studio, wherein we will create a Data Source pointed to a database sample provided with MSSQL Server 2005. This way, anyone with access to the installed application set and its samples can complete the steps in the practice session.

Preparation

Create a New Analysis Services Project

We begin our preparation within Business Intelligence Development Studio, where we will create a new Microsoft SQL Server 2005 Analysis Services (SSAS) Project. As we have noted in other articles, those of us who have worked within earlier versions of Reporting Services, or within the Visual Studio development environment under other circumstances, will recognize the general look and feel of the Studio. Under this new style of development for Analysis Services, to which much of the documentation refers as "project mode," we create an Analysis Services Project, which houses a set of Analysis Services objects. The object set can include Data Sources, Data Source Views, Dimensions, Cubes, and other components.

Using an Analysis Services Project template affords us many benefits, such as helping us to organize and manage the items that are required to build, debug, and deploy an Analysis Services database. The development of an Analysis Services cube, which is our objective within this lesson, as well as one of the most common tasks in Analysis Services, can be accelerated through the use of the Analysis Services 2005 Cube Wizard, which both guides and simplifies the process, as is the way of wizards. IntelliCube technology, which can detect the relationships between attributes, dimensions, and facts, makes the wizard more effective than the wizard that was available in Analysis Services 2000.

While we will not go into the processes and structures that underlie our immediate topic, building a basic cube from scratch, let's make a mental note that all objects that we create and store within a given project are defined as files (using an XML representation) that eventually are deployed to an Analysis Services database.

1.  Click the Start button.

2.  Select Microsoft SQL Server 2005 within the Program group of the menu.

3.  Click SQL Server Business Intelligence Development Studio, as depicted in Illustration 1.


Illustration 1: Opening SQL Server Business Intelligence Development Studio

The Microsoft Visual Studio 2005 development environment opens, beginning with the Start page, as shown in Illustration 2.


Illustration 2: The Start Page, Microsoft Visual Studio 2005 Development Environment (Compressed View)

4.  Close the Start Page tab.

5.  Select File --> New on the Visual Studio main menu.

6.  Select Project from the cascading menu, as depicted in Illustration 3.


Illustration 3: Beginning a New Project ...

The New Project dialog appears.

7.  Select Business Intelligence Projects in the Project types pane of the dialog.

8.  Select Analysis Services Project in the Templates pane, as shown in Illustration 4.


Illustration 4: Select Analysis Services Project

NOTE: The templates that appear in your template pane may differ, depending upon which SQL Server 2005 components are installed in your environment, as well as whether additional templates (for Business Intelligence Projects or other types of projects that can be created) have been defined in Visual Studio.

9.  Change the project Name (currently displaying a default) to the following:

ANSYS045 Named Calculations

The Solution Name changes to match the project Name by default.

10.  Navigate to a convenient location to store the Project and Solution files, modifying the Location box accordingly (Visual Studio will create a directory based upon our input here).

The New Project dialog appears similar to that depicted in Illustration 5.


Illustration 5: The New Project Dialog, with our Input

11.  Click OK to accept our input and to create the new Analysis Services Project.

The New Project dialog closes, as Visual Studio creates the project, and solution ANSYS045 Named Calculations appears in the Solution Explorer as shown in Illustration 6.


Illustration 6: The New Analysis Services Project Appears

Solution Explorer presents a tree view of the objects contained in the solution, the primary management unit within the Business Intelligence Development Studio, which can contain multiple projects. Individual projects, such as the one we have created, themselves contain folders for the objects that can be defined for projects of a similar type. It is, after all, the purpose of an Analysis Services Project to group related files containing the XML code behind the objects in a given Analysis Services database.

As we can see in the present instance, the Analysis Services Project template, upon which our ANSYS045 Named Calculations project was based, contains the following folders:

  • Data Sources
  • Data Source Views
  • Cubes
  • Dimensions
  • Mining Structures
  • Roles
  • Assemblies
  • Miscellaneous

Anytime we create a new Analysis Services Project, the Solution Explorer and the Properties window are visible and docked, as we see them in the present case. The following windows are hidden and docked, initially, and appear on the right, bottom, or left side of the development environment, depending upon where they are docked:

  • Server Explorer
  • Toolbox
  • Task List
  • Error List

Viewing any of the hidden windows is as simple as positioning the pointer over its tab; the window reappears on mouseover. We can also click the Auto Hide button (depicted for the Solution Explorer in Illustration 7) to hide or unhide a window.


Illustration 7: The Auto Hide Button – Solution Explorer

Finally, we can always open a closed window by selecting it from the View menu atop the development environment, as shown in Illustration 8.


Illustration 8: Reopen Closed Windows from the View Menu ...

Having created a new Analysis Services Project, we are ready to define a Data Source and Data Source View.

Define a Data Source

Defining a Data Source, as we have discovered in past articles, is typically the first step we take with an Analysis Services Project. A project can have multiple Data Sources, within which we are essentially defining the connection string used to connect to the actual sources of our data. While we can define connections to source databases on remote computers in the business environment, we will, in this article, be working with one of the sample databases that are available to us when we install MSSQL Server 2005, the AdventureWorksDW database.

Let's create a Data Source within our new ANSYS045 Named Calculations project by taking the following steps:

1.  Right-click the Data Sources folder within the Solution Explorer.

2.  Select New Data Source from the context menu that appears, as depicted in Illustration 9.


Illustration 9: Initial Step in Creating a New Data Source

The Data Source Wizard appears, opening with the Welcome to the Data Source Wizard page, as shown in Illustration 10.


Illustration 10: Welcome to the Data Source Wizard ...

3.  Click Next.

The Select how to define the connection page appears. This page allows us to define a Data Source based upon a new or existing connection. (It also allows us to base a Data Source upon a connection that has been previously created, as long as the existing Data Source definition exists within the same project, or within another project housed in the same solution within which we are currently working.)

4.  Click New on the Select how to define the connection page.

The Connection Manager dialog appears.

5.  Ensure that .Net Providers\SqlClient Data Provider is selected in the Provider selection box atop the Connection Manager dialog.

6.  Type or select the appropriate server name (or server name with instance) in the Server name box.

We can substitute "localhost" or "localhost\<instance name>" here, if it is appropriate to our environment.

7.  Select / complete the appropriate authentication choice for your environment in the section labeled Log on to the server (I selected the radio button to the left of Use Windows Authentication in the lab environment in which I prepared this article).

8.  Select AdventureWorksDW in the Select or enter a database name selector box, within the Connect to a database section of the dialog.

The Connection Manager dialog appears similar to that depicted in Illustration 11.


Illustration 11: Connection Manager Dialog in My Lab Environment

9.  Click the Test Connection button (in the bottom left corner of the Connection Manager dialog) to ascertain that we have a valid connection.

We receive a positive confirmation message, Test connection succeeded, assuming settings in the dialog are in accordance with the above steps, as appropriate for our own environments. The confirmation message box appears as shown in Illustration 12.


Illustration 12: Confirmation Message Indicating Successful Connection Test

10.  Click OK to dismiss the message box and return to the Connection Manager dialog.

11.  Click OK on the Connection Manager dialog to accept our settings and create the data connection.

We return to the Select how to define the connection page of the Data Source Wizard, which appears similar to that depicted in Illustration 13.


Illustration 13: Select How to Define the Connection Page with New Data Connection

12.  Click Next.

We arrive at the Impersonation Information page, where we define connection credentials for the data source.

13.  Click the radio button to the immediate left of Use the service account, as shown in Illustration 14.


Illustration 14: Select the Use the Service Account Option

14.  Click Next.

The Completing the Wizard page appears, as depicted in Illustration 15.


Illustration 15: Completing the Wizard ...

We will leave the assigned Data Source name in place for purposes of our practice session.

15.  Click Finish to create the new Data Source, and to dismiss the Data Source Wizard.

The Wizard closes, and the new Data Source appears in the Data Sources folder within the Solution Explorer as shown in Illustration 16.


Illustration 16: The New Data Source Appears in the Solution Explorer

Having established the basic prerequisites, we are ready, at this stage, to create a Data Source View.

Procedure

Define a Data Source View

Having defined the Data Source within our Analysis Services Project, we turn next to the creation of the Data Source View, a procedure that is customary at this stage in most Analysis Services Projects. It is important to realize that, although we work with a "live" data connection that we have defined, we could certainly continue our development efforts against the metadata without an open connection. The Data Source View provides a single, unified view of the metadata from the tables and views that concern us within our project.

NOTE: For more information on Data Source Views, see my article Introduction to MSSQL Server Analysis Services: Introducing Data Source Views at Database Journal.

To define a Data Source View, we will take the following steps:

1.  Right-click the Data Source Views folder within the Solution Explorer.

2.  Select New Data Source View from the context menu that appears, as depicted in Illustration 17.


Illustration 17: Initial Step in Creating a New Data Source View

The Data Source View Wizard appears, opening with the Welcome to the Data Source View Wizard page, as shown in Illustration 18.


Illustration 18: The Initial Page of the Data Source View Wizard

3.  Click Next.

We arrive at the Select a Data Source page, where we see our Adventure Works DW Data Source in the Relational data sources list box on the left side of the page, as depicted in Illustration 19.


Illustration 19: Adventure Works DW Appears in the Relational Data Sources

Our newly created Data Source is positioned as the default, and will serve us in meeting the objectives of our practice exercise. It is particularly useful to note, as we have mentioned in Introducing Data Source Views, as well as other articles, that we can define a primary data source (a single Data Source like Adventure Works DW is an example), and then add tables and views from secondary data sources.

NOTE: Heterogeneous queries are supported as long as one Data Source is a MSSQL Server Data Source.

4.  Click Next.

We arrive at the Select Tables and Views page, where we see the various tables of the Adventure Works DW data source appear in the Available objects list box on the left of the page.

5.  Click FactResellerSales in the Available objects list to select it.

6.  Click the button marked > to move FactResellerSales to the Included objects list on the right half of the page, as shown in Illustration 20.


Illustration 20: Selecting a Table for Inclusion in the Data Source View

7.  Click the Add Related Tables button, located underneath the Included objects list.

Several tables appear within the Included objects list, as depicted in Illustration 21.


Illustration 21: Our Tables Selection for the Data Source View

8.  Click Next.

The Completing the Wizard page appears, as shown in Illustration 22.


Illustration 22: The Final Page of the Data Source View Wizard

As indicated in the Completing the Wizard page, our selection includes the following tables:

  • FactResellerSales
  • DimProduct
  • DimPromotion
  • DimCurrency
  • DimReseller
  • DimTime
  • DimSalesTerritory
  • DimEmployee

9.  Click Finish to create the new Data Source View, and to dismiss the Data Source View Wizard.

Our new Data Source View, Adventure Works DW, appears in the Data Source Views folder within the Solution Explorer window, as depicted in Illustration 23.


Illustration 23: The New Data Source View in the Solution Explorer

The Data Source View is also presented within the Data Source View Designer, which now opens within Business Intelligence Development Studio, as shown in Illustration 24.


Illustration 24: The New Data Source View in the Data Source View Designer

One of several designers within the Studio, the Data Source View Designer for our new Data Source View contains numerous elements that aid us in our organization and design efforts. We can modify the composition of the tables that comprise our view, as well as perform other activities, here. We will get some hands-on exposure to one such activity, working with Named Calculations, within the next section.

Add Named Calculations within the Data Source View Designer

The capability to add Named Calculations within the Data Source View is yet another "abstract layer" convenience offered in Analysis Services 2005. A Named Calculation allows us to insert a SQL expression that is then sent to the provider, allowing us to "extend" the existing data source for added flexibility. Whereas we were limited to decidedly meager avenues for the insertion of expressions within the Analysis Manager of the previous version (see my articles Derived Measures vs. Calculated Measures and Mastering Enterprise BI: Create Aging "Buckets" in a Cube, within this series, as a couple of examples), the new option to add Named Calculations within the Data Source View Designer provides a far more robust, articulate means of achieving what amounts to extending our relational tables or views.

There are several ways to add / modify Named Calculations from within the Data Source View Designer, as we shall see, including:

  • Clicking New Named Calculation in the Toolbar pane;
  • Right-clicking a table in the Tables pane of the Data Source View Designer and selecting New Named Calculation ... from the context menu that appears;
  • Right-clicking a table in the Diagram pane of the Data Source View Designer and selecting New Named Calculation ... from the context menu that appears;
  • Right-clicking a defined Named Calculation in the Diagram pane of Data Source View Designer, and selecting Edit Named Calculation from the context menu that appears.

We can reference tables outside the schema defined within our Data Source View, as well. This, combined with the capability to use elaborate SQL expressions, places unprecedented power at this abstract layer, where we can place "calculated columns" to enhance the underlying tables and views.
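
The mechanics are easy to see outside the designer. The sketch below uses Python's sqlite3 module purely as a stand-in for the relational provider (the table and sample rows are invented for illustration; inside Analysis Services, the designer sends the expression to the underlying provider for us): a Named Calculation amounts to a SQL expression appended to the SELECT list, yielding a "virtual" column without ever altering the physical table.

```python
import sqlite3

# In-memory stand-in for a relational source table (illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DimEmployee (EmployeeKey INTEGER, Gender TEXT)")
conn.executemany("INSERT INTO DimEmployee VALUES (?, ?)",
                 [(1, 'M'), (2, 'F'), (3, 'M')])

# A Named Calculation is conceptually a SQL expression added to the
# SELECT list -- the table itself is never modified.
rows = conn.execute("""
    SELECT EmployeeKey,
           Gender,
           CASE WHEN Gender = 'M' THEN 'Male' ELSE 'Female' END
               AS GenderDescription
    FROM DimEmployee
""").fetchall()

for row in rows:
    print(row)
```

The expression is evaluated by the provider at query time, so the source schema is untouched; this is exactly the "abstract layer" behavior that lets us add such columns even when we cannot alter the underlying database.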

A Simple Named Calculation: Create a Member Description with CASE

Let's create a simple Named Calculation to provide a member name / description where, at present, only a single character is available from within the relational source, the DimEmployee table.

1.  Click the DimEmployee table to select it, within the Adventure Works DW.dsv [Design] tab.

2.  Click the New Named Calculation button in the Designer Toolbar pane, as depicted in Illustration 25.


Illustration 25: Click the New Named Calculation Button in the Toolbar Pane ....

The Create Named Calculation dialog appears.

3.  Type the following into the Column Name box, atop the dialog:

ANSYS045_Gender Description

4.  Input the following into the Description box:

ANSYS045_Gender Description

5.  Type the following into the Expression box of the dialog:

CASE WHEN Gender = 'M' THEN 'Male' ELSE 'Female' END

The Create Named Calculation dialog appears, with our input, as shown in Illustration 26.


Illustration 26: The Create Named Calculation Dialog with Our Input

6.  Click OK to create the new Named Calculation.

The dialog closes, and we can see the Named Calculation within the DimEmployee table, where it appears at the end of the listed columns, as depicted in Illustration 27.


Illustration 27: The New Named Calculation in the DimEmployee Table

Let's take a look at the new Named Calculation at the data level – a great way to ascertain the effectiveness of our SQL expression. (The Create Named Calculation dialog will block an attempt to create a Named Calculation whose input is incomplete, or whose Expression box contains syntax errors, but it can hardly be expected to check our logic, should we happen to "correctly specify" something we do not intend.) The Data Source View provides a way to browse a subset of the data from the perspective of the abstract layer, with the Explore Data option.

7.  Right-click the DimEmployee table within the Data Source View Designer.

8.  Select Explore Data from the context menu that appears, as shown in Illustration 28.


Illustration 28: Select Explore Data from the Context Menu ....

The Load Sample Data message box briefly appears, and then a new tab appears within BI Development Studio. Within this tab, named Explore DimEmployee Table, we can browse table data – including our Named Calculation, as we shall see.

9.  On the Explore DimEmployee Table tab, scroll to the rightmost column, ANSYS045_Gender Description.

ANSYS045_Gender Description appears, similar to what is partially depicted in Illustration 29.


Illustration 29: The New Named Calculation Appears within the Explore Tab

We can easily see that the Named Calculation is meeting its intended objective by contrasting its member fields with the associated fields in the Gender column to its left. This Named Calculation might be useful as one approach to providing a description, versus the single-letter value held in the original table, for, as an example, a member name definition within a cube.

10.  Right-click the Explore DimEmployee Table tab.

11.  Select Close from the context menu that appears, as shown in Illustration 30.


Illustration 30: Closing the Explore DimEmployee Table Tab ...

We are returned to the Data Source View Designer.

A More Involved Named Calculation: Support for Aging "Buckets" with CASE

Let's look at a slightly more involved example – and a way to meet a need for generation of aging "buckets" similar to the ones we created (using limited Analysis Services 2000 options) within our cube in Mastering Enterprise BI: Create Aging "Buckets" in a Cube, the article to which we referred earlier.

In the article, as some of us may recall, we examined the use of a dimension whose members were based upon conditional logic to attain our objectives. I stated that this was but one of several approaches I have used to support aging stratification / grouping structures within Analysis Services (having also accomplished aging with calculated measures, as well as via hybrid approaches where I performed aging of transactions at an intermediate view level within the RDBMS, etc.). We noted that, in the same manner that derived measures can be based upon conditional logic (as an example, I noted that I often used CASE statements in derived measures for this purpose, when the RDBMS under consideration was MSSQL Server), we could use conditional logic to create the members of a dimensional level outfitted for the purpose of supporting aging groups.

NOTE: For more information on derived measures in general (from an Analysis Services 2000 perspective), see my article Derived Measures vs. Calculated Measures, also within this Database Journal series.

I stated that the best approach to creating aging "buckets," whether through the method we examined in that article, or through the alternatives suggested there and in later articles, depends upon many factors, chief among them optimization considerations. Our intent then, as now, is not to focus upon optimization per se, as this is an area that can vary greatly from environment to environment; we are simply presenting one option for achieving aging groupings. The option is conceptually very similar to our approach via the capabilities we were afforded in Analysis Services 2000 – the Named Calculations afforded in Analysis Services 2005 simply make it easier to accomplish, even with more elaborate conditional logic. Of course, it still remains to the reader to determine whether the support structure itself (a dimension structure based upon conditional logic) is appropriate, or whether another method is a better "fit" within their own setting.

In our previous approaches, we cited an example where we were to add aging capabilities to a cube that a client already had in place. In essence, they wished to age "accounts receivable," based upon customer transaction dates, over a given time frame within their current year. We will replicate the substance of the work we performed in the practice example that follows, using our practice data source instead of the sample FoodMart data source (an MS Office Access database) we used in the Analysis Services 2000 example.

Let's assume, again, that the consumers have told us that they wish to age the customer accounts into the following relatively common aging buckets:

  • < 30 days

  • 30 – 60 days

  • 61 – 90 days

  • 91 – 120 days

  • > 120 days

Limitations of the underlying sample data require us to work around the fact that there is no "invoice" date. Indeed, the simple business scenario presented within the underlying relational database gives us little more than a "cash basis" depiction of the business: the sales transactions, at least those collected from the AdventureWorks OLTP into the AdventureWorks DW database that we are using as the source for our cube, do not carry the information that would be necessary to generate conventional accounts receivable agings. (In the real world, we would likely create accounts receivable agings from transactional data within the OLTP database, and not from data warehouse / mart data, anyway.) We will keep things simple and use the underlying data source as we have aligned it to support our cube.

From the perspective of the data source (AdventureWorks DW), the transaction date (we will use the Ship Date, as it at least approximates a date that revenue would become "earned and measurable," and thus support an "account receivable") that we will use resides within the DimTime table, joined to the FactResellerSales table as depicted in Illustration 31.


Illustration 31: Source of the "Transaction Date"

Because the data source provides no easily discernible, subsequent "collection" date for the "receivables" generated on given transaction dates (again, sales appear to be "cash basis" in this simple data), we will need to make some assumptions. For purposes of our exercise, we will assume that none of the transactions with ship dates within the last six months has had a matching receipt. This will give us some aging strata to support the illustration of our point. (Again, the idea here is to demonstrate a method for creating aging buckets, not to model an accounts receivable system.)

A quick query of the tables concerned indicates that the current (in our scenario, the "most recent") year for available data is 2004, with transactions extending through the early days of June. Within our practice scenario, we will age transactions that have shipment dates between January 1, 2004 and June 30, 2004, making the assumptions we have already discussed. We will take the following steps to create Named Calculations as an approach to providing both keys and names for the members of an aging dimension within our cube.
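Such a reconnaissance query might resemble the sketch below (run in SQL Server Management Studio against AdventureWorksDW; it assumes the join of FactResellerSales.ShipDateKey to DimTime.TimeKey depicted in Illustration 31):

```sql
-- Determine the range of ship dates represented in the fact table
SELECT MIN(t.FullDateAlternateKey) AS EarliestShipDate,
       MAX(t.FullDateAlternateKey) AS LatestShipDate
FROM   dbo.FactResellerSales AS f
       INNER JOIN dbo.DimTime AS t
           ON f.ShipDateKey = t.TimeKey;
```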

1.  Right-click the DimTime table, in the Tables pane.

2.  Select New Named Calculation ... from the context menu that appears, as shown in Illustration 32.


Illustration 32: Select New Named Calculation ... from the Tables Pane

The Create Named Calculation dialog appears.

3.  Type the following into the Column Name box, atop the dialog:

ANSYS045_Aging_Bucket_Key

4.  Input the following into the Description box:

Numeric Aging Bucket Key

5.  Type (or cut and paste) the following into the Expression box of the dialog:


CASE 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      < 30 THEN 29 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      BETWEEN 30 AND 60 THEN 60 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      BETWEEN 61 AND 90 THEN 90 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      BETWEEN 91 AND 120 THEN 120 
   ELSE 999 
END

The Create Named Calculation dialog appears, with our input, as depicted in Illustration 33.


Illustration 33: The Create Named Calculation Dialog with Our Input

6.  Click OK to create the new Named Calculation.

The dialog closes.

Next, we will create a Named Calculation to provide member names for our aging "buckets."

7.  Right-click the DimTime table, this time within the Diagram pane.

8.  Select New Named Calculation ... from the context menu that appears, as shown in Illustration 34.


Illustration 34: Select New Named Calculation from the Diagram Pane ...

The Create Named Calculation dialog appears.

9.  Type the following into the Column Name box, atop the dialog:

ANSYS045_Aging_Bucket_Name

10.  Input the following into the Description box:

Aging Bucket Name

11.  Type (or cut and paste) the following into the Expression box of the dialog:


CASE 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      < 30 THEN '< 30' 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      BETWEEN 30 AND 60 THEN '30 - 60' 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      BETWEEN 61 AND 90 THEN '61 - 90' 
   WHEN DATEDIFF(Day, DimTime.FullDateAlternateKey, '6/30/2004') 
      BETWEEN 91 AND 120 THEN '91 - 120' 
   ELSE '> 120' 
END

The Create Named Calculation dialog appears, with our input, as depicted in Illustration 35.


Illustration 35: The Create Named Calculation Dialog with Our Input

12.  Click OK to create the new Named Calculation.

The dialog closes.

13.  Expand the DimTime table within the Tables pane, by clicking the "+" sign to its immediate left.

14.  Scroll down as required to see the two new Named Calculations appear toward the bottom of the tree, as shown in Illustration 36.


Illustration 36: The Named Calculations Appear in the Tables Pane

Here, as well as in other places, we can right-click the Named Calculation and modify it, should the need arise. Let's verify, as we did with the first Named Calculation we created in the last section, the effectiveness of the SQL expressions we have used.
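Because a Named Calculation exists only within the Data Source View, and not in the underlying database, we can also spot-check its logic outside BI Development Studio by repeating the expression in an ordinary query (a sketch only, with 6/30/2004 as an assumed aging "as of" date):

```sql
-- Spot-check the day counts that drive the aging buckets
SELECT TOP 10
       FullDateAlternateKey,
       DATEDIFF(Day, FullDateAlternateKey, '6/30/2004') AS DaysAged
FROM   dbo.DimTime
ORDER BY FullDateAlternateKey DESC;
```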

15.  Right-click the DimTime table within the Tables pane.

16.  Select Explore Data from the context menu that appears, as depicted in Illustration 37.


Illustration 37: Select Explore Data from the Context Menu ...

The Load Sample Data message box briefly appears, as we saw earlier, and then a new tab, named Explore DimTime Table, appears within BI Development Studio. Within this tab, we can see our new Named Calculations.

17.  On the Explore DimTime Table tab, scroll to the rightmost two columns, ANSYS045_Aging_Bucket_Key and ANSYS045_Aging_Bucket_Name.

A sample of the two new Named Calculations is partially depicted in Illustration 38.


Illustration 38: The New Named Calculations Appear within the Explore Tab

We can once again gain confidence that the Named Calculations are producing their intended results by contrasting representative member fields with the associated fields in the FullDateAlternateKey column near the far left of the table. (We assigned any "out of scope" dates a key of "999" as an example. We might have made this key any value, preferably one that lends itself to filtering in browses and reports from our cube, as we mentioned earlier.)
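To see how the dates distribute across the buckets, including the "999" out-of-scope key, we might run an aggregate query against the source table, repeating a CASE expression equivalent to the Named Calculation (again a sketch only, with 6/30/2004 as an assumed "as of" date):

```sql
-- Count DimTime rows falling into each aging bucket key
SELECT d.AgingBucketKey,
       COUNT(*) AS DateCount
FROM  (SELECT CASE
                 WHEN DATEDIFF(Day, FullDateAlternateKey, '6/30/2004')
                    < 30 THEN 29
                 WHEN DATEDIFF(Day, FullDateAlternateKey, '6/30/2004')
                    BETWEEN 30 AND 60 THEN 60
                 WHEN DATEDIFF(Day, FullDateAlternateKey, '6/30/2004')
                    BETWEEN 61 AND 90 THEN 90
                 WHEN DATEDIFF(Day, FullDateAlternateKey, '6/30/2004')
                    BETWEEN 91 AND 120 THEN 120
                 ELSE 999
              END AS AgingBucketKey
       FROM   dbo.DimTime) AS d
GROUP BY d.AgingBucketKey
ORDER BY d.AgingBucketKey;
```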

18.  Right-click the Explore DimTime Table tab.

19.  Select Close from the context menu, as we did before, to dismiss the Explore DimTime Table tab.

We are returned to the Data Source View Designer. We will continue our work in the subsequent article, within which we will create a basic cube structure, and then put the Named Calculations that we have created within this session to work supporting the dimensional structure of our cube.

20.  Select File --> Save All from the main menu, to save all work for Part II, as shown in Illustration 39.


Illustration 39: Saving Our Work for Part II ...

21.  Select File --> Exit to leave the Business Intelligence Development Studio, when desired.

Conclusion

In this, the first half of a two-part article, we began an examination of Named Calculations. We prepared Analysis Services, as well as our environment, for the steps in Part I and Part II, creating an Analysis Services project within which to create Named Calculations, and from which to put them to work within a basic cube. We created a Data Source containing the information Analysis Services needed to connect to our source database (the AdventureWorks DW sample), and then created a Data Source View.

Having completed preparation for the focus of our session, we got some hands-on exposure to the design and creation of Named Calculations within the Data Source View. This positioned us to put the Named Calculations to work in Part II, where we will create a dimensional structure for our "aging buckets," which we will support with our new Named Calculations. We will then deploy and browse our cube, to verify the effectiveness of our solution.

» See All Articles by Columnist William E. Pearson, III

Discuss this article in the MSSQL Server 2000 Analysis Services and MDX Topics Forum.
