Age transactional data with derived dimensional structures. Business Intelligence Architect and CPA Bill Pearson shows a cube-based approach to creating aging "buckets" for enterprise data.
About the Series ...
This article is a member of the series Introduction to MSSQL Server Analysis Services. The series is designed to provide hands-on application of the fundamentals of MS SQL Server Analysis Services, with each installment progressively adding features and techniques designed to meet specific real-world needs. For more information on the series, as well as the hardware/software requirements to prepare for the exercises we will undertake, please see my initial article, Creating Our First Cube.
Note: Current Service Pack updates are assumed for MSSQL Server 2000, MSSQL
Server 2000 Analysis Services, and the related Books Online
and Samples. Images are from a Windows 2003 Server
environment, within which I have also implemented MS Office 2003, but
the steps performed in the articles, together with the views that result, will
be quite similar within any environment that supports MSSQL Server 2000 and MSSQL Server 2000 Analysis Services ("Analysis
Services" or "MSAS"). The same is generally true,
except where differences are specifically noted, when MS Office 2000 and
above are used in the environment, in cases where MS Office components
are presented in the article.
About the Mastering Enterprise
BI Articles ...
Having implemented, and developed within, most of the major
enterprise BI applications for over ten years, and having developed an
appreciation for the marriage of ease of use and analytical power
through my background in Accounting and Finance, I have come to appreciate the
leadership roles Cognos and other vendors have played in the evolution of OLAP
and enterprise reporting. As I have stated repeatedly, however, I have become
convinced that the components of the Microsoft integrated BI solution
(including MSSQL Server, Analysis Services, and Reporting
Services) will commoditize business intelligence. It is therefore easy to
see why a natural area of specialization for me has become the conversion of
Cognos (and other) enterprise BI to the Microsoft solution. In addition to converting formerly
dominant enterprise Business Intelligence systems, such as Cognos, Business
Objects, Crystal, and others, to the Reporting Services architecture, I regularly
conduct strategy sessions about these conversions with large organizations in a
diverse range of industries; the interest grows daily as awareness of the
solution becomes pervasive. Indeed, the five-to-six-plus figures that many can
shave from their annual IT budgets represent a compelling sweetener to
examining this incredible toolset.
The purpose of the Mastering
Enterprise BI subset of my Introduction to
MSSQL Server Analysis Services series is to focus on techniques for implementing features in
Analysis Services that parallel those found in the more "mature"
enterprise OLAP packages. In
many cases, which I try to outline in my articles at appropriate junctures, the
functionality of the OLAP solutions within well-established, but expensive,
packages, such as Cognos PowerPlay Transformer and Cognos PowerPlay,
can be met, and often exceeded in most respects, by the Analysis Services /
Reporting Services combination at a tiny fraction of the cost. The vacuum of
documentation comparing components of the Microsoft BI solution to their
counterparts among the dominant enterprise BI vendors, to date, represents a
serious "undersell" of both Analysis Services and Reporting Services,
particularly from an OLAP reporting perspective. I hope to contribute to
making this arena more accessible to everyone, and to share my implementation
and conversion experiences as the series evolves and, within the context of
the Mastering Enterprise BI articles, to demonstrate that the ease of
replicating popular enterprise BI features in Analysis Services will be yet
another reason that the Microsoft solution will commoditize Business Intelligence. For more information about the Mastering Enterprise BI articles, see the section entitled "About the Mastering Enterprise BI Articles" in my article, Time Periods in an Analysis Services Cube, Part I.
In this article, we will look at a common business need, the aging of
values. Aging is typically a process by which the enterprise determines the
length of time that has transpired since a transaction (usually financial) has
taken place within an account. Examples of common subject areas for agings in
the business environment include accounts receivable, accounts payable and
inventory, among many less common uses.
Regardless of the specific type of aging that we need to enact, we can apply the
principles we will examine in this article to reach a solution that works
within the OLAP environment. Moreover, and more to the point of our Mastering Enterprise BI series, we can replicate the
functionality provided in many accounting and financial applications, as well
as a plethora of "pre-fab" reporting solutions on the market, within
the integrated Microsoft BI solution.
In this article, we will:
Discuss general aging concepts, and their pervasiveness in the business environment;
Prepare for the exercise by creating a clone of the FoodMart Analysis Services database, within which we will age customer transactions, simulating accounts receivable inside the sample Sales cube;
Apply Source Table Filters (dimension and cube) to limit our cube to a specified transaction date range;
Create an Aged Period shared dimension within the sample cube to provide "buckets" for date-based transactional data;
Put capability in place for use in our verification process later;
Verify the adequacy of our solution by demonstrating the use of the new capabilities from the perspective of the Cube Browser in Analysis Manager;
Look forward to a subsequent article where we create aging buckets through an alternative approach, and comment upon the use of our new structures in the ultimate reporting application. We will also look forward to a subsequent article where we employ the aging buckets that we create in this article within Reporting Services.
An Approach to Aging within a Cube
Overview and Discussion
In this article, we will focus upon an accounts receivable aging scenario, largely because of the pervasiveness of such agings in the business environment, but also because it will work slightly better within the constraints of the sample databases (both relational and Analysis Services), among other sample objects, that are available to virtually anyone who has installed Analysis Services. Regardless of the type of aging we need to create within the cubes we design for clients and employers, the general concept is virtually always the same: to create "buckets" to group, and typically aggregate, transactions based upon their ages. In this article, we will work within an accounts receivable scenario, but I have applied the same principles within inventory agings, where I have assisted clients in determining risk of obsolescence, as well as an ongoing basis for write downs and other adjustments, etc. in the inventory realm. Furthermore, I have implemented underlying structural support in numerous other scenarios where I have applied aging concepts within cube design and development.
Within an accounts receivable aging, customer transactions are classified according to the date of sale, and, although presentations are legion, the end objective is to present an aging schedule of some sort that lists accounts and quantifies the length of time balances / transactions have been outstanding. Because the risk of uncollectibility increases the longer accounts remain unpaid, aging schedules help the organization to minimize risk by revealing patterns of delinquency and helping to create a focus upon where collection efforts should be concentrated.
A well-maintained accounts receivable aging can also help the enterprise in the prevention of loss of future sales, as some customers / clients might be tempted to seek other sources of supply and "jump" to another supplier if, say, their balances reach high levels and other circumstances emerge (recessionary phases in the economy, reduced demand for the customer's goods, general instability, and other factors). The objective of aging is to minimize bad debt on the books, as well as to optimize cash flows and the overall health of the organization.
The approach we take to achieve our ends in this article represents but one of several that I have taken for aging within Analysis Services cubes. It is, in my opinion, both easy to understand and straightforward to implement, given a meaningful transaction date within the data upon which an aging can be based. In our example, we will be "aging" the purchases of customers in a manner that is intended to simulate accounts receivable. The transactions and structures within the FoodMart Sales sample cube are, to say the least, a highly simplified reflection of the real world, but nevertheless provide the basis we need to create a scenario that can be extrapolated to accounts receivable, as well as other accounts where we might wish to similarly age transactions. The idea is to demonstrate a method in a way that is of benefit to virtually anyone who has installed Analysis Services with its accompanying samples.
In this article, we will examine the use of a dimension whose members are based upon conditional logic to attain our objectives. As I said earlier, this is but one of several approaches I have used to perform aging processes in Analysis Services (I have also done it with calculated measures, as well as using hybrid approaches where I perform the aging of the transactions at an intermediate view level within the RDBMS, for various reasons). In the same manner that derived measures can be based upon conditional logic (as an example, I often use CASE statements in derived measures for this purpose, when the RDBMS under consideration is MSSQL Server), we can use conditional logic to create the members of a dimension outfitted for the purpose of housing aging groups.
The approach to take in creating aging "buckets," be it through the method we will examine in this article, or through alternatives we will suggest here and in later articles, depends upon many factors, chief of which are perhaps optimization considerations. Our intent in this article will not be to focus upon optimization per se, as this is an area that can vary greatly from environment to environment. In this article, we will present an approach, an option to achieving aging accumulations. It is up to the reader to determine whether the support structure itself (a dimension based upon conditional logic) is appropriate, or if another method is a better "fit" within their own setting.
For purposes of our practice procedure, we will assume that we have been contacted by a client to assist in the addition of aging capabilities to a cube they already have in place. We meet with representatives of the Accounting department of the FoodMart organization, who inform us that they wish to age accounts receivable based upon customer transaction dates, over the last six months of their current year, 1997. The consumers emphasize repeatedly that queries are to process as rapidly as possible.
The data under consideration, we are told, resides within the Sales cube (which accompanies the installation of Analysis Services as a sample). The consumers tell us that they wish to age the customer accounts into the following relatively common aging buckets:
< 30 days
30 - 60 days
61 - 90 days
91 - 120 days
> 120 days
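To make the bucket definitions concrete before we build them into the cube, here is a minimal Python sketch (an illustrative aside, not part of the article's Analysis Services procedure) that classifies transaction dates into the requested buckets relative to an "as of" date, and aggregates them into a simple aging schedule. The sample dates and amounts are invented for illustration only.

```python
from datetime import date

def age_bucket(txn_date, as_of):
    """Return the aging bucket label for a transaction, based on elapsed days."""
    days = (as_of - txn_date).days
    if days < 30:
        return "< 30"
    elif days <= 60:
        return "30 - 60"
    elif days <= 90:
        return "61 - 90"
    elif days <= 120:
        return "91 - 120"
    else:
        return "> 120"

# Aggregate some invented transactions into an aging schedule,
# "as of" the end of the FoodMart sample's 1997 year.
as_of = date(1997, 12, 31)
transactions = [
    (date(1997, 12, 15), 250.00),   # 16 days old
    (date(1997, 11, 10), 100.00),   # 51 days old
    (date(1997, 8, 2), 75.50),      # 151 days old
]
schedule = {}
for txn_date, amount in transactions:
    bucket = age_bucket(txn_date, as_of)
    schedule[bucket] = schedule.get(bucket, 0.0) + amount

print(schedule)
```

The same classify-then-aggregate pattern is what the derived dimension will perform inside the cube, with Analysis Services handling the aggregation.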
Limitations of the sample Sales cube require us to ignore the fact that there is no "sold" date (indeed, the simple business scenario presented within the cube gives us little more than a "cash basis" depiction of business). The sales transactions that underlie the Sales cube do not indicate important information that would be necessary to generate conventional accounts receivable agings or other attributes. This, however, is what we have to work with within a sample set that everyone has. From the perspective of the data source (FoodMart 2000.mdb), the transaction date that we will use resides within the time_by_day table, joined to the sales_fact_1997 table as shown in Illustration 1.
Illustration 1: Source of the Transaction Date
Because the data source provides no "collection" date for the "receivables" generated on a given transaction date (again, it appears that sales are "cash basis" in this simple data), we will need to make some assumptions. Keep in mind that the idea here is to demonstrate a method for creating aging buckets, and not to model an accounts receivable system, and so forth.
We will focus on the last six months of 1997, and, for purposes of our practice example, assume the unlikely circumstance that no receipts have been posted on the sales that have taken place within those six months. Again, we are limited by the simplicity of the data sources, and are simply practicing the generation of aging accumulations via one straightforward method. The scenario within which we will work is hardly a reflection of a real-world financial system and the data it houses. In the real world, collection dates, as well as "status" flags (such as "received," "receipt matched to receivable transaction," and so forth), would provide us the capability of selecting only "unpaid" or "outstanding" receivables, which we would be able to filter out of the population of "all receivables" as of a given date for aging purposes. (Integrated financial databases, such as CODA-Financials, even allow us to determine other factors within the receivables collection cycle, such as time elapsed between the initial transaction date and the collection date, which are captured as part of the match between receipts and associated receivables, which also marks a transaction as fully / partially paid, etc. The storage of these events and dates provides us the capability to generate useful metrics regarding general cash flows, customer payment trends, and other opportunities for analysis).
As a part of using a dimension whose members are "derived" to perform our aging accumulations, we will also focus upon the underlying RDBMS, the FoodMart MS Access database. The approach we use within our dimension to generate our aging accumulations will differ, somewhat, within the far more likely scenario for a data source within the business environment, in which we would be generating a cube based upon an MSSQL Server database (or perhaps use another common enterprise RDBMS as the primary data source). We will offer an approach that would be more appropriate in constructing a similar set of dimensional members for an MSSQL Server relational data source at the appropriate time, as an added tip in extrapolating the concepts of the lesson to your own business environments.
We confirm our understanding of the requirement with the Accounting department representatives, and set about the creation of the required capabilities.
Considerations and Comments
For purposes of our practice exercise, we will be working with the Sales cube, within a copy of the FoodMart 2000 Analysis Services database. The Sales cube, as most of us are aware, is a member of a group of working samples that accompanies a typical installation of Analysis Services. If the sample FoodMart MSAS database, the FoodMart .mdb that underlies it, or any other sample object to which we refer in our practice exercise, was not installed, or was removed prior to your beginning this article, please see the MSAS documentation, including the Books Online, for the straightforward procedure to restore the database from the archive (.cab) file containing the samples or elsewhere. As of this writing, a copy of the archive / other components can be obtained from the installation CD or via download from the appropriate Microsoft site(s).
We will construct a shared dimension, whose members are derived from conditional logic that will generate aged accumulations. As I stated earlier, I often use calculated members to achieve a similar effect depending upon the environment involved, and the specific optimization considerations therein. While the focus of this article is not optimization, it might be useful to provide a brief treatment of the high-level considerations. A good starting point is to compare and contrast dimensions and calculated members in general, as well as from within the current perspective.
Unlike calculated members (including, of course, calculated measures), whose values are created at runtime based upon the MDX expression(s) they contain, a dimension, just like a measure, is calculated during the cube processing cycle, and is stored within Analysis Services. In the case of the calculated member, only its definition is stored, and the values it renders are calculated in memory when specified in a query ("runtime"). Because only the definition is stored, cube processing time ("build time") is not materially increased, and, indeed, depending upon the environment, this might be an appropriate tradeoff. In our present scenario, however, query processing time might represent a more important concern, so we lose a possible benefit behind the choice of a calculated measure to provide the needed values.
As we will see in the practice example, the approach of deriving the members of our Aged Period dimension will take advantage of the flexibility that is offered by the member key column for dimensions in Analysis Services, which we will leverage to house the conditional logic we will use for deriving the aging buckets. In general, our approach might make more sense if the aging aggregations are frequently called upon for reporting, and if query speed is important, which, as we have learned from our hypothetical information consumer group, is the case within the scenario of our practice example. Calculated members might be a better choice for values that we expect to be used infrequently, again because the tradeoff is tipped more in favor of a lower overhead on the cube processing side, where query response time might not be as high a priority.
The "calculation" behind derived members within a dimension can extend well beyond simple concatenation and the like, as we shall see in our example, and can apply conditional logic and other nuances in derivation of the members. The syntax obviously has to fit the database: our practice example will leverage MS Access-friendly syntax, because the FoodMart 2000 data source ships as an MS Access .mdb.
Clone the FoodMart 2000 Sample Database
We will create a copy of the FoodMart 2000 sample database, which, along with the several cubes and other objects it contains, accompanies an Analysis Services installation. This will allow us to keep the original sample database intact for other uses.
1. Open Analysis Manager, beginning at the Start menu.
2. Expand the Analysis Servers folder by clicking the "+" sign to its immediate left.
Our server(s) appear.
3. Expand the desired server.
Our database(s) appear, in much the same manner as depicted in Illustration 2.
Illustration 2: Example Databases Displayed within Analysis Manager
NOTE: Your databases / cube tree will differ, depending upon the activities you have performed since the installation of MSAS (and the simultaneous creation of the original set of sample cubes). Should you want or need to restore the cubes to their original state, simply restore the database under consideration. For instructions, see the MSSQL Server 2000 Books Online.
4. Right-click the FoodMart 2000 database.
Again, we are making a copy of the database, because our lesson will involve making changes to the data connections, as well as the Sales cube, shared dimensions, and other objects, within the practice example. As we have noted, working with the copy will allow us to maintain our existing sample cube in its current condition, and available to other possible users and applications.
5. Select Copy from the context menu that appears.
6. Right-click on the Analysis Server.
7. Select Paste from the context menu that appears, as shown in Illustration 3.
Illustration 3: Paste the Cloned Database to the Analysis Server
The Duplicate Name dialog appears.
As noted in previous articles, we cannot have two databases of the same name in a given Analysis Server.
8. Type the following into the Name box of the Duplicate Name dialog:
AgingDim
The Duplicate Name dialog appears, with our modification, as depicted in Illustration 4.
Illustration 4: The Duplicate Name Dialog, with New Name
TIP: As I have mentioned elsewhere, in this and other series, the foregoing is also an excellent way of renaming a cube, (a "rename" capability is not available here, as it is in many Windows applications). Simply create a duplicate, give it the name to which you wish to rename the old cube, and then delete the old cube, as appropriate. This also works for Analysis Services databases, dimensions and other objects.
9. Click OK to apply the name change.
The new database, AgingDim, appears in the tree beneath the Analysis Server, among the databases already in place, as shown in Illustration 5.
Illustration 5: The New Database Appears ...
We now have a copy of the FoodMart 2000 database, within which we can perform the steps of our practice exercise. We will be working with the Sales cube within this database. We will next need to apply filters to contain only the purchase dates that occur within the last six months of year 1997, in accordance with the specifications that the information consumers have communicated.
Apply a Filter to the Time Dimension
We will next enact a source filter upon the Time dimension to limit the transactions in our cube to those that have occurred in the last half of 1997. This is one part of preparing the data for our practice session, and constitutes a step that would probably not occur in many real world scenarios. Again, we are simply making it possible to practice the creation of an aging capability using a sample data environment that is available to virtually anyone with an Analysis Services installation.
1. Expand the new AgingDim Analysis Services database by clicking the "+" sign to its immediate left.
2. In like manner, expand the Shared Dimensions folder within the AgingDim database.
3. Right-click the Time dimension.
4. Select Edit from the context menu that appears, as depicted in Illustration 6.
Illustration 6: Select Edit from the Context Menu
The Dimension Editor opens.
5. With the Time dimension (top of the tree) selected, click the Advanced tab in the Properties pane (underneath the tree pane).
6. Type (or cut and paste) the following into the Source Table Filter box within the Advanced Properties tab:
"time_by_day"."the_date" >#05/31/1997# AND "time_by_day"."the_date" <#01/01/1998#
7. In the Allow Duplicate Names box (on the Advanced tab, immediately above the Source Table Filter box), select True.
The Advanced Tab Properties pane appears, with our modifications, as shown in Illustration 7.
Illustration 7: Advanced Tab Properties Pane with Modifications
The filter we have put in place ensures that the new shared dimension will entrain only transactions that fit within the time band specified by the information consumers (the last half of year 1997). Next, we will process the dimension to prepare it for use within the Sales cube in our practice session.
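As an aside, the boundary semantics of the Access date literals in the filter above can be checked with a quick Python sketch (illustrative only, not part of the procedure): because both comparisons are strict, the filter admits exactly June 1 through December 31, 1997.

```python
from datetime import date

def passes_filter(d):
    # Mirrors the Access filter:
    #   "time_by_day"."the_date" > #05/31/1997# AND "time_by_day"."the_date" < #01/01/1998#
    return date(1997, 5, 31) < d < date(1998, 1, 1)

print(passes_filter(date(1997, 6, 1)))    # first admitted date
print(passes_filter(date(1997, 12, 31)))  # last admitted date
print(passes_filter(date(1997, 5, 31)))   # excluded lower bound
print(passes_filter(date(1998, 1, 1)))    # excluded upper bound
```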
8. Right-click the Time dimension, atop the tree, as before.
9. Select Process Dimension ... from the context menu that appears, as depicted in Illustration 8.
Illustration 8: Select Process Dimension ... from the Context Menu
10. Click Yes on the dialog that next asks if we wish to save the dimension, as shown in Illustration 9, if necessary.
Illustration 9: Save the Dimension ...
The Process a Dimension dialog appears next, affording us an opportunity to select a processing method.
11. Ensure that the radio button to the immediate left of Rebuild the dimension structure is selected, as depicted in Illustration 10.
Illustration 10: Preparing to Rebuild ...
12. Click OK to begin processing the dimension.
Processing commences and completes rapidly. Successful processing is indicated by the green message at the bottom of the Process viewer, which appears as shown in Illustration 11.
Illustration 11: Successful Processing is Indicated
13. Click Close to dismiss the Process viewer.
We are returned to the Dimension Editor. While we would typically reprocess a cube containing a dimension that we have modified, we will defer processing the Sales cube until we have completed most of the steps of the procedure that follows, in the interest of saving time.
14. Select File --> Exit from the main menu to return to Analysis Manager.
We are now prepared to add an aging dimension to the Sales cube, pursuant to the expressed business requirements of the information consumers.
Create a New Aging Shared Dimension
We will create support for aging within our cube by adding a new shared dimension. It will be within this dimension that we construct the logic to derive our aging buckets.
1. Right-click the Shared Dimensions folder within the new AgingDim Analysis Services database.
2. Select New Dimension from the context menu that appears.
3. Select Editor from the cascading menu that appears next, as depicted in Illustration 12.
Illustration 12: Select New Dimension --> Editor from the Context Menus
The Choose a Dimension Table dialog appears.
4. Scroll down in the Tables pane (left half of the dialog), to locate the time_by_day table in the clone database.
5. Select the time_by_day table, as shown in Illustration 13.
Illustration 13: Select the Time_by_day Table
6. Click OK to accept the selection, and to progress to the Dimension Editor.
7. From the Schema tab in the Dimension Editor, drag the field named the_date from within the time_by_day table to the left, dropping it onto the <New> dimension in the tree, as depicted in Illustration 14.
Illustration 14: Drag the Field Named The_date to the New Dimension
The Date appears as the first level under the <New> dimension.
8. Click <New> in the Dimension tree to highlight it.
9. Type Aged Period into the Name box, on the Basic tab of the Properties pane.
10. Click the The Date level in the Dimension tree.
11. Replace the Name on the Basic tab of the Properties pane with the following:
Transaction Date
Our changes appear as shown in a partial view of the Dimension Editor in Illustration 15.
Illustration 15: Highlighted Changes in the Dimension Editor (Partial View)
We now need to insert a new level within the new dimension.
12. Right-click the Transaction Date level in the Dimension tree.
13. Select New Level, as depicted in Illustration 16.
Illustration 16: Select New Level ...
The Insert Level dialog appears.
14. Select the_date as the member key column (it will be a temporary assignment, as we shall see), as shown in Illustration 17.
Illustration 17: Selecting The_date as a "Placeholder..."
15. Click OK to accept the selection.
The Date appears in the tree, underneath the Transaction Date level. We wish to insert the new level above the Transaction Date level, and so we will re-arrange it in the next step.
16. Select The_Date in the tree.
17. Drag The_Date to a position on top of the Transaction Date level.
18. Drop The_Date onto the Transaction Date level, to "swap places" with it.
The_Date appears in the tree, now as the first level, as depicted in Illustration 18.
Illustration 18: Moving the New Level to the "Level One" Position ...
19. Click The_Date in the Dimension tree to select it, if necessary.
20. Replace The_Date in the Name box of the Properties panel with the following:
Age
21. Replace the current text in the Member Key Column box, "time_by_day"."the_date", with the following:
IIF(DateDiff('d', "time_by_day"."the_date", '12/31/1997')< 30, 29,
IIF(DateDiff('d', "time_by_day"."the_date", '12/31/1997')>= 30 AND DateDiff('d',
"time_by_day"."the_date", '12/31/1997')<= 60, 60, IIF(DateDiff('d',
"time_by_day"."the_date", '12/31/1997')>= 61 AND DateDiff('d',
"time_by_day"."the_date", '12/31/1997')<= 90, 90, IIF(DateDiff('d',
"time_by_day"."the_date", '12/31/1997')>= 91 AND DateDiff('d',
"time_by_day"."the_date", '12/31/1997')<= 120, 120, 999))))
NOTE: The "SQL Server equivalent," wherein we would leverage the CASE statement to achieve the same results, would be approximated by the following:
CASE WHEN DATEDIFF(Day, "dbo"."time_by_day"."the_date", '12/31/1997') < 30
THEN 29 WHEN DATEDIFF(Day, "dbo"."time_by_day"."the_date", '12/31/1997')
BETWEEN 30 AND 60 THEN 60 WHEN
DATEDIFF(Day,"dbo"."time_by_day"."the_date", '12/31/1997') BETWEEN
61 AND 90 THEN 90 WHEN DATEDIFF(Day,"dbo"."time_by_day"."the_date",
'12/31/1997') BETWEEN 91 AND 120 THEN 120 ELSE 999 END
In the above, we are simply setting up a key for each bucket. The key could be anything intuitive; I chose numbers that would support sorting, but the keys could certainly be defined as anything that makes sense for the environment in which the cube resides.
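As a quick sanity check on the boundary behavior of the nested IIF expression, the key assignment can be mirrored in Python (an illustrative aside; the numeric keys 29, 60, 90, 120 and 999 are the ones chosen above):

```python
from datetime import date

def bucket_key(txn_date, as_of=date(1997, 12, 31)):
    """Mirror the nested IIF member key expression: numeric keys that sort the buckets."""
    days = (as_of - txn_date).days
    if days < 30:
        return 29
    if 30 <= days <= 60:
        return 60
    if 61 <= days <= 90:
        return 90
    if 91 <= days <= 120:
        return 120
    return 999

# Boundary checks around the bucket edges.
print(bucket_key(date(1997, 12, 2)))  # 29 days elapsed  -> "< 30" bucket
print(bucket_key(date(1997, 12, 1)))  # 30 days elapsed  -> "30 - 60" bucket
print(bucket_key(date(1997, 10, 2)))  # 90 days elapsed  -> "61 - 90" bucket
print(bucket_key(date(1997, 9, 2)))   # 120 days elapsed -> "91 - 120" bucket
print(bucket_key(date(1997, 9, 1)))   # 121 days elapsed -> "> 120" bucket
```

Because the keys ascend with age, ordering the level by Key (as we do in a later step) lists the buckets from most recent to oldest.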
22. Replace the current text in the Member Name Column box, "time_by_day"."the_date", with the following:
IIf(DateDiff('d', "time_by_day"."the_date", '12/31/1997')< 30, '< 30',
IIF(DateDiff('d', "time_by_day"."the_date", '12/31/1997')>= 30 AND DateDiff('d',
"time_by_day"."the_date", '12/31/1997')<= 60, '30 - 60', IIF(DateDiff('d',
"time_by_day"."the_date", '12/31/1997')>= 61 AND DateDiff('d',
"time_by_day"."the_date", '12/31/1997')<= 90, '61 - 90', IIF(DateDiff('d',
"time_by_day"."the_date", '12/31/1997')>= 91 AND DateDiff('d',
"time_by_day"."the_date", '12/31/1997')<= 120, '91 - 120', '> 120'))))
NOTE: The "SQL Server equivalent," wherein we would leverage the CASE statement to achieve the same results, would be similar to the following (shown here against a different, inventory-based source view and "as of" date, to illustrate the same pattern in another setting):
CASE WHEN DATEDIFF(Day, "dbo"."vW_MSAS_Inventory"."Received Date",
'01/31/2005') < 30 THEN '< 30' WHEN
DATEDIFF(Day,"dbo"."vW_MSAS_Inventory"."Received Date", '01/31/2005')
BETWEEN 30 AND 60 THEN '30 - 60' WHEN
DATEDIFF(Day,"dbo"."vW_MSAS_Inventory"."Received Date", '01/31/2005')
BETWEEN 61 AND 90 THEN '61 - 90' WHEN
DATEDIFF(Day,"dbo"."vW_MSAS_Inventory"."Received Date", '01/31/2005')
BETWEEN 91 AND 120 THEN '91 - 120' ELSE '> 120' END
Our changes appear in the Properties pane, as partially shown in Illustration 19.
Illustration 19: Changes in the Dimension Editor (Partial View)
23. Click the Advanced tab in the Properties pane.
24. Select Key in the Order By box, as depicted in Illustration 20.
Illustration 20: Select Key in the Order By Box ...
25. Click the Aged Period dimension atop the tree to select it.
26. Click the Advanced tab, in the Properties pane, as we did with the Time dimension earlier.
27. Type (or cut and paste) the following into the Source Table Filter box within the Advanced Properties tab:
"time_by_day"."the_date" >#05/31/1997# AND "time_by_day"."the_date" <#01/01/1998#
Our addition to the Source Table Filter box for the Aged Period dimension appears just as it did for the Time dimension earlier.
Let's process the Aged Period dimension, saving our changes as part of the process.
28. Right-click the Aged Period dimension, atop the tree pane.
29. Select Process Dimension ... from the context menu that appears, as shown in Illustration 21.
Illustration 21: Select Process Dimension from the Context Menu
30. Click Yes on the dialog that next asks if we wish to save the dimension.
The Process a Dimension dialog appears next, defaulted to the Rebuild the dimension structure processing method.
31. Click OK to rebuild the dimension structure.
Processing commences and completes rapidly, indicated, once again, by the green Processing completed successfully message at the bottom of the Process viewer.
32. Click Close to dismiss the Process viewer.
33. Click the Data tab in the Dimension Editor.
34. Expand the All Aged Period level that appears.
The new Aged Period dimension appears on the Data tab, reflecting the aging buckets we have created, as depicted in Illustration 22.
Illustration 22: The Aging Buckets Appear on the Data Tab of the Dimension Editor
We note that, within each of the bucket members of the Age level, the corresponding transaction dates are captured. This provides a quick "reasonableness" test that the buckets are aggregating data properly.
35. Select File --> Exit from the main menu to close the Dimension Editor.
We return to Analysis Manager.
Add the New Aged Period Dimension, and Apply a Filter, to the Sales Cube
We will next make a couple of modifications to our cube to finalize construction of our Aged Period dimension. First, we will add the shared dimension to the Sales cube, and then we will put a fact table filter in place to limit our cube to the specified transactions: those that occurred in the last half of 1997.
1. Right-click the Sales cube within the cloned AgingDim Analysis Services database.
2. Select Edit from the context menu that appears, as shown in Illustration 23.
Illustration 23: Opening the Cube Editor
The Cube Editor opens.
3. Right-click the Dimensions folder that appears underneath the Sales cube.
4. Select Existing Dimensions ... on the context menu that appears, as depicted in Illustration 24.
Illustration 24: Preparing to Add the New Shared Dimension to the Sales Cube
The Dimension Manager appears.
5. Select Aged Period in the Shared dimensions pane (left half of the Dimension Manager dialog).
6. Click the single arrowhead button (">") to move Aged Period to the Cube dimensions pane (right half of the Dimension Manager dialog), as shown in Illustration 25.
Illustration 25: Adding the Aged Period Dimension to the Cube Dimensions
The Aged Period dimension appears in the Cube dimensions pane.
7. Click OK to close the Dimension Manager.
The Dimension Manager closes, and we are returned to the Cube Editor. We see the Aged Period dimension appear in the cube tree pane, as depicted in Illustration 26.
Illustration 26: The Aged Period Shared Dimension Appears in the Cube Tree
We will now save the cube, and then apply the fact table filter to limit transactions to the time frame specified by the information consumers.
8. Select File --> Save from the main menu of the Cube Editor.
9. Click Yes at the Save the Cube dialog that next appears, prompting us to continue (with loss of storage settings as a result).
10. Select File --> Exit to close the Cube Editor.
11. Select No when prompted to design storage, as shown in Illustration 27.
Illustration 27: Decline Storage Design When Prompted ...
The Design Storage dialog disappears, and we return to Analysis Manager.
12. Expand the Sales cube, by clicking the "+" sign to its immediate left.
13. Expand the Partitions folder that appears underneath the expanded Sales cube.
14. Right-click the sole partition, named Sales, which appears within the expanded folder.
15. Select Edit from the context menu that appears, as depicted in Illustration 28.
Illustration 28: Select Edit from the Context Menu
The first page of the Partition Wizard appears, as shown in Illustration 29.
Illustration 29: The First Page of the Partition Wizard
16. Without making any changes, click Next.
17. Click Next again, at the Select the data slice (optional) page, as depicted in Illustration 30.
Illustration 30: Click Next on the Select the Data Slice (Optional) Page
18. Click Next, once again, on the Specify the partition type page that appears, as shown in Illustration 31.
Illustration 31: Click Next on the Specify the Partition Type Page
We arrive at the Finish the Partition Wizard page.
19. Click the radio button to the left of Design the aggregations later on the dialog.
20. Click Advanced, as depicted in Illustration 32.
Illustration 32: Click Advanced on the Finish the Partition Wizard Dialog
The Advanced Settings dialog opens.
21. Type (or cut and paste) the following syntax into the filter statement box:
"time_by_day"."the_date" > #05/31/1997#
AND "time_by_day"."the_date" < #01/01/1998#
The Advanced Settings dialog appears as shown in Illustration 33, with the syntax we have added.
Illustration 33: Our Filter Statement in the Advanced Settings Dialog
Now let's set up a drillthrough capability that will help us quickly evaluate the effectiveness of our efforts in the Verification section that follows.
22. Click the Drillthrough Options button in the bottom section of the Advanced Settings dialog.
The Partition Drillthrough Options dialog opens.
23. Within the selection checklist, ensure that only the checkboxes presented in Table 1 below are checked:
Table 1: Select Settings for the Detail Drillthrough View ...
Our intent here is simply to display dated transactions, giving us a feel for whether transactions within reasonable date ranges do, indeed, underlie the totals we see in the age buckets. The columns selected here could easily be varied to meet other specific needs.
The Partition Drillthrough Options dialog (partial view of the selection checklist) appears as depicted in Illustration 34, with the selections we have made.
Illustration 34: Partial View of the Selection Checklist
24. Click OK to accept changes and close the Partition Drillthrough Options dialog.
25. Click OK once again, this time to accept and close the Advanced Settings dialog with our input.
26. Click the Finish button on the Finish the Partition Wizard page, to which we return next.
27. Click OK on the Fact Table Row Count message box that appears next, as shown in Illustration 35.
Illustration 35: Click OK on the Fact Table Row Count Message Box
The Fact Table Row Count message box closes. Now all that remains is to process the cube, and to verify the operation of our new aging capability using the Cube Browser.
Process the Cube and Review the Aging Capabilities within the Cube Browser
Let's process the cube to update it, before reviewing the new aging structures in action.
1. Right-click the Sales cube, once again.
2. Select Edit, as before, to open the Cube Editor.
3. From within the Cube Editor, right-click the Sales cube atop the tree.
4. Select Process Cube ... from the context menu that appears, as shown in Illustration 36.
Illustration 36: Select Process Cube ... from the Context Menu
5. Click Yes (if prompted) to save the cube.
6. Click No when prompted to design aggregations, in the next message box that appears as shown in Illustration 37.
Illustration 37: Forgo the Design of Aggregations by Clicking No ...
The Process a Cube dialog appears next.
7. Click OK to begin Full processing of the Sales cube, as depicted in Illustration 38.
Illustration 38: Click OK to Begin Full Processing
Processing begins immediately, and the Process viewer appears, displaying the various logged events with which most of us have come to be familiar. Processing completes, and the viewer presents the green Processing completed successfully message, as shown in Illustration 39.
Illustration 39: Processing Completes Successfully as Indicated on the Process Viewer
8. Click Close to dismiss the viewer.
We will next review our new Aged Period dimension in action.
9. Click the Data tab within Analysis Manager.
The Cube Browser Data view appears.
10. Within the Data view, ensure that the new Aged Period dimension is in place as the row axis, by dragging it there to replace the existing dimension, as necessary.
11. Within the Data view, ensure that the Measures dimension is in place as the column axis, dragging it there to replace the existing dimension, if required.
The Data view, with the aforementioned dimensions in place within the respective axes, appears as partially depicted in Illustration 40.
Illustration 40: Data View, with Initial Axes in Place (Partial View) ...
We see the aggregations within the aging buckets that we have defined.
12. Double-click the "< 30" label to explode to the transaction date level, as partially shown in Illustration 41.
Illustration 41: Perform a Drilldown to Ascertain Proper Date Inclusion (Partial View)
We can readily see that the "< 30" bucket appears to contain transactions that were under thirty days old at year end 1997. (Drilling down on the other buckets will confirm this same conclusion.)
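As a quick cross-check of the drilldown, we can compute the window of transaction dates a bucket should cover. (A sketch only: the helper below is hypothetical, and the 0-to-30-day span assumed for the "< 30" bucket is inferred from the bucket labels.)

```python
from datetime import date, timedelta

def bucket_window(as_of: date, lo_days: int, hi_days: int):
    """Hypothetical helper: the earliest and latest transaction dates
    whose age in days, measured as of `as_of`, falls between lo_days
    and hi_days inclusive."""
    return as_of - timedelta(days=hi_days), as_of - timedelta(days=lo_days)

# The "< 30" bucket at 1997 year end, assuming it spans ages 0 - 30 days:
earliest, latest = bucket_window(date(1997, 12, 31), 0, 30)
print(earliest, latest)  # 1997-12-01 1997-12-31
```

Any transaction date in December 1997 falls inside this window, consistent with what the drilldown shows for the "< 30" bucket.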
13. Right-click the Store Sales cell for a given date (in my example, I clicked Store Sales for 1997-12-11).
14. Select Drill Through ... from the context menu that appears, as depicted in Illustration 42.
Illustration 42: Perform a Drill Through on Store Sales for a Given Date ...
The Drillthrough Data viewer appears as partially shown in Illustration 43.
Illustration 43: Partial View of the Drillthrough Data ...
We can see at the transactional level, also, that the proper dates appear to be included in the transaction date (and ultimately the aging bucket) rollups.
15. Close the Drillthrough Data viewer when finished.
16. Experiment with drilldown and drillthrough until satisfied that our results are as expected.
We will return to examine the use of our aging buckets from within a reporting environment in a subsequent article in my MSSQL Server Reporting Services series at Database Journal. As I mentioned in the introduction, there are other ways to approach aging as well (some more efficient than others); this is only an introduction to one approach that can be managed largely from within the Analysis Services layer of an integrated business intelligence solution. (We will examine the use of an RDBMS view to support aging within a cube in a prospective article in this series.)
17. Select File --> Exit to close the Cube Editor.
18. Select File --> Exit again, to close Analysis Manager.
In this article, we introduced a general business need that is familiar to most of us, the aging of values. We then narrowed our discussion to a pervasive example of aging, the aging of accounts receivable. We touched upon the principles of aging customer accounts in preparation for our examination of a solution that works within the OLAP component of the integrated Microsoft BI solution, and that replicates the functionality provided in many accounting and financial applications, as well as many "pre-fab" reporting solutions on the market.
After discussing aging concepts, we prepared for our practice exercises by creating a clone of the FoodMart Analysis Services database. Our intent was to be able to use an existing example cube, Sales, as well as existing (and one additional) shared dimensions, as a platform for aging customer transactions and simulating accounts receivable. We created an Aged Period shared dimension within our cloned database to provide "buckets" for date-based transactional data, inducted our new shared dimension into our Sales cube, and, using both drilldown and drillthrough capabilities, verified the adequacy of the processed cube in meeting the needs of a hypothetical group of information consumers.
» See All Articles by Columnist William E. Pearson, III
Discuss this article in the MSSQL Server 2000 Analysis Services and MDX Topics Forum.