Business Intelligence Architect Bill Pearson leads a
hands-on examination of the Analysis Services 2005 Query Log. In addition, we
discuss customization of the Query Log and its use as a direct reporting source.
About the Series ...
This article is a member of the series Introduction to MSSQL Server Analysis Services. The series is designed to
provide hands-on application of the fundamentals of MS SQL Server Analysis
Services, with each installment progressively presenting features and
techniques designed to meet specific real-world needs. For more information
on the series, please see my initial article, Creating Our First Cube.
Note: To follow along with the steps we undertake, the following components,
samples and tools are recommended, and should be installed according to the
respective documentation that accompanies MSSQL Server 2005:
MSSQL Server 2005 Database Engine
MSSQL Server 2005 Analysis Services
Business Intelligence Development Studio
MSSQL Server 2005 sample databases
The Analysis Services Tutorial sample project and other samples that are available with the installation of the above.
To successfully replicate the steps of the article, you also need to have:
Membership within an appropriate administrative role for the Analysis Services instance;
Read permissions within any SQL Server 2005 sample databases we access within our practice session, if appropriate.
Note: Current Service Pack updates are assumed for the operating system, MSSQL
Server 2005 ("MSSQL Server"), MSSQL Server 2005 Analysis
Services ("Analysis Services"), MSSQL Server 2005 Reporting
Services ("Reporting Services") and the related Books
Online and Samples. Images are from a Windows 2003
Server environment, but the steps performed in the articles, together with
the views that result, will be quite similar within any environment that
supports MSSQL Server 2005 and its component applications.
In my article, Usage-Based Optimization in Analysis Services 2005, we introduced and explored Usage-Based
Optimization, gaining some hands-on exposure to the Usage-Based Optimization Wizard. We noted that the new Usage-Based
Optimization Wizard improves dramatically upon the effectiveness of the Analysis
Services 2000 Usage Analysis (going significantly farther than the
generation of the simple reports) and Storage Design (allowing for
up-to-date, usage-based optimization) Wizards. We focused upon
the way that the Usage-Based Optimization Wizard offers us the
capability to base aggregation design upon a given cube's usage
statistics, in combination with other factors, and allows us to make
subsequent adjustments to our existing aggregation design and storage
mode as time passes, and as information is collected from which meaningful
statistics can be derived.
We examined the operation
of the Usage-Based Optimization Wizard within a context of aggregation
design, and then reinforced our understanding with a practice exercise within
which we enabled the Analysis Server Query Log to capture query
statistics within a copy of a sample Analysis Services database we
created for the exercise. After processing the clone database, we
manipulated data within a cube therein to create Query Log entries. The
focus of the exercise then became a procedure whereby we set aggregations
for our designated practice cube with the Usage-Based Optimization Wizard.
Throughout the guided steps of the Wizard we examined each of the
possible settings that it makes available to us, and commented upon general
optimization concepts as we proceeded through the practice example.
In this article, we will examine more closely the Query Log itself. I often
receive requests from clients and readers, asking how they can approach the creation of more
sophisticated reporting to assist in their usage analysis pursuits. This is
sometimes based upon a need to create a report that presents data as it appears
in, say, the Query Log table / file, in a way that allows for printing,
publishing to the web, or otherwise delivering report results to information
consumers. Moreover, some users simply want to be able to design different
reports that they can tailor themselves, to meet specific needs. Yet others
want a combination of these capabilities.
Each of these more
sophisticated analysis and reporting needs can be met in numerous ways. In this lesson, we will
examine the source of cube performance statistics, the Query Log,
discussing its location and physical structure, how it is populated, and other
characteristics. We will discuss ways that we can customize the degree and
magnitude of statistical capture in the Query Log to enhance its value
with regard to meeting more precisely our local analysis and reporting needs. We
will practice the process of making the necessary changes in settings to
illustrate how this is done. Finally, we will discuss options for generating
more in-depth, custom reports than the wizard provides, considering ways that
we can directly obtain detailed information surrounding cube processing
events in a manner that allows more sophisticated selection, filtering and
display, as well as more customized reporting of these important metrics.
The Analysis Services 2005 Query Log
Overview and Discussion
The entire idea behind "optimization based upon utilization" is, first
and foremost, to enhance performance based upon what consumers ask for
on a recurring basis. Beginning with capabilities that
debuted in Analysis Services 2000, we have been able to leverage historical query
details to ascertain the aggregations of data that our cubes need to maintain
to support the most frequently "asked" queries. We could apply
filters to refine this exploration, and extrapolate what we learn to the
specification of which aggregations to maintain, thus maintaining the
appropriate pre-calculations for the consumer populations we support, as we
detailed in Usage-Based
Optimization in Analysis Services 2005.
We have multiple options, when we venture upon utilization analysis and
utilization-based optimization within Analysis Services 2005, in how we
incorporate the Query Log. Examples include the use of the Usage-Based
Optimization Wizard, as we saw in Usage-Based
Optimization in Analysis Services 2005, to create usage-based
aggregations in a directed manner, so as to fine tune the storage / processing
tradeoffs involved. Alternatively, we might create reports, using Reporting
Services or other relational report writers, to analyze usage or even usage
trends to prompt forehanded action with regard to aggregation design, as well
as general cube sizing and structure. As illustrations, I have created
dashboard objects for various clients that keep administrators informed of what
multidimensional intersects are being queried most often, as well as what the
processing times for those queries are (to identify "candidate intersects"
for more well-tuned aggregations); intersects that are rarely accessed
(candidates, perhaps for removal, or less intensive aggregations); the overall
cube size; and trends regarding these and other values to highlight the need
for storage and optimization planning at future dates. Important to any
optimization effort is the ongoing requirement to revisit the process to
capture changes that occur over time in usage patterns: the more history we
have of actual usage, the more value we can add with usage-based optimization.
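As a rough illustration of the kind of dashboard metric described above, the following Python sketch ranks multidimensional intersects by query frequency and average duration. The sample rows and the two-column shape standing in for the Query Log table are assumptions for illustration only:

```python
from collections import defaultdict

# Hypothetical sample rows standing in for Query Log records:
# (Dataset value, Duration in seconds) -- fabricated for illustration.
log_rows = [
    ("110,01,1", 2.0),
    ("110,01,1", 4.0),
    ("100,00,1", 1.0),
    ("110,01,1", 3.0),
]

def usage_summary(rows):
    """Rank intersects by hit count and average duration -- the kind of
    'candidate intersects for aggregation' metric discussed above."""
    stats = defaultdict(lambda: {"hits": 0, "total": 0.0})
    for dataset, duration in rows:
        stats[dataset]["hits"] += 1
        stats[dataset]["total"] += duration
    # Most-frequently-queried intersects first
    return sorted(
        ((ds, s["hits"], s["total"] / s["hits"]) for ds, s in stats.items()),
        key=lambda t: t[1],
        reverse=True,
    )

for dataset, hits, avg_duration in usage_summary(log_rows):
    print(dataset, hits, avg_duration)
```

In a real dashboard, the rows would of course come from the Query Log table itself, and the same grouping could be pushed down into the relational query.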
Regardless of the ways we employ the data within the Query Log, we must populate
the log first. We will perform the steps to do so once again in this session,
as preparation to browsing the log, as well as discussing various reporting and
"fine tuning" options, in general. In this article, we will:
Create a copy of a sample Analysis Services database for use in our practice session;
Enable the Analysis
Server Query Log to capture query statistics;
Process the cube and manipulate data, to create Query Log entries;
Examine the Query
Log contents, discussing the various statistics captured;
Discuss reporting options, including the use of SQL Server Reporting Services
as a relational and / or OLAP reporting tool;
Comment upon customization
concepts as we proceed through our practice example.
Considerations and Comments
For purposes of the practice
exercises within this series, we will be working with samples that are provided
with MSSQL Server 2005 Analysis Services. These samples include,
predominantly, the Adventure Works DW Analysis Services database (with
member objects). The Adventure Works DW database and companion samples
are not installed by default in MSSQL Server 2005. The samples can be
installed during Setup, or at any time after MSSQL Server has
been installed. The topics "Running Setup to Install AdventureWorks
Sample Databases and Samples" in SQL Server Setup Help or
"Installing AdventureWorks Sample Databases and Samples" in the Books Online (both of which are included on
the installation CD(s), and are available from www.Microsoft.com and other sources), provide
guidance on samples installation.
Information regarding the rights / privileges required to accomplish samples installation,
as well as to access the samples once installed, is included in the references
I have noted.
Let's get some
hands-on exposure to the Analysis Services Query Log. To prepare, we
will create a new Analysis Services database, based upon the existing Adventure
Works DW sample database, to insulate the original sample database from
modifications we will make. We will accomplish the creation of the "clone"
database from within the SQL Server Management Studio, and then enable
query logging to prepare for our exploration of the Query Log.
Creating a Clone Analysis Services Database in SQL Server Management Studio
We begin our preparation within SQL Server Management Studio, where we will
create a clone of the sample Adventure Works DW database. Adventure
Works DW can be installed by anyone installing MSSQL Server 2005 Analysis Services.
Click the Start button, and navigate to Microsoft SQL Server 2005 within the Program group of the menu.
Click SQL Server Management Studio, as shown in Illustration 1.
Illustration 1: Opening
SQL Server Management Studio
The Connect to Server dialog appears. Select Analysis
Services in the Server type selector.
Type / select the server name (and instance, if appropriate) into the Server name box.
Supply authentication information, as required in your own environment.
The Connect to Server
dialog appears, with the appropriate input for our local environments, similar
to that depicted in Illustration 2.
Illustration 2: The
Connect to Server Dialog, with Representative Settings
Click the Connect
button to connect with the specified Analysis Services server.
The SQL Server
Management Studio opens.
Within the Object
Explorer (the leftmost pane of the Studio, by default), expand the
server in which we are working, if necessary, by clicking the "+"
sign to its immediate left.
Expand the Databases
folder that appears underneath the expanded server.
Right-click the Adventure Works DW database.
Select Back Up... from the context menu that appears, as shown in Illustration 3.
Illustration 3: Right-click the Adventure Works DW Database - Select Back Up ...
The Backup Database
Adventure Works DW dialog appears.
Replace the default name that appears in the Backup file box with the following:
ANSYS049 Adventure Works DW.abf
Uncheck the Apply
compression setting in the Options section.
Uncheck the Encrypt
backup file setting that immediately follows.
The relevant portion of
the Backup Database Adventure Works DW dialog appears, as depicted in Illustration 4.
Illustration 4: The
Backup Database Adventure Works DW Dialog (Relevant Portion)
Click OK to
begin the backup.
The Backup Database
Adventure Works DW dialog grays, as the Executing symbol in the Progress
pane (lower left corner of the dialog) becomes active. The process may run
several minutes depending upon the resources available on the local system.
Once completed, the dialog closes, returning us to the Management Studio.
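The dialog-driven backup above has an XMLA equivalent that can be scripted; a minimal sketch of building such a command string follows. The element names reflect the Analysis Services 2005 engine namespace as I understand it, and should be treated as an assumption rather than a definitive reference:

```python
def backup_xmla(database_id: str, backup_file: str) -> str:
    """Build a minimal XMLA Backup command string (sketch; element
    names are assumptions based on the 2003 engine namespace)."""
    return (
        '<Backup xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">'
        f"<Object><DatabaseID>{database_id}</DatabaseID></Object>"
        f"<File>{backup_file}</File>"
        "<ApplyCompression>false</ApplyCompression>"
        "</Backup>"
    )

# Matches the uncompressed backup we performed in the dialog
print(backup_xmla("Adventure Works DW", "ANSYS049 Adventure Works DW.abf"))
```

Such a command could be executed from an XMLA query window in Management Studio, which is convenient when the backup needs to be scheduled rather than performed interactively.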
We will next restore the same
backup, to create a differently named copy of the existing sample database: a copy
wherein we can make modifications without impairing the existing sample, which
we may wish to use to complete tutorials included with MSSQL Server 2005.
Within the Object
Explorer, right-click the Databases folder within which the Adventure
Works DW database resides.
Select Restore... from the context menu that appears, as shown in Illustration 5.
Illustration 5: Right-click the Databases Folder - Select Restore ...
The Restore Database dialog appears.
Click the Browse
button to the right of the box (second from the top) labeled From backup file.
The Locate Database
Files dialog appears.
Navigate to the following backup file (where we located it in our backup steps above):
ANSYS049 Adventure Works DW.abf
Click the file
within the Select the file window, to place the file name into the File
name box, as depicted in Illustration 6.
Illustration 6: Locate
Database Files Dialog with Our Input ...
Click OK to accept the file path / name, and to close the Locate Database Files dialog.
We return to the Restore
Database dialog, where we see the file we have selected appear in the From
backup file box.
Type the following into the Restore database box immediately above the From
backup file box:
ANSYS049 Adventure Works DW
The relevant portion of
the Restore Database dialog, with our input, appears as shown in Illustration 7.
Illustration 7: The
Completed Restore Database Dialog (Partial View)
Click OK to initiate the restoration.
The Restore Database dialog
grays, as the Executing symbol in the Progress pane, once again,
becomes active. The process runs, and, once completed, the dialog closes,
returning us to the Management Studio. Here we see the new ANSYS049
Adventure Works DW database appear in the Object Browser, as
depicted in Illustration 8.
Illustration 8: The New
Database Appears ...
NOTE: If the new database does not appear immediately,
right-click the Databases folder and select Refresh from the
context menu that appears, as shown in Illustration 9.
Illustration 9: Refreshing as Required ...
Having created the ANSYS049 Adventure Works DW database, we can
now transition to the procedural portion of our session, and get some hands-on exposure
to the Query Log in Analysis Services 2005. In the next section,
we will take an additional preparatory step: enabling the logging of query statistics.
Enable Query Logging to Gather Statistics
Usage-based optimization is, as the name suggests, based upon usage. We therefore must capture usage statistics to
have a basis for optimization. We do this by enabling query logging
within the Analysis Server, as we shall see in the steps that follow.
Right-click the Analysis Server with which we are working.
Select Properties from the context menu that appears, as depicted in Illustration 10.
Illustration 10: Select
Properties from the Context Menu ...
The Analysis Services
Properties dialog appears. As we have noted in other articles, this is a
busy place, indeed: many settings of a default and operational nature are
maintained here. (My advice is to learn all that we can about each entry
within this table-like dialog, so as to know of its existence when the time
comes to address the utility that it offers). Here we will enable the logging
of query statistics.
Scroll down to
the Log \ QueryLog \ CreateQueryLogTable entry in the table within the dialog.
Change the Value
setting for the entry, which is defaulted to false, to true,
via the selector provided, as shown in Illustration 11.
Illustration 11: Changing the CreateQueryLogTable Setting Value to True ...
We next need to direct Analysis
Services as to where we wish to house the Query Log. Analysis
Services 2005 offers us a great deal of flexibility (and improves upon Analysis
Services 2000's provision of an MS Access database for this purpose,
which could be migrated to MSSQL Server). We will direct that the Query
Log table be created within the sample AdventureWorksDW
relational database, which underlies our new clone Analysis Services
database, ANSYS049 Adventure Works DW. This is a convenient place to
put the Query Log for this session, but we are free to put it into any OLE-DB
/ .NET compatible data source.
In the Log
\ QueryLog \ QueryLogConnectionString row, immediately below the CreateQueryLogTable
row (where we assigned the value of true above), click the Value
box, and then click the box that appears to its immediate right (marked "..."),
as shown in Illustration 12.
Illustration 12: Beginning
Connection Setup for the Query Log Data Source
The Connection Manager dialog appears.
Leave the Provider
setting at its default of Native OLE DB \ SQL Native Client.
Select / type
the appropriate Server or Server / Instance combination into the Server name box.
(Mine is MOTHER1\MSSQL2K5,
as we see in illustrations throughout recent articles of the series.)
If we are working in an environment wherein a side-by-side installation of MSSQL
Server 2000 and MSSQL Server 2005 has been performed, the Server
Name / Instance will be required ("Localhost" / the
Server name alone will not be assigned to the MSSQL Server 2005
instance, by default).
In the Log on
to the server section of the Connection Manager dialog, make the
authentication selections appropriate to your environment.
I am using Windows
Authentication, and therefore select the respective radio button.
In the Connect
to a database section in the lower half of the dialog, in the selector box
labeled Select or enter a database name, select the AdventureWorksDW relational database.
The Connection Manager
dialog appears, with our input, similar to that depicted in Illustration 13.
Illustration 13: Connection
Manager Dialog, with our Input
Click the Test
Connection button to ascertain connectivity to the database.
Assuming correct settings, confirmation of connectivity appears in a message box, as shown in Illustration 14.
Illustration 14: "Test Connection Succeeded" Message
Click OK to close the message box.
Click OK to accept settings and to close the Connection Manager dialog.
We are returned to the Analysis
Services Properties dialog, where we see that the Value box of the Log
\ QueryLog \ QueryLogConnectionString row now contains a connection
string, courtesy of the Connection Manager. We have enabled the collection
of query statistics and established the location of their collection.
Next, we will make a couple of additional adjustments before examining and then
populating the Query Log table.
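For a sense of the shape of the value that the Connection Manager places into the QueryLogConnectionString property, the sketch below assembles an OLE DB connection string of the general style involved. The exact provider name and keyword values here are assumptions for illustration, not a transcription of the generated string:

```python
def query_log_connection_string(server: str, database: str) -> str:
    """Sketch of an OLE DB connection string of the general style the
    Connection Manager produces (provider / keywords are assumptions)."""
    return (
        f"Provider=SQLNCLI.1;Data Source={server};"
        f"Integrated Security=SSPI;Initial Catalog={database}"
    )

# Windows Authentication against the AdventureWorksDW relational database
print(query_log_connection_string(r"MOTHER1\MSSQL2K5", "AdventureWorksDW"))
```

The point is simply that the property holds an ordinary OLE DB / .NET connection string, which is why any compatible data source can house the log.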
In the Log
\ QueryLog \ QueryLogSampling row, which appears three rows below the Log
\ QueryLog \ QueryLogConnectionString row, change the Value from the
default of 10 to 1.
Here we are merely
increasing the sampling rate from the default of "capture
statistics from every tenth query" to "capture statistics for each
query," a step similar to those we took in previous articles with the Usage-based
Optimization Wizard as it existed in Analysis Services 2000.
(Although the setting was managed a bit differently, in the Write to log once
per [number] box within Analysis Services 2000, our intent then, as
it is now, was to simply allow the log to capture enough data to make the
procedural steps of our practice exercise meaningful.)
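The effect of the QueryLogSampling value can be simulated in a few lines of Python; the server's exact internal counter semantics are an assumption here, but the "every nth query" behavior is the essential idea:

```python
def sampled(queries, n):
    """With QueryLogSampling = n, the server records every nth query.
    (Simple simulation; exact server counter semantics may differ.)"""
    return [q for i, q in enumerate(queries, start=1) if i % n == 0]

queries = [f"q{i}" for i in range(1, 21)]
print(len(sampled(queries, 10)))  # default of 10: 2 of 20 queries logged
print(len(sampled(queries, 1)))   # our setting of 1: all 20 logged
```

This makes it easy to see why a setting of 1 is helpful for a short practice exercise, and why it would be costly as a permanent setting on a busy production server.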
In the Log
\ QueryLog \ QueryLogTableName row, which appears in the row immediately
below the Log \ QueryLog \ QueryLogSampling row, modify the Value
from its default of OLAPQueryLog to the following:
ANSYS049_MSASQueryLog
It might be useful to note,
too, that we could have specified a file rather than a table as the intended destination
for usage statistics collection. If we choose a file versus a table, we can
specify the location of the file in the Log \ QueryLog \ QueryLogFileName row,
which appears in the row immediately below the Log \ QueryLog \ QueryLogConnectionString row.
portions of the rows of the Analysis Services Properties dialog we have
visited, appear as depicted in Illustration 15.
Illustration 15: Modified Settings within the Analysis Services Properties Dialog (Partial View)
Click OK to enact the settings we have made above.
An Executing indicator appears in the bottom left corner of the dialog momentarily, and then the
dialog closes. We are returned to the SQL Server Management Studio.
We can confirm the
creation of the new MSASQueryLog table within the AdventureWorksDW relational
database easily from the SQL Server Management Studio, where we can
access the relational world in combination with the OLAP world, as we shall see
in the next steps.
Click the Connect button atop the Object Explorer.
Select Database Engine from the menu that appears, shown circled in Illustration 16.
Illustration 16: Connecting to the SQL Server 2005 Database Engine
The Connect to Server dialog appears, just as we saw earlier when connecting to Analysis Services.
Type / select the correct Server or Server / Instance combination into the Server name box.
Select the appropriate authentication mode in the Authentication box that
immediately follows (with related details in the Password box, as appropriate).
The Connect to Server
dialog appears similar to that depicted in Illustration 17.
Illustration 17: The
Connect to Server Dialog Appears
Click the Connect button to connect to the Database Engine.
We return to the SQL
Server Management Studio, where we see the instance of the Database
Engine appear within the Object Explorer, underneath the Analysis
Server with which we have been working, as shown in Illustration 18.
Illustration 18: The
Database Engine Instance Appears within Object Explorer
Expand the Database
Engine server by clicking the "+" sign to its immediate left, if necessary.
Expand the Databases
folder underneath the expanded server.
Expand the AdventureWorksDW database.
Expand the Tables
folder within the AdventureWorksDW database.
The tables within the
folder appear. Among them, we see the new ANSYS049_MSASQueryLog table that
we created in our earlier steps, as depicted in Illustration 19.
Illustration 19: The
ANSYS049_MSASQueryLog Table Appears in the AdventureWorksDW Database
The table is in place and
query logging is enabled. We are now positioned to generate some sample
queries, and to begin our examination of the Query Log.
Process the Database Clone and Generate Statistics to be Captured by the Query Log
We need some log entries
upon which to base an examination of the Query Log, so we will populate
the table with some rows in a "quick and dirty" manner, by performing a few manipulations of the data in the cube. First, we will
process the new Analysis Services database copy we have created.
Within the Object Explorer, once again, right-click the new Analysis Services
database, ANSYS049 Adventure Works DW, which we created earlier. (The
database can be found within the Databases folder of the Analysis Server.)
Select Process from the context menu that appears, as shown in Illustration 20.
Illustration 20: Processing the Clone Analysis Services Database ...
The Process Database
dialog appears for Analysis Services database ANSYS049 Adventure
Works DW, as depicted in Illustration 21.
Illustration 21: The
Process Database Dialog Appears
Leaving settings on the dialog at default, click OK to begin processing.
The Process Progress viewer
appears, logging the events of database processing as they occur. Once all
database objects are processed, we receive a Process Succeeded message
in the Status bar at the bottom of the viewer, as shown in Illustration 22.
Illustration 22: Successful
Processing Completion is Indicated ...
Click the Close
button to dismiss the Process Progress viewer.
We are returned to the SQL
Server Management Studio, where next we will perform a few actions to
generate statistics in the Query Log table.
Expand the Analysis
Services database ANSYS049 Adventure Works DW, as necessary, by
clicking the "+" sign to its immediate left within Object Explorer.
Expand the Cubes
folder that appears within the ANSYS049 Adventure Works DW tree.
Right-click the Adventure Works cube that appears within the Cubes folder.
Select Browse from the context menu that appears, as depicted in Illustration 23.
Illustration 23: Select
Browse from the Context Menu ...
The Adventure Works [Browse] tab appears within SQL Server Management Studio.
Here we will do a few browses to generate sample query statistics within the Query Log.
Expand the Measures object within the Metadata pane by clicking the "+" sign to its immediate left.
Expand the Internet
Sales measure folder that appears underneath the expanded Measures level.
The measures related to Internet Sales appear.
Drag measure Internet
Sales Amount into the Data pane, dropping it into the area marked Drop
Totals or Detail Fields Here, as shown in Illustration 24.
Illustration 24: Drag
and Drop the Measure into the Data Pane ...
Drag measure Internet
Order Quantity from the Metadata pane to the Data pane,
dropping it to the right of the Internet Sales Amount.
Expand the Date
dimension within the Metadata pane.
Expand the Calendar
folder that appears underneath the expanded Date dimension.
Expand the Date.Calendar
hierarchy that appears within the expanded Calendar folder.
Drag member Calendar
Year into the Data pane, dropping it onto the area marked Drop
Column Fields Here, as depicted in Illustration 25.
Illustration 25: Drag
and Drop Calendar Year into the Column Fields of the Data Pane ...
Expand the Product
dimension within the Metadata pane.
Expand the Product
Categories hierarchy that appears underneath the expanded Product dimension.
Drag the Category
level, appearing underneath the Product Categories hierarchy, into
the Data pane, dropping it onto the area marked Drop Row Fields Here,
as shown in Illustration 26.
Illustration 26: Drag
and Drop Product Categories into the Row Fields of the Data Pane ...
The Data pane appears,
after our insertions, as partially depicted in Illustration 27.
Illustration 27: Our
Initial Browse in the Data Pane (Partial View)
Drill down on each
of the Product Categories by clicking the "+" sign to its
immediate left, as shown in Illustration 28.
Illustration 28: Drilling Down on
Product Categories ...
Drill down on
the Calendar Year CY 2003, to display the half-year levels H1 CY 2003
and H2 CY 2003.
Drill down on
level H2 CY 2003 one level further, to expose the underlying quarterly
levels Q3 CY 2003 and Q4 CY 2003, as depicted in Illustration 29.
Illustration 29: Drilling Down on Select Date Dimension Levels ...
In a manner
similar to the steps above, perform several browses involving
different dimensions / dimension hierarchies and measures.
Having provided for the
collection of a representative set of query statistics, we can now get some
exposure to the contents of the Query Log.
Overview and Operation of the Query Log
As we have seen in
other articles, and have emphasized in the sections above, the Query Log lies
at the heart of Usage Analysis for the Analysis Services Database and
some of its child objects, including cubes. As we have also previously noted,
the Query Log captures details about the queries that have been enacted
upon the server by client applications. We have prepared the Query Log
for examination in this section, having selected the creation and maintenance
of the MSSQL Server table incarnation versus the file option, which we
mentioned in passing earlier.
A study of the MSSQL
Server table, which we named ANSYS049_MSASQueryLog, reveals some
differences from its counterpart in the previous version of MSSQL Server
Analysis Services. Unlike its SQL Server Analysis Services 2000
predecessor (by default an MS Access database named msmdqlog.mdb),
we can name the Query Log table whatever we wish in the Properties
settings for the Analysis Server under consideration, and can begin
with an MSSQL Server table (rather than converting an MS Access
table to one), as we saw above.
A look at
the ANSYS049_MSASQueryLog table reveals the nature of the data within
the Query Log. Let's look at the layout of the table from SQL Server
Management Studio, before examining its contents on a more specific basis.
From within Object
Explorer, right-click the ANSYS049_MSASQueryLog table (housed within
the Tables folder of the AdventureWorksDW relational database, as
we saw earlier).
Select Modify from the context menu that appears, as shown in Illustration 30.
Illustration 30: Opening
the ANSYS049_MSASQueryLog Table Schema ...
The table schema appears on a new tab (adjacent to the Adventure
Works [Browse] tab we opened earlier), within the SQL Server Management
Studio, as depicted in Illustration 31.
Illustration 31: The ANSYS049_MSASQueryLog Table Schema
As we noted in Usage-Based
Optimization in Analysis Services 2005, the Usage-Based
Optimization Wizard relies upon the Query Log to support its
operations. As we can see, the log is composed of several relatively
straightforward fields. The fields, together with their respective
descriptions, are summarized in Table 1.
Table 1: The Fields of the Query Log

MSOLAP_Database - the name of the Analysis Services database used in the query
MSOLAP_ObjectPath - the path identifying the cube object used in the query
MSOLAP_User - the name of the user that ran the query
Dataset - a numeric string indicating the level from each dimension used to satisfy the query
StartTime - the time the query began
Duration - the length of time (in seconds) of the query execution
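To make the reporting possibilities of these fields concrete, the short Python sketch below assembles a T-SQL "most-queried intersects" statement against the log table. The column names are assumptions based on the field descriptions above, and the table name reflects our practice setting:

```python
def top_datasets_sql(table: str = "ANSYS049_MSASQueryLog", top: int = 10) -> str:
    """Build a T-SQL report query ranking the most-queried intersects.
    Column names (Dataset, Duration) are assumed to match the default
    Query Log table layout described in Table 1."""
    return (
        f"SELECT TOP {top} Dataset, COUNT(*) AS Hits, "
        "AVG(Duration) AS AvgDuration "
        f"FROM {table} "
        "GROUP BY Dataset "
        "ORDER BY Hits DESC"
    )

print(top_datasets_sql())
```

A statement like this could serve directly as the dataset query behind a Reporting Services report, or be run ad hoc from a query window in Management Studio.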
Alongside this review of the fields from a description perspective, we can view the actual
data in the table from the SQL Server Management Studio, as well.
From the Object
Explorer, right-click the ANSYS049_MSASQueryLog table, once again.
Select Open Table from the context menu that appears, as shown in Illustration 32.
Illustration 32: Opening
the ANSYS049_MSASQueryLog Table ...
The table appears on another new tab (adjacent to the tabs we have
already opened), within the SQL Server Management Studio, as partially
depicted in Illustration 33.
Illustration 33: Partial View of
the ANSYS049_MSASQueryLog Table
Each of the fields has significant potential, with regard to analysis and reporting
utility. I have even created cubes from this table, which, coupled with
performance, sizing and other such information from various sources, can
support all manner of administrative and similar analysis. The fourth column, Dataset,
can be highly useful with regard to the specific information that it reveals
about cube usage. The somewhat cryptic records within this column represent
the associated levels accessed for each dimensional hierarchy within the
query. An example of the Dataset field appears (enclosed in a red
rectangle), within a subset of a sample row, as shown in Illustration 34.
Illustration 34: Example
of the Dataset Field
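As a first step toward working with Dataset values programmatically, the sketch below splits a hypothetical value into its per-hierarchy digit groups. Both the sample value and the reading of each comma-separated group as one dimensional hierarchy are assumptions for illustration only:

```python
def split_dataset(dataset: str):
    """Split a Dataset value into its per-hierarchy digit strings.
    Each comma-separated group is assumed to describe the levels
    touched within one dimensional hierarchy of the query."""
    return dataset.split(",")

# Hypothetical Dataset value for a query touching three hierarchies
groups = split_dataset("110,01,1")
print(groups)
print(len(groups))  # number of hierarchies represented
```

Grouping and counting these values, as in earlier examples, is what turns the cryptic strings into usable "which intersects are hot" statistics.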
While we won't go into a detailed explanation in this lesson, I expect to publish a
prospective article that outlines the interpretation of the digits in the Dataset
field (we will trace an example Dataset field's component digits to
their corresponding components in the respective cube structure), along with
more information regarding report writing based upon the Query Log in
general. Our purpose here is more to expose general options for using the Query
Log directly to generate customized usage analysis reports.
The remaining fields provide rather obvious utility in analyzing cube usage, together with
performance in general. The fields present information which, particularly in
combination with Dataset, helps us to report precisely on the exact
points at which queries interact with the cube. These combinations can provide
excellent access and "audit" data. To some extent, they can confirm
the validity of cube design if, say, a developer wants to verify which
requests, collected during the business requirements phase of cube design, are
actually valid, and which, by contrast, might be considered for removal from
the cube structure based upon disuse, should the time arrive that we wish to
optimize cube size and performance by jettisoning little-used data.
StartTime and Duration provide the
ingredients for evolved performance trending (via relational reports,
OLAP reports, KPIs assembled from either or both, and more), and act as useful
statistics upon which to base (or filter) numerous types of administrative
reports, including information that will help us to plan for heavy reporting
demands and other cyclical considerations.
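One simple trending use of StartTime is bucketing queries by hour to surface peak reporting demand; a minimal Python sketch follows, with fabricated sample timestamps standing in for real log rows:

```python
from collections import Counter
from datetime import datetime

def hourly_load(start_times):
    """Bucket query start times by hour of day, to highlight the
    cyclical demand patterns discussed above."""
    return Counter(t.hour for t in start_times)

# Fabricated sample StartTime values for illustration
samples = [
    datetime(2006, 3, 1, 9, 15),
    datetime(2006, 3, 1, 9, 40),
    datetime(2006, 3, 1, 14, 5),
]
print(hourly_load(samples))  # hour 9 is the busiest in this sample
```

Joining the same buckets against Duration would yield the "heavy reporting demand" view mentioned above, suitable for scheduling processing windows around peak usage.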
Query Log Capture
As we have seen, the usage-based analysis and optimization processes provided via the
Usage-Based Optimization Wizard perform their functions based upon statistics captured in the Query Log. From within
the Analysis Server Properties dialog box, which contains options
that control user interfaces, server environment, processing, logging, and
add-ins, we can make the query logging process more frequent than the default
of one-in-every-ten queries, as we saw earlier. Moreover, from the Properties
settings, we can:
Stop and start
logging (we can leave it stopped entirely, if we desire);
Clear, idle or
restart the log;
Create a new "prospective"
log, while retaining its predecessor for archiving, etc.;
We have also mentioned that we can further manage the
automatic logging of processing messages by directing the server to enable the
logging of the messages to a log file for which we can designate a file path.
While setting the logging interval too low might degrade
performance in a busy production environment, the logging interval setting for
a development environment might typically be a lower number than 10. This would
obviously allow us to capture more voluminous statistics to support intensive
analysis of cube performance and use, prior to optimizing it for final delivery
to information consumers. To cite an example of the utility of lowering the Sampling Rate
that most of us might find intuitive, a fairly straightforward approach to
optimizing performance based on usage is to create partitions with zero
aggregations, adjust query logging to log every query for a period of time to
capture typical usage patterns, and then use the Usage-Based Optimization Wizard to design aggregations
appropriate to the usage. Once this is done, we can likely afford to raise the
Sampling Rate setting again, to lower the overhead inherent in query logging,
making the adjustment within SQL Server Management Studio when ready.
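The server's actual sampling mechanism is internal to Analysis Services, but a simple one-in-N counter illustrates the effect of the Sampling Rate setting described above. The class name and structure here are purely illustrative assumptions, not any Analysis Services API:

```python
class QueryLogSampler:
    """Illustrative one-in-N sampler: records every Nth query it sees,
    mimicking the effect of the Sampling Rate setting (default 10)."""

    def __init__(self, sampling_rate=10):
        self.sampling_rate = sampling_rate
        self._count = 0
        self.logged = []

    def observe(self, query_id):
        self._count += 1
        if self._count % self.sampling_rate == 0:
            self.logged.append(query_id)

# Default rate: 100 queries yield only 10 log entries.
default_sampler = QueryLogSampler(sampling_rate=10)
for q in range(1, 101):
    default_sampler.observe(q)

# Rate of 1 captures every query -- the setting suggested for the
# temporary capture period preceding the Usage-Based Optimization Wizard.
capture_all = QueryLogSampler(sampling_rate=1)
for q in range(1, 101):
    capture_all.observe(q)
```

The trade-off is visible in the counts: a rate of 1 produces ten times the log volume (and overhead) of the default, which is why we raise the setting again once the wizard has its usage sample.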
Now that we have a basic understanding of the workings of the Query Log, we will
discuss options for producing customized reports to meet our business needs.
Direct Reporting for Sophisticated Utilization Analysis
As many of us probably
can see, the options for reporting on
cube utilization analysis are plentiful, whether we maintain the Query Log
in an MSSQL relational table or a file, as we have discussed, or otherwise move
/ transform the data, and house it in an administrative reporting warehouse /
mart, or some other such arrangement. We will find that many common packages
can be used in identical fashion to report from these stores via ODBC and other
channels. I have even created cubes (both for Analysis Services and other
popular OLAP cubes) for larger clients from the statistical information
regarding cube usage, together with statistics that come from other diagnostic
sources, generating robust and useful mechanisms for tracking cube usage and
performance from many perspectives. (I expect to publish articles that
detail more of the specifics of some of these administrative database and
reporting options, in later articles in this series, as well as my MSSQL Server Reporting Services series, here
at Database Journal).
Regardless of the
reporting application, the concepts are essentially the same in reporting from
the Query Log. The general steps include the following:
Creation of a connection to MSSQL Server, and the database containing the Query
Log table (or a connection to the respective location of the data if it has
been housed elsewhere);
Creation of a
query / dataset against the data source;
Creation of a
report or other layout file within which to accumulate / present the selected
data; and
Deployment of the report / other layout in a manner whereby it becomes
accessible to the intended audience(s).
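The steps above can be sketched end to end with an in-memory stand-in for the log table. Note the assumptions: the table and column names below follow the Query Log fields discussed in this article, but the real table lives in the SQL Server database designated in the Analysis Server properties and would be reached via an ODBC / OLE DB connection rather than SQLite, and the sample rows are invented.

```python
import sqlite3

# Step 1: a connection -- here an in-memory SQLite stand-in for the
# SQL Server database that houses the Query Log table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE QueryLog (
        MSOLAP_User TEXT,
        Dataset     TEXT,
        StartTime   TEXT,
        Duration    INTEGER
    )
""")
conn.executemany(
    "INSERT INTO QueryLog VALUES (?, ?, ?, ?)",
    [
        ("DOMAIN\\kim", "0010", "2006-01-09 09:05:00", 120),
        ("DOMAIN\\kim", "0010", "2006-01-09 09:06:00", 340),
        ("DOMAIN\\lee", "1100", "2006-01-09 14:02:00", 80),
    ],
)

# Step 2: a query / dataset against the data source -- a usage report
# showing hit counts and average query duration per user.
report_rows = conn.execute("""
    SELECT MSOLAP_User, COUNT(*) AS Hits, AVG(Duration) AS AvgDuration
    FROM QueryLog
    GROUP BY MSOLAP_User
    ORDER BY Hits DESC
""").fetchall()

# Steps 3 and 4 -- laying out and deploying the report -- belong to
# whichever reporting application we have selected.
```

Whatever the chosen tool, its designer performs essentially these first two steps for us behind a connection dialog and a query builder.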
As I have stated, our "medium"
for reporting can be selected from a wide array of applications. I have created
similar reporting processes in the past using a wide range of business
intelligence tools, including MSSQL Server Reporting Services, Cognos, Business
Objects, Crystal Reports, ProClarity, MicroStrategy, and Microsoft Office, among
other less common tools.
Because reporting typically
entails more than the mere extraction of the data from the Query Log
for use in a report or reports within any of multiple applications,
we will not be able to examine report writing and formatting considerations,
aggregating the data in the report or elsewhere, etc., in this article. Many
of us will know the steps required to create robust and attractive reports
within our own choices of solutions. Other articles of mine elsewhere deal with
the specifics of working with various reporting options. The demand is evident,
in discussions with my clients and readers, for the capability to trend cube
performance, create user access reports, and so forth, and I will focus later on
some of those needs, together with ways to meet them. I will attempt to
accomplish this over the months to come, with different reporting tools after
we have exposed other components of the statistical data source, of which the Query
Log is only a part.
In this article,
we introduced the general need for more sophisticated analysis based upon usage
analysis statistics that we can collect surrounding our Analysis Services
cubes. Our primary focus was a closer examination of the source of Analysis
Services Database (and predominantly cube) performance statistics, the Query
Log, discussing its location and physical structure, how it is populated,
and other characteristics.
We first created a copy of a sample Analysis Services
database for use in our practice exercise, and then enabled the Analysis
Server Query Log to capture query statistics. We then processed our cube,
before manipulating data to create Query Log entries, to complete
preparation for our overview of the Query Log.
Next, we moved into the focus of our session, examining the Query Log contents,
discussing the various statistics captured. Finally, we discussed reporting
possibilities, commenting on both relational and OLAP options. As a part of
our exploration of Query Log statistics capture, we touched upon various
means of customizing Query Log capture to meet local needs.
See All Articles by Columnist William E. Pearson, III