Introduction to the Analysis Services 2005 Query Log

About the Series …

This article is a member of the series Introduction to MSSQL Server Analysis Services. The series is designed to provide hands-on application of the fundamentals of MS SQL Server Analysis Services, with each installment progressively presenting features and techniques designed to meet specific real-world needs. For more information on the series, please see my initial article, Creating Our First Cube.

Note: To follow along with the steps we undertake, the following components,
samples and tools are recommended, and should be installed according to the
respective documentation that accompanies MSSQL Server 2005:

  • Microsoft SQL Server 2005 Database Engine

  • Microsoft SQL Server 2005 Analysis Services

  • Business Intelligence Development Studio

  • Microsoft SQL Server 2005 sample databases

  • The Analysis Services Tutorial sample project and other samples that are available with the installation of the above.

To successfully replicate the steps of the article, you also need to have:

  • Membership within one of the following:

    • the Administrators local group on the Analysis Services computer

    • the Server role in the instance of Analysis Services.

  • Read permissions within any SQL Server 2005 sample databases we access within our practice session, if appropriate.

Note: Current Service Pack updates are assumed for the operating system, MSSQL Server 2005 ("MSSQL Server"), MSSQL Server 2005 Analysis Services ("Analysis Services"), MSSQL Server 2005 Reporting Services ("Reporting Services") and the related Books Online and Samples. Images are from a Windows 2003 Server environment, but the steps performed in the articles, together with the views that result, will be quite similar within any environment that supports MSSQL Server 2005 and its component applications.

Introduction

In my article, Usage-Based Optimization in Analysis Services 2005, we introduced and explored Usage-Based Optimization, gaining some hands-on exposure to the Usage-Based Optimization Wizard. We noted that the new Usage-Based Optimization Wizard improves dramatically upon the effectiveness of the Analysis Services 2000 Usage Analysis Wizard (going significantly further than the generation of simple reports) and Storage Design Wizard (allowing for up-to-date, usage-based optimization). We focused upon the way that the Usage-Based Optimization Wizard offers us the capability to base aggregation design upon a given cube's usage statistics, in combination with other factors, and allows us to make subsequent adjustments to our existing aggregation design and storage mode as time passes, and as information is collected from which meaningful statistics can be derived.

We examined the operation of the Usage-Based Optimization Wizard within a context of aggregation design, and then reinforced our understanding with a practice exercise within which we enabled the Analysis Server Query Log to capture query statistics within a copy of a sample Analysis Services database we created for the exercise. After next processing the clone database, we manipulated data within a cube therein to create Query Log entries. The focus of the exercise then became the procedure whereby we set aggregations for our designated practice cube with the Usage-Based Optimization Wizard. Throughout the guided steps of the Wizard, we examined each of the possible settings that it makes available to us, and commented upon general optimization concepts as we proceeded through the practice example.

In this article, we will examine more closely the Query Log itself. I often receive requests from clients and readers asking how they can approach the creation of more sophisticated reporting to assist in their usage analysis pursuits. This is sometimes based upon a need to create a report that presents data as it appears in, say, the Query Log table / file, in a way that allows for printing, publishing to the web, or otherwise delivering report results to information consumers. Moreover, some users simply want to be able to design different reports that they can tailor themselves, to meet specific needs. Yet others want a combination of these capabilities.

Each of these more sophisticated analysis and reporting needs can be met in numerous ways. In this lesson, we will examine the source of cube performance statistics, the Query Log, discussing its location and physical structure, how it is populated, and other characteristics. We will discuss ways that we can customize the degree and magnitude of statistical capture in the Query Log to enhance its value with regard to meeting more precisely our local analysis and reporting needs. We will practice the process of making the necessary changes in settings to illustrate how this is done. Finally, we will discuss options for generating more in-depth, custom reports than the wizard provides, considering ways that we can directly obtain detailed information surrounding cube processing events in a manner that allows more sophisticated selection, filtering and display, as well as more customized reporting of these important metrics.
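
To ground the discussion of location and structure: in Analysis Services 2005, the Query Log is governed by server properties found under Log\QueryLog in the instance's properties, including QueryLogConnectionString (the relational database that houses the log), QueryLogTableName (OlapQueryLog by default), CreateQueryLogTable (directing the server to generate the table itself), and QueryLogSampleRate (the default of 10 captures one query in ten; a value of 1 captures every query). When the server generates the table, it takes approximately the following shape; this is a sketch from a typical installation, and the column types are approximations that may vary slightly by build:

    -- A sketch of the table Analysis Services creates when the server property
    -- CreateQueryLogTable is true (default table name: OlapQueryLog). Column
    -- names match the generated log; types are approximations.
    CREATE TABLE dbo.OlapQueryLog (
        MSOLAP_Database   nvarchar(255),   -- the Analysis Services database queried
        MSOLAP_ObjectPath nvarchar(4000),  -- server\database\cube\measure group path
        MSOLAP_User       nvarchar(255),   -- identity of the querying user
        Dataset           nvarchar(4000),  -- attribute vector describing the subcube requested
        StartTime         datetime,        -- when the query began
        Duration          bigint           -- elapsed query time, in milliseconds
    );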

The Analysis Services 2005 Query Log

Overview and Discussion

The entire idea behind "optimization based upon utilization" is, first and foremost, to enhance performance based upon what consumers ask for on a recurring basis. Beginning with capabilities that debuted in Analysis Services 2000, we have been able to leverage historical query details to ascertain the aggregations of data that our cubes need to maintain to support the most frequently "asked" queries. We could apply filters to refine this exploration, and extrapolate what we learn to the specification of which aggregations to maintain, thus keeping the appropriate pre-calculations in place for the consumer populations we support, as we detailed in Usage-Based Optimization in Analysis Services 2005.

We have multiple options for how we incorporate the Query Log when we undertake utilization analysis and utilization-based optimization within Analysis Services 2005. Examples include the use of the Usage-Based Optimization Wizard, as we saw in Usage-Based Optimization in Analysis Services 2005, to create usage-based aggregations in a directed manner, so as to fine-tune the storage / processing tradeoffs involved. Alternatively, we might create reports, using Reporting Services or other relational report writers, to analyze usage – or even usage trends – to prompt proactive action with regard to aggregation design, as well as general cube sizing and structure. As illustrations, I have created dashboard objects for various clients that keep administrators informed of which multidimensional intersects are being queried most often, as well as what the processing times for those queries are (to identify "candidate intersects" for better-tuned aggregations); intersects that are rarely accessed (candidates, perhaps, for removal or less intensive aggregations); the overall cube size; and trends regarding these and other values to highlight the need for storage and optimization planning at future dates. Important to any optimization effort is the ongoing requirement to revisit the process to capture changes that occur over time in usage patterns – the more history we have of actual usage, the more value we can add with usage-based optimization.
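
As a simple illustration of this kind of reporting (a sketch only, not the client dashboards themselves): once entries accumulate, a relational query against the default OlapQueryLog table can surface the most frequently requested intersects and their average response times. The query below assumes the default table and column names, run within whatever database QueryLogConnectionString designates:

    -- Most frequently queried intersects, with average response time.
    -- Run in the relational database that hosts the query log table.
    SELECT TOP 10
        MSOLAP_Database,
        Dataset,                         -- attribute vector identifying the intersect
        COUNT(*)      AS QueryCount,     -- how often this intersect is requested
        AVG(Duration) AS AvgDurationMs   -- average elapsed time, in milliseconds
    FROM dbo.OlapQueryLog
    GROUP BY MSOLAP_Database, Dataset
    ORDER BY QueryCount DESC, AvgDurationMs DESC;

Intersects that appear often and run long are natural candidates for additional aggregations; those that rarely appear are candidates for removal or less intensive aggregation, as noted above.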

Regardless of the ways we employ the data within the Query Log, we must populate the log first. We will perform the steps to do so once again in this session, in preparation for browsing the log, as well as discussing various reporting and "fine tuning" options, in general. In this article, we will:

  • Create a copy of a sample Analysis Services database for use in our practice exercise;

  • Enable the Analysis Server Query Log to capture query statistics;

  • Process the cube and manipulate data, to create Query Log entries;

  • Examine the Query Log contents, discussing the various statistics captured (a sample inspection query appears after this list);

  • Discuss reporting options, including the use of SQL Server Reporting Services as a relational and / or OLAP reporting tool;

  • Comment upon customization concepts as we proceed through our practice example.
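
As a preview of the examination step above, a minimal inspection query such as this (again assuming the default OlapQueryLog table name) lists the most recent entries captured in the log:

    -- The twenty most recent Query Log entries, newest first.
    SELECT TOP 20
        StartTime,
        MSOLAP_User,        -- who issued the query
        MSOLAP_ObjectPath,  -- which cube / measure group was hit
        Duration            -- elapsed time, in milliseconds
    FROM dbo.OlapQueryLog
    ORDER BY StartTime DESC;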

Considerations and Comments

For purposes of the practice exercises within this series, we will be working with samples that are provided with MSSQL Server 2005 Analysis Services. These samples include, predominantly, the Adventure Works DW Analysis Services database (with member objects). The Adventure Works DW database and companion samples are not installed by default in MSSQL Server 2005. The samples can be installed during Setup, or at any time after MSSQL Server has been installed. The topics "Running Setup to Install AdventureWorks Sample Databases and Samples" in SQL Server Setup Help or "Installing AdventureWorks Sample Databases and Samples" in the Books Online (both of which are included on the installation CD(s), and are available from www.Microsoft.com and other sources) provide guidance on samples installation.

Important information regarding the rights / privileges required to accomplish samples installation, as well as to access the samples once installed, is included in the references I have noted.

William Pearson
Bill has been working with computers since before becoming a "big eight" CPA, after which he carried his growing information systems knowledge into management accounting, internal auditing, and various capacities of controllership. Bill entered the world of databases and financial systems when he became a consultant for CODA-Financials, a U.K.-based software company that hired only CPAs as application consultants to implement and maintain its integrated financial database - one of the most conceptually powerful, even in his current assessment, to have emerged. At CODA, Bill deployed financial databases and business intelligence systems for many global clients.

Working with SQL Server, Oracle, Sybase and Informix, and focusing on MSSQL Server, Bill created Island Technologies Inc. in 1997, and has developed a large and diverse customer base over the years since. Bill's background as a CPA, Internal Auditor and Management Accountant enables him to provide value to clients as a liaison between Accounting / Finance and Information Services. Moreover, as a Certified Information Technology Professional (CITP) - a Certified Public Accountant recognized for his or her unique ability to provide business insight by leveraging knowledge of information relationships and supporting technologies - Bill offers his clients the CPA's perspective and ability to understand the complicated business implications and risks associated with technology. From this perspective, he helps them to effectively manage information while ensuring the data's reliability, security, accessibility and relevance.

Bill has implemented enterprise business intelligence systems over the years for many Fortune 500 companies, focusing his practice (since the advent of MSSQL Server 2000) upon the integrated Microsoft business intelligence solution. He leverages his years of experience with other enterprise OLAP and reporting applications (Cognos, Business Objects, Crystal, and others) in regular conversions of these once-dominant applications to the Microsoft BI stack. Bill believes it is easier to teach technical skills to people with non-technical training than vice-versa, and he constantly seeks ways to graft new technology into the Accounting and Finance arenas. Bill was awarded Microsoft SQL Server MVP in 2009.

Hobbies include advanced literature studies and occasional lectures, with recent concentration upon the works of William Faulkner, Henry James, Marcel Proust, James Joyce, Honoré de Balzac, and Charles Dickens. Other long-time interests have included the exploration of generative music sourced from database architecture.
