Explain Away Your Troubles

Thursday Mar 6th 2003 by James Koopmann

Determining when things change in a database is the first step in zeroing in on problems. One of the prime culprits to change is the very SQL that we attempt to run every day. Here is a no-frills method to help sniff out those changes that occur.

Quite a few years ago, when I switched from DB2 to Oracle, there was one component of DB2 I missed. With DB2 you could bind an application to a database. Among other things, the bind process would determine the access path for each SQL statement in your application. Then, no matter when you ran the application, it would always use the same access path. This was quite a relief with DB2 because you always knew that once you tuned an application, it stayed tuned. This is not the case with earlier versions of Oracle, or where outlines have not been implemented or maintained. Every time you run an application, the SQL must go through the optimizer, which may choose a different access path than it did the last time. I would like to emphasize that this will not happen very often if your data distribution remains the same. Unfortunately, we all live in hostile environments where the data and its distribution can change quite often over time. Not only do we have to watch out for the data changing on us, but also for developers changing applications and for the many ad hoc queries that infiltrate the database. I would venture to say that it is this second flavor of hostility that poses the greatest threat to your database.

The next best thing to having a bound access path for every application we run is to be able to determine when that access path has changed for each and every SQL statement that gets executed. To do this I have come up with a set of scripts that will allow you to capture, store, and report on changes in explain plans for all of your SQL statements.

The first thing to do is create a few structures to store the SQL and explain plans within your Oracle database; Figure 1 shows these structures. The sequence sqlexp_seq is just an indexing mechanism. The table sqlexp_sqltext stores the sequence number assigned to the SQL, the date it was added or updated in the table, and the actual SQL statement. The next table, sqlexp_plan_table, is an exact replica of the plan_table that would be created if you were to run the utlxplan.sql script. Check your version's plan_table against this one and feel free to modify any of the scripts here so they match. This table stores the explain plans for all SQL captured; the statement_id column holds the sequence number assigned to the SQL statement in question.

Figure 1.


--# sqlexp_create.sql
--#
--# create objects needed to monitor sql
--#

CREATE SEQUENCE sqlexp_seq
      INCREMENT BY 1 START WITH 1;

CREATE TABLE sqlexp_sqltext (
             TEXT_SEQ        NUMERIC,
             TEXT_DATE       DATE,
             SQL_TEXT        VARCHAR2(1000))
             TABLESPACE TOOLS;

CREATE TABLE sqlexp_plan_table (
             statement_id    VARCHAR2(30),
             timestamp       DATE,
             remarks         VARCHAR2(80),
             operation       VARCHAR2(30),
             options         VARCHAR2(30),
             object_node     VARCHAR2(128),
             object_owner    VARCHAR2(30),
             object_name     VARCHAR2(30),
             object_instance NUMERIC,
             object_type     VARCHAR2(30),
             optimizer       VARCHAR2(255),
             search_columns  NUMERIC,
             id              NUMERIC,
             parent_id       NUMERIC,
             position        NUMERIC,
             other           LONG)
             TABLESPACE TOOLS;
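
To make the relationship between the two tables concrete, here is a minimal sketch (not part of the scripts that follow) of how one statement and its plan end up keyed by the same sequence number. The EMP table, the bind name, and the literal statement_id of '1' are only placeholders, and if your release's EXPLAIN PLAN insists on additional plan_table columns you will need to add them to sqlexp_plan_table first:

INSERT INTO sqlexp_sqltext (text_seq, text_date, sql_text)
       VALUES (sqlexp_seq.NEXTVAL, SYSDATE,
               'select ename from emp where empno = :num');

EXPLAIN PLAN
    SET STATEMENT_ID = '1'        -- whatever value sqlexp_seq just handed out
    INTO sqlexp_plan_table
    FOR select ename from emp where empno = :num;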

The next task is to extract all of the SQL from your system. To do this I use two scripts. The first, sqlexp_pull.sql (Figure 2), is a PL/SQL script that does nothing more than query v$sqlarea and write every SELECT statement out to a flat file in the directory defined by 'utl_path', with a file name of 'sqlexp_pull.lst'. You will need to change the 'utl_path' variable to a valid directory on your system that has been set up for the UTL_FILE package.

Figure 2.

--#--------------------------------------------------------------------------#
--# sqlexp_pull.sql 
--#

set serverout on 
set echo off
set verify off
set linesize 132
set pagesize 40
set long 9999

DECLARE 

CURSOR c0 IS 
          select sql_text
            from v$sqlarea
           where upper(sql_text) like 'SELECT%'
           order by 1;

utl_file_handle         UTL_FILE.FILE_TYPE;
utl_path                VARCHAR2(255) := '<valid utl_path>';
utl_file_name           VARCHAR2(255) := 'sqlexp_pull.lst';

BEGIN 
  DBMS_OUTPUT.ENABLE(1000000);
  BEGIN
    utl_file_handle := UTL_FILE.FOPEN(utl_path, utl_file_name, 'w');
  EXCEPTION
    WHEN UTL_FILE.INVALID_PATH THEN
      dbms_output.put_line
    ('File location or filename was invalid. '||substr(sqlerrm,1,30));
      dbms_output.put_line('    SQLCODE:  '||sqlcode);
      RETURN;
    WHEN UTL_FILE.INVALID_MODE THEN
      dbms_output.put_line
    ('The open_mode parameter in FOPEN was invalid. '||substr(sqlerrm,1,30));
      dbms_output.put_line('    SQLCODE:  '||sqlcode);
      RETURN;
    WHEN UTL_FILE.INVALID_OPERATION THEN
      dbms_output.put_line
    ('The file could not be opened or operated on as requested. '||substr(sqlerrm,1,30));
      dbms_output.put_line('    SQLCODE:  '||sqlcode);
      RETURN;
  END;
    FOR r0 IN c0 LOOP
      UTL_FILE.PUT_LINE(utl_file_handle,r0.sql_text);
      UTL_FILE.FFLUSH(utl_file_handle);
    END LOOP;
    UTL_FILE.FCLOSE(utl_file_handle);
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No data found in sqlarea');
END; 
/
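
If you are not sure which directories have been set up for UTL_FILE, a quick sanity check from SQL*Plus is shown below. Whatever directory you plug into utl_path must be covered by the utl_file_dir initialization parameter (the path in the comment is only an example):

show parameter utl_file_dir

-- and in init.ora, something along the lines of:
-- utl_file_dir = /u01/app/oracle/sqlexp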


Once the SQL has been captured, the next step is to run it all through a Perl script (Figure 3) that massages the statements a bit. What this script does is replace all string and number literals with bind variables and filter out any SQL statement that references a 'v$' table. The literals are replaced with bind variables because, if they were not, you would run the risk of storing thousands of copies of the same SQL statement where the only difference was the string or number in the 'WHERE' clause. I strip out the statements that reference 'v$' tables because 99.9% of these are internal to Oracle and are of no interest to us. This simple script converts 98% of my SQL statements properly; please feel free to modify it to achieve a higher percentage. When you run this script against the output from sqlexp_pull.sql, redirect its output to another file; I use sqlexp_tmp.lst. I then remove all of the duplicate SQL statements in sqlexp_tmp.lst with the UNIX sort command, writing the result to another file, sqlexp_put.lst. This file will be used for loading the sqlexp_sqltext and sqlexp_plan_table tables.

Command to sort and get rid of duplicates:

sort -u -o $UTL_PATH/sqlexp_put.lst $UTL_PATH/sqlexp_tmp.lst

Figure 3.

#! /usr/local/bin/perl
#
# sqlexp.pl - massage the raw SQL pulled from v$sqlarea:
#   * replace string and number literals with the binds :string and :num
#   * drop any statement that references a v$ (dollar-sign) table
#
open(SQLEXPLST, "<valid utl_path>/sqlexp_pull.lst") ||
  die "cannot open sqlexp_pull.lst";
while ($sqltext = <SQLEXPLST>) {
  chop($sqltext);
  if ($sqltext =~ /^ *select\b/i) {
    # save the ORDER BY clause so it can be restored untouched later
    ($dummy,$orderby) = split(/^ *select *.* *from *.* *order *by */i,$sqltext);
    if ( $orderby ne '' ) {
      $orderby = "order by ".$orderby;
    }
    # pad punctuation with spaces so literals stand alone as words
    # (concatenation operators are treated like commas)
    $sqltext =~ s/,/ , /g;
    $sqltext =~ s/'/ ' /g;
    $sqltext =~ s/=/ = /g;
    $sqltext =~ s/\|\|/ , /g;
    $sqltext =~ s/\(/ \( /g;
    $sqltext =~ s/\)/ \) /g;
    # swap literals for bind variables
    $sqltext =~ s/'[^']*'/:string/g;
    $sqltext =~ s/[^:]\b\d+\b/ :num/g;
    # put the original ORDER BY clause back
    $sqltext =~ s/order *by *.*/$orderby/i;
    # throw away statements that reference v$ (internal) tables
    if ($sqltext =~ / *.* *from *.*\$+.*/i) {
      $sqltext = "";
    }
  }
  if ($sqltext ne '') {
    print "$sqltext\n";
  }
}
close(SQLEXPLST);
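
To see what the massaging amounts to, here is roughly what one statement looks like before and after a trip through the script (extra whitespace trimmed for readability; the EMP table and the literal values are only an example):

Before:  SELECT ename FROM emp WHERE empno = 7369 AND job = 'CLERK'
After:   SELECT ename FROM emp WHERE empno = :num AND job = :string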

At this point, I would like to state that you do not have to go through this process of extracting and parsing SQL statements from your database if you already know what your SQL statements are. Just put the SQL statements you want to track into the flat file 'sqlexp_put.lst' in your utl_path directory and proceed from this point. The format for this file is simple: each SQL statement must be on a single line. Don't worry about line wrapping; this is the format the load process expects.
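
For example, a hand-built sqlexp_put.lst tracking two statements could be as simple as this (the statements are placeholders; note the bind variables in place of literals):

select ename , job from emp where deptno = :num
select dname from dept where loc = :string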

The next step, and the true meat of this article, is the script sqlexp_put.sql. This script lets you do maintenance and reporting on the SQL statements either captured using the method above or already loaded into the 'sqlexp' tables. It supports a wide variety of actions; the most common ones, the ones I use myself, are outlined in the following table. For all flags and values, please see the documentation at the top of the script.

Flag1  Flag2  Flag3  Action of script sqlexp_put.sql
-----  -----  -----  ------------------------------------------------------------
l      n      %      Load all new SQL found in the file sqlexp_put.lst. This will
                     also report on changes in explain plans found for all SQL
                     processed. Any SQL that cannot be explained will end up in
                     the file <utl_path>/sqlexp_err.lst.
n      o      %      Process all SQL in table sqlexp_sqltext and re-run explain
                     plans. Compare the new explain plan with the old explain
                     plan stored in sqlexp_plan_table and report on the
                     differences.
r      s      <id>   Show the SQL and explain plan for statement_id = <id>.
                     NOTE: this is NOT the explain plan stored in
                     sqlexp_plan_table; it is a totally new explain plan
                     generated with current statistics.
s      s      <id>   Show the SQL and explain plan for statement_id = <id>.
                     NOTE: this IS the explain plan stored in sqlexp_plan_table.
n      s      <id>   Process SQL where statement_id = <id> and report on
                     differences if found.
u      s      <id>   Update sqlexp_plan_table with the NEW explain plan for
                     statement_id = <id>.


Download sqlexp_put.sql

If you work in a hostile environment, or just want a neat little tool to track the SQL that runs against your databases, this is the tool for you. You now have a way to proactively determine when an explain plan has changed, a tool to tell you what the explain plan used to be, and, hopefully, a mechanism that you can tailor to suit your needs. I typically stagger the running of these scripts to try to capture the different SQL that is run during the day (flag1=l, flag2=n, flag3=%). I also run a daily report that tells me of any explain plans that have changed (flag1=n, flag2=o, flag3=%).
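
How the three flags get into sqlexp_put.sql depends on the header of the downloaded script, but if, for instance, they are taken as SQL*Plus positional parameters (&1, &2, &3 -- an assumption, so check the script), those two runs could be kicked off like this (the dbmon login is just a placeholder):

sqlplus dbmon/secret @sqlexp_put.sql l n %
sqlplus dbmon/secret @sqlexp_put.sql n o %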

Please feel free to modify the above scripts for your installation. I have only trapped SELECT statements; you may want to trap UPDATEs, DELETEs, and INSERTs as well. You may also want to modify the scripts to store a history of changes instead of just the last explain plan. A few cautions if you plan to change any of the scripts. Be careful about which user you run these scripts as; that user must have full access to all of the objects referenced in the SQL (you may need to issue grants and create synonyms, as sketched below). If you change the parsing in the script sqlexp.pl, be aware that you may now be doing substitutions differently than when you initially captured and parsed the SQL, which will snowball into loading SQL that you have already loaded before. In any event, I hope I have given you a small window into looking at all the SQL in your database.
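
As a minimal sketch of that grants-and-synonyms caution, assuming a hypothetical monitoring user SQLMON and an application table SCOTT.EMP (both names are placeholders for your own schemas):

GRANT SELECT ON scott.emp TO sqlmon;   -- run as the table owner or a DBA

CREATE SYNONYM emp FOR scott.emp;      -- run while connected as SQLMON, so the
                                       -- captured SQL can resolve the bare name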

» See All Articles by Columnist James Koopmann
