Striking Gold with LogMiner - Part 1: Getting Started

Thursday Jun 17th 2004 by Steve Callan


Oracle's LogMiner utility has been around for several years now, and it originally appeared as a new feature (as opposed to being an improvement of something else). Each major release of Oracle typically ushers in a new set of features, some of which are actually useful and can make your job as a DBA even easier. The "buzz" on LogMiner has worn off. Part of its initial allure was its ability to help DBAs recover data by generating SQL "undo" statements from the online (and archived) redo logs. Part of what makes LogMiner undesirable to use is its interface and its reliance on executing a package with a laundry list of options.

The purpose of this series is to help demystify some of the procedures, illustrate some examples of using LogMiner, and to recommend cases where LogMiner should be a DBA's primary tool for recovering lost data.

Let's look at an outline of some setup and initialization steps you can use to make LogMiner more palatable. Using LogMiner involves (or can involve) a one-time setup, a command to start the utility or session, SQL queries to capture the "redo" or "undo" statements, and a command to end the session. Data is stored in dynamic views ("v$" views), and the query to get the SQL statements is actually the easiest part of using LogMiner.

Shown below is a summary of the steps involved.

Create a dictionary
  - Use a flat file (requires utl_file_dir to be set)
  - Use the redo logs (requires archivelog mode)
  - Use an online catalog

Add logfiles (online or archived)
  - The first one is added as "new"
  - The second and later ones are "add"ed
  - Files can also be removed
  - Only need one member from each redo log group

Start the session
  - Use the print_pretty_sql option for better readability
  - Specify a time range or a range of SCNs

End the session
  - One statement with no options

Creating a dictionary

LogMiner uses a dictionary to "translate internal object identifiers and datatypes to object names and external data formats." (Oracle9i Database Administrator's Guide) You have a choice of three types of dictionaries for LogMiner. To use the flat file version (which I use later in the article), you will need the UTL_FILE_DIR parameter set.

Having picked the flat file version for my dictionary, what does it take to set the parameter? Trivia question: if you are using an SPFILE (or not), can you dynamically set the UTL_FILE_DIR parameter? One way to find out is to look at what is displayed in the Enterprise Manager Console "Configuration" window.

In the console's display for the parameter, there is no checkmark in the "Dynamic" column, so if "utl_file_dir" is not already set, you will have to set it and bounce the instance.
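For example, assuming an SPFILE is in use, you can stage the change, bounce the instance, and confirm the setting (the directory path below is the one used later in this article; adjust it for your environment):

SQL> alter system set utl_file_dir='c:\ora9i\admin\db00\file_dir' scope=spfile;

System altered.

SQL> shutdown immediate
SQL> startup
SQL> show parameter utl_file_dir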

Before running the DBMS_LOGMNR utility, you may need to compile the packages. Instead of compiling each invalid object one at a time, you can make use of the utlrp.sql script.

SQL> select object_name, status
from dba_objects
where object_name like '%LOG%'
and object_type = 'PACKAGE';


OBJECT_NAME                    STATUS
------------------------------ -------
DBMS_INTERNAL_LOGSTDBY         INVALID
DBMS_LOGMNR                    INVALID
DBMS_LOGMNR_D                  INVALID
DBMS_LOGMNR_INTERNAL           INVALID
DBMS_LOGMNR_LOGREP_DICT        INVALID
DBMS_LOGMNR_SESSION            INVALID
DBMS_LOGREP_DEF_PROC           INVALID
DBMS_LOGREP_DEF_PROC_UTL       INVALID
DBMS_LOGREP_EXP                INVALID
DBMS_LOGREP_IMP                INVALID
DBMS_LOGREP_IMP_INTERNAL       INVALID
DBMS_LOGREP_UTIL               INVALID
DBMS_LOGSTDBY                  VALID
LOGMNR_DICT_CACHE              INVALID
(a couple of others are not shown)
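The recompile itself is a single script call from SQL*Plus connected as SYS (the "?" expands to ORACLE_HOME):

SQL> @?/rdbms/admin/utlrp.sql

Rerunning the status query afterward should show the LogMiner packages as VALID.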

After your DBMS_LOGMNR package is ready to go, you can create the LogMiner dictionary. Two recommendations from Oracle are to use a separate tablespace for LogMiner objects and to enable supplemental logging. The examples for these recommendations are shown below along with the "create dictionary" statement. Note that the package used to create the dictionary is DBMS_LOGMNR_D.

SQL> execute dbms_logmnr_d.build ('dictionary.ora', -
> 'c:\ora9i\admin\db00\file_dir', -
> options => dbms_logmnr_d.store_in_flat_file);

PL/SQL procedure successfully completed.

SQL> execute dbms_logmnr_d.set_tablespace('logmnr_ts');

PL/SQL procedure successfully completed.

SQL> alter database add supplemental log data
  2  (primary key, unique index) columns;

Database altered.
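You can confirm that supplemental logging took effect by querying v$database; both columns below should return YES after the ALTER DATABASE statement:

SQL> select supplemental_log_data_pk, supplemental_log_data_ui
  2  from v$database;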

Adding logfiles

With this setup complete, we are now ready to add logfiles to the collection of files we want to be able to examine via SQL statements. The steps to add logfiles are very easy, and you can add and remove them at will. You can also use a shortcut to add files (no need to specify the "OPTIONS" parameter). Track down the names of your log files (this example uses online redo logs, not archive logs) with

SQL> select * from v$logfile;

 GROUP# STATUS  TYPE    MEMBER
------- ------- ------- -----------------------------------
      3         ONLINE  C:\ORA9I\ORADATA\DB00\REDO03.LOG
      2         ONLINE  C:\ORA9I\ORADATA\DB00\REDO02.LOG
      1         ONLINE  C:\ORA9I\ORADATA\DB00\REDO01.LOG

With these file names in hand, use the ADD_LOGFILE procedure with the NEW option for the first one, and an optional ADDFILE for subsequent log files.

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO03.LOG', -
> options => dbms_logmnr.new);

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO02.LOG', -
> options => dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG', -
> options => dbms_logmnr.addfile);

PL/SQL procedure successfully completed.

Log files are easily removed:

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG', -
> options => dbms_logmnr.removefile);

PL/SQL procedure successfully completed.

and easily replaced:

SQL> exec dbms_logmnr.add_logfile( -
> logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG');

PL/SQL procedure successfully completed.

Starting, using, and ending a LogMiner session

We are now ready to mine for gold, well, SQL. Suppose Scott calls you and says he deleted rows from his emp table (where empno is greater than 7900). It was a mistake, and he needs the data restored to his table. The first step is to start LogMiner and populate v$logmnr_contents. This view or "table" is what you query against to extract the SQL_REDO and SQL_UNDO statements.

There are several options as to how you gather the contents. More than likely, you will know a time range rather than an SCN range, so knowing approximately when Scott deleted the rows is all we need from him (aside from the table name).

SQL> exec dbms_logmnr.start_logmnr( -
> dictfilename => 'c:\ora9i\admin\db00\file_dir\dictionary.ora', -
> starttime => to_date('06-Jun-2004 17:30:00', 'DD-MON-YYYY HH24:MI:SS'), -
> endtime => to_date('06-Jun-2004 17:35:00', 'DD-MON-YYYY HH24:MI:SS'));

PL/SQL procedure successfully completed.
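If you do happen to know the SCN range (from the alert log, for instance), START_LOGMNR accepts STARTSCN and ENDSCN instead of the time parameters, and the PRINT_PRETTY_SQL option formats the output for readability. The SCN values below are hypothetical:

SQL> exec dbms_logmnr.start_logmnr( -
> dictfilename => 'c:\ora9i\admin\db00\file_dir\dictionary.ora', -
> startscn => 1593400, -
> endscn => 1593500, -
> options => dbms_logmnr.print_pretty_sql);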

Now we are ready to see what took place.

SQL> select sql_redo, sql_undo
  2  from v$logmnr_contents
  3  where username = 'SCOTT'
  4  and seg_name = 'EMP';

SQL_REDO
------------------------------------------------------------------------------------------
SQL_UNDO
------------------------------------------------------------------------------------------
delete from "SCOTT"."EMP" where "EMPNO" = '7902' and "ENAME" = 'FORD' and "JOB" = 'ANALYST' and "MGR" = '7566' and "HIREDATE" = TO_DATE('03-DEC-81', 'DD-MON-RR') and "SAL" = '3000' and "COMM" IS NULL and "DEPTNO" = '20' and ROWID = 'AAAHW7AABAAAMUiAAM';
insert into "SCOTT"."EMP"("EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO") values ('7902','FORD','ANALYST','7566',TO_DATE('03-DEC-81', 'DD-MON-RR'),'3000',NULL,'20');
*******************************************************************************************
delete from "SCOTT"."EMP" where "EMPNO" = '7934' and "ENAME" = 'MILLER' and "JOB" = 'CLERK' and "MGR" = '7782' and "HIREDATE" = TO_DATE('23-JAN-82', 'DD-MON-RR') and "SAL" = '1300' and "COMM" IS NULL and "DEPTNO" = '10' and ROWID = 'AAAHW7AABAAAMUiAAN';
insert into "SCOTT"."EMP"("EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO") values ('7934','MILLER','CLERK','7782',TO_DATE('23-JAN-82', 'DD-MON-RR'),'1300',NULL,'10');
*******************************************************************************************
delete from "SCOTT"."EMP" where "EMPNO" = '7935' and "ENAME" = 'COLE' and "JOB" = 'LINDA' and "MGR" = '7839' and "HIREDATE" = TO_DATE('01-MAY-04', 'DD-MON-RR') and "SAL" = '4000' and "COMM" IS NULL and "DEPTNO" = '30' and ROWID = 'AAAHW7AABAAAMUiAAO';
insert into "SCOTT"."EMP"("EMPNO","ENAME","JOB","MGR","HIREDATE","SAL","COMM","DEPTNO") values ('7935','COLE','LINDA','7839',TO_DATE('01-MAY-04', 'DD-MON-RR'),'4000',NULL,'30');

You can see how Oracle (the "****" separator lines were added for readability) took the "delete from emp where empno > 7900" statement and turned it into one fully qualified delete per row. It should be apparent that the SQL_UNDO statements are practically in a cut-and-paste state, ready for immediate use.
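One way to apply the undo, sketched here with an arbitrary file name, is to spool the SQL_UNDO column to a script and run it (the OPERATION filter limits the output to the deletes we want to reverse; tidy up any SQL*Plus echo in the spooled file before running it):

SQL> set heading off feedback off
SQL> spool undo_emp.sql
SQL> select sql_undo
  2  from v$logmnr_contents
  3  where username = 'SCOTT'
  4  and seg_name = 'EMP'
  5  and operation = 'DELETE';
SQL> spool off
SQL> @undo_emp.sql
SQL> commit;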

To end your LogMiner session, issue the following command:

SQL> exec dbms_logmnr.end_logmnr;

PL/SQL procedure successfully completed.

Suggestions for Oracle regarding LogMiner

If LogMiner is a utility, then include all of its information (background, steps, examples, etc.) in the Utilities guide instead of spreading the information out over several guides.

If LogMiner is a utility, get rid of the reliance on the package.procedure syntax and its laundry list of options. Export and Import are good examples of easy-to-use (and easy-to-format) utilities.

If the package is in place, wouldn't it be much simpler to be able to issue a query with the appropriate where clauses to extract the information of interest, instead of performing all of the setup in convoluted dbms_logmnr.whatever statements? Perhaps a simple version of LogMiner (with negligible impact on performance), specified with a dynamic parameter (simple_logminer=true), could be made part of a DBCA-created database.

In Closing

As you can see, the steps needed to set up LogMiner were not too painful, and the steps to obtain the SQL_REDO and SQL_UNDO statements were fairly straightforward. A new DBA could use this package to recover simple user errors and not need to access archived log files. So could a senior DBA, but use of this package is a good dividing line between quick and relatively small recoveries (suitable for a new DBA to be responsible for) and more complex recoveries (in the senior DBA's realm of duties).

» See All Articles by Columnist Steve Callan
