Using LogMiner - Part 2 - Striking Out

Wednesday May 11th 2005 by Steve Callan

Steve Callan investigates a scenario in which LogMiner may or may not be able to recover data lost through user error.

In a previous article on LogMiner, the steps for setting up your environment to use this tool were shown. Oracle's documentation lists several ways LogMiner can be used to recover lost data. Generally, "lost" in this case means a loss caused by user error. Under the appropriate conditions, you can strike gold with LogMiner. In other situations, you run the risk of striking out. This article investigates a scenario in which LogMiner may or may not be able to recover the lost data. Let's start off by reviewing what Oracle says LogMiner can do.

Potential Uses for Data Stored in Redo Logs

(See Chapter 9 of the Oracle 9.2 Administrator's Guide)

  • Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun.
  • Detecting, and whenever possible correcting, user error, which is a more likely scenario than logical corruption.
  • Determining what actions you would have to take to perform fine-grained recovery at the transaction level.
  • Performance tuning and capacity planning through trend analysis.
  • Performing post-auditing. The redo logs contain all of the information necessary to track any DML and DDL statements executed on the database, the order in which they were executed, and who executed them.

The next part of interest concerns the "what," as in what is in the redo logs that is of use to LogMiner. The last three items from the list are shown below.

  • The name of the user who issued the DDL or DML statement to make the change (USERNAME column).
  • Reconstructed SQL statements showing SQL that is equivalent (but not necessarily identical) to the SQL used to generate the redo records (SQL_REDO column). If a password is part of the statement in a SQL_REDO column, the password is encrypted.
  • Reconstructed SQL statements showing the SQL statements needed to undo the change (SQL_UNDO column). SQL_UNDO columns that correspond to DDL statements are always NULL. Similarly, the SQL_UNDO column may be NULL for some datatypes and for rolled back operations.
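Once a LogMiner session has been started, these columns are available through the V$LOGMNR_CONTENTS view. A minimal sketch of such a query (the SCOTT filter is just an example, not something from the scenario below):

```sql
-- Sketch only: assumes dbms_logmnr.start_logmnr has already been run
-- in this session. Column names are as documented for V$LOGMNR_CONTENTS.
select username, sql_redo, sql_undo
  from v$logmnr_contents
 where username = 'SCOTT';
```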

The $64,000 Question

We know we can recover from bad or incorrect DML statements. According to the documentation, can you recover from an errant DDL statement? Drop a table? No problem getting that back - sort of. Here is the big question: can you recover from a drop user username cascade? Consider the following scenario. You are working on a development database and need to transfer data from a production environment into a development environment so that your development database has more realistic data in it. You need to take the current development database and make a new one-off version of it. Because of the way the application works, you cannot perform a straightforward export/import. The application requires that you run a wizard-like utility on the new version of the development database prior to the import. Further, you do not have file system access to the production database's data files, so you are stuck with the export dump file.

You start by exporting the current development database's data so you can import it into the new one-off version of it. You perform the import, but you did not know about the wizard requirement, so you drop the schema owner using the cascade clause so you can start over with a clean database. A scheduled backup has since overwritten your one copy of the export dump file. Can you recover what was just dropped in the new database?

No one cares about the old development database except for a user who put his training class data into it. Although everyone is of the opinion "why did he do that in the first place, too bad for him," the fact remains that he (and you) need to recover the lost/dropped schema. It wasn't your problem to start with, but it is now.

What are your options? Flashback is of no use because it only deals with DML statements, not DDL ones. There is no other backup because the intent of the original development database was that it is a throwaway database. You are not running in archive log mode for the same reason. You do, however, happen to have three 100MB redo logs that have not cycled or overwritten one another. Can LogMiner bail you out of the dropped user situation?

The first thing you want to do is put a screaming halt on anything and anyone else using the database. Shut it down, and shut it down now. Take a cold backup and put it somewhere safe. Moreover, this is a case where you do want to include the redo logs as part of the copy. Taking the cold backup is prudent in that you cannot end up any worse off than you already are.
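A minimal cold-backup sketch along those lines (the paths and drive letters are assumptions; adjust for your environment):

```sql
-- Shut the database down cleanly, then copy the data files, control
-- files, and online redo logs from the OS prompt via SQL*Plus host.
connect / as sysdba
shutdown immediate
host xcopy C:\ora9i\oradata\db00\*.* D:\coldbackup\db00\ /E /I
-- Leave the database down until the copy is verified, then:
startup
```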

What is in the redo logs when Scott drops a table? Using a simple drop table emp issued by Scott should give you an appreciation of the obstacles that lie ahead of you.

Here is a quick run-through of Scott dropping his emp table and of you (knowing a fairly exact time so as to narrow down the amount of data returned) using a prepared script to fire up LogMiner to see what you can retrieve.

Note: Part of the table was cut off for display purposes

SQL> select * from emp;

     EMPNO ENAME      JOB             
---------- ---------- --------- 
      7369 SMITH      CLERK           
      7499 ALLEN      SALESMAN        
      7521 WARD       SALESMAN        
      7566 JONES      MANAGER         
      7654 MARTIN     SALESMAN        
      7698 BLAKE      MANAGER         
      7782 CLARK      MANAGER         
      7788 SCOTT      ANALYST         
      7839 KING       PRESIDENT       
      7844 TURNER     SALESMAN        
      7876 ADAMS      CLERK           
      7900 JAMES      CLERK           
      7902 FORD       ANALYST         
      7934 MILLER     CLERK           

14 rows selected.

SQL> drop table emp;

Table dropped.

SQL> select * from emp;
select * from emp
ERROR at line 1:
ORA-00942: table or view does not exist

Scott alerts you about the dropped table and he can narrow down the time to a 15-second window.

execute dbms_logmnr_d.build('dictionary.ora', -
'C:\ora9i\oradata\db00', -
options => dbms_logmnr_d.store_in_flat_file);

execute dbms_logmnr_d.set_tablespace('logmnr_ts');

alter database add supplemental log data
(primary key, unique index) columns;

exec dbms_logmnr.add_logfile( -
logfilename => 'C:\ORA9I\ORADATA\DB00\REDO01.LOG', -
options => dbms_logmnr.new);

exec dbms_logmnr.add_logfile( -
logfilename => 'C:\ORA9I\ORADATA\DB00\REDO02.LOG', -
options => dbms_logmnr.addfile);

exec dbms_logmnr.add_logfile( -
logfilename => 'C:\ORA9I\ORADATA\DB00\REDO03.LOG', -
options => dbms_logmnr.addfile);

exec dbms_logmnr.start_logmnr( -
dictfilename => 'C:\ora9i\oradata\db00\dictionary.ora', -
starttime => to_date('04-May-2005 22:50:00','DD-MON-YYYY HH24:MI:SS'),-
endtime => to_date('04-May-2005 22:50:15', 'DD-MON-YYYY HH24:MI:SS'));

With all of the above successfully run, you can query v$logmnr_contents.
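For example, a first query of the mined contents might look like this (the formatting commands are assumptions added for readability):

```sql
-- Sketch: list what LogMiner captured for Scott in the time window.
set linesize 132
column sql_redo format a60
select username, operation, sql_redo
  from v$logmnr_contents
 where username = 'SCOTT';
```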

Seeing only "drop table emp;" - especially if you were expecting to see all of the SQL_UNDO statements lined up and ready for cut and paste (and don't forget there may be hundreds of thousands of rows of data involved) - has got to make your jaw drop and your heart race a little bit. If you recall the fifth bullet in the potential uses list, truth in advertising is in effect here: you can plainly see that the drop table DDL statement has been recorded.

Don't give up just yet. Surely, there must be something in the redo logs that can be used. Alter the query a bit and you get the following:

The query returns almost 30 rows, and the first eight return the columns of the table (EMPNO, ENAME and JOB among them). So far, this does not look too bad or incomprehensible. What is not obvious is the work that went into making the output "pretty." Start from the top and insert the undo statements one at a time.
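The altered query was probably something along these lines (a hypothetical reconstruction - the original article relied on a screenshot, and the SEG_OWNER filter is an assumption):

```sql
-- Sketch: the undo for a DROP TABLE is internal DML against SYS-owned
-- data dictionary tables (col$, obj$, and the like), so look at SYS
-- operations in the window rather than at the EMP segment itself.
column sql_undo format a70
select seg_name, operation, sql_undo
  from v$logmnr_contents
 where seg_owner = 'SYS'
   and sql_undo is not null;
```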

You may find that several of the more obscure insert statements result in 0 rows updated. I am not a database forensic expert, but a "0 rows updated" would give me pause considering this is supposed to be an undo type of statement.

After running through the set of LogMiner generated undo statements, let's connect as Scott and see what the results are.

There is nothing quite like an ORA-00600 error to cap off the evening while you are sweating bullets trying to mine around 500,000 rows worth of data across 80 tables. I believe I was precise in editing the generated undo statements as there were no syntax errors - the results of each statement were either 1 or 0 rows updated or inserted.

With the Scott case, we can always drop the user with cascade and start over - or can we?

With ORA-00600 errors, you can use MetaLink's ORA-00600 lookup utility. After performing DML on SYS-owned tables via the undo statements, maybe you have an easy fix - maybe not.

Some Conclusions about LogMiner

  • It is impractical to try to recover any significant amount of data.
  • Simple DML statements - the one-row type of mistake - are easily undone.
  • Multiple rows affected by the same error are not too hard to correct.

Recovering from a simple DDL statement error virtually guarantees a "game over" end state for the novice, average and above average DBA. Unless you have significant experience with DML on SYS-level objects, it is not worth the risk to your database (assuming there are usable schemas remaining within). Although the undo information is present in some fashion, using it is not explicitly supported. The documentation could point that out in a better manner.

In Closing

The scenario described earlier reminds me of a line from the movie "The Recruit" when Al Pacino is dispensing advice to the trainees ("Rule number one: don't get caught."). If you are working on an unfamiliar database, before you start doing anything else, ensure you have a good backup. To you, the database may be a piece of junk, but to someone else, it represents days or weeks worth of work. Although you may have the sympathy of other people (developers and managers who share the opinion that development databases were never meant to store anything real), the bottom line is that there is valuable data that must be recovered. Even though the errant user bears some responsibility for the situation in the first place, you are the DBA and you are the one who looks foolish for being complacent and being caught without a backup. You know you know better than to get caught like that, so don't let it happen.

