Reporting Database Object Increase On Oracle

Friday May 30th 2003 by Marin Komadina

On a production Oracle system, a DBA is responsible for collecting database increase statistics. Learn how to do so easily and economically, without changing the existing database.

On a production Oracle system, a DBA is responsible for collecting database increase statistics. The collected information is the basic trigger for purchasing additional storage resources. A DBA has several options when it comes to collecting day-to-day database increase information. The minimal questions that need to be answered before starting any procedure are:

  • Does the system have to be up and running 24 hours a day, 7 days a week, 12 months a year, so that it cannot be shut down for even one hour a week or month?
  • Can you install any database objects or load any data into the database?

If the answers to the above questions dictate no changes to the database, then you will need a solution that does not require touching the existing database. At the same time, a DBA needs to rule out any solution that requires a tremendous amount of investigation and testing, and come up with something easy and economical.

This article covers:

  • Definition of Database Daily Increase
  • Methods for Monitoring Database Object Increase
  • Object Increase Reporting Using Export Utility
  • Conclusion

Definition of Database Daily Increase

We can define the daily database increase as the amount of new information stored in the database per day. It can be measured at the tablespace level (storage in MB) and at the object level (table increase in number of records).

Tablespace increase figures provide direct storage information, while object increase figures provide only indirect storage information.
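
As a quick illustration of the tablespace-level view (which is not the focus of this article), the allocated space per tablespace can be read directly from the data dictionary. The sketch below is a generic example and is not taken from the article's system.

#!/bin/ksh
# Generic sketch: allocated space per tablespace in MB, read from the data dictionary.
# Comparing the figures from consecutive days gives the tablespace-level increase.
sqlplus -s "/ as sysdba" <<'EOF'
SET PAGESIZE 100
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS used_mb
FROM   dba_segments
GROUP  BY tablespace_name
ORDER  BY tablespace_name;
EOF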

Object increase information uses the table as the smallest unit of change, based on the net number of new records per table, which is the result of the inserts and deletes at the table level.

This article will focus on collecting database increase information at the object level.

An overview of the main methods for collecting object-level activity:

  • Flashback query
  • LogMiner
  • Table level auditing
  • Table level monitoring
  • Third party tool
  • Export utility

Methods for Monitoring Database Object Increase

The Oracle Flashback method uses a version-based read consistency mechanism. For every database change, a 'before' image is stored in the undo tablespace (rollback segments). Starting with version 9i, the user can control how long this undo information is kept in the database. To reproduce database activity, we need to examine and piece together the undo information about database changes. The Flashback method requires time and knowledge to sort and extract the database growth information, and it is supported only on 9i.
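
As a rough sketch of the idea, assuming a 9i database where the ARTIST user has been granted EXECUTE on DBMS_FLASHBACK and the undo retention covers the interval, the current row count of a table can be compared against its count as of one day ago:

#!/bin/ksh
# Sketch only: compare today's row count with the count as of one day ago using flashback.
# Assumes Oracle 9i, EXECUTE privilege on DBMS_FLASHBACK and sufficient undo retention;
# the password is a placeholder.
sqlplus -s artist/artist_password <<'EOF'
-- row count as the table looked one day ago
EXECUTE DBMS_FLASHBACK.ENABLE_AT_TIME(SYSDATE - 1);
SELECT COUNT(*) AS rows_yesterday FROM app_messages;
EXECUTE DBMS_FLASHBACK.DISABLE;
-- current row count; the difference is the daily increase
SELECT COUNT(*) AS rows_today FROM app_messages;
EOF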

The LogMiner method utilizes archived and online redo logs to get the necessary information. The redo log information is written in a condensed binary form, unsuitable for regular text editors, so it has to be translated into readable text. LogMiner uses a special dictionary file, produced in the location defined by the utl_file_dir database parameter.

Potential problems with this method are the impossibility of restarting the database to initialize the new database parameter, plus the rather complicated procedure needed to get object activity information in the desired format.
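
A rough outline of the LogMiner steps is shown below; the directory and log file names are placeholders, and utl_file_dir must already point to the dictionary location (which is exactly the parameter and restart problem mentioned above).

#!/bin/ksh
# Rough outline of the LogMiner approach; paths and archived log names are placeholders.
# utl_file_dir must already be set to the dictionary directory.
sqlplus -s "/ as sysdba" <<'EOF'
-- build the LogMiner dictionary file in the utl_file_dir location
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/admin/logmnr');
-- register one archived redo log and start the analysis
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/oracle/arch/arch_1_1234.arc', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => '/oracle/admin/logmnr/dictionary.ora');
-- summarise insert and delete activity per table for the ARTIST schema
SELECT seg_name, operation, COUNT(*) AS changes
FROM   v$logmnr_contents
WHERE  seg_owner = 'ARTIST'
AND    operation IN ('INSERT', 'DELETE')
GROUP  BY seg_name, operation;
EXECUTE DBMS_LOGMNR.END_LOGMNR;
EOF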

Table-level auditing is another way to obtain the object increase report. Database auditing is enabled by setting the database parameter audit_trail to DB, after which audit records are written to the database audit trail (the SYS.AUD$ table). Object-level auditing of insert, delete, and update activity can then be enabled per object.

Auditing will degrade database performance and consume database disk space for storing audit information. Even if performance is not a big issue, we might still be faced with a database that cannot be restarted, or with a storage problem for holding the audit records.
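
A minimal sketch of this approach, assuming audit_trail=DB is already in effect and using the article's APP_MESSAGES table as the audited object:

#!/bin/ksh
# Minimal sketch: object-level auditing of DML on one table plus a daily activity summary.
# Assumes audit_trail=DB is already set (changing it requires a restart).
sqlplus -s "/ as sysdba" <<'EOF'
-- switch on auditing of DML against the application table
AUDIT INSERT, UPDATE, DELETE ON artist.app_messages BY ACCESS;
-- later: summarise today's audited activity per object and action
SELECT obj_name, action_name, COUNT(*) AS actions
FROM   dba_audit_object
WHERE  owner = 'ARTIST'
AND    timestamp >= TRUNC(SYSDATE)
GROUP  BY obj_name, action_name;
EOF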

The table-level monitoring method utilizes Oracle's facility for monitoring table modifications. The command ALTER TABLE ... MONITORING starts table modification monitoring, and all table changes are logged in a data dictionary table. The only real problem with this method is the inaccuracy of the collected data in the case of a transaction rollback: changes already written to the modification table are not rolled back.
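
A short sketch, again using the article's ARTIST schema; note that the modification counters are flushed to the dictionary only periodically, so the figures may lag behind the current state.

#!/bin/ksh
# Sketch: switch on table monitoring and read the accumulated change counters.
# Counters are flushed to the dictionary periodically, so they may lag slightly.
sqlplus -s "/ as sysdba" <<'EOF'
-- start monitoring modifications on one application table
ALTER TABLE artist.app_messages MONITORING;
-- later: read the collected insert/update/delete counters
SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM   dba_tab_modifications
WHERE  table_owner = 'ARTIST';
EOF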

A third-party tool can also be one way to collect object increase information. There are many tools on the market capable of very good database and object-level reporting. Most of them collect statistics in internal repository tables or utilize database auditing logs. The collected information is later analysed and converted into HTML reports. Third-party tools are usually expensive and require creating a special database repository.

The export utility is very often used as a user schema backup method, as protection against logical corruption. If such an export already runs on the system, we can use it to generate table reports showing the increase in table size; if it does not, we can easily add it as a UNIX cron job. After the report is generated, the export dump file can be deleted and the export log file kept as the source for reporting. This method has no known impact on the database and requires no restart or any other kind of change to the database.
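
If the export job does not exist yet, a small driver script along the following lines can be scheduled from cron (user names, passwords and paths below are placeholders, not taken from the article):

#!/bin/ksh
# export_artist.ksh -- illustrative nightly schema export; credentials and paths are placeholders.
# Scheduled from cron, for example: 30 19 * * * /home/oracle/scripts/export_artist.ksh
DATE=$(date '+%Y%m%d_%H%M')
DIR=/backup_data/export

# full schema export of the ARTIST user; the log file is the reporting source
exp system/manager_password owner=ARTIST \
    file=$DIR/exp_artist_${DATE}.dmp \
    log=$DIR/exp_artist_${DATE}.log

# compress the dump file to save space; it can be removed once the report is generated
compress $DIR/exp_artist_${DATE}.dmp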

Object Increase Reporting Using Export Utility

As an example, let's assume we have an Oracle 8.1.7.3 database with one application user, ARTIST. We will generate a table increase report for this user's schema tables. A daily export has been scheduled on the system, and we are checking one of the daily export log files:

$ cat exp_artist_20030526_1930.log

Connected to: Oracle8i Enterprise Edition Release 8.1.7.3.0 - Production
With the Partitioning option
JServer Release 8.1.7.3.0 - Production
Export done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
 
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user ARTIST
. exporting object type definitions for user ARTIST
About to export ARTIST's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export ARTIST's tables via Conventional Path ...
. . exporting table               DATA_INPUT            16675 rows exported
. . exporting table               DATA_OUTPUT           16979 rows exported
. . exporting table               APP_MESSAGES        3272087 rows exported
. . exporting table               DELTA_MONITOR       1983707 rows exported
. . exporting table               HISTORY_USER         474029 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting snapshots
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

In this export log file, we can find the actual number of records exported for every table.

In the designated directory, we have export log files (.log) and compressed export data files (.dmp.Z):

$ pwd
/backup_data/export
$ ls -lrt 
-rw-r--r--   1 oracle   dba        16726 Feb 10 00:56 exp_artist_20030524_1930.log
-rw-r--r--   1 oracle   dba        32605 Feb 11 06:54 exp_artist_20030525_1930.log
-rw-r--r--   1 oracle   dba      14077308595 Feb 11 06:54 exp_artist_20030525_1930.dmp.Z

We have the export dump file from 25.05.2003 and log files from 24.05.2003 and 25.05.2003.

The previous dump file has already been deleted, but its log file is still there for comparison.

The UNIX script compare_artist_exports.ksh is used for comparing two export log files.
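
The original script is not reproduced in the article; a minimal ksh sketch that produces a similar comparison by parsing the 'rows exported' lines of two log files (the file handling details are assumptions) could look like this:

#!/bin/ksh
# compare_artist_exports.ksh -- minimal sketch, not the original script.
# Usage: compare_artist_exports.ksh <old_export_log> <new_export_log>
OLD_LOG=$1
NEW_LOG=$2

echo "Daily Database Grow, $(basename $NEW_LOG .log | sed 's/^exp_artist_//')"
echo
echo "Old logfile : $OLD_LOG"
echo "New logfile:  $NEW_LOG"
echo
printf "%-27s %-20s %-20s %s\n" "Table" "Old Records" "New Records" "Daily Increase"
echo "------------------------------------------------------------------------------------"

# pull "TABLE_NAME ROWCOUNT" pairs out of the ". . exporting table ... rows exported" lines
grep "rows exported" $OLD_LOG | awk '{print $5, $6}' > /tmp/old_rows.$$
grep "rows exported" $NEW_LOG | awk '{print $5, $6}' > /tmp/new_rows.$$

# for every table in the new log, look up the old count and print the difference
while read TABLE NEW_ROWS
do
    OLD_ROWS=$(awk -v t=$TABLE '$1 == t {print $2}' /tmp/old_rows.$$)
    [ -z "$OLD_ROWS" ] && OLD_ROWS=0
    DELTA=$((NEW_ROWS - OLD_ROWS))
    printf "%-27s Old: %-16s New: %-15s Delta: %s\n" $TABLE $OLD_ROWS $NEW_ROWS $DELTA
done < /tmp/new_rows.$$

rm -f /tmp/old_rows.$$ /tmp/new_rows.$$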

Resulting report generated by compare_artist_exports.ksh shell script:

Daily Database Grow, 20030525_1930

Old logfile : /backup_data/export/exp_artist_20030524_1930.log
New logfile:  /backup_data/export/exp_artist_20030525_1930.log

Table                       Old Records          New Records          Daily Increase  
------------------------------------------------------------------------------------
DATA_INPUT                  Old: 16675            New: 16685          Delta: 10             
DATA_OUTPUT                 Old: 16979            New: 16979             
APP_MESSAGES                Old: 3272087          New: 3282999        Delta: 10912
DELTA_MONITOR               Old: 5                New: 0              Delta: 5 
HISTORY_USER                Old: 474029           New: 475050         Delta: 1021  
----------------------------------------------------------------------------------------------

We can easily see the difference in the number of records for the user's tables. This method is very effective and adds no extra overhead. While it will not provide detailed information about the actual updates, deletes, and inserts, it does give summary information. Information from the export log file can be transformed into disk storage figures or used as a source for generating graphical charts.
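
For instance, the daily record increase can be multiplied by a table's average row length from the dictionary to get an approximate storage figure. The sketch below assumes the table statistics are reasonably current and reuses the APP_MESSAGES delta from the report above; the password is a placeholder.

#!/bin/ksh
# Sketch: turn a row-count delta into an approximate storage figure using avg_row_len.
# 10912 is the APP_MESSAGES daily increase from the report; statistics must be current.
sqlplus -s artist/artist_password <<'EOF'
SELECT table_name,
       avg_row_len,
       ROUND(10912 * avg_row_len / 1024 / 1024, 2) AS approx_daily_mb
FROM   user_tables
WHERE  table_name = 'APP_MESSAGES';
EOF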

Conclusion

DBAs can sometimes be stuck between user requests and the technology provided by the database and database tools. An appropriate solution does not always need to be a hi-tech one. Keep it simple!

» See All Articles by Columnist Marin Komadina
