
Monitoring Table Change in Oracle

Monday Dec 23rd 2019 by David Fitzjarrell

Learn of possible scenarios to monitor changes when replicating Oracle. Read on to discover the best course of action for moving forward!

Database replication is no longer restricted to Oracle-to-Oracle configurations; Oracle-to-cloud and Oracle-to-BigQuery are just two of the many options that can now be selected for replication configurations. In a good number of these configurations, GoldenGate is the tool of choice, given its versatility and reliability. Unfortunately, when replicating Oracle to another platform, actions such as table modifications can throw a monkey-wrench into the works. It is therefore desirable to track such changes in anticipation of handling GoldenGate extract abends gracefully and quickly. Let's look at the possible scenarios and determine the best course of action.

The first thought the DBA might have is Unified Auditing, as it provides a wealth of information for auditable actions. Alas, 'audit table' is not among the list of available privileges to audit:


SCOTT @ orcl > create audit policy alter_tab_pol
  2  privileges alter table;
privileges alter table
           *
ERROR at line 2:
ORA-46355: missing or invalid privilege audit option.


SCOTT @ orcl >

Interestingly, the 'ALTER ANY TABLE' privilege is auditable, but it doesn't audit what you might expect:


SCOTT @ orcl > create audit policy table_pol
  2  privileges create any table, alter any table, drop any table;

Audit policy created.

SCOTT @ orcl > audit policy table_pol;

Audit succeeded.

SCOTT @ orcl > 

Such a policy only audits the granting of such privileges to other users and might not always produce an audit record. Auditing, then, leaves the requirement unfulfilled, so another solution must be found. Fortunately, Oracle offers system-level triggers which can produce audit records for such actions. An example of how this might be done is shown below. First, a table is created to contain the audit records generated:


create table ddl_log (
operation   varchar2(30),
obj_owner   varchar2(35),
object_name varchar2(35),
sql_text    varchar2(200),
attempt_by  varchar2(35),
attempt_dt  timestamp);
 
create index ddl_log_idx 
on ddl_log(obj_owner, operation);

The table is indexed on obj_owner and operation to speed up report generation. Next, a trigger is created, as the user who owns the tables to be monitored, to log all CREATE, ALTER and DROP statements that are executed:


create or replace trigger ddl_trigger
before create or alter or drop
on schema
declare
 sql_text ora_name_list_t;
 i        pls_integer;
begin
  --
  -- ora_sql_txt populates sql_text with the statement
  -- in 64-byte pieces and returns the piece count
  --
  i := ora_sql_txt(sql_text);
  if i = 1 then
        insert into ddl_log
        select ora_sysevent, ora_dict_obj_owner,
        ora_dict_obj_name, sql_text(1), user, systimestamp
        from dual;
  elsif i = 2 then
        insert into ddl_log
        select ora_sysevent, ora_dict_obj_owner,
        ora_dict_obj_name, sql_text(1)||sql_text(2), user, systimestamp
        from dual;
  elsif i >= 3 then
        insert into ddl_log
        select ora_sysevent, ora_dict_obj_owner,
        ora_dict_obj_name, sql_text(1)||sql_text(2)||sql_text(3), user, systimestamp
        from dual;
  end if;

end ddl_trigger;
/

Since the number of 64-byte 'pieces' of the SQL text can be quite large, the trigger restricts the SQL_TEXT column to the first three 'pieces', making the maximum length of the stored string 192 characters. As expected, the complete text of larger statements will not be preserved, but this should capture any 'alter table' statement in its entirety. Note that this trigger captures not only ALTER TABLE statements but any CREATE/ALTER/DROP statement submitted to the database: alter user, alter trigger, alter package, alter function, alter tablespace, alter system, create ... and drop ... statements also get logged in the DDL_LOG table. Because of this, the table may grow rapidly and become quite large, so a plan for keeping a finite history should be created. For most systems, 90 days should be sufficient to track table changes in the database. Reports generated from the logged data can be kept for longer periods of time (for example, 12 months) before being removed.
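Before relying on the trigger, it can be verified with a quick test as the schema owner. This is a minimal sketch; the TRIGGER_TEST table below is purely hypothetical and exists only for this check:

```sql
-- Assumes the DDL_LOG table and DDL_TRIGGER trigger above are installed;
-- TRIGGER_TEST is a throwaway table used only to exercise the trigger.
create table trigger_test (id number);

alter table trigger_test add (test_col varchar2(10));

--
-- Both the CREATE and the ALTER should now appear in the audit table
--
select operation, object_name, sql_text, attempt_dt
from ddl_log
where object_name = 'TRIGGER_TEST'
order by attempt_dt;

drop table trigger_test purge;
```

The DROP is also logged, of course, which illustrates how quickly DDL_LOG can accumulate rows from routine activity.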

A sample script for managing the table data is provided below; it enforces a 90-day window of data. A log directory is created:


mkdir -p /u01/app/oracle/ddl_chg/purge_logs

A SQL script is written to purge the old records from DDL_LOG:


column sys_date new_value dt noprint
column name new_value db_nm noprint
select to_char(sysdate,'RRRRMMDD') sys_date from dual;
select name from v$database;

spool /u01/app/oracle/ddl_chg/purge_logs/ddl_log_purge_&db_nm._&dt..log
set echo on

--
-- Records slated for removal
--
select * from ddl_log where attempt_dt < sysdate - 90;

--
-- Delete selected records
--
delete from ddl_log where attempt_dt < sysdate - 90;

commit;

spool off
set echo off

This can't run directly from cron (or any similar scheduler) since the Oracle environment must first be set, so a wrapper script is needed:


#!/bin/ksh

#
# purge_ddl_log_90.sh
#
# Shell script to purge old audit records
# from the DDL_LOG table
#

#
# Find the selected database and set the environment
#
set -A database `ps -ef | grep [p]mon | grep '<name>' |  awk -F"_" '{print $3}'`

for i in ${database[@]}

#
# Set the environment for the database
#
do
        ORACLE_SID=$i
        export ORACLE_SID

        ORAENV_ASK=NO
        export ORAENV_ASK

        unset ORACLE_BASE
        export ORACLE_BASE

        PATH=$PATH:<ORACLE_HOME/bin location>

        . <ORACLE_HOME/bin>/oraenv -s

        LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:$ORACLE_HOME/precomp/public
        export LD_LIBRARY_PATH

        PATH=$ORACLE_HOME/bin:$PATH
        export PATH

#
# Start SQL*Plus and execute the script
#
        sqlplus /nolog <<EOF
connect / as sysdba
@/u01/app/oracle/ddl_chg/purge_ddl_log_90.sql
EOF

done

#
# Make the output files readable for all
#
cd /u01/app/oracle/ddl_chg/purge_logs

chmod 666 *.log

#
# Remove old purge logs
#

find . -name "purge*log" -mtime +365 -exec /bin/rm -rf {} \;

The shell script sets the proper environment and ORACLE_SID based on the output of the ps command. The script will need to be edited to provide the database name to search for and the ORACLE_HOME location. More than one database name can be specified using | as a separator:


'abd|def|ghi|jkl'

This provides a way to purge the DDL_LOG table in every database where this table/trigger combination has been installed. The database name is included in the log file name to keep the purge trails separate for each database. The length of time to keep the logfiles can be changed to meet the storage limits of the system being monitored.
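As a scheduling sketch, the wrapper could be run nightly from the oracle user's crontab with an entry like the one below; the run time and output location are assumptions to be adjusted for the environment:

```
#
# Purge DDL_LOG audit records older than 90 days, nightly at 23:30
#
30 23 * * * /u01/app/oracle/ddl_chg/purge_ddl_log_90.sh > /tmp/purge_ddl_log_90.out 2>&1
```

Redirecting the output to a file makes it easy to diagnose environment problems (a missing ORACLE_HOME, for example) should the purge silently stop running.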

Change reports can be generated from the data found in the DDL_LOG table:


set linesize 140
column sdate new_value sdt noprint
select to_char(sysdate, 'RRRRMMDDHH24') sdate from dual;

column modlen new_value mlen noprint
select 'a'||nvl(max(length(modification)),25) modlen From
(select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'modify ')) modification, attempt_dt mod_time
from ddl_log
where (instr(sql_text, 'alter table') > 0
or instr(sql_text, 'ALTER TABLE') > 0));
column objlen new_value olen noprint
select 'a'||nvl(max(length(owner||'.'||tabname)),60) objlen From
(select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'modify ')) modification, attempt_dt mod_time
from ddl_log
where (instr(sql_text, 'alter table') > 0
or instr(sql_text, 'ALTER TABLE') > 0));

column modification format &mlen
column mod_time format a29
column tab_name format &olen

select owner||'.'|| tabname tab_name, modification, mod_time
from
(select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'add ')) modification, attempt_dt mod_time
from ddl_log
where instr(lower(sql_text), 'alter table') > 0
union
select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'drop ')) modification, attempt_dt mod_time
from ddl_log
where instr(lower(sql_text), 'alter table') > 0
union
select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'modify ')) modification, attempt_dt mod_time
from ddl_log
where instr(lower(sql_text), 'alter table') > 0
union
select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'ADD ')) modification, attempt_dt mod_time
from ddl_log
where instr(lower(sql_text), 'alter table') > 0
union
select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'DROP ')) modification, attempt_dt mod_time
from ddl_log
where instr(lower(sql_text), 'alter table') > 0
union
select obj_owner owner, object_name tabname, substr(sql_text, instr(sql_text, 'MODIFY ')) modification, attempt_dt mod_time
from ddl_log
where instr(lower(sql_text), 'alter table') > 0) dl
where lower(dl.modification) not like '%table%'
and mod_time >= trunc(systimestamp)
order by 1, 3

spool /u01/app/oracle/ddl_chg/log/tab_chg_rpt_&sdt._&1..lst
/
spool off

The database name is passed to the script so it will be included in the report file name. The code reports only on table changes (hence the long string of UNION queries) and produces a report similar to the one shown below:


TAB_NAME         MODIFICATION                   MOD_TIME
---------------- ------------------------------ -----------------------------
SCOTT.DDL_LOG    modify sql_text varchar2(200)  23-NOV-19 01.23.49.859971 PM

The script also sets the column formatting based on the maximum length of the stored data to possibly reduce the line length. Timestamp data was used in order to provide both date and visible time values for the generated change records. These scripts have been tested but might require some modifications based on the operating system vendor's implementation of Linux/Unix.
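As a usage sketch, the report script (saved here under the assumed name tab_chg_rpt.sql; the actual file name is the DBA's choice) would be invoked with the database name as its first argument so that &1 resolves in the spool file name:

```
#
# Run the change report for database orcl (name is an example)
#
sqlplus -s "/ as sysdba" @/u01/app/oracle/ddl_chg/tab_chg_rpt.sql orcl
```

Like the purge script, this can be wrapped and scheduled so a daily change report is generated without DBA intervention.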

For those DBAs not running replicated systems this might not be of much use. But, for those replicating data from Oracle to other systems (such as BigQuery, Snowflake and the like), knowing when table changes have occurred can make it easier to deal with replication failures created by those changes. The faster the replication process gets back on track, the faster the systems that rely on that replicated data can return to normal operation.

# # #

See articles by David Fitzjarrell
