Oracle 10g Backup Guide: A Small County Government Approach

Monday Jan 14th 2008 by DatabaseJournal.com Staff

This document offers one approach to database backups for Oracle 10g databases on Windows 32- and 64-bit servers.

by Kevin E. Medlin

Abstract

Database backups are one of the most important parts of a database administrator's job. Backup strategies need to be reviewed on a regular basis, and backups themselves need to be tested frequently. This document offers one approach to database backups for Oracle 10g databases on Windows 32- and 64-bit servers. Recovery Manager (RMAN) is employed as the primary backup application. RMAN is fast, flexible, and can compress the sometimes-large backup files. A Recovery Catalog is also used. Data Pump exports are used as a secondary backup application. All steps are automated, and scripts are provided with explanations in the document.

We have Oracle Database 10g installed on Windows 2003 32- and 64-bit servers. We keep our databases up almost 24x7, but do perform server reboots in the early Sunday morning hours. We also occasionally take the servers down at scheduled times on some of those Sunday mornings for maintenance such as Windows patching and Oracle patching. We use Recovery Manager (RMAN) to perform online backups of our databases. RMAN is Oracle's suggested method for performing backup and recovery on databases (Chien, 2005, p. 3). RMAN is a great tool, but since we use Oracle Standard Edition, not all of its benefits are available (parallelism, block media recovery, and mean time to recover (MTTR), among others). Still, we are able to work RMAN into our backup strategy in a major way. With RMAN backups, we have been able to “refresh” our test databases upon request from developers or management. We also perform Data Pump exports as an additional safety measure. We use a combination of Windows batch scripts, SQL scripts, RMAN scripts, and scheduled tasks to automate these operations. Our main goal is to keep things as uniform as possible across the servers in an attempt to keep things simple. For the most part, we have been successful.

Our backup strategy is simple: take a weekly RMAN backup and back up archivelogs the rest of the week, with a Data Pump export as a secondary backup. Of course, there is much more to it than that. I will give the list of steps and then explain each one. In the explanations, there will be setup information, scripts, suggestions, etc. I recommend reading through this document entirely before beginning, updating, or altering any backup plans you currently have in place.

Our 9 steps for a great 10g nightly backup strategy are:

1.  Delete old log files and rename current logs.

2.  Delete all RMAN backup files.

3.  Perform a level 0 (zero) RMAN backup.

4.  Create clone files.

5.  Create archivelog backup, which includes Recovery Catalog housekeeping.

6.  Delete data pump export files.

7.  Perform data pump export.

8.  Check logs for errors.

9.  Page and/or email short error description.

1. Delete old log files and rename current logs

This is performed every day. It is good practice to create a log file for all scripts. In step 8, I check all the logs for any errors, so all current logs are renamed first. When they are renamed, it is easy to tell which errors refer to old jobs. These all need to be deleted eventually so as not to create a space issue.

Code Listing 1:

rem Note which log files contain RMAN output before rotating
qgrep -l rman D:\oracle\admin\common\backup\logs\* >> %LOGFILE%
rem Drop the oldest generation, then age each log up one generation
del /Q D:\oracle\admin\common\backup\logs\*.oldlog3 >> %LOGFILE%
ren D:\oracle\admin\common\backup\logs\*.oldlog2 *.oldlog3 >> %LOGFILE%
ren D:\oracle\admin\common\backup\logs\*.oldlog1 *.oldlog2 >> %LOGFILE%
ren D:\oracle\admin\common\backup\logs\*.log *.oldlog1 >> %LOGFILE%

Tip: Try to automate log file cleanup. It’s hard to remember everything!

2. Delete all RMAN backup files

This is only performed before a level 0 RMAN backup. Our RMAN backups are performed to the X: drive, X:\RMAN to be exact. We size this drive to hold our level 0 backups, the archivelogs, the archivelog backups, and the data pump exports. Clearing out the RMAN files on a weekly basis ensures that there will be enough space on the drive for the next week's backups.

Code Listing 2:

rem Running these commands will show the files that will be deleted in the next step.
FORFILES /p x:\rman /m *.bks   /d -0 /c "CMD /C echo @PATH\@FILE @FDATE.@FTIME" >> %logfile%
FORFILES /p x:\rman /m *cfile* /d -0 /c "CMD /C echo @PATH\@FILE @FDATE.@FTIME" >> %logfile%

rem These commands perform the actual deletion.
FORFILES /p x:\rman /d -0 /c "CMD /C del *.bks" >> %logfile%
FORFILES /p x:\rman /d -0 /c "CMD /C del *cfile*" >> %logfile%

Tip: Definitely automate deletion of large files on disk. You will surely run out of space at a bad time.

The X factor

The X: drive is a critical piece of this backup ballet. We regularly clone our production databases to test and development databases on other servers, or alternate nodes. RMAN likes to get its files from where it backed them up. The easiest way to do this is to back up to tape; that way, it makes no difference what server you are on when you perform the clone. When you tell RMAN your target database, it goes straight to the media management layer (MML) for the files it needs. Our problem with tape was unreliability with tapes or tape drives. Our solution was to back up to disk. The problem then became making the backup files available to every server under the same drive mapping. What we needed was to map a SAN drive to our production server and then have our test server map the same SAN drive at boot time. There was no way to have our SAN do this, but we could do it with a command at the command line from the server. We were able to solve this issue with a service from the Windows Resource Kit called AutoExNT. It basically allows us to run a batch file at boot time (Fenner, 2007). We put the mapping command in that batch file, and now production and test are mapped to the same X: drive.

The X-files factor

AutoExNT works awesomely for the 32-bit servers. The problem comes when you have a 64-bit server. Unfortunately, there are no Windows Resource Kits available for 64-bit Windows, so no AutoExNT. Luckily, we were able to persuade our development staff to create a “Launcher” service for us that works on 64-bit Windows. It is the same thing as AutoExNT. Whatever we put in the batch file is executed when the server boots.
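Either way, whatever is in the boot-time batch file gets run, and ours boils down to a single drive mapping. This is an illustrative config sketch; the SAN host and share names are placeholders, not our actual configuration:

```batch
rem autoexnt.bat - executed at boot by AutoExNT (or the Launcher service on 64-bit)
rem Map the shared SAN volume as X: so every server sees the RMAN files at the same path
net use X: \\sanhost\rmanshare /persistent:no
```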

3. Perform a level 0 (zero) RMAN backup

This step is usually performed once per week. We have some larger, more active databases that create huge amounts of archivelogs. In a recovery or cloning scenario, restoring those databases from a week-old level 0 would take too long. In those instances, we take more than one level 0 backup during the week. In the level 0 RMAN backup, we perform no Recovery Catalog maintenance. The script is called DBname_lvl_0_disk.rcv.

Code Listing 3:

#************* Configure controls for Recovery Manager *******************
#***************** in case they have been changed ************************
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk to 'X:\rman\CFILE_%F.bks';
configure channel 1 device type disk format 'X:\rman\BACKUPPIECE_%d_%U_%T_%s.bks' maxpiecesize 2048M;
configure retention policy to recovery window of 34 days;
#************* Perform weekly RMAN level 0 Database Backup ***************
backup as compressed backupset incremental level = 0 database;
#************* Confirm level 0 Backup is valid ***************************
restore database check logical validate;

The first command configures RMAN so the control file and spfile will be backed up automatically. The second command instructs RMAN to name the file in a particular format. In this case, when backing up to disk, call the file 'X:\rman\CFILE_%F.bks'. Line three says create a disk channel and call it “1”. Name the backup pieces 'X:\rman\BACKUPPIECE_%d_%U_%T_%s.bks', and any backup piece created should be no larger than 2 GB. The fourth line tells the Recovery Catalog that the backups for the target database should be considered good for 34 days. The fifth line actually performs the backup. It tells RMAN to create a compressed backup set, which really means one or more compressed backup pieces. It also says take a full backup of only the database. No archivelogs will be backed up. Since storage is a major issue, compressing backups has really helped out (Freeman, 2004, p. 90). We have found this to be one of the most useful RMAN features. The last line performs a check on the backup that just finished. It reads the backup files and validates them to make sure they are not corrupt or damaged in some way. I highly advise NOT skipping this step. Nothing could be worse than needing to restore a database and finding out too late that one or more of the files are no good!

Tip: You can use the files while they are being validated. Yes, it’s true! I have begun a clone or restore many times after the backup was completed but before the validation was done.

I call the RMAN level 0 backup script using a Windows batch file. The batch file is called DBname_lvl_0_disk.bat.

Code Listing 4:

set NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS
set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
set CURDIR=D:\oracle\admin\common\backup

cd %CURDIR%

rman target ID/pword@DBname catalog rcatID/rcatpword@rcatname log=logs\DBname_lvl_0_disk.log @DBname_lvl_0_disk.rcv

page_on_backup.vbs DBname_lvl_0_disk.log page_DBname_lvl_0.log DBname

The first two lines set operating system environment variables. We prefer the more detailed date mask of "05-DEC-2007 23:59:59" rather than "05-DEC-2007". The date format becomes more important during times of recovery. Setting the NLS_LANG variable removes any doubt about which character set the database is using (Bersinic & Watson, 2005, ch. 23:p. 8). The third and fourth lines are important for using scheduled tasks. Windows needs to be directed to where the RMAN script is, so set the directory and then move there. Next, RMAN is actually called. The target and catalog are both logged into. A log file is created in a separate directory inside the current directory called “logs” and the script in code listing 3 is called. If there are any errors, a Visual Basic script is called that pages support personnel. If there are no errors, an email of the log file is sent. There will be more details on paging in section 9.

4. Create clone files

This is a pivotal step to automating the “refresh” for test databases using RMAN backups. The clone files batch jobs create the actual “duplicate database” statements we use to clone our production databases to our test areas. This is a SQL script called create_TESTDB_clone_files.sql that runs on the production server.

Code Listing 5:

set echo off;
set feedback off;
set linesize 140;
SET PAGESIZE 0;
set trimspool on;
ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MON-DD HH24:MI:SS';
select checkpoint_change# from v$database;
alter system archive log current;
select sysdate from dual;
-- **************************************************************************
-- **************************************************************************
-- **************************************************************************
-- ******************************* TESTDBSERVER *****************************
-- ******************************* TESTDBSERVER *****************************
-- ******************************* TESTDBSERVER *****************************
-- **************************************************************************
-- **************************************************************************
-- **************************************************************************
--TESTDB1
spool \\TESTDBSERVER\d$\oracle\admin\common\clone\clone_to_TESTDB1.rcv
select 'duplicate target database to TESTDB1 until time ' ||''''|| sysdate ||''';' from dual;
spool off;
--TESTDB2
spool \\TESTDBSERVER\d$\oracle\admin\common\clone\clone_to_TESTDB2.rcv
select 'duplicate target database to TESTDB2 until time ' ||''''|| sysdate ||''';' from dual;
spool off;
--TESTDB3
spool \\TESTDBSERVER\d$\oracle\admin\common\clone\clone_to_TESTDB3.rcv
select 'duplicate target database to TESTDB3 until time ' ||''''|| sysdate ||''';' from dual;
spool off;
-- **************************************************************************
-- **************************************************************************
-- **************************************************************************
-- ********************************* THE END ********************************
-- ********************************* THE END ********************************
-- ********************************* THE END ********************************
-- **************************************************************************
-- **************************************************************************
-- **************************************************************************
alter system archive log current;
select sysdate from dual;
select checkpoint_change# from v$database;
select sysdate from dual;
exit;

This script runs on the production server and spools the output to the test server. The first thing you notice is the NLS_DATE_FORMAT setting. This is being set the same way it was set in the level 0 backup. Next, you see that we have selected the system change number, or SCN. Databases can also be cloned and/or recovered by using the SCN (Greenwald, Stackowiak & Stern, 2004, p. 151). We used to duplicate using the SCN but no longer do. We didn't remove this step because we like to see the SCN in the log file; in case of a production recovery scenario, the SCN is available in one additional location. In the next statement, we archive the current redo log. We have been performing RMAN duplications since Oracle 8i and always had issues with the logs; this was the only surefire way we could make it work every time. Next, we select the sysdate; we like to see it under the SCN. Dropping down to the first spool statement, you see that an RMAN script is being written to TESTDBSERVER called clone_to_TESTDB1.rcv. There will be only one line in the script, and when written, it will look like this:

Code Listing 6:

duplicate target database to TESTDB1 until time '2007-DEC-05 19:55:00';

You can write a separate clone script for each test database on your test database server. As you can see, this is what we have done. Some of our production database servers host more than one production database. For those, we run one of these scripts against each production database, each creating a cloning script for its test databases.
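Generating those per-database clone statements is mechanical. As an illustrative sketch (a hypothetical Python helper, not part of our scripts), the statement shown in Listing 6 can be built like this, using the same YYYY-MON-DD HH24:MI:SS date mask the SQL script sets:

```python
from datetime import datetime

def clone_statement(test_db, point_in_time):
    """Build a 'duplicate target database' command for one test database,
    matching the NLS_DATE_FORMAT mask used in the spooling SQL script."""
    ts = point_in_time.strftime("%Y-%b-%d %H:%M:%S").upper()
    return "duplicate target database to %s until time '%s';" % (test_db, ts)

# Example: the statement for TESTDB1 as of Dec 5, 2007 at 19:55:00
print(clone_statement("TESTDB1", datetime(2007, 12, 5, 19, 55)))
```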

Code Listing 7:

set NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS
set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
set CURDIR=D:\oracle\admin\common\batch

cd %CURDIR%

sqlplus system/pword@DBNAME @create_TESTDB_clone_files.sql > logs\create_TESTDB_clone_files.log

The batch file for this script is simple. The first two lines set your environment. The third and fourth lines are important for using scheduled tasks. Windows needs to be directed to where the SQL script is, so set the directory and then move there. Next, SQL*Plus is called and runs the script to create the clone files. A log file is created in a separate directory inside the current directory called “logs”.


5. Create archivelog backup which includes Recovery Catalog housekeeping

The archivelog backup is taken every day. We have already mentioned we normally take a level 0 backup once per week. Since this is a daily occurrence, we perform our RMAN Recovery Catalog maintenance in this step. The script is called DBNAME_arc_only_disk.rcv.

Code Listing 8:


#************* Configure controls for Recovery Manager ************
#************* in case they have been changed *********************
configure controlfile autobackup on;
configure controlfile autobackup format for device type disk 
	to 'X:\rman\CFILE_%F.bks';
configure retention policy to recovery window of 34 days;

#************* Perform nightly RMAN Archivelog Backup *************
backup archivelog all format 'X:\rman\ARC_%d_%U_%T_%s.bks';

#************* Maintenance Step to remove old Archivelogs *********
delete noprompt archivelog until time 'SYSDATE - 3';

#************* Maintenance Steps to clean Recovery Catalog ********
report obsolete device type disk;
crosscheck backup completed before 'sysdate-34';
delete noprompt obsolete recovery window of 34 days device type disk;
delete noprompt expired backup device type disk;
#************* Show all controls configured for this **************
#************* Database in RMAN ***********************************
show all;

#************* List all RMAN disk backups *************************
list backup of database device type disk;

The first two commands configure RMAN for the control file and spfile auto backup. The first command turns it on so that every time a backup is run for a target this has been set for, the control file and spfile are backed up. The second command instructs RMAN to name the file in a particular format. In this case, when backing up to disk, call the file 'X:\rman\CFILE_%F.bks'. The third line tells the Recovery Catalog that the backups for the target database are good for 34 days. The fourth line actually performs the backup. It tells RMAN to back up all the archivelogs on disk in the specified format. The next step removes all archivelogs older than three days; we like to keep three days of archivelogs on disk. Now we start on the Recovery Catalog maintenance. Catalog maintenance is very important. If these files were deleted and the maintenance steps not performed, then the Recovery Catalog would contain information about backups that were no longer significant (Alapati, 2005, p. 661). The next step reports backups made to disk that fall outside our retention policy of 34 days; any older backups are considered obsolete and can be deleted. The crosscheck command will notify you whether or not any of your files are missing; if they are, they will be marked as expired. The next two delete commands remove the obsolete and expired information from the catalog. Remember, we delete all of our RMAN backup files from disk every 7 days. Our retention policy is set to 34 days because that is what our on-site tape retention policy is. If needed, we could restore RMAN files on a server up to 34 days old. Could we recover the files to a database? Yes, we could! How? Because we have a 34-day retention policy and all our RMAN information about those backups is still in the Recovery Catalog! Next, the show all command gives the configured parameters we have in RMAN (Hart & Freeman, 2007, p. 89). The list command shows all the disk backups that are still relevant in the Recovery Catalog.

Tip: Obviously, the Recovery Catalog is very important and needs to be backed up (Looney, 2004, p. 918). It is also the easiest to recover. If you lose your Recovery Catalog and you’re in a pinch, you can import the Recovery Catalog schema into any database and voila! You’ve got a new Recovery Catalog (Exporting and Importing, 2005).
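The export and import the tip refers to can be done with the original exp/imp utilities. This is a hedged sketch, reusing the catalog credentials from the scripts above and a hypothetical connect string, newrcat, for the replacement database; see the Oracle documentation cited above for the full procedure:

```batch
rem Export the Recovery Catalog schema from the current catalog database
exp rcatID/rcatpword@rcatname owner=rcatID file=rcat.dmp log=rcat_exp.log

rem Import the schema into any other database to stand up a new Recovery Catalog
imp rcatID/rcatpword@newrcat fromuser=rcatID touser=rcatID file=rcat.dmp log=rcat_imp.log
```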

The batch file for this script is simple. The batch file is called DBNAME_arc_only_disk.bat.

Code Listing 9:

set NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS
set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
set CURDIR=D:\oracle\admin\common\backup

cd %CURDIR%

rman target ID/pword@DBname catalog rcatID/rcatpword@rcatname log=logs\DBNAME_arc_only_disk.log @DBNAME_arc_only_disk.rcv

page_on_backup.vbs DBNAME_arc_only_disk.log page_DBNAME_arc_only.log DBNAME

The first two lines set variables for your environment. The third and fourth lines are important for using scheduled tasks. Windows needs to be directed to where the RMAN script is, so set the directory and then move there. Next, RMAN is called. The target and the catalog are logged into. A log file is created in a separate directory inside the current directory called “logs” and the script in code listing 8 is called. If there are any errors, a Visual Basic script is called that pages support personnel. If there are no errors, an email of the log file is sent. There will be more details on paging in section 9.

6. Delete data pump export files

Data Pump needs new file names for its dump files each time it runs. Unlike the original export utility, it will not overwrite old dump files. So, prior to any nightly Data Pump scheduled task, the old dump files need to be removed. This batch file does just that.

Code Listing 10:

set CURDIR=D:\oracle\admin\common\batch
cd %CURDIR%
set logfile=logs\delete_Xdrive_expdp_files.log
echo 'logfile = ' %logfile% >  %logfile%
echo . >> %logfile%
echo . >> %logfile%
echo '*********************************************************' >> %logfile%
echo '* The following files will be deleted.                  *' >> %logfile%
echo '*********************************************************' >> %logfile%
echo . >> %logfile%
echo . >> %logfile%
FORFILES /p X:\data_pump\DMPs /m *.dmp   /d -0 /c "CMD /C echo @PATH\@FILE   @FDATE.@FTIME" >> %logfile%
echo . >> %logfile%
echo . >> %logfile%
echo '*********************************************************' >> %logfile%
echo '* Starting deletes now ...                              *' >> %logfile%
echo '*********************************************************' >> %logfile%
echo . >> %logfile%
echo . >> %logfile%
FORFILES /p X:\data_pump\DMPs /d -0 /c "CMD /C del  *.dmp" >> %logfile%
echo . >> %logfile%
echo . >> %logfile%

As we've seen in the other batch scripts here, we first set the directory and then move there. We also set the log file as a variable since we will be using it frequently. In fact, the first entry into the log file is the log file name. The echoes with dots are just for better readability in the log. There are really only two significant commands in this script, and both of them are FORFILES. The first one simply lists the files that will be deleted; the second one actually performs the deletion.

7. Perform data pump export

As an additional safety measure, we also take nightly Data Pump exports. They have a side benefit: when we need a table or two restored, it is far easier to get them from an export than from RMAN.

We use par files to hold our Data Pump commands, just like regular exports (Kumar, Kanagaraj & Stroupe, 2005). We have some directory objects set in the database; you will see them referenced in the par file. These are the SQL commands used to create them:

Code Listing 11:

create directory x_dp_dumps as 'X:\data_pump\DMPs';
create directory x_dp_logs  as 'X:\data_pump\logs';

These tell Data Pump where to write the dump files and log files. Here are the par file contents:

Code Listing 12:

content   = all
dumpfile  = x_dp_dumps:DBNAME_FULL_%U.dmp
estimate  = statistics
full      = y
job_name  = DBNAME_FULL
logfile   = x_dp_logs:DBNAME_FULL.log
filesize  = 2G

Content = all means we want to export everything, with no exclusions. The dumpfile parameter asks for a file location and name. The location is given as a directory object, and the file name uses a substitution variable, %U. The %U will be replaced by a two-digit integer starting at 01; one file could be created or many, depending on the database size. Estimate gives you a good idea of what size your dump file will be. Blocks is the default, but we use statistics since ours are current. Full specifies whether or not you want a full database mode export. Job_name is a preference, in case you like to name your own. Logfile is set up similarly to dumpfile: it asks for a file location and name, with the location given as a directory object. Filesize is a preference; we like to keep our file sizes to 2 GB or less. When copying or compressing, it is far easier and faster to move or compress 10 files at the same time than 1 big file.

We call data pump as a scheduled task, but we set it up a little differently. We have an individual par file for each database and one common batch file to execute them. Here is the command used in Scheduled Tasks:

Code Listing 13:

D:\oracle\admin\common\expdp\expdp_DATABASE.bat DBNAME

Here is the actual batch file used to call the data pump par files.

Code Listing 14:

set DATABASE=%1
set ORACLE_HOME=D:\oracle\product\10.2.0\db_1

%ORACLE_HOME%\bin\expdp ID/pword@%DATABASE% parfile=D:\oracle\admin\common\expdp\expdp_%DATABASE%.par

The only thing being passed to the batch file is the database name. It becomes %DATABASE%. Performing data pump exports in this manner has worked out pretty well for us.

8. Check logs for errors

Every night after all the batch jobs on a server have completed, we run an error check on that server. It is a simple batch file that performs a qgrep on key words in key logs and formats the information in an easily readable fashion. As previously stated, you can easily tell the “old” logs from the current logs by how the files are named. Here is the batch file, called error_check.bat:

Code Listing 15:

rem error_check.bat
set SERVER=DBSERVER
set LOGFILE=error_check.log
echo ************************************************************ > %LOGFILE%
echo *********************************************************** >> %LOGFILE%
echo *********************************************************** >> %LOGFILE%
echo %SERVER% >> %LOGFILE%
echo Daily Error Report >> %LOGFILE%
date /T >> %LOGFILE%
echo *********************************************************** >> %LOGFILE%
echo *********************************************************** >> %LOGFILE%
echo *********************************************************** >> %LOGFILE%
echo .  >> %LOGFILE%
echo .  >> %LOGFILE%
echo *********************************************************** >> %LOGFILE%
echo The following files have been found with errors. >> %LOGFILE%
echo *********************************************************** >> %LOGFILE%
echo .  >> %LOGFILE%
echo .  >> %LOGFILE%
echo Backup files  >> %LOGFILE%
echo .  >> %LOGFILE%
qgrep -l RMAN- D:\oracle\admin\common\backup\logs\* >> %LOGFILE%
echo .  >> %LOGFILE%
echo .  >> %LOGFILE%
echo Batch files  >> %LOGFILE%
echo .  >> %LOGFILE%
qgrep -l error D:\oracle\admin\common\batch\logs\* >> %LOGFILE%
echo .  >> %LOGFILE%
echo .  >> %LOGFILE%
echo Clone files  >> %LOGFILE%
echo .  >> %LOGFILE%
qgrep -l RMAN- D:\oracle\admin\common\clone\logs\* >> %LOGFILE%
echo .  >> %LOGFILE%
echo .  >> %LOGFILE%
echo Alert Logs  >> %LOGFILE%
echo .  >> %LOGFILE%
@rem
qgrep -l ORA- D:\oracle\product\10.2.0\admin\DBNAME1\udump\* >> %LOGFILE%
qgrep -l ORA- D:\oracle\product\10.2.0\admin\DBNAME2\udump\* >> %LOGFILE%
@rem
echo .  >> %LOGFILE%
echo .  >> %LOGFILE%

The backup file check is for RMAN errors. The batch file check is for errors with file deletions and creations. The clone file error check is for failed database duplications. The alert log check is a little misleading; it actually checks the udump directories for trace files with errors. Shortly after the log is created, we send it to ourselves using a free email client called Bmail from Beyond Logic. This is what our email batch file, email_errors.bat, looks like:

Code Listing 16:

bmail -s 10.10.10.10 -t support2@thecountyoverhere.gov -f Oracle@thecountyoverhere.gov -h -a "DBSERVER Daily Error Report" -m error_check.log
bmail -s 10.10.10.10 -t support1@thecountyoverthere.gov -f Oracle@thecountyoverhere.gov -h -a "DBSERVER Daily Error Report" -m error_check.log

9. Page and/or email short error description

Some jobs need immediate notification upon failure. For these, we use a Visual Basic script that determines whether we have an error and, if so, immediately sends us a page. This script runs at every execution and sends an email with the log output; this is something we want on these jobs whether the job completes successfully or not. But if it fails, we also want a page indicating the failure. The script is called with three arguments, like this:

Code Listing 17:

page_on_backup.vbs DBNAME_arc_only_disk.log page_DBNAME_arc_only.log DBNAME

The arguments are log name, script log name, and database name. Here is a copy of page_on_backup.vbs. This is the script that runs in our RMAN level 0 backups and archive log backups.

Code Listing 18:

'This script emails the log file for a backup and searches it for the phrase "ORA-". If found, pages the recipients
'Additional pager Numbers
'   whodat - 7045551212@message.alltel.com
'   whodis - 7045551313@messaging.nextel.com

Dim ArgObj, var1, var2, var3
Set ArgObj = WScript.Arguments
var1 = ArgObj(0)
var2 = ArgObj(1)
var3 = ArgObj(2)


'email log files

Dim WshShell1 : Set WshShell1 = CreateObject("WScript.Shell")
WshShell1.Run("D:\oracle\admin\common\error\bmail -s 10.10.10.10 -t support1@thecountyoverthere.gov -f Oracle@thecountyoverhere.gov -h -a " & var1 & "  attached -m d:\oracle\admin\common\backup\logs\" & var1 & "")
WshShell1.Run("D:\oracle\admin\common\error\bmail -s 10.10.10.10 -t support2@thecountyoverhere.gov -f Oracle@thecountyoverhere.gov -h -a " & var1 & "  attached -m d:\oracle\admin\common\backup\logs\" & var1 & "")

'msgbox "var1 = " & var1 & "  var2 = " & var2 & ""

Const ForReading = 1, ForWriting = 2

Set WshNetwork = WScript.CreateObject("WScript.Network")
Dim lgmain : Set lgmain = CreateObject("Scripting.FileSystemObject")
Dim lgmain2 : Set lgmain2 = lgmain.OpenTextFile("D:\Oracle\Admin\common\backup\logs\" & var2 & "", ForWriting, True)

lgmain2.WriteLine "Processing began: " & Now
lgmain2.WriteLine ""
lgmain2.WriteLine ""

Set objRegEx = CreateObject("VBScript.RegExp")
objRegEx.Global = True
objRegEx.Pattern = "ORA-"

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.OpenTextFile("D:\oracle\admin\common\backup\logs\" & var1 & "", ForReading)
strSearchString = objFile.ReadAll
objFile.Close

Set colMatches = objRegEx.Execute(strSearchString)
If colMatches.Count > 0 Then
  Dim WshShell2 : Set WshShell2 = CreateObject("WScript.Shell")
  WshShell2.Run("D:\oracle\admin\common\error\bmail -s 10.10.10.10 -t 7045551414@my2way.com -f " & var3 & "@thecountyoverhere.gov -h -a " & var3 & "_BACKUP_ERRORS_FOUND")
  WshShell2.Run("D:\oracle\admin\common\error\bmail -s 10.10.10.10 -t support1@thecountyoverthere.gov -f " & var3 & "@thecountyoverhere.gov -h -a " & var3 & "_BACKUP_ERRORS_FOUND")
  WshShell2.Run("D:\oracle\admin\common\error\bmail -s 10.10.10.10 -t support2@thecountyoverhere.gov -f " & var3 & "@thecountyoverhere.gov -h -a " & var3 & "_BACKUP_ERRORS_FOUND")
  lgmain2.WriteLine "page completed"
End If
If colMatches.Count = 0 Then
          lgmain2.WriteLine "no problems found, no page required"
End If

The first thing that happens is that the log file is emailed. Next, the log is searched for error codes. If an error is found, a page is also sent. If not, the script completes without paging.
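The paging decision in page_on_backup.vbs reduces to a simple scan of the log text. This illustrative Python version (a sketch, not part of our actual jobs; the patterns mirror the ORA- and RMAN- strings grepped above) shows the logic:

```python
import re

# Error prefixes the backup logs are scanned for (mirrors the qgrep/VBScript checks)
ERROR_PATTERNS = [r"ORA-", r"RMAN-"]

def needs_page(log_text):
    """Return True when the log text contains any backup error code."""
    return any(re.search(p, log_text) for p in ERROR_PATTERNS)
```

The VBScript always emails the full log regardless; only the page is conditional on a match.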

Conclusion

A successful backup plan is a major portion of any database administrator's overall database strategy. Backups must be carefully planned and checked often. Automation is a good thing and can be very useful, but it must be thoroughly defined and rigorously tested. It can be done if you think about your environment logically. Ask questions such as, “What must be done first? What must be done next?” and so on. When you reach a roadblock, think about other ways you can perform the same task. This becomes easier if you think out your environment and what you would like to accomplish ahead of time; that can keep you from having to backtrack. Sometimes, changing the order of tasks may accomplish your goal as well.

References

Alapati, Sam R. (2005). Expert Oracle Database 10g Administration. New York. Springer-Verlag New York, Inc.

Bersinic, Damir & Watson, John (2005). Oracle Database 10g OCP Certification All-In-One Exam Guide [Electronic Version]. California. The McGraw-Hill Companies, Inc.

Bryla, Bob & Thomas, Biju (2005). OCP: Oracle 10g New Features for Administrators Study Guide. California. Sybex, Inc.

Chien, Timothy (2005). Best Practices for Oracle Database 10g Backup and Recovery. Retrieved 11-15-2007 from http://www.oracle.com/technology/deploy/availability/pdf/S942_Chien.doc.pdf.

(2005). Exporting and Importing the Recovery Catalog. Retrieved on 11-15-2007 from http://download-west.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmcatdb007.htm.

Fenner, Tim (2007-08-22). Demystifying the AutoExNT tool in the Windows Server 2003 Resource Kit. Retrieved 11-15-2007 from http://searchwincomputing.techtarget.com/tip/0,289483,sid68_gci1269060,00.html.

Freeman, Robert G. (2004). Oracle Database 10g New Features. California. The McGraw-Hill Companies, Inc.

Greenwald, Rick, Stackowiak, Robert & Stern, Jonathan. (2004). Oracle Essentials: Oracle Database 10g. California. O'Reilly Media, Inc.

Hart, Matthew & Freeman, Robert G. (2007). Oracle Database 10g RMAN Backup & Recovery. California. The McGraw-Hill Companies, Inc.

Kumar, Arun R., Kanagaraj, John & Stroupe, Richard (2005). Oracle Database 10g INSIDER SOLUTIONS [Electronic Version]. Indiana. Sams Publishing.

Looney, Kevin (2004). Oracle Database 10g: The Complete Reference. California. The McGraw-Hill Companies, Inc.

Kevin Medlin

Kevin Medlin has been administering, supporting, and developing in a variety of industries including energy, retail, insurance and government since 1997. He is currently a DBA supporting Oracle and SQL Server, and is Oracle certified in versions 8 through 10g. He received his graduate certificate in Storage Area Networks from Regis University and he will be completing his MS in Technology Systems from East Carolina University in 2008. When he's not trying to make the world a better place through IT, he enjoys spending time with his family, traveling, hanging out by the pool, riding horses, hiking, and camping.


