Moving to Oracle RAC One Node

Wednesday Oct 6th 2010 by Vincent Chan

To remain competitive in a rapidly changing global economy, businesses are increasingly demanding higher levels of service availability and IT cost reductions through server and storage consolidations. Oracle RAC One Node is well suited to meet such challenges. This article covers how to implement Oracle RAC One Node on Oracle Enterprise Linux 5.

Oracle RAC One Node, a new option in Oracle Database 11g Release 2, is a single instance of Oracle RAC running on one node of a cluster. RAC One Node offers a form of virtualization in which multiple databases can be consolidated on the same physical server, sharing a common pool of storage and server resources and thus reducing the physical IT footprint. It enhances protection from hardware and software failures and allows server maintenance and workload management to be performed by transparently relocating database instances to an available node. Oracle RAC One Node does not offer the same capabilities as Oracle RAC; however, when the need arises for the scalability and high availability that Oracle RAC provides, a RAC One Node database can easily be upgraded to a multi-node RAC database (an Oracle RAC license is required).

In this guide, we will demonstrate migrating applications that reside on an existing single database instance to Oracle RAC One Node and later converting the RAC One Node database to a full RAC database. We will also briefly explore some of the new cluster services in Oracle 11g Release 2, such as SCAN and GNS.

Downloads for this guide:

  • Oracle Database 11g Release 2 (11.2.0.1.0) for Linux x86-64.
  • Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.1.0) for Linux x86-64.

Note that, as of this writing, Oracle RAC One Node is only available on the Linux platform.

Overview

Our RAC One Node setup comprises two Intel x86 servers running Oracle Enterprise Linux 5. The operating system has been installed and patched on both nodes, and each node has access to shared storage and connectivity to the public and private networks.

This guide is structured into the following steps:

1.  Preliminary Installation

2.  Install Oracle Database 11g Release 2 Grid Infrastructure software

3.  Install Oracle Database 11g Release 2 software

4.  Create ASM disk group for Fast Recovery Area

5.  Create Oracle Database 11g (appsdb)

6.  Initialize the Database to Oracle RAC One Node

7.  Migrate database applications to Oracle RAC One Node

8.  Oracle Database Migration with Omotion

9.  Create a second Oracle Database (racdb) on the spare node

10.  Initialize the second Database to Oracle RAC One Node

11.  Upgrade the second Database to Oracle RAC

An overview of our single database instance environment:

Host Name:      plaice
OS Kernel:      Oracle Enterprise Linux 5 (2.6.18-164.0.0.0.1.el5)
Memory:         4GB
Processor:      1 x Intel Xeon Dual Core, 2.0GHz
Service Name:   finsdb
Instance Name:  finsdb
Database Name:  finsdb

Software owner account and location:

OS User   Oracle Base       Oracle Home
oracle    /u01/app/oracle   /u01/app/oracle/product/11.2.0/dbhome_1

File storage:

File System   Location
ext3          /u01/oradata/appsdb

IP Addresses:

Name     IP Address
plaice   192.168.1.180

An overview of the Oracle RAC One Node environment:

Host Name:      gurnard1
OS Kernel:      Oracle Enterprise Linux 5 (2.6.18-164.0.0.0.1.el5)
Memory:         4GB
Processor:      1 x Intel Xeon Quad Core, 2.0GHz
Service Name:   appsdb, finsdb
Instance Name:  appsdb_1
Database Name:  appsdb

Host Name:      gurnard2
OS Kernel:      Oracle Enterprise Linux 5 (2.6.18-164.0.0.0.1.el5)
Memory:         4GB
Processor:      1 x Intel Xeon Quad Core, 2.0GHz
Service Name:   racdb
Instance Name:  racdb_1
Database Name:  racdb

Software owner accounts and locations:

OS User   Oracle Base       Oracle Home
grid      /u01/app/grid     /u01/app/11.2.0/grid
oracle    /u01/app/oracle   /u01/app/oracle/product/11.2.0/dbhome_1

File storage:

File                  File System   Location      Redundancy
OCR and Voting Disk   ASM           +DATA         Normal
Database Files        ASM           +DATA         Normal
Online Redo Logs      ASM           +DATA, +FRA   Normal/External
Fast Recovery Area    ASM           +FRA          External

IP Addresses:

Name                            IP Address
gurnard1                        192.168.1.181
gurnard2                        192.168.1.182
gurnard1-priv                   10.10.10.181
gurnard2-priv                   10.10.10.182
Grid Naming Service (GNS) VIP   192.168.1.208

And an overview of the Oracle RAC environment:

Host Name:      gurnard1
OS Kernel:      Oracle Enterprise Linux 5 (2.6.18-164.0.0.0.1.el5)
Memory:         4GB
Processor:      1 x Intel Xeon Quad Core, 2.0GHz
Service Name:   racdb
Instance Name:  racdb2
Database Name:  racdb

Host Name:      gurnard2
OS Kernel:      Oracle Enterprise Linux 5 (2.6.18-164.0.0.0.1.el5)
Memory:         4GB
Processor:      1 x Intel Xeon Quad Core, 2.0GHz
Service Name:   racdb
Instance Name:  racdb1
Database Name:  racdb

Software owner accounts and locations:

OS User   Oracle Base       Oracle Home
grid      /u01/app/grid     /u01/app/11.2.0/grid
oracle    /u01/app/oracle   /u01/app/oracle/product/11.2.0/dbhome_1

File storage:

File                  File System   Location      Redundancy
OCR and Voting Disk   ASM           +DATA         Normal
Database Files        ASM           +DATA         Normal
Online Redo Logs      ASM           +DATA, +FRA   Normal/External
Fast Recovery Area    ASM           +FRA          External

IP Addresses:

Description                     Name                          IP Address
Public network address          gurnard1.vcconsult.com        192.168.1.181
Public network address          gurnard2.vcconsult.com        192.168.1.182
Private network address         gurnard1-priv.vcconsult.com   10.10.10.181
Private network address         gurnard2-priv.vcconsult.com   10.10.10.182
Grid Naming Service (GNS) VIP   gurnard.vcconsult.com         192.168.1.208

You’ll install the Oracle Grid Infrastructure and Oracle Database software on each node for high availability and redundancy.

1. Preliminary Installation

This section lists the setup necessary before starting the Grid Infrastructure software installation.

Software and Hardware requirements

Review the software and hardware requirements before you begin. As in previous versions, the installer performs prerequisite checks before the software installation. New in 11g Release 2, the installer can also generate a Fixup script to correct kernel parameter settings.
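
For reference, the kernel settings that the Fixup script typically adjusts live in /etc/sysctl.conf. A sketch of the usual 11g Release 2 minimums; verify these against the installation guide and the generated runfixup.sh for your exact release:

# typical 11gR2 minimum values -- the fixup script derives the
# shared-memory settings (kernel.shmmax, kernel.shmall) from installed RAM
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

After editing the file, run sysctl -p as root on each node to apply the values.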

SCAN (Single Client Access Name)

SCAN is new in Oracle 11g Release 2. It simplifies client management by providing a single point of access for clients to connect to any database in the cluster. When nodes are added to or removed from the cluster, clients that use SCAN do not have to make any changes to their connection settings. This is particularly beneficial for RAC One Node, where databases can be configured to relocate to any available node.

SCAN is resolvable by either of the following methods:

1.  Domain Name Service (DNS): At least one IP address must be defined in DNS for the SCAN. The recommended approach is to define the SCAN as a round-robin DNS entry resolving to a maximum of three addresses for high availability and scalability.

2.  Grid Naming Service (GNS): Three IP addresses are automatically acquired from the DHCP service, and GNS provides name resolution for the SCAN.

SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.

Refer to Oracle's SCAN whitepaper for additional information on SCAN.

In this installment, we will use GNS to resolve the SCAN addresses.
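
Because clients only ever reference the SCAN, a client-side tnsnames.ora entry stays the same no matter which node the database is currently running on. A minimal sketch, using the SCAN name and the appsdb service configured later in this guide:

APPSDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gurnard-cluster-scan.gurnard.vcconsult.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = appsdb))
  )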

Configure GNS (Grid Naming Service) Virtual IP Address

The Grid Naming Service (GNS) is part of the Grid Plug and Play (GPnP) feature, which allows Oracle Clusterware to dynamically allocate virtual IP addresses for each node through a DHCP service running on the public network. All name resolution requests for the cluster within a subdomain delegated by the DNS are handed off to GNS, which uses the multicast Domain Name Service (mDNS) included with Oracle Clusterware. Using GNS eliminates much of the burden of managing IP addresses and name resolution, and it is especially advantageous in a dynamic cluster environment where nodes are often added or removed.

Prior to the Grid Infrastructure software installation, we have configured a static IP address for the GNS virtual IP and defined gurnard.vcconsult.com as the subdomain to delegate in DNS.
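
For illustration, the delegation in the corporate DNS might look something like the following BIND zone fragment; the record name gurnard-gns is hypothetical, while the subdomain and the GNS VIP address are the ones used in this guide:

; in the vcconsult.com zone file
gurnard-gns.vcconsult.com.   IN  A   192.168.1.208
gurnard.vcconsult.com.       IN  NS  gurnard-gns.vcconsult.com.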

Refer to the Oracle Grid Infrastructure Installation Guide for additional information about GNS.

Configure Oracle ASM (Automatic Storage Management) shared storage

In our environment, we have created four ASM disks: three dedicated to the OCR, voting disk, datafiles, and online redo logs, and one for the Fast Recovery Area.
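
For reference, the volumes were labeled with ASMLib beforehand. A sketch of the commands involved, using hypothetical device names (createdisk is run as root on one node only, followed by scandisks on the remaining node):

[root@gurnard1 ~]# oracleasm createdisk VOL1 /dev/sdb1
[root@gurnard1 ~]# oracleasm createdisk VOL2 /dev/sdc1
[root@gurnard1 ~]# oracleasm createdisk VOL3 /dev/sdd1
[root@gurnard1 ~]# oracleasm createdisk VOL4 /dev/sde1
[root@gurnard2 ~]# oracleasm scandisks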

[root@gurnard1 ~]# oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
 
[root@gurnard2 ~]# oracleasm listdisks
VOL1
VOL2
VOL3
VOL4

Oracle ASM storage configuration is described in the Oracle Grid Infrastructure Installation Guide.

CTSS (Cluster Time Synchronization Service)

CTSS is configured automatically as part of the Grid Infrastructure installation to synchronize the clocks on all the nodes. As an alternative, you can manually configure NTP (Network Time Protocol) on both nodes. We will synchronize the time across the cluster nodes using CTSS.
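
Once the Grid Infrastructure is running, you can confirm which mode CTSS is operating in (active when no NTP configuration is found, observer mode otherwise) with the following command; output omitted here:

grid@gurnard1-> crsctl check ctss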

2. Install Oracle Database 11g Release 2 Grid Infrastructure software

The Grid Infrastructure software now installs both the Oracle Clusterware and ASM into the same Oracle Home. We will use the grid Unix account to install the software.

Download and unzip the software to a staging area and as the grid user on gurnard1, execute runInstaller to start the installation.

grid@gurnard1-> id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1100(asmadmin),1300(asmdba)
 
grid@gurnard1-> more .profile
export ORACLE_SID=+ASM1
export EDITOR=vi
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin
umask 022
grid@gurnard1-> /software/11gR2x64/grid/runInstaller

1.  Select Installation Option: Click on Install and Configure Grid Infrastructure for a Cluster.

2.  Select Installation Type: Click on Advanced Installation.

3.  Select Product Languages: Select English.

4.  Grid Plug and Play Information:

a.  Cluster Name: gurnard-cluster

b.  SCAN Name: gurnard-cluster-scan.gurnard.vcconsult.com

c.   SCAN Port: 1521

d.  Select Configure GNS.

  • GNS Sub Domain: gurnard.vcconsult.com
  • GNS VIP Address: 192.168.1.208

5.  Cluster Node Information:

a.   Click on Add.

  • Hostname: gurnard2.vcconsult.com
  • Virtual IP Name: AUTO

b.  Click on SSH Connectivity.

  • OS Username: grid
  • OS Password: ******

c.   Click on Setup.

d.  Click on OK.

6.  Specify Network Interface Usage:

a.   Interface Name: eth0; Subnet: 192.168.1.0; Interface Type: Public

b.  Interface Name: eth1; Subnet: 10.10.10.0; Interface Type: Private

7.  Storage Option Information: Select Automatic Storage Management (ASM).

8.  Create ASM Disk Group:

a.   Disk Group Name: DATA

b.  Redundancy: Select Normal

c.   Add Disks: Select Candidate Disks.

  • Select ORCL:VOL1, ORCL:VOL2, ORCL:VOL3

Oracle Clusterware files are stored in the DATA disk group. The disk group redundancy level determines the type of default file mirroring by Oracle ASM:

  • External redundancy: no mirroring
  • Normal redundancy: 2-way mirroring
  • High redundancy: 3-way mirroring

9.  Specify ASM Password: Select Use different passwords for these accounts.

a.   SYS: ********

b.  ASMSNMP: ********

10.  Failure Isolation Support: Select Do not use Intelligent Platform Management Interface (IPMI).

11.  Privileged Operating System Groups:

a.  ASM Database Administrator (OSDBA) Group: asmdba

b.  ASM Instance Administration Operator (OSOPER) Group: asmdba

c.   ASM Instance Administrator (OSASM) Group: asmadmin

Click YES on the “OSDBA and OSOPER are the same OS group” warning message; we will use the same OS group, asmdba.

12.  Specify Installation Location:

a.   Oracle Base: /u01/app/grid

b.  Software Location: /u01/app/11.2.0/grid

13.  Create Inventory:

a.   Inventory Directory: /u01/app/oraInventory

b.  oraInventory Group Name: oinstall (automatically populated by the installer)

14.  Perform Prerequisite Checks: Click on Ignore All since we know we have ample swap space to complete the installation. If you encounter kernel parameter warning messages, click on “Fix & Check Again” to generate the runfixup.sh script in /tmp/CVU_11.2.0.1.0_grid. As the root user, execute the script on both nodes to correct the issues.

15.  Summary: Click on Finish.

16.  Execute Configuration scripts: Execute the scripts below as the root user on both nodes sequentially, one at a time. Do not proceed to the next script until the current script completes.

a.   Execute /u01/app/oraInventory/orainstRoot.sh on gurnard1 and gurnard2.

b.  Execute /u01/app/11.2.0/grid/root.sh on gurnard1 and gurnard2.

On gurnard1,

[root@gurnard1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
 
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

On gurnard2,

[root@gurnard2 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
 
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
 
 
[root@gurnard1 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
 
 
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-03-12 15:30:37: Parsing the host name
2010-03-12 15:30:37: Checking for super user privileges
2010-03-12 15:30:37: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'gurnard1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'gurnard1'
CRS-2676: Start of 'ora.gipcd' on 'gurnard1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'gurnard1'
CRS-2676: Start of 'ora.gpnpd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'gurnard1'
CRS-2676: Start of 'ora.cssdmonitor' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'gurnard1'
CRS-2672: Attempting to start 'ora.diskmon' on 'gurnard1'
CRS-2676: Start of 'ora.diskmon' on 'gurnard1' succeeded
CRS-2676: Start of 'ora.cssd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'gurnard1'
CRS-2676: Start of 'ora.ctssd' on 'gurnard1' succeeded
 
ASM created and started successfully.
 
DiskGroup DATA created successfully.
 
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'gurnard1'
CRS-2676: Start of 'ora.crsd' on 'gurnard1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 6ad595e8f0314f4fbf2b455dd502122c.
Successful addition of voting disk 8d2d22d9625e4f50bf8febb2f5e659ec.
Successful addition of voting disk 9bcc835983de4f3cbfdb85bdd6927bd3.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   6ad595e8f0314f4fbf2b455dd502122c (ORCL:VOL1) [DATA]
 2. ONLINE   8d2d22d9625e4f50bf8febb2f5e659ec (ORCL:VOL2) [DATA]
 3. ONLINE   9bcc835983de4f3cbfdb85bdd6927bd3 (ORCL:VOL3) [DATA]
Located 3 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'gurnard1'
CRS-2677: Stop of 'ora.crsd' on 'gurnard1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'gurnard1'
CRS-2677: Stop of 'ora.asm' on 'gurnard1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'gurnard1'
CRS-2677: Stop of 'ora.ctssd' on 'gurnard1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'gurnard1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'gurnard1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'gurnard1'
CRS-2677: Stop of 'ora.cssd' on 'gurnard1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'gurnard1'
CRS-2677: Stop of 'ora.gpnpd' on 'gurnard1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'gurnard1'
CRS-2677: Stop of 'ora.gipcd' on 'gurnard1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'gurnard1'
CRS-2677: Stop of 'ora.mdnsd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'gurnard1'
CRS-2676: Start of 'ora.mdnsd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'gurnard1'
CRS-2676: Start of 'ora.gipcd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'gurnard1'
CRS-2676: Start of 'ora.gpnpd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'gurnard1'
CRS-2676: Start of 'ora.cssdmonitor' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'gurnard1'
CRS-2672: Attempting to start 'ora.diskmon' on 'gurnard1'
CRS-2676: Start of 'ora.diskmon' on 'gurnard1' succeeded
CRS-2676: Start of 'ora.cssd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'gurnard1'
CRS-2676: Start of 'ora.ctssd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'gurnard1'
CRS-2676: Start of 'ora.asm' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'gurnard1'
CRS-2676: Start of 'ora.crsd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'gurnard1'
CRS-2676: Start of 'ora.evmd' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'gurnard1'
CRS-2676: Start of 'ora.asm' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'gurnard1'
CRS-2676: Start of 'ora.DATA.dg' on 'gurnard1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'gurnard1'
CRS-2676: Start of 'ora.registry.acfs' on 'gurnard1' succeeded
 
gurnard1     2010/03/12 15:38:52     /u01/app/11.2.0/grid/cdata/gurnard1/backup_20100312_153852.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 1651 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
 
 
 
[root@gurnard2 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script...
 
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
 
 
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2010-03-12 15:40:55: Parsing the host name
2010-03-12 15:40:55: Checking for super user privileges
2010-03-12 15:40:55: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node gurnard1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
CRS-2672: Attempting to start 'ora.mdnsd' on 'gurnard2'
CRS-2676: Start of 'ora.mdnsd' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'gurnard2'
CRS-2676: Start of 'ora.gipcd' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'gurnard2'
CRS-2676: Start of 'ora.gpnpd' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'gurnard2'
CRS-2676: Start of 'ora.cssdmonitor' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'gurnard2'
CRS-2672: Attempting to start 'ora.diskmon' on 'gurnard2'
CRS-2676: Start of 'ora.diskmon' on 'gurnard2' succeeded
CRS-2676: Start of 'ora.cssd' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'gurnard2'
CRS-2676: Start of 'ora.ctssd' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'gurnard2'
CRS-2676: Start of 'ora.drivers.acfs' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'gurnard2'
CRS-2676: Start of 'ora.asm' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'gurnard2'
CRS-2676: Start of 'ora.crsd' on 'gurnard2' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'gurnard2'
CRS-2676: Start of 'ora.evmd' on 'gurnard2' succeeded
 
gurnard2     2010/03/12 15:46:00     /u01/app/11.2.0/grid/cdata/gurnard2/backup_20100312_154600.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 2042 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Execute Configuration Scripts

Click on OK.

17.  Finish: Click on Close.

At the end of the grid installation, if the Cluster Verification Utility (CVU) check fails, fix the issue and then manually re-run the following to check again:

./runcluvfy.sh stage -post crsinst -n all -verbose > /tmp/postchecks.log
 

Congratulations, you have completed the installation of Oracle Grid Infrastructure!

2.1 Oracle Grid Infrastructure Post-Installation checks

Now that we have successfully installed the Grid Infrastructure, let’s verify the environment we have just set up.

Application resources

grid@gurnard1-> crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       gurnard1
               ONLINE  ONLINE       gurnard2
ora.LISTENER.lsnr
               ONLINE  ONLINE       gurnard1
               ONLINE  ONLINE       gurnard2
ora.asm
               ONLINE  ONLINE       gurnard1                 Started
               ONLINE  ONLINE       gurnard2                 Started
ora.eons
               ONLINE  ONLINE       gurnard1
               ONLINE  ONLINE       gurnard2
ora.gsd
               OFFLINE OFFLINE      gurnard1
               OFFLINE OFFLINE      gurnard2
ora.net1.network
               ONLINE  ONLINE       gurnard1
               ONLINE  ONLINE       gurnard2
ora.ons
               ONLINE  ONLINE       gurnard1
               ONLINE  ONLINE       gurnard2
ora.registry.acfs
               ONLINE  ONLINE       gurnard1
               ONLINE  ONLINE       gurnard2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       gurnard2
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       gurnard1
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       gurnard1
ora.gns
      1        ONLINE  ONLINE       gurnard1
ora.gns.vip
      1        ONLINE  ONLINE       gurnard1
ora.gurnard1.vip
      1        ONLINE  ONLINE       gurnard1
ora.gurnard2.vip
      1        ONLINE  ONLINE       gurnard2
ora.oc4j
      1        OFFLINE OFFLINE
ora.scan1.vip
      1        ONLINE  ONLINE       gurnard2
ora.scan2.vip
      1        ONLINE  ONLINE       gurnard1
ora.scan3.vip
      1        ONLINE  ONLINE       gurnard1
 
 
grid@gurnard1-> srvctl status nodeapps
VIP 192.168.1.222 is enabled
VIP 192.168.1.222 is running on node: gurnard1
VIP 192.168.1.226 is enabled
VIP 192.168.1.226 is running on node: gurnard2
Network is enabled
Network is running on node: gurnard1
Network is running on node: gurnard2
GSD is disabled
GSD is not running on node: gurnard1
GSD is not running on node: gurnard2
ONS is enabled
ONS daemon is running on node: gurnard1
ONS daemon is running on node: gurnard2
eONS is enabled
eONS daemon is running on node: gurnard1
eONS daemon is running on node: gurnard2

Oracle Cluster Ready Services (CRS)

grid@gurnard1-> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
 
grid@gurnard2-> crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
 
grid@gurnard1-> crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]

Oracle Cluster Registry (OCR) and Voting Disk

grid@gurnard1-> ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2824
         Available space (kbytes) :     259296
         ID                       : 1041140530
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded
 
                                    Device/File not configured
 
                                    Device/File not configured
 
                                    Device/File not configured
 
                                    Device/File not configured
 
         Cluster registry integrity check succeeded
 
         Logical corruption check bypassed due to non-privileged user
 
grid@gurnard1-> crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   03caacce4e594faabfeeae5018b7fc8f (ORCL:VOL1) [DATA]
 2. ONLINE   25a2945e01564f30bfd8bf1f50d66fc8 (ORCL:VOL2) [DATA]
 3. ONLINE   448383e5ce5b4fdbbfdf3d9a5ba7b85e (ORCL:VOL3) [DATA]
Located 3 voting disk(s).
 

Oracle High Availability Services

[root@gurnard1 ~]# /u01/app/11.2.0/grid/bin/crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
 
[root@gurnard1 ~]# ps -ef|egrep 'gpnpd|gns|mdns'|grep -v egrep
grid      2859     1  0 17:35 ?        00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid      2878     1  0 17:35 ?        00:00:00 /u01/app/11.2.0/grid/bin/gpnpd.bin
root      3601     1  0 17:37 ?        00:00:00 /u01/app/11.2.0/grid/bin/gnsd.bin 
	-trace-level 0 -ip-address 192.168.1.208 -startup-endpoint ipc://GNS_gurnard1_3415_5ae142164cf056b9
 
 
[root@gurnard2 ~]# /u01/app/11.2.0/grid/bin/crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
 
[root@gurnard2 ~]# ps -ef|egrep 'gpnpd|gns|mdns'|grep -v egrep
grid      2703     1  0 17:35 ?        00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid      2717     1  0 17:35 ?        00:00:00 /u01/app/11.2.0/grid/bin/gpnpd.bin

Oracle Automatic Storage Management (ASM)

grid@gurnard1-> srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
 
grid@gurnard2-> srvctl config asm -a
ASM home: /u01/app/11.2.0/grid
ASM listener: LISTENER
ASM is enabled.
 
grid@gurnard1-> srvctl status asm -a
ASM is running on gurnard1,gurnard2
ASM is enabled.

Oracle Listener

grid@gurnard1-> srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: 


Single Client Access Name (SCAN)

grid@gurnard1-> srvctl config scan
SCAN name: gurnard-cluster-scan.gurnard.vcconsult.com, Network: 1/192.168.1.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /192.168.1.223/192.168.1.223
SCAN VIP name: scan2, IP: /192.168.1.224/192.168.1.224
SCAN VIP name: scan3, IP: /192.168.1.225/192.168.1.225
 
grid@gurnard1-> srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node gurnard2
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node gurnard1
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node gurnard1
 
grid@gurnard1-> srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node gurnard2
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node gurnard1
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node gurnard1
 
grid@gurnard1-> nslookup gurnard-cluster-scan.gurnard.vcconsult.com
Server:         192.168.1.13
Address:        192.168.1.13#53
 
Name:   gurnard-cluster-scan.gurnard.vcconsult.com
Address: 192.168.1.223
Name:   gurnard-cluster-scan.gurnard.vcconsult.com
Address: 192.168.1.225
Name:   gurnard-cluster-scan.gurnard.vcconsult.com
Address: 192.168.1.224

Grid Naming Service (GNS)

grid@gurnard1-> srvctl config gns -a
GNS is enabled.
GNS is listening for DNS server requests on port 53
GNS is using port 5353 to connect to mDNS
GNS status: OK
Domain served by GNS: gurnard.vcconsult.com
GNS version: 11.2.0.1.0
GNS VIP network: ora.net1.network
 
grid@gurnard1-> srvctl status gns
GNS is running on node gurnard1.
GNS is enabled on node gurnard1.

Virtual Network Interfaces

 grid@gurnard1-> srvctl status vip -n gurnard1
VIP 192.168.1.222 is enabled
VIP 192.168.1.222 is running on node: gurnard1
grid@gurnard1-> srvctl status vip -n gurnard2
VIP 192.168.1.226 is enabled
VIP 192.168.1.226 is running on node: gurnard2
 
 
[root@gurnard1 ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:16:3E:12:93:AD
inet addr:192.168.1.181  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe12:93ad/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:152558 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3958 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:79485094 (75.8 MiB)  TX bytes:1017059 (993.2 KiB)
 
eth0:1    Link encap:Ethernet  HWaddr 00:16:3E:12:93:AD
inet addr:192.168.1.225  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
eth0:2    Link encap:Ethernet  HWaddr 00:16:3E:12:93:AD
inet addr:192.168.1.208  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
eth0:4    Link encap:Ethernet  HWaddr 00:16:3E:12:93:AD
inet addr:192.168.1.224  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
eth0:6    Link encap:Ethernet  HWaddr 00:16:3E:12:93:AD
inet addr:192.168.1.222  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
eth1      Link encap:Ethernet  HWaddr 00:16:3E:1C:C4:2D
          inet addr:10.10.10.181  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe1c:c42d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11294 errors:0 dropped:0 overruns:0 frame:0
          TX packets:112960 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2707940 (2.5 MiB)  TX bytes:50319220 (47.9 MiB)
 
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:55173 errors:0 dropped:0 overruns:0 frame:0
          TX packets:55173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:18087118 (17.2 MiB)  TX bytes:18087118 (17.2 MiB)
 
sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
 
[root@gurnard2 ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:16:3E:3F:4F:AE
inet addr:192.168.1.182  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe3f:4fae/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:19674 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6111 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4345841 (4.1 MiB)  TX bytes:1167411 (1.1 MiB)
 
eth0:1    Link encap:Ethernet  HWaddr 00:16:3E:3F:4F:AE
inet addr:192.168.1.223  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
eth0:2    Link encap:Ethernet  HWaddr 00:16:3E:3F:4F:AE
inet addr:192.168.1.226  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 
eth1      Link encap:Ethernet  HWaddr 00:16:3E:77:6C:E0
          inet addr:10.10.10.182  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe77:6ce0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:123811 errors:0 dropped:0 overruns:0 frame:0
          TX packets:132742 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:52925305 (50.4 MiB)  TX bytes:75688595 (72.1 MiB)
 
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:39980 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39980 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:16839844 (16.0 MiB)  TX bytes:16839844 (16.0 MiB)
 
sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 

3. Install Oracle Database 11g Release 2 Software

Download and unzip the software to a staging area and as the oracle Unix user on gurnard1, execute runInstaller to start the installation.

oracle@gurnard1-> id 
uid=1200(oracle) gid=1000(oinstall) groups=1000(oinstall),1200(dba),1300(asmdba)
 
oracle@gurnard1-> more .profile
export EDITOR=vi
export ORACLE_UNQNAME=appsdb
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin
umask 022     
 
oracle@gurnard1-> /software/11gR2x64/database/runInstaller

1.  Configure Security Updates: Leave the Email and My Oracle Support Password blank.

2.  Select Installation Option: Select Install database software only.

3.  Node Selection: Select Real Application Clusters database installation. Make sure both nodes are selected.

a.   Click on SSH Connectivity.

  • OS Username: oracle
  • OS Password: *******

b.  Click on Setup.

c.   Click on OK.

4.  Select Product Languages: Select English.

5.  Select Database Edition: Select Enterprise Edition.

a.   Click on Select Options.

  • Select Oracle Partitioning, Oracle Real Application Testing.

6.  Specify Installation Location:

a.   Oracle Base: /u01/app/oracle

b.  Software Location: /u01/app/oracle/product/11.2.0/dbhome_1

7.  Privileged Operating System Groups:

a.   Database Administrator (OSDBA) Group: dba

b.  Database Operator (OSOPER) Group: dba

8.  Perform Prerequisite Checks: Click on Ignore All since we know we have ample swap space to complete the installation. If you encounter kernel parameter warning messages, click on “Fix & Check Again” to generate the runfixup.sh script in /tmp/CVU_11.2.0.1.0_grid. As the root user, execute the script on both nodes to correct the issues.

9.  Summary: Click on Finish.

10.  Execute Configuration scripts: Execute the scripts below as the root user on both nodes.

a.   Execute /u01/app/oracle/product/11.2.0/dbhome_1/root.sh on gurnard1 and gurnard2.

11.  Return to the Execute Configuration scripts screen on gurnard1 and click on OK.

12.  Finish: Click on Close.
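
For reference, step 10 amounts to running the database home's root script on each node in turn (output omitted):

[root@gurnard1 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh
[root@gurnard2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh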

4. Create ASM disk group for Fast Recovery Area

We will create another disk group (+FRA) for Fast Recovery Area using ASMCA (ASM Configuration Assistant).

As the grid Unix user, execute asmca.

grid@gurnard1-> asmca  

1.  Disk Groups tab: Click on Create.

2.  Create Disk Group:

a.   Disk Group Name: FRA.

b.  Redundancy: Select External (None).

c.   Select Member Disks: Select ORCL:VOL4.

3.  Disk Groups tab: Click on Exit.

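If you prefer the command line to ASMCA, the same disk group could be created from SQL*Plus connected to the ASM instance as SYSASM. A minimal sketch using the candidate disk shown earlier; note that, unlike ASMCA, this only mounts the disk group on the local ASM instance, so it would still need to be mounted on gurnard2:

SQL> create diskgroup FRA external redundancy disk 'ORCL:VOL4';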

5. Create Oracle Database 11g (appsdb)

As the oracle user on gurnard1, execute dbca.

oracle@gurnard1-> dbca

1.  Welcome: Select Oracle Real Application Clusters database.

2.  Operations: Select Create a Database.

3.  Database Templates: Select General Purpose or Transaction Processing.

4.  Database Identification:

a.   Configuration Type: Admin-Managed.

b.  Global Database Name: appsdb.

c.   SID Prefix: appsdb.

d.  Select the nodes: gurnard1. Make sure you select just one node.

5.  Management Options:

a.   On the Enterprise Manager tab, select

  • Configure Enterprise Manager.

6.  Database Credentials: For simplicity, we selected Use the Same Administrative Password for All Accounts.

a.  Password: **********

b.  Confirm Password: **********

7.  Database File Locations: Select Use Oracle-Managed Files.

a.  Database Area: +DATA

8.  ASM Credentials:

a.   Specify ASMSNMP password specific to ASM: **********

9.  Recovery Configuration: Select Specify Flash Recovery Area.

a.   Flash Recovery Area: +FRA

b.  Flash Recovery Area Size: 5000

10.  Database Content: Leave Sample Schemas unchecked.

11.  Initialization Parameters: Accept the default values.

12.  Database Storage: Click on Next.

13.  Creation Options: Select the database creation options:

a.   Select Create Database.

14.  Summary: Click on OK.

15.  Database Configuration Assistant: Click on Exit.

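The recovery settings chosen in step 9 correspond to the db_recovery_file_dest and db_recovery_file_dest_size initialization parameters; once the database is up, you can verify or resize them from SQL*Plus, for example:

SQL> show parameter db_recovery_file_dest
SQL> alter system set db_recovery_file_dest_size=5000M scope=both sid='*';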

5.1 Oracle Database Post-Installation checks

At this stage, we have a single database instance, appsdb1, running on gurnard1. The database does not become a RAC One Node database on the cluster until we initialize it using the raconeinit utility in the next section.

oracle@gurnard1-> srvctl config database -d appsdb -a
Database unique name: appsdb
Database name: appsdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/appsdb/spfileappsdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: appsdb
Database instances: appsdb1
Disk Groups: DATA,FRA
Services:
Database is enabled
Database is administrator managed
 
oracle@gurnard1-> srvctl status database -d appsdb
Instance appsdb1 is running on node gurnard1
 
 
oracle@gurnard1-> sqlplus system@appsdb
 
SQL*Plus: Release 11.2.0.1.0 Production on Sat Mar 27 05:31:56 2010
 
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
 
Enter password:
 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and Real Application Testing options
 
SQL> select instance_number instance#,instance_name,host_name,status
  2  from gv$instance
  3  /
 
 INSTANCE# INSTANCE_NAME    HOST_NAME      STATUS
---------- ---------------- -------------- ------------
         1 appsdb1          gurnard1       OPEN  
 
 
SQL> select inst_id,thread#,bytes/1024/1024 "Size(MB)",members,status
  2  from gv$log
  3  /
 
   INST_ID    THREAD#   Size(MB)    MEMBERS STATUS
---------- ---------- ---------- ---------- ----------------
         1          1         50          2 INACTIVE
         1          1         50          2 INACTIVE
         1          1         50          2 CURRENT
 
 
SQL> select inst_id,group#,member,is_recovery_dest_file
  2  from gv$logfile
  3  order by 1,2
  4  /
 
INST_ID GROUP# MEMBER                                       IS_
------- ------ -------------------------------------------- ---
      1      1 +DATA/appsdb/onlinelog/group_1.261.713575025 NO
      1      1 +FRA/appsdb/onlinelog/group_1.257.713575033  YES
      1      2 +DATA/appsdb/onlinelog/group_2.262.713575037 NO
      1      2 +FRA/appsdb/onlinelog/group_2.258.713575043  YES
      1      3 +DATA/appsdb/onlinelog/group_3.263.713575045 NO
      1      3 +FRA/appsdb/onlinelog/group_3.259.713575049  YES
 
6 rows selected.
 
 
SQL> select  inst_id,
  2           name,
value
  3  from    gv$parameter
  4  where   name in ('cluster_database',
  5                    'db_name',
  6                    'instance_name',
  7                    'instance_number',
  8                    'local_listener',
  9                    'remote_listener',
 10                    'undo_tablespace')
 11  order by 2
 12  /
 
INST_ID NAME                 VALUE
------- -------------------- -----------------------------------
      1 cluster_database     TRUE
      1 db_name              appsdb
      1 instance_name        appsdb1
      1 instance_number      1
      1 local_listener       (DESCRIPTION=(ADDRESS_LIST=(ADDRESS
                             =(PROTOCOL=TCP)(HOST=192.168.1.222)
                             (PORT=1521))))
 
      1 remote_listener      gurnard-cluster-scan.gurnard.vccons
                             ult.com:1521
 
      1 undo_tablespace      UNDOTBS1
 
7 rows selected.

6. Initialize the Database to Oracle RAC One Node

Prior to initializing the database as RAC One Node, we have to apply patch 9004119 to the Oracle Database home. The patch installs the utilities used to manage and administer a RAC One Node environment.


6.1 Apply Patch 9004119

Download and unzip the patch in a staging area. As the oracle user on either node, execute:

oracle@gurnard1-> echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
oracle@gurnard1-> pwd
/tmp/9004119
oracle@gurnard1-> $ORACLE_HOME/OPatch/opatch apply
Invoking OPatch 11.1.0.6.6
 
Oracle Interim Patch Installer version 11.1.0.6.6
Copyright (c) 2009, Oracle Corporation.  All rights reserved.
 
 
Oracle Home       : /u01/app/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.1.0.6.6
OUI version       : 11.2.0.1.0
OUI location      : /u01/app/oracle/product/11.2.0/dbhome_1/oui
Log file location : /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2010-04-05_14-41-54PM.log
 
Patch history file: /u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch_history.txt
 
ApplySession applying interim patch '9004119' to OH '/u01/app/oracle/product/11.2.0/dbhome_1'
 
Running prerequisite checks...
 
OPatch detected the node list and the local node from the inventory.  OPatch will patch the local system then propagate the patch to the remote nodes.
 
 
This node is part of an Oracle Real Application Cluster.
Remote nodes: 'gurnard2'
Local node: 'gurnard1'
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/11.2.0/dbhome_1')
 
 
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files and inventory (not for auto-rollback) for the Oracle Home
Backing up files affected by the patch '9004119' for restore. This might take a while...
Backing up files affected by the patch '9004119' for rollback. This might take a while...
 
Patching component oracle.rdbms, 11.2.0.1.0...
Copying file to "/u01/app/oracle/product/11.2.0/dbhome_1/bin/racone2rac"
Copying file to "/u01/app/oracle/product/11.2.0/dbhome_1/bin/Omotion"
Copying file to "/u01/app/oracle/product/11.2.0/dbhome_1/bin/raconefix"
Copying file to "/u01/app/oracle/product/11.2.0/dbhome_1/bin/raconeinit"
Copying file to "/u01/app/oracle/product/11.2.0/dbhome_1/bin/raconestatus"
ApplySession adding interim patch '9004119' to inventory
 
Verifying the update...
Inventory check OK: Patch ID 9004119 is registered in Oracle Home inventory with proper meta-data.
Files check OK: Files from Patch ID 9004119 are present in Oracle Home.
 
The local system has been patched.  You can restart Oracle instances on it.
 
 
Patching in rolling mode.
 
 
The node 'gurnard2' will be patched next.
 
 
Please shutdown Oracle instances running out of this ORACLE_HOME on 'gurnard2'.
(Oracle Home = '/u01/app/oracle/product/11.2.0/dbhome_1')
 
Is the node ready for patching? [y|n]
y
User Responded with: Y
Updating nodes 'gurnard2'
   Apply-related files are:
     FP = "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/copy_files.txt"
     DP = "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/copy_dirs.txt"
     MP = "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/make_cmds.txt"
     RC = "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/remote_cmds.txt"
 
Instantiating the file "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/copy_files.txt.instantiated" 
	by replacing $ORACLE_HOME in "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/copy_files.txt" with actual path.
Propagating files to remote nodes...
Instantiating the file "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/copy_dirs.txt.instantiated" 
	by replacing $ORACLE_HOME in "/u01/app/oracle/product/11.2.0/dbhome_1/.patch_storage/9004119_Oct_22_2009_12_33_44/rac/copy_dirs.txt" with actual path.
Propagating directories to remote nodes...
 
The node 'gurnard2' has been patched.  You can restart Oracle instances on it.
 
 
OPatch succeeded.
 
 
 
oracle@gurnard1-> cd $ORACLE_HOME/bin
oracle@gurnard1-> ls racone* Omotion
Omotion  racone2rac  raconefix  raconeinit  raconestatus
 
oracle@gurnard2-> cd $ORACLE_HOME/bin
oracle@gurnard2-> ls racone* Omotion
Omotion  racone2rac  raconefix  raconeinit  raconestatus

Script Name    Description
raconefix      Fixes metadata after an Omotion failure or failover
raconeinit     Initializes a database as RAC One Node
raconestatus   Checks the status of RAC One Node databases
racone2rac     Upgrades a RAC One Node database to RAC
Omotion        Migrates a database online from one node to another

6.2 Initialize database (appsdb) to RAC One Node

We will use the raconeinit utility created by the patch to initialize the database. During the initialization operation, the database instance appsdb1 will be shut down and brought back up as appsdb_1.

The utility automatically creates the directories and files supporting the renamed instance, appsdb_1:

oracle@gurnard1-> ls $ORACLE_BASE/diag/rdbms/appsdb/appsdb_1
alert  cdump  hm  incident  incpkg  ir  lck  metadata  stage  sweep  trace
 
oracle@gurnard1-> ls -lat $ORACLE_HOME/dbs/*appsdb_1
lrwxrwxrwx 1 oracle oinstall 55 Apr  5 15:11 /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwappsdb_1 -> /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwappsdb

As the oracle user on either node, execute the following:

oracle@gurnard1-> srvctl status database -d appsdb
Instance appsdb1 is running on node gurnard1
 
 
oracle@gurnard1-> raconestatus
 
RAC One Node databases on this cluster:
 
 
  Database UP Fix Required     Current Server         Candidate Server Names
  ======== == ============ ======================  ===========================
 
 
Available Free Servers: gurnard2
 
 
 
oracle@gurnard1-> raconeinit
 
Candidate Databases on this cluster:
 
 #      Database        RAC One Node    Fix Required
===     ========        ============    ============
[1]       appsdb             NO              N/A
 
Enter the database to initialize [1]:
 
 
Database appsdb is now running on server gurnard1
 
Candidate servers that may be used for this DB:   gurnard2
 
Enter the names of additional candidate servers where this DB may run (space delimited): gurnard2
 
Please wait, this may take a few minutes to finish........
 
Database configuration modified.
 
 
oracle@gurnard1-> raconestatus
 
RAC One Node databases on this cluster:
 
 
  Database UP Fix Required    Current Server       Candidate Server Names
  ======== == ============ ====================  ==========================
    appsdb  Y       N                  gurnard1           gurnard1 gurnard2
 
 
Available Free Servers:
 
 
oracle@gurnard1-> srvctl status database -d appsdb
Instance appsdb_1 is running on node gurnard1
 

Notice that the database instance was renamed from appsdb1 to appsdb_1.
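
To double-check the change, the same srvctl query used in section 5.1 can be re-run; it should now list the appsdb_1 instance (output omitted):

oracle@gurnard1-> srvctl config database -d appsdb -a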

Congratulations, you have completed the installation of Oracle RAC One Node!

7. Migrate database applications to Oracle RAC One Node

Now that we have Oracle RAC One Node installed and configured, let's migrate our database applications (GL, PO) from our existing single database instance (finsdb) to the new RAC One Node environment.

The data will be migrated using Oracle Transportable Tablespaces, an efficient method of moving large volumes of data quickly between databases.
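
The RAC One Node overview table at the start of this guide also lists a finsdb service on the appsdb database, which lets the migrated applications keep connecting with their existing service name. One way to provide that, sketched here under the admin-managed configuration created above, is to add and start a database service before pointing the clients at the new cluster:

oracle@gurnard1-> srvctl add service -d appsdb -s finsdb -r appsdb_1
oracle@gurnard1-> srvctl start service -d appsdb -s finsdb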

7.1 Verify that the tablespaces are self-contained on the source database

SQL> execute dbms_tts.transport_set_check('GL_D,PO_D',TRUE);
 
PL/SQL procedure successfully completed.
 
SQL> select * from transport_set_violations;
 
no rows selected

7.2 Set the source tablespaces to be transported to read only

SQL> alter tablespace gl_d read only;
 
Tablespace altered.
 
SQL> alter tablespace po_d read only;
 
Tablespace altered.

7.3 Export the tablespaces’ metadata

oracle@plaice-> expdp dumpfile=exp_glpo.dmp logfile=exp_glpo.log directory=data_pump_dir transport_tablespaces=gl_d,po_d
 
Export: Release 11.2.0.1.0 - Production on Sun Mar 28 12:40:08 2010
 
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
 
Username: system
Password:
 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** dumpfile=exp_glpo.dmp logfile=exp_glpo.log directory=data_pump_dir transport_tablespaces=gl_d,po_d
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/appsdb/dpdump/exp_glpo.dmp
******************************************************************************
Datafiles required for transportable tablespace GL_D:
  /u01/oradata/appsdb/gl_d01.dbf
Datafiles required for transportable tablespace PO_D:
  /u01/oradata/appsdb/po_d01.dbf
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 12:41:44

7.4 Transport the tablespaces’ metadata and datafiles to the target shared storage

Since the datafiles were small, we simply transferred the files to the target server (gurnard1) using SFTP. We could also have mounted the export directory on the target server or set up ASM shared storage access on the source server (plaice).

As the oracle user on the source server:

oracle@plaice-> cd /u01/oradata/finsdb
oracle@plaice-> ls gl* po*
gl_d01.dbf  po_d01.dbf
oracle@plaice-> ls /u01/app/oracle/admin/finsdb/dpdump/*dmp
/u01/app/oracle/admin/finsdb/dpdump/exp_glpo.dmp
oracle@plaice-> sftp gurnard1
Connecting to gurnard1...
sftp> cd /tmp
sftp> put gl_d01.dbf
Uploading gl_d01.dbf to /tmp/gl_d01.dbf
gl_d01.dbf                                  100%  100MB  20.0MB/s   00:05
sftp> put po_d01.dbf
Uploading po_d01.dbf to /tmp/po_d01.dbf
po_d01.dbf                                  100%  100MB  25.0MB/s   00:04
sftp> put /u01/app/oracle/admin/finsdb/dpdump/exp_glpo.dmp
Uploading /u01/app/oracle/admin/finsdb/dpdump/exp_glpo.dmp to /tmp/exp_glpo.dmp
/u01/app/oracle/admin/finsdb/dpdump/exp_glp 100%  612KB 612.0KB/s   00:00
sftp> bye

As the grid user on the target server:

grid@gurnard1-> cd /tmp
grid@gurnard1-> ls *.dmp *.dbf
exp_glpo.dmp  gl_d01.dbf  po_d01.dbf
grid@gurnard1-> asmcmd
ASMCMD> cp /tmp/gl_d01.dbf +DATA/APPSDB/DATAFILE/GL
copying /tmp/gl_d01.dbf -> +DATA/APPSDB/DATAFILE/GL
ASMCMD> cp /tmp/po_d01.dbf +DATA/APPSDB/DATAFILE/PO
copying /tmp/po_d01.dbf -> +DATA/APPSDB/DATAFILE/PO
ASMCMD> cd +DATA/appsdb/datafile
ASMCMD> ls -l
Type      Redund  Striped  Time             Sys  Name
                                            N    GL => +DATA/ASM/DATAFILE/GL.272.715702487
                                            N    PO => +DATA/ASM/DATAFILE/PO.273.715702501
DATAFILE  MIRROR  COARSE   MAR 27 22:00:00  Y    SYSAUX.257.713574807
DATAFILE  MIRROR  COARSE   MAR 27 22:00:00  Y    SYSTEM.256.713574803
DATAFILE  MIRROR  COARSE   MAR 27 00:00:00  Y    UNDOTBS1.258.713574809
DATAFILE  MIRROR  COARSE   MAR 27 00:00:00  Y    USERS.259.713574809

7.5 Create the target schemas, GL and PO

SQL> connect system/Oracle#101@appsdb
Connected.
SQL> create user po identified by oracle default tablespace users temporary tablespace temp;
 
User created.
 
SQL> create user gl identified by oracle default tablespace users temporary tablespace temp;
 
User created.
 
SQL> grant connect,resource to po,gl;
 
Grant succeeded.

7.6 Import the tablespaces set

oracle@gurnard1-> cd /u01/app/oracle/product/11.2.0/dbhome_1/rdbms/log
oracle@gurnard1-> mv /tmp/exp_glpo.dmp .
oracle@gurnard1-> impdp dumpfile=exp_glpo.dmp 
	logfile=imp_glpo.log directory=data_pump_dir transport_datafiles='+DATA/ASM/DATAFILE/GL.272.715702487','+DATA/ASM/DATAFILE/PO.273.715702501'
 
Import: Release 11.2.0.1.0 - Production on Sun Mar 28 13:59:05 2010
 
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
 
Username: system@appsdb
Password:
 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/********@appsdb dumpfile=exp_glpo.dmp logfile=imp_glpo.log 
	directory=data_pump_dir transport_datafiles=+DATA/ASM/DATAFILE/GL.272.715702487,+DATA/ASM/DATAFILE/PO.273.715702501
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 14:00:02

7.7 Set the target tablespaces to read write

SQL> connect system/Oracle#101@appsdb
Connected.
SQL> alter tablespace gl_d read write;
 
Tablespace altered.
 
SQL> alter tablespace po_d read write;
 
Tablespace altered.

7.8 Check the imported objects on the target database

Compare the objects between the source and the target databases to verify that we have the exact replica of the source schemas on the target database.

On the source database (finsdb)

SQL> select object_type, count(*)
  2  from dba_objects
  3  where owner='GL'
  4  group by object_type 
  5  order by 1
  6  /
 
OBJECT_TYPE           COUNT(*)
------------------- ----------
INDEX                       19
PROCEDURE                    2
SEQUENCE                     3
SYNONYM                      1
TABLE                        7
VIEW                         1
 
6 rows selected.
 
SQL> select object_type, count(*)
  2  from dba_objects
  3  where owner='PO'
  4  group by object_type 
  5  order by 1
  6  /
 
OBJECT_TYPE           COUNT(*)
------------------- ----------
INDEX                        4
LOB                          1
TABLE                        2
TRIGGER                      2
TYPE                         3
TYPE BODY                    1
 
6 rows selected.
 
SQL> select constraint_type, count(*)
  2  from dba_constraints
  3  where owner='GL'
  4  group by constraint_type
  5  order by 1
  6  /
 
C   COUNT(*)
- ----------
C         15
P          7
R         10
U          1
 
4 rows selected.
 
SQL> select constraint_type, count(*)
  2  from dba_constraints
  3  where owner='PO'
  4  group by constraint_type
  5  order by 1
  6  /
 
C   COUNT(*)
- ----------
C         10
P          2
R          1
U          2
 
4 rows selected.
 
SQL> select owner, count(*)
  2  from dba_objects
  3  where owner in ('GL','PO')
  4  group by owner
  5  /
 
OWNER                            COUNT(*)
------------------------------ ----------
PO                                     13
GL                                     33
 
SQL> select count(*)
  2  from dba_objects
  3  where owner in ('GL','PO')
  4  and status='INVALID';
 
  COUNT(*)
----------
         0

On the target database (appsdb)

SQL> select object_type, count(*)
  2  from dba_objects
  3  where owner='GL'
  4  group by object_type
  5  order by 1
  6  /
 
OBJECT_TYPE           COUNT(*)
------------------- ----------
INDEX                       19
TABLE                        7
 
2 rows selected.
 
SQL> select object_type, count(*)
  2  from dba_objects
  3  where owner='PO'
  4  group by object_type
  5  order by 1
  6  /
 
OBJECT_TYPE           COUNT(*)
------------------- ----------
INDEX                        4
LOB                          1
TABLE                        2
TRIGGER                      2
 
4 rows selected.

Notice from the above that the Oracle Transportable Tablespace import does not bring over the procedure, sequence, synonym, view and type objects. We have to create these objects manually.

SQL> select constraint_type, count(*)
  2  from dba_constraints
  3  where owner='GL'
  4  group by constraint_type
  5  order by 1
  6  /
 
C   COUNT(*)
- ----------
C         15
P          7
R         10
U          1
 
4 rows selected.
 
SQL> select constraint_type, count(*)
  2  from dba_constraints
  3  where owner='PO'
  4  group by constraint_type
  5  order by 1
  6  /
 
C   COUNT(*)
- ----------
C         10
P          2
R          1
U          2
 
4 rows selected.

7.9 Create missing objects on the target database

We will run the queries below on the source database to extract the DDL for the missing objects and privileges, and then execute the resulting script on the target database. Lastly, check the object counts and confirm that there are no invalid objects on the target database.

On the source database (finsdb)

select dbms_metadata.get_ddl(object_type,object_name,owner)||'/' 
from   dba_objects 
where object_type in ('PROCEDURE','SEQUENCE','SYNONYM','VIEW')
and    owner='GL'
/
 
select dbms_metadata.get_ddl(object_type,object_name,owner)||'/' 
from   dba_objects 
where object_type in ('TYPE')
and    owner='PO'
/
 
select 'grant '||
privilege||
' on '||
owner||
'.'||
table_name||
' to '||
grantee||
' '||
decode(grantable,'YES',' with grant option ')||
decode(hierarchy,'YES',' with hierarchy option ')||
';'
from   dba_tab_privs
where grantee in ('GL','PO')
/
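
One simple way (illustrative only) to capture the output of these queries into a script is to spool them from SQL*Plus on the source and then run the resulting file on the target; the script name cr_missing_objects.sql below is arbitrary.

set long 100000 longchunksize 100000 pagesize 0 linesize 200 heading off feedback off trimspool on
spool cr_missing_objects.sql
-- run the three extraction queries above here
spool off
-- then, connected to the target database (appsdb):
--   @cr_missing_objects.sql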

On the target database (appsdb)

select owner, count(*)
from   dba_objects
where owner in ('GL','PO')
group by owner
/
 
select count(*)
from   dba_objects
where owner in ('GL','PO')
and    status='INVALID'
/

7.10 Create the service name (finsdb) in RAC One Node database

After the successful completion of the data migration, we will create a new service, finsdb, in the Oracle RAC One Node database to support the migrated applications.

oracle@gurnard1-> srvctl add service -d appsdb -s finsdb -r appsdb_1 -e select -m basic -w 5 -z 180
 
oracle@gurnard1-> srvctl start service -s finsdb -d appsdb
 
oracle@gurnard1-> srvctl status service -d appsdb
Service finsdb is running on instance(s) appsdb_1
 
 
SQL> select name,failover_method,failover_type,failover_retries,failover_delay
  2  from dba_services
  3  where name='finsdb'
  4  /
 
             FAILOVER   FAILOVER   FAILOVER FAILOVER
NAME         METHOD     TYPE        RETRIES    DELAY
------------ ---------- ---------- -------- --------
finsdb        BASIC      SELECT           180        5  
 
 
 
oracle@gurnard1-> sqlplus system/Oracle#101@gurnard-cluster-scan.gurnard.vcconsult.com:1521/finsdb
 
SQL*Plus: Release 11.2.0.1.0 Production on Sat Mar 27 23:47:09 2010
 
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
 
 
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and Real Application Testing options
 
 
SQL> select service_name,failover_type, failover_method, failed_over
  2  from gv$session where username='SYSTEM'
  3  /
 
SERVICE_NAME    FAILOVER_TYPE FAILOVER_METHOD  FAILED_OVER
--------------- ------------- ---------------- ------------
finsdb          SELECT        BASIC            NO

7.11 Add the client-side TNS service name for failover

As the oracle user on gurnard1 and gurnard2, add the TNS entry for finsdb in /u01/app/oracle/product/11.2.0/dbhome_1/network/admin/tnsnames.ora.

FINSDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = gurnard-cluster-scan.gurnard.vcconsult.com)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = finsdb)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )
 
 
SQL> connect system@finsdb
Enter password:
Connected.
SQL> select service_name,failover_type, failover_method, failed_over
  2  from gv$session where username='SYSTEM'
  3  /
 
SERVICE_NAME    FAILOVER_TYPE FAILOVER_METHOD  FAILED_OVER
--------------- ------------- ---------------- ------------
finsdb          SELECT        BASIC            NO

This has been a simplified overview of application migration, illustrating how the Oracle Transportable Tablespace feature can move a subset of an Oracle database to an Oracle RAC One Node database. Proper planning and comprehensive, systematic testing are the keys to a successful data migration.

Congratulations, your applications are now running on Oracle RAC One Node!

8. Oracle Database Migration with Omotion

We will demonstrate using the Omotion utility to perform a live migration of a database from one node to an available node. Omotion starts and opens the target instance and then shuts down the source instance. Database sessions connected through the finsdb service name are transparently failed over to the target instance, and normal transactional activity resumes there. New connections may briefly fail (ORA-12514) while the finsdb service is relocated to the target instance during the cluster reconfiguration.
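
Applications that open new connections during this relocation window can simply retry. Below is a minimal shell sketch of such a retry loop; the 60-attempt/5-second values are arbitrary choices for illustration.

# Keep retrying a test connection through the finsdb service until it succeeds.
count=0
until echo "select 1 from dual;" | sqlplus -L -S system/Oracle#101@finsdb > /dev/null 2>&1
do
  count=$((count+1))
  if [ $count -ge 60 ]; then
    echo "finsdb still unavailable after 5 minutes"
    exit 1
  fi
  sleep 5
done
echo "finsdb service is reachable again"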

8.1 Connect to the database instance using the finsdb service

SQL> connect system@finsdb
Enter password:
Connected.
SQL> select thread#,instance_number,instance_name,host_name,status
  2  from gv$instance
  3  /
 
   THREAD# INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME    STATUS
---------- --------------- ---------------- ------------ -------
         1               1 appsdb_1         gurnard1     OPEN

8.2 Migrate the database from gurnard1 to gurnard2

oracle@gurnard1-> Omotion -v
 
RAC One Node databases on this cluster:
 
 #      Database                    Server                      Fix Required
===     ========        ==============================          ============
[1]       appsdb                              gurnard1                N
 
Enter number of the database to migrate [1]:
 
Specify maximum time in minutes for migration to complete (max 30) [30]:
 
appsdb Database is administrator managed .
appsdb database is running in appsdb server pool.
Current Running instance: appsdb_1
 
Current Active Server       : gurnard1
 
Available Target Server(s) :
 #            Server            Available
===     ==================      =========
[1]               gurnard2          Y
 
Enter number of the target node [1]:
 
Omotion Started...
Starting target instance on gurnard2...
Migrating sessions...
Stopping source instance on gurnard1...
Omotion Completed...
 
=== Current Status ===
Database appsdb is running on node gurnard2
 
 
oracle@gurnard1-> srvctl status database -d appsdb
Instance appsdb_2 is running on node gurnard2

Notice that after the migration, the appsdb_1 database instance was renamed to appsdb_2. Because the client sessions connect through the finsdb service name rather than an instance name, no changes to the client configuration files or applications are required, whatever the instance is renamed to.
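
To confirm where the service landed after the move, you can also check the service status directly; it should now report instance appsdb_2.

oracle@gurnard1-> srvctl status service -d appsdb -s finsdb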

8.3 Verify that the session has failed over

From the same session in step 8.1, verify that the session has transparently failed over to the available target node, gurnard2.

SQL> select thread#,instance_number,instance_name,host_name,status
  2  from gv$instance
  3  /
 
   THREAD# INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME    STATUS
---------- --------------- ---------------- ------------ -------
         2               2 appsdb_2         gurnard2     OPEN
 
 
 
SQL> select service_name,failover_type, failover_method, failed_over
  2  from gv$session where username='SYSTEM'
  3  /
 
SERVICE_NAME    FAILOVER_TYPE FAILOVER_METHOD FAILED_OVER
--------------- ------------- --------------- ------------
finsdb          SELECT        BASIC           YES

8.4 Check Undo Tablespace and Online Redo Logs

When the appsdb_1 instance was relocated to the other node and renamed appsdb_2, a new undo tablespace and a new set of online redo logs (thread#2) were created and assigned to the new instance. The undo tablespace and online redo logs associated with appsdb_1 (thread#1) remain unused until the database migrates back and runs as appsdb_1 again.

SQL> select inst_id,thread#,bytes/1024/1024 "Size(MB)",members,status
  2  from gv$log
  3  /
 
INST_ID    THREAD#   Size(MB)    MEMBERS STATUS
------- ---------- ---------- ---------- ----------
      2          1         50          2 INACTIVE
      2          1         50          2 CURRENT
      2          1         50          2 INACTIVE
      2          2         50          2 CURRENT
      2          2         50          2 INACTIVE
      2          2         50          2 INACTIVE
 
6 rows selected.
 
 
SQL> select value, file_name, bytes/1024/1024 "size(MB)"
  2  from gv$parameter, dba_data_files
  3  where value=tablespace_name and value like 'UNDO%';
 
VALUE      FILE_NAME                                            size(MB)
---------- -------------------------------------------------- ----------
UNDOTBS2   +DATA/appsdb/datafile/undotbs2.271.715566389               95

9. Create a second Oracle Database (racdb) on the spare node

You can create as many databases in the cluster as your system resources can support. We will create another database, racdb, on the spare node, gurnard2, to take advantage of the idle resources (this requires an Oracle RAC One Node license).

The procedure for creating the database is similar to the steps outlined in section 5, Create Oracle Database 11g (appsdb). We specify racdb for the Global Database Name and SID and select gurnard2 as the node on which to create the database.

After the successful completion of the database installation, you will see the following:

oracle@gurnard1-> srvctl config database -d racdb
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb1
Disk Groups: DATA,FRA
Services:
Database is administrator managed
 
oracle@gurnard1-> srvctl status database -d racdb
Instance racdb1 is running on node gurnard2

10. Initialize the second Database to Oracle RAC One Node

oracle@gurnard1-> raconestatus
 
RAC One Node databases on this cluster:
 
 
  Database UP Fix Required    Current Server       Candidate Server Names
  ======== == ============ ====================  ==========================
    appsdb  Y       N                  gurnard1           gurnard1 gurnard2

 
 
Available Free Servers:
 
 
oracle@gurnard1-> raconeinit
 
Candidate Databases on this cluster:
 
 #      Database        RAC One Node    Fix Required
===     ========        ============    ============
[1]       appsdb            YES               N
[2]        racdb             NO              N/A
 
Enter the database to initialize [1]: 2
 
 
Database racdb is now running on server gurnard2
 
Candidate servers that may be used for this DB: gurnard1
 
Enter the names of additional candidate servers where this DB may run (space delimited): gurnard1
 
Please wait, this may take a few minutes to finish.......
 
Database configuration modified.
 
 
 
oracle@gurnard1-> raconestatus
 
RAC One Node databases on this cluster:
 
 
  Database UP Fix Required    Current Server       Candidate Server Names
  ======== == ============ ====================  ==========================
    appsdb  Y       N                  gurnard1           gurnard1 gurnard2
     racdb  Y       N                  gurnard2           gurnard2 gurnard1
 
 
Available Free Servers:
 
 
oracle@gurnard1-> srvctl config database -d racdb
Database unique name: racdb
Database name: racdb
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/racdb/spfileracdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: racdb
Database instances: racdb_1
Disk Groups: DATA,FRA
Services:
Database is administrator managed
 
 
oracle@gurnard1-> srvctl status database -d racdb
Instance racdb_1 is running on node gurnard2
 
oracle@gurnard1-> srvctl status database -d appsdb
Instance appsdb_1 is running on node gurnard1

We have now created and configured two Oracle RAC One Node databases. For failover protection, the racdb_1 instance can fail over to gurnard1 and, likewise, the appsdb_1 instance can fail over to gurnard2.

11. Upgrade the second Database to Oracle RAC

Upgrading from Oracle RAC One Node to Oracle RAC is as simple as executing the racone2rac script and using DBCA to add additional instances.

11.1 Execute racone2rac script

We will run racone2rac to convert racdb to Oracle RAC. During the conversion, the racdb_1 database instance is shut down and brought back up under a new name, racdb1. In 11.2.0.2, the conversion no longer takes the database offline.

oracle@gurnard1-> racone2rac
 
Candidate Databases on this cluster:
 
 #      Database         RACOne Node    Fix Required
===     ========        =============   ============
[1]       appsdb            YES               N
[2]        racdb            YES               N
 
Enter the database to convert to one node RAC database [1]: 2
 
 
You are converting racdb to single instance Oracle RAC database running on gurnard2 .
 
Do you want to Continue? [Y]es or [N]o? Y
 
Please wait, this may take a few minutes to finish.........
Database configuration modified.
 
Run dbca to add as many instances as you want to this database.

11.2 Execute DBCA to add an additional instance (racdb2)

As the oracle user on gurnard2, execute dbca.

oracle@gurnard2-> dbca

1.  Welcome: Select Oracle Real Application Clusters database.

2.  Operations: Select Instance Management.

3.  Instance Management: Select Add an instance.

4.  List of cluster database: Select racdb.

a.   Username: sys

b.  Password: **********

5.  List of cluster database instances: Click on Next.

6.  Instance naming and node selection: Accept the default instance name, racdb2, and select gurnard1 as the node.

7.  Instance Storage: Click on Finish.

8.  Summary: Click on OK.

oracle@gurnard1-> srvctl status database -d appsdb
Instance appsdb_1 is running on node gurnard1
 
oracle@gurnard1-> srvctl status database -d racdb
Instance racdb2 is running on node gurnard1
Instance racdb1 is running on node gurnard2
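
For reference, the same instance addition can also be scripted with dbca in silent mode. This is a sketch only (not run in this exercise); <sys_password> is a placeholder, and the exact options may vary by release, so check dbca -help before using it.

oracle@gurnard2-> dbca -silent -addInstance \
      -nodeList gurnard1 \
      -gdbName racdb \
      -instanceName racdb2 \
      -sysDBAUserName sys \
      -sysDBAPassword <sys_password>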

Conclusion:

Oracle RAC One Node is easy to install, administer and manage. IT organizations eager to embrace the benefits of Oracle RAC One Node can easily implement the solution at a cost advantage over the Oracle RAC option. When the database is ready to scale out to more than one node, Oracle RAC One Node can be upgraded to a full Oracle RAC database within minutes.

For more information about Oracle RAC One Node, please refer to Oracle RAC One Node: Better Virtualization for Databases.

Vincent Chan (vkchan99@yahoo.com) is a Senior Computer Systems Architect at Lockheed Martin Corporation. He is an Oracle Certified Master DBA with more than 14 years of experience architecting and implementing Oracle solutions for various clients.
