Oracle Database 11gR2: Installing Grid Infrastructure

Thursday Feb 25th 2010 by Jim Czuprynski

Jim Czuprynski demonstrates how to install and configure a new Oracle Database 11g Release 2 (11gR2) Grid Infrastructure home as the basis for the majority of the grid computing features that were only available in a Real Application Clusters (RAC) clustered database environment in previous releases.

Synopsis. Oracle Database 11g Release 2 makes it much simpler to configure, for a single-instance Oracle database, many of the grid computing features that were available only in a Real Application Clusters (RAC) clustered database environment in previous releases. This article, the first in this series, will demonstrate how to install and configure a new Oracle 11g Release 2 (11gR2) Grid Infrastructure home as the basis for the majority of these grid computing features.

It’s been a few months since I summarized the incredible array of new features that Oracle has introduced as part of Oracle Database 11g Release 2, and in that span of time I’ve been experimenting with those features as I’ve built out a new test infrastructure. Among the most intriguing new features is the consolidation of Automatic Storage Management (ASM) and Oracle Clusterware (OC) into a pragmatic and sensible arrangement called the Oracle Grid Infrastructure (GI). As I’ll demonstrate in this article, the venerable Oracle Universal Installer (OUI) utility gets a welcome update in this release; however, quite a bit of system administration work is required before I can invoke it and explore its new features.

First ... A Word About The (Computing) Environment. I’ve made some long-desired changes to my home office’s personal computing infrastructure so that I can manage my workload effectively and efficiently with my favorite virtualization environment, VMware:

  • I’ve upgraded to Oracle Enterprise Linux (OEL) 5 Update 2 (kernel 2.6.18-92.el5) for my base computing platform, a home-grown gaming server with 4GB of memory and an AMD Opteron dual-core processor.
  • I’ve also finally moved up to VMware Workstation Version 7.0.0 for all my VMware endeavors, and though I still occasionally long for the freedom of VMware Server 2.0a (as in free!), I’ve found that Workstation is just as stable and works extremely well with OEL as both host and guest OS.

Setting Up For Oracle 11gR2 Grid Infrastructure

I’m going to implement my 11gR2 Grid Infrastructure via a series of Oracle best practices that I’ve encountered over the years and have gleaned through a thorough reading of Oracle’s technical documentation. I’ll be using raw disk partitions for configuring all of the ASM disks that will eventually comprise the various ASM disk groups needed for my demonstrations.

Creating the Required Raw Partitions. The Oracle 11gR2 Grid Infrastructure leverages ASM to store multiple copies of the Oracle Cluster Registry (OCR) file, multiple voting disks, and of course the disks that will eventually comprise the various ASM disk groups themselves. Since the Linux kernel limits how many partitions can be created on any single SCSI device, I’ve created two VMware virtual disks sized at 18.5 GB and 11.0 GB, respectively. Here’s the output from the terminal session during which I used the Linux fdisk command to create the required logical partitions:

[root@11gR2Base ~]# fdisk -l /dev/sde

Disk /dev/sde: 19.3 GB, 19327352832 bytes
255 heads, 63 sectors/track, 2349 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1        2349    18868311    5  Extended
/dev/sde5               1         281     2257069+  83  Linux
/dev/sde6             282         562     2257101   83  Linux
/dev/sde7             563         843     2257101   83  Linux
/dev/sde8             844        1124     2257101   83  Linux
/dev/sde9            1125        1405     2257101   83  Linux
/dev/sde10           1406        1686     2257101   83  Linux
/dev/sde11           1687        1967     2257101   83  Linux
/dev/sde12           1968        2248     2257101   83  Linux

[root@11gR2Base ~]# fdisk -l /dev/sdf

Disk /dev/sdf: 12.0 GB, 12079595520 bytes
255 heads, 63 sectors/track, 1468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1        1468    11791678+   5  Extended
/dev/sdf5               1         281     2257069+  83  Linux
/dev/sdf6             282         562     2257101   83  Linux
/dev/sdf7             563         843     2257101   83  Linux
/dev/sdf8             844        1124     2257101   83  Linux
/dev/sdf9            1125        1405     2257101   83  Linux
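
For reference, here’s a rough sketch of the interactive fdisk dialogue that produces this kind of layout; the cylinder numbers below match the /dev/sde session above, but the prompts may vary slightly between fdisk versions:

[root@11gR2Base ~]# fdisk /dev/sde

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-2349, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-2349, default 2349): 2349

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First cylinder (1-2349, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-2349, default 2349): 281

...(repeat the n / l sequence for each remaining logical partition)...

Command (m for help): w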

Assigning Raw Partitions to Block Device Endpoints. Oracle has recommended for some time that block devices, rather than traditional raw devices, be used for ASM storage, especially since support for raw devices allocated through the /etc/sysconfig/rawdevices configuration file is deprecated and may well be reduced or disappear entirely in a future release.

For this and all future Oracle 11gR2 feature demonstrations, I’ve configured a custom service, which I’ve named losetup after the standard Linux utility it wraps, to construct, configure and allocate virtual block devices during server startup. For the losetup script to work properly, however, note that I also needed to increase the default number of loopback devices from eight to 16; I did this by adding the following line to the /etc/modprobe.conf system configuration file, and then rebooting the server to make sure it took effect:


options loop max_loop=16
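
After the reboot, once the loop module has been loaded, a quick sanity check confirms that the kernel honored the new limit: sixteen loop device nodes (loop0 through loop15) should exist, and losetup -a lists any that are already bound:

[root@11gR2Base ~]# ls /dev/loop* | wc -l
16
[root@11gR2Base ~]# /sbin/losetup -a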

Listing 1.1 shows the losetup script I used to complete the assignment of raw partitions to virtual block devices. In essence, it binds each raw partition to a loop device and then symlinks a friendly block device name to it, along the lines of the minimal sketch below (the partition-to-device mapping shown here is illustrative; the actual script in Listing 1.1 differs in its details):
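
#!/bin/sh
#
# losetup       Binds raw disk partitions to loop devices at boot
#
# chkconfig: 345 25 75
# description: Allocates virtual block devices for ASM demonstrations

# Illustrative mapping only -- see Listing 1.1 for the script actually used
PARTITIONS="/dev/sde5 /dev/sde6 /dev/sde7 /dev/sde8 /dev/sde9 /dev/sde10 \
/dev/sde11 /dev/sde12 /dev/sdf5 /dev/sdf6 /dev/sdf7 /dev/sdf8 /dev/sdf9"

case "$1" in
  start)
    i=1
    set -- b c d e f g h i j k l m n      # suffixes for /dev/xvdb .. /dev/xvdn
    for part in $PARTITIONS; do
      /sbin/losetup /dev/loop$i $part     # bind the partition to a loop device
      ln -sf /dev/loop$i /dev/xvd$1       # publish it under a friendly name
      shift
      i=`expr $i + 1`
    done
    ;;
  stop)
    i=1
    while [ $i -le 13 ]; do
      /sbin/losetup -d /dev/loop$i 2>/dev/null
      i=`expr $i + 1`
    done
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac

After I copied the script to /etc/init.d/losetup, I registered the new service (as the root user) via chkconfig: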

#> chmod 775 /etc/init.d/losetup
#> chkconfig --add losetup
#> chkconfig losetup on
#> chkconfig --list losetup

After rebooting the server, the losetup script did its job, successfully allocating the block devices as shown below:

[root@11gR2Base ~]# ls -la /dev/xv*
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdb -> /dev/loop1
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdc -> /dev/loop2
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdd -> /dev/loop3
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvde -> /dev/loop4
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdf -> /dev/loop5
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdg -> /dev/loop6
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdh -> /dev/loop7
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdi -> /dev/loop8
lrwxrwxrwx 1 root root 10 Feb  7 20:49 /dev/xvdj -> /dev/loop9
lrwxrwxrwx 1 root root 11 Feb  7 20:49 /dev/xvdk -> /dev/loop10
lrwxrwxrwx 1 root root 11 Feb  7 20:49 /dev/xvdl -> /dev/loop11
lrwxrwxrwx 1 root root 11 Feb  7 20:49 /dev/xvdm -> /dev/loop12
lrwxrwxrwx 1 root root 11 Feb  7 20:49 /dev/xvdn -> /dev/loop13

Configuring and Implementing ASMLIB

To keep my ASM configuration simple to manage, I’ll also use the Oracle ASM disk management drivers that ASMLIB provides to “stamp” each candidate block device before actually creating ASM disks and disk groups. First, I’ll confirm that the oracleasm drivers appropriate to my OS kernel version have indeed been installed:

[root@11gR2Base ~]# rpm -qa | grep oracleasm
oracleasm-2.6.18-92.el5xen-2.0.4-1.el5
oracleasm-2.6.18-92.el5-2.0.4-1.el5
oracleasm-2.6.18-92.el5debug-2.0.4-1.el5
oracleasm-support-2.0.4-1.el5

Excellent! My system administrator took care of this when she installed Oracle Enterprise Linux 5 Update 2; otherwise, I’d have been forced to remind her to download the appropriate oracleasm drivers and then install them on my server. However, I did need to make sure that the installed oracleasm kernel module matched my running kernel version, and that took a little extra manipulation with the oracleasm_debug_link tool (see MetaLink note 462618.1 in the References), as shown in the output below:

[root@11gR2Base ~]# /usr/lib/oracleasm/oracleasm_debug_link 2.6.18-92.el5 $(uname -r)
oracleasm_debug_link: Target exists
[root@11gR2Base ~]# ls -l /lib/modules/$(uname -r)/kernel/drivers/addon/oracleasm
total 576
-rw-r--r-- 1 root root 579514 May 23  2008 oracleasm.ko

[root@11gR2Base ~]# /etc/init.d/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration:           [  OK  ]
Loading module "oracleasm":                                [  OK  ]
Mounting ASMlib driver filesystem:                         [  OK  ]
Scanning system for ASM disks:                             [  OK  ]

[root@11gR2Base ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded:                                 [  OK  ]
Checking if /dev/oracleasm is mounted:                     [  OK  ]

“Stamping” Candidate Disks With ASMLIB. Now that ASMLIB is configured properly, it’s time to apply ASMLIB “stamps” to each virtual block device via the createdisk command as shown below. This makes it much simpler to configure and manage ASM disks, without having to track cryptic device names or maintain complex naming conventions:

[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK1 /dev/xvdb
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK2 /dev/xvdc
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK3 /dev/xvdd
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK4 /dev/xvde
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK5 /dev/xvdf
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK6 /dev/xvdg
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK7 /dev/xvdh
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ASMDISK8 /dev/xvdi
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK1 /dev/xvdj
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK2 /dev/xvdk
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK3 /dev/xvdl
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK4 /dev/xvdm
[root@11gR2Base ~]# /etc/init.d/oracleasm createdisk ACFDISK5 /dev/xvdn

Finally, I’ll invoke ASMLIB’s listdisks command to confirm that all disks have been correctly “stamped” and are now ready for use in concert with my upcoming Grid Infrastructure installation:

[root@11gR2Base ~]# /etc/init.d/oracleasm listdisks
ACFDISK1
ACFDISK2
ACFDISK3
ACFDISK4
ACFDISK5
ASMDISK1
ASMDISK2
ASMDISK3
ASMDISK4
ASMDISK5
ASMDISK6
ASMDISK7
ASMDISK8
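
And if there’s ever any doubt about which device lies behind a given ASMLIB label, the querydisk command resolves it. For example (the major and minor device numbers reported will vary with your configuration):

[root@11gR2Base ~]# /etc/init.d/oracleasm querydisk ASMDISK1
Disk "ASMDISK1" is a valid ASM disk on device [7, 1]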

Installing Oracle Grid Infrastructure

I know this probably seems like an immense amount of preparation, but based on my numerous attempts at installing Grid Infrastructure in both 32-bit and 64-bit environments, I’ve found that it pays to perform these steps properly and carefully; otherwise, I’d just have to return to my checklist to figure out what I missed before I could complete the installation. Once these setup steps are complete, it’s time to begin installing the Grid Infrastructure through the Oracle Universal Installer (OUI). OUI gets a welcome facelift in this release, and that’s evident from the first panel displayed after invoking the runInstaller command script from the Grid Infrastructure staging directory, as shown in Figure 1.1 below:

Figure 1.1. Initial OUI Grid Infrastructure Installation Panel.
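
For the record, the invocation itself is nothing exotic. Assuming the Grid Infrastructure installation media were unzipped into a staging directory such as /home/oracle/stage/grid (a hypothetical path of my choosing), it looks like this when run as the oracle user:

[oracle@11gR2Base ~]$ cd /home/oracle/stage/grid
[oracle@11gR2Base grid]$ ./runInstaller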

Note that I’ve chosen the option to install Grid Infrastructure for a standalone server; the installation for a clustered environment like Real Application Clusters is obviously quite different. The next panel allows me to select which languages will be supported for this installation (Figure 1.2):

Figure 1.2. Language Choices.

And now the beauty of using ASMLIB is apparent: it’s really easy to see which virtual disks are which, because I was able to label each device any way I chose for simpler management and identification, as shown in Figure 1.3:

Figure 1.3. Assigning Raw Partitions for ASM Disk Groups.

Now that my initial ASM disk group configuration is complete, the next two panels allow me to specify a strong, secure password (Figure 1.4) as well as assign appropriate operating system groups (Figure 1.5):

Figure 1.4. Choosing ASM Passwords.

Figure 1.5. Specifying ASM Administration Groups.

Note that my Grid Infrastructure home is placed within the same general path where a traditional Oracle Database home would reside, as shown in Figure 1.6 below.

Figure 1.6. Choosing Grid Infrastructure Home Path.

And here’s a neat new feature! As shown in Figure 1.7, once Oracle 11gR2 has automatically checked all installation prerequisites, it identifies whether any of the issues it detected can be ignored (at my peril, of course!) or even repaired automatically via a generated runfixup.sh script:

Figure 1.7. Validating Installation Prerequisites.
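
Had I clicked the Fix & Check Again button instead, OUI would have prompted me to run the generated fixup script as the root user. A typical invocation looks something like this; the generated directory name varies with the release version and the installing user:

[root@11gR2Base ~]# /tmp/CVU_11.2.0.1.0_oracle/runfixup.sh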

Since I know from prior installation attempts that the reported issues are not “show stoppers,” I simply selected the Ignore All option on the panel, and OUI then presented me with a summary of what it intends to accomplish during the installation (Figure 1.8). Note another neat feature: I can tell OUI to generate a response file just by clicking the Save Response File ... button.
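
That saved response file can later drive a completely non-interactive installation. Here’s a sketch of the syntax, assuming I had saved the file as /home/oracle/grid_install.rsp (a file name of my choosing):

[oracle@11gR2Base grid]$ ./runInstaller -silent -responseFile /home/oracle/grid_install.rsp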

Figure 1.8. Final Confirmation.

As the installation continues, OUI displays a summary of its status, and I can opt to view the details of the installation at any time by clicking the Details button:

Figure 1.9.1. Installation Progress.

Figure 1.9.2. Installation Details.

At last, the expected prompt to run the root.sh script – always a good sign that the majority of the installation is complete.

Figure 1.9.3. root.sh Prompt.
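
Running it requires nothing more than a root session and the path to the Grid Infrastructure home chosen back in Figure 1.6; for example (substitute your own home path for this hypothetical one):

[root@11gR2Base ~]# /u01/app/grid/root.sh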

Figure 1.9.4. Post-root.sh Configuration.

Now that all configuration assistants have completed their work, OUI pats me on the back for a job well done.

Figure 1.10. Installation Confirmation.

Next Steps

In the next article in this series, I’ll demonstrate how to configure the new ASM Clustered File System (ACFS) to show how ACFS makes short work of applying patches to the binary files for any Oracle Home so configured.

References and Additional Reading

Before you experiment with any of these new features, I strongly suggest that you first look over the corresponding detailed Oracle documentation. I’ve drawn upon the following Oracle Database 11g Release 2 documents for this article’s technical details:

E10881-03 Oracle Database 11gR2 New Features

E10592-04 Oracle Database 11gR2 SQL Language Reference

E10595-06 Oracle Database 11gR2 Administrator’s Guide

E10713-04 Oracle Database 11gR2 Concepts

E10820-03 Oracle Database 11gR2 Reference

E10500-04 Oracle Database 11gR2 Storage Administrator’s Guide

And these Oracle MetaLink documents are also extremely helpful in setting up Grid Infrastructure for the first time:

462618.1 Cannot Find Exact Kernel Version Match For ASMLib (Workaround using oracleasm_debug_link tool)

