RACing ahead with Oracle on VMware - Part 3: Installing Oracle 10g Release 2 Clusterware on a 2-node Windows 2003 Enterprise Edition Server

Friday Nov 18th 2005 by Tarry Singh

Part 3 of this series discusses the installation of Clusterware on a 2-node Windows 2003 server with VMware.

A Brief Pep talk

Since the last article, I have received several e-mails requesting a description of the installation of Clusterware on a 2-node Windows 2003 server with VMware. We will cover the installation on RHEL4 Release 2 (Nahant) as well, in a later article.

You hear all kinds of claims and see written material that says a lot but actually demonstrates very little. The purpose of showing the errors along the way is to make clear that not everything works like a charm. My aim in this article is to walk you through every detail of the scenario, with the necessary tools in hand:


  • VMware software: VMware Workstation or an evaluation version of GSX Server.
  • A server (if you're lucky) or just a plain PC/desktop with 2 GB of memory. (Remember, we will do a real 2-node scenario here, so give those machines at least 800 MB each.)
  • Lots of patience (sage advice for us all). You will mess things up now and then, so be patient.
  • A simple LDAP (ADS)/DNS server helps (yes, you guessed it, another VM), but an existing, working LDAP/DNS server is even better. I will not go into detail on creating an ADS/DNS server; it is simple, and a quick Google search will turn up plenty of information.
  • BACKUP!!! The great thing about VMware is that you can back up progressively: create the Virtual Machines > successful? > back up > OS installs successful? > overwrite the backup > and so on. This will save you a lot of time and frustration.
  • Oracle software: go to Oracle's site and download the necessary software (the Database and Clusterware). If you do not have an OTN account, get one. It's FREE!

OK now let's take a look at the overview/architecture of our servers.

Architecture Overview of the 2-node Oracle 10g Release 2 RAC on Windows 2003 Virtual Machine with VMware

I always draw a simple sketch of what I will need, what I already have, and how the whole architecture will look when the setup is done. Keep the scalability factor in the back of your mind as well; we want to build a 64-node RAC cluster eventually, right? Remember: planning is crucial.


All right then, without further ado, let's get started with the Clusterware setup.

Finally Setting up the Virtual Machines for Oracle 10g Release 2 Clusterware Installation

Clusterware is very sensitive to configuration mistakes, and doing everything on Virtual Machines makes it all the more challenging. The first thing you need to set up is the shared disks.

Setting up Shared SCSI Disks

Step 1: In the last article, we saw how to set up shared disks with the plain-disk scenario; now I have a better proposal. Go to your VMware installation directory and do the following:

D:\Program Files\VMware\VMware Workstation> vmware-vdiskmanager.exe -c -s 10Gb -a lsilogic -t 2 ASM1.vmdk

Check out the command reference:

Exactly one major option should be specified.

VMware Virtual Disk Manager - build 13124.
Usage: vmware-vdiskmanager.exe OPTIONS diskName | drive-letter:
Offline disk manipulation utility

Options:
   -c                   : create disk; need to specify other create options
   -d <source-disk>     : defragment the specified virtual disk
   -k <source-disk>     : shrink the specified virtual disk
   -n <source-disk>     : rename the specified virtual disk; need to specify destination disk-name
   -p <drive-letter>    : prepare the mounted virtual disk specified by the drive-letter for shrinking
   -q                   : do not log messages
   -r <source-disk>     : convert the specified disk; need to specify destination disk-type
   -x <new-capacity>    : expand the disk to the specified capacity

Additional options for create and convert:
   -a <adapter>         : adapter type (ide, buslogic or lsilogic)
   -s <size>            : capacity of the virtual disk
   -t <disk-type>       : disk type id

Disk types:
    0                   : single growable virtual disk
    1                   : growable virtual disk split in 2Gb files
    2                   : preallocated virtual disk
    3                   : preallocated virtual disk split in 2Gb files

The capacity can be specified in sectors, Kb, Mb or Gb.

The acceptable ranges:

ide adapter : [100.0Mb, 950.0Gb]

scsi adapter: [100.0Mb, 950.0Gb]

As you can see, I chose lsilogic, which was the default in my environment, along with disk type 2 (preallocated). Go ahead and create the *.vmdk disks. I suggest you allocate all the space up front: a *growable* disk is faster to create, but Clusterware might complain during installation because the disk has to grow while it installs.
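As a sketch of the full disk set for this walkthrough, the remaining shared disks can be created the same way. The names, sizes and the OCR/voting/ASM split below are my own convention for this setup, not anything mandated by Oracle; adjust to taste:

```shell
D:\Program Files\VMware\VMware Workstation> vmware-vdiskmanager.exe -c -s 1Gb  -a lsilogic -t 2 OCR1.vmdk
D:\Program Files\VMware\VMware Workstation> vmware-vdiskmanager.exe -c -s 1Gb  -a lsilogic -t 2 OCR2.vmdk
D:\Program Files\VMware\VMware Workstation> vmware-vdiskmanager.exe -c -s 1Gb  -a lsilogic -t 2 VOTE1.vmdk
D:\Program Files\VMware\VMware Workstation> vmware-vdiskmanager.exe -c -s 1Gb  -a lsilogic -t 2 VOTE2.vmdk
D:\Program Files\VMware\VMware Workstation> vmware-vdiskmanager.exe -c -s 10Gb -a lsilogic -t 2 ASM2.vmdk
D:\Program Files\VMware\VMware Workstation> vmware-vdiskmanager.exe -c -s 10Gb -a lsilogic -t 2 ASM3.vmdk
```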

Step 2: Store all your *.vmdk disks in separate folders, something like this. And remember what I said about backups: then all you need to do is back up your root folder, W2K3.

Step 3: Edit both Virtual Machines' *.vmx files like this:
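For reference, the shared-disk portion of each *.vmx typically ends up looking like the fragment below. The file paths and SCSI IDs are my own example values, not the ones from the screenshot; the disk.locking and scsi1.sharedBus settings are what allow both VMs to open the same disks simultaneously:

```
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "D:\W2K3\SharedDisks\OCR1.vmdk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "D:\W2K3\SharedDisks\ASM1.vmdk"
```

Repeat the scsi1:N entries for each shared disk, keeping the SCSI IDs identical in both VMs.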

Step 4: Having created the disks, you will need to convert these newly created and discovered disks on both machines into RAW disks before moving on to the OCFS and/or ASM scenarios. See Part 2 of this series for more details.

Step 5: Enable automount on all nodes.
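On Windows 2003, automount can be enabled from the command line with diskpart (mountvol /E achieves the same thing); a minimal session on each node looks like:

```shell
C:\> diskpart
DISKPART> automount enable
DISKPART> exit
```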

What is this thing with missing/hidden NICs?

This can be really frustrating, especially with VMs. I often had problems with lost NIC cards: if I recreated one, or tried to assign an IP address to the (strangely) newfound NIC, I got an error saying a hidden NIC device already had that same IP address! Follow the screenshots below to save yourself that irritation.

Step 1: After the error, you try to locate that hidden device.


Step 2: Well you don't see it, now do you?

Step 3: Go to the command line and do the following:

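The screenshot boils down to the standard Windows trick for exposing non-present devices in Device Manager:

```shell
C:\> set devmgr_show_nonpresent_devices=1
C:\> start devmgmt.msc
```

With the variable set, choose View > Show hidden devices in the Device Manager window that opens, and the ghost NICs show up under Network adapters, ready to be uninstalled.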

Step 4: And then try to find the hidden device.

Step 5: And look what we have here…

Step 6: Get rid of all the hidden NICs.


Step 7: Make sure to edit your hosts file for your private NIC cards (all node entries on all nodes!).
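A hosts file along these lines, identical on every node, is what you are after. The names and addresses here are example values, not the ones from the screenshots; depending on your DNS setup, the public and VIP names may live in DNS instead, but the private interconnect names belong in hosts:

```
192.168.0.11    rac1
192.168.0.12    rac2
10.10.10.11     rac1-priv
10.10.10.12     rac2-priv
192.168.0.21    rac1-vip
192.168.0.22    rac2-vip
```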

Step 8: Ping all nodes from all machines (including your own node).


Step 9: Make sure the public and private cards have the same names across all nodes. (In addition, the machine names and DNS entries should be consistent in the same way; remember case sensitivity.) Clusterware is a very stubborn tool and will not forgive any mistakes or errors.


Anyway, I'm assuming that you are now ready to install your Oracle 10g Release 2 Clusterware software across the nodes.

Installing Oracle 10g Release 2 Clusterware

Running cluvfy pre-installation checks

The pre-installation checks primarily verify that your machines are cluster-ready.

Step 1: You are all set, so let's run the cluvfy command.

Step 2: Check machine requirements.

Step 3: Check node connectivity.

Step 4: Check Shared Storage Availability across all nodes.
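For reference, the steps above map onto cluvfy invocations along these lines. The node names rac1,rac2 are examples, and on Windows the tool is staged as runcluvfy.bat in the clusterware\cluvfy directory of the install media:

```shell
:: machine requirements (Step 2)
runcluvfy.bat comp sys -p crs -n rac1,rac2
:: node connectivity (Step 3)
runcluvfy.bat comp nodecon -n rac1,rac2
:: shared storage availability (Step 4)
runcluvfy.bat comp ssa -n rac1,rac2
:: or the whole pre-CRS-install stage in one go
runcluvfy.bat stage -pre crsinst -n rac1,rac2 -verbose
```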

You are done with the checks and it is time to run the installer.

Running installer

Step 1: Start

Step 2: Welcome.


Step 3: Choose default.


Step 4: Running checks.


Step 5: It will find the node that launched the installer; you will have to add the other nodes.

Step 6: Next.

Step 7: Specify Network Usage.

Step 8: Cluster Configuration.

Step 9: I typically choose two locations for the voting disk (you can choose more to add redundancy), one for the OCR, and another for its mirror. But don't worry; the installer will guide you appropriately.


Step 10: Like I said, pick the options that suit you best here. Note that if you format all the disks, including the ASM1, ASM2 and ASM3 disks, you will not be able to stamp them later should you choose the ASM scenario, which we will do in the next article.

Step 11: Sage Advice.

Step 12: After two of these errors:

Step 13: We finally reach the install button.

Step 14: Extracting.

Step 15: Setting up CRS.

I check to see whether the services were installed.

Step 16: Distributing across all nodes:

Step 17: Configuring.

Step 18: Running Assistants.

Step 19: I get stuck at the VIP Configuration Assistant, better known as VIPCA. It attempts a -silent installation that simply fails.


Quickly check that all services are running (on all nodes). By this time you will be neck-deep in RAC and paranoid enough to check all nodes all the time. A very good example to set for your sysadmin!
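A quick way to check the stack from the command line on each node is with the utilities in the CRS home; the path below is an example, so adjust it to your install:

```shell
C:\> D:\oracle\product\10.2.0\crs\bin\crsctl check crs
C:\> D:\oracle\product\10.2.0\crs\bin\crs_stat -t
```

crsctl check crs reports the health of the CSS, CRS and EVM daemons, while crs_stat -t tabulates the cluster resources (VIP, GSD, ONS) per node.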

Step 20: Never mind the warning; we will continue the rest later.

You can get the commands to run the failed Assistants/Services here:

Running VIPCA

Step 1: Start vipca.


Step 2: Public, very good.


Step 3: Edit values.


Step 4: Summary, press Finish.


Step 5: Almost done.


Step 6: Done!

Step 7: Configuration Results, Exit.

Running cluvfy post-installation checks

Step 1: Find the command in its directory.


Step 2: Run it.
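The post-installation check is the stage counterpart of the pre-check we ran earlier; rac1,rac2 are again example node names:

```shell
cluvfy stage -post crsinst -n rac1,rac2 -verbose
```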

Step 3: Check that all services are running; then it is time to BACK UP your whole VM, disks included!


What we have learned here is that preparation, patience, and good practices (like backing up your working environment) are crucial to a successful implementation. Clusterware is a demanding tool and its installation is a bit dodgy; it is very picky and does not allow for any mistakes. Which is good! What we have also demonstrated is that with VMware you can build and test enterprise-class software and technology (as we will see in the upcoming articles) at practically no cost at all! I recommend using VMware in the confines of your test environment: it is not only cheap but, as you have seen above, well worth the trouble. In the next article, we will install a database with the ASM option (and maybe an OCFS option), configure our listener, and take a peek at the ADS integration.

» See All Articles by Columnist Tarry Singh
