
Saturday, 23 June 2012

Oracle 10g RAC on Linux AS 3
Installing Oracle 10g RAC on Linux AS 3
The following article is a step-by-step guide to installing Oracle 10g RAC on Linux AS 3. I have tried to cover almost all the steps, but you may need to do some additional configuration for your environment. I have not covered basic steps such as creating the user and group or setting permissions. It is expected that whoever installs 10g RAC has good experience installing single-instance 8i, 9i or 10g. For more information, refer to the Oracle installation guide.
RAC Architecture:
Pre-installation steps
Creating Required Operating System Groups and User – make sure that the uid and gid are the same on all nodes.
Configuring SSH on All Cluster Nodes.
Creating RSA keys on each node:
$ su - oracle
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
Now add the key for the local machine:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Copy the public key from each node into the authorized_keys file on the other nodes:
$ scp ~/.ssh/id_rsa.pub node2:/tmp
$ cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys
Repeat the above steps on all nodes and check for user equivalence: you should be able to ssh to all nodes, including the local node, without a password prompt.
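A quick check of user equivalence, run from each node (hostnames are illustrative); every command should print the date without prompting for a password or passphrase:
$ ssh node1 date
$ ssh node2 date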
Configuring the oracle User's Environment
ORACLE_BASE=/u01/apps/product
export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/crs
export ORACLE_HOME
# User specific environment and startup programs
PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin
export PATH

Setting up network

Each node in the cluster must have two network adapters: one for the public network and one for the private interconnect. All public, private and virtual IPs should be in the /etc/hosts file. Do not associate the node hostname with the loopback IP (127.0.0.1), and never remove the loopback entry from /etc/hosts. The VIP and the public IP should be in the same subnet.
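For example, a two-node /etc/hosts might look like this (names and addresses are illustrative):
127.0.0.1       localhost.localdomain localhost
# public
192.168.1.101   node1.example.com  node1
192.168.1.102   node2.example.com  node2
# virtual IPs (same subnet as the public IPs)
192.168.1.111   node1-vip.example.com  node1-vip
192.168.1.112   node2-vip.example.com  node2-vip
# private interconnect
10.0.0.1        node1-priv
10.0.0.2        node2-priv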
One private network is required, but you can configure multiple private networks using bonding/teaming.
Don’t forget to set the MTU to 1500 on Linux. You can do that with the following command:
ifconfig <adapter> mtu 1500 - where <adapter> is eth0, eth1, and so on
Put this entry in /etc/rc.local so that it takes effect every time the server reboots.
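For example, assuming eth1 is the interconnect adapter, append to /etc/rc.local:
ifconfig eth1 mtu 1500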
Common bonding/teaming solutions
–Bonding (Linux)
–IPMP (Solaris)
–EtherChannel/HACMP (AIX)
Checking Network Setup with Cluster Verify (CVU)
For more details on CVU, please refer to the documentation. The utility is on the CRS software CD under /mountpoint/cluvfy/ and is named runcluvfy.sh.
To use CVU you first need to install an RPM that comes on the Oracle CRS installation CD. The RPM is cvuqdisk-1.0.1-1.rpm, located in /mountpoint/rpm.
$ ./runcluvfy.sh comp nodecon -n node1,node2 -verbose
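Beyond the node-connectivity component check, CVU also has a documented stage check that validates all pre-CRS-install requirements at once:
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose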

Setting and/or checking Kernel parameters

# /sbin/sysctl -a | grep sem
# /sbin/sysctl -a | grep shm
# /sbin/sysctl -a | grep file-max
# /sbin/sysctl -a | grep ip_local_port_range
# /sbin/sysctl -a | grep net.core
Set the appropriate parameters for your environment, following Oracle’s recommendations.
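As a reference, the values below match the minimums in the 10g installation guide (kernel.shmmax in particular depends on your RAM, so treat these as a starting point, not a prescription); add them to /etc/sysctl.conf and apply with sysctl -p:
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
# apply without a reboot
# /sbin/sysctl -p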
Prepare shared file system:
10g RAC requires a shared file system or raw devices for:
Voting disk and OCR
Datafiles, control files, online redo log files and the spfile
Here we will use raw devices for the OCR and voting disks, and ASMLib devices for the database. On the 2.4 kernel a device file must be bound to a /dev/raw/rawN device to simulate block-device I/O; binding is not required when ASMLib manages the ASM storage.
Partitioning raw device:
In my test I used the /dev/sdc device for OCR, voting and database storage:
/dev/sdc5 and /dev/sdc6 (125 MB each) for OCR
/dev/sdc7, /dev/sdc8 and /dev/sdc9 (20 MB each) for voting
/dev/sdc10, /dev/sdc11 and /dev/sdc12 (10 GB each) for database files
/dev/sdc13 (20 MB) for the spfile
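A minimal sketch of the partitioning itself (fdisk is interactive; partitions numbered 5 and above are logical partitions inside an extended partition):
# fdisk /dev/sdc      # create one extended partition, then logical partitions 5 through 13
# fdisk -l /dev/sdc   # verify the partition table
Then make the other nodes re-read the partition table (for example with blockdev --rereadpt /dev/sdc, or by rebooting).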
Binding raw devices:
Note that only the OCR and voting partitions are bound to raw devices; the database partitions are managed by ASMLib.
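A sketch of the bindings for the layout above (the raw device numbers are my own choice). The raw command binds immediately; adding the pairs to /etc/sysconfig/rawdevices makes them persistent across reboots:
# raw /dev/raw/raw1 /dev/sdc5
# raw /dev/raw/raw2 /dev/sdc6
# raw /dev/raw/raw3 /dev/sdc7
# raw /dev/raw/raw4 /dev/sdc8
# raw /dev/raw/raw5 /dev/sdc9
# cat >> /etc/sysconfig/rawdevices <<EOF
/dev/raw/raw1 /dev/sdc5
/dev/raw/raw2 /dev/sdc6
/dev/raw/raw3 /dev/sdc7
/dev/raw/raw4 /dev/sdc8
/dev/raw/raw5 /dev/sdc9
EOF
# service rawdevices restart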
Initialize all the raw partitions (run once per partition, replacing /dev/sdxx with the actual device):
dd if=/dev/zero of=/dev/sdxx bs=1024k count=10



Installing ASMlib RPMs:
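Three RPMs are needed, and the kernel-module package must match your running kernel; the names and versions below are illustrative:
# uname -r
# rpm -Uvh oracleasm-support-*.rpm \
           oracleasmlib-*.rpm \
           oracleasm-`uname -r`-*.rpm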
Configuring ASMLib (on all nodes):
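The configure script asks for the owner and group of the driver interface and whether to load the driver on boot; typical answers for this setup are shown as a comment:
# /etc/init.d/oracleasm configure
# (default user: oracle, default group: dba, start on boot: y)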
Creating ASM Disks:
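Using the database partitions from the layout above (the volume names are my own choice); run createdisk on one node only, then scan from the other nodes:
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdc10
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc11
# /etc/init.d/oracleasm createdisk VOL3 /dev/sdc12
On the remaining nodes:
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks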
Changing permissions for the OCR and voting disks (on all nodes):
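A sketch, assuming raw1/raw2 are the OCR devices and raw3-raw5 the voting disks (ownership and mode values follow the 10gR2 documentation); put the same commands in /etc/rc.local so they are reapplied after a reboot:
# chown root:oinstall /dev/raw/raw1 /dev/raw/raw2
# chmod 640 /dev/raw/raw1 /dev/raw/raw2
# chown oracle:oinstall /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5
# chmod 644 /dev/raw/raw3 /dev/raw/raw4 /dev/raw/raw5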
Configuring the "hangcheck-timer" Kernel Module
Oracle uses the Linux kernel module hangcheck-timer to monitor the system health of the cluster and to reset a RAC node in case of failures.
The hangcheck-timer module has the following two parameters:
hangcheck_tick: This parameter defines the period of time between checks of system health. The default value is 60 seconds; Oracle recommends setting it to 30 seconds.
hangcheck_margin: This parameter defines the maximum hang delay that should be tolerated before hangcheck-timer resets the RAC node, i.e. the margin of error in seconds. The default value is 180 seconds, which is also the value Oracle recommends.
# su - root
# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modules.conf
Now you can run modprobe to load the module with the configured parameters in /etc/modules.conf:
# su - root
# modprobe hangcheck-timer
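To confirm that the module loaded with the intended parameters:
# lsmod | grep hangcheck
# grep -i hangcheck /var/log/messages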
Installing CRS:
Before running runInstaller, verify that your X display is working (for example, that you can open an xterm).
$ ./runInstaller
Click “Next” on Welcome Screen:


Click “Next” on “Inventory directory and credentials”:


Change the path to “/u01/oracle/product/10.2.0/crs”:


Click “Next”:

Specifying the public IP, VIP and private IP for both nodes:
Specifying the NIC to be used for public and private interconnect traffic; mark an interface as “do not use” if you are not sure what it is for.
Specifying the location of the OCR disks: /dev/raw/raw1 and /dev/raw/raw2 are the devices we bound earlier to /dev/sdc5 and /dev/sdc6.
Specifying the voting disks: /dev/raw/raw3, raw4 and raw5 are the devices we bound earlier to /dev/sdc7, sdc8 and sdc9.
Click on “install”:

Run orainstRoot.sh and root.sh on all nodes:

One way to verify the CRS installation is to display all the nodes where CRS was installed:
oracle$ $ORACLE_HOME/bin/olsnodes
Installing Oracle Database 10g Software with Real Application Clusters (RAC)
Note: In 10g Release 2 it is always recommended to have two different HOMEs for ASM and the database, so it is advisable to perform the installation below twice, with two different homes and SIDs.
For ASM-
ORACLE_HOME=/u01/oracle/product/10.2.0/asm
ORACLE_SID=+ASM1 and +ASM2 …….(+ASM1 and +ASM2 are your 2 ASM instances on 2 node configuration)
For DB-
ORACLE_HOME=/u01/oracle/product/10.2.0/DB
ORACLE_SID=ORCL1 and ORCL2 ……(where ORCL is your DB name)
This is really helpful when you want to patch the ASM home and the database home separately. So in this installation you will have three HOMEs in total:
  • For CRS
  • For ASM
  • For DB
Don’t forget to set the matching HOME and SID when performing operations on these different components.
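As a sketch, profile snippets for node 1 that switch between the three homes (paths follow the layout above):
# CRS
export ORACLE_HOME=/u01/oracle/product/10.2.0/crs
# ASM
export ORACLE_HOME=/u01/oracle/product/10.2.0/asm
export ORACLE_SID=+ASM1
# Database
export ORACLE_HOME=/u01/oracle/product/10.2.0/DB
export ORACLE_SID=ORCL1
# in each case:
export PATH=$ORACLE_HOME/bin:$PATH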
Click on “Next”:


Click on “Next”:

Since the cluster is up and running, we can now see both nodes. Select all nodes.


Post Installation Task:
  • Backup the OCR
CRS automatically backs up the OCR every four hours and retains daily and weekly copies.
Run ocrconfig -showbackup to display the backup location.
A backup can also be taken manually:
ocrconfig -export <filename>
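You can also verify OCR integrity at any time with ocrcheck (run as root):
# ocrcheck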
