Step by Step: Adding a Node to an 11g RAC Cluster on Linux
By Bhavin Hingu
bhavin@oracledba.org


  Add Node To 11g RAC Cluster:

This document explains the step-by-step process of adding a node to an 11g RAC cluster. In this process, I am going to add a single node (node2-pub)
to a single-node RAC cluster online, without affecting the availability of the existing RAC database.


Existing RAC Architecture:

RAC Nodes:

Node 1:
    Public:  node1-pub
    Private: node1-prv
    Virtual: node1-vip
 

ORACLE_HOMES (Local on Each Node):

    CRS_HOME: /u01/app/crs
    DB_HOME:  /u01/app/oracle/product/11g/db_2
    ASM_HOME: /u01/app/asm/product/11gr1


Database / ASM:

    DB Name: test
    DB Instances: test1 on node1-pub
    ASM Instances: +ASM1 on node1-pub.

Node to be added: node2-pub

Assumptions:

MAXINSTANCES in the controlfile is set to at least 2.
Node node2-pub is physically connected to the cluster.
The same Linux OS version as node 1 is installed on the new node.
 
Tasks List (to be executed in Order):

Provision the New Node (node2-pub)
Install CRS
Install 11g RAC DB Software for ASM_HOME on Node2-pub
Add LISTENER on node2-pub
Add DB Software for ORACLE_HOME on node2-pub
Add ASM Instance +ASM2 on node2-pub Manually
Add DB Instance test2 on node2-pub Manually




Provision the New Node (node2-pub):

Packages:

Oracle 11g R1 on CentOS EL 4 Update 5 requires the extra packages below to be installed, at the same version or higher than listed. A quick way to verify them is shown after the list.

binutils-2.15.92.0.2-18
compat-libstdc++-33.2.3-47.3
elfutils-libelf-0.97-5
elfutils-libelf-devel-0.97.5
glibc-2.3.9.4-2.19
glibc-common-2.3.9.4-2.19
glibc-devel-2.3.9.4-2.19
gcc-3.4.5-2
gcc-c++-3.4.5-2
libaio-devel-0.3.105-2
libaio-0.3.105-2
libgcc-3.4.5
libstdc++-3.4.5-2

libstdc++-devel-3.4.5-2
make-3.80-5
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-devel-2.2.11
iscsi-initiator-utils-4.0.3.0-5 <-- For iscsi based SAN configuration
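
A quick way to confirm that all of these packages are present is to query them in one shot with rpm. This is just a convenience sketch; package names may vary slightly between CentOS/RHEL releases, so adjust the list as needed.

[root@node2-pub ~]# rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
> glibc glibc-common glibc-devel gcc gcc-c++ libaio libaio-devel libgcc libstdc++ \
> libstdc++-devel make sysstat unixODBC unixODBC-devel iscsi-initiator-utils

Any package reported as "not installed" needs to be installed from the CentOS media (or with up2date/yum) before continuing.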
 
Configuring Public and Private network for the Cluster Nodes:

Each new node in the cluster must have 3 network adapters (eth0, eth1, and eth2): one for the public network, a second one for the private
network interface (internode communication, interconnect), and a third one for the Network Storage System (private).
Follow the below steps to configure these networks:


(1) Change the hostname value by executing the below command:

[root@localhost ~]# hostname node2-pub.hingu.net

(2) Edit the /etc/hosts file as shown below:

[root@node2-pub ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
 
## Public Node names
 
192.168.10.11          node1-pub.hingu.net     node1-pub
192.168.10.22          node2-pub.hingu.net     node2-pub
 
## Private Network (Interconnect)
 
192.168.0.11            node1-prv               node1-prv
192.168.0.22            node2-prv               node2-prv
 
## Private Network (Network storage)
 
192.168.1.11            node1-nas               node1-nas
192.168.1.22            node2-nas               node2-nas
192.168.1.33            nas-server              nas-server
 
## Virtual IPs
 
192.168.10.111          node1-vip.hingu.net     node1-vip
192.168.10.222          node2-vip.hingu.net     node2-vip



(3) Edit OR create the /etc/sysconfig/network-scripts/ifcfg-eth0 as shown below:

[root@node2-pub ~]# cat  /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
IPADDR=192.168.10.22
HWADDR=00:06:5B:AE:AE:7F
ONBOOT=yes
TYPE=Ethernet
[root@node2-pub ~]#


(4) Edit OR create the /etc/sysconfig/network-scripts/ifcfg-eth1 as shown below: <-- For Cluster interconnects

[root@node2-pub ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
HWADDR=00:13:46:6A:FC:6D
ONBOOT=yes
IPADDR=192.168.0.22
NETMASK=255.255.255.0
TYPE=Ethernet
[root@node2-pub ~]#


(5) Edit OR create the /etc/sysconfig/network-scripts/ifcfg-eth2 on RAC Nodes as shown below: <-- For iSCSI SAN Storage Network

[root@node2-pub ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2
# ADMtek NC100 Network Everywhere Fast Ethernet 10/100
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.22
NETMASK=255.255.255.0
HWADDR=00:18:F8:0F:0D:C1
[root@node2-pub ~]#



(6) Edit the /etc/sysconfig/network file with the below contents:

[root@node2-pub ~]# cat  /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node2-pub.hingu.net

(7) Restart the network service OR reboot the nodes:
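
For example, on CentOS/RHEL the network service can be restarted with the service command (a reboot works just as well):

[root@node2-pub ~]# service network restart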
Verify the network interface configurations by running the ifconfig command as shown below.

[root@node2-pub ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:06:5B:AE:AE:7F 
          inet addr:192.168.10.22  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::206:5bff:feae:ae7f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:28768 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11884 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1970115 (1.8 MiB)  TX bytes:3414698 (3.2 MiB)
          Interrupt:11 Base address:0xe480
 
eth1      Link encap:Ethernet  HWaddr 00:13:46:6A:FC:6D 
          inet addr:192.168.0.22  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::213:46ff:fe6a:fc6d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:472914 errors:0 dropped:0 overruns:0 frame:0
          TX packets:468223 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:93005393 (88.6 MiB)  TX bytes:105775559 (100.8 MiB)
          Interrupt:10 Base address:0x6800
 
eth2      Link encap:Ethernet  HWaddr 00:18:F8:0F:0D:C1 
          inet addr:192.168.1.22  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::218:f8ff:fe0f:dc1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:710843 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1107434 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:365638109 (348.6 MiB)  TX bytes:211568148 (201.7 MiB)
          Interrupt:9 Base address:0x6c00
 
lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:41797 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41797 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:19150389 (18.2 MiB)  TX bytes:19150389 (18.2 MiB)
 
sit0      Link encap:IPv6-in-IPv4 
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

 
Memory and swap Space:

Oracle 11g RAC requires 1 GB of RAM available on each node to successfully install 11g RAC. Well, I have
managed to install it with 512 MB of RAM; you will get a warning during the prerequisite check step of the installation,
which you can ignore. Please go to Adding an Extra Swapspace if you want to add extra swap space.
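
If you prefer not to follow that separate guide, the sketch below shows one common way to add a 1 GB swap file. The /swapfile path and the size are just examples; adjust them to your system.

[root@node2-pub ~]# dd if=/dev/zero of=/swapfile bs=1M count=1024
[root@node2-pub ~]# chmod 600 /swapfile
[root@node2-pub ~]# mkswap /swapfile
[root@node2-pub ~]# swapon /swapfile
[root@node2-pub ~]# swapon -s            -- verify the new swap space is active

To make it permanent across reboots, add a line like "/swapfile swap swap defaults 0 0" to /etc/fstab.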

Kernel Parameters:

Oracle recommends that you set the shared memory segment attributes as well as the semaphores to the following values.
If they are not set, database instance creation will fail. I added the following lines to the /etc/sysctl.conf file. Every OS process needs
a semaphore on which it waits for resources; for more on semaphores, please read the UNIX OS documentation.

NOTE: If the current value for any parameter is higher than the value listed in this table, then do not change the value of that parameter.

To see the current settings in the kernel, please use the below commands:

/sbin/sysctl -a | grep sem      -- for semmsl, semmns, semopm, semmni
/sbin/sysctl -a | grep shm      -- for shmall, shmmax, shmmni
/sbin/sysctl -a | grep file-max
/sbin/sysctl -a | grep ip_local_port_range
/sbin/sysctl -a | grep rmem_default

Please add/change the appropriate variable values in the /etc/sysctl.conf file as shown below.

[root@node2-pub ~]# cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.
 
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
 
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
 
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
 
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
 
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
 
# Extra parameters For 11g RAC installation
 
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.wmem_default = 262144
net.core.wmem_max= 262144
net.core.rmem_max = 4194304
 
[root@node2-pub ~]# 

After adding these lines to /etc/sysctl.conf, please run the below command as root to enable them.

[root@node2-pub ~]#  sysctl -p

Creating oracle OS User Account:
groupadd -g 900 dba
groupadd -g 901 oinstall
useradd -u 900 -g oinstall -G dba oracle
passwd oracle

Please verify that the oracle user has the same gid and uid on all the RAC nodes by executing this command:

[oracle@node2-pub ~]$ id
uid=900(oracle) gid=901(oinstall) groups=901(oinstall),900(dba)

[oracle@node1-pub ~]$ id
uid=900(oracle) gid=901(oinstall) groups=901(oinstall),900(dba)

Creating Oracle Software Directories:

          Perform the below steps on all the nodes in the cluster.

[root@node2-pub ~]# mkdir -p /u01/app/crs
[root@node2-pub ~]# mkdir -p /u01/app/asm
[root@node2-pub ~]# mkdir -p /u01/app/oracle
[root@node2-pub ~]# mkdir -p /u02/ocfs2
[root@node2-pub root]# chown -R oracle:oinstall /u01
[root@node2-pub root]# chown -R oracle:oinstall /u02
[root@node2-pub root]# chmod -R 775 /u01/app/oracle
[root@node2-pub root]# chmod -R 775 /u01

Setting Shell Limits for the Oracle User:

To improve the performance of the software on Linux systems, you must increase the following shell limits for the oracle user:

Add the following lines to the /etc/security/limits.conf file:

             oracle soft nproc 2047
          oracle hard nproc 16384
          oracle soft nofile 1024
          oracle hard nofile 65536

Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

                session required /lib/security/pam_limits.so

For the Bourne, Bash, or Korn shell,  add the following lines to the /etc/profile:

        if [ $USER = "oracle" ]; then
           if [ $SHELL = "/bin/ksh" ]; then
              ulimit -p 16384
              ulimit -n 65536
           else
              ulimit -u 16384 -n 65536
           fi

        fi

For the C shell (csh or tcsh), add the following lines to the /etc/csh.login.

       if ( $USER == "oracle" ) then
          limit maxproc 16384
          limit descriptors 65536

       endif
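
After making these changes, log out and log back in as oracle and confirm the limits actually took effect. A simple check (the values should match the limits set above, 16384 processes and 65536 open files):

[oracle@node2-pub ~]$ ulimit -u
16384
[oracle@node2-pub ~]$ ulimit -n
65536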


Enable SSH oracle user Equivalency on Both the Cluster Nodes:

To configure SSH user equivalency, you must create RSA and DSA keys on each cluster node and copy these keys
from all the cluster node members into an authorized key file on each node. Follow the below steps to achieve this task.

su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh

(A) Generate the RSA and DSA keys on Each NODE:

/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa


(B) on node1-pub:

         Add keys to the authorized key file and then send the same file to every node in the cluster:


touch ~/.ssh/authorized_keys
cd ~/.ssh

(1)

ssh node1-pub cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh node1-pub cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
ssh node2-pub cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh node2-pub cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys

(2)
[oracle@node1-pub .ssh]$ scp authorized_keys node2-pub:/home/oracle/.ssh/

(C)  ON Node2-pub:

      (1)

ssh node1-pub cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh node1-pub cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
ssh node2-pub cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
ssh node2-pub cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys

(2)
[oracle@node2-pub .ssh]$ scp authorized_keys node1-pub:/home/oracle/.ssh/


(D) Change the Permission of authorized_Keys file (on each node)

[oracle@node2-pub .ssh]$ chmod 600 ~/.ssh/authorized_keys

While executing step B (1), you may be prompted as shown below. Enter "yes" and continue.

[oracle@node2-pub .ssh]$ ssh node1-pub cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
The authenticity of host 'node1-pub (216.160.37.154)' can't be established.
RSA key fingerprint is <**********>.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1-pub,216.160.37.154' (RSA) to the list of known hosts.
Warning: No xauth data; using fake authentication data for X11 forwarding.

Now, try executing the date command (or any other command) on the remote node to make sure that oracle is not asked for the password.
Do the same on both the nodes in the cluster. You should not receive any error message while you execute these commands on
all the nodes. If you get any errors, fix them first before you go further.

[oracle@node2-pub oracle]$ ssh node1-prv date
Thu Sep 20 21:16:56 CDT 2007
[oracle@node2-pub oracle]$ ssh node2-prv date
Thu Sep 20 21:17:03 CDT 2007
[oracle@node2-pub oracle]$ ssh node1-pub date
Thu Sep 20 21:17:09 CDT 2007
[oracle@node2-pub oracle]$ ssh node2-pub date
Thu Sep 20 21:17:19 CDT 2007

Errors / Warnings during the network configurations:

I got the below warning when I tried the below command.

[oracle@node2-pub .ssh]$ ssh node1-pub date
Warning: No xauth data; using fake authentication data for X11 forwarding.
Thu Sep 20 20:43:39 CDT 2007

To fix the above warning, create the /home/oracle/.ssh/config file (logged in as the oracle user) and make the below entry
in it. Then run the same command again and the warning will not show up.


[oracle@node2-pub oracle]$ cat .ssh/config
 
Host *
      ForwardX11 no

You may be prompted to enter 'yes' or 'no' the first time you perform ssh commands. Simply enter 'yes' and continue.
Afterwards, when oracle connects to the remote node, it won't be asked for the password.

[oracle@node2-pub oracle]$ ssh node1-prv date
The authenticity of host 'node1-prv (192.168.203.1)' can't be established.
RSA key fingerprint is <********************************************>
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1-prv,192.168.203.1' (RSA) to the list of known hosts.
Sun Dec 18 20:01:09 CST 2005
[oracle@node2-pub oracle]$ ssh node1-prv date
Sun Dec 18 20:01:13 CST 2005
[oracle@node2-pub oracle]$

[oracle@node2-pub oracle]$ ssh node2-prv date
Warning: Permanently added the RSA host key for IP address '192.168.203.2' to the list of known hosts.
Sun Dec 18 20:14:16 CST 2005
[oracle@node2-pub oracle]$ ssh node2-pub date
Sun Dec 18 20:15:05 CST 2005

If you get the below error message when you try to connect to the remote node, please make sure that the firewall is disabled on the
remote node.

[root@node2-pub root]# telnet node1-prv
Trying 192.168.203.1...
telnet: Unable to connect to remote host: No route to host

Configuring System for Shared Disk Storage Device (iSCSI based IP SAN):

 

The below 5 volumes are created under the rac11g volume group. For more information on setting up and creating shared volumes, please go here.

volume groups

Make sure that each volume allows shared access to all the nodes in the cluster. You can do that by clicking the 'Edit' link on the above screen
for each volume name, as shown below.

openfiler_shared

The below screen shows that both the nodes in the cluster have shared access to the storage. Click on the General tab to add/modify the RAC node info.
The network information provided in this table is the private network for the shared storage.

allow shared access

Discovering the volumes on the new node node2-pub as SCSI devices

NOTE: Make sure that SELinux and the firewall have been disabled on all the RAC nodes. If not, then disable them via "Administration Section"
--> "System Settings" --> "Security Level".

on node2-pub:
 
Make sure that the iscsi-initiator is installed. If not, then download the specified RPM and install it.
 
[root@node2-pub rpms]# rpm -qa | grep iscsi
iscsi-initiator-utils-4.0.3.0-5
 
[root@node2-pub rpms]# service iscsi start -- if not already running
 
[root@node2-pub rpms]# service iscsi status
iscsid (pid 3256 3253) is running...
 
Configure nodes for the Disk Discovery:
 
(a) Edit the /etc/iscsi.conf and assign the address of the iscsi target to the below parameter:

# --------------------------
# Discovery Address Category
# --------------------------
# To localize parameters to targets found on a particular discovery address
# (i.e., to override the global settings), you need to use the
# "DiscoveryAddress" setting. The format for the "DiscoveryAddress" setting is
# a.b.c.d, a.b.c.d:e (where e is a TCP port number), or an instance name. An
# example of these settings would be:
#
#DiscoveryAddress=10.4.100.0
#  or
#DiscoveryAddress=10.4.100.1:3260
#  or
DiscoveryAddress=192.168.1.33 # <-- This is the address of the nas-server
#
 
(b)
 
Reboot all the nodes and run the iscsi-ls command to see if the volumes have been discovered on the RAC nodes as SCSI devices.
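
If a full reboot is inconvenient, restarting the iscsi service (the same command used later in /etc/rc.local) should also make the initiator log in to the newly configured target; then check with iscsi-ls:

[root@node2-pub ~]# service iscsi restart
[root@node2-pub ~]# iscsi-ls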
 
(c)
 
[root@node2-pub rpms]# iscsi-ls
*******************************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-4(15-Jan-2007)
*******************************************************************************
TARGET NAME             : iqn.2006-01.com.openfiler:rac11g.ocfs-dsk
TARGET ALIAS            :
HOST ID                 : 0
BUS ID                  : 0
TARGET ID               : 0
TARGET ADDRESS          : 192.168.1.33:3260,1
SESSION STATUS          : ESTABLISHED AT Sun Sep  2 18:49:57 CDT 2007
SESSION ID              : ISID 00023d000001 TSIH 1200
*******************************************************************************
TARGET NAME             : iqn.2006-01.com.openfiler:rac11g.asm-dsk4
TARGET ALIAS            :
HOST ID                 : 1
BUS ID                  : 0
TARGET ID               : 0
TARGET ADDRESS          : 192.168.1.33:3260,1
SESSION STATUS          : ESTABLISHED AT Sun Sep  2 18:49:57 CDT 2007
SESSION ID              : ISID 00023d000001 TSIH 1000
*******************************************************************************
TARGET NAME             : iqn.2006-01.com.openfiler:rac11g.asm-dsk3
TARGET ALIAS            :
HOST ID                 : 2
BUS ID                  : 0
TARGET ID               : 0
TARGET ADDRESS          : 192.168.1.33:3260,1
SESSION STATUS          : ESTABLISHED AT Sun Sep  2 18:49:57 CDT 2007
SESSION ID              : ISID 00023d000001 TSIH 1100
*******************************************************************************
TARGET NAME             : iqn.2006-01.com.openfiler:rac11g.asm-dsk2
TARGET ALIAS            :
HOST ID                 : 3
BUS ID                  : 0
TARGET ID               : 0
TARGET ADDRESS          : 192.168.1.33:3260,1
SESSION STATUS          : ESTABLISHED AT Sun Sep  2 18:49:57 CDT 2007
SESSION ID              : ISID 00023d000001 TSIH 1300
*******************************************************************************
TARGET NAME             : iqn.2006-01.com.openfiler:rac11g.asm-dsk1
TARGET ALIAS            :
HOST ID                 : 4
BUS ID                  : 0
TARGET ID               : 0
TARGET ADDRESS          : 192.168.1.33:3260,1
SESSION STATUS          : ESTABLISHED AT Sun Sep  2 18:49:58 CDT 2007
SESSION ID              : ISID 00023d000001 TSIH 1400
*******************************************************************************
[root@node2-pub rpms]#
 
(4) Map the Volumes on the iscsi-target (nas-server) to the disks discovered on the local RAC nodes.
 
    Host ID       Target ID                                       discovered as
   
      0           iqn.2006-01.com.openfiler:rac11g.ocfs-dsk          ---
      1           iqn.2006-01.com.openfiler:rac11g.asm-dsk4          ---
      2           iqn.2006-01.com.openfiler:rac11g.asm-dsk3          ---
      3           iqn.2006-01.com.openfiler:rac11g.asm-dsk2          ---
      4           iqn.2006-01.com.openfiler:rac11g.asm-dsk1          ---
 
Now, run the below command to find out the devices "Attached" to the Host IDs. The scsi id in this
output maps to the Host ID in the "iscsi-ls" output.
 
[root@node2-pub rpms]# dmesg | grep Attached
 
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
Attached scsi disk sdb at scsi1, channel 0, id 0, lun 0
Attached scsi disk sdc at scsi2, channel 0, id 0, lun 0
Attached scsi disk sdd at scsi3, channel 0, id 0, lun 0
Attached scsi disk sde at scsi4, channel 0, id 0, lun 0
 
In the first line, scsi0 (Host ID 0) has the device "sda" attached to it. So, filling the above table
with this information gives the mapping of the disks discovered on the client to their actual volumes on the iscsi-target.
 
 Host ID          Target ID                                 Volume Name discovered as
   
      0           iqn.2006-01.com.openfiler:rac11g.ocfs-dsk   ocfs-dsk           sda
      1           iqn.2006-01.com.openfiler:rac11g.asm-dsk4   asm-dsk4           sdb
      2           iqn.2006-01.com.openfiler:rac11g.asm-dsk3   asm-dsk3           sdc
      3           iqn.2006-01.com.openfiler:rac11g.asm-dsk2   asm-dsk2           sdd
      4           iqn.2006-01.com.openfiler:rac11g.asm-dsk1   asm-dsk1           sde
   
NOTE: You do not partition the shared disks on the new node. After successful discovery of the shared volumes, as shown above, all the available partitions on these volumes will become visible on the node2-pub node.

[root@node2-pub ~]# fdisk -l
 
Disk /dev/hda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14       19457   156183930   8e  Linux LVM
 
Disk /dev/sda: 80.6 GB, 80664854528 bytes
255 heads, 63 sectors/track, 9806 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        9806    78766663+  83  Linux
 
Disk /dev/sdb: 104.8 GB, 104857600000 bytes
255 heads, 63 sectors/track, 12748 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       12748   102398278+  83  Linux
 
Disk /dev/sdc: 104.8 GB, 104857600000 bytes
255 heads, 63 sectors/track, 12748 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       12748   102398278+  83  Linux
 
Disk /dev/sdd: 104.8 GB, 104857600000 bytes
255 heads, 63 sectors/track, 12748 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       12748   102398278+  83  Linux
 
Disk /dev/sde: 104.8 GB, 104857600000 bytes
255 heads, 63 sectors/track, 12748 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       12748   102398278+  83  Linux
[root@node1-pub ~]#

Make Device name persistent (node2-pub):


  1. DOWNLOAD the latest rpm of devlabel from Dell's website:
  2. Install it on all the RAC nodes:
[root@node1-pub Desktop]# rpm -ivh devlabel-0.48.01-1.i386.rpm
warning: devlabel-0.48.01-1.i386.rpm: V3 DSA signature: NOKEY, key ID 23b66a9d
Preparing...                ########################################### [100%]
   1:devlabel               ########################################### [100%]
[root@node1-pub Desktop]#

    3. Create the SYMLINK using the devlabel (All the RAC nodes):

          (A). Get the Host ID and the devices attached to that ID by running the below command:

[root@node1-pub ~]# dmesg | grep Attached
Attached scsi disk sda at scsi5, channel 0, id 0, lun 0
Attached scsi disk sdb at scsi8, channel 0, id 0, lun 0
Attached scsi disk sdc at scsi6, channel 0, id 0, lun 0
Attached scsi disk sdd at scsi9, channel 0, id 0, lun 0
Attached scsi disk sde at scsi7, channel 0, id 0, lun 0

(B). Get the Target Name associated with each Host ID by running the below command:

[root@node1-pub ~]# iscsi-ls
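
The full iscsi-ls output was shown in the discovery step earlier. Purely as a convenience, the target names needed for the mapping can be pulled out with a simple filter:

[root@node1-pub ~]# iscsi-ls | grep "TARGET NAME"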

(C). Create SYMLINK using below command:


[root@node1-pub ~]# devlabel add -d /dev/sdc1 -s /dev/ocfs2
SYMLINK: /dev/ocfs2 -> /dev/sde1
Added /dev/ocfs2 to /etc/sysconfig/devlabel
[root@node1-pub ~]#
[root@node1-pub ~]# devlabel add -d /dev/sde1 -s /dev/asmdsk4
SYMLINK: /dev/asmdsk4 -> /dev/sde1
Added /dev/asmdsk4 to /etc/sysconfig/devlabel
[root@node1-pub ~]#
[root@node1-pub ~]# devlabel add -d /dev/sde1 -s /dev/asmdsk3
SYMLINK: /dev/asmdsk3 -> /dev/sde1
Added /dev/asmdsk3 to /etc/sysconfig/devlabel
[root@node1-pub ~]#
[root@node1-pub ~]# devlabel add -d /dev/sdb1 -s /dev/asmdsk2
SYMLINK: /dev/asmdsk2 -> /dev/sdb1
[root@node1-pub ~]#
Added /dev/asmdsk2 to /etc/sysconfig/devlabel
[root@node1-pub ~]# devlabel add -d /dev/sdd1 -s /dev/asmdsk1
SYMLINK: /dev/asmdsk1 -> /dev/sdd1
[root@node1-pub ~]#
Added /dev/asmdsk1 to /etc/sysconfig/devlabel
[root@node1-pub ~]#

(D). Confirm that the symlinks are associated with the UUID of each physical device.

[root@node1-pub ~]# cat /etc/sysconfig/devlabel
# devlabel configuration file
#
# This file should generally not be edited by hand.
# Instead, use the /sbin/devlabel program to make changes.
# devlabel by Gary Lerhaupt <gary_lerhaupt@dell.com>
#
# format:  <SYMLINK> <DEVICE> <UUID>
# or format:  <RAWDEVICE> <DEVICE> <UUID>

/dev/ocfs2 /dev/sda1 S83.1:4f70656e66696c000000000005000000b92700000e000000OpenfileVirtualdisksector63-157533389
/dev/asmdsk4 /dev/sdc1 S83.1:4f70656e66696c000000000004000000a52700000e000000OpenfileVirtualdisksector63-204796619
/dev/asmdsk3 /dev/sde1 S83.1:4f70656e66696c000000000003000000912700000e000000OpenfileVirtualdisksector63-204796619
/dev/asmdsk2 /dev/sdb1 S83.1:4f70656e66696c0000000000020000007d2700000e000000OpenfileVirtualdisksector63-204796619
/dev/asmdsk1 /dev/sdd1 S83.1:4f70656e66696c000000000001000000672700000e000000OpenfileVirtualdisksector63-204796619
[root@node1-pub ~]#

 (E). Mount the ocfs2 devices under the appropriate mount point and update the /etc/fstab (in my case it is /u02/ocfs2):

[root@node1-pub ~]# mount -t ocfs2 -o datavolume,nointr /dev/ocfs2 /u02/ocfs2
[root@node1-pub ~]# vi /etc/fstab

[root@node1-pub ~]# cat /etc/fstab

# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 /                      ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
none                    /dev/shm                tmpfs   defaults        0 0
none                    /proc                   proc    defaults        0 0
none                    /sys                    sysfs   defaults        0 0
/dev/VolGroup00/LogVol01 swap                   swap    defaults        0 0

## OCFS2 mountpoint

/dev/ocfs2              /u02/ocfs2              ocfs2   _netdev,datavolume,nointr      0 0
/dev/hdc                /media/cdrom            auto    pamconsole,exec,noauto,managed 0 0
/dev/fd0                /media/floppy           auto    pamconsole,exec,noauto,managed 0 0

(F). Update the /etc/rc.local file

Copy the existing /etc/rc.local file from node1-pub to node2-pub.

[root@node2-pub ~]# cat /etc/rc.local

The below is a sample of my /etc/rc.local:

# Affirms the mappings of symlinks  to  storage  devices.   If  a
# device  cannot  be found by its UUID, the existing symlink will
# be deleted.  If the device name has changed, it will update the
# symlink.  Both start and restart are synonyms for this command.

service iscsi restart
devlabel reload

mount -t ocfs2 -o datavolume,nointr /dev/ocfs2 /u02/ocfs2

Installing and Configuring OCFS2 (Oracle Cluster File System):

I have followed the OCFS2 setup guide to install and configure the OCFS2.

Download and Install the required rpms:

Please download the below rpms from Oracle's website and install them on the new node node2-pub as shown below:

ocfs2-2.6.9-55.0.2.EL-1.2.5-6.i686.rpm
ocfs2-tools-1.2.4-1.i386.rpm
ocfs2console-1.2.4-1.i386.rpm
ocfs2-tools-debuginfo-1.2.4-1.i386.rpm
ocfs2-tools-devel-1.2.4-1.i386.rpm

[root@node2-pub rpms]# rpm -ivh ocfs2-2.6.9-55.0.2.EL-1.2.5-6.i686.rpm \
> ocfs2-tools-1.2.4-1.i386.rpm \
> ocfs2console-1.2.4-1.i386.rpm \
> ocfs2-tools-debuginfo-1.2.4-1.i386.rpm \
> ocfs2-tools-devel-1.2.4-1.i386.rpm
Preparing...                ########################################### [100%]
   1:ocfs2-tools            ########################################### [ 20%]
   2:ocfs2-2.6.9-55.0.2.EL  ########################################### [ 40%]
   3:ocfs2console           ########################################### [ 60%]
   4:ocfs2-tools-debuginfo  ########################################### [ 80%]
   5:ocfs2-tools-devel      ########################################### [100%]
[root@node2-pub rpms]#


Configure OCFS2:

As we are using the OCFS2 filesystem for the OCR and Voting Disk, we need to install OCFS2 on the new node as well, so that we can mount the same volume on the new node and get access to the OCFS2 filesystem. Configure OCFS2 by running the ocfs2console tool as shown below.

Running the ocfs2console command line utility brings up this screen.

ocfs2_config1

You can close this box as you can enable the o2cb service later.

ocfs_config2

Click Cluster --> Configure Nodes. Add the node name for node2-pub.
Make sure to add exactly the same node name as is returned by the `hostname` command.
The hostname for the new node is node2-pub.hingu.net.


ocfs2_config4

Propagate the files to all the nodes in the cluster:

ocfs2_config5

So, the above setting of "Name" (node2-nas) in the ocfs2 configuration will lead to the below error when I try to enable the o2cb service.

[root@node2-pub rpms]# /etc/init.d/o2cb enable
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: Failed
Cluster ocfs2 created
Node node1-nas added
Node node2-nas added
o2cb_ctl: Configuration error discovered while populating cluster ocfs2.  None of its nodes were considered local.  A node is considered local when its node name in the configuration matches this machine's host name.
Stopping O2CB cluster ocfs2: OK
[root@node2-pub rpms]#
 
So, stop the o2cb service, open the /etc/ocfs2/cluster.conf file, and update the hostname value to the one that is returned by the `hostname` command. Do not update the IP. Start the service and load it again, and this time the error should go away.
--=================

[oracle@node2-pub ~]$ cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 192.168.0.11
        number = 0
        name = node1-pub.hingu.net
        cluster = ocfs2
 
node:
        ip_port = 7777
        ip_address = 192.168.0.22
        number = 1
        name = node2-pub.hingu.net
        cluster = ocfs2
 
cluster:
        node_count = 2
        name = ocfs2
--================

[root@node2-pub rpms]# /etc/init.d/o2cb load
Loading module "configfs": OK
Creating directory '/config': OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
[root@node2-pub rpms]# /etc/init.d/o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Offline
[root@node2-pub rpms]#
 
Configure o2cb to start up at boot time:
 
[root@node2-pub rpms]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
 
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.
 
Load O2CB driver on boot (y/n) [n]: y
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [7]:
Specify network idle timeout in ms (>=5000) [10000]:
Specify network keepalive delay in ms (>=1000) [5000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: OK
[root@node2-pub rpms]#
 
[root@node2-pub rpms]# chkconfig --add ocfs2
[root@node2-pub rpms]# chkconfig --add o2cb
[root@node2-pub rpms]# mkdir -p /u02/ocfs2 -- ocfs2 mountpoint
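
As an optional check, confirm that both services are now registered to start at boot:

[root@node2-pub rpms]# chkconfig --list o2cb
[root@node2-pub rpms]# chkconfig --list ocfs2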

Mount the filesystem

Mount the ocfs2 devices under the appropriate mount point and update the /etc/fstab (in my case it is /u02/ocfs2):

[root@node2-pub ~]# mount -t ocfs2 -o datavolume,nointr /dev/ocfs2 /u02/ocfs2

[root@node2-pub ~]# cat /etc/fstab
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00 /                      ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
none                    /dev/shm                tmpfs   defaults        0 0
none                    /proc                   proc    defaults        0 0
none                    /sys                    sysfs   defaults        0 0
/dev/VolGroup00/LogVol01 swap                   swap    defaults        0 0

## OCFS2 mountpoint

/dev/ocfs2              /u02/ocfs2              ocfs2   _netdev,datavolume,nointr      0 0
/dev/hdc                /media/cdrom            auto    pamconsole,exec,noauto,managed 0 0
/dev/fd0                /media/floppy           auto    pamconsole,exec,noauto,managed 0 0

[root@node2-pub ~]# mount -t ocfs2 /dev/sda1 /u02/ocfs2
mount.ocfs2: Transport endpoint is not connected while mounting /dev/sda1 on /u02/ocfs2. Check 'dmesg' for more information on this error.
 
Solution: Make sure that SELinux and the firewall have been disabled. If not, disable them and reboot the nodes,
and it should be OK.
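
A quick way to confirm both are off on a node, using the standard EL4 tools (getenforce should report Disabled and the iptables service should be stopped):

[root@node2-pub ~]# getenforce
[root@node2-pub ~]# service iptables status
[root@node2-pub ~]# chkconfig iptables off      -- keep the firewall off across reboots, if it was enabled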


Creating Automatic Storage Management (ASM) Disks for the Clustered Database:

  Download and Install these RPMs on node node2-pub.

[oracle@node2-pub ~]$ uname -a
Linux node2-pub.hingu.net 2.6.9-55.0.2.EL #1 Tue Jun 26 14:08:18 EDT 2007 i686 i686 i386 GNU/Linux


oracleasm-2.6.9-55.0.2.EL-2.0.3-1.i686.rpm
oracleasmlib-2.0.2-1.i386.rpm
oracleasm-support-2.0.3-1.i386.rpm


[root@node2-pub rpms]# rpm -ivh oracleasm-2.6.9-55.0.2.EL-2.0.3-1.i686.rpm \
> oracleasmlib-2.0.2-1.i386.rpm \
> oracleasm-support-2.0.3-1.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.9-55.0.2.########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]
[root@node2-pub rpms]#
 
Configure ASM on node2-pub

[root@node2-pub ~]# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.
 
This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.
 
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration:           [  OK  ]
Creating /dev/oracleasm mount point:                       [  OK  ]
Loading module "oracleasm":                                [  OK  ]
Mounting ASMlib driver filesystem:                         [  OK  ]
Scanning system for ASM disks:                             [  OK  ]
[root@node1-pub ~]#

Scan the ASM Disk Device(s) that are currently being used by ASM:

[root@node2-pub ~]# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks:                             [  OK  ]
[root@node2-pub ~]# /etc/init.d/oracleasm listdisks
DSK1
DSK2
DSK3
DSK4
[root@node2-pub ~]#
[root@node2-pub ~]# /etc/init.d/oracleasm status
Checking if ASM is loaded:                                 [  OK  ]
Checking if /dev/oracleasm is mounted:                     [  OK  ]
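
Optionally, each ASM disk label can also be validated individually with the querydisk option of the same init script (shown here for DSK1 only; repeat for the other disks as needed):

[root@node2-pub ~]# /etc/init.d/oracleasm querydisk DSK1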

Install CRS:

Connect to an existing node as the oracle user (node1-pub), and execute the $CRS_HOME/oui/bin/addNode.sh script.
Make sure to run this script using an X-terminal.
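
A minimal sketch of launching it, using the CRS_HOME listed at the top of this document (how you forward X11 or set DISPLAY may differ in your environment):

[oracle@node1-pub ~]$ export ORA_CRS_HOME=/u01/app/crs
[oracle@node1-pub ~]$ cd $ORA_CRS_HOME/oui/bin
[oracle@node1-pub bin]$ ./addNode.sh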

addNode_CRS_01

Specify the Public, Private, and Virtual names of the new node.

addNode_CRS_02

addNode_CRS_03


Click Install
addNode_CRS_04

Run the mentioned scripts one by one, in order, on the mentioned nodes.
 
addNode_CRS_05

addNode_CRS_06


addNode_CRS_07

addNode_CRS_08

Verify the CRS as well as the nodeapps status on the new node node2-pub by running crsctl check cluster followed by the crs_stat -t command.
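
For example, a minimal check using the CRS_HOME path from this setup (output omitted here) would look like:

[oracle@node2-pub ~]$ /u01/app/crs/bin/crsctl check cluster
[oracle@node2-pub ~]$ /u01/app/crs/bin/crs_stat -t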

Install ASM_HOME





REFERENCES:

Oracle Documents