Hardware Used in Setting up the 3-Node 10g R2 RAC Using iSCSI SAN (Openfiler):
· Total Machines: 4 (3 for RAC nodes + 1 for NAS)
· Network Switches: 3 (for the Public, Private, and Shared Storage networks)
· Extra Network Adapters: 7 (6 for the RAC nodes, 2 per node, and 1 for the storage server)
· Network Cables: 10 (9 for the RAC nodes, 3 per node, and 1 for shared storage)
· External USB HD: 1 (1 TB)
Machine Specifications:
DELL OPTIPLEX GX620
CPU: Intel 3800 MHz
RAM: 4084 MB
HD: 250 GB
DVD, 10/100 NIC, 8 MB VRAM
Network Adapter Specifications:
Linksys EG1032 Instant Gigabit Network Adapter
Network Switch Specifications:
D-Link 24-Port Rackmountable Gigabit Switch
Network Cable Specifications:
25-Foot Cat6 Snagless Patch Cable (Blue, Black, and Grey)
Software Used for the 3-Node RAC Setup Using NAS (Openfiler):
· NAS Storage Solution: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686)
· Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE)
· Clusterware: Oracle 10g R2 (10.2.0.1)
· Oracle RAC: Oracle RDBMS 10g R2 (10.2.0.1)
· 10.2.0.3 Patch Set to be applied on top of 10.2.0.1
Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE):
Server: All the RAC Nodes
Oracle Clusterware (10.2.0.3):
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/oracle
ORACLE_HOME: /u01/app/oracle/crs
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 775
OCR/Voting Disk Storage Type: Raw Devices
Oracle Inventory Location: /u01/app/oraInventory
Oracle Database Software (RAC 10.2.0.3) for ASM_HOME:
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/oracle
ORACLE_HOME: /u01/app/oracle/asm
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 775
Oracle Inventory Location: /u01/app/oraInventory
Listener: LISTENER (TCP:1521)
Oracle Database Software (RAC 10.2.0.3) for DB_HOME:
Server: All the RAC Nodes
ORACLE_BASE: /u01/app/oracle
ORACLE_HOME: /u01/app/oracle/db
Owner: oracle (Primary Group: oinstall, Secondary Group: dba)
Permissions: 775
Oracle Inventory Location: /u01/app/oraInventory
Database Name: labdb
Listener: LAB_LISTENER (TCP:1530)
Openfiler 2.3:
Server: single dedicated server acting as NAS.
OS: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686).
Existing 10g R2 RAC Network Setup:
Machine    | Public Name       | Private Name | VIP Name            | Resolved By
RAC Node1  | node1.hingu.net   | node1-prv    | node1-vip.hingu.net | DNS and /etc/hosts
RAC Node2  | node2.hingu.net   | node2-prv    | node2-vip.hingu.net | DNS and /etc/hosts
RAC Node3  | node3.hingu.net   | node3-prv    | node3-vip.hingu.net | DNS and /etc/hosts
Storage    | nas-server        | N/A          | N/A                 | N/A
DNS server | lab-dns.hingu.net | N/A          | N/A                 | N/A
Public Network: 192.168.2.0/eth2
Private Network (Cluster Interconnect): 192.168.0.0/eth0
Private Network (Storage Network): 192.168.1.0/eth1
Machine    | Public IP     | Private IP  | VIP          | Storage IP    | Resolved By
RAC Node1  | 192.168.2.1   | 192.168.0.1 | 192.168.2.51 | 192.168.1.1   | DNS and /etc/hosts
RAC Node2  | 192.168.2.2   | 192.168.0.2 | 192.168.2.52 | 192.168.1.2   | DNS and /etc/hosts
RAC Node3  | 192.168.2.3   | 192.168.0.3 | 192.168.2.53 | 192.168.1.3   | DNS and /etc/hosts
Storage    | N/A           | N/A         | N/A          | 192.168.1.101 | DNS and /etc/hosts
DNS server | 192.168.2.200 | N/A         | N/A          | N/A           | N/A
Network Configuration for the RAC Nodes/NAS Server/DNS Host:
The public names and VIPs are resolved by DNS. The private IPs for the cluster interconnect are resolved through /etc/hosts. The hostname, along with the public/private and NAS networks, is configured at the time of the OEL network installation. The final network configuration files are listed here.
(a) hostname:
For Node node1:
[root@node1 ~]#
hostname node1.hingu.net
node1.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node1.hingu.net
For Node node2:
[root@node2 ~]#
hostname node2.hingu.net
node2.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node2.hingu.net
For Node node3:
[root@node3 ~]#
hostname node3.hingu.net
node3.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node3.hingu.net
For Node nas-server:
[root@nas-server ~]#
hostname nas-server
nas-server: /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=nas-server
For Node lab-dns:
[root@lab-dns ~]#
hostname lab-dns
lab-dns.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=lab-dns.hingu.net
(b) Private Network for Cluster Interconnect:
node1.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.0.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
node2.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:4B
IPADDR=192.168.0.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
node3.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:49
IPADDR=192.168.0.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes
(c) Public Network:
node1.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:04:6A:62
IPADDR=192.168.2.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:24:F8:58
IPADDR=192.168.2.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
node3.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:19:B9:0C:E6:EF
IPADDR=192.168.2.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
lab-dns.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth0
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:13:72:A1:E9:1B
IPADDR=192.168.2.200
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
(d) Private Network for Shared Storage:
node1.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.1.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
node2.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:45:13
IPADDR=192.168.1.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
node3.hingu.net:
/etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:48
IPADDR=192.168.1.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
nas-server.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:43:D6
IPADDR=192.168.1.101
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Ethernet
(e) /etc/hosts files:
node1.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
##=======================================
# Public Network
##=======================================
192.168.2.1 node1.hingu.net node1
192.168.2.2 node2.hingu.net node2
192.168.2.3 node3.hingu.net node3
##=======================================
# VIPs
##=======================================
192.168.2.51 node1-vip.hingu.net node1-vip
192.168.2.52 node2-vip.hingu.net node2-vip
192.168.2.53 node3-vip.hingu.net node3-vip
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================
node2.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
##=======================================
# Public Network
##=======================================
192.168.2.1 node1.hingu.net node1
192.168.2.2 node2.hingu.net node2
192.168.2.3 node3.hingu.net node3
##=======================================
# VIPs
##=======================================
192.168.2.51 node1-vip.hingu.net node1-vip
192.168.2.52 node2-vip.hingu.net node2-vip
192.168.2.53 node3-vip.hingu.net node3-vip
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================
node3.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
##=======================================
# Public Network
##=======================================
192.168.2.1 node1.hingu.net node1
192.168.2.2 node2.hingu.net node2
192.168.2.3 node3.hingu.net node3
##=======================================
# VIPs
##=======================================
192.168.2.51 node1-vip.hingu.net node1-vip
192.168.2.52 node2-vip.hingu.net node2-vip
192.168.2.53 node3-vip.hingu.net node3-vip
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================
lab-dns.hingu.net: /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       lab-dns.hingu.net lab-dns localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
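Before moving on to the DNS setup, basic reachability over these /etc/hosts entries can be spot-checked. A minimal sketch (run from node1; adjust the host list as needed):

# Ping each cluster name once over the public and private networks
for h in node2 node3 node2-prv node3-prv; do
    ping -c 1 $h > /dev/null && echo "$h OK" || echo "$h FAILED"
done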
Configure the DNS Server for Node VIPs and Node IPs:
DNS Server: lab-dns.hingu.net
RPMs required to set up the DNS server:
ypbind-1.19-12.el5
bind-utils-9.3.6-4.P1.el5_4.2
bind-9.3.6-4.P1.el5_4.2
system-config-bind-4.0.3-4.0.1.el5
bind-libs-9.3.6-4.P1.el5_4.2
bind-chroot-9.3.6-4.P1.el5_4.2
Configuration files modified/created to set up DNS:
On lab-dns.hingu.net:
/var/named/chroot/etc/named.conf (modified)
/var/named/chroot/var/named/hingu.net.zone (created)
/var/named/chroot/var/named/2.168.192.in-addr.arpa.zone (created)
/var/named/chroot/var/named/1.168.192.in-addr.arpa.zone (created)
On node1, node2, and node3:
/etc/resolv.conf (modified)
/var/named/chroot/etc/named.conf
// Enterprise Linux BIND Configuration Tool
//
// Default initial "Caching Only" name server configuration
//
options {
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
};
// Zone for this RAC configuration is hingu.net
zone "hingu.net" in {
        type master;
        file "hingu.net.zone";
        allow-update { none; };
};
// For reverse lookups
zone "2.168.192.in-addr.arpa" in {
        type master;
        file "2.168.192.in-addr.arpa.zone";
        allow-update { none; };
};
zone "1.168.192.in-addr.arpa" in {
        type master;
        file "1.168.192.in-addr.arpa.zone";
        allow-update { none; };
};
include "/etc/rndc.key";
/var/named/chroot/var/named/hingu.net.zone
$TTL 1d
hingu.net.  IN SOA  lab-dns.hingu.net. root.hingu.net. (
            100     ; se = serial number
            8h      ; ref = refresh
            5m      ; ret = update retry
            3w      ; ex = expiry
            3h      ; min = minimum
            )
            IN NS   lab-dns.hingu.net.
; DNS server
lab-dns     IN A    192.168.2.200
; RAC Nodes Public name
node1       IN A    192.168.2.1
node2       IN A    192.168.2.2
node3       IN A    192.168.2.3
; RAC Nodes Public VIPs
node1-vip   IN A    192.168.2.51
node2-vip   IN A    192.168.2.52
node3-vip   IN A    192.168.2.53
; Storage Network
nas-server  IN A    192.168.1.101
node1-nas   IN A    192.168.1.1
node2-nas   IN A    192.168.1.2
node3-nas   IN A    192.168.1.3
/var/named/chroot/var/named/2.168.192.in-addr.arpa.zone
$TTL 1d
@       IN SOA  lab-dns.hingu.net. root.hingu.net. (
        100     ; se = serial number
        8h      ; ref = refresh
        5m      ; ret = update retry
        3w      ; ex = expiry
        3h      ; min = minimum
        )
        IN NS   lab-dns.hingu.net.
; DNS machine name in reverse
200     IN PTR  lab-dns.hingu.net.
; RAC Nodes Public Name in Reverse
1       IN PTR  node1.hingu.net.
2       IN PTR  node2.hingu.net.
3       IN PTR  node3.hingu.net.
; RAC Nodes Public VIPs in Reverse
51      IN PTR  node1-vip.hingu.net.
52      IN PTR  node2-vip.hingu.net.
53      IN PTR  node3-vip.hingu.net.
/var/named/chroot/var/named/1.168.192.in-addr.arpa.zone
$TTL 1d
@       IN SOA  lab-dns.hingu.net. root.hingu.net. (
        100     ; se = serial number
        8h      ; ref = refresh
        5m      ; ret = update retry
        3w      ; ex = expiry
        3h      ; min = minimum
        )
        IN NS   lab-dns.hingu.net.
; Storage Network Reverse Lookup
101     IN PTR  nas-server.hingu.net.
1       IN PTR  node1-nas.hingu.net.
2       IN PTR  node2-nas.hingu.net.
3       IN PTR  node3-nas.hingu.net.
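Before starting named, the configuration and zone files can be sanity-checked with the checker utilities that ship with the bind RPMs (a sketch; the -t flag points named-checkconf at the chroot used above):

named-checkconf -t /var/named/chroot /etc/named.conf
named-checkzone hingu.net /var/named/chroot/var/named/hingu.net.zone
named-checkzone 2.168.192.in-addr.arpa /var/named/chroot/var/named/2.168.192.in-addr.arpa.zone
named-checkzone 1.168.192.in-addr.arpa /var/named/chroot/var/named/1.168.192.in-addr.arpa.zone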
/etc/resolv.conf (on the RAC nodes):
search hingu.net
nameserver 192.168.2.200
Start the DNS Service (named):
service named start
chkconfig --level 35 named on
Enable the Name Service Cache Daemon nscd: (On all the RAC Nodes)
chkconfig --level 35 nscd on
service nscd start
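With named running, forward and reverse resolution can be spot-checked from any RAC node using the bind-utils tools (a sketch; any name defined in the zone files will do):

nslookup node1-vip.hingu.net
nslookup 192.168.2.51
host node1.hingu.net
host 192.168.1.101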
Creating Oracle Users/Groups/Permissions and Installation Paths: (On all the RAC Nodes)
userdel oracle
groupdel oinstall
groupdel dba
groupadd -g 1000 oinstall
groupadd -g 1031 dba
useradd -u 1101 -g oinstall -G dba oracle
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
passwd oracle
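A quick check that the user, groups, and path came out as intended (the UID and GIDs must be identical on every node for RAC):

id oracle
ls -ld /u01/app/oracle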
Install cvuqdisk Package: (On all the RAC Nodes):
This package is located in the rpm directory on the Clusterware media and needs to be installed after the group oinstall is created. In my case, as this was a fresh install of 10g R2 on new machines, an old version of cvuqdisk was not present. If one is present, the older version needs to be removed first.
export CVUQDISK_GRP=oinstall
echo $CVUQDISK_GRP
rpm -ivh cvuqdisk-1.0.1-1.rpm
[root@node1 rpm]# pwd
/home/oracle/10gr2/clusterware/rpm
[root@node1 rpm]# export CVUQDISK_GRP=oinstall
[root@node1 rpm]# echo $CVUQDISK_GRP
oinstall
[root@node1 rpm]# rpm -ivh cvuqdisk-1.0.1-1.rpm
Preparing...
########################################### [100%]
1:cvuqdisk
########################################### [100%]
[root@node1 rpm]# rpm -qa | grep cvuqdisk
cvuqdisk-1.0.1-1
[root@node1 rpm]#
Configure Oracle RAC Owner’s User Environment (oracle):
(a) Set the umask to 022 by putting the below line into the oracle user's .bash_profile file:
umask 022
Then, executed the .bash_profile and verified that the correct value of umask is displayed.
[oracle@node1 ~]$ . .bash_profile
[oracle@node1 ~]$ umask
(b) Setting up X11 forwarding:
Created the file ~/.ssh/config to disable X11 forwarding by placing the below line in it.
Host *
ForwardX11 no
(c) Suppressed the terminal output on STDOUT and STDERR to prevent installation errors:
Modified the file ~/.bashrc (or .cshrc for the C shell) with the below entry.
Bourne, Bash, or Korn shell:
if [ -t 0 ]; then
stty intr ^C
fi
C shell:
test -t 0
if ($status == 0) then
stty intr ^C
endif
(d) Increased the Shell Limits:
Recommended:
Resource              | Soft Limit | Hard Limit
Processes             | 2047       | 16384
Open File Descriptors | 1024       | 65536
Stack                 | 10240      | 10240 - 32768
Set:
Resource              | Soft Limit | Hard Limit
Processes             | 131072     | 131072
Open File Descriptors | 131072     | 131072
Stack                 | 32768      | 32768
Added the following lines to the /etc/security/limits.conf file:
oracle  soft    nofile  131072
oracle  hard    nofile  131072
oracle  soft    nproc   131072
oracle  hard    nproc   131072
oracle  soft    core    unlimited
oracle  hard    core    unlimited
oracle  soft    memlock 3500000
oracle  hard    memlock 3500000
# Recommended stack hard limit 32MB for oracle installations
oracle  hard    stack   32768
Added the following line to the /etc/pam.d/login file, if it does not already exist:
session required /lib/security/pam_limits.so
For the Bourne, Bash, or Korn shell, add the following lines to /etc/profile:
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 131072
        ulimit -n 131072
    else
        ulimit -u 131072 -n 131072
    fi
fi
For the C shell (csh or tcsh), add the following lines to /etc/csh.login:
if ( $USER == "oracle" ) then
    limit maxproc 131072
    limit descriptors 131072
endif
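To confirm the new limits actually take effect, a fresh login as oracle can be checked as below (a sketch; -n = open files, -u = max user processes, -s = stack size):

su - oracle -c 'ulimit -n -u -s'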
(e) Set the below kernel parameters with the recommended ranges in /etc/sysctl.conf. This was already set by the installation of the oracle-validated package. I had to uncomment the lines related to the 10g parameters and comment out the same parameters for 11g. After the modification, ran sysctl -p to enable the changes.
/etc/sysctl.conf
# Kernel sysctl configuration file for Oracle Enterprise Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maximum size of a message queue
kernel.msgmax = 8192
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 1073741824
# For 11g, recommended value for file-max is 6815744
#fs.file-max = 6815744
# For 10g, uncomment 'fs.file-max 327679', comment other entries for this parameter and re-run sysctl -p
fs.file-max = 327679
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
net.core.rmem_default = 262144
# For 11g, recommended value for net.core.rmem_max is 4194304
#net.core.rmem_max = 4194304
# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p
net.core.rmem_max = 2097152
net.core.wmem_default = 262144
# For 11g, recommended value for wmem_max is 1048576
#net.core.wmem_max = 1048576
# For 10g, uncomment 'net.core.wmem_max 262144', comment other entries for this parameter and re-run sysctl -p
net.core.wmem_max = 262144
fs.aio-max-nr = 3145728
# For 11g, recommended value for ip_local_port_range is 9000 65500
#net.ipv4.ip_local_port_range = 9000 65500
# For 10g, uncomment 'net.ipv4.ip_local_port_range 1024 65000', comment other entries for this parameter and re-run sysctl -p
net.ipv4.ip_local_port_range = 1024 65000
# Added min_free_kbytes 50MB to avoid OOM killer on EL4/EL5
vm.min_free_kbytes = 51200
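After editing, the changes are enabled with sysctl -p, and individual parameters can be spot-checked afterwards (a sketch):

sysctl -p
sysctl fs.file-max net.core.rmem_max net.core.wmem_max net.ipv4.ip_local_port_range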
SSH User Equivalency Configuration (oracle):
On All the Cluster Nodes:
su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
Generate the RSA and DSA keys:
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
On node1:
touch ~/.ssh/authorized_keys
cd ~/.ssh
(a) Add these keys to the authorized_keys file.
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node2.
scp authorized_keys node2:.ssh/
On node2:
(a) Add these keys to the authorized_keys file.
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node3.
scp authorized_keys node3:.ssh/
On node3:
(a) Add these keys to the authorized_keys file.
cd ~/.ssh
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node1 and node2.
scp authorized_keys node1:.ssh/
scp authorized_keys node2:.ssh/
On All the Nodes:
chmod 600 ~/.ssh/authorized_keys
ssh node1 date
ssh node2 date
ssh node3 date
ssh node1.hingu.net date
ssh node2.hingu.net date
ssh node3.hingu.net date
ssh node1-prv date
ssh node2-prv date
ssh node3-prv date
Entered 'yes' and continued when prompted.
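Once each host key has been accepted, the full matrix can be re-verified compactly; if any hop below still prompts for a password, the equivalency setup is not complete (a sketch; run on every node):

for h in node1 node2 node3 node1-prv node2-prv node3-prv; do
    ssh $h date
done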
Configure the iSCSI Disk Devices for OCR and Voting Disks (on All the RAC Nodes):
(a) Placed the below line in the /etc/scsi_id.config
options=-g
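(This assumes the iSCSI initiator on each node is already logged in to the Openfiler targets; if the sessions are not yet established, a discovery/login sketch using open-iscsi, the iscsi-initiator-utils package on EL5, looks like this:)

# Discover the targets published by the Openfiler box, then log in to all of them
iscsiadm -m discovery -t sendtargets -p 192.168.1.101
iscsiadm -m node -l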
(b) Got the scsi disk mappings. (from one of the RAC nodes)
[root@node1 ~]# ls -la /dev/disk/by-path/ | grep openfiler | awk '{ print $9 $10 $11 }'
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:datadisk01-lun-0->../../sde
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:datadisk01-lun-0-part1->../../sde1
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:fradisk01-lun-0->../../sdd
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:fradisk01-lun-0-part1->../../sdd1
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:ocrdisk01-lun-0->../../sdg
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:ocrdisk01-lun-0-part1->../../sdg1
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:ocrdisk02-lun-0->../../sdh
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:ocrdisk02-lun-0-part1->../../sdh1
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:votingdisk01-lun-0->../../sdc
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:votingdisk01-lun-0-part1->../../sdc1
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:votingdisk02-lun-0->../../sdb
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:votingdisk02-lun-0-part1->../../sdb1
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:votingdisk03-lun-0->../../sdf
ip-192.168.1.101:3260-iscsi-iqn.2006-01.com.openfiler:votingdisk03-lun-0-part1->../../sdf1
(c) Got SCSI IDs for all the OCRs and Voting Disks using scsi_id.
[root@node1 ~]# scsi_id -g -s /block/sdg/sdg1
14f504e46494c450073564b3158472d7943766d2d4c736459
[root@node1 ~]# scsi_id -g -s /block/sdh/sdh1
14f504e46494c4500786839656e792d75704b422d545a6e70
[root@node1 ~]#
[root@node1 ~]# scsi_id -g -s /block/sdc/sdc1
14f504e46494c45004d74494743652d384176312d5776594d
[root@node1 ~]# scsi_id -g -s /block/sdb/sdb1
14f504e46494c45007132726b4d4d2d6274616f2d33753456
[root@node1 ~]# scsi_id -g -s /block/sdf/sdf1
14f504e46494c450046305855714e2d324956712d674f4132
[root@node1 ~]#
(d) Created the file /etc/udev/rules.d/99-crs.rules to label the scsi devices with the appropriate names so that they remain persistent across machine reboots.
KERNEL=="sd?1", BUS=="scsi",
PROGRAM=="/sbin/scsi_id -g -s %p",
RESULT=="14f504e46494c450073564b3158472d7943766d2d4c736459" ,
NAME="ocrdisk01", OWNER="root", GROUP="oinstall",
MODE="0640"
KERNEL=="sd?1",
BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s %p",
RESULT=="14f504e46494c4500786839656e792d75704b422d545a6e70" ,
NAME="ocrdisk02", OWNER="root", GROUP="oinstall",
MODE="0640"
KERNEL=="sd?1",
BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s %p",
RESULT=="14f504e46494c45004d74494743652d384176312d5776594d" ,
NAME="votingdisk01", OWNER="oracle",
GROUP="oinstall", MODE="0640"
KERNEL=="sd?1",
BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -s %p",
RESULT=="14f504e46494c45007132726b4d4d2d6274616f2d33753456" ,
NAME="votingdisk02", OWNER="oracle",
GROUP="oinstall", MODE="0640"
KERNEL=="sd?1", BUS=="scsi",
PROGRAM=="/sbin/scsi_id -g -s %p",
RESULT=="14f504e46494c450046305855714e2d324956712d674f4132" ,
NAME="votingdisk03", OWNER="oracle",
GROUP="oinstall", MODE="0640"
(e) Reloaded the udev rules and started the udev services to make the changes effective.
udevcontrol reload_rules
start_udev
(f) Verified that these disks are now labeled appropriately.
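A listing like the below (a sketch) shows the relabeled devices along with their owners and modes:

ls -l /dev/ocrdisk* /dev/votingdisk*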
(g) Created the Raw Bindings for the OCR and Voting Devices (On All the RAC Nodes).
Placed the below lines into /etc/sysconfig/rawdevices:
# raw device bindings
# format:  <rawdev> <major> <minor>
#          <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/ocrdisk01
/dev/raw/raw2 /dev/ocrdisk02
/dev/raw/raw3 /dev/votingdisk01
/dev/raw/raw4 /dev/votingdisk02
/dev/raw/raw5 /dev/votingdisk03
Change the permissions and ownership of the raw devices as shown below.
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chown oracle:oinstall /dev/raw/raw3
chown oracle:oinstall /dev/raw/raw4
chown oracle:oinstall /dev/raw/raw5
chmod 640 /dev/raw/raw1
chmod 640 /dev/raw/raw2
chmod 640 /dev/raw/raw3
chmod 640 /dev/raw/raw4
chmod 640 /dev/raw/raw5
Restart the service rawdevices on all the nodes.
service rawdevices restart
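The active bindings and their permissions can be confirmed as below (a sketch; raw -qa queries all bound raw devices):

raw -qa
ls -l /dev/raw/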
Because the ownerships and permissions on the raw devices get reset to root on every reboot, the above lines were placed into /etc/rc.local to set the permissions back for the oracle user after a reboot (on all the RAC Nodes):
/etc/rc.local:
[root@node1 sysconfig]# cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chown oracle:oinstall /dev/raw/raw3
chown oracle:oinstall /dev/raw/raw4
chown oracle:oinstall /dev/raw/raw5
chmod 640 /dev/raw/raw1
chmod 640 /dev/raw/raw2
chmod 640 /dev/raw/raw3
chmod 640 /dev/raw/raw4
chmod 640 /dev/raw/raw5
Configure the iSCSI Disk Devices for Oracle ASM with ASMLib:
(a) Verified that the below RPMs are installed before configuring the ASM driver.
oracleasm-2.6.18-194.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-194.el5xen-2.0.5-1.el5
oracleasm-2.6.18-194.el5PAE-2.0.5-1.el5
oracleasm-2.6.18-194.el5debug-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5
(b) Configured ASMLib (All the RAC nodes):
Provided the oracleasm owner "oracle" and group "oinstall" while configuring oracleasm.
oracleasm configure -i
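The resulting settings can be read back from the configuration file that oracleasm configure maintains (a sketch; EL5 layout assumed):

cat /etc/sysconfig/oracleasm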
(c) Loaded the ASMLib module (All the RAC nodes):
[root@node1 ~]# oracleasm init
(d) Created the ASM disks using oracleasm (ONLY from one of the RAC nodes). Per the by-path mapping above, datadisk01 maps to /dev/sde and fradisk01 to /dev/sdd:
oracleasm createdisk DATADSK01 /dev/sde1
oracleasm createdisk FRADSK01 /dev/sdd1
oracleasm scandisks
oracleasm listdisks
(e) On the remaining RAC nodes, simply scanned the ASM disks to instantiate these newly created disks.
oracleasm scandisks
oracleasm listdisks
(f) Verified that these ASM disks can be discovered by the ASM library (oracleasmlib) on all the RAC nodes during the creation of the ASM disks.
Connected as the "oracle" user and ran the below command:
/usr/sbin/oracleasm-discover 'ORCL:*'
[oracle@node1 ~]# /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:DATADSK01 [40959968 blocks (20971503616 bytes), maxio 512]
Discovered disk: ORCL:FRADSK01 [20512736 blocks (10502520832 bytes), maxio 512]
[oracle@node1 ~]#