Step by Step: Deleting/Removing Node(s) from an 11g RAC Cluster on Linux
By Bhavin Hingu
bhavin@oracledba.org
Delete/Remove a Node from an 11g RAC Cluster:
This document explains the step-by-step process of removing/deleting a RAC node from a cluster. In this exercise, I am going to remove a single node (node2-pub) from a 2-node RAC cluster online, without affecting the availability of the RAC database running on ASM.
Existing RAC Architecture:
RAC Nodes:
Node 1:
  Public: node1-pub
  Private: node1-prv
  Virtual: node1-vip
Node 2:
  Public: node2-pub
  Private: node2-prv
  Virtual: node2-vip
ORACLE_HOMEs (Local on Each Node):
  CRS_HOME: /u01/app/crs
  DB_HOME: /u01/app/oracle/product/11g/db_2
  ASM_HOME: /u01/app/asm/product/11gr1
Database / ASM:
  DB Name: test
  DB Instances: test1 on node1-pub, test2 on node2-pub
  ASM Instances: +ASM1 on node1-pub, +ASM2 on node2-pub
Node to be deleted: node2-pub
Task List (to be executed in order):
Get the current status of CRS on all the nodes before proceeding with the node-deletion exercise. All the nodeapps, DB and ASM instances, and services should be up and running on both nodes. The point to note here is that the database resource ora.test.db is running on node2-pub, the node we want to delete from the cluster.
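You can capture that status with commands along these lines (a sketch; crs_stat lives in the CRS home, and the database and service names assume the setup described above):
/u01/app/crs/bin/crs_stat -t          # one-line state of every registered CRS resource
srvctl status nodeapps -n node1-pub   # VIP, GSD, ONS, and listener on node1-pub
srvctl status nodeapps -n node2-pub   # VIP, GSD, ONS, and listener on node2-pub
srvctl status database -d test        # test1 and test2 instance status
srvctl status service -d test         # where test_srv is currently running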

Modify Database Services:
Update the database service to run on all the nodes except the node being deleted. This is achieved by modifying the service and supplying the appropriate instances as the preferred instances where we want this service to run at startup. In my case, the preferred instance will be "test1", where we want test_srv to run after deleting test2. (DBCA also takes care of this task as part of deleting the instance.)
srvctl status service -d test
srvctl stop service -d test -s test_srv -i test2
srvctl config service -d test
srvctl modify service -d test -s test_srv -n -i test1
srvctl config service -d test

Remove Database Instance test2 on node2-pub:
Delete the test2 instance using DBCA (Instance Management -> Delete Instance), run as the oracle user from one of the remaining nodes (node1-pub in my case); a silent-mode equivalent is sketched below. At the end, stop the nodeapps on node2-pub with the srvctl command that follows.
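If you prefer to script the instance deletion rather than click through DBCA, the silent mode looks roughly like this (a sketch; run from node1-pub with ORACLE_HOME set to the DB HOME, and the SYS password is a placeholder):
dbca -silent -deleteInstance -nodeList node2-pub -gdbName test -instanceName test2 -sysDBAUserName sys -sysDBAPassword <sys_password>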
srvctl stop nodeapps -n node2-pub

Remove DB_HOME and ASM_HOME from node2-pub:
DB_HOME:
- Connect to node2-pub as the oracle user using an X-terminal.
- Set ORACLE_HOME to the DB HOME (in my case, /u01/app/oracle/product/11g/db_2).
- Update the Oracle Inventory with CLUSTER_NODES set to null to DETACH the ORACLE_HOME from the rest of the nodes in the cluster, so that runInstaller removes the ORACLE_HOME from the node2-pub local node only.
- Deinstall the ORACLE_HOME.
export ORACLE_HOME=/u01/app/oracle/product/11g/db_2
echo $ORACLE_HOME
cd /u01/app/oracle/product/11g/db_2/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local
./runInstaller -ignoreSysPrereqs -silent "REMOVE_HOMES={$ORACLE_HOME}" -local
NOTE: 11g is not certified on CentOS, so I have to use the -ignoreSysPrereqs option.
Repeat the same procedure for the ASM HOME.
ASM_HOME:
- Connect to node2-pub as the oracle user using an X-terminal.
- Set ORACLE_HOME to the ASM HOME (in my case, /u01/app/asm/product/11gr1).
- Update the Oracle Inventory with CLUSTER_NODES set to null to DETACH the ORACLE_HOME from the rest of the nodes in the cluster, so that runInstaller removes the ORACLE_HOME from the node2-pub local node only.
- Deinstall the ORACLE_HOME.
export ORACLE_HOME=/u01/app/asm/product/11gr1
echo $ORACLE_HOME
cd /u01/app/asm/product/11gr1/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local
./runInstaller -ignoreSysPrereqs -silent "REMOVE_HOMES={$ORACLE_HOME}" -local
Remove nodeapps from node2-pub:
Remove the nodeapps on node2-pub. Connect as oracle on any of the nodes and execute the command below. Make sure that the nodeapps are not ONLINE on node2-pub; if they are, stop them before removing them (a quick check is sketched below).
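A minimal pre-check, assuming the same node name:
srvctl status nodeapps -n node2-pub   # verify the VIP, GSD, ONS, and listener show as not running
srvctl stop nodeapps -n node2-pub     # only needed if anything is still ONLINE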
srvctl remove nodeapps -n node2-pub

Update Inventory of DB_HOME and ASM_HOME on the remaining Nodes:
After removing DB_HOME and ASM_HOME from node2-pub, it is required to update the Inventories for these HOMEs on the remaining nodes in the cluster with the new list of remaining nodes. Execute the commands below from any of the remaining nodes. The CLUSTER_NODES option must contain the list of all the nodes except the ones being deleted. In my case of a 2-node RAC, the only remaining node is node1-pub.
For DB_HOME:
- Connect to a remaining node (node1-pub in my case) as the oracle user using an X-terminal.
- Set ORACLE_HOME to the DB HOME (in my case, /u01/app/oracle/product/11g/db_2).
- Update the Oracle Inventory with the list of remaining nodes (the CLUSTER_NODES variable).
export ORACLE_HOME=/u01/app/oracle/product/11g/db_2
echo $ORACLE_HOME
cd /u01/app/oracle/product/11g/db_2/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub
For ASM_HOME:
- Connect to a remaining node (node1-pub in my case) as the oracle user using an X-terminal.
- Set ORACLE_HOME to the ASM HOME (in my case, /u01/app/asm/product/11gr1).
- Update the Oracle Inventory with the list of remaining nodes (the CLUSTER_NODES variable).
export ORACLE_HOME=/u01/app/asm/product/11gr1
echo $ORACLE_HOME
cd /u01/app/asm/product/11gr1/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub
Remove CRS (Clusterware) from node2-pub:
Prepare node2-pub for the CRS removal:
Connect to the node being deleted (node2-pub) as root and execute the rootdelete.sh script to prepare it for the CRS removal. The arguments tell the script that it is running on the node being removed (local), that the OCR configuration file is not on shared storage (nosharedvar), and that the CRS home is not shared (nosharedhome).
/u01/app/crs/install/rootdelete.sh local nosharedvar nosharedhome

Remove CRS from node2-pub (Update OCR):
From any of the remaining nodes (other than the one being deleted), execute the rootdeletenode.sh script as the root user to remove node2-pub from the OCR. You need the node name as well as the node number of the node being deleted; you can get this information by running the olsnodes -n command-line utility.
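On this cluster, the output would look something like this (illustrative):
/u01/app/crs/bin/olsnodes -n
# node1-pub   1
# node2-pub   2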
/u01/app/crs/install/rootdeletenode.sh <node to be deleted>,<node_number>
/u01/app/crs/install/rootdeletenode.sh node2-pub,2
Update the Inventory for CRS on the remaining Nodes:
Connect to any of the remaining nodes and execute the command below to update the inventory with the proper list of cluster nodes for the CRS_HOME. The inventory has already been updated for the DB_HOME as well as the ASM_HOME. In my case, I connect to node1-pub and run the command below.
export ORACLE_HOME=/u01/app/crs
echo $ORACLE_HOME
cd /u01/app/crs/oui/bin
./runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1-pub CRS=TRUE
Verify that the node is removed successfully:
Verify that the node has been removed successfully by looking at the OCR through commands like olsnodes. Also, run lsinventory to make sure that the Inventory no longer knows of the deleted node (in the case of a RAC system with two or more nodes).
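A verification pass might look like this (a sketch, run from node1-pub; the OPatch path assumes the DB_HOME above):
/u01/app/crs/bin/olsnodes -n                                 # node2-pub should no longer be listed
/u01/app/crs/bin/crs_stat -t                                 # no node2-pub resources should remain
srvctl status database -d test                               # only test1 should be reported
/u01/app/oracle/product/11g/db_2/OPatch/opatch lsinventory   # node list should show node1-pub only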

On the deleted node, remove the OS directories for the DB_HOME, ASM_HOME, and CRS_HOME (sketched below).
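A minimal cleanup, as root on node2-pub (double-check the paths before running rm -rf):
rm -rf /u01/app/oracle/product/11g/db_2   # DB_HOME
rm -rf /u01/app/asm/product/11gr1         # ASM_HOME
rm -rf /u01/app/crs                       # CRS_HOME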
****** Node node2-pub has been deleted from the Cluster Successfully!!! ******