Step By Step: Upgrade 10gR2 RAC to 11gR1 RAC on Oracle Enterprise Linux 5 (32 bit) Platform.

By Bhavin Hingu

 

CLICK HERE for the “Step By Step: Upgrade 10gR2 RAC to 11gR2 RAC on Linux”

 


 

This document shows, step by step, how to upgrade a 3-node 10gR2 RAC database on ASM to 11gR1 RAC. The upgrade path below allows upgrading 10gR2 CRS to 11gR1 CRS with no downtime, using a rolling upgrade. A rolling upgrade of ASM is only possible starting from Release 11gR1 (11.1.0.6) and is not backward compatible, which means the rolling upgrade method cannot be used to upgrade 10gR2 ASM to 11gR1 ASM.

 

 

 

Upgrading the 10gR2 RAC database itself requires an outage; the total downtime can be further minimized, or even avoided, by using a Logical Standby Database in the upgrade process (not covered here).

 

 

 

Item                               Existing 10gR2 RAC Setup              Target 11gR1 RAC Setup
                                   (Before Upgrade)                      (After Upgrade)
---------------------------------  ------------------------------------  ------------------------------------
Clusterware                        Oracle 10gR2 Clusterware 10.2.0.3     Oracle 11gR1 Clusterware 11.1.0.6
ASM Binaries                       10gR2 RAC 10.2.0.3                    11gR1 RAC 11.1.0.6
Cluster Name                       Lab                                   Lab
Cluster Nodes                      node1, node2, node3                   node1, node2, node3
Clusterware Home                   /u01/app/oracle/crs (CRS_HOME)        /u01/app/oracle/crs (CRS_HOME)
Clusterware Owner                  oracle:(oinstall, dba)                oracle:(oinstall, dba)
VIPs                               node1-vip, node2-vip, node3-vip       node1-vip, node2-vip, node3-vip
SCAN                               N/A                                   N/A
SCAN_LISTENER Host/port            N/A                                   N/A
OCR and Voting Disk Storage Type   Raw Devices                           Raw Devices
OCR Disks                          /dev/raw/raw1, /dev/raw/raw2          /dev/raw/raw1, /dev/raw/raw2
Voting Disks                       /dev/raw/raw3, /dev/raw/raw4,         /dev/raw/raw3, /dev/raw/raw4,
                                   /dev/raw/raw5                         /dev/raw/raw5
ASM_HOME                           /u01/app/oracle/asm                   /u01/app/oracle/asm11gr1
ASM_HOME Owner                     oracle:(oinstall, dba)                oracle:(oinstall, dba)
ASMLib user:group                  oracle:oinstall                       oracle:oinstall
DB Binaries                        Oracle 10gR2 RAC (10.2.0.3)           Oracle 11gR1 RAC (11.1.0.6)
DB_HOME                            /u01/app/oracle/db                    /u01/app/oracle/db11gr1
DB_HOME Owner                      oracle:(oinstall, dba)                oracle:(oinstall, dba)
DB LISTENER                        LAB_LISTENER                          LAB_LISTENER
DB Listener Host/port              node1-vip, node2-vip, node3-vip       node1-vip, node2-vip, node3-vip
                                   (port 1530)                           (port 1530)
DB Storage Type, File Management   ASM with OMFs                         ASM with OMFs
ASM diskgroups for DB and FRA      DATA, FRA                             DATA, FRA
OS Platform                        Oracle Enterprise Linux 5.5 (32-bit)  Oracle Enterprise Linux 5.5 (32-bit)

 

 

HERE’s the Existing 10gR2 RAC Setup in Detail

 

 

The upgrade process consists of the following four stages:

 

·         Pre-Upgrade Tasks

·         Upgrade 10gR2 (10.2.0.3) Clusterware to 11gR1 (11.1.0.6) ← rolling upgrade, no downtime.

·         Upgrade 10gR2 (10.2.0.3) ASM_HOME to 11gR1 (11.1.0.6) ← ASM and database downtime required.

·         Upgrade the 10gR2 Database to 11gR1 RAC (11.1.0.6) ← database downtime required.

 

Pre-Upgrade tasks:

 

Install/upgrade the RPMs required for 11gR1 (and 11gR2) RAC installation

Set the 11gR1-specific kernel parameters

Update the TIMEZONE file version

Back up the ORACLE_HOMEs and the database

 

 

Minimum Required RPMs for 11gR1 RAC on OEL 5.5 (All the 3 RAC Nodes):

 

binutils-2.17.50.0.6-2.el5

compat-libstdc++-33-3.2.3-61

elfutils-libelf-0.125

elfutils-libelf-devel-0.125

glibc-2.5-12

glibc-common-2.5-12

glibc-devel-2.5-12

glibc-headers-2.3.4-2

gcc-4.1.1-52

gcc-c++-4.1.1-52

libaio-0.3.106

libaio-devel-0.3.106

libgcc-4.1.1-52

libstdc++-4.1.1

libstdc++-devel-4.1.1-52.el5

make-3.81-1.1

sysstat-7.0.0

unixODBC-2.2.11

unixODBC-devel-2.2.11

 

The command below verifies whether the required RPMs are installed. Any missing RPMs can be installed from the OEL Media Pack.

 

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \

              gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel \

              libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel
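
If you only want a report of the missing packages, a small shell loop works as well (a sketch; run it on each node):

for p in binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
         gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel \
         libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel
do
   # rpm -q exits non-zero when a package is not installed
   rpm -q "$p" > /dev/null 2>&1 || echo "MISSING: $p"
done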

 

NOTE: The cvuqdisk RPM in 11gR1 has the same version as the one shipped with 10gR2 (cvuqdisk-1.0.1-1), so I did not have to replace the 10gR2 cvuqdisk with the 11gR1 one.
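
A quick check of the installed cvuqdisk version on each node:

rpm -q cvuqdisk      # expected on this setup: cvuqdisk-1.0.1-1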

 

Kernel Parameter Requirements (All the 3 RAC Nodes):

 

/etc/sysctl.conf:

 

# Kernel sysctl configuration file for Oracle Enterprise Linux

#

# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and

# sysctl.conf(5) for more details.

 

# Controls IP packet forwarding

net.ipv4.ip_forward = 0

 

# Controls source route verification

net.ipv4.conf.default.rp_filter = 1

 

# Do not accept source routing

net.ipv4.conf.default.accept_source_route = 0

 

# Controls the System Request debugging functionality of the kernel

kernel.sysrq = 1

 

# Controls whether core dumps will append the PID to the core filename

# Useful for debugging multi-threaded applications

kernel.core_uses_pid = 1

 

# Controls the use of TCP syncookies

net.ipv4.tcp_syncookies = 1

 

# Controls the maximum size of a message, in bytes

kernel.msgmnb = 65536

 

# Controls the default maximum size of a message queue

kernel.msgmax = 8192

 

# Controls the maximum shared segment size, in bytes

kernel.shmmax = 4294967295

 

# Controls the maximum number of shared memory segments, in pages

kernel.shmall = 1073741824

# For 11g, recommended value for file-max is 6815744

fs.file-max = 6815744

# For 10g, uncomment 'fs.file-max 327679', comment other entries for this parameter and re-run sysctl -p

# fs.file-max:327679

kernel.msgmni = 2878

kernel.sem = 250 32000 100 142

kernel.shmmni = 4096

net.core.rmem_default = 262144

# For 11g, recommended value for net.core.rmem_max is 4194304

net.core.rmem_max = 4194304

# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p

# net.core.rmem_max=2097152

net.core.wmem_default = 262144

# For 11g, recommended value for wmem_max is 1048576

net.core.wmem_max = 1048576

# For 10g, uncomment 'net.core.wmem_max 262144', comment other entries for this parameter and re-run sysctl -p

# net.core.wmem_max:262144

fs.aio-max-nr = 3145728

# For 11g, recommended value for ip_local_port_range is 9000 65500

net.ipv4.ip_local_port_range = 9000 65500

# For 10g, uncomment 'net.ipv4.ip_local_port_range 1024 65000', comment other entries for this parameter and re-run sysctl -p

# net.ipv4.ip_local_port_range:1024 65000

# Added min_free_kbytes 50MB to avoid OOM killer on EL4/EL5

vm.min_free_kbytes = 51200

 

Enable these changes in the kernel.

 

sysctl -p
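
To spot-check that the new values are in effect, query a few of the parameters back (sample set only):

sysctl fs.file-max net.core.rmem_max net.core.wmem_max net.ipv4.ip_local_port_range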

 

 

Update the TIMEZONE file Version:

 

If the database’s timezone file version is less than 4, DBUA may fail with the “ORA-01722: invalid number” error during the upgrade. The database patches below were applied to bring the timezone file up to version 4.

 

Verify the current version of the timezone file in the 10gR2 database.

 

select * from v$timezone_file;

 

Patches to update the timezone file to version 4:

 

5746875

5632264 
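
These are standard interim patches applied with OPatch. A minimal sketch, assuming patch 5632264 was unzipped under /home/oracle/patches (an assumed staging path) and the instances running from the home are down:

export ORACLE_HOME=/u01/app/oracle/db
cd /home/oracle/patches/5632264
$ORACLE_HOME/OPatch/opatch apply
$ORACLE_HOME/OPatch/opatch lsinventory      # confirm the patch now shows up in the inventory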

 

Backing Up ORACLE_HOMEs/database:

 

The steps I followed to back up the ORACLE_HOMEs before the upgrade (this can be applied to both 10g and 11gR1 databases):

 

On node1:

 

mkdir backup

cd backup

 

dd if=/dev/raw/raw1 of=ocr_disk_10gr2.bkp
dd if=/dev/raw/raw3 of=voting_disk_10gr2.bkp

 

tar cvf node1_crs_10gr2.tar /u01/app/oracle/crs/*

tar cvf node1_asm_10gr2.tar /u01/app/oracle/asm/*

tar cvf node1_db_10gr2.tar /u01/app/oracle/db/*

tar cvf node1_etc_oracle.tar /etc/oracle/*

 

cp /etc/inittab etc_inittab

 

mkdir etc_init_d

cd etc_init_d

cp /etc/init.d/init* .

 

On node2:

 

mkdir backup

cd backup

 

tar cvf node2_crs_10gr2.tar /u01/app/oracle/crs/*

tar cvf node2_asm_10gr2.tar /u01/app/oracle/asm/*

tar cvf node2_db_10gr2.tar /u01/app/oracle/db/*

tar cvf node2_etc_oracle.tar /etc/oracle/*

 

cp /etc/inittab etc_inittab

 

mkdir etc_init_d

cd etc_init_d

cp /etc/init.d/init* .

 

On node3:

 

mkdir backup

cd backup

 

tar cvf node3_crs_10gr2.tar /u01/app/oracle/crs/*

tar cvf node3_asm_10gr2.tar /u01/app/oracle/asm/*

tar cvf node3_db_10gr2.tar /u01/app/oracle/db/*

tar cvf node3_etc_oracle.tar /etc/oracle/*

 

cp /etc/inittab etc_inittab

 

mkdir etc_init_d

cd etc_init_d

cp /etc/init.d/init* .

 

 

 

Finally, a full RMAN backup of the database was taken.
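
A minimal sketch of such a backup (assuming the database runs in archivelog mode and backups go to the FRA by default):

rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> BACKUP CURRENT CONTROLFILE;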

 

 

Step By Step: Upgrade Clusterware, ASM and Database from 10.2.0.3 to 11.1.0.6.

 

Upgrade 10gR2 CRS to 11gR1 CRS in Rolling Fashion (No Downtime):

 

On Node1:

 

·         Stop the labdb1 instance, the ASM instance, and the nodeapps.

·         Relocate any database or cluster service resources to the other nodes.

·         Stop and disable the CRS on this node.

·         Reboot the node to clean up any Oracle/CRS-related processes on this node.

·         Enable the CRS after the node comes back up.

·         Run the preupdate.sh script from the 11gR1 clusterware media to unlock the CRS_HOME (so that the oracle user can modify some of the root-owned files).

·         Start the runInstaller from the 11gR1 Clusterware software directory.

 

[Screenshot: 01.JPG]

 

srvctl stop instance -d labdb -i labdb1
srvctl stop asm -n node1
srvctl stop nodeapps -n node1
crs_relocate ora.labdb.db
crs_relocate ora.labdb.oltp.cs
crsctl stop crs
crsctl disable crs
reboot

 

crsctl enable crs   ← once the node comes back up after the reboot
crsctl check crs    ← the CRS stack will not be up, as CRS was disabled before the reboot
/home/oracle/11gR1/clusterware/upgrade/preupdate.sh -crshome /u01/app/oracle/crs -crsuser oracle -noshutdown

./runInstaller -ignoreSysPrereqs
/u01/app/oracle/crs/install/rootupgrade   ← run this script at the end, as root

 

Verify the CRS

 

crsctl check crs (on all the RAC nodes)   ← the stack should be up on all the nodes
crsctl query crs activeversion node1      ← the active version should be 10.2.0.3
crsctl query crs activeversion node2
crsctl query crs activeversion node3
crsctl query crs softwareversion node1    ← the software version should show 11.1.0.6
crsctl query crs softwareversion node2
crsctl query crs softwareversion node3

 

 

[Screenshot: 12.JPG]

 

 

On Node2:

 

·         Stop the database instance labdb2, the ASM instance, and the nodeapps.

·         Relocate any database or cluster service resources to the other nodes.

·         Stop and disable the CRS on this node.

·         Reboot the node to clean up any Oracle-related processes on this node.

·         Enable the CRS after the node comes back up.

·         Run the preupdate.sh script from the 11gR1 clusterware media to unlock the CRS_HOME (so that the oracle user can modify some of the root-owned files).

·         Start the runInstaller from the 11gR1 Clusterware software directory.

 

[Screenshot: 13.JPG]

 

srvctl stop instance -d labdb -i labdb2
srvctl stop asm -n node2
srvctl stop nodeapps -n node2
crs_relocate ora.labdb.db
crsctl stop crs
crsctl disable crs
reboot

 

crsctl enable crs
crsctl check crs
/home/oracle/11gR1/clusterware/upgrade/preupdate.sh -crshome /u01/app/oracle/crs -crsuser oracle -noshutdown

./runInstaller -ignoreSysPrereqs
/u01/app/oracle/crs/install/rootupgrade

 

Verify the CRS

 

crsctl check crs (on all the RAC nodes)
crsctl query crs activeversion node1      ← the active version should be 10.2.0.3
crsctl query crs activeversion node2      ← the active version should be 10.2.0.3
crsctl query crs activeversion node3
crsctl query crs softwareversion node1    ← the software version should show 11.1.0.6
crsctl query crs softwareversion node2    ← the software version should show 11.1.0.6
crsctl query crs softwareversion node3

 

 

[Screenshot: 23.JPG]

 

 

On Node3:

 

·         Stop the database instance labdb3, the ASM instance, and the nodeapps.

·         Relocate any database or cluster service resources to the other nodes.

·         Stop and disable the CRS on this node.

·         Reboot the node to clean up any Oracle-related processes on this node.

·         Enable the CRS after the node comes back up.

·         Run the preupdate.sh script from the 11gR1 clusterware media to unlock the CRS_HOME (so that the oracle user can modify some of the root-owned files).

·         Start the runInstaller from the 11gR1 Clusterware software directory.

 

[Screenshot: 24.JPG]

 

srvctl stop instance -d labdb -i labdb3
srvctl stop asm -n node3
srvctl stop nodeapps -n node3
crs_relocate ora.labdb.oltp.cs
crsctl stop crs
crsctl disable crs
reboot

 

crsctl enable crs
crsctl check crs
/home/oracle/11gR1/clusterware/upgrade/preupdate.sh -crshome /u01/app/oracle/crs -crsuser oracle -noshutdown

./runInstaller -ignoreSysPrereqs
/u01/app/oracle/crs/install/rootupgrade

 

Verify the CRS

 

crsctl check crs (on all the RAC nodes)

crsctl query crs activeversion node1

crsctl query crs activeversion node2

crsctl query crs activeversion node3

crsctl query crs softwareversion node1

crsctl query crs softwareversion node2

crsctl query crs softwareversion node3

 

Both the active version and the software version should show 11.1.0.6 on all nodes at this stage.

 

 

[Screenshot: 34.JPG]

 

HERE are the detailed screenshots of upgrading 10gR2 CRS to 11gR1 CRS.

 

Upgrade 10gR2 ASM to 11gR1 ASM:

 

·         Start the runInstaller from the 11gR1 database software Install Media and select the “Install Software Only” Configuration Option on the OUI screen.

 

./runInstaller -ignoreSysPrereqs   ← from the 11gR1 database software

 

[Screenshot: 11.JPG]

 

Run root.sh on all the RAC nodes to finish the installation.

 

Invoke DBUA from the new 11gR1 ASM_HOME to upgrade the 10gR2 ASM to 11gR1 ASM. Upgrading the ASM instances essentially amounts to restarting the ASM instances and ASM listeners from the newly installed 11gR1 ASM_HOME. DBUA simply copies the ASM pfile and password file from the 10gR2 dbs directory to the 11gR1 dbs directory, along with the tnsnames.ora and listener.ora files. At the end, it modifies the ASM-related CRS resources (the ASM instance resources and all listeners registered from the 10gR2 ASM_HOME) to start from the new 11gR1 ASM_HOME, by pointing their ACTION_SCRIPT parameter to the 11gR1 racgwrap.

 

export ORACLE_HOME=/u01/app/oracle/asm11gr1

export ORACLE_SID=+ASM1

/u01/app/oracle/asm11gr1/bin/dbua

 

[Screenshot: 19.JPG]
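
Once DBUA completes, a quick sanity check that ASM now runs from the new home (a sketch; the SYSASM role is new in 11g):

srvctl status asm -n node1
srvctl status asm -n node2
srvctl status asm -n node3

export ORACLE_HOME=/u01/app/oracle/asm11gr1
export ORACLE_SID=+ASM1
sqlplus / as sysasm
SQL> select instance_name, version, status from v$instance;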

 

HERE are the detailed screenshots of upgrading 10gR2 ASM to 11gR1 ASM.

 

Issues Encountered and Troubleshooting:

 

Issue #1:

 

During the ASM upgrade, DBUA prompted me to migrate the DB listeners running from the existing 10gR2 database home to the 11gR1 ASM home. I did not want the DB listener migrated to the ASM home, but DBUA would not move ahead when I pressed “No”. So I decided to go forward with the DB listener migration, and at the end of the ASM upgrade I had to migrate this DB listener back to the 10gR2 DB home manually.

 

[Screenshot: 15.JPG]

 

Here’s how the DB listener LAB_LISTENER was moved back to 10gR2 DB HOME from 11gR1 ASM_HOME.

 

I made sure that the 10gR2 TNS_ADMIN directory still had the listener.ora and tnsnames.ora files with the LAB_LISTENER entries, then updated the CRS resources related to LAB_LISTENER, as shown below, to point their ACTION_SCRIPT back to /u01/app/oracle/asm/bin/racgwrap.

 

 

Generate the CAP files:

 

cd /u01/app/oracle/crs/crs/public
/u01/app/oracle/crs/bin/crs_stat -p ora.node1.LAB_LISTENER_NODE1.lsnr > ora.node1.LAB_LISTENER_NODE1.lsnr.cap
/u01/app/oracle/crs/bin/crs_stat -p ora.node2.LAB_LISTENER_NODE2.lsnr > ora.node2.LAB_LISTENER_NODE2.lsnr.cap
/u01/app/oracle/crs/bin/crs_stat -p ora.node3.LAB_LISTENER_NODE3.lsnr > ora.node3.LAB_LISTENER_NODE3.lsnr.cap

 

The ACTION_SCRIPT parameter was modified in these cap files.

 

Replaced the line…

 

ACTION_SCRIPT=/u01/app/oracle/asm11gr1/bin/racgwrap

 

WITH…

 

ACTION_SCRIPT=/u01/app/oracle/asm/bin/racgwrap
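
One way to make this edit across all three CAP files at once (a sketch using sed; verify the files before re-registering):

cd /u01/app/oracle/crs/crs/public
sed -i 's|^ACTION_SCRIPT=/u01/app/oracle/asm11gr1/bin/racgwrap|ACTION_SCRIPT=/u01/app/oracle/asm/bin/racgwrap|' \
    ora.node?.LAB_LISTENER_NODE?.lsnr.cap
grep ACTION_SCRIPT ora.node?.LAB_LISTENER_NODE?.lsnr.cap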

 

I then ran the commands below to update the OCR with these changes.

 

/u01/app/oracle/crs/bin/crs_register -u ora.node1.LAB_LISTENER_NODE1.lsnr
/u01/app/oracle/crs/bin/crs_register -u ora.node2.LAB_LISTENER_NODE2.lsnr
/u01/app/oracle/crs/bin/crs_register -u ora.node3.LAB_LISTENER_NODE3.lsnr

 

 

Issue #2:

 

During the post-upgrade steps at the end of the upgrade process, DBUA could not start ASM on node2, and it terminated with the error below in the trace.log file.

 

[Screenshot: 18.JPG]

 

I tried to start the ASM instance manually and it worked just fine, but the OCR still showed an “UNKNOWN” status for the resource; even crs_start/crs_stop could not change the status from UNKNOWN. This suggested that the ASM resource entry for node2 in the OCR had been logically corrupted by DBUA. I had to unregister the entry and register it back with the same parameters to fix the issue. After that, ASM could be started and stopped successfully with srvctl.

 

Generate the CAP file for ASM resource on node2.

 

ora.node2.ASM2.asm.cap:

 

NAME=ora.node2.ASM2.asm

TYPE=application

ACTION_SCRIPT=/u01/app/oracle/asm11gr1/bin/racgwrap

ACTIVE_PLACEMENT=0

AUTO_START=1

CHECK_INTERVAL=300

DESCRIPTION=CRS application for ASM instance

FAILOVER_DELAY=0

FAILURE_INTERVAL=0

FAILURE_THRESHOLD=0

HOSTING_MEMBERS=node2

OPTIONAL_RESOURCES=

PLACEMENT=restricted

REQUIRED_RESOURCES=

RESTART_ATTEMPTS=5

SCRIPT_TIMEOUT=600

START_TIMEOUT=900

STOP_TIMEOUT=180

UPTIME_THRESHOLD=7d

USR_ORA_ALERT_NAME=

USR_ORA_CHECK_TIMEOUT=0

USR_ORA_CONNECT_STR=/ as sysasm

USR_ORA_DEBUG=0

USR_ORA_DISCONNECT=false

USR_ORA_FLAGS=

USR_ORA_IF=

USR_ORA_INST_NOT_SHUTDOWN=

USR_ORA_LANG=

USR_ORA_NETMASK=

USR_ORA_OPEN_MODE=mount

USR_ORA_OPI=false

USR_ORA_PFILE=

USR_ORA_PRECONNECT=none

USR_ORA_SRV=

USR_ORA_START_TIMEOUT=0

USR_ORA_STOP_MODE=immediate

USR_ORA_STOP_TIMEOUT=0

USR_ORA_VIP=

 

Unregister the ASM instance resource from OCR for node2

 

srvctl remove instance -d labdb -i labdb2
crs_unregister ora.node2.ASM2.asm

 

Registered the same ASM resource back to OCR

 

crs_register ora.node2.ASM2.asm
srvctl add instance -d labdb -i labdb2 -n node2
srvctl modify instance -d labdb -i labdb2 -s +ASM2
srvctl modify service -d labdb -s oltp -n -i "labdb1,labdb2,labdb3"
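
After re-registering, a quick check that the resource behaves again (a sketch):

srvctl start asm -n node2
srvctl status asm -n node2
crs_stat -t ora.node2.ASM2.asm      # the state should now show ONLINE rather than UNKNOWN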

 

 

Upgrade 10gR2 database to 11gR1 Database:

 

·         Start the runInstaller from the 11gR1 database software Install Media and select the “Install Software Only” Configuration Option on the OUI screen.

·         Invoke the dbua to upgrade the database.

·         There may be warnings about INVALID objects and Stale Statistics which can safely be ignored.

 

./runInstaller -ignoreSysPrereqs   ← from the 11gR1 database software

 

[Screenshot: 04.JPG]

 

Invoke DBUA from the newly installed 11gR1 database home in an X terminal session:

 

/u01/app/oracle/db11gr1/bin/dbua

 

DBUA invalidates some of the objects in the database during the upgrade, so it runs the utlrp.sql script at the end, as part of the post-upgrade steps, to recompile them. I start running this script manually when DBUA’s progress reaches about 75%, so that when DBUA runs it to recompile the INVALID objects, fewer objects remain to compile, which reduces the overall upgrade time.
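
A sketch of kicking the recompile off by hand from a second session (utlrp.sql is safe to run repeatedly):

export ORACLE_HOME=/u01/app/oracle/db11gr1
export ORACLE_SID=labdb1
sqlplus / as sysdba
SQL> @?/rdbms/admin/utlrp.sql
SQL> select count(*) from dba_objects where status = 'INVALID';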

 

[Screenshot: 14.JPG]

 

 

[Screenshot: 15.JPG]

 

Move the listener LAB_LISTENER from 10gR2 DB home to 11gR1 HOME.

 

DBUA did not move the DB listener LAB_LISTENER from the 10gR2 home to the 11gR1 home, so I had to move it manually, as shown below.

 

 

Copy the 10gR2 listener.ora and tnsnames.ora files to the 11gR1 TNS_ADMIN

 

ssh node1 cp /u01/app/oracle/db/network/admin/listener.ora /u01/app/oracle/db11gr1/network/admin

ssh node1 cp /u01/app/oracle/db/network/admin/tnsnames.ora /u01/app/oracle/db11gr1/network/admin

ssh node2 cp /u01/app/oracle/db/network/admin/listener.ora /u01/app/oracle/db11gr1/network/admin

ssh node2 cp /u01/app/oracle/db/network/admin/tnsnames.ora /u01/app/oracle/db11gr1/network/admin

ssh node3 cp /u01/app/oracle/db/network/admin/listener.ora /u01/app/oracle/db11gr1/network/admin

ssh node3 cp /u01/app/oracle/db/network/admin/tnsnames.ora /u01/app/oracle/db11gr1/network/admin

 

Generate the CAP files

 

cd /u01/app/oracle/crs/crs/public

 

crs_stat -p ora.node1.LAB_LISTENER_NODE1.lsnr > ora.node1.LAB_LISTENER_NODE1.lsnr.cap

crs_stat -p ora.node2.LAB_LISTENER_NODE2.lsnr > ora.node2.LAB_LISTENER_NODE2.lsnr.cap

crs_stat -p ora.node3.LAB_LISTENER_NODE3.lsnr > ora.node3.LAB_LISTENER_NODE3.lsnr.cap

 

Modify the ACTION_SCRIPT parameter in these CAP files to point to the new 11gR1 home.
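
As before, a sed one-liner can apply the change to all three files (a sketch; the racgwrap paths follow this setup’s DB homes):

cd /u01/app/oracle/crs/crs/public
sed -i 's|^ACTION_SCRIPT=/u01/app/oracle/db/bin/racgwrap|ACTION_SCRIPT=/u01/app/oracle/db11gr1/bin/racgwrap|' \
    ora.node?.LAB_LISTENER_NODE?.lsnr.cap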

 

Update the OCR with these changes

 

crs_register -u ora.node1.LAB_LISTENER_NODE1.lsnr

crs_register -u ora.node2.LAB_LISTENER_NODE2.lsnr

crs_register -u ora.node3.LAB_LISTENER_NODE3.lsnr

 

Restart the listeners using srvctl

 

srvctl stop listener -n node1 -l LAB_LISTENER_NODE1
srvctl stop listener -n node2 -l LAB_LISTENER_NODE2
srvctl stop listener -n node3 -l LAB_LISTENER_NODE3

srvctl start listener -n node1 -l LAB_LISTENER_NODE1
srvctl start listener -n node2 -l LAB_LISTENER_NODE2
srvctl start listener -n node3 -l LAB_LISTENER_NODE3
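
To confirm the listeners now run out of the 11gR1 home, check the listener processes and status (binary path per this setup):

ssh node1 "ps -ef | grep tnslsnr | grep -v grep"
/u01/app/oracle/db11gr1/bin/lsnrctl status LAB_LISTENER_NODE1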

 

 

HERE are the detailed screenshots of upgrading the database from 10gR2 RAC to 11gR1 RAC.

 


 

 
