Wednesday, March 20, 2013

Create Restore Point and Recover the database using Restore Point


To create a flashback (guaranteed) restore point, the database must have a Fast Recovery Area (FRA) configured and Flashback Database must be turned on.
Check whether Flashback is turned on with the following:
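For example, from SQL*Plus (the output below is just illustrative; YES means Flashback Database is already enabled):

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO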

Enabling Flashback Database

Step 1. Set the Flashback-related initialization parameters (FRA location and size, and the flashback retention target).
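For example (the +FRA disk group, 100G size, and 24-hour retention below are placeholders; adjust for your environment):

SQL> alter system set db_recovery_file_dest_size=100G scope=both sid='*';
SQL> alter system set db_recovery_file_dest='+FRA' scope=both sid='*';
SQL> alter system set db_flashback_retention_target=1440 scope=both sid='*';
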
Step 2. Shut down the database.
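For a RAC database, for example (MYRACDB is a placeholder database name):

$ srvctl stop database -d MYRACDB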


Step 3. Start the database in MOUNT mode (on one node) and turn Flashback on.
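For example, from SQL*Plus on that node:

SQL> startup mount;
SQL> alter database flashback on;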

Step 4. Make sure Flashback is turned on, then shut down the instance.
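For example:

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL> shutdown immediate;
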
Step 5. Start up the RAC instances.
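For example (again, MYRACDB is a placeholder):

$ srvctl start database -d MYRACDB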

Create Restore Point
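A minimal sketch (the restore point name BEFORE_UPGRADE is just an example):

SQL> create restore point BEFORE_UPGRADE guarantee flashback database;

SQL> select name, guarantee_flashback_database, time from v$restore_point;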


Recover Database with Restore Point
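A sketch of flashing back to the restore point created above (stop all RAC instances, mount one instance, flash back, open with RESETLOGS, then start the remaining instances; MYRACDB and BEFORE_UPGRADE are placeholders):

$ srvctl stop database -d MYRACDB

SQL> startup mount;
SQL> flashback database to restore point BEFORE_UPGRADE;
SQL> alter database open resetlogs;

$ srvctl start database -d MYRACDB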


PRVG-11050 : No matching interfaces "bond0" for subnet "90.xxx.127.0" on nodes "myrac1,myrac2,myrac3,myrac4"


I got the errors below while running the pre-upgrade checks before upgrading from 11gR1 (11.1.0.7) to 11gR2 (11.2.0.3):

PRVG-11050 : No matching interfaces "bond0" for subnet "90.xxx.127.0" on nodes "myrac1,myrac2,myrac3,myrac4"

Check: Node connectivity for interface "bond0"
Result: Node connectivity failed for interface "bond0"

Check the following on one of the nodes.

myrac1:/usr/local/opt/oracle/ $ oifcfg iflist -p -n
bond0 90.xxx.126.0 UNKNOWN 255.255.254.0
bond1 172.29.70.0 PRIVATE 255.255.255.0

myrac1:/usr/local/opt/oracle/ $ oifcfg getif
bond0 90.xxx.127.0 global public
bond1 172.29.70.0 global cluster_interconnect

Notice that the subnet reported for bond0 by oifcfg getif (90.xxx.127.0)
is different from the one reported by oifcfg iflist -p -n (90.xxx.126.0).

To solve the issue, work with your network admin to find out the correct subnet for bond0 and update it.
In our case it should have been 90.xxx.126.0.

Login as ROOT

# oifcfg delif -global bond0/90.xxx.127.0

# oifcfg setif -global bond0/90.xxx.126.0:public

Running the above should solve the issue.
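Afterwards, you can re-run oifcfg getif to confirm (in our case the expected output would be):

# oifcfg getif
bond0 90.xxx.126.0 global public
bond1 172.29.70.0 global cluster_interconnect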

Monday, March 11, 2013

RMAN - unregister database from recovery catalog

First, find the DB_KEY and DB_ID of the database you want to unregister by querying the recovery catalog schema (RMANCAT in this example):

SQL> select db_key, db_name, reset_time, dbinc_status from RMANCAT.DBINC where db_name = 'MYRACDB' ;

DB_KEY DB_NAME RESET_TIME DBINC_ST
---------- -------- ----------- --------
91068668 MYRACDB 17-nov-2010 PARENT
91068668 MYRACDB 12-mar-2008 PARENT
91068668 MYRACDB 02-may-2011 CURRENT
91068668 MYRACDB 28-apr-2011 ORPHAN
91068668 MYRACDB 27-apr-2011 ORPHAN
91068668 MYRACDB 27-apr-2011 ORPHAN
91068668 MYRACDB 28-apr-2011 ORPHAN

7 rows selected.

SQL> select db_key, db_id from RMANCAT.DB where DB_KEY=91068668;

DB_KEY DB_ID
---------- ----------
91068668 232532794

Now log in to RMAN, connecting to the recovery catalog:

${ORACLE_HOME}/bin/rman catalog rmancat/cat@rmandb.world.com


RMAN>
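Then unregister the database. A minimal sketch (the confirmation prompt text is approximate):

RMAN> unregister database MYRACDB;

database name is "MYRACDB" and DBID is 232532794
Do you really want to unregister the database (enter YES or NO)? YES
database unregistered from the recovery catalog

Alternatively (and this is where the DB_KEY and DB_ID queried above come in), the catalog owner can run the DBMS_RCVCAT procedure from SQL*Plus:

SQL> exec dbms_rcvcat.unregisterdatabase(91068668, 232532794);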

Monday, March 4, 2013

Database runInstaller "Nodes Selection" Window Does not Show RAC Nodes


Oracle Clusterware (CRS or GI) is up and running, as confirmed by $CRS_HOME/bin/crsctl check crs on all nodes, and $CRS_HOME/bin/olsnodes -n shows all the nodes, but the database runInstaller does not show all cluster nodes.

There might be an issue with the Inventory for Clusterware home.

Look at the inventory.xml file under the oraInventory/ContentsXML directory.
It should show CRS="true" against the correct CRS or GI home, and only one entry should have CRS="true", even if there are multiple (older) CRS or GI homes listed.
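A correct entry looks roughly like this (the HOME NAME and IDX values are placeholders; the location and node names follow the command shown below):

<HOME NAME="Ora11g_gridinfrahome1" LOC="/opt/app/oragrid/oracle/product/11.2.0.3" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="myracd1"/>
      <NODE NAME="myracd2"/>
   </NODE_LIST>
</HOME>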

Do not update the inventory.xml manually. Use the below commands to fix the issue.

$GRID_HOME/oui/bin/runInstaller -silent -ignoreSysPrereqs -updateNodeList ORACLE_HOME="/opt/app/oragrid/oracle/product/11.2.0.3" LOCAL_NODE="myracd1" CLUSTER_NODES="{myracd1,myracd2}" CRS=true

Change the LOCAL_NODE to point to the node from where you are running the command. This needs to be run from every node where you want the inventory.xml updated.

Sometimes another, older CRS home entry also has CRS="true", as in the example below.
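(A hypothetical stale entry; the HOME NAME and IDX are placeholders, and the location matches the command further below.)

<HOME NAME="OraCrs11g_home1" LOC="/opt/app/t5cim1d/oracle/product/crs" TYPE="O" IDX="2" CRS="true"/>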



In that case, use the below command to set it to false:

$GRID_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME="/opt/app/t5cim1d/oracle/product/crs" CRS=false

Oracle Notes 1327486.1 and 1053393.1 have more details.

Sunday, March 3, 2013

Test Box


select job, to_char(LAST_DATE,'YYYYMMDD HH24:MI:SS'), to_char(NEXT_DATE,'YYYYMMDD HH24:MI:SS')
from dba_jobs
where NEXT_DATE < sysdate;

select job, what, log_user, priv_user
from dba_jobs
where job = ;




Friday, March 1, 2013

11gR1 to 11gR2 Upgrade: cluvfy tool found some mandatory patches are not installed


The cluvfy tool found some mandatory patches are not installed.
These patches need to be installed before the upgrade can proceed.
The pre-upgrade checks failed, aborting the upgrade

The above error is sometimes misleading. Look at the log files under
$GRID_HOME/cfgtoollogs/crsconfig and re-run the cluvfy command listed there manually, removing the "-_patch_only" flag, for example:


/bin/su oragrid -c ' /opt/app/oragrid/oracle/product/11.2.0.3/bin/cluvfy stage -pre crsinst -n myrac1,myrac2 -upgrade -src_crshome /opt/app/oracle/product/crs -dest_crshome /opt/app/oragrid/oracle/product/11.2.0.3 -dest_version 11.2.0.3.0 '
If the above comes back without issues, then the problem is the ORA_CRS_HOME environment variable. If it is set in the session from which you are running rootupgrade.sh, you will see the above error.

Unset ORA_CRS_HOME and re-run rootupgrade.sh; it should finish without errors.
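For example (a sketch; the path is the new Grid home from the cluvfy command above):

# unset ORA_CRS_HOME
# /opt/app/oragrid/oracle/product/11.2.0.3/rootupgrade.sh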

Oracle Note 1498538.1 has more details about it as well.