Monday, September 28, 2009

Flashback Table to a time in the past


Scenario: Someone accidentally deleted some data from a database and now wants to recover the erroneously deleted data.

Solution: You can flash back the table to a particular point in time.
(Flashback Table uses undo segments to retrieve data, so success depends on whether the required undo data is still available.)

Log in as the schema owner and enable row movement.

SQL> alter table EMPLOYEE enable row movement;

Get the timestamp to which you want to go back, and then flash back the table:

SQL> flashback table EMPLOYEE to timestamp to_timestamp('Jan 15 2009 10:00:00','Mon DD YYYY HH24:MI:SS');
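
Before running the flashback, you can verify that the deleted rows are still visible in undo by querying the table as of the same timestamp (a quick sanity check; the table name and timestamp are the same illustrative values used above):

SQL> select count(*) from EMPLOYEE as of timestamp to_timestamp('Jan 15 2009 10:00:00','Mon DD YYYY HH24:MI:SS');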

Find all files having the string in Linux


To find all files containing the string "SPECIALMAIL":

find . -type f -exec grep -i -l "SPECIALMAIL" {} \;

-type f => Regular files only (skips directories)

-i => Ignore Case

-l => List file names only

DataPump Command EXCLUDE/INCLUDE/REMAP_SCHEMA


Export the schema but leave two of the big tables out:

expdp scott/tiger DIRECTORY=DATA_PUMP dumpfile=scott%u.dmp filesize=5G JOB_NAME=SCOTT_J1 SCHEMAS=SCOTT EXCLUDE=TABLE:\"IN \(\'EMPLOYEE\', \'DEPT\'\)\"

You exported the SCOTT schema and now want to import some of the tables into a different schema (SMITH) and into different tablespaces:

impdp SMITH/PASSWORD directory=data_pump dumpfile=scott%u.dmp REMAP_SCHEMA=SCOTT:SMITH REMAP_TABLESPACE=SCOTT_DATA:SMITH_DATA REMAP_TABLESPACE=SCOTT_IDX:SMITH_IDX TABLES=TABLE1,TABLE2,TABLE3
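
INCLUDE works the same way as EXCLUDE but in reverse. As a rough sketch (the dump file and job names here are only examples), to export just those two tables instead of excluding them:

expdp scott/tiger DIRECTORY=DATA_PUMP dumpfile=scott_tabs%u.dmp filesize=5G JOB_NAME=SCOTT_J2 SCHEMAS=SCOTT INCLUDE=TABLE:\"IN \(\'EMPLOYEE\', \'DEPT\'\)\"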

'gcs log flush sync' resolution


1) You fired an update statement on Instance-2.
2) However, the desired blocks were held by Instance-1, so the request for them went to Instance-1 and Instance-2 waited on 'gc cr request'.
3) Instance-1 had the requested blocks, but before it can ship them to Instance-2 it needs to flush the changes made to the current block to the redo logs on disk. Until this flush completes, the LMS process on Instance-1 waits on the event 'gcs log flush sync' (and Instance-2 keeps waiting on the global cache request).

The main cause of the wait event 'gcs log flush sync' is poor redo log I/O performance.

To avoid this problem you can:
1) Improve the redo log I/O performance.
2) Set the underscore parameter "_cr_server_log_flush" = false (as with any hidden parameter, only do this after confirming with Oracle Support).
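
Before changing anything, it is worth confirming how much time each instance is actually spending on these events. A minimal check using the GV$ views (event names as above):

select inst_id, event, total_waits, time_waited
from gv$system_event
where event in ('gcs log flush sync', 'gc cr request')
order by inst_id, time_waited desc;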

Performance - Isolating Waits in a RAC environment


Determine the snap IDs you are interested in

For example, to obtain a list of snap IDs from the previous day, execute the following SQL:

SQL> SELECT snap_id, begin_interval_time FROM dba_hist_snapshot WHERE TRUNC(begin_interval_time) = TRUNC(sysdate-1) ;

Step 1 :
--------
Identify the Wait Class

select wait_class_id, wait_class, count(*) cnt
from dba_hist_active_sess_history
where snap_id between &1 and &2
group by wait_class_id, wait_class
order by 3;
WAIT_CLASS_ID WAIT_CLASS CNT
2723168908 Idle 1
3290255840 Configuration 9
3386400367 Commit 90
4108307767 System I/O 149
3875070507 Concurrency 182
1740759767 User I/O 184
1893977003 Other 244
4217450380 Application 365
2000153315 Network 475
[NULL] [NULL] 916
3871361733 Cluster 1844

Step 2
-------
Identify the event_id associated with the above wait class ID
select event_id, event, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232 and wait_class_id=3871361733
group by event_id, event
order by 3;
EVENT_ID EVENT COUNT(*)
1742950045 gc current retry 1
3897775868 gc current multi block request 1
512320954 gc cr request 4
661121159 gc cr multi block request 9
2685450749 gc current grant 2-way 11
3201690383 gc cr grant 2-way 18
1457266432 gc current split 27
3046984244 gc cr block 3-way 41
111015833 gc current block 2-way 62
3570184881 gc current block 3-way 62
737661873 gc cr block 2-way 67
2277737081 gc current grant busy 95
1520064534 gc cr block busy 235
2701629120 gc current block busy 396
1478861578 gc buffer busy 815

Step 3
-------
Identify the SQL_ID associated with the above event_id
select 'gc buffer busy' ,sql_id, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (1478861578)
group by sql_id having count(*) > 55
UNION
select 'gc current block busy',sql_id, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (2701629120)
group by sql_id having count(*) > 55
UNION
select 'gc cr block busy',sql_id, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (1520064534)
group by sql_id having count(*) > 55
order by 2 ;

Wait Event SQL ID waits
--------------------------------------------------------------------------------------
gc buffer busy 5qwhj3nru2jtq 765
gc current block busy 5qwhj3nru2jtq 332

Step 4 :
--------
Identify the SQL statement associated with the above SQL ID
select sql_id,sql_text from dba_hist_sqltext where sql_id in ('5qwhj3nru2jtq');
Output:
INSERT INTO Component_attrMap (Component_id, key, value) VALUES (:1, :2, :3)
Step 5 :
--------
Identify the object associated with the above statement
select current_obj#, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (1478861578,2701629120) and sql_id='5qwhj3nru2jtq'
group by current_obj#
order by 2;

Obj # Count(*)
67818 1
67988 1096

Step 6 :
-------
Identify the Object associated with the above Object ID
select object_id, owner, object_name, subobject_name, object_type from dba_objects
where object_id in (67988);
OBJECT_ID OWNER OBJECT_NAME   OBJECT_TYPE
--------- ----- ------------- -----------
67988     SCOTT COMP_ID_INDX1 INDEX

In this case creating a REVERSE KEY index provided the required solution.
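
As a rough sketch only (the indexed column is assumed from the INSERT statement above; check the actual index definition before doing this), the existing index could be rebuilt as a reverse key index:

SQL> alter index SCOTT.COMP_ID_INDX1 rebuild reverse;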

Friday, September 25, 2009

Copy CRS Home from one node to another


If for some reason your CRS_HOME is messed up on one node and you are unable to bring up the 'crs', you can copy the CRS_HOME from the working node onto the node with the non-working CRS_HOME.

tar up the CRS_HOME on the working node, copy the tarball to the non-working node, un-tar it there, and make the following change.
In the file $CRS_HOME/inventory/ContentsXML/oraclehomeproperties.xml, look for LOCAL_NODE NAME and change it to the name of the node where you un-tar'ed the CRS_HOME:

<CLUSTER_INFO>
   <LOCAL_NODE NAME="rklx2"/>
   <NODE_LIST>
      <NODE NAME="rklx1"/>
      <NODE NAME="rklx2"/>
   </NODE_LIST>
</CLUSTER_INFO>

Also make sure that under /etc/init.d the following files have fully qualified CRS and ORACLE home paths; if not, replace the variables with the actual values (a quick check is shown after the list).

init.cssd
init.evmd
init.crsd
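
A quick way to eyeball those files (paths are illustrative; adjust to your environment):

grep -E 'ORA_CRS_HOME|ORACLE_HOME' /etc/init.d/init.cssd /etc/init.d/init.evmd /etc/init.d/init.crsd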

How to Manually remove the OEM Agent


Purpose and Scope: You are trying to de-install the old agent and re-install a newer version of the Agent, and you are running into various issues.

Best Method:
1) Stop the Agent (all nodes in case of RAC)
2) Go to OEM Grid Control and remove all targets associated with the host/database to be removed, which includes databases, listeners, hosts, agents.
3) Verify from OEM Grid Control that the targets you were trying to remove are completely gone. Look under Targets => All Targets, search for the name, and you should NOT see any results.
4) Go to the $AGENT_HOME and remove Agent home from Linux/Unix box.
a) $ rm -rf agent10g
b) $ rm -rf agent10gInventory
c) $ rm -f $TNS_ADMIN/oraInst.agent10g..loc

After the above cleanup, a re-install of the agent should work without issues. But if for some reason it does not, try the following steps to manually remove the agent.

You can also remove the Agent by following the Metalink Note:436679.1 on
How to Remove an Orphaned Target Using the EMDiag Kit 10.2

The above should do the trick, but sometimes the agent is not completely removed from the repository, and when you install the newer version it gives various errors.

The following method should remove it completely

1) Log in to oemdb as SYS and run:

select * from sysman.mgmt_targets_delete
where delete_complete_time is NULL
order by target_name;

This lists all target deletions that did not complete for some reason (normally delete_complete_time would show the timestamp of when the target was removed).

2) Find all targets that belong to the system you are having issues with, for example:
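
A minimal sketch (the target name pattern is only illustrative) to list them from the repository:

select target_name, target_type
from sysman.mgmt_targets
where target_name like '%rklx1%'
order by target_type, target_name;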

3) Run the following to complete the deletion:

exec mgmt_admin.delete_target('rklx1_rk_crs','cluster');

where 'rklx1_rk_crs' is the cluster name (the target_name column from the earlier query) and 'cluster' is the target_type.

4) If this doesn't work, then do this:
Log in to oemdb and run:

alter index SYSMAN.MGMT_STRING_METRIC_HISTORY_PK rebuild;

5) Then remove it using this procedure:
exec mgmt_admin.delete_target_internal('rklx1_rk_crs','cluster');

For the agent itself you could try:

exec mgmt_admin.cleanup_agent('host.domain:3872');