Thursday, November 5, 2009

Oracle Application Express (APEX) & the Embedded PL/SQL Gateway (EPG) in an 11g Database


After installing Oracle 11g, do the following to configure APEX:

1. Run apxconf.sql from $ORACLE_HOME/apex.
2. When prompted, enter the port for the Oracle XML DB HTTP server. The default port number is 8080.
3. Unlock the ANONYMOUS user:

SQL> ALTER USER ANONYMOUS ACCOUNT UNLOCK;

You should be able to log in to APEX as the ADMIN user from a browser using -> http://machine.domain:port/apex

The machine is the DB host and the port is the one entered during the configuration step.

If you get an error and can't log in, verify the EPG is up by running the following in your browser ->

http://machine.domain:port

If it's up, you should be prompted for a username and password for XDB.

If the EPG is not up, do the following to start it:

1. Log in as SYS as SYSDBA
2. Run the following statements, where port is the PL/SQL gateway port:

EXEC DBMS_XDB.SETHTTPPORT(port);
COMMIT;

For example:
EXEC DBMS_XDB.SETHTTPPORT(8080);
COMMIT;
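
To confirm which port the EPG is actually listening on, you can query it back (a quick sanity check):

SQL> SELECT DBMS_XDB.GETHTTPPORT FROM DUAL;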

Monday, November 2, 2009

How to find size of LOB


The following query lists LOB segments larger than 0.5 GB, with their owning table, column, and data type:

Select b.table_name, b.column_name, c.data_type, a.segment_name, a.size_gb
from
  (Select segment_name, bytes/(1024*1024*1024) size_gb
     from user_segments
    where bytes/(1024*1024*1024) > 0.5) a,
  (Select table_name, column_name, segment_name
     from user_lobs) b,
  (Select table_name, column_name, data_type
     from user_tab_columns
    where data_type in ('CLOB','BLOB','LONG','LONG RAW')) c
where a.segment_name = b.segment_name
  and b.table_name = c.table_name
  and b.column_name = c.column_name
order by c.data_type
/
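
The query above sizes the LOB segments themselves. For a view of how much space the LOB data actually uses, DBMS_LOB.GETLENGTH can be summed instead; a sketch, assuming a table MYDOCS with a BLOB column DOC (both hypothetical names; for a CLOB the result is in characters, not bytes):

Select round(sum(dbms_lob.getlength(doc))/1024/1024, 2) mb_used
from mydocs;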

Monday, October 26, 2009

RMAN Backup on the Standby Database


Running RMAN Backup on the Standby Database

We can put the standby database to good use by running RMAN backups there, on top of all the other reasons we have the standby database in place. A few points to keep in mind:

. This applies when your standby database is a physical standby and you are taking backups ONLY on the physical standby database.

. The data file directories on the primary and standby database are identical.

. An RMAN recovery catalog is required. Since the standby database has the same DBID as the primary database and is always from the same incarnation, the RMAN datafile backups are interchangeable.

. RMAN connects to the standby database as the target database. The backups taken there can be used to restore the primary database.

. The primary database should not use Oracle Managed Files (OMF) for this to work; with OMF, the file names on the primary and standby could differ.

Configuration required on both the primary and standby database:

. Configure the Flash Recovery Area
. Use an SPFILE
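
As a minimal sketch of what a backup session on the standby might look like (the net service names standby and rcat are assumptions, not from the original post):

$ rman target sys@standby catalog rcat_owner@rcat
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;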

Friday, October 23, 2009

Split the file in two


I have a file with 10 lines and want to split it in two: even-numbered lines in one file and odd-numbered lines in the other. Each sed command below prints a line and then skips the next, starting from line 2 and line 1 respectively.

sed -n '2,${p;n;}' stat1.sql > even.sql
sed -n '1,${p;n;}' stat1.sql > odd.sql
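
The same split can be done with awk, which some may find easier to read (a sketch against the same stat1.sql input):

awk 'NR % 2 == 0' stat1.sql > even.sql
awk 'NR % 2 == 1' stat1.sql > odd.sql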

Tuesday, October 6, 2009

Update table and commit every n rows


Declare

x NUMBER ;     -- highest EMPID in the table
v_min NUMBER ; -- lower bound of the current batch
v_max NUMBER ; -- upper bound (exclusive) of the current batch

begin

select max(EMPID) into x from EMPLOYEE ;

v_min := 0 ;
v_max := 25000 ;

loop
update EMPLOYEE set CIO_NAME = 'JOHN' where EMPID >= v_min and EMPID < v_max ;
commit ; -- commit after each batch of 25,000 ids
v_min := v_min+25000 ;
v_max := v_min+25000 ;
if x is null or v_min > x then -- stop when the table is empty or every EMPID up to MAX(EMPID) has been processed
dbms_output.put_line('All rows updated successfully ....') ;
exit ;
end if ;
end loop ;

Exception When others then
dbms_output.put_line('Error occurred: '||SQLERRM) ;
raise ; -- re-raise so the failure is not silently swallowed

end ;
/

Monday, October 5, 2009

Sequence cache misses were consuming significant database time


When inserts are slow, you will often come across the finding "Sequence cache misses were consuming significant database time" in the AWR report.

Try increasing the cache size of the sequence, and use NOORDER if you are running a RAC database. A larger cache improves insert performance because the instance has to fetch a new range of sequence values from the data dictionary far less often.
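
For example (the sequence name and cache value are placeholders; size the cache to the insert rate):

SQL> ALTER SEQUENCE scott.emp_seq CACHE 1000 NOORDER;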

More details to follow on this topic ......

Monday, September 28, 2009

Flashback Table to a time in the past


Scenario: Someone accidentally deleted data from a database and now wants to get back the erroneously deleted rows.

Solution: You can flash back the table to a particular point in time.
(Flashback Table retrieves data from undo segments, so everything depends on whether the required undo is still available.)

Log in as the schema owner and enable row movement.

SQL> alter table EMPLOYEE enable row movement;

Get the timestamp to which you want to go back, and then:

SQL> flashback table EMPLOYEE to timestamp to_timestamp('Jan 15 2009 10:00:00','Mon DD YYYY HH24:MI:SS');
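
How far back you can flash back depends on how much undo is retained. A quick way to see the configured retention target, in seconds (a guide only, since unexpired undo can still be overwritten under space pressure):

SQL> show parameter undo_retention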

Find all files having the string in Linux


To Find all files having the string "SPECIALMAIL"

find . -type f -exec grep -i -l "SPECIALMAIL" {} \;

-type f => Only search regular files

-i => Ignore case

-l => List file names only
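
If your grep supports recursive search (GNU grep does), the same result comes from a single command:

grep -ril "SPECIALMAIL" .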

DataPump Command EXCLUDE/INCLUDE/REMAP_SCHEMA


Export the schema but leave two of the big tables out.

expdp scott/tiger DIRECTORY=DATA_PUMP dumpfile=scott%u.dmp filesize=5G JOB_NAME=SCOTT_J1 SCHEMAS=SCOTT EXCLUDE=TABLE:\"IN \(\'EMPLOYEE\', \'DEPT\'\)\"

You exported from the SCOTT schema and now want to import some tables into a different schema (SMITH) and into different tablespaces:

impdp SMITH/PASSWORD directory=data_pump dumpfile=scott%u.dmp REMAP_SCHEMA=SCOTT:SMITH REMAP_TABLESPACE=SCOTT_DATA:SMITH_DATA REMAP_TABLESPACE=SCOTT_IDX:SMITH_IDX TABLES=TABLE1,TABLE2,TABLE3

'gcs log flush sync' resolution


1) You fired an update statement on instance 2.
2) However, the request for the desired blocks went to instance 1, so instance 2 was waiting on 'gc cr request'.
3) Instance 1 had the requested blocks, but before it ships them to instance 2 it needs to flush the changes to those current blocks from the log buffer to the redo logs on disk. Until this is done, instance 2 waits on the event 'gcs log flush sync'.

The cause of the wait event 'gcs log flush sync' is mainly redo log I/O performance.

To avoid this problem you need to:
1) Improve the redo log I/O performance.
2) Set the underscore parameter "_cr_server_log_flush" = false (hidden parameters like this should only be changed at the direction of Oracle Support).
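
If Support does direct you to set it, the change would look something like this (a sketch; the SCOPE and SID clauses assume a RAC database running on an SPFILE):

SQL> ALTER SYSTEM SET "_cr_server_log_flush" = FALSE SCOPE=SPFILE SID='*';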

Performance - Isolating Waits in a RAC environment


Determine the snap IDs you are interested in

For example, to obtain a list of snap IDs from the previous day, execute the following SQL:

SQL> SELECT snap_id, begin_interval_time FROM dba_hist_snapshot WHERE TRUNC(begin_interval_time) = TRUNC(sysdate-1) ;

Step 1 :
--------
Identify the Wait Class

select wait_class_id, wait_class, count(*) cnt
from dba_hist_active_sess_history
where snap_id between &1 and &2
group by wait_class_id, wait_class
order by 3;

WAIT_CLASS_ID WAIT_CLASS          CNT
------------- ------------- ---------
   2723168908 Idle                  1
   3290255840 Configuration         9
   3386400367 Commit               90
   4108307767 System I/O          149
   3875070507 Concurrency         182
   1740759767 User I/O            184
   1893977003 Other               244
   4217450380 Application         365
   2000153315 Network             475
       [NULL] [NULL]              916
   3871361733 Cluster            1844

Step 2
-------
Identify the event_id values associated with the above wait class ID (here, Cluster):
select event_id, event, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232 and wait_class_id=3871361733
group by event_id, event
order by 3;
EVENT_ID EVENT COUNT(*)
1742950045 gc current retry 1
3897775868 gc current multi block request 1
512320954 gc cr request 4
661121159 gc cr multi block request 9
2685450749 gc current grant 2-way 11
3201690383 gc cr grant 2-way 18
1457266432 gc current split 27
3046984244 gc cr block 3-way 41
111015833 gc current block 2-way 62
3570184881 gc current block 3-way 62
737661873 gc cr block 2-way 67
2277737081 gc current grant busy 95
1520064534 gc cr block busy 235
2701629120 gc current block busy 396
1478861578 gc buffer busy 815

Step 3
-------
Identify the SQL_ID associated with the above event_id
select 'gc buffer busy' ,sql_id, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (1478861578)
group by sql_id having count(*) > 55
UNION
select 'gc current block busy',sql_id, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (2701629120)
group by sql_id having count(*) > 55
UNION
select 'gc cr block busy',sql_id, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (1520064534)
group by sql_id having count(*) > 55
order by 2 ;

Wait Event SQL ID waits
--------------------------------------------------------------------------------------
gc buffer busy 5qwhj3nru2jtq 765
gc current block busy 5qwhj3nru2jtq 332

Step 4 :
--------
Identify the SQL statement associated with the above SQL ID
select sql_id, sql_text from dba_hist_sqltext where sql_id in ('5qwhj3nru2jtq');

Output:
INSERT INTO Component_attrMap (Component_id, key, value) VALUES (:1, :2, :3)
Step 5 :
--------
Identify the object associated with the above statement
select current_obj#, count(*) cnt from dba_hist_active_sess_history
where snap_id between 18231 and 18232
and event_id in (1478861578,2701629120)and sql_id='5qwhj3nru2jtq'
group by current_obj#
order by 2;

Obj # Count(*)
67818 1
67988 1096

Step 6 :
-------
Identify the Object associated with the above Object ID
select object_id, owner, object_name, subobject_name, object_type from dba_objects
where object_id in (67988);
OBJECT_ID OWNER OBJECT_NAME   SUBOBJECT_NAME OBJECT_TYPE
--------- ----- ------------- -------------- -----------
    67988 SCOTT COMP_ID_INDX1                INDEX

In this case creating a REVERSE KEY index provided the required solution.
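
One way to apply that fix is to rebuild the existing index as a reverse-key index (a sketch; rebuild time and locking on a busy table still need to be planned for):

SQL> ALTER INDEX scott.comp_id_indx1 REBUILD REVERSE;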

Friday, September 25, 2009

Copy CRS Home from one node to another


If for some reason your CRS_HOME is messed up on one node and you are unable to bring up CRS, you can copy the CRS_HOME from the working node onto the node with the non-working CRS_HOME.

tar the CRS_HOME on the working node, copy it to the non-working node, and un-tar it there.
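
A minimal sketch of the copy (the home path and the node name badnode are assumptions; adjust to your environment):

# On the working node:
cd /u01/app/oracle/product
tar czf /tmp/crs_home.tgz crs
scp /tmp/crs_home.tgz badnode:/tmp

# On the non-working node:
cd /u01/app/oracle/product
tar xzf /tmp/crs_home.tgz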
Then, in the file $CRS_HOME/inventory/ContentsXML/oraclehomeproperties.xml, look for LOCAL_NODE NAME and change it to the name of the node where you un-tar'ed the CRS_HOME:

<CLUSTER_INFO>
  <LOCAL_NODE NAME="rklx2"/>
  <NODE_LIST>
    <NODE NAME="rklx1"/>
    <NODE NAME="rklx2"/>
  </NODE_LIST>
</CLUSTER_INFO>

Also make sure the following files under /etc/init.d have fully qualified CRS and Oracle home values; if not, replace the variables with actual values.

init.cssd
init.evmd
init.crsd

How to Manually remove the OEM Agent


Purpose and scope: you are trying to de-install the old agent and then re-install the newer version of the agent, and are running into various issues.

Best Method:
1) Stop the Agent (all nodes in case of RAC)
2) Go to OEM Grid Control and remove all targets associated with the host/database to be removed, which includes databases, listeners, hosts, agents.
3) Make sure to verify from the OEM Grid Control that the targets you were trying to remove are completely removed. Look them up under Targets => All Targets and search for the name; you should NOT see any results.
4) Go to the $AGENT_HOME and remove Agent home from Linux/Unix box.
a) $ rm -rf agent10g
b) $ rm -rf agent10gInventory
c) $ rm -f $TNS_ADMIN/oraInst.agent10g..loc

If you want to re-install the agent, the re-install of the agent should work without issues but if for some reason the above does not work, try executing the following steps to manually remove the agent.

You can also remove the Agent by following the Metalink Note:436679.1 on
How to Remove an Orphaned Target Using the EMDiag Kit 10.2

The above should do the trick, but sometimes the agent is not completely removed from the repository, and when you install the newer version it gives various errors.

The following method should remove it completely:

1) Log in to oemdb as SYS and run:

select * from sysman.mgmt_targets_delete
where delete_complete_time is NULL
order by target_name;

This lists all target deletions that did not complete for some reason (normally delete_complete_time shows the timestamp of when the target was removed).

2) Find all targets that belong to the system you are having issues with

3) Run the following to complete the deletion:

exec mgmt_admin.delete_target('rklx1_rk_crs','cluster');

where 'rklx1_rk_crs' is the cluster name (the target_name column from the earlier query) and 'cluster' is the target_type.

4) If this doesn't work, log into oemdb and run:

alter index SYSMAN.MGMT_STRING_METRIC_HISTORY_PK rebuild;

5) Then remove it using this procedure:
exec mgmt_admin.delete_target_internal('rklx1_rk_crs','cluster');

For the agent you could try:

exec mgmt_admin.cleanup_agent('host.domain:3872');

Wednesday, July 29, 2009

Script to find elapsed time based on a SQL_ID from DBA_HIST Tables


The following script reports execution counts and elapsed-time statistics (min/max/average in milliseconds) for a given SQL_ID, grouped by snapshot interval and instance. Pass the SQL_ID as the first argument.

SET LINESIZE 120
SET PAGESIZE 1000
COL executions FOR 999,999,999
COL elapsed_time FOR 999,999,999,999
COL avg_ms FOR 999999.99
COL min_ms FOR 999999.99
COL max_ms FOR 999999.99
SELECT
TO_CHAR(TRUNC(snapshot.begin_interval_time + 1/8,'HH24'),'DD-MON-YYYY HH24:MI:SS') || ' EDT' SNAP_BEGIN, -- 1/8 day = 3-hour shift so the snapshot time reads as EDT
sqlstat.instance_number,
SUM(sqlstat.executions_delta) executions,
MIN(sqlstat.elapsed_time_delta / NULLIF(sqlstat.executions_delta,0) / 1000) min_ms,
MAX(sqlstat.elapsed_time_delta / NULLIF(sqlstat.executions_delta,0) / 1000) max_ms,
SUM(sqlstat.elapsed_time_delta) / NULLIF(SUM(sqlstat.executions_delta),0) / 1000 avg_ms -- NULLIF avoids ORA-01476 when an interval has zero executions
FROM
dba_hist_sqlstat sqlstat,
dba_hist_snapshot snapshot
WHERE
sqlstat.dbid = snapshot.dbid
AND sqlstat.instance_number = snapshot.instance_number
AND sqlstat.snap_id = snapshot.snap_id
AND sqlstat.sql_id = '&1'
AND snapshot.begin_interval_time >= TO_DATE('27-JUL-2009 21:00:00','DD-MON-YYYY HH24:MI:SS')
GROUP
BY TO_CHAR(TRUNC(snapshot.begin_interval_time + 1/8,'HH24'),'DD-MON-YYYY HH24:MI:SS') || ' EDT',
sqlstat.instance_number
ORDER
BY snap_begin
/

Wednesday, July 1, 2009

Hash-Partitioned Reverse-Key Index


Hash-partitioned global indexes provide higher throughput for applications with large numbers of concurrent insertions. In some applications, new insertions into an index go towards its right side; this usually happens when an indexed column is populated by a monotonically increasing sequence number. Hash-partitioned indexes can improve performance in situations where a small number of a nonpartitioned index's leaf blocks are experiencing high contention in an OLTP environment. Queries that use an equality or IN operator in the WHERE clause can benefit significantly from a hash-partitioned global index.

For monotonically increasing keys, reverse keying alone spreads the insert activity across leaf blocks, and hash partitioning alone distributes the workload across all the index partitions but still leaves contention at each partition's right edge. Reverse-key hash partitioning does both: it distributes the activity across all the partitions and also spreads it within each partition.

Create Index CUSTOMER_IDX1 on CUSTOMER(ZIP_CODE)
global partition by hash(ZIP_CODE)
(partition P1 tablespace TBS_INDEX_1,
partition P2 tablespace TBS_INDEX_2,
partition P3 tablespace TBS_INDEX_3,
partition P4 tablespace TBS_INDEX_4)
REVERSE
/

Wednesday, February 25, 2009

Query to find un-indexed Foreign Key Constraints

SELECT *
  FROM (SELECT c.table_name, cc.column_name, cc.position column_position
          FROM user_constraints c, user_cons_columns cc
         WHERE c.constraint_name = cc.constraint_name
           AND c.constraint_type = 'R'
        MINUS
        SELECT i.table_name, ic.column_name, ic.column_position
          FROM user_indexes i, user_ind_columns ic
         WHERE i.index_name = ic.index_name)
 ORDER BY table_name, column_position ;
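
Once a foreign key shows up in this list, the usual fix is to index its column(s). A sketch with hypothetical names (an EMP child table referencing DEPT on DEPTNO):

CREATE INDEX emp_deptno_fk_i ON emp (deptno);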