Check Oracle Patch Status

Posted: July 19, 2011 in oracle

A practical tip for finding out the Oracle patch/patchset level on a database server.

1. To find out the Oracle binary status or patch level

Log in to the DB server as the oracle user:

$ cd $ORACLE_HOME/OPatch

$ ./opatch lsinventory

Check the "OUI version" field in the output; this is the version of the Oracle binaries for the given ORACLE_HOME. The same output also lists any interim patches applied to that home.
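If you only need that one field from the fairly long lsinventory output, a quick filter works; this is just a convenience sketch, assuming your opatch output contains the "OUI version" line as described above:

$ ./opatch lsinventory | grep -i "oui version"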

2. To find out the Oracle DB status or patch level, connect to the database as SYS, SYSTEM, or any suitably privileged user and run the queries below.

SQL> select * from v$version;

SQL> select * from sys.registry$history;

The result from registry$history shows the patchset level and when it was applied.
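If you prefer a dictionary view over the base table, the same information is available in a friendlier form from DBA_REGISTRY_HISTORY; a minimal sketch:

SQL> select action_time, action, namespace, version, id, comments
       from dba_registry_history
      order by action_time;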

HTH

ORA-06564 or ORA-01403 errors can be misleading when removing an Oracle Streams configuration using DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION().

Cause

While the Streams configuration is being removed, some referenced object(s) no longer exist under their original names because they have been moved to the recycle bin and renamed using the standard 'BIN$<xxxx>==$0' convention. When an object is moved to the recycle bin only its name changes, not its object_id, which causes the DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION procedure to fail. This is observed in both Oracle 10g and 11g.
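You can see this in the dictionary yourself: the dropped object is still there under its BIN$ name, carrying the same object_id that Streams still references. A quick sketch, using SCOTT as the example schema:

SQL> select object_id, object_name, object_type
       from dba_objects
      where owner = 'SCOTT'
        and object_name like 'BIN$%';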

Solution

Purge the object from the recycle bin.

connect <owner>/<pw>
PURGE TABLE "BIN$lsVmsYjfLD7gRAADurPfzA==$0";

where <owner> and <pw> are the schema name and password of the schema that owns the recycle bin object.

Run DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION again. It may fail again, but for different object(s). Keep purging the offending recycle bin objects until you get a clean run, or simply purge all recycle bin objects of that schema (as sketched below), or purge DBA_RECYCLEBIN as SYSDBA if you do not care about recovering any of them.
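If several objects are involved, you can generate the PURGE statements in bulk from the recycle bin view instead of handling them one by one; a minimal sketch, run as the owning schema:

SQL> select 'PURGE TABLE "' || object_name || '";' purge_cmd
       from user_recyclebin
      where type = 'TABLE';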

Hands-on scenario:

Scott@ORCL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();
BEGIN DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION(); END;

*
ORA-06564: object "SCOTT"."BIN$lsVmsYjfLD7gRAADurPfzA==$0" does not exist
ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 84
ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 117
ORA-06512: at "SYS.DBMS_CAPTURE_ADM_INTERNAL", line 631
ORA-01403: no data found
ORA-06512: at "SYS.DBMS_CAPTURE_ADM", line 515
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 1933
ORA-06512: at line 1
ORA-06512: at "FWSYNC.FWSYNC_CONFIG_PKG", line 2768

Scott@ORCL> show recyclebin;
ORIGINAL NAME     RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
----------------- ------------------------------ ------------ -------------------
EMP               BIN$lsVmsYi1LD7gRAADurPfzA==$0 TABLE        2010-12-06:13:47:20

Scott@ORCL> purge recyclebin;

Recyclebin purged.

Elapsed: 00:00:20.07
Scott@ORCL> execute DBMS_STREAMS_ADM.REMOVE_STREAMS_CONFIGURATION();

PL/SQL procedure successfully completed.

Scott@ORCL>

It is good practice to keep an eye on database growth so that one can a) plan capacity with better foresight, b) spot unusual growth that is out of the ordinary, and c) build trending data.

The v$datafile view exposes datafile information from the control file.

The SQL below shows overall database growth by month:

SELECT   TO_CHAR (creation_time, 'RRRR-MM') TimeLine,
         ROUND (SUM (BYTES) / 1024 / 1024 / 1024) gb
    FROM v$datafile
GROUP BY TO_CHAR (creation_time, 'RRRR-MM')
ORDER BY TO_CHAR (creation_time, 'RRRR-MM');

The SQL below shows overall database growth by tablespace:

SELECT   TO_CHAR (vd.creation_time, 'RRRR-MM') TimeLine,
         ddf.tablespace_name,
         ROUND (SUM (vd.BYTES) / 1024 / 1024 / 1024) gb
    FROM v$datafile vd, dba_data_files ddf
   WHERE vd.FILE# = ddf.FILE_ID
GROUP BY TO_CHAR (vd.creation_time, 'RRRR-MM'), ddf.tablespace_name
ORDER BY TO_CHAR (vd.creation_time, 'RRRR-MM'), ddf.tablespace_name
/
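If you also want a running total, an analytic SUM over the monthly figures adds a cumulative column; this is a sketch built on the same v$datafile data as above:

SELECT   TO_CHAR (creation_time, 'RRRR-MM') TimeLine,
         ROUND (SUM (BYTES) / 1024 / 1024 / 1024) gb,
         ROUND (SUM (SUM (BYTES)) OVER (ORDER BY TO_CHAR (creation_time, 'RRRR-MM'))
                / 1024 / 1024 / 1024) cumulative_gb
    FROM v$datafile
GROUP BY TO_CHAR (creation_time, 'RRRR-MM')
ORDER BY TO_CHAR (creation_time, 'RRRR-MM');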

The traditional exp/imp tools are slated for decommissioning, but in heterogeneous environments we still have to support interfaces that rely on them. Because exp/imp are client tools, it is common to find unwanted users exporting data from the production system, which can degrade database performance.

To prevent such scenarios, you can use the tricks below to block unwanted export access to the production DB.

Trick #1: Revoke the SELECT privilege on the EXU8FUL export view from PUBLIC. If the connecting user has the DBA role, SELECT ANY DICTIONARY, or EXP_FULL_DATABASE granted, this trick will not work.

On the DB server:

$ sqlplus “/ as SYSDBA”

SQL> revoke select on EXU8FUL from public;
Revoke succeeded.

Now try an exp from a client machine:

$ exp scott/tiger@oracle_db file=xp.dmp log=xp.log
Export: Release 10.2.0.4.0 - Production on Mon Jan 24 16:02:05 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
EXP-00008: ORACLE error 942 encountered

ORA-00942: table or view does not exist

EXP-00024: Export views not installed, please notify your DBA

EXP-00000: Export terminated unsuccessfully
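If legitimate client exports need to be re-enabled later, granting the privilege back should restore the original behavior (assuming the revoke above was the only change made):

SQL> grant select on EXU8FUL to public;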

Trick #2: Write an Oracle logon trigger at the database level.

To keep the trick manageable, create a table no_exp_imp_users as below.

create table no_exp_imp_users ( block_user varchar2(30) not null );

create or replace trigger no_exp_imp
after logon on database
declare
  l_blocked number;
begin
  -- is the connecting user on the block list?
  select count(*)
    into l_blocked
    from no_exp_imp_users
   where block_user = user;

  if l_blocked > 0
     and upper(sys_context('userenv', 'module')) in ('EXP', 'IMP', 'EXP.EXE', 'IMP.EXE')
  then
    raise_application_error(-20999, 'DOOMED! You are not allowed to run exp/imp');
  end if;
end;
/

Now insert a record into no_exp_imp_users for every user you want to block:

insert into no_exp_imp_users values ('SCOTT');
commit;

C:\>exp scott/tiger@oracle_db file=xp.dmp log=xp.log

Export: Release 10.2.0.4.0 - Production on Mon Jan 24 16:02:05 2011
Copyright (c) 1982, 2007, Oracle.  All rights reserved.

EXP-00056: ORACLE error 604 encountered
ORA-00604: error occurred at recursive SQL level 1
ORA-20999: DOOMED! You are not allowed to run exp/imp
ORA-06512: at line 3

NOTE: A smart user can defeat this trick by renaming exp.exe to anyname.exe ;) (or exp to anyname), since that changes the module name the trigger checks for.
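Since the trigger keys off the module name, it is worth checking what value your exp client actually reports. A quick sketch, run as a privileged user while an export is in progress:

SQL> select username, module, program
       from v$session
      where username is not null;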

Hope this helps the reader!

 

This is a really useful reference in addition to the classic Oracle 11g online documentation. It is a handy, quick way to look up views, processes, etc. by category. A must-have guide for your pocket.

Your Essential Guide to Oracle Database 11g Release 2

 

Have you ever encountered an issue where the Oracle 11gR2 exp (traditional tool) does not export an object that exists in the schema? If yes, you may find a hint here as to why.

First of all: why would or should one use the exp/imp tools when the newer Oracle Data Pump is available? Yeah, yeah!! But until every database in the organization is on the same version, one has to support all sorts of applications that serve as bridges in a heterogeneous network.

Imagine an 11gR2 database that is the source for an 8i database, and data has to be supplied using export only. You have no choice but to use the classic, traditional exp tool.

Let's walk through it in practice:

connect scott/tiger
create table have_fun_a (no number, value varchar2(20));

create table have_fun_b (no number, value varchar2(20));
insert into have_fun_b values (1, 'rakesh');
commit;

exp scott/tiger file=have_fun.dmp tables=have_fun_a,have_fun_b

Export: Release 11.2.0.1.0 - Production on Wed Jan 19 10:38:10 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
About to export specified tables via Conventional Path …

EXP-00011: SCOTT.HAVE_FUN_A does not exist

. . exporting table                  HAVE_FUN_B             1 rows exported
Export terminated successfully with warnings.

Darn! What happened? I had created have_fun_a, so Oracle should have exported it. I bet it is an Oracle bug... we like to throw as much blame as possible at our best friend, the Oracle server.

Oracle has introduced a new feature called deferred segment creation.

Beginning in Oracle Database 11g Release 2, when you create a non-partitioned heap-organized table in a locally managed tablespace, creation of the table segment is deferred until the first row is inserted. This behavior is controlled by the initialization parameter DEFERRED_SEGMENT_CREATION, which defaults to TRUE when you create the database. In other words, until the first record is inserted the object is not visible to the traditional exp tool.

How do I resolve this behavior?

Option 1: Add a dummy row to have_fun_a so that Oracle allocates the first extent, as sketched below.
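A minimal sketch for option 1, assuming the have_fun_a table from the example above; once the first insert has allocated the segment, the dummy row can be deleted and the segment remains:

insert into have_fun_a values (0, 'dummy');
commit;
delete from have_fun_a;
commit;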

Option 2: Create the table with the SEGMENT CREATION IMMEDIATE clause.

create table have_fun_a (no number, value varchar2(20)) segment creation immediate;

Option 3: Set the initialization parameter DEFERRED_SEGMENT_CREATION=FALSE; it affects only tables created after the change (see the sketch below).
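A sketch for option 3; the parameter is dynamic, and SCOPE=BOTH assumes an spfile is in use:

SQL> alter system set deferred_segment_creation = false scope=both;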

Option 4: Upgrade the target DB to 11g and use Data Pump instead.

How do you find the objects that are invisible to the exp tool?

There is a new SEGMENT_CREATED column in *_TABLES, *_INDEXES, and *_LOBS that can be used to verify deferred segment creation.

select owner, table_name, segment_created from dba_tables; -- tables where SEGMENT_CREATED is NO are the ones exp skips with the EXP-00011 error.
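To check the demo tables from above specifically, a small sketch against user_tables (run as SCOTT); HAVE_FUN_A should show NO and HAVE_FUN_B should show YES:

SQL> select table_name, segment_created
       from user_tables
      where table_name like 'HAVE_FUN%';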

Advantages?

  1. A significant amount of disk space can be saved for applications that create hundreds or thousands of tables upon installation, many of which might never be populated.
  2. Application installation time is reduced.
  3. Applications that create many objects with future-dated partitions can save physical space too.


In the production world it is mandatory to have archived logs backed up properly, for many reasons. In QA and Dev, however, they are an annoyance because they chew up storage space. One may ask why archivelog mode is enabled in Development and QA in the first place; some development work simply needs this rich feature enabled in order to validate functionality.

$ export ORACLE_SID=<your_db>   (or source your .profile with the proper ORACLE_SID)

$ rman target /

RMAN> list archivelog all;

using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name your_db
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
1       1    13      A 30-NOV-10
Name: /opt/app/oradata/your_db/archive/1_13_725565527.dbf

2       1    14      A 14-DEC-10
Name: /opt/app/oradata/your_db/archive/1_14_725565527.dbf

3       1    15      A 22-DEC-10
Name: /opt/app/oradata/your_db/archive/1_15_725565527.dbf

Now go to the archive directory /opt/app/oradata/your_db/archive/ and remove all files except the current, most recent one.

Launch RMAN again

rman target /

RMAN> run {
crosscheck archivelog all;
delete expired archivelog all;
}

It will ask for your confirmation as below; type YES when prompted.

Do you really want to delete the above objects (enter YES or NO)? yes
deleted archived log
archived log file name=/opt/app/oradata/your_db/archive/1_13_725565527.dbf RECID=1 STAMP=737742911
deleted archived log
archived log file name=/opt/app/oradata/your_db/archive/1_14_725565527.dbf  RECID=2 STAMP=738418581
Deleted 2 EXPIRED objects

This also cleans up the corresponding records in the RMAN repository (control file), so the dictionary views no longer show the deleted logs. It won't impact any of your work :).
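If you would rather skip the manual OS-level delete entirely, RMAN can remove the files and the repository records in one step; a sketch that deletes everything completed more than a day ago (adjust the window to taste):

RMAN> delete noprompt archivelog all completed before 'sysdate-1';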

This is a tiny bit of work, but it is annoying if you don't clean up development and QA, so I thought I would share it with you.