Wednesday, October 15, 2014

Index monitoring for Indexes in other schemas [ V$OBJECT_USAGE ]


How do I find whether my index is used?

V$OBJECT_USAGE displays statistics about index usage gathered from the database. You can use this view to monitor index usage: for each index with monitoring enabled, it shows whether the index has been used since monitoring started.

To populate the V$OBJECT_USAGE view, you have to enable monitoring on the index in question.
SQL>alter index AJAYT.XIE3HISTORY monitoring usage;
Index altered.
 


Once index monitoring is on, the usage can be viewed as below:

select * from v$object_usage;

Only the owner of the index sees its rows in V$OBJECT_USAGE, so the query above must be run as the index owner.
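Once you have gathered enough data, monitoring can be switched off again with the same syntax (shown here for the index from this example):

```sql
ALTER INDEX AJAYT.XIE3HISTORY NOMONITORING USAGE;
```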

What if I don't have the password for the schema?
You can use proxy authentication to log in,
OR
create a view as below, which can then be used to monitor indexes in any schema.


CREATE VIEW V$ALL_OBJECT_USAGE
(
   OWNER,
   INDEX_NAME,
   TABLE_NAME,
   MONITORING,
   USED,
   START_MONITORING,
   END_MONITORING
)
AS
   SELECT u.name,
          io.name,
          t.name,
          DECODE (BITAND (i.flags, 65536), 0, 'NO', 'YES'),
          DECODE (BITAND (ou.flags, 1), 0, 'NO', 'YES'),
          ou.start_monitoring,
          ou.end_monitoring
     FROM sys.user$ u,
          sys.obj$ io,
          sys.obj$ t,
          sys.ind$ i,
          sys.object_usage ou
    WHERE     i.obj# = ou.obj#
          AND io.obj# = ou.obj#
          AND t.obj# = i.bo#
          AND u.user# = io.owner#;
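For completeness, the proxy-authentication route mentioned above can be sketched as follows (an illustration only; AJAYT is the schema from this example, and SYSTEM stands in for any privileged account whose password you do know):

```sql
-- Allow SYSTEM to connect as AJAYT without knowing AJAYT's password
ALTER USER ajayt GRANT CONNECT THROUGH system;

-- Then log in through the proxy and query V$OBJECT_USAGE as the owner:
--   sqlplus system[ajayt]/<system_password>
```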


SQL>show user
USER is "SYS"




SQL>select * from v$object_usage;

no rows selected


SQL>select * from V$ALL_OBJECT_USAGE;

OWNER                          INDEX_NAME                     TABLE_NAME                     MON USE START_MONITORING    END_MONITORING
------------------------------ ------------------------------ ------------------------------ --- --- ------------------- -------------------
AJAYT                           XIE3HISTORY                    HISTORY                        YES NO  10/15/2014 15:20:47
AJAYT                           XIE2HISTORY                    HISTORY                        YES NO  10/15/2014 15:20:59


Alternatively, you can check index usage by querying DBA_HIST_SQL_PLAN, V$SQL_PLAN_STATISTICS_ALL, and V$SQL_PLAN for plans that reference the index.
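This works because any execution plan that touched the index references it by owner and name. A hedged sketch, using the AJAYT.XIE3HISTORY index from this example:

```sql
-- Cursors still in the shared pool whose plans reference the index
SELECT DISTINCT sql_id, plan_hash_value
  FROM v$sql_plan
 WHERE object_owner = 'AJAYT'
   AND object_name  = 'XIE3HISTORY';

-- Historical plans captured in AWR (needs the Diagnostics Pack license)
SELECT DISTINCT sql_id, plan_hash_value
  FROM dba_hist_sql_plan
 WHERE object_owner = 'AJAYT'
   AND object_name  = 'XIE3HISTORY';
```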

Wednesday, May 1, 2013

11GR2: DUPLICATE DATABASE USING RMAN WITHOUT CONNECTING TO TARGET (PRODUCTION)


11gR2 has an excellent feature to duplicate a production database without connecting to the production database. This reduces the risk of accidentally touching production, as well as eliminating any impact on production. Prior to 11gR2, we had to take a backup of production, mount the backups on the auxiliary server (where prod will be duplicated), and run the RMAN duplicate database command while connected to the target (production) database. 11gR2 makes this task very easy by duplicating the database without connecting to the production database.
Step 1 – Backup production database.

RMAN> Configure controlfile autobackup on;
RMAN> backup database plus archivelog;

Step 2 – Copy backups to the auxiliary server
This step can be done by
(a)    mounting the production backup filesystem on the auxiliary server, OR
(b)   copying (scp) the backup sets from production to the auxiliary server
We will use /rman_backup as the backup location.

Step 3 – Prepare the auxiliary instance
Create an init.ora for the auxiliary instance with just one parameter, db_name, as below.
db_name=CLONEDB1

Step 4 – Start the auxiliary instance
$ export ORACLE_SID=CLONEDB1
$ sqlplus "/ as sysdba"
SQL> startup nomount;


Step 5 – Using RMAN, connect to the auxiliary instance and duplicate the database
$ export ORACLE_SID=CLONEDB1
$ rman auxiliary /

RMAN> duplicate database to CLONEDB1
spfile
set control_files='+DGCLONEDB'
set db_file_name_convert='+DGPRODDB','+DGCLONEDB'
set log_file_name_convert='+DGPRODDB','+DGCLONEDB'
set db_create_online_log_dest_1='+DGCLONEDB'
set db_create_online_log_dest_2='+DGCLONEDB'
set db_recovery_file_dest='+DGCLONEDB_FRA'
set diagnostic_dest='/ora/app/oracle'
backup location '/rman_backup/'
nofilenamecheck;
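Once the duplicate completes, a quick sanity check on the clone can look like this (a sketch; CLONEDB1 is the name used in this example):

```sql
-- Run on the auxiliary instance after DUPLICATE finishes
SELECT name, open_mode, dbid FROM v$database;
-- NAME should be CLONEDB1, with a DBID different from production
```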

Friday, April 5, 2013

Grid Control : The database target is currently unavailable. The state of the components are listed below

Problem

Applied a patch on 1 instance of a 4-node RAC database. The patch was successful.
When logging into Grid Control, it shows the status of the database instance as up, but with no details in it. The message "The database target is currently unavailable. The state of the components are listed below" is displayed.
All other 3 instances (where no patch was applied) were fine.

Actions


emctl status agent
--It shows "Agent is Running and Ready"

emctl upload agent
--It shows "EMD upload completed successfully" and 0 pending uploads

Solution

emctl clearstate agent
Message "EMD clearstate completed successfully" will be displayed

Verify in Grid Control. Problem resolved!


Thursday, April 4, 2013

DataGuard - Warning: ORA-16714: the value of property ArchiveLagTarget is inconsistent with the database setting

Problem
Warning ORA-16714 is shown by the DGMGRL command "show database tprisby".

Action
Find the reason for warnings

Step 1

DGMGRL> show database verbose tprisby

Database - tprisby

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    tprisby1 (apply instance)
    tprisby2
      Warning: ORA-16714: the value of property ArchiveLagTarget is inconsistent with the database setting
      Warning: ORA-16714: the value of property LogArchiveMaxProcesses is inconsistent with the database setting
      Warning: ORA-16714: the value of property LogArchiveMinSucceedDest is inconsistent with the database setting


  Properties:
    DGConnectIdentifier             = 'tprisby'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '+PSSDEV, +CSPDEVD01, +PSSFLASH, +CSPDEVF1'
    LogFileNameConvert              = '+PSSDEV, +CSPDEVD01, +PSSFLASH, +CSPDEVF1'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName(*)
    StaticConnectIdentifier(*)
    StandbyArchiveLocation(*)
    AlternateLocation(*)
    LogArchiveTrace(*)
    LogArchiveFormat(*)
    TopWaitEvents(*)
    (*) - Please check specific instance for the property value

Database Status:
WARNING


Step 2 : Find which properties are Inconsistent

DGMGRL> show database tpri InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE 

--> No Inconsistent properties found 

DGMGRL> show database tprisby InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE
        tprisby2     ArchiveLagTarget                    0                                         0
        tprisby2 LogArchiveMaxProcesses                    4                                         4
        tprisby2 LogArchiveMinSucceedDest                    1                                         1 

--> Here you go. 3 Inconsistent properties found.

Solution :
The SPFILE_VALUE column is empty for each of the three properties, i.e. the parameters are set in memory and in the broker configuration but not in the spfile. Setting them with scope=both brings the spfile back in line:
SQL(tprisby2)>alter system set log_archive_max_processes=4 scope=both sid='*';

System altered.

SQL(tprisby2)>alter system set archive_lag_target=0 scope=both sid='*';

System altered.

SQL(tprisby2)>alter system set log_archive_min_succeed_dest=1 scope=both sid='*';

System altered.





Verify 

DGMGRL> show database tprisby
Database - tprisby

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       6 minutes 39 seconds
  Real Time Query: OFF
  Instance(s):
    tprisby1 (apply instance)
    tprisby2

Database Status:
SUCCESS




Friday, October 5, 2012

Data Guard Errors: ORA-00308 ORA-17503 ORA-15012

Error
Data Guard redo apply is broken; no more logs are being applied. The alert log of the standby shows the following errors.
This happened after the server crashed.
Errors with log +PSSFLASH/PRODSBY/archivelog/2012_10_04/thread_3_seq_19610.689.795807751
Errors in file /ora/app/oracle/diag/rdbms/PRODSBY/PRODSBY2/trace/PRODSBY2_pr00_18575.trc:
ORA-00308: cannot open archived log '+PSSFLASH/PRODSBY/archivelog/2012_10_04/thread_3_seq_19610.689.795807751'
ORA-17503: ksfdopn:2 Failed to open file +PSSFLASH/PRODSBY/archivelog/2012_10_04/thread_3_seq_19610.689.795807751

Missing File
As per alertlog below is missing file
+PSSFLASH/PRODSBY/archivelog/2012_10_04/thread_3_seq_19610.689.795807751

Production (PROD)
Log in to asmcmd and copy the file from production (PROD) to the filesystem:
asmcmd -p
cd +PSSFLASH/PROD/ARCHIVELOG/2012_10_04
cp thread_3_seq_19610.2065.795819191 /tmp/thread_3_seq_19610.2065.795819191

Standby Side (PRODSBY)
Stop recovery if any is in progress [it would already have stopped with the error]
SQL > ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
scp <primaryhost>:/tmp/thread_3_seq_19610.2065.795819191 /prodstage01/keepsafe
SQL> ALTER DATABASE REGISTER OR REPLACE PHYSICAL LOGFILE '/prodstage01/keepsafe/thread_3_seq_19610.2065.795819191';
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;

Problem is fixed.
The standby starts media recovery and will soon catch up.
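To confirm the standby is catching up, the apply status can be checked on the standby (a sketch, using thread 3 from the error above):

```sql
-- Which logs for thread 3 have been received and applied on the standby
SELECT thread#, sequence#, applied
  FROM v$archived_log
 WHERE thread# = 3
 ORDER BY sequence#;
```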

Tuesday, September 4, 2012

ORA-00600: internal error code, arguments: [kqd-objerror$]

Database : 11.2.0.2
Problem   : ORA-00600: internal error code, arguments: [kqd-objerror$]
Reason    : This issue is due to an inconsistency in the data dictionary
SQL>alter TRIGGER ADDRESS_PREVENT_OVERLAP compile  ;
alter TRIGGER ADDRESS_PREVENT_OVERLAP compile
              *
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of
PUBLIC.TRIGGER_OVERLAP_PKG
ORA-00600: internal error code, arguments: [kqd-objerror$], [U], [0], [369],
[EMAIL_PREVENT_OVERLAP_AFTER], [], [], [], [], [], [], []
SQL>alter TRIGGER CUSP.ADDRESS_PREVENT_OVERLAP compile  ;
alter TRIGGER ADDRESS_PREVENT_OVERLAP compile
              *
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of
PUBLIC.TRIGGER_OVERLAP_PKG
ORA-00600: internal error code, arguments: [kqd-objerror$], [U], [0], [369],
[EMAIL_PREVENT_OVERLAP_AFTER], [], [], [], [], [], [], []
drop public synonym TRIGGER_OVERLAP_PKG
                    *
ERROR at line 1:
ORA-00600: internal error code, arguments: [kqd-objerror$], [U], [0], [369], [EMAIL_PREVENT_OVERLAP], [], [], [], [], [], [], []

Actions
1] Tried purge dba_recyclebin - did not help
2] Tried recyclebin off and bouncing the instance - did not help
3] Tried dropping object EMAIL_PREVENT_OVERLAP_AFTER - gives the same ORA-600
4] Tried dropping public synonym TRIGGER_OVERLAP_PKG - gives the same ORA-600

Solution
1] Find the object_id of the object giving the problem, e.g. synonym TRIGGER_OVERLAP_PKG
 select object_id from dba_objects where object_name='TRIGGER_OVERLAP_PKG' and object_type='SYNONYM';
2] Insert a record into sys.objerror$ using the object_id found in step 1 (99601 in this case)
 insert into sys.objerror$ values(99601);
 commit;
3] Very important - shutdown abort. Do not shut down gracefully; the abort prevents the row cache from being flushed back to the dictionary
4] Start the instance
5] Drop public synonym TRIGGER_OVERLAP_PKG - success
6] Recreate public synonym TRIGGER_OVERLAP_PKG - success
7] Compile trigger CUSP.ADDRESS_PREVENT_OVERLAP - success

Disclaimer : UNLESS ORACLE SUPPORT PROVIDES SUCH INSTRUCTIONS, THE DATA DICTIONARY SHOULD NOT BE TAMPERED WITH. RUN YOUR FINDINGS BY ORACLE BEFORE EXECUTING THEM ON YOUR ORACLE/VENDOR-SUPPORTED ENVIRONMENT.

Thursday, August 23, 2012

Wait Event : log file sequential read

Case study: log file sequential read

Recently one of my clients asked me to analyze a database problem post factum. At the time of the request the problem itself was already solved or "self-solved", so the task was to explain its causes and prevent similar issues in the future.

Symptoms

Here's the problem description given by the client:

  • third party software developers were rebuilding (REBUILD ONLINE) some indexes in the database;
  • suddenly, the whole system became much slower; main database wait event observed was "log file sequential read";
  • the IO workload on the storage partition holding the online redo was so significant that all users of that storage device noticed performance problems - and it was a high-end storage system, the main storage device for that enterprise;
  • the problem disappeared when the Oracle instance had been bounced.

Diagnostics

Since "log file sequential read" isn't the most frequently seen wait event, I looked up its description in the Oracle documentation. It says: "Waiting for the read from this log file to return. This is used to read redo records from the log file". Obvious but useless, to my mind, as there's no information on why Oracle might read the online redo. Google and Metalink gave me a couple of links describing ARCH and LogMiner performance problems, but they were not the case here.

However, there was an interesting message in the alert.log: "Some indexes or index [sub]partitions of table SOMESCHEMA.SOMETABLE have been marked unusable", followed by a number of "Errors in file..." records without actual error messages. All indexes on SOMESCHEMA.SOMETABLE were valid at the time of investigation. But in the trace files mentioned in the alert.log I saw, inter alia, the message "oer 8102.2 - obj# 55780", i.e. "ORA-8102: index key not found" for the index with object_id = 55780.

After that it was easy to:
  • find a bug #7329252 "Index corruption after rebuild index ONLINE" saying "If there is a heavy concurrency on a table during online index rebuild, the index can be corrupt missing some keys." Fixed in 10.2.0.4.4 and 11.2.0.1;
  • understand that "log file sequential read" wait event was caused by sessions that had found corrupted block and tried to dump corresponding redo blocks. By the way, documentation hints that "log file sequential read" is used when dumping redo in event parameters description: "log# - The relative sequence number of the logfiles within a log group (used only when dumping the logfiles)";
  • understand that the initial information "problem disappeared when Oracle instance had been bounced" was incorrect. In fact, the software developers just rebuilt the corrupted index after the RDBMS reboot and concealed that fact from the client.

Key findings:
  • now I know at least three situations in which "log file sequential read" can be observed: when ARCH is reading a log file to archive it; when LogMiner is working; and when a session is trying to create a dump of redo;
  • again, REBUILD ONLINE is not as ONLINE as it seems - at least, in Oracle 10g.
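If you hit this live, the waiting sessions can be inspected directly; for "log file sequential read", the event parameters p1/p2/p3 are the log number, block number, and block count being read (a sketch):

```sql
-- Sessions currently waiting on "log file sequential read"
SELECT sid, p1 AS log#, p2 AS block#, p3 AS blocks
  FROM v$session_wait
 WHERE event = 'log file sequential read';
```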
ref : http://ua-dba.blogspot.com/2010/09/case-study-log-file-sequential-read.html