Friday, June 20, 2014

Enable Supplemental Logging

We recently encountered an issue in our production database: a bug in the application's batch-processing code corrupted data in one of the key tables. The developers suspected that the code, which performed extensive processing before persisting data to the database, had caused the corruption. Since we couldn't reproduce the issue in our test environment, the developers asked the DBA team to generate a complete history of the UPDATE and DELETE statements that had been fired against this production table to facilitate troubleshooting. The DBA team decided to use Oracle LogMiner to dig out that history; the team had used LogMiner successfully several times in the past without any issue.

As you know, Oracle LogMiner presents a history of changes to the database by reading and processing the redo vectors in the online and archived redo logs. The DBAs started LogMiner, added the required archived log files, and queried V$LOGMNR_CONTENTS to get the history of changes against the table in question. The following is the LogMiner session log:

REM Step 1: Build LogMiner Dictionary

SQL> EXECUTE dbms_logmnr_d.build('dictionary.ora','e:\oradba\temp', options => dbms_logmnr_d.store_in_flat_file);

PL/SQL procedure successfully completed.

REM Step 2: Generate a script to add archived log files to LogMiner

SQL> set pagesize 0
SQL> set linesize 300
SQL> SELECT 'exec  dbms_logmnr.add_logfile(' || '''' || name || ''''|| ',DBMS_LOGMNR.ADDFILE );'
  FROM V$ARCHIVED_LOG
  WHERE FIRST_TIME >= TO_DATE('06/16/2014 00:00','mm/dd/yyyy hh24:mi')
  AND FIRST_TIME <= TO_DATE('06/16/2014 16:59','mm/dd/yyyy hh24:mi')
  AND dest_ID = 1 ;

exec  dbms_logmnr.add_logfile('E:\ORARCH\CTEST\CTEST8468602021012.1',DBMS_LOGMNR.ADDFILE );
exec  dbms_logmnr.add_logfile('E:\ORARCH\CTEST\CTEST8468602021039.1',DBMS_LOGMNR.ADDFILE );

REM Step 3: Add Archived Redo Log Files

SQL> exec  dbms_logmnr.add_logfile('E:\ORARCH\CTEST\CTEST8468602021012.1',DBMS_LOGMNR.ADDFILE );

PL/SQL procedure successfully completed.

SQL> exec  dbms_logmnr.add_logfile('E:\ORARCH\CTEST\CTEST8468602021039.1',DBMS_LOGMNR.ADDFILE );

PL/SQL procedure successfully completed.

REM Step 4: Start LogMiner

SQL> exec DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => 'e:\Oradba\temp\dictionary.ora');

PL/SQL procedure successfully completed.

REM Step 5: Verify the mined time range and save LogMiner contents to a table

SQL> SELECT MIN(TIMESTAMP), MAX(TIMESTAMP) FROM V$LOGMNR_CONTENTS;

06/16/2014:00:04:57 06/16/2014:16:45:07

SQL> CREATE TABLE TAB1_DML
  2  AS
  3  select *
  4  from v$logmnr_contents
  5  where table_name='DOC';

Table created.

REM Step 6: End LogMiner Session

SQL> exec DBMS_LOGMNR.END_LOGMNR;

PL/SQL procedure successfully completed.
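Once the LogMiner session is closed, the captured history remains in TAB1_DML and can be examined at leisure. As a sketch (the columns below are standard V$LOGMNR_CONTENTS columns, inherited by TAB1_DML), a query along these lines lists the statements of interest:

SQL> SELECT timestamp, operation, sql_redo, sql_undo
  2  FROM tab1_dml
  3  WHERE operation IN ('UPDATE','DELETE')
  4  ORDER BY timestamp;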

The LogMiner contents that the DBAs sent to the developers had a few INSERT statements, but there were no UPDATE statements. This was surprising. We knew that the table had been updated, yet the LogMiner contents didn't show a single UPDATE. The DBAs asserted that LogMiner wouldn't miss an update or delete statement. A long meeting ensued, with lots of discussion and debate. Who updated the data? How was the data updated? Malicious activity was now a suspect, and the security team was involved. Long story short, this became a big issue.

The DBA team re-ran LogMiner to double-check and triple-check everything. That's when the DBAs realized that supplemental logging wasn't enabled, as shown below:

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE ;

SUPPLEME
--------
NO
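
Incidentally, V$DATABASE exposes the other supplemental logging settings as well (primary key, unique index, foreign key, and all columns), so the full picture can be reviewed in one query:

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk,
  2         supplemental_log_data_ui, supplemental_log_data_fk,
  3         supplemental_log_data_all
  4  FROM v$database;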

As Oracle® Database Utilities 11g Release 2 (11.2) mentions, "Supplemental logging, or minimal supplemental logging, logs the minimal amount of information needed for LogMiner to identify, group, and merge the redo operations associated with DML changes. It ensures that LogMiner (and any product building on LogMiner technology) has sufficient information to support chained rows and various storage arrangements, such as cluster tables and index-organized tables."

It is not common knowledge that supplemental logging is a prerequisite for Oracle LogMiner. I don't think supplemental logging has been on many best-practices checklists; it hasn't gotten the same press as archive logs or flashback logs. Many DBAs use LogMiner without realizing that they may not get the complete history of changes to the database if supplemental logging is not enabled. Unfortunately, LogMiner appears to work without supplemental logging, which is exactly why the gap goes unnoticed; it should not be relied upon without it.

Indeed, we had not enabled minimal supplemental logging in this database. Because of the heavy update activity and the small PCTFREE value for this table, it contained chained and migrated rows, and without minimal supplemental logging LogMiner lacked the information to identify, group, and merge the redo for those rows. We realized that LogMiner could not produce the complete list of changes we had been expecting. This was not good news: with the minimal supplemental logging information missing from the existing redo, there was no way to reconstruct the history of changes through LogMiner.
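
If you suspect chained or migrated rows in one of your own tables, you can quantify them. As a sketch (using the DOC table from the session log above, and assuming you are free to refresh its statistics), the classic approach is:

SQL> ANALYZE TABLE doc COMPUTE STATISTICS;

SQL> SELECT table_name, num_rows, chain_cnt
  2  FROM user_tables
  3  WHERE table_name = 'DOC';

Note that CHAIN_CNT is populated by ANALYZE (not by DBMS_STATS) and counts migrated rows as well as truly chained ones.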

That brings me to the purpose of this blog, which is a recommendation to enable minimal supplemental logging. Minimal supplemental logging is not enabled by default in Oracle9i and later releases. Quite often DBAs are not aware that it can be crucial in detecting and repairing logical database corruption through LogMiner. Keep in mind that supplemental logging only affects redo generated after it is enabled; it cannot be applied retroactively to redo that has already been written. If you haven't enabled supplemental logging, you should enable it now. Doing so is quite simple, as shown below:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE ;

SUPPLEME
--------
YES

In addition, you may want to enable primary key (PK) supplemental logging so that Oracle logs the primary key values in the redo for UPDATE and DELETE statements. Without PK supplemental logging, Oracle logs only ROWIDs in the redo vectors, so it is not easy to identify updated or deleted rows just by reading the SQL_UNDO and SQL_REDO columns of V$LOGMNR_CONTENTS. I would suggest enabling PK supplemental logging as well. Please note that PK supplemental logging adds overhead, so weigh the pros and cons before enabling it. It's worth mentioning that replication technologies such as Oracle GoldenGate, Oracle logical standby databases, and Oracle Streams do require PK supplemental logging. Enabling it is quite simple, as shown below:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS ;

Database altered.
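
If the overhead of database-wide PK logging is a concern, supplemental logging can also be declared at the table level, so that only the critical tables pay the cost. As a sketch (the owner and table name, SCOTT.DOC, are illustrative):

SQL> ALTER TABLE scott.doc ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

Table-level supplemental log groups can be reviewed through the DBA_LOG_GROUPS view.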


