Sunday, August 23, 2009

Do It Yourself: Implement Performance Monitoring Tool - Eagle

During the Oracle7 era, back in 1998, I wrote an article about how to build a performance monitoring tool. Titled ‘Do It Yourself: Implement Performance Monitoring Tool - Eagle’, it was published in Select magazine and received several kudos from fellow DBAs.

Although this article may not be as relevant as it was 10 years ago, it still demonstrates a concept that can be applied to solve real-life business problems. Eagle introduces a monitoring framework that can be easily extended and fully customized. Some of the key features of Eagle are: monitoring of multiple databases from a control center database, trend analysis, implementation in PL/SQL and Unix shell, ease of implementation and use, and customizability and extensibility to meet your needs. Source code and procedures to implement Eagle are included here.


-----------------------------------------------------------------------------------------------

Do It Yourself

Implement Performance Monitoring Tool

Keval is a database administrator for a financial company in New York. One day, Keval queried the dynamic performance views to calculate the database buffer hit ratio for the Sales database. He found that the buffer hit ratio was 65%, which was low and not acceptable. He knew that the ratio had been 90% just a week ago. He thought this drop might be due to a spike in activity, but he wasn’t sure. Keval decided to watch the database buffer hit ratio every day for at least a week before taking any corrective action. He also decided to watch the library cache hit ratio, dictionary cache hit ratio and sort area hit ratio to identify memory parameter tuning requirements, if any. Keval needed a performance monitoring tool that could collect database performance statistics in an automated fashion.

Rupa, a database administrator for a communications company in Fairfax, VA, was informed by her human resources department that they were going to make heavy use of the HR database as the company went through a merger. As she was running low on free space in the HR database, she decided to watch % free space and object extensions more closely, along with other performance parameters. Rupa needed automated database monitoring procedures to do this.

Kunal is a database administrator for a consulting firm in Chicago. His company decided to form a centralized DBA service to support all six US offices from a single site. This re-engineering move placed a heavy burden on Kunal and his team, as there were 50+ Oracle databases across the six US offices. Kunal found it very time consuming to log into every system at all six offices and examine the health of all the Oracle databases he was responsible for. Kunal needed an automated database monitoring tool that could reach out to all six sites from a central site and collect information about the health of all the databases.

Like Keval, Kunal or Rupa, if you are looking for a database monitoring tool, then Eagle may be of interest to you. In fact, I was facing the same challenge as many of us do: monitor multiple databases proactively and efficiently. I evaluated a few commercially available DBA tools. Some were good but too expensive, while others didn’t do what I wanted. Finally, I decided to design and implement a tool, Eagle, for database monitoring and administration. I started using Eagle for day-to-day DBA chores and found it very useful. I passed Eagle to a couple of my DBA friends, and they also found it very useful. That’s when I decided to write an article about Eagle. I feel that Eagle will be useful to you as well. I will use the name Eagle throughout this article to refer to this performance monitoring tool.

Eagle is a tool designed by the DBA, for the DBA, to monitor Oracle databases. It falls into the do-it-yourself category. With the help of Eagle, you can monitor critical performance tuning parameters like the database buffer hit ratio, library cache hit ratio, dictionary cache hit ratio and sort area hit ratio, and space management parameters like % free space and object extension. Eagle can also be customized and extended easily to meet ever-changing requirements.

Now let’s look at the results produced by Eagle. Eagle's output report is shown in Figure 1. Watch Name is the name of the watch for the database. The Parameter column provides additional information for the watch. The rest of the columns show the value of the watch 4 weeks ago, 3 weeks ago, 2 weeks ago, 1 week ago, 6 days ago, 5 days ago, and so on. The Today column shows today's value for the watch, and the Change column shows the change in the value of the watch over the last 4 weeks.

For example, the database buffer hit ratio in the Sales database was 98% 4 weeks ago, 95% 3 weeks ago, 95% 2 weeks ago, 90% 1 week ago, and 65% today. The drop of 33%, shown in the Change column of Figure 1, is the difference between the values of the database buffer hit ratio over 4 weeks. By the same token, you can examine the trend in the values of the library cache hit ratio, dictionary cache hit ratio and sort area hit ratio. The % Free Space and Object Extension watches help you analyze database space usage patterns.


Overview
Eagle is based on the concept of a watch. Watches are used extensively by programmers to debug code or to evaluate expressions at runtime. In Eagle’s terminology, a watch is used to monitor a parameter of importance. A parameter could be the database buffer hit ratio, the library cache hit ratio or % free space. Watches are run against the databases being monitored. Eagle lets you monitor watches over a period of time. Not only does Eagle collect monitoring data, it also displays the trend in the value of a parameter. Here are some of the features of Eagle:

Control Center Database: Eagle uses a control center database to store the monitoring information and database statistics of all the databases being monitored, so you can monitor multiple databases from a single site. The control center database can be an independent Oracle database, or it can be part of any existing database. The advantage of an independent control center database for Eagle is the isolation of monitoring information and database statistics from your production data. The control center database provides centralized monitoring capability.

Multiple Databases: In today's world, most DBAs are called upon to support, monitor and tune more than one Oracle database. Eagle can help you monitor multiple databases simultaneously. The databases being monitored by Eagle can be on different platforms (e.g. UNIX, NT, OpenVMS) at different locations (New York, Washington, San Francisco). There is no limit on the number of monitored databases, and it’s very easy to add new databases for monitoring in Eagle.

History of Watches: Eagle lets you monitor a watch over a period of time. It is important to collect sufficient data before drawing a conclusion about the behavior of a parameter: you need to distinguish between a drop in the value of a critical parameter due to unusual activity and a drop due to routine activity against your database. Eagle’s output report shows a watch for the last 28 days, so you can determine the trend in its value.

Standard Watches: Eagle provides quite a few standard watches, described below, in categories ranging from memory and I/O tuning to auditing and space management. All you have to do is activate the watches for the database you wish to monitor. Note that the same watch can be used to monitor more than one database.

Extensibility: Eagle is fully extensible. If you are not satisfied with the standard watches available in Eagle, you can add watches of your own. If you want to add a watch to monitor, for example, rollback segment contention, you can do so easily: define the watch, activate it, and you are ready to go. Watches in Eagle are defined in SQL. This is one of the advantages of Eagle: we are all familiar with SQL, and have used SQL scripts extensively to manage our day-to-day database administration chores.

Customization of Watches: It’s likely that the features available in a tool may not meet your tuning and monitoring requirements. You may need to customize a performance monitoring tool to accommodate the conflicting tuning needs of your databases; for example, an OLTP database may have different tuning requirements than an OLAP database. It's very easy to customize standard and/or user-defined watches in Eagle: just modify the criteria of the watch.
Standard Watches

To gain more insight, let’s go through the standard watches and see how you can use them to tune your database.

Database Buffer Hit Ratio: The database buffer hit ratio shows how effective your database buffers in memory are at satisfying database block access requests. If a database block is not in the buffers, Oracle must perform disk I/O to access it, and disk I/O is always more expensive than memory access. This ratio is calculated using the formula (consistent gets + db block gets - physical reads) / (consistent gets + db block gets). The dynamic performance view v$sysstat stores these statistics. You should try to keep the database buffer hit ratio as high as possible. Increase the db_block_buffers parameter of init.ora to achieve a higher database buffer hit ratio.
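For reference, the buffer hit ratio watch's calculation can also be run interactively from SQL*Plus against a single database; this is a sketch of the same query the watch uses in estdwtch.sql, without the database link:

SELECT 100*ROUND(((A.VALUE + B.VALUE - C.VALUE)/(A.VALUE + B.VALUE)),3) "Buffer Hit %"
FROM SYS.V$SYSSTAT A, SYS.V$SYSSTAT B, SYS.V$SYSSTAT C
WHERE A.NAME = 'consistent gets'
AND B.NAME = 'db block gets'
AND C.NAME = 'physical reads' ;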

Dictionary Cache Hit Ratio: The dictionary cache hit ratio represents the percentage of dictionary information requests satisfied by the dictionary cache in memory. This ratio is calculated using the formula (sum(gets) - sum(getmisses)) / sum(gets). The values of gets and getmisses are obtained from the v$rowcache dynamic performance view. If you find this ratio to be, for example, 90% or lower over a period of time, consider increasing the shared_pool_size parameter of init.ora.

Library Cache Hit Ratio: The library cache hit ratio represents the percentage of library object requests satisfied by the library cache area of the SGA. This ratio is calculated using the formula (sum(pins) - sum(reloads)) / sum(pins). The values of pins and reloads can be obtained from the v$librarycache dynamic performance view. Consider increasing the shared_pool_size parameter if you find the library cache hit ratio consistently in the low range.

Sort Area Hit Ratio: The sort area hit ratio shows the percentage of sorts performed in memory. It is calculated using the formula (sorts(memory) - sorts(disk)) / sorts(memory). The values of sorts(memory) and sorts(disk) are obtained from the v$sysstat dynamic performance view. The sort area hit ratio can vary from 100 to 0, where 100 is the ideal situation. If you find the sort area hit ratio consistently low, consider increasing the sort_area_size parameter of init.ora.

% Free Space Alert: One of the daily DBA chores is to ensure that there is enough free space in your database for growth. The % free space alert warns you whenever the free space in a tablespace drops below 10% of the total space, or when there is a change of 5% in free space over the last 28 days. The % free space alert derives its information from the data dictionary.
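The calculation the % free space watch performs can be sketched as a standalone query; it mirrors the two helper views from estdwtch.sql as inline views, and should be run from a privileged account:

SELECT B.TABLESPACE_NAME, ROUND(100*NVL(A.BYTES,0)/B.BYTES,1) "Free %"
FROM (SELECT TABLESPACE_NAME, SUM(BYTES) BYTES
      FROM SYS.DBA_FREE_SPACE GROUP BY TABLESPACE_NAME) A,
     (SELECT TABLESPACE_NAME, SUM(BYTES) BYTES
      FROM SYS.DBA_DATA_FILES GROUP BY TABLESPACE_NAME) B
WHERE A.TABLESPACE_NAME(+) = B.TABLESPACE_NAME ;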

Object Extension Alert: Another DBA chore is to watch for object extension. If an object tries to grow beyond its maximum number of extents, you will get a “Maximum number of extents reached” error. The object extension alert tells you the rate at which an object is growing, so you can identify “hot” objects in your database and take appropriate action. Even though later versions of Oracle allow an unlimited number of extents, it’s still good to identify “hot” objects.

Design
The entity relationship diagram for Eagle is shown in Figure 2, and the DDL for Eagle is shown in the script eddl.sql. There are four entities in Eagle: Database, Watch, Database Watch and Database Watch Result.

Information about the databases to be monitored is stored in the database entity. This entity contains three attributes: database id, name and database link name. Database id and name are the id and name of the database; database link name is the name of the database link from the control center database to the database you want to monitor. You need to add one row to this entity for each database you wish to monitor. As the monitored database is queried via a database link, you need to create the database link with a SQL command. At run time, the value of the database link is substituted for the database link parameter in the watch criteria. This allows the same watch to be used for multiple databases.

The watch entity contains a watch id, name and watch criteria. The watch criteria are written in SQL. For simplicity and manageability, the SQL statement is broken into three parts: an insert clause, a select clause and a where clause, containing the insert, select and where parts of the SQL statement respectively, as shown in the script estdwtch.sql.

To simplify SQL programming, you can use the prepare1 clause, prepare2 clause, close1 clause and close2 clause attributes of the watch entity. One example is to store SQL statements that create views and refer to those views in the select clause. One of the standard watches, which monitors % free space, creates two views: a free space view and a data file view. Both views are referenced in the subsequent select clause. Since these views are created for temporary purposes, the SQL statements that drop them are stored in the close1 and close2 clauses. Refer to the estdwtch.sql script to see how the watch entity and its components are used to build watch criteria.

The database watch entity contains information about the watches used for a database. It allows you to mix and match databases and watches for monitoring: you can use the same watch for more than one database, and one database can have many watches. There are three attributes in this entity: watch id, database id and active yesno. You turn a watch on by setting active yesno to ‘Y’ and off by setting it to ‘N’.

The database watch result entity stores the monitoring information. There are five attributes in this entity: database id, watch id, run time, parameter and value. Database id and watch id are self-explanatory. Run time contains the time the watch was run. Parameter stores optional information about the monitoring data; for example, the % free space watch uses it to store the tablespace name, and the extents watch uses it to store the segment name. Value stores the result of the watch.
Execution of Watches

A PL/SQL procedure, created by the script eplsql.sql, is at the heart of Eagle. It is run at a pre-determined time every day. First, it queries the database watch entity for active database watches; a watch is active if the active yesno attribute is set to ‘Y’. The procedure queries the database entity to retrieve the database link name for the database to be watched, then queries the watch entity to retrieve the watch criteria. The database link name is substituted into the watch criteria. Before executing the watch criteria, it executes the watch prepare clauses, if any exist (for example, the % free space watch has prepare clauses). It then executes the watch criteria and stores the results in the database watch result entity. Finally, Eagle performs cleanup, if any, by executing the close clauses. The same process is repeated for all active watches.

Installation
If you like what you have read so far, let’s go through a step-by-step procedure to install Eagle on your system. All scripts are included in this article.

Creating an Oracle Account: Eagle employs a control center database to store monitoring information. If you have a database exclusively for administration, you can use it for Eagle. If you plan to monitor quite a few databases, you might be better off creating a separate database just for Eagle. If you don't want to create a separate database, you can pick any of your databases as the Eagle control center database. In any event, create an Oracle account for Eagle with DBA privileges. You also need to grant two other privileges, as shown in the script eaccount.sql.

Creating Objects, Standard Watches and PL/SQL Procedure: Run the DDL script eddl.sql to create the necessary tables and indexes. Run the estdwtch.sql script to create Eagle’s standard watches. Create the execute_db_watch PL/SQL stored procedure by executing the eplsql.sql script. All of these scripts should be run from the Eagle Oracle account.

Creating Database Link: Create a database link from the control center database to the database that you wish to monitor, as shown below:

CREATE DATABASE LINK <db_link_name>
CONNECT TO EAGLE IDENTIFIED BY <password>
USING '<ConnectString>' ;

Note that you can use a SQL*Net 1.x connect string or SQL*Net 2.x connect descriptor when creating a database link. If you want to use an Oracle account other than SYS in your database link create statement, you need to grant that account the appropriate privileges so that the database monitoring information can be selected: Eagle needs select privileges on the V$ dynamic performance views and the data dictionary views. If you want to monitor the control center database itself, you also need to create a database link to itself, as the monitored database is queried via a database link. Another scenario in which you need a database link to the same database is when you are using your production database for Eagle and want to monitor that production database.

Adding Databases: The next step is to add rows to the database table for the databases to be monitored by Eagle. There are three columns in this table: database id, name and database link name. Use the following statement with appropriate values for your database.
INSERT INTO DATABASE (DB_ID, NAME, DB_LINK)
VALUES ('<db_id>','<database_name>','<db_link_name>');

Activating Watches: Use the following statement to add watches:
INSERT INTO DB_WATCH(DB_ID, WATCH_ID, ACTIVE_YN)
VALUES('<DB_ID>','<WATCH_ID>','Y') ;
DB_ID and WATCH_ID in this table are foreign keys from the database and watch tables, respectively. You can deactivate a watch for a database by setting the active_yn column to ‘N’ for the corresponding row in the db_watch table.

Running Watches: You should execute the execute_db_watch stored procedure from SQL*Plus once every day. You might want to run the watches in the evening rather than the morning, so as to take the day's activities into account. The eagle.sh Unix shell script provided here executes the EXECUTE_DB_WATCH stored procedure. You can submit this shell script via a scheduling utility like cron on Unix, or you can execute the stored procedure via the dbms_job package. Note that the results of watches like Database Buffer Hit Ratio are based on statistics from V$ views, and hence reflect the values since the last database startup, not since the last time you ran Eagle.
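As an illustration, a crontab entry to run eagle.sh every evening at 9 p.m. might look like the following; the path to eagle.sh is a placeholder for your own:

0 21 * * * /home/eagle/eagle.sh > /dev/null 2>&1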

Generating Report: Run the SQL*Plus script eoutput.sql to generate the output report; the output it produces is shown in Figure 1. The eagle.sh script runs eoutput.sql after executing the execute_db_watch stored procedure. If you run execute_db_watch more than once a day, you need to modify the eoutput.sql script to take the running frequency into account.

Customizing Watches: As shown in the eddl.sql script, the watch criteria are divided into an insert clause, a select clause and a where clause. If you need to modify watch criteria, just update the where clause of the watch. For example, as shown in the estdwtch.sql script, the where clause of the Extents watch contains an extents > 10 condition. If you want to monitor only objects with 100 or more extents, you just modify the where clause to extents > 100. All the standard watches are shown in estdwtch.sql. Make sure that you use two single quotes around character strings in the where clause.
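For example, raising the Extents watch threshold from 10 to 100 is a single update against the watch table (watch id EXTENTS as defined in estdwtch.sql; note the doubled single quotes inside the character string):

UPDATE WATCH
SET WHERE_CLAUSE = 'WHERE (EXTENTS > 100 AND OWNER NOT IN (''SYSTEM'',''SYS'')) OR SEGMENT_TYPE = ''ROLLBACK'''
WHERE WATCH_ID = 'EXTENTS' ;
COMMIT ;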

Creating a Watch of Your Own: With some modifications, you may be able to import your own SQL scripts into Eagle as watches and use them over and over again to monitor databases. To write a watch of your own, construct a SQL statement and store it in the watch table. The estdwtch.sql script gives you some examples of how to create a watch of your own. Note that <db_link> in the watch criteria acts as a formal parameter, and is replaced by the database link name at runtime.
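As a sketch, a user-defined watch for rollback segment waits could be added and activated just like the standard watches. The watch id RBS_WAITS and its criteria are hypothetical examples; v$rollstat supplies the USN and WAITS columns, and <db_id> and <db_link> are the formal parameters replaced at runtime:

INSERT INTO WATCH(WATCH_ID,NAME,INSERT_CLAUSE,SELECT_CLAUSE,WHERE_CLAUSE)
VALUES('RBS_WAITS','Rollback Segment Waits',
'INSERT INTO DB_WATCH_RESULT(DB_ID,WATCH_ID,RUN_TIME,PARAMETER,VALUE) ',
'SELECT ''<db_id>'',''RBS_WAITS'',SYSDATE,USN,WAITS FROM SYS.V$ROLLSTAT@<db_link> ',
' ');
INSERT INTO DB_WATCH(DB_ID,WATCH_ID,ACTIVE_YN) VALUES('<db_id>','RBS_WAITS','Y');
COMMIT;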
Summary

There are two aspects to the performance monitoring process: in-depth knowledge of the Oracle database, and the necessary statistics collected over a period of time. Eagle helps you with the latter. By using the sample scripts provided in this article, you should be able to automate your performance statistics collection procedures. The purpose of this article is not to provide a shrink-wrapped performance monitoring tool, but to provide a framework upon which you can build a sound performance monitoring tool to meet your needs. Eagle may not meet all of your requirements, but you should be able to customize it; it may not have all the watches you need, but you should be able to add new watches to extend its functionality. Eagle is a do-it-yourself tool, and you are encouraged to enhance and improve it to meet your requirements.







Script 1: Oracle Account Script

REM File Name: eaccount.sql
REM Description: Script to create Eagle Oracle Account
REM Usage: Run from sys or privileged Oracle account
REM Assigns USERS as default and TEMP as temporary tablespace
REM

CREATE USER EAGLE
IDENTIFIED BY EAGLE
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP ;

GRANT DBA TO EAGLE ;
GRANT CREATE VIEW TO EAGLE ;
GRANT EXECUTE ON DBMS_SQL TO EAGLE ;
GRANT SELECT ON <object_name> TO EAGLE;





Script 2 : Objects Creation Script
REM Name: eddl.sql
REM Description: Creates Objects(Tables/Indexes) in USERS and USERS_IDX tablespace
REM Usage: Run this script from SQL*Plus, Use Eagle Oracle account
REM

CREATE TABLE WATCH (
WATCH_ID VARCHAR2(12) NOT NULL,
NAME VARCHAR2(30) NOT NULL,
PREPARE1_CLAUSE VARCHAR2(256) NULL,
PREPARE2_CLAUSE VARCHAR2(256) NULL,
INSERT_CLAUSE VARCHAR2(256) NOT NULL,
SELECT_CLAUSE VARCHAR2(256) NOT NULL,
WHERE_CLAUSE VARCHAR2(256) NULL,
CLOSE1_CLAUSE VARCHAR2(256) NULL,
CLOSE2_CLAUSE VARCHAR2(256) NULL,
CONSTRAINT WATCK_PK
PRIMARY KEY (WATCH_ID)
USING INDEX TABLESPACE USERS_IDX
STORAGE(INITIAL 16K NEXT 16K PCTINCREASE 0)
) TABLESPACE USERS
STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 0) ;


CREATE TABLE DATABASE (
DB_ID VARCHAR2(12) NOT NULL,
NAME VARCHAR2(30) NOT NULL,
DB_LINK VARCHAR2(30) NOT NULL,
CONSTRAINT DATABASE_PK
PRIMARY KEY (DB_ID)
USING INDEX TABLESPACE USERS_IDX
STORAGE(INITIAL 16K NEXT 16K PCTINCREASE 0)
) TABLESPACE USERS
STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 0) ;


CREATE TABLE DB_WATCH (
DB_ID VARCHAR2(12) NOT NULL,
WATCH_ID VARCHAR2(12) NOT NULL,
ACTIVE_YN VARCHAR2(1),
CONSTRAINT DB_WATCH_PK
PRIMARY KEY (DB_ID, WATCH_ID)
USING INDEX TABLESPACE USERS_IDX
STORAGE(INITIAL 16K NEXT 16K PCTINCREASE 0),
CONSTRAINT DB_WATCH_FK1
FOREIGN KEY (DB_ID)
REFERENCES DATABASE (DB_ID),
CONSTRAINT DB_WATCH_FK2
FOREIGN KEY (WATCH_ID)
REFERENCES WATCH (WATCH_ID)
) TABLESPACE USERS
STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 0) ;


CREATE TABLE DB_WATCH_RESULT(
DB_ID VARCHAR2(12) NOT NULL,
WATCH_ID VARCHAR2(12) NOT NULL,
RUN_TIME DATE NOT NULL,
PARAMETER VARCHAR2(256) NULL,
VALUE NUMBER(12,2) NOT NULL,
CONSTRAINT DB_WATCH_RESULT_FK
FOREIGN KEY (DB_ID,WATCH_ID)
REFERENCES DB_WATCH(DB_ID, WATCH_ID)
) TABLESPACE USERS
STORAGE ( INITIAL 16K NEXT 16K PCTINCREASE 0) ;


Script 3 : Standard Watches Creation Script
REM Name: estdwtch.sql
REM Description: Script to create Standard Watches
REM Usage: Run from SQL*Plus, Use Eagle account
REM
rem
rem
rem Database Buffer Watch
rem
INSERT INTO WATCH(WATCH_ID,NAME,INSERT_CLAUSE,SELECT_CLAUSE,WHERE_CLAUSE)
VALUES('DB_BUF_HR','Database Buffer Hit Ratio',
'INSERT INTO DB_WATCH_RESULT(DB_ID,WATCH_ID,RUN_TIME,VALUE) ',
'SELECT ''<db_id>'',''DB_BUF_HR'',SYSDATE,100*ROUND(((A.VALUE + B.VALUE - C.VALUE)/(A.VALUE + B.VALUE)),3) FROM SYS.V$SYSSTAT@<db_link> A,SYS.V$SYSSTAT@<db_link> B,SYS.V$SYSSTAT@<db_link> C ',
'WHERE C.NAME = ''physical reads'' AND A.NAME = ''consistent gets'' AND B.NAME = ''db block gets'' ') ;
rem
rem Sort Area Hit Ratio
rem
INSERT INTO WATCH(WATCH_ID,NAME,INSERT_CLAUSE,SELECT_CLAUSE,WHERE_CLAUSE)
VALUES('SORT_HR','Sort Area Hit Ratio',
'INSERT INTO DB_WATCH_RESULT(DB_ID,WATCH_ID,RUN_TIME,VALUE) ',
'SELECT ''<db_id>'',''SORT_HR'',SYSDATE,100*ROUND(((A.VALUE - B.VALUE)/(NVL(A.VALUE,1))),3) FROM SYS.V$SYSSTAT@<db_link> A,SYS.V$SYSSTAT@<db_link> B ',
'WHERE A.NAME = ''sorts (memory)'' AND B.NAME = ''sorts (disk)'' ') ;
rem
rem Dictionary Cache Hit Ratio
rem
INSERT INTO WATCH(WATCH_ID,NAME,INSERT_CLAUSE,SELECT_CLAUSE,WHERE_CLAUSE)
VALUES('DC_HR','Dictionary Cache Hit Ratio',
'INSERT INTO DB_WATCH_RESULT(DB_ID,WATCH_ID,RUN_TIME,VALUE) ',
'SELECT ''<db_id>'',''DC_HR'',SYSDATE,100*ROUND(((SUM(GETS) - SUM(GETMISSES))/(SUM(GETS))),3) FROM SYS.V$ROWCACHE@<db_link> ',
' ');
rem
rem Library Cache Hit Ratio
rem
INSERT INTO WATCH(WATCH_ID,NAME,INSERT_CLAUSE,SELECT_CLAUSE,WHERE_CLAUSE)
VALUES('LIB_HR','Library Cache Hit Ratio',
'INSERT INTO DB_WATCH_RESULT(DB_ID,WATCH_ID,RUN_TIME,VALUE) ',
'SELECT ''<db_id>'',''LIB_HR'',SYSDATE,100*ROUND(((SUM(PINS) - SUM(RELOADS))/(SUM(PINS))),3) FROM SYS.V$LIBRARYCACHE@<db_link> ',
' ');


rem
rem Extents Watch
rem
INSERT INTO WATCH(WATCH_ID,NAME,INSERT_CLAUSE,SELECT_CLAUSE,WHERE_CLAUSE)
VALUES('EXTENTS','Object Extension',
'INSERT INTO DB_WATCH_RESULT(DB_ID,WATCH_ID,RUN_TIME,PARAMETER,VALUE) ',
'SELECT ''<db_id>'',''EXTENTS'',SYSDATE,OWNER||''.''||SEGMENT_NAME,EXTENTS FROM SYS.DBA_SEGMENTS@<db_link> ',
'WHERE (EXTENTS > 10 AND OWNER NOT IN (''SYSTEM'',''SYS'')) OR SEGMENT_TYPE = ''ROLLBACK''') ;
rem
rem Free Space Watch
rem
INSERT INTO WATCH(WATCH_ID,NAME,INSERT_CLAUSE,SELECT_CLAUSE,WHERE_CLAUSE,PREPARE1_CLAUSE,PREPARE2_CLAUSE,CLOSE1_CLAUSE,CLOSE2_CLAUSE)
VALUES('FREESPC','% Free Space',
'INSERT INTO DB_WATCH_RESULT(DB_ID,WATCH_ID,RUN_TIME,PARAMETER,VALUE) ',
'SELECT ''<db_id>'',''FREESPC'',SYSDATE,B.TABLESPACE_NAME,((NVL(A.BYTES,0))/((B.BYTES)))*100 FROM FREE_SPACE_VIEW A,DATA_FILE_VIEW B ',
'WHERE A.TABLESPACE_NAME(+)=B.TABLESPACE_NAME ',
'CREATE VIEW FREE_SPACE_VIEW(TABLESPACE_NAME,BYTES) AS SELECT TABLESPACE_NAME,SUM(BYTES) FROM SYS.DBA_FREE_SPACE@<db_link> GROUP BY TABLESPACE_NAME',
'CREATE VIEW DATA_FILE_VIEW(TABLESPACE_NAME,BYTES) AS SELECT TABLESPACE_NAME,SUM(BYTES) FROM SYS.DBA_DATA_FILES@<db_link> GROUP BY TABLESPACE_NAME',
'DROP VIEW FREE_SPACE_VIEW',
'DROP VIEW DATA_FILE_VIEW');
rem
rem
commit;

Script 4 : PL/SQL Stored Procedure
REM
REM Name: eplsql.sql
REM Description: Create the PL/SQL procedure that executes active watches
REM Usage: Run from SQL*Plus, Use Eagle account
REM
CREATE OR REPLACE PROCEDURE EXECUTE_DB_WATCH
AS
CURSOR C_DB_WATCH IS
SELECT DB_ID, WATCH_ID, ACTIVE_YN
FROM DB_WATCH ;
db_watch_rec DB_WATCH%ROWTYPE ;
watch_rec WATCH%ROWTYPE ;
database_rec DATABASE%ROWTYPE ;
sql_stmt VARCHAR2(1000) ;
ret_val INTEGER ;
cursor_id INTEGER ;
c_get_data INTEGER ;
temp_clause VARCHAR2(256) ;
BEGIN
OPEN C_DB_WATCH;
LOOP
FETCH C_DB_WATCH INTO db_watch_rec;
EXIT WHEN C_DB_WATCH%NOTFOUND ;
IF db_watch_rec.active_yn = 'Y' THEN
SELECT *
INTO watch_rec
FROM WATCH
WHERE WATCH_ID = db_watch_rec.watch_id ;
/* fetch database record */
SELECT *
INTO database_rec
FROM DATABASE
WHERE DB_ID = db_watch_rec.db_id ;
watch_rec.select_clause := REPLACE(watch_rec.select_clause,
'<db_id>',database_rec.db_id);
watch_rec.select_clause := REPLACE(watch_rec.select_clause,
'<db_link>',database_rec.db_link);
sql_stmt := watch_rec.insert_clause ||
watch_rec.select_clause ||
watch_rec.where_clause;
IF watch_rec.prepare1_clause IS NOT NULL THEN
watch_rec.prepare1_clause := REPLACE(watch_rec.prepare1_clause,
'<db_link>',database_rec.db_link);
cursor_id := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(cursor_id,watch_rec.prepare1_clause,DBMS_SQL.V7);
ret_val := DBMS_SQL.EXECUTE(cursor_id) ;
DBMS_SQL.CLOSE_CURSOR(cursor_id);
END IF;

IF watch_rec.prepare2_clause IS NOT NULL THEN
watch_rec.prepare2_clause := REPLACE(watch_rec.prepare2_clause,
'<db_link>',database_rec.db_link);
cursor_id := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(cursor_id,watch_rec.prepare2_clause,DBMS_SQL.V7);
ret_val := DBMS_SQL.EXECUTE(cursor_id) ;
DBMS_SQL.CLOSE_CURSOR(cursor_id);
END IF;

c_get_data := DBMS_SQL.OPEN_CURSOR ;
DBMS_SQL.PARSE(c_get_data,sql_stmt,DBMS_SQL.V7) ;
ret_val := DBMS_SQL.EXECUTE(c_get_data) ;
DBMS_SQL.CLOSE_CURSOR(c_get_data) ;

IF watch_rec.close1_clause IS NOT NULL THEN
cursor_id := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(cursor_id,watch_rec.close1_clause,DBMS_SQL.V7);
ret_val := DBMS_SQL.EXECUTE(cursor_id) ;
DBMS_SQL.CLOSE_CURSOR(cursor_id);
END IF;

IF watch_rec.close2_clause IS NOT NULL THEN
cursor_id := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(cursor_id,watch_rec.close2_clause,DBMS_SQL.V7);
ret_val := DBMS_SQL.EXECUTE(cursor_id) ;
DBMS_SQL.CLOSE_CURSOR(cursor_id);
END IF;

END IF;
END LOOP;
END ;
/

Script 5 : Output Report Script
REM
REM Name: eoutput.sql
REM Description: Script to generate output report
REM Usage: Run from SQL*Plus, Use Eagle Oracle account
REM sqlplus eagle/eagle
SPOOL &1
COLUMN NAME FORMAT A26 HEADING 'WatchName'
COLUMN PARAMETER FORMAT A28 HEADING 'Parameter Name'
COLUMN WEEK4 FORMAT 999 HEADING '4wkAgo'
COLUMN WEEK3 FORMAT 999 HEADING '3wkAgo'
COLUMN WEEK2 FORMAT 999 HEADING '2wkAgo'
COLUMN WEEK1 FORMAT 999 HEADING '1wkAgo'
COLUMN DAYS6 FORMAT 999 HEADING '6dyAgo'
COLUMN DAYS5 FORMAT 999 HEADING '5dyAgo'
COLUMN DAYS4 FORMAT 999 HEADING '4dyAgo'
COLUMN DAYS3 FORMAT 999 HEADING '3dyAgo'
COLUMN DAYS2 FORMAT 999 HEADING '2dyAgo'
COLUMN DAYS1 FORMAT 999 HEADING '1dyAgo'
COLUMN TODAY FORMAT 999 HEADING 'Tdy'
COLUMN CHANGE FORMAT 999 HEADING 'Chg'
rem
SET PAGESIZE 60 LINESIZE 130
SET TERMOUT OFF ECHO OFF FEEDBACK OFF
rem
rem
BREAK ON DB_ID NODUP SKIP 2 ON NAME NODUP SKIP 1
rem TTITLE CENTER 'Eagle Output' SKIP 2

SELECT DB_ID,
WATCH.NAME,
PARAMETER,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-28),VALUE,0)) WEEK4,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-21),VALUE,0)) WEEK3,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-14),VALUE,0)) WEEK2,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-7),VALUE,0)) WEEK1,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-6),VALUE,0)) DAYS6,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-5),VALUE,0)) DAYS5,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-4),VALUE,0)) DAYS4,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-3),VALUE,0)) DAYS3,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-2),VALUE,0)) DAYS2,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE-1),VALUE,0)) DAYS1,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE),VALUE,0)) TODAY,
MAX(DECODE(TRUNC(RUN_TIME),TRUNC(SYSDATE),VALUE,0)) -
MAX(DECODE(TRUNC(RUN_TIME), TRUNC(SYSDATE-28),VALUE,0)) CHANGE
FROM DB_WATCH_RESULT ,WATCH
WHERE WATCH.WATCH_ID = DB_WATCH_RESULT.WATCH_ID
GROUP BY DB_ID,WATCH.NAME,PARAMETER
ORDER BY DB_ID, WATCH.NAME,PARAMETER;
REM
SET NEWPAGE 0
REM
EXIT

Script 6 : Unix Shell Script
# Name: eagle.sh
# Description: Unix shell script to run the PL/SQL procedure and generate the report
# Set ORACLE_HOME and ORACLE_SID in this script
# eoutput.sql is in home directory
# Output file also created in home directory
#
REPORTDIR=$HOME
REPORTFILE=$REPORTDIR/eagle`date "+%m%d"`.out
touch $REPORTFILE
sqlplus -s eagle/eagle <<EOF
EXECUTE EXECUTE_DB_WATCH
@eoutput.sql $REPORTFILE
EOF

Tuesday, July 7, 2009

Cloning an Oracle Database .... Slower!

While I was about to clone an Oracle database using RMAN this morning, I remembered that I had written a magazine article about database cloning in 1998. I immediately searched for an electronic copy of that article and found it buried on my computer. The article was titled "Cloning an Oracle Database - Faster." Although it is nostalgic to look back in the rear-view mirror, I realized that the article should now be renamed "Cloning an Oracle Database - Slower." Today it is 2009; that was 1998. Thanks to Oracle 10g and RMAN, cloning is a whole lot easier than it was in 1998.

I have written two additional blog posts that illustrate how to clone an Oracle database using RMAN - Faster!
Here is the article that I wrote in 1998. You may find it interesting!

Cloning an Oracle Database

The recent news of cloning a sheep didn’t come as a surprise to the Oracle DBA community since the concept of cloning isn’t new to the Oracle DBA community. An Oracle DBA routinely performs a procedure to clone databases to support the life cycle of an application. During the application life cycle, it’s not uncommon to have one or more databases to support production, acceptance, quality assurance, and test and development environments. An Oracle DBA is responsible for building databases for these environments, and most of the times, it’s cloning: build production from acceptance database, test from development database, quality assurance from production database, and so on.

Let’s take a very common example of database cloning: the users of the HRPROD database have been experiencing poor performance. HRTEST, the test database, is significantly smaller than HRPROD, so it’s not possible to duplicate HRPROD’s problems using HRTEST. You need the data from the production environment to resolve these problems, but you don’t want (and shouldn’t want) to use your production database for testing and tuning. You need to clone your production environment. This is just one example where database cloning is required. We will refer to this example throughout this article to illustrate the procedures.


Export/Import Approach

Most DBAs utilize the export/import approach to copy a database. Let’s assume that you want to copy the HRTEST database from the HRPROD database; here is a list of high-level steps you need to follow:
* Export HRPROD database to the dump file in consistent mode
* Create HRTEST database with create database command
* Run catalog.sql, catproc.sql and other scripts to build catalog
* Create additional rollback segments and necessary tablespaces in HRTEST
* Import a dump file extracted from HRPROD into HRTEST
* Make sure that import worked without any problems. If not, fix problems.
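The steps above map onto Oracle's classic exp and imp utilities. The following is a minimal dry-run sketch; the credentials, connect strings and parameter values are placeholders (assumptions, not from the article):

```shell
#!/bin/sh
# Hypothetical sketch of the export/import steps above. The functions
# only print the commands (a dry run); paste and edit them to execute.
# system/manager and the file names are placeholders.

exp_cmd() {
    # Step 1: consistent full export of HRPROD to a dump file
    echo "exp system/manager@HRPROD FULL=Y CONSISTENT=Y FILE=hrprod.dmp LOG=exp_hrprod.log"
}

imp_cmd() {
    # Step 5: full import of the dump file into HRTEST
    echo "imp system/manager@HRTEST FULL=Y IGNORE=Y FILE=hrprod.dmp LOG=imp_hrtest.log"
}

exp_cmd
imp_cmd
```

CONSISTENT=Y corresponds to the consistent-mode export in step 1; steps 2-4 (create database, catalog scripts, rollback segments and tablespaces) are performed in sqldba/svrmgrl and are not shown.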

There are times when the export/import approach is the best, or sometimes the only, choice. For example, when you want to copy a database from a Unix server to a Windows NT server, or to a box with a different version of the operating system which doesn’t offer file-level compatibility. Another example is when you want to reorganize a database which is highly fragmented due to update and/or delete activities: during import, Oracle reorganizes the data and removes fragmentation. This approach is also useful when you want to resize the target database by precreating tablespaces, tables or indexes of appropriate size. But the following are the main disadvantages of the export/import approach:

Export Time: The time to export a database is directly proportional to the amount of data stored in the database: the larger the database, the longer it takes to export. On a mid-range unix box, it’s not unusual for an export to take 4-6 hours to dump a 10GB database. This article explains an approach which doesn’t require export.

Consistent Export: During database export, especially for a large database with heavy activities, you might encounter “snapshot too old” error if rollback segments wrap around and overwrite before-image information which export needs to reconstruct a consistent snapshot. To resolve “snapshot too old” error, you need to increase the size and/or number of the rollback segments. If this error reoccurs, then you need to export database while keeping all users out of the database which may not be acceptable for 24 X 7 systems. The approach detailed in this article doesn’t encounter errors like “snapshot too old”, and use of backup datafiles eliminates any need for additional downtime.

Import Time: The time to import a database could be even longer than the time to export, as import, being a write operation, is more resource intensive and may overload the system. Like export, the time to import a database is directly proportional to the size of the database, and is usually longer than the time to export it. The approach detailed in this article doesn’t require an import operation.

Backup/Restore/Rename/Recover Approach
This approach is based on the concept of backup, restore, rename and recover, and doesn’t require export/import. The export/import approach involves exporting data from the database and then importing it back into a database, and that’s redundant. It’s like emptying a bucket into another one, and then filling it back with the same water. Export copies data from the datafiles into a dump file, and then import puts the data back into datafiles from the dump file. So why not copy the datafiles directly, avoiding copying the data back and forth? That’s what this approach does: it copies data at the datafile level. Your target database will inherit all the properties, good and bad, of the source database, but this approach can be significantly faster, especially for a large database.
Under this approach, we restore all the datafiles from the backup, but these files can’t be used without renaming the database name in the headers of the datafiles as it’s not possible to have two Oracle databases with the same name on the same server. So we rename the database. If necessary, we apply archived redo logs for database point-in-time recovery.
We will be using the example described above to illustrate the backup/restore/rename approach. Let’s say we want to clone the HRPROD (Human Resource Production) database and call it HRTEST (Human Resource Test). HRTEST will reside on the same server as HRPROD. Note that you can use this approach even if the target database is on another server. A step-by-step approach for cloning HRTEST database is explained below:

Step 1 : Determine Disk Space Requirements for HRTEST

The first step is to determine the free disk space that will be needed on the server for HRTEST database. Since HRTEST is going to be a clone of HRPROD, the free disk space needed to build HRTEST is equal to the disk space used by HRPROD database. The script fs_usage.sql can be used to determine the disk space used by HRPROD database, and the output it produces is shown in Figure 1. As you can see in Figure 1, the space usage report is broken down by file system, directory or sub-directory, depending upon the configuration of the database and the server, but we will use the term file systems in the subsequent sections. Note that this query is written specifically for Unix platforms, but it can easily be adapted to other platforms by replacing the “/” character in the query with “]” for OpenVMS or “\” for Windows NT.


rem ****************************************************************************
rem Name: fs_usage.sql
rem Author: Jay Mehta
rem Description: Disk space usage report
rem Usage Notes: Needs access to v$datafile view
rem ****************************************************************************
set pagesize 60
set heading on
set feedback off
column filesys format A30 heading "File System"
column filesys_size FORMAT 9,999,999,999 Heading "Bytes"
break on report
compute sum of filesys_size on report
spool filesys_usage.lst
select substr(name,1,instr(name,'/',-1,1)-1) filesys,
sum(bytes) filesys_size
from sys.v$datafile
group by substr(name,1,instr(name,'/',-1,1)-1)
/
spool off

Figure 1: HRPROD Database : Space Usage Report

File System Bytes
------------------------------ --------------
/oracle/fs01a/HRPROD 120,586,240
/oracle/fs03a/HRPROD 314,572,800
/oracle/fs04a/HRPROD 220,200,960
-----------------
sum 655,360,000

Make sure that you have enough free disk space on the server not only to create HRTEST database, but also for the database growth and administrative scripts/logs. Note that this script does not include space utilized by controlfiles and redo logfiles.
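As a sanity check, the byte totals reported by fs_usage.sql can be compared against what df reports for a candidate file system. The helper below is a hypothetical sketch (the function name and usage are assumptions, not from the article):

```shell
#!/bin/sh
# has_space DIR BYTES -- succeed if DIR's file system has at least BYTES
# of free space, using the portable "df -Pk" output (1K blocks).
has_space() {
    needed_kb=$((($2 + 1023) / 1024))
    avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
    [ "$avail_kb" -ge "$needed_kb" ]
}

# Example: room for HRPROD's /oracle/fs01a datafiles (120,586,240 bytes)?
# has_space /oracle/fs11a 120586240 && echo "enough space for HRTEST"
```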

Step 2 : Choose Physical Database Layout for HRTEST

The next step is to determine the physical database layout for HRTEST database, i.e. the placement of datafiles, controlfiles and redo logfiles. The number of physical database layouts available for HRTEST depends upon factors like the configuration of the server, the free disk space available, the performance desired, etc. For example, you can choose a physical database layout for optimal performance, for efficient disk space usage, or for a combination of the two. A layout achieves optimal performance by spreading datafiles across multiple disks and controllers; it achieves efficient disk space usage by storing all the database files on one disk.

For the purpose of simplicity, let’s choose a physical database layout for HRTEST which is identical to the physical database layout of HRPROD.
In other words, HRTEST’s file systems map one-to-one to HRPROD’s file systems, as shown in Figure 2. As you can see from Figure 1, HRPROD database uses three file systems. Under one-to-one mapping, each file system of HRPROD will be mapped to one and only one file system of HRTEST. As shown in Figure 2, /oracle/fs01a/HRPROD will be mapped to /oracle/fs11a/HRTEST, and so on. In other words, HRPROD’s datafiles that reside on /oracle/fs01a/HRPROD will be copied or restored to /oracle/fs11a/HRTEST, and so on. From Figures 1 and 2, the free disk space required in the /oracle/fs11a/HRTEST file system is 120,586,240 bytes, and so on.

Figure 2: File Systems Mapping

HRPROD’s File Systems HRTEST’s File Systems
/oracle/fs01a/HRPROD /oracle/fs11a/HRTEST
/oracle/fs03a/HRPROD /oracle/fs13a/HRTEST
/oracle/fs04a/HRPROD /oracle/fs14a/HRTEST
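The one-to-one mapping of Figure 2 can also be expressed as a small lookup, handy later when copying or renaming files. A minimal sketch (the map_path function is hypothetical, not from the article):

```shell
#!/bin/sh
# map_path SOURCE -- print the HRTEST location of an HRPROD datafile,
# following the one-to-one file system mapping of Figure 2.
map_path() {
    case "$1" in
        /oracle/fs01a/HRPROD/*) echo "/oracle/fs11a/HRTEST/${1##*/}" ;;
        /oracle/fs03a/HRPROD/*) echo "/oracle/fs13a/HRTEST/${1##*/}" ;;
        /oracle/fs04a/HRPROD/*) echo "/oracle/fs14a/HRTEST/${1##*/}" ;;
        *) echo "no mapping for $1" >&2; return 1 ;;
    esac
}

map_path /oracle/fs01a/HRPROD/system01.dbf   # prints /oracle/fs11a/HRTEST/system01.dbf
```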


Step 3 : Update init.ora and config.ora

The next step is to configure parameters in initHRTEST.ora and configHRTEST.ora. If HRTEST database doesn’t exist on the server, then copy HRPROD’s configuration files, rename them to initHRTEST.ora and configHRTEST.ora, and make appropriate changes. If HRTEST database already exists, then you have configuration files in place. You need to watch for those parameters, shown in Figure 3, that either include the database name in their values or contain physical locations of the file systems. Depending upon requirements, you might also want to change memory parameters such as sort_area_size and db_block_buffers.

Figure 3

* ifile
* db_name
* control_files
* audit_file_dest, background_dump_dest, core_dump_dest, user_dump_dest, log_archive_dest
* mts parameters


Step 4 : Generate Control File Script

The next step is to generate a script that will be used to create the controlfiles for HRTEST database. Generate a text copy of HRPROD’s controlfile using the sqldba or svrmgrl command: ALTER DATABASE BACKUP CONTROLFILE TO TRACE. The script generated by this command can be found in the location specified by the background_dump_dest parameter of init.ora, and is shown in Figure 4. Rename this file to control.ctl, as it’s easier to remember.

Figure 4 : HRPROD’s Control File

# The following commands will create a new control file and use it
# to open the database.
# No data other than log history will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "HRPROD" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 4
MAXDATAFILES 1022
MAXINSTANCES 1
MAXLOGHISTORY 100
LOGFILE
GROUP 1 '/oracle/fs01a/HRPROD/log1a.rdo' SIZE 10M,
GROUP 2 '/oracle/fs01a/HRPROD/log2a.rdo' SIZE 10M,
GROUP 3 '/oracle/fs01a/HRPROD/log3a.rdo' SIZE 10M
DATAFILE
'/oracle/fs01a/HRPROD/system01.dbf' SIZE 50M,
'/oracle/fs01a/HRPROD/tools01.dbf' SIZE 5M,
'/oracle/fs04a/HRPROD/rbs01.dbf' SIZE 50M,
'/oracle/fs01a/HRPROD/temp01.dbf' SIZE 60M,
'/oracle/fs03a/HRPROD/users01.dbf' SIZE 100M,
'/oracle/fs04a/HRPROD/users_idx01.dbf' SIZE 60M,
'/oracle/fs03a/HRPROD/eq_data01.dbf' SIZE 200M,
'/oracle/fs04a/HRPROD/eq_idx01.dbf' SIZE 100M
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;


Step 5 : Edit the Control File Script

Make the following changes to control.ctl:

• Delete everything but the create controlfile statement from the control.ctl file. The file should contain only the create controlfile statement, as it is advisable to execute one command at a time, determine its success or failure, and then proceed to the next step.

• Remove all occurrences of REUSE from the control.ctl file, as the REUSE option authorizes Oracle to overwrite any existing file of the same size without any prompt or notification at runtime, which could be disastrous, particularly when you are cloning a live production database.

• Replace NORESETLOGS with RESETLOGS, as Oracle requires the RESETLOGS option when a backup controlfile is used.

• Replace HRPROD with HRTEST in the create controlfile statement, as HRTEST is the name of the new database, and add SET just before the word DATABASE, as shown in Figure 5. This renames the database to HRTEST.

• Replace the pathnames of all the datafiles and logfiles with the pathnames for HRTEST database. Refer to the mapping chart for the replacements. Since we have chosen one-to-one mapping, the global search-and-replace function of a text editor can be used to speed up pathname replacement.

• If necessary, change the values of MAXLOGFILES, MAXDATAFILES, and other parameters. You can also take this opportunity to change the size of the redo log files, add or remove redo log groups and/or members, and choose the desired mode for redo log archiving.

• Check and recheck this trace file to avoid any potential disasters.
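For the one-to-one mapping of Figure 2, the edits above can be scripted. The following sed filter is a sketch (it reads the trace on stdin; redo logs placed on a different file system must still be edited by hand), and, as the last bullet says, check and recheck the result:

```shell
#!/bin/sh
# rename_ctl -- apply the Step 5 edits to a controlfile trace on stdin:
# drop REUSE, add SET, switch NORESETLOGS to RESETLOGS, rename the
# database, and remap the datafile paths per Figure 2.
rename_ctl() {
    sed -e 's/ REUSE//g' \
        -e 's/^CREATE CONTROLFILE DATABASE/CREATE CONTROLFILE SET DATABASE/' \
        -e 's/NORESETLOGS/RESETLOGS/' \
        -e 's/"HRPROD"/"HRTEST"/' \
        -e 's|/oracle/fs01a/HRPROD|/oracle/fs11a/HRTEST|g' \
        -e 's|/oracle/fs03a/HRPROD|/oracle/fs13a/HRTEST|g' \
        -e 's|/oracle/fs04a/HRPROD|/oracle/fs14a/HRTEST|g'
}

# usage: rename_ctl < control.ctl > control_hrtest.ctl
```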


Figure 5: Updated Control File

CREATE CONTROLFILE SET DATABASE "HRTEST" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 4
MAXDATAFILES 1022
MAXINSTANCES 1
MAXLOGHISTORY 100
LOGFILE
GROUP 1 '/oracle/fs53a/HRTEST/log1a.rdo' SIZE 10M,
GROUP 2 '/oracle/fs53a/HRTEST/log2a.rdo' SIZE 10M,
GROUP 3 '/oracle/fs53a/HRTEST/log3a.rdo' SIZE 10M
DATAFILE
'/oracle/fs11a/HRTEST/system01.dbf' SIZE 50M,
'/oracle/fs11a/HRTEST/tools01.dbf' SIZE 5M,
'/oracle/fs14a/HRTEST/rbs01.dbf' SIZE 50M,
'/oracle/fs11a/HRTEST/temp01.dbf' SIZE 60M,
'/oracle/fs13a/HRTEST/users01.dbf' SIZE 100M,
'/oracle/fs14a/HRTEST/users_idx01.dbf' SIZE 60M,
'/oracle/fs13a/HRTEST/eq_data01.dbf' SIZE 200M,
'/oracle/fs14a/HRTEST/eq_idx01.dbf' SIZE 100M
;

Step 6 : Restore Datafiles

The purpose of this step is to restore HRPROD’s datafiles to HRTEST’s file systems. During this restore, datafiles are not restored to HRPROD’s file systems, but to HRTEST’s. It’s much like a situation in which you lost HRPROD database completely due to disk failure and need to restore the datafiles to alternative file systems.

A copy of HRPROD’s datafiles can be obtained from either a cold backup or a hot backup. The backup/recovery strategy employed varies, often greatly, from database to database and shop to shop, but the idea here is to copy all the datafiles produced by the hot or cold backup to the file systems designated in Step 2. Make sure that you don’t overwrite any of HRPROD’s datafiles during the restore.

If you are not taking hot or cold backups, or if HRPROD can be brought down for the time required to copy the datafiles, then copy all of HRPROD’s datafiles to the mapped file systems determined in Step 2 using operating system commands. Make sure that the database is down during the file copy. Since we have chosen one-to-one file system mapping, multiple files can be copied using wild cards. For example, use “cp /oracle/fs01a/HRPROD/* /oracle/fs11a/HRTEST” on Unix platforms.

If HRTEST database already exists, then delete all HRTEST’s database files, including datafiles, controlfiles and redo logfiles, before copy or restore.


Step 7 : Restore Archived Redo Logs

If you want to roll HRTEST forward to a particular time, then make sure that you have all the necessary archived redo logs from the time the backup was taken up to the time you want HRTEST to roll forward to. If you are restoring from hot backup files, you will need archived redo logs. Not only can you restore archived redo logs from tape, if necessary, but you can also apply the last archived redo log generated by Oracle. Note that you cannot carry over transactions that are in the active redo logfile of HRPROD, but you can perform a log switch in HRPROD to archive the current redo logfile and then apply that archived redo log.

To apply archived redo logs to HRTEST, you need to point the log_archive_dest parameter of HRTEST’s init.ora file to the location where you have restored the archived redo logs from tape. Make sure that you have enough free disk space to hold the restored archived redo logs for the duration of the roll-forward process; they can be deleted afterwards.

Step 8 : Set-up HRTEST environment

Set Unix environment variables including ORACLE_SID by using oraenv or coraenv. Make sure that ORACLE_SID is set to HRTEST.


Step 9 : Create Control File

The next step is to create the control files for HRTEST database. As shown in Figure 9, start the HRTEST instance with the startup nomount command and run control.ctl with @control.ctl. Oracle creates the control files specified by the control_files parameter of initHRTEST.ora and mounts the database. If initHRTEST.ora isn’t stored in the default location, then start the instance with the pfile option. Note that SQLDBA is started in line mode. For Oracle 7.2 or later, SQLDBA is replaced by Server Manager.


Step 10 : Recover Database, if necessary

If you have copied HRPROD’s datafiles while HRPROD was down, or restored datafiles from a cold backup, then no recovery action is required. But if you would like to roll the database forward, then media recovery should be performed; if you have restored the database from a hot backup, media recovery is required. Perform the media recovery with the recover database using backup controlfile command. Oracle will suggest the name and location of each archived redo log it needs. If the archived redo logs aren’t in the location suggested by Oracle, then change the location and continue. A log of the recovery actions is shown in Figure 9. Finally, open HRTEST database using alter database open resetlogs, and you have successfully completed cloning.


Figure 9 : SQLDBA Session

SQLDBA>
SQLDBA> startup nomount ;
ORACLE instance started.
SQLDBA>
SQLDBA> @control.ctl
Statement processed.
SQLDBA>
SQLDBA> alter database open resetlogs ;
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/oracle/fs53a/HRTEST/system01.dbf'
SQLDBA>
SQLDBA> recover database using backup controlfile ;
.....
.....
Applying suggested logfile...
Log applied.
.....
.....
Media recovery canceled.
SQLDBA>
SQLDBA> alter database open resetlogs ;
Statement processed.
SQLDBA> exit
SQL*DBA complete.

Step 11 : Rename GLOBAL_NAME

The last step in this procedure is to rename the GLOBAL_NAME of the database which can be done by the following statement:

ALTER DATABASE RENAME GLOBAL_NAME TO HRTEST ;

Summary

The approach outlined in this article to clone a database can be significantly faster than the export/import approach, particularly for a large database. I tested database cloning for a 3GB database using both approaches, and the backup/rename/recover approach was 3 times faster: 3 hours compared to 9 hours for the export/import approach. But a word of caution is warranted here. The backup/rename/recover approach requires in-depth knowledge of Oracle database administration in general, and of the principles of backup and recovery in particular. Here you are killing two birds with one stone: restoring a backup and renaming the database. And don’t forget to back up your target database, if it already exists, before you perform this procedure. You need to make sure that your backup is valid and, if necessary, can be used to recover a database successfully.
Happy cloning!

Oracle Recycle Bin


Flashback

In the computing world, the concept of a recycle bin or trash is not new. A recycle bin, or trash can, is temporary storage for files that have been deleted by the user but not yet permanently erased from the disk by the operating system. Apple first implemented a trash can icon during the development of the Lisa. The concept was carried over to the Apple Macintosh, and has existed and proliferated since then.

Although MS-DOS and some third-party utilities did offer an UNDELETE feature for DOS and Windows users, Microsoft didn’t offer an icon similar to the trash can until it introduced the Recycle Bin icon in Windows 95. In fact, this concept of trash was the subject of a lawsuit, which Apple eventually lost.

Oracle Database Recycle Bin

And Oracle waited until Oracle10g to introduce a recycle bin concept. Better late than never! The Oracle Recycle Bin is actually a data dictionary table containing information about dropped objects. When you drop a table in Oracle, the database does not immediately remove or purge the table; it is just a logical delete. The table is not permanently removed from the database. Instead, Oracle renames the table and places it, along with any dependent objects such as indexes, constraints and triggers, in the recycle bin.

Oops!

In the life of a DBA or an application developer, there are "Oops" moments that they love to hate or hate to love for a long time. Many of us might have accidentally dropped a table, which may very well be an "Oops" moment. For example,

DROP TABLE US_STATES;

Thanks!

Recycle Bin could very well be a job-saver for “Oops” drops! It has been a life-saver for many who have dropped important database tables accidentally or unconsciously. You can view the recycle bin by:

JMEHTA > show recyclebin ;

ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
---------------- ------------------------------ ------------ -------------------
US_STATES BIN$OozJnFttRLa3AntqEdXy/g==$0 TABLE 2009-07-07:10:42:58

What a relief! The US_STATES table is peacefully resting in the Recycle Bin, although Oracle has renamed the dropped table with a system-generated name that begins with BIN$.

You can continue to access data from the dropped table through its recycled name, as shown below:

JMEHTA > select * from "BIN$OozJnFttRLa3AntqEdXy/g==$0" ;

ID CODE NAME
---------- -- ------------------------------
1 VA Virginia
2 MD Maryland
But what about indexes, triggers and grants that were associated with this table? They are also recoverable.

“SELECT * FROM USER_RECYCLEBIN” or “SELECT * FROM RECYCLEBIN” yields the following output.

Note: RECYCLEBIN is a synonym for USER_RECYCLEBIN; they can be used interchangeably.

JMEHTA > select OBJECT_NAME, ORIGINAL_NAME, TYPE from user_recyclebin ;

OBJECT_NAME ORIGINAL_NAME TYPE
------------------------------ -------------------------- -------------------------
BIN$fIgPImIyQsi9yL3c3F8+8g==$0 US_STATES_PK INDEX
BIN$s4VUcdg2SDuHuOVLq3Gyyg==$0 US_STATES_BRI_SEQ TRIGGER
BIN$OozJnFttRLa3AntqEdXy/g==$0 US_STATES TABLE

If you are a DBA, you can also view database level recycle bin by:

JMEHTA> select * from dba_recyclebin ;

Bingo!

You can recover a table and its dependent objects through the flashback command, as shown below.

JMEHTA > flashback table us_states to before drop ;

Let's verify table recovery.

JMEHTA > SELECT TABLE_NAME FROM USER_TABLES WHERE TABLE_NAME = 'US_STATES' ;

TABLE_NAME
------------------------------
US_STATES

What!

You may shout “What!” when you realize that Oracle didn’t rename associated indexes and triggers to their original names.

JMEHTA > select index_name from user_indexes where table_name = 'US_STATES' ;

INDEX_NAME
------------------------------
BIN$leIeukwWQL6zfEoR7XKk5g==$0

JMEHTA > select trigger_name from user_triggers where table_name = 'US_STATES' ;

TRIGGER_NAME
------------------------------
BIN$43f91QgqSBe3gG055leeVA==$0


You can rename indexes and triggers as shown below:

JMEHTA > alter index "BIN$leIeukwWQL6zfEoR7XKk5g==$0" rename to US_STATES_PK ;

Index altered.

JMEHTA > alter trigger "BIN$43f91QgqSBe3gG055leeVA==$0" rename to US_STATES_BRI_SEQ ;

Trigger altered.

Managing Recycle Bin

Oracle provides various commands to manage recycle bin.

To Purge a single table from Recycle Bin:

PURGE TABLE TABLE_NAME

To empty current user's recycle bin and release all space associated with objects in the recycle bin:

PURGE RECYCLEBIN or PURGE USER_RECYCLEBIN

To Empty Recycle Bin at Database Level:

PURGE DBA_RECYCLEBIN
SYSDBA privilege is required to purge DBA_RECYCLEBIN

Bypassing Recycle Bin

Specify PURGE if you want to drop the table and release the space associated with it in a single step. If you specify PURGE, then the database does not place the table and its dependent objects into the recycle bin. Using this clause is equivalent to first dropping the table and then purging it from the recycle bin. This clause lets you save one step in the process.

DROP TABLE TABLE_NAME PURGE;

To disable Recycle Bin:

ALTER SESSION SET recyclebin = OFF; (Session Level)

ALTER SYSTEM SET recyclebin = OFF; (system level)

recyclebin=off (system level init.ora parameter)

When the recycle bin is disabled, dropped tables and their dependent objects are not placed in the recycle bin; they are just dropped, and you must use other means to recover them (such as recovering from backup).

To enable Recycle Bin

ALTER SESSION SET recyclebin = ON; (Session Level)

ALTER SYSTEM SET recyclebin = ON; (system level)

recyclebin=on (system level init.ora parameter)

When the recycle bin is enabled, dropped tables and their dependent objects are placed in the recycle bin. The recycle bin is enabled by default.

Space Pressures

Dropped objects continue to stay in the recycle bin until the owner of the dropped objects decides to permanently remove them using the PURGE command. Objects in the recycle bin will be automatically purged by the database if:
  • A user creates a new table or adds data that causes their quota to be exceeded.
  • The tablespace needs to extend its file size to accommodate create/insert operations.
  • The corresponding tablespace or user is dropped.
Misc.

Please note that individual index or trigger drops are not recoverable through the recycle bin: if you drop an index or a trigger directly, it is not placed in the recycle bin. But if you drop a table, all dependent objects such as indexes and triggers are placed in the recycle bin with it, and are recoverable.


Happy Recycling! Go Green!