Sunday, November 26, 2017

Eagle

It was in 1997 when I designed and implemented a tool to monitor Oracle databases. I called this tool Eagle. I also wrote an article describing Eagle for IOUG Select magazine in 1997.

Twenty years later, in 2017, I came across an article, Enhancing Database Security: Concepts and Tools for the DBA by Peter J. Magee of SQRIBE Technologies, that utilizes Eagle and extends it further.

Quite exciting to see an application of Eagle! More information on Eagle can be found in my blog as well. https://jmehtablog.blogspot.com/2009/08/do-it-yourself-implement-performance.html

Implement SQL*Net Encryption and Checksum

My article on SQL*Net Encryption and Checksum was published in the 2017 Winter edition of UKOUG Scene. 

-----

Implement SQL*Net Encryption and Checksum

Oracle SQL*Net encryption is included in the database license. At one point, it required an Oracle Advanced Security option license at an additional cost, but not anymore. I would strongly recommend that you take advantage of this feature to bolster the security of your database, if you haven't done so yet.

Without encryption, database communication is in clear text, potentially compromising the confidentiality of your data. Malicious users and/or hackers would be able to eavesdrop on the data communications using network sniffer tools such as Wireshark. Not only could data communications be at risk; DBA activities such as password changes would also be visible to these tools.

I believe the primary reasons for not implementing SQL*Net encryption are: (a) many DBAs are still not aware that the SQL*Net encryption feature is now available at no additional cost; (b) many DBAs and organizations do not see a need to implement SQL*Net encryption because their databases sit within the internal network and behind the network firewall; (c) SQL*Net encryption is not a mandate or organizational requirement; or (d) the fear of "what if this breaks my applications?"

Even though databases are behind the network firewall, it is important to implement SQL*Net encryption/checksum for two main reasons: (a) hackers may find a vulnerable system elsewhere on the network and establish a foothold from which to sniff the network traffic, including Oracle database traffic, and (b) one cannot discount insider threats from malicious/disgruntled users who can sniff the network traffic.

Although SQL*Net encryption implementation details are readily available on the web, this article attempts to demonstrate an approach that should help you implement this feature without any service disruptions.

Start with REQUESTED

My recommendation is to set sqlnet.ora encryption and checksum parameters to REQUESTED on the database server, as shown below:

SQLNET.CRYPTO_CHECKSUM_SERVER=requested
SQLNET.ENCRYPTION_SERVER=requested
SQLNET.CRYPTO_CHECKSUM_CLIENT=requested
SQLNET.ENCRYPTION_CLIENT=requested

Configuring and managing these parameters at the database server level, instead of at all client locations, provides a centralized approach to implement encryption and check-sum. It requests that all bi-directional database traffic be encrypted and check-summed.

Please note that Oracle does not implement encryption and checksum out of the box. The default value for these parameters is ACCEPTED, which triggers encryption and checksum only if specifically required or requested by the client. Setting these parameters to REQUESTED on the database server notifies the database clients that the database has requested encryption and checksum services. However, the clients may reject the database server's request through their sqlnet.ora settings, or may not be able to honor it due to incompatible client Oracle software. In this situation, the database server will proceed without encryption/checksum services and establish clear-text communications with the clients. The point is that the REQUESTED setting won't break your connections to the database.
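For illustration, a client that cannot or will not use these services might carry the following (hypothetical) settings in its own sqlnet.ora; the connection still succeeds, just in clear text:

```
# Client-side sqlnet.ora (hypothetical sketch): reject the server's
# REQUESTED encryption/checksum; the connection still succeeds,
# but the traffic is in clear text.
SQLNET.ENCRYPTION_CLIENT = rejected
SQLNET.CRYPTO_CHECKSUM_CLIENT = rejected
```

Valid values on either side are accepted, rejected, requested and required; the negotiated outcome depends on the combination of the client and server settings.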

The two _CLIENT parameters mentioned above apply to database connections originating from the database server itself, including database link connections to other databases. As an FYI, the SQLNET.CRYPTO_SEED parameter has been obsolete since Oracle9i.

I would suggest not specifying any encryption and checksum protocols at this point. Let Oracle database server negotiate with the client and choose the strongest protocols that are acceptable to both parties.

Run the following query to (a) identify clients who have (or have not) implemented the encryption and checksum services and (b) see which protocols have been negotiated by the server and the clients:

select * from v$session_connect_info order by sid;

If this query returns two rows with protocol names as shown below, then you are using encryption and checksum services. Please note that protocols may vary for some clients.

AES256 Encryption service adapter for 64-bit Windows: Version 12.1.0.2.0 - Production
SHA1 Crypto-checksumming service adapter for 64-bit Windows: Version 12.1.0.2.0 - Production

If you don’t see these two rows with protocol names, then use the SID and SERIAL# returned by this query to identify the clients that are not using these services and work with them to resolve the issues. Older versions of the SQL*Net client software are the most likely issue you will need to resolve.
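To make that easier, here is a sketch of such a query, joining v$session_connect_info to v$session; the banner text to match is an assumption based on the sample output above and may vary by platform and version:

```sql
-- Sessions with no "encryption service adapter" banner row,
-- i.e., clients that appear to be connecting in clear text.
SELECT s.sid, s.serial#, s.username, s.machine, s.program
  FROM v$session s
 WHERE s.type = 'USER'
   AND NOT EXISTS
       (SELECT 1
          FROM v$session_connect_info ci
         WHERE ci.sid = s.sid
           AND ci.serial# = s.serial#
           AND ci.network_service_banner LIKE '%encryption service adapter%')
 ORDER BY s.sid;
```

Run it against a test instance first and adjust the LIKE pattern to whatever banner your clients actually report.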

In addition, the above-mentioned query also returns the following rows for each client, which are for informational purposes only:

Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 12.1.0.2.0 - Production
Encryption service for 64-bit Windows: Version 12.1.0.2.0 - Production
Crypto-checksumming service for 64-bit Windows: Version 12.1.0.2.0 - Production

As you may have noticed, the REQUESTED setting, in essence, implements a “monitoring” mode. The database server requests that the clients support the encryption and checksum services, but doesn't require/force them to do so. If database clients are unable to fulfill this request, then the database server proceeds with establishing a database connection in clear text. Database connections are not rejected or denied, and hence nothing breaks!

Configure Encryption and Checksum Protocols

The next step is to configure encryption and checksum protocols. Your organization may have a mandate for strong encryption standards such as FIPS 140-2, or your objective may simply be to remove weaker protocols such as MD5 or RC4. Whatever the reason, review the results of the query against the v$session_connect_info view and compile a list of the protocols being used for encryption and checksum. If any of them are weaker than your organization's policy allows, identify the root cause, which may well be an older version of the SQL*Net client, and work with the users to upgrade the software before stronger protocols can be deployed.

You could turn on SQL*Net tracing, as shown below, to troubleshoot the issue:

TRACE_LEVEL_CLIENT=16
TRACE_DIRECTORY_CLIENT = c:\Oracle\temp
TRACE_FILE_CLIENT= SQLNet.trc
TRACE_TIMESTAMP_CLIENT = ON
TRACE_UNIQUE_CLIENT = ON

Once you have ensured that all the clients support the protocols you would like them to, adjust the values of the following parameters in sqlnet.ora on the database server. More than one protocol can be specified, as shown below.

SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)

SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA512, SHA384, SHA256, SHA1)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA512, SHA384, SHA256, SHA1)

If you have a combination of Oracle10g, Oracle11g and Oracle 12cR1 database servers and clients, then the above protocols should work for you. Of course, you need to test the configuration in your environments (development, test and/or UAT and finally the production).

Oracle10g, Oracle11g and Oracle 12cR1 support the following encryption protocols:
- Advanced Encryption Standard (AES): AES256, AES192 and AES128
- Triple DES: 3DES168, 3DES112
- DES: DES and DES40
- RC4: RC4_256, RC4_128, RC4_56, RC4_40

Oracle10g, Oracle11g and Oracle 12cR1 support the following checksum protocols:
- MD5
- SHA1

Oracle 12cR1 introduces the following three additional checksum protocols:
- SHA512
- SHA384
- SHA256

Switch to REQUIRED

Once you are satisfied that all clients support the encryption and checksum services with the approved protocols, switch the values of these parameters to REQUIRED as follows:

SQLNET.CRYPTO_CHECKSUM_SERVER=required
SQLNET.ENCRYPTION_SERVER=required
SQLNET.CRYPTO_CHECKSUM_CLIENT=required
SQLNET.ENCRYPTION_CLIENT=required

Typical Errors

You will get “ORA-12649: Unknown encryption or data integrity algorithm” when the encryption or checksum protocols you specified in sqlnet.ora are unknown or not supported. For example, you have specified SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA512), but SHA512 is not supported by the Oracle11g client.

Oracle error “ORA-12650: No common encryption or data integrity algorithm” will be reported when there are no common encryption protocols between the client and the server. For example, SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(MD5) on the client side and SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA1) on the server.

Oracle error “ORA-12660: Encryption or crypto-checksumming parameters incompatible” will be reported when the server requires encryption or checksum but the client rejects it, or vice versa.


You should monitor the Oracle alert logs and SQL*Net logs on the database server to see if there are any issues. You may see the above-mentioned ORA- errors. In addition, the alert log also shows the error “opiodr aborting process unknown ospid (11992) as a result of ORA-609”.

Tuesday, September 12, 2017

Implement Oracle Database Firewall using Valid Node Checking

I recently published an article on Information Security Buzz that explains how to implement a database firewall using Oracle Database Listener’s Valid Node Checking (VNC) feature to protect your Oracle databases from malicious attacks from unauthorized locations. The article can be found at http://www.informationsecuritybuzz.com/articles/implement-oracle-database-firewall-using-valid-node-checking/

Below is the text for the same article:

Introduction

One way to protect your Oracle databases from malicious attacks from unauthorized locations is the Oracle Database Listener's Valid Node Checking (VNC) feature. Through this feature, access to the database can be restricted based on the IP address (or host name) of the client machine attempting to connect to the database. The Oracle database listener validates the IP address (or host name) of the incoming client machine against the “allow” or “deny” rules specified in the sqlnet.ora file before opening up a connection to the database.
Although this is not a new feature and has been available since Oracle8, it has not garnered traction within the Oracle database community mainly due to the following reasons:
  1.  Many organizations do not see a need for additional security controls such as database Valid Node Checking. Since today's enterprise databases are behind network firewalls, they reckon that the security risk to the databases is low. Although this may be true in some cases, we recommend that the defense-in-depth methodology be adopted to protect today's mission-critical databases. Without database-based Valid Node Checking, anyone with network connectivity could attempt to connect to the database. With this feature enabled, only pre-authorized IP addresses would be permitted to connect to the database, reducing the scope of internal threats as well as the attack surfaces/vectors from which malicious attacks can be launched.
  2.  Many organizations assume that they have a large number of IP addresses that should be allowed to connect to the database, rendering the VNC feature difficult to implement and cumbersome to manage. This is true for the client-server computing model, which has become outdated. Instead, most of today's databases are accessed through a small but finite number of middle-tier application servers. Oracle DBAs need to specify the IP addresses of these application servers as allowed nodes in tcp.invited_nodes. In addition, you will need to enter the IP addresses of administrative and operational team members' machines so that they can connect directly to the database through SQL*Plus, TOAD, SQL Developer or other tools.
  3.  There is a widely-held notion that each and every IP address must be explicitly specified and that wild cards are not allowed, making IP address management quite complex. That is no longer true. Beginning with Oracle11g R2, allow and deny rules, which are specified via the tcp.invited_nodes and tcp.excluded_nodes parameters, can include wild cards, greatly simplifying implementation of this feature. In addition, CIDR notation is allowed as well.

Implementation

The Valid Node Checking feature is enabled by specifying the tcp.validnode_checking parameter in sqlnet.ora. Once enabled, you can configure allow rules through tcp.invited_nodes or deny rules through tcp.excluded_nodes in the sqlnet.ora file, as shown below:
tcp.validnode_checking=yes
tcp.invited_nodes=(localhost,192.168.10.101,192.168.10.102)

With the above configuration, the listener will accept connections from localhost, 192.168.10.101 and 192.168.10.102. Please note that if you don’t include “localhost” or the IP addresses of the local host, the Oracle listener will not start and you won’t be able to connect to the database at all. All other nodes will get the following error:
C:\Users\jmehta>sqlplus scott/tiger@testdb

SQL*Plus: Release 11.2.0.4.0 Production on Mon May 16 18:15:34 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR: ORA-12537: TNS:connection closed

 The listener log will show the following error:
12-MAY-2016 21:32:18 * service_update * CDB2 * 0
Incoming connection from 10.0.2.2 rejected
12-MAY-2016 21:32:38 * 12546
TNS-12546: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00516: Permission denied

You can also specify tcp.excluded_nodes to deny access to specific nodes, but there is usually no reason to do so: if you only specify invited nodes with tcp.invited_nodes, all other nodes are excluded anyway. If both are present, then tcp.invited_nodes takes precedence over the tcp.excluded_nodes parameter.
These parameters can use wildcards for IPv4 addresses and CIDR notation for IPv4 and IPv6 addresses, as shown below:
tcp.invited_nodes=(192.168.10.*,192.162.*.*)
tcp.invited_nodes=(sales,hr,192.168.*,2001:DB8:200C:433B/32)
tcp.invited_nodes=(192.168.10.1/24,192.168.20.1/24)

Obtain IP Addresses through Database Auditing

Although VNC implementation appears to be quite simple, it may not be in real life. The challenge is to obtain and identify all IP addresses that should be allowed to connect to the database. If you miss an IP or two, then those users (including application servers, remote databases, devices, etc.) won't be able to connect to the database, causing unwarranted issues and/or outages.
Instead of requesting all the users to supply their IP addresses, you should enable auditing on logon and then query the Oracle audit trail to obtain a complete list of IP addresses that have ever connected to the database. Make sure that the audit trail covers a sufficient period of time so that all users, including those who connect infrequently, are captured.
As shown below, enable logon auditing in pre-12c databases, or in 12c with conventional-mode auditing. Wait long enough for the audit trail to be populated with connection information from all users, and then execute the following query.
audit create session ;

select distinct substr(comment_text,instr(comment_text,'HOST=')+5,
instr(comment_text,'PORT=')-instr(comment_text,'HOST=')-7)
from dba_audit_trail
where action = 100
order by 1;

If you are running in 12c unified auditing mode, then create an audit policy and query the UNIFIED_AUDIT_TRAIL view to get a complete list of all IP addresses over the given period of time:
create audit policy ora_logon
actions logon;

audit policy ora_logon;

select distinct substr(authentication_type,instr(authentication_type,'HOST=')+5,
instr(authentication_type,'PORT=')-instr(authentication_type,'HOST=')-7)
from unified_audit_trail
order by 1;

Ensure that IP Addresses are Correctly Specified

The best approach to ensure that you have specified tcp.invited_nodes correctly, and that it works as intended, is listener tracing at the ADMIN or DEBUG level. Enable it by setting TRACE_LEVEL_<listener_name> = ADMIN in listener.ora; the resulting trace of the listener's actions lets you see the IP addresses being loaded into the Valid Node Checking table.
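As a sketch, assuming a listener named LISTENER and a Unix-style trace directory (both hypothetical), the listener.ora entries would look like this:

```
# listener.ora (sketch): ADMIN-level tracing for a listener named
# LISTENER; adjust the listener name and directory to your setup.
TRACE_LEVEL_LISTENER = ADMIN
TRACE_DIRECTORY_LISTENER = /u01/app/oracle/network/trace
TRACE_FILE_LISTENER = listener.trc
TRACE_TIMESTAMP_LISTENER = ON
```

Remember to turn tracing back off once you have verified the Valid Node Checking table, as ADMIN-level traces grow quickly on a busy listener.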

Gateway Server for Large User Base

If you have a very large number of database users requiring direct connections to the database, then the IP address management task could be complex. An ideal solution would be to configure a gateway server, such as Citrix, with the requisite software (SQL*Plus, TOAD, SQL Developer and other tools) that would be used to access the database. You add the IP address of the Citrix server to tcp.invited_nodes to allow access from it, and all users connect to the Citrix server first and from there to the database. With this approach, security is further bolstered because the only path to the database is from the Citrix server. A gateway server not only resolves the IP address management issue, but can also help implement additional security controls that prevent data dumps, printing, copying, etc.

Special Considerations

It's important to note that the listener must be stopped and started (not reloaded through lsnrctl reload) for valid node checking to become active. The tcp.invited_nodes and tcp.excluded_nodes parameters are read only during listener startup, which means that IP addresses and hostnames cannot be added or removed dynamically. Although a listener restart (stop and start) usually takes no more than a few seconds, your database is essentially down for incoming connections for that short duration, so consider the impact on your high-availability requirements. Please note that the listener can be restarted without impacting existing database connections.
All hostnames specified in tcp.invited_nodes or tcp.excluded_nodes must be resolvable. Otherwise, the Oracle listener fails to start, aborting with the message "TNS-00584: Valid node checking configuration error". With tracing enabled (TRACE_LEVEL_LISTENER=ADMIN), the listener trace records the error message "ntvllt:Problem in Loading tcp.invited_nodes".
The good news, however, is that unresolvable IP addresses don't cause any issues during listener startup. If listener tracing is enabled, the trace file shows that all IP addresses and hostnames are processed by the listener and populated into the Valid Node table appropriately.
An Oracle support note says that all IP addresses or host names must be specified on one line in the sqlnet.ora file, but I was able to specify multiple IP addresses on multiple lines. In fact, I specified only one IP address on each line, and it worked fine in Oracle 12c (12.1.0.2). I do know that it didn't work in previous releases, so test your configuration against the Oracle versions/releases that you are running.
Oracle does seem to perform basic validation on the hostnames and IP addresses specified in the tcp.invited_nodes and tcp.excluded_nodes parameters, but the documentation doesn't elaborate on the level of validation performed, causing misunderstanding, confusion and unnecessary issues. Oracle does seem to ignore duplicated IP addresses or host names. Two consecutive commas cause the listener to ignore the rest of the entries. Some invalid IP addresses are accepted, but some are not. To make matters worse, the listener quite often simply ignores incorrect syntax without warning, which results in unexpected behaviors and outcomes.
Please note that VNC feature may not protect you against advanced attacks, involving IP address and hostname spoofing. Other options available to implement capabilities similar to VNC are server-based firewall and Oracle Connection Manager. You should consider pros and cons of each approach and choose an option that best meets your needs.

Conclusion

Oracle’s VNC feature is akin to a basic firewall. With support for wild cards and access through middle-tier application servers, IP address management might not be as complex as initially thought. Although there are a few special considerations you need to be aware of, as pointed out in this article, VNC is still a powerful security control that should be considered as part of a defense-in-depth methodology and your overall security posture.


Joe's Ride

Rupa and I got a chance to take part in an event called Joe's Ride and Stride. It was on May 20th 2017 at Oakdale Emory Church in Olney MD. This event was organized to remember and celebrate the lives of those who passed away due to cancer, and to support those who are currently fighting cancer. The event included 10K, 30K and 45K bike rides. It also had a 5K walk.

I took part in the 30K bike ride. It was a beautiful ride. It started at 7:00 AM at Oakdale Emory Church in Olney MD, passed through scenic and lush green surrounding areas and finished back at the church. An ideal temperature of 63 degrees made the ride a pleasure. There were some winding roads and some small climbs, but it was mostly flat. My smartphone app showed an elevation gain of 1075 feet over the 30K distance, making it an easy ride. I am sure I will take part in the 45K next year!

But it was the organization of the event that made the ride even more pleasurable. Several hundred participants took part in the bike rides as well as the walks. There were volunteers throughout the bike route to guide and cheer the riders: some stood at intersections to make sure we didn’t take a wrong turn, some manned the rest stops with water and fruit to keep our energy levels up, and of course some greeted, cheered and welcomed us at the finish line. Lots of food and music at the church. All credit goes to Russ and Betsy Sanford, and Oakdale Emory Church, for organizing this event for such a great and worthy cause.

Cancer detection, prevention and cure is a cause very close to my heart. Like most of you, I have personally seen close friends and family members succumb to this dreadful disease, but not without a courageous and brave fight. They are the true fighters! The fight against cancer is full of emotions; it is like a roller-coaster ride. The initial shock of a cancer diagnosis makes your spine shiver and your heart stop. Then there is the treatment regimen that goes through cycles of chemo and maybe surgery. One day there is hope and the next day it may dwindle. I can't even fathom what they go through during this fight. I salute those who are fighting this battle, and those who are supporting the fighters.

Let's hope cure for cancer is not too far. In the meantime, do support this fight and the fighters.

For more information on Joe's ride, visit http://joesanford.com/

Thursday, March 10, 2016

Oracle TNS Listener Poison Attack

The Oracle TNS Poison Attack vulnerability, published as CVE-2012-1675 with a CVSS score of 7.5, allows a malicious hacker to take complete control of the database server without authentication.

As explained in the article Oracle TNS Listener Poison Attack that I wrote for the Information Security Buzz, the fix for this vulnerability depends upon the Oracle version that you are running. 

The default listener configuration in 11.2.0.4 is vulnerable. You have to explicitly set VALID_NODE_CHECKING_REGISTRATION_<listener_name> to LOCAL, ON or 1 in listener.ora to address this issue.

What makes this vulnerability still relevant is that many organizations are still running 11.2.0.4 and haven’t addressed it. As explained in this article, the fix is quite easy.

Here is the text of the article which was published on the Information Security Buzz:

A flaw in the Oracle database listener, if not mitigated, could allow an attacker to take complete control of an Oracle database through an attack known as the TNS Poison Attack. This vulnerability is remotely exploitable without authentication credentials. This classic man-in-the-middle (MITM) vulnerability was published as security alert CVE-2012-1675 and received a CVSS base score of 7.5. It impacts the confidentiality, integrity and availability of the database. Joxean Koret discovered this vulnerability in 2008 and publicly disclosed it in 2012.
The TNS Poison Attack vulnerability exploits the Oracle listener’s database service registration functionality. Oracle database users connect to database services through the Oracle TNS listener, which acts as a traffic cop. A malicious attacker residing on the same network as the database registers a malicious service with the listener using the same service name as a legitimate database service. No credentials are required to register a database service with the listener. An attacker can use Oracle database software or other easily available tools to register a malicious database service.
After the malicious service is registered with the same name as the legitimate service, the Oracle listener has two services to choose from: a legitimate service and a malicious one. With two database services available, the listener switches to load-balancing traffic-cop mode, directing users alternately to the legitimate service and the malicious service. At least 50% of user sessions are directed to the malicious service. Database user sessions that are now communicating through the malicious service can be hijacked by the attacker. The attacker is in the middle; all communication from the users to the database now passes through them. Attack post established. The attacker has full purview of what users are communicating with the database. At a minimum, the attacker can view and steal the data. Additional SQL commands may be injected to broaden the scope or carry out additional attacks. If a database user communicating with the database happens to be a privileged user with the DBA role, then the attacker has complete control of the database. Database compromised. Mission accomplished.
The TNS Poison Attack vulnerability is mitigated through the Valid Node Checking Registration (VNCR) setting, which permits service registration from only known nodes or IPs. Specific mitigation steps depend on the version of the database that you are running, as shown below:
(1)    Oracle Database releases 12.1 or above: If you are running Oracle database 12.1 or above, then you don’t need to read this article further unless you are just curious. The default listener configuration in Oracle 12c protects you against this vulnerability. Although you don’t need to set the VALID_NODE_CHECKING_REGISTRATION_<listener_name> parameter to LOCAL in listener.ora, I would suggest that you explicitly do so just to make sure, as shown below:

LISTENER_DB =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.100)(PORT=1521))
    )
  )
VALID_NODE_CHECKING_REGISTRATION_LISTENER_DB=LOCAL

This parameter ensures that only databases that are on the same server as the listener are permitted to register services with it. No remote registration of services is permitted. If a malicious attacker attempts to register a service with the listener from a remote server, you will see the following error message in the listener log:

Listener(VNCR option 1) rejected Registration request from destination 192.168.200.131
12-NOV-2015 17:35:42 * service_register_NSGR * 1182

Oracle's clustering solution, Oracle RAC, requires remote registration of services. In order to protect Oracle RAC from the TNS Poison Attack, you also need to set REGISTRATION_INVITED_NODES_<listener name> to specify the IP addresses of the nodes from which remote registration is permitted.
(2)    Oracle Database release 11.2.0.4: If you are running Oracle database 11g R2 11.2.0.4, then you must mitigate this risk through listener configuration. As illustrated above, you need to set VALID_NODE_CHECKING_REGISTRATION_<listener_name> to LOCAL. Alternate values for this parameter are ON or 1, which accomplish the same objective. The default value is OFF, leaving the door open to an attack. As mentioned above, if you are running RAC, then you also need to set REGISTRATION_INVITED_NODES_<listener name> to allow instance registration from trusted/valid nodes.
(3)    Oracle Database Release 11.2.0.3 or older releases: Before I describe the mitigation for older releases, let me mention that you should not be running Oracle databases 11.2.0.3 or older. Oracle has already de-supported older releases. No security patches are available for older database releases. You should upgrade as soon as possible.
Oracle, however, does provide a workaround for older releases through Class of Secure Transport (COST) parameters. Parameters such as SECURE_PROTOCOL_<listener_name> and SECURE_REGISTER_<listener_name> can be configured to control registration of services from valid nodes only. Please refer to the Oracle documentation for more information.
Please note that COST parameters can also be used with Oracle database releases 11.2.0.4 or newer to protect against the TNS Poison Attack, but the procedure is more complex and requires additional configuration.
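Putting the 11.2.0.4 mitigation together, a listener.ora sketch for a RAC environment might look like this (the listener name and the invited-node IP addresses are hypothetical):

```
# listener.ora (sketch) for 11.2.0.4: allow local service registration
# only, plus remote registration from two trusted RAC nodes.
# Listener name and IP addresses below are hypothetical.
VALID_NODE_CHECKING_REGISTRATION_LISTENER = LOCAL
REGISTRATION_INVITED_NODES_LISTENER = (192.168.100.101, 192.168.100.102)
```

For a single-instance database, the VALID_NODE_CHECKING_REGISTRATION line alone is sufficient.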

What makes this vulnerability still relevant, even after its full disclosure in 2012, is that many organizations running various flavors of Oracle database 11g R2, such as 11.2.0.3 and 11.2.0.4, haven’t yet mitigated this flaw. If you haven’t, you should do so as soon as possible.

Monday, December 21, 2015

Oracle, please bundle OPatch with PSUs/SPUs!

OPatch is an Oracle-supplied utility that DBAs use to apply patches (PSUs/SPUs/bundles, etc.) to Oracle installations. It is the only supported and documented way to apply Oracle PSUs and SPUs, and it is of course recommended and required as well.

Before applying Oracle patches, we need to verify that the OPatch version recommended in the PSU/SPU README is already on the server.

If it is not, then we need to download the recommended version of OPatch. (By the way, we don’t attempt to use a non-recommended OPatch!) The fastest way to download OPatch that I know of is to get it directly from the URL https://updates.oracle.com/download/6880880.html. You still need to choose the Oracle software version and platform. After downloading it to your PC, you need to copy it to the server through a VPN tunnel using copy or ftp utilities, rename the old OPatch directory under the Oracle Home, move the new one there and then unzip it. Quite a few tasks! And of course, it takes time!

So my question, or rather my request, to Oracle is to bundle OPatch with the Oracle patches (PSUs/SPUs, etc.), eliminating all the additional steps mentioned above. OPatch would be moved to the server along with the PSUs/SPUs, with no need to move it separately. We could easily save 15 minutes or so of our time. Why not? It is the first step towards automation.


DBAs do spend a considerable amount of time applying quarterly patches. The time saved could be used elsewhere to improve the health of the databases! Not possible now. Hopefully, in the near future!

Friday, March 13, 2015

Gold from the ASHes!

Oracle's ASH, or Active Session History, statistics are simply a gold mine. ASH's power stems from its capability to correlate performance issues to a specific database user or SQL statement very quickly. Active Session History, as the name implies, stores session states for each and every active database session. A session is considered active if it is on CPU or waiting for a non-idle event (or OS call).

Since Oracle samples active sessions every second, each sample is a statistical approximation of one second. If an ASH record indicates that a session is on CPU, then you can say that the session has consumed one second of CPU. By the same token, if an ASH record indicates that a session is waiting for a non-idle event, then you can say that the session has waited one second for that event.

For example, if there are 100 ASH records showing various sessions on CPU over a 30-minute interval, then you can say DB CPU Time is 100 seconds. During the same interval, if there are 200 ASH records showing various wait events (OS calls), then you can say that the database has waited for 200 seconds. In this case, DB Time would be 100 + 200 = 300 seconds.
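The arithmetic above can be sketched in a few lines of plain Python, with made-up sample counts standing in for ASH rows:

```python
# Each ASH row approximates one second of a session's time.
# Made-up counts for a 30-minute window: 100 rows on CPU, 200 rows waiting.
on_cpu_samples = 100
wait_samples = 200

db_cpu_time = on_cpu_samples       # seconds of DB CPU time
wait_time = wait_samples           # seconds of non-idle wait time
db_time = db_cpu_time + wait_time  # DB Time = CPU time + non-idle wait time

print(db_cpu_time, wait_time, db_time)  # 100 200 300
```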

As we know, DB Time and DB CPU Time can also be derived from an AWR report, which naturally raises a question: do the DB Time and DB CPU Time statistics reported by AWR match those derived from ASH? I would contend that a 100% match between the ASH and AWR statistics is not important. What is more important is that ASH statistics help you identify a problem very quickly, which AWR may not be able to do. Anyway, let me first attempt to answer this question and then explain why it is futile to compare the two sets of statistics.

Back to the question: ASH and AWR statistics may or may not match. AWR statistics represent the difference between two snapshots, while ASH statistics are samples of all active sessions collected every second. There may be sampling errors in the ASH data.

ASH experts or statisticians may be able to elaborate on the ASH sampling algorithm and answer this question more rigorously. In my small world, I have observed ASH statistics to be off by 10 to 20%. Let me show you how to calculate the differences in DB Time and DB CPU Time as reported by AWR and ASH.

  -- Hourly ASH vs AWR comparison of DB Time and DB CPU Time (values in minutes)
  SELECT TO_CHAR (ASHTime, 'MM/DD/YY:hh24') ASHTime,
         ASHDBTime,
         ASHCPUTime,
         AWRDBTime,
         AWRCPUTime,
         ROUND ( (ASHDBTime - AWRDBTime) / ASHDBTime * 100) DBTimeDiff,
         ROUND ( (ASHCPUTime - AWRCPUTime) / ASHCPUTime * 100) CPUTimeDiff
   FROM (  SELECT TRUNC (SAMPLE_TIME, 'HH24') ASHTime,
                   ROUND (COUNT (*) / 60) ASHDBTime,
                   ROUND (SUM (DECODE (session_state, 'ON CPU', 1, 0)) / 60)
                      ASHCPUTime
             FROM v$active_session_history ash
             WHERE     sample_time >= TO_DATE ('&&s_begin_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND sample_time <= TO_DATE ('&&s_end_time', 'MM/DD/YYYY:HH24:MI:SS')
          GROUP BY TRUNC (SAMPLE_TIME, 'HH24')) ASH,
         (  SELECT TRUNC (BEGIN_INTERVAL_TIME, 'HH24') AWRTime,
                   ROUND (SUM (EE1.VALUE - EE2.VALUE) / 60000000) AWRCPUTime
             FROM DBA_HIST_SNAPSHOT SS,
                   DBA_HIST_SYS_TIME_MODEL EE1,
                   DBA_HIST_SYS_TIME_MODEL EE2
             WHERE     EE1.SNAP_ID = EE2.SNAP_ID + 1
                   AND EE1.INSTANCE_NUMBER = EE2.INSTANCE_NUMBER
                   AND EE1.STAT_NAME = EE2.STAT_NAME
                   AND EE1.SNAP_ID = SS.SNAP_ID
                   AND EE1.INSTANCE_NUMBER = SS.INSTANCE_NUMBER
                   AND SS.BEGIN_INTERVAL_TIME >=
                          TO_DATE ('&&s_begin_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND SS.BEGIN_INTERVAL_TIME <=
                          TO_DATE ('&&s_end_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND EE1.STAT_NAME IN ('DB CPU')
                   AND EE2.STAT_NAME IN ('DB CPU')
             GROUP BY TRUNC (BEGIN_INTERVAL_TIME, 'HH24'), EE1.STAT_NAME) AWRCPUTime,
         (  SELECT TRUNC (BEGIN_INTERVAL_TIME, 'HH24') AWRTime,
                   ROUND (SUM (EE1.VALUE - EE2.VALUE) / 60000000) AWRDBTime
             FROM DBA_HIST_SNAPSHOT SS,
                   DBA_HIST_SYS_TIME_MODEL EE1,
                   DBA_HIST_SYS_TIME_MODEL EE2
             WHERE     EE1.SNAP_ID = EE2.SNAP_ID + 1
                   AND EE1.INSTANCE_NUMBER = EE2.INSTANCE_NUMBER
                   AND EE1.STAT_NAME = EE2.STAT_NAME
                   AND EE1.SNAP_ID = SS.SNAP_ID
                   AND EE1.INSTANCE_NUMBER = SS.INSTANCE_NUMBER
                   AND SS.BEGIN_INTERVAL_TIME >=
                          TO_DATE ('&&s_begin_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND SS.BEGIN_INTERVAL_TIME <=
                          TO_DATE ('&&s_end_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND EE1.STAT_NAME IN ('DB time')
                   AND EE2.STAT_NAME IN ('DB time')
             GROUP BY TRUNC (BEGIN_INTERVAL_TIME, 'HH24'), EE1.STAT_NAME) AWRDBTime
   WHERE ASH.ASHTIME = AWRCPUTime.AWRTime 
   AND ASH.ASHTIME = AWRDBTime.AWRTime
   ORDER BY 1;

ASHTIME      ASHDBTIME ASHCPUTIME  AWRDBTIME AWRCPUTIME DBTIMEDIFF CPUTIMEDIFF
----------- ---------- ---------- ---------- ---------- ---------- -----------
03/12/15:22         51         48         44         42         14          13
03/12/15:23         51         49         45         43         12          12
03/13/15:00         25         22         20         18         20          18
03/13/15:01         50         48         45         42         10          13
03/13/15:02         47         45         41         40         13          11
03/13/15:03         57         55         51         50         11           9
03/13/15:04         46         45         41         40         11          11
03/13/15:05         44         42         39         37         11          12
03/13/15:06         43         41         37         36         14          12
03/13/15:07         43         42         37         36         14          14
03/13/15:08         42         41         38         36         10          12
03/13/15:09         42         41         38         36         10          12
03/13/15:10         43         41         38         37         12          10
03/13/15:11         44         42         38         36         14          14
03/13/15:12         43         41         38         37         12          10
03/13/15:13         43         42         38         36         12          14
03/13/15:14         46         42         39         37         15          12
03/13/15:15         43         42         37         36         14          14

As you can see from the report above, the DB Time and DB CPU Time statistics diverge by 10 to 20%, but that's OK. You are more interested in knowing that there was a problem than in having 100% accurate values in the ASH stats. Let me explain why.

Since samples are taken every second, ASH may miss an event or activity that started and completed between two consecutive samples. As a performance analyst, I don't think you are interested in an event or activity that took less than a second. Any wait event, SQL, or long-running query lasting more than a second will be captured, so ASH does capture the events and activities of significance. It may not capture activities that don't span two consecutive samples, but I don't think that will impede your ability to diagnose database issues.

As mentioned above, SQLs that complete within a second between two samples are not captured individually, but if they run frequently enough, the ASH sampling algorithm will catch some of their executions and you will know about them.
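To see why frequent sub-second SQLs still show up, here is a small simulation of my own (not Oracle code): a 0.2-second query never spans two one-second samples individually, but if it runs 3600 times an hour, a sample tick lands inside roughly 20% of the executions - in proportion to the query's true share of DB time.

```python
import random

random.seed(7)
duration = 0.2   # each execution lasts 0.2 s, well under the 1 s sample interval
runs = 3600      # executed frequently: 3600 times over an hour

# An execution is sampled if a whole-second tick falls inside its lifetime.
hits = 0
for _ in range(runs):
    start = random.uniform(0.0, 3600.0)      # random start time within the hour
    if int(start + duration) > int(start):   # a sample tick landed inside
        hits += 1

# Expect roughly duration * runs = ~720 sampled executions (~20%).
print(hits)
```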

If we can predict presidential elections through polls based on statistical sampling, then ASH can surely tell you about the performance issues your database may be facing. ASH is a gold mine of diagnostic information. Just dig it, but beware: ASH comes with a cost - it requires the Diagnostics Pack license.


Tuesday, March 3, 2015

Performance impact of sub-query factoring

Oracle introduced the WITH clause, or sub-query factoring clause, in Oracle 9.2. The WITH clause is processed as an inline view or resolved as a temporary table. The WITH clause certainly improves the readability of the code, but it may not yield the performance benefits one might have expected in all cases. Each case should be examined specifically to assess the performance benefit. I will discuss a SQL statement that has a UNION.

Original SQL Statement without sub-query factoring

SELECT D1.ID1, D2.ID2
FROM MAP_HISTORY  MH, DEPT D1, DEPT D2
WHERE     UPD_TABLE = 'ROSTER'
       AND D1.ID = MH.NEW_VALUE
       AND D2.ID = MH.OLD_VALUE
UNION
SELECT D1.ID1, D1.ID2
FROM MAP_HISTORY  MH, DEPT D1, DEPT D2
WHERE     UPD_TABLE = 'ROSTER'
       AND D1.ID = MH.NEW_VALUE
       AND D2.ID = MH.OLD_VALUE;

As you can see from the SQL above, the MAP_HISTORY table is accessed in each UNION branch. The EXPLAIN PLAN of this statement shows that the MAP_HISTORY table was accessed twice. The cost of each index access was 13, so the total cost for the two accesses was 26.

The SQL*Plus AUTOTRACE statistics show that this query accessed 3745 blocks, as shown below.

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       3745  consistent gets
          0  physical reads
          0  redo size
      18535  bytes sent via SQL*Net to client
       1057  bytes received via SQL*Net from client
         65  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        959  rows processed


Modified SQL with sub-query factoring

WITH MH as
  ( SELECT NEW_VALUE, OLD_VALUE
    FROM MAP_HISTORY
    WHERE UPD_TABLE = 'ROSTER' )
SELECT D1.ID1, D2.ID2
  FROM  MH, DEPT D1, DEPT D2
 WHERE D1.ID = MH.NEW_VALUE
       AND D2.ID = MH.OLD_VALUE
UNION
SELECT D1.ID1, D1.ID2
  FROM MH, DEPT D1, DEPT D2
 WHERE D1.ID = MH.NEW_VALUE
       AND D2.ID = MH.OLD_VALUE;


The EXPLAIN PLAN shows that Oracle created a temporary table. The cost to create the temporary table was 13. The temporary table was accessed twice, each time with a cost of 4, so the total cost for the new structure was 13 + 4 + 4 = 21, which is less than the 26 of the original query. However, the execution plan changed and the overall cost increased to 739. (Note: cost may not translate into the actual time taken by the query; that is a completely separate topic for some other time.)

As you can see from the AUTOTRACE statistics, the number of consistent gets is 3768, but there are also 10 physical reads, as well as physical writes to write out the temporary table.

Statistics
----------------------------------------------------------
         65  recursive calls
         20  db block gets
       3768  consistent gets
         10  physical reads
       1156  redo size
      18535  bytes sent via SQL*Net to client
       1057  bytes received via SQL*Net from client
         65  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        959  rows processed

What factors impact the performance of sub-query factoring?
  • A change in execution plan, which may be slower
  • The cost to write data out to the temporary tablespace
  • The cost to read data back from the temporary table
  • No indexes on the temporary table, so Oracle may choose a HASH join in the SQL with the WITH clause
  • The frequency of temporary table access
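When the optimizer's choice hurts, the behavior can be steered: the MATERIALIZE and INLINE hints (undocumented but widely used) respectively force or suppress the temporary-table transformation. A sketch against the example tables above - compare the two execution plans before settling on either form:

```sql
-- Force the factored sub-query to be written to a temporary table:
WITH MH AS
  ( SELECT /*+ MATERIALIZE */ NEW_VALUE, OLD_VALUE
    FROM MAP_HISTORY
    WHERE UPD_TABLE = 'ROSTER' )
SELECT D1.ID1, D2.ID2
  FROM MH, DEPT D1, DEPT D2
 WHERE D1.ID = MH.NEW_VALUE
   AND D2.ID = MH.OLD_VALUE;

-- Or replace MATERIALIZE with /*+ INLINE */ to merge it back as an inline view.
```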