Monday, October 15, 2018

CIS Oracle12c Security Benchmark v2.1

The Center for Internet Security (CIS) has just released version 2.1 of the Oracle 12c Database Security Benchmark. As an editor for the CIS Oracle 12c security benchmarks, I made significant contributions to this release.

This new version includes additional security controls as well as clarifications and corrections to the existing controls. The CIS Oracle 12c benchmark provides a checklist of security configuration guidelines/recommendations to safeguard Oracle databases. These recommendations are security best practices that should be implemented to protect and secure Oracle databases against cyber threats.

The CIS Oracle 12c benchmarks, like the 100+ other benchmarks published by CIS, are unique for the following reasons:

  • CIS benchmarks are consensus-based. They are developed by security professionals world-wide.
  • CIS benchmarks are completely free. They are available in PDF, Excel and Word formats. CIS benchmarks can be downloaded from https://learn.cisecurity.org/benchmarks
  • CIS benchmarks are universally accepted by government, public corporations, private businesses and academic institutions.
  • CIS Benchmarks are available for 100+ platforms and technologies, including Desktops, Browsers, Networking Devices, Databases, Operating Systems and Virtualization platforms.

Sunday, April 1, 2018

Securely Grant SYS objects to Database Users


My article was published in UKOUG's Scene magazine. There are lots of good articles by experts in the magazine.

Below is the text of the article:

As a DBA, you may be asked to grant SELECT and/or EXECUTE privileges on Oracle data dictionary objects, which are owned by SYS. For example, a security auditor needs access to the data dictionary objects to perform security compliance audits; a junior DBA needs access to learn more about the database environment; an application DBA needs access to DBA_ views; a performance analyst who is helping your organization optimize your applications needs access to the performance views and tools; third-party tools that your organization has procured require access to the data dictionary; etc.

As a DBA, what are your options? What roles and privileges should you grant? Should you grant the SELECT_CATALOG_ROLE role or the SELECT ANY DICTIONARY privilege? You are concerned about the security implications of granting access to SYS objects. This article explores the various options you have at your disposal.

SELECT_CATALOG_ROLE Role

SELECT_CATALOG_ROLE is the first option that may be considered. This is an Oracle-supplied, pre-defined role. The primary purpose of this role is to grant SELECT privileges on the Oracle data dictionary (catalog) views, which include the DBA_, GV_$ and V_$ views as well as the CDB_ views (12c only). Please note that the underlying SYS tables, such as TAB$, OBJ$, etc., are not granted through this role. X$ views are not granted either. In addition, grantees of this role won't be able to create stored PL/SQL objects such as procedures, packages or views on the underlying data dictionary objects granted through this role; you will need to grant direct privileges on the underlying objects for that purpose.

This pre-defined role is quite handy when a user needs SELECT access on catalog views; however, you should carefully consider the security implications before granting this role.

  • The primary security consideration that you need to weigh is the metadata about your database that this role could reveal. As mentioned above, this role grants SELECT on most DBA_ views, which would reveal pretty much all the metadata about your database. Many, if not most, of the views granted through this role are sensitive. For example, DBA_USERS reveals information about your database users. The good news is that DBA_USERS no longer shows password hashes in 11g/12c, but it did in 10g. DBA_SOURCE and DBA_TRIGGERS reveal your PL/SQL code. You would be surprised what these two views may reveal, including business logic, security frameworks, encryption/decryption keys, packages susceptible to injection vulnerabilities, etc. The DBA_AUDIT_ views reveal your database audit logs. 12c unified auditing views are not included in this role, but are covered by the unified-audit-specific roles (AUDIT_ADMIN and AUDIT_VIEWER). There are other catalog views as well that could potentially reveal sensitive information.
  • The second security consideration is the number of objects that this role grants. The number of objects granted through this role in my 12.2.0.1 database is 3,625. It grants 2,300+ objects in my 11gR2 database. As one would expect, the number of objects granted through this role varies from version to version and with the options installed in your database. A query on DBA_TAB_PRIVS (see the sketch after this list) provides you the names of the objects being granted through this role. The sheer number of objects included in this role goes to show how powerful this role is.
Although most objects granted through this role are DBA_, V$, GV$ and CDB_ views, there are a few oddballs in this role as well. Although the name of this role implies SELECT privileges, EXECUTE grants on a few packages are included in this role (3 packages and 2 functions in my 12c database). Along with the majority of SYS views, a few objects (tables and views) owned by SYSTEM, XDB and OUTLN are also granted. SELECT_CATALOG_ROLE includes the HS_ADMIN role, which in turn includes two more roles - HS_ADMIN_SELECT_ROLE and HS_ADMIN_EXECUTE_ROLE.
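
As a rough sketch, the following query lists and counts the objects granted through SELECT_CATALOG_ROLE; the output will vary with your database version and installed options:

select owner, table_name, privilege
from dba_tab_privs
where grantee = 'SELECT_CATALOG_ROLE'
order by owner, table_name;

select count(*)
from dba_tab_privs
where grantee = 'SELECT_CATALOG_ROLE';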

SELECT ANY DICTIONARY Privilege

The second option is the SELECT ANY DICTIONARY system privilege. The primary purpose of this system privilege is to grant SELECT access to the Oracle data dictionary; however, this privilege is more powerful than SELECT_CATALOG_ROLE. Like SELECT_CATALOG_ROLE, this privilege grants SELECT on the DBA_, GV_$ and V_$ views, and the 12c CDB_ views, but it also overcomes the limitations of SELECT_CATALOG_ROLE noted above:
  • This privilege grants SELECTs on underlying SYS tables such as TAB$, OBJ$ etc.
  • This privilege grants SELECT on X$ views.
  • This privilege also allows grantees to create stored PL/SQL objects such as procedures, packages or views on the objects it grants.
Since this is a system privilege, the names of the objects and the types of privileges being granted are not documented in the Oracle documentation. It is easy to obtain such a list for SELECT_CATALOG_ROLE through DBA_TAB_PRIVS, but no such documentation is available for this privilege. However, the Oracle documentation mentions that underlying SYS tables containing password hashes, such as USER$, DEFAULT_PWD$, USER_HISTORY$ and ENC$, cannot be accessed through this privilege in 12c. In 11g, such access was granted. This privilege allowed SELECT access on LINK$ until Oracle10g, but that access has since been removed.

The two security considerations mentioned above - the very high number of SYS objects being granted and the sensitivity of the metadata being revealed - apply to this privilege as well. In addition, you need to consider whether there is any need to grant (a) SELECT on the underlying SYS tables and (b) the privilege to create PL/SQL code on underlying SYS objects.

The SELECT ANY DICTIONARY privilege may be a quick solution to meet your users' requirements. As described here, SELECT ANY DICTIONARY is more powerful than SELECT_CATALOG_ROLE, and hence caution must be exercised before granting it. I would recommend weighing the security considerations before granting this privilege.

SELECT ANY TABLE Privilege with o7_dictionary_accessibility = true

SELECT ANY TABLE is a system privilege that grants SELECT on all non-SYS objects. Since our requirement is to grant SELECT access on SYS objects, this privilege won't be sufficient unless you have set o7_dictionary_accessibility = true. SELECT ANY TABLE alone, without the o7_dictionary_accessibility init.ora parameter, doesn't grant SELECT access on the data dictionary. In other words, this option is a combination of granting the SELECT ANY TABLE privilege and setting the o7_dictionary_accessibility init.ora parameter to TRUE.

I strongly recommend against using this option for the following reasons:
  • Although SYS objects can be queried with this option, it also opens the door to other security issues. The o7_dictionary_accessibility parameter extends access to the SYS schema for other *ANY* privileges as well. For example, if you have the CREATE ANY TABLE privilege, then you can create tables in the SYS schema. If you have the DELETE ANY TABLE privilege, then you can delete from SYS tables. There are quite a few *ANY* privileges that would be extended to include SYS objects, and hence it is quite dangerous.
  • Center for Internet Security (CIS) Oracle Benchmarks and many other industry-standard security best practices recommend that the o7_dictionary_accessibility parameter be set to false (see the sketch after this list for a quick check).
  • This parameter was primarily implemented to facilitate Oracle7 to Oracle8 migration and should not be used now.
  • The side effect of the SELECT ANY TABLE privilege is that it would allow the grantee to select from all tables/views in the database, including application/user data. The requirement was to grant SELECT on SYS objects, not SELECT on all tables within the database.
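
As a quick check aligned with the CIS recommendation above, the following sketch verifies that the parameter is set to FALSE (the expected value on a properly configured database):

select name, value
from v$parameter
where lower(name) = 'o7_dictionary_accessibility';
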
EXECUTE_CATALOG_ROLE Role

EXECUTE_CATALOG_ROLE may be an option when you need to grant EXECUTE privileges on Oracle-supplied PL/SQL packages. This is an Oracle-supplied role. As the name implies, this role grants EXECUTE privileges on the Oracle data dictionary (catalog) packages to the grantees. The number of objects granted through this role in my 12.2.0.1 database is 110. The number was 80 in my Oracle10g database. As one would expect, the number of objects granted through this role varies from version to version and with the installed options. A query on DBA_TAB_PRIVS, as sketched below, provides you the names of the packages being granted through this role.
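
A minimal sketch of that query follows; as with SELECT_CATALOG_ROLE, the list and counts will differ by version and installed options:

select owner, table_name, privilege
from dba_tab_privs
where grantee = 'EXECUTE_CATALOG_ROLE'
order by owner, table_name;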

This role grants EXECUTE privileges on very powerful PL/SQL packages to the grantee. Just to name a few categories of packages being granted, this role includes packages related to advanced queuing, LogMiner, redaction, SQL redefinition, replication, streams, row-level security, etc.

This is a very powerful role. Before granting this role to anyone, please review the requirements and then make a determination whether this role should be granted or not.

DELETE_CATALOG_ROLE Role

There is one more role whose name sounds pretty dangerous - DELETE_CATALOG_ROLE. Your first reaction would be that this role allows deletes on data dictionary objects, but don't worry: in Oracle10g/11g, this role includes only one privilege - DELETE on FGA_LOG$. DELETE on AUD$ was included in 9i but has been removed since then. This role has been deprecated in 12c.

Create Your Own Roles Based on Specific Requirements

As described in this article, you have various options to grant access to the data dictionary. Both SELECT_CATALOG_ROLE and the SELECT ANY DICTIONARY privilege offer a quick solution, but the security considerations should be weighed. This role and privilege are very powerful and can be misused. In addition, the number of objects granted through them is quite large, and your users may not need access to all the objects that they grant. SELECT ANY TABLE with o7_dictionary_accessibility is not an option that I would recommend. My recommendation is to create a new role specific to your requirements, grant the needed objects to this role, and then grant the new role to the user, as sketched below.
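
A minimal sketch of this recommendation follows; the role name, object list and grantee are hypothetical examples, and grants on SYS-owned objects must be made while connected as SYS (or by a user who holds those grants with the grant option):

create role app_dict_read;

grant select on dba_tables  to app_dict_read;
grant select on dba_indexes to app_dict_read;
grant select on v_$session  to app_dict_read;

grant app_dict_read to jr_dba;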


Monday, December 25, 2017

Tax Talk: Prepay Property Taxes or Not!

Happy Holidays!

And here is some tax talk on an Oracle blog! A new tax bill was signed into law last week. There is lots of information on the web on this subject, and there are lots of newspaper articles on it. Many articles focus on how to save on or reduce your tax bill, specifically in 2017. Some of these articles are useful while others are confusing! I did some research on prepaying 2018 property taxes in 2017. Below is a summary of what I learned.

If you are taking a standard deduction, or you do not itemize, then the discussion of prepaying local property taxes is a moot point. You don’t need to read this message any further.

If you are not taking the standard deduction, then read on. The primary rationale behind the recommendation to prepay local property taxes this year is the provision in the new tax law that will limit, or cap, SALT deductions at 10K. SALT, which stands for State and Local Taxes, includes state income tax and house/car property taxes. There is no limit or cap on SALT deductions under the current tax law - you can deduct all your SALT taxes - but the new law imposes a 10K limit.

For many homeowners, the property tax burden alone is more than 10K. And that's only the home property tax. To calculate your total SALT tax bill, you need to add your state income tax and car property tax bills to it, and you are likely to be well above 10K. If you itemize now, you can deduct all your SALT taxes, but you won't be able to deduct more than 10K next year. The max you can deduct in 2018 is 10K.

Assuming your SALT tax bill is 20K, you are likely to lose a 10K deduction in 2018. So the recommendation is: why not prepay property taxes so that you can deduct all SALT taxes while the current law allows it? Let's assume that you prepay 10K in property taxes, which could save you 2K in taxes (assuming your tax rate is 20%). That's a 20% return on your 10K investment! Quite nice!

Although a 20% ROI is quite nice, everyone's tax situation is unique. You may not get a 20% ROI! One factor that could limit your potential ROI is the AMT (Alternative Minimum Tax). If you prepay now, your total deductions would further increase, which could trigger AMT or increase your AMT bill. If you are currently paying AMT, then prepaying property taxes may not yield any federal tax savings; your tax savings could very well be a wash due to AMT. If you are not currently paying AMT, then the additional property tax deduction may trigger AMT and could reduce your federal tax savings.

In summary, if you decide to prepay property taxes, your federal income tax savings will depend on your situation, or more specifically your AMT situation. However, your state tax bill would be lower, since AMT calculations apply only to federal taxes, though the state tax savings are not as large as the federal tax savings.

Please do crunch numbers for your situation just to make sure before you decide to prepay! 

Hope this helps! 

Sunday, November 26, 2017

Eagle

It was in 1997 that I designed and implemented a tool to monitor Oracle databases. I called this tool Eagle. I also wrote an article describing Eagle in IOUG Select magazine in 1997.

Twenty years later, in 2017, I came across an article, Enhancing Database Security: Concepts and Tools for the DBA by Peter J. Magee of SQRIBE Technologies, that utilizes Eagle and extends it further.

Quite exciting to see an application of Eagle! More information on Eagle can be found in my blog as well.

Implement SQL*Net Encryption and Checksum

My article on SQL*Net Encryption and Checksum was published in the 2017 Winter edition of UKOUG Scene. 

-----

Implement SQL*Net Encryption and Checksum

Oracle SQL*Net encryption is included in the database license. At one point, it required an Oracle Advanced Security option license at an additional cost, but not anymore. I would strongly recommend that you take advantage of this feature to bolster the security of your database, if you haven't done so yet.

Without encryption, database communication is in clear text, potentially compromising the confidentiality of your data. Malicious users and/or hackers would be able to eavesdrop on the data communications using network sniffer tools such as Wireshark. Not only could data communications be at risk, but DBA activities such as password changes would also be visible to sniffer tools.

I believe the primary reasons for not implementing SQL*Net encryption are: (a) many DBAs are still not aware that the SQL*Net encryption feature is now available at no additional cost; (b) many DBAs and organizations do not see a need to implement SQL*Net encryption because their databases are within the internal network and behind the network firewall; (c) SQL*Net encryption is not a mandate or organizational requirement; or (d) the fear of "what if this breaks my applications?"

Even though databases are behind the network firewall, it is important to implement SQL*Net encryption/checksums for two main reasons: (a) hackers may find a vulnerable system elsewhere on the network and establish a foothold from which to sniff network traffic, including Oracle database traffic, and (b) one cannot discount insider threats from malicious/disgruntled users who can sniff the network traffic.

Although SQL*Net encryption implementation details are readily available on the web, this article attempts to demonstrate an approach that should help you implement this feature without any service disruptions.

Start with REQUESTED

My recommendation is to set sqlnet.ora encryption and checksum parameters to REQUESTED on the database server, as shown below:

SQLNET.CRYPTO_CHECKSUM_SERVER=requested
SQLNET.ENCRYPTION_SERVER=requested
SQLNET.CRYPTO_CHECKSUM_CLIENT=requested
SQLNET.ENCRYPTION_CLIENT=requested

Configuring and managing these parameters at the database server level, instead of at all client locations, provides a centralized approach to implement encryption and check-sum. It requests that all bi-directional database traffic be encrypted and check-summed.

Please note that Oracle does not implement encryption and checksums out of the box. The default value for these parameters is ACCEPTED, which turns on encryption and checksums only if specifically required or requested by the client. Setting these parameters to REQUESTED on the database server notifies database clients that the database has requested encryption and checksum services. However, the clients may reject the database server's request through their sqlnet.ora settings, or may not be able to honor it due to incompatible client Oracle software. In this situation, the database server will proceed without encryption/checksum services and establish clear-text communications with the clients. The point is that the REQUESTED setting won't break your connections to the database.

The two _CLIENT parameters mentioned above are for database connections originating from the database server, including database link connections to other databases. Just an FYI that the SQLNET.CRYPTO_SEED parameter has been obsolete since Oracle9i.

I would suggest not specifying any encryption and checksum protocols at this point. Let Oracle database server negotiate with the client and choose the strongest protocols that are acceptable to both parties.

Run the following query (a) to identify clients who have (or have not) implemented the encryption and checksum services and (b) protocols that have been negotiated by the server and the clients:

select * from v$session_connect_info order by sid;

If this query returns two rows with protocol names as shown below, then you are using encryption and checksum services. Please note that protocols may vary for some clients.

AES256 Encryption service adapter for 64-bit Windows: Version 12.1.0.2.0 - Production
SHA1 Crypto-checksumming service adapter for 64-bit Windows: Version 12.1.0.2.0 - Production

If you don't see these two rows with protocol names, then use the SID and SERIAL# returned by this query to identify the clients who are not using these services and work with them to resolve the issues (a query sketch follows below). Older versions of the SQL*Net client software are the most likely issue that you will need to resolve.
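
The following sketch assumes that encrypted sessions report an "encryption service adapter" banner in V$SESSION_CONNECT_INFO, as shown above, and flags user sessions that have not negotiated encryption:

select s.sid, s.serial#, s.username, s.machine, s.program
from v$session s
where s.type = 'USER'
and not exists
  (select 1
   from v$session_connect_info ci
   where ci.sid = s.sid
   and ci.serial# = s.serial#
   and lower(ci.network_service_banner) like '%encryption service adapter%');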

In addition, the above-mentioned query also returns the following rows for each client, which are for informational purposes only.

Windows NT TCP/IP NT Protocol Adapter for 64-bit Windows: Version 12.1.0.2.0 - Production
Encryption service for 64-bit Windows: Version 12.1.0.2.0 - Production
Crypto-checksumming service for 64-bit Windows: Version 12.1.0.2.0 - Production

As you may have noticed, the REQUESTED setting, in essence, implements a "monitoring" mode. The database server requests that the clients support the encryption and checksum services, but doesn't require/force them to do so. If database clients are unable to fulfill this request, then the database server proceeds with establishing a database connection in clear text. Database connections are not rejected or denied, and hence nothing breaks!

Configure Encryption and Checksum Protocols

The next step is to configure the encryption and checksum protocols. Your organization may have a mandate for strong encryption standards such as FIPS 140-2, or your objective may be to remove weaker protocols such as MD5 or RC4. Whatever the reason, review the results of the query against the v$session_connect_info view and compile a list of protocols that are being used for encryption and checksums. If the protocols in use are weaker than your organization's policy allows, then you may need to identify the root cause, which very well may be an older version of the SQL*Net client. You need to work with the users to upgrade the software before stronger protocols can be deployed.

You could turn on SQL*Net tracing, as shown below, to troubleshoot the issue:

TRACE_LEVEL_CLIENT=16
TRACE_DIRECTORY_CLIENT = c:\Oracle\temp
TRACE_FILE_CLIENT= SQLNet.trc
TRACE_TIMESTAMP_CLIENT = ON
TRACE_UNIQUE_CLIENT = ON

Once you have ensured that all the clients support the protocols that you would like them to, adjust the values of the following parameters in sqlnet.ora on the database server. More than one protocol can be specified, as shown below.

SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256)

SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA512, SHA384, SHA256, SHA1)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA512, SHA384, SHA256, SHA1)

If you have a combination of Oracle10g, Oracle11g and Oracle 12cR1 database servers and clients, then the above protocols should work for you. Of course, you need to test the configuration in your environments (development, test and/or UAT and finally the production).

Oracle10g, Oracle11g and Oracle 12cR1 support the following encryption protocols:
  • Advanced Encryption Standard (AES): AES256, AES192 and AES128
  • Triple DES: 3DES168, 3DES112
  • DES: DES and DES40
  • RC4: RC4_256, RC4_128, RC4_56, RC4_40

Oracle10g, Oracle11g and Oracle12c R1 support the following checksum protocols:
  • MD5
  • SHA1

Oracle12c R1 introduces the following three additional checksum protocols:
  • SHA512
  • SHA384
  • SHA256

Switch to REQUIRED

Once you are satisfied that all clients are supporting the encryption and checksum services with the approved protocols, then you should switch the values of these parameters to REQUIRED as follows:

SQLNET.CRYPTO_CHECKSUM_SERVER=required
SQLNET.ENCRYPTION_SERVER=required
SQLNET.CRYPTO_CHECKSUM_CLIENT=required
SQLNET.ENCRYPTION_CLIENT=required

Typical Errors

You will get "ORA-12649: Unknown encryption or data integrity algorithm" when the encryption or checksum protocols that you specified in sqlnet.ora are unknown or not supported. For example, you have specified SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA512), but SHA512 is not supported by the Oracle11g client.

Oracle error "ORA-12650: No common encryption or data integrity algorithm" will be reported when there are no common encryption or checksum protocols between the client and the server. For example, SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(MD5) on the client side and SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA1) on the server.

Oracle error "ORA-12660: Encryption or crypto-checksumming parameters incompatible" will be reported when the server requires encryption or checksums but the client does not support them, or vice versa.


You should monitor the Oracle alert logs and SQL*Net logs on the database server to see if there are any issues. You may see the above-mentioned ORA- errors. In addition, the alert log may also show the error "opiodr aborting process unknown ospid (11992) as a result of ORA-609".

Tuesday, September 12, 2017

Implement Oracle Database Firewall using Valid Node Checking

I recently published an article on Information Security Buzz that explains how to implement a database firewall using Oracle Database Listener’s Valid Node Checking (VNC) feature to protect your Oracle databases from malicious attacks from unauthorized locations. The article can be found at http://www.informationsecuritybuzz.com/articles/implement-oracle-database-firewall-using-valid-node-checking/

Below is the text of the same article:

Introduction

One of the Oracle Database Listener features for protecting your Oracle databases from malicious attacks launched from unauthorized locations is Valid Node Checking (VNC). Through this feature, access to the database can be restricted based on the IP address (or host name) of the client machine attempting to connect to the database. The Oracle database Listener validates the IP address (or host name) of the incoming client machine against the "allow" or "deny" rules specified in the sqlnet.ora file before opening up a connection to the database.
Although this is not a new feature and has been available since Oracle8, it has not garnered traction within the Oracle database community mainly due to the following reasons:
  1.  Many organizations do not see a need for additional security controls such as database Valid Node Checking. Since today's enterprise databases are behind network firewalls, they reckon that the security risk to the databases is low. Although this may be true in some cases, we recommend that a defense-in-depth methodology be adopted to protect today's mission-critical databases. Without database-based Valid Node Checking, anyone with network connectivity could attempt to connect to the database. With this feature enabled, only pre-authorized IP addresses are permitted to connect to the database, reducing the scope of internal threats as well as the attack surfaces/vectors from which malicious attacks can be launched.
  2.  Many organizations assume that they have a large number of IP addresses that should be allowed to connect to the database, rendering the VNC feature difficult to implement and cumbersome to manage. This is true for the client-server computing model, which has become outdated. Instead, most of today's databases are accessed through a small but finite number of middle-tier application servers. Oracle DBAs need to specify the IP addresses of these application servers as allowed nodes in tcp.invited_nodes. In addition, you will need to enter the IP addresses of administrative and operational team members' machines so that they can connect directly to the database through SQL*Plus, TOAD, SQL Developer or other tools.
  3. There is a widely held notion that each and every IP address must be explicitly specified and that wildcards are not allowed, making the IP address management process quite complex and difficult to manage. That is no longer true. Beginning with Oracle11g R2, allow and deny rules, which are specified via the tcp.invited_nodes and tcp.excluded_nodes parameters, can include wildcards, greatly simplifying implementation of this feature. In addition, CIDR notations are allowed as well.

Implementation

Valid Node Checking feature is enabled by specifying the tcp.validnode_checking parameter in sqlnet.ora. Once enabled, you can configure allow rules through tcp.invited_nodes or deny rules through tcp.excluded_nodes in sqlnet.ora file as shown below:
tcp.validnode_checking=yes
tcp.invited_nodes=(localhost,192.168.10.101,192.168.10.102)

With the above configuration, the listener will accept connections from localhost, 192.168.10.101 and 192.168.10.102. Please note that if you don't include "localhost" or the IP addresses of the local host, the Oracle listener will not start and you won't be able to connect to the database at all. All other nodes will get the following error:
C:\Users\jmehta>sqlplus scott/tiger@testdb

SQL*Plus: Release 11.2.0.4.0 Production on Mon May 16 18:15:34 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR: ORA-12537: TNS:connection closed

 The listener log will show the following error:
12-MAY-2016 21:32:18 * service_update * CDB2 * 0
Incoming connection from 10.0.2.2 rejected
12-MAY-2016 21:32:38 * 12546
TNS-12546: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00516: Permission denied

You can also specify tcp.excluded_nodes to deny access to specific nodes, but there is usually no reason to do so: if you only specify invited nodes with tcp.invited_nodes, all other nodes are excluded anyway. If both are present, then tcp.invited_nodes takes precedence over the tcp.excluded_nodes parameter.
These parameters can use wildcards for IPv4 addresses and CIDR notation for IPv4 and IPv6 addresses, as shown below:
tcp.invited_nodes=(192.168.10.*,192.162.*.*)
tcp.invited_nodes=(sales,hr,192.168.*,2001:DB8:200C:433B/32)
tcp.invited_nodes=(192.168.10.1/24,192.168.20.1/24)

Obtain IP Addresses through Database Auditing

Although VNC implementation appears to be quite simple, it may not be in real-life. The challenge is to obtain and identify all IP addresses that should be allowed to connect to the database. If you miss an IP or two, then those users (including application servers, remote databases, devices, etc.) won't be able to connect to the database, causing unwarranted issues and/or outages.
Instead of requesting all the users to supply their IP addresses, you should enable auditing on logon and then query the Oracle audit trail to obtain a complete list of IP addresses that have ever connected to the database. You need to make sure that the audit trail covers a sufficient period of time so that all users, including those who connect infrequently, are included in the audit trail.
As shown below, enable logon auditing in pre-12c databases, or in 12c databases running conventional (non-unified) auditing. You need to wait sufficient time for the audit trail to be populated with connection information from all users, and then execute the following query.
audit create session ;

select distinct substr(comment_text,instr(comment_text,'HOST=')+5,
instr(comment_text,'PORT=')-instr(comment_text,'HOST=')-7)
from dba_audit_trail
where action = 100
order by 1;

If you are running in 12c unified auditing mode, then create an audit policy and query UNIFIED_AUDIT_TRAIL table to get a complete list of all IP addresses over the given period of time:
create audit policy ora_logon
actions logon;

audit policy ora_logon;

select distinct substr(authentication_type,instr(authentication_type,'host=')+5,
instr(authentication_type,'PORT=')-instr(authentication_type,'host=')-7)
from unified_audit_trail
order by 1;

Ensure that IP Addresses are Correctly Specified

The best approach to ensure that you have specified tcp.invited_nodes correctly and that it works as intended is through listener tracing at the ADMIN or DEBUG level. Enable listener tracing by setting TRACE_LEVEL_<listener_name> = ADMIN in listener.ora (a sketch follows below), which generates a trace of the Oracle Listener's actions so you can see the IP addresses being loaded into the Valid Node Checking table.
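
A minimal listener.ora sketch for this, assuming the default listener name LISTENER and a hypothetical trace directory (remember to turn tracing off once you are done):

TRACE_LEVEL_LISTENER = ADMIN
TRACE_DIRECTORY_LISTENER = /u01/app/oracle/network/trace
TRACE_FILE_LISTENER = listener_vnc.trc
TRACE_TIMESTAMP_LISTENER = ON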

Gateway Server for Large User Base

If you have a very large number of database users requiring direct connections to the database, then the IP address management task could be complex. An ideal solution to remediate this issue would be to configure a gateway server, such as Citrix, with the requisite software (SQL*Plus, TOAD, SQL Developer, other tools, etc.) that would be used to access the database. You add the IP address of the Citrix server to tcp.invited_nodes to allow access from the Citrix server, and all users are asked to connect through Citrix to reach the database. With this approach, security is further bolstered as the only path to the database would be from the Citrix server. A gateway server not only resolves the IP address management issue, but it can also help implement additional security controls that would prevent data dumps, printing, copying, etc.

Special Considerations

It's important to note that the listener must be stopped and started (not reloaded through lsnrctl reload) for valid node checking to become active. The tcp.invited_nodes and tcp.excluded_nodes parameters are read only during listener startup. This means that IP addresses and hostnames cannot be dynamically added or removed. Although a listener restart (stop and start) usually takes less than a few seconds, your database is essentially down for incoming database connections for this small duration. You need to consider its impact on your high-availability requirements. Please note that the listener can be restarted without impacting existing database connections.
All hostnames specified in tcp.invited_nodes or tcp.excluded_nodes must be resolvable. Otherwise, the Oracle Listener fails to start, aborting with a "TNS-00584: Valid node checking configuration error" message. When tracing is enabled by setting TRACE_LEVEL_LISTENER=admin, listener tracing records "ntvllt:Problem in Loading tcp.invited_nodes" as the error message.
However, the good news is that unresolvable IP addresses don't cause any issues during listener startup. If listener tracing is enabled, the trace file shows that all IP addresses and hostnames are processed by the listener and populated into the Valid Node table appropriately.
An Oracle support note says that all IP addresses or host names must be specified on one line in the sqlnet.ora file, but I was able to specify multiple IP addresses on multiple lines. In fact, I specified only one IP address on each line, and it worked fine in Oracle 12c (12.1.0.2). I do know that it didn't work in previous releases, so do test your configuration against the Oracle versions/releases that you are running to check whether it works or not.
Oracle does seem to perform basic validation on the hostnames and IP addresses specified in the tcp.invited_nodes and tcp.excluded_nodes parameters, but the documentation doesn't elaborate on the level of validation being performed, causing misunderstanding, confusion and unnecessary issues. Oracle does seem to ignore duplicated IP addresses or host names. Two consecutive commas cause the Oracle Listener to ignore the rest of the entries. Some invalid IP addresses are accepted but some are not. To make matters worse, the Oracle Listener quite often simply ignores incorrect syntax or specifications without warning, which results in unexpected behaviors and outcomes.
Please note that the VNC feature may not protect you against advanced attacks involving IP address and hostname spoofing. Other options available to implement capabilities similar to VNC are a server-based firewall and Oracle Connection Manager. You should consider the pros and cons of each approach and choose the option that best meets your needs.

Conclusion

Oracle's VNC feature is an interesting feature akin to a basic firewall. With support for wildcards and access through middle-tier application servers, IP address management might not be as complex as initially thought. As pointed out in this article, although there are a few special considerations you need to be aware of, VNC is still a powerful security control that should be considered as part of a defense-in-depth methodology and your overall security posture.


Joe's Ride

Rupa and I got a chance to take part in an event called Joe's Ride and Stride. It was on May 20th 2017 at Oakdale Emory Church in Olney MD. This event was organized to remember and celebrate the lives of those who passed away due to cancer, and to support those who are currently fighting cancer. The event included 10K, 30K and 45K bike rides. It also had a 5K walk.

I took part in the 30K bike ride. It was a beautiful ride. It started at 7:00 AM at Oakdale Emory Church in Olney MD, passed through scenic and lush green surrounding areas and finished back at the church. An ideal temperature of 63°F made the ride a pleasure and fun. There were some winding roads and some small climbs, but it was mostly flat. My smartphone app showed an elevation gain of 1,075 feet over the 30K distance, making it an easy ride. I am sure I will take part in the 45K next year!

But it was the organization of the event that made the ride even more pleasurable. More than a few hundred participants took part in the bike rides as well as the walks. There were volunteers throughout the bike route to guide and cheer the riders. Some were standing at various intersections to make sure that we didn't take a wrong turn. Some were at the rest stops with water and fruit to keep our energy levels up. And of course, they greeted, cheered and welcomed us at the finish line. Lots of food and music at the church. All credit goes to Russ and Betsy Sanford, and Oakdale Emory Church, for organizing this event for such a great and worthy cause.

Cancer detection, prevention and cure is very close to my heart. Like most of you, I have personally seen and experienced my close friends and family members succumb to this dreadful disease, but not without a courageous and brave fight. And they are the true fighters! The fight against cancer is full of emotions. It is like a roller-coaster ride. Initial shock of cancer diagnosis just makes your spine shiver and heart stop. Then there is treatment regimen that goes through cycles of chemo and may be surgery. One day there is hope and the next day it may dwindle. I can't even fathom what they go through during this fight. Just salute to those who are fighting this battle and salute to those who are supporting the fighters. 

Let's hope cure for cancer is not too far. In the meantime, do support this fight and the fighters.

For more information on Joe's ride, visit http://joesanford.com/

Thursday, March 10, 2016

Oracle TNS Listener Poison Attack

The Oracle TNS Poison Attack vulnerability, published as CVE-2012-1675 with a CVSS score of 7.5, allows a malicious hacker to take complete control of the database server without authentication.

As explained in the article Oracle TNS Listener Poison Attack that I wrote for the Information Security Buzz, the fix for this vulnerability depends upon the Oracle version that you are running. 

The default listener configuration in 11.2.0.4 is vulnerable. You have to explicitly set VALID_NODE_CHECKING_REGISTRATION_<listener_name> to LOCAL, ON or 1 in listener.ora to address this issue.

What makes this vulnerability still relevant is that there are many organizations still running 11.2.0.4 and they haven’t addressed this vulnerability. As explained in this article, the fix is quite easy.

Here is the text of the article which was published on the Information Security Buzz:

A flaw in the Oracle database listener, if not mitigated, could allow an attacker to take complete control of an Oracle database through an attack known as the TNS Poison Attack. This vulnerability is remotely exploitable without authentication credentials. This classic man-in-the-middle (MITM) vulnerability has been published as security alert CVE-2012-1675 and received a CVSS base score of 7.5. It impacts the confidentiality, integrity and availability of the database. Joxean Koret discovered this vulnerability in 2008 and publicly disclosed it in 2012.
The TNS Poison Attack vulnerability exploits the Oracle listener's database service registration functionality. Oracle database users connect to database services through the Oracle TNS Listener, which acts as a traffic cop. A malicious attacker, residing on the same network as the database, registers a malicious service with the database listener using the same service name as a legitimate database service. No credentials are required to register a database service with the listener. An attacker can use Oracle database software or other easily available tools to register a malicious database service.
After registration of the malicious database service with the same name as the legitimate service, the Oracle listener has two services to choose from - a legitimate service and a malicious service. With two database services available, the Oracle listener switches to load-balancing traffic-cop mode, directing users alternately to the legitimate service and the malicious service. At least 50% of the user sessions are directed to the malicious service. Database user sessions, which are now communicating through the malicious service, can be hijacked by the attacker. The attacker is in the middle; all communication from the users to the database now passes through the malicious attacker. Attack post established. The attacker has full purview of what users are communicating with the database. At a minimum, the attacker can view and steal the data. Additional SQL commands may be injected to broaden the scope or carry out additional attacks. If a database user communicating with the database happens to be a privileged user with the DBA role, then the attacker has complete control of the database. Database compromised. Mission accomplished.
The TNS Poison Attack vulnerability is mitigated through the Valid Node Checking Registration (VNCR) setting, which permits service registration only from known nodes or IPs. The specific mitigation steps depend on the version of the database that you are running, as shown below:
(1)    Oracle Database Releases 12.1 or above: If you are running Oracle database 12.1 or above, then you don't need to read this article further unless you are just curious. The default Oracle listener configuration in Oracle 12c protects you against this vulnerability. Although you don't need to set the VALID_NODE_CHECKING_REGISTRATION_<listener_name> parameter to LOCAL in listener.ora, I would suggest that you explicitly do so just to make sure, as shown below:

LISTENER_DB =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.100)(PORT=1521))
    )
  )
VALID_NODE_CHECKING_REGISTRATION_LISTENER_DB=LOCAL

This parameter ensures that only databases that are on the same server as the listener are permitted to register services with the listener. No remote registration of services is permitted. If a malicious attacker attempts to register a service with the listener from a remote server, you will see the following error message in the listener log:

Listener(VNCR option 1) rejected Registration request from destination 192.168.200.131
12-NOV-2015 17:35:42 * service_register_NSGR * 1182

Oracle's clustering solution, Oracle RAC, requires remote registration of services. In order to protect Oracle RAC from the TNS Poison Attack, you also need to set REGISTRATION_INVITED_NODES_<listener_name> to specify the IP addresses of the nodes from which remote registration is required (see the sketch after this list).
(2)    Oracle Database Release 11.2.0.4: If you are running Oracle database 11g R2 11.2.0.4, then you must mitigate this risk through the listener configuration. As illustrated above, you need to set VALID_NODE_CHECKING_REGISTRATION_<listener_name> to LOCAL. The alternate values ON or 1 accomplish the same objective. The default value for this parameter is OFF, leaving the door open to an attack. As mentioned above, if you are running RAC, then you also need to set REGISTRATION_INVITED_NODES_<listener_name> to allow instance registration only from trusted/valid nodes.
(3)    Oracle Database Release 11.2.0.3 or older: Before I describe the mitigation for older releases, let me mention that you should not be running Oracle database 11.2.0.3 or older. Oracle has already de-supported these releases, and no security patches are available for them. You should upgrade as soon as possible.
Oracle, however, does provide a workaround for older releases through the Class of Secure Transport (COST) parameters, such as SECURE_PROTOCOL_<listener_name> and SECURE_REGISTER_<listener_name>, which can be configured to control registration of services from valid nodes only. Please refer to the Oracle documentation for more information.
Please note that the COST parameters can also be used in Oracle database releases 11.2.0.4 and newer to protect against the TNS Poison Attack, but the procedure is more complex and requires additional configuration.
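
For the RAC scenario mentioned in items (1) and (2) above, a minimal listener.ora sketch follows; the listener name matches the earlier example and the node IP addresses are hypothetical:

VALID_NODE_CHECKING_REGISTRATION_LISTENER_DB = ON
REGISTRATION_INVITED_NODES_LISTENER_DB = (192.168.100.101, 192.168.100.102)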

What makes this vulnerability still relevant, even years after its full disclosure, is that many organizations running various Oracle database 11g R2 releases, such as 11.2.0.3 and 11.2.0.4, haven't yet mitigated this flaw. If you haven't, you should do so as soon as possible.

Monday, December 21, 2015

Oracle, please bundle OPatch with PSUs/SPUs!

OPatch is an Oracle-supplied utility that is used by DBAs to apply patches (PSUs/SPUs/Bundles etc.) to Oracle installations. It is the only supported and documented way to apply Oracle PSUs and SPUs. Of course, it is recommended and required as well.

Before applying Oracle patches, we need to verify that the OPatch version recommended by Oracle in the PSU/SPU README is already on the server.

If it is not, then we need to download the recommended version of OPatch. (By the way, we don't attempt to use a non-recommended OPatch!) The fastest way to download OPatch that I know of is to get it directly from the https://updates.oracle.com/download/6880880.html URL. You still need to choose the Oracle software version and platform. After downloading it to your PC, you will need to copy it to the server through a VPN tunnel using copy or ftp utilities, rename the old OPatch directory under the Oracle Home, move the new one there and then unzip it. Quite a few tasks! And of course, it takes time!

So my question, or rather my request, to Oracle is to bundle OPatch with Oracle patches (PSUs/SPUs, etc.), eliminating all the additional steps mentioned above. OPatch would be moved to the server along with the PSUs/SPUs, with no need to move it separately. We could easily save 15 minutes or so of our time. Why not? It is the first step towards automation.


DBAs do spend a considerable amount of time applying quarterly patches. The time saved could be used elsewhere to improve the health of the databases! Not possible now. Hopefully, in the near future!

Friday, March 13, 2015

Gold from the ASHes!

Oracle's ASH, or Active Session History, statistics are simply a gold mine. ASH's power stems from its capability to correlate performance issues to a specific database user or a SQL statement very quickly. Active Session History, as name implies, stores session states for each and every active database session. A session is considered active if it is on CPU or waiting for a non-idle event (or OS call). 

Since Oracle samples active sessions every second, each sample is a statistical approximation of one second. If an ASH record indicates that a session is on the CPU, then you could say that the session has consumed 1 second of CPU. By the same token, if an ASH record indicates that a session is waiting on a non-idle event, then you could say that the session has waited for a second on that event.

For example, if there are 100 ASH records showing various sessions on the CPU during a 30-minute interval, then you could say DB CPU Time is 100 seconds. During the same interval, if there are 200 ASH records showing various wait events (OS calls), then you could say that the database has waited for 200 seconds. In this case, DB Time would be 100 + 200 = 300 seconds. A query along these lines is sketched below.
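
A minimal sketch of that arithmetic against V$ACTIVE_SESSION_HISTORY follows; the time window is a hypothetical example, and the counts are treated as seconds under the one-sample-per-second assumption described above:

select count(*) db_time_secs,
       sum(decode(session_state, 'ON CPU', 1, 0)) db_cpu_secs,
       sum(decode(session_state, 'WAITING', 1, 0)) wait_secs
from v$active_session_history
where sample_time >= to_date('03/13/2015:10:00:00', 'MM/DD/YYYY:HH24:MI:SS')
and sample_time <= to_date('03/13/2015:10:30:00', 'MM/DD/YYYY:HH24:MI:SS');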

As we know, DB Time and DB CPU Time can also be derived from an AWR report, which naturally raises a question: do the DB Time and DB CPU Time statistics reported by AWR match those derived from ASH? I would contend that it is not important to have a 100% match between ASH and AWR statistics. What is more important is that ASH statistics help you identify a problem very quickly, which AWR may not be able to do. Anyway, let me first attempt to answer this question and then explain why it is futile to attempt to compare the two sets of statistics.

Back to the question: ASH and AWR statistics may or may not match. AWR statistics represent the difference between two snapshots, while ASH statistics are samples of all active sessions collected every second. There may be sampling errors in the ASH data.

ASH experts or statisticians may be able to elaborate more on the ASH sampling algorithm and answer this question intelligently. In my small world, I have observed ASH statistics to be off by 10 to 20%. Let me show you how to calculate the differences in DB Time and DB CPU Time as reported by AWR and ASH.

  SELECT TO_CHAR (ASHTime, 'MM/DD/YY:hh24') ASHTime,
         ASHDBTime,
         ASHCPUTime,
         AWRDBTime,
         AWRCPUTime,
         ROUND ( (ASHDBTime - AWRDBTime) / ASHDBTime * 100) DBTimeDiff,
         ROUND ( (ASHCPUTime - AWRCPUTime) / ASHCPUTime * 100) CPUTimeDiff
   FROM (  SELECT TRUNC (SAMPLE_TIME, 'HH24') ASHTime,
                   ROUND (COUNT (*) / 60) ASHDBTime,
                   ROUND (SUM (DECODE (session_state, 'ON CPU', 1, 0)) / 60)
                      ASHCPUTime
             FROM v$active_session_history ash
             WHERE     sample_time >= TO_DATE ('&&s_begin_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND sample_time <= TO_DATE ('&&s_end_time', 'MM/DD/YYYY:HH24:MI:SS')
          GROUP BY TRUNC (SAMPLE_TIME, 'HH24')) ASH,
         (  SELECT TRUNC (BEGIN_INTERVAL_TIME, 'HH24') AWRTime,
                   ROUND (SUM (EE1.VALUE - EE2.VALUE) / 60000000) AWRCPUTime
             FROM DBA_HIST_SNAPSHOT SS,
                   DBA_HIST_SYS_TIME_MODEL EE1,
                   DBA_HIST_SYS_TIME_MODEL EE2
             WHERE     EE1.SNAP_ID = EE2.SNAP_ID + 1
                   AND EE1.INSTANCE_NUMBER = EE2.INSTANCE_NUMBER
                   AND EE1.STAT_NAME = EE2.STAT_NAME
                   AND EE1.SNAP_ID = SS.SNAP_ID
                   AND EE1.INSTANCE_NUMBER = SS.INSTANCE_NUMBER
                   AND SS.BEGIN_INTERVAL_TIME >=
                          TO_DATE ('&&s_begin_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND SS.BEGIN_INTERVAL_TIME <=
                          TO_DATE ('&&s_end_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND EE1.STAT_NAME IN ('DB CPU')
                   AND EE2.STAT_NAME IN ('DB CPU')
             GROUP BY TRUNC (BEGIN_INTERVAL_TIME, 'HH24'), EE1.STAT_NAME) AWRCPUTime,
         (  SELECT TRUNC (BEGIN_INTERVAL_TIME, 'HH24') AWRTime,
                   ROUND (SUM (EE1.VALUE - EE2.VALUE) / 60000000) AWRDBTime
             FROM DBA_HIST_SNAPSHOT SS,
                   DBA_HIST_SYS_TIME_MODEL EE1,
                   DBA_HIST_SYS_TIME_MODEL EE2
             WHERE     EE1.SNAP_ID = EE2.SNAP_ID + 1
                   AND EE1.INSTANCE_NUMBER = EE2.INSTANCE_NUMBER
                   AND EE1.STAT_NAME = EE2.STAT_NAME
                   AND EE1.SNAP_ID = SS.SNAP_ID
                   AND EE1.INSTANCE_NUMBER = SS.INSTANCE_NUMBER
                   AND SS.BEGIN_INTERVAL_TIME >=
                          TO_DATE ('&&s_begin_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND SS.BEGIN_INTERVAL_TIME <=
                          TO_DATE ('&&s_end_time', 'MM/DD/YYYY:HH24:MI:SS')
                   AND EE1.STAT_NAME IN ('DB time')
                   AND EE2.STAT_NAME IN ('DB time')
             GROUP BY TRUNC (BEGIN_INTERVAL_TIME, 'HH24'), EE1.STAT_NAME) AWRDBTime
   WHERE ASH.ASHTIME = AWRCPUTime.AWRTime 
   AND ASH.ASHTIME = AWRDBTime.AWRTime
   ORDER BY 1;

ASHTIME      ASHDBTIME ASHCPUTIME  AWRDBTIME AWRCPUTIME DBTIMEDIFF% CPUTIMEDIFF%
----------- ---------- ---------- ---------- ---------- ----------- ------------
03/12/15:22         51         48         44         42         14          13
03/12/15:23         51         49         45         43         12          12
03/13/15:00         25         22         20         18         20          18
03/13/15:01         50         48         45         42         10          13
03/13/15:02         47         45         41         40         13          11
03/13/15:03         57         55         51         50         11           9
03/13/15:04         46         45         41         40         11          11
03/13/15:05         44         42         39         37         11          12
03/13/15:06         43         41         37         36         14          12
03/13/15:07         43         42         37         36         14          14
03/13/15:08         42         41         38         36         10          12
03/13/15:09         42         41         38         36         10          12
03/13/15:10         43         41         38         37         12          10
03/13/15:11         44         42         38         36         14          14
03/13/15:12         43         41         38         37         12          10
03/13/15:13         43         42         38         36         12          14
03/13/15:14         46         42         39         37         15          12
03/13/15:15         43         42         37         36         14          14

As you can see from the above report, the DB Time and DB CPU Time statistics diverge by 10 to 20%, but that's OK. I think you are more interested in knowing that there was a problem than in knowing 100% accurate values in the ASH stats. Let me explain why.

Since samples are taken every second, ASH may miss an event or activity which started and completed between two consecutive samples. I don't think that, as a performance analyst, you are interested in an event or activity which took less than a second. Any wait event lasting more than a second will be captured. Any SQL running more than a second will be captured. Any long-running query lasting more than a second will be captured. So ASH does capture events and activities of significance. It may not capture events or activities that don't span two consecutive ASH samples, but I don't think that will impede your ability to diagnose database issues.

As mentioned above, SQLs which complete within a second between two samples are not captured, but if they are run frequently enough, then the ASH sampling algorithm will capture them and you will know about them.

If we can predict presidential elections through polls, which are based on statistical sampling, then ASH surely can tell you about performance issues that your database may be facing. ASH is a gold mine of diagnostic information. Just dig it, but beware: ASH comes with a cost - it requires the Diagnostics Pack license.