The One 'Security' Feature in Oracle You Probably Shouldn't Use

Monday Feb 29th 2016 by David Fitzjarrell

Oracle offers a number of security-related settings, but one can definitely do more harm than good.  Read on to see which parameter that is, and why.

Security is on everyone's mind, especially database administrators', and some DBMS vendors supply tools and/or settings that allow the database to monitor and report possibly suspicious activity. Oracle is no different, yet one setting can be particularly distressing, effectively creating a Denial of Service (DoS) condition from within. Let's look at all of the supplied security settings and see why that parameter can disrupt business if it's set 'incorrectly'.

Oracle provides several security parameters that can be set in versions from 11.1 onward; those settings are:

	sec_case_sensitive_logon
	sec_max_failed_login_attempts
	sec_protocol_error_further_action
	sec_protocol_error_trace_action

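The current values of these parameters can be checked from the database itself; as a minimal sketch (run as a user with access to v$parameter, such as SYSDBA):

```sql
-- List the sec_* initialization parameters, their current values and
-- whether each is still at its default.
SELECT name, value, isdefault
  FROM v$parameter
 WHERE name LIKE 'sec\_%' ESCAPE '\'
 ORDER BY name;
```
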
The first in the list, sec_case_sensitive_logon, is deprecated in 12c, as case-sensitive logons are enabled by default. The second is an additional attempt to limit failed login attempts, complementing the profile limit FAILED_LOGIN_ATTEMPTS (which may or may not be set to a value other than the default for some profiles). The profile limit locks the account after the specified number of failures; the init parameter instead limits authentication attempts on a single connection, so if sec_max_failed_login_attempts is set to 5, for example, the server process drops a client connection after 5 failed login attempts on it, regardless of the FAILED_LOGIN_ATTEMPTS setting in the granted profile.
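As an illustrative sketch (the limit of 5 is an example value, not a recommendation), the profile limit can be examined and the init parameter set like this:

```sql
-- Show the FAILED_LOGIN_ATTEMPTS limit granted through each profile.
SELECT profile, limit
  FROM dba_profiles
 WHERE resource_name = 'FAILED_LOGIN_ATTEMPTS';

-- Drop a connection after 5 failed authentication attempts (example value).
-- Written to the spfile; depending on the release this may require a
-- restart to take effect.
ALTER SYSTEM SET sec_max_failed_login_attempts = 5 SCOPE = SPFILE;
```
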

The next two are intended to thwart DoS attacks; they specify how the server proceeds after receiving bad packets (usually, but not always, from malicious clients) and what tracing action to take. The settings for sec_protocol_error_further_action are:

CONTINUE (the default):   the server process continues execution, accepting the packets sent from the (possibly)
                          suspect client.

(DELAY,<integer>):        implements a delay of <integer> seconds before the next packet from the same client
                          connection is accepted.  For malicious connections this limits the consumption of
                          resources, yet for valid, trusted connections it causes a delay in processing that
                          degrades performance.

(DROP,<integer>):         causes the server to forcibly terminate the client connection after <integer> bad
                          packets have been received.  When network issues interfere with valid clients this can
                          cause loss of transactions; the client can reconnect and try the transactions again.

Allowing server processes to continue does provide for better performance, at the expense of allowing malicious clients to continue sending suspect/bad packets. For those with a database server on a network accessible to 'outsiders' this can be a security risk that isn't worth taking. Fortunately, I have not personally seen a database server that wasn't behind a firewall (or two, or three) and on a private network inaccessible to the public.
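For illustration only, here is how the parameter might be changed; the counts are arbitrary example values, and the quoted '(DROP,<n>)'/'(DELAY,<n>)' format follows the parameter reference:

```sql
-- Forcibly terminate a client connection after 3 bad packets (example value).
ALTER SYSTEM SET sec_protocol_error_further_action = '(DROP,3)';

-- Alternatively, delay 3 seconds before accepting further packets from a
-- connection that has sent bad packets (example value).
ALTER SYSTEM SET sec_protocol_error_further_action = '(DELAY,3)';
```
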

We come to the setting that can create problems when none actually exist: sec_protocol_error_trace_action. There are four possible settings:

NONE:	Bad packets are ignored and no trace or log files are generated.

TRACE:	A detailed trace file is generated when suspect (bad) packets are received.  Although such trace files
	can be used to debug problems in client/server communication and can detail where bad packets originate,
	left unchecked this setting can consume disk space at an alarming rate, stopping database processing
	with 'No space left on device' errors.

LOG:	Minimal log messages are printed in the alert log and the server trace files, considerably reducing the disk
	space consumed.

ALERT:	Alert messages are written to the alert log.

When network problems that normally don't interfere with transaction processing present themselves, a large number of trace files can be generated if this parameter is set to its default value of TRACE. I have seen up to 28 trace files generated in a five-minute span, which extrapolates to 336 trace files per hour. Many of these trace files are small, but if this generation is left unchecked and unmonitored, millions of them can accumulate over time, filling a file system to capacity and possibly halting database processing, killing processes and OEM agents. Setting this parameter to LOG can greatly reduce the disk space consumed. ALERT also reduces disk usage but requires an 'alert log scraper' to find the messages and notify the DBAs. NONE is not recommended.
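A minimal sketch of changing the setting to LOG, as suggested above (verify the scope and behavior against your version's parameter reference):

```sql
-- Log short messages to the alert log and server trace files instead of
-- generating a detailed trace file for each batch of bad packets.
ALTER SYSTEM SET sec_protocol_error_trace_action = 'LOG';
```
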

It would be a good idea to regularly check the relevant file systems (notably the one where the alert log resides) for available space; network problems can occur without warning, and databases that once generated no trace data may now be filling directories with essentially useless files. The trace files I've read that were generated in this manner don't provide really useful information:
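The diagnostic directories worth watching can be located from the database itself; a sketch using v$diag_info (available from 11.1 onward):

```sql
-- Report the diagnostic directories whose file systems should be monitored
-- for free space.
SELECT name, value
  FROM v$diag_info
 WHERE name IN ('Diag Trace', 'Diag Alert');
```
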

2016-02-11 11:55:12.751634 : nagbltrm:entry
2016-02-11 11:55:12.833433 : nau_gtm:entry
2016-02-11 11:55:12.833462 : nau_gtm:exit
2016-02-11 11:55:12.833469 : nagbltrm:exit
2016-02-11 11:55:12.833482 : nnfgdei:entry

The first line in such trace files provides the trace file name, including the full path. The next lines, shown above, report the date/time, the module being called and whether it's an entry or exit call. No IP address, no information that could be used to actually trace such connections. No entries exist in the alert log when TRACE is the setting for this parameter, making it difficult to know where the suspect packets originate or why they appear. One piece of information Oracle does provide is the O/S process id (the first number in the trace file name), but once the trace file is generated I've found that the O/S process has 'disappeared', most likely disconnected due to errors. No trace files are generated in $ORACLE_BASE/diag/rdbms/<database name>/<sid>/trace, making investigation even more difficult.

Since this setting causes trace files to be generated when suspect network packets arrive, it's only logical that the Network team be involved in troubleshooting the issue; a 'sniffer' can be invaluable in locating and identifying bad packets. Once the source of the bad packets is found it should be a simple matter to correct the issue (if it is an internal network issue) or block the offending (possibly malicious) client and stop the trace files from being generated.

Security can't be an afterthought, and any tools or settings a DBMS vendor can provide to aid in stopping malicious traffic are welcome additions to the security arsenal. When those tools or settings generate false positives because the algorithm is, well, 'too simple', then using such settings can be harmful. Care must be taken to monitor how these settings behave in your database, with your network configuration, so that database performance and availability aren't compromised.

