Oracle Database 11g Release 2 New Features Summary, Part 2
Synopsis. After what seemed an eternity, Oracle finally released the long-awaited Oracle Database 11g Release 2 (11gR2) in September 2009. This second and final article in the two-part series completes a high-level evaluation of Oracle 11gR2's plethora of new features so that Oracle DBAs can decide whether to upgrade their Oracle 9i, 10g, and 11gR1 databases to Oracle 11g Release 2 in the near term.
In the prior article in this series, I summarized five new features of Oracle Database 11g Release 2 that are sure to assist Oracle DBAs, including:
- Ubiquitous new Grid Infrastructure components
- Intelligent data placement for ASM disks
- ACFS, ASM's new clustered file system
- Zero downtime patching for Oracle Clusterware
- DBMS_SCHEDULER upgrades
Here's my vote for the next five (but no less impressive!) features of this new release, which simplify deployment of application code versions, improve data warehouse performance, increase the efficiency of Recovery Manager (RMAN), extend the disaster recovery and reporting capabilities of Data Guard, and offer extremely efficient segment storage. Please be sure to visit Database Journal's Oracle 11g Central periodically over the next several months as I probe these new features to demonstrate how they can help increase a DBA's productivity and leverage her time effectively.
#6: Data Warehouse Performance Gets Pumped
Oracle 11gR1 added some excellent new features primarily for data warehousing environments, most especially the several new partitioning methods, the ability to limit gathering optimizer statistics to only the affected partitions, and an improved SQL Access Advisor that can even recommend partitioning for large tables. Oracle 11gR2 builds upon these features to improve parallel query performance, increase the performance and efficiency of data warehouse extraction, transformation, and load (ETL) operations, and refresh materialized views with even greater speed:
Instance Caging. Oracle 8i introduced Database Resource Manager (DRM), which offers the ability to limit CPU resource usage for groups of application sessions via resource plan directives. Subsequent database releases significantly improved the range and granularity of DRM limitations, including the capability to limit even I/O throughput for specific resource consumer groups. Perhaps the greatest obstacle to wider acceptance of DRM, however, was its inability to restrict any one database instance from capturing a majority of the CPU resources to the detriment of all other instances running on the same server.
Since it's not uncommon today to encounter a database server whose 16, 32, or even 64 CPUs might be shared across dozens of database instances, this was a serious flaw. Oracle 11gR2 overcomes this limitation with an easy-to-implement feature called instance caging. By simply setting the CPU_COUNT initialization parameter to an appropriate value for one or more database instances, DRM can limit CPU resources across multiple databases on the same server to ensure that no single instance consumes all CPU resources.
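Enabling instance caging really is just two parameter settings per instance, as this minimal sketch shows (the CPU cap of 4 is illustrative, and any active resource plan will do; the built-in DEFAULT_PLAN is used here for simplicity):

```sql
-- Cap this instance at 4 of the server's CPUs
ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH;

-- Instance caging is enforced only while a resource plan is active,
-- so make sure one is set
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH;
```

Repeat with appropriate CPU_COUNT values on each instance sharing the server so that the caps sum to (or deliberately over-subscribe) the physical CPU count.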
Automatic Degree of Parallelism. Data warehousing applications often take advantage of parallelism to process information quickly and effectively, especially when running queries against tables that are extremely large or that are joined in a complex fashion. The degree of parallelism (DOP) that the optimizer should use when performing parallel operations against accessed objects can be specified either within the query itself (via the +PARALLEL optimizer hint) or as a distinct attribute of the table or index itself (via its PARALLEL attribute). However, determining exactly what is an appropriate DOP often requires a detailed understanding of just how the tables are typically joined, which indexes could most benefit from parallelism, and even what type of workload might be executing at the same time as the parallel query.
The good news is that Oracle 11gR2 can now determine the DOP automatically for any parallel statement. The optimizer uses the settings from two new initialization parameters, PARALLEL_DEGREE_POLICY and PARALLEL_MIN_TIME_THRESHOLD, to calculate the automatic degree of parallelism (ADOP). For example, if PARALLEL_DEGREE_POLICY is set to a value of AUTO, the 11gR2 optimizer will first determine whether the query could indeed benefit from parallel operations and what would be an appropriate DOP value. Oracle 11gR2 will next ascertain whether the query's estimated execution time is likely to exceed the acceptable value (in seconds) of PARALLEL_MIN_TIME_THRESHOLD; if sufficient resources for parallel execution exist right now, it will allow the query to execute, and otherwise it will delay execution until sufficient resources exist. This helps prevent a single parallel query from consuming excessive resources (for example, all available parallel execution servers, or all CPUs in a clustered environment) at the cost of other non-parallelizable operations. It's important to note that ADOP features aren't extended to parallel recovery or parallel replication; they only apply to parallel query execution.
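Turning on automatic DOP is a matter of setting the two parameters described above; a minimal sketch (the 10-second threshold is an illustrative value, not a recommendation):

```sql
-- Let the optimizer choose the degree of parallelism and queue
-- statements when parallel resources are exhausted
ALTER SYSTEM SET parallel_degree_policy = AUTO SCOPE=BOTH;

-- Only statements estimated to run longer than 10 seconds are
-- considered for parallel execution
ALTER SYSTEM SET parallel_min_time_threshold = 10 SCOPE=BOTH;
```

With PARALLEL_DEGREE_POLICY left at its default of MANUAL, parallelism behaves exactly as in prior releases.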
Parallel Data Cache: In-Memory Parallel Execution. Many Oracle database shops have discovered that a Real Application Clusters (RAC) database not only provides high availability for OLTP applications, but also provides outstanding performance for data warehousing applications because it can parallelize queries across multiple instances on different nodes in the RAC cluster. Oracle 11gR2 takes even better advantage of this architecture because it can intelligently cache the buffers used for parallelized queries within a RAC clustered database's buffer caches.
If the new PARALLEL_DEGREE_POLICY initialization parameter is set to AUTO and a parallelized query is executed, then Oracle 11gR2 develops a strategy to map all of the data pieces (i.e. data or index blocks) necessary for its efficient execution into the database buffer caches of multiple RAC database instances. If one RAC instance needs to read one of those buffers to complete its task during the parallelized query's execution, it will automatically use this mapping to determine which remote instance already holds that buffer. This can have a tremendous positive impact on physical I/O because it leverages the entire database buffer cache of all instances that are assigned to service the parallel query's execution.
Performing Parallel DML In Chunks. I haven't seen an IT shop yet that doesn't have at least one requirement to perform DML against large amounts of data in bulk. A simple example would be a bulk invoicing module that needs to scan through millions of customer payments received from multiple transaction sources (e.g. check, credit card, or EFT) and then apply the resulting receipts against tens of thousands of outstanding invoices to update each invoice's balance as well as the total balance due for the customer.
The new DBMS_PARALLEL_EXECUTE package offers an intriguing set of features that would be most useful in this situation because it allows a large DML statement like the one described above to be executed in parallel so that it processes data in several large chunks. The chunks can be defined based on simple selection criteria against the source table's ROWIDs, primary key values, or virtually any application-defined ruleset. Once these chunking rules are defined, DBMS_SCHEDULER is used to process each data chunk and commit the changed data. If a particular chunk should fail during its processing, it's even possible to reschedule the chunk for reprocessing. Finally, Oracle 11gR2 provides several new parallel task metadata views (DBA_PARALLEL_EXECUTE_*) that track the boundaries that DBMS_PARALLEL_EXECUTE has defined for the chunks as well as the progression of each chunk's processing.
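The whole workflow takes only a few calls. Here's a minimal sketch of ROWID-based chunking; the task name, schema, table, and UPDATE statement are all illustrative:

```sql
BEGIN
  -- 1. Register a named parallel task
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'APPLY_RECEIPTS');

  -- 2. Carve the source table into ROWID-range chunks of roughly
  --    10,000 rows apiece
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'APPLY_RECEIPTS',
    table_owner => 'AR',
    table_name  => 'CUSTOMER_PAYMENTS',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- 3. Run the DML once per chunk via DBMS_SCHEDULER jobs; the
  --    :start_id and :end_id placeholders are bound to each chunk's
  --    ROWID boundaries, and each chunk commits independently
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'APPLY_RECEIPTS',
    sql_stmt       => 'UPDATE ar.customer_payments
                          SET processed_flag = ''Y''
                        WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);
END;
/
```

Progress, including any failed chunks that are candidates for RESUME_TASK, can then be monitored through the DBA_PARALLEL_EXECUTE_CHUNKS view.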
Faster Refreshes for Materialized Views. Data warehousing applications tend to leverage materialized views (MVs) heavily, especially when rewriting a query so that it can gather the necessary information from the MV's much smaller dataset rather than by querying the base tables directly. When the underlying base tables for an MV change frequently, materialized view logs are used to populate just the changed data to the MV. Oracle 11gR2 now offers the ability to purge the MV logs outside of the refresh process, and that means improved performance, especially for any MV that contains aggregates, joins, or both. Also, the new WITH COMMIT SCN directive tells Oracle 11gR2 to use commit SCN-based materialized view logs for faster refresh times.
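Requesting the new commit SCN-based logs is a one-clause change when creating the materialized view log; a minimal sketch (the SALES table is illustrative):

```sql
-- Track changes to the base table using commit SCNs rather than
-- timestamps, which speeds up fast refresh processing (new in 11gR2)
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, COMMIT SCN;
```

Any fast-refreshable MV built on SALES can then take advantage of the commit SCN-based log automatically at refresh time.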
Preprocessors for ORACLE_LOADER. Loading data into data warehouses from legacy systems can be an expensive operation because of the system resources required for loading data into the database via the SQL*Loader bulk loading utility, the cost of the tier 1 storage for the data warehouse's tablespaces, and even the cost of storing the legacy systems' flat files.
It's not unusual for these legacy files to be stored in a compressed format on lower cost storage; however, before SQL*Loader could load the legacy data into the warehouse, these files first had to be uncompressed, and that meant additional storage to hold the uncompressed data. Unfortunately, this restriction also applied to any external table that used the ORACLE_LOADER access method to retrieve data directly from legacy files. Many Oracle shops have leveraged external tables to quickly construct reports against legacy files, or even used an INSERT INTO SELECT FROM SQL statement to first aggregate or otherwise massage legacy data and then load it directly into data warehousing tables in the database.
All this changes in Oracle 11gR2 because the ORACLE_LOADER access method has been enhanced so that it can preprocess data files for both SQL*Loader operations and external tables. The new PREPROCESSOR clause specifies an existing DIRECTORY object and an executable program within that directory that will preprocess the legacy file's data (for example, uncompressing it with a Linux decompression utility), and then the ORACLE_LOADER access method reads the processed data from the standard output stream (stdout) as if it were reading directly from the legacy file itself.
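Here's a minimal sketch of an external table that reads a gzip-compressed flat file through zcat (which writes its decompressed output to stdout, as the PREPROCESSOR clause requires); all directory paths, file names, and columns are illustrative:

```sql
CREATE OR REPLACE DIRECTORY data_dir AS '/u01/legacy';
CREATE OR REPLACE DIRECTORY exec_dir AS '/bin';

CREATE TABLE legacy_orders_ext (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    -- Decompress on the fly; the program must emit rows on stdout
    PREPROCESSOR exec_dir:'zcat'
    FIELDS TERMINATED BY ','
    (order_id, order_date DATE 'YYYY-MM-DD', amount)
  )
  LOCATION ('orders.dat.gz')
);
```

The compressed file never has to be expanded on disk, which is exactly the storage savings the feature targets.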
#7: Data Guard: Disaster Recovery On Steroids
I've demonstrated the resiliency and flexibility of the Oracle 11gR1 Data Guard disaster recovery methodology in my Oracle Database 11g: Data Guard series. Here's my vote for some of the most welcome augmented Data Guard features in Oracle 11gR2:
Manageable Standby Databases Increased. In anticipation of the expansion of Data Guard platforms for intense data warehousing operations, Oracle 11gR2 has increased the number of manageable standby databases from only nine to a total of thirty (30) in any combination (physical standby, logical standby, or snapshot standby).
Integrated Application Failover. Earlier releases of Data Guard provided for relatively quick failover of application sessions during switchover and failover operations using Transparent Application Failover (TAF), but Oracle 11gR2 now offers the ability to leverage Fast Application Notification (FAN) features for even faster application failover.
Apply Lag Limits for Real Time Query. As part of the separately-licensed Active Data Guard option, Oracle 11gR1 offered Real Time Apply (the ability to apply redo to a physical standby database while it was still open for read-only reporting) in concert with Real Time Query (the ability to query a read-only standby database while redo was still being applied). This provided an alternative to using basic replication for read-only data warehouse reporting, with one important exception: there was no way to respond to any significant delays in the application of redo information on the physical standby database. Oracle 11gR2 provides a new session-level parameter, STANDBY_MAX_DATA_DELAY, that specifies an acceptable lag time (in seconds) for stale data in this situation. Should lagging redo application cause data to become unacceptably stale, Oracle 11gR2 can simply refuse to execute that query and instead raise an exception (ORA-03172).
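Because it's a session-level setting, a reporting application can declare its own staleness tolerance on the standby; a minimal sketch (the 30-second limit is illustrative):

```sql
-- On the Active Data Guard standby: any query in this session whose
-- data lags the primary by more than 30 seconds fails with ORA-03172
-- instead of silently returning stale results
ALTER SESSION SET STANDBY_MAX_DATA_DELAY = 30;
```

A value of 0 demands fully current data, while the default of NONE restores the prior behavior of returning whatever has been applied so far.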
Support for Compressed Tables and SecureFiles. Oracle 11gR2 now fully supports the application of redo to logical standby databases with SQL Apply for primary database tables that use either basic (DSS) or advanced (OLTP) table compression because the LogMiner utility also now supports translation of redo entries for any tables using these data compression features. In addition, LogMiner and SQL Apply now support application of redo on logical standby databases for SecureFile LOBs.
#8: RMAN Automation and Simplification
Oracle 11gR2 extends the already-robust suite of backup and recovery tools that comprise Recovery Manager (RMAN) with some notable new enhancements, many of which are obvious extensions of earlier features:
Automatic Block Recovery. In my humble opinion, Oracle 10gR1's introduction of block-level media recovery (BMR) should have been enough to convince even the most stalwart advocate of user-managed recovery to at least consider using RMAN instead of the antiquated ALTER TABLESPACE BEGIN BACKUP method to back up her database. BMR meant it was no longer necessary to restore and recover an entire datafile when only a few blocks needed to be recovered. Oracle 11gR2 has now automated BMR so that if one or more corrupted blocks within a datafile are detected, RMAN will automatically restore and recover those blocks without the need for DBA intervention. And if Data Guard's Real Time Query mode is enabled, Oracle 11gR2 can check the corresponding physical standby for a more recent, non-corrupted version of the block and transmit that block to the primary database.
Flexible SET NEWNAME Directives. If you've ever had to key in a long list of several dozen datafile names while restoring a database to a different platform or file system using RMAN, DUPLICATE DATABASE, or tablespace point-in-time recovery, you know how welcome these new features are. Oracle 11gR2 now accepts either a tablespace-level or database-level FORMAT specification for datafile names. In addition, it's now possible to use either a unique identifier (%U) or the base file name (%b) for easier specification of new datafile names.
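A single database-level directive now replaces dozens of per-file SET NEWNAME commands; a minimal RMAN sketch (the destination path is illustrative):

```
RUN {
  # One directive renames every datafile, preserving each file's
  # base name (%b) under the new path (new in 11gR2)
  SET NEWNAME FOR DATABASE TO '/u02/oradata/clone/%b';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;
  RECOVER DATABASE;
}
```

SET NEWNAME FOR TABLESPACE works the same way when only a subset of files needs relocating.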
Targetless Database Duplication. Cloning an Oracle database in prior database releases required a connection to a target database, but that wasn't always the most convenient option because the target might not be available for duplication at the appropriate time. So Oracle 11gR2 now offers the ability to perform a targetless duplication of any selected database via the DUPLICATE DATABASE command. While it's still necessary to make an AUXILIARY connection to the eventually-duplicated database, of course, a CATALOG connection to a recovery catalog database is now required as well. Targetless duplication also requires the specification of the NOREDO and UNDO TABLESPACE directives because there's no way for the duplication operation to detect whether the source database is in ARCHIVELOG mode or which of the source database's tablespaces are UNDO tablespaces.
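Put together, a targetless duplication session might look like this sketch; every connect string, database name, and tablespace name here is illustrative:

```
# Connect to the auxiliary instance and the recovery catalog only --
# no TARGET connection to the source database is needed
rman AUXILIARY sys@clonedb CATALOG rcatowner@rcatdb

DUPLICATE DATABASE prod TO clonedb
  NOREDO                        # source ARCHIVELOG state is unknowable
  UNDO TABLESPACE 'UNDOTBS1';   # source undo tablespaces are unknowable
```

The backups of the source database must, of course, be accessible to the auxiliary instance for the restore phase to succeed.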
Enhanced TSPITR. Oracle 11gR2 has removed some glaring limitations from Tablespace Point-In-Time Recovery (TSPITR), an extremely useful tool for recovering a tablespace set to a point in time earlier than the database's current SCN. TSPITR can now be performed multiple times against the same tablespace set, regardless of whether a recovery catalog is available. Also, TSPITR can now be used to recover a tablespace that's been dropped.
Backing Up to the Cloud. It's obvious that cloud computing is the approaching sea change in our industry. Oracle 11gR2 now provides the ability to back up an Oracle database to Amazon Simple Storage Service (S3) through its Oracle Secure Backup media management layer software.
#9: Edition-Based Redefinition
I started my IT career as an applications developer, so until I became an Oracle DBA I never really knew how much grief my systems administrator (and later, database administrator) had to go through to make sure that my latest application code changes were deployed at just the right time to limit disruption to the application user community. And it wasn't until the first time I had to reverse a complete application release at the database level (i.e. all related procedures, functions, packages, types, and triggers) that I really appreciated how difficult that task can be unless my team's application developers had kept meticulous track of exactly which version each object needed to be rolled back to so that the application's original functionality could be restored.
Oracle Database 11gR2 adds a powerful new tool to any Oracle DBA's application deployment tool belt: the ability to post the next release of an application's database objects to a logical construct called an edition. Not all database objects are editionable, but private synonyms, views, and almost all PL/SQL objects (including procedures, functions, types, type bodies, packages, package bodies, and triggers) certainly are. The real beauty of editioning is that it makes it simpler than ever to deploy application code changes to a production database just by advancing the database to the appropriate next edition; and if the deployment should encounter a serious bug or other failure, it's just as easy to revert to the prior edition to roll back the changes to all affected objects.
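The basic editioning workflow fits in a handful of statements; a minimal sketch (the user, edition, and procedure names are illustrative, and the procedure body is a placeholder):

```sql
-- One-time setup: allow this schema's objects to be editioned
ALTER USER appowner ENABLE EDITIONS;

-- Create the next release's edition as a child of the default edition
CREATE EDITION release_2 AS CHILD OF ora$base;

-- Deploy the new code version while working inside the new edition;
-- sessions still using ora$base continue to see the old version
ALTER SESSION SET EDITION = release_2;
CREATE OR REPLACE PROCEDURE appowner.calc_invoice_total AS
BEGIN
  NULL;  -- new implementation would go here
END;
/

-- Once verified, advance new sessions to the new edition
ALTER DATABASE DEFAULT EDITION = release_2;
```

Rolling back is equally simple: point the default edition back at the prior edition, and every editioned object reverts at once.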
(It's obviously impossible to cover editioning in just a few paragraphs, so I promise to delve much more deeply into these and the myriad other features of editioning in an upcoming article.)
Finally, here are a few new Oracle 11gR2 features that defy an easy classification:
DDL Restrictions Lifted on Flashback Data Archive. I delved into Oracle 11gR1's new Flashback Data Archive (FBDA) features in a prior article. FBDA, also known as Total Recall, offers the ability to capture only the deltas of modified data for selected tables and retain those delta vectors within a special set of objects that comprise an FBDA. When a user queries deep into the past history of that table via, say, a Flashback Versions query, Oracle will return the most recently changed data directly from the database's UNDO tablespace, but will use the FBDA to return older versions of that data.
As powerful as this feature was, it placed some rather severe strictures on what types of DDL commands could be issued against a table whose data was being tracked in an FBDA, including restrictions against adding, modifying, renaming, or dropping the table's columns, truncating the table, modifying the table's constraints, and (especially frustrating!) modifying a partitioned table's partitioning specifications. In Oracle 11gR2, however, almost all of these restrictions against straightforward DDL statements have been removed. For more complex DDL operations (for example, using the DBMS_REDEFINITION package to redefine a base table that's already tracked within an FBDA), Oracle 11gR2 provides the new DBMS_FLASHBACK_ARCHIVE package. Procedure DISASSOCIATE_FBA will disassociate the base table from the FBDA; then, once all desired changes are complete, procedure REASSOCIATE_FBA is used to re-associate the modified table once again with its archive.
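The disassociate/reassociate bracket looks like this minimal sketch; the schema and table names are illustrative:

```sql
-- Temporarily detach the tracked table before complex DDL
BEGIN
  DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA('AP', 'INVOICES');
END;
/

-- ...perform the complex redefinition here, e.g. via DBMS_REDEFINITION...

-- Re-attach the modified table to its Flashback Data Archive
BEGIN
  DBMS_FLASHBACK_ARCHIVE.REASSOCIATE_FBA('AP', 'INVOICES');
END;
/
```

While the table is disassociated, its history is not being captured, so the window between the two calls should be kept as short as practical.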
On-demand Segment Creation. In prior releases, whenever a table was created with the CREATE TABLE statement, the initial segment for the table was automatically created at the same time. Starting in Oracle 11gR2, this default behavior is changed: the segment is not created until a row has been inserted into the table. In addition, any indexes or LOB segments that depend on the table are not created until row insertion occurs. The SEGMENT CREATION DEFERRED storage attribute for the table specifies this default behavior; however, it can be overridden by specifying SEGMENT CREATION IMMEDIATE instead.
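The new storage attribute appears directly in the CREATE TABLE statement; a minimal sketch with an illustrative table:

```sql
-- No segment is allocated at creation time (the 11gR2 default);
-- SEGMENT CREATION IMMEDIATE would restore the pre-11gR2 behavior
CREATE TABLE promo_codes (
  code VARCHAR2(12),
  pct  NUMBER(3)
) SEGMENT CREATION DEFERRED;
```

Until the first row is inserted, the table (and any dependent index or LOB segments) consumes no space and shows no entry in DBA_SEGMENTS.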
Zero Sized Unusable Indexes. When it's time to reload any large table (say, a data warehouse fact table of several million rows), one neat trick to speed the table's loading is simply to mark any indexes on the table unusable, and then rebuild those indexes after data loading is complete. Oracle 11gR2 acknowledges and augments this technique by automatically dropping the index segment when an index is marked unusable.
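The bulk-load pattern itself is unchanged; what's new is that the first statement below now also frees the index's storage. A minimal sketch (the index and table names are illustrative):

```sql
-- In 11gR2 this not only disables index maintenance during the load,
-- it also drops the index segment, reclaiming its space immediately
ALTER INDEX sales_fact_ix UNUSABLE;

-- ...perform the bulk load into the fact table here...

-- Rebuild allocates a fresh segment and makes the index usable again
ALTER INDEX sales_fact_ix REBUILD;
```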
Oracle Database 11g Release 2 continues the massive paradigm shift that started in Oracle Database 10g toward self-managed, self-tuning, and self-healing databases. This new release offers a plethora of features (some long-overdue, some simply revolutionary!) that any Oracle DBA can use as a force multiplier to improve her efficiency and effectiveness as a true information engineer.
Before you proceed to experiment with any of these new features, I strongly suggest that you first look over the corresponding detailed Oracle documentation. I've drawn upon the following Oracle Database 11g Release 2 documents for this article's technical details:
E10471-04 Oracle Database 11gR2 Advanced Application Developer's Guide
E10500-02 Oracle Database 11gR2 Storage Administrator's Guide
E10592-03 Oracle Database 11gR2 SQL Language Reference
E10595-05 Oracle Database 11gR2 Administrator's Guide
E10700-01 Oracle Database 11gR2 Data Guard Concepts and Administration
E10713-03 Oracle Database 11gR2 Concepts
E10820-02 Oracle Database 11gR2 Reference
E10837-02 Oracle Database 11gR2 VLDB and Partitioning Guide
E10881-02 Oracle Database 11gR2 New Features