Oracle Optimizer: Moving to and working with CBO: Part 3

Monday Sep 29th 2003 by Amar Kumar Padhi

Part 3 of this series discusses key points that should be considered for moving to CBO, as well as highlighting a number of good maintenance practices.


8. Setup changes for migrating to CBO

This article highlights a number of key points to consider when moving to CBO, as well as a number of good maintenance practices. Tuning in CBO is an ongoing process and proper analysis should be done. You may encounter scenarios specific to your environment that are not mentioned here. Make it a point to refer to the documentation and check with Oracle support about any anomalies.

8.1) Set the Initialization parameters properly

The parameters specified in Part 2 are very critical for your setup. Parameters like OPTIMIZER_MODE, OPTIMIZER_MAX_PERMUTATIONS, OPTIMIZER_INDEX_COST_ADJ, OPTIMIZER_INDEX_CACHING, OPTIMIZER_FEATURES_ENABLE, COMPATIBLE, etc. directly affect the optimizer. Please provide appropriate values based on the type of environment you require. Again, these parameters have been covered in detail in Part 2.

8.2) Set the optimizer mode properly

Set the OPTIMIZER_MODE parameter to FIRST_ROWS(_nnn) for OLTP systems. For batch processing or data warehousing systems, set it to ALL_ROWS.

Using ALL_ROWS in an OLTP system will cause a slight (sometimes negligible) pause before data appears on online screens, as Oracle concentrates on completing the query and processing all rows before returning them. The delay may be worse because full table scans occur more often under ALL_ROWS than under FIRST_ROWS.

You may also consider CHOOSE mode as an intermediate option between RBO and CBO. In fact, CHOOSE tries to run the query in either CBO or RBO, depending on the availability or unavailability of statistics. This is the preferred mode if the system can be migrated in phases. Problems may arise if tables with statistics are being used along with tables without statistics in queries; the Optimizer may sometimes choose bad execution plans in such cases.

If features like partitioning or materialized views are being used, related queries will always resort to CBO mode.

8.3) Provide additional memory

I have found it beneficial to increase the memory allocation parameter sizes by 3-10% to accommodate the additional changes/features and avoid response time issues. Later, you can evaluate the increase or decrease in memory utilization and adjust the parameters accordingly. Parameters DB_BLOCK_BUFFERS, SHARED_POOL_SIZE, SHARED_POOL_RESERVED_SIZE, JAVA_POOL_SIZE and LARGE_POOL_SIZE should be considered.

8.4) SQL and PL/SQL

The optimizer mode parameter applies only to statements that are fired directly, not to statements fired from PL/SQL. Therefore, if you have tested your query from SQL*Plus or any other tool, and then incorporated the same in a PL/SQL block (anonymous or stored routines), the query may or may not run the same! DML statements from PL/SQL run in CHOOSE mode (ALL_ROWS if statistics are present) by default, and OPTIMIZER_MODE set at the session level does not influence them.

If you have enabled CBO in your setup, then PL/SQL statements default to ALL_ROWS. This is logical, as stored procedures return results only after everything is processed. However, I often find queries run better in FIRST_ROWS than in ALL_ROWS, the reason being that indexes look more appealing in FIRST_ROWS.

For OLTP systems, this may result in response time issues.

We can prevent PL/SQL DML statements from running in ALL_ROWS by doing the following.

1. Talk to Oracle support and set the parameter _OPTIMIZER_MODE_FORCE to true. This parameter was introduced to force the optimizer mode set at session level to be used in PL/SQL as well. Therefore, if your session is running in FIRST_ROWS, then the recursive SQLs (or PL/SQL statements) will also be executed in FIRST_ROWS.

You may evaluate the importance of this parameter in a test environment by setting it in the initialization file or at individual session level. As this parameter begins with an underscore, use double quotes to set it.

SQL> alter session set "_optimizer_mode_force" = true;
Session altered.

2. Use hints to direct the optimizer to use a particular mode for individual queries. This is a very powerful option and will become a key inclusion in coding for the CBO environment.

I have come across, and read about, cases where setting _OPTIMIZER_MODE_FORCE to true did not force some specific queries to use the session-level mode. Since this is a hidden parameter, not much is documented about the various scenarios. If you come across such cases, please make use of hints to direct the optimizer.
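As an illustration of the second approach, a query inside a PL/SQL block can carry its own mode hint, so it no longer depends on _OPTIMIZER_MODE_FORCE or the session setting. This is only a sketch; the table and column names below are hypothetical:

```sql
-- Hypothetical example: force FIRST_ROWS for one query inside PL/SQL,
-- overriding the ALL_ROWS default that PL/SQL would otherwise use.
declare
  l_name  customers.name%type;
begin
  select /*+ FIRST_ROWS */ name
  into   l_name
  from   customers
  where  customer_id = 1001;
end;
/
```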

8.5) Use Oracle Resource Manager to control ad hoc queries and third party tools

It is understood that the development team must occasionally fire ad hoc queries to extract data or check on facts and figures in production. It may also be possible that third party tools are being extensively used.

Most of us do not spend time on tuning ad hoc queries, or on checking how resource intensive third party tools are, as they are not part of the main application. This may sometimes result in situations where a particular process affects all other sessions and slows down the server for a noticeable duration. With CBO, the chances of such scenarios may be increased.

You may do the following to avoid problems that are external to your application:

1. Avoid the use of third party tools in the initial stages. This will help the development team to concentrate on problems related to the core application and prevent other tools/utilities from diverting the tuning objectives.

2. Irrespective of whether the above point can be implemented or not, you may start using Oracle Resource Manager. This feature allows you to prioritize various processes and slice resources amongst various processes. For this, a resource plan needs to be created that directs Oracle to share the available resources in the given ratios. For example, I have enabled the following resource plan for a database on a server with 4 CPUs.

GROUP             USERS               DIRECTIVE
HIGH_PRIORITY     Online users        Allow 87% x 4 CPU usage. 
                                      This means 87% resource on all the 4 CPUs.

MEDIUM_PRIORITY   Online Report       Allow 10% x 4 CPU usage.
                  generation          This means 10% resource on all the 4 CPUs.
                  programs, batch
                  jobs, Business
                  Objects tool, 
                  Datastage tool, 
                  data upload and 
                  download interface.

LOW_PRIORITY      Ad hoc online and   Allow 3% x 4 CPU usage.
                  batch reports,      This means 3% resource on all the 4 CPUs. 
                  Team sessions.
                  Maintenance activity.

The above plan states that a process in a low priority group will not affect high priority users as they have a major share of the CPU. Please note that if resources are not used completely in one group, they will be made available to other groups. For example if high priority group users are utilizing only 40% of the CPU, the remaining portion may be allocated to a medium or low priority process that is in need of it.

You may also consider having multiple resource plans, one for online processing, another for batch processing etc. that could be enabled at different times of the day or on different days as per the processing needs.
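A minimal sketch of how the HIGH_PRIORITY directive from the example above might be created with the DBMS_RESOURCE_MANAGER package follows. The plan name DAYTIME_PLAN is an assumption, and the consumer-group-to-user mappings are omitted for brevity:

```sql
-- Sketch: create a resource plan with a CPU directive similar to the
-- example above, using DBMS_RESOURCE_MANAGER.
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_plan(
    plan => 'DAYTIME_PLAN', comment => 'Online processing plan');
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'HIGH_PRIORITY', comment => 'Online users');
  dbms_resource_manager.create_plan_directive(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'HIGH_PRIORITY',
    comment => 'Online users', cpu_p1 => 87);
  -- Directives for MEDIUM_PRIORITY (10%) and LOW_PRIORITY (3%),
  -- plus the mandatory OTHER_GROUPS directive, would go here.
  dbms_resource_manager.submit_pending_area;
end;
/
```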

8.6) Using CBO modules in RBO setup

If your setup is very large, you might consider moving to CBO one module at a time. Some coding will have to be done explicitly for this. For example if there is an independent schema dedicated to interfacing data, statistics could be generated for tables and indexes in this schema. Optimizer mode for sessions connecting to this schema can be set to FIRST_ROWS(_nnn) or ALL_ROWS, at the session level. While the application runs in RBO, one set of modules can be tuned to run in CBO. However, this should not be permanent; the ultimate goal should be to move the complete application to CBO.

Please note that statistics can be generated for all objects. These will be ignored by RBO but will be taken into consideration by CBO sessions.

E.g.: setting the optimizer mode at the session level.
SQL> select value from v$parameter where name = 'optimizer_mode';


SQL> alter session set optimizer_mode = first_rows;

Session altered.

SQL> select value from v$parameter where name = 'optimizer_mode';

VALUE
--------------------
FIRST_ROWS

Oracle also provides a session level parameter OPTIMIZER_GOAL that serves the same purpose as above. Setting OPTIMIZER_GOAL will affect that particular session only and the value set can be viewed from V$PARAMETER as in the above example.

8.7) Generate adequate statistics at proper intervals

Use DBMS_STATS for generating statistics on a periodic basis. You may also categorize objects under various groups depending on their frequency of data change. For example, a daily transaction table will always change, and statistics may be generated more often for it. Master tables change less often and statistics could be generated less frequently.
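For example, a volatile transaction table might be analyzed nightly while a master table is analyzed weekly. The schema and table names below are hypothetical, and the 10 percent sample is only illustrative:

```sql
-- Daily: estimate statistics on a volatile transaction table,
-- including its indexes (cascade => true).
SQL> exec dbms_stats.gather_table_stats(ownname => 'APP', -
> tabname => 'DAILY_TXN', estimate_percent => 10, cascade => true);

-- Weekly: the slowly changing master table.
SQL> exec dbms_stats.gather_table_stats(ownname => 'APP', -
> tabname => 'CUSTOMER_MASTER', estimate_percent => 10, cascade => true);
```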

In the case of ESTIMATE statistics, arrive at an optimal sample size. A sample of 5-10 percent generally gives adequate statistics.

Consider using COMPUTE statistics for indexes and index-organized tables.

The interval at which statistics are generated should not be too frequent. It will depend a lot on the extent of the DML activity carried out. Please note that statistics generation is not incremental: doing it too frequently does not add to the existing statistics, nor does it reduce the computation time. Statistics are always generated from scratch and the existing statistics are overwritten.

If queries on a particular set of tables always require more accurate statistics to behave properly, and otherwise fail to perform, consider using hints to direct the optimizer and avoid dependency on statistics generation.

Whenever a heavy upload of data is done, consider explicit generation of statistics on the concerned tables.

Please refer to sections 9 (Generating Statistics) and 10 (DML Monitoring), to be covered in a subsequent part of this series.

8.8) Statistics for Global Temporary tables

No statistics are collected for Global Temporary Tables; handle these with care. Make use of hints to drive queries on these. Provide explicit statistics by using DBMS_STATS.SET_TABLE_STATS if your temporary table is used for processing huge amounts of data.


SQL> select num_rows, blocks, avg_row_len, temporary, user_stats
  2  from dba_tables
  3  where table_name = 'AM21';

  NUM_ROWS     BLOCKS AVG_ROW_LEN T USE
---------- ---------- ----------- - ---
                                  Y NO

SQL> exec dbms_stats.gather_table_stats('DATASTAGE', 'TMP_GL_VALUE_DIFF');
BEGIN dbms_stats.gather_table_stats('DATASTAGE', 'TMP_GL_VALUE_DIFF'); END; 
ERROR at line 1:
ORA-20000: Gathering statistics for a temporary table is not supported
ORA-06512: at "SYS.DBMS_STATS", line 4481
ORA-06512: at line 1

SQL> exec dbms_stats.set_table_stats(ownname => 'SYS', tabname => 'AM21', -
> numrows => 3000, numblks => 300, avgrlen => 50);

PL/SQL procedure successfully completed.

SQL> select num_rows, blocks, avg_row_len, temporary, user_stats
  2  from dba_tables
  3  where table_name = 'AM21';

  NUM_ROWS     BLOCKS AVG_ROW_LEN T USE
---------- ---------- ----------- - ---
      3000        300          50 Y YES

8.9) Statistics for SYS schema

The general rule is not to generate statistics for the SYS schema if you are on Oracle 8i. In Oracle 9i, you will have to test this in your setup to arrive at a conclusion. The dictionary will need to be analyzed in Oracle 10g, as RBO will be unsupported then.

Please refer to section 13 (Statistics for SYS schema), to be covered in a subsequent part of this series.


8.10) Use the DBMS_APPLICATION_INFO package

This package allows you to register application-specific information in the dynamic performance views. It is very handy for implementing certain logic and for analyzing and tuning; I mention it here mainly for the latter purpose. By using this feature, you can integrate your application with the database and find vital information at any point in time, such as what is presently running in the database, which part of the application a particular query is being executed from, and what the user sessions are doing.

Any time a performance issue arises, look at the application-specific information to identify the problematic area instantly! Application-specific information is set in views such as V$SESSION (the MODULE, ACTION and CLIENT_INFO columns).
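A minimal sketch of registering a module and then locating it follows; the module and action names are hypothetical:

```sql
-- In the application code: register what this session is doing.
SQL> exec dbms_application_info.set_module( -
> module_name => 'AR_INVOICING', action_name => 'POST_BATCH');

-- Later, from a DBA session: find out which part of the
-- application each session is currently executing.
select sid, module, action, client_info
from   v$session
where  module is not null;
```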

8.11) Provide sufficient time for each site to settle down

If you are supporting multiple client installations, my recommendation is to consider migrating each setup on different dates. Each site may have its own unique issues relating to individual setups, and this will give you more time to examine performance issues at each site. Make sure that testing is done on individual site test boxes before moving the production box to CBO.

8.12) Change your scripts!!!

Most DBAs rely on scripts, and these may be outdated. For example, include columns such as LAST_ANALYZED, MONITORING, GLOBAL_STATS and USER_STATS in scripts that look at object information. Modify your tuning scripts to find out in which mode the database, session or particular queries are running.
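For example, a table-information script might be extended along these lines:

```sql
-- Extended object-information script: shows whether and when a table
-- was analyzed, and whether DML monitoring is switched on.
select table_name, num_rows, last_analyzed,
       monitoring, global_stats, user_stats
from   user_tables
order  by last_analyzed;
```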

8.13) Coding habits and Technical Guidelines

This is something you can relate to when moving from RBO to CBO. I had to change my coding habits when I made the move. Oracle says that most RBO code will thrive under CBO and that the changes are not major. However, coding habits will have to be altered for better results on CBO and for easier maintenance of the code.

Moving to CBO opens many new features for developing and designing, something that may not be present in your Technical Documentation Guidelines (if you have one) that developers rely on for standards. Liaise with the complete team to update your conventions.

In RBO, we have a habit of ordering tables right-to-left in queries, the rightmost being the driving table. In CBO, I had to adapt to ordering left-to-right, the leftmost being the driving table; the ORDERED hint used in CBO picks up tables left-to-right for processing. Take your pick.
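For example, with the ORDERED hint the table listed first in the FROM clause drives the join (the tables here are illustrative):

```sql
-- With ORDERED, CBO joins the tables in FROM-clause order:
-- dept (leftmost) is the driving table, emp is joined to it.
select /*+ ORDERED */ d.dname, e.ename
from   dept d, emp e
where  e.deptno = d.deptno
and    d.dname  = 'SALES';
```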

Avoid RBO-style coding techniques. In particular, techniques used to suppress the use of indexes in RBO should be avoided. CBO has advanced features such as function-based and bitmap indexes; control the processing of queries with proper WHERE clauses and hints instead.

For example, do not do the following anymore:

SQL> select * from tmp_cntx
  2  where  seg4 || '' = '1700019'
  3  and    catseg1 = 'WSCNTX';

SQL> select * from tmp_bad_attribute1
  2  where inventory_item_id = 3666
  3  and   organization_id + 0 = 54;

SQL> select * from tmp_bad_attribute1
  2  where inventory_item_id = 3666
  3  and   nvl(organization_id, 0) = 54;
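Under CBO, expressions like the last one above can be served directly instead of being used to defeat an index. As a sketch (the index name is illustrative, and note that function-based indexes require CBO and the appropriate QUERY_REWRITE settings to be in place):

```sql
-- CBO alternative: index the expression itself rather than using it
-- to suppress an existing index.
create index tmp_bad_attr1_org_fbi
on tmp_bad_attribute1 (nvl(organization_id, 0));

-- The same predicate can now use the function-based index.
select * from tmp_bad_attribute1
where  inventory_item_id = 3666
and    nvl(organization_id, 0) = 54;
```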

8.14) Plan stability using stored outlines

You may evaluate using stored outlines for queries that behave badly on CBO, or that show different execution plans on different setups. Using this option tells the optimizer to use the explicitly stored execution path.
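A brief sketch of creating and using a stored outline follows; the outline, category and table names are illustrative:

```sql
-- Store the current execution plan for this query under a category.
create or replace outline ord_by_cust for category app_outlines
on select * from orders where customer_id = :b1;

-- Ask the optimizer to use stored outlines from that category.
alter session set use_stored_outlines = app_outlines;
```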

Please refer to section 12 (Stored Outlines), to be covered in a subsequent part of this series.

8.15) Use Hints

The use of hints will become a favorite practice for developers. Make use of hints in queries to direct the optimizer to consider an alternative path to the one it would otherwise choose. Hints may be required for queries that behave differently on different databases. If proper hints are given to make a query run as desired, CBO will give them preference over the statistics present.

Refer to section 11 (Hints), to be covered in a subsequent part of this series.

8.16) Use bind variables

That's right: no hard coding of values in production system statements whatsoever! Most of us already follow this practice and should continue to do so. It becomes all the more important because the presence or absence of bind variables affects the optimizer's decisions. Preferably, put your values in tables even if it requires one additional statement to retrieve them.

For example, below is existing legacy system code that always uses a predicate with a hard-coded value of 1666. Though this may never change during the lifetime of the code, best practice is to avoid such coding and define the value as a variable.

Original code:

      select sum(qty) qty
      into   rec.qty
      from   jncdm_ra_transactions a, ra_customer_trx_all b, ra_cust_trx_types_all c
      where  calendar_month_id = pi_calendar_month_id
      and    b.customer_trx_id = a.customer_trx_id
      and    b.cust_trx_type_id != 1666
      and    a.inventory_item_id = rec.inventory_item_id
      and    a.warehouse_id = rec.warehouse_id
      and    c.cust_trx_type_id = b.cust_trx_type_id
      and    c.type = 'INV';

Change it to use a variable:

  l_exclude_trx_type_id  number := 1666;
  l_mod_type varchar2(3) := 'INV';
      select sum(qty) qty
      into   rec.qty
      from   jncdm_ra_transactions a, ra_customer_trx_all b, ra_cust_trx_types_all c
      where  calendar_month_id = pi_calendar_month_id
      and    b.customer_trx_id = a.customer_trx_id
      and    b.cust_trx_type_id != l_exclude_trx_type_id
      and    a.inventory_item_id = rec.inventory_item_id
      and    a.warehouse_id = rec.warehouse_id
      and    c.cust_trx_type_id = b.cust_trx_type_id
      and    c.type = l_mod_type;

CBO works best when statements use literals in their predicates, because this allows the optimizer to use column histograms to decide on a proper execution plan. With a bind variable, the optimizer cannot compute what percentage of rows fall below the variable's value. Though it is true that literals give the optimizer more information for choosing an execution plan, it is still recommended to use bind variables so that SQL remains sharable across the system.

8.17) Trace facility from front-end

(This is my own personal experience with Oracle Applications.) Oracle Applications provides an entry in the drop-down menu to enable/disable trace for individual running sessions. It is a key feature for instantly getting details about a session that is running slow. Enabling trace generates a trace file on the server for all transactions done in the screen. This file can then be reviewed by the development team to identify the bad queries. This is better than trying to simulate the same condition in a test, which may not always be feasible under CBO. The next time a user complains about response time, you can request that trace be enabled for that session and then analyze the generated file: instant targeting of the issue! Consider password-protecting the session trace option; this will prevent users from experimenting with it and generating files on the server.

If you are using Oracle Applications, you already have this option; if not, you can design something similar for your setup.

8.18) Provide sufficient sort space

Gathering statistics on tables requires sorting, which takes up sort-area space in memory as well as temporary tablespace. Make sure you have enough temporary space to generate statistics (depending on ESTIMATE or COMPUTE) for the largest table. You may consider increasing the value of SORT_AREA_SIZE to allow more operations to take place in memory and save on I/O.
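For example, the sort area can be raised just for the session that gathers statistics; the value shown is only illustrative, not a recommendation:

```sql
-- Allow larger in-memory sorts while gathering statistics;
-- 10 MB here is an illustrative value for this session only.
alter session set sort_area_size = 10485760;
```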

Gathering statistics on indexes does not require sorting.

8.19) FGAC changes

The execution plan may change if you are using Fine-Grained Access Control (FGAC). FGAC adds predicates to an existing query, which may sometimes result in a change of execution plan. Test your queries with these additional predicates, and make use of hints to direct the optimizer where needed.
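One simple way to test this is to reproduce the policy predicate by hand and compare plans. The table, columns and predicate below are hypothetical:

```sql
-- Suppose a hypothetical FGAC policy appends "org_id = 54".
-- Compare the plan of the query as written ...
explain plan for
select * from orders where order_date > sysdate - 7;

-- ... with the plan after the policy predicate is added.
explain plan for
select * from orders
where  order_date > sysdate - 7
and    org_id = 54;
```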

We will discuss statistics generation and Monitoring in Part-4 of this series.

