Keeping track of all the different memory areas in Oracle taxes my memory. With Oracle 10g, there can be fewer parameters to keep track of, provided you remember which parameters drive or affect the others. It is very common to see numerous reporting scripts in a production environment, and chances are that most of those reports include a "set some_type_of_memory_area" statement. Given that you're using 10g, are those set statements still necessary? The answer is "it depends," but probably not, depending on how your instance is configured.
Two Typical Set Statements
Two common settings (set via an alter session statement) are the HASH_AREA_SIZE and SORT_AREA_SIZE memory parameters. How and when are these memory areas called upon and needed? Let's start with basic definitions from the Reference Guide and look at some use cases.
Oracle's documentation states that HASH_AREA_SIZE is relevant to parallel execution operations and to the query portion of DML or DDL statements. It specifies the maximum amount of memory, in bytes, to be used for hash joins. This definition leads to a useful review question: what is a hash join? The Performance Tuning Guide states the following.
Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows. This method is best used when the smaller table fits in available memory. The cost is then limited to a single read pass over the data for the two tables.
Example 13-7 from the guide uses the OE schema and a join between the ORDERS and ORDER_ITEMS tables. The ORDERS table, with 105 rows, is used as the hash table.
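The documented query is, give or take the select list, along these lines (a sketch based on the OE sample schema):

SELECT o.customer_id, l.unit_price * l.quantity
  FROM orders o, order_items l
 WHERE l.order_id = o.order_id;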
Can we recreate this in a default installation/instance? With the PLAN_TABLE created via the UTLXPLAN script (and with the PLUSTRACE role created and granted to OE), the output is slightly different.
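If you want to follow along, the setup is roughly this (plustrce.sql must be run as SYS, while utlxplan.sql creates PLAN_TABLE in whatever schema runs it; the "?" is SQL*Plus shorthand for ORACLE_HOME):

SQL> CONNECT / AS SYSDBA
SQL> @?/sqlplus/admin/plustrce.sql
SQL> GRANT plustrace TO oe;
SQL> CONNECT oe
SQL> @?/rdbms/admin/utlxplan.sql
SQL> SET AUTOTRACE TRACEONLY EXPLAIN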
In my out-of-the-box instance, why did the optimizer use nested loops instead of a hash join like the documentation shows? Maybe because I didn't set the HASH_JOIN_ENABLED parameter to true? Failure to set that parameter is not the reason, as HASH_JOIN_ENABLED does not exist in 10g. So, let's force the optimizer to use a hash join via the USE_HASH hint. Since the tables are aliased in the statement, don't forget to use the aliases (not the table names) in the hint.
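A sketch of the hinted statement, using the aliases from the documented query:

SELECT /*+ USE_HASH(o l) */ o.customer_id, l.unit_price * l.quantity
  FROM orders o, order_items l
 WHERE l.order_id = o.order_id;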
At least now the live instance matches the documentation (with the exception of Cost). As for the reporting scripts, someone may have examined the tables involved and taken a (completely valid) "big table joined with a little table means use a hash join" approach.
There are (or were, anyway) plenty of reasons to specify a sort area size for involved or complex select statements. Four reasons are stated in the 10g Reference Guide (a session-level example follows the list).
Increasing SORT_AREA_SIZE improves the efficiency of large sorts.
Each sort in a query can consume memory up to the amount specified by SORT_AREA_SIZE, and there can be multiple sorts in a query. Also, if a query is executed in parallel, each PQ slave can consume memory up to the amount specified by SORT_AREA_SIZE for each sort it does.
SORT_AREA_SIZE is also used for inserts and updates to bitmap indexes. Setting this value appropriately results in a bitmap segment being updated only once for each DML operation, even if more than one row in that segment changes.
Larger values of SORT_AREA_SIZE permit more sorts to be performed in memory. If more space is required to complete the sort than will fit into the memory provided, then temporary segments on disk are used to hold the intermediate sort runs.
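For reference, the kind of manual, per-session sizing those reasons used to justify looks like this (a sketch; the sizes are illustrative, and in 10g the manual settings only take effect when the work area policy is MANUAL):

ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 10485760;   -- 10 MB per sort, illustrative
ALTER SESSION SET hash_area_size = 20971520;   -- 20 MB; if unset, defaults to 2x SORT_AREA_SIZE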
Oracle9i documentation states the following (in two sections, but combined below):
For best performance in OLTP systems, most sorts should occur solely within memory. Sorts written to disk can adversely affect performance. If your OLTP application frequently performs sorts that do not fit into sort area size, and if the application has been tuned to avoid unnecessary sorting, then consider increasing the SORT_AREA_SIZE parameter for the whole instance.
SORT_AREA_SIZE is a dynamically modifiable initialization parameter that specifies the maximum amount of memory to use for each sort. If a significant number of sorts require disk I/O to temporary segments, then your application's performance might benefit from increasing the value of SORT_AREA_SIZE. Alternatively in a DSS environment, increasing SORT_AREA_SIZE is not likely to make the sort a memory-only sort; however, depending on the current value and the new value chosen, it could make the sort faster.
Altering this setting is, apparently, a prudent step to take under certain circumstances. Knowing the value to set it to is, of course, the hardest part to determine.
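One quick way to see whether sorts are spilling to disk on your instance (assuming you can query V$SYSSTAT):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('sorts (memory)', 'sorts (disk)');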
Using bitmap indexes
Perhaps a clever developer or DBA decided to take advantage of what bitmap indexes can offer. In that regard, another area_size parameter you may encounter is BITMAP_MERGE_AREA_SIZE. The Reference Guide describes this parameter as follows.
BITMAP_MERGE_AREA_SIZE is relevant only for systems containing bitmap indexes. It specifies the amount of memory Oracle uses to merge bitmaps retrieved from a range scan of the index. The default value is 1 MB. A larger value usually improves performance, because the bitmap segments must be sorted before being merged into a single bitmap.
Clearly, this is a nifty trick to keep up your sleeve, but it comes with the same attendant problem: knowing what value to set the parameter to.
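Note that BITMAP_MERGE_AREA_SIZE is a static parameter, so a change goes into the server parameter file and takes effect at the next startup (a sketch; the 2 MB value is illustrative):

ALTER SYSTEM SET bitmap_merge_area_size = 2097152 SCOPE=SPFILE;  -- 2 MB, illustrative
SHUTDOWN IMMEDIATE
STARTUP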
Putting the parameters to use
The easiest way, and in the future the only way, to use all of these parameters is the PGA_AGGREGATE_TARGET parameter. In Oracle 10g, this one parameter obviates the need to set any of the area_size parameters. The bitmap merge area was retained for backward compatibility, which is Oracle-speak for being slated for deprecation and the parameter/feature boneyard. Hash and sort areas are still supported, but in both cases (and in multiple places), Oracle recommends not explicitly setting or using these parameters.
As with the other parameters, the nagging question is what the right value is. With this particular parameter, you get a default value based on the size of the SGA. The Reference Guide states:
PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance.
Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of automatically setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL working areas used by memory-intensive SQL operators (such as sort, group-by, hash-join, bitmap merge, and bitmap create) will be automatically sized. A nonzero value for this parameter is the default since, unless you specify otherwise, Oracle sets it to 20% of the SGA or 10 MB, whichever is greater.
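Checking and adjusting the target is straightforward, since the parameter is dynamically modifiable (the 200M figure below is illustrative, not a recommendation):

SHOW PARAMETER pga_aggregate_target
ALTER SYSTEM SET pga_aggregate_target = 200M;
SELECT name, value FROM v$pgastat WHERE name = 'total PGA allocated';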
With more sophisticated internal memory management, Oracle is making it easier for you to concentrate on other issues. Knowing what these older, legacy, not-recommended-for-use parameters do is useful, but with Oracle 10g, actually driving them via alter session statements has pretty much gone by the wayside. Chances are each hash and sort value set in your report scripts was a shot in the dark to begin with. Let Oracle manage these settings for you via the PGA_AGGREGATE_TARGET parameter, and clean up your report scripts by removing unnecessary session memory settings, as shown below.
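Concretely, the cleanup means lines like these come out of the top of each report script; under WORKAREA_SIZE_POLICY = AUTO they no longer have any effect anyway (hypothetical values):

ALTER SESSION SET sort_area_size = 10485760;   -- delete: superseded by PGA_AGGREGATE_TARGET
ALTER SESSION SET hash_area_size = 20971520;   -- delete: superseded by PGA_AGGREGATE_TARGET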