Oracle 11gR2 I/O Performance Tuning: Using Solid State Drives

Friday Dec 2nd 2011 by Jim Czuprynski

Solid-state drives (SSDs) can make an incredible difference when applied judiciously to an apparently intractable I/O performance bottleneck.

Synopsis. Oracle 11gR2 introduced significant features that have made it easier than ever to isolate and identify potential root causes of poor I/O performance. This article – the eighth and last in this series – explores how solid-state devices (SSDs) can make an incredible difference when applied judiciously to an apparently intractable I/O performance bottleneck.

 

The prior article in this series explored how the methodologies I’d discussed throughout this series could be applied to:

  • Improve I/O response time via intra-database solutions
  • Improve I/O response time through enhancements to the I/O subsystem

And even though I was able to bring most of those methodologies to bear against an I/O performance issue, I’m pretty certain there are other ways to improve the application workload’s I/O performance than simply migrating the datafiles containing the hottest segments to ASM storage. So in this final article in the series, I’ll conclude my I/O performance tuning investigations by exploring how solid-state devices (SSDs) can be leveraged effectively to overcome I/O performance issues … and, just as importantly, where they may make absolutely no difference at all.

Solid State Device (SSD) Architecture: It's All About the Cells

Solid state devices, colloquially known as SSDs or solid state drives, have been around for some time now, but with the advent of the Oracle Database Machine X2-2 and its corresponding Exadata Storage Server, interest in utilizing SSDs has increased dramatically. In truth, the advantages of SSDs are not that difficult to grasp, because the IT community has actually been using them for at least a decade in a much smaller format: USB ("thumb") drives incorporate the same technology as any SSD, albeit on a much smaller scale.

SSDs are essentially composed of NAND "flash memory." Note, however, that this is most definitely not the same type of memory that’s used for cache memory in SAN platforms or for DRAM in database servers. SSDs are manufactured in two basic architectures: single-level cell (SLC) and multiple-level cell (MLC). In Table 8-1 below I’ve compared and contrasted some of the common characteristics of SLC and MLC SSDs:

 

Table 8-1. Single-Level Cell (SLC) vs. Multiple-Level Cell (MLC) Solid State Storage Devices

Type       | Single-Level Cell (SLC)                                   | Multiple-Level Cell (MLC)
Size       | Lower in density, so SLCs trend toward smaller sizes (< 300 GB), though they have slowly been getting larger | Higher in density, so MLCs typically trend toward larger sizes (300 GB and larger)
Speed      | Less dense, so typically faster                           | More dense, so typically slower
Life Cycle | Lower density means a longer life                         | Higher density means a shorter life
Best Use   | Extremely short retention periods; perfect for smaller I/Os | Longer data retention periods; larger I/Os (e.g. code tables, indexes)

 

In recent years most manufacturers of solid-state devices appear to be trending toward the SLC architecture, mainly because of its higher speed and longer life expectancy. For my testing, I used a relatively inexpensive 30 GB SSD manufactured by OCZ that I’d purchased about 18 months earlier for use in one of my older laptops. This SSD is rated by the manufacturer to last 2,000,000 hours between failures, which compares favorably to most hard disk storage devices, generally manufacturer-rated at somewhere near 1,000,000 hours mean time between failures (MTBF).

Table 8-2 below compares the sizes and speeds of the storage devices I’ve used throughout my experiments in this article series.

 

Table 8-2. Storage Devices Used For Testing and Demonstration

Manufacturer        | Model      | Capacity (GB) | Rotational Speed | Cache Size (MB)
OCZ Flash Drive     | Vertex     | 30            | N/A              | 64
Samsung HDD         | HD753LJ    | 750           | 7200             | 32
Western Digital HDD | 800JD-00JN | 80            | 7200             | 8

 

To SSD or Not to SSD: Is This a Question?

Most Oracle DBAs would not dispute the fact that SSDs are "the next new thing" in terms of faster, energy-efficient, and reliable storage. But as wondrous as they are for increasing I/O performance, they do still have some drawbacks that must be considered, especially from the viewpoint of database I/O processing.

Reading from SSDs is wicked fast … but writing to SSDs is significantly slower. SSD I/O rates, especially for random reads, are significantly higher – usually at least a full order of magnitude – than those of traditional hard disk devices (HDDs), but write rates lag well behind read rates. There’s really no way to overcome this disadvantage because it’s simply inherent in the way NAND flash memory operates: reads from flash memory can occur one byte at a time, but writes to flash memory require that any existing data first be erased and then rewritten. Since NAND flash memory architecture requires that this erase-and-rewrite cycle happen at the individual block level, writes generally take significantly longer than reads, and sequential writes take much longer than random writes.

SSDs do wear out. Eventually, all hardware fails, and while SLC SSDs have a significantly higher MTBF than HDDs, individual cells do fail over time. This too is inherent in NAND flash memory architecture, a direct result of the program-erase (PE) cycle I mentioned above. SLC cells can usually withstand over one million such PE cycles before failure occurs. The good news is that several sophisticated methods for bad block management (BBM) are already built into flash memory to ensure that bad cells are bypassed automatically, so the failure of an entire SSD device is extremely unlikely.
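To put PE-cycle endurance in perspective, a rough back-of-the-envelope lifetime estimate can be computed from cell endurance, drive capacity, and write volume. Every input below is an illustrative assumption, not a manufacturer specification:

```python
# Rough SSD write-endurance estimate; all inputs are assumptions.
capacity_gb = 30             # drive capacity, as with the OCZ Vertex used here
pe_cycles = 1_000_000        # approximate SLC program-erase cycles per cell
writes_gb_per_day = 50       # assumed sustained daily write volume
write_amplification = 2.0    # assumed overhead from erase-block rewrites

# Total data the drive can absorb before cells wear out, in GB.
total_writable_gb = capacity_gb * pe_cycles / write_amplification

years_of_service = total_writable_gb / writes_gb_per_day / 365.25
print(f"Estimated endurance: {years_of_service:,.0f} years")
```

Even with these deliberately conservative assumptions, SLC endurance comfortably exceeds any realistic service life, which is why whole-device failure from cell wear alone is so unlikely.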

SSDs are still significantly more expensive than traditional hard disk devices (HDDs). One of my most recent purchases of consumer-market HDDs brought home to me how cheap disk storage has become. The per-gigabyte price of the HDDs I purchased had already fallen below $0.10, and it continues to fall as larger-capacity HDDs are manufactured in greater quantities. SSDs, on the other hand, are still roughly 10 to 15 times more expensive on a per-gigabyte basis, but this price continues to drop as the major manufacturers compete in an IT environment that’s driven more by hardware prices than at any time in our industry’s history.

The key concern of most Oracle DBAs, therefore, is to focus on where the application of SSD and flash memory technology will generate the "biggest bang for the buck" in terms of Oracle database I/O performance improvement. Based on the intrinsic architecture of NAND-based devices, it should be readily apparent that random access database application workloads – especially OLTP - will probably benefit most. It’s equally important to recognize that sequential read application workloads might be able to benefit from SSDs, but that sequential write workloads may not benefit anywhere near as much.

Evaluating SSD vs. HDD I/O Performance: Preparations

To demonstrate these concepts, I’ll need to prepare sufficiently-sized targets for both random and sequential I/O operations. I used the code in Listing 8.1 to construct the necessary logical and physical structures in my test Oracle 11gR2 (11.2.0.1) database:

  • First, I allocated a 10 GB partition on the OCZ Flash Drive and a second 10 GB partition on one of my HDDs.

  • Next, I formatted both partitions using the Linux native EXT3 file system and mounted these two partitions at mount points /u01/app/oracle/ssdonly and /u01/app/oracle/oradata, respectively.
  • I then created two new tablespaces, TSP_SSD and TSP_HDD, and placed an 8 GB datafile for each tablespace on these two respective mount points.
  • In each tablespace I created an identical table – SH.SSD_SALES_COPY and SH.HDD_SALES_COPY - using the CTAS (Create Table As Select) method to initially populate data from the SH.SALES table. I also used a sequence to provide a unique key value for each table entry, and created a primary key index on this new column in each table.
  • Finally, to force the database to read from or write to the storage subsystem directly most of the time, I set the size of the database instance’s buffer cache almost absurdly low – only 12 MB.
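That last step can be sketched as follows. This assumes Automatic Memory Management is disabled in the test instance (otherwise MEMORY_TARGET or SGA_TARGET would simply let the cache grow back):

```sql
-- Shrink the default buffer cache so most reads and writes hit disk.
-- Assumes MEMORY_TARGET / SGA_TARGET are unset in this test instance.
ALTER SYSTEM SET db_cache_size = 12M SCOPE = SPFILE;

-- After restarting the instance, verify the effective cache size:
SELECT component, current_size / 1024 / 1024 AS size_mb
  FROM v$sga_dynamic_components
 WHERE component = 'DEFAULT buffer cache';
```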

To generate application workloads of sufficient intensity and duration, I’ll utilize Dominic Giles’s excellent open-source workload testing tool, Swingbench. Dominic has obviously invested quite a bit of time to develop a relatively sophisticated tool that makes extremely effective use of the Oracle JDBC thin client interface. Swingbench can construct and execute simulated application workloads as well as monitor in real time exactly how an application workload is directly impacting database performance. I’ve included the Swingbench application workload profile as well as the PL/SQL objects that implement my customized application workloads against the SSD-based and HDD-based test objects.

When Will SSDs Improve Performance Most Dramatically?

Based on the strengths of SSD architecture, my expectation is that random access application workloads – especially those that feature random reads and random writes – will benefit the most. To demonstrate this, I constructed a random read workload that executes a simple query against each of my test tables, and a random write workload that executes a simple UPDATE statement against each of them. I used the DBMS_RANDOM package to randomly select the beginning and ending ranges for the bind variable values in each statement’s selection criteria, thus ensuring the result set chosen for each iteration of the workload was kept relatively small.
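A minimal sketch of the random-read transaction looks like this. The table and key column come from Listing 8.1, but the key range and result-set width shown are illustrative, not the exact values used in my Swingbench stored procedures:

```sql
DECLARE
    lo_key NUMBER;
    hi_key NUMBER;
    total  NUMBER;
BEGIN
    -- Pick a random starting key, then keep the result set small.
    lo_key := TRUNC(DBMS_RANDOM.VALUE(1, 1000000));
    hi_key := lo_key + 50;
    SELECT SUM(amount_sold)
      INTO total
      FROM sh.ssd_sales_copy
     WHERE key_id BETWEEN lo_key AND hi_key;
END;
/
```

The random-write variant follows the same pattern, substituting an UPDATE for the SELECT.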

I then executed these two simulated application workloads as a 50-user Swingbench benchmark over a 15-minute time frame. I’ve reproduced a sample of the tool’s main execution panel in Figure 8-1 below.

 


 

Figure 8-1. Swingbench Benchmark Execution Panel.
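Each AWR report cited below was bracketed by a pair of snapshots around the Swingbench run. Swingbench can take these automatically (note the PerformAWRSnapShots setting in the workload profile at the end of this article), but done manually it looks like this, with the report itself generated by the standard $ORACLE_HOME/rdbms/admin/awrrpt.sql script:

```sql
-- Snapshot before the workload starts ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- ... run the 15-minute Swingbench workload ...

-- ... and snapshot again once it completes.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
```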

Since the HDD-based table and index reside in the TSP_HDD tablespace and the SSD-based table and index reside in the TSP_SSD tablespace, the comparison in I/O performance is simple yet striking. Here’s a portion of the Tablespace IO statistics from the complete AWR report I generated for this application workload simulation:

 

Tablespace | Reads  | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
TSP_HDD    | 33,977 | 37         | 11.22     | 1.00       | 1,602  | 2           | 1            | 10.00
TSP_SSD    | 33,368 | 37         | 1.46      | 1.00       | 1,739  | 2           | 1            | 0.00
SYSAUX     | 5,486  | 6          | 2.03      | 1.17       | 1,528  | 2           | 0            | 0.00

 

For the simulated random read workload, each tablespace encountered approximately the same number of physical reads at the same transaction rate. However, note that on average the SSD-based tablespace outperformed the HDD-based tablespace by almost 670% (an average read time of 1.46 ms versus 11.22 ms, respectively).
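The improvement figure falls straight out of the two average read latencies in the AWR excerpt:

```python
# Average single-block read latency (ms) from the AWR excerpt above.
hdd_avg_rd_ms = 11.22
ssd_avg_rd_ms = 1.46

# SSD improvement over HDD, as a percentage of the SSD latency.
improvement_pct = (hdd_avg_rd_ms - ssd_avg_rd_ms) / ssd_avg_rd_ms * 100
speedup = hdd_avg_rd_ms / ssd_avg_rd_ms
print(f"{improvement_pct:.0f}% improvement ({speedup:.1f}x faster)")
```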

For the simulated random write application workload, however, the results are not quite as striking. Below I’ve reproduced a subset of the Tablespace IO statistics from the complete AWR report I generated for this simulation:

 

Tablespace | Reads  | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
TSP_HDD    | 44,719 | 49         | 10.28     | 1.00       | 15,037 | 17          | 3            | 10.00
TSP_SSD    | 42,422 | 47         | 1.09      | 1.00       | 14,027 | 16          | 0            | 0.00
SYSAUX     | 4,005  | 4          | 3.35      | 1.09       | 1,480  | 2           | 0            | 0.00

 

Note that the average writes per second were virtually identical for the two tablespaces, although no buffer waits at all were encountered when writing to the SSD device. This result is not totally unexpected, because SSDs are not necessarily likely to improve I/O performance for physical writes.

When May SSDs Fail to Improve Performance Dramatically?

Since the nature of SSD architecture favors read activity more than write activity, it’s not unusual to encounter improved I/O performance even for sequential read workloads. I’ve illustrated this concept with a Swingbench simulation of a ten-user workload that performs a single iteration of full table scans against each of the two test tables over a 15-minute time frame. I’ve excerpted the results from the IOStat by Function/Filetype Summary sections of the corresponding AWR reports for the HDD and SSD workloads in Table 8-3 below.
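Each full-scan transaction can be sketched as a simple aggregate query that forces a full table scan; the hint and aggregate below are illustrative stand-ins for the actual stored procedure logic:

```sql
-- Force a full scan of the HDD-based copy; the SSD variant is identical
-- except that it targets sh.ssd_sales_copy.
SELECT /*+ FULL(s) */ COUNT(*), SUM(amount_sold)
  FROM sh.hdd_sales_copy s;
```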

 

Table 8-3. Data File Performance Improvements

Statistic: Direct Reads (Data File)

Test Case        | Reads: Data | Requests Per Second | Data Per Second
Datafiles on HDD | 53.1G       | 1926.4              | 60.1846
Datafiles on SSD | 57.2G       | 2072.19             | 64.7419

 

There was indeed a modest improvement in sequential read performance when the identical data was placed on SSD rather than HDD storage. Based on the Data Per Second metric in each AWR report section, I/O throughput for the SSD-based table improved by approximately 7.5% over the HDD-based table (64.7419 MBPS versus 60.1846 MBPS). This result was not unexpected, because SSDs perform best when handling random access workloads.
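The throughput comparison works out as follows:

```python
# Direct-read throughput (MB/s) from the two AWR report excerpts.
hdd_mbps = 60.1846
ssd_mbps = 64.7419

# Relative gain of SSD over HDD; roughly 7.5 percent.
gain_pct = (ssd_mbps - hdd_mbps) / hdd_mbps * 100
print(f"SSD sequential-read throughput gain: {gain_pct:.2f}%")
```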

If you’ve been keeping score, you already know that there’s only one I/O performance category left, and that’s sequential writes. As I mentioned earlier, the nature of SSD architecture can’t help but improve random read and random write activity, but sequential writes will almost certainly not benefit as much from SSD technology. Interestingly, I’ve talked to many Oracle DBAs who believe that SSDs will dramatically benefit the "hottest" files in their databases, and those are usually the online redo logs, especially for an OLTP application workload. This may be counter-intuitive, but SSDs may not relieve this particular bottleneck, because the LogWriter (LGWR) background process writes out online redo log entries as sequential writes.

To illustrate this paradox, I first migrated all of my database’s online redo logs to SSD storage, as shown in Listing 8.2. I then reran the 50-user random-write workload I used in the first set of scenarios. I’ve captured the pertinent redo log statistics from the AWR reports for both test runs in Table 8-4 below.

 

Table 8-4. Online Redo Log Performance Improvements

Statistic                   | Redo On HDD (Total / per Second / per Trans) | Redo On SSD (Total / per Second / per Trans)
redo blocks written         | 239,509 / 264.9 / 8.55                       | 238,390 / 263.48 / 8.54
redo entries                | 265,946 / 294.14 / 9.49                      | 257,560 / 284.67 / 9.23
redo size                   | 111,990,896 / 123,861.81 / 3,996.53          | 111,345,276 / 123,065.27 / 3,989.87
redo size for direct writes | 0 / 0 / 0                                    | 57,596 / 63.66 / 2.06
redo wastage                | 6,503,176 / 7,192.51 / 232.07                | 6,574,024 / 7,265.99 / 235.57
redo write time             | 9,974 / 11.03 / 0.36                         | 3,986 / 4.41 / 0.14
redo writes                 | 27,974 / 30.94 / 1                           | 28,521 / 31.52 / 1.02

 

I was actually pleasantly surprised to find a moderate performance improvement for this scenario. While almost all of the redo log performance statistics compare pretty closely, the redo write time decreased by roughly 60% (9,974 ms vs. 3,986 ms – about 2.5 times faster). However, this improvement is most likely due to the fact that my application workload scenarios committed extremely frequently – essentially, after each UPDATE statement was issued – which means the LogWriter background process was writing to the online redo logs much more frequently than it would during a less homogeneous or "hybrid" database application workload. As a result, the SSD media processed these LGWR transactions as random writes instead of sequential writes.
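The redo write time comparison, restated:

```python
# Total redo write time (ms) from the two AWR reports.
hdd_redo_write_ms = 9_974
ssd_redo_write_ms = 3_986

reduction_pct = (hdd_redo_write_ms - ssd_redo_write_ms) / hdd_redo_write_ms * 100
speedup = hdd_redo_write_ms / ssd_redo_write_ms
print(f"Redo write time fell {reduction_pct:.0f}% ({speedup:.1f}x faster)")
```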

Conclusion

In this article series, I’ve done my best to cover the basic best practices behind I/O performance tuning for Oracle databases, especially in an Oracle 11gR2 environment. Hopefully I’ve shed some light on the pitfalls and difficulties of tuning individual storage devices and the storage networks they comprise, especially when trying to obtain the best level of application workload performance in "hybrid" Oracle database environments.

References and Additional Reading

Before you experiment with any of the features I’ve described in this article series, I strongly suggest that you first look over the corresponding Oracle documentation. I’ve drawn upon the following Oracle Database 11g Release 2 documents for this article’s technical details:

E16508-05 Oracle Database 11gR2 Concepts

E16638-05 Oracle Database 11gR2 Performance Tuning Guide

E17110-08 Oracle Database 11gR2 Reference

E17120-07 Oracle Database 11gR2 Administrator’s Guide

E17128-04 Oracle Database 11gR2 New Features

 

See all articles by Jim Czuprynski

/*
|| Oracle 11gR2: I/O Performance Tuning Series, Listing 8
||
|| Contains:
|| -- Listing 8.1: Creating test objects for HDD and SSD storage experiments
|| -- Listing 8.2: Moving online redo logs to SSD storage
||
|| These examples demonstrate the potential of SSDs for boosting I/O performance.
||
|| Author: Jim Czuprynski
||
*/

/* 
|| Listing 8.1:
|| Creating test objects for HDD and SSD storage experiments
*/

/*
|| SSD-based objects
*/

-----
-- Create new tablespace
-----
DROP TABLESPACE tsp_ssd INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE tsp_ssd 
  DATAFILE '/u01/app/oracle/ssdonly/tsp_ssd.dbf'
  SIZE 8G;

-----
-- Create new sequence
-----
DROP SEQUENCE sh.seq_ssd_sales_copy_pkid;
CREATE SEQUENCE sh.seq_ssd_sales_copy_pkid
    MINVALUE 0
    MAXVALUE 999999999999
    START WITH 100
    CACHE 1000000
    NOCYCLE;

-----
-- Create new table in TSP_SSD tablespace
-----
DROP TABLE sh.ssd_sales_copy PURGE;
CREATE TABLE sh.ssd_sales_copy (
    key_id          NUMBER  NOT NULL
   ,prod_id         NUMBER  NOT NULL
   ,cust_id         NUMBER  NOT NULL
   ,time_id         DATE    NOT NULL
   ,channel_id      NUMBER  NOT NULL
   ,promo_id        NUMBER  NOT NULL
   ,quantity_sold   NUMBER  NOT NULL    
   ,amount_sold     NUMBER  NOT NULL
    )  
  TABLESPACE tsp_ssd
  NOLOGGING;

-----
-- Priming load
-----
INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.sales;

COMMIT;

-----
-- Extend SH.SSD_SALES_COPY
-----
INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.ssd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.ssd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.ssd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.ssd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.ssd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.ssd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.ssd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_ssd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.ssd_sales_copy;

COMMIT;

-- Priming load plus 7 doubling inserts yields 128 (2^7) times the original SH.SALES row count

-----
-- Create PK index and constraint
-----
ALTER TABLE sh.ssd_sales_copy DROP CONSTRAINT ssd_sales_copy_pk;
DROP INDEX sh.ssd_sales_copy_pk_idx;
ALTER TABLE sh.ssd_sales_copy
    ADD CONSTRAINT ssd_sales_copy_pk 
    PRIMARY KEY (key_id)
    USING INDEX ( 
        CREATE INDEX sh.ssd_sales_copy_pk_idx
            ON sh.ssd_sales_copy (key_id)
            TABLESPACE tsp_ssd
            PARALLEL 4
            NOLOGGING
    );

-----
-- Gather statistics
-----
BEGIN
	DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'SSD_SALES_COPY', cascade => TRUE);
END;
/

/*
|| HDD-based objects
*/

-----
-- Create new tablespace
-----
DROP TABLESPACE tsp_hdd INCLUDING CONTENTS AND DATAFILES;
CREATE TABLESPACE tsp_hdd 
  DATAFILE '/u01/app/oracle/oradata/tsp_hdd.dbf'
--DATAFILE '+DATA'
  SIZE 8G;

-----
-- Create new sequence
-----
DROP SEQUENCE sh.seq_hdd_sales_copy_pkid;
CREATE SEQUENCE sh.seq_hdd_sales_copy_pkid
    MINVALUE 0
    MAXVALUE 999999999999
    START WITH 100
    CACHE 1000000
    NOCYCLE;

-----
-- Create new table in TSP_HDD tablespace
-----
DROP TABLE sh.hdd_sales_copy PURGE;
CREATE TABLE sh.hdd_sales_copy (
    key_id          NUMBER  NOT NULL
   ,prod_id         NUMBER  NOT NULL
   ,cust_id         NUMBER  NOT NULL
   ,time_id         DATE    NOT NULL
   ,channel_id      NUMBER  NOT NULL
   ,promo_id        NUMBER  NOT NULL
   ,quantity_sold   NUMBER  NOT NULL    
   ,amount_sold     NUMBER  NOT NULL
    )  
  TABLESPACE tsp_hdd
  NOLOGGING;

-----
-- Priming load
-----
INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.sales;

COMMIT;

-----
-- Extend SH.HDD_SALES_COPY
-----
INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.hdd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.hdd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.hdd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.hdd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.hdd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.hdd_sales_copy;

COMMIT;

INSERT /*+ APPEND NOLOGGING PARALLEL(4) */ INTO sh.hdd_sales_copy
SELECT /*+ PARALLEL(4) */ sh.seq_hdd_sales_copy_pkid.NEXTVAL, prod_id, cust_id, time_id, 
       channel_id, promo_id, quantity_sold, amount_sold 
  FROM sh.hdd_sales_copy;

COMMIT;

-----
-- Create PK index and constraint
-----
ALTER TABLE sh.hdd_sales_copy DROP CONSTRAINT hdd_sales_copy_pk;
DROP INDEX sh.hdd_sales_copy_pk_idx;
ALTER TABLE sh.hdd_sales_copy
    ADD CONSTRAINT hdd_sales_copy_pk 
    PRIMARY KEY (key_id)
    USING INDEX ( 
        CREATE INDEX sh.hdd_sales_copy_pk_idx
            ON sh.hdd_sales_copy (key_id)
            TABLESPACE tsp_hdd
            PARALLEL 4
            NOLOGGING
    );

-----
-- Gather statistics
-----
BEGIN
	DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'HDD_SALES_COPY', cascade => TRUE);
END;
/


/* 
|| Listing 8.2:
|| Moving online redo logs to SSD storage
*/

SQL> ALTER DATABASE
  2      ADD LOGFILE GROUP 7
  3      ('/u01/app/oracle/ssdonly/redo7a.log', '/u01/app/oracle/ssdonly/redo7b.log')
  4      SIZE 128M REUSE;
Database altered.

SQL> ALTER DATABASE
  2      ADD LOGFILE GROUP 8
  3      ('/u01/app/oracle/ssdonly/redo8a.log', '/u01/app/oracle/ssdonly/redo8b.log')
  4      SIZE 128M;
Database altered.

SQL> ALTER DATABASE
  2      ADD LOGFILE GROUP 9
  3      ('/u01/app/oracle/ssdonly/redo9a.log', '/u01/app/oracle/ssdonly/redo9b.log')
  4      SIZE 128M;
Database altered.

SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.

SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.

SQL> ALTER SYSTEM SWITCH LOGFILE;
System altered.

SQL> ALTER SYSTEM CHECKPOINT;
System altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 4;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 5;
Database altered.

SQL> ALTER DATABASE DROP LOGFILE GROUP 6;
Database altered.

SQL> SELECT group#, member from v$logfile;

    GROUP# MEMBER
---------- --------------------------------------------------
         7 /u01/app/oracle/ssdonly/redo7a.log
         7 /u01/app/oracle/ssdonly/redo7b.log
         8 /u01/app/oracle/ssdonly/redo8a.log
         8 /u01/app/oracle/ssdonly/redo8b.log
         9 /u01/app/oracle/ssdonly/redo9a.log
         9 /u01/app/oracle/ssdonly/redo9b.log

6 rows selected.

Swingbench Application Workload Profile

<?xml version = '1.0' encoding = 'UTF-8'?>
<SwingBenchConfiguration xmlns="http://www.dominicgiles.com/swingbench/config">
   <Name>Stored Procedures Benchmark</Name>
   <Comment></Comment>
   <Connection>
      <UserName>sh</UserName>
      <Password>sh</Password>
      <ConnectString>//192.168.0.136/tpcdb</ConnectString>
      <DriverType>Oracle10g Type IV jdbc driver (thin)</DriverType>
   </Connection>
   <Load>
      <NumberOfUsers>50</NumberOfUsers>
      <MinDelay>100</MinDelay>
      <MaxDelay>3000</MaxDelay>
      <QueryTimeout>0</QueryTimeout>
      <MaxTransactions>0</MaxTransactions>
      <RunTime>0:15</RunTime>
      <LogonGroupCount>1</LogonGroupCount>
      <LogonDelay>100</LogonDelay>
      <LogOutPostTransaction>true</LogOutPostTransaction>
      <WaitTillAllLogon>false</WaitTillAllLogon>
      <StatsCollectionStart>0:0</StatsCollectionStart>
      <StatsCollectionEnd>0:0</StatsCollectionEnd>
      <TransactionList>
         <Transaction>
            <Id>Stored Procedure 1</Id>
            <ShortName>SP1</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure1</ClassName>
            <Weight>50</Weight>
            <Enabled>false</Enabled>
         </Transaction>
         <Transaction>
            <Id>Stored Procedure 2</Id>
            <ShortName>SP2</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure2</ClassName>
            <Weight>50</Weight>
            <Enabled>true</Enabled>
         </Transaction>
         <Transaction>
            <Id>Stored Procedure 3</Id>
            <ShortName>SP3</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure3</ClassName>
            <Weight>50</Weight>
            <Enabled>false</Enabled>
         </Transaction>
         <Transaction>
            <Id>Stored Procedure 4</Id>
            <ShortName>SP4</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure4</ClassName>
            <Weight>50</Weight>
            <Enabled>false</Enabled>
         </Transaction>
         <Transaction>
            <Id>Stored Procedure 5</Id>
            <ShortName>SP5</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure5</ClassName>
            <Weight>50</Weight>
            <Enabled>false</Enabled>
         </Transaction>
         <Transaction>
            <Id>Stored Procedure 6</Id>
            <ShortName>SP6</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure6</ClassName>
            <Weight>50</Weight>
            <Enabled>true</Enabled>
         </Transaction>
         <Transaction>
            <Id>Stored Procedure 7</Id>
            <ShortName>SP7</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure7</ClassName>
            <Weight>50</Weight>
            <Enabled>false</Enabled>
         </Transaction>
         <Transaction>
            <Id>Stored Procedure 8</Id>
            <ShortName>SP8</ShortName>
            <ClassName>com.dom.benchmarking.swingbench.storedprocedures.StoredProcedure8</ClassName>
            <Weight>50</Weight>
            <Enabled>false</Enabled>
         </Transaction>
      </TransactionList>
   </Load>
   <Database>
      <SystemUserName>system</SystemUserName>
      <SystemPassword>oracle</SystemPassword>
      <PerformAWRSnapShots>true</PerformAWRSnapShots>
   </Database>
   <Preferences>
      <StartMode>manual</StartMode>
      <Output>swingbench</Output>
      <JumpToEvents>true</JumpToEvents>
      <TimingsIncludeSleep>false</TimingsIncludeSleep>
      <TimingsIn>miliseconds</TimingsIn>
      <StatisticsLevel>simple</StatisticsLevel>
      <OutputFile>results.xml</OutputFile>
      <Charts DefaultChart="Overview">
         <Chart>
            <Name>Transactions Per Minute</Name>
            <Autoscale>true</Autoscale>
            <MaximumValue>-1.0</MaximumValue>
            <Logarithmic>false</Logarithmic>
         </Chart>
         <Chart>
            <Name>Transaction Response Time</Name>
            <Autoscale>true</Autoscale>
            <MaximumValue>-1.0</MaximumValue>
            <Logarithmic>true</Logarithmic>
         </Chart>
         <Chart>
            <Name>DML Operations Per Minute</Name>
            <Autoscale>true</Autoscale>
            <MaximumValue>-1.0</MaximumValue>
            <Logarithmic>false</Logarithmic>
         </Chart>
      </Charts>
      <AllowedErrorCodes/>
      <RefreshRate>5</RefreshRate>
   </Preferences>
</SwingBenchConfiguration>

PL/SQL Objects

/*
|| Type Specification: sh.integer_return_array
|| Purpose: Implements Swingbench function return values interface
|| Author:  Dominic Giles
*/

CREATE OR REPLACE TYPE sh.integer_return_array 
    IS VARRAY(25) OF INTEGER
/

/*
|| Package Specification: sh.swingbench
|| Purpose: Specifies signatures for Swingbench functions and procedures for
||          application workload generation
|| Author:  Dominic Giles (reformatted and augmented for these demonstrations)
*/

CREATE OR REPLACE PACKAGE sh.swingbench AS
    FUNCTION storedprocedure1(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

    FUNCTION storedprocedure2(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

    FUNCTION storedprocedure3(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

    FUNCTION storedprocedure4(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

    FUNCTION storedprocedure5(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

    FUNCTION storedprocedure6(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

    FUNCTION storedprocedure7(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

    FUNCTION storedprocedure8(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
      RETURN integer_return_array;

END;
/

/*
|| Package Body: sh.swingbench
|| Purpose: Implements public functions and procedures for Swingbench 
||          application workload generation
|| Author:  Dominic Giles (reformatted and augmented for these demonstrations)
*/

CREATE OR REPLACE PACKAGE BODY sh.swingbench AS
    -- Slot positions within the returned DML-counter array
    SELECT_STATEMENTS   CONSTANT INTEGER := 1;
    INSERT_STATEMENTS   CONSTANT INTEGER := 2;
    UPDATE_STATEMENTS   CONSTANT INTEGER := 3;
    DELETE_STATEMENTS   CONSTANT INTEGER := 4;
    COMMIT_STATEMENTS   CONSTANT INTEGER := 5;
    ROLLBACK_STATEMENTS CONSTANT INTEGER := 6;
    dml_array integer_return_array := integer_return_array();

    FUNCTION from_mills_to_tens(value INTEGER) 
      RETURN float 
    IS
    BEGIN
        -- Convert a value expressed in milliseconds to seconds; since the
        -- divisor is a literal, no ZERO_DIVIDE handler is needed
        RETURN value / 1000;
    END from_mills_to_tens;

    PROCEDURE sleep(
         min_sleep INTEGER
        ,max_sleep INTEGER) 
    IS
    BEGIN
        -- Pause for a random interval between the two bounds (supplied in
        -- milliseconds). Note that DBMS_RANDOM.VALUE expects its lower bound
        -- first, so min_sleep must be passed before max_sleep
        IF (min_sleep < max_sleep) THEN
            DBMS_LOCK.SLEEP(DBMS_RANDOM.VALUE(from_mills_to_tens(min_sleep), from_mills_to_tens(max_sleep)));
        END IF;
    END sleep;

    PROCEDURE init_dml_array 
    IS
    BEGIN
        dml_array := integer_return_array();
        FOR i IN 1..7 
            LOOP
                dml_array.EXTEND;
                dml_array(i) := 0;
            END LOOP;
    END init_dml_array;

    PROCEDURE increment_selects(num_selects INTEGER)
    IS
    BEGIN
        dml_array(SELECT_STATEMENTS) := dml_array(SELECT_STATEMENTS) + num_selects;
    END increment_selects;

    PROCEDURE increment_inserts(num_inserts INTEGER)
    IS
    BEGIN
        dml_array(INSERT_STATEMENTS) := dml_array(INSERT_STATEMENTS) + num_inserts;
    END increment_inserts;

    PROCEDURE increment_updates(num_updates INTEGER)
    IS
    BEGIN
        dml_array(UPDATE_STATEMENTS) := dml_array(UPDATE_STATEMENTS) + num_updates;
    END increment_updates;

    PROCEDURE increment_deletes(num_deletes INTEGER)
    IS
    BEGIN
        dml_array(DELETE_STATEMENTS) := dml_array(DELETE_STATEMENTS) + num_deletes;
    END increment_deletes;

    PROCEDURE increment_commits(num_commits INTEGER) 
    IS
    BEGIN
        dml_array(COMMIT_STATEMENTS) := dml_array(COMMIT_STATEMENTS) + num_commits;
    END increment_commits;  

    PROCEDURE increment_rollbacks(num_rollbacks INTEGER) 
    IS
    BEGIN
        dml_array(ROLLBACK_STATEMENTS) := dml_array(ROLLBACK_STATEMENTS) + num_rollbacks;
    END increment_rollbacks;  

    /*
    || SSD Workloads
    */

    FUNCTION storedprocedure1(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure1
    -- Purpose:     Generates a random read workload against the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nBegKey         NUMBER(15)    := 0;    
        nEndKey         NUMBER(15)    := 0;    
        nMinQtySold     NUMBER(15,2)  := 0;
        nMaxAmtSold     NUMBER(15,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := DBMS_RANDOM.VALUE(1, 250);
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-SSD', action_name => 'RR');

        FOR i in 1..nIterations
            LOOP
                nBegKey := DBMS_RANDOM.VALUE(1, 120000000);
                nEndKey := nBegKey + DBMS_RANDOM.VALUE(1, 25);
                SELECT MIN(quantity_sold), MAX(amount_sold)
                  INTO nMinQtySold, nMaxAmtSold
                  FROM sh.ssd_sales_copy
                 WHERE key_id BETWEEN nBegKey AND nEndKey
                 ;
    
                sleep(min_sleep, max_sleep);
                
            END LOOP;

        RETURN dml_array;
    END storedprocedure1;

    FUNCTION storedprocedure2(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure2
    -- Purpose:     Generates a random write workload against the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nBegKey         NUMBER(15)    := 0;    
        nEndKey         NUMBER(15)    := 0;    
        nQtyFct         NUMBER(15,2)  := 0;
        nAmtFct         NUMBER(15,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := DBMS_RANDOM.VALUE(1, 250);
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-SSD', action_name => 'RW');

        FOR i in 1..nIterations
            LOOP
                nBegKey := DBMS_RANDOM.VALUE(1, 120000000);
                nEndKey := nBegKey + DBMS_RANDOM.VALUE(1, 25);
                nQtyFct := DBMS_RANDOM.VALUE(1, 15);
                nAmtFct := DBMS_RANDOM.VALUE(1, 15);
    
                UPDATE sh.ssd_sales_copy
                   SET quantity_sold = quantity_sold * (1 + (nQtyFct / 100))
                      ,amount_sold = amount_sold * (1 + (nAmtFct / 100))
                 WHERE key_id BETWEEN nBegKey AND nEndKey
                ;
                COMMIT;
                
                sleep(min_sleep, max_sleep);

            END LOOP;

        RETURN dml_array;
    END storedprocedure2;

    FUNCTION storedprocedure3(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure3
    -- Purpose:     Generates a sequential read workload (table scans) against 
    --              the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nMinQtySold     NUMBER(15,2)  := 0;
        nMaxAmtSold     NUMBER(15,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := 1;
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-SSD', action_name => 'SR');

        FOR i in 1..nIterations
            LOOP
                SELECT MIN(quantity_sold), MAX(amount_sold) 
                  INTO nMinQtySold, nMaxAmtSold
                  FROM sh.ssd_sales_copy
                ;

                sleep(min_sleep, max_sleep);

            END LOOP;

        RETURN dml_array;
    END storedprocedure3;

    FUNCTION storedprocedure4(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure4
    -- Purpose:     Generates a sequential write workload against the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nQtyFct         NUMBER(15,2)  := 0;
        nAmtFct         NUMBER(15,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := DBMS_RANDOM.VALUE(1, 5);
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-SSD', action_name => 'SW');

        FOR i in 1..nIterations
            LOOP
                nQtyFct := DBMS_RANDOM.VALUE(1, 15);
                nAmtFct := DBMS_RANDOM.VALUE(1, 15);
    
                UPDATE sh.ssd_sales_copy
                   SET quantity_sold = quantity_sold * (1 + (nQtyFct / 100))
                      ,amount_sold = amount_sold * (1 + (nAmtFct / 100))
                ;
                COMMIT;

                sleep(min_sleep, max_sleep);
                
            END LOOP;

        RETURN dml_array;
    END storedprocedure4;
    
    /*
    || HDD Workloads
    */

    FUNCTION storedprocedure5(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure5
    -- Purpose:     Generates a random read workload against the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nBegKey         NUMBER(15)    := 0;    
        nEndKey         NUMBER(15)    := 0;    
        nMinQtySold     NUMBER(15,2)  := 0;
        nMaxAmtSold     NUMBER(15,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := DBMS_RANDOM.VALUE(1, 250);
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-HDD', action_name => 'RR');

        FOR i in 1..nIterations
            LOOP
                nBegKey := DBMS_RANDOM.VALUE(1, 120000000);
                nEndKey := nBegKey + DBMS_RANDOM.VALUE(1, 25);
                SELECT MIN(quantity_sold), MAX(amount_sold)
                  INTO nMinQtySold, nMaxAmtSold
                  FROM sh.hdd_sales_copy
                 WHERE key_id BETWEEN nBegKey AND nEndKey
                 ;
    
                sleep(min_sleep, max_sleep);
                
            END LOOP;

        RETURN dml_array;
    END storedprocedure5;

    FUNCTION storedprocedure6(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure6
    -- Purpose:     Generates a random write workload against the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nBegKey         NUMBER(15)    := 0;    
        nEndKey         NUMBER(15)    := 0;    
        nQtyFct         NUMBER(15,2)  := 0;
        nAmtFct         NUMBER(15,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := DBMS_RANDOM.VALUE(1, 250);
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-HDD', action_name => 'RW');

        FOR i in 1..nIterations
            LOOP
                nBegKey := DBMS_RANDOM.VALUE(1, 120000000);
                nEndKey := nBegKey + DBMS_RANDOM.VALUE(1, 25);
                nQtyFct := DBMS_RANDOM.VALUE(1, 15);
                nAmtFct := DBMS_RANDOM.VALUE(1, 15);
    
                UPDATE sh.hdd_sales_copy
                   SET quantity_sold = quantity_sold * (1 + (nQtyFct / 100))
                      ,amount_sold = amount_sold * (1 + (nAmtFct / 100))
                 WHERE key_id BETWEEN nBegKey AND nEndKey
                ;
                COMMIT;
                
                sleep(min_sleep, max_sleep);

            END LOOP;

        RETURN dml_array;
    END storedprocedure6;

    FUNCTION storedprocedure7(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure7
    -- Purpose:     Generates a sequential read workload (table scans) against 
    --              the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nMinQtySold     NUMBER(21,2)  := 0;
        nMaxAmtSold     NUMBER(21,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := 1;
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-HDD', action_name => 'SR');

        FOR i in 1..nIterations
            LOOP
                SELECT MIN(quantity_sold), MAX(amount_sold) 
                  INTO nMinQtySold, nMaxAmtSold
                  FROM sh.hdd_sales_copy
                ;

                sleep(min_sleep, max_sleep);

            END LOOP;

        RETURN dml_array;
    END storedprocedure7;

    FUNCTION storedprocedure8(
         min_sleep INTEGER
        ,max_sleep INTEGER
         ) RETURN integer_return_array 
    -----
    -- Function:    storedprocedure8
    -- Purpose:     Generates a sequential write workload against the specified table
    -- Author:      Jim Czuprynski (based on Dominic Giles's Swingbench sample template)
    -- Maintenance Log:
    -- Version	Resource	Comments
    -- -------  ----------  -------------------------------------------------------- 
    -- 1.0      JSC         2011-08-17: Initial version (built into SwingBench)
    -----
    IS
        nIterations     NUMBER(5)     := 0;
        nQtyFct         NUMBER(15,2)  := 0;
        nAmtFct         NUMBER(15,2)  := 0;
    BEGIN
        init_dml_array();
        nIterations := DBMS_RANDOM.VALUE(1, 5);
        DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'LG-HDD', action_name => 'SW');

        FOR i in 1..nIterations
            LOOP
                nQtyFct := DBMS_RANDOM.VALUE(1, 15);
                nAmtFct := DBMS_RANDOM.VALUE(1, 15);
    
                UPDATE sh.hdd_sales_copy
                   SET quantity_sold = quantity_sold * (1 + (nQtyFct / 100))
                      ,amount_sold = amount_sold * (1 + (nAmtFct / 100))
                ;
                COMMIT;

                sleep(min_sleep, max_sleep);
                
            END LOOP;

        RETURN dml_array;
    END storedprocedure8;
    
END;
/
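Before wiring the package into a Swingbench configuration, it's worth verifying that it compiles and runs by invoking one of the workload functions directly from SQL*Plus. The sketch below is illustrative only: it assumes the SH schema objects above have been created and that sh.ssd_sales_copy is populated, and it passes a zero sleep range so the smoke test runs without pausing. Note that because these workload functions never call the increment_* procedures, the returned DML-counter array will contain only zeroes.

```sql
SET SERVEROUTPUT ON

DECLARE
    -- Holds the DML-counter array returned by the workload function
    results sh.integer_return_array;
BEGIN
    -- Run the SSD random-read workload with no think time between iterations
    results := sh.swingbench.storedprocedure1(min_sleep => 0, max_sleep => 0);

    -- Display each counter slot (SELECTs, INSERTs, UPDATEs, DELETEs, ...)
    FOR i IN 1..results.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE('Slot ' || i || ': ' || results(i));
    END LOOP;
END;
/
```

If the block completes without error, the package is ready for Swingbench to drive via the stored-procedure task definitions in the configuration file shown earlier.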