Expert One-on-One Oracle: Pt. 2

Thursday Sep 27th 2001 by DatabaseJournal.com Staff

Part two of this excerpt series covers understanding Oracle architecture, using bind variables, understanding concurrency control, and multi-versioning.

I was working on a project recently where they decided to use only the latest, greatest technologies: everything was coded in Java with EJBs. The client application would talk to the database server using beans — no Net8. They would not be passing SQL back and forth between client and server, just EJB calls using Remote Method Invocation (RMI) over Internet Inter-Orb Protocol (IIOP).

If you are interested in the details of RMI over IIOP you can refer to http://java.sun.com/products/rmi-iiop/.

This is a perfectly valid approach. This functionality works and can be extremely scalable. The people responsible for the architecture understood Java, EJBs, the protocols involved — all of that stuff. They felt they were in a strong position to successfully build such a project. When their application would not scale beyond a couple of users they decided that the database was at fault and severely doubted Oracle's claim to be the 'most scaleable database ever'.

The problem was not the database but a lack of knowledge of how the database worked — a lack of knowledge that meant that certain key decisions were made at design time that doomed this particular application to failure. In order to deploy EJBs in the database Oracle must be configured to run in MTS mode rather than dedicated server mode. What the team did not understand, crucially, was how using MTS with EJBs in the database would affect them. Without this understanding, and without a general knowledge of how Oracle worked, two key decisions were made:

  • We will run some stored procedures that take 45 seconds or longer (much longer at times) in our beans.
  • We will not support the use of bind variables. All of our queries will hard code the constant values in the predicate. All inputs to stored procedures will use strings. This is 'easier' than coding bind variables.

These two seemingly minor decisions guaranteed that the project would fail — utterly guaranteed it. They made it so that a highly scalable database would literally fall over and fail with a very small user load. A lack of knowledge of how the database worked more than overwhelmed their intricate knowledge of Java beans and distributed processing. If they had taken the time to learn a bit more about the way Oracle worked, and had followed two simple guidelines, then their project would have had a much better chance of success the first time out.

Do not run Long Transactions Under MTS

The decision to run 45+ second transactions under MTS betrayed a lack of understanding of what MTS was designed to do and how it works in Oracle. Briefly, MTS works by having a shared pool of server processes that service a larger pool of end users. It is very much like connection pooling — since process creation and management are some of the most expensive operations you can ask an operating system to perform, MTS is very beneficial in a large-scale system. So, I might have 100 users but only five or ten shared servers.

When a shared server gets a request to run an update, or execute a stored procedure, then that shared server is dedicated to that task until completion. No one else will use that shared server until that update completes or that stored procedure finishes execution. Thus, when using MTS your goal must be to have very short statements. MTS is designed to scale up On-Line Transaction Processing (OLTP) systems — a system characterized by statements that execute with sub-second response times. You'll have a single row update, insert a couple of line items, and query records by primary key. You won't (or shouldn't) run a batch process that takes many seconds or minutes to complete.

If all of our statements execute very rapidly, then MTS works well. We can effectively share a number of processes amongst a larger community of users. If, on the other hand, we have sessions that monopolize a shared server for extended periods of time then we will see apparent database 'hangs'. Say we configured ten shared servers for 100 people. If, at some point, ten people simultaneously execute the process that takes 45 seconds or longer then every other transaction (including new connections) will have to wait. If some of the queued sessions want to run that same long process, then we have a big problem — the apparent 'hang' won't last 45 seconds, it will appear to last much longer for most people. Even if we only have a few people wanting to execute this process simultaneously rather than ten, we will still observe what appears to be a large degradation in performance from the server. We are taking away, for an extended period of time, a shared resource and this is not a good thing. Instead of having ten shared servers processing quick requests on a queue, we now have five or six (or less). Eventually the system will be running at some fraction of its capability, solely due to this resource being consumed.
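
As a rough illustration of this arithmetic, here is a toy simulation (Python, purely illustrative; the pool sizes and timings are hypothetical, not measurements from any real MTS configuration) of a shared-server pool being drained by long calls:

```python
import heapq

def serve(requests, n_servers):
    """Simulate a shared-server pool: each request (arrival, duration)
    is handled by the earliest-free server; return per-request wait times."""
    free_at = [0.0] * n_servers        # when each shared server frees up
    heapq.heapify(free_at)
    waits = []
    for arrival, duration in sorted(requests):
        t = heapq.heappop(free_at)     # grab the earliest-available server
        start = max(arrival, t)
        waits.append(start - arrival)
        heapq.heappush(free_at, start + duration)
    return waits

# 100 quick OLTP calls (0.1s each) arriving together: 10 servers drain them fast.
quick = [(0.0, 0.1)] * 100
# Same workload, but ten 45-second calls grab every shared server first.
mixed = [(0.0, 45.0)] * 10 + [(0.01, 0.1)] * 100

print(max(serve(quick, 10)))   # worst wait stays under a second
print(max(serve(mixed, 10)))   # every quick call now waits ~45 seconds
```

With only quick statements, the worst wait is under a second; once ten long calls monopolize the pool, even sub-second requests queue for the full 45 seconds, which is exactly the apparent 'hang' described above.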

The 'quick and dirty' solution was to start up more shared servers, but the logical conclusion to this is that you need a shared server per user and this is not a reasonable conclusion for a system with thousands of users (as this system was). Not only would that introduce bottlenecks into the system itself (the more servers you have to manage — the more processing time spent managing), but also it is simply not the way MTS was designed to work.

The real solution to this problem was simple: do not execute long-running transactions under MTS. Implementing this solution was not. There was more than one way to implement this and they all required fundamental architectural changes. The most appropriate way to fix this issue, requiring the least amount of change, was to use Advanced Queues (AQ).

AQ is message-oriented middleware hosted in the Oracle database. It provides the ability for a client session to enqueue a message into a database queue table. This message is later, typically immediately after committing, 'dequeued' by another session and the content of the message is inspected. This message contains information for the other session to process. It can be used to give the appearance of lightning-fast response times by decoupling the long-running process from the interactive client.

So, rather than execute a 45-second process, the bean would place the request, along with all its inputs, on a queue and execute it in a loosely coupled (asynchronous) rather than tightly coupled (synchronous) fashion. In this way, the end user would not have to wait 45 seconds for a response — the system would apparently be much more responsive.
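
The decoupling pattern can be sketched outside the database with an ordinary in-process queue. The following Python sketch is an analogy only (a real implementation would use AQ's enqueue/dequeue calls, and the 0.05-second sleep stands in for the 45-second stored procedure); it shows the caller getting an immediate response while a background worker runs the slow step:

```python
import queue
import threading
import time

work_queue = queue.Queue()   # stands in for an AQ queue table

def worker():
    """Background 'dequeue' session: picks up messages and runs the slow work."""
    while True:
        request = work_queue.get()
        if request is None:
            break
        time.sleep(0.05)     # stand-in for the 45-second stored procedure
        request["done"] = True
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

req = {"inputs": {"account": 123}, "done": False}
work_queue.put(req)          # 'enqueue' returns immediately; the caller is not blocked
print("request accepted")    # the end user sees a sub-second response
work_queue.join()            # only this demo waits; a real client would move on
print(req["done"])
```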

While this approach sounds easy — just drop in 'AQ' and the problem is fixed — there was more to it than that. This 45-second process generated a transaction ID that was required by the next step in the interface in order to join to other tables — as designed, the interface would not work without it. By implementing AQ, we were not waiting for this transaction ID to be generated here — we were just asking the system to do it for us at some point. So, the application was stuck. On the one hand, we could not wait 45 seconds for the process to complete, but on the other hand, we needed the generated ID in order to proceed to the next screen and we could only get that after waiting 45 seconds. To solve this problem, we had to synthesize a pseudo-transaction ID, modify the long-running process to accept this generated pseudo ID, and have it update a table when it was done, by which mechanism the real transaction ID was associated with the pseudo ID. That is, instead of the transaction ID being an output of the long-running process, it would be an input to it. Further, all 'downstream' tables would have to use this pseudo-transaction ID — not the real one (since the real one would not be generated for a while). We also had to review the usage of this transaction ID in order to see what impact this change might have on other modules, and so on.
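
The pseudo-ID handshake can be sketched as follows. This Python sketch is purely illustrative: the sequence generators, the in-memory mapping standing in for the mapping table, and the synchronous call (which in reality would be an enqueued, asynchronous request) are all hypothetical:

```python
import itertools

_pseudo_seq = itertools.count(1)     # stands in for a pseudo-ID sequence
_real_seq = itertools.count(1000)    # stands in for the real transaction-ID sequence
id_map = {}                          # pseudo ID -> real ID, filled in by the worker

def submit_request(inputs):
    """Client side: synthesize the pseudo ID up front and hand it to the
    long-running process as an *input*, so the UI can proceed immediately."""
    pseudo_id = next(_pseudo_seq)
    long_running_process(pseudo_id, inputs)   # in reality, enqueued and run later
    return pseudo_id

def long_running_process(pseudo_id, inputs):
    """Worker side: the real transaction ID is generated here, then the
    mapping row is updated so downstream code can resolve pseudo -> real."""
    real_id = next(_real_seq)
    id_map[pseudo_id] = real_id

p = submit_request({"room": "A"})
print(p, id_map[p])
```

Downstream tables key off the pseudo ID, and anything that truly needs the real ID looks it up in the mapping once the worker has finished.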

Another consideration was the fact that, while we were running synchronously, if the 45-second process failed then the end user was alerted right away. They would fix the error condition (fix the inputs, typically) and resubmit the request. Now, with the transaction being processed asynchronously under AQ, we don't have that ability. New functionality had to be added in order to support this delayed response. Specifically, we needed some workflow mechanism to route any failed transaction to the appropriate person.

The upshot of all this is that we had to undertake major changes in the database structure. New software had to be added (AQ). New processes had to be developed (workflows and such). On the plus side, the removal of 45 seconds of lag time from an interactive process not only solved the MTS architecture issue, it enhanced the user experience — it meant that the end user got much faster 'apparent' response times. On the down side, all of this delayed the project considerably because none of it was detected until immediately before deployment, during scalability testing. It is just too bad that it was not designed the right way from the beginning. With knowledge of how MTS worked physically, it would have been clear that the original design would not scale very well.

Use Bind Variables

If I were to write a book about how to build non-scalable Oracle applications, then 'Don't use Bind Variables' would be the first and last chapter. This is a major cause of performance issues and a major inhibitor of scalability. The way the Oracle shared pool (a very important shared memory data structure) operates is predicated on developers using bind variables. If you want to make Oracle run slowly, even grind to a total halt — just refuse to use them.

A bind variable is a placeholder in a query. For example, to retrieve the record for employee 123, I can query:

select * from emp where empno = 123;

Alternatively, I can query:

select * from emp where empno = :empno;

In a typical system, you would query up employee 123 maybe once and then never again. Later, you would query up employee 456, then 789, and so on. If you use literals (constants) in the query then each and every query is a brand new query, never before seen by the database. It will have to be parsed, qualified (names resolved), security checked, optimized, and so on — in short, each and every unique statement you execute will have to be compiled every time it is executed.

The second query uses a bind variable, :empno, the value of which is supplied at query execution time. This query is compiled once and then the query plan is stored in a shared pool (the library cache), from which it can be retrieved and reused. The difference between the two in terms of performance and scalability is huge, dramatic even.
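
The same two forms exist in most database APIs. Here is a minimal sketch using Python's sqlite3 module as an analogy (Oracle client APIs such as OCI or JDBC use the same placeholder idea, with `:empno`-style markers; the table contents here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (empno integer primary key, ename text)")
conn.executemany("insert into emp values (?, ?)",
                 [(123, "SMITH"), (456, "JONES"), (789, "KING")])

# Literal in the predicate: every distinct empno is a brand-new statement text,
# so the database must parse and optimize each one from scratch.
literal_row = conn.execute("select ename from emp where empno = 123").fetchone()

# Bind variable: one reusable statement text, the value supplied at execute time.
bound_row = conn.execute("select ename from emp where empno = ?", (456,)).fetchone()

print(literal_row[0], bound_row[0])
```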

From the above description it should be fairly obvious that parsing a statement with hard-coded variables (called a hard parse) will take longer and consume many more resources than reusing an already parsed query plan (called a soft parse). What may not be so obvious is the extent to which the former will reduce the number of users your system can support. Obviously, this is due in part to the increased resource consumption, but an even larger factor arises due to the latching mechanisms for the library cache. When you hard parse a query, the database will spend more time holding certain low-level serialization devices called latches (see Chapter 3, Locking and Concurrency, for more details). These latches protect the data structures in the shared memory of Oracle from concurrent modifications by two sessions (else Oracle would end up with corrupt data structures) and from someone reading a data structure while it is being modified. The longer and more frequently we have to latch these data structures, the longer the queue to get these latches will become. In a similar fashion to having long transactions running under MTS, we will start to monopolize scarce resources. Your machine may appear to be under-utilized at times — and yet everything in the database is running very slowly. The likelihood is that someone is holding one of these serialization mechanisms and a line is forming — you are not able to run at top speed.

It only takes one ill-behaved application in your database to dramatically affect the performance of every other application. A single, small application that does not use bind variables will cause the relevant SQL of other well-tuned applications to get discarded from the shared pool over time. You only need one bad apple to spoil the entire barrel.

If you use bind variables, then everyone who submits the same exact query that references the same object will use the compiled plan from the pool. You will compile your subroutine once and use it over and over again. This is very efficient and is the way the database intends you to work. Not only will you use fewer resources (a soft parse is much less resource intensive), but also you will hold latches for less time and need them less frequently. This increases your performance and greatly increases your scalability.

Just to give you a tiny idea of how huge a difference this can make performance-wise, you only need to run a very small test:

tkyte@TKYTE816> alter system flush shared_pool;

System altered.

Here I am starting with an 'empty' shared pool. If I was to run this test more than one time, I would need to flush the shared pool every time, or else the non-bind variable SQL below would, in fact, be cached and appear to run very fast.

tkyte@TKYTE816> set timing on
tkyte@TKYTE816> declare
2      type rc is ref cursor;
3      l_rc rc;
4      l_dummy all_objects.object_name%type;
5      l_start number default dbms_utility.get_time;
6  begin
7      for i in 1 .. 1000
8      loop
9          open l_rc for
10          'select object_name
11             from all_objects
12            where object_id = ' || i;
13          fetch l_rc into l_dummy;
14          close l_rc;
15      end loop;
16      dbms_output.put_line
17      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
18        ' seconds...' );
19  end;
20  /
14.86 seconds...
PL/SQL procedure successfully completed.

The above code uses dynamic SQL to query out a single row from the ALL_OBJECTS table. It generates 1000 unique queries with the values 1, 2, 3, ... and so on 'hard-coded' into the WHERE clause. On my 350MHz Pentium laptop, this took about 15 seconds (the speed may vary on different machines).

Next, we do it using bind variables:

tkyte@TKYTE816> declare
2      type rc is ref cursor;
3      l_rc rc;
4      l_dummy all_objects.object_name%type;
5      l_start number default dbms_utility.get_time;
6  begin
7      for i in 1 .. 1000
8      loop
9          open l_rc for
10          'select object_name
11             from all_objects
12            where object_id = :x'
13          using i;
14          fetch l_rc into l_dummy;
15          close l_rc;
16      end loop;
17      dbms_output.put_line
18      ( round( (dbms_utility.get_time-l_start)/100, 2 ) ||
19        ' seconds...' );
20  end;
21  /
1.27 seconds...
PL/SQL procedure successfully completed.

We use the same logic here — the only thing that has changed is the fact that we are not hard coding the values 1, 2, 3... and so on into the query — we are using bind variables instead. The results are pretty dramatic. The fact is that not only does this execute much faster (we spent more time parsing our queries than actually executing them!) it will let more users use your system simultaneously.

Executing SQL statements without bind variables is very much like compiling a subroutine before each and every method call. Imagine shipping Java source code to your customers where, before calling a method in a class, they had to invoke the Java compiler, compile the class, run the method, and then throw away the byte code. Next time they wanted to execute the exact same method, they would do the same thing; compile it, run it, and throw it away. You would never consider doing this in your application — you should never consider doing this in your database either.

In Chapter 10, Tuning Strategies and Tools, we will look at ways to identify whether or not you are using bind variables, different ways to use them, an 'auto binder' feature in the database, and so on. We will also discuss a specialized case where you don't want to use bind variables.

As it was, on this particular project, rewriting the existing code to use bind variables was the only possible course of action. The resulting code ran orders of magnitude faster and increased many times the number of simultaneous users that the system could support. However, it came at a high price in terms of time and effort. It is not that using bind variables is hard, or error prone, it's just that they did not do it initially and thus were forced to go back and revisit virtually all of the code and change it. They would not have paid this price if they had understood that it was vital to use bind variables in their application from day one.

Understanding Concurrency Control

Concurrency control is one area where databases differentiate themselves. It is an area that sets a database apart from a file system and that sets databases apart from each other. As a programmer, it is vital that your database application works correctly under concurrent access conditions, and yet this is something people fail to test time and time again. Techniques that work well if everything happens consecutively do not work so well when everyone does them simultaneously. If you don't have a good grasp of how your particular database implements concurrency control mechanisms, then you will:

  • Corrupt the integrity of your data.
  • Run slower than you should with a small number of users.
  • Decrease your ability to scale to a large number of users.

Notice I don't say, 'you might...' or 'you run the risk of...' but rather that invariably you will do these things. You will do these things without even realizing it. Without correct concurrency control, you will corrupt the integrity of your database because something that works in isolation will not work as you expect in a multi-user situation. You will run slower than you should because you'll end up waiting for data. You'll lose your ability to scale because of locking and contention issues. As the queues to access a resource get longer, the wait gets longer and longer. An analogy here would be a backup at a tollbooth. If cars arrive in an orderly, predictable fashion, one after the other, we never have a backup. If many cars arrive simultaneously, queues start to form. Furthermore, the waiting time does not increase in line with the number of cars at the booth. After a certain point we are spending considerable additional time 'managing' the people that are waiting in line, as well as servicing them (in the database, we would talk about context switching).

Concurrency issues are the hardest to track down — the problem is similar to debugging a multi-threaded program. The program may work fine in the controlled, artificial environment of the debugger but crashes horribly in the 'real world'. For example, under 'race conditions' you find that two threads can end up modifying the same data structure simultaneously. These kinds of bugs are terribly hard to track down and fix. If you only test your application in isolation and then deploy it to dozens of concurrent users, you are likely to be (painfully) exposed to an undetected concurrency issue. Over the next two sections, I'll relate two small examples of how the lack of understanding concurrency control can ruin your data or inhibit performance and scalability.

Implementing Locking

The database uses locks to ensure that, at most, one transaction is modifying a given piece of data at any given time. Basically, they are the mechanism that allows for concurrency — without some locking model to prevent concurrent updates to the same row, for example, multi-user access would not be possible in a database. However, if overused or used improperly, locks can actually inhibit concurrency. If you or the database itself locks data unnecessarily, then fewer people will be able to concurrently perform operations. Thus, understanding what locking is and how it works in your database is vital if you are to develop a scalable, correct application.

What is also vital is that you understand that each database implements locking differently. Some have page-level locking, others row level; some implementations escalate locks from row- level to page-level, some do not; some use read locks, others do not; some implement serializable transactions via locking and others via read-consistent views of data (no locks). These small differences can balloon into huge performance issues or downright bugs in your application if you do not understand how they work.

The following points sum up Oracle's locking policy:

  • Oracle locks data at the row level on modification only. There is no lock escalation to a block or table level, ever.
  • Oracle never locks data just to read it. There are no locks placed on rows of data by simple reads.
  • A writer of data does not block a reader of data. Let me repeat — reads are not blocked by writes. This is fundamentally different from almost every other database, where reads are blocked by writes.
  • A writer of data is blocked only when another writer of data has already locked the row it was going after. A reader of data never blocks a writer of data.

These facts must be taken into consideration when developing your application and you must also realize that this policy is unique to Oracle. A developer who does not understand how his or her database handles concurrency will certainly encounter data integrity issues. (This is particularly common when a developer moves from another database to Oracle, or vice versa, and neglects to take the differing concurrency mechanisms into account in their application.)

One of the side-effects of Oracle's 'non-blocking' approach is that if you actually want to ensure that no more than one user has access to a row at once, then you, the developer, need to do a little work yourself. Consider the following example. A developer was demonstrating to me a resource-scheduling program (for conference rooms, projectors, etc.) that he had just developed and was in the process of deploying. The application implemented a business rule to prevent the allocation of a resource to more than one person, for any given period of time. That is, the application contained code that specifically checked that no other user had previously allocated the time slot (at least the developer thought it did). This code queried the schedules table and, if no rows existed that overlapped that time slot, inserted the new row. So, the developer was basically concerned with two tables:

create table resources ( resource_name varchar2(25) primary key, ... );
create table schedules( resource_name varchar2(25) references resources,
    start_time    date,
    end_time      date );

And, before making, say, a room reservation, the application would query:

select count(*)
  from schedules
 where resource_name = :room_name
   and (start_time between :new_start_time and :new_end_time
        or
        end_time between :new_start_time and :new_end_time)

It looked simple and bullet-proof (to the developer anyway); if the count came back zero, the room was yours. If it came back non-zero, you could not reserve it for that period. Once I knew what his logic was, I set up a very simple test to show him the error that would occur when the application went live. An error that would be incredibly hard to track down and diagnose after the fact — one would be convinced it must be a database bug.

All I did was get someone else to use the terminal next to him. They both navigated to the same screen and, on the count of three, each hit the Go button and tried to reserve the same room for the exact same time. Both people got the reservation — the logic, which worked perfectly in isolation, failed in a multi-user environment. The problem in this case was caused by Oracle's non-blocking reads. Neither session ever blocked the other session. Both sessions simply ran the above query and then performed the logic to schedule the room. They could both run the query to look for a reservation, even if the other session had already started to modify the schedules table (the change wouldn't be visible to the other session until commit, by which time it was too late). Since they were never attempting to modify the same row in the schedules table, they would never block each other and, thus, the business rule could not enforce what it was intended to enforce.
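
The failure mode is easy to reproduce in miniature. In this Python sketch (illustrative only; the snapshot lists stand in for Oracle's read-consistent views), each 'session' evaluates the overlap check against its own snapshot of the table, so both checks pass and both reservations commit:

```python
schedules = []  # committed rows: (resource, start, end)

def overlap_count(snapshot, room, start, end):
    """The count(*) query, evaluated against a session's snapshot."""
    return sum(1 for (r, s, e) in snapshot
               if r == room and (start <= s <= end or start <= e <= end))

def reserve(snapshot, pending, room, start, end):
    """Check-then-insert against a snapshot -- the flawed original logic."""
    if overlap_count(snapshot, room, start, end) == 0:
        pending.append((room, start, end))
        return True
    return False

# Both sessions snapshot the (empty) table, then both reserve 9:00-10:00.
snap_a, snap_b = list(schedules), list(schedules)
pending_a, pending_b = [], []
ok_a = reserve(snap_a, pending_a, "Room 1", 9, 10)
ok_b = reserve(snap_b, pending_b, "Room 1", 9, 10)
schedules.extend(pending_a + pending_b)   # both commits succeed

print(ok_a, ok_b, len(schedules))   # both got the room: the rule was not enforced
```

Neither session's insert was visible to the other's check, so the table ends up with two overlapping reservations.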

The developer needed a method of enforcing the business rule in a multi-user environment, a way to ensure that exactly one person at a time made a reservation on a given resource. In this case, the solution was to impose a little serialization of his own — in addition to performing the count(*) above, the developer must first:

select * from resources 
       where resource_name = :room_name 
       FOR UPDATE;

A little earlier in the chapter, we discussed an example where use of the FOR UPDATE clause caused problems, but here it is what makes this business rule work in the way intended. What we did here was to lock the resource (the room) to be scheduled immediately before scheduling it, in other words before we query the schedules table for that resource. By locking the resource we are trying to schedule, we have ensured that no one else is modifying the schedule for this resource simultaneously. They must wait until we commit our transaction — at which point, they would be able to see our schedule. The chance of overlapping schedules is removed. The developer must understand that, in a multi-user environment, they must at times employ techniques similar to those used in multi-threaded programming. The FOR UPDATE clause is working like a semaphore in this case. It serializes access to the resources table for that particular row — ensuring no two people can schedule it simultaneously.

This is still highly concurrent as there are potentially thousands of resources to be reserved — what we have done is ensure that only one person modifies a resource at any time. This is a rare case where the manual locking of data you are not going to actually update is called for. You need to be able to recognize where you need to do this and, perhaps as importantly, where not to (I have an example of when not to below). Additionally, this does not lock the resource from other people reading the data as it might in other databases, hence this will scale very well.
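
The effect of this serialization can be mimicked with a per-resource mutex. This Python sketch is an analogy, not Oracle behavior (a threading.Lock stands in for the row lock taken by SELECT ... FOR UPDATE); it shows that exactly one of two simultaneous reservations wins:

```python
import threading

resource_locks = {"Room 1": threading.Lock()}  # stands in for the RESOURCES row lock
schedules = []

def reserve(room, start, end):
    """Lock the parent resource 'row' first, then check-and-insert."""
    with resource_locks[room]:      # serializes per resource, like FOR UPDATE
        clash = any(r == room and (start <= s <= end or start <= e <= end)
                    for (r, s, e) in schedules)
        if not clash:
            schedules.append((room, start, end))
            return True
        return False

results = []
threads = [threading.Thread(target=lambda: results.append(reserve("Room 1", 9, 10)))
           for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()

print(sorted(results), len(schedules))   # exactly one reservation wins
```

Reservations for different rooms take different locks, so, as with the row-level FOR UPDATE, concurrency across resources is unaffected.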

Issues such as the above have massive implications when attempting to port an application from database to database (I return to this theme a little later in the chapter), and this trips people up time and time again. For example, if you are experienced in other databases, where writers block readers and vice versa then you may have grown reliant on that fact to protect you from data integrity issues. The lack of concurrency is one way to protect yourself from this — that is how it works in many non-Oracle databases. In Oracle, concurrency rules supreme and you must be aware that, as a result, things will happen differently (or suffer the consequences).

For 99 percent of the time, locking is totally transparent and you need not concern yourself with it. It is that other 1 percent that you must be trained to recognize. There is no simple checklist of 'if you do this, you need to do this' for this issue. This is a matter of understanding how your application will behave in a multi-user environment and how it will behave in your database.

Multi-versioning

This is a topic very closely related to concurrency control, as it forms the foundation for Oracle's concurrency control mechanism — Oracle operates a multi-version read-consistent concurrency model. In Chapter 3, Locking and Concurrency, we'll cover the technical aspects of this in more detail but, essentially, it is the mechanism by which Oracle provides for:

  • Read-consistent queries: Queries that produce consistent results with respect to a point in time.
  • Non-blocking queries: Queries are never blocked by writers of data, as they would be in other databases.

These are two very important concepts in the Oracle database. The term multi-versioning basically comes from the fact that Oracle is able to simultaneously maintain multiple versions of the data in the database. If you understand how multi-versioning works, you will always understand the answers you get from the database. Before we explore in a little more detail how Oracle does this, here is the simplest way I know to demonstrate multi-versioning in Oracle:

tkyte@TKYTE816> create table t
  2  as
  3  select * from all_users;

Table created.

tkyte@TKYTE816> variable x refcursor

tkyte@TKYTE816> begin
2          open :x for select * from t;
3  end;
4  /

PL/SQL procedure successfully completed.

tkyte@TKYTE816> delete from t;

18 rows deleted.

tkyte@TKYTE816> commit;

Commit complete.

tkyte@TKYTE816> print x

USERNAME                          USER_ID CREATED
------------------------------ ---------- ---------
SYS                                     0 04-NOV-00
SYSTEM                                  5 04-NOV-00
DBSNMP                                 16 04-NOV-00
AURORA$ORB$UNAUTHENTICATED             24 04-NOV-00
ORDSYS                                 25 04-NOV-00
ORDPLUGINS                             26 04-NOV-00
MDSYS                                  27 04-NOV-00
CTXSYS                                 30 04-NOV-00
...
DEMO                                   57 07-FEB-01

18 rows selected.

In the above example, we created a test table, T, and loaded it with some data from the ALL_USERS table. We opened a cursor on that table. We fetched no data from that cursor: we just opened it.

Bear in mind that Oracle does not 'answer' the query, nor does it copy the data anywhere, when you open a cursor — imagine how long it would take to open a cursor on a one billion row table if it did. The cursor opens instantly and it answers the query as it goes along. In other words, it just reads data from the table as you fetch from it.

In the same session (or maybe another session would do this), we then proceeded to delete all data from that table. We even went as far as to COMMIT work on that delete. The rows are gone — but are they? In fact, they are retrievable via the cursor. The fact is that the resultset returned to us by the OPEN command was pre-ordained at the point in time we opened it. We had touched not a single block of data in that table during the open, but the answer was already fixed in stone. We have no way of knowing what the answer will be until we fetch the data — however the result is immutable from our cursor's perspective. It is not that Oracle copied all of the data above to some other location when we opened the cursor; it was actually the delete command that preserved our data for us by placing it into a data area called a rollback segment.

This is what read-consistency is all about and if you do not understand how Oracle's multi-versioning scheme works and what it implies, you will not be able to take full advantage of Oracle nor will you be able to write correct applications in Oracle (ones that will ensure data integrity).
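
A toy model may help fix the idea. In this Python sketch (purely illustrative; Oracle does not store whole table copies — it reconstructs old versions from rollback data on demand), a cursor pins the version of the table that was current when it was opened:

```python
class VersionedTable:
    """Toy multi-versioning: every committed change produces a new
    immutable version; a cursor pins the version current at open time."""
    def __init__(self, rows):
        self.versions = [list(rows)]

    def open_cursor(self):
        version = len(self.versions) - 1      # the 'point in time' of the open
        return iter(self.versions[version])

    def delete_all_and_commit(self):
        self.versions.append([])              # old version kept (cf. rollback segment)

t = VersionedTable(["SYS", "SYSTEM", "DBSNMP"])
cur = t.open_cursor()        # no rows fetched yet -- the answer is already fixed
t.delete_all_and_commit()    # rows gone from the current version
fetched = list(cur)
print(fetched)               # the cursor still returns the pre-delete rows
```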

Let's look at the implications of multi-versioning, read- consistent queries and non-blocking reads. If you are not familiar with multi-versioning, what you see below might be surprising. For the sake of simplicity, we will assume that the table we are reading stores one row per database block (the smallest unit of storage in the database), and that we are fullscanning the table in this example.

The table we will query is a simple accounts table. It holds balances in accounts for a bank. It has a very simple structure:

create table accounts
( account_number  number primary key,
  account_balance number
);

In reality the accounts table would have hundreds of thousands of rows in it, but for simplicity we're just going to consider a table with four rows (we will visit this example in more detail in Chapter 3, Locking and Concurrency):

Accounts Table Data

What we would like to do is to run the end-of-day report that tells us how much money is in the bank. That is an extremely simple query:

select sum(account_balance)
  from accounts;

And, of course, in this example the answer is obvious: $1250. However, what happens if we read row 1, and while we are reading rows 2 and 3, an Automated Teller Machine (ATM) generates transactions against this table, moving $400 from account 123 to account 456? Our query counts $500 in row 4 and comes up with the answer of $1650, doesn't it? Well, of course, this is to be avoided, as it would be an error — at no time did this sum of money exist in the account balance column (the transferred $400 is counted twice: once in account 123 before the move, and again in account 456 after it). It is the way in which Oracle avoids such occurrences, and how Oracle's methods differ from every other database, that you need to understand.
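The anomaly is easy to reproduce in a few lines of Python. The individual balances below are assumed for illustration (the figure with the table data is not reproduced here); what matters is that they total $1250, that account 123 is the first row read, and that account 456 is the last.

```python
# Sketch of the read anomaly described above: a scan with no read
# consistency reads row 1, then an ATM transfer commits mid-query, then the
# scan reads the remaining rows -- the moved $400 is counted twice.

balances = {123: 500, 234: 250, 345: 400, 456: 100}  # assumed data; sums to 1250

def transfer(b, src, dst, amount):
    b[src] -= amount
    b[dst] += amount

total = balances[123]                  # read row 1: $500 (includes the $400)
transfer(balances, 123, 456, 400)      # ATM moves $400 while we are scanning
total += balances[234] + balances[345] + balances[456]  # row 4 is now $500
print(total)  # 1650 -- a sum that never existed in the table
```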

In practically every other database, if you wanted to get a 'consistent' and 'correct' answer to this query, you would either have to lock the whole table while the sum was calculated or you would have to lock the rows as you read them. This would prevent people from changing the answer as you are getting it. If you lock the table up-front, you'll get the answer that was in the database at the time the query began. If you lock the data as you read it (commonly referred to as a shared read lock, which prevents updates but not other readers from accessing the data), you'll get the answer that was in the database at the point the query finished. Both of these methods inhibit concurrency a great deal. The table lock would prevent any updates from taking place against the entire table for the duration of your query (for a table of four rows, this would only be a very short period — but for tables with hundreds of thousands of rows, this could be several minutes). The 'lock as you go' method would prevent updates on data you have read and already processed and could actually cause deadlocks between your query and other updates.

Now, I said earlier that you would not be able to take full advantage of Oracle if you did not understand the concept of multi-versioning. Here is one reason why that is true. Oracle uses multi-versioning to get the answer as it existed at the point in time the query began, and the query will take place without locking a single thing (while our account transfer transaction updates rows 1 and 4, these rows will be locked to other writers — but not locked to other readers, such as our SELECT SUM... query). In fact, Oracle doesn't have the 'shared read' lock common in other databases — it does not need it. Everything inhibiting concurrency that can be removed has been removed.

So, how does Oracle get the correct, consistent answer ($1250) during a read without locking any data — in other words, without decreasing concurrency? The secret lies in the transactional mechanisms that Oracle uses. Whenever you modify data, Oracle creates entries in two different locations. One entry goes to the redo logs, where Oracle stores enough information to redo or 'roll forward' the transaction. For an insert, this would be the row inserted. For a delete, it is a message to delete the row in file X, block Y, row slot Z. And so on. The other entry is an undo entry, written to a rollback segment. If your transaction fails and needs to be undone, Oracle will read the 'before' image from the rollback segment and restore the data. In addition to using this rollback segment data to undo transactions, Oracle uses it to undo changes to blocks as it is reading them — to restore the block to the point in time your query began. This gives you the ability to read right through a lock and to get consistent, correct answers without locking any data yourself.
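The two entries can be sketched as follows. This is a toy model, not Oracle's implementation: the `redo_log` and `rollback_seg` lists, the SCN counter, and the per-block balances (assumed to total $1250, one row per block) are all illustrative. Each modification writes a redo record (enough to roll the change forward) and an undo record (the before-image); a query that began before the change applies the undo entries to rebuild each block as of its own start SCN, even while the writer holds its locks.

```python
# Toy model of redo + undo: every change records roll-forward information in
# the redo log and a before-image in a simulated rollback segment. A reader
# reconstructs each block as of the SCN at which its query began.

redo_log = []        # (scn, rowid, new_value): roll-forward information
rollback_seg = []    # (scn, rowid, old_value): before-images
blocks = {1: 500, 2: 250, 3: 400, 4: 100}  # assumed balances, one row per block
scn = 0

def modify(rowid, new_value):
    global scn
    scn += 1
    redo_log.append((scn, rowid, new_value))          # redo entry
    rollback_seg.append((scn, rowid, blocks[rowid]))  # undo entry
    blocks[rowid] = new_value

def consistent_read(rowid, as_of):
    value = blocks[rowid]
    # apply undo, newest change first, for anything newer than our start SCN
    for change_scn, r, old in reversed(rollback_seg):
        if r == rowid and change_scn > as_of:
            value = old
    return value

query_scn = scn          # our SELECT SUM(...) begins here
modify(1, 100)           # the transfer debits row 1 (now locked by the writer)...
modify(4, 500)           # ...and credits row 4 (also locked)
print(sum(consistent_read(r, query_scn) for r in blocks))  # 1250
```

The reader never waits on the writer's locks; it simply reconstructs the version of each block it needs from the undo entries.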

So, as far as our example is concerned, Oracle arrives at its answer as follows:

How Oracle processes the above example

At time T6, Oracle is effectively 'reading through' the lock placed on row 4 by our transaction. This is how non-blocking reads are implemented — Oracle only looks to see if the data changed; it does not care if the data is currently locked (which implies that it has changed). It will simply retrieve the old value from the rollback segment and proceed on to the next block of data.

This is another clear demonstration of multi-versioning — there are multiple versions of the same piece of information, all at different points in time, available in the database. Oracle is able to make use of these 'snapshots' of data at different points in time to provide us with read-consistent queries and non-blocking queries.

This read-consistent view of data is always performed at the SQL statement level: the results of any single SQL statement are consistent with respect to the point in time they began. This quality is what makes a statement like the following insert a predictable set of data:

begin
  for x in ( select * from t )
  loop
    insert into t values ( x.username, x.user_id, x.created );
  end loop;
end;

The result of the SELECT * FROM T is preordained when the query begins execution. The SELECT will not see any of the new data generated by the INSERT. Imagine if it did — this statement might be a never-ending loop. If, as the INSERT generated more rows in T, the SELECT could 'see' those newly inserted rows, the above piece of code would create some unknown number of rows. If the table T started out with 10 rows, we might end up with 20, 21, 23, or an infinite number of rows in T when we finished. It would be totally unpredictable. This consistent read is provided to all statements, so that an INSERT such as the following is predictable as well:
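The hazard, and the fix, can be sketched in plain Python. Iterating a live list while appending to it would never terminate; iterating a snapshot taken when the "query" begins is predictable, which is exactly what Oracle's statement-level read consistency provides to the loop above.

```python
# Sketch of why statement-level read consistency matters for the loop above.

rows = ["a", "b", "c"]

# Predictable: the "cursor" sees only the rows that existed when it opened.
for row in list(rows):      # snapshot, as Oracle's read consistency provides
    rows.append(row)
print(rows)                 # ['a', 'b', 'c', 'a', 'b', 'c'] -- exactly doubled

# Without the snapshot -- `for row in rows: rows.append(row)` -- the loop
# would see its own inserts and run forever: the 'never-ending loop'
# described above.
```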

insert into t select * from t;

The INSERT statement will be provided with a read-consistent view of T — it will not see the rows that it itself just inserted; it will only insert the rows that existed at the time the INSERT began. Many databases won't even permit recursive statements such as the above, because they cannot tell how many rows might actually be inserted.

So, if you are used to the way other databases work with respect to query consistency and concurrency, or if you have never had to grapple with such concepts (no real database experience), you can now see how important understanding them will be to you. In order to maximize Oracle's potential, you need to understand these issues as they pertain to Oracle — not how they are implemented in other databases.
