Expert One-on-One: Oracle: Pt. 3

Thursday Oct 4th 2001 by DatabaseJournal.com Staff

This third installment looks at database independence, the impact of standards, features and functions, and solving problems simply.

By now, you might be able to see where I'm going in this section. I have made references above to other databases and how features are implemented differently in each. With the exception of some read-only applications, it is my contention that building a wholly database-independent application that is highly scalable is extremely hard — and is in fact quite impossible unless you know exactly how each database works in great detail.

For example, let's revisit our initial resource scheduler example (prior to adding the FOR UPDATE clause). Let's say this application had been developed on a database with an entirely different locking/concurrency model from Oracle. What I'll show here is that if you migrate your application from one database to another, you will have to verify that it still works correctly in the new environment.

Let's assume that we had deployed the initial resource scheduler application in a database that employed page-level locking with blocking reads (reads are blocked by writes), and that there was an index on the SCHEDULES table:

create index schedules_idx 
      on schedules( resource_name, start_time );

Also consider that the business rule was implemented via a database trigger (after the INSERT had occurred, but before the transaction committed, we would verify that only our row existed in the table for that time slot). In a page-locking system, it is very likely that the update of the index page on RESOURCE_NAME and START_TIME would have serialized these transactions. The system would have processed the inserts sequentially because the index page was locked (all of the RESOURCE_NAMEs with START_TIMEs near each other would be on the same page). In that page-level locking database, our application would appear well behaved — our checks for overlapping resource allocations would have happened one after the other, not concurrently.
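
A minimal sketch of such a validation trigger might look like this (the END_TIME column and the error text are assumptions for illustration):

create or replace trigger schedules_validate
after insert on schedules
declare
    l_overlaps number;
begin
    -- count pairs of bookings for the same resource whose time ranges overlap
    select count(*)
      into l_overlaps
      from schedules a, schedules b
     where a.rowid <> b.rowid
       and a.resource_name = b.resource_name
       and a.start_time < b.end_time
       and b.start_time < a.end_time;

    if l_overlaps > 0
    then
        raise_application_error( -20001, 'Resource is already booked for that time' );
    end if;
end;
/

Notice that the check is nothing more than a query against the table. Whether it is safe in a multi-user environment depends entirely on whether another session's uncommitted insert is visible to, or blocks, that query.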

If we migrated this application to Oracle and simply assumed that it would behave in the same way, we would be in for a shock. On Oracle, which does row-level locking and supplies non-blocking reads, the application appears to be ill behaved. As we saw previously, we had to use the FOR UPDATE clause to serialize access. Without this clause, two users could schedule the same resource for the same times. This is a direct consequence of not understanding how the database we have works in a multi-user environment.

I have encountered issues such as this many times when an application is being moved from database A to database B. When an application that worked flawlessly in database A does not work, or works in an apparently bizarre fashion, on database B, the first thought is that database B is a 'bad database'. The simple truth is that database B just does it differently — neither database is wrong or 'bad', they are just different. Knowing and understanding how they work will help you immensely in dealing with these issues.

For example, very recently I was helping to convert some Transact-SQL (the stored procedure language for SQL Server) into PL/SQL. The developer doing the conversion was complaining that the SQL queries in Oracle returned the 'wrong' answer. The queries looked like this:

declare
  l_some_variable   varchar2(25);
begin
  if ( some_condition )
  then
    l_some_variable := f( ... );
  end if;

  for x in ( select * from T where x = l_some_variable )
  loop
    ...
  end loop;
end;

The goal here was to find all of the rows in T where X was Null if some condition was not met, or where X equaled a specific value if the condition was met.

The complaint was that, in Oracle, this query would return no data when L_SOME_VARIABLE was not set to a specific value (when it was left as Null). In Sybase or SQL Server, this was not the case — the query would find the rows where X was set to a Null value. I see this on almost every conversion from Sybase or SQL Server to Oracle. SQL is supposed to operate under tri-valued logic, and Oracle implements Null values the way ANSI SQL requires them to be implemented. Under those rules, comparing X to a Null is neither True nor False — it is, in fact, unknown. The following snippet shows what I mean:

ops$tkyte@ORA8I.WORLD> select * from dual;

D
-
X

ops$tkyte@ORA8I.WORLD> select * from dual where null=null;

no rows selected

ops$tkyte@ORA8I.WORLD> select * from dual where null<>null;

no rows selected

This can be confusing the first time you see it — it proves that, in Oracle, Null is neither equal to nor not equal to Null. SQL Server, by default, does not do it that way: in SQL Server and Sybase, Null is equal to Null. Neither Oracle's, Sybase's, nor SQL Server's SQL processing is wrong — they are just different. All of these databases are in fact ANSI-compliant, but they still work differently. There are ambiguities, backward compatibility issues, and so on, to be overcome. For example, SQL Server supports the ANSI method of Null comparison, just not by default (it would break thousands of existing legacy applications built on that database).

In this case, one solution to the problem was to write the query like this instead:

select *
  from t
  where ( x = l_some_variable OR 
        (x is null and
         l_some_variable is NULL ))


However, this leads to another problem. In SQL Server, this query would have used an index on x. This is not the case in Oracle since a B*Tree index (more on indexing techniques in Chapter 7) will not index an entirely Null entry. Hence, if you need to find Null values, B*Tree indexes are not very useful.

What we did in this case, in order to minimize the impact on the code, was to assign X some value that it could never in reality assume. Here, X, by definition, was a positive number — so we chose the number -1. Thus, the query became:

select * 
       from t 
       where nvl(x,-1) = nvl(l_some_variable,-1)

And we created a function-based index:

create index t_idx on t( nvl(x,-1) );

With minimal change, we achieved the same end result. The important points to recognize from this are that:

  • Databases are different. Experience in one will in part carry over to another but you must be ready for some fundamental differences as well as some very minor differences.
  • Minor differences (such as treatment of Nulls) can have as big an impact as fundamental differences (such as concurrency control mechanism).
  • Being aware of the database and how it works and how its features are implemented is the only way to overcome these issues.

Developers frequently ask me (usually more than once a day) how to do something specific in the database. For example, they will ask the question 'How do I create a temporary table in a stored procedure?' I do not answer such questions with a direct answer — I always respond with a question: 'Why do you want to do that?' Many times, the answer will come back: 'In SQL Server we created temporary tables in our stored procedures and we need to do this in Oracle.' That is what I expected to hear. My response, then, is easy — 'You do not want to create temporary tables in a stored procedure in Oracle (you only think you do).' That would, in fact, be a very bad thing to do in Oracle. If you created the tables in a stored procedure in Oracle you would find that:

  • Doing DDL is a scalability inhibitor.
  • Doing DDL constantly is not fast.
  • Doing DDL commits your transaction.
  • You would have to use Dynamic SQL in all of your stored procedures in order to access this table — no static SQL.
  • Dynamic SQL in PL/SQL is not as fast or as optimized as static SQL.
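
What you want instead, in Oracle, is a global temporary table, created once as part of the schema rather than inside the procedure. A minimal sketch (the table and column names are invented for illustration):

create global temporary table temp_work
( id    number,
  data  varchar2(30)
)
on commit delete rows;

Every stored procedure can then reference TEMP_WORK using static SQL. Each session sees only the rows it inserted, and those rows disappear automatically at commit (or at session end, if ON COMMIT PRESERVE ROWS is specified instead).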

The bottom line is that you don't want to do it exactly as you did it in SQL Server (if you even need the temporary table in Oracle at all). You want to do things as they are best done in Oracle. Just as if you were going the other way from Oracle to SQL Server, you would not want to create a single table for all users to share for temporary data (that is how Oracle does it). That would limit scalability and concurrency in those other databases. All databases are not created equal — they are all very different.

If all databases are SQL92-compliant, then they must be the same. At least that is the assumption made many times. In this section I would like to dispel that myth.

SQL92 is an ANSI/ISO standard for databases. It is the successor to the SQL89 ANSI/ISO standard. It defines a language (SQL) and behavior (transactions, isolation levels, and so on) that tell you how a database will behave. Did you know that many commercially available databases are SQL92-compliant? Did you know that it means very little as far as query and application portability goes?

Starting with the standard, we will find that the SQL92 standard has four levels:

  • Entry-level — This is the level to which most vendors have complied. This level is a minor enhancement of the predecessor standard, SQL89. No database vendors have been certified higher and, in fact, the National Institute of Standards and Technology (NIST), the agency that used to certify for SQL compliance, does not even certify anymore. I was part of the team that got Oracle 7.0 NIST-certified for SQL92 entry-level compliance in 1993. An entry-level compliant database has the feature set of Oracle 7.0.
  • Transitional — This is approximately 'halfway' between entry-level and intermediate-level as far as a feature set goes.
  • Intermediate — This adds many features including (not by any means an exhaustive list):
    • Dynamic SQL
    • Cascade DELETE for referential integrity
    • DATE and TIME data types
    • Domains
    • Variable length character strings
    • A CASE expression
    • CAST functions between data types
  • Full — Adds provisions for (again, not exhaustive):
    • Connection management
    • A BIT string data type
    • Deferrable integrity constraints
    • Derived tables in the FROM clause
    • Subqueries in CHECK clauses
    • Temporary tables

The entry-level standard does not include features such as outer joins, the new inner join syntax, and so on. Transitional does specify outer join syntax and inner join syntax. Intermediate adds more, and Full is, of course, all of SQL92. Most books on SQL92 do not differentiate between the various levels, leading to confusion on the subject. They demonstrate what a theoretical database implementing SQL92 Full would look like. This makes it impossible to pick up a SQL92 book and apply what you see in the book to just any SQL92 database. For example, in SQL Server the 'inner join' syntax is supported in SQL statements, whereas in Oracle it is not. But they are both SQL92-compliant databases. You can do inner joins and outer joins in Oracle; you will just do it differently than in SQL Server. The bottom line is that SQL92 will not take you very far at the entry level and, if you use any of the features of intermediate or higher, you risk not being able to 'port' your application.
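
To illustrate that last point using the familiar EMP and DEPT tables, the same outer join must be phrased differently in each product (the queries below are a sketch; Oracle 8i does not accept the ANSI join syntax):

-- ANSI join syntax (transitional SQL92), as accepted by SQL Server:
select e.ename, d.dname
  from emp e left outer join dept d
    on e.deptno = d.deptno;

-- the Oracle 8i equivalent, using its proprietary (+) operator:
select e.ename, d.dname
  from emp e, dept d
 where e.deptno = d.deptno (+);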

You should not be afraid to make use of vendor-specific features — after all, you are paying a lot of money for them. Every database has its own bag of tricks, and we can always find a way to perform the operation in each database. Use what is best for your current database, and re-implement components as you go to other databases. Use good programming techniques to isolate yourself from these changes. The same techniques are employed by people writing OS-portable applications. The goal is to fully utilize the facilities available to you, but ensure you can change the implementation on a case-by-case basis.

For example, a common function of many database applications is the generation of a unique key for each row. When you insert the row, the system should automatically generate a key for you. Oracle has implemented the database object called a SEQUENCE for this. Informix has a SERIAL data type. Sybase and SQL Server have an IDENTITY type. Each database has a way to do this. However, the methods are different, both in how you do it, and the possible outcomes. So, to the knowledgeable developer, there are two paths that can be pursued:

  • Develop a totally database-independent method of generating a unique key.
  • Accommodate the different implementations and use different techniques when implementing keys in each database.

The theoretical advantage of the first approach is that to move from database to database you need not change anything. I call it a 'theoretical' advantage because the 'con' side of this implementation is so huge that it makes this solution totally infeasible. What you would have to do to develop a totally database-independent process is to create a table such as:

create table id_table ( id_name varchar(30), id_value number );
insert into id_table values ( 'MY_KEY', 0 );

Then, in order to get a new key, you would have to execute the following code:

update id_table set id_value = id_value + 1 where id_name = 'MY_KEY';
select id_value from id_table where id_name = 'MY_KEY';

Looks simple enough, but the outcome is that only one user at a time may process a transaction now. We need to update that row to increment a counter, and this will cause our program to serialize on that operation. At best, one person at a time will generate a new value for this key. This issue is compounded by the fact that our transaction is much larger than we have outlined above. The UPDATE and SELECT we have in the example are only two statements of potentially many other statements that make up our transaction. We have yet to insert the row into the table with this key we just generated, and do whatever other work it takes to complete this transaction. This serialization will be a huge limiting factor in scaling. Think of the ramifications if this technique were used on web sites that processed orders, and this was how we generated order numbers. There would be no multi-user concurrency, so we would be forced to do everything sequentially.

The correct approach to this problem would be to use the best code for each database. In Oracle this would be (assuming the table that needs the generated primary key is T):

create table t ( pk number primary key, ... );

create sequence t_seq;

create trigger t_trigger
before insert on t for each row
begin
  select t_seq.nextval into :new.pk from dual;
end;
/

This will have the effect of automatically, and transparently, assigning a unique key to each row inserted. The same effect can be achieved in the other databases using their types — the CREATE TABLE syntax will be different, but the net results will be the same. Here, we have gone out of our way to use each database's features to generate a non-blocking, highly concurrent unique key, and have introduced no real changes to the application code — all of the logic is contained in this case in the DDL.
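
For contrast, here is a sketch of the same idea in SQL Server or Sybase, where the key generation is declared as an attribute of the column itself (the table and column names are, again, just for illustration):

create table t
( pk          int identity(1,1) primary key,
  other_data  varchar(30)
);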

Another example of defensive programming to allow for portability is, once you understand that each database will implement features in a different way, to layer your access to the database when necessary. Let's say you are programming using JDBC. If all you use is straight SQL SELECTs, INSERTs, UPDATEs, and DELETEs, you probably do not need a layer of abstraction. You may very well be able to code the SQL directly in your application, as long as you limit the constructs you use to those supported by each of the databases you intend to support. Another approach, one that is both more portable and offers better performance, would be to use stored procedures to return resultsets. You will discover that every vendor's database can return resultsets from stored procedures, but how they are returned is different. The actual source code you must write is different for different databases.

Your two choices here would be to either not use stored procedures to return resultsets, or to implement different code for different databases. I would definitely follow the 'different code for different vendors' method, and use stored procedures heavily. This might appear to increase the amount of time it would take to implement on a different database. However, you will find it is actually easier to implement on multiple databases with this approach. Instead of having to find the perfect SQL that works on all databases (perhaps better on some than on others), you will implement the SQL that works best on that database. You can do this outside of the application itself, which gives you more flexibility in tuning the application. We can fix a poorly performing query in the database itself, and deploy that fix immediately, without having to patch the application. Additionally, using this method, you can freely take advantage of vendor extensions to SQL. For example, Oracle supports hierarchical queries via the CONNECT BY operation in its SQL. This unique feature is great for resolving recursive queries. In Oracle, you are free to utilize this extension to SQL since it is 'outside' of the application (hidden in the database). In other databases, you would perhaps use a temporary table and procedural code in a stored procedure to achieve the same results. You paid for these features, so you might as well use them.
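
On Oracle, for example, a stored procedure hands back a REF CURSOR that the client then fetches from. The following is a minimal sketch (the package name, procedure, and use of the EMP table are my own invention for the example):

create or replace package emp_pkg
as
    type rc_type is ref cursor;
    procedure get_emps( p_deptno in number, p_emps out rc_type );
end;
/
create or replace package body emp_pkg
as
    procedure get_emps( p_deptno in number, p_emps out rc_type )
    is
    begin
        -- the client (JDBC, ODBC, OCI, and so on) fetches rows from this cursor
        open p_emps for
            select empno, ename, sal
              from emp
             where deptno = p_deptno;
    end;
end;
/

The SQL Server version of the same procedure would simply end with a SELECT statement; it is the small piece of client code that fetches the results that varies from vendor to vendor.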

These are the same techniques developers who implement multi-platform code utilize. Oracle Corporation, for example, uses this technique in the development of its own database. There is a large amount of code (a small percentage of the database code overall) called OSD (Operating System Dependent) code that is implemented specifically for each platform. Using this layer of abstraction, Oracle is able to make use of many native OS features for performance and integration, without having to rewrite the large majority of the database itself. The fact that Oracle can run as a multi-threaded application on Windows and a multi-process application on UNIX attests to this feature. The mechanisms for inter-process communication are abstracted to such a level that they can be re-implemented on an OS-by-OS basis, allowing for radically different implementations that perform as well as an application written directly, and specifically, for that platform.

In addition to the SQL syntactic differences, implementation differences, and differences in performance of the same query in different databases outlined above, there are the issues of concurrency controls, isolation levels, query consistency, and so on. We cover these items in some detail in Chapter 3, Locking and Concurrency, and Chapter 4, Transactions, of this book, and see how their differences may affect you. SQL92 attempted to give a straightforward definition of how a transaction should work and how isolation levels are to be implemented, but in the end, you'll get different results from different databases. It is all due to the implementation. In one database an application will deadlock and block all over the place. In another database, the exact same application will not — it will run smoothly. In one database, the fact that you did block (physically serialize) was used to your advantage; when you deploy on another database and it does not block, you get the wrong answer. Picking an application up and dropping it on another database takes a lot of hard work and effort, even if you followed the standard 100 percent.

A natural extension of the argument that you shouldn't necessarily strive for 'database independence' is the idea that you should understand exactly what your specific database has to offer and make full use of it. This is not a section on all of the features that Oracle 8i has to offer. That would be an extremely large book in itself. The new features of Oracle 8i themselves fill a book in the Oracle documentation set. With about 10,000 pages of documentation provided by Oracle, covering each and every feature and function would be quite an undertaking. Rather, this is a section on why it would benefit you to get at least a cursory knowledge of what is provided.

As I've said before, I answer questions about Oracle on the web. I'd say that 80 percent of my answers are simply URLs to the documentation. People are asking how they might go about writing some complex piece of functionality in the database (or outside of it). I just point them to the place in the documentation that tells them how Oracle has already implemented it, and how to use it. Replication comes up this way frequently. I'll receive the question 'I would like to keep a copy of my data elsewhere. I would like this to be a read-only copy. I need it to update only once a day at midnight. How can I write the code to do that?' The answer is as simple as a CREATE SNAPSHOT command. This is built-in functionality in the database.
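
For instance, a read-only copy that refreshes itself every night at midnight is a single statement. A sketch, with an invented table and database link name:

create snapshot orders_copy
    refresh complete
    start with sysdate
    next trunc(sysdate) + 1     -- that is, the next midnight
as
    select * from orders@production_db;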

It is true you can write your own replication, and it might even be fun to do so, but at the end of the day, it would not be the smartest thing to do. The database does a lot of stuff. In general, it can do it better than we can ourselves. Replication, for example, is internalized in the kernel, written in C. It's fast, it's fairly easy, and it is robust. It works across versions and across platforms. It is supported, so if you hit a problem, Oracle's support team will be glad to help. If you upgrade, replication will be supported there as well, probably with some new features. Now, consider if you had developed your own. You would have to provide support for all of the versions you wanted to support. Inter-operability between 7.3 and 8.0 and 8.1 and 9.0, and so on — this would be your job. If it 'broke', you wouldn't be calling support. At least, not until you could produce a test case small enough to demonstrate your basic issue. When the new release of Oracle comes out, it will be up to you to migrate your replication code to that release.

Not having a full understanding of what is available to you can come back to haunt you in the long run. I was recently talking with some developers and their management. They were demonstrating a 'very cool' piece of software they had developed. It was a message-based system that solved the database queue problem. You encounter this problem in a database whenever many people want to use a table as a 'queue': you would like many sessions to be able to lock the next record in the queue, skipping over any previously locked records (those queue records are being processed already). The problem you encounter is that there is no documented syntax in the database for skipping locked rows. So, if you didn't know anything about Oracle's features, you would assume that if you wanted queuing software on top of the database, you would have to build it (or buy it).

That is what these developers did. They built a series of processes, and developed APIs for doing message queuing on top of the database. They spent quite a bit of time on it, and used quite a few man-hours to achieve it. The developers were quite sure it was unique. Immediately after seeing it, and hearing of its functionality, I had one thing to say — Advanced Queues. This is a native feature of the database. It solves the 'get the first unlocked record in the queue table and lock it for me' problem. It was right there all along. Their developers, not knowing that this feature existed, spent a lot of time and energy writing their own. In addition to spending lots of time in the past on it, they would be spending lots of time maintaining it in the future. Their manager was less than impressed upon discovering that the 'unique' piece of software in effect emulated a native database feature.
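
For reference, the heart of that native feature is a single call to DBMS_AQ. A sketch, assuming a queue MSG_QUEUE with a payload type MSG_TYPE has already been created and started via DBMS_AQADM:

declare
    l_dequeue_options    dbms_aq.dequeue_options_t;
    l_message_properties dbms_aq.message_properties_t;
    l_msgid              raw(16);
    l_message            msg_type;
begin
    -- locks and returns the next available message, skipping
    -- messages already locked by other consumers
    dbms_aq.dequeue( queue_name         => 'msg_queue',
                     dequeue_options    => l_dequeue_options,
                     message_properties => l_message_properties,
                     payload            => l_message,
                     msgid              => l_msgid );
    commit;
end;
/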

I have seen people in an Oracle 8i database set up daemon processes that read messages off of pipes (a database IPC mechanism). These daemon processes execute the SQL contained within the pipe message, and commit the work. They did this so that they could execute auditing in a transaction that would not get rolled back if the bigger transaction did. Usually, if a trigger or something similar were used to audit an access to some data, but a statement failed later on, all of the work would be rolled back (see Chapter 4, Transactions, where we discuss this statement-level atomicity in some detail). So, by sending a message to another process, they could have a separate transaction do the work and commit it. The audit record would stay around, even if the parent transaction rolled back. In versions of Oracle before Oracle 8i, this was an appropriate (and pretty much the only) way to implement this functionality. When I told them of the database feature called autonomous transactions (we will take a detailed look at these in Chapter 15), they were quite upset with themselves. Autonomous transactions, implemented with a single line of code, do exactly what they were doing. On the bright side, this meant they could discard a lot of code and not have to maintain it. In addition, the system ran faster overall, and was easier to understand. Still, they were upset at the amount of time they had wasted reinventing the wheel. In particular, the developer who wrote the daemon processes was quite upset at having just written a bunch of 'shelf-ware'.
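
The 'single line of code' is the pragma in a sketch such as this (the AUDIT_LOG table and its columns are assumptions for the example):

create or replace procedure write_audit( p_msg in varchar2 )
as
    pragma autonomous_transaction;  -- the single line that does all the work
begin
    insert into audit_log( ts, msg ) values ( sysdate, p_msg );
    commit;  -- commits only this autonomous transaction
end;
/

The caller's transaction may still roll back, but the audit row, committed in its own transaction, remains.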

The above list of examples is something I see repeated time and time again — large, complex solutions to problems that are already solved by the database itself. Unless you take the time to learn what is available, you are doomed to do the same thing at some point. In the second section of this book, Database Structures and Utilities, we are going to take an in-depth look at a handful of the functionality provided by the database. I picked and chose the features and functions that I see people using frequently, or, in other cases, functionality that should be used more often but is not. It is only the tip of the iceberg, however. There is so much more to Oracle than can be presented in a single book.

There are always two ways to solve everything: the easy way and the hard way. Time and time again, I see people choosing the hard way. It is not always done consciously. More usually, it is done out of ignorance. They never expected the database to be able to do 'that'. I, on the other hand, expect the database to be capable of anything and only do it the 'hard' way (by writing it myself) when I discover it cannot do something.

For example, I am frequently asked 'How can I make sure the end user has only one session in the database?' (There are hundreds of other examples I could have used here.) This must be a requirement of many applications, although none that I've ever worked on — I've not found a good reason for limiting people in this way. However, people want to do it and when they do, they usually do it the hard way. For example, they will have a batch job, run by the operating system, that will look at the V$SESSION table and arbitrarily kill sessions of users who have more than one session. Alternatively, they will create their own tables and have the application insert a row when a user logs in, and remove the row when they log out. This implementation invariably leads to lots of calls to the help desk because when the application 'crashes', the row never gets removed. I've seen lots of other 'creative' ways to do this, but none is as easy as:


ops$tkyte@ORA8I.WORLD> create profile one_session limit sessions_per_user 1;
Profile created.

ops$tkyte@ORA8I.WORLD> alter user scott profile one_session;
User altered.

ops$tkyte@ORA8I.WORLD> alter system set resource_limit=true;
System altered.

That's it — now any user with the ONE_SESSION profile can log on only once. When I bring up this solution, I can usually hear the smacking of a hand on the forehead followed by the statement 'I never knew it could do that'. Taking the time to familiarize yourself with what the tools you have to work with are capable of doing can save you lots of time and energy in your development efforts.

The same 'keep it simple' argument applies at the broader architecture level. I would urge people to think carefully before adopting very complex implementations. The more moving parts you have in your system, the more things you have that can go wrong, and tracking down exactly where an error is occurring in an overly complex architecture is not easy. It may be really 'cool' to implement using umpteen tiers, but it is not the right choice if a simple stored procedure can do it better, faster, and with fewer resources.

I've worked on a project where the application development had been ongoing for over a year. This was a web application, to be rolled out to the entire company. The HTML client talked to JSPs in the middle tier, which talked to CORBA objects, which talked to the database. The CORBA objects would maintain 'state' and a connection to the database in order to maintain a session. During the testing of this system we found that they would need many front-end application servers and a very large database machine to support the estimated 10,000 concurrent users. Not only that, but stability was an issue at times, given the complex nature of the interaction between the various components (just exactly where in that stack is the error coming from and why? — that was a hard question to answer). The system would scale; it would just take a lot of horsepower to do it. Additionally, since the implementation used a lot of complex technologies, it would require experienced developers not only to develop it but also to maintain it. We took a look at that system and what it was trying to do, and realized that the architecture was a little more complex than it needed to be in order to do the job. We saw that simply by using the PL/SQL module of Oracle iAS and some stored procedures, we could implement the exact same system on a fraction of the hardware, and using less 'experienced' developers. No EJBs, no complex interaction between JSPs and EJBs — just the simple translation of a URL into a stored procedure call. This new system is still up and running today, exceeding the estimated user count and with response times that people do not believe. It uses the most basic of architectures, has the fewest moving pieces, runs on an inexpensive 4-CPU workgroup server, and never breaks (well, a tablespace filled up once, but that's another issue).

I will always go with the simplest architecture that solves the problem completely over a complex one, any day. The payback can be enormous. Every technology has its place — not every problem is a nail, so we can use more than a hammer in our toolbox.

There is another reason that I frequently see people doing things the hard way and again it relates to the idea that one should strive for 'openness' and 'database independence' at all costs. The developers wish to avoid using 'closed', 'proprietary' database features — even something as simple as 'stored procedures' or 'sequences' because that will lock them into a database system. Well, let me put forth the idea that the instant you develop a read/write application you are already somewhat locked in. You will find subtle (and sometimes not so subtle) differences between the databases as soon as you start running queries and modifications. For example, in one database you might find that your SELECT COUNT(*) FROM T deadlocks with a simple update of two rows. In Oracle, you'll find that the SELECT COUNT(*) never blocks for a writer. We've seen the case where a business rule appears to get enforced on one database, due to side effects of the database's locking model, and does not get enforced in another database. You'll find that, given the same exact transaction mix, reports come out with different answers in different databases — all because of fundamental implementation differences. You will find that it is a very rare application that can simply be picked up and moved from one database to another. Differences in the way the SQL is interpreted (for example, the NULL=NULL example) and processed will always be there.

On a recent project, the developers were building a web-based product using Visual Basic, ActiveX Controls, IIS Server, and the Oracle 8i database. I was told that the development folks had expressed concern that since the business logic had been written in PL/SQL, the product had become database dependent and was asked: 'How can we correct this?'

I was a little taken aback by this question. In looking at the list of chosen technologies I could not figure out how being database dependent was a 'bad' thing:

  • They had chosen a language that locked them into a single operating system and is supplied by a single vendor (they could have opted for Java).
  • They had chosen a component technology that locked them into a single operating system and vendor (they could have opted for EJB or CORBA).
  • They had chosen a web server that locked them into a single vendor and a single platform (why not Apache?).

Every other technology choice they had made locked them into a very specific configuration — in fact the only technology that offered them any choice as far as operating systems go was in fact the database.

Regardless of this — they must have had good reasons to choose the technologies they did — we still have a group of developers making a conscious decision to not utilize the functionality of a critical component in their architecture, and doing it in the name of 'openness'. It is my belief that you pick your technologies carefully and then you exploit them to the fullest possible extent. You have paid a lot for these technologies — would it not be in your best interest to exploit them fully? I had to assume that they were looking forward to utilizing the full potential of the other technologies — so why was the database an exception? An even harder question to answer in light of the fact that it was crucial to their success.

We can put a slightly different spin on this argument if we consider it from the perspective of 'openness'. You put all of your data into the database. The database is a very open tool. It supports data access via SQL, EJBs, HTTP, FTP, SMB, and many other protocols and access mechanisms. Sounds great so far: the most open thing in the world.

Then, you put all of your application logic, and more importantly your security, outside of the database. Perhaps in your beans that access the data. Perhaps in the JSPs that access the data. Perhaps in your Visual Basic code running under Microsoft's Transaction Server (MTS). The end result is that you have just closed off your database — you have made it 'non-open'. No longer can people hook in existing technologies to make use of this data — they must use your access methods (or bypass security altogether). This sounds all well and fine today, but what you must remember is that the 'whiz bang' technology of today, EJBs for example, was yesterday's concept and will be tomorrow's old, tired technology. What has persevered for over 20 years in the relational world (and probably most of the object implementations as well) is the database itself. The front ends to the data change almost yearly, and as they do, the applications that have all of the security built inside themselves, not in the database, become obstacles, roadblocks to future progress.

The Oracle database provides a feature called Fine Grained Access Control (Chapter 21 is dedicated to it). In a nutshell, this technology allows the developer to embed procedures in the database that can modify queries as they are submitted to the database. This query modification is used to restrict the rows the client will receive or modify. The procedure can look at who is running the query, when they are running the query, what terminal they are running the query from, and so on, and can constrain access to the data as appropriate. With FGAC, we can enforce security such that, for example:

  • Any query executed outside of normal business hours by a certain class of users returns zero records.
  • Any data can be returned to a terminal in a secure facility, but only non-sensitive information to a 'remote' client terminal.

Basically, it allows us to locate access control in the database, right next to the data. It no longer matters if the user comes at the data from a Bean, a JSP, a VB application using ODBC, or SQL*Plus — the same security protocols will be enforced. You are well situated for the next technology that comes along.
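
Mechanically, FGAC is a policy function registered against a table via the DBMS_RLS package; the function returns a predicate that the database appends to queries transparently. A sketch (the table, column, and predicate are invented for illustration):

create or replace function hide_sensitive( p_schema in varchar2,
                                           p_object in varchar2 )
return varchar2
as
begin
    if sys_context( 'userenv', 'ip_address' ) like '10.1.%'  -- the 'secure facility'
    then
        return null;                  -- no restriction at all
    else
        return 'sensitive = ''N''';   -- remote clients see non-sensitive rows only
    end if;
end;
/
begin
    dbms_rls.add_policy( object_schema   => 'OPS$TKYTE',
                         object_name     => 'EMP_DATA',
                         policy_name     => 'EMP_DATA_POLICY',
                         function_schema => 'OPS$TKYTE',
                         policy_function => 'HIDE_SENSITIVE',
                         statement_types => 'select' );
end;
/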

Now, I ask you — which implementation is more 'open'? The one that makes all access to the data possible only through calls to the VB code and ActiveX controls (replace VB with Java and ActiveX with EJB if you like — I'm not picking on a particular technology but an implementation here) or the solution that allows access from anything that can talk to the database, over protocols as diverse as SSL, HTTP and Net8 (and others) or using APIs such as ODBC, JDBC, OCI, and so on? I have yet to see an ad-hoc reporting tool that will 'query' your VB code. I know of dozens that can do SQL, though.

The decision to strive for database independence and total 'openness' is one that people are absolutely free to take, and many try, but I believe that it is the wrong decision. No matter what database you are using, you should exploit it fully, squeezing every last bit of functionality you can out of that product. You'll find yourself doing that in the tuning phase (which again always seems to happen right after deployment) anyway. It is amazing how quickly the database independence requirement can be dropped when you can make the application run five times faster just by exploiting the software's capabilities.
