Sunday, November 20, 2011

Revived Boston Area DBA SIG Meeting

The DBA SIG of the Northeast Oracle User Group has been revived (thank you, Lyson and Jeane) and I was honored to be the speaker at the first session of what I hope will be a long list of very successful ones, like the old days.

I started at 7 PM and finished at midnight - a solid 5 hours later! Thank you for your patience. It just made my day to have you in the audience that late. I hope you found it useful.

Here is the slide deck and the scripts I used during my session. As in the past, I cherish the moments and would highly appreciate your feedback.

Thursday, November 10, 2011

Revived NOUG DBA SIG Events

Congratulations to the North East Oracle User Group in the Boston area, which has restarted the DBA SIG. This was a highly successful program that used to be held after work hours on a weekday in the Oracle building in Burlington. I was privileged to have presented there from 2004 until its sad demise in 2009. Now, I am honored to be invited to be the first speaker in the revived program. I am presenting the session "Addressing Performance Issues during Change with Real Application Testing, Intelligent Stats and SQL Plan Baselines with Live Demos". More information here.

When: Nov 16th 6:30 PM to 9:00 PM
Where: Doubletree by Hilton at 5400 Computer Drive, Westborough, MA 01581
Food and drinks will be served.
The event is free to all NOUG members.

I will give away several Oracle books at the event for the best questions, etc.

If you plan on attending, the organizers respectfully request that you RSVP immediately. They need a reasonably accurate headcount to order food and drinks, which is yet another reason to attend.

Abstract: Change is inevitable - be it applying a patchset or creating an index. In this session you will learn how to harness the power of three major features of the Oracle Database to improve performance during any type of change, or at any other time. You will learn, with plenty of demos, how to configure and use Database Replay and SQL Performance Analyzer to predict performance, use extended statistics to make the optimizer more intelligent, and use SQL Plan Baselines to keep performance consistent yet open to further improvement.

As has been the norm, I plan to explain these concepts and techniques with lots of live demos. If you are in the Boston area, I sincerely hope to see you all there. I have nothing but pleasant memories of every one of the 5 times I have presented at that venue and expect nothing less this time.

Tuesday, October 18, 2011

AIOUG Webcast: Methodical Performance Tuning

A big thank you to all those who attended my session today. I sincerely hope you got something out of it. Here are the scripts I used in the demo. And here is the slide deck, if you are interested.

Remember, this was just the beginner's session. We will have intermediate and advanced ones in the near future. Stay tuned through the AIOUG site.

Thursday, October 06, 2011

Migration to Exadata Session at #OOW11

Considering it was the last session of #OOW11, I was surprised to see a sizable number of folks show up for my 3rd and final session, slated for 3 to 4 PM on Thursday. Thank you for attending and for your questions.

Here is the slide deck. Note: please do not click on the link. Instead, right-click on it, save the file and open it. It's a Powerpoint show, not a PPT. You can download the free Powerpoint viewer to watch it if you don't have Powerpoint installed.

Wednesday, October 05, 2011

Rise of the Machines

This is the penultimate day of #OOW11 and I am here in the hotel lobby trying to put some order around the myriad nuggets of information I have gathered over the last several days.

The announcements this year have centered around the introduction of various new products from Oracle - Oracle Database Cloud, Cloud Control, Database Appliance, Big Data Appliance, Exalytics, T4 Super Cluster and so on. One interesting pattern that emerges from the announcements, different from all the previous years, is the introduction of several engineered and assembled systems that perform some type of task - specialized or generic. In the past Oracle announced machines too, but not so many at the same time, leading to an observation by April Sims (Executive Editor, Select Journal) that this year can be summed up in one phrase - Rise of the Machines.

But many of the folks I met in person or online were struggling to wrap their heads around the whole lineup. It's quite clear that they were very unclear (no pun intended) on how these products are different and what situation each one would fit in. It's perfectly normal to be a little confused about the sweet spots of each product, considering the glut of information on them and their seemingly overlapping functionalities. In the Select Journal Editorial Board meeting we had earlier this morning, I committed to writing about the differences between the various systems announced at #OOW11 and their usages in the Select Journal 2012 Q1 edition. I didn't realize at the time what a tall order that was. I need to reach out to several product managers and executives inside Oracle to understand the functionality differences in these machines. Well, now that I have firmly put my foot in my mouth, I will have to do just that. [Update on 4/29/2012: I have done that. Please see below]

In the demogrounds I learned about Oracle Data Loader for Hadoop and Enterprise-R, two exciting technologies that will change the way we collect and analyze large data sets, especially unstructured ones. Another new technology, centered around Cloud Control, was Data Subsetting. It allows you to pull a subset of data from the source system to create test data, mask it if necessary and even find sensitive data based on a format. The tool was long overdue.

Again, I really need to collect my thoughts and sort through all that information overload I was subjected to at OOW. This was the best OOW ever.

Update on April 29th, 2012

I knew I had to wrap my head around these announcements and sort through the features available in the engineered machines. And I did exactly that. I presented a paper of the same name - Rise of the Machines - at Collaborate 2012, the annual conference of the Independent Oracle Users Group. Here is the presentation. In that session I explained the various features of 6 machines - Oracle Database Appliance, Exadata, Exalogic, Sparc Super Cluster, Exalytics and Big Data Appliance - the differences between them and where each one should be used. Please download the session if you want to know more about the topic.

Tuesday, October 04, 2011

Unicode Migration Assistant for Oracle

When you want to convert a database created in the default characterset to a multibyte characterset, there have traditionally been two basic approaches - the safe export/import and the not-for-the-faint-of-heart alter database convert internal. In either case you had to follow a string of activities - checking for the presence of incompatible values by running csscan, and so on.
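For context, a typical scan in that legacy workflow looked something like this (a sketch only; the connect string and characterset values here are illustrative, and the exact parameters may vary by release):

```
$ csscan \"sys/oracle as sysdba\" full=y fromchar=WE8ISO8859P1 \
    tochar=AL32UTF8 array=1024000 process=2
```

The scanner writes its findings to report files, which you then comb through by hand for convertible, lossy and truncated data before attempting the actual conversion.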

There is a new tool from Oracle to make the process immensely simpler - the Migration Assistant for Unicode. It's a GUI tool that you can install on the client. A server-side API (installed via a patch) does all the heavy lifting, with the client GUI providing a great, intuitive interface. The steps are pretty much laid out for you. But the main strength of the tool is not that. There are two primary differentiators for the tool.

  1. When you do have a bad character, what can you really do? You can truncate part of the data; but how do you know how much to truncate? If you truncate aggressively, you may shave off a chunk and lose valuable data; be miserly and you risk leaving the bad data in place. This tool shows the data in a separate window, allowing you to correct only the affected data; nothing less, nothing more.
  2. When users copy and paste data from some unicode-compliant system into Oracle, e.g. from MS Word into a VARCHAR2 field in the database, the characters may look garbled; but given the proper characterset they become meaningful. This tool allows you to see the data in many charactersets to identify which one was used to create it in the first place. After that it's a simple matter to reproduce those characters in the proper characterset.

With these two differentiators in place, the tool has a great future. Check out everything on this tool, or just visit the booth at the #OOW Demogrounds in Moscone South.

Oh, did I mention that the tool is free?

OOW11 Session #2 Exadata Management

Thank you all for attending my second session in #OOW11 - Exadata Management. You can download the slide deck here. Important: DO NOT CLICK on this link; instead, right click on this link, save the file and then open it.


Here are the resources I referred to in the presentation. Please note: URLs could change without my knowledge.

My Papers
5-part Linux Commands article series
4-part Exadata Command Reference article series
OTN Page on Exadata
OTN Exadata Forum

Thanks for attending. As always, your feedback will be highly appreciated.

Sunday, October 02, 2011

OOW11 Presentation: Exadata for Oracle DBAs

Thank you very much for attending my session "Exadata for Oracle DBAs" at #OOW11 (Oracle Open World), 2011. Here are the links I mentioned in the presentation:

5-part Linux Commands article series
4-part Exadata Command Reference article series

You can download the Powerpoint show here (please don't click. Right click and download it to watch)

Update: If you have downloaded the slides earlier, please redownload. I corrected a small inaccuracy - the compression is handled by compute nodes; decompression can be offloaded to the cells. Thanks to Greg Rahn for pointing it out.

Of course, please remember that your feedback - good or bad - is always valued.

Monday, August 29, 2011

Setting Up Oracle Connection Manager

The Problem

It seems really simple. We have an Oracle database (on all nodes of a full rack Exadata, to be exact), which a lot of end-users connect to through apps designed in a rather ad hoc and haphazard manner - Excel spreadsheets, Access forms, TOAD reports and other assorted tools. We want to control the access from these machines and streamline it.

The database machine sits behind a firewall. Allowing the ad hoc tools to access the database from the client machines means we have to change the firewall rules. Had it been one or two clients, that would have been reasonable; but with 1000+ client machines, it becomes impractical. So I was asked to provide an alternative solution.

The Solution

This is not a unique problem; it's the same problem whenever machines need to access resources that sit across firewalls. The easy solution is to punch a hole through the firewall to allow that access; but that is not desirable for obvious security reasons. A better solution, often implemented, is to have a proxy server. The proxy sits between the two layers of access and can reach the servers behind the firewall. Clients make their requests to the proxy, which passes them on to the server.

Such a proxy solves the problem; but we are looking for a simpler solution. Does one exist?

Yes, it does. The answer is Connection Manager from Oracle. Among its many functions, one stands out - it acts as a proxy between the different layers of access and passes the requests through. It's not a separate product; it is an option in the Oracle Client software (not the database or grid infrastructure software). This option is not automatically installed. When installing the client software, choose "Custom" and explicitly select "Connection Manager" from the list.
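As a sketch of what this looks like in practice (the hostnames, ports and service name below are assumptions, not taken from the actual setup), the proxy behavior is driven by cman.ora on the Connection Manager host, and clients route through it with SOURCE_ROUTE in tnsnames.ora:

```
# cman.ora on the Connection Manager host (hypothetical names)
cman_proxy =
  (CONFIGURATION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cmanhost)(PORT = 1521))
    (RULE_LIST =
      # accept connections from any source destined for the database host
      (RULE = (SRC = *)(DST = dbhost)(SRV = *)(ACT = accept))
    )
  )

# tnsnames.ora on the client: route via the Connection Manager
PROD =
  (DESCRIPTION =
    (SOURCE_ROUTE = YES)
    (ADDRESS = (PROTOCOL = TCP)(HOST = cmanhost)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prod))
  )
```

With something along these lines in place, only the Connection Manager host needs a firewall rule to reach the database; the 1000+ clients talk to the proxy alone. The Connection Manager process itself is started and stopped with the cmctl utility.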

Sunday, July 24, 2011

Difference between Select Any Dictionary and Select_Catalog_Role

When you want to give a user the privilege to select from data dictionary and dynamic performance views such as V$DATAFILE, you have two options:

grant select any dictionary to <username>;
grant select_catalog_role to <username>;

Did you ever wonder why there are two options for accomplishing the same objective? Is one of them redundant? Won't it make sense for Oracle to have just one privilege? And, most important, do these two privileges produce the same result?

The short answer to the last question is -- no; these two do not produce the same result. Since they are fundamentally different, there is a place for each of them. One is not a replacement for the other. In this blog I will explain the subtle but important differences between the two seemingly similar privileges and how to use them properly.

Create the Test Case

First, let me demonstrate the effects with a small example. Create two users called SCR and SAD:

SQL> create user scr identified by scr;
SQL> create user sad identified by sad;

Grant the necessary privileges to these users, taking care to grant a different one to each user.

SQL> grant create session, select any dictionary to sad;

Grant succeeded.

SQL> grant create session, select_catalog_role to scr;

Grant succeeded.

Let's test to make sure these privileges work as expected:

SQL> connect sad/sad

SQL> select * from v$session;
 ... a bunch of rows come here ...

SQL> connect scr/scr

SQL> select * from v$datafile;
 ... a bunch of rows come here ...

Both users have the privilege to select from the dictionary views, as we expected. So what is the difference between these two privileges? To understand that, let's create a procedure on the dictionary views in each schema. Since we will create the same procedure twice, let's first create a script, which we will call p.sql. Here is the script:

create or replace procedure p as
    l_num number;
begin
    select count(1)
    into l_num
    from v$session;
end;
/

The procedure is very simple; it merely counts the number of connected sessions by querying V$SESSION. Connect as SAD and create the procedure by executing p.sql:

SQL> @p.sql

Procedure created.

The procedure was created properly; but when you connect as SCR and execute the script:

SQL> @p.sql

Warning: Procedure created with compilation errors.

SQL> show error
Errors for PROCEDURE P:

LINE/COL ERROR
-------- ------------------------------------------------
4/2      PL/SQL: SQL Statement ignored
6/7      PL/SQL: ORA-00942: table or view does not exist

That must be perplexing. We just saw that the user has the privilege to select from the V$SESSION view. You can double-check that by selecting from the view one more time. So why did it report ORA-00942: table or view does not exist?

Not All Privileges have been Created Equal

The answer lies in the way Oracle performs compilations. To compile code referencing a named object, the user must have been granted privileges on that object by a direct grant; not through a role. Selecting or performing DML does not care how the privileges were received; the SQL will work as long as the privileges are there. The privilege SELECT ANY DICTIONARY is a system privilege, similar to CREATE SESSION or UNLIMITED TABLESPACE. This is why the user SAD, which had the system privilege, could successfully compile the procedure P.

The user SCR had the role SELECT_CATALOG_ROLE, which allowed it to SELECT from V$SESSION but not to create the procedure. Remember, to create an object on a base object, the user must have a direct grant on the base object; not one through a role. Since SCR had only the role, not a direct grant on V$SESSION, it can't compile the procedure.

So while both privileges allow the users to select from the dictionary views, the role does not allow the users to create objects on them; the system privilege does.
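You can see this difference in the data dictionary itself: a direct system privilege shows up in DBA_SYS_PRIVS, while role-based access shows up only in DBA_ROLE_PRIVS. A quick check, run as a suitably privileged user:

```sql
SQL> select grantee, privilege
  2  from dba_sys_privs
  3  where grantee in ('SAD','SCR');

SQL> select grantee, granted_role
  2  from dba_role_privs
  3  where grantee in ('SAD','SCR');
```

SAD will show SELECT ANY DICTIONARY in the first query; SCR will show SELECT_CATALOG_ROLE in the second. The PL/SQL compiler honors only the first kind of grant.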

Why the Role?

Now that you know how the privileges are different, you may be wondering why the role is even there. It seems that the system grant can do everything and there is no need for a role. Not quite. The role has a very different purpose. Roles provide privileges; but only when they are enabled. To see which roles are enabled in a session, use this query:

SQL> connect scr/oracle
SQL> select * from session_roles
  2  /

ROLE
------------------------------
SELECT_CATALOG_ROLE
HS_ADMIN_SELECT_ROLE

2 rows selected.

We see that two roles - SELECT_CATALOG_ROLE and HS_ADMIN_SELECT_ROLE - have been enabled in the session. The first one was granted to the user. The other one is granted to the first one; so that was also enabled.

Just because a role was granted to the user does not necessarily mean that the role will be enabled. The roles which are marked DEFAULT for the user will be enabled; the others will not be. Let's see that with an example. As the SYS user, execute the following:

SQL> alter user scr default role none;

User altered.

Now connect as SCR user and see which roles have been enabled:

SQL> connect scr/oracle
SQL> select * from session_roles;

no rows selected

None of the roles has been enabled. Why? That's because none of the roles is default for the user (the effect of the ALTER USER statement executed by SYS). At this point, when you select from a dynamic performance view:

SQL> select * from v$datafile;
select * from v$datafile
              *
ERROR at line 1:
ORA-00942: table or view does not exist

You get this error because the role is not enabled, or active. Without the role, the user does not have any privilege to select from the data dictionary or dynamic performance views. To enable the role, the user has to execute the SET ROLE command:

SQL> set role SELECT_CATALOG_ROLE;

Role set.

Checking the enabled roles:

SQL> select * from session_roles;

ROLE
------------------------------
SELECT_CATALOG_ROLE
HS_ADMIN_SELECT_ROLE

2 rows selected.

Now the roles have been enabled. Since the roles are not default, the user must explicitly enable them using the SET ROLE command. This is a very important characteristic of roles: we can control how the user gets the privilege. Merely granting a role to a user does not enable it; the user's action is required, and that action can be performed programmatically. In security-conscious environments, you may want to take advantage of that property. A user does not always hold the privilege; but when needed, the user is able to enable it.

The SET ROLE command is a SQL statement. To achieve the same effect from PL/SQL, use this:

SQL> begin
  2     dbms_session.set_role ('SELECT_CATALOG_ROLE');
  3  end;
  4  /

You can also set a password for the role, so it will be enabled only when the correct password is given:

SQL> alter role SELECT_CATALOG_ROLE identified by l       
  2  /

Role altered.

To set the role, you have to give the correct password:

SQL> set role SELECT_CATALOG_ROLE identified by l;

Role set.

If you give the wrong password:

SQL> set role SELECT_CATALOG_ROLE identified by fl
  2  /
set role SELECT_CATALOG_ROLE identified by fl
ERROR at line 1:
ORA-01979: missing or invalid password for role 'SELECT_CATALOG_ROLE'

You can also revoke the execute privilege on DBMS_SESSION from PUBLIC. After that, the user will not be able to use it directly to set the role. You can construct a wrapper procedure to call it instead; inside the wrapper you can have all sorts of checks and balances to make sure the call is acceptable.
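As a hypothetical sketch of such a wrapper (the procedure name and the time-of-day check are purely illustrative; note that DBMS_SESSION.SET_ROLE can only be called from an invoker's rights procedure):

```sql
create or replace procedure enable_catalog_role
   authid current_user  -- required: set_role is not allowed in definer's rights code
as
begin
   -- illustrative check: allow the role only during business hours
   if to_number(to_char(sysdate, 'HH24')) between 8 and 18 then
      dbms_session.set_role('SELECT_CATALOG_ROLE');
   else
      raise_application_error(-20001, 'Role not available at this time');
   end if;
end;
/
```

You would then grant execute on this wrapper to the users instead of giving them direct access to DBMS_SESSION.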

We will close this discussion with a tip. How do you know which roles are default? Simply use the following query:

SQL> select granted_role, default_role
  2  from dba_role_privs
  3  where GRANTEE = 'SCR';

GRANTED_ROLE                   DEF
------------------------------ ---
SELECT_CATALOG_ROLE            NO

Thanks to Randolph Geist and Pavel Ruzicka for pointing out yet another important difference. SELECT ANY DICTIONARY allows selecting from all SYS-owned tables such as TAB$, USER$, etc. This is not possible with SELECT_CATALOG_ROLE. This difference may seem trivial; but it is actually quite important in some cases. For instance, the latest versions of Oracle do not show the password column in DBA_USERS; but the hashed password is visible in the USER$ table. It's not possible to reverse-engineer the password from the hash value; but it is possible to match it to a similar entry and guess the password. A user with the system privilege will be able to do that; a user with the role will not.


In this blog entry I started with a simple question - what is the difference between two seemingly similar privileges, SELECT ANY DICTIONARY and SELECT_CATALOG_ROLE? The former is a system privilege, which remains active throughout the session and allows the user to create stored objects on the objects it can access as a result of the grant. The latter is not a system grant; it's a role, which does not allow the grantee to build stored objects on the granted objects. The role can also be non-default, which means the grantee must execute SET ROLE or an equivalent command to enable it. The role can also be password protected, if desired.

The core message you should get from this is that roles are different from privileges. Privileges allow you to build stored objects such as procedures on the objects on which the privilege is based. Roles do not.

Who Manages the Exadata Machine?

For organizations that just procured an Exadata machine, one of the big questions is bound to be about the group supporting it. Who should it be - the DBAs, Sys Admins, Network Admins, or some blend of multiple teams?

The conventional Oracle database system is a combination of multiple distinct components - servers, managed by system admins; storage units, managed by SAN admins; network components such as switches and routers, managed by network admins; and, of course, the database itself, managed by the DBAs. Exadata has all those components - servers, storage (as cell servers), infiniband network, ethernet network, flash disks, the whole nine yards - but packaged inside a single physical frame representing a single logical unit: a typical engineered system. (For a description of the components inside the Exadata system, please see my 4-part article series on Oracle Technology Network.) None of these conventional technology groups possesses the skillset to manage all these components. That leads to a difficult but important decision - how should the organization assign the operational responsibilities?


There are two choices for organizations to assign administrative responsibilities.

  1. Distributed - Have these individual groups manage the respective components, e.g. Sys Admins managing the Linux servers, the storage admins managing the storage cells, network admins managing the network components and finally DBAs managing the database and the cluster.
  2. Consolidated - Create a specialized role - Database Machine Administrator (DMA) - and have one of the existing groups expand its skillset to include the other, less familiar areas.

Each option has its own pros and cons. Let's examine them and see if we can get the right fit for our specific case.

Distributed Management

Under this model, each component of Exadata is managed as an independent entity by the group traditionally used to manage that type of infrastructure. For instance, the system admins would manage the Linux OS, overseeing all aspects of it, from creating users to applying patches and RPMs. The storage and database would likewise be managed by the specialist teams.

The benefit of this solution is its seeming simplicity - components are managed by their respective specialists without a need for advanced training. The only need for training is for storage, where the Exadata Storage Server commands are new and specific to Exadata.

While this approach seems a no-brainer on the surface, it may not be so in reality. Exadata is not just something patched together from these components; it is an engineered system. There is a huge meaning behind that qualifier. These components are not designed to act alone; they are put together to make the entire structure a better database machine. And note the stress here - not an application server, not a fileserver, not a mail server, not a general-purpose server - but a database machine alone. This means the individual components - the compute nodes, the storage servers, the disks, the flashdisk cards and more - are tuned to achieve that overriding objective. Any incremental tuning of any specific component has to be done within the framework of the entire frame; otherwise it may fail to produce the desired result or, worse, produce an undesirable one.

For instance, the disks where the database resides are attached to the storage cell servers, not the database compute nodes. The cell servers, or cells, run Oracle Enterprise Linux, which is very similar to Red Hat Linux. Under this model of administration, the system admins are responsible for managing the operating system. Suppose a system admin looks at a host and determines that it is undertuned because the filesystem cache is very low. On a normal Linux system, that would be a correct observation; but in Exadata the database is in ASM, so a filesystem cache is far less important. The cells, on the other hand, need that memory to hold the Storage Indexes on the disk contents. A large filesystem cache not only does nothing to help the filesystem; it actually hurts performance by forcing the Storage Indexes to be paged out.

This is just one example of how the engineered systems are closely interrelated. Assuming they are separate and assigning multiple groups with different skillsets may not work effectively.

Database Machine Administrator

This leads to the other approach - making a single group responsible for the entire frame, from storage to the database. A single group would understand the impact of changes in one component on the overall effectiveness of the rack and would be in a better position to plan and manage. The single role that performs the management of Exadata is known as the Database Machine Administrator (DMA).

I can almost hear the questions firing off inside your brain. The most likely question probably is whether it is even possible to have a single skillset that encompasses storage, system, database and network.

Yes, it definitely is. Remember, the advantages of an engineered system do not stop at carefully coordinated individual components. Another advantage is the reduced number of controls in those components: there are fewer knobs to turn on each component of an Exadata system. Take the operating system, for instance. There are two types of servers - the compute nodes and the cells. In the cells, the activity performed by a system admin is severely limited - almost to the point of being none. On the compute nodes the activities are limited as well; the only allowable ones are setting up users, setting up email relays, possibly setting up an NFS mount, and a handful more. These can easily be done by a non-expert. One does not have to be a system admin to manage these servers.

Consider storage, the other important component. Traditionally, storage administrators perform critical functions such as adding disks, carving out LUNs, managing replication for DR and so on. These functions are irrelevant in Exadata: the disks are preallocated, the LUNs are created at installation time, and there is no storage-level replication since DR is handled by Data Guard at the Oracle database level. One need not be a storage expert to perform the storage tasks in Exadata. Additionally, storage admins are experts in a specific brand of storage, e.g. EMC VMax or IBM XIV. The Exadata storage is different from all the other brands your storage admins may be managing; they would have to learn about it anyway. So why not have someone else, specifically the DMA, learn it instead?

Consider the network. In Exadata the network components are very limited, since they serve only the components inside the rack. This reduces the configuration flexibility compared to a regular, general-purpose network. The special kind of hardware used in Exadata - Infiniband - requires special skills which the network ops folks may have to learn anyway. So why not the DMAs instead? Besides, Oracle already provides a lot of tools to manage this layer.

That leaves the most visible component - the database - which is, after all, the heart and soul of Exadata. This layer is amenable to a considerable degree of tuning, and depth of skill in this layer is vital to managing Exadata effectively. Transferring the skills needed here to a non-DBA group or individual is difficult, if not impossible. This makes the DBA group the most natural choice to evolve into the DMA role after absorbing the other relevant skills. Those other skills need not be at par with those of the administrators of the respective components. For instance, the DMA does not need to be a full-scale Linux system admin; he or she just needs to know a few relevant concepts, commands and tools to perform the job well. Network management in Exadata is a fraction of the skills expected from a network admin. The storage management in the cell servers is new to every group; so the DMA will find it as easy as anyone else, if not easier.

By understanding the available knobs on all the constituent components of Exadata, the DMA is better prepared to be an effective administrator of the Exadata system than a set of generally autonomous groups divvying up the activities among themselves. The advantages are particularly visible when troubleshooting or patching Exadata. Hence, I submit here for your consideration a new role - the DMA (Database Machine Administrator) - for the management of Exadata. The role should have the following skillsets:

60% Database Administration
20% Cell Administration
15% Linux Administration
5% Miscellaneous (Infiniband, network, etc.)

I have written an article series on Oracle Technology Network - Linux for Oracle DBAs. This 5-part article series has all the commands and concepts the Oracle DBA should understand about Linux. I have also written a 4-part article series - Commanding Exadata - for DBAs to learn the 20% cell administration. With these two, you will have everything you need to be a DMA. Scroll down to the bottom of this page and click on "Collection of Some of My Very Popular Web Articles" to locate all these articles and more.


In this blog entry, I argued for creating  a single role to manage the Exadata system instead of multiple groups managing individual parts. Here are the reasons in a nutshell:

  1. Exadata is an engineered system where all the components play collaboratively instead of as islands. Managing them separately may be ineffective and detrimental.
  2. The support organizations for components such as systems, storage, databases, etc. are designed with a generic purpose in mind. Exadata is not generic. Its management needs unprecedentedly close coordination among the various groups, which may be new to the organization and perhaps difficult to implement.
  3. The needed skillsets are mostly database centric; other components have very little to manage.
  4. These other skills are easy to add to the DBA skillset, making the transition to the DMA role a natural one.

Best of luck in becoming a DMA and implementing Exadata.

Sunday, May 15, 2011

Feedback on Tanel Poder's AOTS #2 Class

In my earlier post I wrote about Tanel Poder's Virtual Class #2. Tanel is a world-famous expert in Oracle internals. I was fortunate to be able to attend the class on all days except Friday. It was a solid 4 hours of learning every day. Let me summarize some of the key points I appreciated as an attendee; I hope it will be useful if you are deciding whether to attend a future one.

(1) The beginning of the class explained the concepts like latching and locking pretty well
(2) Next in line was understanding "why" they occur; not "how much"
(3) Reinforced the fact that latches smell like, look like, behave like locks; and are kind of locks
(4) Differentiated between soft and hard parses and explained the role of cursor_sharing parameter
(5) Differentiated between library cache pins, locks and latches

But if there is one benefit I would call priceless, it is the holistic tuning: using the key concepts to understand behavior all the way from the user response time down to tracing the Unix process to find where the bottlenecks are. Tanel explained all that with a live example to cement the knowledge. All in all, an excellent class by an excellent instructor.

Friday, May 06, 2011

Tanel Poder's Virtual Class #2

On a fine day in some month in some year, a little scene is playing out in Acme Bank's IT department.

Developer (with the best I-told-you-these-DBAs-are-morons face) harrumphs, "The database is slow again!!! Does anybody even care?"

Junior DBA enthusiastically responds with a slightly belligerent tone, "Of course we do. I opened a ticket. The ticket# is ...."

Mid-level DBA announces with an officious tone, "The session is waiting for latches," as if that explains everything.

Mid-level DBA aspiring to be the top dog chimes in, "the session has waited 123.904567 seconds on the cache buffer chains latch, 24.096534 seconds on the shared pool latch, 456.742340 seconds on db file scattered read ...." and rattles off the list of wait events, measured precisely by taking a trace and/or looking at an esoteric mix of v$ and x$ views that would drive even Graham Wood mad.

Everyone looks at the Senior DBA, who pronounces in the gravest voice, "My intuition says this is caused by the application change that went in yesterday."

The developers gasp, but before their string of protests can flow in, the user asks innocently, "OK. How do we solve it?"


Exaggeration and over-dramatization aside, I think you will appreciate the simple truth in this scenario, which is not really that far from reality in many cases. At the end of the day you have to "solve" the problem. Measuring the wait events to microsecond precision helps, but it does not identify the root cause. You must know what causes the latch. No - you must understand what causes the cache buffer chains latch, and then go on a hunt to stamp out the root cause. The tenets of solving a problem are understanding why it happens in the first place, identifying all possible causes, eliminating the irrelevant ones by evidence, and finally zeroing in on a solution based on the findings. Intuition helps; but it is not a replacement, not by a long shot, for understanding the inner workings.

So, Acme hires an "expert", who did all that and, in the end, solved the problem. The junior DBA closed the ticket. Everyone was happy.

Do you want to be that expert? Do you want to poke your head under the hood of the Oracle engine? If so, look no further. Tanel Poder has been hacking the Oracle database since, oh, I don't know, a long time ago. He has put together all his knowledge into a set of instructional modules, and he is offering it all as a virtual class. It runs May 9th to 12th, 11 AM to 3 PM (US Eastern Time) every day. Interested? Register here.

Oh, did I mention that this is a web-based class? That means you don't have to take even a step for it - you can be in bed in pajamas or in your cube at work. You get tons of material for learning, hacking and playing with the innards of Oracle, along with Tanel's world-famous TPT scripts - an alchemist's box for Oracle geeks.

I am excited to attend; and I highly recommend it to anyone seriously wishing to be that expert.

Sunday, April 24, 2011

Nulls in Ordering

You want to find out the tables with the highest number of rows in a database. Pretty simple, right? You whip up the following query:

select owner, table_name, num_rows
from dba_tables
order by num_rows desc;

And, here is the output:

OWNER         TABLE_NAME                       NUM_ROWS
------------- ------------------------------ ----------
SYS           L$1
SYS           L$2
... output truncated ...

Whoa! The NUM_ROWS column comes up blank for many tables. Those blanks are actually nulls. Why do they come up first? Because these tables have never been analyzed. CRM_ETL seems like an ETL user. The tables with GTT_ in their names appear to be global temporary tables, hence there are no statistics. The others belong to SYS and SQLTXPLAIN, which are Oracle default users and probably never analyzed. Nulls are not comparable to actual values, so they are neither less than nor greater than the others. Oracle treats a null as larger than any value, so by default nulls come last in an ascending sort and first in a descending one.

You need to find the tables with the highest number of rows fast. If you scroll down, you will eventually see those rows; but it takes time and makes you impatient. You could add a predicate such as where num_rows is not null, but it's not really elegant - and what if you want the tables with null num_rows as well, just at the end? That construct eliminates them altogether. So, you need a different approach.
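For completeness, the filter-based workaround mentioned above would look like this (a sketch; it simply drops the unanalyzed tables from the output):

```sql
-- Workaround with a filter: hides tables whose num_rows is null,
-- i.e. the ones never analyzed - which may not be what you want.
select owner, table_name, num_rows
from dba_tables
where num_rows is not null
order by num_rows desc;
```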

Nulls Last

If you want to fetch the nulls but push them to the end of the list rather than the beginning, add a new clause to the order by - NULLS LAST, as shown below.

select owner, table_name, num_rows
from dba_tables
order by 3 desc nulls last;

Here is the output:

OWNER            TABLE_NAME       NUM_ROWS
---------------- --------------- ----------
CRM_ETL          F_SALES_SUMM_01 1664092226
CRM_ETL          F_SALES_SUMM_02 948708587
CRM_ETL          F_SALES_SUMM_03 167616243
... output truncated ...

This solves the problem. The nulls will still be shown, but after the last of the rows with a non-null num_rows value.

Saturday, April 16, 2011

Can I Fit an 80MB Database Completely in an 80MB Buffer Cache?

This is in the Series "100 Things You Probably Didn't Know About Oracle". If you haven't already, I urge you to read the other parts -

  • Part1 (Commit does not force writing of buffers into the disk), 
  • Part2 (How Oracle Locking Works), 
  • Part3 (More on Interested Transaction Lists).

During the recently concluded Collaborate 11, I was asked a question which led me to the idea for this entry - the fourth in the series. If the database size is 100GB and you have a 100GB buffer cache, all the blocks will be in memory and there will never be any I/O against the datafiles other than the regular DBWR lazy writes, right?

This is a very important question and you must consider the implications carefully. Many folks assume that a large buffer cache eliminates or reduces buffer-related I/O - a very wrong assumption. A large buffer cache helps; but the relationship between buffers and blocks is not one-to-one. A block may have more than one buffer in the buffer cache. How so? Let's see how that happens. We will examine this in a non-RAC database to keep it simple.


First, let's create a table and insert some rows into it.

SQL> create table bhtest (col1 number, col2 char(2000));

Table created.

SQL> insert into bhtest values (&n,'&c');
Enter value for n: 1
Enter value for c: x
old   1: insert into bhtest values (&n,'&c')
new   1: insert into bhtest values (1,'x')

1 row created.

SQL> /
Enter value for n: 2
Enter value for c: x
old   1: insert into bhtest values (&n,'&c')
new   1: insert into bhtest values (2,'x')

1 row created.

... insert three more rows the same way, for a total of 5 ...

Note how I used char(2000) instead of varchar2(2000). The reason is simple. The char datatype always occupies the full declared length - all 2000 characters - regardless of the actual value stored. So, even though I put only 'x' there, each row is quite long, which forces the rows to spread across several blocks.
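A quick way to see the padding effect (a sketch using the table from the setup above) is to compare the stored lengths:

```sql
-- char columns are blank-padded to their full declared length,
-- so both length() and vsize() report 2000 even for a
-- one-character value like 'x'.
select col1, length(col2), vsize(col2)
from bhtest;
```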

After the insertion, check which blocks the rows went into. We can do that by calling the dbms_rowid.rowid_block_number() function.

select col1, dbms_rowid.rowid_block_number(rowid)
from bhtest;

      COL1 DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
---------- ------------------------------------
         4                                 4419
         5                                 4419
         1                                 4423
         2                                 4423
         3                                 4423

5 rows selected.

From the output we can see that rows 1 through 3 went to block 4423, and rows 4 and 5 went to block 4419. We also need the object ID of the table:

SQL> select object_id from dba_objects
  2  where object_name = 'BHTEST'
  3  /

 OBJECT_ID
----------
     99360

This completes the setup. In summary, we know that these rows are in blocks 4419 and 4423 and the object ID is 99360.


If possible, clear out the cache by shutting down and restarting the database; this brings up an empty cache. It's not absolutely necessary, though. Now select from the table:

SQL> select * from bhtest;  

This brings all the blocks of the table into the buffer cache. To check, you can query the view V$BH (the buffer headers). The column OBJD is the object_id (actually the DATA_OBJECT_ID; in this case both are the same, but they may not always be). Here are the columns of interest to us:

  • FILE# - the file_id 
  • BLOCK# - the block number 
  • CLASS# - the type of the block, e.g. data block, segment header, etc. Shown as a code 
  • STATUS - the status of the buffer: exclusive current (xcur), consistent read (cr), free, etc. 
  • LOCK_ELEMENT_ADDR - if there is a buffer lock on the buffer, then the address of the lock 

To make it simpler to understand, we will use a decode() on the class# field to show the type of the block. With that, here is our query:

select file#, block#,
    decode(class#,1,'data block',2,'sort block',3,'save undo block', 4,
'segment header',5,'save undo header',6,'free list',7,'extent map',
8,'1st level bmb',9,'2nd level bmb',10,'3rd level bmb', 11,'bitmap block',
12,'bitmap index block',13,'file header block',14,'unused',
15,'system undo header',16,'system undo block', 17,'undo header',
18,'undo block') class_type, status, lock_element_addr
from v$bh
where objd = 99360
order by 1,2,3;

Save this query because we will be using it a lot in this experiment. Here is the output.

---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     xcur       00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         xcur       00

7 rows selected.

There are 7 buffers. In this example we have not restarted the database, so there are two buffers for the segment header. There is one buffer for each data block - from 4419 to 4423. The status is "xcur", which stands for Exclusive Current. I will explain that in detail in a later blog; in summary, it means the buffer was acquired (or filled by a block) with the intention of being modified. If the intention were merely to select, the status would have been CR (Consistent Read). In this case, since the rows were inserted, modifying the buffers, the blocks were gotten in xcur mode.

From a different session, update a single row. For easier identification I have used Sess2> as the prompt:

Sess2> update bhtest set col2 = 'Y' where col1 = 1;

1 row updated.

From the original session, check the buffers:

---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     xcur       00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00

8 rows selected. 

There are 8 buffers now, up one from the previous seven. Note there are two buffers for block ID 4423. One CR and one xcur. Why two?

It's because when the update statement was issued, it would have modified the block. Instead of modifying the existing buffer, Oracle creates a "copy" of the buffer and modifies that. This copy is now [Note: there was a typo earlier - "not"; it should have been "now". Corrected. Thanks to Martin Bex] in XCUR status, because it was acquired for the purpose of being modified. The previous buffer of this block, which used to be xcur, is converted to "CR". There can't be more than one XCUR buffer for a specific block - that's why it is called exclusive. Anyone wanting the most recently updated copy of the buffer just has to look for the one with XCUR status; all others are marked CR.
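To see this at a glance, here is a hypothetical variation of the earlier V$BH query that simply counts the buffers per status for our object:

```sql
-- Count buffers by status for object ID 99360 (from the setup);
-- there should never be more than one xcur buffer per block.
select status, count(*)
from v$bh
where objd = 99360
group by status;
```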

Let's continue with this experiment. From a third session, update a different row in the same block.

Sess3> update bhtest set col2 = 'Y' where col1 = 2;

1 row updated.

From the original session, find out the buffers.

---------- ---------- ------------------ ---------- --------
         7       4418 segment header     xcur       00
         7       4418 segment header     cr         00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         cr         00

There are 4 buffers for block 4423 now - up from 2. What happened? Since the buffer needed to be modified once more, Oracle created yet another "copy", marked it "xcur" and relegated the older one to "cr". What about the extra CR copy? Oracle had to perform something called CR processing to create a CR copy from another CR copy or an XCUR copy.

Notice how the number of buffers proliferates. Let's change the experiment a little. From a 4th session, select from the table instead of updating a row:

Sess4>  select * from bhtest ;

From the original session, check for the buffers.

---------- ---------- ------------------ ---------- --------
         7       4418 segment header     xcur       00
         7       4418 segment header     cr         00
         7       4419 data block         xcur       00
         7       4420 data block         xcur       00
         7       4421 data block         xcur       00
         7       4422 data block         xcur       00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         cr         00
         7       4423 data block         xcur       00

12 rows selected.

Whoa! There are 12 buffers now. Block 4423 alone has 6 buffers - up from 4 earlier. This was merely a select statement, which, by definition, does not change data. Why did Oracle create more buffers for it?

Again, the answer is CR processing. CR processing creates copies of the buffer and rolls them back or forward to produce a CR copy as of the correct SCN. That created 2 additional CR copies here. From one block, you now have 6 buffers - some of them created by a mere select statement. This should answer the question of whether a buffer cache the size of the database will be able to hold all the buffers.

Free Buffers

While we are on the subject, let's ponder another question - what happens when you flush the buffer cache? Let's see.

SQL> alter system flush buffer_cache;

System altered.

Checking the buffers using the script shown earlier:

---------- ---------- ------------------ ---------- --------
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

The buffers are still there and are still marked as belonging to the object. However, the status is "free", i.e. the buffers can be reused if some session wants them for some other block. If a session wants a free buffer and can't find one, it waits on the wait event "free buffer waits". At that point, Oracle makes room in the buffer cache for the blocks requested by the session by forcing buffers out of the cache. The CR copies are discarded (since they are merely read-consistent copies) and the XCUR copies are written to disk (if not written already).

Who does the writing? The process known as DBWR - the Database Writer - whose processes are generally named DBW0, DBW1, etc. DBWR is a very lazy process: it sleeps most of the time, unless it is woken up by someone. In this case the session (actually, the server process) kicks DBWn to write the dirty buffers to disk and change their status to non-dirty. This is why a SELECT statement may sometimes generate writes to the data files. Until the buffers are freed up, the session waits patiently and displays to all that it's waiting on "free buffer waits". You can check it by selecting the EVENT column from V$SESSION.
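As a quick sketch, checking for sessions currently stuck on this event could look like the following (columns are from V$SESSION):

```sql
-- Which sessions, if any, are waiting on "free buffer waits"
-- right now, and for how long.
select sid, event, seconds_in_wait
from v$session
where event = 'free buffer waits';
```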

Impact on Segment Header

So far we have talked about data blocks. When does the segment header see some action? The segment header does not contain any data, so it does not need to be updated every time a row changes. But when the segment itself is modified, the segment header is updated. Let's see an example by adding a column:

SQL> alter table bhtest add (col3 number);

Table altered.

Check the buffers:

---------- ---------- ------------------ ---------- --------
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

16 rows selected.

There are 6 copies of the segment header. The table alteration changed the segment header block - block# 4418. Just like any other block, the buffer was copied over and modified, creating multiple copies of the block. When you issue another DDL operation - truncate - the result is the same:

SQL> truncate table bhtest; 

---------- ---------- ------------------ ---------- --------
         7       4416 1st level bmb      free       00
         7       4417 2nd level bmb      free       00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4418 segment header     free       00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     cr         00
         7       4418 segment header     free       00
         7       4419 data block         free       00
         7       4420 data block         free       00
         7       4421 data block         free       00
         7       4422 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00
         7       4423 data block         free       00

20 rows selected.

There are now additional segment header buffers, since truncate is a DDL command and the segment header is modified. Even though there was no data block of the table in the buffer cache (remember, we flushed it), the segment header still needed to be updated. Truncate also marks all blocks as free, resets the high water mark and updates the bitmap blocks. A bitmap block (BMB) is used in tablespaces with automatic segment space management (ASSM) to track whether a block is free, similar to the functionality of freelists. The truncate caused the BMBs (there are two - 1st level and 2nd level) to be modified as well, so they also come into the buffer cache.


From the above discussion you saw how a table with just two populated blocks filled the buffer cache with 20 buffers. Imagine a normal database with, say, 10,000 filled blocks (10,000 x 8KB = 80MB). It might easily fill 200,000 buffers, which at an 8KB block size amounts to 200,000 x 8KB = 1,600MB, or about 1.6GB of buffer cache. The number of buffers taken up depends on several factors:

(1) fewer buffers are consumed by selects
(2) fewer buffers are consumed if commits occur more frequently, since the demand for CR processing is lower
(3) more buffers are consumed if the activity updates buffers
(4) the more the logical I/O, the greater the need for buffers
(5) DDL activity increases the demand for buffers, even if the blocks are empty

Food for Thought

Now that you know how buffers are consumed, what do you think the situation is in a RAC environment, where buffers are transferred from one instance to another - better, worse, or about the same? The answer will come in a future blog. Stay tuned.

Fourth Day at Collaborate 11

Thank you to those who attended my fourth and last session Designing Applications for RAC at Collaborate 11. Considering it was at 8:30 AM on Thursday, right after the big party night, you must all be really committed to your work. Much appreciated.

You can get the slides here.

I look forward to seeing you all at Collab next year.

Second Day at Collaborate 11

Many thanks to those who attended my session RAC for Beginners despite the 8 AM timeslot. You must be really motivated. I hope you found the session useful and learned something you can start using.

You can download the slides here.

Thanks to those who came to my second session of the day - Secure Your Database in a Single Day. Hopefully that met your expectations. The slides are found here.

Monday, April 11, 2011

Day One of Collaborate 11 - My Session

Thank you everyone who came to my session which got off with a rocky start due to a recording issue in the facilities. I was also the manager of the Manageability Bootcamp which ran the whole day. My sincere thanks to the speakers who spoke on the track. We had some of the best speakers and best topics.

If you want to get a copy of my session - Wait Events in RAC - you can download it here. Three more sessions and two expert panels to go.

My next sessions

(1) RAC Customer Panel - 4/12 9:15 - 10:15
(2) HA Bootcamp Panel - 4/12 11:45 - 12:15
(3) RAC for Beginners - 4/13 8:00 - 9:00
(4) Secure your Database in a Single Day - 4/13 2:15-3:15
(5) Designing Applications for RAC - 4/14 8:30 - 9:30

I hope to see you all there.

Sunday, March 27, 2011

Tanel Poder's Online Seminars

What are good ways to learn about the inner workings of Oracle to troubleshoot performance and availability issues?

Fortunately there is only one good answer - just one good source - Tanel Poder's virtual seminars at

Why? Let's examine the typical objectives and the various means to accomplish them. I'm sure you have heard this complaint before - "the database is slow". If I had a dime every time I heard it, well ... you know the rest! Most DBAs by now know the next best thing they should do - check the wait interface - V$SESSION. That's a very good first step.

From the view they figured out the session is waiting on the cache buffer chains latch, and has been waiting 1234.56 seconds, measured to the precise microsecond and counting. They also got all the other data, such as the session ID, the statement, and so on and so forth. But then what? It's like a radio host reporting the stock market - x number of stocks are up, y are down, the Dow is up by n, the Nasdaq is down by m, and so on. All factually correct; but none of it helps answer your question - why your specific portfolio of stocks is down and what other stocks you should consider.

Of course, you have to pay a professional to get that information. With your database issue of the moment, that's what you have to do as well - you have to be a professional to decipher the further information. And that's the vital second step. After you identify the ailment, you have to understand how to eliminate it. Most seasoned professionals stop at the first step, when the second step is the most important. And that's where you need Tanel's highly acclaimed Advanced Oracle Performance Troubleshooting seminar. It's not based on slides; he shows the demos right in front of you, using code you can understand and reuse.

In this specific example, in the first part of his series, under the section "Latches", he shows you how to get the specifics of the latch - for instance, the latch children. There are some traditional fixes, of course; but the big question is: what if (and that is a big IF) these well-understood fixes do not work? What do you do next? Pray?

No - turn to Tanel's class. He explains how to get information from various sources inside the database. In this case the trick is to find out who has the latch, who is waiting for it, and which specific latch is so popular. This picture (from Tanel's class) helps. Reproduced with permission:

As they say, a picture is worth a thousand words. Now that you know how to find that popular latch, you are well on your way to troubleshooting. Hopefully this is all you have to do.

But what if it's not? What if the problem still eludes a solution? Have no fear. In his hang analysis section, Tanel talks about taking system dumps and explains how to analyze them - again with a few slides, and by showing the actual trace file and interpreting it right in front of you.

Talk about tough luck - suppose the problem is still not solved. Tanel takes a process stack dump. Yes, it is not for the faint of heart; but with the right training anyone can do it. He shows you how to take the stack trace and analyzes one right in front of you. Here is an example of how to interpret the stack trace (reproduced with permission):

And he explains each section, showing how to interpret the different data to arrive at the solution.

At this point no problem is big enough to stand up to this type of scrutiny. If the problem still persists, well, it's most likely a bug. Oh yes, Tanel covers the bugs that can affect performance and points to the relevant MetaLink notes as well.

Interested? I guess more like excited. I certainly am. Tanel's first class runs April 11-15. It's a virtual class, so you don't have to step away from wherever you are, in whatever state you are in. As long as you have an internet connection and an agile mind to absorb the superb information presented, you are in for a massive dose of superior learning.

What about the scripts Tanel uses in the class? Are they proprietary? Do you have to buy them to use them? Do you need to write them down so fast that you capture all the details? Not at all. As a part of the course, Tanel will expose his entire script library to you.

Interested? Oops; I lied! Actually, the scripts and tools are all free, even if you don't attend his class. They are at Download to your heart's content.

So, why is Tanel giving away his hard work for free? Well, buying the best tools from Home Depot will not make me a good carpenter; I must learn how to use them to build a deck. A script is only 20% of the solution; the other 80% is knowledge. And that 80% is what Tanel's virtual seminars are all about. I hope you make the right choice. I know I have.

Speaking at a Virtual Conference #VirtaThon

I was putting together abstracts for Oracle Open World #oow11 this year and remembered something someone had asked me earlier - to present at a conference like this, how does one overcome the fear of delivering a session? In fact, I have been asked that question several times.

This is not a trivial issue; it's a real problem. There are many folks who are otherwise excellent sources of knowledge - fountains of practical ideas, in fact - but when asked to speak in front of a live audience, they would rather kiss a frog than step in front of the podium. The mortal fear of public speaking is one of the many challenges in getting good speakers for conferences.

The second challenge is cost. Conferences are conducted at some physical place. Unless you live in that city, or within commutable distance, you have to travel there. Add to the plane fare, hotel, food, rental car and all the other extra expenses the emotional turmoil of being away from the family. Especially for folks with small children, who would miss one of their parents, it is not something you can just brush aside. Even if you are not a parent, you may be a caregiver to a loved one, and your absence will be hard on them.

Finally, change is not something most people like. You may be comfortable with your present surroundings, among familiar people you work with every day. Traveling to a new city and spending time with strangers may be exciting or daunting, depending on how you look at it. Perhaps you work from home every day in your PJs. Getting the wrinkles off the pants to go to a conference may be a lot to ask. At least for some people.

The answer to all this may be a trend I see developing now - web-based conferences. You can attend them in your PJs and speak at them in your PJs as well; no iron needed. One such conference is #VirtaThon ( where several speakers, most of them widely known in the Oracle user community, are speaking. Being a virtual conference, you don't have to travel anywhere to attend the sessions; you attend from the privacy and comfort of your own home or office. Remember, these are *not* webcasts; they are virtual conference sessions. So you actually participate in the sessions as you would in a normal conference - ask questions, interact via chat and have follow-ups after talks. If you are a speaker, there is nothing better - you don't have to travel to the venue. You need only a computer with an internet connection. And if you are uncomfortable speaking to a very live audience, it should be much easier speaking to a virtual one. No traveling, no TSA checks and no red-eye flights.

Just to pique your interest, here is just a sampling of the speakers (in no particular order)

  • Dan Hotka
  • Bert Scalzo
  • Riyaj Shamsudeen
  • Syed Jaffer Hussain
  • Steven Feuerstein
  • Jeremy Schneider
  • Guy Harrison 
  • Brian Huff
  • Lewis Cunningham
  • Mike Ault

And many more. Oh, I am speaking as well.

Interested? Submit an abstract at as soon as possible. The deadline is approaching fast.

Oh, yes, a little detail. You will not be speaking for free; you will earn money for your efforts. So, what's stopping you?

Saturday, March 19, 2011

Review: Oracle Application Express 4.0 Cookbook

I decided to add a twist to my usual blogs - a book review, which I have not done before. I have been reading Oracle Apex 4.0 Cookbook by Marcel van der Plas and Michel van Zoest, published by Packt, a UK-based publisher. Michel is one of the first Apex Certified Experts in the world - no small feat. The book has been technically reviewed by a well-known cast: Oracle ACE Director Dimitri Gielis, who also won Oracle Magazine's Apex Developer of the Year in '09; Maarten van Luijtelaar; and Oracle ACE and frequent blogger Surachart Opun.

Why Apex

Back in the early 90's there was a product called SQL*Forms - for those who are old enough to remember it. Up to version 3.0, the source code, known as the .INP file, was a text file. It was a breeze to generate a form to show data and even do some rudimentary editing. Since the source file was in text format, we used to open the file in the vi editor instead of the required Forms Designer and add other fields and records. It was very convenient, not just for non-SQL-savvy users but for us as well, to examine and easily fix incorrect data. It certainly beat writing long SQL statements. It was, in a way, one of the original Rapid Application Development environments for the Oracle database.

Forms 4.0 changed the convenience factor a little - the source file was no longer text and hence not editable in vi. We had to open the form in the Designer. However, the value proposition of the Forms product as a rapid application development tool remained. It was still possible to create the perfect quick-edit tool, or a quick application for the non-SQL-savvy user, in a jiffy. In fact, the added "true" graphical interface made it a competitor to the more popular GUI tools of those times.

Over the course of time, the product ceased to exist by itself. With the advent of the web environment, the inclusion of Java and the three-tier execution model, Forms assumed a more sophisticated avatar. Rather than being a standalone product, it had to lean on a crutch in the form of the Application Server. While that undoubtedly added reliability and scalability and brought it into the realm of real web development, it took away the convenience of rapid application development by non-developers such as DBAs.

That was so until about fifteen years ago, when Oracle introduced WebDB, a database development tool. It later morphed into a product called HTMLDB, which blossomed into Oracle Application Express. It not only brought web development to the Oracle database, it afforded quick application development - even development of applications by end users, not DBAs or developers. This shifted the whole paradigm in favor of the users, enabling them to roll out apps without much dependence on their more "skilled" colleagues.

The Book

Now back to the topic. Application Express (fondly called Apex) is pretty well documented and has been around for some time. It has matured as a product and seen its share of coverage in IT circles. There is no dearth of books, articles, blogs, code samples, etc. about Apex. So, with all that in place, how long will it take you to create your very first application (assuming you know Oracle reasonably well)?

You might be surprised to find that it will take a while. The manuals and typical books are usually geared towards imparting knowledge about a specific topic - Apex in this case. A quick start is not usually the objective. It's somewhat like the difference between the all_rows and first_rows hints.
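To make the analogy concrete, here is a minimal sketch of the two optimizer hints (the queries use the familiar demo EMP table; they are illustrative only, not from the book):

```sql
-- FIRST_ROWS(n): optimize to return the first rows quickly -
-- like a cookbook that gets you to a working result fast.
SELECT /*+ FIRST_ROWS(10) */ empno, ename, sal
FROM   emp
ORDER  BY sal DESC;

-- ALL_ROWS: optimize for total throughput of the whole result set -
-- like a book that builds complete knowledge from the ground up.
SELECT /*+ ALL_ROWS */ empno, ename, sal
FROM   emp
ORDER  BY sal DESC;
```

Both queries return the same rows; only the optimizer's goal - quick first response versus best overall completion time - differs, just as the two book styles differ in what they optimize for.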

In addition, most books are designed to build up knowledge through a series of logical steps - starting with concepts. It's very important to understand the concepts to build a strong foundation. However, there will be days when you just want to know how to do a certain thing. It's like checking a dictionary for the meaning of a word rather than reading a book about improving vocabulary, or learning about its etymology. It could happen to anyone - beginner or seasoned professional. I know; I thumb through my own book - RMAN Recipes - whenever I am stuck on a task. A cookbook format works well. This book is exactly that - a cookbook of recipes you can quickly refer to, instead of spending time searching for exactly where a topic is covered.

Detailed Review

  1. Apex is a GUI environment, so a book has to have plenty of screenshots to explain things properly. This book does not skimp on that aspect.
  2. It follows a very useful four-section approach for each recipe:
    * Getting Ready - the preparation
    * How to Do It - the actual work
    * How it Works - explanation of the work
    * See Also - more information on the topic
  3. It starts with how you can create a simple form on the EMP table.
  4. It has recipes for integrating several off-the-shelf utilities, e.g. Google Maps. In most applications today, maps are a choice de rigueur, not just in those that have a geo-spatial element.
  5. It covers creating apps in different languages, which is very useful in many cases.
  6. If you are already an expert in Apex, you understand how important it is to develop and distribute plug-ins. This book provides the easiest explanation of plug-ins and how to develop them.
  7. Using APIs and web services is explained lucidly.
  8. Finally, one of the thorniest topics - managing Apex environments - is adequately covered in two chapters.

All in all, this is a solid book for beginners and advanced users alike. I am sure you will like reading it. I am definitely keeping it as one of my carry-along books (I have the e-book).