Sunday, February 25, 2007

MV from single byte to multibyte database, Part II

I think I have found a bug in 10.2.0.3.
I will try to describe it as briefly as I can.

I used CAST() to force data from an 8i database into a format that I preferred.

The first FAST refresh was OK, as the MV log was empty, but when data arrived it failed, because the 8i database could not handle/understand the CAST function.
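For illustration, here is a hypothetical reconstruction (my own table and column names) of the kind of statement I believe the refresh ships to the 8.1.7 site; my guess is that it is the CHAR length qualifier inside CAST(), which is 9i syntax, that the old parser cannot digest:

SELECT CAST(description AS VARCHAR2(256 CHAR)) description
FROM   mytable@remoteDB
WHERE  id IN (SELECT id FROM mlog$_mytable@remoteDB);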

Strangely, it worked if CLOBs were being replicated too.

I traced the two situations and saw that in both cases it first tries to SELECT CAST()... from ... @remoteDB.

But in the successful (CLOB) case, it then does
SELECT * from mlog$_mytable@remoteDB
SELECT * from mytable@remoteDB
SELECT * from mlog$_mytable@remoteDB
SELECT * from mytable@remoteDB (and now MANY waits)
UPDATE localTable...
UPDATE localTable...
....

So it seems that the introduction of the CLOB column somehow enables an exception handler that causes the refresh to fall back to another approach instead of failing.

We were lucky that, in the case of the MV without a CLOB, the master table did have a CLOB column, so when we added it to the MV, everything was fine.

Strange.

Thursday, February 22, 2007

MV from single byte to multibyte database

Today I did some interesting things (at work :-)

A customer needs to replicate a table from a single-byte character set database (8.1.7.4.1) to a 10.2.0.3 database with UTF8.

And the table contains CLOBs.

In their existing setup, all character columns are tripled in length (due to character set conversion), which means that a VARCHAR2(256) translates to VARCHAR2(768) in the MV.
This gives many problems, one being with indexes, as the index key cannot contain more than a little over 3000 bytes on this database (8k block size).

It is not possible (or at least not a good idea) to precreate a table with columns matching the origin table, as special characters might be stored with more than one byte, and thus we could end up with values that do not fit into the MV.

This is also the reason Oracle automatically expands the columns.

My idea was then to precreate a table with the character columns' lengths specified in CHARs, not BYTEs (this worked beautifully), and then create an MV on this prebuilt table, where in the SELECT I CAST() the columns to lengths specified in CHARs.
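A minimal sketch of the idea (table and column names are invented, and an MV log is assumed to already exist on the master table):

-- Prebuilt table on the UTF8 side, character lengths in CHARs, not BYTEs:
CREATE TABLE mytable
( id          NUMBER NOT NULL,
  description VARCHAR2(256 CHAR),
  doc         CLOB
);

-- MV on the prebuilt table, CASTing the character columns to CHAR lengths:
CREATE MATERIALIZED VIEW mytable
ON PREBUILT TABLE
REFRESH FAST
AS
SELECT id,
       CAST(description AS VARCHAR2(256 CHAR)) description,
       doc
FROM   mytable@remoteDB;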

This resulted in ORA-22992: the master table the materialized view references contains a CLOB column. (This is in fact a bug, 4043461, still not fixed.)
The solution is to create the materialized view without using the BUILD DEFERRED clause.

OK. This might even be good, I realized after a while.
*Maybe* I could create the MV without a prebuilt table and only CAST() the columns. And sure enough.

The first time, though, it failed with the annoying ORA-12060: shape of prebuilt table does not match definition query.
It was in fact an error-40: an error located 40 centimeters behind the monitor.
I had accidentally cut a "0" from a column, so it was defined as 100 CHARs and not 1000 CHARs, as it should have been.

When I fixed that error, it worked. Or so I thought. But no, not there yet.

The refresh jobs that were created for me by the CREATE MATERIALIZED VIEW statement called dbms_refresh(), and they failed consistently with an obscure error:

ORA-12008: error in materialized view refresh path
ORA-00907: missing right parenthesis
ORA-02063: preceding line from DLPROD
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2254
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2460
ORA-06512: at "SYS.DBMS_IREFRESH", line 683
ORA-06512: at "SYS.DBMS_REFRESH", line 195
ORA-06512: at line 2

Was it the 8.1.7 database not being able to handle my CAST() workaround?
My first thought was that it might be time to ask Support, but then I remembered the database version and how they would most likely just close the case unless it reproduced in a supported version.

I tried with dbms_mview.refresh('MV', METHOD => 'F'), and got the same error.
After working with it for a while, I found that this worked:

dbms_mview.refresh('MV', METHOD => 'F', NESTED => TRUE);
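For the record, here is the full call wrapped in a small PL/SQL block (the MV name is of course made up):

BEGIN
   -- Fast refresh; NESTED => TRUE makes the refresh work through the MV's
   -- dependency tree in dependency order. Why that matters in this case I
   -- have not dug into, but it was what finally worked.
   dbms_mview.refresh(list => 'MV', method => 'F', nested => TRUE);
END;
/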

FINALLY :-)

It refreshed.

Tuesday, February 13, 2007

Project - plan stability, Oracle 9.2

Recently I was asked to build a solution to analyze the impact of patching/upgrading a database and of gathering new optimizer statistics.

The (huge) application that runs on this database incorporates some of the best-of-breed technologies, such as hardcoded values, dynamic IN lists, etc. Also, the SQLs are generated very dynamically by the client, so the sheer number of SQLs is also huge, and it is literally impossible to analyze them by looking at the code.

Great.

So I decided to record the SQLs from v$sql etc. on the production instance, clone the database to a test server, and do the patch/upgrade plus statistics gathering there. The hope is then that most of the SQLs remain the same, and that we are able to categorize the ones that change execution plans into something digestible. (One tool I plan to use is Tom Kyte's utility to strip hardcoded values.)
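As a starting point for the capture on production, something along these lines should do (a rough sketch; the table name is my own, and it is worth double-checking exactly which of these columns exist in v$sql on 9.2 before relying on it):

-- Snapshot of the shared pool, to be compared with the plans we get
-- after the patch/upgrade and statistics gathering on the test clone.
CREATE TABLE sql_baseline AS
SELECT hash_value,
       plan_hash_value,
       executions,
       buffer_gets,
       disk_reads,
       sql_text          -- only the first part of the text; v$sqltext has the rest
FROM   v$sql;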

Hope is good.

Now, how do we work out some time estimates for this project - how many hours should we spend tuning the unknown number of statements that perform worse :-)