The Oracle Lite Blog

The Rekounas Blog about Oracle's Mobile Enterprise Application Platform


Gaining performance out of Queue-Based refresh

March 9th, 2010 · 3 Comments

We have all been there. Users start syncing, MGP kicks in, some junior SQL developer has written a poorly executing batch job (yeah, I am blaming the junior PL/SQL guy) and all of a sudden, everything comes to a halt. Your online community can't perform any transactions, your syncing users are just sitting there twiddling their thumbs, support staff is getting all kinds of alerts and calls, the poorly written PL/SQL batch job that the junior developer wrote isn't finishing, and management is coming to you looking for answers because this is the third time in the last month that this scenario has happened… the perfect storm!

What do you do? You could tune the existing queries for your publication items to improve sync and MGP performance. Of course, there is always that poorly written and badly timed PL/SQL from the junior guy that is inserting/deleting a million records in one of your core tables that you are synchronizing to your application, and just when you notice it running alongside a sync or the MGP cycle, you see transactions against C$ALL_CLIENT_ITEMS blocking all kinds of sessions. Oracle recommends setting the following in webtogo.ora:

SKIP_INQ_CHK_BFR_COMPOSE=NO
DO_APPLY_BFR_COMPOSE=NO
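
For context, both of these are Consolidator parameters, so in a typical webtogo.ora they would sit with the rest of the sync server settings; a sketch (your file will already have other entries in that section):

[CONSOLIDATOR]
SKIP_INQ_CHK_BFR_COMPOSE=NO
DO_APPLY_BFR_COMPOSE=NO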

I prefer going to queue-based synchronization for situations like this. Why? The first reason is that it removes the triggers on the tables that are usually affected by the blocking locks. Secondly, if you have bulk operations on a table, the MGP LOG process can get bogged down logging all of the transactions; when that happens you often see deadlocks, not to mention blocking locks if the deadlock does not kill the MGP process quickly enough.

I am assuming that if you are reading this, you are probably familiar with the Mobile Workbench, have created your publication items through that mechanism, and felt really good about the fact that you did not have to write all kinds of stored procedures to perform your synchronizations. Well, roll up your sleeves, because you will have to write some PL/SQL to get a queue-based refresh working. In the Mobile Development Kit, there is a tutorial on how to go about creating a queue-based publication item (location: %ORACLE_HOME%\Mobile\Sdk\samples\Sync\win32\QBasedPI\).
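
If you have never looked at one before, the package behind a queue-based publication item exposes three procedures that the Mobile Server calls during sync. From memory of the SDK sample, the spec looks roughly like this (the package name is just an example; treat the readme.txt in the tutorial folder as the authoritative reference):

CREATE OR REPLACE PACKAGE my_queue_pkg AS
  -- Called after the client's uploaded transactions have landed in the in-queue
  PROCEDURE UPLOAD_COMPLETE(CLIENTID IN VARCHAR2, TRAN_ID IN NUMBER);
  -- Called before compose; this is where you populate the CTM$ out-queue tables
  PROCEDURE DOWNLOAD_INIT(CLIENTID IN VARCHAR2, LAST_TRAN IN NUMBER,
                          CURR_TRAN IN NUMBER, HIGH_PRTY IN NUMBER);
  -- Called once the client has picked up the composed data
  PROCEDURE DOWNLOAD_COMPLETE(CLIENTID IN VARCHAR2);
END my_queue_pkg;
/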

There is a readme.txt in that folder. I am not going to repost the tutorial, but I will talk about the aspects you should be aware of. The tutorial basically gives a complete-refresh example, which is not the best example for most applications, since there are also bi-directional publication items that you need to worry about. The real piece of code you should concentrate on is DOWNLOAD_INIT within the package. This is your download logic, and it should handle the following:

  • physical deletes
  • inserts
  • updates
  • logical deletes

Their example only handles inserts. If you want your publication item to perform incremental syncs, you will need a mechanism in the table that indicates which records need to be inserted, updated, or deleted. Here is a template of the mechanism I use. In our tables, we have an AUDIT_DATE field. A timestamp or date field is really useful for determining whether a record has been updated. Of course, you may need a trigger on the table, or you can trust those junior developers to have code in place that updates the AUDIT_DATE field.
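
If you go the trigger route, something like this keeps AUDIT_DATE current without trusting anyone (a sketch; the trigger name is made up and <TABLE_NAME> is your base table):

CREATE OR REPLACE TRIGGER trg_<table_name>_audit
BEFORE INSERT OR UPDATE ON <TABLE_NAME>
FOR EACH ROW
BEGIN
  -- stamp every inserted or updated row so the download logic can spot changes
  :NEW.AUDIT_DATE := SYSDATE;
END;
/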

The physical delete:
DELETE FROM CTM$<TABLE_NAME>
WHERE CLID$$CS = CLIENTID AND DMLTYPE$$ = 'D';

The insert logic:
INSERT INTO CTM$<TABLE_NAME> (CLID$$CS, FIELDS1, FIELDS2, DMLTYPE$$, TRANID$$)
SELECT CLIENTID, FIELDS1, FIELDS2, 'I', CURR_TRAN FROM <TABLE_NAME>
WHERE NOT EXISTS (SELECT 'X' FROM CTM$<TABLE_NAME>
WHERE CLID$$CS = CLIENTID AND <MATCH PRIMARY KEYS HERE>);

The update logic follows the same pattern.
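A minimal sketch, assuming the new field values are pulled from the base table for rows whose AUDIT_DATE has moved since the last refresh (<LAST_REFRESH_DATE> is a placeholder for however you track that):

UPDATE CTM$<TABLE_NAME> CTM
SET (FIELDS1, FIELDS2) =
      (SELECT FIELDS1, FIELDS2 FROM <TABLE_NAME> T WHERE <MATCH PRIMARY KEYS HERE>),
    TRANID$$ = CURR_TRAN,
    DMLTYPE$$ = 'U'
WHERE CLID$$CS = CLIENTID
AND EXISTS (SELECT 'X' FROM <TABLE_NAME> T
            WHERE <MATCH PRIMARY KEYS HERE>
            AND T.AUDIT_DATE > <LAST_REFRESH_DATE>);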

The logical delete:
UPDATE CTM$<TABLE_NAME>
SET TRANID$$ = CURR_TRAN,
DMLTYPE$$ = 'D'
WHERE AUDIT_DATE < ADD_MONTHS( SYSDATE, -6 ) AND CLID$$CS = CLIENTID;

Now, in my example, I just keep 6 months of data on the client. At the end of the day, you can decide all aspects of synchronization. Customize the synchronization to fit your business needs.
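
To put it all together, those four statements end up in the body of DOWNLOAD_INIT, with CLIENTID and CURR_TRAN coming straight from the procedure parameters. A bare-bones outline (the statements themselves are the ones shown above):

PROCEDURE DOWNLOAD_INIT(CLIENTID IN VARCHAR2, LAST_TRAN IN NUMBER,
                        CURR_TRAN IN NUMBER, HIGH_PRTY IN NUMBER) IS
BEGIN
  -- 1. Physical deletes: purge rows already sent down as 'D' for this client
  -- 2. Inserts: stage new base-table rows with DMLTYPE$$ = 'I' and TRANID$$ = CURR_TRAN
  -- 3. Updates: re-stage rows whose AUDIT_DATE has moved since the last refresh as 'U'
  -- 4. Logical deletes: mark rows outside the retention window (six months here) as 'D'
  NULL; -- placeholder; slot in the four statements from above
END DOWNLOAD_INIT;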


Tags: Oracle Lite

3 responses so far

  • 1 gary wilkinson wrote on Mar 10, 2010 at 5:29 am

    Greg,

    Interesting that you are looking at the queue based stuff as this does not seem to be used out there much.

    We deal with the incremental changes a bit differently since we use a lot of view-based PIs and the client workload can be quite dynamic. What we do is:
    1) insert into TMP_CTM$ (global temp table to improve performance) all data matching the PI query for the user
    2) insert into CTM$, currtran, dmltype$$='I' all records from TMP_CTM$ not in CTM$ with a tranid$$ of -2
    3) insert into CTM$, currtran, dmltype$$='U' all records from TMP_CTM$ in CTM$ with a tranid$$ of -2, but with data differences
    4) insert into CTM$, currtran, dmltype$$='D' all records from CTM$ with a tranid$$ of -2, but not in TMP_CTM$
    5) insert into CTM$, tranid$$=-2 all new records, update where tranid$$=-2 and updates done, delete where tranid$$=-2 and deletes created

    Essentially we keep a backup of what is on the client in the CTM$ table with a dummy tranid$$ of -2 for comparison purposes. It is also useful for checking the client data. There is functionality around a tranid$$ of -1 (new build) and positive tranids, hence the -2.

    We also found it best to do two passes of the upload in the upload_complete section, once for inserts and then one for updates, to get the weighting dependencies correct.


  • 2 rekounas wrote on Mar 10, 2010 at 9:17 am

    Thanks for the input Gary. We have implemented the use of global temporary tables as well. Actually, we only use two. Since most of our data is driven by these two GTTs, we are able to get away with using just these two. I am in the process right now of writing either dynamic SQL or a Java program that will build the queue-based SQL for me. I am still using the asynchronous approach for most of the tables, but one of my clients basically has a data warehouse that they are synchronizing, and keeping to the asynchronous approach just became a blocking lock nightmare.


  • 3 Tom wrote on Aug 12, 2012 at 11:37 pm

    Hi Greg,

    I implemented the logical delete suggested above against the publication item in Berkeley DB via my queue-based data sync package, but I encountered an "invalid snapshot data received" error when I ran msync.exe.

    By the way, I managed to perform the complete refresh and the insert and update incremental changes, but not the delete operation.

    Please let me know if you can think of anything that could possibly go wrong.

    Thanks, and I hope to hear from you soon!

