
Monday, September 21, 2009

Triggers and LOBs: Synchronous CDC and Synchronous Streams Capture

Just a quick post to outline some of the limits of using LOBs and triggers together; there is no secret, it's written in black and white in the manual: depending on how a LOB column is updated, triggers on the table will fire... or NOT! The reason is related to the design of the LOB locator, which allows data chunk updates without any reference to the changed row.

Anyway, I don't want to dig too much into the details of why things are the way they are. Instead, you'll find below some PL/SQL code samples. Those pieces of code show that if you work with the LOB locator, the triggers won't fire. On the other hand, if you update the LOB columns directly or create a locator, the triggers will fire.

Create a table and a "for each row" trigger

To begin, we'll create a table with a trigger. That trigger records, in a separate table named xlog, the number of times it fires:
create table x (id number, text clob)
lob(text) store as securefile;

create table xlog (id number, call number);

create or replace trigger x_trigger
after insert or update or delete on x
for each row
begin
if inserting then
insert into xlog (id, call) values (:new.id, 1);
else
update xlog set call=call+1 where id=:old.id;
end if;
end;
/
Note:
To test the Oracle Database 11g Release 2 Streams new features (in a later post), I've used a SecureFiles LOB.

Changes to LOBs do fire triggers

To begin, we change a few rows without using any persistent LOB locator. As you'll discover in the xlog table, every row change fires the trigger:
insert into x values (1,null);
update x set text='X' where id=1;
update x set text='XY' where id=1;
commit;

select x.text, xlog.call
from x, xlog
where x.id=1
and xlog.id=1;

TEXT CALL
---- ----
XY 3

Changes through the LOB locator don't fire triggers

On the other hand, if you change the data through the persistent LOB locator as shown below, you'll find that the trigger doesn't fire:
declare
my_x clob;
begin
select text into my_x
from x where id=1
for update;
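-- from here on, the changes go through the locator only: no UPDATE statement
-- runs against x, so the row trigger will not fire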
dbms_lob.append(my_x,'Z');
dbms_lob.append(my_x,'Z');
commit;
end;
/

select x.text, xlog.call
from x, xlog
where x.id=1
and xlog.id=1;

TEXT CALL
---- ----
XYZZ 3
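
The introduction also mentions the case where you create a locator. Here is a minimal sketch (my addition, not part of the original demo) to illustrate it: the UPDATE that re-initializes the LOB with empty_clob() and returns a fresh locator fires the trigger like any other UPDATE, while the subsequent writes through that locator still don't:
declare
my_x clob;
begin
-- a regular UPDATE: the row trigger fires and a fresh persistent
-- locator is returned
update x set text=empty_clob() where id=1
returning text into my_x;
-- writes through the locator, as before, do not fire the trigger
dbms_lob.writeappend(my_x, 2, 'AB');
commit;
end;
/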

Conclusion & Cleanup

If you think about it, this may feed your thoughts about LOB support (or actually non-support!) with Synchronous Change Data Capture (CDC) and Synchronous Streams Capture. But enough conclusions for today; let's just drop the tables and their trigger:
drop table x purge;
drop table xlog purge;
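
To make the link with the conclusion more concrete, below is a minimal sketch of how a synchronous Streams capture could be declared on such a table with DBMS_STREAMS_ADM; the schema, queue and capture names (scott, strmadmin.capture_q, sync_capture01) are illustrative and not from this post. The documentation lists LOB columns among the data types not captured by synchronous capture, which is consistent with the trigger behaviour shown above:
begin
-- create an ANYDATA queue to receive the captured LCRs
dbms_streams_adm.set_up_queue(
queue_table => 'strmadmin.capture_qt',
queue_name => 'strmadmin.capture_q');
-- declare a synchronous capture on table x; LOB columns would not be captured
dbms_streams_adm.add_table_rules(
table_name => 'scott.x',
streams_type => 'sync_capture',
streams_name => 'sync_capture01',
queue_name => 'strmadmin.capture_q');
end;
/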

3 comments:

  1. Hello.

    It's very nice that this kind of blog has appeared on the web, because when we started working with Oracle Streams we had nothing about it except Metalink and the docs.

    We use 10gR2.
    And two days ago I found a serious drop in performance on the CAPTURE side. I queried the view v$streams_capture and found that the elapsed LCR time is about 50% of the total elapsed capture time.

    My suspicion is that such a huge percentage indicates a serious bottleneck.

    Does anybody know how I can improve capture performance?
    Thx.

  2. Hello,

    The best way to get some help in your case is to open an SR, search My Oracle Support and, in case someone has faced the same issue, open a thread on the OTN Streams forum.

    Make sure you also specify the exact version, OS and applied patches (there is a bunch of recommended ones on top of 10.2.0.4).

    That said, I agree with you that the interesting question is: "Why?". I've faced situations where the capture/enqueue was an issue, but I did not face the same case as yours. Or, more likely, if I did, I did not pay attention to the fact that the time was actually spent building the LCRs (Oracle sorts the Streams bugs by area and there are known issues related to the capture).

    It may also be that this is somehow expected in your case. We don't know the types you are playing with, the amount of "elapsed time" you are talking about, the complexity of the capture rules, the amount of data you are filtering, etc.

    The good news for you is that you've probably passed the hardest part of your issue: the diagnosis. Now comes the most interesting part, the part where you'll be shining: reproduce your case on a test system, figure out and explain why, and fix the issue for more than 3 days (though you could spend some time on those tasks too).

    One more thing: the capture parallelism parameter is supposed to provide more throughput for that exact redo-to-LCR conversion task. It changes the number of preparer processes that handle that job (it won't decrease the percentage, since your reasoning is in percent). Obviously this assumes your server can actually scale, which I don't know either, just as I don't know whether you're simply facing a bug.
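
    For reference, a minimal sketch of how that parameter is changed with DBMS_CAPTURE_ADM (the capture name below is a placeholder, not yours):
    begin
    -- increase the number of parallel preparer processes of an existing
    -- capture process; 'MY_CAPTURE' is just an illustrative name
    dbms_capture_adm.set_parameter(
    capture_name => 'MY_CAPTURE',
    parameter => 'parallelism',
    value => '3');
    end;
    /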

    Just to make clear, once again, that I know nothing. I should just shut up!

  3. Thx for the quick reply.

    As you said, the hardest part was the diagnosis, yeah.

    I have tried parallelism and the other capture parameters, and I have also tried splitting the capture across different tables, but nothing changed.

    It's OTN time!
