
Sunday, March 29, 2009

How to Use A DDL Apply Handler to Manage "RENAME" Commands

In my last post, This Is Why Streams Cannot Apply "RENAME"!, I explained why the Streams apply processes don't handle RENAME operations. As I said, the easiest way to deal with this is to use the ALTER ... RENAME TO ... command instead. Unfortunately, it's not always easy to change the code of an application, especially when it's a packaged application or when the application is maintained by a third party.

In this post, I'll explain how to use a DDL apply handler to work around that limitation. What the handler does is simply change the RENAME into an ALTER TABLE ... RENAME TO ... and set the instantiation SCN of the renamed table. If you want to test this example, you should implement a schema or global replication. There is a complete example, requiring only one database, in my previous post entitled Renaming The Schema Of a DDL Statement in a LCR. What follows is based on that example, but you can easily adapt it to your needs...

Note:
Before you start, there are a few things worth noting:
  • In order to parse the DDL text, I use a package of my own named arkzoyd_sqlcode. You can download and install it in your Streams administrator's schema, e.g. STRMADMIN. I won't publish its source, at least for now, but if you are interested, leave a comment with your email; I won't publish the comment but I'll contact you.
  • It's more experimental than production-ready; all comments are welcome, but I don't guarantee you'll make it work. So far, I've successfully set it up on an 11.1.0.7 Enterprise Edition database running on Linux x86, 32-bit and 64-bit.
  • I would have expected that changing the DDL text in a custom rule-based transformation would work. But for some reason, despite the change of the text as well as of the operation type, the apply process doesn't instantiate the table after its renaming, like it does with an ALTER TABLE ... RENAME TO .... That's why I've used a DDL handler instead of a custom rule-based transformation.
  • If the new DDL text, modified with LCR$_DDL_RECORD.SET_DDL_TEXT, is too short, junk characters are appended to it. That looks like a bug, or at least it's something else I don't understand. To work around that issue, I pad the end of the string with spaces.

Set Up a Schema Replication

If you don't want to use 2 separate databases, you can follow my previous blog post where I explain how to set up a schema-level replication with only one database: Renaming The Schema Of a DDL Statement in a LCR.

Create the DDL Apply Handler

The DDL apply handler is a procedure that takes the DDL LCR as input. You'll find below a template of a handler that manages the RENAME. What it does is: (1) get the DDL text; (2) if the command type is NULL and it's related to a table, check whether it's a RENAME and build the corresponding ALTER TABLE ... RENAME TO ...; (3) if it is a RENAME, change the command type to ALTER TABLE and set the instantiation SCN for the new table; (4) execute the DDL LCR. Below is the script that creates it:
connect strmadmin/strmadmin

@arkzoyd_sqlcode.sql

create or replace procedure ddl_apply_handler(in_any anydata)
is
  lcr      sys.lcr$_ddl_record;
  rc       pls_integer;
  ddl_text clob;
  tlist    arkzoyd_sqlcode.ddl_rename_output_t;
begin
  -- Access the LCR and create a temporary clob for the DDL text
  rc := in_any.getobject(lcr);
  dbms_lob.createtemporary(ddl_text, true);

  -- Get the DDL text
  lcr.get_ddl_text(ddl_text);

  -- Transform the DDL: a RENAME is captured with a NULL command type
  if (lcr.get_command_type is null)
     and (lcr.get_object_type = 'TABLE') then
    begin
      tlist := arkzoyd_sqlcode.get_renamed_tables(ddl_text);
    exception when others then
      -- That's not a RENAME command
      tlist.old_table_name := null;
    end;
    if (tlist.old_table_name is not null) then
      -- Rewrite the RENAME as an ALTER TABLE ... RENAME TO ...;
      -- pad the text with spaces to work around the junk-character issue
      lcr.set_ddl_text(rpad(to_clob('alter table '||
                                    tlist.old_table_name||
                                    ' rename to '||
                                    tlist.new_table_name), 200));
      lcr.set_command_type('ALTER TABLE');

      -- Set the instantiation SCN for the renamed table
      dbms_apply_adm.set_table_instantiation_scn(
        lcr.get_base_table_owner()||'.'||tlist.new_table_name,
        lcr.get_source_database_name(),
        lcr.get_scn());
    end if;
  end if;

  lcr.execute();

  -- Free the temporary LOB space
  dbms_lob.freetemporary(ddl_text);
end;
/

Add the DDL Handler to the Apply

Once the procedure is created, add it to the apply process:
begin
  dbms_apply_adm.stop_apply('streams_apply');

  dbms_apply_adm.alter_apply(
    apply_name  => 'streams_apply',
    ddl_handler => 'strmadmin.ddl_apply_handler');

  dbms_apply_adm.start_apply('streams_apply');
end;
/

col apply_name format a13
col ddl_handler format a40
select apply_name,ddl_handler
from dba_apply;

APPLY_NAME DDL_HANDLER
------------- ----------------------------------------
STREAMS_APPLY "STRMADMIN"."DDL_APPLY_HANDLER"

Execute a test

Once the DDL handler is configured, we can run a test to check that it's working:
connect source/source

create table test (id number);

insert into test values (1);

commit;

rename test to newtest;

insert into newtest values (2);

commit;

connect destination/destination

select * from destination.newtest;

ID
--
1
2

connect source/source

drop table newtest purge;

connect destination/destination

select * from destination.newtest
*
ERROR at line 1:
ORA-00942: table or view does not exist

Clean the environment

To leave the environment as it was before you created it, you can run the script below and refer to the last section of Renaming The Schema Of a DDL Statement in a LCR to remove the schema-level replication:
begin
  dbms_apply_adm.stop_apply('streams_apply');

  dbms_apply_adm.alter_apply(
    apply_name         => 'streams_apply',
    remove_ddl_handler => true);

  dbms_apply_adm.start_apply('streams_apply');
end;
/

col apply_name format a13
col ddl_handler format a40

select apply_name,ddl_handler
from dba_apply;

APPLY_NAME DDL_HANDLER
------------- ----------------------------------------
STREAMS_APPLY

drop package body arkzoyd_sqlcode;
drop package arkzoyd_sqlcode;
drop procedure ddl_apply_handler;


Friday, March 20, 2009

This Is Why Streams Cannot Apply "RENAME"!

You've probably noticed it in the "Types of DDL Changes Ignored by an Apply Process" section of the documentation: Streams apply processes cannot apply a RENAME command, though a capture process can capture it and you can catch it in a DDL handler. But do you know why?

The answer to that question actually lies in the two commands below:
alter session set current_schema=scott;

rename emp to emp2;
*
ERROR at line 1:
ORA-03001: unimplemented feature
For some obscure reason, RENAME cannot be used after current_schema has been changed. Unfortunately, the apply process changes the current schema to apply DDL so that it can be executed, even when the schema impacted by the DDL command is not in the source command. So RENAME does not work. Obviously, you can write a DDL handler and parse the command to manage that manually. But if you think that's easy, ask yourself why the Streams development team did not do that themselves to provide the feature out of the box... Then run:
create table "alter table rename"(col1 number);

/* alter table rename */ rename "alter table rename" to "rename";
A few mysteries remain, though: if ALTER TABLE ... RENAME TO ... doesn't suffer from those limits, as the sketch below shows, why do DBAs and developers keep executing RENAME to rename tables? And will that change in 11g Release 2?
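By contrast, the ALTER TABLE syntax is not affected by the current_schema change, which is why the apply process can execute it; a quick sketch, reusing SCOTT.EMP from the example above:

alter session set current_schema=scott;

alter table emp rename to emp2;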


Thursday, March 19, 2009

What If You Don't Prepare The Source Table For Instantiation?

As you know, the DBMS_STREAMS_ADM package provides shortcuts to set up Streams; one of these shortcuts is that ADD_TABLE_RULES prepares the table for instantiation on the source database. Obviously, in the case of a downstream capture, this assumes the package connects to that source database. To put it another way, with a downstream capture you must use a database link if you want the table to be prepared by ADD_TABLE_RULES; if that's not the case, don't forget to run DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION on the source database.
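For instance, here is a minimal sketch of the explicit preparation, to be run on the source database as the Streams administrator (the credentials and table name are assumptions for illustration):

connect strmadmin/strmadmin

exec dbms_capture_adm.prepare_table_instantiation('source.t1');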

But what happens if you don't prepare the source table for instantiation?

There is probably no better way to check than to test it. I've tried to change my approach this time! I won't describe how to set up your environment; instead, you'll find attached a script that sets Streams up with that exact case, i.e. it doesn't run prepare_table_instantiation! If you want to give it a try, you can download it from this link: streams-no-prepare.sql. Use Right-Click->Save As... Hopefully it won't be too buggy!

The script requires only one database: (1) the database should be 11g; it may work with 10g but I did not test it at all; (2) it must have the USERS and TEMP tablespaces created; (3) it has to be in ARCHIVELOG mode; and (4) you should be able to connect with "/ as sysdba". To use the script, run:
sqlplus /nolog @streams-no-prepare.sql
If you get any issue, leave a comment on this post with your email; it won't appear, and we'll discuss your issue offline if you find a bug. The script should:
  • Create the whole configuration, except for the table preparation
  • Start the Apply and Capture
  • Create a second script drop-no-prepare.sql to drop the configuration, once you are done with it
If you wait for a few minutes, Streams should work:
col capture_name format a16
col state format a30 word_wrapped
col capture_message_number format 999999999999999
col apply_name format a16
set lines 80

select capture_name,
capture_message_number capture_scn,
apply_name,
state
from v$streams_capture;

CAPTURE_NAME CAPTURE_SCN APPLY_NAME STATE
---------------- ----------- ---------------- ------------------------------
STREAMS_CAPTURE 1316028 STREAMS_APPLY CAPTURING CHANGES

select apply_name,
status
from dba_apply;

APPLY_NAME STATUS
---------------- --------
STREAMS_APPLY ENABLED
So we'll perform our test:
insert into source.t1
values (3,'Text 3');

commit;

col id format 99
col text format a10

select *
from destination.t1;

ID TEXT
--- ----------
1 Text 1
2 Text 2
But even if you wait, you'll get no apply error and the change will never be applied. Prepare the table for instantiation now and insert new data:
exec dbms_capture_adm.prepare_table_instantiation('source.t1','keys');

insert into source.t1
values (4,'Text 4');

commit;
If you check the destination table, you'll see that only the changes made after the table has been prepared for instantiation are captured:
col id format 99
col text format a10

select *
from destination.t1;

ID TEXT
--- ----------
4 Text 4
1 Text 1
2 Text 2
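By the way, you can verify which tables have been prepared for instantiation with the standard dictionary view below; the SCN column shows the SCN at which each table was prepared (a generic query, not part of the attached script):

col table_owner format a12
col table_name format a10

select table_owner, table_name, scn
from dba_capture_prepared_tables;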
As you can see, it's important to prepare the table for instantiation so that the capture process can capture its changes. There are many more things to understand from this example and we'll detail them in the coming weeks. For now, delete your Streams configuration:
sqlplus /nolog @drop-no-prepare.sql


Monday, March 16, 2009

Adding A Destination To A Streams Configuration

In the first post of this blog, Oracle Streams One Way Table Replication 101, I described how to set up one of the simplest Streams replications possible: SOURCE.T1 DML changes are captured and applied to DESTINATION.T1; there is just one database and no propagation. Now we'll transform this example so that the changes are also directed into another queue, in the same database, and applied to DESTINATION2.T1. This new example will allow us to test some of the new features of Oracle Streams 11g; I'll show you that in the coming weeks...

It's worth noting that the scenario I've used to add the new destination can impact the primary destination replication. There is a way to prevent this issue if that's a concern for you: you can create a dedicated capture for the duration of the instantiation and merge it with the existing capture once the instantiation is done. In the current example, there is only one table and no real changes, and leaving the existing configuration as much in sync as possible was not my primary goal. For this reason, I've chosen a more direct method made of the steps below. Before we start, we assume we have a Streams replication setup as described in Oracle Streams One Way Table Replication 101, steps 1 to 9!

Create a Streams queue in the destination database

As I've already told you, I use a second queue. For this reason, the first step of the setup consists of creating that second queue; to proceed, I've used the dbms_streams_adm.set_up_queue procedure as below:
connect strmadmin/strmadmin

begin
dbms_streams_adm.set_up_queue(
queue_table => 'strmadmin.streams_queue_table2',
queue_name => 'strmadmin.streams_queue2');
end;
/

Create a propagation from streams_queue to streams_queue2

Once the queue is created, I create a queue-to-queue propagation for the DML changes on source.t1; I propagate the messages within the same database, so I don't use any database link:
begin
dbms_streams_adm.add_table_propagation_rules(
table_name => 'source.t1',
streams_name => 'streams_propagation',
source_queue_name => 'strmadmin.streams_queue',
destination_queue_name => 'strmadmin.streams_queue2',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true,
queue_to_queue => true);
end;
/

col propagation_name format a19
col source_queue_name format a13
col destination_queue_name format a14
col destination_dblink format a9
col status format a10

select propagation_name,
source_queue_name,
destination_queue_name,
destination_dblink,
status
from dba_propagation;

PROPAGATION_NAME SOURCE_QUEUE_ DESTINATION_QU DESTINATI STATUS
------------------- ------------- -------------- --------- ----------
STREAMS_PROPAGATION STREAMS_QUEUE STREAMS_QUEUE2 AQ$_LOCAL ENABLED

Create an apply process for the new destination

Then, I create the apply process that subscribes to the new queue for the changes made to source.t1:
begin
dbms_streams_adm.add_table_rules(
table_name => 'source.t1',
streams_type => 'apply',
streams_name => 'streams_apply2',
queue_name => 'strmadmin.streams_queue2',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
end;
/

col apply_name format a14
col queue_name format a14
col rule_set_name format a11

select apply_name,
queue_name,
rule_set_name,
status,
message_delivery_mode
from dba_apply;

APPLY_NAME QUEUE_NAME RULE_SET_NA STATUS MESSAGE_DE
-------------- -------------- ----------- ---------- ----------
STREAMS_APPLY STREAMS_QUEUE RULESET$_9 ENABLED CAPTURED
STREAMS_APPLY2 STREAMS_QUEUE2 RULESET$_16 DISABLED CAPTURED

Add a Transformation to rename the schema on the Apply rule

Like in the previous example, I need to rename the schema. I use a declarative transformation to rename source to destination2:
col rule_owner format a9
col streams_name format a14
col table_owner format a6
col streams_type format a5
col rule_type format a3
col table_name format a5
col rule_name format a5 new_value rulename

select rule_owner,
streams_name,
streams_type,
table_owner,
table_name,
rule_type,
rule_name
from dba_streams_table_rules
where streams_name='STREAMS_APPLY2'
and streams_type='APPLY'
and rule_type='DML';

RULE_OWNE STREAMS_NAME STREA TABLE_ TABLE RUL RULE_
--------- -------------- ----- ------ ----- --- -----
STRMADMIN STREAMS_APPLY2 APPLY SOURCE T1 DML T115

prompt &&rulename
T115

begin
dbms_streams_adm.rename_schema(
rule_name => '&&rulename' ,
from_schema_name => 'SOURCE',
to_schema_name => 'DESTINATION2',
step_number => 0,
operation => 'add');
end;
/

col rule_name format A6
col from_schema_name format a6
col to_schema_name format a12

select rule_name,
transform_type,
from_schema_name,
to_schema_name,
declarative_type
from dba_streams_transformations;

RULE_N TRANSFORM_TYPE FROM_S TO_SCHEMA_NA DECLARATIVE_T
------ -------------------------- ------ ------------ -------------
T110 DECLARATIVE TRANSFORMATION SOURCE DESTINATION RENAME SCHEMA
T115 DECLARATIVE TRANSFORMATION SOURCE DESTINATION2 RENAME SCHEMA

Ensure there is no pending transaction on the source

At that point, it's important to make sure there are no uncommitted changes made on the source database before the apply was created. To ensure that's the case, we run dbms_capture_adm.prepare_xxxx_instantiation, which holds a shared lock on all the source objects. We need to run that command AFTER we create the apply process! We'll instantiate the objects at an SCN that follows that procedure's execution:
connect / as sysdba

exec dbms_capture_adm.prepare_table_instantiation('source.t1');

Instantiate the new destination

I can now create the new schema and table to instantiate them:
create user destination2
identified by destination2
default tablespace users
temporary tablespace temp;

grant connect,resource to destination2;

create table destination2.t1(
id number primary key,
text varchar2(80));

col apply_scn format 999999999999 new_value instantiation_scn

select dbms_flashback.get_system_change_number apply_scn
from dual;

APPLY_SCN
---------
795716


prompt &&instantiation_scn
795716

insert into destination2.t1
(select * from source.t1 as of scn &&instantiation_scn);

commit;
Once the table is instantiated, I store the instantiation SCN in the database so that the new apply process only applies changes committed after that SCN. Before we proceed, we have to make sure the change we'll make won't affect the existing apply process, because we use the same database for both apply processes. To verify that, we check that the applied_message_number from dba_apply_progress is greater than the instantiation SCN we'll set.
Note
If there are no changes on the source database, there is no reason for applied_message_number to change either. For this reason, you can run an update on one of the source objects and commit the change to make sure that number can increase, as in the sketch below.
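A minimal sketch of such a change (any committed DML on a replicated source object will do; source.t1 is the table used throughout this example):

update source.t1 set text=text where id=1;

commit;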
select applied_message_number
from dba_apply_progress;

APPLIED_MESSAGE_NUMBER
----------------------
796912
0
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'source.t1',
source_database_name => 'BLACK',
instantiation_scn => &&instantiation_scn);
end;
/

Start the new Apply Process

Once the configuration is finished, I can start the new apply process; the propagation is enabled by default:
exec dbms_apply_adm.start_apply('streams_apply2')

col propagation_name format a19
col status format a10

select propagation_name,
status
from dba_propagation;

PROPAGATION_NAME STATUS
------------------- ----------
STREAMS_PROPAGATION ENABLED

Test the replication

And I've tested the replication:
insert into source.t1 values (4,'Text 4');

1 row created.

commit;

col id format 99
col text format a6

select id,
text
from destination.t1;

ID TEXT
--- ------
4 Text 4
1 Text 1
2 Text 2
3 Text 3

select id,
text
from destination2.t1;

ID TEXT
--- ------
4 Text 4
1 Text 1
2 Text 2
3 Text 3

delete from source.t1 where id=4;

commit;

Reset the Configuration

Like for the other scenarios, you'll find below the script to reset the configuration to what it was before this post:
exec dbms_propagation_adm.stop_propagation('streams_propagation');
exec dbms_propagation_adm.drop_propagation('streams_propagation', true);

exec dbms_apply_adm.stop_apply('streams_apply2')
exec dbms_apply_adm.drop_apply('streams_apply2',true);

begin
dbms_streams_adm.remove_queue(
queue_name => 'strmadmin.streams_queue2',
cascade => false,
drop_unused_queue_table => true);
end;
/

select queue_table
from dba_queue_tables
where owner='STRMADMIN';

QUEUE_TABLE
-------------------
STREAMS_QUEUE_TABLE

drop user destination2 cascade;


Sunday, March 8, 2009

Renaming The Schema Of a DDL Statement in a LCR

If you look at the rename_schema procedure of DBMS_STREAMS_ADM in the documentation, you'll find the following note:
Declarative transformations can transform row LCRs only. These row LCRs can be captured by a capture process, captured by a synchronous capture, or constructed and enqueued by an application. Therefore, a DML rule must be specified when you run this procedure. If a DDL is specified, then the procedure raises an error.
And indeed, if you try to apply that rule-based transformation to a DDL LCR rule, you get the following error:
ORA-26746: DDL rule "STRMADMIN"."SOURCE30" not allowed for this operation
This post provides an example of a custom rule-based transformation that renames the schema in the event of certain types of DDL statements; it is built without specifying any action context (see note 309575.1), even if an action context is obviously created implicitly by the dbms_streams_adm.set_rule_transform_function procedure of Oracle 10g. The basic idea is to help you build an example of DDL replication in a single database in order to introduce some of the limits of DDL replication...


Prepare the SOURCE and DESTINATION schemas

In this example, we'll capture the DDL and DML changes made to the SOURCE schema and apply them to the DESTINATION schema; we'll start by creating those two empty schemas:
connect / as sysdba

create user source
identified by source
default tablespace users
temporary tablespace temp;

grant connect,resource to source;

create user destination
identified by destination
default tablespace users
temporary tablespace temp;

grant connect,resource to destination;

Configure supplemental logging and the LogMiner data dictionary

In this example, we will force supplemental logging for all the primary and unique keys of the database. If we didn't, we would have to add supplemental logging to the tables after we create each one of them in the SOURCE schema:
connect / as sysdba

alter database add supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

SUPPLEME SUP SUP
-------- --- ---
YES YES YES
We'll also store the dictionary in the redo logs for LogMiner:
var first_scn number;

set serveroutput on

DECLARE
scn NUMBER;
BEGIN
DBMS_CAPTURE_ADM.BUILD(
first_scn => scn);
DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
:first_scn := scn;
END;
/

First SCN Value = 2480147
Note:
In this case, the SOURCE schema is empty and we don't need to execute the prepare_schema_instantiation procedure of dbms_capture_adm. That's a very unlikely scenario; if you have to instantiate existing objects, don't forget to execute this procedure, as in the sketch below.
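For the record, a minimal sketch of that call for this example's schema would be:

exec dbms_capture_adm.prepare_schema_instantiation('source');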

Create the Streams administrator and a Streams queue

Create a Streams administrator (STRMADMIN) and a queue to be used to stage the messages (streams_queue):
connect / as sysdba

CREATE TABLESPACE streams_tbs
DATAFILE '/u01/app/oracle/oradata/BLACK/streams_tbs.dbf' size 25M
AUTOEXTEND ON MAXSIZE 256M;

CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

grant dba to strmadmin;

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/

Create the Streams capture

Like in the first blog example about Streams, the next step consists of creating a capture process, this time with schema-level rules; the rule-based transformations will be associated with the apply rules later on. Make sure you are connected as STRMADMIN to perform these operations:
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'source',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

set lines 120
col streams_name format a16
col streams_type format a9
col schema_name format a10
col rule_type format a8
col rule_name format a10
col rule_owner format a15

select STREAMS_NAME
, STREAMS_TYPE
, SCHEMA_NAME
, RULE_TYPE
, RULE_NAME
, RULE_OWNER
from DBA_STREAMS_SCHEMA_RULES;

STREAMS_NAME STREAMS_T SCHEMA_NAM RULE_TYP RULE_NAME RULE_OWNER
---------------- --------- ---------- -------- ---------- ----------
STREAMS_CAPTURE CAPTURE SOURCE DML SOURCE42 STRMADMIN
STREAMS_CAPTURE CAPTURE SOURCE DDL SOURCE43 STRMADMIN

Create the apply process and set the schema instantiation SCN

We now need to create an apply process, add rules so that it applies the captured LCRs, and define the schema instantiation SCN so that it knows at what SCN it should start its work. The script below performs these operations:
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => 'source',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

select streams_name,
streams_type,
schema_name,
rule_type,
rule_name,
rule_owner
from dba_streams_schema_rules
where streams_name='STREAMS_APPLY';

STREAMS_NAME STREAMS_TY SCHEMA_NAME RULE_ RULE_NAME RULE_OWNER
------------- ---------- ------------ ----- --------------- ------------
STREAMS_APPLY APPLY SOURCE DML SOURCE60 STRMADMIN
STREAMS_APPLY APPLY SOURCE DDL SOURCE61 STRMADMIN

col APPLY_SCN format 999999999999 new_value instantiation_scn

select dbms_flashback.get_system_change_number apply_scn
from dual;

APPLY_SCN
---------
2565304

select &&instantiation_scn apply_scn
from dual;

APPLY_SCN
---------
2565304

begin
dbms_apply_adm.set_schema_instantiation_scn(
source_schema_name => 'source',
source_database_name => 'BLACK',
instantiation_scn => &&instantiation_scn);
end;
/

col source_database format a10
col source_schema format a15
col instantiation_scn format 999999999999

select source_database, source_schema, instantiation_scn
from dba_apply_instantiated_schemas;

SOURCE_DAT SOURCE_SCHEMA INSTANTIATION_SCN
---------- --------------- -----------------
BLACK DESTINATION 2565304

Associate the rule-based transformations with the DML and DDL apply rules

Before we associate the transformations with the apply rules, we'll write a function to transform the current_schema property of the DDL LCR. For the purpose of this example, I've added a ddl_logging table and a ddl_logging_seq sequence in the STRMADMIN schema so that we can store some logs about the transformed LCRs. Below is the script that creates the function:
connect strmadmin/strmadmin

create sequence ddl_logging_seq;

create table ddl_logging(
ddl_logging_seq number,
text clob);

create or replace function source2destination(in_any in anydata)
return anydata
is
  lcr sys.lcr$_ddl_record;
  rc  number;
  seq number;
  ddl clob;
begin
  -- Check if the object type is SYS.LCR$_DDL_RECORD
  if in_any.gettypename = 'SYS.LCR$_DDL_RECORD' then
    -- Put the DDL LCR into lcr
    rc := in_any.getobject(lcr);

    -- Log the DDL text and other properties in ddl_logging
    seq := ddl_logging_seq.nextval;
    insert into ddl_logging(ddl_logging_seq, text)
      values (seq, 'CURRENT SCHEMA : '||lcr.get_current_schema());
    dbms_lob.createtemporary(ddl, true);
    lcr.get_ddl_text(ddl);
    insert into ddl_logging(ddl_logging_seq, text)
      values (seq, 'CAPTURED DDL : '||ddl);
    dbms_lob.freetemporary(ddl);
    commit;

    -- Transform the LCR
    lcr.set_current_schema('DESTINATION');

    -- Return the transformed LCR
    return anydata.convertobject(lcr);
  end if;
  -- Return any other message unchanged
  return in_any;
end;
/
Once the function is created, we can use it as a custom rule-based transformation for the DDL LCR rule. We'll then add a declarative rule-based transformation to rename the schema of the DML LCRs:
set lines 120
col streams_name format a16
col streams_type format a9
col schema_name format a10
col rule_type format a8
col rule_name format a10
col rule_owner format a15

select STREAMS_NAME
, STREAMS_TYPE
, SCHEMA_NAME
, RULE_TYPE
, RULE_NAME
, RULE_OWNER
from DBA_STREAMS_SCHEMA_RULES;

STREAMS_NAME STREAMS_T SCHEMA_NAM RULE_TYP RULE_NAME RULE_OWNER
---------------- --------- ---------- -------- ---------- ----------
STREAMS_CAPTURE CAPTURE SOURCE DML SOURCE42 STRMADMIN
STREAMS_CAPTURE CAPTURE SOURCE DDL SOURCE43 STRMADMIN
STREAMS_APPLY APPLY SOURCE DML SOURCE60 STRMADMIN
STREAMS_APPLY APPLY SOURCE DDL SOURCE61 STRMADMIN

accept rulename prompt "Enter the Rule Name: "
Enter the Rule Name: SOURCE61

begin
dbms_streams_adm.set_rule_transform_function(
rule_name => '&&rulename',
transform_function => 'source2destination');
end;
/

col rule_owner format a10
col rule_name format a10
col transform_function_name format a32

select rule_owner,rule_name,transform_function_name,custom_type
from dba_streams_transform_function;

RULE_OWNER RULE_NAME TRANSFORM_FUNCTION_NAME CUSTOM_TYPE
---------- ---------- -------------------------------- -----------
STRMADMIN SOURCE61 "STRMADMIN"."SOURCE2DESTINATION" ONE TO ONE

select RULE_ACTION_CONTEXT
from DBA_RULES
where RULE_NAME='&&rulename'
and rule_owner='STRMADMIN';

RULE_ACTION_CONTEXT(ACTX_LIST(NVN_NAME, NVN_VALUE()))
-----------------------------------------------------------------------------
RE$NV_LIST(RE$NV_ARRAY(RE$NV_NODE('STREAMS$_TRANSFORM_FUNCTION', ANYDATA())))

select STREAMS_NAME
, STREAMS_TYPE
, SCHEMA_NAME
, RULE_TYPE
, RULE_NAME
, RULE_OWNER
from DBA_STREAMS_SCHEMA_RULES;

STREAMS_NAME STREAMS_T SCHEMA_NAM RULE_TYP RULE_NAME RULE_OWNER
---------------- --------- ---------- -------- ---------- ----------
STREAMS_CAPTURE CAPTURE SOURCE DML SOURCE42 STRMADMIN
STREAMS_CAPTURE CAPTURE SOURCE DDL SOURCE43 STRMADMIN
STREAMS_APPLY APPLY SOURCE DML SOURCE60 STRMADMIN
STREAMS_APPLY APPLY SOURCE DDL SOURCE61 STRMADMIN

accept rulename prompt "Enter the Rule Name: "
Enter the Rule Name: SOURCE60

begin
dbms_streams_adm.rename_schema(
rule_name => '&&rulename' ,
from_schema_name => 'SOURCE',
to_schema_name => 'DESTINATION',
step_number => 0,
operation => 'add');
end;
/

Start the capture and apply processes

Start the capture and the apply processes:
exec dbms_capture_adm.start_capture('streams_capture');
exec dbms_apply_adm.start_apply('streams_apply');

Test the replication

Note
The DDL commands are captured as is, and the name of the source schema must not be included in the statement text; otherwise the apply process will fail. To avoid this problem, you can use alter session set current_schema=source;
Check the replication is working as expected:
connect / as sysdba

alter session set current_schema=source;

create table t1(id number,
text varchar2(100),
constraint t1_pk primary key(id));

col owner format a12

select owner, table_name
from dba_tables
where owner in ('SOURCE','DESTINATION');

OWNER TABLE_NAME
------------ ----------
SOURCE T1
DESTINATION T1

select owner, constraint_name
from dba_constraints
where owner in ('SOURCE','DESTINATION');

OWNER CONSTRAINT_NAME
------------ ---------------
SOURCE T1_PK
DESTINATION T1_PK

select owner, index_name
from dba_indexes
where owner in ('SOURCE','DESTINATION');

OWNER INDEX_NAME
------------ ----------
DESTINATION T1_PK
SOURCE T1_PK

insert into t1 values(1,'Text 1');

commit;

col id format 99
col text format a10

select id, text
from destination.t1;

ID TEXT
-- ------
1 Text 1

update t1 set text='Text 2'
where id=1;

commit;

select id, text
from destination.t1;

ID TEXT
-- ------
1 Text 2

drop table t1;

select owner, table_name
from dba_tables
where owner in ('SOURCE','DESTINATION');

no rows selected

select owner,
object_name,
original_name
from dba_recyclebin
where owner in ('SOURCE','DESTINATION')
order by 2,3;

OWNER OBJECT_NAME ORIGINAL_NAME
------------ ------------------------------ -------------
SOURCE BIN$ZKFLC8wxhgngQAB/AQAw5g==$0 T1_PK
DESTINATION BIN$ZKFLC8wyhgngQAB/AQAw5g==$0 T1
SOURCE BIN$ZKFLC8wyhgngQAB/AQAw5g==$0 T1
DESTINATION BIN$ZKFLDgByejLgQAB/AQAwqA==$0 T1_PK

set long 1000
set longchunksize 1000

col text format a100 word_wrapped
select text from strmadmin.ddl_logging;

TEXT
-----------------------------------------------------------------
CURRENT SCHEMA : SOURCE
BASE TABLE NAME: SOURCE
OBJECT OWNER: SOURCE
CAPTURED DDL : create table t1(id number,
text varchar2(100),
constraint t1_pk primary key(id))

CURRENT SCHEMA : SOURCE
BASE TABLE NAME: SOURCE
OBJECT OWNER: SOURCE
CAPTURED DDL : drop table t1 AS "BIN$ZKFLC8wyhgngQAB/AQAw5g==$0"

Suppress the Oracle Streams Configuration

In this last step, we'll clean up the environment and leave it as it was when we started:
connect / as sysdba

exec dbms_capture_adm.stop_capture('streams_capture');
exec dbms_apply_adm.stop_apply('streams_apply');

exec dbms_capture_adm.drop_capture('streams_capture',true);
exec dbms_apply_adm.drop_apply('streams_apply',true);

begin
for i in (select source_schema name
from dba_apply_instantiated_schemas
where source_schema in ('SOURCE','DESTINATION'))
loop
dbms_apply_adm.set_schema_instantiation_scn(
source_schema_name => i.name,
source_database_name => 'BLACK',
instantiation_scn => null);
end loop;
for i in (select source_object_owner||'.'||
source_object_name name
from dba_apply_instantiated_objects
where source_object_owner in ('SOURCE','DESTINATION'))
loop
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => i.name,
source_database_name => 'BLACK',
instantiation_scn => null);
end loop;
end;
/

exec dbms_streams_adm.remove_queue('strmadmin.streams_queue',true,true);

drop user strmadmin cascade;

drop tablespace streams_tbs
including contents and datafiles;

drop user destination cascade;
drop user source cascade;

alter database drop supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

SUPPLEME SUP SUP
-------- --- ---
YES NO NO


Monday, March 2, 2009

Differences Between Asynchronous And Synchronous Capture

Oracle 11g provides a new client that captures changes made to a database: the synchronous capture(1). "Synchronous" may sound confusing at first. Actually, only the capture is synchronous; in other words, the changes are captured by a trigger-like mechanism. The staging and apply are still done in the background, and there is no guarantee that, if you query the replicated data, it will reflect the committed source changes right away. You still need to rely on monitoring to ensure the changes have been applied, for example with the query shown below.
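For instance, the standard dictionary view below shows how far each apply process has progressed (a generic monitoring query, not specific to synchronous capture):

select apply_name, applied_message_number
from dba_apply_progress;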

I would not think of synchronous capture as a capture-process ersatz for Oracle Standard Edition; instead, it comes with its own set of benefits you can leverage to better address some of your requirements. For this reason, and because the information is spread across several parts of the documentation, I've tried to summarize the differences between the two implicit capture methods of Oracle 11g. If you think something is missing or worth mentioning, leave a comment. Below is the table that compares asynchronous and synchronous capture for Oracle 11.1:

First release
  • Asynchronous capture: 9.2
  • Synchronous capture: 11.1

Edition
  • Asynchronous capture: Enterprise Edition only
  • Synchronous capture: Enterprise, Standard and Standard One

Database mode
  • Asynchronous capture: requires the database to run in ARCHIVELOG mode, with the appropriate supplemental logging for the captured information
  • Synchronous capture: can run in NOARCHIVELOG mode and doesn't require any supplemental logging information

Captured events
  • Asynchronous capture: DML and DDL
  • Synchronous capture: DML only

Capture level
  • Asynchronous capture: global, schema, table or table subset
  • Synchronous capture: table or table subset only

Before you can instantiate the objects on the destination
  • Asynchronous capture: you need to execute the build and prepare_[xxx]_instantiation procedures of the dbms_capture_adm package (explicitly or implicitly) to prepare the LogMiner data dictionary and to make sure all the changes can be captured from the logs, i.e. there are no pending changes.
  • Synchronous capture: you need to execute the prepare_sync_instantiation function of the dbms_capture_adm package to make sure you'll be able to capture the changes to be applied. That function is actually run implicitly when you create the rules with dbms_streams_adm.

Before the capture creation
  • Asynchronous capture: -
  • Synchronous capture: you must create the apply processes or DML handlers that consume the messages, as well as the queues and propagation processes, before the synchronous capture. If you don't, any DML change to the source tables will fail with "ORA-24033: no recipients for message".

Starting/stopping the capture
  • Asynchronous capture: -
  • Synchronous capture: you cannot stop a synchronous capture.

Rules associated with the capture
  • Asynchronous capture: rules can be simple or complex. They can be in a positive or in a negative rule set, and they can be created with any procedure of dbms_streams_adm or dbms_capture_adm.
  • Synchronous capture: rules must be simple; they must be in the positive rule set and be table or table subset rules. They must be created with the add_table_rules or the add_subset_rules procedure of the dbms_streams_adm package.

Messages
  • Asynchronous capture: captured messages are sent to the buffered part of the queue and are kept in the Streams pool until they are consumed or they spill.
  • Synchronous capture: captured messages are persistent and are kept in the buffer cache or on disk.

OLD and NEW values for an updated row
  • Asynchronous capture: depend on the supplemental logging definition.
  • Synchronous capture: all OLD and NEW values are always stored in the LCR of an updated row.

Queues
  • Asynchronous capture: -
  • Synchronous capture: the queue used to enqueue messages from the synchronous capture must be a commit-time queue.

Tables with unsupported data types
  • Asynchronous capture: if one column of a table is not supported by the capture process, you must remove the whole table from the capture.
  • Synchronous capture: if one column is not supported, you can use a transformation rule to capture the table without the unsupported column.

List of unsupported tables/columns
  • Asynchronous capture: available via the dba_streams_unsupported and dba_streams_newly_supported views.
  • Synchronous capture: available via the dba_streams_columns view.

Supported data types with Oracle 11.1
The list of data types supported by capture processes is the following:
  • VARCHAR2
  • NVARCHAR2
  • FLOAT
  • NUMBER
  • DATE
  • BINARY_FLOAT
  • BINARY_DOUBLE
  • TIMESTAMP
  • TIMESTAMP WITH TIME ZONE
  • TIMESTAMP WITH LOCAL TIME ZONE
  • INTERVAL YEAR TO MONTH
  • INTERVAL DAY TO SECOND
  • RAW
  • CHAR
  • NCHAR
  • UROWID
  • LONG
  • LONG RAW
  • CLOB with BASICFILE storage
  • NCLOB with BASICFILE storage
  • BLOB with BASICFILE storage
  • XMLType stored as CLOB
The list of data types supported by a synchronous capture is the following:
  • VARCHAR2
  • NVARCHAR2
  • FLOAT
  • NUMBER
  • DATE
  • BINARY_FLOAT
  • BINARY_DOUBLE
  • TIMESTAMP
  • TIMESTAMP WITH TIME ZONE
  • TIMESTAMP WITH LOCAL TIME ZONE
  • INTERVAL YEAR TO MONTH
  • INTERVAL DAY TO SECOND
  • RAW
  • CHAR
  • NCHAR
  • UROWID

Other considerations
Capture processes have some advanced abilities that are not available with synchronous capture:
  • Combined Capture and Apply
  • Downstream Capture

(1) You'll find an example of synchronous capture implementation in a previous post entitled Streams Synchronous Capture 101.
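As a reminder, creating a synchronous capture essentially boils down to adding table rules with the 'sync_capture' streams type; below is a minimal sketch, assuming a strmadmin.streams_queue commit-time queue and a source.t1 table already exist (see the post referenced above for a complete walk-through):

begin
  dbms_streams_adm.add_table_rules(
    table_name   => 'source.t1',
    streams_type => 'sync_capture',
    streams_name => 'sync_capture',
    queue_name   => 'strmadmin.streams_queue');
end;
/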
