
Monday, March 16, 2009

Adding A Destination To A Streams Configuration

In the first post of this blog, Oracle Streams One Way Table Replication 101, I described how to set up one of the simplest Streams replications possible: DML changes on SOURCE.T1 are captured and applied to DESTINATION.T1, in a single database and with no propagation. Now we'll extend that example so that the changes are also directed to another queue, in the same database, and applied to DESTINATION2.T1. This new example will allow us to test some of the new features of Oracle Streams 11g; I'll show you that in the coming weeks...

It's worth noting that the scenario I've used to add the new destination can impact the replication to the primary destination. If that's a concern for you, there is a way to prevent it: create a dedicated capture process for the duration of the instantiation and merge it with the existing capture once the instantiation is done. In the current example there is only one table and no real change activity, and keeping the existing configuration as closely in sync as possible was not my primary goal; for this reason I've chosen the more direct method described below. Before we start, we assume we have a Streams replication set up as described in Oracle Streams One Way Table Replication 101, steps 1 to 9.

Create a Streams queue in the destination database

As mentioned above, the new destination uses a second queue, so the first step of the setup is to create it. I've used the dbms_streams_adm.set_up_queue procedure as below:
connect strmadmin/strmadmin

begin
dbms_streams_adm.set_up_queue(
queue_table => 'strmadmin.streams_queue_table2',
queue_name => 'strmadmin.streams_queue2');
end;
/

Create a propagation from streams_queue to streams_queue2

Once the queue is created, I create a queue-to-queue propagation for the DML changes on source.t1; the messages are propagated within the same database, so no database link is needed:
begin
dbms_streams_adm.add_table_propagation_rules(
table_name => 'source.t1',
streams_name => 'streams_propagation',
source_queue_name => 'strmadmin.streams_queue',
destination_queue_name => 'strmadmin.streams_queue2',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true,
queue_to_queue => true);
end;
/

col propagation_name format a19
col source_queue_name format a13
col destination_queue_name format a14
col destination_dblink format a9
col status format a10

select propagation_name,
source_queue_name,
destination_queue_name,
destination_dblink,
status
from dba_propagation;

PROPAGATION_NAME SOURCE_QUEUE_ DESTINATION_QU DESTINATI STATUS
------------------- ------------- -------------- --------- ----------
STREAMS_PROPAGATION STREAMS_QUEUE STREAMS_QUEUE2 AQ$_LOCAL ENABLED

Create an apply process for the new destination

Then, I create the apply process that subscribes to the new queue for the changes made to source.t1:
begin
dbms_streams_adm.add_table_rules(
table_name => 'source.t1',
streams_type => 'apply',
streams_name => 'streams_apply2',
queue_name => 'strmadmin.streams_queue2',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
end;
/

col apply_name format a14
col queue_name format a14
col rule_set_name format a11

select apply_name,
queue_name,
rule_set_name,
status,
message_delivery_mode
from dba_apply;

APPLY_NAME QUEUE_NAME RULE_SET_NA STATUS MESSAGE_DE
-------------- -------------- ----------- ---------- ----------
STREAMS_APPLY STREAMS_QUEUE RULESET$_9 ENABLED CAPTURED
STREAMS_APPLY2 STREAMS_QUEUE2 RULESET$_16 DISABLED CAPTURED

Add a Transformation to rename the schema on the Apply rule

As in the previous example, I need to rename the schema; I use a declarative transformation to rename SOURCE to DESTINATION2:
col rule_owner format a9
col streams_name format a14
col table_owner format a6
col streams_type format a5
col rule_type format a3
col table_name format a5
col rule_name format a5 new_value rulename

select rule_owner,
streams_name,
streams_type,
table_owner,
table_name,
rule_type,
rule_name
from dba_streams_table_rules
where streams_name='STREAMS_APPLY2'
and streams_type='APPLY'
and rule_type='DML';

RULE_OWNE STREAMS_NAME STREA TABLE_ TABLE RUL RULE_
--------- -------------- ----- ------ ----- --- -----
STRMADMIN STREAMS_APPLY2 APPLY SOURCE T1 DML T115

prompt &&rulename
T115

begin
dbms_streams_adm.rename_schema(
rule_name => '&&rulename' ,
from_schema_name => 'SOURCE',
to_schema_name => 'DESTINATION2',
step_number => 0,
operation => 'add');
end;
/

col rule_name format A6
col from_schema_name format a6
col to_schema_name format a12

select rule_name,
transform_type,
from_schema_name,
to_schema_name,
declarative_type
from dba_streams_transformations;

RULE_N TRANSFORM_TYPE FROM_S TO_SCHEMA_NA DECLARATIVE_T
------ -------------------------- ------ ------------ -------------
T110 DECLARATIVE TRANSFORMATION SOURCE DESTINATION RENAME SCHEMA
T115 DECLARATIVE TRANSFORMATION SOURCE DESTINATION2 RENAME SCHEMA
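Note that the same procedure can also remove the declarative transformation later if you need to back it out; the sketch below, which assumes the rule name captured in &&rulename above, simply uses operation => 'remove':

begin
dbms_streams_adm.rename_schema(
rule_name => '&&rulename' ,
from_schema_name => 'SOURCE',
to_schema_name => 'DESTINATION2',
step_number => 0,
operation => 'remove');
end;
/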

Ensure there is no pending transaction on the source

At that point, it's important to make sure there are no uncommitted changes left on the source database from before the apply process was created. To guarantee that, we run dbms_capture_adm.prepare_xxxx_instantiation, which takes a shared lock on all the source objects. We need to run that command AFTER we create the apply process, and we'll instantiate the objects at an SCN taken after that procedure execution:
connect / as sysdba

exec dbms_capture_adm.prepare_table_instantiation('source.t1');

Instantiate the new destination

I can now create the new schema and table to instantiate them:
create user destination2
identified by destination2
default tablespace users
temporary tablespace temp;

grant connect,resource to destination2;

create table destination2.t1(
id number primary key,
text varchar2(80));

col apply_scn format 999999999999 new_value instantiation_scn

select dbms_flashback.get_system_change_number apply_scn
from dual;

APPLY_SCN
---------
795716


prompt &&instantiation_scn
795716

insert into destination2.t1
(select * from source.t1 as of scn &&instantiation_scn);

commit;
Once the table is instantiated, I store the instantiation SCN in the database so that the new apply process only applies changes committed after that SCN. Before we proceed, we have to make sure the change we're about to make won't affect the existing apply process, since both apply processes run in the same database. To verify that, we check that the applied_message_number from dba_apply_progress is greater than the instantiation SCN we are about to set.
Note
If there are no changes on the source database, there is no reason for applied_message_number to move either. You can run an update on one of the source objects and commit it to make the number increase, as in the example below.
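For example, a trivial change like the one below is enough; re-updating an existing row of SOURCE.T1 with its current value still generates redo and moves the apply forward once committed:

update source.t1
set text = text
where id = 1;

commit;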
select applied_message_number
from dba_apply_progress;

APPLIED_MESSAGE_NUMBER
----------------------
796912
0
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'source.t1',
source_database_name => 'BLACK',
instantiation_scn => &&instantiation_scn);
end;
/

Start the new Apply Process

Once the configuration is finished, I can start the new apply process; the propagation is already enabled by default:
exec dbms_apply_adm.start_apply('streams_apply2')

col propagation_name format a19
col status format a10

select propagation_name,
status
from dba_propagation;

PROPAGATION_NAME STATUS
------------------- ----------
STREAMS_PROPAGATION ENABLED
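If the propagation showed up as disabled or aborted instead, the error columns of dba_propagation are the first things to check; for example:

col error_message format a60 wor wra

select propagation_name,
error_date,
error_message
from dba_propagation;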

Test the replication

And I've tested the replication:
insert into source.t1 values (4,'Text 4');

1 row created.

commit;

col id format 99
col text format a6

select id,
text
from destination.t1;

ID TEXT
--- ------
4 Text 4
1 Text 1
2 Text 2
3 Text 3

select id,
text
from destination2.t1;

ID TEXT
--- ------
4 Text 4
1 Text 1
2 Text 2
3 Text 3

delete from source.t1 where id=4;

commit;

Reset the Configuration

As in the other scenarios, you'll find below the script to reset the configuration to what it was before this post:
exec dbms_propagation_adm.stop_propagation('streams_propagation');
exec dbms_propagation_adm.drop_propagation('streams_propagation', true);

exec dbms_apply_adm.stop_apply('streams_apply2')
exec dbms_apply_adm.drop_apply('streams_apply2',true);

begin
dbms_streams_adm.remove_queue(
queue_name => 'strmadmin.streams_queue2',
cascade => false,
drop_unused_queue_table => true);
end;
/

select queue_table
from dba_queue_tables
where owner='STRMADMIN';

QUEUE_TABLE
-------------------
STREAMS_QUEUE_TABLE

drop user destination2 cascade;


Tuesday, February 3, 2009

Setting Up a Downstream Real-Time Capture with RMAN 101

Oracle Streams downstream capture is probably one of the most commonly used Streams configurations, whether it's for long-term replication or for a short period of time, such as during a database upgrade. Its limited impact on the source database is its main selling point: all it requires on the primary database is adding supplemental logging and capturing the dictionary from it.

On the other hand, Oracle Recovery Manager, or RMAN, is one of the most common ways to clone a database, and it easily handles several hundred gigabytes or more. It's probably one of the number one tools for most Oracle Database Administrators.

This post shows a simple setup of Streams real-time downstream capture instantiated with the help of RMAN; as in the first post of this blog, "Oracle Streams One Way Table Replication 101", we will focus on keeping everything simple and we'll take many shortcuts to get to the result. However, you should be able to fill the gap with your own needs pretty easily, and you'll find all the details in the documentation. Regarding this specific configuration, you can start with Oracle® Streams Concepts and Administration 11g Release 1 (11.1), Appendix B "Online Database Upgrade with Oracle Streams".

Let's start... We'll build the configuration shown in the graphic below:


Note
The above configuration requires Oracle Database Enterprise Edition
We'll assume we already have a database and its instance; they are both named BLACK in what follows. We'll go through all the steps of the demo setup, from the creation of the source schema to the testing of the working configuration.

A Few Words About the Source Database and its Schema

The source database and its instance are both named BLACK. The script below creates a DEMO schema with two tables, T1 and T2, that have primary keys and are fully supported by Streams:

connect / as sysdba

create user demo
identified by demo
default tablespace users
temporary tablespace temp;

grant connect, resource to demo;

connect demo/demo

create table t1 (
id number primary key,
lib varchar2(10));

create table t2 (
id number primary key,
lib varchar2(10));

insert into demo.t1
values (1,'data 1');

insert into demo.t1
values (2,'data 2');

insert into demo.t2
values (1,'data 1');

commit;
Note
To keep things simple, we assume there is no unsupported data type in the application schema (see DBA_STREAMS_UNSUPPORTED) and that all the tables have a primary or a unique key.
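If you want to verify that assumption on your own schema, a query like the one below lists the tables Streams cannot capture; an empty result for the DEMO schema is what we expect here:

select table_name,
reason
from dba_streams_unsupported
where owner = 'DEMO';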
Configure Database Logging and Prepare the Multi-Version Data Dictionary

For more details about the requirements of a Streams source database, you can refer to the related section of the first post of this blog. In short, the database must be in archivelog mode and supplemental logging must be configured:
connect / as sysdba

alter database add supplemental log
data (primary key, unique index) columns;

select LOG_MODE,
SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from v$database;

LOG_MODE SUPPLEME SUP SUP
------------ -------- --- ---
ARCHIVELOG IMPLICIT YES YES
You also need to push a copy of the data dictionary from the source to the destination database. To do that, we run DBMS_CAPTURE_ADM.BUILD as below:
var first_scn number;

set serveroutput on

DECLARE
scn NUMBER;
BEGIN
DBMS_CAPTURE_ADM.BUILD(
first_scn => scn);
DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
:first_scn := scn;
END;
/

First SCN Value = 319193
It is important to keep track of the SCN returned by DBMS_CAPTURE_ADM.BUILD. You'll use it as the first SCN of the capture process so that the metadata from the source database can be pushed to, and kept in, the destination MVDD. Note that this command captures only the minimum needed for the MVDD; you also have to add the definitions of the objects you want to replicate. Use one of the prepare_xxxx_instantiation procedures of the dbms_capture_adm package to add the related definitions to the redo logs:
exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'demo.t1');

exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'demo.t2');

Duplicate your Database with RMAN

You can use the method of your choice to instantiate the data on the destination database. What matters is that (1) you know the SCN of the table copies and (2) that SCN, called the instantiation SCN, is after the first SCN returned by DBMS_CAPTURE_ADM.BUILD. In what follows, we'll use Recovery Manager and fully leverage the new 11g DUPLICATE FROM ACTIVE DATABASE command.
Note
In this example, we'll duplicate BLACK to build a new database named WHITE on the same server; because the two databases will run on the same server, we'll rename the directories /.../BLACK to /.../WHITE.
To duplicate the database, we can run the following operations:
  • Configure the service for the new Instance; On Linux, the command looks like below:
echo "WHITE:$ORACLE_HOME:N" >>/etc/oratab
  • Setup the environment for the new instance
. oraenv
WHITE
  • Create a password file with the same password as the BLACK instance:
cd $ORACLE_HOME
orapwd file=orapwWHITE password=change_on_install ignorecase=Y
  • Statically register the WHITE instance in its local listener, via the listener.ora file. This allows a SYSDBA user to connect to WHITE from a remote location even when the instance is down.
cd $ORACLE_HOME/network/admin

$ grep -A 6 SID_LIST_LISTENER listener.ora
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
(SID_NAME = WHITE)
)
)

lsnrctl reload listener
  • Create an alias for the 2 instances in the tnsnames.ora file:
cd $ORACLE_HOME/network/admin

$ cat tnsnames.ora

BLACK =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))
)
(CONNECT_DATA= (SID=BLACK)
)
)

WHITE =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))
)
(CONNECT_DATA= (SID=WHITE)
)
)
  • Create an init.ora file to start the new instance. Note that you only need 4 parameters (or even 3 if you use the 11g memory_target parameter); the spfile will be rebuilt from the BLACK one during the DUPLICATE operation anyway:
cd $ORACLE_HOME/dbs

echo "db_name=WHITE" >initWHITE.ora
echo "pga_aggregate_target=160M" >>initWHITE.ora
echo "processes=100" >>initWHITE.ora
echo "sga_target=250M" >>initWHITE.ora
  • If the source database uses some directories that are not managed automatically by OMF or ADR, you'll have to create the corresponding directories for the new database
# Corresponding destination for non-OMF datafiles
mkdir -p /u01/app/oracle/oradata/WHITE

# Corresponding destination for archivelogs
mkdir -p /u01/app/oracle/oradata/WHITE/archivelogs

# Audit File Destination
mkdir -p $ORACLE_BASE/admin/WHITE/adump
  • Connect to both source and destination database with RMAN:
$ORACLE_HOME/bin/rman
connect target sys@black
database Password:
connected to target database: BLACK (DBID=361377223)

connect auxiliary sys@white
auxiliary database Password:
connected to auxiliary database (not started)
  • Start the WHITE instance in nomount mode:
startup auxiliary nomount;
  • Run the DUPLICATE command:
DUPLICATE TARGET DATABASE
TO WHITE
FROM ACTIVE DATABASE
DB_FILE_NAME_CONVERT 'BLACK','WHITE'
SPFILE
PARAMETER_VALUE_CONVERT 'BLACK','WHITE'
SET PGA_AGGREGATE_TARGET = '150M'
SET SGA_TARGET = '250M'
SET LOG_FILE_NAME_CONVERT 'BLACK','WHITE';
  • Exit RMAN; the DUPLICATE operation is done!
exit;
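Before going further, you may want to run a quick sanity check on the new database; assuming your environment still points to the WHITE instance, the query below confirms the duplicate is open and got its own DBID:

sqlplus / as sysdba

select name,
dbid,
open_mode
from v$database;

exit;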
Create the Streams Administrator and the Streams Queue

We'll create the Streams Administrator and the Streams Queue on the WHITE database; for a more detailed explanation, refer to the corresponding section of my previous post:
. oraenv
WHITE

sqlplus / as sysdba

CREATE TABLESPACE streams_tbs
DATAFILE '/u01/app/oracle/oradata/WHITE/streams_tbs01.dbf'
SIZE 25M AUTOEXTEND ON MAXSIZE 256M;

CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

grant dba to strmadmin;

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/

exit;
Note
In the case of a downstream capture, there is no need to create a Streams Administrator on the source database. There is no need to create a database link from or to the source database either, and the global_names parameter doesn't have to be set to true.

Setup the Log Transport Service for Downstream Capture

To allow the redo logs to be shipped from BLACK to WHITE:
  • Create standby redo logs on WHITE, sized like the online redo logs of the BLACK database.
  • Change the log_archive_dest_n parameters of the BLACK instance so that the logs are shipped in asynchronous mode to WHITE.
To start, we'll check the size of the redolog files on BLACK:
. oraenv
BLACK

sqlplus / as sysdba

select thread#, sequence#, bytes
from v$log
order by 1,2;

THREAD# SEQUENCE# BYTES
------- --------- ----------
1 17 52428800
1 18 52428800
1 19 52428800

exit;
Then we'll create standby redologs of the same size on WHITE:
. oraenv
WHITE

sqlplus / as sysdba

alter database add standby logfile thread 1 group 21
('/u01/app/oracle/oradata/WHITE/redo21_1.log') SIZE 52428800;
alter database add standby logfile thread 1 group 22
('/u01/app/oracle/oradata/WHITE/redo22_1.log') SIZE 52428800;
alter database add standby logfile thread 1 group 23
('/u01/app/oracle/oradata/WHITE/redo23_1.log') SIZE 52428800;

exit;
Once the standby redo logs are created, we can set up the log transport service on the BLACK instance:
. oraenv
BLACK

sqlplus / as sysdba

alter system set log_archive_dest_2=
'service=WHITE lgwr async noregister valid_for=(online_logfiles,all_roles) db_unique_name=WHITE';

alter system set log_archive_config=
'dg_config=(BLACK,WHITE)';

alter system set log_archive_dest_state_2=enable;

exit

. oraenv
WHITE

sqlplus / as sysdba

alter system set log_archive_config=
'dg_config=(BLACK,WHITE)';

exit
Note:
In the case of real-time capture, redo data is shipped by the LNS process to the RFS processes and stored in the standby redo log files. The ARCH processes of the destination instance then archive the standby redo logs; as a consequence, you should not use the TEMPLATE clause.
Create the Downstream Capture

Creating a downstream capture is exactly like creating a capture on the source database. The only difference is the source database name, which in this case differs from the database actually running the capture:
connect strmadmin/strmadmin

BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'strmadmin.streams_queue',
capture_name => 'streams_capture',
rule_set_name => NULL,
source_database => 'BLACK',
use_database_link => false,
first_scn => 319193,
logfile_assignment => 'implicit');
END;
/
Once the capture is created, we'll set its downstream_real_time_mine parameter to Y:
BEGIN
DBMS_CAPTURE_ADM.SET_PARAMETER(
capture_name => 'streams_capture',
parameter => 'downstream_real_time_mine',
value => 'Y');
END;
/
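You can check that the capture is registered as a downstream capture and that the parameter is set; for example:

col capture_name format a15
col capture_type format a10

select capture_name,
capture_type,
status
from dba_capture;

col parameter format a26
col value format a5

select capture_name,
parameter,
value
from dba_capture_parameters
where parameter = 'DOWNSTREAM_REAL_TIME_MINE';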
We can then add the capture rules:
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t1',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t2',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/
Specify the Instantiation SCN and Create the Apply

We need to specify the instantiation SCN of the tables so that the apply process knows from which transactions it should start applying. In the case of the RMAN DUPLICATE command, that's the SCN right before the OPEN RESETLOGS:
select INCARNATION#,RESETLOGS_TIME,RESETLOGS_CHANGE#
from v$database_incarnation
order by 1;

INCARNATION# RESETLOGS RESETLOGS_CHANGE#
------------ --------- -----------------
1 26-JAN-09 1
2 03-FEB-09 329645
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'demo.t1',
source_database_name => 'BLACK',
instantiation_scn => 329644);
end;
/

begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'demo.t2',
source_database_name => 'BLACK',
instantiation_scn => 329644);
end;
/

col SOURCE_DATABASE format a6
col OBJECT format a10
col INSTANTIATION_SCN format 999999999999

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;

SOURCE OBJECT INSTANTIATION_SCN
------ ---------- -----------------
BLACK DEMO.T1 329644
BLACK DEMO.T2 329644
We can create the apply process as below; in this case, creating the rules implicitly creates the associated apply process if it doesn't exist:
-- Create the Apply and add the DEMO.T1 table
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t1',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'demo.t2',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
END;
/
The configuration is over...

Start And Monitor Oracle Streams

Once the Streams capture and apply processes are created, start both of them:
exec dbms_capture_adm.start_capture('streams_capture');

exec dbms_apply_adm.start_apply('streams_apply');

Before we test that changes on the source database are replicated as expected to the destination, we can check the status of the components involved. We'll start with the status of the log transport service on the BLACK instance and fix any error:
show parameter log_archive_dest_2
show parameter log_archive_dest_state_2

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

col dest_id format 99
col timestamp format a20
col message format a90 wor wra
set pages 1000
set lines 120
select dest_id,
timestamp,
message
from v$dataguard_status
order by timestamp;

select process,
pid,
status,
sequence#,
thread#
from v$managed_standby;
You can check that the files sent by BLACK are registered as expected on WHITE; connect to the WHITE database and execute:
select process,
status,
client_process,
sequence#,
thread#
from v$managed_standby;

set lines 120
set pages 1000
col consumer_name format a15
col name format a65 wor wra
col source format a6
col t# format 99
col seq# format 9999999

select consumer_name ,
source_database source,
thread# t#,
sequence# seq#,
first_scn,
name,
purgeable
from dba_registered_archived_log
order by source_database, first_scn;
You can also check the status of the standby logs on WHITE:
set lines 120
set pages 1000
col dbid format 999999999999999
col bytes format 999,999,999,999
col first_change# format 999999999999999
col sequence# format 9999999

select dbid,
sequence#,
bytes,
status,
first_change#
from v$standby_log;
The streams capture should be running without any error on WHITE:
set lines 120
set pages 1000
col capture_name format a15
col status format a7
col source format a6
col error_message format a85 wor wra

select capture_name,
status,
source_database source,
error_message
from dba_capture;

set lines 120
set pages 1000
col capture_name format a15
col state format a50 wor wra
col sysdate format a20
col capture_time format a20
col total_messg format 99999999999
col apply_name format a15

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

select capture_name,
state,
capture_time,
total_messages_created total_messg,
apply_name
from v$streams_capture;
The apply should be running without any error on WHITE too:

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

set lines 120
set pages 1000
col apply_name format a15
col status format a7
col status_change_time format a20
col error_message format a60 wor wra

select apply_name,
status,
status_change_time,
error_message
from dba_apply;

set lines 120
set pages 1000
col apply_name format a15
col source_database format a7
col sysdate format a20
col apply_time format a20
col applied_message_create_time format a20

select apply_name,
source_database,
sysdate,
apply_time,
applied_message_create_time
from dba_apply_progress;
If there is any remaining error, you'll have to correct it ;-).
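In particular, if the apply process dequeued a transaction it could not apply, the error queue is the first place to look; for instance:

set lines 120
col apply_name format a15
col error_message format a60 wor wra

select apply_name,
local_transaction_id,
error_number,
error_message,
message_count
from dba_apply_error;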

Test the Replication

Testing the replication is pretty easy, connect to the BLACK database and run some DML and DDL commands:
sqlplus demo/demo@BLACK

insert into t1
values (3,'data 3');

commit;

select * from t1;

ID LIB
-- ------
3 data 3
1 data 1
2 data 2

alter table t2 add (new_column number);

update t2 set new_column=2;

commit;

select * from t2;

ID LIB NEW_COLUMN
-- ------ ----------
1 data 1 2

exit;
Once the changes performed, you can check they are applied to the destination:
sqlplus demo/demo@WHITE

select * from t1;

ID LIB
-- ------
3 data 3
1 data 1
2 data 2

select * from t2;

ID LIB NEW_COLUMN
-- ------ ----------
1 data 1 2

exit;
You see how easy it is!

Other Considerations

What next? Because the simplest part of the story actually ends here!

We can easily imagine you'll use Streams downstream capture to upgrade a database to a new release or patch set. If that's the case, consider using dynamic service registration to make the application connect to your new target seamlessly, as sketched below. Obviously, I haven't covered how to deal with unsupported types or operations, or how to build a really advanced rollback scenario. That's a totally different story... in a totally different dimension.
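As an illustration of that dynamic service registration idea, here is a minimal sketch; the service name app_srv is purely an assumption, and you would normally add failover and load-balancing attributes to fit your environment. The application connects to the service name rather than to a specific instance, so switching it over is just a matter of stopping the service on the old database and starting it on the new one:

-- On the new (destination) database, once it is ready to take the load
exec dbms_service.create_service('app_srv','app_srv');
exec dbms_service.start_service('app_srv');

-- On the old (source) database, at switchover time
exec dbms_service.stop_service('app_srv');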

To finish this hands-on, we'll clean up our environment, i.e. disable the log transport service, remove the supplemental logging from BLACK, drop the WHITE database and its instance, and drop the DEMO schema; the script below shows how to perform those operations:
. oraenv
BLACK

sqlplus / as sysdba

alter system reset log_archive_dest_2;
alter system reset log_archive_dest_state_2;
alter system reset log_archive_config;

alter system set log_archive_dest_2='' scope=memory;
alter system set log_archive_config='' scope=memory;

alter database drop supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

drop user demo cascade;

exit;

. oraenv
WHITE

$ORACLE_HOME/bin/rman target /
startup force mount
sql 'alter system enable restricted session';
drop database including backups;

exit;

###################################
# Delete Remaining Database Files #
###################################
# Delete the entries in $ORACLE_HOME/dbs
rm -f $ORACLE_HOME/dbs/*WHITE*

# Delete the Streams Registered Archived Logs
# (RMAN doesn't do it)
rm -f /u01/app/oracle/oradata/WHITE/archivelogs/*

# Delete the database Directories
rm -rf $ORACLE_BASE/oradata/WHITE
rm -rf $ORACLE_BASE/admin/WHITE
rm -rf $ORACLE_BASE/diag/rdbms/white

# Delete the WHITE instance from /etc/oratab
ed /etc/oratab <<EOF
/^WHITE/d
wq
EOF

# Remove the entries from the network configuration
# like tnsnames.ora and listener.ora manually
vi $ORACLE_BASE/network/admin/tnsnames.ora
vi $ORACLE_BASE/network/admin/listener.ora

lsnrctl reload LISTENER
Leave your comments if you find this post useful or confusing. Let me know if you find any typos!


Tuesday, January 20, 2009

Tracking Updates And Deletes With a Custom Rule-Based Transformation

As you know, Streams can be used in several scenarios, including some really advanced ones where you may want to transform DDL or DML changes into something very different. This post starts from the previous Streams One Way Table Replication 101 post. Instead of maintaining a copy of the table in a destination schema, we'll transform the captured DML changes and keep the data as it was before the changes; we'll use a custom rule-based transformation for that purpose.

I won't argue about whether this is the right way to address your problem; it's probably not, by the way! The goal here is to illustrate a specific part of the documentation, described in Managing Custom Rule-Based Transformations.

First, let's picture what we want to achieve; basically, that's what follows:

Custom Transformation Example

We want to populate a table named T3_HISTORY from the changes made to the T3 table, so that:

  • If one row is inserted in T3, nothing is applied to T3_HISTORY
  • If one or several columns are modified in a row of T3, a row is inserted in T3_HISTORY with the primary key of the changed row and the values of the columns before they've been modified
  • If one row is deleted from T3, it is inserted in T3_HISTORY
Pretty simple? To set up that example, we'll perform the steps below:
Note:
We've tested this post on a 11.1.0.7 Enterprise Edition version of Oracle running on Linux 32bits.

Step 1: Build The Sample Schema and Table

In order to start, we'll create a schema named SOURCE and a table named T3. The script below can be used for that simple operation:

connect / as sysdba

create user source
identified by source
default tablespace users
temporary tablespace temp;

grant connect,resource to source;

connect source/source

create table t3(
id number primary key,
text1 varchar2(80),
text2 varchar2(80));

insert into t3(id, text1, text2)
values (1,'Text 1','Text 1');

insert into t3(id, text1, text2)
values (2,'Text 2','Text 2');

commit;
Step 2: Configure Database Logging and the Multi-Version Data Dictionary

Then we'll enable supplemental logging and capture the dictionary for our apply process; for a more detailed explanation, refer to the corresponding section of the previous post:
connect / as sysdba

alter database add supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

SUPPLEME SUP SUP
-------- --- ---
IMPLICIT YES YES

connect / as sysdba

var first_scn number;

set serveroutput on

DECLARE
scn NUMBER;
BEGIN
DBMS_CAPTURE_ADM.BUILD(
first_scn => scn);
DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
:first_scn := scn;
END;
/

First SCN Value = 49042939541
exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'source.t3');

Step 3: Create the Streams Administrator and The Streams Queue

We'll create the Streams Administrator and the Streams Queue; for a more detailed explanation, refer to the corresponding section of the previous post:
connect / as sysdba

CREATE TABLESPACE streams_tbs
DATAFILE '/u01/app/oracle/oradata/BLACK/streams_tbs01.dbf'
SIZE 25M AUTOEXTEND ON MAXSIZE 256M;

CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

grant dba to strmadmin;

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/

Step 4: Create the Capture Process and Define the Capture Rules

Once the source and destination databases are set up, you can create a capture process and add a rule to capture the source table. In this case we are not interested in the INSERT commands, just in DELETE and UPDATE; for that reason we'll use an AND condition that will be added to the rule evaluation. For a more detailed explanation about the capture process, refer to the corresponding section of the previous post:
connect strmadmin/strmadmin

var first_scn number;

exec :first_scn:= 49042939541

BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'strmadmin.streams_queue',
capture_name => 'capture4transfo',
rule_set_name => NULL,
source_database => 'BLACK',
use_database_link => false,
first_scn => :first_scn,
logfile_assignment => 'implicit');
END;
/

col capture_name format a15
col queue_name format a13
col first_scn format 999999999999
col start_scn format 999999999999
col rule_set_name format a11

select capture_name,
queue_name,
first_scn,
start_scn,
rule_set_name
from dba_capture;

CAPTURE_NAME QUEUE_NAME FIRST_SCN START_SCN RULE_SET_NA
--------------- ------------- ------------- ------------- -----------
CAPTURE4TRANSFO STREAMS_QUEUE 49042939541 49042939541

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'source.t3',
streams_type => 'capture',
streams_name => 'capture4transfo',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'BLACK',
inclusion_rule => true,
and_condition =>
':lcr.get_command_type() in (''DELETE'',''UPDATE'')');
END;
/

set lines 120
col streams_name format a16
col streams_type format a9
col table_owner format a10
col table_name format a15
col rule_type format a8
col rule_name format a15

select STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES;

STREAMS_NAME STREAMS_T TABLE_OWNE TABLE_NAME RULE_TYP RULE_NAME
---------------- --------- ---------- ---------- -------- ---------
CAPTURE4TRANSFO CAPTURE SOURCE T3 DML T352

col rule_condition format a60 wor wra
select rule_condition
from dba_streams_table_rules
where streams_name='CAPTURE4TRANSFO'
and table_owner='SOURCE'
and table_name='T3';

RULE_CONDITION
------------------------------------------------------------
(((((:dml.get_object_owner() = 'SOURCE' and :dml.get_object_
name() = 'T3')) and :dml.get_source_database_name() = 'BLACK
' )) and (:dml.get_command_type() in ('DELETE','UPDATE')))

Step 5: Create a Declarative Rule-Based Transformation and Add it to the Capture Process

We want the transformation to be as reusable as possible; for that reason we'll proceed in two steps. The first step consists in renaming the table from SOURCE.T3 to DESTINATION.T3_HISTORY with a declarative rule-based transformation. We'll attach that transformation to the capture rule; the script below details the associated steps:
connect strmadmin/strmadmin

begin
dbms_streams_adm.rename_table(
rule_name => 'T352' ,
from_table_name => 'SOURCE.T3',
to_table_name => 'DESTINATION.T3_HISTORY',
step_number => 0,
operation => 'add');
end;
/

col rule_name format A6
col from_schema_name format a6
col to_schema_name format a12

select rule_name,
transform_type,
from_schema_name,
to_schema_name,
declarative_type
from dba_streams_transformations;

RULE_N TRANSFORM_TYPE FROM_S TO_SCHEMA_NA DECLARATIVE_T
------ -------------------------- ------ ------------ -------------
T352 DECLARATIVE TRANSFORMATION SOURCE DESTINATION RENAME TABLE

Step 6: Create an Apply Process

We can now create an apply process that subscribes to the Streams queue and gets the changes to be applied to the DESTINATION.T3_HISTORY table:
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'destination.t3_history',
streams_type => 'apply',
streams_name => 'apply4transfo',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

col apply_name format a13
col queue_name format a13
col rule_set_name format a11

select apply_name,
queue_name,
rule_set_name,
status,
message_delivery_mode
from dba_apply;

APPLY_NAME QUEUE_NAME RULE_SET_NA STATUS MESSAGE_DE
------------- ------------- ----------- -------- ----------
APPLY4TRANSFO STREAMS_QUEUE RULESET$_55 DISABLED CAPTURED

set lines 120
col streams_name format a16
col streams_type format a9
col table_owner format a11
col table_name format a15
col rule_type format a8
col rule_name format a15

select STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES
where STREAMS_NAME='APPLY4TRANSFO';

STREAMS_NAME STREAMS_T TABLE_OWNER TABLE_NAME RULE_TYP RULE_NAME
---------------- --------- ----------- ---------- -------- ------------
APPLY4TRANSFO APPLY DESTINATION T3_HISTORY DML T3_HISTORY54

Step 7: Create a Custom Rule-Based Transformation and Add it to the Apply Process Rule

As you can easily understand, the transformation we'll apply to the LCR is too complex to be handled declaratively. For that reason, we'll create a custom function that takes the LCR as a parameter and returns the transformed LCR. You'll find its code below; it does the following:
  • Make sure the LCR is an 'UPDATE' or a 'DELETE'
  • Get all the before images (i.e. 'old') of the changed and PK/UK columns and set them as the values to be inserted in the LCR (i.e. 'new')
  • Add a column SCN that contains the SCN of the DML statement
  • Add a column DML_OPERATION that says if the capture operation was an UPDATE or a DELETE
  • Delete all the before images (i.e. 'old') of the LCR
  • Transform the command into an INSERT
You'll find the code of the function below:
connect strmadmin/strmadmin

create or replace function transform4history(in_any in anydata)
return anydata
IS
lcr sys.lcr$_row_record;
lcr_rl sys.lcr$_row_list;
rc number;
invalid_column exception;
pragma exception_init(invalid_column,-23607);
begin
-- Get the type of object
-- Check if the object type is SYS.LCR$_ROW_RECORD
if in_any.GETTYPENAME='SYS.LCR$_ROW_RECORD' THEN
-- Put the row LCR into lcr
rc := in_any.GETOBJECT(lcr);
-- Check if that's an Update or a Delete
if (lcr.GET_COMMAND_TYPE() in ('UPDATE','DELETE')) then
lcr_rl:=lcr.get_values('old');
for i in lcr_rl.first..lcr_rl.last loop
begin
lcr.set_value('new',lcr_rl(i).column_name,
lcr.get_value('old',lcr_rl(i).column_name));
exception when invalid_column then
lcr.add_column('new',lcr_rl(i).column_name,
lcr.get_value('old',lcr_rl(i).column_name));
end;
end loop;
lcr.add_column('new','SCN',
anydata.convertnumber(lcr.get_scn()));
lcr.add_column('new','DML_OPERATION',
anydata.convertvarchar2(lcr.get_command_type()));
for i in lcr_rl.first..lcr_rl.last loop
lcr.delete_column(lcr_rl(i).column_name,'old');
end loop;
lcr.set_command_type('INSERT'); -- Change it to an insert
return anydata.convertobject(lcr);
end if;
return in_any;
end if;
-- Not a row LCR: return the message unchanged
return in_any;
end;
/

You can now add the transformation to the apply rule that matches the destination table. In order to do that, use DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION as below:
connect strmadmin/strmadmin

begin
dbms_streams_adm.set_rule_transform_function(
rule_name => 'T3_HISTORY54',
transform_function => 'transform4history');
end;
/

col rule_name format A12
col user_function_name format a32

select rule_name,
transform_type,
user_function_name
from dba_streams_transformations;

RULE_NAME TRANSFORM_TYPE USER_FUNCTION_NAME
------------ -------------------------- --------------------------------
T352 DECLARATIVE TRANSFORMATION
T3_HISTORY54 CUSTOM TRANSFORMATION "STRMADMIN"."TRANSFORM4HISTORY"
Step 8: Instantiate the Table in the DESTINATION Schema

Streams is now configured. Before you start the different processes, you must instantiate the table in the DESTINATION schema and store the instantiation SCN so that the apply process knows exactly which changes it can apply. In our test case, we'll start with an empty table that is slightly different from the original one:
  • We'll add a column SCN to store the SCN of the change
  • We'll add a column DML_OPERATION to store the DML command used on the primary table
  • It won't have any Primary Key; even though we could create one on (ID, SCN)
The associated SQL is below:
connect / as sysdba

create user destination
identified by destination
default tablespace users
temporary tablespace temp;

grant connect,resource to destination;

create table destination.t3_history(
id number,
text1 varchar2(80),
text2 varchar2(80),
scn number,
dml_operation varchar2(6));
col apply_scn format 999999999999

select dbms_flashback.get_system_change_number apply_scn
from dual;

APPLY_SCN
-----------
49042945957
Once the table is created, use the dbms_apply_adm.set_table_instantiation_scn procedure to store the instantiation SCN for the apply process, as below:
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'source.t3',
source_database_name => 'BLACK',
instantiation_scn => 49042945957);
end;
/

col SOURCE_DATABASE format a6
col OBJECT format a10
col INSTANTIATION_SCN format 999999999999

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;

SOURCE OBJECT INSTANTIATION_SCN
------ ---------- -----------------
BLACK SOURCE.T3 49042945957
Step 9: Start the Capture and Apply Processes

You can now start the capture and the apply processes:
exec dbms_capture_adm.start_capture('capture4transfo');
exec dbms_apply_adm.start_apply('apply4transfo');
Step 10: Test the Table Replication

To test the replication, update the source table and check that the changes are propagated to the destination table. Below is a set of simple tests showing the replication works as it should:
update source.t3 set text1= 'Text 3'
where id=1;

delete from source.t3
where id=1;

commit;

pause

col id format 99
col text1 format a6
col text2 format a6
col scn format 999999999999

select id,
text1,
text2,
scn,
dml_operation
from destination.t3_history
order by id, scn;

ID TEXT1 TEXT2 SCN DML_OP
--- ------ ------ ------------- ------
1 Text 1 49042951081 UPDATE
1 Text 3 Text 1 49042965443 DELETE
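You can also verify that plain inserts are ignored, as intended by the AND condition added to the capture rule; for example, the statements below should leave T3_HISTORY untouched and the count should stay at 0, since INSERT commands are filtered out before they reach the queue:

insert into source.t3(id, text1, text2)
values (3,'Text 3','Text 3');

commit;

select count(*)
from destination.t3_history
where id=3;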
Step 11: Stop and Remove the Oracle Streams Configuration

In this last step, we'll clean up the environment and leave it as it was before we started. To do that, we'll stop and drop the capture and apply processes; drop the Streams queue, the Streams administrator, and the associated tablespace; drop the two schemas; and disable supplemental logging:
connect / as sysdba

exec dbms_capture_adm.stop_capture('capture4transfo');
exec dbms_apply_adm.stop_apply('apply4transfo');

exec dbms_capture_adm.drop_capture('capture4transfo',true);
exec dbms_apply_adm.drop_apply('apply4transfo',true);

exec dbms_streams_adm.remove_queue('strmadmin.streams_queue',true,true);

exec dbms_apply_adm.SET_TABLE_INSTANTIATION_SCN('source.t3','BLACK', null)

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;

drop user strmadmin cascade;

drop tablespace streams_tbs
including contents and datafiles;

drop user destination cascade;
drop user source cascade;

alter database drop supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

SUPPLEME SUP SUP
-------- --- ---
NO NO NO


Tuesday, January 13, 2009

Streams Synchronous Capture 101

Streams Synchronous Capture is a new feature of Oracle 11g; it doesn't rely on archived or redo logs to capture changes. Instead, it captures changes as they are made, using a row-level trigger-based mechanism. As its name suggests, the only part that is "synchronous" is the capture; the changes continue to be applied asynchronously to the destination objects and the databases stay loosely coupled. As a matter of fact, the propagation and apply processes don't change much.

In addition to the capture itself, the main change associated with synchronous capture is the queue usage: messages are persistent (commit-time) instead of buffered. The reason is that the capture process doesn't get a second chance to capture a change and, as a consequence, cannot afford to lose anything in a queue. As a result, every change captured synchronously is logged in all the databases it transits through, and the footprint can be quite significant. It's also worth noting that synchronous capture can only propagate DML changes, that it only works with a small subset of data types that doesn't include LOBs, and that the capture process has to run on the source database. However, synchronous capture has its own set of advantages too:
  • It can capture changes on a Standard Edition or a Standard Edition One 11g database
  • It can be used even if the source database is running in noarchivelog mode, and it doesn't require you to add any supplemental logging
  • It's probably simpler to set up, though I feel a regular Streams setup is pretty simple too
  • It's a new feature of 11.1 ;-)
In this post, we'll reproduce the same setup we built in the previous Oracle Streams One Way Table Replication 101 post, this time with synchronous capture; to do that, we'll go through the steps below:
Step 1: Build The Sample Schema and Table

In order to start, we'll create a schema named SOURCE and a table named T2, the same way we did in the previous post. You'll find below the script to perform the associated operations:
connect / as sysdba

create user source
identified by source
default tablespace users
temporary tablespace temp;

grant connect,resource to source;

connect source/source

create table t2(
id number primary key,
text varchar2(80));

insert into t2(id, text)
values (1,'Text 1');

insert into t2(id, text)
values (2,'Text 2');

commit;

Step 2: Create the Streams Administrator and Queue

The database doesn't require any specific settings, and synchronous capture doesn't rely on the Multi-Version Data Dictionary either. We'll create a Streams Administrator in all the databases involved in the capture, propagation or apply processes; the script below does it. It creates a tablespace to store the queues, creates a user STRMADMIN, grants it the necessary privileges and roles, and creates the queue that will be used by the capture and the apply processes to share the LCRs:
connect / as sysdba

CREATE TABLESPACE streams_tbs
DATAFILE SIZE 25M
AUTOEXTEND ON MAXSIZE 256M;

CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

grant dba to strmadmin;

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/
Step 3: Create an Apply Process
IMPORTANT:
Unlike asynchronous capture, synchronous capture is always running: you cannot start it and, more importantly, you cannot stop it! The reason is that it has only one chance to capture each change, and you could miss changes if you had that ability. As a consequence of that, and of the fact that a running capture must have an apply process, you MUST create the apply process before you create the capture process. If you don't, you may encounter an error like the one below when you try to change your source tables:
insert into source.t1 values (5,'Text 5')
*
ERROR at line 1:
ORA-26694: error while enqueueing into queue STRMADMIN.STREAMS_QUEUE
ORA-24033: no recipients for message
Before rushing into the creation of the apply process, there is one important point to understand:
  • You want the consumed messages to be the ones captured by a synchronous capture and NOT by an asynchronous capture. That means you want the apply process to subscribe to the persistent part of the queue and NOT to the buffered part of it. The way you specify that is by setting the apply_captured parameter to FALSE. That's the subtle part: think of the parameter as meaning "apply asynchronously captured messages" ;-). And here comes the confusing part: the 11.1 documentation for dbms_apply_adm.create_apply suggests that FALSE is the default value. That's not the case; the default is TRUE, and the parameter has to be set explicitly in the case of a synchronous capture.
The script below creates an apply process and configures it so that it subscribes to the strmadmin.streams_queue queue:
connect strmadmin/strmadmin

begin
dbms_apply_adm.create_apply(
queue_name => 'strmadmin.streams_queue',
apply_name => 'sync_apply',
apply_captured => false,
source_database => 'BLACK');
end;
/

begin
dbms_streams_adm.add_table_rules(
table_name => 'source.t2',
streams_type => 'apply',
streams_name => 'sync_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
include_tagged_lcr => false,
source_database => 'BLACK',
inclusion_rule => true);
end;
/


set lines 120
col streams_name format a12
col streams_type format a12
col table_owner format a11
col table_name format a5
col rule_type format a8
col rule_name format a5

select STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES;

STREAMS_NAME STREAMS_TYPE TABLE_OWNER TABLE RULE_TYP RULE
------------ ------------ ----------- ----- -------- ----
SYNC_APPLY APPLY DESTINATION T2 DML T226
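You can double-check that the apply process subscribes to the persistent messages by looking at the APPLY_CAPTURED column of DBA_APPLY; it should show NO for sync_apply:

col apply_name format a10
col apply_captured format a14

select apply_name,
apply_captured,
status
from dba_apply;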
Step 4: Create a Capture Process

You can now create the synchronous capture process. You can simply use DBMS_STREAMS_ADM.ADD_TABLE_RULES with streams_type=>'sync_capture' like below:
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'source.t2',
streams_type => 'sync_capture',
streams_name => 'sync_capture',
queue_name => 'strmadmin.streams_queue');
END;
/

set lines 120
col streams_name format a12
col streams_type format a12
col table_owner format a11
col table_name format a5
col rule_type format a8
col rule_name format a4

select STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES;

STREAMS_NAME STREAMS_TYPE TABLE_OWNER TABLE RULE_TYP RULE
------------ ------------ ----------- ----- -------- ----
SYNC_CAPTURE SYNC_CAPTURE SOURCE T2 DML T228
SYNC_APPLY APPLY SOURCE T2 DML T226
Step 5: Add a Transformation Rule to the Table Apply Rule

Add the transformation to the DML apply rule so that the schema name is transformed from SOURCE to DESTINATION. The script below adds the transformation to the apply rule; it is the same as the one used for asynchronous capture:
begin
dbms_streams_adm.rename_schema(
rule_name => 'T226' ,
from_schema_name => 'SOURCE',
to_schema_name => 'DESTINATION',
step_number => 0,
operation => 'add');
end;
/

col rule_name format A6
col from_schema_name format a6
col to_schema_name format a12

select rule_name,
transform_type,
from_schema_name,
to_schema_name,
declarative_type
from dba_streams_transformations;

RULE_N TRANSFORM_TYPE FROM_S TO_SCHEMA_NA DECLARATIVE_T
------ -------------------------- ------ ------------ -------------
T226 DECLARATIVE TRANSFORMATION SOURCE DESTINATION RENAME SCHEMA

Step 6: Instantiate the Table in the DESTINATION Schema

Streams is now configured. Before you start the different processes, you must instantiate the table in the DESTINATION schema and store the instantiation SCN so that the apply process knows exactly which changes it can apply. To instantiate the table, you can use the method of your choice: RMAN, Data Pump, exp or Flashback Query. Here is an example with Flashback Query:
connect / as sysdba

create user destination
identified by destination
default tablespace users
temporary tablespace temp;

grant connect,resource to destination;

create table destination.t2(
id number primary key,
text varchar2(80));
col apply_scn format 999999999999 new_value apply_scn

select dbms_flashback.get_system_change_number apply_scn
from dual;

APPLY_SCN
-----------
49042368875

insert into destination.t2
(select *
from source.t2 as of scn &&apply_scn);

commit;
Once the table is instantiated, use the dbms_apply_adm.set_table_instantiation_scn procedure to store the instantiation SCN for the apply process, as below:
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'source.t2',
source_database_name => 'BLACK',
instantiation_scn => &&apply_scn);
end;
/

col SOURCE_DATABASE format a6
col OBJECT format a10
col INSTANTIATION_SCN format 999999999999

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;

SOURCE OBJECT INSTANTIATION_SCN
------ ---------- -----------------
BLACK SOURCE.T2 49042368875

Step 7: Start the Apply Process

You can finally start the apply process (remember, a synchronous capture cannot be started or stopped):
exec dbms_apply_adm.start_apply('sync_apply');
Step 8: Test/Validate the Replication

To test the replication, update the source table and check that the changes are propagated to the destination table. Below is a simple test that shows the replication is working as it should:
insert into source.t2(id, text)
values (3,'Text 3');

commit;

pause

col id format 99
col text format a6

select id,
text
from destination.t2;

ID TEXT
-- ------
3 Text 3
1 Text 1
2 Text 2
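As discussed earlier, synchronous capture enqueues persistent messages; if a change does not show up on the destination, you can peek at the queue table to see whether messages are waiting to be dequeued or have already been processed. Below is a hedged example against the AQ$ view of the queue table:

select msg_state,
consumer_name,
count(*)
from strmadmin.aq$streams_queue_table
group by msg_state, consumer_name;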

Step 9: Stop and Remove the Oracle Streams Configuration

In this last step, we'll clean up the environment and leave it as it was before we started. To do that, we'll stop and drop the capture and apply processes; drop the Streams queue, the Streams administrator and the associated tablespace; and drop the two schemas:
connect / as sysdba

exec dbms_apply_adm.stop_apply('sync_apply');

exec dbms_capture_adm.drop_capture('sync_capture',true);
exec dbms_apply_adm.drop_apply('sync_apply',true);

exec dbms_streams_adm.remove_queue('strmadmin.streams_queue',true,true);

drop user strmadmin cascade;

drop tablespace streams_tbs
including contents and datafiles;

drop user destination cascade;
drop user source cascade;

begin
for i in (select source_schema name,
source_database
from dba_apply_instantiated_schemas
where source_schema in ('SOURCE','DESTINATION'))
loop
dbms_apply_adm.set_schema_instantiation_scn(
source_schema_name => i.name,
source_database_name => i.source_database,
instantiation_scn => null);
end loop;
for i in (select source_object_owner||'.'||
source_object_name name,
source_database
from dba_apply_instantiated_objects
where source_object_owner in ('SOURCE','DESTINATION'))
loop
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => i.name,
source_database_name => i.source_database,
instantiation_scn => null);
end loop;
end;
/


Sunday, January 11, 2009

Oracle Streams One Way Table Replication 101

If you're looking to jump-start Oracle Streams, this is probably the right post: a quick setup, a lot of shortcuts, only one database, and probably the simplest configuration you will ever see. Obviously, that doesn't cover everything; you'll always need to dive deep into the concepts and principles of Streams. But you should be able to get a running Streams configuration in less than 30 minutes and use it as a basis to build more complex test cases. Hopefully it will be as useful to you as it is to me.

In this example, we'll maintain, in the DESTINATION schema, a copy of a table named T1 located in the SOURCE schema of the same database. To get to that result, we'll dig into the details of the steps below:

Note:
We've tested this post on a 11.1.0.7 and on a 10.2.0.4 Enterprise Edition version of Oracle running on Linux 32bits.

Step 1: Build The Sample Schema and Table

In order to start, we'll create a schema named SOURCE and a table named T1. You'll find below the script to perform the associated operations:

connect / as sysdba

create user source
identified by source
default tablespace users
temporary tablespace temp;

grant connect,resource to source;

connect source/source

create table t1(
id number primary key,
text varchar2(80));

insert into t1(id, text)
values (1,'Text 1');

insert into t1(id, text)
values (2,'Text 2');

commit;
Step 2: Configure Database Logging and the Multi-Version Data Dictionary
Streams capture processes use the content of the redo logs/archived logs to build Logical Change Records (LCRs) that are propagated to the destination database via buffered queues. Once propagated, the LCRs are applied to the destination database by the apply processes, which transform them back into SQL operations. As you know, the redo logs don't contain SQL but binary data; that data is built to be used by a recovery process and, by default, doesn't contain everything needed to apply the changes on a different database. Here are some of the consequences of using the redo logs to capture changes made on a source database:
  • The redo logs don't contain the SQL commands that have been run on the database; instead, they contain the rowids of the changed rows with the before and after images of those changes. So if you update, say, one column for all the rows of a table in a single update command, Streams will NOT replicate that command. Instead, the capture processes generate one LCR per changed row, and each LCR includes the before and after values of the change. Put another way, one update on the source database may lead to millions of LCRs and updates on the destination database.
  • RowIds are used to uniquely identify the changed rows when recovering from the archive logs and redo logs. But the RowIds differ on a destination database that is not a physical copy of the source database! To uniquely identify changed rows on a destination database, we must add the value of a unique or primary key to every LCR. To get those values into the LCRs, we must add them to the redo logs; we'll use supplemental logging for that purpose!
  • The schemas and names of the changed objects are not stored in the redo logs either; instead, only their object identifiers are stored. Those identifiers end up in the LCRs, and you need a copy of the source database dictionary on the destination database to "reverse engineer" the LCRs into SQL. That copy of the dictionary is part of the Multi-Version Data Dictionary (MVDD). It needs to be captured from the source database and sent to the destination database.
For the reasons above, we need to prepare the source database to be compatible with Streams Capture.

Streams capture requires the source database to run in ARCHIVELOG mode. To uniquely identify rows on the destination database, you also need to add supplemental logging on the source database so that the unique and primary key values of all the changed rows are logged. That can be done at the level of a single object but, as discussed earlier, this post takes some shortcuts and we'll add the supplemental logging at the database level. Below is the script to perform that operation:
connect / as sysdba

alter database add supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

SUPPLEME SUP SUP
-------- --- ---
IMPLICIT YES YES
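
As a side note, Streams capture also requires the database to run in ARCHIVELOG mode; a quick way to check it, if you are unsure about yours:

-- should return ARCHIVELOG; if it returns NOARCHIVELOG, switch the
-- database to ARCHIVELOG mode before going any further
select log_mode
from v$database;
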
To push a copy of the data dictionary from the source to the destination database, we need to run DBMS_CAPTURE_ADM.BUILD as below:
connect / as sysdba

var first_scn number;

set serveroutput on

DECLARE
scn NUMBER;
BEGIN
DBMS_CAPTURE_ADM.BUILD(
first_scn => scn);
DBMS_OUTPUT.PUT_LINE('First SCN Value = ' || scn);
:first_scn := scn;
END;
/

First SCN Value = 49042254018
It is important to keep track of the SCN returned by DBMS_CAPTURE_ADM.BUILD. You'll use that SCN as the first SCN of the capture processes so that the metadata from the source is pushed to, and kept in, the destination MVDD. Note that the command above captures only the minimum needed for the MVDD; you still have to add the metadata of every object you want to replicate. To do that, you must use one of the prepare_xxxx_instantiation procedures of the dbms_capture_adm package as below:
exec dbms_capture_adm.prepare_table_instantiation(-
table_name=>'source.t1');
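
To confirm the table metadata has been added, you can query dba_capture_prepared_tables, for example:

col table_owner format a11
col table_name format a10

select table_owner,
table_name,
scn
from dba_capture_prepared_tables
where table_owner = 'SOURCE';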

Step 3: Create the Streams Administrator and The Streams Queue

To use Oracle Streams, you must create a Streams administrator in every database directly involved with a capture, propagation or apply process; the script below does that. It creates a tablespace to store the queues (keeping a separate tablespace for the queues is recommended), then creates a user STRMADMIN, grants it the necessary privileges and roles, and creates the buffered queue that will be used by the capture and apply processes to share the LCRs:
connect / as sysdba

CREATE TABLESPACE streams_tbs
DATAFILE '/u01/app/oracle/oradata/BLACK/streams_tbs.dbf' size 25M
AUTOEXTEND ON MAXSIZE 256M;

CREATE USER strmadmin IDENTIFIED BY strmadmin
DEFAULT TABLESPACE streams_tbs
QUOTA UNLIMITED ON streams_tbs;

grant dba to strmadmin;

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'strmadmin.streams_queue_table',
queue_name => 'strmadmin.streams_queue');
END;
/
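
A quick note about the grant: giving the DBA role to STRMADMIN keeps this example short. On a real system, you would more likely use dbms_streams_auth to grant the Streams administration privileges, something along these lines:

begin
dbms_streams_auth.grant_admin_privilege(
grantee => 'strmadmin');
end;
/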

Step 4: Create the Capture Process and Define the Capture Rules

Once the source and destination databases are set up, you can create the capture process and add the source table to it. The first_scn parameter must match the value returned by DBMS_CAPTURE_ADM.BUILD; the source_database parameter must match the db_unique_name (or db_name, if db_unique_name is not set) of the source database. Here is the script to create the capture process:
connect strmadmin/strmadmin

accept first_scn prompt "Enter the First SCN of the Capture: "
Enter the First SCN of the Capture: 49042254018

var first_scn number;
exec :first_scn:=&&first_scn

BEGIN
DBMS_CAPTURE_ADM.CREATE_CAPTURE(
queue_name => 'strmadmin.streams_queue',
capture_name => 'streams_capture',
rule_set_name => NULL,
source_database => 'BLACK',
use_database_link => false,
first_scn => :first_scn,
logfile_assignment => 'implicit');
END;
/

col capture_name format a15
col queue_name format a13
col first_scn format 999999999999
col start_scn format 999999999999
col rule_set_name format a11

select capture_name,
queue_name,
first_scn,
start_scn,
rule_set_name
from dba_capture;

CAPTURE_NAME QUEUE_NAME FIRST_SCN START_SCN RULE_SET_N
--------------- ------------- ------------- ------------- ----------
STREAMS_CAPTURE STREAMS_QUEUE 49042254018 49042254018 RULESET$_7

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'source.t1',
streams_type => 'capture',
streams_name => 'streams_capture',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'BLACK',
inclusion_rule => true);
END;
/

set lines 120
col streams_name format a16
col streams_type format a9
col table_owner format a10
col table_name format a15
col rule_type format a8
col rule_name format a15

select rule_owner,
STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES;

STREAMS_NAME STREAMS_T TABLE_OWNE TABLE_NAME RULE_TYP RULE_NAME
--------------- --------- ---------- ---------- -------- ---------
STREAMS_CAPTURE CAPTURE SOURCE T1 DML T16
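
If you are curious about what that generated rule really checks, its condition can be read from dba_rules; for example (the rule name T16 comes from the output above and will differ on your system):

col rule_condition format a60 word_wrapped

select rule_condition
from dba_rules
where rule_owner = 'STRMADMIN'
and rule_name = 'T16';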

Step 5: Create an Apply Process

You must create an apply process that subscribes to the Streams queue and applies the changes made to SOURCE.T1:
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'source.t1',
streams_type => 'apply',
streams_name => 'streams_apply',
queue_name => 'strmadmin.streams_queue',
include_dml => true,
include_ddl => false,
include_tagged_lcr => true,
source_database => 'BLACK',
inclusion_rule => true);
END;
/


col apply_name format a13
col queue_name format a13
col rule_set_name format a11

select apply_name,
queue_name,
rule_set_name,
status,
message_delivery_mode
from dba_apply;

APPLY_NAME QUEUE_NAME RULE_SET_NA STATUS MESSAGE_DE
------------- ------------- ----------- -------- ----------
STREAMS_APPLY STREAMS_QUEUE RULESET$_10 DISABLED CAPTURED
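
Note that the apply process is created DISABLED; it will be started in step 8. Also, by default an apply process aborts on the first error it encounters; for a sandbox like this one, you may prefer to set its disable_on_error parameter to 'n' so that it keeps running and records failing transactions in the error queue instead:

begin
dbms_apply_adm.set_parameter(
apply_name => 'streams_apply',
parameter => 'disable_on_error',
value => 'n');
end;
/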

Step 6: Transform the SQL Captured so that changes on SOURCE are applied to DESTINATION

In this example, the source and the destination are in the same database; as a result, the source and target tables cannot have the same schema and name. What we'll do now is add a transformation rule so that the schema name is changed from SOURCE to DESTINATION. The script below adds the transformation to the apply rule:
connect strmadmin/strmadmin

col rule_name format a20 new_value rulename

select rule_owner,
STREAMS_NAME,
STREAMS_TYPE,
TABLE_OWNER,
TABLE_NAME,
RULE_TYPE,
RULE_NAME
from DBA_STREAMS_TABLE_RULES
where streams_name='STREAMS_APPLY'
and streams_type='APPLY'
and rule_type='DML';

prompt &&rulename
T16

begin
dbms_streams_adm.rename_schema(
rule_name => '&&rulename' ,
from_schema_name => 'SOURCE',
to_schema_name => 'DESTINATION',
step_number => 0,
operation => 'add');
end;
/

col rule_name format A6
col from_schema_name format a6
col to_schema_name format a12

select rule_name,
transform_type,
from_schema_name,
to_schema_name,
declarative_type
from dba_streams_transformations;

RULE_N TRANSFORM_TYPE FROM_S TO_SCHEMA_NA DECLARATIVE_T
------ -------------------------- ------ ------------ -------------
T16 DECLARATIVE TRANSFORMATION SOURCE DESTINATION RENAME SCHEMA
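
For the record, if the table name also had to change, the same package provides a rename_table declarative transformation; here is a sketch, assuming a hypothetical DESTINATION.T1_COPY target table:

begin
dbms_streams_adm.rename_table(
rule_name => '&&rulename',
from_table_name => 'SOURCE.T1',
to_table_name => 'DESTINATION.T1_COPY',
step_number => 0,
operation => 'add');
end;
/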

Step 7: Instantiate the Table in the DESTINATION Schema

Streams is now configured. Before you start the different processes, you must instantiate the table in the DESTINATION schema and store the instantiation SCN so that the apply process knows exactly which changes it can apply. To instantiate the table, you can use the method of your choice: RMAN, Data Pump, exp or a Flashback Query:
connect / as sysdba

create user destination
identified by destination
default tablespace users
temporary tablespace temp;

grant connect,resource to destination;

create table destination.t1(
id number primary key,
text varchar2(80));
col apply_scn format 999999999999 new_value instantiation_scn

select dbms_flashback.get_system_change_number apply_scn
from dual;

APPLY_SCN
-----------
49042261443

prompt Enter the Instantiation: &&instantiation_scn
Enter the Instantiation: 49042261443

insert into destination.t1
(select * from source.t1 as of scn &&instantiation_scn);

commit;
Once the table is instantiated, use the dbms_apply_adm.set_table_instantiation_scn procedure to store the instantiation SCN for the apply process, like below:
begin
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => 'source.t1',
source_database_name => 'BLACK',
instantiation_scn => &&instantiation_scn);
end;
/

col SOURCE_DATABASE format a6
col OBJECT format a10
col INSTANTIATION_SCN format 999999999999

select source_database,
source_object_owner||'.'||source_object_name object,
instantiation_scn
from dba_apply_instantiated_objects;

SOURCE OBJECT INSTANTIATION_SCN
------ ---------- -----------------
BLACK SOURCE.T1 49042261443

Step 8: Start the Capture and Apply Processes

You can now start the capture and apply processes:
exec dbms_capture_adm.start_capture('streams_capture');
exec dbms_apply_adm.start_apply('streams_apply');
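
Before testing, you may want to check that both processes are actually up; their status should now show ENABLED:

select capture_name, status from dba_capture;
select apply_name, status from dba_apply;
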
Step 9: Test the Table Replication

To test the replication, update the source table and check that the changes are propagated to the destination table. Below is a simple test that shows the replication working as it should:
insert into source.t1(id, text)
values (3,'Text 3');

commit;

pause

col id format 99
col text format a6

select id,
text
from destination.t1;

ID TEXT
--- ------
1 Text 1
2 Text 2
3 Text 3
Troubleshooting the Streams configuration is out of the scope of this post. However, you can query the fixed and catalog views if you need details about the configuration and its state.
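
For example, dba_apply_error lists the transactions the apply process could not handle, and v$streams_capture shows the run-time state of the capture process; queries along these lines are a good starting point:

col error_message format a60 word_wrapped

select apply_name,
error_message
from dba_apply_error;

select capture_name,
state,
total_messages_captured
from v$streams_capture;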

Step 10: Stop and Remove the Oracle Streams Configuration

In this last step, we'll clean up the environment and leave it as it was when we started. To do that, we'll stop and drop the capture and apply processes; we'll drop the Streams queue, the Streams administrator and the associated tablespace; we'll drop the two schemas and disable the supplemental logging:
exec dbms_capture_adm.stop_capture('streams_capture');
exec dbms_apply_adm.stop_apply('streams_apply');

exec dbms_capture_adm.drop_capture('streams_capture',true);
exec dbms_apply_adm.drop_apply('streams_apply',true);

exec dbms_streams_adm.remove_queue('strmadmin.streams_queue',true,true);

drop user strmadmin cascade;

drop tablespace streams_tbs
including contents and datafiles;

drop user destination cascade;
drop user source cascade;

begin
for i in (select source_schema name
from dba_apply_instantiated_schemas
where source_schema in ('SOURCE','DESTINATION'))
loop
dbms_apply_adm.set_schema_instantiation_scn(
source_schema_name => i.name,
source_database_name => 'BLACK',
instantiation_scn => null);
end loop;
for i in (select source_object_owner||'.'||
source_object_name name
from dba_apply_instantiated_objects
where source_object_owner in ('SOURCE','DESTINATION'))
loop
dbms_apply_adm.set_table_instantiation_scn(
source_object_name => i.name,
source_database_name => 'BLACK',
instantiation_scn => null);
end loop;
end;
/
alter database drop supplemental log
data (primary key, unique index) columns;

select SUPPLEMENTAL_LOG_DATA_MIN,
SUPPLEMENTAL_LOG_DATA_PK,
SUPPLEMENTAL_LOG_DATA_UI
from gv$database;

SUPPLEME SUP SUP
-------- --- ---
NO NO NO
