Oracle Bi-Directional Data Replication

Thursday Apr 13th 2006 by Vincent Chan

Part one of this two part series provides the procedures for setting up bi-directional schema-level replication and conflict resolution between two databases using Oracle 10g Streams.

Data availability is an important factor in the success of any business. In a distributed environment, Oracle Streams improves data availability and accessibility by sharing and replicating information across multiple sites quickly and reliably. Oracle Streams was introduced in Oracle 9i and has since matured into robust and powerful software used for data replication, data warehouse transformation and messaging.

This is part one of a two-part series illustrating the setup of bi-directional replication using Oracle 10g Streams. Part one provides the procedures for setting up bi-directional schema-level replication and conflict resolution between two databases. Part two will demonstrate how a new source database can easily be added to the replicated environment.

Streams Overview

There are three basic processes of Oracle Streams: Capture, Staging and Consumption (Apply).

The capture process at the source site captures events such as DML and DDL from either the online redo logs or archived log files, formats the changes into Logical Change Records (LCRs) and queues them into a staging area (queue). The LCRs are then propagated to an apply queue at the destination site where the changes are eventually dequeued and applied by the apply process.

In a bi-directional replication setup, each site captures, propagates and applies changes to the other site, keeping the databases at both sites current. Each site acts both as a source database and as a destination database. Because simultaneous updates can be performed on the same records at both sites, conflicts can arise. In the following sections, we will discuss Streams' pre-built update conflict handlers and how conflicts can be avoided or minimized.

Conflict Detection

When an apply process applies a row LCR at the destination database, conflicts are automatically detected when:

1.  An update conflict occurs – the same record is updated simultaneously by two sites resulting in a mismatch of the current column value at the destination database and the old column value at the source database.

2.  A delete conflict occurs – the record to be deleted does not exist in the destination database.

3.  A uniqueness conflict occurs – a record inserted in the destination database results in a unique or primary key constraint violation.

4.  A foreign key conflict occurs – a record inserted or updated in the destination database results in a foreign key constraint violation.

Ideally, you should design your system to avoid or minimize conflicts. There are various ways of achieving that such as using unique sequence keys at each replicated site, horizontal subsetting of data to ensure each site can only make changes to its own data and avoiding physical deletion of records by logically marking the records deleted and purging them at a later time.
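As a sketch of the unique-sequence approach, each site can draw its keys from its own sequence with interleaved ranges, so two sites can never generate the same key. This is the scheme used for the deptno and empno sequences created later in this article:

```sql
-- Site A draws 1, 6, 11, 16, ... ; site B draws 2, 7, 12, 17, ...
-- The two ranges never overlap, so inserts performed at the two
-- sites cannot cause uniqueness conflicts on the key column.
create sequence deptno_laseq start with 1 increment by 5;
create sequence deptno_sfseq start with 2 increment by 5;
```

With an increment of 5, up to five sites can share the scheme simply by using different starting values.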

Conflict Resolution

Oracle provides pre-built handlers for update conflicts only. If the pre-built handlers are insufficient, you can create a custom conflict handler to resolve other possible conflicts.

The four types of pre-built update conflict handlers are:

1.  Discard – Row LCR propagated from the source site is discarded at the destination site when a conflict is detected.

2.  Overwrite – Row LCR propagated from the source site overwrites the data at the destination site when a conflict is detected.

3.  Minimum – The column value from the source site is compared with the column value at the destination site. The apply process applies or retains the lower value at the destination site.

4.  Maximum – The column value from the source site is compared with the column value at the destination site. The apply process applies or retains the higher value at the destination site.

The choice of update conflict handler is driven by your business rules and requirements. For example, if you designate one site as the authoritative site, you may want to discard changes propagated from other sites when a conflict occurs. In this article, we will use the discard and overwrite methods to resolve conflicts.

Designing your system to avoid conflicts, and resolving those that still occur, is one of the most critical aspects of a successful bi-directional replication implementation. Detailed information on conflict resolution can be found at http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14228/conflict.htm#i1006084 .

Setting Up Bi-Directional Replication

An overview of the Oracle Streams environment:

Host Name    Instance Name    Global Database Name    Database Role
---------    -------------    --------------------    -----------------------------
trout3       ladb             ladb.world              Source / Destination database
trout4       sfdb             sfdb.world              Source / Destination database

Step 1: Configure archive log mode

Archived redo logs are required by the capture process to extract changes. Verify both ladb and sfdb are in archive log mode and configure your log archive destination or flash recovery area.
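To verify the log mode, and to enable archivelog mode if necessary, a typical sequence in SQL*Plus as SYSDBA is:

```sql
-- Check the current mode on each database
select log_mode from v$database;

-- If the query returns NOARCHIVELOG, restart into archivelog mode
shutdown immediate
startup mount
alter database archivelog;
alter database open;
```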

Step 2: Modify initialization parameters

Set the following parameters to the values shown on both ladb and sfdb:

compatible = 10.2.0 – Both databases are running Oracle 10gR2.

global_names = true – Database link names must match the global names of the databases they connect to.

job_queue_processes = 2 – The maximum number of job queue processes used to propagate messages. Set this parameter to at least 2.

db_recovery_file_dest_size = 10G – Specifies the storage size of database recovery files such as archived logs.

db_recovery_file_dest = /u01/app/oracle/flashdest – Specifies the location of database recovery files such as archived logs.

parallel_max_servers = 6 – The maximum number of parallel execution processes available to the capture and apply processes. You may encounter ORA-01372 or ORA-16081 during Streams capture and apply if this value is set too low. Set this parameter to at least 6.

sga_target = 500M – Enables Automatic Shared Memory Management (ASMM). In Oracle 10gR2, ASMM automatically manages streams_pool_size, which provides the buffer areas for Streams processing. If this parameter is not set, you should manually set streams_pool_size to at least 200M; otherwise, memory for Streams is allocated from the shared pool.
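Assuming both instances use an spfile, the dynamic parameters above can be set as SYSDBA with alter system; compatible is not dynamic, so it must be set in the spfile and takes effect after a restart:

```sql
alter system set global_names=true scope=both;
alter system set job_queue_processes=2 scope=both;
alter system set parallel_max_servers=6 scope=both;
alter system set sga_target=500M scope=both;
alter system set db_recovery_file_dest_size=10G scope=both;
alter system set db_recovery_file_dest='/u01/app/oracle/flashdest' scope=both;

-- compatible requires a restart to take effect
alter system set compatible='10.2.0' scope=spfile;
```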

Step 3: Set up tnsnames.ora

Add the TNS entries on trout3 and trout4.

LADB.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = trout3)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ladb)
    )
  )

SFDB.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = trout4)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sfdb)
    )
  )

Step 4: Create tablespaces

Create the tablespaces for the application schema and Streams administrator.

On ladb

create tablespace appsts datafile '/u02/oradata/ladb/appsts01.dbf' size 100M; 
create tablespace streamsts datafile '/u02/oradata/ladb/streamsts01.dbf' size 100M;

On sfdb

create tablespace appsts datafile '/u02/oradata/sfdb/appsts01.dbf' size 100M; 
create tablespace streamsts datafile '/u02/oradata/sfdb/streamsts01.dbf' size 100M;

Step 5: Create Streams administrator

Create the Streams administrator in both ladb and sfdb.

create user strmadmin identified by strmadmin default tablespace streamsts temporary tablespace temp;
grant dba,select_catalog_role to strmadmin;
exec dbms_streams_auth.grant_admin_privilege('strmadmin',true);

Step 6: Create application schema

Create the application schema in both ladb and sfdb.

create user apps identified by apps default tablespace appsts temporary tablespace temp;
grant connect,resource to apps;
grant select on v_$database to apps;

Step 7: Create application schema’s objects

Create the application schema’s objects in ladb. The objects in sfdb are created later using Oracle’s Data Pump utility.

connect apps/apps@ladb.world
create table dept (
 deptno   number(10) not null, 
 dname   varchar2(20)  not null,
 site  varchar2(10)  not null
);
create or replace trigger dept_site_trg 
before insert on dept for each row
begin
 if :new.site is null then
  select name into :new.site from v$database; 
 end if;
end;
/
alter table dept add constraint dept_pk primary key (deptno);
create sequence deptno_laseq start with 1 increment by 5; 
create sequence deptno_sfseq start with 2 increment by 5;
insert into dept values (deptno_laseq.nextval,'FINANCE','LADB');
insert into dept values (deptno_laseq.nextval,'HR','LADB');
insert into dept values (deptno_laseq.nextval,'RESEARCH','LADB');
commit;
create table emp (
 empno  number(10)  not null,
 ename  varchar2(20) not null,
 site  varchar2(10)    not null,
 job  varchar2(10),
 deptno  number(10)
);
create or replace trigger emp_site_trg 
before insert on emp for each row
begin
 if :new.site is null then
  select name into :new.site from v$database; 
 end if;
end;
/
alter table emp add constraint emp_pk primary key (empno);
alter table emp add constraint emp_fk foreign key (deptno) references dept (deptno);
create sequence empno_laseq start with 1 increment by 5;  
create sequence empno_sfseq start with 2 increment by 5;  
insert into emp values (empno_laseq.nextval,'MIKE','LADB','CLERK',1);
insert into emp values (empno_laseq.nextval,'JANE','LADB','ANALYST',1);
insert into emp values (empno_laseq.nextval,'ZACH','LADB','ENGINEER',11);
commit; 

Step 8: Enable table level supplemental logging

Set up supplemental logging as the apps user on ladb. Supplemental logging records additional column information in the redo logs for row identification.

There are two types of supplemental log groups:

1.  Unconditional supplemental log group – always logs the before images of the specified columns in the log group whenever any table column is updated.

2.  Conditional supplemental log group – only logs the before images of the specified columns in the log group when at least one column in the log group is updated.
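The syntactic difference between the two is the ALWAYS keyword. An illustration (the group names here are hypothetical; the statements below in this step create the conditional groups actually used in this article):

```sql
-- Unconditional: before images of the listed columns are logged
-- whenever ANY column of the table is updated (note ALWAYS)
alter table dept add supplemental log group loggrp_dept_u (deptno, dname, site) always;

-- Conditional: before images are logged only when at least one
-- column in the group is itself updated
alter table dept add supplemental log group loggrp_dept_c (deptno, dname, site);
```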

The following adds columns to the conditional supplemental log group for the emp and dept tables:

alter table dept add supplemental log group loggrp_dept (deptno,dname,site);
alter table emp add supplemental log group loggrp_emp (empno,ename,site,job,deptno);
SQL> select log_group_name, table_name
  2  from dba_log_groups
  3  where owner='APPS';
LOG_GROUP_NAME                 TABLE_NAME
------------------------------ ------------------------------
LOGGRP_EMP                     EMP
LOGGRP_DEPT                    DEPT

Step 9: Create database link

As the Streams administrator on ladb, create a private database link to sfdb and on sfdb, create a private database link to ladb.

On ladb

create database link sfdb.world connect to strmadmin identified by strmadmin using 'sfdb.world';

On sfdb

create database link ladb.world connect to strmadmin identified by strmadmin using 'ladb.world';

Step 10: Create Streams queues

As the Streams administrator, create the capture and apply queues on ladb and sfdb.

On ladb

begin
dbms_streams_adm.set_up_queue(
  queue_table => 'apply_laqtab',
  queue_name  => 'apply_laq',
  queue_user  => 'strmadmin');
end;
/
begin
dbms_streams_adm.set_up_queue(
  queue_table => 'capture_laqtab',
  queue_name  => 'capture_laq',
  queue_user  => 'strmadmin');
end;
/

On sfdb

begin
dbms_streams_adm.set_up_queue(
  queue_table => 'apply_sfqtab',
  queue_name  => 'apply_sfq',
  queue_user  => 'strmadmin');
end;
/
begin
dbms_streams_adm.set_up_queue(
  queue_table => 'capture_sfqtab',
  queue_name  => 'capture_sfq',
  queue_user  => 'strmadmin');
end;
/
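To confirm that the queues and queue tables were created, you can query the user_queues view as the Streams administrator on each site (the exact output will vary):

```sql
select name, queue_table, enqueue_enabled, dequeue_enabled
from user_queues
where queue_type = 'NORMAL_QUEUE';
```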

Step 11: Configure capture process on ladb

Rules are added to positive or negative rule sets using the dbms_streams_adm package. Specifying a true value for the inclusion_rule parameter adds the rules to the positive rule set, so the changes they match are captured.

Create and configure the capture process on ladb.

connect strmadmin/strmadmin@ladb.world
begin
dbms_streams_adm.add_schema_rules (
schema_name  => 'apps',
  streams_type => 'capture',
  streams_name => 'capture_lastrm',
  queue_name  => 'capture_laq',
  include_dml => true,
  include_ddl => true,
  inclusion_rule => true);
end;
/

Step 12: Configure apply process on ladb

Create and configure the apply process on ladb and add rules to the positive rule set. The apply process dequeues the LCR events from sfdb and applies the changes to the application schema.

connect strmadmin/strmadmin@ladb.world
begin
dbms_streams_adm.add_schema_rules (
schema_name  => 'apps',
  streams_type => 'apply',
  streams_name => 'apply_src_sfdb',
  queue_name  => 'apply_laq',
  include_dml => true,
  include_ddl => true,
  source_database => 'sfdb.world');
end;
/

Step 13: Configure conflict resolution on ladb

Set up an update conflict handler on ladb to discard changes from sfdb when a conflict is detected. For the discard and overwrite handlers, the resolution_column parameter is not used to resolve conflicts; any column from the column_list can be passed as the resolution_column. For the minimum and maximum handlers, the resolution_column determines whether the row LCR is discarded or applied.

The discard handler is called when an update conflict occurs to at least one of the columns in the column_list. You may encounter ora-23460 if the column specified in the column_list is not in the supplemental log group. To rectify the issue, simply add the missing column to a new supplemental log group.

ora-23460:

// *Cause: before resolving conflicts, some values necessary for resolving
//         conflicts are not available, or after resolving conflicts, some
//         values necessary for re-trying the SQL are not available.
connect strmadmin/strmadmin@ladb.world
declare
 field  dbms_utility.name_array;
begin
 field(1) := 'ename';
 field(2) := 'site';
 field(3) := 'job';
 field(4) := 'deptno';
 
   dbms_apply_adm.set_update_conflict_handler(
      object_name            =>  'apps.emp',
      method_name            =>  'discard',
      resolution_column     =>  'site',  
      column_list            =>  field);
end;
/
declare
 field  dbms_utility.name_array;
begin
 field(1) := 'dname';
 field(2) := 'site';
 
   dbms_apply_adm.set_update_conflict_handler(
      object_name            =>  'apps.dept',
      method_name            =>  'discard',
      resolution_column      =>  'site',  
      column_list            =>  field);
end;
/

Step 14: Configure propagation process on ladb

The configuration of the propagation process is similar to that of the capture and apply processes: we add rules to the positive rule set and specify the source and destination queue names. Changes are propagated from the source queue to the destination queue.

connect strmadmin/strmadmin@ladb.world
begin
dbms_streams_adm.add_schema_propagation_rules (
  schema_name   => 'apps',
  streams_name   => 'prop_ladb_to_sfdb',
  source_queue_name  => 'capture_laq',
  destination_queue_name  => 'apply_sfq@sfdb.world',
  include_dml   => true,
  include_ddl   => true,
  source_database  => 'ladb.world');
end;
/

Step 15: Configure capture process on sfdb

Similarly, create and configure the capture process on sfdb.

connect strmadmin/strmadmin@sfdb.world
begin
 dbms_streams_adm.add_schema_rules (
  schema_name  => 'apps',
  streams_type  => 'capture',
  streams_name  => 'capture_sfstrm',
  queue_name  => 'capture_sfq',
  include_dml  => true,
  include_ddl  => true);
end;
/

Step 16: Set the schema instantiation SCN on ladb

Setting the schema instantiation SCN for sfdb at ladb ensures that only changes after the instantiation SCN from sfdb are applied at ladb.

connect strmadmin/strmadmin@sfdb.world
declare
 v_scn number;
begin
 v_scn := dbms_flashback.get_system_change_number();
 dbms_apply_adm.set_schema_instantiation_scn@ladb.world (
 source_schema_name => 'apps',
 source_database_name => 'sfdb.world',
 instantiation_scn => v_scn,
 recursive             => true);    
end;
/
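To confirm that the instantiation SCN has been recorded at ladb, you can query the dba_apply_instantiated_schemas view there:

```sql
-- As the Streams administrator on ladb
select source_database, source_schema, instantiation_scn
from dba_apply_instantiated_schemas;
```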

Step 17: Configure apply process on sfdb

Create and configure the apply process on sfdb and add rules to the positive rule set. The apply process dequeues the LCR events from ladb and applies the changes to the application schema.

connect strmadmin/strmadmin@sfdb.world
begin
 dbms_streams_adm.add_schema_rules (
  schema_name  => 'apps',
  streams_type  => 'apply',
  streams_name  => 'apply_src_ladb',
  queue_name  => 'apply_sfq',
  include_dml  => true,
  include_ddl  => true,
  source_database => 'ladb.world');
end;
/

Step 18: Configure propagation process on sfdb

Configure the propagation process to propagate changes from sfdb to ladb.

connect strmadmin/strmadmin@sfdb.world
begin
 dbms_streams_adm.add_schema_propagation_rules (
  schema_name  => 'apps',
  streams_name  => 'prop_sfdb_to_ladb',
  source_queue_name => 'capture_sfq',
  destination_queue_name => 'apply_laq@ladb.world',
  include_dml  => true,
  include_ddl  => true,
  source_database => 'sfdb.world');
end;
/

Step 19: Export application schema on ladb

On trout3, create or use an existing directory on the filesystem to store the export dump file.

$> mkdir -p /u01/app/oracle/admin/ladb/export

As the sys user on ladb, create an Oracle directory:

create directory expdir as '/u01/app/oracle/admin/ladb/export';

Obtain the current SCN on ladb:

select dbms_flashback.get_system_change_number() from dual;

Export the application schema using the current SCN obtained above:

$> expdp system/system  schemas=apps  directory=expdir \
    logfile=expapps.log dumpfile=apps.dmp FLASHBACK_SCN=<current SCN>

Step 20: Import application schema on sfdb

On trout4, create or use an existing directory on the filesystem to store the export dump file from ladb. Transfer the dump file from trout3 to this directory.

$> mkdir -p /u01/app/oracle/admin/sfdb/import

As the sys user on sfdb, create an Oracle directory:

create directory impdir as '/u01/app/oracle/admin/sfdb/import';

Import the application schema from ladb:

$> impdp system/system directory=impdir dumpfile=apps.dmp logfile=impapps.log

The supplemental log groups are imported together with the rest of the application schema’s objects.

SQL> connect apps/apps@sfdb.world
SQL> select log_group_name, table_name
  2  from dba_log_groups
  3  where owner='APPS';
LOG_GROUP_NAME                 TABLE_NAME
------------------------------ ---------------
LOGGRP_EMP                     EMP
LOGGRP_DEPT                    DEPT

Step 21: Configure conflict resolution on sfdb

Set up an update conflict handler to overwrite changes on sfdb with ladb’s during conflict resolution.

The overwrite handler is called when an update conflict occurs to at least one of the columns in the column_list. You may encounter ora-23460 if the column specified in the column_list is not in the supplemental log group. To rectify the issue, simply add the missing column to a new supplemental log group.

connect strmadmin/strmadmin@sfdb.world
declare
  field  dbms_utility.name_array;
begin
  field(1) := 'ename';
  field(2) := 'site';
  field(3) := 'job';
  field(4) := 'deptno';
 
  dbms_apply_adm.set_update_conflict_handler(
    object_name =>  'apps.emp',
    method_name =>  'overwrite',
    resolution_column =>  'site',  
    column_list =>  field);
end;
/
declare
  field  dbms_utility.name_array;
begin
  field(1) := 'dname';
  field(2) := 'site';
 
  dbms_apply_adm.set_update_conflict_handler(
    object_name =>  'apps.dept',
    method_name =>  'overwrite',
    resolution_column =>  'site',  
    column_list =>  field);
end;
/

Step 22: Start capture and apply processes on sfdb

Setting the disable_on_error parameter to ‘N’ allows the apply process to continue applying row LCRs even when it encounters errors. The default value, ‘Y’, automatically disables the apply process at the first error encountered.

connect strmadmin/strmadmin@sfdb.world
begin
 dbms_apply_adm.set_parameter (
  apply_name => 'apply_src_ladb',
  parameter => 'disable_on_error',
  value => 'N');
end;
/
begin
 dbms_apply_adm.start_apply (
  apply_name => 'apply_src_ladb');
end;
/
begin
 dbms_capture_adm.start_capture (
  capture_name => 'capture_sfstrm');
end;
/

Step 23: Start capture and apply processes on ladb

connect strmadmin/strmadmin@ladb.world
begin
 dbms_apply_adm.set_parameter (
  apply_name => 'apply_src_sfdb',
  parameter => 'disable_on_error',
  value => 'N');
end;
/
begin
 dbms_apply_adm.start_apply (
  apply_name => 'apply_src_sfdb');
end;
/
begin
 dbms_capture_adm.start_capture (
  capture_name => 'capture_lastrm');
end;
/
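With both sites running, a few data dictionary queries are useful for verifying that the capture, propagation and apply processes are enabled and for spotting apply errors. For example, as the Streams administrator on each site:

```sql
-- Processes should show STATUS = 'ENABLED'
select capture_name, status, captured_scn, applied_scn from dba_capture;
select apply_name, status from dba_apply;
select propagation_name, status from dba_propagation;

-- Because disable_on_error is 'N', failed row LCRs accumulate here
select apply_name, source_database, error_message from dba_apply_error;
```

Errors retained in dba_apply_error can be retried with dbms_apply_adm.execute_all_errors or purged with dbms_apply_adm.delete_all_errors after the underlying cause is fixed.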

Congratulations, you have set up a bi-directional replicated environment!

Step 24: It’s time to test drive …

We will replicate DMLs and DDLs between ladb and sfdb and test the discard and overwrite conflict handlers by simulating update conflicts.

SQL> connect apps/apps@ladb.world
SQL> select * from emp;
     EMPNO ENAME                SITE       JOB            DEPTNO
---------- -------------------- ---------- ---------- ----------
         1 MIKE                 LADB       CLERK               1
         6 JANE                 LADB       ANALYST             1
        11 ZACH                 LADB       ENGINEER           11
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 HR                   LADB
        11 RESEARCH             LADB
SQL> connect apps/apps@sfdb.world
SQL> select * from emp;
     EMPNO ENAME                SITE       JOB            DEPTNO
---------- -------------------- ---------- ---------- ----------
         1 MIKE                 LADB       CLERK               1
         6 JANE                 LADB       ANALYST             1
        11 ZACH                 LADB       ENGINEER           11
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 HR                   LADB
        11 RESEARCH             LADB

Test DML

SQL> connect apps/apps@ladb.world
SQL> insert into emp (empno,ename,job,deptno) values (empno_laseq.nextval,'SALLY','DEVELOPER',11);
1 row created.
SQL> commit;
Commit complete.
SQL> connect apps/apps@sfdb.world
SQL> select * from emp;
     EMPNO ENAME                SITE       JOB            DEPTNO
---------- -------------------- ---------- ---------- ----------
         1 MIKE                 LADB       CLERK               1
         6 JANE                 LADB       ANALYST             1
        11 ZACH                 LADB       ENGINEER           11
       101 SALLY                LADB       DEVELOPER          11
SQL> insert into emp (empno,ename,job,deptno) values (empno_sfseq.nextval,'AMY','ACCOUNTANT',1);
1 row created.
SQL> commit;
Commit complete.
SQL> connect apps/apps@ladb.world
Connected.
SQL> select * from emp;
     EMPNO ENAME                SITE       JOB            DEPTNO
---------- -------------------- ---------- ---------- ----------
         1 MIKE                 LADB       CLERK               1
         6 JANE                 LADB       ANALYST             1
        11 ZACH                 LADB       ENGINEER           11
       101 SALLY                LADB       DEVELOPER          11
         2 AMY                  SFDB       ACCOUNTANT          1

Test DDL

SQL> connect apps/apps@ladb.world
SQL> alter table emp add (delete_yn varchar2(1) default 'N' not null);
Table altered.
SQL> connect apps/apps@sfdb.world
SQL> desc emp
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPNO                                     NOT NULL NUMBER(10)
 ENAME                                     NOT NULL VARCHAR2(20)
 SITE                                      NOT NULL VARCHAR2(10)
 JOB                                                VARCHAR2(10)
 DEPTNO                                             NUMBER(10)
 DELETE_YN                                 NOT NULL VARCHAR2(1)
SQL> select * from emp;
     EMPNO ENAME                SITE       JOB            DEPTNO D
---------- -------------------- ---------- ---------- ---------- -
         1 MIKE                 LADB       CLERK               1 N
         6 JANE                 LADB       ANALYST             1 N
        11 ZACH                 LADB       ENGINEER           11 N
       101 SALLY                LADB       DEVELOPER          11 N
         2 AMY                  SFDB       ACCOUNTANT          1 N
SQL> alter table emp add constraint delete_yn_chk check (delete_yn in ('Y','N'));
Table altered.
SQL> connect apps/apps@ladb.world
SQL> select constraint_name,constraint_type,table_name,search_condition 
 from user_constraints where constraint_name='DELETE_YN_CHK';
CONSTRAINT_NAME           C TABLE_NAME      SEARCH_CONDITION
------------------------- - --------------- -------------------------
DELETE_YN_CHK             C EMP             delete_yn in ('Y','N')

Test Update Conflict Resolution – Discard handler at ladb

To test the update conflict handlers, we will create conflicts on the dname column for deptno=6 at both ladb and sfdb. Setting the Streams tag in the ladb session to a value other than the default NULL instructs the capture process to ignore that session's changes in the redo logs, so the updated SALES value is never captured and never reaches sfdb.

SQL> connect apps/apps@ladb.world
SQL> exec dbms_streams.set_tag(hextoraw('1'));
PL/SQL procedure successfully completed.
SQL> select dbms_streams.get_tag from dual;
GET_TAG
---------------------------------------------------------------
01
SQL> update dept set dname='SALES' where dname='HR';
1 row updated.
SQL> commit;
Commit complete.
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 SALES                LADB
        11 RESEARCH             LADB
SQL> connect apps/apps@sfdb.world
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 HR                   LADB
        11 RESEARCH             LADB
SQL> update dept set dname='PARTS' where dname='HR';
1 row updated.
SQL> commit;
Commit complete.
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 PARTS                LADB
        11 RESEARCH             LADB
SQL> connect apps/apps@ladb.world
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 SALES                LADB
        11 RESEARCH             LADB

The discard handler was invoked in ladb and the conflict was resolved by discarding the row LCR from sfdb.

Test Update Conflict Resolution – Overwrite handler at sfdb

Note that dbms_streams.set_tag affects only the session that set it; reconnecting to ladb starts a new session with a NULL tag, so the following update is captured and propagated to sfdb, where it conflicts with the PARTS value.

SQL> connect apps/apps@ladb.world
SQL> update dept set dname='HR' where dname='SALES';
1 row updated.
SQL> commit;
Commit complete.
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 HR                   LADB
        11 RESEARCH             LADB
SQL> connect apps/apps@sfdb.world
SQL> select * from dept;
    DEPTNO DNAME                SITE
---------- -------------------- ----------
         1 FINANCE              LADB
         6 HR                   LADB
        11 RESEARCH             LADB

The overwrite handler was invoked in sfdb and the conflict was resolved by overwriting PARTS with HR.

Conclusion

Setting up bi-directional data replication is a relatively simple procedure. However, without a sound conflict resolution strategy and a well-thought-out system design, you may find yourself spending long hours resolving conflicts and synchronizing data. Although conflict resolution is optional in a bi-directional or multi-directional replicated environment, you should always have it in place to handle most, if not all, of the possible conflicts.
