Replication :: Table A - How To Replicate Specific Row Transaction To Table B
Feb 7, 2013
Let's say we have Table A and we would like to replicate specific row transactions to Table B.
Here are the rows in *Table A*
Time: Let's say 15:00
A1 - Just updated @15:00
A2 - Just inserted @15:01
A3
B1 - Daily delete row, i.e. just deleted a while back - non-scheduled process executed by the application @15:02
B2
B3 - Daily delete row, i.e. just deleted a while back - non-scheduled process executed by the application @15:05
B4 - Just recently purged (as part of the 180-day purge) - scheduled process executed by the operations team @15:10
B5 - Just recently purged (as part of the 180-day purge) - scheduled process executed by the operations team @15:10
B6 - Just recently purged (as part of the 180-day purge) - scheduled process executed by the operations team @15:10
Current Data in Table B (Before Replication) @15:00
A1 (without updates)
A3
B1
B2
B3
B4
B5
B6
Expected rows in Table B (via replication, snapshot, materialized view, or any other method)
*Replication at 15:30*
Table B - Read Only
Expected rows after replication:
A1 -- Newly updated details
A2 -- Newly inserted row
A3
B1 - Daily delete row is expected to be replicated
B2
B3 - Daily delete row is expected to be replicated
***Note: row B4 is not expected to be replicated to Table B.
Questions:
1) How can we get updates, inserts, and daily deletes replicated while ignoring large purges?
2) How can large purge changes be reflected in the replicated tables as well, without deleting the daily deletes?
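One possible way to get this selectivity (not described in the post) is tag-based filtering, for example with Oracle Streams: capture rules created through DBMS_STREAMS_ADM only pick up changes whose redo tag is null, so if the scheduled purge job sets a session tag before deleting, its deletes are skipped while the application's untagged daily deletes still replicate. A rough sketch, assuming a Streams capture/apply setup and a hypothetical TABLE_A with a CREATED_DATE column:

-- Run by the operations purge job, before the 180-day purge:
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));   -- mark this session's redo as "purge"
END;
/

DELETE FROM table_a WHERE created_date < SYSDATE - 180;
COMMIT;

BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);             -- back to normal, replicated DML
END;
/

With this split, question 1 is handled by the tag on the purge session; for question 2 the purge would have to be applied to the copy separately, for example by running the same purge statement on the target.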
I work on an application that replicates data from a publisher table to a remote subscriber table using a snapshot and triggers. Replication of data changes (update, insert, delete) works perfectly, but when I try to add or drop a column in the publisher table, replication fails. I know that my method does not replicate DDL statements such as CREATE or ALTER TABLE, so I would like to know the best way to replicate DDL statements without losing the data in the subscriber table. I am working with Oracle XE.
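Since snapshot/trigger-based replication carries no DDL, one workaround (a sketch only; the table names, column, and database link are assumptions, not from the post) is to apply the same ALTER on both sides and rebuild the materialized view on top of the existing container table, so the subscriber keeps its rows:

-- On the publisher (master) database:
ALTER TABLE emp_pub ADD (phone VARCHAR2(20));

-- On the subscriber database:
DROP MATERIALIZED VIEW emp_sub PRESERVE TABLE;    -- keeps the container table and its data
ALTER TABLE emp_sub ADD (phone VARCHAR2(20));     -- apply the same DDL locally

CREATE MATERIALIZED VIEW emp_sub
  ON PREBUILT TABLE
  REFRESH FAST                                    -- assumes a materialized view log on emp_pub
  AS SELECT * FROM emp_pub@pub_link;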
I have a table MYTABLE in database mydb1, duplicated via a materialized view, a materialized view log, and refresh_snapshot commands to a MYTABLE on the mydb2 database.
I would like to duplicate this table MYTABLE to a third database, mydb3, using the same method (materialized view and refresh_snapshot command).
Is it possible? What happens to the materialized view log when I launch a refresh_snapshot from mydb2? How is this materialized view log truncated?
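Assuming the third copy is also built against the master on mydb1, the same materialized view log can serve any number of fast-refreshable materialized views: Oracle only deletes log rows once every registered materialized view has refreshed past them, so a refresh from mydb2 will not remove entries that the third site still needs. A minimal sketch (the database link name and credentials are made up):

-- On the third database:
CREATE DATABASE LINK mydb1_link CONNECT TO app_owner IDENTIFIED BY app_pwd USING 'mydb1';

CREATE MATERIALIZED VIEW mytable
  REFRESH FAST
  AS SELECT * FROM mytable@mydb1_link;

-- Refresh from either subscriber, independently:
EXEC DBMS_MVIEW.REFRESH('MYTABLE', 'F');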
I want to set up advanced replication for 3 master sites (multimaster). I created 3 master sites named orc1, orc2, orc3 and followed the instructions in the Oracle Replication Management API book. I created 2 tables (test1, test2) in the HR schema on all 3 master sites with the same data. Then I performed the following steps:
1-CONNECT repadmin/repadmin@orc1
2-Create the master group named hr_test_repg
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'hr_test_repg');
END;
/
4-add tables test1 and test2 to the group
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname => 'hr_test_repg',
    type  => 'TABLE',
    oname => 'test1',
[code]....
I could run DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT for test2 but not for test1; it produces this error:
ERROR at line 1:
ORA-23309: object hr.test1 of type TABLE exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2552
ORA-06512: at "SYS.DBMS_REPCAT", line 562
ORA-06512: at line 2
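ORA-23309 usually means the table is already registered as a replication object, often left over from an earlier attempt. A possible cleanup sketch (the parameter values are assumptions based on the names in the post): drop the stale replication object, then re-add it telling DBMS_REPCAT to reuse the existing table and not copy rows, since the data already exists at every master site.

BEGIN
  DBMS_REPCAT.DROP_MASTER_REPOBJECT(
    sname => 'hr',
    oname => 'test1',
    type  => 'TABLE');
END;
/

BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname               => 'hr_test_repg',
    type                => 'TABLE',
    oname               => 'test1',
    sname               => 'hr',
    use_existing_object => TRUE,    -- table is already present at the other masters
    copy_rows           => FALSE);  -- do not push the data again
END;
/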
I have approximately 1,200 transactions to apply to a master table. There are other columns in the master table, but only one column is being updated. I would like to use SQL*Loader if possible, or any other efficient means. The 1,200 records are stored in an Excel spreadsheet. Col1 of the spreadsheet has to match col1 of the master table in order to update col2 from the spreadsheet. Here is an example of the data. My operating system is HP-UX and the database is Oracle 10g.
Master table:
col1   col2    col3     col4
4238           susan    56e
5879   h698c   rich     12g
7091           joyce    34b
0876           mike     25n
7501   k956b   robert   87c
9498           angela   67r
3645           doris    92y
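A common pattern here (only a sketch; the staging table, file names, and column types are assumptions) is to save the spreadsheet as a CSV, load it into a staging table with SQL*Loader, and then update the master table with a MERGE keyed on col1:

-- One-time staging table:
CREATE TABLE stage_updates (col1 VARCHAR2(10), col2 VARCHAR2(10));

-- updates.ctl (spreadsheet saved as updates.csv with two columns: col1,col2)
LOAD DATA
INFILE 'updates.csv'
TRUNCATE
INTO TABLE stage_updates
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(col1, col2)

-- Load the file, then apply the single-column update:
-- $ sqlldr userid=app_user/app_pwd control=updates.ctl log=updates.log

MERGE INTO master_table m
USING stage_updates s
ON (m.col1 = s.col1)
WHEN MATCHED THEN
  UPDATE SET m.col2 = s.col2;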
I have a multimaster advanced replication environment, and we have fewer than 10 transactions per day. I want to propagate data as soon as possible, almost synchronously. What if I set up the scheduled push to propagate transactions every 5 seconds, as below?
On the other hand, the book (I mean Advanced Replication) says that to simulate continuous push we should set it up as below, so it will propagate transactions every 1 minute.
My second question is: I know interval => 'SYSDATE + (1/144)' means a job will start every 10 minutes, and delay_seconds => 1200 means each job remains awake for 20 minutes, but I can't understand the logic. Why does it simulate 1-minute propagation?
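For reference, the documented continuous-push style call looks roughly like the sketch below (the destination name is made up; the numbers are the ones discussed above). delay_seconds keeps the push job connected and polling the deferred transaction queue even after it has emptied it, so new deferred transactions are pushed shortly after they are queued instead of waiting for the next 10-minute job start.

BEGIN
  DBMS_DEFER_SYS.SCHEDULE_PUSH(
    destination   => 'orc2.example.com',    -- hypothetical remote master
    interval      => 'SYSDATE + (1/144)',   -- start a new push job every 10 minutes
    next_date     => SYSDATE,
    parallelism   => 1,
    delay_seconds => 1200);                 -- each job keeps polling for 20 minutes
END;
/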
I need a SQL statement to query a transaction table that stores transactions of items bought from my organisation. The report I would like to generate is one that lists the items bought, grouped month by month.
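A minimal sketch of that kind of report, assuming a hypothetical table TRANSACTIONS(item_name, quantity, trans_date):

SELECT TO_CHAR(trans_date, 'YYYY-MM') AS trans_month,
       item_name,
       SUM(quantity)                  AS total_bought
FROM   transactions
GROUP  BY TO_CHAR(trans_date, 'YYYY-MM'), item_name
ORDER  BY trans_month, item_name;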
Procedure A extracts and filters some data from the DW; this data is stored in a global temporary table. Another procedure, named B, uses the data from the global temporary table and stores it in a normal table, using another procedure named X that merges the data from the global temporary table against the normal table (inserting if a row does not exist and updating some fields if it does).
(X isn't important in the new flow.)
Now, I need to add some steps to the normal flow:
Procedure A extracts and filters some data from the DW; this data is stored in the global temporary table. Another procedure, named B, uses the data from the global temporary table and stores it in a normal table. Using the data from the global temporary table, I must also store some fields in another normal table; for this I use another procedure named C that merges the data from the global temporary table against the data from the normal table, and I must commit at this point. X then merges the data from the global temporary table and the data from the normal table handled by procedure C against another normal table (inserting if a row does not exist and updating if it does).
The issue I'm hitting is that I can't use C to merge and commit, because the commit truncates the data in the global temporary table. I can't change the ON COMMIT DELETE ROWS option, because other procedures are using this global temporary table in production.
Before you ask: I tried using AUTONOMOUS_TRANSACTION on C and it didn't work, because C can't find data in the global temporary table (I ran a SELECT COUNT in C against the global temporary table); I think this is because of the autonomous transaction. So, what can I do? I traced the session number in C and A, and with the autonomous transaction it is the same (so it isn't a session-change problem).
I need to commit in procedure C without truncating the global temporary table, because I need that data for X.
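One possible workaround (purely a sketch, not the poster's code; all names are made up) is to stage the rows in a second global temporary table declared ON COMMIT PRESERVE ROWS before committing, so the commit inside C clears only the original GTT while X still has the copy:

-- Created once, alongside the existing GTT:
CREATE GLOBAL TEMPORARY TABLE gtt_stage_copy (
  id  NUMBER,
  val VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;          -- rows survive the commit, for the rest of the session

-- Inside procedure C (sketch):
INSERT INTO gtt_stage_copy (id, val)
  SELECT id, val FROM gtt_original; -- gtt_original: the existing ON COMMIT DELETE ROWS table

MERGE INTO normal_table t
USING gtt_stage_copy s
ON (t.id = s.id)
WHEN MATCHED THEN UPDATE SET t.val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

COMMIT;  -- empties gtt_original, but gtt_stage_copy keeps its rows for X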
I want my query to return the rows of the table where a column contains specific values first, in a certain order, and then return the rest of the rows alphabetized.
For Example:
Country
ALBANIA
ARGENTINA
AUSTRALIA
....
CANADA
....
USA
....
Now I want USA and CANADA on top, in that order, and then the others in alphabetical order.
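A minimal sketch, assuming a hypothetical table COUNTRIES with a COUNTRY column:

SELECT country
FROM   countries
ORDER  BY CASE country
            WHEN 'USA'    THEN 1
            WHEN 'CANADA' THEN 2
            ELSE 3
          END,
          country;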
I need to create a table which contains the details of which companies a specific seeker applied to, shown when he logs in to his account. Also, when an employer logs in, he needs to get the list of seekers who applied to him. What database schema is required for such a situation? Say the seeker has a primary key UID, and each employer has a primary key EID.
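A minimal sketch of one way to model this (table and column names are illustrative; it assumes SEEKERS(uid) and EMPLOYERS(eid) already exist): a junction table keyed on both IDs, which can be queried from either side.

CREATE TABLE applications (
  uid        NUMBER NOT NULL REFERENCES seekers (uid),
  eid        NUMBER NOT NULL REFERENCES employers (eid),
  applied_on DATE   DEFAULT SYSDATE,
  CONSTRAINT applications_pk PRIMARY KEY (uid, eid)
);

-- Companies a given seeker applied to:
SELECT eid FROM applications WHERE uid = :seeker_id;

-- Seekers who applied to a given employer:
SELECT uid FROM applications WHERE eid = :employer_id;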
Our Oracle database has been running great. This past Wednesday the application was unable to access a specific table. Shutting down and restarting the database seemed to fix the issue, and the database had been working since that restart on Wednesday morning. Last night it suddenly stopped working. We stopped and restarted the database again, this time getting the ORA-00600 error shown below. We are able to make a connection to the database and can even access the tables in question, but the application is complaining about its connection to the database. I understand that ORA-00600 is an internal error; what I need to know is what internal error code 549 actually means. I have tried googling it to no avail.
Connected.
ORA-00600: internal error code, arguments: [549], [], [], [], [], [], [], []
ORA-03113: end-of-file on communication channel
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
I have a scenario with two databases, DB1 and DB2, in different locations, where I need to replicate some of the tables (around 10 to 15) from DB1 to DB2 (i.e. whenever I update any of these tables in DB1, the change has to be reflected in DB2). Both DB1 and DB2 have the same database objects.
(DB version - Oracle 10g Release 10.2.0.4.0).
What are the steps to do this? Can it be done using a materialized view?
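Materialized views are a common fit for this kind of one-way table replication. A minimal sketch for one table (the table name, database link, and credentials are assumptions); the same pattern is repeated for each of the 10 to 15 tables:

-- On DB1 (source), once per table:
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- On DB2 (target):
CREATE DATABASE LINK db1_link CONNECT TO app_user IDENTIFIED BY app_pwd USING 'DB1';

CREATE MATERIALIZED VIEW orders
  REFRESH FAST ON DEMAND
  AS SELECT * FROM orders@db1_link;

-- Refresh on a schedule (for example from a DBMS_SCHEDULER job) or on demand:
EXEC DBMS_MVIEW.REFRESH('ORDERS', 'F');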
I have to reorganize one table that is related to several other tables. The reorg is too slow when it runs on this table. I would like to create an image of the table and sync it with the original one in real time. Then, when I run the reorg, I will use the image table, which is not constrained by indexes and other objects. Once the reorg is done, I would like to rename the table. How could I do the replication in real time?
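One built-in way to keep a copy synchronized while you restructure it, with a rename-like swap at the end, is online redefinition. A sketch only; the schema, table, and interim table names are made up, and BIG_TABLE_INTERIM must be created beforehand with the desired new layout:

BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'BIG_TABLE');
  DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
  -- ... build indexes/constraints on BIG_TABLE_INTERIM while DML continues on BIG_TABLE ...
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INTERIM');  -- swaps the tables
END;
/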
I have two schemas on two servers set up for replication, and replication is working fine.
I exported one schema to the other, so all the tables exist at both sites. I am adding objects to the replication group using the Oracle Enterprise Manager console.
Some of the tables were added fine, but some give me an error like:
ORA-23309: object UMESH.PRODUCT_MASTER of type TABLE exists
The object ends up in error, both when adding it and when generating replication support:
SQL> select status, request, message, oname from dba_repcatlog;

STATUS         REQUEST
-------------- -----------------------------
MESSAGE
--------------------------------------------------------------------------------
ONAME
------------------------------
ERROR          CREATE_MASTER_REPOBJECT
[code]....
Sometimes I get an error like:
ORA-00942: table or view does not exist
when I use the CREATE_MASTER_REPOBJECT command to create the object at the master definition site, even though the table exists at the master site. In the same situation, other objects work fine.
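Two quick checks that may help narrow this down (a sketch, run as the replication administrator; the object name is taken from the error above): see whether the table is already registered as a replication object, and whether older failed admin requests are still queued for it.

SELECT sname, oname, type, status
FROM   dba_repobject
WHERE  oname = 'PRODUCT_MASTER';

SELECT id, request, status, errnum, message
FROM   dba_repcatlog
WHERE  oname = 'PRODUCT_MASTER';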