Replication :: Golden Gate For Production Cut Over
Feb 4, 2012
I am planning a production cutover from AIX to Linux. I thought of GoldenGate as an option, so that I can have the two databases run in parallel and replicating, and then do the cutover to Linux during the change window.
The problem I see now is that only half of the tables have a primary key, so I think GoldenGate cannot be used as an option.
The questions below have been swirling in my mind while doing some research on GoldenGate setup and replication:
1) Can we put a time delay between source and target, so that the replicated data is applied only after a set amount of time? (See the parameter sketch after this list.)
2) If I set up GoldenGate with Oracle as the source and SQL Server as the destination, how will procedures/views/functions/packages be replicated to the SQL Server target?
i.e. is it only data that replicates between the databases, or will the procedures/views/packages etc. also be replicated? If not, is manual involvement needed to replicate them?
3) Is it possible to replicate schemas within the same DB using GoldenGate?
i.e. if the TOOLS DB has schemas A and B, can I replicate the A objects into schema B within the same TOOLS DB? (Also touched on in the sketch below.)
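For reference on questions 1 and 3, a minimal Replicat parameter-file sketch might look like the following; the process, user and schema names are placeholders, not a tested configuration, and it assumes the DEFERAPPLYINTERVAL parameter for delayed apply plus a MAP clause that renames schema A to schema B on the same database:

REPLICAT rdelay
-- connect to the local TOOLS database (credentials are placeholders)
USERID ggadmin@TOOLS, PASSWORD ********
ASSUMETARGETDEFS
-- wait 15 minutes after the source commit before applying (question 1)
DEFERAPPLYINTERVAL 15 MINS
-- apply everything captured from schema A into schema B of the same DB (question 3)
MAP A.*, TARGET B.*;

Note that when the source and target are the same database, the Extract would also need to ignore the Replicat user's own transactions so that applied changes are not captured again.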
I am looking for: 1) A link for downloading the GoldenGate software; I want to use it with Oracle 11g on Linux (x86). (I have downloaded fbo_ggs_Linux_x64_ora11g_64bit.zip from oracle.com, about 89 MB, but I'm not sure whether this is the right version or not.) 2) An easy, step-by-step guide to install the software. 3) Documents/links for performing basic operations.
For using replication in our production environment, I am testing GoldenGate as the replication tool. I have tested all scenarios in one direction (source to destination). Now I have to test replication with DDL support in both directions, but I am not finding any documentation for doing bi-directional (two-way) replication. If anyone has done the same, please share the limitations of bi-directional replication through GoldenGate.
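A minimal sketch of the loop-prevention piece that a bi-directional setup typically needs in each site's Extract (process name, the ggadmin user, trail path and app schema are assumptions, not a verified configuration), with DDL capture for mapped objects shown as well, assuming the GoldenGate DDL support objects are installed:

EXTRACT extab
USERID ggadmin@SITEA, PASSWORD ********
-- do not capture changes applied by the local Replicat (which runs as ggadmin),
-- otherwise they would loop back to the other site
TRANLOGOPTIONS EXCLUDEUSER ggadmin
-- capture DDL issued against the mapped objects
DDL INCLUDE MAPPED
EXTTRAIL ./dirdat/ab
TABLE app.*;

The same exclusion is mirrored on the other site; the usual areas to check for limitations are conflict handling, sequences, and triggers firing on both sides.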
I was trying to test bidirectional replication using GoldenGate 11g. While doing some tests I found inconsistencies in the database.
Scenario:
Database 11g EE, GoldenGate 11g.
I set up bidirectional replication and it works fine until I hit the following scenario: I updated one record in the source database, then updated the same record in the target database, and then issued a commit on each side, one after the other. Checking the record afterwards, source and target hold different values: the source contains the value committed on the target, and vice versa, which breaks data integrity.
How can we resolve this? I have tried different options, but nothing seems to work so far.
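One approach for exactly this update/update case is GoldenGate's conflict detection and resolution (CDR), available in OGG 11.2 and later. A sketch only, assuming a hypothetical hr.emp table and a last_upd_ts timestamp column maintained on every update so that the latest change wins on both sites:

-- Extract parameter file: write before images of all columns so the target can detect conflicts
TABLE hr.emp, GETBEFORECOLS (ON UPDATE ALL);

-- Replicat parameter file: compare before images and let the row with the newest timestamp win
MAP hr.emp, TARGET hr.emp, &
  COMPARECOLS (ON UPDATE ALL), &
  RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMAX (last_upd_ts)));

Without some form of CDR, or a rule that only one site is writable for a given set of rows, simultaneous updates on both sides will always end in the crossed values described above, because each side applies the other's change last.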
What is the equivalent of a Streams "tag" in GoldenGate? My tables are already in GoldenGate replication, but I want to do a few inserts into the source that I do not want replicated to the target.
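One GoldenGate-side equivalent (a sketch, not a definitive answer) is to run the inserts you do not want replicated from a dedicated database user and have the Extract skip that user's transactions; the user name batch_maint below is hypothetical:

-- Extract parameter file: transactions made by batch_maint are not captured
EXTRACT extsrc
USERID ggadmin@SRC, PASSWORD ********
TRANLOGOPTIONS EXCLUDEUSER batch_maint
EXTTRAIL ./dirdat/sa
TABLE app.*;

Later releases also offer redo tags (TRANLOGOPTIONS EXCLUDETAG), which is closer to the Streams SET_TAG idea, but that is not shown here.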
Currently we are loading data from Oracle to SQL Server through the Oracle 11g gateway. While running scripts, most of the queries take a long time to complete. Because of this performance issue, we want to configure Oracle GoldenGate to move data from Oracle to SQL Server. What is the difference between Oracle Gateway and Oracle GoldenGate? Do both perform the same function?
We have Development, Staging/UAT (installed on XX.XX.XX.10) and Production (installed on XX.XX.XX.20) environments respectively. I have questions about getting the data from the Production environment into the Staging environment. The overall PROD database size is around 250 GB.
STAGING DATABASE DETAILS
SID: STG_DB
Staging schema name: schema_UAT
Replication schema name: schema_PrdReplica (this is the schema where the production data gets loaded daily)
PROD DATABASE DETAILS
SID: PROD_DB
Prod schema name: schema_PROD
What is happening now:
There is a script (stored procedure) on the staging environment (STG_DB.schema_PrdReplica) which executes nightly and does the replication. Currently we use DBMS_DATAPUMP to get the entire data and metadata (everything) from Production to Staging. It is taking significantly longer than we would like: approximately 8 hours to replicate everything from PROD_DB.schema_PROD to STG_DB.schema_PrdReplica.
What I am expecting:
I want to reduce the replication time.
I have heard about Level 0 (full backup) and Level 1 (incremental cumulative) backups in RMAN. I am planning to take a full (Level 0) backup of PROD_DB.schema_PROD on Sunday and restore it to STG_DB.schema_PrdReplica immediately. On weekdays (Mon-Fri) I will take Level 1 (incremental cumulative) backups and restore those to STG_DB.schema_PrdReplica.
I am assuming that by doing so the overall replication time will be reduced. How can I implement this with a script, given that the two databases are on different machines?
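A minimal RMAN sketch of the backup side of that plan (format strings and schedule are placeholders). One caveat worth stating up front: RMAN backups and restores are physical and work at the database/tablespace/datafile level, not at the schema level, so a restore would refresh all of STG_DB rather than just STG_DB.schema_PrdReplica.

# Sunday: full incremental level 0
BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/backup/prod_lvl0_%U';

# Monday-Friday: cumulative level 1 (contains all changes since the last level 0)
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE FORMAT '/backup/prod_lvl1_%U';

The backup pieces then have to be transferred to the staging host and applied there, which is the part the script would need to automate; how they are applied depends on whether STG_DB is maintained as a physical copy of PROD_DB.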
I am trying to replicate between two database servers, one in Hong Kong and another in China. When I try to establish the replication, I get the error 'ORA-04052: error occurred when looking up remote object'.
But when I try the same thing on my local network it works fine. I have tried schema replication through Enterprise Manager Grid Control.
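When it works locally but not across sites, the first thing to sanity-check is the database link itself from the replicating site; a small SQL sketch, with a hypothetical link name, credentials, TNS alias and object name:

-- create the link used by the replication setup
CREATE DATABASE LINK china_link
  CONNECT TO repadmin IDENTIFIED BY repadmin
  USING 'CHINA_DB';

-- a failed lookup here usually surfaces the underlying error hidden behind ORA-04052
-- (TNS resolution, firewall, wrong credentials, or missing privileges on the remote object)
SELECT * FROM dual@china_link;
SELECT owner, object_name FROM all_objects@china_link WHERE object_name = 'MY_TABLE';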
I need to set up a central server with all the master tables and two other local databases which will hold updatable materialized views of the master tables. The databases must be synchronized with the central server, and users will work on the materialized view databases.
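A minimal sketch of the building blocks for that layout; the object, schema and link names are placeholders, and a full setup also needs the DBMS_REPCAT materialized view groups so that local changes are pushed back to the central server:

-- Central server: materialized view log so the local sites can fast refresh
CREATE MATERIALIZED VIEW LOG ON app.master_tab WITH PRIMARY KEY;

-- Each local site: an updatable materialized view over a link to the central server
CREATE MATERIALIZED VIEW app.master_tab
  REFRESH FAST
  FOR UPDATE
  AS SELECT * FROM app.master_tab@central_link;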
I want to set up advanced replication for 3 master sites (multimaster). I created 3 master sites named orc1, orc2, orc3 and followed the instructions in the Oracle Advanced Replication Management API book. I created 2 tables (test1, test2) in the HR schema at all 3 master sites with the same data, then performed the following steps:
1-CONNECT repadmin/repadmin@orc1
2-Create the master group named hr_test_repg
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'hr_test_repg');
END;
/
4-add tables test1 and test2 to the group
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname => 'hr_test_repg',
    type  => 'TABLE',
    oname => 'test1',
[code]....
I could run DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT for test2 but not for test1; for test1 it produces this error:
ERROR at line 1:
ORA-23309: object hr.test1 of type TABLE exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2552
ORA-06512: at "SYS.DBMS_REPCAT", line 562
ORA-06512: at line 2
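For comparison, a sketch of the sequence I would expect for a table that already exists with identical data at all three sites (the parameter values are assumptions, not a verified fix for the ORA-23309):

BEGIN
  -- quiesce the group before (re)registering objects
  DBMS_REPCAT.SUSPEND_MASTER_ACTIVITY(gname => 'hr_test_repg');
END;
/
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname               => 'hr_test_repg',
    type                => 'TABLE',
    oname               => 'test1',
    sname               => 'hr',
    use_existing_object => TRUE,    -- the table is already there at every site
    copy_rows           => FALSE);  -- and already holds the same data, so do not copy it
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
    sname => 'hr',
    oname => 'test1',
    type  => 'TABLE');
END;
/
BEGIN
  -- resume replication once support has been generated for all tables
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'hr_test_repg');
END;
/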
I have a 10g database in another country. I want part of their DB (selected tables or tablespaces), import that data into my 10g DB, and keep it up to date.
I know of two ways:
1. Data Pump export/import via FTP, but I can't send only the data that has changed (incremental); I must pump the whole selected part of the database (I want only the new data from their DB, but consistent with my DB).
2. RMAN or another backup/archiving tool; I can take incremental backups, but can I ship the files to another instance (my DB) and load only that data? Can I do that with SQL*Loader?
I have configured replication between two databases at the table level. A few minutes after successfully configuring it, I started getting "ORA-00001: unique constraint (%s_PK) violated" errors in dba_apply_error.
I checked the constraint name, type and status on the replicated table, and they are the same on both the source and destination DBs.
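A small SQL sketch for working the error queue while tracking down the duplicate-key source (the transaction id below is a placeholder):

-- see which transactions failed and why
SELECT apply_name, local_transaction_id, source_database, error_number, error_message
  FROM dba_apply_error;

-- once the conflicting row is understood and fixed, retry one transaction or all of them
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '12.34.5678');
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS;

In my experience ORA-00001 shortly after setup often points at rows being both instantiated and re-applied (i.e. the instantiation point not matching the data copy), but that is only one possibility.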
I am using prebuilt materialized views to replicate about 300-400 master tables from one database to another. I am wondering about the impact of triggers on replication in general.
Is there a general rule about enabling/disabling triggers before a refresh?
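I am not aware of a single universal rule, but the common pattern is: if a trigger on the MV-site table would re-run logic that already ran on the master when the refresh re-applies those changes, disable it for the refresh window. A sketch with hypothetical object names:

-- disable the local trigger, refresh, re-enable
ALTER TRIGGER app.trg_orders_audit DISABLE;
EXEC DBMS_MVIEW.REFRESH('APP.ORDERS_MV', method => 'F');
ALTER TRIGGER app.trg_orders_audit ENABLE;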
I want to configure synchronous replication in Oracle 8i. Using the GUI I am able to create the replication, but whatever I modify stays in the deferred transaction queue and I have to run the push job manually.
I also tried to configure a continuous asynchronous setup by setting the delay rate to 500000, but that did not work either.
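If the goal is simply that changes propagate without running the job by hand, the deferred transaction queue can be pushed on a schedule; a sketch with a hypothetical destination and a one-minute interval:

BEGIN
  DBMS_DEFER_SYS.SCHEDULE_PUSH(
    destination => 'ORC2.WORLD',
    interval    => 'SYSDATE + 1/(24*60)',  -- push roughly every minute
    next_date   => SYSDATE,
    parallelism => 1);
END;
/

As far as I recall, truly synchronous propagation is a property of how the master site is added (its propagation mode), not of the push job.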
Is it possible to replicate table data in real time from SQL Server (2005 32-bit or SQL Server 2000 32-bit) to Oracle 10g running on 64-bit Linux? If yes, what are the steps?
It will be one-way replication from SQL Server to Oracle. Which option is better for replicating the table data: SQL Server DTS or Oracle Streams replication?
I am facing a row-lock issue in production. I have been trying to resolve it but could not. I traced, with different queries, which SQL statement is locking what, but everything looks fine.
I also checked that connections are opened and closed properly, and everything is in place, but I am still unable to resolve the issue. We run a batch file every night; some records are processed, and if any one record fails it blocks the other records. My Oracle version is 10.2.0.
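Two queries worth running while the nightly batch is stuck, to see the blocking chain directly rather than inferring it from the SQL (10.2 populates blocking_session in v$session):

-- sessions currently waiting on a lock, and who holds it
SELECT sid, serial#, username, sql_id, event, seconds_in_wait, blocking_session
  FROM v$session
 WHERE blocking_session IS NOT NULL;

-- lock holders that are blocking at least one other session
SELECT sid, type, id1, id2, lmode, request
  FROM v$lock
 WHERE block = 1;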
I partitioned a source table of around 100 million rows (62 GB) on a DEV server. The target database was created new. The table was range partitioned on a date column as follows:
PARTITION BY RANGE (ENTRY_DATE_TIME)
( PARTITION ppre2012 VALUES LESS THAN (TO_DATE('01/01/2012','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
  PARTITION p2012    VALUES LESS THAN (TO_DATE('01/01/2013','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
  PARTITION p2013    VALUES LESS THAN (TO_DATE('01/01/2014','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
  PARTITION p2014    VALUES LESS THAN (MAXVALUE) TABLESPACE WST_LRG_D )
That is, on a yearly basis: anything before 2012 goes to ppre2012, then p2012, p2013 and so forth. There are 20 million rows in p2012 and around 75 million rows in ppre2012. We needed both the source (un-partitioned) and target (partitioned) tables in DEV for comparison. Queries are normally against the current year's partition. Just to note that I am a developer and don't have full visibility into the production instance.
Now that our tests are complete, we would like to promote this to production. Obviously, in production we would not need both the source and target tables. In all probability this will be performed over a weekend window, so I would like to suggest the following:
1) Use expdp to export the source table.
2) Drop the source table.
3) Create a new source table, "partitioned", with no indexes.
4) Use impdp to get the data back into the table.
5) Create the global index (a unique index to enforce uniqueness) and the rest of the indexes as local.
6) Run dbms_stats.gather_table_stats(user, 'SOURCE', cascade => true). This takes around 2 hours in DEV.
My concern is whether importing 100 million rows will cause issues with the undo segments. Can we import the data in stages, say the current partition p2012 (20 million rows) first?
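One way to sketch that staging with the tools already in the plan is to export/import the current-year slice first and the history afterwards, so each run keeps a smaller transaction and undo footprint. Parameter-file sketch only; the directory, dump file name and date boundary are assumptions.

expdp parameter file (say exp_2012.par): current-year rows only
TABLES=SOURCE
QUERY=SOURCE:"WHERE entry_date_time >= DATE '2012-01-01'"
DIRECTORY=dp_dir
DUMPFILE=source_2012.dmp

impdp parameter file (say imp_2012.par), run after the new partitioned SOURCE table has been created:
TABLES=SOURCE
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=APPEND
DIRECTORY=dp_dir
DUMPFILE=source_2012.dmp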
I faced a problem on our production primary DB: the listener was running, but remote systems could not connect through it when accessing the primary database.
When I restarted the listener, connections started working again. I want to know the reason for this crash/freeze.
Oracle version: Oracle Database 11g Release 11.2.0.1.0 SE1, 64-bit Production