Replication :: Data Between 4 Databases In Network
Nov 22, 2012
We want to replicate data between the databases. We have 4 databases in a network. If there is any change in database 1, e.g. an update to a table, it should automatically replicate to the other 3 databases; likewise, if a user changes something in database 2, it should replicate to the other 3 databases, and vice versa. All 4 databases have the same schema and the same configuration.
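One packaged way to do this in this era of Oracle is multi-master (Advanced) Replication, with each of the four databases acting as a master site. Below is a minimal sketch, assuming a replication administrator (repadmin) and database links between the sites already exist; the group, schema, table, and global names are all hypothetical.

-- Run at the master definition site as repadmin. Each added master
-- receives changes from, and pushes changes to, every other site.
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'app_rg');
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname => 'app_rg', sname => 'appuser',
    oname => 'accounts', type => 'TABLE');
  DBMS_REPCAT.ADD_MASTER_DATABASE(gname => 'app_rg', master => 'DB2.WORLD');
  DBMS_REPCAT.ADD_MASTER_DATABASE(gname => 'app_rg', master => 'DB3.WORLD');
  DBMS_REPCAT.ADD_MASTER_DATABASE(gname => 'app_rg', master => 'DB4.WORLD');
  DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT(
    sname => 'appuser', oname => 'accounts', type => 'TABLE');
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'app_rg');
END;
/

With four writable masters, conflict resolution (e.g. latest-timestamp handlers) should also be planned for rows that can be changed at two sites in the same interval.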
I have configured replication between two databases at table level. A few minutes after the successful configuration of replication between the two DBs at table level, I am getting the error ORA-00001: unique constraint (%s_PK) violated in DBA_APPLY_ERROR.
I checked that the constraint name, type, and status on the replicated table are the same on both the source and the destination DB.
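A common cause is overlap between the instantiation export and ongoing changes: rows that already exist at the destination get applied a second time, and the primary key rejects them. The error queue shows exactly which transactions failed, and they can be retried once the duplicates are reconciled; a sketch (the apply name and transaction id shown are hypothetical):

-- Inspect the apply error queue for the failing transactions.
SELECT apply_name, local_transaction_id, error_message
  FROM dba_apply_error;

-- After fixing the duplicate row at the destination, retry one
-- failed transaction (id copied from the query above)...
EXEC DBMS_APPLY_ADM.EXECUTE_ERROR('5.4.312');

-- ...or retry everything queued for this apply process.
EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS('APPLY_TAB');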
I need to set up replication between three 10g R2 databases. There will be one main database from which data will flow into two small databases. A little information will flow back from these smaller databases into the main one.
I have been asked to evaluate ODI as an option. Unfortunately I cannot use GoldenGate due to the cost. I've had a play with ODI, and it looks like it's all done in a GUI, whereas I would much prefer to use command-line scripts for setup and monitoring.
It also looks like ODI is more geared toward integrating data from many different sources, but I'm just going from Oracle to Oracle, so I don't know if it's overkill.
I'm thinking of just looking at CDC, which I can set up with scripts and which looks to be the basis behind ODI anyway (when it comes to taking data from Oracle). I may even go down the Streams route, which I have used before.
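For the scripted CDC route on 10gR2, the publish side can be as small as one change table on the built-in synchronous change set. A sketch, assuming a source table SCOTT.ORDERS and a publisher schema cdcpub (both hypothetical):

-- Publisher side: a change table on the predefined synchronous change
-- set (SYNC_SET); DML on SCOTT.ORDERS is then captured by internal trigger.
BEGIN
  DBMS_CDC_PUBLISH.CREATE_CHANGE_TABLE(
    owner             => 'cdcpub',
    change_table_name => 'orders_ct',
    change_set_name   => 'SYNC_SET',
    source_schema     => 'SCOTT',
    source_table      => 'ORDERS',
    column_type_list  => 'ORDER_ID NUMBER, STATUS VARCHAR2(20)',
    capture_values    => 'both',
    rs_id             => 'y',
    row_id            => 'n',
    user_id           => 'n',
    timestamp         => 'n',
    object_id         => 'n',
    source_colmap     => 'y',
    target_colmap     => 'y',
    options_string    => NULL);
END;
/

Synchronous capture adds trigger overhead to every DML on the source table; asynchronous (HotLog) CDC avoids that at the cost of more setup.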
I want to upgrade all the Oracle 10g Release 2 (10.2.0) master-site (bi-directional) databases to the latest Oracle 11g release. We are using bi-directional Oracle Streams and snapshot replication, which means the capture, propagation, and apply processes are running.
I am working in a bank as a system consultant. I have a SAN (storage area network) and Oracle set up as below.
SAN 1
This array holds the data files of the Oracle tablespaces.
SAN 2
SAN 1 mirrors the data files of the Oracle tablespaces to SAN 2.
1. Can I rely on real-time data recovery from SAN 2?
2. If the data files on SAN 1 are corrupted, will the SAN 2 data files be corrupted as well?
3. If SAN 2 is corrupted, what Oracle features can be used to have uncorrupted data?
I have the task of migrating entire databases (an exact copy to be moved to another server); the current server is going to be formatted. After I did the following steps, the sizes of the 4 (application) tablespaces match, but I am facing an issue: some default tablespaces, i.e. TEMP and SYSTEM, do not match.

TEMP tablespace: current server - 4.0 (approximately); migrating server - 160 MB
SYSTEM tablespace: current server - 580 MB; migrating server - 220 MB

I also checked that the tables match across the 4 databases. Please advise whether the method below is correct, or suggest a better one.
Steps done for migrating (by me):

EXPORTING DATA USING DATA PUMP
1. From command prompt: MKDIR c:\oraclexe\app\tmp
2. From SQL prompt: conn system/kotak;
3. create or replace directory dmpdir as 'c:\oraclexe\app\tmp';
4. grant read, write on directory dmpdir to kotak;
5. From command prompt:
   expdp system/kotak@xe full=Y directory=dmpdir dumpfile=xe.dmp logfile=expdpxe.log

IMPORTING DATA USING DATA PUMP (on the other server machine)
1. From SQL prompt: conn system/kotak;
2. create or replace directory dmpdir as 'c:\oraclexe\app\tmp';
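The import side would typically continue with the same directory grant and an impdp call; a sketch (not the poster's missing steps, same names assumed):

3. grant read, write on directory dmpdir to kotak;
4. From command prompt:
   impdp system/kotak@xe full=Y directory=dmpdir dumpfile=xe.dmp logfile=impdpxe.log

Note that a full Data Pump import does not copy the SYSTEM or TEMP tablespaces themselves; they belong to the new instance, so their sizes will legitimately differ from the source as long as the application tablespaces and tables match.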
I have created 2 PCs (one for the primary DB (IT_SERVER), one for the standby DB (IT_SERVER2)) on Oracle VM VirtualBox in order to check my Data Guard configuration.
I have created a shared folder on the 2nd PC (IT_SERVER2) for the Oracle standby DB. Now, when I issue my query from the 1st PC (primary DB) to create a parameter file in the 2nd PC's shared Oracle folder, it returns the following errors:
SQL> CREATE PFILE='\\IT_SERVER2\E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE;
CREATE PFILE='\\IT_SERVER2\E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE
*
ERROR at line 1:
ORA-09210: sftopn: error opening file
OSD-04002: unable to open file
O/S-Error: (OS 67) The network name cannot be found.
Then I tried this:

SQL> CREATE PFILE=\\IT_SERVER2\'E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE;
CREATE PFILE=\\IT_SERVER2\'E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE
*
ERROR at line 1:
ORA-00911: invalid character
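OS error 67 ("the network name cannot be found") points at the UNC path itself: a drive letter such as E: cannot appear inside a UNC name, which must take the form \\server\sharename\file. A sketch, assuming the folder on IT_SERVER2 is shared under the (hypothetical) share name ORACLE_SHARE and that the account running the Oracle service on IT_SERVER has rights on that share:

SQL> CREATE PFILE='\\IT_SERVER2\ORACLE_SHARE\STLDB2.ORA' FROM SPFILE;

If the Oracle service runs as LocalSystem, it normally cannot reach network shares at all; running it under a domain account is the usual workaround.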
I have a primary database which ships logs successfully to a standby database, where they are applied successfully as well.
I can connect using the TNS names defined for Data Guard (which are already used for log shipping) in SQL*Plus.
Yet, at the time of configuring the Data Guard broker, the command add database 'standby' connect identifier is 'to_standby'; fails.
Looking into drc<instance>.log, I see the error ORA-03113 (network I/O).
I haven't seen any error like that online, and I can't explain why it happens when logs can already ship with the same connect identifier.
I have edited dest_2 with and without the LGWR and SYNC parameters and reconfigured the broker, without success. I've removed the configuration files and tried again.
I've checked that the password file is the same on both servers as well. Nothing seems to work. Is there a parameter I'm missing somewhere?
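One item that log shipping does not need but the broker does is a static listener registration: DGMGRL restarts instances through a service named <db_unique_name>_DGMGRL, and if that service is not statically registered, broker connections can die with errors such as ORA-03113. A listener.ora sketch for the standby server (home path and names are hypothetical):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = standby_DGMGRL)  # <db_unique_name>_DGMGRL
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
      (SID_NAME = standby)
    )
  )

The same kind of entry is needed on the primary, followed by a listener reload on both servers.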
We want to use a database link to connect to a database for running SELECT, UPDATE, and other commands. Our destination database uses the WE8ISO8859P1 character set and our current database uses AR8MSWIN1256, but when we run a command to view data, all non-English characters appear garbled and we cannot recognize the displayed text. Even if we use the CONVERT function, nothing changes; we cannot view the correct characters over our database link.
It doesn't work with the CONVERT function:
select convert(menu_name, 'US7ASCII', 'WE8ISO8859P1'),
       convert(menu_name, 'AR8MSWIN1256', 'WE8ISO8859P1'),
       convert(convert(menu_name, 'US7ASCII', 'WE8ISO8859P1'), 'AR8MSWIN1256', 'WE8ISO8859P1'),
       menu_name
from T$R_MENU@"TO201.US.ORACLE.COM"
where MENU_ID = 601011;
The result is:
EU?iY ?C?ICa? OCOaI? E???? ?C?IC?? ?C??I? EU?iY ?C?ICa? OCOaI? E???? ?C?IC?? ?C??I?
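WE8ISO8859P1 is a Western European character set with no Arabic letters, so if Arabic text was ever inserted into that remote database, it was corrupted on the way in, and no CONVERT on the way out can restore it. It is worth confirming what each end uses and what bytes are really stored; a couple of checks:

-- Character sets on both ends of the link.
SELECT parameter, value FROM nls_database_parameters
 WHERE parameter = 'NLS_CHARACTERSET';
SELECT parameter, value FROM nls_database_parameters@"TO201.US.ORACLE.COM"
 WHERE parameter = 'NLS_CHARACTERSET';

-- The raw bytes stored in the remote column.
SELECT DUMP(menu_name, 1016)
  FROM T$R_MENU@"TO201.US.ORACLE.COM"
 WHERE MENU_ID = 601011;

If the dump shows replacement bytes (e.g. 0x3F question marks), the data is already lost at the source and must be re-entered in a database whose character set can hold Arabic.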
We have an application that fetches and writes data into an Oracle database through Pro*C; the Oracle database is on another server.
We are storing some secure information in the Oracle database, so we want to encrypt the data sent by our application to it. We do not want to use SSL (i.e. certificates), do not want to make use of the Advanced Security Option available in Oracle, and do not want to make any changes to the sqlnet.ora file on the server side.
How can we achieve encryption of the traffic between our application and the Oracle database?
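With the network-level options ruled out, one remaining approach is to encrypt the sensitive values themselves before they leave the application (or in the database with DBMS_CRYPTO), so only ciphertext is stored; note this protects the values, not the rest of the SQL traffic. A minimal server-side sketch, assuming EXECUTE on DBMS_CRYPTO has been granted; the hard-coded key is for illustration only and real key management must live elsewhere:

CREATE OR REPLACE FUNCTION encrypt_val(p_clear IN VARCHAR2) RETURN RAW IS
  -- Hypothetical 32-byte key; never hard-code keys in production.
  l_key RAW(32) := UTL_RAW.cast_to_raw('0123456789abcdef0123456789abcdef');
BEGIN
  RETURN DBMS_CRYPTO.encrypt(
           src => UTL_RAW.cast_to_raw(p_clear),
           typ => DBMS_CRYPTO.encrypt_aes256
                + DBMS_CRYPTO.chain_cbc
                + DBMS_CRYPTO.pad_pkcs5,
           key => l_key);
END;
/

If the value must also be unreadable in transit, the AES step has to happen on the client (Pro*C) side instead, so that only ciphertext ever crosses the wire.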
I am trying to write data to a network shared folder. When I write to a local file, it works perfectly. Below is my procedure:

CREATE OR REPLACE procedure nbpsbp_file as
  type r_cursor is ref cursor;
  refr       r_cursor;
  tab_name   varchar2(20) := null;
  tab_name1  varchar2(20) := null;
  tab_name2  varchar2(20) := null;
[code]....
When I execute the above procedure, it gives me the following error:

ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
ORA-06512: at "NBPSBP_FILE", line 36
ORA-06512: at line 1
I have also set the parameter utl_file_dir = '\\10.16.10.225\temp'. When I set utl_file_dir to a local folder, for example c:\temp, and use the same path in UTL_FILE.FOPEN, it works fine and writes the desired output to the text file. But when I give it a network address, it raises the above error.
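On Windows, ORA-29283 against a UNC path is usually a permissions problem rather than a path problem: the Oracle service typically runs as LocalSystem, which has no rights on network shares, so the usual fix is to run the service under a domain account that can write to the share. A sketch using a directory object instead of utl_file_dir (the object and user names are hypothetical):

CREATE OR REPLACE DIRECTORY net_dump_dir AS '\\10.16.10.225\temp';
GRANT READ, WRITE ON DIRECTORY net_dump_dir TO nbpsbp_user;

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('NET_DUMP_DIR', 'test.txt', 'w');
  UTL_FILE.PUT_LINE(f, 'hello over the network');
  UTL_FILE.FCLOSE(f);
END;
/

Directory objects also avoid the instance restart that changing utl_file_dir requires.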
I have one primary database server and one physical standby database server, but I am unable to fix ORA-16607 while enabling the configuration. During log switches, redo data is applied on the physical standby side, but something goes wrong in the BROKER.
Here are the details about my configuration:

Primary database name: PRIM
Physical standby database name: STAN
Net service names: PRIM and STAN
DGMGRL> show configuration

Configuration
  Name:                test
  Enabled:             YES
  Protection Mode:     MaxPerformance
  Fast-Start Failover: DISABLED
  Databases:
    prim - Primary database
    stan - Physical standby database

Current status for "test":
Warning: ORA-16607: one or more databases have failed
======================================================
Alert log file: the first part shows no problem for redo being transmitted to the standby
======================================================
LNS1 started with pid=48, OS id=5408
Thu Sep 21 21:32:03 2006
Thread 1 advanced to log sequence 59
  Current log# 1 seq# 59 mem# 0: /u01/app/oracle/oradata/PRIM/redo01.log
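Since redo transport looks healthy, the next step is usually to ask the broker which database actually holds the error; ORA-16607 is only the roll-up warning. The per-database status properties narrow it down:

DGMGRL> show database verbose 'stan';
DGMGRL> show database 'stan' 'StatusReport';
DGMGRL> show database 'prim' 'LogXptStatus';

Whatever ORA- code appears under StatusReport (often a transport or archive-location property mismatch) is the one to fix before re-enabling the configuration.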
I have one primary database and one physical standby database in my Data Guard environment.
Now I want one more physical standby database in my Data Guard environment, meaning I want 2 physical standbys. Since this is a production environment, I need to take great care; however, downtime will be mandatory.
init.ora for production:

pup2.__db_cache_size=6241124352
pup2.__java_pool_size=33554432
pup2.__large_pool_size=134217728
pup2.__oracle_base='/oracle/app/oracle'#ORACLE_BASE set from environment
pup2.__pga_aggregate_target=5771362304
[code]......
init.ora for my first physical standby:

pup2.__db_cache_size=6509559808
pup2.__java_pool_size=33554432
pup2.__large_pool_size=67108864
pup2.__oracle_base='/oracle/app/oracle'#ORACLE_BASE set from environment
pup2.__pga_aggregate_target=5502926848
[code].......
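On the primary, a second physical standby mainly needs its own archive destination and an entry in the DG_CONFIG list; the standby itself is built the usual way (e.g. RMAN duplicate for standby). A sketch, where pup3 and the name of the existing standby (pup2stby) are assumptions:

-- On the primary (pup2):
ALTER SYSTEM SET log_archive_config =
  'DG_CONFIG=(pup2,pup2stby,pup3)' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_3 =
  'SERVICE=pup3 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pup3'
  SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_3 = 'ENABLE' SCOPE=BOTH;

These ALTER SYSTEM changes are dynamic, so the downtime window is really only needed for copying/duplicating the database files, not for the parameter changes.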
Oracle 11.2.0.3. I have insert-only tables that receive records from multiple processes at a rate of about 200/second. Each transaction can have up to 100 records. I have another set of processes that query this table for the latest data. These processes run anywhere from once a minute to once an hour. The processes do not get all of the data; they get data based on a type field.
Both of these are Java middle tiers. The process that queries data (the subscriber) does so at the request of many remote servers (there will be vast numbers). I am not allowed to expose these downstream databases to the internet (they are not Oracle DBs anyway), so I cannot use Streams or GoldenGate.
So basically:
Insert process: multiple sessions that combined insert records at up to 200/second. There will be between 1 and 100 records per commit.
Query process: a downstream process makes a request to my middle tier. This middle tier runs a query to get the latest data and passes it back. This design is set and I cannot change it.
1. Right now we capture the insert time of the record. However, at this rate of inserts, some processes will commit faster than others, so I can't use a 'greater than my insert time' query.
2. Streams/GoldenGate won't work; I can't register these DBs.
3. I don't want to serialize my inserts, since I am not sure I can keep up with the insert rate. I don't even know what the specs will be for the production hardware; I have to deliver this before that is decided, so I am being conservative.
4. I really want to avoid updates on this table if possible, in part due to my limited ability to test.
5. Due to the number of downstream processes, it is possible that one will request data and for some reason fail to insert the data locally. The downstream application will keep track of the latest data it received, which means a subscriber may need to request the same data again.
Is there a way to set up Change Data Capture with multiple subscribers to handle this, given that my subscribers are just queries? All the queries come from the same servers (there will be several, but all doing the same thing). If so, when I performance test this, are there any wait events I should keep an eye on?
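CDC does support multiple subscribers, each with an independent window over the same change table, which also covers requirement 5 (re-reading a range after a failed local insert). A subscriber-side sketch; the names are hypothetical and assume a change table has already been published for the source table:

DECLARE
  v_sub VARCHAR2(30) := 'orders_sub1';
BEGIN
  DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
    change_set_name   => 'SYNC_SET',
    description       => 'middle-tier subscriber 1',
    subscription_name => v_sub);
  DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
    subscription_name => v_sub,
    source_schema     => 'SCOTT',
    source_table      => 'ORDERS',
    column_list       => 'ORDER_ID, STATUS',
    subscriber_view   => 'ORDERS_SUB1_V');
  DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION(v_sub);
END;
/

-- Per polling cycle: advance the window, read, then release it.
EXEC DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW('orders_sub1');
SELECT order_id, status FROM orders_sub1_v;
EXEC DBMS_CDC_SUBSCRIBE.PURGE_WINDOW('orders_sub1');

For performance testing, synchronous CDC turns every insert into an insert on the change table too, so watch the usual insert-contention waits (buffer busy waits, enq: HW - contention, log file sync) on both tables at your 200 rows/second rate.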
I have recently configured Data Guard with Database 10g (10.2.0.4, 64-bit) on Windows 2007 server. My Data Guard configuration shows SUCCESS status with the 2 databases in the same (local) location. My questions are:

1. When I query SHOW PARAMETER LOG_ARCH, only one database (the primary) is displayed, e.g. DG_CONFIG=(PRMDB), rather than both databases, e.g. DG_CONFIG=(PRMDB,STLDB).
2. How can I check the log-apply interval or time (transaction by transaction, timing, etc.)?
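For question 1, DG_CONFIG simply reflects the LOG_ARCHIVE_CONFIG parameter of the instance you query, so both unique names appear only once they are set there; for question 2, the standby's v$dataguard_stats view reports transport and apply lag. A sketch:

-- On the primary: list both unique names.
ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(PRMDB,STLDB)' SCOPE=BOTH;

-- On the standby: how far behind transport and apply are.
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');

-- Per-log apply history with timestamps.
SELECT sequence#, first_time, next_time, applied
  FROM v$archived_log
 ORDER BY sequence#;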
Suppose we have one main database on one side (say A) and an external system on the other side (say C). Midway, we have one more staging database (say B). Say we have one record related to bank information in both database A and database B, and the following activities are performed.
1) The "Bank Name" column in database B is updated by the external system C. When this Bank Name is updated in database B by external system C, we need to update the value of this field in database A.
2) Suppose a couple of days later, the bank phone number of the same bank record is updated in the main database A, and the same update needs to be reflected in staging database B.
How can we take care of both these data-synchronization activities? What different approaches can we take? FYI, we are on Oracle 10g Release 2 and Windows.
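The packaged options on 10gR2 would be bi-directional Streams or updatable materialized views; for just a couple of columns, a pair of row-level triggers pushing changes over database links is also workable. A one-direction sketch (table, column, and link names are hypothetical, and a real design needs loop prevention so the two triggers do not ping-pong the same change):

-- In database B, pushing Bank Name changes to A over a db link:
CREATE OR REPLACE TRIGGER trg_bank_name_to_a
AFTER UPDATE OF bank_name ON bank_info
FOR EACH ROW
BEGIN
  UPDATE bank_info@db_a_link
     SET bank_name = :NEW.bank_name
   WHERE bank_id = :NEW.bank_id;
END;
/

The mirror trigger in A would push the phone-number column to B. The trade-off versus Streams is that the remote update happens inside the local transaction, so B's update fails if the link to A is down.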
We have Development, Staging/UAT (installed on XX.XX.XX.10), and Production (installed on XX.XX.XX.20) environments, respectively. I have queries regarding getting the data from the production environment into the staging environment. The overall PROD database size is around 250 GB.
STAGING DATABASE DETAILS
SID: STG_DB
Staging schema name: schema_UAT
Replication schema name: schema_PrdReplica (this is the schema where the production data gets loaded daily)

PROD DATABASE DETAILS
SID: PROD_DB
Prod schema name: schema_PROD
What is happening now: there is a script (stored procedure) written in the staging (STG_DB.schema_PrdReplica) environment which executes nightly and does the replication. Currently we use DBMS_DATAPUMP to get the ENTIRE data and metadata (everything) from production to staging. It is taking significantly more time than we would like: approximately 8 hours to replicate everything from PROD_DB.schema_PROD to STG_DB.schema_PrdReplica.
What I am expecting: I want to reduce the replication time.
I have heard about level 0 (full backup) and level 1 (incremental cumulative) backups in RMAN. I am planning to take a full (level 0) backup of PROD_DB.schema_PROD on Sunday and restore it to STG_DB.schema_PrdReplica immediately, and on weekdays (Mon-Fri) to take level 1 (incremental cumulative) backups and restore those to STG_DB.schema_PrdReplica.
I am assuming that by doing so, the overall replication time will be reduced. How can I implement this with a script, given that the two servers are on different machines?
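One caveat first: RMAN backs up and restores databases, tablespaces, and datafiles, not schemas, so this plan would refresh a whole database (or a dedicated clone) on the staging host rather than just the schema_PrdReplica schema. Under that assumption, the backup side is straightforward; paths are hypothetical:

# Sunday, on PROD (level 0):
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE FORMAT '/backup/prod_L0_%U';

# Mon-Fri, on PROD (cumulative level 1, everything since the level 0):
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE FORMAT '/backup/prod_L1_%U';

The backup pieces then have to be shipped to the staging machine (or written to storage both servers can see) and applied there with RESTORE/RECOVER. If only the one schema is actually needed, a schema-level expdp, or transportable tablespaces, may shrink the 8-hour window with far less machinery.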
We need to transfer data from Oracle 10g to Oracle 9i under the following conditions.
There will be two database servers: one is the online server, where online users fill in a form generated by Java, Spring, and Hibernate, using a 10g database. At day end I need to execute a process that transfers data from the online server to the offline server, which runs Oracle 9i. This process is scheduled. For security reasons, the client does not keep these two databases on the same network. My challenge is to transfer data from the online server to the offline server while applying the client's security norms. I have options like:
1) Using the Oracle replication method: creating a materialized view on the remote server and refreshing it at a regular interval. But database connectivity is not continuous; should I go for that? (See the sketch after this list.)
2) Writing a Java application on an intermediate server, with a process that gets connections to both database servers. From the Java application we call a procedure that selects data from the Oracle 10g database and inserts it into the Oracle 9i database, using a flag on both sides to identify how many rows have been transferred and how many remain.
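Intermittent connectivity does not by itself rule out option 1: a fast refresh only needs the link up while the refresh runs, and it picks up everything logged since the previous refresh. A sketch (table and link names are hypothetical):

-- On the 10g online server: a log so fast refresh can find the deltas.
CREATE MATERIALIZED VIEW LOG ON form_data;

-- On the 9i offline server:
CREATE MATERIALIZED VIEW form_data_mv
  REFRESH FAST
  AS SELECT * FROM form_data@online_link;

-- Scheduled at day end, while the link is allowed to be up:
EXEC DBMS_MVIEW.REFRESH('FORM_DATA_MV', 'F');

Whether this meets the security norms depends on whether a SQL*Net connection between the two networks is permitted at all; if it is not, option 2 (or file-based transport, such as export/import through the intermediate server) is the fallback.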
We are getting the following error when we try to extract data from ASM:

GGS ERROR 500 Oracle GoldenGate Capture for Oracle, ext_1.prm: Getting attributes for ASM file +DATA/testgg/onlinelog/group_1.257.742844671, SQL <BEGIN dbms_diskgroup.getfileattr('+DATA/testgg/onlinelog/group_1.257.742844671', :filetype, :filesize, :lblksize); END;>: (6550) ORA-06550: line 1, column 7: PLS-00201: identifier 'DBMS_DISKGROUP.GETFILEATTR' must be declared ORA-06550: line 1, column 7: PL/SQL: Statement ignored
Not able to establish initial position for begin time 2011-02-16 16:42:05.
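DBMS_DISKGROUP exists only inside an ASM instance, so PLS-00201 here usually means the extract is trying to read the log files through the database instance instead of connecting to ASM. The usual fix is to point the extract at the ASM instance explicitly; a parameter-file sketch where the TNS alias ASM1 and the password are assumptions (the alias must resolve to a static listener registration for the ASM instance):

-- In ext_1.prm:
TRANLOGOPTIONS ASMUSER sys@ASM1, ASMPASSWORD oracle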
Actually I am trying to replicate two DB servers, one in Hong Kong and another in China. When I try to establish the replication, I get an error like 'ORA-04052: error occurred when looking up remote object'...
But the same approach works fine on my local network. I have tried schema replication through Enterprise Manager Grid Control.
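ORA-04052 is raised while resolving an object over a database link, so the first things to verify are that the link itself works across the WAN and that GLOBAL_NAMES is not forcing a link name that does not match the remote site. A few hedged checks (the link name is hypothetical):

SELECT * FROM dual@hk_to_cn;              -- basic end-to-end connectivity
SELECT * FROM global_name@hk_to_cn;       -- what the remote site calls itself
SHOW PARAMETER global_names               -- if TRUE, the link name must match it

If these pass locally but hang or error across the WAN, the problem is usually firewall/SQL*Net reachability between the sites rather than the replication setup itself.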