Replication :: GoldenGate Extract For 12c Database
Aug 30, 2013
I am implementing GoldenGate 11gR2 for a 12c database, but I am getting the error below. My question: why does GoldenGate need a database-specific package ... since GoldenGate supports both homogeneous and heterogeneous replication?
/u01/12c_database_software/goldengate/dirtmp.
2013-08-30 05:28:44 INFO OGG-01513 Oracle GoldenGate Capture for Oracle, ext1.prm: Positioning to Sequence 66, RBA 25067536, SCN 0.0.
2013-08-30 05:28:44 ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, ext1.prm: ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_INTERNAL_CLKM' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored.
2013-08-30 05:28:44 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
Database details
----------------
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM' and object_type in ('PACKAGE');
no rows selected
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM';
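For reference, SYS.DBMS_INTERNAL_CLKM (used by Extract to process TDE-encrypted data) is created by a script shipped with the database, not by the GoldenGate installer. A minimal sketch of creating it as SYS, assuming the commonly documented script name and a placeholder GoldenGate user:
SQL> -- run as SYS; prvtclkm.plb is the script name usually cited for this package
SQL> @?/rdbms/admin/prvtclkm.plb
SQL> GRANT EXECUTE ON sys.dbms_internal_clkm TO ggs_admin;  -- ggs_admin is a placeholder
Re-run the dba_objects query above afterwards to confirm the package exists before restarting Extract.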
We are getting the following error when trying to extract data from ASM.
GGS ERROR 500 Oracle GoldenGate Capture for Oracle, ext_1.prm: Getting attributes for ASM file +DATA/testgg/onlinelog/group_1.257.742844671, SQL <BEGIN dbms_diskgroup.getfileattr('+DATA/testgg/onlinelog/group_1.257.742844671', :filetype, :filesize, :lblksize); END;>: (6550) ORA-06550: line 1, column 7: PLS-00201: identifier 'DBMS_DISKGROUP.GETFILEATTR' must be declared ORA-06550: line 1, column 7: PL/SQL: Statement ignored.
Not able to establish initial position for begin time 2011-02-16 16:42:05.
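The DBMS_DISKGROUP.GETFILEATTR error usually means Extract's connection to the ASM instance lacks the required privilege, since dbms_diskgroup is only callable from a privileged (SYSDBA/SYSASM) connection. A hedged sketch of the relevant Extract parameters, with the TNS alias and password as placeholders:
TRANLOGOPTIONS ASMUSER sys@ASM1, ASMPASSWORD mypassword
The ASM1 alias must resolve to the ASM instance in tnsnames.ora, and the ASM listener must accept the connection.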
I have a question about GG Sequence Replication and Triggers. My main database, which I would like to replicate on another server, is highly dependent on sequences for assigning surrogate keys to every row in every table in the application. I know that I need to add Sequence support to my source database (plus supplemental logging, etc), but I'm curious about the target database.
I do not anticipate allowing read/write access to this database - we are migrating from 10.2.0.4 (source) to 11.2.0.3 (target) on a new platform, and I want to keep the 11g database up to date with our production data until it is time to begin the actual conversion of our application. My thinking is that if I use the SUPPRESSTRIGGERS dboption in my Replicat session, this should take care of the sequences being used to assign the surrogate key values, and the data should be applied to the tables normally without any intervention by the sequence/trigger combination. I know I will have to manually "correct" the sequences on my 11.2.0.3 database whenever I want to open it up for use, but I have a script for this ready to go.
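For reference, a minimal sketch of the Replicat parameter entry being discussed (SUPPRESSTRIGGERS is, as far as I know, available on 11.2.0.2 and later targets, which matches the 11.2.0.3 target here):
DBOPTIONS SUPPRESSTRIGGERS
This disables trigger firing for the Replicat session only, so the surrogate-key triggers stay enabled for any other session.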
Also, in my source database, I am using Oracle Context indexes for generic name searching - this feature creates a number of DR$ named tables in the main application schema that I am replicating (approximately 50 of them). I am assuming that I should EXCLUDE these tables from the replication, as the context indexing should automatically update them as changes to the underlying data are applied via the replication of the indexed tables.
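A hedged sketch of excluding the Text index tables with a wildcard, assuming the application schema is named APP, placed in the Extract parameter file:
TABLEEXCLUDE APP.DR$*
The indexes on the target then maintain their own DR$ tables as the replicated base-table changes are applied.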
Using GoldenGate to replicate a database (encrypted tablespace, Oracle 11.2.0.1, Windows 2008) to a different database server (no encrypted tablespace, Oracle 11.1, Linux).
The GoldenGate report shows the following error: ERROR OGG-01771 DBOPTIONS DECRYPTPASSWORD must be used to decrypt TSE data. Use TRANLOGOPTIONS IGNORETSERECORDS if you do not need to capture any tables that are in an encrypted tablespace.
How do I use it?
GGSCI> ENCRYPT PASSWORD "shared key"
Then add an entry to the Extract parameter file to decrypt the new shared password.
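A hedged sketch of the whole sequence, with placeholder values, following the OGG-01771 message above:
GGSCI> ENCRYPT PASSWORD mysharedkey ENCRYPTKEY DEFAULT
-- copy the encrypted value GGSCI prints, then in the Extract parameter file:
DBOPTIONS DECRYPTPASSWORD <encrypted_value> ENCRYPTKEY DEFAULT
The shared key must match the one registered in the source database for TSE. Alternatively, TRANLOGOPTIONS IGNORETSERECORDS skips encrypted tablespaces entirely if you do not need those tables.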
I need to set up a central server with all the master tables and two other local databases that will hold updatable materialized views of the master tables. The databases must be synchronized with the central server, and users will work on the materialized view databases.
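A minimal sketch of the updatable materialized view pattern being described, with all object and link names assumed:
-- on the central (master) site, one log per master table:
CREATE MATERIALIZED VIEW LOG ON app.orders WITH PRIMARY KEY;
-- on each local site (central_db is a placeholder database link):
CREATE MATERIALIZED VIEW app.orders
  REFRESH FAST
  FOR UPDATE
  AS SELECT * FROM app.orders@central_db;
Updatable MVs additionally require the Advanced Replication machinery (DBMS_REPCAT.CREATE_MVIEW_REPGROUP and related calls) so that local changes are pushed back to the master.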
We are facing a project where it is mandatory that the migration (from 9i to 11g) happens without any downtime. We thought about using GoldenGate to do this migration, but I would like to hear from somebody who has already done this kind of migration (I have never used GoldenGate before). The basic steps for such a migration would be:
1) Install the GoldenGate software on both source and target.
2) Export only the metadata (the structure of the tables, for example) from source to target. (Here is one point of doubt of mine: can this export only be done using exp/imp?)
3) Perform the initial load from source to target. (Another doubt: is it possible to perform an initial load of a whole database?)
4) Configure Manager, Extract, and Replicat to perform the migration with the source database open read-write.
With the steps above, would I be able to perform a migration without downtime? What other considerations do you have?
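One commonly described pattern, sketched here under assumed names: start Extract first so changes are being captured, take an SCN-consistent export, import it into the target, then start Replicat from that SCN.
-- on the 9i source, note the current SCN:
SELECT dbms_flashback.get_system_change_number FROM dual;
-- take an export consistent to that SCN (9i exp supports FLASHBACK_SCN):
--   exp system/*** FULL=Y FLASHBACK_SCN=<scn> FILE=mig.dmp
-- after importing into 11g, start Replicat just after that SCN:
GGSCI> START REPLICAT rep1, AFTERCSN <scn>
Replicat then skips everything the export already contains and applies only later changes, which is what makes near-zero downtime possible.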
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
All, I have an infuriating problem and I'm hoping for some advice. Well, actually it is a number of problems, but I'll use one example as a microcosm. I'm going to use this as the example:
Now then, this query runs perfectly and as expected (direct reads/DB scattered reads) until it hits step 13 in the plan (FTS of T6). At step 13 it basically enters into a (rather fatal) pattern of reading almost nothing but undo, exclusively.
Now then, I wonder why this might be. Of course the natural reaction is to say "the data is changing, it's read consistency" (with an optional rookie insult). But I find it hard to believe that the volume of data could be so extensively changed, keeping in mind it's a good-sized table and the query is doing a full access.
I checked the DBA_HIST_SEG_STAT (DB_BLOCK_CHANGES_DELTA) view for the period. The run-up, i.e. explain plan steps 2-12, took approx 20 minutes. In this window, less than 1% of the blocks in table T6 were changed (approx 3-4k out of a 500k-block table). Nearly every access of this table is reading undo, yet since this query started, 99% of the table is unchanged. But the query is behaving as if everything it reads has altered since it began.
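(For reference, a sketch of the sort of check described, with schema, object, and snapshot IDs as placeholders:
SELECT SUM(s.db_block_changes_delta)
  FROM dba_hist_seg_stat s
 WHERE s.obj# = (SELECT object_id FROM dba_objects
                  WHERE owner = 'APP' AND object_name = 'T6')
   AND s.snap_id BETWEEN 1000 AND 1002;
)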
To be clear, I understand read consistency etc. and expect it; what I DON'T expect is virtually the ENTIRE table scan operation routing via undo when I'm accessing the full table, yet hardly any blocks are altered after I start running.
Note this query/application seems to fetch the rows in batches of 1k; it's almost as if it's re-executing for each fetch, but I wouldn't have expected that.
In summary: query starts > hits final table > almost exclusively undo reads > under 1% of block changes in the read table since the query began.
Does anyone know of any other way that might cause a session to read into undo, beyond another session changing data after the query started executing?
How do I pick/extract the Java class files from the database? We have not maintained the latest code in the Oracle application server where the Java class code resides.
All the Java classes are available only in the database, so we need to pick up the latest Java class code from the production environment. We tried in TOAD: all the class objects are listed on the left side, but we are unable to extract the code. How can we take the latest code (Java classes) from the production database as a backup?
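A hedged sketch using the DBMS_JAVA export routines, with class and schema names assumed; EXPORT_SOURCE works for Java source stored in the database, EXPORT_CLASS for compiled class bytes:
SET SERVEROUTPUT ON
DECLARE
  l_src CLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_src, TRUE);
  DBMS_JAVA.EXPORT_SOURCE('com/acme/MyClass', 'APPSCHEMA', l_src);  -- names are placeholders
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(l_src, 4000, 1));
END;
/
If only compiled classes were loaded (no source in the database), EXPORT_CLASS into a BLOB, write it to a .class file, and decompile.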
How can I maintain a replication scheme between a production database and a standby? I was looking at Oracle's advanced replication methods, but what I want is to run a process in the evening that applies the day's changes to the standby incrementally, and so on each night.
The server I want to allocate to the standby database also runs other processes, so my setup would be:
Production: Oracle Database 11g on Linux 5.5
Standby: Oracle Database 11g on Windows 2003
In case it matters, let me make clear that what I want is for the standby database to be updated incrementally.
I have a table MYTABLE in database mydb1, replicated via a materialized view, a materialized view log, and refresh_snapshot commands to a MYTABLE in the mydb2 database.
I would like to duplicate this table MYTABLE to a third database (mydb3, say) using the same method (materialized view and refresh_snapshot command).
Is it possible? What happens to the materialized view log when I launch a refresh_snapshot on mydb2? How does the materialized view log get truncated?
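This should work: a materialized view log supports any number of registered materialized views. A sketch with an assumed link name:
-- on mydb1 (already in place):
CREATE MATERIALIZED VIEW LOG ON mytable;
-- on the third database:
CREATE MATERIALIZED VIEW mytable
  REFRESH FAST
  AS SELECT * FROM mytable@mydb1_link;
Log rows for a change are purged only after every registered materialized view has refreshed past that change, so refreshing from mydb2 alone will not remove rows still needed by the new site; the flip side is that a site that never refreshes makes the log grow.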
There is a database DB1 with a user U1 that contains a table T1. Likewise, there is another database DB2 with a user U2 that contains a table T2. Now I want to join table T1 of database DB1 with table T2 of database DB2, and access the result from a third database, DB3, using the materialized view concept.
What should I do to access the tables of DB1 and DB2 from database DB3 using a materialized view?
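A minimal sketch, with all names, credentials, and the join column assumed: create a database link from DB3 to each database, then a materialized view over the join. A multi-table MV over remote tables generally has to use COMPLETE refresh:
-- on DB3:
CREATE DATABASE LINK db1_link CONNECT TO u1 IDENTIFIED BY pwd1 USING 'DB1';
CREATE DATABASE LINK db2_link CONNECT TO u2 IDENTIFIED BY pwd2 USING 'DB2';
CREATE MATERIALIZED VIEW mv_t1_t2
  REFRESH COMPLETE ON DEMAND
  AS SELECT a.id, a.col1, b.col2        -- columns are placeholders
       FROM u1.t1@db1_link a
       JOIN u2.t2@db2_link b ON a.id = b.id;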
I am planning to move the databases in a multi-master replication setup from HP-UX to an AIX server, using export/import to move each database. Do I need to drop the replication administrator user before I do the export/import?
I have a client who is implementing an application that uses Oracle 10g in 20 distributed locations. Each location will have its own locally managed database server. I plan to create a Disaster Recovery (DR) centre for this client in a centralised location: 20 application servers, but only one database server running 20 instances. My question: can Data Guard manage replication between the 20 single-instance databases and a single server hosting 20 instances? We designed it this way to reduce the Oracle license cost.
There are two databases, A and B. Database A is Oracle 11.2.0.2 on Linux; database B is Oracle 11.2.0.2 on a Windows XP machine. In database A, hundreds of tables are updated every 10 to 15 minutes. The developer wants to run reports against these tables, but since database A is constantly being updated, generating the reports there takes 15 to 20 minutes, so the reports should be generated in database B instead. Once a day, database B should pick up the updated data from database A so that reports can be generated in database B in less time. What would be the best way for database B to get the updated data from database A on a daily basis?
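One straightforward option, sketched with placeholder names: materialized views in database B over a link to A, refreshed once a day by a scheduler job:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_MV_REFRESH',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''RPT_ORDERS_MV'', method => ''C''); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => TRUE);
END;
/
Data Guard with a snapshot standby, or a nightly Data Pump refresh, are alternatives depending on how many of the tables the reports actually need.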
We have three Unix servers with four databases (10gR2) containing "HP Operation Management Unix" (OMU) server messages for monitoring purposes, and we now want to transfer these data to one new database on a new server for reporting purposes.
The message table in each OMU database keeps a message row until it is "acknowledged", or for a maximum of fourteen days; the row is then moved to a historic table where it stays for another three days. Data is kept for only seventeen days because of performance concerns. The new reporting database is intended to hold message data for the last 90 days.
I wonder which method to use to move/replicate data from these databases. Materialized views over database links, with views on top of the MVs? Then how do I keep rows longer than the master (source) table does, avoiding deletion when the master row is deleted?
Oracle Streams, with local capture and remote apply? How would this affect performance on the master databases? There are about 10,000 new messages in each OMU database every day. Is it possible to have four Streams connections into the reporting database?
Or should I simply use database triggers that fire after insert and update and apply the changes to the reporting database over database links?
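If the trigger route is chosen, a minimal sketch with table, column, and link names assumed:
CREATE OR REPLACE TRIGGER trg_fwd_message
AFTER INSERT ON omu_messages
FOR EACH ROW
BEGIN
  INSERT INTO messages_rpt@rptdb_link (msg_id, msg_text, created_at)
  VALUES (:NEW.msg_id, :NEW.msg_text, :NEW.created_at);
END;
/
Note the trade-off: the insert on the OMU database now depends on the reporting database and the link being available, which is the main argument for Streams or MVs instead.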
We have an Oracle database link to MySQL. We want to force a remote statement containing a GROUP BY to execute on the MySQL server, letting MySQL do the aggregation instead of Oracle.
We also tried the DRIVING_SITE hint, but it doesn't work: the query sent to MySQL over the database link didn't include the GROUP BY, and it looks like the grouping was executed on the local Oracle side.
Is there a way to force GROUP BY statements to execute on the remote database instead of the local Oracle database?
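One documented way to guarantee remote execution is DBMS_HS_PASSTHROUGH, which ships the statement text to the gateway verbatim. A sketch with link, table, and column names assumed:
SET SERVEROUTPUT ON
DECLARE
  c   BINARY_INTEGER;
  nr  INTEGER;
  cat VARCHAR2(100);
  cnt NUMBER;
BEGIN
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@mysql_link;
  DBMS_HS_PASSTHROUGH.PARSE@mysql_link(c,
    'SELECT category, COUNT(*) FROM sales GROUP BY category');
  LOOP
    nr := DBMS_HS_PASSTHROUGH.FETCH_ROW@mysql_link(c);
    EXIT WHEN nr = 0;
    DBMS_HS_PASSTHROUGH.GET_VALUE@mysql_link(c, 1, cat);
    DBMS_HS_PASSTHROUGH.GET_VALUE@mysql_link(c, 2, cnt);
    DBMS_OUTPUT.PUT_LINE(cat || ': ' || cnt);
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@mysql_link(c);
END;
/
MySQL then performs the GROUP BY itself and Oracle only fetches the aggregated rows.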
I am trying to replicate two database servers, one in Hong Kong and another in China. When I try to establish the replication, I get an error like 'ORA-04052: error occurred when looking up remote object'...
But the same setup works fine on my local network. I have tried schema replication through Enterprise Manager Grid Control.
I'm looking for solutions for real-time replication from an Oracle 8i database.
For more details:
- we have an existing database on Oracle 8i
- we want to replicate this database; the replicated database will be used by a web application
- we need the replication to be real-time (we wish to have the shortest possible lag)
For the moment I haven't found much information on the web, and it seems that replication solutions for 8i are quite limited.
Some more details:
- Management insists that the web application work with a replicated database, without interacting directly with the original one.
- An Oracle upgrade will be done, but only within some years, so we are trying to find a solution quickly with 8i.
I would like to know which replication method is fastest and the best approach: we need two schemas to be moved/replicated to a new reporting database. It appears that data will flow one way only. Should we proceed with materialized view replication? Please also clarify Oracle Streams versus Advanced Replication, and what factors decide the choice of replication method.
I want to set up Advanced Replication for 3 master sites (multi-master). I created 3 master sites named orc1, orc2, and orc3, and followed the instructions in the Oracle Replication Management API book. I created 2 tables (test1, test2) in the hr schema at all 3 master sites, with the same data. Then I performed the following steps:
1-CONNECT repadmin/repadmin@orc1
2-Create the master group named hr_test_repg
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPGROUP(gname => 'hr_test_repg');
END;
/
4- Add tables test1 and test2 to the group
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname => 'hr_test_repg',
    type  => 'TABLE',
    oname => 'test1',
[code]....
I could run DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT for test2, but not for test1; it produces this error:
ERROR at line 1:
ORA-23309: object hr.test1 of type TABLE exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2552
ORA-06512: at "SYS.DBMS_REPCAT", line 562
ORA-06512: at line 2
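ORA-23309 says hr.test1 is already registered as a replication object, typically left over from an earlier attempt. A hedged sketch of clearing the stale registration before re-adding it, assuming the master group is quiesced:
BEGIN
  DBMS_REPCAT.DROP_MASTER_REPOBJECT(
    sname        => 'hr',
    oname        => 'test1',
    type         => 'TABLE',
    drop_objects => FALSE);   -- keep the underlying table and its data
END;
/
After that, CREATE_MASTER_REPOBJECT and GENERATE_REPLICATION_SUPPORT for test1 should go through.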