Replication :: Using Goldengate To Extract Data From ASM - 10G
Feb 17, 2011
We are getting the following error when trying to extract data from ASM.
GGS ERROR 500 Oracle GoldenGate Capture for Oracle, ext_1.prm: Getting attributes for ASM file +DATA/testgg/onlinelog/group_1.257.742844671, SQL <BEGIN dbms_diskgroup.getfileattr('+DATA/testgg/onlinelog/group_1.257.742844671', :filetype, :filesize, :lblksize); END;>: (6550)
ORA-06550: line 1, column 7: PLS-00201: identifier 'DBMS_DISKGROUP.GETFILEATTR' must be declared
ORA-06550: line 1, column 7: PL/SQL: Statement ignored
Not able to establish initial position for begin time 2011-02-16 16:42:05.
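For context, DBMS_DISKGROUP exists only in the ASM instance, so this error often indicates that the Extract's ASM connection is pointing at the database instance instead of ASM, or that the configured ASM user lacks the required SYSDBA access. A minimal classic Extract parameter sketch, assuming a hypothetical TNS alias ASM1 for the ASM instance and placeholder credentials (adjust to your environment):

EXTRACT ext_1
USERID ggadmin, PASSWORD ggadmin_pwd
-- connect to the ASM instance as SYS to read the redo logs stored in ASM
TRANLOGOPTIONS ASMUSER SYS@ASM1, ASMPASSWORD asm_pwd
EXTTRAIL ./dirdat/aa
TABLE testgg.*;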
I am implementing GoldenGate 11gR2 for a 12c database, but I am getting the error below. My question is: why does GoldenGate need a specific database package, given that it is meant to support both homogeneous and heterogeneous replication?
/u01/12c_database_software/goldengate/dirtmp.
2013-08-30 05:28:44 INFO OGG-01513 Oracle GoldenGate Capture for Oracle, ext1.prm: Positioning to Sequence 66, RBA 25067536, SCN 0.0.
2013-08-30 05:28:44 ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, ext1.prm: ORA-06550: line 1, column 7: PLS-00201: identifier 'SYS.DBMS_INTERNAL_CLKM' must be declared ORA-06550: line 1, column 7: PL/SQL: Statement ignored.
2013-08-30 05:28:44 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
Database details
----------------
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM' and object_type in ('PACKAGE');
no rows selected
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM';
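As far as I know, SYS.DBMS_INTERNAL_CLKM is the package classic capture calls to handle TDE (encrypted) data, which is why Extract looks for it even in an otherwise ordinary configuration. A commonly cited fix, assuming the standard script location under the Oracle Home (verify the script name against your release and My Oracle Support before running it), is to create the package as SYS and grant execute on it to the GoldenGate user (ggadmin here is a placeholder):

SQL> @?/rdbms/admin/prvtclkm.plb
SQL> GRANT EXECUTE ON SYS.DBMS_INTERNAL_CLKM TO ggadmin;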
I have a question about GG Sequence Replication and Triggers. My main database, which I would like to replicate on another server, is highly dependent on sequences for assigning surrogate keys to every row in every table in the application. I know that I need to add Sequence support to my source database (plus supplemental logging, etc), but I'm curious about the target database.
I do not anticipate allowing Read/Write access to this database - we are migrating from 10.2.0.4 (source) to 11.2.0.3 (target) on a new platform, and I want to keep the 11g database up-to-date with our production data until it is time to begin the actual conversion of our application. My thinking is that if I use the SUPPRESSTRIGGERS dboption in my Replicat session, this should take care of the use of the Sequences for assigning the surrogate key values, and the data should add to the tables normally without any intervention by the sequences/trigger combination. I know I will have to manually "correct" the sequences on my 11.2.0.3 database whenever I want to open this database up for use, but I have a script for this ready to go.
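If it helps, a minimal Replicat parameter sketch using SUPPRESSTRIGGERS, with placeholder process, user, and schema names (your mappings and credentials will differ):

REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin_pwd
-- suppress firing of target-side triggers so the surrogate keys arrive exactly as captured from the source
DBOPTIONS SUPPRESSTRIGGERS
ASSUMETARGETDEFS
MAP app_schema.*, TARGET app_schema.*;

As I recall, SUPPRESSTRIGGERS requires a target of 10.2.0.5 or 11.2.0.2 and later, so an 11.2.0.3 target should qualify; still worth confirming against the documentation for your GoldenGate version.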
Also, in my source database, I am using Oracle Context indexes for generic name searching - this feature creates a number of DR$ named tables in the main application schema that I am replicating (approximately 50 of them). I am assuming that I should EXCLUDE these tables from the replication, as the context indexing should automatically update them as changes to the underlying data are applied via the replication of the indexed tables.
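Excluding the DR$ tables from capture is the usual approach, since Oracle Text maintains them independently on each side as the indexed tables change. A sketch of the Extract parameter fragment, with the schema name as a placeholder (verify the wildcard only matches the index-owned tables):

-- exclude the Oracle Text internal tables before the wildcard TABLE statement
TABLEEXCLUDE app_schema.DR$*;
TABLE app_schema.*;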
Using GoldenGate to replicate a database (encrypted tablespace, Oracle 11.2.0.1, Windows 2008) to a different database server (no encrypted tablespace, Oracle 11.1, Linux).
GoldenGate reports the following error: ERROR OGG-01771 DBOPTIONS DECRYPTPASSWORD must be used to decrypt TSE data. Use TRANLOGOPTION IGNORETSERECORDS if you do not need to capture any tables that are in an encrypted tablespace.
How do I use it?
GGSCI> ENCRYPT PASSWORD "shared key"
Add an entry to the Extract parameter file to decrypt the new shared password.
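For what it's worth, the documented flow for capturing from an encrypted tablespace is roughly: set up the shared secret between the database wallet and GoldenGate, encrypt that shared secret in GGSCI, and paste the encrypted value into the Extract parameter file. A sketch with placeholder values (the real shared secret comes from your TDE/wallet setup, and the encrypted string is whatever GGSCI prints back):

GGSCI> ENCRYPT PASSWORD my_shared_secret AES128 ENCRYPTKEY DEFAULT

Then, in the Extract parameter file:

DBOPTIONS DECRYPTPASSWORD AES128 <encrypted_value_from_ggsci>

Alternatively, if none of the captured tables live in the encrypted tablespace, TRANLOGOPTIONS IGNORETSERECORDS (as the error message suggests) avoids the decryption setup altogether.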
I'm an SAP consultant working with SQL on NT platforms. This is the first conversion from Oracle that I have done. My client has provided us with a "cold" backup of the Oracle database on a hard drive formatted in Unix; I have the partition mounted and I'm able to view the files. I have the ORDATA folder with all the .DBF files.
Q: How do I extract the data from the .DBF files? I need to export it to something workable with SQL.
Original database was on Unix, I'm operating on Windows platform.
I'm having a problem writing an appropriate query for a report in my web application. I need it to extract data from three related tables:
CAR (PK CAR_ID INT NOT NULL, TYPE VARCHAR NOT NULL)
REPAIR_CENTER (PK REPAIR_CENTER_ID INT NOT NULL, NAME VARCHAR NOT NULL)
[code]...
I need the report to display only available cars. Available cars must have these characteristics:
1. If the CAR_REPAIR table is empty, display all entries from the CAR table.
2. If a car has multiple entries in the CAR_REPAIR table, display only the latest DATE_RETURN if it is lower than today's date (SYSDATE); otherwise don't display that car.
3. Don't display cars that are in the CAR_REPAIR table and have a DATE_RETURN value of NULL.
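Since the CAR_REPAIR definition is cut off above, the following is only a sketch, assuming it has at least CAR_ID and DATE_RETURN columns; the idea is to aggregate repairs per car and keep cars that either have no repair rows at all or whose repairs have all been returned before today:

SELECT c.car_id, c.type
FROM   car c
LEFT JOIN (
         SELECT car_id,
                MAX(date_return) AS last_return,
                COUNT(CASE WHEN date_return IS NULL THEN 1 END) AS open_repairs
         FROM   car_repair
         GROUP BY car_id
       ) r ON r.car_id = c.car_id
WHERE  r.car_id IS NULL                                   -- no repair history (also covers an empty CAR_REPAIR)
   OR  (r.open_repairs = 0 AND r.last_return < SYSDATE);  -- every repair returned, latest return before today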
Is there a way I can extract the data from Oracle Express 6 (OLAP data) into Excel or any other format, so that it can be loaded into MS SQL or a normal Oracle database?
We are facing a project where it is mandatory that the migration (from 9i to 11g) happens without any downtime. We thought about using GoldenGate to do this migration, but I would like to hear from somebody who has already done this kind of migration (I have never used GoldenGate before). The basic steps for such a migration would be:
1) Install the GoldenGate software on both source and target.
2) Export only the metadata (the structure of the tables, for example) from source to target (here is one point of doubt of mine: can this export only be done using exp/imp?).
3) Perform the initial load from source to target (here I have another doubt: is it possible to perform an initial load of a whole database?).
4) Configure Manager, Extract and Replicat to perform the migration with the source database open in read-write.
With the steps above, would I be able to perform a migration without downtime? What other considerations do you have?
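Treat the following only as a rough sketch of the change-capture plumbing for step 4, with placeholder process and trail names; the metadata copy (step 2) and the initial load (step 3) still have to happen separately, and HANDLECOLLISIONS is commonly enabled on the Replicat during the catch-up window.

On the source (capture and pump):

GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
GGSCI> ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/aa
GGSCI> ADD RMTTRAIL ./dirdat/bb, EXTRACT pump1

On the target (apply):

GGSCI> ADD REPLICAT rep1, EXTTRAIL ./dirdat/bb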
How do I extract the data from XML using the XSD file? Files attached.
Explanation: first check the EmailMessage tag from order_conf.xml against Email.xml (<xsd:element name="EmailMessage">); if it exists, go to the next node.
-> EmailMessage (the tag exists in the order XML file)
-> next, <ns1:emailNotificationype>: this tag should follow under the EmailMessage tag (<xsd:element ref="emailNotificationype">) in Email.xml
-> next, <ns1:orderNotification>: check this tag against <xsd:element name="orderNotification"> in Email.xml
-> next, <ns1:templateFormatInfo>: it should follow under <xsd:element name="orderNotification"> in Email.xml
-> next, <ns1:templateFormatInfo>: it should follow these tags: <xsd:element name="templateFormatInfo">, <xsd:element ref="templatecode"/>, <xsd:element ref="templateversion"/>
I have a table which has two columns: a date on an hourly basis and a response time. I want to pull the previous day's data on an hourly basis with the corresponding response time. The data is loaded into the table every midnight.
E.g.: today's date is 23/10/2012; I want to pull data from 22/10/12 00 to 22/10/12 23.
The below query is pulling the date as required but I am not able to pull the response time.
with a as (select min(trunc(lhour)) as mindate, max(trunc(lhour)) as maxdate from AVG_HR)
SELECT to_char(maxdate + (level/25), 'dd/mm/yyyy hh24') as dates
FROM a
CONNECT BY LEVEL <= (1)*24;
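One alternative sketch that pulls the response time directly, assuming the table is AVG_HR with an hourly DATE column LHOUR and a column named RESPONSE_TIME (the real column name may differ):

SELECT to_char(lhour, 'dd/mm/yyyy hh24') AS dates,
       response_time
FROM   avg_hr
WHERE  lhour >= trunc(SYSDATE) - 1   -- start of yesterday
AND    lhour <  trunc(SYSDATE)       -- up to, but not including, today
ORDER  BY lhour;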
I have a requirement to extract the data from a table using the UTL_FILE utilities.
My problem is this: say I have a table t1 with columns C1, C2, C3, C4, C5. This table t1 gets loaded every day, and I need to pick up only the data that has changed or been inserted in the last load. How can I achieve this? There is no timestamp in this table.
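One workable pattern when there is no timestamp is to keep a snapshot copy of the previous load and diff against it with MINUS, then write the delta out with UTL_FILE. This is only a sketch, assuming a hypothetical snapshot table T1_PREV with the same columns as T1 and an existing directory object DATA_DIR:

DECLARE
  fh UTL_FILE.FILE_TYPE;
BEGIN
  fh := UTL_FILE.FOPEN('DATA_DIR', 't1_delta.csv', 'w');
  -- rows present in the current load but not in the previous snapshot (new or changed)
  FOR r IN (SELECT c1, c2, c3, c4, c5 FROM t1
            MINUS
            SELECT c1, c2, c3, c4, c5 FROM t1_prev) LOOP
    UTL_FILE.PUT_LINE(fh, r.c1 || ',' || r.c2 || ',' || r.c3 || ',' || r.c4 || ',' || r.c5);
  END LOOP;
  UTL_FILE.FCLOSE(fh);
  -- refresh the snapshot so the next run diffs against today's data
  EXECUTE IMMEDIATE 'TRUNCATE TABLE t1_prev';
  INSERT INTO t1_prev SELECT * FROM t1;
  COMMIT;
END;
/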
select t1.c1 as gr1, t2.c1 as gr2, t1.c2
from test_data t1, test_data t2
where t1.c1 <> t2.c1
and t1.c2 = t2.c2
and (select count(*) from test_data t3 where t3.c1 = t1.c1) = (select count(*) from test_data t4 where t4.c1 = t2.c1)
order by 1 asc, 2 asc
but I can't find a way to re-filter to group the data as expected. The idea is to find subsets and show the set of data and the values in column c1.
Is it possible for Access to extract data from an Oracle database and upload it directly?
Currently we have a business process where data is extracted by scheduled queries (30+) to Excel spreadsheets, then manually edited to remove heading lines and imported into an Access database. I see an opportunity to automate a time-consuming manual activity by having the Access db extract the data and upload it directly.
In our application, we allow users to upload data using an Excel sheet in the UI. We are using a PHP script in the UI and SQL*Loader to load the data from the Excel sheet into temp_table.
The temp_table has a primary key.
My question is: is there any way to automatically assign a batch ID to every upload in that table, so that we can easily extract the data by batch ID? We are using Oracle 11g.
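One simple pattern, sketched here with placeholder names and on the assumption that temp_table has (or can be given) a BATCH_ID column: have the PHP script grab the next value of a sequence before each load and write it into the generated SQL*Loader control file as a CONSTANT, so every row of that upload carries the same batch ID.

CREATE SEQUENCE upload_batch_seq;
-- fetched once per upload from the PHP script:
SELECT upload_batch_seq.NEXTVAL FROM dual;

Generated control file (the value 42 stands in for whatever NEXTVAL returned):

LOAD DATA
INFILE 'upload.csv'
APPEND
INTO TABLE temp_table
FIELDS TERMINATED BY ','
( col1, col2, col3,
  batch_id CONSTANT '42' )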
Suppose we have one main database on one side (say A) and an external system on the other side (say C). Midway, we have one more staging database (say B). Let's say we have one record related to bank information in both database A and database B, and the following activities are performed.
1) "Bank Name" column in database B is updated by the external system C. When this Bank Name is updated in database B by external system C, we need to update the value of this field in database A.
2) Suppose a couple of days later, the bank phone number of the same bank record is updated in the main database A, and the same update needs to be reflected in the staging database B.
How can we take care of both these data-synchronization activities? What are the different approaches we can take? FYI, we are on Oracle 10g Release 2 and Windows.
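There are several options on 10g (materialized views over database links, Streams, trigger-based propagation, or a replication product), so the following is only an illustration of the trigger-over-a-database-link idea for one direction, with hypothetical table, column, and link names:

CREATE OR REPLACE TRIGGER trg_bank_name_sync
AFTER UPDATE OF bank_name ON bank_info   -- created on database B
FOR EACH ROW
BEGIN
  -- push the new name to database A over a pre-created database link
  UPDATE bank_info@link_to_a
  SET    bank_name = :NEW.bank_name
  WHERE  bank_id   = :NEW.bank_id;
END;
/

The same idea mirrored on database A would handle the phone-number direction, though coupling the two databases this tightly has availability implications worth weighing.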
We have Development, Staging/UAT (installed on XX.XX.XX.10) and Production (installed on XX.XX.XX.20) environments respectively. I have queries regarding getting the data from the Production environment into the Staging environment. The overall PROD database size is around 250 GB.
STAGING DATABASE DETAILS
SID: STG_DB
Staging Schema Name: schema_UAT
Replication Schema Name: schema_PrdReplica (this is the schema where the production data gets loaded daily)
PROD DATABASE DETAILS
SID: PROD_DB
Prod Schema Name: schema_PROD
What is happening now:
----------------------------
There is a script (stored procedure) on the staging (STG_DB.schema_PrdReplica) environment which executes nightly and does the replication. Currently we use DBMS_DATAPUMP to get the ENTIRE data and metadata (everything) from Production to Staging. It is taking significantly more time: approximately 8 hours to replicate everything from PROD_DB.schema_PROD to STG_DB.schema_PrdReplica.
What I am expecting:
-----------------------------
I want to reduce the replication time.
I have heard about Level 0 (full backup) and Level 1 (incremental cumulative) backups in RMAN. I am planning to take a full backup (Level 0) of PROD_DB.schema_PROD on Sunday and restore it to STG_DB.schema_PrdReplica immediately. On weekdays (Mon-Fri) I will take a Level 1 (incremental cumulative) backup and restore that to STG_DB.schema_PrdReplica.
I am assuming that by doing so, the overall replication time will be reduced. How can I implement this with a script, given that the two databases are on different machines?
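A caution worth checking before scripting this: RMAN incremental backups operate at the database, tablespace, or datafile level, not per schema, so restoring only schema_PROD into a differently named schema (schema_PrdReplica) is not something RMAN does directly; schema-level refreshes are usually done with Data Pump (possibly with REMAP_SCHEMA) or transportable tablespaces. If a whole-database approach is acceptable, the backup side would look roughly like this, with the weekend/weekday split handled by whichever scheduler runs the script:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'sunday_l0';
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE TAG 'weekday_l1';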