Data Guard :: ORA-19909 - Datafile 1 Belongs To Orphan Incarnation?
Jul 22, 2013
I have flashed back my primary database to an older SCN. Since then I have been receiving the following errors on my standby database when I run media recovery:
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '/u01/app/sbydb11g/system01.dbf'

PRIMARY DATABASE

RMAN> list incarnation;

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- ------- ---------- ----------
1       1       ORCL     1343967574       PARENT  1          13-AUG-09
2       2       ORCL     1343967574       PARENT  754488     23-MAY-13
3       3       ORCL     1343967574       PARENT  1713927    14-JUN-13
4
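One possible way out, assuming Flashback Database is enabled on the standby, is to flash the standby back to a point earlier than the SCN the primary was flashed back to and then restart managed recovery; the SCN below is purely illustrative and would have to be derived from your own incarnation/flashback information. If flashback is not enabled on the standby, the usual alternative is to rebuild it from a fresh backup of the primary.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO SCN 1713900;   -- hypothetical SCN, before the primary's flashback target
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;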
A datafile was deleted from a partitioned table which has 31 partitions; that datafile contained the data for the partitions from day 25 to day 31. I do not have any backups of this database, but I have all the dumps of that table. What is the solution to get those 6 partitions' data back?
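A rough sketch of one possible approach, assuming the dumps were taken with Data Pump, the lost partitions sit at the high end of the range (so ADD PARTITION can recreate them), and the damaged file can be taken offline; every object, path and credential name below is hypothetical:

SQL> ALTER DATABASE DATAFILE '/u01/app/oradata/sales_d25_31.dbf' OFFLINE FOR DROP;   -- hypothetical path of the lost file
SQL> ALTER TABLE sales DROP PARTITION p_day25;                                       -- drop every affected partition first
SQL> ALTER TABLE sales ADD PARTITION p_day25 VALUES LESS THAN (26) TABLESPACE users; -- then re-add them in ascending order, in a healthy tablespace

$ impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=sales.dmp TABLES=sales:p_day25 TABLE_EXISTS_ACTION=APPEND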
Until now I was pretty sure that standby redo logs are applied to the database without any delay, but it looks like the process is different. I created a table on the primary server and added some records to it.
Next I switched to the standby and opened the standby redo logs (with a hex editor) to look for the "create table" and "insert" commands, and I found them. So this proves that the standby redo logs are being filled by the primary database.
Next I cancelled the recovery process and opened the database in read-only mode, and there were no such records in the standby database; moreover, the table did not exist on the standby at all. So I started to get confused, because I expected that this table and its contents would already be on the standby database.
Next I started the recovery process again on the standby and did some log switches on the primary server. After that I went back to the standby, cancelled the recovery process and opened the database in read-only mode. This time the table and its contents were there.
so my question is:
Should standby redo logs be applied to the database online, or are they only applied after promoting the standby database to primary? It looks like only the contents of the archived logs are applied to the standby database. Is that correct?
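For what it's worth, on 10g/11g the default managed recovery applies redo only as the archived logs arrive; applying directly from the standby redo logs requires real-time apply, which is requested with the USING CURRENT LOGFILE clause (and needs standby redo logs configured on the standby):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;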
I use the following script to duplicate a target database to an auxiliary instance. This works fine once.
However, if I run it a second time it will create a new set of datafiles, leaving the original files in place taking up space.
Is there something I can put in the script that will stop this from happening?
I know I could probably put a "SET NEWNAME FOR DATAFILE" clause for each datafile in the target database, but this makes the script overly complicated and I would like to keep it as generic as possible.
DUPLICATE TARGET DATABASE TO dupdb
  UNTIL TIME "TO_DATE('07/06/2011 07:25','DD/MM/YYYY HH24:MI')"
  LOGFILE
    GROUP 1 ('+DATA') SIZE 50M,
    GROUP 2 ('+DATA') SIZE 50M,
    GROUP 3 ('+DATA') SIZE 50M,
    GROUP 4 ('+DATA') SIZE 50M;
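One possible way to keep the script generic, assuming the previous copy is disposable, is to drop the existing duplicate before re-running DUPLICATE, so the old datafiles are removed rather than left orphaned. A hedged sketch, run against the old dupdb instance before the duplicate step:

$ rman target /                            -- environment pointing at the old duplicate instance
RMAN> STARTUP FORCE MOUNT DBA;             -- mount in restricted mode, as DROP DATABASE requires
RMAN> DROP DATABASE INCLUDING BACKUPS NOPROMPT;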
We have physical Data Guard configured, version 10.2.0.4. We need to upgrade the primary & standby databases to 11g R2. Can we perform a rolling upgrade?
I have to display the count of employees that belong to different categories.
This is the situation: there is a category table CATEGORY with three columns (PK, NAME, TREEPOSITION) and we have categories A, B, C. These three categories can further have sub-categories, so the TREEPOSITION of a sub-category is its root category's position followed by a '_' symbol.
Now I have a table for the employees with 3 columns (PK, NAME, CATEGORY_ID), where employees.category_id = category.pk. So I want to calculate the number of employees in each category or sub-category.
Since the number of categories will be large and each will have a different name, going through names is a bad option; what's left is grouping by the TREEPOSITION. The problem is that I can't use LIKE the way I would use IN for the TREEPOSITION.
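A minimal sketch of one way to do it, assuming TREEPOSITION stores paths like 'A' and 'A_A1' and that the employees table is called EMPLOYEES (the sample values and that table name are assumptions): grouping on the leading portion of TREEPOSITION avoids both the names and long IN/LIKE lists.

-- employees per individual category or sub-category
SELECT c.treeposition, c.name, COUNT(e.pk) AS emp_count
FROM   category c
LEFT JOIN employees e ON e.category_id = c.pk
GROUP BY c.treeposition, c.name
ORDER BY c.treeposition;

-- employees rolled up to the root category (the part before the first '_')
SELECT SUBSTR(c.treeposition, 1, INSTR(c.treeposition || '_', '_') - 1) AS root_category,
       COUNT(e.pk) AS emp_count
FROM   category c
LEFT JOIN employees e ON e.category_id = c.pk
GROUP BY SUBSTR(c.treeposition, 1, INSTR(c.treeposition || '_', '_') - 1)
ORDER BY root_category;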
In my production DB, 5 datafiles were created in the same tablespace, each sized at 25GB. Data is spread across all the datafiles, but the total data is just 5GB. I want to move the data from the 5 datafiles into a single datafile or a couple of datafiles.
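A rough sketch of one common approach, assuming the segments can be moved during a maintenance window (all object and path names below are hypothetical): create a smaller tablespace, move the segments into it, then drop the old tablespace together with its five datafiles.

SQL> CREATE TABLESPACE users_new DATAFILE '/u01/oradata/prod/users_new01.dbf' SIZE 6G;
SQL> ALTER TABLE scott.emp MOVE TABLESPACE users_new;        -- repeat for each table in the old tablespace
SQL> ALTER INDEX scott.emp_pk REBUILD TABLESPACE users_new;  -- indexes become UNUSABLE after a table move
SQL> DROP TABLESPACE users INCLUDING CONTENTS AND DATAFILES;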
I am using a Perl script to dynamically generate the control file. If I have data in the control file as well as in the datafile, how would I write the control file in that case? Is the one below correct?
load data
INFILE '*'
INFILE '/export/home/test/test.csv'
INSERT INTO TABLE EMP
fields terminated by "," optionally enclosed by '"'
trailing nullcols
( empno, empname, sal, deptno )
[code]....
Is there any way that, if my control file contains half of the data and my data file contains the other half, I can club this data into a logical record in the control file to populate the DB?
My exact second requirement is: my DB table contains 5 columns, and for 1 column the data is common (CountryName), which I have to pass to the control file dynamically, while the .csv file contains the data for the other four columns. How can I combine these in the control file and populate the DB?
So if the DB table contains CountryName, empid, ename, sal and dept, I will get the CountryName into the control file and the CSV contains the data for empid, ename, sal and dept. How would I combine these into a logical record and populate the DB?
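A minimal sketch, assuming the common value can simply be written into the generated control file: SQL*Loader's CONSTANT keyword lets the Perl script inject the country name while the CSV supplies the remaining four columns (the value 'INDIA' below is purely illustrative).

load data
INFILE '/export/home/test/test.csv'
INSERT INTO TABLE EMP
fields terminated by "," optionally enclosed by '"'
trailing nullcols
(
  countryname CONSTANT 'INDIA',  -- substituted by the Perl script at generation time
  empid,
  ename,
  sal,
  dept
)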
I have configured a physical standby on my local system. To check log shipping I created a table on the primary DB; when I tried to check it on the standby, it says the table does not exist. Below are the primary & standby alert log entries.
Primary alert log
Fatal NI connect error 12514, connecting to:
 (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.98)(PORT=1522))
  (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=STAND)(SERVER=dedicated)
   (CID=(PROGRAM=d:\oracle11g\app\administrator\product\11.1.0\db_1\bin\ORACLE.EXE)
    (HOST=A960M)(USER=SYSTEM))(SERVER=dedicated)))

 VERSION INFORMATION:
  TNS for 64-bit Windows: Version 11.1.0.6.0 - Production
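For context, TNS-12514 in the primary's alert log usually means the standby listener does not know the STAND service, so redo transport cannot connect. One hedged fix, assuming the standby listener runs from the Oracle home shown in the log (that path is partly reconstructed, so treat it as an assumption), is a static registration in the standby's listener.ora:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = STAND)
      (ORACLE_HOME = d:\oracle11g\app\administrator\product\11.1.0\db_1)
      (SID_NAME = STAND)
    )
  )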
Statspack has been configured for Active Data Guard on the primary database. We got a spike of buffer busy waits for about 5 minutes in Active Data Guard, and it was worsening the application SQL response times during that 5-minute window. Below is what I got from the statspack report for one hour:
Snapshot       Snap Id     Snap Time      Sessions Curs/Sess Comment
~~~~~~~~    ---------- ------------------ -------- --------- -------------------
Begin Snap:      18611 21-Feb-13 22:00:02      236       2.2
  End Snap:      18613 21-Feb-13 23:00:02      237       2.1
   Elapsed:               60.00 (mins)
[code]...
Why could there be a sudden spike of demand on UNDO data in Active Data Guard?
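Not an answer, but a small sketch of how one might narrow it down: the readable standby still exposes the V$ views, so the block class behind the buffer busy waits can be checked directly on the Active Data Guard instance, ideally during or shortly after the spike.

SQL> SELECT class, count, time
     FROM   v$waitstat
     ORDER BY time DESC;

SQL> SELECT event, total_waits, time_waited
     FROM   v$system_event
     WHERE  event = 'buffer busy waits';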
I am working in a bank as a system consultant. I have a SAN storage area and Oracle set up as below.
SAN 1
This storage holds the data files of the Oracle tablespaces.
SAN 2
SAN1 mirrors the data files of the Oracle tablespaces to SAN2.
1. Can I rely on real-time data recovery from SAN2?
2. If SAN1's data files are corrupted, will the SAN2 data files be corrupted as well?
3. If SAN2 is corrupted, what Oracle features can be used to have uncorrupted data?
I did everything as written, but when I run SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
and then go and look in the standby database alert log file, this is what's written:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\SYSTEM01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
[code]....
The strange thing is that it resolves the data file to the primary database's path on drive F and tries to open it there, but I don't understand what the reason for this could be, especially since I'm running this command while the primary database is shut down!
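A hedged guess at the cause, with an illustrative fix: if the standby's control file still carries the primary-side file names (for example because DB_FILE_NAME_CONVERT was not set when the standby was created), the standby keeps looking for the files under the primary's path. All paths below are hypothetical:

SQL> ALTER SYSTEM SET db_file_name_convert='F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\','D:\ORACLE\PRODUCT\10.2.0\ORADATA\STBY\' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET log_file_name_convert='F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\','D:\ORACLE\PRODUCT\10.2.0\ORADATA\STBY\' SCOPE=SPFILE;
-- files already registered in the standby control file can be repointed while mounted
-- (this may require STANDBY_FILE_MANAGEMENT=MANUAL on the standby):
SQL> ALTER DATABASE RENAME FILE 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\SYSTEM01.DBF' TO 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\STBY\SYSTEM01.DBF';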
I am configuring a logical standby online. When I first executed DBMS_LOGSTDBY.BUILD:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
it was blocked by another session, so I killed the holding session, but that did not work. Then I cancelled this step and executed it again. The error is:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
BEGIN DBMS_LOGSTDBY.BUILD; END;

*
ERROR at line 1:
ORA-01354: Supplemental log data must be added to run this command
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3669
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3755
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 12
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 370
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 157
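For reference, ORA-01354 during DBMS_LOGSTDBY.BUILD generally points to supplemental logging not being in place on the primary; a quick check and the command normally used to enable it are sketched below (run on the primary):

SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui
     FROM   v$database;

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;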
I have set up a cross-platform (Microsoft Windows IA (32-bit) -> Linux x86 64-bit) Data Guard configuration and it worked fine. Then I did a switchover (which again worked) and found out the data is not getting replicated at all. I checked the datafiles visible from the new primary database and found they are in the Windows format, as below:
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSTEM01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSAUX01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\UNDOTBS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\USERS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\RMAN\RMAN_TS01.DBF
and physically they were created at '/home/app/oracle/product/11.2.0/db_1/dbs/' and as
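A hedged sketch of the usual clean-up in this situation (all target paths are hypothetical, and the old names must match what V$DATAFILE actually shows on your system): move the physical files to proper locations on the Linux host at OS level, then rename them in the control file while the database is mounted.

SQL> SHUTDOWN IMMEDIATE
$  mv <current file under /home/app/oracle/product/11.2.0/db_1/dbs/> /u01/oradata/MFS/system01.dbf   -- repeat per datafile
SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE 'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSTEM01.DBF' TO '/u01/oradata/MFS/system01.dbf';
SQL> ALTER DATABASE OPEN;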
1) The SCN on the standby differs from the primary (I checked, about a one-day difference). How do I make the SCNs the same?
2) I created a table on the primary and it is not reflected on the standby (below I have pasted the alert log entries):
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Errors in file d:\oracle11g\app\administrator\diag\rdbms\stand\stand\trace\stand_dbw0_6916.trc:
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: 'D:\ORACLE11G\APP\ADMINISTRATOR\ORADATA\STAND\SYSAUX01.DBF'
[code]....
3) When I try to open the standby database in read-only mode, it gives the error below:
ERROR at line 1:
ORA-16004: backup database requires recovery
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'D:\ORACLE11G\APP\ADMINISTRATOR\ORADATA\STAND\SYSTEM01.DBF'
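A small sketch of the first checks worth making in this state: confirm the files listed in the standby control file actually exist at those paths on the standby host, confirm managed recovery is running, and compare SCNs on both sides; the SCN gap only closes while redo is being applied.

SQL> SELECT name FROM v$datafile;                          -- do these paths exist on the standby host?
SQL> SELECT process, status, sequence# FROM v$managed_standby;
SQL> SELECT current_scn FROM v$database;                   -- run on both primary and standby
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;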
Our organization has recently decided to go for a storage metro cluster solution for disaster recovery. In a Data Guard environment we normally measure how much archive log is being generated and, based on that value, calculate the required bandwidth.
For a storage metro cluster, we need to find out how many blocks are changing in our primary database, since the same rate of change would apply on the DR cluster. Now I need to provide an estimate of how much change is happening in my system. How do I calculate the rate of change?
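Two rough, hedged ways to estimate it from the existing primary: the redo volume per hour (from V$ARCHIVED_LOG) and the 'db block changes' statistic (from V$SYSSTAT). Neither is an exact block-level replication figure, but both give a usable order of magnitude.

-- redo generated per hour, in MB
SQL> SELECT TRUNC(completion_time,'HH24') AS hour,
            ROUND(SUM(blocks*block_size)/1024/1024) AS redo_mb
     FROM   v$archived_log
     GROUP BY TRUNC(completion_time,'HH24')
     ORDER BY 1;

-- cumulative count of changed blocks since instance startup
SQL> SELECT name, value FROM v$sysstat WHERE name = 'db block changes';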
I'm using a mixture of "Oracle Data Guard Concepts and Administration 11g Release 1 (11.1)" and the bulletin "MAA - Creating a RAC Physical Standby for a RAC Primary 10g" (on Oracle Support), but neither is comprehensive for what I'm trying to do.
How do I go about setting up Data Guard Broker in a RAC Data Guard environment?
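For illustration only (the db_unique_name values prim and stby are hypothetical): the broker commands themselves are the same on RAC as on single instance, issued through DGMGRL once dg_broker_start is set on every instance; RAC mainly changes where the broker configuration files should live (shared storage reachable by all instances).

SQL>    ALTER SYSTEM SET dg_broker_start=TRUE SCOPE=BOTH SID='*';

DGMGRL> CREATE CONFIGURATION 'dgconf' AS PRIMARY DATABASE IS 'prim' CONNECT IDENTIFIER IS prim;
DGMGRL> ADD DATABASE 'stby' AS CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;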
I am very new to Data Guard. I have a primary and a physical standby database. If the primary crashes, are the users directly switched to the physical standby (is it automated), or does the DBA have to perform the failover or switchover manually? Is it the same concept as TAF in RAC? How can the DBA know that the users are disconnected from the database? Please explain with some steps.
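A brief, hedged illustration of the two cases using the broker (the database name 'stby' is hypothetical): by default a failover is a manual DBA action; it only becomes automatic if Fast-Start Failover and an observer process are configured.

DGMGRL> FAILOVER TO 'stby';            -- manual failover, issued by the DBA

DGMGRL> ENABLE FAST_START FAILOVER;    -- automatic failover, requires a running observer
DGMGRL> START OBSERVER;                -- normally run on a third host; the command blocks while observing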
Log shipping stopped logging/shipping into the standby database on one of our Oracle Data Guard servers about two months ago, and no change was carried out at that time. The Data Guard configuration has only one standby database. I have been managing the log shipping and recovery manually since then.
I did an upgrade from 10.2.0.1 to 10.2.0.4 on the secondary database; the primary database version is 10.2.0.4. I took a cold backup from the primary to the secondary database, and when I try to create the control file on the secondary I am getting: ORA-01130: database file version 10.2.0.3.0 incompatible with ORACLE version 10.2.0.1.0.
Oracle Database 10g Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I have to implement a physical standby using the same SID. Which parameters are required on the primary and the standby? Also, what entries are required in the TNS file? Recently we faced a hardware failure.
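A minimal, hedged sketch (the db_unique_name values prim and stby, host names and ports are all hypothetical): even with the same SID on both servers, each side needs its own DB_UNIQUE_NAME, the usual redo transport parameters, and a tnsnames.ora alias pointing at the other side.

SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,stby)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby' SCOPE=BOTH;
SQL> ALTER SYSTEM SET fal_server='stby' SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management='AUTO' SCOPE=BOTH;

-- tnsnames.ora on the primary host (the standby host gets a matching PRIM alias)
STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = stby))
  )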