Data Guard :: ORA-01157 / Cannot Identify / Lock Data File 1
Jul 23, 2010
I did everything as written, but when I run *SQL> alter database recover managed standby database disconnect from session;*
and then look in the standby database alert log file, this is what is written:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\SYSTEM01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
[code]....
The strange thing is that it resolves the primary database's path on drive F: and tries to open the file there. I don't understand what the reason for this could be, especially since I am running this command while the primary database is shut down!
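A likely cause is that the standby control file still records the primary's datafile paths, and that F:\ directory does not exist on the standby host. A minimal sketch of one common fix, run on the standby; the target path D:\ORACLE\ORADATA\STBY is only a placeholder for wherever the standby's copies actually live:

SQL> alter database recover managed standby database cancel;
SQL> alter system set standby_file_management=MANUAL;
SQL> alter database rename file 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\SYSTEM01.DBF'
       to 'D:\ORACLE\ORADATA\STBY\SYSTEM01.DBF';
-- repeat the rename for every datafile reported in the alert log
SQL> alter system set standby_file_management=AUTO;
SQL> alter database recover managed standby database disconnect from session;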
I'm using SAP ECC 6.0, Oracle 10g, HP-UX B.11.23. I recently performed a database restoration from my backup tape. However, while bringing up Oracle and the SAP database, I noticed that some archive logs may have gone missing. I tried to find them on my backup tape but could not.
In short, my SAP database is now up and running, but I'm having another problem when I execute a "CheckDB" job in DB13: the job is unable to complete successfully.
DB13 job log:
30.08.2010 19:44:40 Job started
30.08.2010 19:44:40 Step 001 started (program RSDBAJOB, variant &0000000000061, user ID BASIS)
30.08.2010 19:44:40 Execute logical command BRCONNECT On host drqaecc
30.08.2010 19:44:40 Parameters: -u / -jid CHECK20100830194440 -c -f check
Our database server (server1) crashed and we recovered the database to server2 from an RMAN backup. But when our scheduled email-alert job runs we get the error: ORA-01157: cannot identify/lock data file 201 - see DBWR trace file; ORA-01110: data file 201: 'D:\ORADATA\KFDB\TEMP01.DBF'. Our temp file is actually on the E: drive.
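Tempfiles are not restored by RMAN, so after the restore the control file still points at the old D: path. A minimal sketch of the usual fix; the tablespace name TEMP and the E: path shown are assumptions, adjust to the real names:

SQL> alter database tempfile 'D:\ORADATA\KFDB\TEMP01.DBF' drop;
SQL> alter tablespace TEMP add tempfile 'E:\ORADATA\KFDB\TEMP01.DBF' size 2g autoextend on;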
I have a database at branch A whose files are located on the E: drive, and I want to bring it up at branch B. I placed all of branch A's files on the D: drive at branch B. When I started the database I got a control file error, so I made the required corrections in initorcl.ora. Now when I start the database it mounts, but I get ORA-01157: cannot identify data file 1. I tried to rename the file, but the "alter database rename file1 to file2;" option is not working.
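The rename has to be issued while the database is mounted (not open), and both names must be complete, quoted OS paths, one statement per datafile that moved. A minimal sketch with placeholder paths:

SQL> startup mount
SQL> alter database rename file 'E:\ORADATA\ORCL\SYSTEM01.DBF' to 'D:\ORADATA\ORCL\SYSTEM01.DBF';
-- repeat for every datafile and online redo log listed in v$datafile / v$logfile
SQL> alter database open;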
I'm trying to import schemas from a dump file that came from a different environment.
What I have is:
1. the dump file
2. the log file of the export
I'm trying to import the file (containing three schemas) with REMAP_SCHEMA, and it fails with many occurrences of ORA-00959: tablespace 'string' does not exist.
Now, I've read in OTN:
[URL]
that what you need to do in that case is to use the REMAP_TABLESPACE option, to redirect the objects to a different tablespace.
I don't see the name of the tablespace I'm getting the error for in the export log, and I don't know whether there are more tablespaces I would have to redirect with REMAP_TABLESPACE.
I don't want to run the import three times, hit an error each time, discover only then which tablespace needs redirection next, and start over again...
How can I tell, from the dump file and the export log, which tablespace names I need to redirect to my own names? Or is the tablespace giving me the error the only one in the dump file?
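One way that should answer this without running the import repeatedly: ask Data Pump to write the DDL it would execute into a script (nothing is imported in this mode) and read the TABLESPACE clauses out of it. A sketch, assuming the dump is expdat.dmp in directory object DATA_PUMP_DIR; the schema and tablespace names shown are placeholders:

impdp system directory=DATA_PUMP_DIR dumpfile=expdat.dmp full=y sqlfile=ddl_preview.sql
-- search ddl_preview.sql for the word TABLESPACE, then add one mapping per name found:
impdp system directory=DATA_PUMP_DIR dumpfile=expdat.dmp remap_schema=srcuser:tgtuser remap_tablespace=TS_A:USERS remap_tablespace=TS_B:USERS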
How should I understand the two parameters db_file_name_convert and log_file_name_convert? If they are missing from the parameter file of the standby database, what will Oracle do?
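These two parameters only translate file names: whenever the standby has to record a datafile or log file that exists on the primary, the primary-side path prefix is replaced by the standby-side prefix. If they are not set, the standby keeps the primary's paths unchanged, which fails as soon as the directory structures differ. A minimal sketch of standby-side settings, with placeholder paths; both parameters are static, so they need scope=spfile and a restart:

SQL> alter system set db_file_name_convert='/u01/oradata/PRIM','/u02/oradata/STBY' scope=spfile;
SQL> alter system set log_file_name_convert='/u01/oradata/PRIM','/u02/oradata/STBY' scope=spfile;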
1) How can I copy archived log files from the primary to the DR location, and skip a particular archived log file if the DR side already has it? 2) I have tried the DBMS_FILE_TRANSFER package, but it raises an error if the file already exists.
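A minimal PL/SQL sketch of one way to do this with DBMS_FILE_TRANSFER.PUT_FILE; the directory objects ARCH_LOCAL and ARCH_DR, the database link DR_DB_LINK, and the crude "skip on any error" handler are all assumptions, and the basename extraction assumes Unix-style paths:

BEGIN
  FOR r IN (SELECT name FROM v$archived_log WHERE deleted = 'NO') LOOP
    BEGIN
      DBMS_FILE_TRANSFER.PUT_FILE(
        source_directory_object      => 'ARCH_LOCAL',
        source_file_name             => SUBSTR(r.name, INSTR(r.name, '/', -1) + 1),
        destination_directory_object => 'ARCH_DR',
        destination_file_name        => SUBSTR(r.name, INSTR(r.name, '/', -1) + 1),
        destination_database         => 'DR_DB_LINK');
    EXCEPTION
      WHEN OTHERS THEN NULL;  -- assume the failure means the file already exists at DR; skip it
    END;
  END LOOP;
END;
/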
I have an 11gR2 database with locally managed extents and manual segment space management. Can Data Guard be used to replicate this instance onto ASM storage? (I have multiple long columns, BLOBs and CLOBs, in various tables.)
I want to write a script that applies the archived log files to the DR site at a certain interval.
1) I will use the view below to find the sequence applied so far: select sequence# from v$log_archive. 2) But how can I compare the archive log files available in the physical location against that view?
e.g. the view shows that everything up to sequence 46789 has been applied, while in the DR physical location sequences up to 46795 are available, which means 6 more archived log files exist that have not yet been applied to the DR site.
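A minimal sketch of a query that can replace the manual comparison: run on the standby, it reports, per thread, the highest sequence received and the highest sequence applied, so the difference is the number of logs still waiting (this assumes v$archived_log is populated on the standby, which it normally is for a physical standby):

SQL> select thread#,
            max(sequence#)                                    last_received,
            max(case when applied = 'YES' then sequence# end) last_applied
       from v$archived_log
      group by thread#;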
We have physical Data Guard configured on version 10.2.0.4. We need to upgrade the primary and standby databases to 11gR2. Can we perform a rolling upgrade?
I need clarification regarding datafile growth in a physical standby database. Say the live DB is 150 GB and we created the standby at the same size (150 GB). Will the standby datafiles grow along with the live DB, or are only the logs applied to the standby, with the changes becoming visible only in a switchover or failover scenario?
[oracle@RSASPGERP02 ~]$ cd /
[oracle@RSASPGERP02 ~]$ . .bash_profile
[oracle@RSASPGERP02 ~]$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.4.0 - Production
ERROR: ORA-09925: Unable to create audit trail file
[code]....
It is a standby archiving problem. The problem appears when we try to connect directly or through telnet: when we try to log in directly as the oracle user, we receive the following message and the login fails: "GDM could not write to your authorization file. This could mean that you are out of disk space or that your home directory could not be opened for writing."
I am creating a physical standby database with the RMAN duplicate command from a 2-node RAC cluster. RMAN does all its work; now I am trying to start the MRP process on the physical standby database and I am getting the following errors:
------------------------------------------------------------
Check that the primary and standby are using a password file and
remote_login_passwordfile is set to SHARED or EXCLUSIVE, and that the
SYS password is same in the password files.
returning error ORA-16191
------------------------------------------------------------
ORA-16191: Primary log shipping client not logged on standby
------------------------------------------------------------
I copied the same password file from the primary to the standby and verified it several times, but I still get the same error.
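The usual checklist for ORA-16191, sketched below; the instance name orcl is a placeholder. On a RAC primary every node needs the identical password file under its own $ORACLE_HOME/dbs, the copy on the standby must be renamed for the standby instance, and if the SYS password was ever changed the file has to be recreated rather than edited:

SQL> show parameter remote_login_passwordfile    -- should be EXCLUSIVE (or SHARED) on both sides
$ orapwd file=$ORACLE_HOME/dbs/orapworcl password=<sys_password> entries=10 force=y
-- copy this exact file to the standby (renamed for its SID), then re-enable redo transport:
SQL> alter system set log_archive_dest_state_2=enable;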
I am trying to write code that identifies the delimiter in a file (which is stored in the system as a table (id, raw)); I then take this delimiter and pass it as a parameter to a stored procedure that cleans this file (table) and creates another, clean table.
The problem I am facing is that until now the file always arrived with one fixed delimiter (TAB), but now it has arrived with a different one (SPACE). So here I want to develop code that identifies the delimiter, places it in a variable, and passes it as a parameter to the cleaning SP.
-- here I want to develop code to identify the delimiter from the hosts_equiv file, which has data as below
select * from hosts_equiv where left(raw,1) not in ('#','*') and isnull(raw,'')<>''
Raw              id
---------------- ---
hiper USER1      1
hiper2 USER1     2
APX user2        3
I need to identify the delimiter between, e.g., "hiper" and "USER1" and pass it as a parameter to the raw_Parse SP (see the sketch after the snippet below).
declare @Tab varchar(10)
set @Tab = char(9)
exec raw_Parse
    'hosts_equiv',        -- from table (the entire file content is stored as a table, with Id as the generated record-number sequence)
    'CleanedHosts_Equiv', -- to table name; when called, the SP cleans the from table and places the cleaned data in the to table
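A minimal sketch, in the same T-SQL style as the snippet above, of how the delimiter detection could work: sample the first data row, check for a TAB, fall back to SPACE, and pass the result on. The extra @Delim parameter on raw_Parse is an assumption, since the full parameter list of the SP is not shown:

declare @Delim char(1)
select top 1 @Delim = case when charindex(char(9), raw) > 0 then char(9) else char(32) end
  from hosts_equiv
 where left(raw, 1) not in ('#', '*') and isnull(raw, '') <> ''
 order by id

exec raw_Parse 'hosts_equiv', 'CleanedHosts_Equiv', @Delim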
I have configured a physical standby on my local system. To check log shipping I created a table on the primary DB; when I tried to check it on the standby, it says the table does not exist. Below are the primary and standby alert log entries.
Primary alert log
Fatal NI connect error 12514, connecting to:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.98)(PORT=1522))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=STAND)(SERVER=dedicated)(CID=(PROGRAM=d:\oracle11g\app\administrator\product\11.1.0\db_1\bin\ORACLE.EXE)(HOST=A960M)(USER=SYSTEM))(SERVER=dedicated)))
VERSION INFORMATION:
TNS for 64-bit Windows: Version 11.1.0.6.0 - Production
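ORA-12514 means the listener on 172.16.0.98:1522 does not know a service called STAND, so redo transport cannot connect, which would also explain why the new table never appears on the standby. A sketch of the usual checks, using the names from the log above:

SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;   -- run on the primary
$ lsnrctl services   -- run on the standby host: is the STAND service registered?
-- if the standby is only mounted it may not auto-register; a static SID_LIST_LISTENER
-- entry for STAND in listener.ora lets redo transport connect regardless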
Statspack has been configured for Active Data Guard on the primary database. We got a spike of buffer busy waits for about 5 minutes in Active Data Guard, which degraded application SQL response times during that 5-minute window. Below is what I got from the Statspack report for one hour:
Snapshot       Snap Id  Snap Time           Sessions  Curs/Sess  Comment
~~~~~~~~    ----------  ------------------  --------  ---------  -------------------
Begin Snap:      18611  21-Feb-13 22:00:02       236        2.2
  End Snap:      18613  21-Feb-13 23:00:02       237        2.1
   Elapsed:      60.00 (mins)
[code]...
Why could there be a sudden spike of demand on UNDO data in Active Data Guard?
I am working in a bank as a system consultant. I have a SAN storage area and Oracle set up as below.
SAN 1
This SAN holds the data files of the Oracle tablespaces.
SAN 2
SAN 1 mirrors the data files of the Oracle tablespaces to SAN 2.
1. Can I rely on real-time data recovery from SAN 2?
2. If the data files on SAN 1 are corrupted, will the SAN 2 data files be corrupted as well?
3. If SAN 2 is corrupted, what Oracle features can be used to obtain uncorrupted data?
I am configuring a logical standby online. When I execute DBMS_LOGSTDBY.BUILD, first
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
it was blocked by another session, so I killed the holding session, but that did not help. Then I cancelled this step and executed it again. The error is:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
BEGIN DBMS_LOGSTDBY.BUILD; END;

*
ERROR at line 1:
ORA-01354: Supplemental log data must be added to run this command
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3669
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3755
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 12
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 370
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 157
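ORA-01354 says supplemental logging is missing on the primary, which fits the interrupted first BUILD attempt. A minimal sketch of the fix, run on the primary before retrying:

SQL> alter database add supplemental log data (primary key, unique index) columns;
SQL> select supplemental_log_data_pk, supplemental_log_data_ui from v$database;   -- both should be YES
SQL> execute dbms_logstdby.build;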
I have set up a cross-platform (Microsoft Windows IA 32-bit -> Linux x86 64-bit) Data Guard configuration and it worked fine. Then I did a switchover (which again worked) and found that data is not being replicated at all. I checked the data files visible from the new primary database and found they are in the Windows format, as below:
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSTEM01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSAUX01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\UNDOTBS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\USERS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\RMAN\RMAN_TS01.DBF
and physically they were created at '/home/app/oracle/product/11.2.0/db_1/dbs/' and as
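Because no name conversion was in place, the controlfile kept the Windows paths and the files landed, stripped of their directories, under $ORACLE_HOME/dbs. A minimal sketch of one way to repair it on the Linux side: move the files to a proper data directory (the /u01/app/oracle/oradata/MFS path is a placeholder), then repoint each controlfile entry while mounted; setting db_file_name_convert / log_file_name_convert on whichever side is currently the standby avoids the same mismatch for future files:

SQL> startup mount
SQL> alter database rename file 'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSTEM01.DBF'
       to '/u01/app/oracle/oradata/MFS/system01.dbf';
-- repeat for every datafile, tempfile and online redo log, then alter database open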
1) The SCN on the standby differs from the primary (I checked; there is a one-day difference). How can I bring them back in sync?
2) I created a table on the primary and it is not reflected on the standby (below I have pasted the alert log entries).
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Errors in file d:\oracle11g\app\administrator\diag\rdbms\stand\stand\trace\stand_dbw0_6916.trc:
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: 'D:\ORACLE11G\APP\ADMINISTRATOR\ORADATA\STAND\SYSAUX01.DBF'
[code]....
3) When I try to open the standby database in read-only mode, it gives the error below:
ERROR at line 1:
ORA-16004: backup database requires recovery
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'D:\ORACLE11G\APP\ADMINISTRATOR\ORADATA\STAND\SYSTEM01.DBF'
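All three symptoms point at the same thing: the standby cannot open its datafiles at the recorded D:\ paths, so no redo is applied and the SCN keeps falling behind; the read-only open then fails with ORA-16004 for the same reason. A minimal set of checks (run the first query on both databases, the rest on the standby):

SQL> select current_scn from v$database;
SQL> select process, status, sequence# from v$managed_standby;   -- MRP0 should be applying a log
SQL> select name from v$datafile;   -- do these paths really exist on the standby host?
-- if they do not, stop recovery, set standby_file_management=MANUAL, rename the files to
-- their real locations, set it back to AUTO, then restart managed recovery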
Our organization has recently decided to go for a storage metro cluster solution for disaster recovery. In a Data Guard environment we normally measure how much archived log is being generated and use that value to calculate the required bandwidth.
For the storage metro cluster we need to find out how many blocks are changing in our primary database, since the same rate of change would apply on the DR cluster. So I need to estimate how much change is happening in my system. How do I calculate the change rate?
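A rough way to measure this from the database itself is to take the delta of the 'db block changes' and 'redo size' statistics over a representative interval (a block changed many times is counted every time, so treat the figure as an upper bound). A minimal sketch using v$sysstat; Statspack or AWR snapshot deltas would give the same numbers:

SQL> select name, value from v$sysstat where name in ('db block changes', 'redo size');
-- wait, e.g., one hour of peak load, run it again, and subtract:
-- bytes changed per hour ~ delta('db block changes') * db_block_size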
I have written Java code which reads 2 million values of a particular column from a CSV file and stores them in a set. There is a table in the Oracle database which contains 10 million records for that column. Now I want to form a SQL query that selects the values of that column which are present in the CSV file but not in the database table. For example:
if I take the CSV file name as employee.csv, it has a column called employee_name under which the records are as follows
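Rather than pushing 2 million values from Java into the query, it is usually easier to expose the CSV to the database and let SQL do the anti-join. A minimal sketch using an external table and MINUS; the directory object CSV_DIR and the name employee for the 10-million-row table are assumptions:

SQL> create table employee_csv_ext (employee_name varchar2(200))
       organization external (
         type oracle_loader
         default directory CSV_DIR
         access parameters (records delimited by newline
                            fields terminated by ',')
         location ('employee.csv'));
SQL> select employee_name from employee_csv_ext
     minus
     select employee_name from employee;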
I'm using a mixture of Oracle Data Guard Concepts and Administration 11g Release 1 (11.1) and the bulletin "MAA - Creating a RAC Physical Standby for a RAC Primary 10g" (on Oracle Support), but neither is comprehensive for what I'm trying to do.
How do I set up Data Guard Broker in a RAC Data Guard environment?
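A minimal DGMGRL sketch of the broker setup; the unique names prim_db and stby_db are placeholders. On RAC, set dg_broker_config_file1/2 to shared storage on every instance and dg_broker_start=true on both databases before creating the configuration:

DGMGRL> connect sys/<password>@prim_db
DGMGRL> create configuration 'dgrac' as primary database is 'prim_db' connect identifier is 'prim_db';
DGMGRL> add database 'stby_db' as connect identifier is 'stby_db' maintained as physical;
DGMGRL> enable configuration;
DGMGRL> show configuration;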