Backup & Recovery :: Process Of Setting Up DR Environment For SAP And Oracle Database
Feb 7, 2012
We are in the process of setting up a DR environment for our SAP and Oracle databases. NetApp and our architects came up with the following solution:
1. Standby databases are built for all production databases.
2. The SAP file systems are replicated to the secondary site.
3. The Oracle log files and control files are replicated by NetApp SnapMirror at 10-minute intervals.
4. The database is recovered with RECOVER STANDBY DATABASE every 15 minutes at the standby site.
5. Please note there is no Data Guard involved.
6. To test the failover, the mirror is broken. The standby control file is replaced with the production control file and redo log files.
7. A STARTUP command is issued on the standby database, and it worked.
I would like to know whether step 6 is a correct approach. I tried to convince the architects that this will result in a very disastrous situation for us, but no one listened.
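For comparison, a manually maintained standby (without Data Guard) is more often activated for a DR test along these lines; this is only a sketch, the exact sequence depends on the Oracle release and on how the standby was built, and it does not involve copying the production control file across:
SQL> RECOVER STANDBY DATABASE;                   -- apply any remaining archived logs that have been shipped
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;   -- converts the standby control file into a current control file
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;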
I'm trying to move my backup sets from a Windows database environment to an OEL 5.7 environment on another server.
I've found a manual [URL] that I am trying to follow. I took backup sets from last night's backup using RMAN, and the current parameter file (initSID.ora) from the running live database. Now I need to configure the control files in the pfile accordingly.
1. Can I take the current control files from the running system to restore and recover last night's backup sets to the state the database was in at backup time?
2. How can I find out whether the control files are backed up and known to RMAN? "list backup completed after '2012-JUN-19';" gives me archived redo logs and datafiles, but I don't see the control files (or don't recognize them).
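One way to check (a sketch; the exact date format depends on your NLS settings) is to ask RMAN specifically about control file backups and the autobackup setting:
RMAN> LIST BACKUP OF CONTROLFILE;
RMAN> SHOW CONTROLFILE AUTOBACKUP;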
I have a doubt regarding the process RMAN follows for restore and recovery of the database.
My Level 0 full backup completes on Saturdays at around 11 am, after taking 8 hours (it starts at 3 am). On Sunday, I ask RMAN to restore (on a different box) until Saturday 11:30 am. Then, after the restore is successful, I recover it until 11:45 am.
The recovery also goes fine and I am able to clone to the test box.
To what time is the database restored? I assume it is to 11 am, since the Level 0 backup finishes at 11 am, even though I asked it to restore until 11:30 am.
During the recovery, ONLY the archives generated between Saturday 3 am and the time I asked to recover until (Saturday 11:45 am) should be required. But, checking the recovery log file, I was surprised to see that RMAN restored archive files starting from Friday 9:30 pm (well before the time the Level 0 backup even started).
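For context, the restore/recover sequence in question is essentially of this form (a minimal sketch; the timestamp, its format, and the absence of SET NEWNAME clauses are illustrative only):
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2012-06-23 11:45:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;   -- restores from the most recent backup completed before the UNTIL time
  RECOVER DATABASE;   -- applies incrementals and archived logs up to the UNTIL time
}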
I am facing a problem while taking a backup from a Windows 7 client of an Oracle 11g R2 database. I have installed Oracle 11gR2 for Windows on the Windows 7 machine. I have created a directory in the database as shown below.
On Database Server
SQL> create directory win_expdp_dir as 'd:\expimp';
Directory created.
SQL> grant read, write on directory win_expdp_dir to lab;
Grant succeeded.
On Windows 7 machine (client machine)
D:\app\product\11.2.0\client_1\BIN>expdp lab/lab@wbdata.wbh-db11g DIRECTORY=win_expdp_dir DUMPFILE=lab.dmp LOGFILE=lab.log
Export: Release 11.2.0.1.0 - Production on Mon May 2 12:51:44 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
I have given all sharing rights on the d:\expimp directory. My main question is why I'm getting this error. Is anything missing in the setup? How do I take an export from a Windows 7 client?
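One thing worth checking (a sketch; WIN_EXPDP_DIR is the directory created above): Data Pump writes its dump and log files on the database server, not on the client, so the path behind the directory object has to exist and be writable on the server itself. You can confirm where it points with:
SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'WIN_EXPDP_DIR';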
I've read a lot about the different types of backup available with Oracle (hot and cold backup). However, I was thinking of a different way of performing this task. I'm currently using Windows Server 2008 R2 and Oracle 11g Standard Edition. I'd like to schedule an entire backup of my server via the utility "Windows Server Backup" (available for free). That way, I could recover my entire server, with all its programs and files, in case it crashes. I'm wondering if this solution could be used as a way of backing up (and recovering) the Oracle database. Should I still set up a regular hot backup with ARCHIVELOG mode enabled, in case some operations/transactions were being performed at the time of the crash (for data integrity)?
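If the archived-log / hot-backup route is kept as well, enabling ARCHIVELOG mode is roughly this (a sketch; run as SYSDBA during a maintenance window, since it requires a clean restart into MOUNT state):
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;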
In an OLTP environment, which cursor_sharing setting is preferred? Typically we retain the default setting for most parameters except memory settings etc., but I have questions in the following context.
No, I am not facing any issue as of now (I am not supporting any live environment), but I want to know the design considerations.
First of all, in the OLTP environment I am referring to, we use PL/SQL variables, which are obviously bind variables. Only where the plan is expected to change do we use hard-coded values like 'CREDIT' or 'DEBIT' for the acc_type column.
Again, there can be two scenarios: 1) we use the same query for both acc_type values; 2) we use two different queries: IF v_parameter = 'CR' THEN select * from accounts where acc_type='CREDIT'... ELSE select * from accounts where acc_type='DEBIT'... END IF;
Again, suppose the values are skewed and we gather stats with histograms here. Isn't cursor_sharing=similar the setting that would be useful in the above case, since with that setting the optimizer will "decide" which plan to pick depending on the values, while bind-variable peeking is taken care of in option 2 above by the IF/ELSE clause?
BTW, I have carried out several tests but am not getting conclusive results. For example, I created the following table with skewed data, created an index, and gathered stats with a histogram.
SQL> select object_id,count(*) from skewed_data_tab group by object_id;
SQL> create index i_skewed_tab_data on skewed_data_tab(object_id);
SQL> exec dbms_stats.gather_table_stats(user,'SKEWED_DATA_TAB',cascade=>true, method_opt=>'for all columns size 254');
Then I traced with the following options: 1) alter session set events '10046 trace name context forever, level 12';
SQL> begin for i_outer in(select n from ids order by tstamp) loop for i_inner in (select /* for exact */ object_id,object_name,object_type from skewed_data_tab where object_id=i_outer.n)
[code]...
2) set termout off
   alter session set events '10046 trace name context forever, level 12';
   @/u04/scripts/exact.sql 5

cat /u04/scripts/exact.sql
select /* for exact */ object_id,object_name,object_type from skewed_data_tab where object_id=&1;
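Beyond the trace files, one way to see how the two approaches behave (a sketch only; the filter matches the /* for exact */ comment used in the test statements above) is to check how many child cursors and distinct plans the statement actually produced:
SQL> select sql_id, child_number, plan_hash_value, executions from v$sql where sql_text like '%for exact%skewed_data_tab%';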
I am new to Oracle. I want to restore my database into Oracle 10g Enterprise. I have a backup file (.bkp file) from Oracle 10g XE and now I want to restore it into Oracle 10g Enterprise.
I want to know how I can create an empty database in Oracle 10g, and what "empty database" means. Basically, I want to perform a migration from one database to another by using exp and imp.
How do I create an empty database and migrate the database from one platform to another by using exp and imp? I have Oracle 10g (10.2.0.1.0) on both XP and Linux.
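For reference, a plain exp/imp round trip between the two databases is roughly of this form (a sketch; the connect strings, passwords, and file names are placeholders, and the target database must already exist, e.g. created with DBCA):
exp system/password@xe FULL=Y FILE=fulldb.dmp LOG=exp_fulldb.log
imp system/password@orcl FULL=Y FILE=fulldb.dmp LOG=imp_fulldb.log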
We faced a database (10.2.0) issue that forced an incomplete recovery, and we performed OPEN RESETLOGS. This DB is 12 TB. In 10g, opening the database with RESETLOGS does not invalidate previous backups. We have another replica (still in the pre-resetlogs incarnation) of this database which we keep in sync by applying archives. How can we apply the archives from the reset database to that previous database?
I want a copy of the data from the Oracle database in production on my local machine, as an upgrade is going to happen soon. How can I do that, so that I can check my old data later regardless of the upgrade?
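One common option (a sketch; the schema name, directory, and credentials are placeholders) is a schema-level Data Pump export, which can be re-imported or inspected later regardless of what the upgrade does:
expdp system/password@prod SCHEMAS=app_owner DIRECTORY=DATA_PUMP_DIR DUMPFILE=app_owner.dmp LOGFILE=app_owner_exp.log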
We have a non-production Oracle 10g cluster running on Linux, with Data Guard (logical standby). From time to time we need to refresh the schema on the primary, but doing so has always caused problems with the logical standby. Our DBAs can never get it to complete successfully. They have tried a bunch of different methods (even some provided by Oracle), but it does not work. We have a bunch of skip statements on the
Every time we need to refresh the schema, we have to rebuild the entire database (primary and logical standby) from a production RMAN backup. As you can imagine, this is a very time-consuming ordeal. There has got to be a way this process can be completed in a timely manner.
I was thinking of the following:
1) shut down Data Guard log shipping
2) lock the user, kill sessions, drop the user from the primary
3) lock the user, kill sessions, drop the user from the logical standby
4) run impdp with the production export file on the primary
5) run impdp with the production export file on the logical standby
6) re-enable Data Guard log shipping
7) confirm logs are being applied and the databases are in sync
We are only replicating the one schema to the logical standby. I am not clear on how the redo logs would be applied to the logical standby. There are hundreds of them at about 100 MB each, so I would think that if we do the imports independently, I could somehow sync the primary and the logical standby after they complete.
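A hedged sketch of the commands usually involved in pausing and resuming SQL Apply around such a refresh (whether this is safe depends on how the standby is resynchronised afterwards, e.g. via DBMS_LOGSTDBY skip rules or an instantiation of just that schema):
-- On the logical standby, before the refresh:
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
-- run the imports on the primary and the standby independently, then:
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;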
SQL> alter database mount;
Database altered.
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
My database is in NOARCHIVELOG mode. I took a whole-database (cold) backup. Then, just half an hour later, I ran the following script.
RMAN> RUN { RESTORE DATABASE;RECOVER DATABASE;alter database open;}
Starting restore at 24-FEB-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=133 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
[code]...
Starting recover at 24-FEB-12
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/oradata/PROD/redo02.log
archived log file name=/u01/app/oracle/oradata/PROD/redo02.log thread=1 sequence=2
media recovery complete, elapsed time: 00:00:01
[code]...
Why do I need to specify an option in the first place? Since my redo is intact, this is not an incomplete recovery, and I do not want to generate a new incarnation of my database. Why is Oracle not simply opening my database?
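For comparison, when a NOARCHIVELOG database is restored from a cold backup and no redo is to be applied at all, the run block is often written like this sketch instead (RESETLOGS is then required precisely because the redo stream is discarded):
RMAN> RUN {
  RESTORE DATABASE;
  RECOVER DATABASE NOREDO;     -- skip redo application entirely
  ALTER DATABASE OPEN RESETLOGS;
}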
I have Oracle 10g installed on my system and the name of the database is "ORCL", for which I have scheduled an incremental backup every day. Mentioned below are the steps followed.
*************PARAMETERS TO BE CHANGED******************
configure channel 1 device type disk format '\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\std_%U';
configure channel 2 device type disk format '\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\std_%U';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\cntrl_%U';
CONFIGURE RETENTION POLICY TO REDUNDANCY 7;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
*******************************************************
*******COMMAND FOR CONNECTING TO RMAN******************
rman LOG=\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\rmanlog_%date:~4,2%-%date:~7,2%-%date:~10%.txt APPEND
CONNECT TARGET SYS/ORACLE@ORCL
*******************************************************
********INCREMENTAL BACKUP COMMAND*********************
RUN {
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
  BACKUP ARCHIVELOG ALL DELETE INPUT;
}
********************************************************************
Now I want to restore this backup to another system as a new database. How do I perform this recovery to another database on a new system?
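A rough sketch of restoring such a backup on a different host (the DBID, pfile, and backup location are placeholders that would need to be substituted; this is not a complete procedure):
RMAN> SET DBID 1234567890;                 -- DBID of the source database (placeholder)
RMAN> STARTUP NOMOUNT;                     -- using a pfile copied from the source system
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '<backup location on the new host>';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;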
I am trying to clone a database onto another server with a different directory structure, so the paths on the source DB server are under /u04, whereas on the target DB server they would be under /u03.
Since I am testing this on a small database initially, I have kept all datafiles, backups, and archivelogs under /u04 and /u03 on the source and target DB servers respectively. I have now copied the backups from the source DB server to the target, and since the path changed, I cataloged them.
However, during the restore I am getting:
ORA-19870: error reading backup
Here are the session details
RMAN> catalog start with '/u03/oradata/db7fra';
searching for all files that match the pattern /u03/oradata/db7fra
List of Files Unknown to the Database
=====================================
File Name: /u03/oradata/db7fra/DB7/archivelog/2011_03_15/o1_mf_1_16_6qyvpb3w_.arc
File Name: /u03/oradata/db7fra/DB7/backupset/2011_03_15/o1_mf_annnn_TAG20110315T123018_6qypyv5v_.bkp
[code].....
I have also altered the permissions on the backup files, but to no avail.
oracle@dev-biz:/u03/oradata/db7fra/DB7/backupset/2011_03_15 $ls -ltr
total 545260
-rwxrwxrwx 1 oracle dba 12419072 Mar 15 13:52 o1_mf_ncsnf_TAG20110315T123008_6qypyrz2_.bkp
-rwxrwxrwx 1 oracle dba     3072 Mar 15 13:52 o1_mf_annnn_TAG20110315T125043_6qyr54to_.bkp
-rwxrwxrwx 1 oracle dba   426496 Mar 15 13:52 o1_mf_annnn_TAG20110315T125006_6qyr3zk4_.bkp
-rwxrwxrwx 1 oracle dba    14336 Mar 15 13:52 o1_mf_annnn_TAG20110315T123018_6qypyv5v_.bkp
SQL> SELECT * FROM V$VERSION;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
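For what it's worth, when the datafile paths differ between source and target, the restore is commonly wrapped in SET NEWNAME / SWITCH commands, roughly like this sketch (file numbers and target paths are placeholders; on 10.2 each datafile is renamed individually):
RMAN> RUN {
  SET NEWNAME FOR DATAFILE 1 TO '/u03/oradata/DB7/system01.dbf';
  SET NEWNAME FOR DATAFILE 2 TO '/u03/oradata/DB7/undotbs01.dbf';
  RESTORE DATABASE;
  SWITCH DATAFILE ALL;     -- point the control file at the new locations
  RECOVER DATABASE;
}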
After I backed up my database, I checked the alert log and found the following errors:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
[code]....
The trace file 26967 contains:
[oracle@shenzhengair archivelog]$ cat /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc
/u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
This is about an online backup done through RMAN. Suppose I am taking an online backup of the full database. During the backup, users are inserting/deleting/modifying data, and those changes are recorded in the redo logs and archived logs. Once the database backup is finished, how are these archives applied to the database to bring it up to date?
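For context, the archives are only replayed when the backup is actually restored and recovered; a hedged sketch of the two ends of that process:
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;   -- backs up the database and the archived logs generated during the backup
RMAN> RESTORE DATABASE;                  -- later, on restore: lay down the datafiles from the backup
RMAN> RECOVER DATABASE;                  -- recovery replays the archived (and online) redo over the restored files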
Back up the entire database, without archived logs, while the database is open for user activity. This backup should also serve as the base for an incremental backup strategy.
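A sketch of a command matching that description (it assumes the database is already in ARCHIVELOG mode, which open-database backups require):
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- open-database backup, no archived logs, usable as the base for later level 1 backups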