I have a database using Data Guard, and my primary (db1) is shipping archive files to my standby (db2) with no problems. However, when I query:
select sequence#, status, applied from v$archived_log;
I see this:

 SEQUENCE# S APP
---------- - ---
         4 A YES
         5 A YES
         6 A YES
         7 A YES
         8 A YES
         9 A YES
        10 A YES
        11 A YES
        12 A YES
        13 A YES
        14 A YES
[code]....
So I did an ALTER SYSTEM SWITCH LOGFILE on db1, then looked again, and I can see the new archived logs being applied. I thought all archived logs had to be applied on the standby, since this is the very foundation of the standby database. Am I going to run into trouble later if I have a failover (unsynchronized database)?
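One way to sanity-check this, rather than eyeballing individual sequences, is to ask the standby itself whether anything is missing; this is just a quick check using the standard v$archive_gap and v$archived_log views (run on the standby):

-- Any row returned here means a range of logs is missing on the standby:
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;

-- Highest sequence actually applied on the standby, to compare against the primary:
SQL> select thread#, max(sequence#) from v$archived_log where applied = 'YES' group by thread#;

If v$archive_gap comes back empty and the applied sequence keeps up with the primary, a failover should not leave you with an unsynchronized database.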
Assume you have a 9i database running in archivelog mode, but you are constantly deleting the archived redo logs due to space constraints.
Will you be able to perform a full level 0 backup, and the subsequent incremental backups, in the absence of the archived redo logs? And are these incremental backups enough to recover the database, or particular datafiles, at least to the point of the backup itself?
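A minimal RMAN sketch of that scenario, assuming the level 0 and level 1 backup pieces themselves are still available; with the archived logs gone, the incrementals are applied with NOREDO and the database has to be opened RESETLOGS, so you only get back to the point in time of the last incremental:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;    # base backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;    # later incrementals

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE NOREDO;                # apply incrementals only, no archived redo
RMAN> ALTER DATABASE OPEN RESETLOGS;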
I have created a physical standby database, and the logs are not getting applied on it. The following is an extract of the standby alert log:
Wed Sep 05 07:53:59 2012
Media Recovery Log /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Error opening /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Attempting refetch
Media Recovery Waiting for thread 1 sequence 37638
Fetching gap sequence in thread 1, gap sequence 37638-37643
Wed Sep 05 07:53:59 2012
RFS[46]: Assigned to RFS process 3081
RFS[46]: Allowing overwrite of partial archivelog for thread 1 sequence 37638
RFS[46]: Opened log for thread 1 sequence *37638* dbid 1723205832 branch 765704228
Wed Sep 05 07:55:34 2012
RFS[42]: Possible network disconnect with primary database
However, the archived files are getting copied to the standby server. I tried registering and recovering the logs, but that also failed. Some of the information follows:
Primary: Oracle 11gR2 EE

SQL> select max(sequence#) from v$log where archived='YES';

MAX(SEQUENCE#)
--------------
         37668
[code]...
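You mention registering and recovering already failed; for completeness, the usual sequence looks like the hedged sketch below (the path is taken from the alert log above). The "Allowing overwrite of partial archivelog" line also suggests the copy of sequence 37638 on disk may be incomplete, so it may be worth re-copying that file from the primary first:

-- On the standby, register the manually copied archived log(s):
SQL> alter database register logfile '/u01/oracle/oradata/ABC/archives/1_37638_765704228.arc';

-- Then stop and restart managed recovery:
SQL> alter database recover managed standby database cancel;
SQL> alter database recover managed standby database disconnect from session;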
Currently I am at the point where the configuration has been completed, and I just need to sync the standby database to the primary one. I can see in the log files that the archive logs are being shipped, but they are not applied on the standby system.
If I run "recover standby database;" manually in sqlplus I can see that it is trying to apply an archive log which is way too old (ORA-00279: change 9656498443 generated at 04/29/2008 08:45:08 needed for thread 1). I
n the alert log I can also see this error: Warning: Recovery target destination is in a sibling branch of the controlfile checkpoint. Recovery will only recover changes to datafiles.
At this point I was thinking that the standby database might be on a different incarnation compared to the primary, but this is not the case; they are both on incarnation 6:

6  6  MVF  4023175798  CURRENT  48493546257  13-06-21
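Since the "sibling branch" warning normally points at a RESETLOGS/branch mismatch rather than at the current incarnation number alone, one hedged check is to compare the full incarnation history, including the RESETLOGS SCN and time, on both sides:

-- Run on both the primary and the standby and compare row by row:
SQL> select incarnation#, resetlogs_change#, resetlogs_time, status
       from v$database_incarnation
      order by incarnation#;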
Here I have a question about an Oracle database backup strategy. My question is:
How do I back up my Oracle database, called DB11G, without archived logs, while the database is open for user activity, in such a way that this backup is also the base for an incremental backup strategy?
Back up your entire database, without archived logs, while the database is open for user activity. This backup should be the base for an incremental backup strategy.
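A minimal RMAN sketch for that requirement, assuming DB11G runs in archivelog mode (otherwise an open backup is not possible); leaving out PLUS ARCHIVELOG keeps the archived logs out of the backup, and the level 0 becomes the base for later incrementals:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'DB11G_L0_BASE';   # base backup, no archived logs included
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;                       # subsequent incrementals build on the level 0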
I'm running Oracle 9i on AIX 5.2. I'm not using a recovery catalog, nor am I using media management software. I perform a full, online rman backup of the database and archived redo logs daily to disk, then use operating system commands to copy the backup to tape. There is only space on disk for two days' backups, so I need to have a retention policy of "redundancy = 1", and run a "delete obsolete" prior to the backup. The problem is that I don't want to subject the archived redo logs to this retention policy.
I have two physical standby databases connected by WAN to the primary site, and I might need archived redo logs that are a few days (or more) old in the event of a prolonged WAN outage. I've read about the "keep forever" option, but apparently it isn't available without using a recovery catalog. Is there any way to spare the archived redo logs from my retention policy?
Note: I want to "protect" the actual archived redo logs from the retention policy, not the backups of the archived redo logs.
I'm practicing for the OCP test, and one of the questions says there is a backup from yesterday and the last archived logs are from the day before yesterday; it is not mentioned whether it is a cold or hot backup.
If it's a cold backup, can't we recover from it? Is it a must to have the archived redo logs as well when recovering from a cold backup? That doesn't sound logical, since those logs are needed only for a hot backup.
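For context, a cold (consistent) backup taken after a clean shutdown can simply be copied back and opened with no recovery at all, which is why no archived logs are needed. A hedged user-managed sketch, with hypothetical paths:

SQL> shutdown immediate
-- restore ALL datafiles, controlfiles and online redo logs from the cold backup, e.g.:
$ cp /backup/cold/* /u01/oradata/ORCL/
SQL> startup
-- the database opens cleanly; no media recovery, hence no archived redo logs required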
I have an SAP primary database and also a standby database that was working perfectly. We migrated the primary from Windows 2003 to Windows 2008 and brought it up. I had to create a controlfile, do a system copy, and reset the logs on the primary. Everything came up, and when I checked the standby it was receiving the logs, but after a month I see that it was not applying them; I think it stopped because of the sequence number.
I did the following as per the attachment. My logs have been shipped across but not applied, but what worries me is the log sequence number on my primary:
SQL> select max(sequence#),thread# from gv$archived_log group by thread#;
Back up the entire database, without archived logs, while the database is open for user activity. This backup should also be the base for an incremental backup strategy.
In Oracle Database 11.2.0.2, to delete audit trails after the audit records have been inserted into Oracle Audit Vault, is it necessary to schedule Oracle Audit Vault jobs to clean up the audit trails on a scheduled basis, or does Audit Vault automatically clean them up once the records have been inserted into the Audit Vault? I know there is a DBMS_AUDIT_MGMT package, but in 11gR2 the deletion of audit trails isn't done automatically, is it?
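As a hedged sketch of the DBMS_AUDIT_MGMT route on the audited (source) database: the purge is not automatic, but a scheduled purge job created with USE_LAST_ARCH_TIMESTAMP => TRUE deletes only records up to the "last archive timestamp", which the Audit Vault collector is expected to advance as it consumes records. The job name and the 24-hour interval below are just examples:

BEGIN
  -- one-time initialization of the cleanup infrastructure
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    default_cleanup_interval => 24);

  -- recurring purge of records that have already been archived/collected
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    audit_trail_purge_interval => 24,
    audit_trail_purge_name     => 'PURGE_COLLECTED_AUDIT',
    use_last_arch_timestamp    => TRUE);
END;
/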
I have to clean up data from our tables (production environment) that contain millions of rows. The question is: apart from the solution of partitioned tables, what alternative solution does Oracle recommend?
Should we delete the rows using a cursor in a PL/SQL block, or export/import the whole database and, for the tables from which we want to remove the old rows, use the QUERY option of the Data Pump utility?
I have used both approaches, and I have to admit the Data Pump solution is much, much faster than the deletion, which suffers from disk I/O. The question, again, is which of these two methods is more reliable and less risky for the health of the database.
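For reference, the PL/SQL route is normally done in batches so undo and redo stay bounded; a minimal sketch with hypothetical table and column names (BIG_TABLE, CREATED_DT) and an arbitrary 10,000-row batch size:

BEGIN
  LOOP
    DELETE FROM big_table
     WHERE created_dt < ADD_MONTHS(SYSDATE, -36)   -- example cutoff: rows older than 3 years
       AND ROWNUM <= 10000;                        -- cap each transaction's size
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/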
I applied this patch in our development environment. After validation, I need to apply the same in production, training and publication environments. The detailed information follows :
PowerPC_POWER5 / aix 5300-12
OPatch utility used : 11.2.0.3.0
OUI version : 11.2.0.2.0
Prereq "checkConflictAgainstOHWithDetail" passed.
Composite patch 13923804 successfully applied.
OPatch Session completed with warnings.
I have applied offline patch *10417948* on my database. How can I see whether that patch was applied on the database/Oracle Home or not? I applied an online patch a few months ago; in that case I applied the patch on each database after installing it, using the command:
When I execute the command opatch lsinventory -detail I get the following output:
Interim patches (2) :

Patch 10417948 : applied on Sat Feb 23 13:42:49 IST 2013
Unique Patch ID: 14586154
Created on 18 Oct 2012, 06:52:32 hrs PST8PDT
Bugs fixed: 10417948
[code].......
For an offline patch, is it required to enable the patch on every database?
My database is open and in NOARCHIVELOG mode and was working fine, but for the last 2 to 3 days it has been throwing ORA-00308: cannot open archived log, together with an ORA-00600:

ORA-00308: cannot open archived log 'D:\HFTEST\ARCHIVE\ARC1_779994432.1'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-00600: internal error code, arguments: [kewrsp_split_partition_2], [87], [902828405], [11905], [], [], [], [], [], [], [], []
I don't understand why this is happening; I googled it but didn't find anything.
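As a hedged first check, it may be worth confirming the log mode and letting RMAN reconcile its records with what is actually on disk, since ORA-00308 here refers to an archived log that no longer exists at the recorded path:

SQL> select log_mode from v$database;

RMAN> CROSSCHECK ARCHIVELOG ALL;                  # marks missing archived logs as EXPIRED
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;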
I have a setup with one physical and one logical standby from a primary database. In the case of a switchover between the primary and the physical standby, my logical apply gets stopped. Can a logical standby apply from a physical standby?
Why is the status bar showing "3 records applied and saved" instead of "2 records applied and saved"? 2 means 1 for the header and 1 for the detail. How do I change the status bar message?
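That text looks like the default Forms FRM-40400 message, which simply reports how many records the commit processed. If the goal is only to control what the status bar shows, one hedged option is a form-level ON-MESSAGE trigger that intercepts that message code (the replacement text below is just an example):

-- Form-level ON-MESSAGE trigger (Oracle Forms)
IF MESSAGE_CODE = 40400 THEN
  MESSAGE('2 records applied and saved');   -- show our own text instead of the default
ELSE
  MESSAGE(MESSAGE_TYPE || '-' || TO_CHAR(MESSAGE_CODE) || ': ' || MESSAGE_TEXT);
END IF;

If the form really is committing three records, it may also be worth checking whether a third record (for example in a control block) is being marked as changed.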
We recently configured Data Guard on a test machine. Archives are not applied on the physical standby. Where do I need to start investigating?
Primary

SQL> select THREAD#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            301
[code]...
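A hedged place to start on the standby itself is to confirm that managed recovery (MRP0) is actually running and to look at the Data Guard message view for errors; both views below are standard on a physical standby:

-- Is MRP running, and on which sequence?
SQL> select process, status, thread#, sequence# from v$managed_standby;

-- Recent Data Guard related messages/errors on this instance:
SQL> select timestamp, severity, error_code, message
       from v$dataguard_status
      order by timestamp;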
How would you find out the list of patches applied to an Oracle Database home without using commands like opatch lsinventory -detail, etc.?
I think registry$history is a view from which we can find out the list of patches applied.
But I think it will not include all the bug fixes, standalone or one-off patches; it will mainly list the CPU patches applied (correct me if I am wrong).
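For what it's worth, a minimal sketch of that query (dba_registry_history is the documented counterpart of registry$history from 11g onward); as noted above, it records CPU/bundle-style applications rather than every one-off binary patch:

SQL> select action_time, action, version, id, comments
       from dba_registry_history
      order by action_time;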
I need to create a table that contains the details of which companies a specific seeker applied to, shown when he logs in to his account. Also, when an employer logs in, he needs to get the list of seekers who applied to him. What database schema is required for such a situation? Say the user has a primary key UID, and each employer has a primary key EID.
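A minimal sketch of one possible schema, using the UID/EID keys mentioned above and a hypothetical APPLICATION table as the many-to-many link between seekers and employers (all names and columns are illustrative only; note that UID also shadows a built-in Oracle function, so you may prefer another column name):

CREATE TABLE seeker (
  uid   NUMBER        PRIMARY KEY,
  name  VARCHAR2(100) NOT NULL
);

CREATE TABLE employer (
  eid      NUMBER        PRIMARY KEY,
  company  VARCHAR2(100) NOT NULL
);

-- one row per application; supports both lookups described above
CREATE TABLE application (
  uid         NUMBER NOT NULL REFERENCES seeker(uid),
  eid         NUMBER NOT NULL REFERENCES employer(eid),
  applied_on  DATE   DEFAULT SYSDATE,
  CONSTRAINT application_pk PRIMARY KEY (uid, eid)
);

-- companies a given seeker applied to:
--   SELECT e.company FROM application a JOIN employer e ON e.eid = a.eid WHERE a.uid = :uid;
-- seekers who applied to a given employer:
--   SELECT s.name FROM application a JOIN seeker s ON s.uid = a.uid WHERE a.eid = :eid;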
I'm trying to write a PL/SQL block to find all the missing archived logs that are needed for Streams replication.
There is already an Oracle Metalink note for this, but it only gives the archive log name that contains my dba_capture.start_scn, and we also need to check whether the files exist on disk or not!
The problem here is that, when using ASM, the dba_registered_archived_log view is truncating the file name, and it is really difficult to pinpoint the logs. So is it fine to join this view with v$archived_log? Would the DELETED and STATUS columns do the trick? I modified the PL/SQL as below. Is this fine/accurate?
dbms_output.put_line('Capture will restart from SCN ' || lScn || ' in the following file:');
for cr in (select decode(a.name, NULL, 'NOT FOUND', a.name) name,
                  to_char(a.completion_time, 'hh24:mi:ss') completion_time
             from v$archived_log a, dba_registered_archived_log b
            where lScn between b.first_scn and b.next_scn
              and a.deleted = 'YES'
              and a.status != 'A')
loop
  f_rec := 1;
[code]......
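For comparison, here is a hedged alternative that skips dba_registered_archived_log entirely and reads v$archived_log directly (which keeps the full ASM file name), starting from the capture process's required checkpoint SCN; it assumes a single capture process and prints each log with its DELETED/STATUS flags so missing files stand out:

set serveroutput on
declare
  l_scn number;
begin
  -- SCN from which the capture process must restart
  select required_checkpoint_scn into l_scn from dba_capture;
  for r in (select sequence#, name, deleted, status
              from v$archived_log
             where next_change# > l_scn
             order by sequence#)
  loop
    dbms_output.put_line(lpad(r.sequence#, 10) || '  deleted=' || r.deleted ||
                         '  status=' || r.status || '  ' || nvl(r.name, 'NOT FOUND'));
  end loop;
end;
/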