I have a RAC system with a DR setup. This is a test environment and it doesn't have any backups; DR shouldn't really be needed here, but it exists. Since this is a test system, a lot of archive logs get generated, and deleting them has become a daily manual job on this server.
I want a script to delete archive logs kept in a non-ASM (i.e. filesystem) location, after ensuring that each archive log has been applied on the standby database, ideally done only through RMAN.
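A minimal sketch of what such a job could look like, assuming 11g or later (where the APPLIED ON STANDBY deletion policy is available) and that it runs on the primary against the filesystem archive destination:

rman target / <<EOF
# only logs already applied on the standby become candidates for deletion
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
# refresh the controlfile's view of what is actually on disk
CROSSCHECK ARCHIVELOG ALL;
# DELETE honours the policy above, so unapplied logs are skipped with a warning
DELETE NOPROMPT ARCHIVELOG ALL;
EOF

Scheduled from cron, this removes only what the standby has already consumed.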
I have a DR setup with the following configuration
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
I don't want to back up the standby DB, but I do want the archive log files to be removed once they have been applied, so my flash recovery area does not fill up. Is there some command (RMAN or not) that can fire off this policy?
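With the APPLIED ON STANDBY deletion policy already configured (last line above), archive logs in the flash recovery area are marked reclaimable once applied, and Oracle can age them out automatically under space pressure. A hedged way to watch that, and to force a purge from a scheduled RMAN session:

-- how much of the flash recovery area is used vs. reclaimable
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$flash_recovery_area_usage;

-- explicit purge; with the policy set, RMAN only deletes logs already applied on the standby
-- RMAN> DELETE NOPROMPT ARCHIVELOG ALL;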
We are planning to set up Data Guard (maximum performance configuration) between two Oracle 9i databases on two different servers.
The archive logs on the primary server are deleted via an RMAN job based on a policy; I am just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, two days old the proper approach, or is there a built-in Data Guard option that would somehow allow archive logs that are no longer needed, or are two days old, to be deleted automatically?
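There is no built-in standby-side deletion policy in 9i, so a scheduled job is the usual approach; a hedged sketch of what it could verify before removing anything, run on the standby:

-- archived logs that managed recovery has already applied and that are older than two days
SELECT name
FROM   v$archived_log
WHERE  applied = 'YES'
AND    completion_time < SYSDATE - 2;
-- a cron script can remove those files at OS level, then tidy the controlfile records with
-- RMAN: CROSSCHECK ARCHIVELOG ALL; DELETE EXPIRED ARCHIVELOG ALL;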
We recently configured Data Guard on a test machine. Archives are not being applied on the physical standby. Where do I need to start the investigation?
Primary
SQL> select THREAD#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

   THREAD# MAX(SEQUENCE#)
---------- --------------
         1            301
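A few hedged first checks that usually narrow this down:

-- on the standby: is a managed recovery process (MRP0) running, and on which sequence?
SELECT process, status, thread#, sequence# FROM v$managed_standby;
-- on the standby: is there a gap that FAL could not resolve?
SELECT * FROM v$archive_gap;
-- if MRP0 is not listed, start managed recovery:
-- ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
-- on the primary: does the remote destination report an error?
SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;

The alert logs on both sides are the other obvious place to look.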
We have a request to configure data guard for databases on the production server. Here is my situation:
We have a backup strategy in place where the backups are being taken on a regular basis. Archive logs are deleted as soon as they are backed up.
My question now is: is there a way of configuring Data Guard such that there is no change to the existing RMAN backup strategy, and the archive logs are still duplicated to a second destination, where the RMAN backup process does not delete them even though it deletes them from the first destination?
We will be deleting the logs in the second archive destination using a script which checks if the logs were applied.
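A hedged sketch of that kind of setup, with a hypothetical path: a second local archive destination that the existing RMAN job leaves untouched, because DELETE INPUT removes only the copy it actually backed up:

-- second local destination written by the archiver alongside the existing one
ALTER SYSTEM SET log_archive_dest_2='LOCATION=/arch_copy VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_state_2='ENABLE' SCOPE=BOTH;

The existing job can keep using BACKUP ARCHIVELOG ALL DELETE INPUT, which deletes only the copy it backed up (normally dest_1); avoid DELETE ALL INPUT, which purges every enabled destination.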
I successfully created the standby database, and the archive logs were properly generated and shipped on both the primary and the standby databases. For the transfer of the archive logs to the STANDBY database I set FAL_CLIENT and FAL_SERVER in the pfile of the primary database, specifying the location of the primary and the standby respectively.
When I removed both parameters from the pfile of the primary database, the archive logs were still being transferred; however, if I am not wrong, they should not have been, since I had removed both parameters.
Why is there still a transfer of the archive logs to the standby database?
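If it helps frame the investigation: a hedged check of the parameters that actually drive the shipping. Redo transport to the standby is controlled by log_archive_dest_n / log_archive_dest_state_n on the primary, while FAL_SERVER / FAL_CLIENT are only consulted when the standby has to fetch a missing log (gap resolution), so removing them does not stop normal shipping.

SQL> show parameter log_archive_dest_2
SQL> show parameter log_archive_dest_state_2
SQL> show parameter fal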
Currently I am at the point where the configuration has been completed, and I just need to sync the standby database to the primary one. I can see in the log files that the archive logs are being shipped, but they are not applied on the standby system.
If I run "recover standby database;" manually in sqlplus I can see that it is trying to apply an archive log which is way too old (ORA-00279: change 9656498443 generated at 04/29/2008 08:45:08 needed for thread 1). I
n the alert log I can also see this error: Warning: Recovery target destination is in a sibling branch of the controlfile checkpoint. Recovery will only recover changes to datafiles.
At this point I was thinking that the standby database might be on a different incarnation compared to the primary, but this is not the case, they are both on incarnation 6:

6       6       MVF     4023175798      CURRENT 48493546257     13-06-21
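For reference, a hedged pair of checks that can be run on both sides to compare the resetlogs branch each controlfile sits on with what recovery expects:

-- incarnation history as recorded in each controlfile
SELECT incarnation#, resetlogs_change#, resetlogs_time, status
FROM   v$database_incarnation
ORDER  BY incarnation#;
-- current resetlogs branch and checkpoint of this controlfile
SELECT resetlogs_change#, checkpoint_change# FROM v$database;

If the resetlogs_change# differs between primary and standby, the "sibling branch" warning above would be consistent with a controlfile from a different branch.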
Which of the following views on the physical standby will give us correct information on synchronization with the primary database?
For example, when I checked v$archive_gap it did not return any rows, but the max(applied_seq#) from v$archive_dest_status was lagging far behind the max(sequence#) on the primary database.
select max(applied_seq#) from v$archive_dest_status where dest_id=2;
select max(sequence#) from v$archived_log where applied='YES';
select * from v$archive_gap;
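On 10gR2 and later, a hedged alternative is to ask the standby for its lag directly rather than comparing sequence numbers by hand:

-- run on the physical standby
SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');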
I have found an issue regarding log archiving to dest_1. Yesterday, sequence number 76871 was not archived to dest_1; the alert log content is as follows. I configure the standby and ship archives manually with the Windows copy command, and I need this archive to complete recovery on the standby database.
Mon Oct 21 09:29:28 2013
ARC2: Completed archiving log# 3 seq# 76869
Mon Oct 21 09:39:28 2013
Thread 1 advanced to log sequence 76871
Current log# 2 seq# 76871 mem# 0: D:\ORACLE\ORADATA\ORC1\REDO02.LOG
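A hedged way to locate the missing sequence and get it onto the standby (the file path is whatever the query returns):

-- did any destination receive the sequence, and does the file still exist?
SELECT dest_id, thread#, sequence#, name, archived, deleted, status
FROM   v$archived_log
WHERE  thread# = 1
AND    sequence# = 76871;
-- if a copy exists elsewhere, copy it to the standby host and make recovery aware of it:
-- ALTER DATABASE REGISTER LOGFILE '<path_of_copied_file>';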
1) How do I find an archive log gap from the primary database site?
2) Do I need DB_UNIQUE_NAME when setting LOG_ARCHIVE_DEST_2? What is the purpose of DB_UNIQUE_NAME in the LOG_ARCHIVE_DEST parameter?
3) If an archive log gap happens, the standby DB goes out of sync with the primary database. What does "out of sync" actually mean?
4) The primary DB knows where to transport redo data based on the location mentioned in LOG_ARCHIVE_DEST_n of the primary DB. Am I correct in my understanding?
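Two hedged illustrations for points 1 and 2 (the service name and unique name below are hypothetical):

-- 1) from the primary: what has been archived locally vs. received/applied at destination 2
SELECT dest_id, status, archived_seq#, applied_seq#
FROM   v$archive_dest_status
WHERE  dest_id = 2;
SELECT thread#, max(sequence#) FROM v$archived_log GROUP BY thread#;

-- 2) DB_UNIQUE_NAME ties the destination to one specific standby and must match that
--    standby's db_unique_name (and the DG_CONFIG list, if set)
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=stby_tns ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stbydb';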
If I take an RMAN archive log backup with the DELETE INPUT option, how will the archive logs be copied to the standby database?
For example, I am taking the archive log backup as:
RMAN>backup archivelog all delete input;
Now consider that a few archives were not copied to the standby database (due to a network issue): how will the standby receive these missing archives, given that they were deleted by the RMAN backup on the primary side?
I am not finding any documentation related to the above question.
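One hedged safeguard, assuming 11g where RMAN honours the archive log deletion policy: make RMAN refuse to delete logs the standby has not yet received or applied, so the gap-resolution (FAL) mechanism can still fetch them from the primary later.

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
# or, less strict: CONFIGURE ARCHIVELOG DELETION POLICY TO SHIPPED TO STANDBY;
RMAN> backup archivelog all delete input;
# logs still needed by the standby are backed up but not deleted (RMAN raises a warning instead)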
I have a primary database and a standby database, both in ASM. Recently my archive logs got deleted, and I am trying to recover my standby database with an incremental backup based on an SCN from the primary database. But I face the error below when I recover the standby database with the incremental backup taken on the primary database.
RMAN> recover database noredo;

Starting recover at 06-NOV-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +STDBY/11gdb/datafile/system.258.805921881
destination for restore of datafile 00002: +STDBY/11gdb/datafile/sysaux.259.805921967
destination for restore of datafile 00003: +STDBY/11gdb/datafile/undotbs1.260.805922023
destination for restore of datafile 00004: +STDBY/11gdb/datafile
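For context, a hedged outline of the usual SCN-based roll-forward (the staging path and format string are hypothetical, and the output above is cut off before the actual error, so this may or may not address it):

# on the standby: note the SCN to roll forward from
SQL> SELECT current_scn FROM v$database;
# on the primary: take the incremental from that SCN
RMAN> BACKUP INCREMENTAL FROM SCN <standby_current_scn> DATABASE FORMAT '/stage/stby_%U';
# on the standby, after copying the pieces over:
RMAN> CATALOG START WITH '/stage/stby_';
RMAN> RECOVER DATABASE NOREDO;
# a fresh standby controlfile from the primary is usually restored afterwards if datafiles were added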
I am facing a deadlock issue in my transaction when records are deleted from the same tables in parallel from a PL/SQL package. The package is called from Java code. The example and the table structure are given below.
Col1 of tab1 is a foreign key referencing col1 of tab2. No commit is issued until all the DML statements below have been executed in both sessions.
1st session:
Delete from tab1 where col1=1001; Delete from tab2 where col1=2001;
Result: Deletion successful
2nd session:
Delete from tab1 where col1=1002; Delete from tab2 where col1=2002;
Result: Deletion successful
1st session:
Delete from tab1 where col1=1003; Delete from tab2 where col1=2003;
Result: Deletion successful
2nd session:
Delete from tab1 where col1=1004; Delete from tab2 where col1=2004;
Result: The query hangs and keeps executing (waiting) for a long time.
1st session:
Delete from tab1 where col1=1003; Delete from tab2 where col1=2003;
Result: The query is not executed and a deadlock error is thrown in the back end.
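A hedged observation on this pattern, assuming tab1.col1 references tab2.col1 as described: a frequent cause of deadlocks with interleaved parent/child deletes is an unindexed foreign-key column, because each delete against the parent (tab2) then needs a table-level lock on the child (tab1), and two uncommitted sessions end up waiting on each other. Worth checking before anything else:

-- is the FK column indexed? (the index name below is hypothetical)
SELECT index_name, column_name
FROM   user_ind_columns
WHERE  table_name = 'TAB1'
AND    column_name = 'COL1';

CREATE INDEX tab1_col1_ix ON tab1 (col1);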
Maybe you can tell me whether you think it is right or wrong for Oracle to behave like that. What basically happened is that I set up a trigger to capture deleted rows from testtab into testtab_del.
I have inserted and immediately after - deleted a row from testtab. Then I inserted another row, committed, and checked my testtab_del table.
I've seen that val1 was inserted and committed into testtab_del. This happened in spite of the fact that this row never existed as a *committed row*.
What do you think about this behavior in this scenario? Works as designed or not?
SQL> show user
USER is "ANDREY"
SQL> col tcol for a10
SQL> drop table testtab;
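A hedged reconstruction of the scenario (column name taken from the col command above, values from the description; the trigger body is assumed):

CREATE TABLE testtab     (tcol VARCHAR2(10));
CREATE TABLE testtab_del (tcol VARCHAR2(10));

CREATE OR REPLACE TRIGGER testtab_del_trg
AFTER DELETE ON testtab
FOR EACH ROW
BEGIN
  INSERT INTO testtab_del VALUES (:old.tcol);
END;
/

INSERT INTO testtab VALUES ('val1');        -- not committed
DELETE FROM testtab WHERE tcol = 'val1';    -- trigger writes 'val1' into testtab_del
INSERT INTO testtab VALUES ('val2');
COMMIT;                                     -- commits the trigger's insert as well

SELECT * FROM testtab_del;                  -- shows 'val1', although 'val1' was never a committed row in testtab

The trigger's insert is simply part of the same transaction, so it is committed together with everything else, which is why 'val1' appears.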
We have physical Data Guard configured (version 10.2.0.4). We need to upgrade the primary & standby databases to 11g R2. Can we perform a rolling upgrade?
I have a partitioned table that is streamed to another database. I need to archive data on that table. That is I need to add a partition and remove a partition.
If I make those changes to the source table, will it stream over to the destination table?
If not, can I ...
Pause streaming, make the changes to the source table, make the same changes to the destination table, then re-enable streaming. I know making data changes to the destination table can screw up Streams, but I am not sure if that holds for DDL.
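A hedged sketch of that pause-change-resume approach (the capture, apply, table, and partition names are all hypothetical):

-- on the source: stop capture
BEGIN DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'SRC_CAPTURE'); END;
/
-- on the destination: stop apply
BEGIN DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'DST_APPLY'); END;
/
-- on BOTH databases, make the identical partition maintenance changes
ALTER TABLE part_tab ADD PARTITION p_2014 VALUES LESS THAN (TO_DATE('2015-01-01','YYYY-MM-DD'));
ALTER TABLE part_tab DROP PARTITION p_2010;
-- on the destination: restart apply
BEGIN DBMS_APPLY_ADM.START_APPLY(apply_name => 'DST_APPLY'); END;
/
-- on the source: restart capture
BEGIN DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'SRC_CAPTURE'); END;
/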
I need to store history for two tables in my system. I thought that Flashback Data Archive would be the best option. There are other ways to do this as well, but please don't focus on those; I need to do this with FDA (Flashback Data Archive).
So my prerequisite was to create a tablespace and a flashback archive, and to alter the table to be archived:
alter table teta_admin.t_prac flashback archive audit_flash_archive;
and everything works fine, but only as the SYS user: I can query this table using the "as of timestamp" clause
select prac_id, imie, imie_2, nazwisko, nr_ew from teta_admin.t_prac as of timestamp to_timestamp('2011-08-23 08:20:00','yyyy-mm-dd hh24:mi:ss')
But the final idea was to create an additional user (INTERFACE), grant SELECT on the teta_admin.t_prac object, and query the archived data as that interface user. And this is the point of my failure: it does not work for the new user.
The interface user has these system privileges:
SQL> SELECT * FROM dba_sys_privs
  2  WHERE grantee = 'INTERFACE';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
INTERFACE                      CREATE SESSION                           NO
and table privs:
SQL> SELECT * FROM dba_tab_privs
  2  WHERE grantee = 'INTERFACE';
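A hedged suggestion to test: an "as of timestamp" query on another schema's table generally needs the FLASHBACK object privilege (or FLASHBACK ANY TABLE) in addition to SELECT, so granting it to the interface user may be the missing piece:

GRANT FLASHBACK ON teta_admin.t_prac TO interface;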
Before I begin, I want to clarify that I am a newbie in data warehouse administration. I need to know how to calculate the size of the archive and redo logs on a data warehouse DB, in order to make an initial sizing of the DB at the disk level.
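One hedged way to approach it is to measure what a comparable or test workload actually generates per day and size the disks from that figure plus headroom:

-- GB of archived redo generated per day (run on an existing database with a similar workload)
SELECT trunc(completion_time)                                   AS day,
       round(sum(blocks * block_size) / 1024 / 1024 / 1024, 2)  AS archived_gb
FROM   v$archived_log
GROUP  BY trunc(completion_time)
ORDER  BY 1;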
Is there any way to back up Flashback Data Archive (FBDA) data so that it can be restored into a new database? I cannot find an Oracle document, or any other document, explaining how to back up this data.
I have configured a physical standby on my local system. To check log shipping I created a table on the primary DB; when I tried to check it on the standby, it says the table does not exist. Below are the primary & standby alert log entries.
Primary alert log
Fatal NI connect error 12514, connecting to:
 (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.98)(PORT=1522))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=STAND)(SERVER=dedicated)(CID=(PROGRAM=d:\oracle11g\app\administrator\product\11.1.0\db_1\bin\ORACLE.EXE)(HOST=A960M)(USER=SYSTEM))(SERVER=dedicated)))

  VERSION INFORMATION:
    TNS for 64-bit Windows: Version 11.1.0.6.0 - Production
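Two hedged checks worth running before digging further. First, a table created on the primary only becomes visible on the standby after the redo has been applied and the standby is opened read only, so confirm both on the standby:

SELECT open_mode, database_role FROM v$database;
SELECT process, status, sequence# FROM v$managed_standby;

Second, the ORA-12514 above suggests the listener on 172.16.0.98:1522 does not know a service called STAND, so redo may not be getting shipped at all; "lsnrctl status" / "lsnrctl services" on the standby host would confirm what the listener is actually registering.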