How To Back Up a Database Without Archived Logs
May 11, 2011
I have a question about Oracle database backup strategy. My question is:
how do I back up my Oracle database, called DB11G, without archived logs, while the database is open for user activity, in such a way that this backup can be the base for an incremental backup strategy?
Back up your entire database, without archived logs, while the database is open for user activity. This backup should be the base for an incremental backup strategy.
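A minimal RMAN sketch of such a base backup (assuming the database runs in ARCHIVELOG mode, which is what permits an open backup, and that RMAN is connected to the target as usual):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;

The explicit INCREMENTAL LEVEL 0 form, rather than a plain full backup, is what lets later level 1 incrementals use this backup as their parent; leaving out PLUS ARCHIVELOG is what keeps the archived logs out of the backup set.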
Assuming you have a 9i database, enabled in archive mode, yet constantly deleting the archived redo logs due to space constraints:
Will you be able to perform a full level 0 backup, and the following incremental backups, in the absence of the archived redo logs? And are these incremental backups enough to recover the database, or particular data files, at least to the point of the backup itself?
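For reference, a sketch of how such a recovery could work even without archived redo logs: take level 1 incrementals on top of the level 0, and at restore time tell RMAN to skip redo with NOREDO, which rolls the database forward only to the time of the last incremental (an OPEN RESETLOGS is then required):

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- taken periodically after the level 0
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE NOREDO;               -- applies incrementals, skips archived redo
SQL>  ALTER DATABASE OPEN RESETLOGS;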
I'm running Oracle 9i on AIX 5.2. I'm not using a recovery catalog, nor am I using media management software. I perform a full, online RMAN backup of the database and archived redo logs daily to disk, then use operating system commands to copy the backup to tape. There is only space on disk for two days' backups, so I need to have a retention policy of "redundancy = 1", and run a "delete obsolete" prior to the backup. The problem is that I don't want to subject the archived redo logs to this retention policy.
I have two physical standby databases connected by WAN to the primary site, and I might need archived redo logs that are a few days (or more) old in the event of a prolonged WAN outage. I've read about the "keep forever" option, but apparently it isn't available without using a recovery catalog. Is there any way to spare the archived redo logs from my retention policy?
Note: I want to "protect" the actual archived redo logs from the retention policy, not the backups of the archived redo logs.
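One possible workaround, offered as an assumption rather than a verified 9i recipe: instead of DELETE OBSOLETE, which applies the retention policy to the archived logs as well, delete only the aged datafile backups explicitly, so the archived logs on disk are never touched:

RMAN> DELETE BACKUP OF DATABASE COMPLETED BEFORE 'SYSDATE-1';
RMAN> DELETE BACKUP OF CONTROLFILE COMPLETED BEFORE 'SYSDATE-1';

The archived logs would then need their own cleanup job, run only after confirming that both standbys have applied them.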
I've got a database on Data Guard, and my primary (db1) is shipping files to my standby (db2) with no problems. However, when I query:
select sequence#, status, applied from v$archived_log;
I see this:

 SEQUENCE# S APP
---------- - ---
         4 A YES
         5 A YES
         6 A YES
         7 A YES
         8 A YES
         9 A YES
        10 A YES
        11 A YES
        12 A YES
        13 A YES
        14 A YES
...
So I did an ALTER SYSTEM SWITCH LOGFILE on db1, then looked again, and I can see new archived logs being applied. I thought all archived logs had to be applied on the standby, since this is the very foundation of the standby database. Am I going to run into trouble later if I have a failover (unsynchronized database)?
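A quick sanity check in this situation, sketched with standard views: look for unresolved gaps on the standby, and compare the highest applied sequence with the primary:

SQL> select * from v$archive_gap;   -- run on the standby; no rows means no gap
SQL> select max(sequence#) from v$archived_log where applied = 'YES';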
I'm practicing for the OCP test, and one of the questions says there is a backup from yesterday and the last archived logs are from the day before yesterday; it is not mentioned whether it's a cold or hot backup.
If it's a cold backup, can't we recover from it? Is it a must to have the archived redo logs when recovering a cold backup? That doesn't sound logical, since those logs are needed only for a hot backup.
The gap archived log is not being transported to the target database automatically. What can I do?
Sun May 27 06:32:59 2012
FAL[client]: Failed to request gap sequence
 GAP - thread 1 sequence 161-161
 DBID 1820932955 branch 775456988
FAL[client]: All defined FAL servers have been attempted.
-------------------------------------------------------------
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that is sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
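Two sketches for resolving this by hand: first check the parameter the message points at (14 days is an arbitrary example value), then, if the sequence can no longer be fetched automatically, copy it over from the primary and register it on the standby (the directory here is a hypothetical path):

SQL> show parameter control_file_record_keep_time
SQL> alter system set control_file_record_keep_time = 14;
SQL> alter database register physical logfile '/u01/arch/1_161_775456988.arc';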
1> Does Data Guard in 10g use FTP/rsh to transfer archived log files to the standby database, or some other protocol?
2> In my primary database, archives are being generated normally and there is no error in the alert log file. But the archives are not being transferred to the standby database. I am able to connect through the SYS user from the primary server to the standby database and vice versa.
Also, tnsping is working fine.
All was working fine till 2 days back, and no parameter has been changed on the database side. I am not able to transfer files manually through FTP to the standby server. Is that the problem? Or does Data Guard not use the FTP protocol to transfer the files?
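Data Guard ships redo over Oracle Net, using the service named in the LOG_ARCHIVE_DEST_n parameter, not FTP, so a broken FTP path would not by itself stop log transport. A first diagnostic sketch on the primary:

SQL> select dest_id, status, error from v$archive_dest where dest_id in (1, 2);

A non-VALID status or a populated ERROR column for the standby destination usually points at the cause.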
I have a sequence for one of my tables whose current value was 3000 yesterday, but today when I checked, I was surprised to see the value had changed to 50. Can I check who changed my sequence? Is there any data dictionary view that shows logs of modified database objects?
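One starting point, sketched with a hypothetical sequence name MY_SEQ: LAST_DDL_TIME in DBA_OBJECTS shows when the sequence was last altered or recreated, though not by whom; identifying the user would need auditing to have been enabled beforehand:

SQL> select object_name, created, last_ddl_time
       from dba_objects
      where object_name = 'MY_SEQ'
        and object_type = 'SEQUENCE';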
I am looking for a way to capture transactions (TXs) from one database and create a script, or use a tool, so we can capture a workload close to production while testing changes being implemented in the database.
Env:
OS: Red Hat Linux 5.0
DB: 10.2.0.4
Actual requirement: We have an active-active GoldenGate setup for one of our DBs, and once a quarter we make changes in the database using DDL (CREATE/ALTER of tables, functions, triggers, etc.). We are a 24/7 environment, and hence no downtime is affordable. What I would like to know is a way to capture all TXs from one of the DBs and create a script, so we can run them while testing the new changes in the database; something like the DBMS_WORKLOAD_CAPTURE package (available only from 11gR1 on) or a tool available on the market.
I also tried looking into LoadRunner, but felt it works at the app tier rather than the DB tier; I may be wrong there.
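For reference, a minimal sketch of the 11g API mentioned above (it is not available on 10.2.0.4; CAPTURE_DIR is a hypothetical directory object):

BEGIN
  -- record the live workload for later replay against the test system
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name     => 'QTR_CHANGE_TEST',
                                      dir      => 'CAPTURE_DIR',
                                      duration => 3600);  -- seconds
END;
/
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/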
I have created a physical standby database, and on it the logs are not getting applied. The following is an extract from the standby alert log:
Wed Sep 05 07:53:59 2012
Media Recovery Log /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Error opening /u01/oracle/oradata/ABC/archives/1_37638_765704228.arc
Attempting refetch
Media Recovery Waiting for thread 1 sequence 37638
Fetching gap sequence in thread 1, gap sequence 37638-37643
Wed Sep 05 07:53:59 2012
RFS[46]: Assigned to RFS process 3081
RFS[46]: Allowing overwrite of partial archivelog for thread 1 sequence 37638
RFS[46]: Opened log for thread 1 sequence 37638 dbid 1723205832 branch 765704228
Wed Sep 05 07:55:34 2012
RFS[42]: Possible network disconnect with primary database
However, the archived files are getting copied to the standby server. I tried registering and recovering the logs, but that also failed. Some of the information follows:
Primary: Oracle 11gR2 EE

SQL> select max(sequence#) from v$log where archived='YES';

MAX(SEQUENCE#)
--------------
         37668
...
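A sketch of the usual manual sequence on the standby for this state, reusing the path from the alert log above: stop managed recovery, register the re-fetched log so the standby controlfile knows about it, then restart apply:

SQL> alter database recover managed standby database cancel;
SQL> alter database register physical logfile '/u01/oracle/oradata/ABC/archives/1_37638_765704228.arc';
SQL> alter database recover managed standby database disconnect from session;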
I'm trying to back up archived logs using RMAN on a standby database. I'm able to back up archived logs using the simple command, and it completes successfully:

RMAN> BACKUP ARCHIVELOG ALL;

When I try it with the KEEP option on the physical standby database, it fails:

RMAN> BACKUP ARCHIVELOG ALL KEEP UNTIL TIME 'SYSDATE+100' TAG = 'TEST';
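If the failure is the common restriction that KEEP backups may not live in the flash recovery area, a sketched workaround (offered as an assumption about the cause; /backup is a hypothetical path) is to direct the backup outside it with FORMAT:

RMAN> BACKUP ARCHIVELOG ALL
        KEEP UNTIL TIME 'SYSDATE+100' TAG = 'TEST'
        FORMAT '/backup/arch_%U';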
I have an SAP primary database and also a standby DB that was working perfectly. We migrated the primary DB from Windows 2003 to Windows 2008 and brought the primary DB up. I had to create a control file, do a system copy, and reset the logs on the primary. All came up, and when I checked, the standby was receiving the logs; but after a month I see that it has not been applying them, and I think it stopped because of the sequence numbers reset by the RESETLOGS.
I did the following as per the attachment. My logs have been shipped across but not applied, but what worries me is the log sequence number on my primary:
SQL> select max(sequence#),thread# from gv$archived_log group by thread#;
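To compare both sides, a sketch of the matching query on the standby; note that an OPEN RESETLOGS on the primary starts a new incarnation whose sequence numbers begin again at 1, which an existing physical standby generally cannot follow without being recreated against the new branch:

SQL> select thread#, max(sequence#)
       from v$archived_log
      where applied = 'YES'
      group by thread#;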
I've created a database package containing a record type and one procedure. I want to execute or call this package from an Oracle Forms 6i form. How do I do this?
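Stored packages can be called from Forms trigger PL/SQL directly, provided the form's connected user has EXECUTE privilege on the package. A sketch with hypothetical names (my_pkg, my_proc, my_rec_type, and the block/item), assuming the record type is declared in the package specification:

-- e.g. in a WHEN-BUTTON-PRESSED trigger
DECLARE
  l_rec my_pkg.my_rec_type;          -- record type from the package spec
BEGIN
  l_rec.id := :my_block.my_item;     -- populate from a form item
  my_pkg.my_proc(l_rec);             -- ordinary stored-procedure call
END;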
I am using Release 11.2.0.3.0 - 64bit Production version of Oracle. We have a 3-tier architecture (firewall/web/app/DB). I saw some SQL queries running for ~10 hours in my database, and they are part of the application (module JDBC THIN CLIENT). After a talk with the Java guys, they asked me to kill the sessions for those queries. The queries are part of a search, in which the user puts some large values in the date range and then moves to another tab, but the queries keep running indefinitely in the database even though the user is not interested in the result set.
So how do I avoid this? In the past, our database has suffered resource contention leading to application slowness. I was therefore planning to set different timeouts, using database resource consumer groups, for online user requests and batch requests, depending on the app server (that is, by machine name) making the request.
So I have done the setup below in my local environment to test one scenario, in which I give a database call from a different machine and it should time out after the specified duration. But it's not working as expected: the calls from the specified machine are not getting assigned to the created consumer group.
BEGIN
  -- create the pending area
  dbms_resource_manager.create_pending_area();
END;
/
BEGIN
  -- create the consumer group
...
After this, when I verify calls from machine 'LR9XY7T8', they belong to the consumer group 'OTHER_GROUPS', and the SQL query is not timed out within 60 seconds as intended.
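For comparison, a sketch of the full sequence with assumed names (ONLINE_GRP, APP_PLAN, and APPUSER are hypothetical). Two common reasons sessions land in OTHER_GROUPS are a missing machine-to-group mapping rule and a missing switch-consumer-group grant for the connecting user; the plan also has to be activated:

BEGIN
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'ONLINE_GRP',
    comment        => 'online search requests');
  -- map sessions coming from that client machine to the group
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.client_machine,
    value          => 'LR9XY7T8',
    consumer_group => 'ONLINE_GRP');
  dbms_resource_manager.create_plan(
    plan    => 'APP_PLAN',
    comment => 'per-call timeout for online requests');
  dbms_resource_manager.create_plan_directive(
    plan             => 'APP_PLAN',
    group_or_subplan => 'ONLINE_GRP',
    comment          => 'cancel calls running longer than 60s',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 60,
    switch_for_call  => TRUE);
  dbms_resource_manager.create_plan_directive(
    plan             => 'APP_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else');
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
  -- without this grant, sessions silently stay in OTHER_GROUPS
  dbms_resource_manager_privs.grant_switch_consumer_group(
    grantee_name   => 'APPUSER',
    consumer_group => 'ONLINE_GRP',
    grant_option   => FALSE);
END;
/
ALTER SYSTEM SET resource_manager_plan = 'APP_PLAN';

The string matched by CLIENT_MACHINE is the value that shows up in V$SESSION.MACHINE for the JDBC session, which can carry a domain suffix, so it is worth checking that column before hard-coding 'LR9XY7T8'.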
My database is open and in NOARCHIVELOG mode. It was working fine, but for the last 2 to 3 days it has been throwing ORA-00308 (cannot open archived log) together with an ORA-00600:

ORA-00308: cannot open archived log 'D:\HFTEST\ARCHIVE\ARC1_779994432.1'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
ORA-00600: internal error code, arguments: [kewrsp_split_partition_2], [87], [902828405], [11905], [], [], [], [], [], [], [], []

I don't understand why this is happening; I googled it but didn't find anything.
I'm trying to write a PL/SQL block to find all missing archived logs that are needed for Streams replication.
There is already an Oracle Metalink note for this, but it gives only the archived log name that contains my dba_capture.start_scn, and we also need to check whether the files actually exist on disk.
The problem here is that, when using ASM, the dba_registered_archived_log view truncates the file name, and it is really difficult to pinpoint the logs. So is it fine to join this view with v$archived_log? Would the DELETED and STATUS columns do the trick? I modified the PL/SQL as below. Is this fine/accurate?
dbms_output.put_line('Capture will restart from SCN ' || lScn || ' in the following file:');
for cr in (select decode(a.name, NULL, 'NOT FOUND', a.name) name,
                  to_char(a.completion_time, 'hh24:mi:ss') completion_time
             from v$archived_log a, dba_registered_archived_log b
            where lScn between b.first_scn and b.next_scn
              and a.deleted = 'YES'
              and a.status != 'A')
loop
  f_rec := 1;
...
I have recently installed Oracle 11gR2 Standard Edition on AIX 6.1. The database is in archivelog mode, but the archived logs are generated in a different folder each day, named after that day's date.
I want all archived logs to be generated in a single folder.
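Per-date subfolders are what the fast recovery area (USE_DB_RECOVERY_FILE_DEST) produces by default. A sketch of pointing archiving at one fixed directory instead (/u01/arch is a hypothetical path; LOG_ARCHIVE_FORMAT is a static parameter, so it takes SCOPE=SPFILE and a restart):

SQL> alter system set log_archive_dest_1 = 'LOCATION=/u01/arch' scope=both;
SQL> alter system set log_archive_format = '%t_%s_%r.arc' scope=spfile;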
ORA-16038: log 2 sequence# 284 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1: '/redo02.log'
USER (ospid: 792): terminating the instance due to error 16038
Fri Nov 02 13:14:19 2012
System state dump requested by (instance=1, osid=792), summary=[abnormal instance termination].
Fri Nov 02 13:14:19 2012
ARC3 started with pid=23, OS id=822
Errors in file _ora_792.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 10240 bytes is 100.00% used, and has 0 remaining bytes available.
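The instance terminated because the recovery area filled and redo could no longer be archived. A sketch of the usual way out, once the instance is mounted: free space by deleting archived logs that are already backed up, and/or enlarge the recovery area (20G is an arbitrary example value):

RMAN> DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;
SQL>  ALTER SYSTEM SET db_recovery_file_dest_size = 20G SCOPE=BOTH;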