RMAN :: ORA-00258 / Manual Archiving In NOARCHIVELOG Mode Must Identify Log
Jan 27, 2013
My RMAN backup failed with the error below.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 01/26/2013 22:48:56
RMAN-11003: failure during parse/execution of SQL statement: alter system archive log current
ORA-00258: manual archiving in NOARCHIVELOG mode must identify log
RMAN>
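ORA-00258 here means the backup script issued ALTER SYSTEM ARCHIVE LOG CURRENT against a database running in NOARCHIVELOG mode. If archiving is actually wanted, the mode can be checked and enabled along these lines (a minimal SQL*Plus sketch; SYSDBA privileges assumed):

SQL> SELECT log_mode FROM v$database;

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;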
4. I have tried to log in locally (not using @<dbasename>), but I get the same error message as above. I have set ORACLE_SID and LOCAL to the database name and still get the same TNS name error.
Starting backup at 06-MAY-13
channel ch00: starting compressed incremental level 0 datafile backup set
channel ch00: specifying datafile(s) in backup set
RMAN-03009: failure of backup command on ch00 channel at 05/06/2013 07:09:16
[Code]....
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ch00 channel at 05/06/2013 07:09:16
ORA-19602: cannot backup or copy active file in NOARCHIVELOG mode
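ORA-19602 is raised because an open-database backup is being attempted while the database is in NOARCHIVELOG mode; in that mode RMAN can only take a consistent backup with the database mounted. A minimal RMAN sketch, assuming a default target connection:

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> BACKUP DATABASE;
RMAN> ALTER DATABASE OPEN;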
On our 10.2.0.5 database, when we run a full backup, system performance comes to a halt. We run the full backup and then a backup validation to validate the structure of the database. Database performance takes a hit and all of the application connections go into wait mode. On ASH or AWR, this is the top wait I see:
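For reference, a query along these lines lists the top waits from recent ASH samples (a sketch; the one-hour window is an assumption):

SELECT event, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSTIMESTAMP - INTERVAL '1' HOUR
   AND event IS NOT NULL
 GROUP BY event
 ORDER BY samples DESC;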
The customer wants the RMAN recovery catalog database to be highly available, so that none of the RMAN database backup jobs are impacted at the time of taking the backups. There are 200+ databases running on OEL, RHEL and Windows. So we planned to host the recovery catalog database on Oracle Active Data Guard 11.2.0.1 Enterprise Edition on Red Hat EL 5.8, on two physical servers.
The primary instance will be on one server in the primary DC and the standby instance will be on another server in another DC. Also, all the database datafiles are hosted in ASM disk groups on SAN (DATA, FRA, REDO and ARCH disk groups). Are there any specific RPM, patch, or OS-level custom settings or configurations needed?
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

Redo logs are multiplexed. [code]....
+ When redo log group 5 was archived, how does the archiving process work?
+ Say both log members are clean (in sync); in that case, which one will be archived, 5a or 5b?
+ I can also see that only one archived log is created during a log switch.
I'm currently working on a project to archive the old data and then purge the same data from the main table.
Here is a detailed description:
There are around 50-odd tables from which I would need to archive the old data (matching certain filter conditions, not date-based), meaning I have to store the data in a temp table. Once it is stored in the temp table, I would have to delete those rows from the main table. This temp table will later be exported and stored on the archive database (a separate database). These tables are very huge; one of the tables is actually 250 GB in size, and all of them have many indexes built, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of 540 million rows in total. On this table alone there are 50 bitmap indexes and 2 normal indexes. The table is partitioned on a date column, but that date column is not useful in identifying the old data. There are around 20 tables quite similar in size to the one described above; the rest are a little smaller.
We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle this activity? Most importantly, we should be able to complete it within the specified 48-hour window.
The solution we are now thinking of is:
1. Create the temp table: CREATE TABLE tmp_tbl AS SELECT * FROM main_table WHERE <<conditions identifying old data>>
2. Once the temp table is created, make a copy of the definitions of the indexes that exist on the main table and then drop the indexes.
3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows (a sketch of this step follows the list).
4. Once the bulk delete is finished, recreate the indexes on the main table using the copies made in step 2.
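For step 3, a minimal sketch of the batched delete (main_table, tmp_tbl, and the pk_id join column are placeholders consistent with the plan above):

DECLARE
  v_deleted PLS_INTEGER;
BEGIN
  LOOP
    DELETE FROM main_table m
     WHERE EXISTS (SELECT 1
                     FROM tmp_tbl t
                    WHERE t.pk_id = m.pk_id)  -- pk_id: hypothetical key column
       AND ROWNUM <= 100000;                  -- batch size: 100,000 rows
    v_deleted := SQL%ROWCOUNT;
    COMMIT;
    EXIT WHEN v_deleted = 0;                  -- stop once nothing is left to delete
  END LOOP;
END;
/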
Our main worry is step 4. Considering the size of these tables and the number of indexes to be built, we are not sure how long the index re-creation will run for each table.
Depending on the possibilities, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then we are not sure whether we will be able to pull off this activity.
I am working on an archiving strategy. I want to roll off transactions that are older than seven days, but only if they are flagged as Completed. The number of transactions is very large, so this is a worthwhile venture.
The only strategy I have been able to come up with so far is to partition on date. Then, when day 7 comes up, sweep the about-to-be-archived day for the few remaining not-Completed transactions, put those into a new table (a new version of this partition), and switch partitions. Each day I do this until the older partitions are empty.
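A sketch of that daily switch using partition exchange (table, partition, and staging-table names are hypothetical):

-- copy the few remaining not-Completed rows into a fresh staging table
CREATE TABLE txn_stage AS
  SELECT * FROM transactions PARTITION (p_day7)
   WHERE status <> 'COMPLETED';

-- swap segments: the partition now holds only the not-Completed rows,
-- and txn_stage holds the full old day, ready to be archived
ALTER TABLE transactions
  EXCHANGE PARTITION p_day7 WITH TABLE txn_stage;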
I just want to write some data into a particular table, but I don't want it to be archived. So is it possible to disable archiving at the session level? Oracle 11g Release 2.
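There is no session-level switch that turns off archiving, but redo (and hence archived log) generation for such loads can be minimized with NOLOGGING and direct-path inserts; a sketch (table names hypothetical; note that FORCE LOGGING at the database level overrides NOLOGGING):

ALTER TABLE staging_data NOLOGGING;

INSERT /*+ APPEND */ INTO staging_data  -- direct-path load, minimal redo
  SELECT * FROM source_data;
COMMIT;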
When I run a form, no information shows up until I click Execute Query. I need the information to be there automatically, so I can browse with the Previous and Next buttons.
Physical memory: 420G. My database version: 11.2.0.3, running on a Linux machine.
MEMORY_TARGET = 200G. I would like to allocate this value across the individual SGA components; I don't want automatic memory management enabled. How do I split 200G across the components? Is there a recommended percentage for each component?
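There is no universal percentage; it depends on the workload. For reference, manually sized SGA components are set along these lines (the split below is purely illustrative, not a recommendation):

ALTER SYSTEM SET memory_target = 0 SCOPE = SPFILE;      -- disable AMM
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;         -- disable ASMM as well
ALTER SYSTEM SET sga_max_size = 160G SCOPE = SPFILE;
ALTER SYSTEM SET db_cache_size = 110G SCOPE = SPFILE;
ALTER SYSTEM SET shared_pool_size = 40G SCOPE = SPFILE;
ALTER SYSTEM SET large_pool_size = 4G SCOPE = SPFILE;
ALTER SYSTEM SET java_pool_size = 2G SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 40G SCOPE = SPFILE;
-- restart the instance for the static parameters (e.g. sga_max_size) to take effect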
The DB was created, but when I log in again, it looks for the pfile in another DB's location rather than in "%ORACLE_HOME%\DATABASE". Where did I go wrong in creating the new ORACLE_HOME folder?
I am running some small tests here in my test environment, using Data Guard. I have configured the primary and standby with Maximum Availability, and they're running just fine. Now I want to execute a failover test (I have already run a switchover test with the Broker successfully).
My question is very simple, from my point of view: what are the required steps to execute a successful manual failover? For example, my environment is as follows:
- Primary: prim1
- Standby: stdb1
Suppose that the primary database crashes in an unrecoverable way; in this case a manual failover would be necessary.
To do so, I would have to execute the following command against my standby database:
-- stdb1 is the standby database...
DGMGRL> failover to stdb1
Is the above command correct? Are there any required configurations after the failover? I read the Oracle docs, and it says ...
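For completeness, the broker sequence including reinstatement of the failed primary would look like this (a sketch; reinstate requires flashback database to have been enabled on prim1 beforehand):

DGMGRL> FAILOVER TO stdb1;
-- once the old primary host is repaired and the database is mounted:
DGMGRL> REINSTATE DATABASE prim1;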
Those SQL reports are built with apex_item calls. Afterwards I update them with a manual process (an apex_application.g_fxx FOR .. LOOP). I'm using different IDs for the apex_item calls on the two reports.
It works fine for the first report displayed, but not for the second one. If I swap the two reports, it is always the first one that gets updated.
Does this mean that we cannot use apex_application to update more than one report, and that I have to use one page per report?
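For reference, the g_fxx update pattern in question looks roughly like this (a sketch; the table, columns, and the assignment of f01/f02 to key and value are hypothetical):

BEGIN
  -- the g_f01..g_f50 arrays are populated from items named f01, f02, ... on submit
  FOR i IN 1 .. apex_application.g_f01.COUNT LOOP
    UPDATE some_table
       SET some_column = apex_application.g_f02(i)  -- edited value
     WHERE id = apex_application.g_f01(i);          -- hidden key
  END LOOP;
END;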
Is it possible to create a failover setup without RAC and DG? For example:
I have an 11.2.0.2 database (with EBS 12.1.3) on dbnode1. I would like to create another node to fail my primary database over to in case of any failure.
Steps I will follow:
1. Create dbnode2.
2. Install the same OS as dbnode1.
3. Install the same Oracle software as dbnode1.
4. Share the dbnode2 database between dbnode1 and dbnode2.
Now, if the hardware fails on dbnode1, can I manually fail over and start my database on dbnode2?
I know we can do this with RAC and DG, but can it be done without RAC and DG, if that is possible at all?
I am creating a database manually in Oracle 9i (OS: IBM AIX 5.3) and it shows the following error:
CREATE DATABASE "varathu"
*
ERROR at line 1:
ORA-01501: CREATE DATABASE failed
ORA-00200: controlfile could not be created
ORA-00202: controlfile: '/backup/varathu/control01.ctl'
ORA-27040: skgfrcre: create error, unable to create file
IBM AIX RISC System/6000 Error: 13: Permission denied
I have some tablespaces set to manual and others to automatic; I just want to know which one is recommended, so I can change them to the best setting.
How do I change the segment space management of a tablespace from MANUAL to AUTO in Oracle 10g R2? I would also like to know the main difference between manual and automatic segment space management.
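Note that segment space management is fixed when a tablespace is created and cannot be altered in place; the usual route is to create a new ASSM tablespace and move segments into it. A sketch (names and paths hypothetical):

-- new locally managed tablespace with automatic segment space management
CREATE TABLESPACE users_assm
  DATAFILE '/u01/oradata/orcl/users_assm01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- move each segment across, then rebuild its indexes (they go UNUSABLE on move)
ALTER TABLE my_table MOVE TABLESPACE users_assm;
ALTER INDEX my_table_idx REBUILD TABLESPACE users_assm;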
We have a two-node RAC installation on IBM AIX 5.3 with Oracle Standard Edition. We need to create a manual standby database (because Standard Edition does not allow a Data Guard configuration). Can we do this on a different platform (e.g., Linux/Windows)? This is to make use of existing servers for the standby environment.
I am running a report which is called for a set of employees picked up in a cursor. For each employee the report is run in a different window. The report runs fine and the output gets saved on the local machine, but the window doesn't close automatically.
How can I close the report windows automatically, without manual intervention?
We want to truncate an Oracle table in the Oracle DB. After the truncate, the fact table will be loaded again. After the new load into the fact table, we want to tell the TimesTen DB to refresh the cache table. The cache table belongs to a user-owned read-only cache group with no autorefresh. We want to tell TimesTen, from a PL/SQL block in the Oracle DB, to start the refresh of the cache group in TimesTen. The refresh should not be an autorefresh, because it should only start once the fact table has been reloaded after the truncate.
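On the TimesTen side, a manual refresh of a non-autorefresh read-only cache group is issued with REFRESH CACHE GROUP; a sketch, with the cache group name hypothetical:

-- run against the TimesTen datastore after the Oracle fact table is reloaded
REFRESH CACHE GROUP cacheadm.fact_cg COMMIT EVERY 256 ROWS;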
I'm trying to get a unique sequential element ID for every field in a manual tabular form. Here is the code that generates the form:
SELECT apex_item.hidden(1, lpad(rownum, 4, 0)) row_num,
       apex_item.hidden(2, skillset_demand_id) sdi,
       apex_item.text(3, domain, 20, 20, null, 'f03_#row_num#') Domain,
       apex_item.text(4, target, 20, 20) Specialization,
       apex_item.text(5, skill, 20, 20) Skill
  FROM RI_SKILLSET_DEMAND
 WHERE WORK_ID = :P511_WORK_ID

According to the APEX API manual I should be able to set the id attribute with p_item_id, which for a text item is the sixth parameter. In line 3 of the code above, I set p_item_id to 'f03_#row_num#' after having created row_num in line 1.
The HTML I get is:

<input id="f03_#row_num#" type="text" value="xxxxxxxxx" maxlength="20" size="20" name="f03">
...
<input id="f03_#row_num#" type="text" value="xxxxxxxxx" maxlength="20" size="20" name="f03">

What I want is:

<input id="f03_0001" type="text" value="xxxxxxxxx" maxlength="20" size="20" name="f03">
...
<input id="f03_0002" type="text" value="xxxxxxxxx" maxlength="20" size="20" name="f03">
...
etc.
I've tried a lot of things, but I can't seem to get parameter substitution inserted into the id attribute so that I can address each element uniquely with JavaScript later.
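One thing worth trying (a sketch, not verified on this exact APEX version): #row_num# is never expanded inside a SQL string literal, so build the id directly in SQL instead:

SELECT apex_item.hidden(1, lpad(rownum, 4, '0')) row_num,
       apex_item.hidden(2, skillset_demand_id) sdi,
       apex_item.text(3, domain, 20, 20, null,
                      'f03_' || lpad(rownum, 4, '0')) Domain,  -- unique id per row
       apex_item.text(4, target, 20, 20) Specialization,
       apex_item.text(5, skill, 20, 20) Skill
  FROM RI_SKILLSET_DEMAND
 WHERE WORK_ID = :P511_WORK_ID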