Redo Multiplexing And Archiving
Aug 21, 2012
Database version
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
Redo logs are multiplexed.
[code]....
+ When redo log group 5 was archived, how did the archiving process work? Say both log members are clean; in that case, which one will be archived, 5a or 5b?
+ I can also see that only one archive log is created during a log switch.
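For reference, a hedged way to check this from the data dictionary (ARCn reads whichever valid member of the group is available, so only one archived copy per sequence is produced for each destination):

-- Members of each online log group; any valid member can be the archive source
SELECT group#, member, status
FROM   v$logfile
ORDER  BY group#, member;

-- One archived copy per log sequence and destination, regardless of the member count
SELECT sequence#, dest_id, COUNT(*) AS copies
FROM   v$archived_log
GROUP  BY sequence#, dest_id
ORDER  BY sequence#;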
View 2 Replies
Mar 1, 2011
I've been using ASM for a few years now and have always installed a new system with 3 diskgroups:
+DATA - for datafiles, control files, redo logs
+FRA - for archive logs, flash recovery area, RMAN backups
Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
In the olden days (all those 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance, with dual-write overhead that is not necessary?
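For what it's worth, a minimal sketch of that layout (the diskgroup contents and path names here are illustrative assumptions):

-- Multiplex the control file across the two diskgroups (SPFILE change, restart required)
ALTER SYSTEM SET control_files =
  '+DATA/prod/controlfile/control01.ctl',
  '+ONLINE/prod/controlfile/control02.ctl' SCOPE = SPFILE;

-- Multiplex each online redo log group across +DATA and +ONLINE
ALTER DATABASE ADD LOGFILE GROUP 4 ('+DATA', '+ONLINE') SIZE 512M;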
View 4 Replies
View Related
Apr 22, 2013
I'm currently working on a project which is to archive the old data and then purge the same data from the main table.
Here is a detailed description:
There are around 50-odd tables from which I would need to archive the old data (matching certain filter conditions, not date based), meaning I have to store that data in a temp table. Once it is stored in the temp table, I would have to delete those rows from the main table. The temp table will later be exported and stored in an archive database (a separate database). These tables are very huge; one of them is actually 250 GB in size, and all of them have many indexes, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of 540 million rows in total. On this table alone there are 50 bitmap indexes and 2 normal indexes. It is partitioned on a date column, but that column is not useful for identifying the old data. Around 20 tables are quite similar in size to this one; the rest are a little smaller.
We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle this activity? Most importantly, we should be able to complete it within the specified 48-hour window.
The solution we are now thinking of is:
1. Create the temp table: CREATE TABLE tmp_tbl AS SELECT * FROM main_table WHERE <<conditions identifying old data>>
2. Once the temp table is created, save the definitions of the indexes that exist on the main table and then drop those indexes.
3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows (see the sketch below).
4. Once the bulk delete is finished, recreate the indexes on the main table using the definitions saved in step 2.
Our main worry is step 4. Considering the size of these tables and the number of indexes to be built, we are not sure how long the index re-creation will run for each table.
Depending on the possibilities, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then, we are not sure whether we will be able to pull it off.
The database we are using is Oracle 10g.
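A rough sketch of the batched delete in step 3, assuming a single placeholder condition identifies the old data (table, column and literal names here are illustrative, not the real ones):

DECLARE
  v_rows PLS_INTEGER;
BEGIN
  LOOP
    DELETE FROM main_table
    WHERE  archive_flag = 'Y'      -- placeholder for the real old-data conditions
    AND    ROWNUM <= 100000;       -- batch size: commit every 100,000 rows
    v_rows := SQL%ROWCOUNT;
    COMMIT;
    EXIT WHEN v_rows = 0;          -- stop when nothing is left to delete
  END LOOP;
END;
/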
View 1 Replies
View Related
Jan 29, 2011
I am working on an archiving strategy. I want to roll off transactions that are older than seven days, but only if they are flagged as Completed. The number of transactions is very large, so this is a worthwhile venture.
The only strategy I have been able to come up with so far is to partition on date. Then, when a day reaches the 7-day mark, sweep the about-to-be-archived day for the few remaining not-Completed transactions, put those into a new table (a new version of that partition), and exchange partitions. I repeat this each day until the older partitions are empty.
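A hedged sketch of that daily swap, assuming a daily range partition and a status column (every name below is illustrative):

-- Keep only the rows that are not yet Completed in a staging copy of the day
CREATE TABLE txn_stage AS
SELECT * FROM transactions PARTITION (p_20110122)
WHERE  status <> 'COMPLETED';

-- Swap segments: the partition now holds only the leftovers,
-- and txn_stage holds the full old day, ready to be archived off
ALTER TABLE transactions
  EXCHANGE PARTITION p_20110122 WITH TABLE txn_stage
  WITHOUT VALIDATION;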
View 7 Replies
View Related
Jul 30, 2013
I just want to write some data into a particular table, but I don't want it to be archived. So is it possible to disable archiving at the session level? Oracle 11g Release 2.
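There is no session-level switch that turns archiving off, but as a hedged alternative, a direct-path load into a NOLOGGING table generates only minimal redo (provided the database is not in FORCE LOGGING mode; the table names below are placeholders):

ALTER TABLE scratch_data NOLOGGING;

-- Only direct-path (APPEND) inserts skip full redo; conventional DML is still logged
INSERT /*+ APPEND */ INTO scratch_data
SELECT * FROM source_data;
COMMIT;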
View 3 Replies
View Related
Jan 27, 2013
My RMAN backup failed with the error below.
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 01/26/2013 22:48:56
RMAN-11003: failure during parse/execution of SQL statement: alter system archive log current
ORA-00258: manual archiving in NOARCHIVELOG mode must identify log
RMAN>
Recovery Manager complete.
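ORA-00258 means the database is running in NOARCHIVELOG mode, so "alter system archive log current" has nothing to archive. If the goal is online RMAN backups, one hedged fix is to switch the database to ARCHIVELOG mode (requires a clean restart):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST    -- confirm "Archive Mode" and the archive destination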
View 2 Replies
View Related
Oct 1, 2011
What exactly does a redo log file contain? Does it contain the DDL or DML statements, or more than that?
View 1 Replies
View Related
Dec 28, 2012
I look after a database that contains GIS mapping data. We do not use Oracle Spatial - it's just a plain Oracle Standard DB. It is running in Noarchivelog mode (I know - it's not a good idea, but will be sorted when our new Sun T4-1 arrives).
There are only a couple of users who actually edit data in the database, but about 100 simultaneous users who access it. In day-to-day use we have no performance issues. The DB has three 50 MB redo log groups, and these switch about every hour or so during normal use.
Every few weeks we do a bulk update of our underlying map data. This involves putting about 4 GB of data into the database (which is about 15 GB in total). This takes about 5 hours, and whilst I'm sure the lack of power in our old Sun V240 server is a substantial cause, I think the lack of redo space makes matters worse. Last time we did this, the system clocked just over 200 redo log switches in 5 hours, and there were lots of "Checkpoint Incomplete" messages in the log file too.
The software we use to load the map data doesn't allow the data to be loaded with a nologging switch.
I could resize the redo logs, but if I size them for the update workload - 3 x 500 MB - we'll have some days where we don't get a redo log switch at all. Is this necessarily a problem?
The alternative I'm thinking of is prior to performing this update, we add an extra redo log group with a 1Gb file, run the update, then remove the redo log group and delete the file afterwards. Is there anything wrong with this approach?
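A hedged sketch of the temporary-group idea (the file path is an assumption):

-- Before the bulk map load: add a large scratch group
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/gis/redo04.log') SIZE 1G;

-- After the load, once v$log shows group 4 as INACTIVE:
ALTER DATABASE DROP LOGFILE GROUP 4;
-- DROP LOGFILE GROUP does not remove the file, so delete redo04.log at the OS level afterwards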
View 1 Replies
View Related
Aug 22, 2012
What is the optimal redo log size for the database, and how many log files are required if we want to enable archive log mode? What value should fast_start_mttr_target have? I think that if this parameter is set, we can use the redo log size advisor to find the optimal redo log size. We have 2 redo log groups with 2 members each, sized 1 GB. Will this degrade database performance?
Database version 11.1.0.7
Oracle apps R12
OS : Linux Redhat 5.5
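A hedged sketch of driving the redo log size advisor (the MTTR value is illustrative):

ALTER SYSTEM SET fast_start_mttr_target = 300;   -- target crash-recovery time in seconds

-- Suggested redo log file size, in MB, for the current MTTR target
SELECT optimal_logfile_size FROM v$instance_recovery;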
View 1 Replies
View Related
Nov 14, 2006
How would I be able to calculate the redo rate for use in the required bandwidth formula seen below?
Required bandwidth Formula ((Redo rate in bytes per second / 0.7) * 8) / 1,000,000 = bandwidth in Mbps
Example: 385 KB/sec peak rate would require an available network bandwidth of at least
((394253 / 0.7) * 8) / 1,000,000 = 4.5 Mbps.
Source of formula: Network Bandwidth Implications of Oracle Data Guard... I'm using Oracle 10g.
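One hedged way to estimate the peak redo rate from the archived log history (assumes ARCHIVELOG mode; BLOCKS * BLOCK_SIZE approximates the redo written per archived log):

SELECT first_time,
       ROUND((blocks * block_size) / ((next_time - first_time) * 86400)) AS redo_bytes_per_sec
FROM   v$archived_log
WHERE  first_time > SYSDATE - 7
AND    next_time  > first_time
ORDER  BY redo_bytes_per_sec DESC;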
View 3 Replies
View Related
Sep 16, 2010
What are the advantages of using more than one redo log thread? Currently I am set up with 3 log file groups and 3 members in each group, i.e.:
logfile group 1 ('/u01/oradata/pri/redo01a.dbf',
'/u02/oradata/pri/redo01b.dbf',
'/u03/oradata/pri/redo01c.dbf') size 30M,
group 2 ('/u01/oradata/pri/redo02a.dbf',
'/u02/oradata/pri/redo02b.dbf',
'/u03/oradata/pri/redo02c.dbf' ) size 30M,
group 3 ('/u01/oradata/pri/redo03a.dbf',
'/u02/oradata/pri/redo03b.dbf',
'/u03/oradata/pri/redo03c.dbf' ) size 30M
View 4 Replies
View Related
Jun 13, 2012
I want to know whether redo log members are mirror copies. Do all member files of the same redo group have the same data? Is there any difference between mirroring and multiplexing a file?
View 4 Replies
View Related
Mar 20, 2012
I want to resize the redo log groups on my production database. I have 10 redo log groups of 50 MB, each having 2 members. I want 4 new redo log groups of 250 MB, each having 2 members, and then I will drop the old 10 redo log groups (50 MB), so that I end up with only 4 redo log groups of 250 MB, each having 2 members. But I have a physical standby and a logical standby configured on the production database.
Please find attached the redo log configuration of the production database (CBSPROD), the logical standby database (CBSMIS), and the physical standby database (CBSDR).
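A hedged sketch of the usual sequence on the primary (file names are assumptions; with the standbys in place, standby_file_management and matching changes on CBSMIS and CBSDR should be checked first):

-- Add the four new 250 MB groups
ALTER DATABASE ADD LOGFILE GROUP 11
  ('/u01/oradata/CBSPROD/redo11a.log', '/u02/oradata/CBSPROD/redo11b.log') SIZE 250M;
-- ...repeat for groups 12, 13 and 14...

-- Then, for each old 50 MB group, switch until it shows INACTIVE in v$log and drop it
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;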
View 1 Replies
View Related
Aug 8, 2012
I'm using Oracle 10gR2 (10.2.0.4.0) 64 bits.
I have gotten the Oracle ORA-00494 error many times and the database went down, but since the 29th of July the database has not been killed.
The error message is below:
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by inst 1, osid 176484
ORA-00028: your session has been killed
My database is a data warehouse of many terabytes.
Initially the redo log size was 500 MB and I've set it to 3 GB. At peak, the logs switch as often as every 5 minutes; I want them to switch only every 20 or 30 minutes.
To obtain the size of redo logs I've executed this query :
SQL> select OPTIMAL_LOGFILE_SIZE from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
--------------------
54763
Isn't 53.5 GB very big for a redo log size? What's the maximum size of a redo log? What are the requirements for setting a very large redo log size? Which precautions should I take beforehand? What are the risks? Are there any other ways to change the log switch frequency?
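As a hedged cross-check before resizing, the actual switch frequency can be read from the log history:

-- Log switches per hour over the last day
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;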
View 1 Replies
View Related
Mar 17, 2006
How can I disable redo generation for DML statements?
View 25 Replies
View Related
Feb 7, 2013
Is there any way to get the amount of redo generated in the last 2 hours, with the characteristics below?
1. Redo generated by sessions that are currently connected, during the last 2 hours.
2. Redo generated by sessions that disconnected during the last 2 hours.
total_redo = redo from sessions that disconnected during the last 2 hours + redo from connected sessions during the last 2 hours.
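A hedged sketch for the first part: v$sesstat holds the cumulative 'redo size' per connected session since logon (not strictly the last 2 hours), while redo from sessions that have already disconnected is not kept there and would need AWR/Statspack snapshot deltas, or the archived-log volume, as a proxy:

SELECT s.sid, s.username, st.value AS redo_bytes_since_logon
FROM   v$session s, v$sesstat st, v$statname n
WHERE  st.sid        = s.sid
AND    st.statistic# = n.statistic#
AND    n.name        = 'redo size'
ORDER  BY st.value DESC;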
View 7 Replies
View Related
May 28, 2013
Is there any way I can set the size of my redo logs during installation of Oracle DB 11.1.0.7?
I mean during the installation itself, because by default they are 50 MB and I need them to be 200 MB.
View 10 Replies
View Related
Oct 25, 2012
Redo log information can be transmitted in one of two ways from the primary database to the standby database: either by ARCH or LGWR.
1. When is ARCH involved?
2. When is LGWR involved?
FAL_CLIENT = (should I enter a net service name, a DB name, or a service name?)
FAL_SERVER = (should I enter a net service name, a DB name, or a service name?)
FAL_CLIENT='whichone'
[code]...
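Both FAL parameters take Oracle Net service names (TNS aliases), not DB names. A hedged sketch on the standby side, with illustrative alias names:

ALTER SYSTEM SET fal_server = 'PRIM' SCOPE = BOTH;   -- net service name that points to the primary
ALTER SYSTEM SET fal_client = 'STBY' SCOPE = BOTH;   -- this standby's own net service name

-- Whether ARCH or LGWR ships redo is chosen on the primary in the archive destination, e.g.:
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=STBY LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby';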
View 9 Replies
View Related
Jul 15, 2011
I am creating a database instance from a template. I have specified the location of the redo log files. When I run the DBCA utility, it does create the redo log files in the specified directory, but the installation fails. When I checked the trace file, it says it is unable to locate the specified file (redo.log), yet when I check the directory, the files have been created.
I am using 32-bit Oracle 11g on Windows.
View 1 Replies
View Related
Sep 3, 2012
What is the purpose of standby redo log files in a Data Guard configuration? When are they used by the database?
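Standby redo logs are the files the standby's RFS process writes incoming redo into when the primary ships with LGWR (SYNC or ASYNC); they should match the size of the online redo logs, with one extra group per thread. A hedged sketch (the file paths are assumptions):

ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/u01/oradata/stby/srl10.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 ('/u01/oradata/stby/srl11.log') SIZE 50M;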
View 4 Replies
View Related
Aug 29, 2012
I learned that Oracle uses a supplemental logging mechanism to add the changed rows to the redo log files and to identify the changed rows on the target replication database. Is that mechanism mandatory for handling the replication of data between the updated and backup databases?
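Supplemental logging adds extra column information to the redo so that log-mining-based replication (Streams, logical standby, GoldenGate) can identify the corresponding rows on the target. A hedged sketch of enabling it:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;                          -- minimal supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;    -- primary-key row identification, if required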
View 1 Replies
View Related
May 22, 2011
I was asked by my systems administrator if I could tell him how much redo log volume, on average, we generate in a day.
Just wondering how I might calculate this.
We have several production databases. If I wanted to calculate the above for one of them, would I take all the redo logs for a day and total up their size in bytes? Maybe take a 5-day work week and average over the 5 days?
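A hedged sketch of that calculation from the archived log history (assumes ARCHIVELOG mode):

SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 2) AS redo_gb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 7
GROUP  BY TRUNC(completion_time)
ORDER  BY day;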
View 4 Replies
View Related
Dec 17, 2012
As we know, an MV generates more redo during a FAST refresh, but I need more clarification on that.
See the example below:
exec dbms_mview.REFRESH ( LIST => 'mv_test', method=>'F');
PL/SQL procedure successfully completed.
select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name = 'redo size';
NAME VALUE
---------------------------------------------------------------- ----------
redo size 147144
The redo size is 147144 bytes. Immediately afterwards, I refreshed the MV again. No update, insert, or delete happened in the source tables, yet I still see high redo generation for a refresh that moved no data.
select a.name, b.value-147144
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name = 'redo size';
NAME VALUE
---------------------------------------------------------------- ----------
redo size 42352
For a refresh with no rows changed, it generated 42352 bytes. Why does Oracle generate redo when no DML operations have happened in the source table?
View 1 Replies
View Related
Mar 3, 2010
What's the difference between a dirty buffer and a redo buffer?
My understanding is that a dirty buffer is a changed buffer: whenever data changes in the buffer cache, the buffer is marked as dirty. A redo buffer also keeps track of changes made to the data, so it refers to changed data as well... DBWn writes dirty buffers to disk and LGWR writes redo data to the redo log files. How can we differentiate between the two?
View 2 Replies
View Related
Jun 18, 2012
In one of our environments I can see that the redo size is high. I am trying to understand why it is so high.
View 18 Replies
View Related
Jul 26, 2010
I have Oracle 9i running on HP-UX. I would like to find out how much redo we are generating in a given period of time. Is there any script I can use to get this information?
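A hedged 9i-friendly approach is to snapshot the cumulative 'redo size' statistic at the start and end of the interval and take the difference:

-- Run once at the start and once at the end of the period; subtract the two values
SELECT name, value AS redo_bytes_since_startup
FROM   v$sysstat
WHERE  name = 'redo size';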
View 3 Replies
View Related
May 27, 2011
I learnt that the log writer writes to the redo log files when the redo log buffer is 1/3 full. Does that mean that 66% of the redo log buffer is always empty and never used?
If so, isn't that a waste of memory (66% always empty!)?
View 5 Replies
View Related
Oct 23, 2012
11.2.0.2.0
OEL.
We are at the design stage of a new SAN. I can put a lot of the database, including redo, on SSD.
1. Hardware sub-tiering enabled. Has anyone ever gauged performance when enabling 11g Smart Flash Cache using the SSD as the flash device, even with the hardware sub-tiering on? I.e., if the hardware sub-tiering is already moving hot blocks to the most performant disk, is there much significant benefit to having the flash cache on? I'm guessing yes, as the flash cache works at the SGA level.
2. To redo or not to redo on SSD. Many notes say not to, so as to avoid log file sync waits. Is there a way to keep it on the SSD and avoid the waits? Is there any risk of accelerated degradation of the disks over time from so many sequential writes from the redo if I leave it on there?
How to Minimise Waits for 'Log File Sync' [ID 857576.1] (discusses keeping redo off SSD, but only generally)
[URL]
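On point 1, a hedged sketch of pointing Database Smart Flash Cache at the SSD (supported on OEL/Solaris in 11.2; the path and size are illustrative, and an instance restart is needed):

ALTER SYSTEM SET db_flash_cache_file = '/ssd/orcl/flash_cache.dbf' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 64G SCOPE = SPFILE;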
View 1 Replies
View Related
Jan 17, 2007
I'm an Oracle novice, and from what I've read so far, it seems that you should be able to do rollbacks and data recovery using the redo logs. I'm having difficulty understanding the need for the undo tablespace.
View 2 Replies
View Related
Apr 25, 2013
It seems that you should be able to do rollbacks and data recovery using the redo logs. I'm having difficulty understanding the need for the undo tablespace.
View 3 Replies
View Related