Performance Tuning :: Redo Logs Generating After Every 3 Min
Aug 13, 2010
This morning when I checked my archive logs, I was surprised to find that redo files are being generated every 3 minutes, each 50 MB in size, which is the actual size of both log members. I'm using a RAC database with a DR server. Usually the total number of redo logs for one day is 4 to 5, but from 10 pm yesterday to 7 am today, 109 log files of 50 MB each were generated.
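A good first step is to pin down exactly when the switching started; a minimal sketch against the standard v$log_history view (adjust the time window as needed):
select to_char(first_time, 'YYYY-MM-DD HH24') as hour,
       count(*) as log_switches
from   v$log_history
where  first_time > sysdate - 1
group  by to_char(first_time, 'YYYY-MM-DD HH24')
order  by 1;
An hour with a sudden spike usually points at a batch job or runaway DML that began generating the extra redo.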
View 2 Replies
Mar 17, 2006
How can I disable redo generation for DML statements?
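Strictly speaking you can't: conventional DML always generates redo. NOLOGGING reduces redo only for direct-path operations. A minimal sketch, with illustrative table names:
alter table scratch_data nologging;
insert /*+ append */ into scratch_data select * from source_data;
commit;
Anything loaded this way cannot be recovered from the redo stream, so take a fresh backup afterwards; note also that FORCE LOGGING at the database level (common with standbys) overrides NOLOGGING.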
View 25 Replies
Jun 18, 2012
In one of our environments I can see that the redo size is high. I am trying to understand why it is so large.
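One way to narrow it down is to see which sessions are producing the redo, via the standard 'redo size' statistic:
select s.sid, s.username, st.value as redo_bytes
from   v$sesstat  st
join   v$statname sn on sn.statistic# = st.statistic#
join   v$session  s  on s.sid = st.sid
where  sn.name = 'redo size'
order  by st.value desc;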
View 18 Replies
Apr 6, 2011
Is the REDO log file size an important performance factor only when the DB runs in archivelog mode? If the DB runs in noarchivelog mode, does the redo log file size have no impact on DB performance?
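Either way, log size still governs how often switches (and the checkpoints they trigger) occur, so it is worth checking the current mode and sizes from the standard views:
select log_mode from v$database;
select group#, bytes / 1024 / 1024 as size_mb, status
from   v$log;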
View 3 Replies
Mar 24, 2013
How do you limit Oracle redo generation?
View 2 Replies
Jul 1, 2010
How do I find out which SQL statements are causing excessive redo generation?
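Redo is not tracked per SQL statement, but heavily modified segments are a strong clue; a sketch against v$segment_statistics:
select owner, object_name, value as block_changes
from   v$segment_statistics
where  statistic_name = 'db block changes'
order  by value desc;
From there, v$sql can be searched for statements touching the top objects; LogMiner is the heavier option when exact attribution is needed.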
View 3 Replies
Feb 10, 2011
I would like to make a change on the live system. I have read in a book that REDO log file size has an impact on DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What REDO log file size is best for my Oracle database?
-- Optimal log file size (reported in MB):
select optimal_logfile_size
from   v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
                1842
[code]....
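If you do resize, note that online redo logs cannot be resized in place; the usual approach is to add new groups at the target size and drop the old ones. A sketch, with illustrative paths:
alter database add logfile group 5 ('/u01/oradata/redo05a.log') size 1845m;
alter database add logfile group 6 ('/u01/oradata/redo06a.log') size 1845m;
alter system switch logfile;
-- once an old group shows STATUS = 'INACTIVE' in v$log:
alter database drop logfile group 1;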
View 9 Replies
Feb 25, 2011
I tried to change the current redo log file location to a multiplexed redo log configuration. After making this change I imported a backup file into the DB, but the import runs far slower than under the old configuration, approximately 4 times slower.
When I restore the old redo log configuration, the import runs normally. Why does this change hurt import performance? (A diagnostic sketch follows the disk listing below.) The old redo log file locations were:
GROUP#  MEMBER                        BYTES
4       /crtest1/oradata/redo04a.log  524288000
4       /crtest1/oradata/redo04b.log  524288000
3       /crtest1/oradata/redo03a.log  524288000
3       /crtest1/oradata/redo03b.log  524288000
2       /crtest1/oradata/redo02a.log  524288000
2       /crtest1/oradata/redo02b.log  524288000
1       /crtest1/oradata/redo01a.log  524288000
1       /crtest1/oradata/redo01b.log  524288000
The new multiplexed redo log file locations are below:
GROUP#  MEMBER                        BYTES
4       /crtest1/oradata/redo04a.log  524288000
4       /opt/redolog/redo04b.log      524288000
3       /opt/redolog/redo03a.log      524288000
3       /usr/redolog/redo03b.log      524288000
2       /usr/redolog/redo02a.log      524288000
2       /disk1/redolog/redo02b.log    524288000
1       /disk1/redolog/redo01a.log    524288000
1       /crtest1/oradata/redo01b.log  524288000
I think the new configuration is better than the old one from a safety standpoint. Here are the disk partitions on the server:
-bash-3.00$ df -lh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9.6G 2.2G 7.3G 23% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
[code]....
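One thing to check: /opt and /usr in the new layout may well sit on the same physical disk as the root filesystem, in which case LGWR's multiplexed writes are serialized onto one slow spindle. Comparing redo write waits under both configurations would confirm it; a sketch:
select event, total_waits, time_waited
from   v$system_event
where  event in ('log file parallel write', 'log file sync');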
View 4 Replies
Oct 25, 2012
I have active rollback segments. I am not able to drop the undo tablespace that contains them, and archive logs keep being generated.
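The usual blocker is an open transaction still pinning an undo segment; a sketch to find the offending sessions:
select s.sid, s.serial#, s.username, rn.name as undo_segment
from   v$transaction t
join   v$rollname    rn on rn.usn = t.xidusn
join   v$session     s  on s.taddr = t.addr;
Once those transactions commit, roll back, or are killed, the undo tablespace should drop cleanly.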
View 2 Replies
Aug 8, 2012
I'm using Oracle 10gR2 (10.2.0.4.0) 64-bit.
I have hit the ORA-00494 error many times and the database went down, but since the 29th of July the database has not been killed.
The error message is below (translated from French):
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by inst 1, osid 176484
ORA-00028: your session has been killed
My database is used as a data warehouse of many terabytes.
Initially the redo log size was 500 MB and I have set it to 3 GB. At peak, a log switch happens every 5 minutes; I want logs to switch every 20 or 30 minutes.
To obtain the optimal redo log size I executed this query:
SQL> select OPTIMAL_LOGFILE_SIZE from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
--------------------
54763
Isn't 53.5 GB very big for a redo log? What is the maximum redo log size? What are the requirements for setting a very big redo log size, which precautions should I take beforehand, and what are the risks? Are there any other ways to change the log switch frequency?
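Also note that switch frequency under load is driven by redo volume, so only larger logs can stretch the interval; the one direct knob works in the opposite direction, capping the maximum time between switches. A sketch:
-- force a switch at least every 30 minutes; this cannot make switches less frequent
alter system set archive_lag_target = 1800 scope=both;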
View 1 Replies
May 28, 2013
Is there any way I can set the size of my redo logs during installation of Oracle DB 11.1.0.7?
I mean during installation, because by default it is 50 MB and I need it to be 200 MB.
View 10 Replies
Jan 17, 2007
I'm an Oracle novice, and from what I've read so far it seems that you should be able to do rollbacks and data recovery using the redo logs. I'm having difficulty understanding the need for the undo tablespace.
View 2 Replies
Apr 25, 2013
It seems that you should be able to do rollbacks and data recovery using the redo logs alone. I'm having difficulty understanding the need for the undo tablespace.
View 3 Replies
Sep 25, 2013
Version: 11.2.0.3, Platform: Solaris 10
One of the Hitachi support engineers suggested creating a separate disk group for online redo logs. His rationale was that ORLs are write-only files and would be better placed in a separate disk group.
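If you go that route, the mechanics are simple; a sketch with an illustrative disk path and size (CREATE DISKGROUP runs in the ASM instance):
create diskgroup redo external redundancy
  disk '/dev/rdsk/c2t1d0s4';
alter database add logfile group 5 ('+REDO') size 1g;
Whether it helps depends on the new disk group really mapping to separate spindles or LUNs; carved from the same underlying storage, it mostly adds administration.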
View 6 Replies
Oct 4, 2010
I'm running Oracle 9i on AIX 5.2. I'm not using a recovery catalog, nor am I using media management software. I perform a full, online rman backup of the database and archived redo logs daily to disk, then use operating system commands to copy the backup to tape. There is only space on disk for two days' backups, so I need to have a retention policy of "redundancy = 1", and run a "delete obsolete" prior to the backup. The problem is that I don't want to subject the archived redo logs to this retention policy.
I have two physical standby databases connected by WAN to the primary site, and I might need archived redo logs that are a few days (or more) old in the event of a prolonged WAN outage. I've read about the "keep forever" option, but apparently it isn't available without using a recovery catalog. Is there any way to spare the archived redo logs from my retention policy?
Note: I want to "protect" the actual archived redo logs from the retention policy, not the backups of the archived redo logs.
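Without a recovery catalog, one workaround (a sketch, not the only approach) is to drop the blanket DELETE OBSOLETE and age the two file types out on separate schedules:
RMAN> delete noprompt backup completed before 'sysdate - 2';
RMAN> delete noprompt archivelog all completed before 'sysdate - 7';
The first keeps roughly two days of backups on disk; the second gives the archived redo logs their own, longer window to cover a prolonged WAN outage.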
View 3 Replies
Apr 23, 2013
While configuring Data Guard for Oracle 10g (10.2.0.4) 64-bit on Windows 2007 Server 64-bit, I have a few questions:
1. What is the default mode of the standby database?
2. Should we always start the physical standby database like this to recover missing archived redo logs?
SQL> startup mount;
ORACLE instance started.
Total System Global Area 591396864 bytes
Fixed Size 2067496 bytes
Variable Size 163578840 bytes
Database Buffers 419430400 bytes
Redo Buffers 6320128 bytes
Database mounted.
SQL> alter database recover managed standby database disconnect from session;
Database altered.
3. What should we do when there are missing archived redo logs? E.g.:
----On Standby Database--------
SQL> SELECT RESETLOGS_ID,SEQUENCE#,STATUS,ARCHIVED FROM V$ARCHIVED_LOG
2 ORDER BY RESETLOGS_ID,SEQUENCE#;
RESETLOGS_ID SEQUENCE# S ARC
------------ ---------- - ---
812980008 15 A YES
812980008 16 A YES
812980008 17 A YES
812980008 18 A YES
[code]....
65 rows selected. Logs 8, 9, 10, 11, 12, 13, 14, 15 are missing.
How do I apply/recover these logs on the standby database?
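If the missing archives still exist on the primary, one common path (a sketch; the file name is illustrative) is to copy them to the standby, register them, and resume managed recovery:
SQL> alter database register logfile 'E:\archive\ARC00008_0812980008.ARC';
SQL> alter database recover managed standby database disconnect from session;
If the archives are gone entirely, the fallback is to rebuild the standby (or, on later versions, roll it forward with an SCN-based incremental backup of the primary).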
View 11 Replies
Jul 25, 2012
URL.... I'm practicing for the OCP test, and one of the questions says there is a backup from yesterday and the last archived logs are from the day before yesterday; it is not mentioned whether it is a cold or hot backup.
If it's a cold backup, can't we recover it? Is it a must to have the archived redo logs when recovering a cold backup? That doesn't sound logical, since those logs are needed only for a hot backup. URL.....
View 1 Replies
Jan 14, 2013
How do we check whether the current redo logs are sized properly? Is there any script for that?
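A common rule of thumb is to size logs so they switch roughly every 15 to 30 minutes at peak; a sketch that reports the average interval per day:
select to_char(first_time, 'YYYY-MM-DD') as day,
       count(*)                          as switches,
       round(24 * 60 / count(*))         as avg_minutes_between
from   v$log_history
group  by to_char(first_time, 'YYYY-MM-DD')
order  by 1;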
View 7 Replies
Mar 1, 2011
I've been using ASM for a few years now and have always installed a new system with 3 diskgroups
+DATA - for datafiles, control files, redo logs
+FRA - for archive logs, flash recovery, RMAN backups
Those I guess are the standards, but I've always created an extra (very small) diskgroup, called +ONLINE where I keep multiplexed copies of the redo logs and control files.
My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (both at ASM and RAID level), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with dual-write overheads that are not necessary)?
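For reference, the layout being described is the standard one-member-per-diskgroup multiplexing; a sketch with illustrative names:
alter database add logfile group 5 ('+DATA', '+ONLINE') size 512m;
alter system set control_files = '+DATA/orcl/control01.ctl', '+ONLINE/orcl/control02.ctl' scope=spfile;
On storage that already mirrors at the array level, the second member arguably buys protection against corruption of one copy more than availability, at the cost of an extra synchronous write.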
View 4 Replies
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is the memory-based (logical) structures, whereas the database consists of the physical structures.
However, how does one tune the database, the physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which takes 13 hours to update the records; that is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
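Once the EXPLAIN PLAN above completes, the plan can be displayed and checked for the expected parallel (PX) row sources:
SQL> select * from table(dbms_xplan.display);
If the plan shows the 113M-row target driven without PX operators, the parallel hint is not taking effect, which by itself could explain the 13-hour runtime.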
View 39 Replies
Sep 30, 2010
How does the width of a column affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if in column JOB I stored not names but numbers identifying the job names, e.g. 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'?
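A sketch of the numeric-code variant with a lookup table (all names and types are illustrative):
create table job_codes (
  job_id   number primary key,
  job_name varchar2(20) not null
);
create table emp_iot (
  id     number,
  job_id number not null references job_codes,
  time   date,
  plan   number,
  constraint emp_iot_pk primary key (id, job_id, time)
) organization index;
The narrower key shrinks every leaf and branch entry of the IOT, so more keys fit per block; how measurable that is depends on how much of the total row length the 20-byte string actually occupied.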
View 24 Replies
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can reduce query performance: when a segment's blocks are spread across many extents, scans take longer because the engine has to locate the address of each next extent.
I want to know whether there is any system view in which you can check if your table or index is highly fragmented. If needed I will re-create, move, or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Any useful script or query to do this? Any interesting Oracle system view?
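A starting point (a sketch; the owner is illustrative) is to count extents per segment:
select segment_name, segment_type,
       count(*)             as extents,
       sum(bytes) / 1048576 as mb
from   dba_extents
where  owner = 'SCOTT'
group  by segment_name, segment_type
order  by extents desc;
For row-level waste, comparing dba_tables.blocks against num_rows * avg_row_len (after fresh statistics) gives a rough picture of how empty the blocks are; note that with locally managed tablespaces a high extent count by itself rarely hurts query performance.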
View 2 Replies
Oct 20, 2010
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I've simply forgotten the name. What is this type of row-reduction optimization called?
View 6 Replies
Jun 16, 2011
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Is 300 million rows in only one table, with 500K transactions per day, too much?
Simple database with simple schema.
How many records begin to be too many?
View 2 Replies
Nov 15, 2010
Testing our 9i to 11g upgrade, we imported the entire DB onto the new machine. We found that certain procedures suffer serious performance problems. BUT we also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
View 13 Replies
Apr 12, 2013
I need to check the package's performance and improve it.
1. How do I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I've observed that the delete takes a long time to clear the table (7,000,000 rows). It is a staging-like table: the data must be cleaned out daily before new data is inserted. What can I use instead of DELETE?
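For question 2, TRUNCATE is the usual replacement for a full staging-table wipe, and for question 1 DBMS_PROFILER gives per-line timings. A sketch (package and table names are hypothetical):
-- full wipe as DDL: minimal redo/undo compared with DELETE
truncate table staging_tbl;
-- per-statement timing of a package run
exec dbms_profiler.start_profiler('staging load run 1');
exec my_pkg.load_staging;
exec dbms_profiler.stop_profiler;
-- results land in plsql_profiler_runs / plsql_profiler_units / plsql_profiler_data
Bear in mind that TRUNCATE is DDL: it commits and cannot be rolled back, which is exactly why it is fast.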
View 13 Replies
Aug 9, 2010
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the statistics so the optimizer follows that plan without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno, and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the join taking place is a nested loop.
Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table then how can I adjust the statistics to achieve that?
Q 3. If I want to use hash join instead of a nested loop join then then how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
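The documented interface for adjusting statistics by hand is DBMS_STATS; a sketch (values are illustrative and not a production recommendation):
-- make DEPT look huge so the optimizer reconsiders join order and method
exec dbms_stats.set_table_stats(user, 'DEPT', numrows => 1000000, numblks => 50000);
-- and/or make the index on EMP.DEPTNO look expensive (index name is hypothetical)
exec dbms_stats.set_index_stats(user, 'EMP_DEPTNO_IDX', numlblks => 100000);
Whether the nested loop actually flips to a hash join depends on the whole cost picture, so this takes experimentation; SQL plan baselines or stored outlines are the more robust tools for pinning a plan.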
View 6 Replies
Dec 6, 2011
I have an issue with export (expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
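Memory parameters are rarely the issue here; the load usually comes from Data Pump's I/O. One knob worth knowing is PARALLEL (which requires %U in the dump file name); a sketch of the same command with it set explicitly:
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_%U.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk parallel=2
The default is parallel=1, so if the goal is to bound server load, scheduling the export off-peak tends to matter more than any memory setting.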
View 1 Replies
Oct 17, 2011
The following query gets its input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND, etc., and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches everything.
To handle this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. All the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string; V_SELLOB is one such variable and LOB_DESC is a column in the DB. The predicate in question is:
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
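The usual hint-free rewrite (a sketch, reusing the names from the post) is to express the DEFAULT case directly so the optimizer can see it:
where a.sku_status = 'A'
  and (v_sellob = 'DEFAULT' or lob_desc = v_sellob)
Wrapping the column in DECODE defeats any index on LOB_DESC; the OR form leaves the column bare, and with OR-expansion the optimizer can treat the two branches separately.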
View 9 Replies