Performance Tuning :: Redo Size?
Jun 18, 2012
In one of our environments I can see that the redo size is high. I am trying to understand why it is so large.
The REDO log file size is an important DB performance factor when the DB runs in archivelog mode. If the DB runs in noarchivelog mode, the REDO log file size does not impact DB performance.
I would like to make a change on the live system. I have read in a book that the REDO log file size has an impact on DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What REDO log file size is best for my Oracle database?
-- Optimal log file size (MB) reported by the instance recovery view:
select optimal_logfile_size
from v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
                1842
[code]....
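If the advisor's figure were adopted, the usual online approach is to add new, larger groups and then retire the old ones. The sketch below uses hypothetical group numbers and file paths, not the poster's actual layout:

-- Add new, larger groups (group numbers and paths are illustrative only)
alter database add logfile group 5 ('/u01/oradata/redo05a.log') size 1845m;
alter database add logfile group 6 ('/u01/oradata/redo06a.log') size 1845m;
-- Switch until the old groups become INACTIVE, then drop them
alter system switch logfile;
alter system checkpoint;
alter database drop logfile group 1;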
How can I disable redo generation for DML statements?
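Redo cannot be switched off entirely for ordinary DML, but direct-path operations against a NOLOGGING table generate far less of it. A sketch with made-up table names; note the loaded data is not recoverable from the archived logs afterwards:

-- Hypothetical staging table; a direct-path insert into a NOLOGGING segment
-- generates minimal redo (index and dictionary changes are still logged)
alter table staging_data nologging;
insert /*+ APPEND */ into staging_data
select * from source_data;
commit;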
How do you limit Oracle redo?
View 2 Replies View RelatedHow to Find Out Which SQL Statements Causing excessive Redo generation?
View 3 Replies View RelatedThis morning when I checked my archive logs, I suprised that the redo files are generating after every 3 min and each of file size is 50M, which is the actual size of both log members. I m using RAC database with DR server.Usally the total redo logs quantity for one day is 4 to 5. but since 10 pm of yesterday to 7 am today, the quantity of log files are 109, each of 50 M .
I am trying to change the current redo log file location to a multiplexed redo log configuration, and I have made this change on the DB. I was importing a backup file into the DB when I made the change, but the import process now runs much more slowly than with the old configuration, approximately 4 times slower.
When I restore the old redo log file configuration, the import process runs normally again. Why does this change hurt the import performance? The old redo log file locations are below:
GROUP#  MEMBER                          BYTES
------  ------------------------------  ---------
     4  /crtest1/oradata/redo04a.log    524288000
     4  /crtest1/oradata/redo04b.log    524288000
     3  /crtest1/oradata/redo03a.log    524288000
     3  /crtest1/oradata/redo03b.log    524288000
     2  /crtest1/oradata/redo02a.log    524288000
     2  /crtest1/oradata/redo02b.log    524288000
     1  /crtest1/oradata/redo01a.log    524288000
     1  /crtest1/oradata/redo01b.log    524288000
The new multiplexed redo log file locations are below:
GROUP#  MEMBER                          BYTES
------  ------------------------------  ---------
     4  /crtest1/oradata/redo04a.log    524288000
     4  /opt/redolog/redo04b.log        524288000
     3  /opt/redolog/redo03a.log        524288000
     3  /usr/redolog/redo03b.log        524288000
     2  /usr/redolog/redo02a.log        524288000
     2  /disk1/redolog/redo02b.log      524288000
     1  /disk1/redolog/redo01a.log      524288000
     1  /crtest1/oradata/redo01b.log    524288000
I think the new configuration is better than the old one in terms of protecting the redo logs, since the members are spread over different disks. Here are the disk partitions on the server:
-bash-3.00$ df -lh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9.6G 2.2G 7.3G 23% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
[code]....
I am running Oracle 10.2.0.1.0 on MS Windows 2003 server 64-bit with 16G RAM.
Here are the findings for my Oracle database.
SQL> select * from v$sgainfo;
NAME BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size 1293560 No
Redo Buffers 7094272 No
Buffer Cache Size 830472192 Yes
[code]...
I find that the SGA component "Buffer Cache" has decreased from 1.8G at startup down to 0.8G now. On the other hand, the "Shared Pool" component has grown from 0.3G at startup to 1.2G now. I noticed that there are about 100 shrink operations on the "Buffer Cache" and growth operations on the "Shared Pool" every day. Is this an indicator that I should raise SGA_MAX_SIZE?
I tried to increase SGA_MAX_SIZE to 4G, but I could not start Oracle afterwards. Is that a limitation of MS Windows (the OS) or of Oracle? I then set SGA_MAX_SIZE to 3G, and this time Oracle started. What is the optimum/maximum value I can set for SGA_MAX_SIZE? Is there any adverse effect or concern when setting SGA_MAX_SIZE to more than 2G?
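Those shrink/grow operations can be confirmed from the dictionary; a sketch, with the 24-hour window chosen only for illustration:

-- Automatic SGA resize operations over the last day
select component, oper_type, oper_mode,
       initial_size, target_size, final_size, status, end_time
from v$sga_resize_ops
where end_time > sysdate - 1
order by end_time;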
Objective: to find a solution for archiving data from 2 big tables which occupy the most space in the database. With the current data (from Jan 2005 to Sept 2011) they have the record counts below:
transaction - 41687927
trnansaction_dtl - 83945934
We need to load data and run monthly batches from October 2011 to current month which will increase this space.
1. The issue is that we will not have enough space.
2. Maintenance of such a table is difficult now, and there is a huge impact on performance. Can we consider partitioning the table based on date, since we query the 1st table on a certain date range? (See the sketch after this list.)
3. Most of the reports use this table, which creates performance issues.
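A minimal sketch of range partitioning by date for this kind of archiving (it requires the Partitioning option); the table and column names below are assumptions, since the real DDL was not posted:

-- Hypothetical structure: partition by month so an old month can be
-- archived or dropped without touching current data
create table transaction_part (
  txn_id   number,
  txn_date date,
  amount   number
)
partition by range (txn_date) (
  partition p_2011_09 values less than (date '2011-10-01'),
  partition p_2011_10 values less than (date '2011-11-01'),
  partition p_max     values less than (maxvalue)
);

-- Archiving a month then becomes a partition-level operation:
-- alter table transaction_part drop partition p_2011_09;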
We have a table emp_details with 23772889 records. Our requirement is to increase the size of a few of the columns in emp_details. We are using the ALTER statement below, which takes around 2 hours.
ALTER TABLE emp_details
MODIFY
(
address char(90)
,department char(30)
)
/
Is there any way to improve the performance of the above statement?
I got the following error yesterday:
ORA-01555 caused by SQL statement below (SQL ID: fdxcyoin67ty8t, Query Duration=380128 sec, SCN: 0x0229.ff00afd0):
The following are the existing settings:
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 96000
undo_tablespace string undo
[code]....
The following are the details from v$undostat:
select begin_time, end_time, undotsn, undoblks, maxquerylen, maxqueryid, activeblks, unexpiredblks, expiredblks, tuned_undoretention from v$undostat
where trunc(begin_time)=trunc(sysdate)-1 order by begin_time;
BEGIN_TIME END_TIME UNDOTSN UNDOBLKS MAXQUERYLEN MAXQUERYID ACTIVEBLKS UNEXPIREDBLKS EXPIREDBLKS TUNED_UNDORETENTION
-------------- -------------- ---------- ---------- ----------- ------------- ---------- ------------- ----------- -------------------
21-04-13 00:08 21-04-13 00:18 1 12733 378446 duqnawh32hp4u 91152 7068448 225440 345600
21-04-13 00:18 21-04-13 00:28 1 8951 379047 duqnawh32hp4u 99344 7072800 225440 345600
21-04-13 00:28 21-04-13 00:38 1 14073 379650 duqnawh32hp4u 90128 7075872 234656 345600
[code]....
The following are the details from the AWR report (00:00 till 01:00 of 21-Apr-2013); note that the error was produced at 00:42.
Undo Segment Summary DB/Inst: DBCPY/dbcpy01 Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
[code]....
Undo Advisor information taken 'now' is as follows:
SQL> select dbms_undo_adv.longest_query(sysdate-2,sysdate) from dual;
DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
379650
SQL> select dbms_undo_adv.required_retention from dual;
[code]....
In the above situation, what should be my first choice (assuming that increasing space is not an issue): increase the undo tablespace, or increase undo retention?
If the latter is the choice, then what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case, shall I set it to something greater than max(maxquerylen), i.e. 379,650 + X? Or shall I increase the undo tablespace size?
From the Undo Advisor output it looks to me that even if I increase the undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DMLs). Is that right?
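For reference, the usual back-of-the-envelope check is undo space = retention x peak undo blocks per second x block size, driven from v$undostat. A sketch, assuming an 8K block size and using the longest query length seen (379650 s) as the retention target:

-- Estimated undo space (GB) needed to cover a 379650-second retention
select round(379650 * max(undoblks / ((end_time - begin_time) * 86400))
             * 8192 / 1024 / 1024 / 1024, 2) as undo_gb_needed
from v$undostat;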
All the analysis on our system so far shows that it is clearly I/O bound, and 'db file sequential read' is the biggest culprit.
We have even identified the index that is being affected by the sequential reads. I am thinking of creating a new tablespace with a 32K block size (currently all tablespaces are 8K) and migrating this index to the new tablespace. That way, Oracle will have to do fewer reads to get the required data.
But is there anything wrong with having just one tablespace with a different block size? Or is there anything I have to be watchful about while doing it?
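A sketch of the steps involved; the cache size, file path, tablespace name, and index name are placeholders. A non-default block size needs its own buffer cache before the tablespace can be created:

-- A 32K buffer cache must exist before a 32K tablespace can be created
alter system set db_32k_cache_size = 256m;

create tablespace idx_32k
  datafile '/u01/oradata/idx_32k01.dbf' size 2g
  blocksize 32k;

-- Move the index into the new tablespace
alter index my_hot_index rebuild tablespace idx_32k;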
Which is the correct method to calculate the actual data size in a table? When I searched on Google, I saw the line below:
"Oracle thumb rule says (actual space required for a table + 30 % space) will calculate the original space requirement for a table."
Method 1:
actual space = num_rows*avg_row_len
Method 2:
actual space = (Num of rows in a table) * (Avg_row_len) + ((Num of rows in a table) * (Avg_row_len)* 0.3)
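Either method can be driven straight from the optimizer statistics, provided they are current; a sketch with placeholder owner and table names:

-- Data size from statistics, with and without the 30% rule-of-thumb margin
select table_name,
       round(num_rows * avg_row_len / 1024 / 1024)       as data_mb,
       round(num_rows * avg_row_len * 1.3 / 1024 / 1024) as data_mb_plus_30pct
from dba_tables
where owner = 'SCOTT'
  and table_name = 'EMP_DETAILS';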
Is there any way I can set the size of my redo logs during installation of Oracle DB 11.1.0.7?
I mean during the installation itself, because by default it is 50 MB and I need it to be 200 MB.
I have a standalone DB of version 10.2.0.4. I am seeing 'log file parallel write' as one of the top events. I have increased the REDO log file size to 500M, but even then REDO switching is happening frequently.
select group#, bytes, archived, status, first_change#, first_time from v$log order BY first_change#;
GROUP# BYTES ARC STATUS FIRST_CHANGE# FIRST_TIME
---------- ---------- --- ---------------- ------------- ----------
10 262144000 NO INACTIVE 8509999 30-08-2012
12 524288000 NO INACTIVE 8612142 30-08-2012
11 262144000 NO INACTIVE 8676390 30-08-2012
9 262144000 NO CURRENT 8706330 30-08-2012
[code]....
We have one primary Oracle 10.2 database and a standby database with no Data Guard. Initially we had 2 redo log groups in both the primary and the standby database.
We recently added 2 more redo log groups and increased the size of the log members from 50M to 200M in the primary database. We have no problem in the primary database, but in the standby database we face a problem because we cannot open it; it is always in the mount stage. How do we change the size of the current redo log there, given that we cannot run the ALTER SYSTEM SWITCH LOGFILE command in the mount stage?
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I am running the following MERGE statement, which takes 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
How does column width affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if in column "job" I stored not names but numeric codes identifying the job names?
For example, I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
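A sketch of the two variants being compared; because an IOT stores the key in every leaf row, a short numeric code shrinks each entry compared with a VARCHAR2(20) name. The DDL below is illustrative only:

-- Variant A: job name stored directly in the key
create table emp_iot_a (
  id   number,
  job  varchar2(20),
  time date,
  plan number,
  constraint emp_iot_a_pk primary key (id, job, time)
) organization index;

-- Variant B: small numeric job code in the key, names kept in a lookup table
create table job_codes (
  job_code number primary key,
  job_name varchar2(20) not null
);

create table emp_iot_b (
  id       number,
  job_code number references job_codes,
  time     date,
  plan     number,
  constraint emp_iot_b_pk primary key (id, job_code, time)
) organization index;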
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed across many extents, so a scan takes a long time because the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed, I will re-create, move, or rebuild the table or index, but before that I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
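One common sketch compares the space a segment actually occupies (dba_segments) with the space its rows should need according to statistics (dba_tables); a large gap suggests fragmentation or wasted space worth reclaiming. The owner filter is a placeholder:

-- Allocated vs. estimated used space per table, from the dictionary views
select t.owner, t.table_name,
       round(s.bytes / 1024 / 1024)                    as allocated_mb,
       round(t.num_rows * t.avg_row_len / 1024 / 1024) as estimated_used_mb
from dba_tables t
join dba_segments s
  on s.owner = t.owner
 and s.segment_name = t.table_name
 and s.segment_type = 'TABLE'
where t.owner = 'SCOTT'
order by s.bytes desc;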
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have just forgotten the name and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation, with Standard Edition (so no partitioning), on a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) with good storage?
Is 300 million rows in a single table, with 500K transactions per day, too much?
Simple database with simple schema.
How many records begin to be too many?
Testing our 9i to 11g upgrade, we've imported the entire DB onto the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
I need to check package performance and then improve it.
1. How do I check the package's performance (each and every statement in the package)?
2. In the package we use a DELETE statement to delete all records, and we have observed that the DELETE takes a long time to remove all the records in the table (7000000 rows). This table is like a staging table: we need to clean the data daily before inserting new data into it. What can I use instead of DELETE? (A sketch follows below.)
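A common alternative for this staging pattern, assuming the whole table really is emptied each time and the purge never needs to be rolled back, is TRUNCATE; being DDL it is near-instant and generates almost no undo/redo, but it cannot be rolled back:

-- Instead of: delete from staging_table;
truncate table staging_table;                 -- deallocates the extents
-- or keep the allocated space for the next load:
-- truncate table staging_table reuse storage;
-- inside PL/SQL it must be issued dynamically:
-- execute immediate 'truncate table staging_table';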
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and once we achieve the desired execution plan we can adjust the 'statistics' so the optimizer follows that plan without hints.
Q1. If that is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno, and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join that takes place is a nested loop.
Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table then how can I adjust the statistics to achieve that?
Q 3. If I want to use a hash join instead of a nested loop join, then how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to achieve this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints. (A sketch of the statistics approach follows.)
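One way the optimizer is sometimes nudged without hints is by overwriting a table's statistics with DBMS_STATS; the values below are made up purely to show the mechanism and are not a recommendation for these tables:

-- Pretend DEPT is large and EMP is tiny, which tends to change the join
-- order and join method the optimizer chooses (illustrative values only)
begin
  dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'DEPT',
                             numrows => 1000000, numblks => 10000);
  dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'EMP',
                             numrows => 10, numblks => 1);
end;
/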
I have an issue with export (expdp).
When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND, etc., and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching rows from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string; here V_SELLOB is such a variable and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
What are the principal things to look at when the same query gives different performance results? I have 2 different databases: the plan and the data are the same, but the performance results are very different.
What are the most important performance metrics we have to calculate or take into account in order to preserve or increase DB performance in terms of response times?