Performance Tuning :: Excess Redo Generation
Jul 1, 2010
How do I find out which SQL statements are causing excessive redo generation?
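One common starting point (a sketch, not a complete answer): rank sessions by the 'redo size' statistic, then look at what SQL those sessions are running.

-- redo generated per session since logon; the top sessions
-- point at the statements to investigate
SELECT s.sid, s.username, ss.value AS redo_bytes
FROM   v$sesstat  ss
JOIN   v$statname sn ON sn.statistic# = ss.statistic#
JOIN   v$session  s  ON s.sid = ss.sid
WHERE  sn.name = 'redo size'
ORDER  BY ss.value DESC;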
Is it possible to generate an AWR report for a duration of 5 minutes? As we know, snapshots are generated every 1 hour, as specified by the snapshot interval setting.
By changing the interval to 2 minutes, what could the impact on the database be?
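For reference, the snapshot interval is an AWR setting rather than an init.ora parameter, and an AWR report can span any two snapshots, so a 5-minute window can be covered by taking two manual snapshots 5 minutes apart (a sketch below). Note that the automatic interval has a documented minimum of 10 minutes, so very short windows are normally handled with manual snapshots rather than by shrinking the interval; a shorter interval also means more frequent collection overhead and more rows in the AWR tables.

-- take a snapshot now, run the 5-minute workload, take another,
-- then run awrrpt.sql against the two new snap IDs
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
-- ... 5 minutes of workload ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

-- the automatic interval itself is changed like this (minutes):
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 10);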
When I want to generate an AWR report for the database, I don't see any snap IDs. What was disabled?
Is it possible to generate AWR reports for the missing snap IDs?
Here is the log:
Current Instance
~~~~~~~~~~~~~~~~
   DB Id     DB Name   Inst Num  Instance
-----------  --------  --------  ------------
   49472052  WPSDBSTG         1  wpsdbstg
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: html
Type Specified: html
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num  DB Name   Instance   Host
------------ --------  --------  ---------  ------------
* 49472052          1  WPSDBSTG  wpsdbstg   rcolnx88700
Using 49472052 for database Id
Using 1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days: 1
Listing the last day's Completed Snapshots
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap:
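A likely cause worth checking (an assumption, not a diagnosis from this log alone): automatic AWR snapshots stop when STATISTICS_LEVEL is set to BASIC, and reports cannot be built for snapshots that were never taken.

SQL> show parameter statistics_level
-- if automatic collection is off, a snapshot can still be taken manually:
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();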
How can I disable redo generation for DML statements?
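For context: redo cannot be switched off for conventional DML, but direct-path inserts into a NOLOGGING table generate minimal redo for the table data. A sketch (table names hypothetical; indexes and ARCHIVELOG/FORCE LOGGING settings still affect what gets logged):

ALTER TABLE staging_tab NOLOGGING;
-- the APPEND hint requests a direct-path insert
INSERT /*+ APPEND */ INTO staging_tab
SELECT * FROM source_tab;
COMMIT;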
In one of our environments I can see that the redo size is high. I am trying to understand why it is so high.
The redo log file size is an important DB performance factor when the DB runs in archivelog mode. If the DB runs in noarchivelog mode, does the redo log file size have no impact on DB performance?
How do you limit Oracle redo?
This morning when I checked my archive logs, I was surprised to see that redo files are being generated every 3 minutes, each 50 MB in size, which is the actual size of both log members. I am using a RAC database with a DR server. Usually the total number of redo logs for one day is 4 to 5, but from 10 pm yesterday to 7 am today, 109 log files were generated, each 50 MB.
I would like to make a change on the live system! I have read in a book that the redo log file size has an impact on DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What redo log file size is best for my Oracle database?
-- Optimal log file size:
select optimal_logfile_size
from v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
                1842
[code]....
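If the advisor's figure were adopted, the usual approach (a sketch; the group number and file paths are hypothetical) is to add new, larger log groups and drop the old ones once they become INACTIVE:

ALTER DATABASE ADD LOGFILE GROUP 5
  ('/crtest1/oradata/redo05a.log', '/opt/redolog/redo05b.log') SIZE 1845M;
ALTER SYSTEM SWITCH LOGFILE;           -- repeat until the old groups are INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;   -- then drop each old group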
I am trying to move from the current redo log file locations to a multiplexed redo log configuration. After making this change I imported a backup file into the DB, but the import process ran much slower than under the old configuration, approximately 4 times slower.
When I restored the old redo log configuration, the import ran at normal speed again. Why does this change hit the import performance? The old redo log file locations are below:
GROUP#  MEMBER                        BYTES
------  ----------------------------  ---------
     4  /crtest1/oradata/redo04a.log  524288000
     4  /crtest1/oradata/redo04b.log  524288000
     3  /crtest1/oradata/redo03a.log  524288000
     3  /crtest1/oradata/redo03b.log  524288000
     2  /crtest1/oradata/redo02a.log  524288000
     2  /crtest1/oradata/redo02b.log  524288000
     1  /crtest1/oradata/redo01a.log  524288000
     1  /crtest1/oradata/redo01b.log  524288000
The new multiplexed redo log file locations are below:
GROUP#  MEMBER                        BYTES
------  ----------------------------  ---------
     4  /crtest1/oradata/redo04a.log  524288000
     4  /opt/redolog/redo04b.log      524288000
     3  /opt/redolog/redo03a.log      524288000
     3  /usr/redolog/redo03b.log      524288000
     2  /usr/redolog/redo02a.log      524288000
     2  /disk1/redolog/redo02b.log    524288000
     1  /disk1/redolog/redo01a.log    524288000
     1  /crtest1/oradata/redo01b.log  524288000
I think the new configuration is better than the old one from a redundancy standpoint. Here are the disk partitions on the server:
-bash-3.00$ df -lh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9.6G 2.2G 7.3G 23% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
[code]....
I have Oracle 9i running on HP-UX. I would like to find out how much redo we generate in a given period of time. Is there any script I can use to get this information?
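One approach that works on 9i (a sketch, assuming archivelog mode; the sizes are approximate): derive the redo volume per hour from the archived log history.

SELECT TRUNC(first_time, 'HH24')                     AS hour,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS redo_mb
FROM   v$archived_log
GROUP  BY TRUNC(first_time, 'HH24')
ORDER  BY 1;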
Regarding redo generation: I found the following statement in another forum: "An undo segment generates redo data as well, because changes to an undo segment are database changes, so they generate redo too."
How can an undo segment generate both redo and undo data?
Redo generation is very high. How do I find out the reason? The database is on a 2-node cluster. I checked the alert log and log writer trace files and pasted the content below:
--alert log trace from node 1 (node 2 also has the same type of messages). The archive destination disk group, TXCOM_BACKUP_01, has enough space (80 GB).
Mon Jan 7 00:49:10 2013
Thread 1 advanced to log sequence 448546 (LGWR switch)
Current log# 1 seq# 448546 mem# 0: +TXCOM_DATA_01/txcom/onlinelog/group_1.274.785770579
Current log# 1 seq# 448546 mem# 1: +TXCOM_DATA_01/txcom/onlinelog/group_1.302.802265189
Mon Jan 7 00:49:10 2013
[code]...
In the alert log, I can see the archive destination disk group (TXCOM_BACKUP_01) getting DISMOUNTED and then MOUNTED again during every archive file generation:
Mon Jan 7 00:49:20 2013
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
The archive destination parameter is not configured on either node. It should read the disk group name (+TXCOM_BACKUP_01) and a corresponding size limit. Should I configure this?
SQL> show parameter db_recovery
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string
db_recovery_file_dest_size big integer 0
[code]...
Should I bring the database to the mount stage and set log_archive_max_processes to a higher count? The current value is 2 (the default).
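For what it's worth, both settings are dynamic, so the mount stage should not be needed (a sketch; the values are illustrative, not a recommendation):

ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+TXCOM_BACKUP_01' SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_max_processes = 4 SCOPE=BOTH SID='*';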
We had our production database hosted on Oracle 9.2.0. A few months back we migrated it to Oracle 10.2.0.4.0. After the migration I noticed that redo generation has become very high: earlier, the number of log files generated during production hours was around 30, whereas after migration it has become around 200 files per day. I ran a Statspack report on this database; it says that db block changes and disk writes have become very high. The timed_statistics parameter has also been set to FALSE. Even so, there is no reduction in the number of log files generated. I used export/import to upgrade the database.
I have created 5 materialized views with the REFRESH FORCE option.
The refresh is done every 10 minutes.
My destination database is MySQL.
The DB link is created in Oracle 10g.
The materialized views are created using the DB link.
Since the materialized view refresh runs every 10 minutes, redo generation has gone for a toss: the system is generating 1 redo file per minute, and the redo log file size is 500 MB. Because the materialized views are based on MySQL tables, the system does a complete refresh every time, which causes more and more redo to be generated.
Is there any setting/option/solution to reduce the high redo generation?
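One option sometimes worth testing (a sketch; the view name is hypothetical, and the behavior should be verified for materialized views over remote non-Oracle tables): a non-atomic complete refresh truncates and reloads with direct-path inserts, which usually generates far less redo than the default delete-and-insert:

BEGIN
  DBMS_MVIEW.REFRESH(list           => 'MV_MYSQL_ORDERS',
                     method         => 'C',       -- complete refresh
                     atomic_refresh => FALSE);    -- truncate + direct-path load
END;
/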
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structures? Does it have to do with file placement/block sizes etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle supplied) for these types of tuning scenarios?
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
How does the width of a key column affect index performance?
For example, suppose I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if, in the "job" column, I stored not names but compact numbers identifying the job names?
E.g., I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
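A sketch of the numeric-code variant (hypothetical DDL; a separate lookup table would map the codes back to names). Because an IOT stores the full key in every leaf row, replacing a VARCHAR2(20) with a small NUMBER shrinks each entry and packs more rows per block:

CREATE TABLE emp_iot (
  id      NUMBER,
  job_id  NUMBER,         -- 1 = 'ANALYST', 2 = 'NIGHT_WORKED', ...
  time    DATE,
  plan    NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;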
I have a question about database fragmentation. I know that fragmentation can increase query times: when the blocks are distributed across many extents, a scan takes longer because the Oracle engine has to locate the address of each next extent.
I want to know whether there is any system view in which you can check if your table or index is highly fragmented. If needed I will re-create, move, or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
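One rough check people use (a sketch, assuming the optimizer statistics are fresh): compare the space the rows actually need against the space the segment has allocated; a large gap suggests the segment may be worth shrinking or rebuilding.

SELECT t.table_name,
       ROUND(t.num_rows * t.avg_row_len / 1024 / 1024) AS used_mb,
       ROUND(s.bytes / 1024 / 1024)                    AS allocated_mb
FROM   dba_tables   t
JOIN   dba_segments s ON s.owner = t.owner
                     AND s.segment_name = t.table_name
WHERE  t.owner = 'SCOTT';   -- hypothetical schema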
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten the name and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Are 300 million records in a single table, with 500K transactions per day, too much?
Simple database with simple schema.
How many records begin to be too many?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures really suffer from performance problems. But we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure; it's only a couple.
Is there any possible reason we'd have to re-install a procedure to correct a performance problem?
I need to check the package performance and improve it.
1. How do I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I have observed that the delete takes a long time to remove all the records in the table (7,000,000 rows). This is a staging table: the data must be cleaned out daily before new data is inserted. What can I use instead of DELETE?
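For a full-table cleanup of a staging table, TRUNCATE is the usual alternative to consider (a sketch; note it is DDL, commits implicitly, and cannot be rolled back):

-- deallocates the extents and generates almost no redo/undo,
-- unlike a 7,000,000-row DELETE
TRUNCATE TABLE staging_tab;   -- staging_tab is a hypothetical name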
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the 'statistics' so that the plan is followed without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions: With respect to the above query,
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I could add the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
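For illustration only (a sketch, not an endorsement of the practice): DBMS_STATS.SET_TABLE_STATS can overwrite a table's perceived size, which changes the optimizer's cost comparisons; for instance, inflating EMP's apparent row and block counts tends to push the optimizer away from an emp-driven nested loop:

BEGIN
  DBMS_STATS.SET_TABLE_STATS(ownname => 'SCOTT',   -- schema assumed
                             tabname => 'EMP',
                             numrows => 1000000,   -- fabricated row count
                             numblks => 50000);    -- fabricated block count
END;
/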
I have an issue with export (expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command I use for the export:
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND, etc. The user may or may not select values from the drop-down boxes.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches all the records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string; here V_SELLOB is such a variable and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
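For comparison, the rewrite a reviewer would typically suggest (a sketch for the single-value case; a comma-separated V_SELLOB would first need to be split into rows) is a plain OR, which lets the optimizer use an index on LOB_DESC when a real value is passed:

WHERE a.sku_status = 'A'
  AND (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob)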
What are the principal things to look at when we get different performance results for the same query? I have 2 different databases: the plan and the data are the same, but the performance results are very different.
What are the most important performance indicators we have to measure or take into account to preserve or increase DB performance, in terms of response times and everything else that matters for performance?
I am working on an assignment where the client is using Oracle 10g but is stuck using the RBO. The application team builds dynamic queries from the GUI available to them, and some of them run very slowly.
Neither can the code be changed to tune the queries, nor do we get the exact step in the plan that is the issue (being RBO). For some long-running queries the Tuning Advisor does not produce any recommendations.
Another hurdle is that all the application users use the same application user id, so we cannot write a logon trigger to switch to the CBO for some particular queries to see what is happening in the background!
I want to tune the following SQL statement. In this SQL I want to get the hash_value and sql_text of the statements that are causing TX locks. Is it possible? The statement works fine but is sometimes slow.
SELECT DISTINCT hash_value,
sql_text
FROM gv$sql sq
WHERE hash_value IN (SELECT DISTINCT prev_hash_value
FROM gv$session se
WHERE sid IN (SELECT sid
FROM gv$lock l
WHERE type = 'TX'
AND ctime >= 2000
AND l.inst_id = se.inst_id
AND l.sid = se.sid)
AND sq.inst_id = se.inst_id);
[code]....