I noticed that the Oracle background process ora_fbda_padwsdpr is suffering from buffer busy waits. When I dug further into the object involved, it turned out to be the SYS_FBA_FA tables.
What is causing these BUFFER BUSY WAITS? Also, to add, we have disabled flashback database.
Are the cache buffers chains latch and the buffer busy waits event related to one another?
The latch definition from Google says: Latches are simple, low-level serialization mechanisms that protect shared data structures in the system global area (SGA).
What does it mean by "protect"? Does it mean protection from aging out under the LRU algorithm and being removed from the SGA, or protection from other processes, say for example from simultaneous DML operations, or both?
Does the buffer busy waits event occur because of the cache buffers chains latch?
Is there any relationship between tuning the BUFFER CACHE and BUFFER BUSY WAITS?
1) Buffer busy waits happen when a user process finds that the same data block is already in use by another session in the BUFFER CACHE. 2) They also happen when the server process finds that the same data block is being read from the datafile by another session.
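As a starting point for narrowing down which segment is behind the buffer busy waits, here is a minimal sketch that can be run while the waits are happening; it assumes only the standard v$waitstat, v$session and dba_extents views, and the :file_id/:block_id binds are placeholders for the values observed.

  -- Which block classes are accumulating buffer busy waits (data block, undo header, segment header, ...)
  SELECT class, count, time
  FROM   v$waitstat
  ORDER  BY time DESC;

  -- Sessions currently waiting on the event, with the file#/block# needed to resolve the object
  SELECT sid, event, p1 file_id, p2 block_id, seconds_in_wait
  FROM   v$session
  WHERE  event = 'buffer busy waits';

  -- Resolve a file#/block# pair to its owning segment (:file_id and :block_id come from the query above)
  SELECT owner, segment_name, segment_type
  FROM   dba_extents
  WHERE  file_id = :file_id
  AND    :block_id BETWEEN block_id AND block_id + blocks - 1;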
I am using Oracle version 11.2.0.3.0. The DBAs told me that two days back the system showed a high wait on the wait class 'Application', much higher than the regular value for our system, and that we need to dig into it to avoid any future issue. Using the query below, I found that the high wait time for the 'Application' wait class actually belongs to the event 'SQL*Net break/reset to client', and the sample time was 9 AM in the morning.
select time,
       round(max(case when event = 'SQL*Net break/reset to client'
                      then time_waited_delta/1e3/decode(total_waits_delta,0,1,nvl(total_waits_delta,1)) end), 2) SQL_break_reset_client,
       round(max(case when event = 'Wait for Table Lock'
                      then time_waited_delta/1e3/decode(total_waits_delta,0,1,nvl(total_waits_delta,1)) end), 2) Wait_for_Table_Lock,
       round(max(case when event = 'enq: KO - fast object checkpoint' then
[code]....
Now I want to track it down further to the session/SQL query/application level that is producing such a high wait time. So I queried dba_hist_active_session_history for the same sample duration (8.30 to 9 AM) with the event 'SQL*Net break/reset to client' and got two sessions (123, 154) and their serial#. For one session (123) I got a sql_id (3ahgrey10ogh, a SELECT query), but for the other (154) there is no sql_id, and also SUM(wait_time + time_waited) is showing 0.
Then I removed the sample-time filter from the query and observed that the same session (123) with the same session serial# had been active since 4 days back and was experiencing the same wait event 'SQL*Net break/reset to client' (it had the normal wait event 'db file sequential read' for some time, and after that it went to this event).
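For reference, here is a minimal sketch of the kind of dba_hist_active_session_history query described above; the &window_start/&window_end placeholders and the selected column list are my own illustration, while the view, the event name and the wait_time + time_waited sum come from the description.

  SELECT session_id, session_serial#, sql_id, event,
         SUM(wait_time + time_waited) total_wait
  FROM   dba_hist_active_session_history
  WHERE  sample_time BETWEEN TO_TIMESTAMP('&window_start', 'YYYY-MM-DD HH24:MI')   -- e.g. the 08:30 start
                         AND TO_TIMESTAMP('&window_end',   'YYYY-MM-DD HH24:MI')   -- e.g. the 09:00 end
  AND    event = 'SQL*Net break/reset to client'
  GROUP  BY session_id, session_serial#, sql_id, event
  ORDER  BY total_wait DESC;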
I have a partitioned table with the degree of parallelism defined as 10.
I am getting the highest wait on PX Deq: Execute Reply. In a 2-hour trace, this wait event accounts for almost 1.5 hours.
I have done some searching and I found this.
Quote: In principle: A parallel query against a partitioned table will use one slave per partition if the query is thought to span multiple partitions, and it can use all slaves on a single partition if the query is thought to target just one partition. Unfortunately, this is NOT strictly true.
It is possible for the optimizer to decide to use parallelism at degree M when accessing N partitions. Sometimes this can lead to very inefficient, brute-force, processing when a more efficient path is available. This can be a particular problem with multi-table joins that should be partition-wise joins. You may be better off leaving the tables defined as non-parallel and adding explicit parallel hints to the code for critical queries.
So I have the following questions: 1) What is the meaning of PX Deq: Execute Reply? 2) Is it not recommended to use the DEGREE clause on a partitioned table? 3) Does defining the DEGREE attribute on a partitioned table automatically execute DML on the table in parallel, the same as a PARALLEL hint?
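For illustration only, the sketch below contrasts the two ways of invoking parallelism discussed above; the table and column names are made up, and note that parallel DML additionally has to be enabled at the session level.

  -- Check the degree of parallelism currently declared on the table
  SELECT table_name, degree
  FROM   user_tables
  WHERE  table_name = 'SALES_PART';          -- hypothetical table name

  -- Table-level attribute: statements against the table may be parallelized by default
  ALTER TABLE sales_part PARALLEL 10;

  -- Statement-level hint: parallelism requested only for this query
  SELECT /*+ PARALLEL(s, 10) */ COUNT(*)
  FROM   sales_part s;

  -- Parallel DML must be enabled in the session before UPDATE/DELETE/MERGE can run in parallel
  ALTER SESSION ENABLE PARALLEL DML;
  UPDATE /*+ PARALLEL(s, 10) */ sales_part s
  SET    status = 'PROCESSED';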
We had a situation where both undo tablespaces were almost full, i.e. UNDOTBS1 99% and UNDOTBS2 100% full, so I added datafiles to them. Then I found a lot of blocking sessions and was killing them through EM. I then stopped my front-end listener and also brought the service down. Now I don't have any blocking sessions, but EM is showing a big WAIT. The alert log shows nothing serious; it was showing a deadlock, but that is over as well.
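For double-checking this outside of EM, here is a minimal sketch assuming only the standard v$session and dba_undo_extents views; the grouping is just illustrative.

  -- Any sessions still blocked, and who is blocking them
  SELECT sid, serial#, blocking_session, event, seconds_in_wait
  FROM   v$session
  WHERE  blocking_session IS NOT NULL;

  -- How much undo is active vs expired vs unexpired in each undo tablespace
  SELECT tablespace_name, status, ROUND(SUM(bytes)/1024/1024) mb
  FROM   dba_undo_extents
  GROUP  BY tablespace_name, status
  ORDER  BY tablespace_name, status;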
I ran an AWR report. The database looks fine, but a data load that loaded 1 Million rows an hour is now doing 500K per hour.
Wait Class        Waits   %Time-outs   Total Wait Time (s)   Avg wait (ms)   %DB time
DB CPU                                                  224                      80.70
Other             2,668            0                    28              10       9.99
System I/O        4,753            0                     9               2       3.23
Administrative        1            0                     6            5543       2.00
Commit              357            0                     4              11       1.46
[code]....
The Network wait class value for waits is 630,601. What does this mean? Is there anything I should look at? When the load was 1 million rows per hour, the value was 4,563,000.
Top 5 Timed Foreground Events
Event                          Waits   Time(s)   Avg wait (ms)   % DB time   Wait Class
DB CPU                                     224                       80.70
unspecified wait event         2,666        28              10        9.99   Other
control file sequential read   4,753         9               2        3.23   System I/O
switch logfile command             1         6            5543        2.00   Administrative
log file sync                    357         4              11        1.46   Commit
We are using 11.2.0.3.0 on Solaris 10 and are facing slow performance. Below are the wait events from the AWR report. Also, is there any specific document on how to analyze an AWR report and pinpoint the performance bottleneck?
Foreground Wait Events
**********************
                                                        Avg
                                          %Time  Total Wait    wait    Waits   % DB
Event                          Waits      -outs    Time (s)    (ms)     /txn   time
-------------------------- ------------ ------- ----------- ------- -------- ------
direct path read               308,729        0      21,191      69     58.0   39.5
db file sequential read        208,754        0       3,742      18     39.2    7.0
cursor: pin S               19,541,899        0       2,561       0  3,668.5    4.8
[code]....
I'm an application developer at an automotive company and I develop a lot of database-based applications with either Oracle Forms or C#. Since we moved from a 10g RAC to 11g using a shared server configuration, the prevailing and overwhelming topic of ADDM performance analysis is "unusual network wait event" caused by virtual circuit waits. Therefore I cannot use Grid Control to detect bad SQL as I could in 10g any more, because all "tunable" SQL is wiped out by virtual circuit waits. In Top Activity I see virtual circuit waits on every type of statement (select, insert...) and on PL/SQL execution.
What do I have to do as an application developer to avoid virtual circuit waits? Especially in C#: we normally use auto-committed DML statements and selects that fill either a DataTable or a generic list via a data reader. Usually we close the connection after each statement, but we are also using connection pooling. How can such activity cause virtual circuit waits? In Oracle Forms: it seems that we get a virtual circuit wait if we show sorted data in a block where not all records are fetched from the database. It doesn't make sense to us to rewrite all blocks to always fetch all records, for performance reasons.
How do I have to write and execute my statements in C#, Oracle Forms and/or PL/SQL to avoid virtual circuit waits?
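Since virtual circuit waits only arise on shared server connections, a minimal sketch like the one below (assuming only the standard v$session and v$dispatcher views) can show which application sessions actually come in over shared servers versus dedicated connections, and how busy the dispatchers are; it is only a starting point for investigation, not a fix.

  -- Which sessions come in over SHARED servers (virtual circuits) vs DEDICATED connections
  SELECT server, program, COUNT(*) sessions
  FROM   v$session
  WHERE  type = 'USER'
  GROUP  BY server, program
  ORDER  BY sessions DESC;

  -- Dispatcher load: consistently high busy percentages point at the shared server layer itself
  SELECT name, status, ROUND(busy / NULLIF(busy + idle, 0) * 100, 2) pct_busy
  FROM   v$dispatcher;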
Considering the factors below, I am planning to increase the buffer cache value from 256MB to 512MB.
1. The buffer cache hit ratio is around 35% even during normal periods. 2. The 'free buffer requested' values during peak and normal hours are shown below. (A buffer cache advisory query is also sketched after the listings.)
Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
free buffer requested                     54,694,995       15,226.9      2,523.7
free buffer requested                     23,412,674        6,501.7      2,585.9
3. Most of the top 5 physical-read and logical-read queries are well tuned, and some of the queries are doing FTS on small tables (table counts: min 1,500, max 35,000 rows), so indexing is not required for these queries. But these queries are executed frequently.
SQL> show sga
Total System Global Area  2148534928 bytes
Fixed Size                    731792 bytes
Variable Size             1879048192 bytes
Database Buffers           268435456 bytes
Redo Buffers                  319488 bytes
5. Top 5 wait events during the slow database performance and high CPU utilization (>80%) issue.
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                        % Total
Event                                               Waits    Time (s)  Ela Time
-------------------------------------------- ------------ ----------- --------
latch free                                      1,848,898     153,793     52.00
buffer busy waits                                 395,280      87,201     29.49
db file scattered read                          3,488,648      34,199     11.56
enqueue                                             4,052      10,897      3.68
CPU time                                                        5,567      1.88
6. Top 5 wait events during normal activity, when CPU utilization is around 40%.
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                        % Total
Event                                               Waits    Time (s)  Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time                                                        1,860     45.32
db file scattered read                          1,133,669         985     23.99
imm op                                                776         605     14.73
sbtinfo2                                              208         139      3.40
sbtbackup                                               2         123      3.00
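Before settling on 512MB, a minimal sketch like the one below can be run against the buffer cache advisory (v$db_cache_advice, populated when db_cache_advice is ON); it estimates how physical reads would change at different cache sizes. This is only an illustration of where to look, not a sizing recommendation.

  -- Estimated physical reads at candidate buffer cache sizes for the DEFAULT pool
  SELECT size_for_estimate           cache_mb,
         buffers_for_estimate        buffers,
         estd_physical_read_factor,
         estd_physical_reads
  FROM   v$db_cache_advice
  WHERE  name = 'DEFAULT'
  AND    block_size = (SELECT TO_NUMBER(value) FROM v$parameter WHERE name = 'db_block_size')
  ORDER  BY size_for_estimate;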
1) Does shutting down the DB flush all the data buffers from the buffer cache? 2) In any Oracle version, is there a way to flush only the buffer cache?
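For reference, a minimal sketch of the relevant commands is below; ALTER SYSTEM FLUSH BUFFER_CACHE exists from 10g onwards and is generally intended for test systems rather than production, since it forces physical reads afterwards.

  -- 10g and later: flush only the buffer cache (the shared pool is a separate command)
  ALTER SYSTEM FLUSH BUFFER_CACHE;

  -- For comparison: flushing the shared pool is a different operation
  ALTER SYSTEM FLUSH SHARED_POOL;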
I have some confusion about Keep Pool in Buffer Cache.
1. What is the reasoning for placing a table in the KEEP buffer pool? If it is frequently accessed it will already be around when needed (i.e. if it is constantly being accessed it will not age out). 2. Would the table still be in the DEFAULT pool if the KEEP pool is not sized and the command ALTER TABLE SCOTT.EMP STORAGE (BUFFER_POOL KEEP) is issued? 3. If the database is restarted, will the table be wiped out of the KEEP pool, and will it be pinned to the KEEP pool again?
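A minimal sketch of the pieces involved is below, reusing the SCOTT.EMP example from the question; the KEEP pool size is a made-up figure for illustration.

  -- The KEEP pool only exists if it has been sized explicitly
  ALTER SYSTEM SET db_keep_cache_size = 64M;   -- illustrative size

  -- Assign the table to the KEEP pool
  ALTER TABLE scott.emp STORAGE (BUFFER_POOL KEEP);

  -- Check which pool the table is assigned to
  SELECT owner, table_name, buffer_pool
  FROM   dba_tables
  WHERE  owner = 'SCOTT' AND table_name = 'EMP';

  -- See how the configured buffer pools are currently carved up
  SELECT name, block_size, current_size
  FROM   v$buffer_pool;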
I am currently in the favorable situation of having excess memory available on the database server, a single-node setup. The server only serves the single instance and no other processing. Database size is around 2.3TB and memory is 50GB. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data, meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages. If the memory were allocated to the SGA, it's pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both of the caches active seem to be a waste of memory, effectively storing the data twice - once in AIX persistent pages and a second time in Oracle database buffer cache.
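If the decision is to let Oracle rather than AIX cache the data, a minimal sketch of the usual knobs is below; the sizes are made-up examples, and switching filesystemio_options to direct I/O (so datafile blocks are no longer duplicated in the file system cache) requires an instance restart and should be tested before production use.

  -- Give the SGA (and hence the buffer cache) more room; sizes are illustrative
  ALTER SYSTEM SET sga_max_size = 30G SCOPE = SPFILE;
  ALTER SYSTEM SET sga_target   = 30G SCOPE = SPFILE;

  -- Use direct + asynchronous I/O so datafile reads and writes bypass the AIX file system cache
  ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;

  -- An instance restart is required for sga_max_size and filesystemio_options to take effect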
Statspack has been configured for Active Data Guard on the primary database. We got a spike of buffer busy waits for about 5 minutes in Active Data Guard, and this was worsening application SQL response times during that 5-minute window. Below is what I got from the statspack report for one hour.
Snapshot       Snap Id     Snap Time      Sessions Curs/Sess Comment
~~~~~~~~    ---------- ------------------ -------- --------- -------------------
Begin Snap:      18611 21-Feb-13 22:00:02      236       2.2
  End Snap:      18613 21-Feb-13 23:00:02      237       2.1
   Elapsed:            60.00 (mins)
[code]...
Why could there be a sudden spike of demand on UNDO data in Active Data Guard?
I am trying to look at wait events for a long-running query in TOAD. I start the query in one instance of TOAD and open the Session Browser in another instance. But I am surprised to find that in "Total Waits" on the right-hand side, SQL*Net message from client is taking the longest time and is already at 178,577 units, whereas I have only just started the query.
Whereas in Current Waits it shows db file scattered read correctly as a few seconds.
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement/block sizes etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I am running the following merge query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for
  MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq
  USING (SELECT rated_que_rowid,
                detail_rerate_flag_code,
                rerate_sel_key,
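One thing worth verifying, sketched below on the assumption that the hint is intended to parallelize the MERGE itself, is that parallel DML is actually enabled for the session and that PX slaves were really allocated; without ENABLE PARALLEL DML only the query side of a MERGE can run in parallel.

  -- Parallel DML must be switched on per session before a MERGE can be parallelized
  ALTER SESSION ENABLE PARALLEL DML;

  -- While the MERGE is running, check whether PX slaves were actually allocated to the coordinator
  SELECT qcsid, COUNT(*) slave_count
  FROM   v$px_session
  GROUP  BY qcsid;

  -- And whether parallel operations are being downgraded instance-wide
  SELECT name, value
  FROM   v$sysstat
  WHERE  name LIKE 'Parallel operations%';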
How does column width affect index performance?
For example, suppose I had an IOT table emp_iot with columns (id number, job varchar2(20), time date, plan number).
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if in the JOB column I stored not names but small numbers identifying the job names, e.g. storing 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'?
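Purely to illustrate the alternative being described (not a recommendation), here is a sketch: the job names move into a small lookup table and the IOT key stores a short NUMBER code instead of a VARCHAR2(20), which shrinks every index entry because the key is repeated in each leaf row.

  -- Small lookup table for the fixed set of job names
  CREATE TABLE job_codes (
    job_id   NUMBER        PRIMARY KEY,
    job_name VARCHAR2(20)  NOT NULL UNIQUE
  );

  -- IOT keyed on the numeric code instead of the 20-byte string
  CREATE TABLE emp_iot (
    id     NUMBER,
    job_id NUMBER REFERENCES job_codes(job_id),
    time   DATE,
    plan   NUMBER,
    CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
  ) ORGANIZATION INDEX;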
I have a question about database fragmentation. I know that fragmentation can reduce performance in query times. The blocks are distributed across many extents and the scan process takes a long time; the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If it is needed I will re-create, move or rebuild the table or index, but before that I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
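One commonly used rough check is sketched below; it relies on dba_tables values gathered by DBMS_STATS (so statistics must be current), the 8K block size and the SCOTT schema are assumptions, and the "wasted" figure is only an estimate of how much allocated space the rows do not actually need.

  -- Rough estimate of allocated space vs space the rows should need; large gaps hint at fragmentation
  SELECT table_name,
         ROUND(blocks * 8192 / 1024 / 1024)                    alloc_mb,      -- assumes an 8K block size
         ROUND(num_rows * avg_row_len / 1024 / 1024)           data_mb,
         ROUND(blocks * 8192 / 1024 / 1024
               - num_rows * avg_row_len / 1024 / 1024)         est_wasted_mb
  FROM   dba_tables
  WHERE  owner = 'SCOTT'            -- hypothetical schema
  AND    num_rows > 0
  ORDER  BY est_wasted_mb DESC;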
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have just forgotten it and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Are 300 million rows in a single table, with 500K transactions per day, too much?
Testing our 9i to 11g upgrade, we imported the entire DB into the new machine. We found that certain procedures are really suffering performance problems. BUT we also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure. It's only a couple.
Is there any possible reason why we would have to re-install a procedure to correct a performance problem?
I need to check the package's performance and improve it.
1. How do I check the package's performance (each and every statement in the package)? 2. The package uses a DELETE statement to delete all records, and I observed that the delete takes a long time to remove all the records from the table (7,000,000 rows). This is like a staging table; the data needs to be cleaned out daily before new data is inserted into it. What can I use instead of DELETE (one alternative is sketched below)?
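One commonly suggested alternative for a staging table that is fully emptied every day is sketched below, purely as an illustration: TRUNCATE is DDL, cannot be rolled back and requires that no enabled foreign keys reference the table, but it deallocates the blocks instead of generating undo and redo for every row. The table name is hypothetical.

  -- DELETE: row-by-row, generates undo and redo for all 7M rows
  -- DELETE FROM stg_daily_load;

  -- TRUNCATE: DDL, resets the high-water mark and releases the blocks almost instantly
  TRUNCATE TABLE stg_daily_load REUSE STORAGE;

  -- For statement-level timing inside the package, DBMS_PROFILER or DBMS_HPROF (11g) can be used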
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the 'statistics' so that the plan is followed without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname from emp e, dept d where e.deptno=d.deptno;
emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions, with respect to the above query: Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that? Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I could put the ORDERED and USE_HASH hints in to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
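For comparison, both approaches are sketched below; the hinted version is straightforward, while the statistics version uses DBMS_STATS.SET_TABLE_STATS with made-up numbers purely to show the mechanism. The values that would actually push the optimizer toward a hash join or a different driving table depend on the data and would have to be verified, for example with DBMS_XPLAN.

  -- Hint-based: force the join order and join method for this one statement
  SELECT /*+ LEADING(d) USE_HASH(e) */ e.empid, e.ename, d.dname
  FROM   dept d, emp e
  WHERE  e.deptno = d.deptno;

  -- Statistics-based: override the stored table statistics the optimizer costs against
  BEGIN
    DBMS_STATS.SET_TABLE_STATS(
      ownname => 'SCOTT',
      tabname => 'EMP',
      numrows => 1000000,    -- made-up figure, just to change the costing
      numblks => 20000);
    DBMS_STATS.SET_TABLE_STATS(
      ownname => 'SCOTT',
      tabname => 'DEPT',
      numrows => 4,
      numblks => 1);
  END;
  /

  -- Then re-check the plan
  EXPLAIN PLAN FOR
  SELECT e.empid, e.ename, d.dname
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno;

  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);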
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
The following query gets its input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND etc., and the user may or may not select values from the drop-down boxes.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values it has to fetch all the records; in that case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches all the records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure and hold the values selected by the user as a comma-separated string; here V_SELLOB is such a variable and LOB_DESC is a column in the DB. (An alternative predicate style is sketched after the snippet.)
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
   SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
   FROM   items a
   WHERE  a.sku_status = 'A'
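Purely as a sketch of the alternative usually suggested for this "DEFAULT means all rows" pattern (the variable and column names are taken from the question, everything else is illustrative, and it is shown for a single selected value; the comma-separated multi-select case would need the string split into a collection first), the filter can be written as a plain OR so no function is applied to the column:

  OPEN v_refcursor FOR
     SELECT *
     FROM   items a
     WHERE  a.sku_status = 'A'
     AND    (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob);
     -- repeat the same pattern for the FAMILY, BRAND, ... drop-downs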
What are the principal things to look at when we get different performance results for the same query? I have 2 different databases: the plan and the data are the same, but the performance results are very different.
What are the most important performance metrics we have to measure or take into account to preserve or increase DB performance in terms of response times, and anything else relevant to performance?