Are the cache buffers chains latch and the buffer busy waits event related to one another?
The latch definition I found via Google says: latches are simple, low-level serialization mechanisms to protect shared data structures in the system global area (SGA).
What does it mean by "protect"? Does it mean protecting buffers from aging out of the SGA under the LRU algorithm, or protecting them from other processes, for example from simultaneous DML operations? Or both?
Does the buffer busy waits event occur because of the cache buffers chains latch?
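For context, a quick way to see whether the cache buffers chains latch is actually under contention, and which sessions are waiting on latches right now, is to query the latch views. This is only a sketch; it assumes SELECT privileges on the v$ views (9i/10g naming):

```sql
-- Is the cache buffers chains latch showing misses and sleeps?
SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  name = 'cache buffers chains';

-- Sessions currently waiting on 'latch free'; P1 holds the latch address
SELECT sid, p1raw, p2, seconds_in_wait
FROM   v$session_wait
WHERE  event = 'latch free';
```

High sleeps on the latch alongside buffer busy waits often points at the same hot blocks, but the two events measure different things: the latch protects the hash chain used to find a buffer, while buffer busy waits happen after the buffer is found.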
Considering the factors below, I am planning to increase the buffer cache from 256 MB to 512 MB.
1. The buffer cache hit ratio is around 35%, even in normal periods.
2. The 'free buffer requested' statistic during peak and normal hours is shown below.
Statistic                       Total         per Second    per Trans
------------------------------- ------------- ------------- ---------
free buffer requested              54,694,995      15,226.9   2,523.7
free buffer requested              23,412,674       6,501.7   2,585.9
3. Most of the top-5 physical-read and logical-read queries are well tuned, and some queries are doing full table scans on small tables (row counts between 1,500 and 35,000), so indexing is not required for those queries. But these queries are executed frequently.
4. Current SGA configuration:
SQL> show sga
Total System Global Area  2148534928 bytes
Fixed Size                    731792 bytes
Variable Size             1879048192 bytes
Database Buffers           268435456 bytes
Redo Buffers                  319488 bytes
5. Top 5 wait events during the slow database performance and high CPU utilization (>80%) issue:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                  % Total
Event                                         Waits     Time (s) Ela Time
-------------------------------------------- ---------- -------- --------
latch free                                    1,848,898  153,793    52.00
buffer busy waits                               395,280   87,201    29.49
db file scattered read                        3,488,648   34,199    11.56
enqueue                                           4,052   10,897     3.68
CPU time                                                   5,567     1.88
6. Top 5 wait events during normal activity, with CPU utilization around 40%:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                  % Total
Event                                         Waits     Time (s) Ela Time
-------------------------------------------- ---------- -------- --------
CPU time                                                   1,860    45.32
db file scattered read                        1,133,669     985    23.99
imm op                                              776     605    14.73
sbtinfo2                                            208     139     3.40
sbtbackup                                             2     123     3.00
1) Does shutting down the database flush all the data buffers from the buffer cache? 2) In any Oracle version, is there a way to flush only the buffer cache?
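For reference, from 10g onward there is a documented command that flushes only the buffer cache; on 9i the closest equivalent I know of works at tablespace level. A sketch, assuming the necessary ALTER SYSTEM/ALTER TABLESPACE privileges (the USERS tablespace is just an example):

```sql
-- 10g and later: flush only the database buffer cache
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- 9i workaround: taking a tablespace offline and back online flushes
-- its buffers (disruptive; illustrative only)
ALTER TABLESPACE users OFFLINE;
ALTER TABLESPACE users ONLINE;
```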
1) If I issue a DELETE statement to delete a row, will that statement pull any data from the datafile into the database buffer cache? How is the change made by a DELETE statement recorded in the buffer cache? And how is that change then applied to the data in the datafiles after commit?
I have some confusion about the KEEP pool in the buffer cache.
1. What is the reasoning for placing a table in the KEEP buffer pool? If it is frequently accessed, it will be around when needed anyway (i.e. if it is constantly being accessed, it will not age out).
2. Would the table still be in the DEFAULT pool if the KEEP pool is not sized and the command ALTER TABLE SCOTT.EMP STORAGE (BUFFER_POOL KEEP) is issued?
3. If the database is restarted, will the table be wiped out of the KEEP pool and then pinned to the KEEP pool again?
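For illustration, assigning a table to the KEEP pool only takes effect once the pool itself has been sized; a minimal sketch (the 16M size and the SCOTT.EMP table are just examples):

```sql
-- Size the KEEP pool first (dynamic parameter in 9i and later)
ALTER SYSTEM SET db_keep_cache_size = 16M;

-- Then direct the table's blocks at it
ALTER TABLE scott.emp STORAGE (BUFFER_POOL KEEP);

-- Verify which pool the segment is assigned to
SELECT owner, segment_name, buffer_pool
FROM   dba_segments
WHERE  owner = 'SCOTT' AND segment_name = 'EMP';
```

Note that the BUFFER_POOL attribute is part of the segment definition and survives a restart; the cached blocks themselves are lost at shutdown and re-read into the KEEP pool on first access.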
If the buffer cache size consumes the full SGA_TARGET, is it possible that this will cause a performance issue? I am facing 100% CPU consumption while a single query executes, the query that generates the monthly report data.
I have two questions:
1) How do I fix the 100% CPU consumption?
2) How do I find the total number of users hitting a specific Oracle schema?
Oracle 10.2.0.5 Standard Edition
SGA_TARGET: 14G   SGA_MAX_SIZE: 20G   PGA: 3G

SGA details below:
NAME                             BYTES/1024/1024
-------------------------------- ---------------
Fixed SGA Size                        2.01795197
Redo Buffers                          13.9804688
Buffer Cache Size                          13632
Shared Pool Size                             640
Large Pool Size                               16
Java Pool Size                                16
Streams Pool Size                             16
Granule Size                                  16
Maximum SGA Size                           20480
Startup overhead in Shared Pool              208
Free SGA Memory Available                   6144
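On the second question, a rough way to count sessions connected to, or currently parsing against, a particular schema is to query v$session. A sketch, using the HR schema purely as an example:

```sql
-- Sessions logged in as a given user/schema
SELECT COUNT(*) AS session_count
FROM   v$session
WHERE  username = 'HR';

-- All schemas, with the number of sessions currently attached to each
SELECT schemaname, COUNT(*) AS sessions
FROM   v$session
GROUP  BY schemaname
ORDER  BY sessions DESC;
```

This only counts current sessions; for a historical count you would need auditing or AWR/ASH data instead.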
Say the database buffer cache is configured at 2 MB and my update needs 4 MB of buffers; will it throw an error message, or will the update complete without any issue?
I am currently in the favorable situation of having excess memory available on the database server, a single-node setup. The server serves only this one instance and no other processing. The database is around 2.3 TB and memory is 50 GB. For the majority of processing, AIX allocates a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation on this, but have not had any luck yet. My guess is that it would be better to let Oracle cache this data, meaning increase the SGA target and max size to allow a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages; if the memory is allocated to the SGA, it's pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both caches active seems to be a waste of memory, effectively storing the data twice: once in AIX persistent pages and a second time in the Oracle database buffer cache.
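One commonly suggested way to avoid the double buffering described above is to have Oracle bypass the file system cache with direct/concurrent I/O and give the reclaimed memory to the SGA. A sketch, with the caveat that the right setting depends on the AIX mount options (CIO/DIO) and should be tested before production use:

```sql
-- Ask Oracle to use direct and asynchronous I/O where the platform
-- supports it; takes effect after an instance restart
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
```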
Is there any relationship between tuning the BUFFER CACHE and buffer busy waits?
1) Buffer busy waits happen when a user process finds that the same data block in the buffer cache is being used by another session. 2) They also happen when a server process finds that the same data block is being read from the datafile by another session.
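To see which kind of block is behind the buffer busy waits (data block, segment header, undo header, and so on), the wait statistics can be broken down by block class. A sketch, assuming SELECT access on the v$ views:

```sql
-- Buffer busy waits broken down by block class
SELECT class, count, time
FROM   v$waitstat
ORDER  BY count DESC;
```

Which class dominates suggests different fixes: data blocks often point at hot rows or too-few-freelists, segment headers at freelist/ASSM settings, undo headers at undo configuration.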
I have a serious doubt about Oracle architecture functionality. When a user issues an UPDATE statement, the data blocks are brought into the database buffer cache; where are the changes to the data blocks actually made? Is a copy of the data block kept in the buffer cache and the changes made to the block in the buffer cache? Or is a copy of the data block kept in the undo tablespace and the changes made to the blocks in the undo tablespace?
In short: are the changes to the data blocks made in the database buffer cache or in the undo tablespace?
We need to simulate production-like load in a test database without actually creating any data volume. Is there a tool that can be used to achieve this? If yes, which is the best tool, and why?
I'm trying to simulate a delete operation using an update via a trigger. My tables are:
CREATE TABLE employee (
  lname  VARCHAR(15) NOT NULL,
  ssn    CHAR(9)     NOT NULL,
  salary FLOAT,
  dno    INT         NOT NULL,
  vst    DATE,
  vet    DATE,
  PRIMARY KEY (ssn));
[code]....
What I want to do is: whenever there is an update on vet (valid end time) in employee, delete the row from the employee table and insert the old values from employee into the emp_hist table along with the new value for vet. Here's my trigger:
CREATE TRIGGER trig4
AFTER UPDATE OF vet ON employee
FOR EACH ROW
BEGIN
  INSERT INTO emp_hist
  VALUES (:old.lname, :old.ssn, :old.salary, :old.dno, :old.vst, :new.vet);
  DELETE FROM employee
  WHERE ssn = :new.ssn AND vet IS NOT NULL;
END trig4;
The problem is I get a mutating-table error. What I'd like to know is whether the above trigger is possible, and if so, how to implement it without the error. It makes sense syntactically and logically (at least to me).
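For what it's worth, the usual workaround for the mutating-table error is to move the DELETE out of the row-level trigger: capture the keys in the row trigger and do the DELETE in an AFTER statement trigger, where the table is no longer mutating. A sketch of that pattern (the package and trigger names here are made up; on 11g and later a compound trigger achieves the same in a single object):

```sql
-- Package to carry state between the row and statement triggers
CREATE OR REPLACE PACKAGE trig4_state AS
  TYPE ssn_tab IS TABLE OF employee.ssn%TYPE INDEX BY PLS_INTEGER;
  ssns ssn_tab;
END trig4_state;
/

-- Row trigger: archive the row and remember its key (no DML on employee)
CREATE OR REPLACE TRIGGER trig4_row
AFTER UPDATE OF vet ON employee
FOR EACH ROW
BEGIN
  INSERT INTO emp_hist
  VALUES (:old.lname, :old.ssn, :old.salary, :old.dno, :old.vst, :new.vet);
  trig4_state.ssns(trig4_state.ssns.COUNT + 1) := :new.ssn;
END;
/

-- Statement trigger: employee is no longer mutating at this point
CREATE OR REPLACE TRIGGER trig4_stmt
AFTER UPDATE OF vet ON employee
BEGIN
  FOR i IN 1 .. trig4_state.ssns.COUNT LOOP
    DELETE FROM employee
    WHERE ssn = trig4_state.ssns(i) AND vet IS NOT NULL;
  END LOOP;
  trig4_state.ssns.DELETE;
END;
/
```

A BEFORE statement trigger that clears the collection is also advisable in practice, so a failed earlier statement cannot leave stale keys behind.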
The following query hangs with either a 'db file sequential read' or a 'latch free' wait event. Importantly, the table that is self-joined in the subquery has no index at all. While it was hung I tried to get a trace of it and terminated it twice, so I have not obtained the row source generation. The table has only 120,000 records, and the statement should update 34,000 of them.
UPDATE invoice_header inv
SET inv.modified_due_date =
    (SELECT inv1.btn_due_date
     FROM invoice_header inv1
     WHERE inv.dct_code = inv1.dct_code
       AND inv1.release = 'A5')
[code]...
During the 'sequential read' waits, I used the p1 and p2 values to find out what the session was reading, and found it was reading the table itself.
During the 'latch free' waits I found the following:
SELECT name, 'Child '||child#, gets, misses, sleeps
FROM v$latch_children
WHERE addr = (SELECT p1raw FROM v$session_wait WHERE sid = 18)
UNION
[code]...
However, instead of the self join, when I created a global temporary table as:
CREATE GLOBAL TEMPORARY TABLE t AS
SELECT * FROM invoice_header WHERE release = 'A5'
and used it in the update as:
UPDATE invoice_header inv
SET inv.modified_due_date =
    (SELECT t.btn_due_date
     FROM t
     WHERE inv.dct_code = t.dct_code
       AND t.release = 'A5')
WHERE inv.release = 'A5'
  AND inv.btn_due_date >= TRUNC(SYSDATE)
It updated the records in a second!!
My questions are: 1) Why does it produce the 'sequential read' wait event when there is no index access? In other words, why is it doing single-block access when a full table scan is required? 2) Why is the 'latch free' wait event here, and what does it indicate with 'cache buffer handles'? Is it because we are reading and updating the same segment?
Let me know in case the DDL of the table is required. It has all nullable columns and no index at all. Since this is 9i, I am unable to use MERGE effectively in this case.
I have an application that uses 32 tables for retrieval. Of these, 4 tables are important and each is more than 100 MB in size. Can I move the indexes of these 4 tables into cache memory to improve the application's retrieval performance? And if I do so, will it affect the performance of other applications?
I am working on Oracle 10g using db_block_buffers, but I am not able to get information from the database cache advisory. Is there a method or procedure to activate the cache advisory despite the use of db_block_buffers?
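As far as I understand it, v$db_cache_advice is only populated when DB_CACHE_ADVICE is ON, and the advisory works with the dynamic buffer cache parameter (DB_CACHE_SIZE) rather than the deprecated static DB_BLOCK_BUFFERS. A sketch of the switch (the 512M size is illustrative; requires a spfile and an instance restart):

```sql
-- Remove the deprecated static parameter and use the dynamic one
ALTER SYSTEM RESET db_block_buffers SCOPE = SPFILE SID = '*';
ALTER SYSTEM SET db_cache_size   = 512M SCOPE = SPFILE;
ALTER SYSTEM SET db_cache_advice = ON   SCOPE = SPFILE;

-- After restart, the advisory view should populate:
SELECT size_for_estimate, size_factor, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    advice_status = 'ON';
```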