Buffer Cache Hit Ratio Is Low In Oracle 9i After Maintenance
Mar 3, 2011
Database: 9.2.0.7
OS: Windows 2003 Server Standard Edition
RAM: 4 GB
The buffer cache hit ratio on this server is around 83%, whereas it was normally around 98% before I did some maintenance activities.
I have done some maintenance activities in January on this database.
The maintenance activities included the steps below:
1. Deleted old data from the production tables
2. Reorganized the tablespaces and tables
3. Rebuilt the indexes for those tables
4. Finally, collected statistics for those tables
Now after this activity the buffer cache hit ratio is very low.
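As a starting point for comparing against the pre-maintenance baseline, here is a minimal sketch of the classic hit ratio calculation from v$sysstat (statistic names as used in 9i; the ratio alone is not conclusive, since the deletes and reorganization may simply have changed the physical read pattern):

SELECT ROUND((1 - phy.value / (db.value + con.value)) * 100, 2) AS buffer_hit_pct
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';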
View 8 Replies
May 20, 2013
What would be the best buffer cache hit ratio for a database, for good performance?
Note: in general.
View 2 Replies
Jul 30, 2012
Are the cache buffers chains latch and the buffer busy waits event related to one another?
The latch definition from Google says: latches are simple, low-level serialization mechanisms to protect shared data structures in the system global area (SGA).
What does it mean by protect? Does this mean protection from aging out as per the LRU algorithm and being removed from the SGA,
or
protection from other processes, say for example from simultaneous DML operations,
or
both?
Does the buffer busy waits event occur because of the cache buffers chains latch?
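As a hedged sketch (standard dynamic performance views; the latch is named 'cache buffers chains' in v$latch), the two symptoms can be checked side by side to see whether they rise together:

SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  name = 'cache buffers chains';

SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('buffer busy waits', 'latch free');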
View 3 Replies
Mar 16, 2013
How should I understand the metric Row Cache Hit Ratio in V$SYSMETRIC_HISTORY? Is it the dictionary cache hit ratio?
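The row cache is the data dictionary cache, so as a sketch its hit ratio can be derived directly from v$rowcache and compared with the metric value:

SELECT ROUND((1 - SUM(getmisses) / SUM(gets)) * 100, 2) AS row_cache_hit_pct
FROM   v$rowcache;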
View 1 Replies
Jul 29, 2012
We are getting negative values for the library cache hit ratio in an AWR report on 11g (11.2.0.3) with Solaris[tm] OE (64-bit). Why does it show a negative value?
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 99.87 Redo NoWait %: 99.99
Buffer Hit %: 92.17 In-memory Sort %: 100.00
Library Hit %: -3,321.23 Soft Parse %: 81.95
Execute to Parse %: 92.88 Latch Hit %: 95.11
Parse CPU to Parse Elapsd %: 87.25 % Non-Parse CPU: 81.39
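To cross-check the AWR figure, here is a sketch that recomputes the library cache hit percentage directly from v$librarycache (a negative AWR value generally suggests the underlying counters changed unexpectedly between snapshots, for example across an instance bounce, which is an assumption to verify):

SELECT ROUND(SUM(pinhits) / SUM(pins) * 100, 2) AS library_hit_pct
FROM   v$librarycache;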
View 3 Replies
May 7, 2011
I am currently in the favorable situation of having an excess amount of memory available on the database server, a single-node setup. The server serves only the single instance and no other processing. The database size is around 2.3 TB and memory is 50 GB. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data, meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages. If the memory were allocated to the SGA, it is pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both caches active seems to be a waste of memory, effectively storing the data twice: once in AIX persistent pages and a second time in the Oracle database buffer cache.
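One commonly discussed option, shown here only as a hedged sketch (the parameter is standard, but whether direct/concurrent I/O suits your AIX filesystem and storage layout is an assumption to verify), is to bypass the AIX file cache for datafile I/O and give that memory to the SGA instead:

SHOW PARAMETER filesystemio_options

ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;
ALTER SYSTEM SET sga_max_size = 30G SCOPE = SPFILE;  -- 30G is an illustrative value only
ALTER SYSTEM SET sga_target   = 30G SCOPE = SPFILE;
-- an instance restart is required for these changes to take effect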
View 4 Replies
May 30, 2013
I have a serious doubt about Oracle architecture functionality. When a user issues an UPDATE statement, the data blocks are brought into the db buffer cache, but where are the changes to the data blocks made? Is a copy of the data block kept in the db buffer cache and the changes made to the block in the buffer cache, or is a copy of the data block kept in the undo tablespace and the changes made to the blocks in the undo tablespace?
In simple terms, are the changes to the data blocks made in the db buffer cache or in the undo tablespace?
View 7 Replies
Feb 22, 2013
How can we check the size of the data buffer cache?
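A couple of sketch queries (standard views and parameters) that show the current buffer cache size:

SELECT component, current_size/1024/1024 AS size_mb
FROM   v$sga_dynamic_components
WHERE  component LIKE '%buffer cache%';

SHOW PARAMETER db_cache_size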
View 7 Replies
Apr 18, 2012
I want to know exactly what happens in the Oracle architecture when an UPDATE query is fired.
View 1 Replies
Mar 26, 2012
Give me a script to find how much of my db buffer cache and redo log buffer is used and how much is free.
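A minimal sketch (assumes SELECT privilege on the v$ views): v$bh has one row per buffer in the cache, and v$sgastat reports the log buffer size. Note that free buffers naturally tend toward zero on a busy instance, so a low free count is not by itself a problem.

SELECT status, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY status;

SELECT pool, name, bytes
FROM   v$sgastat
WHERE  name IN ('log_buffer', 'free memory');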
View 1 Replies
Nov 30, 2012
Considering the below factors, I am planning to increase the buffer cache value from 256 MB to 512 MB.
1. The buffer cache hit ratio value is around 35% even in the normal period.
2. The free buffer requested values during peak and normal hours are below:
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
free buffer requested 54,694,995 15,226.9 2,523.7
free buffer requested 23,412,674 6,501.7 2,585.9
3. Most of the top 5 physical-read and logical-read queries are well tuned, and some of the queries are doing FTS on small tables (counts from 1,500 to 35,000), so indexing is not required for these queries. But these queries are getting executed frequently.
4. Current SGA:
SQL> show sga
Total System Global Area 2148534928 bytes
Fixed Size 731792 bytes
Variable Size 1879048192 bytes
Database Buffers 268435456 bytes
Redo Buffers 319488 bytes
5. Top 5 wait events during the slow DB performance and high CPU utilization (>80%) issue:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
latch free 1,848,898 153,793 52.00
buffer busy waits 395,280 87,201 29.49
db file scattered read 3,488,648 34,199 11.56
enqueue 4,052 10,897 3.68
CPU time 5,567 1.88
6. Top 5 wait events during normal activities, when CPU utilization is around 40%:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 1,860 45.32
db file scattered read 1,133,669 985 23.99
imm op 776 605 14.73
sbtinfo2 208 139 3.40
sbtbackup 2 123 3.00
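Before committing to 512 MB, here is a sketch using the buffer cache advisory (assumes db_cache_advice is enabled, which is the default with statistics_level = TYPICAL) to estimate how physical reads would change at various cache sizes:

SELECT size_for_estimate, buffers_for_estimate,
       estd_physical_read_factor, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;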
View 11 Replies
Sep 27, 2010
1) Does shutting down the DB flush all the data buffers from the buffer cache?
2) In any Oracle version, do we have any way to flush only the buffer cache?
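For question 2, a sketch of the commands usually cited (availability by version is an assumption to verify against your release; the 9i event is undocumented, so test it outside production first):

-- 10g and later, documented:
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- 9i, undocumented event:
ALTER SESSION SET EVENTS 'immediate trace name flush_cache';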
View 1 Replies
Aug 19, 2010
1) If I issue a DELETE statement to delete a row, will this statement drag any data from the datafile into the database buffer cache? How is the change made by a DELETE statement recorded in the buffer cache? How is this change then applied to the data in the datafiles after commit?
View 7 Replies
Mar 28, 2011
How can I know which objects use the KEEP buffer cache?
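A sketch using the data dictionary, which records the buffer pool assigned to each segment:

SELECT owner, segment_name, segment_type
FROM   dba_segments
WHERE  buffer_pool = 'KEEP';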
View 5 Replies
Jan 15, 2011
I have some confusion about Keep Pool in Buffer Cache.
1. What is the reasoning for placing a table in the KEEP buffer pool, given that if it is frequently accessed it will be around when needed anyway (i.e. if it is constantly being accessed it will not age out)?
2. Would the table still be in the DEFAULT pool if the KEEP pool is not sized and the command ALTER TABLE SCOTT.EMP STORAGE (BUFFER_POOL KEEP) is issued?
3. If the database is restarted, will the table be wiped out of the KEEP pool and then be pinned to the KEEP pool again?
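For reference, a minimal sketch of sizing and populating the KEEP pool (db_keep_cache_size applies to 10g and later; 64M is an arbitrary illustrative size):

ALTER SYSTEM SET db_keep_cache_size = 64M SCOPE = BOTH;
ALTER TABLE scott.emp STORAGE (BUFFER_POOL KEEP);

SELECT buffer_pool FROM dba_tables
WHERE  owner = 'SCOTT' AND table_name = 'EMP';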
View 2 Replies
Apr 25, 2013
I want to simulate the latch: cache buffers chains wait event caused by the use of a nested loop join on lookup tables.
This is what a tried :
-- create parent / child tables
SQL>drop table emp1 purge;
drop table dept1 purge;
create table dept1 (dept_id number primary key,
dept_name char(30));
[Code]....
I traced many queries like the one given below (dept_id between 1 and n, where n varied from 10 to 1000), but they always result in a hash join:
1* select d.dept_name, e.id from sys.dept1 d, sys.emp1 e where d.dept_id = e.dept_id and e.dept_id < 1000
Execution Plan
----------------------------------------------------------
Plan hash value: 619452140
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 998K| 41M| 680 (2)| 00:00:09 |
|* 1 | HASH JOIN | | 998K| 41M| 680 (2)| 00:00:09 |
|* 2 | TABLE ACCESS FULL| DEPT1 | 999 | 34965 | 4 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMP1 | 999K| 8780K| 672 (2)| 00:00:09 |
----------------------------------------------------------------------------
What can I do to get a nested loop join so that I can simulate latch: cache buffers chains?
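One hedged sketch, reusing the tables above: force the nested loop with hints and make the inner lookup an indexed access, so every probe revisits the same hot index root and branch blocks, which is what typically drives cache buffers chains activity. Running it from several concurrent sessions makes the latch contention easier to observe in v$latch.

SELECT /*+ LEADING(e) USE_NL(d) INDEX(d) */
       d.dept_name, e.id
FROM   sys.emp1 e, sys.dept1 d
WHERE  d.dept_id = e.dept_id
AND    e.dept_id < 1000;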
View 10 Replies
Jun 14, 2013
If the buffer cache size consumes nearly all of the SGA_TARGET size, is it possible that this will cause any performance issue? I am facing 100% CPU consumption while a single query executes, which generates monthly report data.
I have two questions:
1) How to fix the 100% CPU consumption?
2) How to find the total number of users hitting a specific Oracle schema?
Oracle 10.2.0.5 Standard
Sga_target : 14G
Sga_max :20G
Pga :3G
Below are the SGA details:
NAME BYTES/1024/1024
-------------------------------- ---------------
Fixed SGA Size 2.01795197
Redo Buffers 13.9804688
Buffer Cache Size 13632
Shared Pool Size 640
Large Pool Size 16
Java Pool Size 16
Streams Pool Size 16
Granule Size 16
Maximum SGA Size 20480
Startup overhead in Shared Pool 208
Free SGA Memory Available 6144
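For question 2, a sketch that counts connected sessions per schema from v$session (this counts current sessions, not distinct end users; filter on the schema of interest as needed):

SELECT schemaname, COUNT(*) AS sessions
FROM   v$session
WHERE  type = 'USER'
GROUP  BY schemaname
ORDER  BY sessions DESC;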
View 2 Replies
Jan 26, 2011
Say the database buffer cache is configured as 2 MB and my updates may need 4 MB; will it throw an error message, or will the update happen perfectly without any issues?
View 2 Replies
Oct 3, 2010
Is there any relationship between tuning the BUFFER CACHE and BUFFER BUSY WAITS?
1) Buffer busy waits happen when the user process finds that the same data block is being used by another user in the BUFFER CACHE.
2) They also happen when the server process finds that the same data block is being used in the datafile.
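As a sketch, v$waitstat breaks buffer busy waits down by block class (data block, segment header, undo header, and so on), which narrows down which of the scenarios above is in play:

SELECT class, count AS waits, time AS wait_time
FROM   v$waitstat
ORDER  BY count DESC;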
View 5 Replies
Jun 20, 2013
What is the difference between Cache Fusion and cache coherency? Are they the same, or different functionality?
View 1 Replies
Aug 27, 2013
I have a table (parent - child). There is a requirement to maintain this table, which represents the hierarchy of the organisation. So every quarter they will be updating the table. They will be importing the data through an Excel file, and in that Excel there are 3 action items:
=> Insert, Update and Delete (logical delete).
CREATE TABLE PARENT_CHILD_TBL (
  "ID"        VARCHAR2(6 BYTE) NOT NULL ENABLE,
  "ID_DESC"   VARCHAR2(200 BYTE),
  "ID_LEVEL"  VARCHAR2(200 BYTE),
  "PARENT_ID" VARCHAR2(200 BYTE)
)
For Update: what validations can apply to an update of hierarchical data in general? For example:
= How to derive the level value on the database side when an id is moved to some other level.
= How to maintain the relation, e.g. A -> B -> D (A is the grandparent here) and A -> C; if B is updated to be the parent node of A, then we should throw an error (cyclic data).
Any more validations for hierarchical data?
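A sketch of both checks using CONNECT BY against the table above (assumes root rows have a NULL PARENT_ID):

-- derive each node's level from the stored parent links
SELECT id, id_desc, LEVEL AS derived_level
FROM   parent_child_tbl
START WITH parent_id IS NULL
CONNECT BY PRIOR id = parent_id;

-- flag rows that participate in a cycle, e.g. B made the parent of A while A is still above B
SELECT id, parent_id, CONNECT_BY_ISCYCLE AS in_cycle
FROM   parent_child_tbl
START WITH parent_id IS NULL
CONNECT BY NOCYCLE PRIOR id = parent_id;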
View 6 Replies
Jul 10, 2012
I have some questions.
TTversion : TimesTen Release 11.2.2.3.0 (64 bit Linux/x86_64) (tt112230:53376) 2012-05-24T09:20:08Z
We are testing an AWT cache group (with CacheAwtParallelism=4).
An application (1 process) generates DML against TimesTen (DSN=TEST).
At this point, is the DML delivered across the 4 parallel AWT streams?
[TEST]
Driver=/home/TimesTen/tt112230/lib/libtten.so
DataStore=/home/TimesTen/DataStore/TEST/test
PermSize=1024
TempSize=512
PLSQL=1
[code].......
View 7 Replies
Jun 17, 2011
We have a few tables in our production database which have grown huge in size and will keep increasing in future too, so as part of the corrective measures we have jotted down the below 3 methods to manage the size of those tables:
1> Partition the table, take an export of the identified partitions, and after that truncate those partitions (a sketch follows below).
2> Create history tables and move not-so-current data from the original table to the history table.
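As mentioned in point 1, here is a minimal sketch on a hypothetical range-partitioned table (table, column, and partition names are invented for illustration; the export step would typically use Data Pump with a partition clause):

CREATE TABLE sales_hist (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD')),
  PARTITION p2011 VALUES LESS THAN (TO_DATE('2012-01-01','YYYY-MM-DD')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- after exporting the old partition, e.g. expdp ... tables=sales_hist:p2010
ALTER TABLE sales_hist TRUNCATE PARTITION p2010;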
View 3 Replies
Sep 14, 2011
I have given the sample data below.
create table ex (sno number, ename varchar2(10), job_code char(4), sal number, dept varchar2(10)); -- dept column assumed from the insert values below
insert into ex values(101,'John','Java',21000,'IT');
insert into ex values(102,'Michel','BI',25000,'IT');
insert into ex values(103,'Johny','Java',30000,'IT');
[code]...
My expected output is attached in a text file
View 12 Replies
Aug 5, 2013
SQL> select name, decode(unit,'bytes',value/1024/1024,value) as mb from v$pgastat;

NAME                                                                     MB
---------------------------------------------------------------- ----------
aggregate PGA target parameter                                        25600
aggregate PGA auto target                                        2724.14648
global memory bound                                                    1024
total PGA inuse                                                  22601.7333
total PGA allocated                                              26653.6230
maximum PGA allocated
[code]....
I understand I have a soft limit (aggregate PGA target parameter) which was exceeded (maximum PGA allocated = 35374.4638), hence over allocation count > 0. Extra bytes read/written = 13 GB, so we had an excessive 13 GB that had to be flushed to disk (excessive I/O operations) because of the 1024 MB limitation (global memory bound); it is not enough to join or sort, so we must do one-pass or multi-pass operations. Global memory bound defines the size of a single sort or join operation, so does it mean it is some kind of sort_area_size and hash_area_size for automatic workarea_size_policy? And in that case, what about _smm_max_size? Aggregate PGA auto target is the total amount of space that Oracle can give to work areas running in automatic mode.
So I can't understand the ratio between global memory bound and aggregate PGA auto target: why is the aggregate PGA auto target so tiny (relative to the process count)? Is the global memory bound static for a particular aggregate PGA target parameter?
Can I change it only by redefining the aggregate PGA target parameter? What would happen to aggregate PGA auto target if I started 10 sort operations and each of them took about 1 GB of memory? How large would it become? 10 GB?
View 4 Replies
Oct 15, 2012
In Oracle 11gR2, I created a replica of the HR.Employees table and executed the following statement (although using the SUM() function is not logical in this case, it is just to test the result).
STEP - 1
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
-------------------------------------------------------------------------------------------------------
202 Pat Fay 6000
201 Michael Hartstein 13000
Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 130 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 2 | 130 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMPLOYEES_COPY | 2 | 130 | 3 (0)| 00:00:01 |
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
690 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
STEP - 2
INSERT INTO HR.employees_copy
VALUES(200, 'Dummy', 'User','Dummy.User@email.com',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
STEP - 3
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
--------------------------------------------------------------------------------------------------
202 Pat Fay 6000
201 Michael Hartstein 13000
200 Dummy User 5000
Elapsed: 00:00:00.03
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 195 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 3 | 195 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMPLOYEES_COPY | 3 | 195 | 3 (0)| 00:00:01 |
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
714 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
In the execution plan of STEP-3, the RESULT CACHE operation is shown against Id 1, which indicates the result has been retrieved directly from the result cache. Does this mean that the Oracle server has incrementally retrieved the result set?
Because, before the execution of STEP-2, the cache contained only 2 records. Then 1 record was inserted but after STEP-3, a total of 3 records was returned from cache. Does this mean that newly inserted row is retrieved from database and merged to the cached result of STEP-1?
If Oracle server has incrementally retrieved and merged newly inserted record, what mechanism is being used by the Oracle to do so?
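As a sketch of what to check: the usual behaviour is that DML on a dependent table marks the cached result Invalid, and the next execution rebuilds and re-caches the full result rather than merging the new row into it (a hedged interpretation to confirm against the view output; the cache key 3acbj133x8qkq8f8m7zm0br3mu from the plans above appears in the CACHE_ID column):

SELECT id, type, status, name
FROM   v$result_cache_objects
WHERE  name LIKE '%EMPLOYEES_COPY%';

SELECT name, value
FROM   v$result_cache_statistics;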
View 2 Replies
May 16, 2011
What is the reason for the "Oracle Jar Cache" folder filling up with files? Why should we clear the cache? My clients are also facing the same issue: when they use the application (in Forms 9i) continuously, after some time the application hangs. When we follow these steps it works fine:
- close all IE windows
- clear the cache
- then restart the machine
View 1 Replies
May 3, 2013
We want to truncate a table in the Oracle DB. After the truncate, the fact table will be loaded again. After the new load into the fact table, we want to tell the TimesTen DB to refresh the cache table. The cache table is in a user-owned, read-only cache group with no autorefresh. We want to start the refresh of the cache group in TimesTen from a PL/SQL block in the Oracle DB. The refresh should not be an autorefresh, because the refresh should only start after the fact table has been newly loaded following the truncate.
View 1 Replies
Apr 21, 2013
I have an application which uses 32 tables for retrieval. Of these, 4 tables are important and have a size of more than 100 MB. Can I move the indexes of these 4 tables into cache memory to improve the application's retrieval performance? If I do so, will that affect any other application's performance?
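A hedged sketch of one way to keep those indexes resident: assign them to the KEEP buffer pool (index names are placeholders; db_keep_cache_size must be set large enough, and that memory comes out of what is otherwise available to the default cache, which is where any impact on other applications would show up):

ALTER INDEX app_owner.big_table1_idx STORAGE (BUFFER_POOL KEEP);
ALTER INDEX app_owner.big_table2_idx STORAGE (BUFFER_POOL KEEP);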
View 1 Replies
Oct 10, 2012
Are there any recommendations or good practices for setting the sequence CACHE parameter (for example, caching one hour's or one day's worth of values, etc.)?
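For illustration only, a sketch of the syntax (1000 is an arbitrary value; a common rule of thumb is to cache at least the number of values consumed during a burst of activity, accepting that cached values are lost on instance restart or when the sequence ages out of the shared pool):

CREATE SEQUENCE order_seq START WITH 1 INCREMENT BY 1 CACHE 1000;

ALTER SEQUENCE order_seq CACHE 1000;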
View 4 Replies