Server Administration :: How To Know Which Objects Used Keep Buffer Cache
Mar 28, 2011 - How can I know which objects use the KEEP buffer cache?
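A minimal way to see which segments have been assigned to the KEEP pool is to query DBA_SEGMENTS; a second query (a sketch, run as a DBA) shows how many blocks of each object are currently cached, across all buffer pools:

-- segments explicitly assigned to the KEEP buffer pool
SELECT owner, segment_name, segment_type
FROM   dba_segments
WHERE  buffer_pool = 'KEEP';

-- blocks currently in the buffer cache, per object (all pools)
SELECT o.owner, o.object_name, COUNT(*) AS cached_blocks
FROM   v$bh b JOIN dba_objects o ON o.data_object_id = b.objd
GROUP  BY o.owner, o.object_name
ORDER  BY cached_blocks DESC;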
1) If I issue a DELETE statement to delete a row, will this statement drag any data from the datafile into the database buffer cache? How is the change made by a DELETE statement recorded in the buffer cache? How is this change then applied to the data in the datafiles after commit?
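One way to see this for yourself is to count the table's blocks in V$BH before and after the DELETE; a rough sketch, assuming a test table SCOTT.T (run the count as a DBA):

SELECT COUNT(*) AS cached_blocks
FROM   v$bh
WHERE  objd = (SELECT data_object_id FROM dba_objects
               WHERE owner = 'SCOTT' AND object_name = 'T');

DELETE FROM scott.t WHERE id = 1;
-- Re-running the count shows the affected block now in the cache, marked dirty.
-- COMMIT only guarantees the redo is on disk; DBWR writes the dirty block to the
-- datafile later, as part of normal checkpointing.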
Say the database buffer cache is configured as 2MB and my update may need 4MB: will it throw an error message, or will the update happen without any issues?
View 2 Replies View RelatedDoes cache buffer chain latch and buffer busy wait event are related to one any another.
Latch definition from Google says : Latches are simple, low-level serialization mechanisms to protect shared data structures in the system global area (SGA).
What does it mean by "protect"? Does it mean protection from aging as per the LRU algorithm and getting removed from the SGA,
or
protection from other processes, for example from simultaneous DML operations,
or
both?
Does the buffer busy waits event occur because of the cache buffers chains latch?
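They are related but not the same thing: the latch protects the buffer hash chains from concurrent access by other processes (not from LRU aging), while buffer busy waits happen when a session wants a buffer that another session is currently changing or reading in; hot blocks can drive both up. Contention on the latch itself can be checked with a query like:

SELECT name, gets, misses, sleeps
FROM   v$latch
WHERE  name = 'cache buffers chains';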
How can we check the size of the database buffer cache?
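Two quick checks, sketched below (the dynamic-components view exists from 10g onwards):

SQL> show parameter db_cache_size

SELECT component, current_size/1024/1024 AS size_mb
FROM   v$sga_dynamic_components
WHERE  component LIKE '%buffer cache%';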
What would be the best buffer cache hit ratio for a database, for good performance?
Note: in general.
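There is no single best value; figures above 90% are often quoted for OLTP systems, but a high ratio can coexist with poor performance. If you want to compute it anyway, a common sketch is:

SELECT name,
       1 - (physical_reads / (db_block_gets + consistent_gets)) AS hit_ratio
FROM   v$buffer_pool_statistics;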
I want to know exactly what happens in the Oracle architecture when an UPDATE query is fired.
Give me a script to find how much of my database buffer cache and redo log buffer is used and how much is free.
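A rough sketch: V$BH reports the state of every buffer in the cache (status 'free' means the buffer holds no data), and V$SGASTAT shows the memory allocated to the redo log buffer; the log buffer is recycled continuously, so it does not have a meaningful "free" figure.

-- buffer cache usage: buffers by status ('free' = not holding data)
SELECT status, COUNT(*) AS buffers
FROM   v$bh
GROUP  BY status;

-- redo log buffer allocation
SELECT name, bytes
FROM   v$sgastat
WHERE  name = 'log_buffer';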
Database: 9.2.0.7
OS: Windows 2003 Server Standard Edition
RAM: 4 GB
The buffer cache hit ratio on this server is around 83%, whereas it was normally around 98% before I did some maintenance activities.
I have done some maintenance activities in January on this database.
The maintenance activities included the steps below:
1. Deleted old data from the production tables.
2. Reorganized tablespaces and tables.
3. Rebuilt indexes for those tables.
4. Finally, collected statistics for those tables.
Now after this activity the buffer cache hit ratio is very low.
Considering the factors below, I am planning to increase the buffer cache from 256MB to 512MB (a resize sketch follows the wait-event listings below).
1. Buffer cache hit ratio value is around 35% even in the normal period.
2. The "free buffer requested" values during peak and normal hours are shown below.
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
free buffer requested 54,694,995 15,226.9 2,523.7
free buffer requested 23,412,674 6,501.7 2,585.9
3. Most of the top-5 physical-read and logical-read queries are well tuned, and some of the queries are doing full table scans on small tables (row counts from about 1,500 to 35,000), so indexing is not required for these queries. But these queries are executed frequently.
4. Current SGA:
SQL> show sga
Total System Global Area 2148534928 bytes
Fixed Size 731792 bytes
Variable Size 1879048192 bytes
Database Buffers 268435456 bytes
Redo Buffers 319488 bytes
5. Top 5 wait events during slow database performance and high CPU utilization (>80%):
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
latch free 1,848,898 153,793 52.00
buffer busy waits 395,280 87,201 29.49
db file scattered read 3,488,648 34,199 11.56
enqueue 4,052 10,897 3.68
CPU time 5,567 1.88
6. Top 5 wait events during normal activity, when CPU utilization is around 40%:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 1,860 45.32
db file scattered read 1,133,669 985 23.99
imm op 776 605 14.73
sbtinfo2 208 139 3.40
sbtbackup 2 123 3.00
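If, after reviewing the factors above, the cache is to be grown from 256MB to 512MB, a minimal sketch of the change (SCOPE=BOTH assumes an spfile, and the new size must fit within SGA_MAX_SIZE):

ALTER SYSTEM SET db_cache_size = 512M SCOPE = BOTH;

SQL> show sga    -- confirm the new "Database Buffers" value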
1) Does shutting down the DB flush all the data buffers from the buffer cache?
2) In any Oracle version, do we have a way to flush only the buffer cache?
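On 1): yes, a shutdown discards the buffer cache (a clean shutdown writes the dirty blocks out first). On 2): from 10g onwards there is a documented command to flush just the buffer cache, sketched below; it is intended for test systems rather than production:

ALTER SYSTEM FLUSH BUFFER_CACHE;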
I have some confusion about the KEEP pool in the buffer cache.
1. What is the reasoning for placing a table in the KEEP buffer pool? If it is frequently accessed, it will be around when needed anyway (i.e. if it is constantly being accessed it will not age out).
2. Would the table still be in the default pool if the keep pool is not sized and the command ALTER TABLE SCOTT.EMP STORAGE (BUFFER_POOL KEEP) is issued?
3. If the database is restarted, will the table be wiped out of the keep pool and then be pinned to the keep pool again?
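A couple of checks that bear on these questions, sketched with SCOTT.EMP as in the example. If DB_KEEP_CACHE_SIZE is zero, the ALTER TABLE only records the assignment in the dictionary and the blocks keep using the default pool; the assignment itself survives a restart because it is a dictionary attribute, but the cached blocks are gone and must be read in again.

SQL> show parameter db_keep_cache_size

SELECT table_name, buffer_pool
FROM   dba_tables
WHERE  owner = 'SCOTT' AND table_name = 'EMP';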
I want to simulate the latch: cache buffers chains wait event caused by using a nested loop join on lookup tables.
This is what I tried:
-- create parent / child tables
SQL> drop table emp1 purge;
SQL> drop table dept1 purge;
SQL> create table dept1 (dept_id number primary key,
       dept_name char(30));
[Code]....
I traced many queries like the one given below (dept_id between 1 and n, where n varied from 10 to 1000), but they always result in a hash join.
1* select d.dept_name, e.id from sys.dept1 d, sys.emp1 e where d.dept_id = e.dept_id and e.dept_id < 1000
Execution Plan
----------------------------------------------------------
Plan hash value: 619452140
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 998K| 41M| 680 (2)| 00:00:09 |
|* 1 | HASH JOIN | | 998K| 41M| 680 (2)| 00:00:09 |
|* 2 | TABLE ACCESS FULL| DEPT1 | 999 | 34965 | 4 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMP1 | 999K| 8780K| 672 (2)| 00:00:09 |
----------------------------------------------------------------------------
What can I do to get a nested loop join so I can simulate latch: cache buffers chains contention?
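One way to steer the optimizer away from the hash join for this test is to hint the query (this assumes an index on EMP1(DEPT_ID); create one if the elided script did not). Whether this then shows up as cache buffers chains contention depends on concurrency, so the query would normally be run in a loop from several sessions at once. A sketch:

-- CREATE INDEX emp1_dept_idx ON emp1(dept_id);   -- hypothetical, if not already present

SELECT /*+ LEADING(d) USE_NL(e) */ d.dept_name, e.id
FROM   sys.dept1 d, sys.emp1 e
WHERE  d.dept_id = e.dept_id
AND    e.dept_id < 1000;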
If the buffer cache grows to consume most of SGA_TARGET, is it possible that this will cause a performance issue? I am seeing 100% CPU consumption while a single query executes; the query generates monthly report data.
I have two questions:
1) How do I fix the 100% CPU consumption?
2) How do I find the total number of users hitting a specific Oracle schema? (a sketch follows the SGA details below)
Oracle 10.2.0.5 Standard
SGA_TARGET: 14G
SGA_MAX_SIZE: 20G
PGA: 3G
Below are the SGA details:
NAME BYTES/1024/1024
-------------------------------- ---------------
Fixed SGA Size 2.01795197
Redo Buffers 13.9804688
Buffer Cache Size 13632
Shared Pool Size 640
Large Pool Size 16
Java Pool Size 16
Streams Pool Size 16
Granule Size 16
Maximum SGA Size 20480
Startup overhead in Shared Pool 208
Free SGA Memory Available 6144
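For question 2, a rough sketch that counts the sessions currently working against one schema (the schema name is a placeholder):

SELECT COUNT(*) AS sessions
FROM   v$session
WHERE  schemaname = 'APP_SCHEMA'    -- hypothetical schema name
AND    type = 'USER';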
I am currently in the favorable situation in which I have excess amounts of memory available on the database server - a single node setup. The server only serves the single instance and no other processing. Database size is around 2.3tb and memory is 50gb. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data - meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages. If the memory was allocated to the SGA, it's pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both of the caches active seem to be a waste of memory, effectively storing the data twice - once in AIX persistent pages and a second time in Oracle database buffer cache.
Is there any relationship between tuning the BUFFER CACHE and BUFFER BUSY WAITS?
1) Buffer busy waits happen when the user process finds that the same data block is being used by another user in the buffer cache.
2) They also happen when the server process finds that the same data block is being used in the datafile.
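They are related: reducing contention for hot blocks in the buffer cache generally reduces buffer busy waits as well. To see which class of block the waits occur on, a sketch:

SELECT class, count, time
FROM   v$waitstat
ORDER  BY count DESC;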
I have a serious doubt about how the Oracle architecture works. When a user issues an UPDATE statement, the data blocks are brought into the DB buffer cache, but where are the changes to the data blocks made? Is a copy of the data block kept in the DB buffer cache and the changes made to the block in the buffer cache, or is a copy of the data block kept in the undo tablespace and the changes made to the blocks in the undo tablespace?
Simply put: are the changes to the data blocks made in the DB buffer cache or in the undo tablespace?
I learnt that the log writer (LGWR) writes to the redo log files when the redo log buffer is 1/3 full. Does that mean 66% of the redo log buffer is always empty and never used?
If so, isn't that a waste of memory (66% always empty)?
How do I move a partition of a table to the keep cache (DB_KEEP_CACHE_SIZE pool)?
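A single partition can be assigned to the keep pool with ALTER TABLE ... MODIFY PARTITION; the table and partition names below are placeholders, and DB_KEEP_CACHE_SIZE must be non-zero for the keep pool to exist:

ALTER TABLE sales MODIFY PARTITION sales_2011q1
  STORAGE (BUFFER_POOL KEEP);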
I am in charge of several instances located on a Linux (CentOS) server, virtualized in an ESX 3.5 environment.
From time to time (every 4 to 5 days), I get some errors in the alert.log. The last occurrence was last night:
Corrupt block relative dba: 0x01004e12 (file 4, block 19986)
Fractured block found during buffer read
Data in bad block:
type: 6 format: 2 rdba: 0x01004e12
last change scn: 0x0000.131aaa5b seq: 0x2 flg: 0x04
[Code] ........
We do user-managed backups (with BEGIN/END BACKUP) every night at 8PM, ending at approximately 9PM. The fractured blocks never occur during backups. At 1AM the maintenance window opens, which explains the GATHER_STATS_JOB job.
When I check for corruption early in the morning, I am always unable to reproduce the problem. DBV reports no issues. We have never had a problem with the data itself, whether the reported failing block belongs to a table or an index.
I would like to know what could cause these logical corruptions, and how to stop them.
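To get the database's own record of persistent corruption, rather than relying on a one-off DBV run, a sketch (10g and later):

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

-- then, from SQL*Plus, list anything the validation found
SELECT * FROM v$database_block_corruption;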
We are getting negative values for the library cache hit ratio in the AWR report on 11g (11.2.0.3) with Solaris[tm] OE (64-bit). Why does it show a negative value?
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 99.87 Redo NoWait %: 99.99
Buffer Hit %: 92.17 In-memory Sort %: 100.00
Library Hit %: -3,321.23 Soft Parse %: 81.95
Execute to Parse %: 92.88 Latch Hit %: 95.11
Parse CPU to Parse Elapsd %: 87.25 % Non-Parse CPU: 81.39
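A negative Library Hit % is usually an arithmetic artefact of the report (counters wrapping or being reset between the snapshots) rather than a real ratio; the underlying numbers can be checked directly, for example:

SELECT namespace, gets, gethits, gethitratio, reloads
FROM   v$librarycache;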
I have a doubt about invalid objects. What would be the impact on the database and application if there are many invalid objects in the database?
How can I delete all the objects from my database?
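If "all the objects" really means everything owned by one application schema, the usual approach is to drop and recreate that user rather than dropping objects one by one; a destructive sketch with a placeholder user name:

DROP USER app_owner CASCADE;    -- hypothetical user; removes every object it owns
CREATE USER app_owner IDENTIFIED BY some_password;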
I am getting the below error while connecting to SQL*Plus.
SQL*Plus: Release 10.2.0.5.0 - Production on Tue Mar 22 12:47:48 2011
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
ERROR:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'DBMS_OUTPUT.DISABLE' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
[code]....
I executed the scripts below, but that didn't resolve the issue; instead, some of the SYS objects and catproc became invalid...
dbmsotpt.sql
dbmsapin.sql
Now even after re-executing catproc.sql and utlrp, the SYS objects and the catproc status are still INVALID.
I tried to manually compile the SYS objects, but it didn't work.
OWNER SUBSTR(OBJECT_NAME,1,40) OBJECT_TYPE
-------------------- ---------------------------------------- --------------------
SYS DBMS_XPLAN PACKAGE BODY
SYS AQ$AQ_SRVNTFN_TABLE VIEW
SYS DBMS_LOGREP_DEF_PROC PACKAGE
SYS DBMS_LOGREP_DEF_PROC PACKAGE BODY
[code]....
How do I go about making the SYS objects and catproc VALID and resolving the error which I mentioned above?
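A rough recovery sketch, assuming the instance can be restarted in restricted mode: re-run catalog/catproc as SYSDBA, run utlrp, then check what is still invalid and compile individual packages by hand; if SYS objects remain invalid after that, Oracle Support is usually the next step.

SQL> STARTUP RESTRICT
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql

SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';
SELECT comp_id, comp_name, status FROM dba_registry;

ALTER PACKAGE sys.dbms_xplan COMPILE BODY;   -- example of compiling one object by hand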
How do I monitor unusable objects in Oracle 10g (10.2.0.1.0) on RHEL 5? Is this the same as invalid objects, and can they be monitored the same way?
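"Unusable" normally refers to indexes and index partitions (status UNUSABLE), which is different from invalid PL/SQL objects; the two are monitored separately, for example:

SELECT owner, index_name FROM dba_indexes WHERE status = 'UNUSABLE';

SELECT index_owner, index_name, partition_name
FROM   dba_ind_partitions
WHERE  status = 'UNUSABLE';

SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';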
I am trying to shrink a 100GB tablespace which has objects in it.
I tried coalesce, then I tried to shrink it, and I also tried to resize the datafile, with no luck.
The error message says it can't resize.
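A datafile can only be resized down to its last allocated extent, so the usual first step is to find which segments sit at the end of the file and move or shrink those; a sketch with a placeholder file number:

SELECT * FROM (
  SELECT owner, segment_name, segment_type, block_id + blocks - 1 AS last_block
  FROM   dba_extents
  WHERE  file_id = 7              -- hypothetical file id
  ORDER  BY last_block DESC
) WHERE ROWNUM <= 10;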
I am getting the following errors when I try to drop a tablespace.
I already did the following.
a) Took the tablespace and its datafiles offline.
b) Purged DBA_RECYCLEBIN.
SQL> drop tablespace db_maintenance including contents and datafiles;
drop tablespace db_maintenance including contents and datafiles
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-38301: can not perform DDL/DML over objects in Recycle Bin
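ORA-38301 usually means recycle-bin entries that lived in that tablespace still exist and cannot be processed while it is offline. A sketch of the usual sequence, run as SYS:

ALTER TABLESPACE db_maintenance ONLINE;   -- recycle-bin entries cannot be purged while offline
PURGE TABLESPACE db_maintenance;
PURGE DBA_RECYCLEBIN;
DROP TABLESPACE db_maintenance INCLUDING CONTENTS AND DATAFILES;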
There are a lot of invalid objects in DBA_OBJECTS. Do I need to care about / repair them?
SELECT owner, COUNT (*)
FROM dba_objects
WHERE status != 'VALID'
GROUP BY ROLLUP (OWNER)
Output:
OWNER                COUNT(*)
-------------------- --------
ZIM4_RO                    71
ZIM4                       73
PUBLIC                     16
SYS                        35
                          195
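Whether they matter depends on whether the application still references them; objects that are still needed can simply be recompiled. A sketch for one schema, plus the catch-all script:

EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'ZIM4', compile_all => FALSE);

SQL> @?/rdbms/admin/utlrp.sql    -- recompile all invalid objects database-wide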
I was importing one schema from Oracle 10g to 11g using traditional import. I imported as the SYS user, so all the objects were created in the SYS schema. How can I remove these objects and retain only the default SYS objects?
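One rough way to separate the imported objects from the dictionary objects is by creation time, since the default SYS objects date from database creation (patches and upgrades can blur this, so review the output by hand before dropping anything). A sketch:

SELECT object_name, object_type, created
FROM   dba_objects
WHERE  owner = 'SYS'
AND    created > (SELECT created FROM v$database)
ORDER  BY created;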
I heard that the USERS tablespace should not contain any application schema objects.
If the above statement is true, why should it not contain other schema objects?