Query Parsing And Buffer Cache?
Apr 18, 2012
I want to know exactly what happens in the Oracle architecture when an UPDATE statement is issued.
Are the cache buffers chains latch and the buffer busy waits event related to one another?
The latch definition from Google says: "Latches are simple, low-level serialization mechanisms to protect shared data structures in the system global area (SGA)."
What does it mean by "protect"? Does it mean protection from aging out under the LRU algorithm and getting removed from the SGA, or protection from other processes, say for example from simultaneous DML operations, or both?
Does the buffer busy waits event occur because of the cache buffers chains latch?
How can we check the size of the database buffer cache?
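A minimal sketch, assuming a 9i or later database where the dynamic SGA views are available:

-- Current size of the default buffer cache, in MB
SELECT component, current_size/1024/1024 AS size_mb
FROM   v$sga_dynamic_components
WHERE  component = 'DEFAULT buffer cache';

-- or, from SQL*Plus, simply:
SQL> show parameter db_cache_size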
What would be the best buffer cache hit ratio for a database for good performance?
Note: in general.
Give a script to find how much of my database buffer cache and redo log buffer is used and how much is free.
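A hedged sketch of such a script, assuming access to V$BH and V$PARAMETER. Note that a "free" buffer is simply one that has not been used yet, and the redo log buffer is reused circularly, so only its configured size is really meaningful:

-- Buffer cache: used vs free buffers, converted to MB via the block size
SELECT DECODE(status, 'free', 'FREE', 'USED') AS state,
       COUNT(*) AS buffers,
       ROUND(COUNT(*) * (SELECT value FROM v$parameter
                         WHERE name = 'db_block_size') / 1024 / 1024, 1) AS approx_mb
FROM   v$bh
GROUP  BY DECODE(status, 'free', 'FREE', 'USED');

-- Redo log buffer: configured size in KB
SELECT name, value/1024 AS kb
FROM   v$parameter
WHERE  name = 'log_buffer';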
Database: 9.2.0.7
OS: Windows 2003 Server Standard Edition
RAM: 4 GB
The buffer cache hit ratio on this server is around 83%, whereas it was normally around 98% before I did some maintenance activities.
I did some maintenance activities on this database in January. The maintenance activities included the steps below:
1. Deleted old data from the production tables.
2. Reorganized tablespaces and tables.
3. Rebuilt indexes for those tables.
4. Finally, collected statistics for those tables.
Now after this activity the buffer cache hit ratio is very low.
Considering the factors below, I am planning to increase the buffer cache from 256 MB to 512 MB (a resize sketch follows the wait-event listings).
1. The buffer cache hit ratio is around 35% even during normal periods.
2. The "free buffer requested" values during peak and normal hours are shown below.
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
free buffer requested 54,694,995 15,226.9 2,523.7
free buffer requested 23,412,674 6,501.7 2,585.9
3. Most of the top 5 physical-read and logical-read queries are well tuned, and some of the queries are doing full table scans on small tables (table count min 1,500, max 35,000), so indexing is not required for these queries. However, these queries are executed frequently.
4. Current SGA:
SQL> show sga
Total System Global Area 2148534928 bytes
Fixed Size 731792 bytes
Variable Size 1879048192 bytes
Database Buffers 268435456 bytes
Redo Buffers 319488 bytes
5. Top 5 wait events during slow database performance and high CPU utilization (>80%):
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
latch free 1,848,898 153,793 52.00
buffer busy waits 395,280 87,201 29.49
db file scattered read 3,488,648 34,199 11.56
enqueue 4,052 10,897 3.68
CPU time 5,567 1.88
6. Top 5 wait events during normal activity, when CPU utilization is around 40%:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 1,860 45.32
db file scattered read 1,133,669 985 23.99
imm op 776 605 14.73
sbtinfo2 208 139 3.40
sbtbackup 2 123 3.00
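For reference, a minimal sketch of the resize itself, assuming the 9.2 instance uses DB_CACHE_SIZE (dynamic SGA) rather than the old DB_BLOCK_BUFFERS, is started from an spfile, and SGA_MAX_SIZE leaves enough headroom:

-- Grow the default buffer cache from 256 MB to 512 MB
ALTER SYSTEM SET db_cache_size = 512M SCOPE=BOTH;

-- If SGA_MAX_SIZE has no headroom, write it to the spfile and restart instead:
-- ALTER SYSTEM SET db_cache_size = 512M SCOPE=SPFILE;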
1) Does shutting down the database flush all the data buffers from the buffer cache?
2) In any Oracle version, do we have any way to flush only the buffer cache?
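A minimal sketch: the direct flush command exists from 10g onward (in 9i the usual workaround is taking tablespaces offline and online again), and a normal shutdown does discard the cache after checkpointing dirty buffers to the datafiles:

-- 10g and later: flush only the buffer cache
ALTER SYSTEM FLUSH BUFFER_CACHE;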
1) If I issue a DELETE statement to delete a row, will this statement drag any data from the datafile into the database buffer cache? How is the change made by the DELETE statement recorded in the buffer cache? How is this change then applied to the data in the datafiles after commit?
How can I know which objects use the KEEP buffer cache?
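A minimal sketch using the data dictionary: DBA_SEGMENTS records the buffer pool assigned to each segment, and V$BH joined to DBA_OBJECTS could show what is actually cached right now.

-- Segments assigned to the KEEP pool
SELECT owner, segment_name, segment_type
FROM   dba_segments
WHERE  buffer_pool = 'KEEP'
ORDER  BY owner, segment_name;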
I have some confusion about the KEEP pool in the buffer cache.
1. What is the reasoning for placing a table in the KEEP buffer pool? If it is frequently accessed it will be around when needed anyway (i.e. if it is constantly being accessed it will not age out).
2. Would the table still be in the DEFAULT pool if the KEEP pool is not sized and the command ALTER TABLE SCOTT.EMP STORAGE (BUFFER_POOL KEEP) is issued? (A sketch follows this list.)
3. If the database is restarted, will the table be wiped out of the KEEP pool and have to be cached in the KEEP pool again?
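Regarding question 2, a hedged sketch: if DB_KEEP_CACHE_SIZE is zero the KEEP pool does not exist and the table's blocks simply stay in the DEFAULT pool, so the pool is normally sized first (the value below is only an example):

-- Size the KEEP pool, then assign the table to it
ALTER SYSTEM SET db_keep_cache_size = 64M SCOPE=BOTH;   -- example size only
ALTER TABLE scott.emp STORAGE (BUFFER_POOL KEEP);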
I want to simulate the "latch: cache buffers chains" wait event caused by using a nested loop join against lookup tables.
This is what I tried:
-- create parent / child tables
SQL>drop table emp1 purge;
drop table dept1 purge;
create table dept1 (dept_id number primary key,
dept_name char(30));
[Code]....
I traced many queries like the one given below (dept_id between 1 and n, where n varied from 10 to 1000), but they always result in a hash join.
1* select d.dept_name, e.id from sys.dept1 d, sys.emp1 e where d.dept_id = e.dept_id and e.dept_id < 1000
Execution Plan
----------------------------------------------------------
Plan hash value: 619452140
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 998K| 41M| 680 (2)| 00:00:09 |
|* 1 | HASH JOIN | | 998K| 41M| 680 (2)| 00:00:09 |
|* 2 | TABLE ACCESS FULL| DEPT1 | 999 | 34965 | 4 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMP1 | 999K| 8780K| 672 (2)| 00:00:09 |
----------------------------------------------------------------------------
What can I do to get a nested loop join so that I can simulate "latch: cache buffers chains"?
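One hedged way to get a nested loop plan is to index the probe side and hint the join; the index name and hints below are only a sketch against the same dept1/emp1 tables:

-- Index the join column on the probe side so each outer row can do an index lookup
CREATE INDEX emp1_dept_id_idx ON emp1 (dept_id);

-- Force dept1 as the driving table and a nested loop into emp1
SELECT /*+ LEADING(d) USE_NL(e) INDEX(e emp1_dept_id_idx) */
       d.dept_name, e.id
FROM   dept1 d, emp1 e
WHERE  d.dept_id = e.dept_id
AND    e.dept_id < 1000;

Running the hinted query from several concurrent sessions makes the same index root and table blocks hot, which is typically when "latch: cache buffers chains" starts to show up.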
If the SGA buffer cache size consumes the full SGA_TARGET size, is it possible that it will cause any performance issue? I am facing 100% CPU consumption while a single query executes; the query generates monthly report data.
I have two questions:
1) How to fix the 100% CPU consumption?
2) How to find the total number of users hitting a specific Oracle schema (see the sketch after the SGA details)?
Oracle 10.2.0.5 Standard
SGA_TARGET: 14G
SGA_MAX_SIZE: 20G
PGA: 3G
SGA details below:
NAME BYTES/1024/1024
-------------------------------- ---------------
Fixed SGA Size 2.01795197
Redo Buffers 13.9804688
Buffer Cache Size 13632
Shared Pool Size 640
Large Pool Size 16
Java Pool Size 16
Streams Pool Size 16
Granule Size 16
Maximum SGA Size 20480
Startup overhead in Shared Pool 208
Free SGA Memory Available 6144
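For question 2, a minimal sketch that counts connected sessions per user, assuming "users hitting the schema" means sessions logged on as that schema (the schema name is hypothetical):

-- Current sessions connected as a given schema/user
SELECT username, COUNT(*) AS session_count
FROM   v$session
WHERE  username = 'MYSCHEMA'
GROUP  BY username;

V$SESSION only shows what is connected right now; a historical count would need logon auditing (AUDIT SESSION) or similar.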
Say the database buffer cache is configured as 2 MB and my update may need 4 MB: will it throw an error message, or will the update happen perfectly without any issues?
I am currently in the favorable situation in which I have excess amounts of memory available on the database server - a single-node setup. The server only serves the single instance and no other processing. The database size is around 2.3 TB and memory is 50 GB. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data - meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is if process memory is needed, the file system cache gives up pages. If the memory was allocated to the SGA, its pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both of the caches active seem to be a waste of memory, effectively storing the data twice - once in AIX persistent pages and a second time in Oracle database buffer cache.
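If the goal is to stop caching the same blocks twice, one hedged option (rather than only growing the SGA) is to let Oracle bypass the file system cache with direct I/O and then give the reclaimed memory to the buffer cache; whether this helps depends on the filesystem (JFS2 supports direct/concurrent I/O) and should be tested first:

-- Ask Oracle to use direct and asynchronous I/O where the platform supports it
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
-- restart the instance, then grow the cache with the memory AIX no longer needs,
-- e.g. ALTER SYSTEM SET db_cache_size = ... / sga_target = ...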
Is there any relationship between tuning the BUFFER CACHE and BUFFER BUSY WAITS?
1) Buffer busy waits happen when the user process finds that the same data block is being used by another user in the BUFFER CACHE.
2) They also happen when the server process finds that the same data block is being used in the datafile.
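A small diagnostic sketch that can help relate the two: V$WAITSTAT breaks buffer busy waits down by block class, which points at whether the contention is on data blocks, undo blocks or segment headers:

-- Which block class the buffer busy waits are occurring on
SELECT class, count AS waits, time
FROM   v$waitstat
ORDER  BY time DESC;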
I have a serious doubt about Oracle architecture functionality. When a user issues an UPDATE statement, the data blocks are brought into the database buffer cache, but where are the changes to the data blocks made? Is a copy of the data block kept in the buffer cache and the changes made to the block in the buffer cache, or is a copy of the data block kept in the undo tablespace and the changes made to the blocks in the undo tablespace?
Simply put: are the changes to the data blocks made in the database buffer cache or in the undo tablespace?
I have a query which is using literals:
strquery:='SELECT SUMTOTAL FROM tab1 WHERE BATCHNO = '''
|| gBNo
|| ''' AND A_ID = '''
|| g_id
|| ''' AND L_ID = '''
|| g_LId
|| ''' AND S_Code = ''C_3'' ';
execute immediate strquery;

I have been asked to use bind variables to avoid hard parsing. How do I do it?
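A minimal sketch of the same statement with bind variables, assuming SUMTOTAL is a single value so the dynamic SELECT needs an INTO clause (v_sumtotal is a hypothetical variable):

strquery := 'SELECT sumtotal FROM tab1'
         || ' WHERE batchno = :b_batchno'
         || '   AND a_id    = :b_aid'
         || '   AND l_id    = :b_lid'
         || '   AND s_code  = ''C_3''';

EXECUTE IMMEDIATE strquery
   INTO v_sumtotal
  USING gBNo, g_id, g_LId;

The literal values are passed through the USING clause, so the statement text stays constant and is hard parsed only once.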
What is the difference between Cache Fusion and Cache Coherency? Are they the same or different functionality?
In Oracle 11gR2, I created a replica of the HR.EMPLOYEES table and executed the following statement (although using the SUM() function is not logical in this case, it is just to test the result).
STEP - 1
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
-------------------------------------------------------------------------------------------------------
202 Pat Fay 6000
201 Michael Hartstein 13000
Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 130 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 2 | 130 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMPLOYEES_COPY | 2 | 130 | 3 (0)| 00:00:01 |
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
690 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
STEP - 2
INSERT INTO HR.employees_copy
VALUES(200, 'Dummy', 'User','Dummy.User@email.com',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
STEP - 3
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
--------------------------------------------------------------------------------------------------
202 Pat Fay 6000
201 Michael Hartstein 13000
200 Dummy User 5000
Elapsed: 00:00:00.03
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 195 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 3 | 195 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMPLOYEES_COPY | 3 | 195 | 3 (0)| 00:00:01 |
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
714 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
In the execution plan of STEP 3, the RESULT CACHE operation is shown against Id 1, which suggests the result was retrieved directly from the result cache. Does this mean that the Oracle server incrementally retrieved the result set?
Before STEP 2 was executed, the cache contained only 2 records. Then 1 record was inserted, and after STEP 3 a total of 3 records was returned. Does this mean that the newly inserted row was retrieved from the database and merged into the cached result of STEP 1?
If the Oracle server incrementally retrieved and merged the newly inserted record, what mechanism does Oracle use to do so?
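As far as I know it is not an incremental merge: the INSERT in STEP 2 invalidates the cached result for that table, and STEP 3 re-executes the query (hence the 4 consistent gets) and publishes a fresh result set into the cache; the RESULT CACHE row appears in the plan whether the result is read from the cache or rebuilt into it. A hedged way to watch this happening:

-- Status of cached SQL results and how often they were invalidated
SELECT name, status, invalidations, scan_count
FROM   v$result_cache_objects
WHERE  type = 'Result';

-- Overall result cache statistics
SELECT * FROM v$result_cache_statistics;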
I have some questions.
TTversion : TimesTen Release 11.2.2.3.0 (64 bit Linux/x86_64) (tt112230:53376) 2012-05-24T09:20:08Z
We are testing an AWT cache group (with CacheAwtParallelism=4).
An application (1 process) generates DML against TimesTen (DSN=TEST).
At this point, is the DML delivered across the 4 parallel apply threads?
[TEST]
Driver=/home/TimesTen/tt112230/lib/libtten.so
DataStore=/home/TimesTen/DataStore/TEST/test
PermSize=1024
TempSize=512
PLSQL=1
[code].......
I want to pass a very long query (which includes a lot of column names, tables, joins, unions, WHERE clauses, etc., and whose length is more than 120,000 characters, well beyond 32,767) to a ref cursor. The query is stored in a LONG variable V_QRY in a stored procedure, and I am opening the ref cursor like this:
OPEN P_RPT_TEST FOR V_QRY;
At run time it gives a "string buffer too small" error.
Show an example of using a string buffer for the SELECT statement.
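A hedged sketch, assuming Oracle 11g or later, where native dynamic SQL (EXECUTE IMMEDIATE and OPEN ... FOR) accepts a CLOB, so the statement text is no longer limited to 32,767 bytes; the query shown is only a stand-in for the real one:

DECLARE
  v_qry      CLOB;
  p_rpt_test SYS_REFCURSOR;
  v_val      NUMBER;
BEGIN
  -- Build the long statement piece by piece; concatenating onto the CLOB keeps it a CLOB
  v_qry := 'SELECT 1 AS sumtotal FROM dual';
  v_qry := v_qry || ' UNION ALL SELECT 2 FROM dual';
  -- In 11g+ a CLOB is accepted here, so the 32,767-byte VARCHAR2 limit no longer applies
  OPEN p_rpt_test FOR v_qry;
  FETCH p_rpt_test INTO v_val;
  CLOSE p_rpt_test;
END;
/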
View 1 Replies View RelatedI intend to write procedures/functions in PL/SQL to parse Name field into various fields : Title, FirstName, MiddleName, LastName, Gender.
E.g:
Nguyen Van A
Nguyen Thi B
After parsing, the result will be shown as:
Title  FirstName  MiddleName  LastName  Gender
Mr     Nguyen     Van         A         Male
Ms     Nguyen     Thi         B         Female
I suppose that Title & Gender are derived from the MiddleName field. If MiddleName's value is in (Thi, Dieu) then Title is assigned as Ms and Gender = "F"; otherwise Title = "Mr" and Gender = "M".
2/ Another procedure/function is ParseAddress, with the requirement that the Address field is divided into Street, Group, Area, Ward, County fields.
E.g.: No 6 Sum Street - Group 8 - Area 2 - ABCD Ward - London
The result:
Street           Group    Area    Ward  County
No 6 Sum Street  Group 8  Area 2  ABCD  London
I have tried coding this in Visual Basic and it works, but when I translate it to PL/SQL it doesn't work.
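A hedged PL/SQL sketch of the name-parsing part, assuming every name is three whitespace-separated words in the order FirstName MiddleName LastName and that the middle word alone decides title and gender as described above; the procedure name and the REGEXP functions (10g or later) are illustrative:

CREATE OR REPLACE PROCEDURE parse_name (
    p_name        IN  VARCHAR2,
    p_title       OUT VARCHAR2,
    p_first_name  OUT VARCHAR2,
    p_middle_name OUT VARCHAR2,
    p_last_name   OUT VARCHAR2,
    p_gender      OUT VARCHAR2
) AS
  v_name VARCHAR2(200) := TRIM(p_name);
BEGIN
  p_first_name  := REGEXP_SUBSTR(v_name, '[^[:space:]]+', 1, 1);      -- first word
  p_last_name   := REGEXP_SUBSTR(v_name, '[^[:space:]]+$');           -- last word
  p_middle_name := TRIM(REGEXP_REPLACE(v_name,
                        '^[^[:space:]]+|[^[:space:]]+$', ''));        -- what is left in between

  IF p_middle_name IN ('Thi', 'Dieu') THEN
    p_title  := 'Ms';
    p_gender := 'F';
  ELSE
    p_title  := 'Mr';
    p_gender := 'M';
  END IF;
END parse_name;
/

The ParseAddress part could follow the same pattern, using REGEXP_SUBSTR(address, '[^-]+', 1, n) with a TRIM around each piece, since the sample address uses " - " as the delimiter.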
I have the following table A that contains one column, "MYREC":
MYREC
1253 69889897 89884 891254
568989 89897891 321 698751232
1239892 123358798 7899 58123457
I need to parse this variable-length column; the issue is the number of spaces. The string might start with 2 or more white spaces, which I can get rid of with the LTRIM function. I'm having difficulties with the rest.
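A hedged sketch using REGEXP_SUBSTR (10g or later) to pull out each whitespace-separated token regardless of how many spaces precede or separate them; the column aliases are illustrative:

SELECT myrec,
       REGEXP_SUBSTR(myrec, '[^[:space:]]+', 1, 1) AS val1,
       REGEXP_SUBSTR(myrec, '[^[:space:]]+', 1, 2) AS val2,
       REGEXP_SUBSTR(myrec, '[^[:space:]]+', 1, 3) AS val3,
       REGEXP_SUBSTR(myrec, '[^[:space:]]+', 1, 4) AS val4
FROM   a;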
Oracle PL/SQL: I have almost finished this XML parsing task but there is one problem. Our table has more than 70-80 columns, and because of that I don't want to hard-code the column names in my procedure; if I do, the procedure size (lines of code) will increase unnecessarily. Here is our procedure:
Create or replace procedure loadMyXML(dir_name IN varchar2, xmlfile IN varchar2) AS
l_bfile BFILE;
l_clob CLOB;
l_parser dbms_xmlparser.Parser;
l_doc dbms_xmldom.DOMDocument;
l_nl1 dbms_xmldom.DOMNodeList;
l_nl2 dbms_xmldom.DOMNodeList;
l_n dbms_xmldom.DOMNode;
node1 dbms_xmldom.DOMNode;
l_colName VARCHAR2(100);
[code]...
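A hedged, self-contained sketch of the idea: walk the child elements of each record node and use the element names themselves as the column names, so nothing has to be hard-coded; the XML literal below is only a placeholder for the document loadMyXML already parses:

DECLARE
  l_doc   dbms_xmldom.DOMDocument;
  l_nodes dbms_xmldom.DOMNodeList;
  l_node  dbms_xmldom.DOMNode;
  l_name  VARCHAR2(100);
  l_value VARCHAR2(4000);
BEGIN
  -- Placeholder document; in loadMyXML, l_doc already comes from the parsed file
  l_doc   := dbms_xmldom.newDOMDocument(
               xmltype('<ROW><EMP_ID>1</EMP_ID><EMP_NAME>Scott</EMP_NAME></ROW>'));
  l_nodes := dbms_xslprocessor.selectNodes(dbms_xmldom.makeNode(l_doc), '/ROW/*');

  FOR i IN 0 .. dbms_xmldom.getLength(l_nodes) - 1 LOOP
    l_node := dbms_xmldom.item(l_nodes, i);
    l_name := dbms_xmldom.getNodeName(l_node);        -- element name = column name
    dbms_xslprocessor.valueOf(l_node, '.', l_value);  -- element text  = column value
    dbms_output.put_line(l_name || ' = ' || l_value);
    -- in the real loader: append l_name to a column list and l_value to a values
    -- list, then run one EXECUTE IMMEDIATE 'INSERT INTO ...' per record node
  END LOOP;
END;
/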
I have some BLOB contents in the format of...
DK99F17,AA,032820130840,Other
ABCD,AA,032820130840,OV
AAZ123,BC,032820130932,DWL
CBA12345,ZA,032820130939,Other
Each BLOB is associated with a file name in the format...
03282013100002_thisfile.txt
The BLOB for each file may be zero rows to n rows in size, but typically there are 2 to 5 rows (four rows were shown above). The following kind of gets me there, but not quite, as it splits the BLOB rows at the comma and not at the line break (HEX 0D0A / CRLF).
with rec as
(select fs.file_name, utl_raw.cast_to_varchar2(fs.file_data) file_data
from tada.files_store fs
where fs.file_name like '%citations.txt'
and trunc(fs.date_created) = to_date('26-MAR-2013','DD-MON-YYYY'))
[code].....
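A hedged sketch that splits the converted text on line breaks instead of commas, assuming 11g for REGEXP_COUNT and that the cast text fits in a VARCHAR2; the character class excludes both CR and LF, so each CRLF-terminated line comes back as one occurrence:

WITH rec AS (
  SELECT fs.file_name,
         utl_raw.cast_to_varchar2(fs.file_data) AS file_data
  FROM   tada.files_store fs
  WHERE  fs.file_name LIKE '%citations.txt'
  AND    TRUNC(fs.date_created) = TO_DATE('26-MAR-2013','DD-MON-YYYY')
)
SELECT r.file_name,
       n.column_value AS line_no,
       REGEXP_SUBSTR(r.file_data,
                     '[^' || CHR(13) || CHR(10) || ']+', 1, n.column_value) AS line_text
FROM   rec r,
       TABLE(CAST(MULTISET(
               SELECT LEVEL FROM dual
               CONNECT BY LEVEL <= REGEXP_COUNT(r.file_data, CHR(10)) + 1
             ) AS sys.odcinumberlist)) n
WHERE  REGEXP_SUBSTR(r.file_data,
                     '[^' || CHR(13) || CHR(10) || ']+', 1, n.column_value) IS NOT NULL;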
I want to load XML into a base table using a PL/SQL procedure. I have written a procedure for that, but it does not work well.
Identical statements, from this link: Parsing in Oracle — DatabaseJournal.com: "d. The bind variable types of the new statement should be of the same type as the identified matching statement." I am getting confused here: when parsing occurs, some links talk about bind variables, but the official documentation never mentions bind variables.