Performance Tuning :: PLSQL And Related Cursors Generated In V$sql
Jan 26, 2012
I want to know how to find the links between the cursors that appear in v$sql after compiling a PL/SQL procedure.
For example, if I compile the following procedure:
create or replace
PROCEDURE PROCEDURE1
AS
BEGIN
insert into t values(1,1);
END PROCEDURE1;
then the following two cursors are created in v$sql:
select SQL_ID, SQL_TEXT from v$sql
where sql_id in ('71pj8t5nz1d80','2s567zb6684sh');

SQL_ID          SQL_TEXT
2s567zb6684sh   BEGIN PROCEDURE1; END;
71pj8t5nz1d80   INSERT INTO T VALUES(1,1)
Thus, I would like to know how to find out that cursor 71pj8t5nz1d80 is called by (linked to) cursor 2s567zb6684sh.
This would be useful for interpreting SQL statistics in v$sqlstats, where both such cursors appear and the related statistics are effectively counted twice.
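A minimal sketch of one way to expose this link, assuming a release that populates these columns: v$sql carries PROGRAM_ID and PROGRAM_LINE#, which identify the PL/SQL unit (and line) that issued the recursive statement. The object name PROCEDURE1 is taken from the example above.
select s.sql_id, s.sql_text, o.object_name, s.program_line#
from v$sql s, dba_objects o
where s.program_id = o.object_id
and o.object_name = 'PROCEDURE1';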
View 1 Replies
Feb 11, 2011
I have a few questions on PGA memory management. Since these questions are based on 2-3 examples that are not exactly the same by nature, I am summarising them along with my understanding.
As I understand it, many work areas can be allocated to a single SQL statement, and the number and sizes of these work areas are controlled internally by Oracle when automatic PGA memory management is in effect (PGA_AGGREGATE_TARGET is set and WORKAREA_SIZE_POLICY=AUTO). Since many sessions share the PGA memory, the amount of memory available to each session may vary, and if not enough memory is available to a session for sorting, the TEMP tablespace is used.
[1] Can we say paging happens and can be checked at this time?
[2] Is there a difference in handling memory while populating pl/sql tables?
I have encountered ORA-04030 while some of our developers were populating PL/SQL tables, but never for sorting, hash joins, etc. Though I don't remember the width of the PL/SQL table, I am sure the developer used the LIMIT clause during BULK COLLECT and still faced the issue.
With a single session on the server, I noticed that the values displayed by the 'free' command in Linux and the output values from v$sesstat did not match at all, even though there wasn't any heavy OS process involved during the period. I was expecting the 'used' and 'free' values displayed by the free command to change, and the difference to be approximately equal to the before and after values of 'session pga memory'.
[3] Isn't it expected to match?
[4] Can we say that, in a dedicated server configuration, at any moment in time, the SUM of 'session pga memory' represents all the memory used by the Oracle SGA at that point in time?
select sum(value)/1024/1024 "memory in MB" from v$sesstat where statistic#=20;
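As a side note on the query above: statistic# values can differ between versions, so a safer sketch resolves the statistic by name (the value 20 above is assumed to correspond to 'session pga memory'):
select sum(ss.value)/1024/1024 "memory in MB"
from v$sesstat ss, v$statname sn
where ss.statistic# = sn.statistic#
and sn.name = 'session pga memory';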
During one of the tests I got the following output (I divide the value by 10 for visibility and to avoid formatting issues):
SQL> select a.name, to_char(b.value/10, '999,999,999') value
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name like '%ga memory%';
[code]...
The above query is still showing these values even though the PL/SQL block completed 30 minutes ago.
[5] Do we call this a 'memory leak', where memory is not released even though some time has passed since the session last did anything? Of course I am not checking at the OS level, since, as mentioned in question [3] above, the values won't match!
Still, here is the output of the free command for reference (after the PL/SQL block executed):
SQL> !free
             total       used       free     shared    buffers     cached
Mem:       3016796    2999660      17136          0       4308    1173260
-/+ buffers/cache:    1822092    1194704
Swap:      1048568     636124     412444
--(After the pl/sql block executed)
SQL> select * from v$pgastat;
NAME                                    VALUE  UNIT
aggregate PGA target parameter      524288000  bytes
aggregate PGA auto target           456256512  bytes
global memory bound                  26214400  bytes
total PGA inuse                      17328128  bytes
[code]...
[6] What could be the significance of negative values of 'session pga memory/max'?
Lastly, we have an OLTP system, and at night we run batch processes in 2-4 sessions.
Suppose I have 10 GB RAM and a PGA setting of 3.5 GB. Now I want the batch process sessions to use the maximum possible memory during the night and to toggle the setting back in the morning.
[7] With the above settings (10 GB RAM and 3.5 GB PGA), how can I divide the memory among the 4 sessions?
Shall I set 1) PGA_AGGREGATE_TARGET=0, 2) WORKAREA_SIZE_POLICY=MANUAL, 3) SORT_AREA_SIZE, 4) HASH_AREA_SIZE?
[8] What would be the approximate values for parameters 3 and 4? Would it simply be 3.5 GB / 4?
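A minimal sketch of what switching a batch session to manual workarea sizing could look like. The 800 MB figures are only illustrative assumptions, not recommendations, and note that SORT_AREA_SIZE / HASH_AREA_SIZE apply per workarea, not per session:
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 838860800;   -- ~800 MB per sort workarea (assumed value)
alter session set hash_area_size = 838860800;   -- ~800 MB per hash workarea (assumed value)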
View 8 Replies
View Related
Jun 12, 2012
Our production DB version is 11.2.0.1 and we use DBArtisan. Every morning we check tablespace usage with that tool, and it generates the query listed below. It used to run in 15 seconds, but for the past week it has been running for 5-8 minutes. I have updated the statistics on the SYS objects and the tuning advisor created an execution plan, so it now runs in 2 minutes. Nothing has been changed in the DB configuration, and I see the same query running in seconds in the dev environment. I want to understand what the issue might be and how I could improve the performance of this query.
SELECT SUB.TABLESPACE_NAME, SUB.STATUS, SUB.EXTENT_MANAGEMENT, SUB.SEGMENT_SPACE_MANAGEMENT, SUB.TOTAL_SPACE_MB, SUB.USED_SPACE_MB, SUB.FREE_SPACE_MB, SUB.PERCENT_FREE_SPACE, SUB.CONTENTS, SUB.TABLESPACE_GROUP, CASE WHEN SUB.TABLESPACE_NAME = P.VALUE AND SUB.CONTENTS = 'UNDO' THEN 'YES' ELSE 'NO' END
[code]...
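For reference, a sketch of the kind of statistics refresh mentioned above (dictionary and fixed-object statistics, which a dictionary-based tablespace query depends on); run it as a privileged user, and treat it as an assumption about what 'updated the statistics on sys objects' involved:
begin
  dbms_stats.gather_dictionary_stats;     -- SYS/SYSTEM dictionary objects
  dbms_stats.gather_fixed_objects_stats;  -- X$ fixed tables behind the v$ views
end;
/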
View 5 Replies
View Related
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database, i.e. the physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle supplied) for these types of tuning scenarios?
View 1 Replies
View Related
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which runs for 13 hours to update the records; that is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
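One hedged thing worth checking: a PARALLEL hint on a MERGE parallelises the DML part only if parallel DML has been enabled in the session, so the surrounding pattern could look like this sketch:
alter session enable parallel dml;   -- without this, the hint parallelises only the query side of the MERGE

-- ... run the MERGE statement shown above ...

select * from v$pq_sesstat;          -- "DML Parallelized" should be non-zero afterwards
commit;                              -- parallel DML requires a commit before the table is read again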
View 39 Replies
View Related
Sep 30, 2010
How does the width of a column affect index performance?
For example, suppose I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if, in column "job", I stored not names but numbers identifying the job names?
E.g. I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
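A minimal sketch of the two variants being compared (the lookup table and all names below are hypothetical). The narrower numeric key mainly shrinks each index entry, so more entries fit per leaf block:
-- variant 1: job name stored directly in the IOT key
create table emp_iot (
  id    number,
  job   varchar2(20),
  time  date,
  plan  number,
  constraint emp_iot_pk primary key (id, job, time)
) organization index;

-- variant 2: narrow numeric job code plus a small lookup table
create table job_lookup (
  job_id   number       primary key,
  job_name varchar2(20) not null
);

create table emp_iot2 (
  id     number,
  job_id number references job_lookup,
  time   date,
  plan   number,
  constraint emp_iot2_pk primary key (id, job_id, time)
) organization index;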
View 24 Replies
View Related
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are spread across many extents, the scan process takes a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
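One commonly used sketch (an approximation that assumes table statistics are current and an 8K block size) compares the blocks a table occupies below its high-water mark with the blocks its rows should roughly need; a large gap suggests sparsely filled blocks:
select table_name,
       blocks                                blocks_below_hwm,
       ceil(num_rows * avg_row_len / 8192)   estimated_needed_blocks,   -- assumes 8K blocks, ignores PCTFREE/overhead
       round(blocks / nullif(ceil(num_rows * avg_row_len / 8192), 0), 2) bloat_ratio
from   user_tables
where  num_rows > 0
order  by 4 desc nulls last;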
View 2 Replies
View Related
Oct 20, 2010
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten the name and can't recall it. What is this type of row-reduction optimization called?
View 6 Replies
View Related
Jun 16, 2011
How many records could I have in a single table without performance degradation, on Standard Edition without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Are 300 million rows in a single table, with 500K transactions/day, too much?
Simple database with simple schema.
How many records begin to be too many?
View 2 Replies
View Related
Nov 15, 2010
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason why we'd have to re-install a procedure to correct a performance problem?
View 13 Replies
View Related
Apr 12, 2013
I need to check the package performance and need to improve the package performance.
1. How can I check the package performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I observed that the DELETE takes a long time to remove all the records in the table (about 7,000,000 rows). This table is like a staging table; we need to clean the data daily before inserting new data into it. What can I use instead of DELETE?
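A hedged sketch of the alternative usually suggested for full purges of a staging table, assuming the purge never needs to be rolled back and no enabled foreign keys reference the table (TRUNCATE is DDL, so it commits):
truncate table staging_table;   -- table name is a placeholder; removes all rows without per-row undo/redo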
View 13 Replies
View Related
Aug 9, 2010
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and once the desired execution plan is achieved we can adjust the 'statistics' so that the optimizer follows that plan without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the join taking place is a nested loop.
Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q 3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
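For illustration only, a sketch of both approaches mentioned above: the hinted form, and overriding table-level statistics with DBMS_STATS.SET_TABLE_STATS so the optimizer perceives different table sizes (the numbers are arbitrary assumptions, and manufactured statistics bring their own risks):
-- hinted form: dept drives, hash join
select /*+ leading(d) use_hash(e) */ e.empid, e.ename, d.dname
from dept d, emp e
where e.deptno = d.deptno;

-- statistics-based form: make emp look large and dept look small
begin
  dbms_stats.set_table_stats(ownname => user, tabname => 'EMP',
                             numrows => 1000000, numblks => 20000);
  dbms_stats.set_table_stats(ownname => user, tabname => 'DEPT',
                             numrows => 4, numblks => 1);
end;
/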
View 6 Replies
View Related
Dec 6, 2011
I have an issue with export (expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
View 1 Replies
View Related
Oct 17, 2011
The following query gets input parameters from the front-end application, which the user queries to get reports. There are many drop-down boxes, like LOB, FAMILY, BRAND, etc. The user may or may not select values from the drop-down boxes.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching rows from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.
To get this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string (here V_SELLOB), and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
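For comparison, a hedged sketch of the rewrite the colleague may have had in mind: replacing the DECODE with an OR so the optimizer can recognise the 'no filter' case. Column and bind names are taken from the fragment above (LOB_DESC is assumed to be a column of ITEMS), and the comma-separated multi-value case would still need separate handling:
open v_refcursor for
  select /*+ full(a) parallel(a, 5) */ *
  from   items a
  where  a.sku_status = 'A'
  and    (v_sellob = 'DEFAULT' or a.lob_desc = v_sellob);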
View 9 Replies
View Related
Sep 3, 2010
What are the principal things to look at when we get different performance results for the same query? I have 2 different databases: the plan and the data are the same, but the performance results are very different.
View 10 Replies
View Related
Mar 17, 2012
What are the most important performance metrics we have to measure or take into account to preserve or increase DB performance, in terms of response times and so on?
View 8 Replies
View Related
Mar 27, 2013
I am working on an assignment where the client is using Oracle 10g but is stuck on the RBO. The application team builds dynamic queries from the GUI available to them, and some of them run very slowly.
The code cannot be changed to tune the queries, nor do we get the exact costly step in the plan, which is an issue (being RBO). For some long-running queries the tuning advisor is not producing any recommendations.
Another hurdle is that all the application users use the same application user ID, so we cannot write a logon trigger to switch to the CBO for some particular queries to see what is happening in the background!
View 11 Replies
View Related
Jul 4, 2012
I want to tune the following SQL statement. In this SQL I want to get the hash_value and sql_text of the statements that are causing TX locks. Is that possible? This statement works fine, but sometimes it's slow.
SELECT DISTINCT hash_value,
sql_text
FROM gv$sql sq
WHERE hash_value IN (SELECT DISTINCT prev_hash_value
FROM gv$session se
WHERE sid IN (SELECT sid
FROM gv$lock l
WHERE type = 'TX'
AND ctime >= 2000
AND l.inst_id = se.inst_id
AND l.sid = se.sid)
AND sq.inst_id = se.inst_id);
[code]....
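A hedged alternative sketch that avoids scanning gv$sql by prev_hash_value for every session: since 10g, gv$session exposes BLOCKING_SESSION / BLOCKING_INSTANCE, so the blockers can be located directly and only their cursors fetched (this returns the blocker's current or previous statement, which is not always the statement that took the TX lock):
select distinct sq.hash_value, sq.sql_text
from   gv$session bl, gv$sql sq
where  (bl.inst_id, bl.sid) in (select w.blocking_instance, w.blocking_session
                                from   gv$session w
                                where  w.blocking_session is not null)
and    sq.inst_id    = bl.inst_id
and    sq.hash_value = decode(bl.sql_hash_value, 0, bl.prev_hash_value, bl.sql_hash_value);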
View 7 Replies
View Related
Nov 2, 2012
I see one of my SQLs, which is run by a user on a 10.2.0.3 database, changing its SQL_ID after some runs even though the query has not changed a bit! However, the HASH_VALUE for this query remains the same.
How can the same query have different SQL_IDs but the same HASH_VALUE?
Note: Statistics are not modified on the base tables of this query.
View 10 Replies
View Related
Aug 10, 2011
I am running Oracle 10.2.0.1.0 on MS Windows 2003 server 64-bit with 16G RAM.
Here is the findings for my Oracle database.
SQL> select * from v$sgainfo;
NAME BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size 1293560 No
Redo Buffers 7094272 No
Buffer Cache Size 830472192 Yes
[code]...
I find that the SGA component "Buffer Cache" has decreased from 1.8G at the start down to 0.8G now. On the other hand, the component "Shared Pool" has increased from 0.3G at the start to 1.2G now. I noticed about 100 shrink operations on the "Buffer Cache" and grow operations on the "Shared Pool" in Oracle every day. Is this an indicator that I should raise SGA_MAX_SIZE?
I tried to increase SGA_MAX_SIZE to 4G, but I could not start Oracle afterwards. Is this a limitation of MS Windows (the OS) or of Oracle? I set SGA_MAX_SIZE to 3G, and this time I could start Oracle. What is the optimum/maximum value I can set SGA_MAX_SIZE to? Is there any adverse effect or concern when setting SGA_MAX_SIZE to more than 2G?
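A sketch of how the resize activity itself can be inspected (assuming 10.2's automatic shared memory management views), which may help decide whether the shared pool growth is steady or driven by bursts of hard parsing:
-- current sizes of the auto-tuned SGA components
select component, current_size/1024/1024 mb
from v$sga_dynamic_components;

-- history of grow/shrink operations
select component, oper_type,
       initial_size/1024/1024 initial_mb,
       final_size/1024/1024   final_mb,
       end_time
from v$sga_resize_ops
order by end_time;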
View 6 Replies
View Related
Jul 11, 2012
Here I have a three-tier application. I want to find the host name from the SID or SQL_ID; that is, I want to know which query runs from which host, because there is only one application user connecting from the application to the database. So I want to know which query consumes more time on which host.
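A minimal sketch of the kind of lookup this implies, joining sessions to their current cursors. With a three-tier connection pool, MACHINE/PROGRAM usually identify the application server rather than the end user's PC, so treat this as a partial answer:
select s.sid, s.machine, s.program, s.module,
       q.sql_id, q.sql_text, q.elapsed_time/1e6 elapsed_seconds
from   v$session s, v$sql q
where  s.sql_id = q.sql_id
and    s.type   = 'USER'
order  by q.elapsed_time desc;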
View 19 Replies
View Related
Mar 17, 2006
How can I disable redo generation for DML statements?
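For context, a hedged sketch of the closest supported approach: conventional DML always generates redo, but a direct-path insert into a NOLOGGING table can skip most of it, at the cost of recoverability (and FORCE LOGGING at the database level overrides NOLOGGING). Table names are placeholders:
alter table target_tab nologging;

insert /*+ append */ into target_tab        -- direct-path insert: minimal redo when the table is NOLOGGING
select * from source_tab;
commit;

alter table target_tab logging;             -- restore normal logging afterwards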
View 25 Replies
View Related
Jul 26, 2011
I am getting 'enq: TS - contention' as the top event in the AWR report. Where can I look in the database to fix this issue?
View 14 Replies
View Related
Aug 14, 2010
I want to run some OLTP benchmarks on my system. I have looked up the TPC-E benchmarking suite, but the documentation on the site makes no sense to me.
View 1 Replies
View Related
May 20, 2010
My DB is oracle10g.
I have AWR report comparison for two different days. I want to find the below things.
1. Which day has better performance?
2. What are the top two findings on the report.
I attached the report.
Here are my answers. Please correct me if I am wrong.
1. Which day has better performance? The second day has the higher load, since the redo size is very high.
2. What are the top two findings on the report?
a) Comparing the two days, the first day shows a little more I/O wait for single-block reads.
b) Comparing the two days, the second day uses more CPU.
However, which of the two days is best?
View 5 Replies
View Related
Jul 12, 2013
I have installed a database on one server and would like to enable AWR on it. STATISTICS_LEVEL is set to TYPICAL. While running the script below to enable AWR, it gives an error:
SQL> exec dbms_scheduler.enable('GATHER_STATS_JOBS');
BEGIN dbms_scheduler.enable('GATHER_STATS_JOBS'); END;
*
ERROR at line 1:
ORA-27476: "SYS.GATHER_STATS_JOBS" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4343
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2802
ORA-06512: at line 1
How can I make AWR snapshot generation automatic?
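For reference, a hedged note: the scheduler job named in the error is the optimizer-statistics job (and the name as typed does not exist), which is unrelated to AWR in any case. Automatic AWR snapshots are controlled through DBMS_WORKLOAD_REPOSITORY, as in this sketch with example values:
begin
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 60,            -- take a snapshot every 60 minutes
    retention => 7 * 24 * 60);  -- keep 7 days of snapshots (value is in minutes)
end;
/

exec dbms_workload_repository.create_snapshot;   -- take one snapshot on demand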
View 3 Replies
View Related
Aug 31, 2010
In my AWR reports I have 'CPU time' (in seconds) as one of the top 5 timed events. Does that mean Oracle is waiting for my CPU to execute the statements? But my CPU usage is always 0-10%.
View 4 Replies
View Related
Jun 7, 2012
How can I reduce the clustering_factor value that appears in the user_indexes view?
In my table, the clustering factor value is quite high:
SQL> SELECT UI.clustering_factor,UI.num_rows,UI.index_type,UI.distinct_keys FROM USER_INDEXES UI WHERE UI.table_name = 'TAWB_AWB';
CLUSTERING_FACTOR NUM_ROWS INDEX_TYPE DISTINCT_KEYS
----------------- ---------- --------------------------- -------------
83609 187603 NORMAL 187603
and its number of blocks is 5063:
SQL> SELECT COUNT(DISTINCT DBMS_ROWID.rowid_block_number(ROWID)) BLOCK_NUM FROM TAWB_AWB A;
BLOCK_NUM
----------
5063
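A hedged sketch of how a clustering factor is usually lowered in practice: rebuild the table so the rows are physically stored in index-key order. The key column AWB_KEY is an assumption (substitute the real indexed column), the index must be recreated against the new table, and statistics regathered:
create table tawb_awb_sorted as
  select * from tawb_awb order by awb_key;   -- awb_key is a placeholder for the indexed column

exec dbms_stats.gather_table_stats(user, 'TAWB_AWB_SORTED', cascade => true);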
View 10 Replies
View Related
Jun 27, 2011
How can I reduce the consistent gets of the sql:
select b.OFFER_ID,
b.OFFER_CODE,
b.OFFER_NAME,
b.OFFER_COMMENTS,
b.BAND_ID,
b.CAN_BE_BUY_ALONE,
b.PRICING_PLAN_ID,
b.PRIORITY,
b.STATE,
[code]...
I have already added indexes for the outer SQL to reduce the cost from 374 to 150, but the consistent gets do not seem to drop that much. I notice that the function costs 6 consistent gets per execution, which does not seem very high. But when this function is included in the SQL, the statement incurs very high consistent gets; if I remove the function, the SQL costs only 1903 consistent gets.
So I am thinking there should be two ways to reduce the consistent gets: the first is to reduce the recursive calls of the SQL, the second is to reduce the consistent gets of the function (but the consistent gets in this function seem very low, only 6).
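A hedged sketch of the first approach, reducing how often the function is executed, by wrapping the call in a scalar subquery so its result can be cached for repeated input values (function, argument and table names are placeholders; this only helps when the function is deterministic for a given input and the inputs repeat often):
select b.offer_id,
       b.offer_code,
       (select my_pkg.my_func(b.band_id) from dual) as func_value   -- scalar subquery caching
from   offers b
where  b.state = 'ACTIVE';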
View 39 Replies
View Related
Jan 19, 2011
I want to load 10 million records from a staging table into a master table. One piece of logic must be applied during the load: if a row is already present in the master table, we need to update the corresponding row in the master table; otherwise the row is inserted into the target table.
I have been using the BULK COLLECT and FORALL method to load data; it shows better performance compared to a row-by-row cursor process. As per the Oracle documentation, we cannot use SELECT statements inside a FORALL, so we could not put this logic inside the FORALL.
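A hedged sketch of the set-based alternative that expresses exactly this upsert logic in one statement (table, key and column names are placeholders for the real staging and master structures):
merge into master_tab m
using staging_tab s
on    (m.id = s.id)                               -- assumed key column
when matched then
  update set m.col1 = s.col1, m.col2 = s.col2
when not matched then
  insert (id, col1, col2) values (s.id, s.col1, s.col2);
commit;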
View 2 Replies
View Related