Performance Tuning :: Why There Is No Row Source Operation Information In TKPROF Output
May 13, 2011
Why is there no "Row Source Operation" information when I trace a long and complicated SQL query (it takes at least 25 minutes to run)?
However, I can get the "Row Source Operation" information for a simple SQL query.
Here is the tkprof output for a simple query and for a complicated query:
TKPROF output for a simple query:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       13      0.00       0.00          0         16          0         173
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15      0.00       0.00          0         16          0         173
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 3139
Rows     Row Source Operation
-------  ---------------------------------------------------
    173  TABLE ACCESS FULL MY_TABLE (cr=16 pr=0 pw=0 time=0 us)
Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                       13        0.00          0.00
SQL*Net more data to client                     24        0.00          0.00
SQL*Net message from client                     13        5.33          5.47
TKPROF output for a complex query:
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.38       0.40          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.38       0.40          0          0          0           0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 3139
No Row Source Operation information appears here.
Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                        1        0.00          0.00
db file sequential read                      73062        0.25        692.23
db file scattered read                       16099        0.18         76.65
latch: shared pool                               4        0.02          0.03
latch: cache buffers chains                      3        0.04          0.04
latch free                                      16        0.00          0.00
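A note on what is probably happening (hedged, but consistent with the output above): tkprof builds the "Row Source Operation" section from the STAT lines in the raw trace, and Oracle writes STAT lines only when a cursor closes. In the complex-query trace the Fetch count is 0, i.e. the cursor never finished fetching and never closed while tracing was active, so there were no STAT lines to report. A minimal sketch of a re-test, assuming you can let the query complete and then end the session cleanly:

-- level 8 records wait events as well as the basic trace
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

-- run the long query to completion (the actual statement is elided here)
SELECT ...;

EXIT  -- disconnecting closes the cursor and flushes the STAT lines

Then run tkprof against the resulting trace file again.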
View 1 Replies
Jun 9, 2010
I need help understanding the TKPROF output for a session that was executing an insert statement (inserting 70 lakh, i.e. 7 million, rows) on which a row-level trigger gets fired and selects from the following table. The trace was run for 1 hour.
Table name -> GS_MAP_RCC_CCIT
This table has 37 rows, all in a single block, and has a primary key index on RCC_NUM that is also contained in a single block. We are getting the most wait events on db file sequential read.
As per my understanding, this is due to contention for the same block, because these queries are fired for each inserted row.
View 14 Replies
Apr 10, 2013
I want to know about trace files and TKPROF, with an example.
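A minimal end-to-end sketch, assuming SQL*Plus on 11g and a dedicated server connection (the table name and identifier are made up for illustration):

-- 1. tag the trace file so it is easy to find, then switch tracing on
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'demo_trace';
ALTER SESSION SET SQL_TRACE = TRUE;

-- 2. run the statement(s) of interest
SELECT COUNT(*) FROM employees;

-- 3. switch tracing off and locate the raw trace file
ALTER SESSION SET SQL_TRACE = FALSE;
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

-- 4. from the OS prompt, format the raw trace with tkprof, e.g.:
--    tkprof <trace_file>.trc report.txt sys=no sort=exeela

The formatted report then shows parse/execute/fetch statistics, row source operations and wait events per statement, as in the examples elsewhere on this page.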
View 1 Replies
Aug 31, 2011
I am trying to run a trace file through TKPROF. It throws an error:
MEMORY FAULT
The trace file is 8 MB. I tried some other trace files bigger in size (10 MB) than this one, and they work fine. I checked whether any special permission is required on the trace file for tkprof to execute, but it has the same permissions as the other trace files.
The problem is that it does not show any error number or description other than "MEMORY FAULT".
View 1 Replies
Jan 13, 2011
Please find the attached tkprof'd file of the session.
I started the trace after the query had started (upon the user's complaint).
However, even after tracing the session for more than 30 minutes, I cannot see where the 30 minutes are accounted for in this file.
View 11 Replies
Jun 11, 2010
How can I differentiate between system-issued SQL and user-issued SQL in the tkprof report?
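One relevant tkprof option (worth confirming against your version's usage text): sys=no, which suppresses SQL recursively executed as user SYS, which is typically what is meant by system-issued SQL. For example:

tkprof ora_1234.trc report.txt sys=no    (ora_1234.trc is a placeholder name)

In addition, each statement in the report carries a "Parsing user id" line, which identifies the schema that issued it.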
View 5 Replies
Sep 6, 2013
I am using Release 11.2.0.3.0 - 64bit Production of Oracle, and we have a 3-tier architecture (firewall/web/app/DB). I saw some SQL queries running for up to ~10 hours in my database, all part of the application (module JDBC THIN CLIENT). After talking to the Java guys, they asked me to kill the sessions for those queries. The queries come from a search screen where a user enters a large date range and then moves to another tab, leaving the queries running indefinitely in the database even though the user is no longer interested in the result set.
How do I avoid this? In the past our database has suffered resource contention leading to application slowness, so I was planning to set different timeouts, using database resource consumer groups, for online user requests and batch requests, depending on the app server (that is, by machine name) making the request.
I have done the setup below locally to test one scenario, in which I make a database call from a different machine and it should time out after the specified duration. But it is not working as expected: the calls from the specified machine are not getting assigned to the created consumer group.
Begin
-- create the pending area
dbms_resource_manager.create_pending_area();
END;
/
BEGIN
-- Create the consumer group
[code]....
After this, when I verify, the calls from machine 'LR9XY7T8' belong to the consumer group 'OTHER_GROUPS', and the SQL query does not get timed out within 60 seconds as intended.
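For comparison, below is a hedged sketch of the full sequence (group, plan and user names are assumptions, and the posted code above is elided). Two pieces that are commonly missed, and that both leave sessions in OTHER_GROUPS, are the CLIENT_MACHINE mapping and the switch privilege granted to the user:

BEGIN
  dbms_resource_manager.create_pending_area();

  dbms_resource_manager.create_consumer_group(
    consumer_group => 'ONLINE_LIMITED',                 -- assumed name
    comment        => 'requests from app server LR9XY7T8');

  -- map sessions to the group by client machine name
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.client_machine,
    value          => 'LR9XY7T8',
    consumer_group => 'ONLINE_LIMITED');

  dbms_resource_manager.create_plan(
    plan    => 'APP_PLAN',                              -- assumed name
    comment => 'per-machine call limits');

  -- cancel the current call once it has run for 60 seconds
  dbms_resource_manager.create_plan_directive(
    plan             => 'APP_PLAN',
    group_or_subplan => 'ONLINE_LIMITED',
    comment          => 'cancel calls over 60s',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 60);

  -- every plan must have a directive for OTHER_GROUPS
  dbms_resource_manager.create_plan_directive(
    plan             => 'APP_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'catch-all');

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();

  -- without this grant the mapping is ignored and sessions
  -- remain in OTHER_GROUPS
  dbms_resource_manager_privs.grant_switch_consumer_group(
    grantee_name   => 'APP_USER',                       -- assumed user
    consumer_group => 'ONLINE_LIMITED',
    grant_option   => FALSE);
END;
/

The plan still has to be activated (ALTER SYSTEM SET resource_manager_plan = 'APP_PLAN') before the directive takes effect.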
View 9 Replies
Mar 28, 2013
How can I find out which Oracle session was consuming high memory in the past?
I can't get the data from V$SESSTAT.
I am unable to get the information from AWR.
dba_hist_active_sess_history has no field indicating memory-related information.
Shall I concentrate on the EVENT column in dba_hist_active_sess_history for sessions that continuously did sorts and direct path reads?
Or
Should I locate sql_ids from dba_hist_sqlstat with high SORTS_DELTA for snapshots in the problematic time period, and then query dba_hist_active_sess_history using those sql_ids?
Which approach should I take to find the session that consumed the most memory in the past?
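If the database is 11.2 or later, one hedged option: DBA_HIST_ACTIVE_SESS_HISTORY carries PGA_ALLOCATED and TEMP_SPACE_ALLOCATED columns, so past memory consumers can be ranked directly. The time window below is a made-up example:

SELECT session_id,
       session_serial#,
       sql_id,
       ROUND(MAX(pga_allocated) / 1024 / 1024) AS max_pga_mb
FROM   dba_hist_active_sess_history
WHERE  sample_time BETWEEN TIMESTAMP '2013-03-01 09:00:00'
                       AND TIMESTAMP '2013-03-01 11:00:00'
GROUP  BY session_id, session_serial#, sql_id
ORDER  BY max_pga_mb DESC;

On older releases, the SORTS_DELTA approach via dba_hist_sqlstat is the more workable of the two you list, since ASH wait events alone do not quantify memory.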
View 8 Replies
Sep 6, 2011
Is there a webpage that documents the units of the fields in Statspack? For example, readtim in PERFSTAT.STATS$FILESTATXS (centiseconds?).
SELECT snap_id
,phyrds
,readtim
,phywrts
,writetim
,wait_count
FROM PERFSTAT.STATS$FILESTATXS
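I am not aware of a single page, but the Statspack tables generally mirror the V$ views they snapshot; STATS$FILESTATXS mirrors V$FILESTATXS, whose timing columns are in hundredths of a second (centiseconds) when TIMED_STATISTICS is enabled. Under that assumption, a converted version of the query might look like:

SELECT snap_id,
       phyrds,
       readtim  * 10 AS read_time_ms,    -- assuming centiseconds
       phywrts,
       writetim * 10 AS write_time_ms,   -- assuming centiseconds
       wait_count
FROM   perfstat.stats$filestatxs;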
View 2 Replies
Jul 7, 2012
Where are intermediate rows stored before join and GROUP BY operations?
Are the rows kept in the PGA private SQL area, or as blocks in the SGA buffer cache?
View 11 Replies
Nov 16, 2011
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (tkprof) of the same query. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.06          0         10          0           0
Fetch      758      0.03       1.77          0          0          0       11345
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      760      0.03       1.83          0         10          0       11345
[code]....
View 1 Replies
Jan 19, 2011
I have a SQL*Loader log file output. I would like to delete the table information from the output file. Following is my code.
//PATH OPENED AS TEXT BY STREAM READER
System.IO.StreamReader strm = File.OpenText(batchFileName);
//OUTPUT
[Code]....
View 8 Replies
Jul 3, 2012
My application runs a batch procedure once a week that searches for 'A_Text' values (from a column in TABLE1) inside a CLOB column in TABLE2, and inserts results accordingly into another table, TABLE3.
code snippet is like this -
---
CURSOR cr_sn
IS
SELECT serial_number
[Code]....
TABLE1 will have at least 1.1 million rows, but not significantly more than that.
This procedure takes 24+ hours to complete. I tried:
1. putting a parallel hint (INSERT /*+ PARALLEL */ INTO Table3)
2. partitioning TABLE2 on last_update_date and adding a WHERE clause on last_update_date to the above query (last_update_date BETWEEN date1 AND date2). (A further option is sketched below.)
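A different technique worth naming, since neither attempt above changes the per-row CLOB scanning itself: an Oracle Text CONTEXT index on the TABLE2 CLOB column turns each search into an index probe. A hedged sketch (column and index names are assumptions; it requires the Oracle Text option, and CONTAINS matches indexed tokens rather than arbitrary substrings, so verify it fits your matching rules):

CREATE INDEX table2_txt_idx ON table2 (clob_col)
  INDEXTYPE IS CTXSYS.CONTEXT
  PARAMETERS ('SYNC (ON COMMIT)');

SELECT t2.rowid
FROM   table2 t2
WHERE  CONTAINS(t2.clob_col, 'A_Text') > 0;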
View 3 Replies
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is memory based structures (logical) where as database consists of physical structures.
However, how does one tune the database, i.e. the physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools (third party as well as Oracle-supplied) are required/available for these types of tuning scenarios?
View 1 Replies
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I'm running the following merge query, which runs for 13 hours to update records, which is quite a long time. (See the note after the query below.)
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
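One prerequisite worth checking before anything else (hedged, since the full statement is elided above): a PARALLEL hint on the target of a MERGE parallelizes the DML step only if parallel DML is enabled in the session; otherwise only the query side runs in parallel.

-- parallel DML is disabled by default at the session level
ALTER SESSION ENABLE PARALLEL DML;
-- then re-run the MERGE in this session and commit; the execution plan
-- shows whether the MERGE row source now sits below the PX coordinator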
View 39 Replies
Sep 30, 2010
How does the length (width) of a column affect index performance?
For example if i had IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
Table key consist of(id, job, time)
Column JOB has fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).
What performance increase could I expect if, in the column "job", I stored not names but numbers identifying the job names?
E.g. I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
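A hedged sketch of the proposed change (constraint and table names are assumptions): a NUMBER job code shortens every key entry in the IOT, so more entries fit per leaf block and the B*-tree may stay flatter; the trade-off is a join or lookup to translate the code back to a name.

CREATE TABLE job_ref (
  job_id   NUMBER       PRIMARY KEY,
  job_name VARCHAR2(20) NOT NULL UNIQUE
);

CREATE TABLE emp_iot (
  id     NUMBER,
  job_id NUMBER NOT NULL REFERENCES job_ref,
  time   DATE,
  plan   NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;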
View 24 Replies
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed across many extents, the scan process takes a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed I will re-create, move or rebuild the table or index, but before that I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
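There is no single "fragmentation" view, but DBA_EXTENTS is one hedged starting point: it shows how many extents each segment is spread across (the owner below is a placeholder). Note that on locally managed tablespaces a high extent count by itself is rarely a problem; empty blocks left below the high-water mark by deletes are the more common cause of slow full scans.

SELECT segment_name,
       segment_type,
       COUNT(*)               AS extents,
       SUM(bytes)/1024/1024   AS mb
FROM   dba_extents
WHERE  owner = 'APP_OWNER'
GROUP  BY segment_name, segment_type
ORDER  BY extents DESC;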
View 2 Replies
Oct 20, 2010
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten the name and can't recall it. What is this type of row-reduction optimization called?
View 6 Replies
Jun 16, 2011
How many records could I have in a single table without performance degradation, on Standard Edition without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Are 300 million rows in just one table, with 500K transactions/day, too much?
Simple database with simple schema.
How many records begin to be too many?
View 2 Replies
Nov 15, 2010
Testing our 9i to 11g upgrade, we've imported the entire DB onto the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering and recompiling the procedure does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason we'd have to re-install a procedure to correct a performance problem?
View 13 Replies
Apr 12, 2013
I need to check a package's performance and improve it.
1. How do I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I observed that the delete takes a long time to remove all the records from the table (7,000,000 rows). This is like a staging table: the data needs to be cleaned out daily before new data is inserted into it. What can I use instead of DELETE? (See the sketch below.)
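For point 1, DBMS_PROFILER (or the 11g hierarchical profiler DBMS_HPROF) reports time spent per PL/SQL line, which covers the statement-by-statement question. For point 2, the usual replacement for a whole-table DELETE on a staging table is TRUNCATE; a hedged sketch with an assumed table name:

BEGIN
  -- TRUNCATE is DDL: it resets the high-water mark and generates almost
  -- no undo/redo, unlike DELETE, which logs every row. It cannot be
  -- rolled back, so it only fits a table that is fully cleared each day.
  EXECUTE IMMEDIATE 'TRUNCATE TABLE stg_daily_load';   -- assumed name
END;
/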
View 13 Replies
Aug 9, 2010
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the statistics so that the plan is followed without hints.
Q1. If it is true what statistics do we adjust for influencing the execution plan and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table then how can I adjust the statistics to achieve that?
Q 3. If I want to use hash join instead of a nested loop join then then how can I adjust the statistics to achieve that?
I can put the ordered and the use_hash hint to effect this but again I have heard that altering statistics is a more robust way to control an execution plan as compared to hints.
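The usual lever for such experiments is DBMS_STATS.SET_TABLE_STATS (with SET_COLUMN_STATS and SET_INDEX_STATS for finer control), which overwrites the optimizer's inputs directly. A hedged sketch with invented numbers: inflating DEPT's apparent size typically pushes the optimizer away from using it as the inner table of a nested loop and toward a hash join, which speaks to Q2 and Q3, but the resulting plan must always be re-checked rather than assumed:

BEGIN
  dbms_stats.set_table_stats(
    ownname => USER,
    tabname => 'DEPT',
    numrows => 5000000,    -- invented value
    numblks => 100000);    -- invented value
END;
/
-- then re-examine the plan of the emp/dept query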
View 6 Replies
Dec 6, 2011
I have an issue with export (expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
View 1 Replies
Oct 17, 2011
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes, like LOB, FAMILY, BRAND etc., and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches all records.
To get this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance (see the rewrite sketched after the query). All the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string; V_SELLOB is one of them and LOB_DESC is a column in the DB:
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
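A commonly suggested rewrite (hedged, since the posted cursor is elided and V_SELLOB is really a comma-separated list, which needs extra handling): make the "fetch everything" case an explicit OR instead of wrapping the column in DECODE, so the optimizer can short-circuit it. Sketched for a single-value V_SELLOB:

SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
FROM   items a
WHERE  a.sku_status = 'A'
AND    (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob);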
View 9 Replies
Sep 3, 2010
What are the principal things to look at when we get different performance results for the same query? I have two different databases: the plan and the data are the same, but the performance results are very different.
View 10 Replies
Mar 17, 2012
What are the most important performance metrics we have to calculate or take into account to preserve or increase DB performance in terms of response times?
View 8 Replies
Oct 21, 2011
Help me understand these statistics from TKPROF. The stats below are for an INSERT ... SELECT statement.
What do the 'disk' and 'current' values refer to here?
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1   1141.84    1117.85    1829639  517129893       1749         109
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1   1141.84    1117.85    1829639  517129893       1749         109
[code]...
View 1 Replies
May 8, 2013
How do I get a tkprof'd trace file in the SQL Developer tool? I do not have SQL access in command-line mode.
View 5 Replies
Jan 6, 2011
I am trying to get a readable version of a .trc file generated by Oracle 10.2.0.4.0 with tkprof, but I have been unable to get this done. This is what I have already tried:
tkprof /u00/app/oracle/admin/DB/udump/*BITN1234.trc /batch/salidas/student/BITNTRACE.txt
Whether I run this from my Pro*C process or from the command line, it returns:
TKPROF: Release 10.2.0.4.0 - Production on Thu Jan 6 11:04:38 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
could not open trace file /u00/app/oracle/admin/DB/udump/BITN1234.trc
I have already (from my PRO*C code and from command line)...
- made sure that the file is in the directory.
- run this from the udump directory, where the .trc file is...didn't work.
- run this from the udump directory, specifying the complete path explicitly in the tkprof line anyway (redundant... I know)... didn't work.
- tried to copy the file to another directory in order to run the tkprof,
and it returns:
cp: BITN1234.trc: The file access permissions do not allow the specified
action.
- tried changing privileges with:
chmod a+r BITN1234.trc
and it returns:
chmod: BITN1234.trc: Operation not permitted.
- tried to run tkprof01 instead of tkprof and it returns:
We trust you have received the usual lecture from the local System Administrator. It usually boils down to these two things:
#1) Respect the privacy of others.
#2) Think before you type.
View 3 Replies
Mar 27, 2013
I am working on an assignment where the client is using Oracle 10g but is stuck using the RBO. The application team builds dynamic queries from the GUI available to them, and some of them run very slowly.
The code cannot be changed to tune the queries, nor do we get the exact costly step in the plan (this being the RBO). For some long-running queries the Tuning Advisor produces no recommendations.
Another hurdle is that all the application users share the same application user id, so we cannot write a logon trigger to switch to the CBO for particular queries to see what is happening in the background!
View 11 Replies