Performance Tuning :: How To Find Out Session Consuming High Resource In Past
Mar 28, 2013
How can I find the particular Oracle session that was consuming high memory in the past?
I can't get the data from v$sesstat.
I am unable to get the information from AWR.
dba_hist_active_sess_history does not have a field that indicates memory-related information.
Shall I concentrate on EVENT in dba_hist_active_sess_history, looking for sessions that continuously waited on sort and direct path read events?
Or
Locate the sql_id from dba_hist_sqlstat with a high SORTS_DELTA for snapshots belonging to the problematic time period, and then use that sql_id to query dba_hist_active_sess_history?
Which approach shall I take to find the session that consumed the most memory in the past?
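The second approach can be sketched roughly as follows (a hedged sketch, not a definitive answer; :begin_time, :end_time and :candidate_sql_id are placeholders for the problematic period and the SQL under suspicion; also, if the release is 11.2, dba_hist_active_sess_history carries a PGA_ALLOCATED column that reports per-sample PGA memory directly):

-- Rank SQL by sort activity in the problem window.
SELECT s.snap_id, s.sql_id, s.sorts_delta, s.disk_reads_delta
FROM   dba_hist_sqlstat s
       JOIN dba_hist_snapshot sn
         ON  sn.snap_id         = s.snap_id
         AND sn.dbid            = s.dbid
         AND sn.instance_number = s.instance_number
WHERE  sn.begin_interval_time >= :begin_time
AND    sn.end_interval_time   <= :end_time
ORDER  BY s.sorts_delta DESC;

-- For a candidate sql_id, identify the sessions that ran it.
SELECT DISTINCT session_id, session_serial#, event
FROM   dba_hist_active_sess_history
WHERE  sql_id = :candidate_sql_id
AND    sample_time BETWEEN :begin_time AND :end_time;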
I am using Oracle 11g on Windows 2003 Server, and in Task Manager, tnslsnr.exe is consuming approximately 1 GB of RAM. After restarting the listener it came down to some MB (20-30), but again after a few days it rises and consumes lots of memory.
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      13        0.00          0.00
  SQL*Net more data to client                    24        0.00          0.00
  SQL*Net message from client                    13        5.33          5.47

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 3139

No Row Source Operation information here.

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  db file sequential read                     73062        0.25        692.23
  db file scattered read                      16099        0.18         76.65
  latch: shared pool                              4        0.02          0.03
  latch: cache buffers chains                     3        0.04          0.04
  latch free                                     16        0.00          0.00
I am using the Release 11.2.0.3.0 - 64bit Production version of Oracle. We have a 3-tier architecture (firewall/web/app/DB). I saw some SQL queries running for up to ~10 hours in my database, and they are part of the application (module JDBC THIN CLIENT). After talking to the Java guys, they asked us to kill the sessions running those queries. The queries come from a search tab, in which a user puts in some large values for the date range and then moves to another tab, but the queries keep running indefinitely in the database even though the user is no longer interested in the result set.
So how do I avoid this? In the past, our database has suffered resource contention leading to application slowness, so I was planning to set different timeouts using Database Resource Manager consumer groups for online user requests and batch requests, depending on which app server (that is, which machine name) makes the request.
So I have done the setup below in my local environment to test one scenario, in which I make a database call from a different machine and it should time out after the specified duration. But it is not working as expected: the calls from the specified machine are not getting assigned to the created consumer group.
Begin
  -- create the pending area
  dbms_resource_manager.create_pending_area();
END;
/
BEGIN
  -- Create the consumer group
[code]....
After this, when I verify the calls from machine 'LR9XY7T8', they belong to the consumer group 'OTHER_GROUPS', and the SQL query is not getting timed out within 60 seconds as specified.
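Since the full script is elided above, here is a hedged sketch of the two pieces that most often cause sessions to fall back to OTHER_GROUPS: the machine mapping itself, and the switch privilege (the group name ONLINE_GROUP and grantee APP_USER are illustrative, not from the post):

BEGIN
  dbms_resource_manager.create_pending_area();

  -- Map sessions coming from the app server machine to the group.
  -- Note: the value is matched against V$SESSION.MACHINE, so spelling
  -- and any domain suffix must match exactly.
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.client_machine,
    value          => 'LR9XY7T8',
    consumer_group => 'ONLINE_GROUP');

  dbms_resource_manager.submit_pending_area();
END;
/

-- Without the switch privilege the mapping is silently ignored and the
-- session stays in OTHER_GROUPS, which matches the symptom described.
BEGIN
  dbms_resource_manager_privs.grant_switch_consumer_group(
    grantee_name   => 'APP_USER',     -- hypothetical application schema
    consumer_group => 'ONLINE_GROUP',
    grant_option   => FALSE);
END;
/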
I am facing one performance issue, in which the query cost is very low compared to the CPU cost, and as a result the CPU always shows a high graph. I am also attaching the gv$sql and gv$sql_plan data for this query.
This is the query:
SELECT PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE)     AS STARTDATE,
       MAX(STARTDATE)     AS ENDDATE
FROM   PROJECTTIMELOGSSTAGE PTLS,
       PROJECTTIMESHEETITEM PTSI,
       TIMESHEET TS
WHERE  PTLS.PROJECTID = :B2
AND    TS.TIMESHEETID = PTSI.TIMESHEETID
AND    TS.USERID = :B1
AND    PTSI.TIMESHEETID = PTLS.TIMESHEETID
AND    PTSI.ITEMTYPE = PTLS.ITEMTYPE
AND    PTSI.ITEMID = PTLS.ITEMID
AND    (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
AND    PTLS.ITEMTYPE NOT IN ('OtherTsk','NewTsk','Loc','Glb')
AND    (PTLS.ITEMTYPE, PTLS.ITEMID) IN
         (SELECT ITEMTYPE, ITEMID
          FROM   PROJECTTIMELOGSSTAGE PTLS1
          WHERE  PTLS1.PROJECTID = :B2
          AND    PTLS1.TIMESHEETID = :B3)
GROUP BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
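As a starting point for the analysis (a hedged sketch of my own, not from the post), the cursor cache can confirm whether CPU really dominates the per-execution time before digging into the plan; :sql_id is the offending statement's id:

SELECT sql_id, child_number, executions,
       ROUND(cpu_time     / NULLIF(executions, 0) / 1e6, 3) AS cpu_sec_per_exec,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS ela_sec_per_exec,
       buffer_gets / NULLIF(executions, 0)                  AS gets_per_exec
FROM   gv$sql
WHERE  sql_id = :sql_id;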
Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id
from   bhi_tracking
where  description like 'Multilateral:%'

The explain plan for this query is:-
Bhi_tracking is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then insert the rows into it, to enhance the performance?
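One possibility worth testing (an assumption on my part, not from the thread): because the LIKE pattern has no leading wildcard, a plain B-tree index on DESCRIPTION can be range-scanned instead of full-scanning millions of rows. The index name is illustrative:

-- Hypothetical index to support the leading-prefix LIKE predicate.
CREATE INDEX bhi_tracking_desc_idx ON bhi_tracking (description);

-- The optimizer can then consider an index range scan for:
--   WHERE description LIKE 'Multilateral:%'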
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert the result into a stage table using the code below.
The issue is that table1 has 50 million records and table2 is empty,
so the first time we execute this, the collections v_collection1 and v_collection2 will hold 50 million records in memory. I think this is not good, because holding them in memory will eat memory and resources during sorting and other activities.
After fetching the records into the collections, we insert them into the stage table and then COMMIT. I don't think that will be good either, because committing 50 million rows at once will generate a large amount of redo.
Below is a snippet of my code (see the alternative sketch after it):
DECLARE
  TYPE lst_collection1 IS TABLE OF table1.col1%TYPE INDEX BY binary_integer;
  TYPE lst_collection2 IS
[code].......
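Since the snippet is truncated, here is a hedged alternative sketch under the stated assumptions (table1, table2, stage_table and col1 stand in for the real 15-column tables). The first form avoids collections entirely; the second bounds memory with a LIMIT clause if PL/SQL is required:

-- Let SQL do the set difference and load the stage table directly,
-- keeping 50 million rows out of PGA collections.
INSERT /*+ APPEND */ INTO stage_table (col1)
SELECT col1 FROM table1
MINUS
SELECT col1 FROM table2;
COMMIT;

-- If PL/SQL is required, fetch in batches so memory stays bounded.
DECLARE
  CURSOR c_diff IS
    SELECT col1 FROM table1
    MINUS
    SELECT col1 FROM table2;
  TYPE t_col1 IS TABLE OF table1.col1%TYPE INDEX BY PLS_INTEGER;
  l_rows t_col1;
BEGIN
  OPEN c_diff;
  LOOP
    FETCH c_diff BULK COLLECT INTO l_rows LIMIT 10000;  -- bounded memory
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO stage_table (col1) VALUES (l_rows(i));
    COMMIT;  -- per-batch commit keeps undo/redo pressure lower
  END LOOP;
  CLOSE c_diff;
END;
/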
The prod stats have been imported into development. The stats were gathered two months back on dev, while in production the stats were gathered two weeks back.
My question: shouldn't the high volume of data cause changes of plan in both environments? My thinking is that the plans can differ because the high volume of data is changing in prod, which may lead to a different plan there.
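For reference, a hedged sketch of how prod stats are typically moved to dev (the schema name APP_SCHEMA and stat table STATS_TAB are illustrative, not from the post):

-- On production:
EXEC dbms_stats.create_stat_table(ownname => 'APP_SCHEMA', stattab => 'STATS_TAB');
EXEC dbms_stats.export_schema_stats(ownname => 'APP_SCHEMA', stattab => 'STATS_TAB');

-- Move STATS_TAB to dev (export/import or a database link), then on dev:
EXEC dbms_stats.import_schema_stats(ownname => 'APP_SCHEMA', stattab => 'STATS_TAB');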
The query below is utilizing more than 17 GB of temp space, but it is still failing due to insufficient temp space. Is there any way to rewrite this query to reduce the temp utilization?
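Before rewriting, it may help to see which plan steps actually spill to temp. A hedged diagnostic sketch, assuming the statement can be caught while it is in the cursor cache (:sql_id is its id):

SELECT sql_id, operation_type, policy,
       ROUND(last_memory_used   / 1024 / 1024) AS mem_mb,
       ROUND(last_tempseg_size  / 1024 / 1024) AS temp_mb
FROM   v$sql_workarea
WHERE  sql_id = :sql_id
ORDER  BY last_tempseg_size DESC NULLS LAST;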
In a 3-node RAC setup, one node is showing high CPU utilization, around 40~50%. The CPU utilization was less than 20% ten days back, but about nine days ago it jumped and has consistently stayed in double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization showing the top events below:
Event                           Waits    Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                   5,802                             34.9
RFS ping                           15      5,118        33,671               30.8  Other
log file sequential read      234,831      5,036            21               30.3  System I/O
SQL*Net more data from client  24,171      1,087            45                6.5  Network
db file sequential read       130,939        453             3                2.7  User I/O
Findings: In the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer?
2) I feel the Network waits can be reduced by tweaking the SDU & TDU values based on the MTU.
We are using the 11g AMM feature with MEMORY_TARGET set to 96 GB, and the total RAM on the server is 128 GB. Now top and free show only 200 MB of memory free on the system.
There are two processes, dbw0 and dbw1, which consume the most memory: about 30 GB per dbw process.
Why is the dbw process taking up so much memory when there is not much load on the database?
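A diagnostic sketch (with the caveat, my assumption, that per-process figures in top can include shared SGA pages the DBWn processes have touched, so they may overstate private memory): checking how AMM has actually distributed memory helps separate shared from private usage.

SELECT component, ROUND(current_size / 1024 / 1024) AS size_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0
ORDER  BY current_size DESC;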
Doing some data conversion at the moment and using V$SESSION_LONGOPS to predict when the current task will be finished so I can run the next one.
V$SESSION_LONGOPS seems to have only the last 16 long operations for the session. Older operations are automatically purged. My bigger tables have 32 partitions, so after the first 16 are processed, I cannot tell which partition I am up to.
Googling "old longops" and "longops history" didn't work, nor did the same searches on this site. The Oracle Reference manual section on V$SESSION_LONGOPS did not mention that older entries are purged.
I need help understanding the TKPROF output for a session that was executing an INSERT statement (inserting 70 lakh rows), on which a row-level trigger fires and selects from the following table. The trace was run for 1 hour.
Table name -> GS_MAP_RCC_CCIT
This table has 37 rows, all in a single block, and it has a primary key index on RCC_NUM that is also contained in a single block. We are getting the maximum wait events on db file sequential read.
As per my understanding, this is due to contention for the same block, because these queries are fired for each row.
Predicate Information (identified by operation id): ---------------------------------------------------
1 - filter("OBJECT_TYPE"='VIEW')
I understand that because the query accesses a large number of blocks, which makes it exceed the 6-second threshold, the session appears in v$session_longops. And this has nothing to do with the time spent displaying the records on the screen.
Now, why does the following query not appear in v$session_longops?
select /*+ full(dbo) */ count(*) from dbo;
  COUNT(*)
----------
   1006525

Elapsed: 00:00:01.36
dv3_erie-dev_08 >select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
SQL_ID  4mgjwp3tv70db, child number 0
-------------------------------------
select /*+ full(dbo) */ count(*) from dbo
The table has all nullable columns, and thus I assume the index on ID can't be used for this COUNT. Thus ideally it should be accessing the same number of blocks, and should therefore appear in v$session_longops.
I've been searching the web for examples of how to run a trace. It's needed for a session other than the current one, at trace level 4 (I need the bind variable values in the trace).
Unfortunately, I couldn't trace with DBMS_SUPPORT.START_TRACE_IN_SESSION; I understand that this is because it was only introduced in Oracle 11g.
How can I trace another session at level 4 on Oracle 10g?
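One option that does exist on 10g (offered as a hedged sketch, not from the thread) is DBMS_MONITOR, whose binds => TRUE flag captures bind values much like event 10046 level 4. :sid and :serial identify the target session, taken from v$session:

BEGIN
  dbms_monitor.session_trace_enable(
    session_id => :sid,
    serial_num => :serial,
    waits      => FALSE,   -- set TRUE as well for the level 12 equivalent
    binds      => TRUE);
END;
/

-- ...reproduce the workload, then switch tracing off:
BEGIN
  dbms_monitor.session_trace_disable(
    session_id => :sid,
    serial_num => :serial);
END;
/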
At my prod instance, one SQL query ran for about 9 hours. The query eventually completed successfully. Suddenly our ops team wants to know which SQL query was running for those 9 hours. As the query completed successfully, no record of it is found at the session level.
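A hedged sketch: DBA_HIST_ACTIVE_SESS_HISTORY retains a subset of ASH samples (typically one every 10 seconds), so counting retained samples per sql_id approximates how long each statement was active, even after the session is gone:

SELECT sql_id,
       COUNT(*)         AS ash_samples,
       MIN(sample_time) AS first_seen,
       MAX(sample_time) AS last_seen
FROM   dba_hist_active_sess_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '1' DAY
AND    sql_id IS NOT NULL
GROUP  BY sql_id
ORDER  BY ash_samples DESC;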
I am trying to find out whether an index needs to be rebuilt or not. For that I have analyzed the index, but after that I don't know how to calculate the ratio. What are the steps to calculate the following ratio?
Run the ANALYZE INDEX command on the index to validate its structure, and then calculate the ratio LF_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN); if it isn't near 1.0 (i.e. greater than 0.7 or so) then the index should be rebuilt. Or rebuild if the ratio BR_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN) is nearing 0.3.
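A hedged sketch of those steps (the index name my_index is illustrative); note that INDEX_STATS holds a single row, visible only to the session that ran the ANALYZE:

ANALYZE INDEX my_index VALIDATE STRUCTURE;

SELECT name,
       lf_blk_len / (lf_blk_len + br_blk_len) AS lf_ratio,
       br_blk_len / (lf_blk_len + br_blk_len) AS br_ratio
FROM   index_stats;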
I have two table APPLICATION_1 and APPLICATION_2, where user's login date is captured.There will be multiple records for each user.Same user may or may not be present in both tables.
The requirement is to find those users who have not logged-in in the last 3 years.
-- Test Case
DROP TABLE application_1;
DROP TABLE application_2;
CREATE TABLE application_1
( seq     NUMBER
, user_id VARCHAR2(30),
[code].....
Using the query below, I'm able to get the desired output.
SELECT user_id, MAX(login_date) last_login_date
FROM  ( SELECT user_id, MAX(login_date) login_date
        FROM   application_1
        GROUP  BY user_id
        UNION ALL
        SELECT user_id, MAX(login_date) login_date
        FROM   application_2
        GROUP  BY user_id )
GROUP BY user_id
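To meet the stated requirement directly, a hedged extension of the same query (the HAVING clause is my addition, not in the original post) restricts the result to users whose most recent login across both tables is older than 3 years:

SELECT user_id, MAX(login_date) AS last_login_date
FROM  ( SELECT user_id, login_date FROM application_1
        UNION ALL
        SELECT user_id, login_date FROM application_2 )
GROUP  BY user_id
HAVING MAX(login_date) < ADD_MONTHS(SYSDATE, -36);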
How do I find a particular SQL, or a set of SQLs, executed against a table (a user-specified table), that is either a very frequently executed query against that table or a high-impact SQL against that table? I am currently looking through the AWR reports to go through all the queries, but I was wondering if there are any dictionary views where I can find this information.
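A hedged sketch: DBA_HIST_SQL_PLAN records which objects each AWR-captured statement touches, so joining it to DBA_HIST_SQLSTAT lets you rank the SQL hitting a given table by executions or elapsed time (:owner and :table_name are the user-specified table):

SELECT st.sql_id,
       SUM(st.executions_delta)           AS execs,
       SUM(st.elapsed_time_delta) / 1e6   AS elapsed_sec
FROM   dba_hist_sqlstat st
WHERE  st.sql_id IN (SELECT p.sql_id
                     FROM   dba_hist_sql_plan p
                     WHERE  p.object_owner = :owner
                     AND    p.object_name  = :table_name)
GROUP  BY st.sql_id
ORDER  BY execs DESC;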