Performance Tuning :: Query Taking High CPU And Execution Time In Oracle 11gR2
Dec 24, 2012
The query below is consuming very high CPU (almost 98%) and taking a long time to execute.
SELECT ancestor,
       Max(D.alarmstate)     ALARMSTATE,
       Max(D.sialarmstate)   SIALARMSTATE,
       Max(D.uncralarmstate) UNCRALARMSTATE,
       Max(M.commstate)      COMMSTATE,
       Max(M.nncommstate)    NNCOMMSTATE,
       Max(M.servicestate)   SERVICESTATE,
       Max(M.abnormal)       ABNORMAL,
       CASE
[code]....
View 15 Replies
Mar 9, 2010
The DELETE statement in my code is taking too much time to execute. The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');
Tables used:
- TRADE_ORDER_EMP_ALLOCATION: row count 329,525,880
- LOAD_TRADE_ORDER: row count 29,281
Every column in the IN clause and in the subquery's SELECT list has an index on it. The number of rows to be deleted varies on each run (it may be hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it contains distinct values. The table TRADE_ORDER_EMP_ALLOCATION is also RANGE-partitioned on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with their indexes and partitions. Is there a way to make the above DELETE statement execute faster?
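A sketch of one alternative worth testing (not a verified fix; HASH_SJ is a real but legacy Oracle hint, and the benefit depends on how the optimizer is currently probing the big table): drive the delete with a hash semi-join built from the small 29K-row table instead of row-by-row index lookups against the 329-million-row table.
-- Sketch: build a hash table from the small driving set and semi-join it
-- to the partitioned target, rather than probing indexes per row.
DELETE FROM trade_order_emp_allocation t
WHERE (t.artemis_source_system_id, t.nm_artemis_source_system,
       t.cd_book_key, t.activity_dt)
   IN (SELECT /*+ HASH_SJ */
              l.artemis_source_system_id, l.nm_artemis_source_system,
              l.cd_book_key, l.activity_dt
       FROM   load_trade_order l
       WHERE  l.ind_is_bad_record = 'N');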
View 4 Replies
Sep 1, 2010
For one query, the cost was 16 lakhs (1.6 million) and it was taking 30 minutes; I brought the cost down to 1.5 lakhs, but it still takes 30 minutes. There were many outer joins, and the same table appeared in the FROM clause 5 times in the query. I introduced a WITH clause and brought down the cost.
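For reference, a minimal sketch of the subquery-factoring pattern (table and column names here are invented for illustration; the MATERIALIZE hint is undocumented but widely used): materializing the common table once lets the five references share one scan. Note also that the optimizer's cost is only an estimate, so a tenfold cost reduction does not guarantee a runtime reduction; compare the actual row-source statistics of both versions rather than the cost figures.
-- Hypothetical illustration: BIG_TABLE is read once and reused.
WITH base AS
  (SELECT /*+ MATERIALIZE */ id, status, amount
   FROM big_table)
SELECT b1.id, b2.amount
FROM base b1
LEFT OUTER JOIN base b2
  ON b2.id = b1.id AND b2.status = 'OPEN';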
View 7 Replies
Sep 25, 2013
select serialnumber
from   product
where  productid in
       (select /*+ full(producttask) parallel(producttask 16) */ productid
        from   producttask
        where  startedtimestamp > to_date('2013-07-04 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
        and    startedtimestamp < to_date('2013-07-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
        and    producttasktypeid in
[code]....
Explain plan output:
Plan hash value: 2779236890
---------------------------------------------------------------------------------------------------
| Id | Operation           | Name | Rows | Bytes | Cost (%CPU) | Time      | Pstart | Pstop |
---------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT    |      |    1 |    29 | 9633M   (8) | 999:59:59 |        |       |
|* 1 | FILTER              |      |      |       |             |           |        |       |
|  2 | PARTITION RANGE ALL |      | 738M |   19G | 6321K   (1) | 21:04:17  |      1 |  6821 |
[code]....
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (<not feasible>)
4 - filter("PRODUCTID"=:B1)
5 - filter(ROWNUM<100)
12 - access("MODELID"=:B1)
[code]....
Note: - SQL profile "SYS_SQLPROF_014153616b850002" used for this statement
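Since the note shows a SQL profile attached to this statement, one test worth running (a sketch; DBMS_SQLTUNE.ALTER_SQL_PROFILE is the standard API for this) is to disable the profile and see whether the 9633M-cost plan goes away:
-- Temporarily disable the attached SQL profile, then re-check the plan.
BEGIN
  DBMS_SQLTUNE.alter_sql_profile('SYS_SQLPROF_014153616b850002',
                                 'STATUS', 'DISABLED');
END;
/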
View 2 Replies
Aug 25, 2010
I am facing a very strange issue with one of our Oracle queries. The query usually completes in a minute or two. Even the execution plan of the query is good, and it works perfectly most of the time, as expected. The query fetches about 1000-2000 records each day.
But on a given day, the query takes about 30-40 minutes to execute completely. Upon checking the load on the DB server, there are no other processes running which could impact the run time of this query. Moreover, the record counts fetched are almost the same as on other days. There is no observable pattern to when this phenomenon occurs; it happens once in a while.
The configuration is Oracle 10g RAC on Linux.
View 5 Replies
Nov 23, 2010
I have one query in production which is taking a lot of CPU time; when that statement is executing, CPU usage climbs above 90%. I am attaching the SQL query and the indexes on the table.
View 4 Replies
Apr 12, 2013
How can I check the average time taken by each execution plan? I have a very big query which changes its execution plan very often; we would like to lock in the best execution plan. To find it, I would like to know the average execution time of the query when it runs under each of the different execution plans.
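Assuming AWR is available (Diagnostics Pack license), a sketch of one way to get this: DBA_HIST_SQLSTAT records elapsed time and executions per snapshot and per plan, so you can average by PLAN_HASH_VALUE. The &sql_id substitution variable is a placeholder for the query's SQL_ID.
-- Average elapsed seconds per execution, broken down by plan.
SELECT plan_hash_value,
       SUM(executions_delta) AS executions,
       ROUND(SUM(elapsed_time_delta)
             / NULLIF(SUM(executions_delta), 0) / 1e6, 3) AS avg_elapsed_secs
FROM dba_hist_sqlstat
WHERE sql_id = '&sql_id'
GROUP BY plan_hash_value
ORDER BY avg_elapsed_secs;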
View 7 Replies
Jan 13, 2009
Is there any way to tune the following query, which is using a lot of CPU?
select description, time_stamp, user_id
from bhi_tracking
where description like 'Multilateral:%'
The explain plan for this query is:
----------------------------------------------------------------
| Id | Operation         | Name         | Rows | Bytes | Cost |
----------------------------------------------------------------
|  0 | SELECT STATEMENT  |              | 178K | 6609K | 129K |
|  1 | TABLE ACCESS FULL | BHI_TRACKING | 178K | 6609K | 129K |
----------------------------------------------------------------
BHI_TRACKING is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then re-insert the rows, to improve performance?
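Because the predicate is a prefix match with no leading wildcard, an ordinary b-tree index can in principle serve it with a range scan; a sketch (whether it actually beats the full scan depends on how selective 'Multilateral:%' is against the full table):
-- A plain b-tree index supports LIKE 'prefix%' via an index range scan.
CREATE INDEX bhi_tracking_desc_ix ON bhi_tracking (description);
As for rebuilding: truncating and re-inserting, or an export/import, only helps if the table carries substantial free space below its high-water mark; it does not change the cost of returning 178K matching rows.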
View 14 Replies
Jul 11, 2013
The problem can be described as follows:
- First execution: uses all indexes on both tables
- Second execution: uses only the indexes on the first table, with a full table scan on the other
- Third execution: full table scans on both tables
The objects and related information are shown here:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
COUNT(*)
----------
272708
system@dbwap> select count(*) from my_wap.news_content;
COUNT(*)
----------
95092
system@dbwap> desc my_wap.news_content;
Name Null? Type
----------------------------------------------------- -------- ----------------
ID NOT NULL NUMBER(11)
SUBJECT NOT NULL VARCHAR2(500)
TITLE VARCHAR2(4000)
STATE NUMBER(1)
IMGPATH VARCHAR2(500)
ALIGN VARCHAR2(10)
[Code]....
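When a plan degrades across executions like this, a common first step (a sketch, not a diagnosis; the schema and table names are taken from the post) is to refresh optimizer statistics on both tables and re-test:
-- Refresh optimizer statistics on both tables, including their indexes.
BEGIN
  DBMS_STATS.gather_table_stats(ownname => 'MY_WAP',
                                tabname => 'NEWS_CONTENT',
                                cascade => TRUE);
  DBMS_STATS.gather_table_stats(ownname => 'MY_WAP',
                                tabname => 'NEWS_RELATION',
                                cascade => TRUE);
END;
/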
View 7 Replies
Nov 27, 2012
The query below is using more than 17 GB of temp space, but it is still failing due to insufficient temp space. Is there any way to rewrite this query to reduce the temp utilization?
SELECT T12.FRGHT_AMT_CURCY_CD,T23.LAST_UPD,T11.PAR_OU_ID,T9.MAIN_PH_NUM,T23.DISCNT_PERCENT,T23.X_ERROR_NUM,T18.ADDR,T14.X_ECO_B_END_1141,
T14.X_ECO_A_END_1141,T9.X_ECO_VALIDATION_FLG,T23.X_ECO_ERR_DESCR,T14.ASSET_NUM,T20.NAME,T23.X_ECO_REASON2,T14.X_ECO_B_END_ID,
T14.ASSET_NUM,T14.X_ECO_B_END_IWPC,T23.X_AE_CON_PH_NUM,T23.SHIP_ADDR_ID,T19.NAME,T23.X_BE_CON_LST_NAME,T23.CREATED_BY,T23.X_ECO_LOCATION,T8.LOC,
T3.MODIFICATION_NUM,T10.INTEGRATION_ID,T23.INTEGRATION_ID,T23.X_MESSAGE,T9.PR_ADDR_ID,T12.ACCNT_ID,T23.X_BEARERNO,T23.X_SUB_STATUS_CD,
[code]....
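To see where the temp space goes while the statement runs, one option (a sketch using the standard V$TEMPSEG_USAGE view) is to watch temp segment usage per session; note that trimming the very wide SELECT list above would directly shrink the hash-join and sort work areas that spill to temp.
-- Temp segment usage per session, in MB.
SELECT s.sid, s.sql_id, u.tablespace, u.segtype,
       ROUND(u.blocks * TO_NUMBER(p.value) / 1024 / 1024) AS mb_used
FROM v$tempseg_usage u
JOIN v$session s ON s.saddr = u.session_addr
CROSS JOIN (SELECT value FROM v$parameter
            WHERE name = 'db_block_size') p
ORDER BY mb_used DESC;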
View 3 Replies
Aug 6, 2012
I'm planning to decrease execution time by tuning the redo log files, but I'm stuck on a couple of points:
- Why is my OPTIMAL_LOGFILE_SIZE showing NULL?
- I resized the log files from 100M to 200M and also added one more log group, also at 200M, but it turned out that didn't decrease my execution time.
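On the first point: V$INSTANCE_RECOVERY.OPTIMAL_LOGFILE_SIZE is only populated when FAST_START_MTTR_TARGET is set to a non-zero value, which is the usual reason it shows NULL. A sketch (the 300-second target is an arbitrary example):
-- OPTIMAL_LOGFILE_SIZE stays NULL until a recovery target is set.
ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;
SELECT optimal_logfile_size FROM v$instance_recovery;
On the second point: larger or additional log files only reduce elapsed time when sessions are actually waiting on log-related events (such as 'log file switch' waits); if they were not, resizing will not change execution time.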
View 12 Replies
Jun 4, 2010
The attached query gives a consistent execution plan but different timings across runs.
SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/
DISTINCT odr.dve_id
FROM company_requirements crt, orders odr, lelo_products la_pct
WHERE crt.qtn_cun_id = 10035637--10000021--10035667
AND crt.ID = odr.crt_id_quote_implemented
AND NVL (odr.cancellation_date, '31-Dec-9999') = '31-Dec-9999'
[code]....
We have 4 databases, 2 on each server: db1 and db2 on server1, and db3 and db4 on server2.
Below is the record count for the filter column of the biggest table in the query, taken on all 4 databases (the column is nullable):
select count(*) from company_requirements crt WHERE crt.qtn_cun_id = 10035637
db1 = 73335
db2 = 89073
db3 = 81182
db4 = 82936
First, I executed the query on db1 and db2 while no user was logged on to the system.
db1
**********
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 17.47 473.39 85704 1508102 0 0
[code]...
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 85704 0.31 460.55
latch free 1 0.00 0.00
SQL*Net message from client 1 14.98 14.98
[code]...
Why did the elapsed time change when the data and the plan haven't changed at all? And why does the plan show different statistics for rounds 1 and 2 on db1 and db2? I ran the query twice per round on each database, so hard parsing should not be the issue. Also, why is the number of rows accessed different between db1/db2 and db3/db4, especially for step 1, when the count for crt.qtn_cun_id is similar?
In fact, when the query was taking long, I was the only user on the system. I also used hard-coded values (no bind variables at all). I checked num_rows and distinct keys as well, which are quite similar across all 4 databases, and no stats were gathered during query execution.
What should I have checked or monitored?
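For what it's worth, the trace itself suggests where to look: of the 473 s elapsed, 460 s were spent in 85,704 'db file sequential read' single-block reads, so the variation is physical I/O (a cold buffer cache) rather than the plan. A sketch for confirming this between runs:
-- Snapshot before and after each run: if a repeat run is fast, these
-- counters should barely move (blocks now come from the buffer cache).
SELECT name, value
FROM v$sysstat
WHERE name IN ('physical reads', 'physical reads cache');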
View 10 Replies
Oct 26, 2013
I wish to run a SQL query and measure its elapsed time, then compare the values against Oracle DBs at other companies. That would give me a feeling for whether our DB performs well. For example, in the UNIX world you can create a random 4GB file to measure I/O throughput and compare the values (for example, 4MB/sec).
What's the simplest way to compare DB response time between forum members' systems and our own? I don't need 100% accurate numbers.
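A crude but repeatable probe (a sketch; it deliberately uses dictionary views so it runs on any database, though it measures CPU and parsing more than disk I/O, so treat it as a rough comparator only):
-- Run in SQL*Plus with timing on; compare the wall-clock figures.
SET TIMING ON
SELECT COUNT(*)
FROM all_objects a, all_objects b
WHERE ROWNUM <= 1000000;
For an I/O-oriented comparison, timing a full scan of a table larger than the buffer cache is the closer analogue of the 4GB-file test.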
View 1 Replies
Dec 26, 2011
I am executing the query below:
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
vendor_data vd2,
vendor_data vd3,
(SELECT rownumber,
[code]....
It takes different approaches (execution plans) when executing with the same set of parameters. As a result it sometimes executes successfully, but sometimes fills all the TEMP space and fails. I am pasting both execution plans (as distinct from explain plans) below:
I. Successful Execution Plan:
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 65612 (100)| | | |
|* 1 | HASH JOIN | | 1 | 6121 | 65612 (1)| 00:13:08 | | |
[code]....
II. Failed with TEMP space Execution Plan:
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 1967 (100)| | | |
|* 1 | FILTER | | | | | | | |
| 2 | SORT GROUP BY | | 1 | 8233 | 1967 (3)| 00:00:24 | | |
|* 3 | HASH JOIN | | 1 | 8233 | 1966 (3)| 00:00:24 | | |
[code]....
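If this is 11g, one way to stop the alternation is to capture the good plan into a SQL plan baseline while it sits in the cursor cache (a sketch; DBMS_SPM is the standard package, and the substitution variables are placeholders for the statement's SQL_ID and the successful plan's hash value):
-- Pin the successful plan so the optimizer stops switching.
DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  plans_loaded := DBMS_SPM.load_plans_from_cursor_cache(
                    sql_id          => '&good_sql_id',
                    plan_hash_value => &good_plan_hash);
END;
/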
View 8 Replies
Aug 17, 2011
The query below takes more than 5 minutes to return data for any criteria. The big tables are:
SECURITY_POSITION_SUMMARY -- 60 million rows
WEB_TEAM_X_ACCOUNT_BM -- 26 million rows
The rest are small tables. All the indexes are in place, and I have tried a few hints, but this query is still slow.
WITH REPS
AS (SELECT DISTINCT REP_SET.FILTER_TOKEN
FROM (SELECT /*+ INDEX (wdsd WEBDATASETDTL_PK_TEAM) */
DISTINCT
WDSD.DATA_SETTING_ID, WDSD.FILTER_TOKEN
FROM WEB_DATA_SETTING_DETAIL WDSD,
[code]....
View 1 Replies
Jul 31, 2012
We are using the 11g AMM feature, with MEMORY_TARGET set to 96GB and 128GB of total RAM on the server. Now top and free show only 200MB of free memory on the system.
Two processes, dbw0 and dbw1, consume the most memory: 30GB per DBWn process.
Why are the DBWR processes taking up so much memory when there is not much load on the database?
View 4 Replies
Jan 24, 2013
My Oracle database version is 11.2.0.3.0. I have one schema, and in that schema I have 3 tables with the same structure and the same data but different names.
The problem is that a SELECT query takes 5 seconds on the first table, 0 seconds on the second, and 10 seconds on the third.
View 1 Replies
Apr 3, 2012
We have a query which makes Oracle behave very strangely. It is a straightforward join between four tables of about 30,000 rows each, with some simple comparisons and some NOT LIKEs.
When we run this query, it takes either about 1 second or more than 1,000 seconds to run and return the approximately 5,000 rows of the result. If we run the same query over and over again, it fluctuates back and forth between two different execution plans, apparently at random, selecting the 1,000-second version 3 times out of 4 and the 1-second version 1 time out of 4.
There are no other connections to the database, the schema is not modified, the data is identical, the query is identical, and the response is identical, but the execution time alternates between 1 second and 1,000 seconds. On the same database instance we have another schema which is identical, but with slightly less data, which is used for development. The 1,000-second run times did not happen in that schema, but only in the test system's database.
Therefore we would REALLY like to understand what happens and why, so that we can avoid triggering this in the future. We could try locking in the 1-second execution plan, but then we're afraid of making the same mistake again in the future.
Here are the two execution plans that Oracle switches between, more or less at random:
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15663 pr=10536 pw=0 time=855673 us cost=82273 size=2707430769293 card=14028138701)
79272 79272 79272 TABLE ACCESS FULL GROUPS (cr=1008 pr=0 pw=0 time=22154 us cost=277 size=10693 card=289)
[code]...
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15664 pr=0 pw=0 time=778178696 us cost=30838477 size=741611997206725 card=3842549208325)
375411 375411 375411 TABLE ACCESS FULL GROUP_GROUPS_FLAT (cr=3782 pr=0 pw=0 time=51533 us cost=1029 size=25152738 card=375414)
[code]...
The query:
select g.ucid, a.ucid
from account a, groups g, group_members gm, group_groups_flat ggf
where a.ucid = gm.ucid_member
and gm.ucid_group = ggf.ucid_member
[code]...
And excerpts from the schema:
CREATE TABLE "PDB"."GROUPS"
(
"UCID" VARCHAR2(256 BYTE),
"UNIX_GID" NUMBER(*,0),
[...]
[code]...
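The row-source statistics above show cardinality estimates that are off by many orders of magnitude (for example card=14028138701 against 5,455 actual rows), which is the classic reason an optimizer flips between plans. A sketch for pinpointing which step is misestimated (requires the query to have run with the /*+ gather_plan_statistics */ hint or STATISTICS_LEVEL=ALL):
-- Show the last execution's plan with estimated vs. actual row counts.
SELECT *
FROM TABLE(DBMS_XPLAN.display_cursor(sql_id => '&sql_id',
                                     format => 'ALLSTATS LAST'));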
View 4 Replies
Oct 16, 2012
I am building a database to store call quality statistics for VOIP networks. It is a very insert-heavy application, and data reliability is of relatively minimal importance (in the sense that a few corrupt call records here and there do not matter the way corruption does in, for example, a bank's database). Long-term storage is also unimportant; most customers only wish to keep 3 months of data readily available in the database, and most do not even archive the older data.
To that end I am searching for every possible way to improve my insert performance, and the internet has turned me on to the idea of NOLOGGING. These are the steps I have taken to reduce the I/O consumed by the redo and undo logs:
1. I am inserting with the APPEND_VALUES hint.
2. I have disabled force logging at the database level
3. I have disabled force logging at the tablespace level
4. I have disabled logging on the relevant table and each of its indices
As best I can tell, this is all I can do to minimize redo/undo, but based on my observations of the disk portion of the Windows Server 2008 Performance Monitor, it has made little to no change in the amount of I/O to my REDO and UNDO files. I/O to the .dbf containing my table makes up less than 20% of the total disk I/O for oracle.exe; the rest is the REDO and UNDO logs.
asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:897564200346274711
The above article is a little over my head, but I am able to extract from it that I will never entirely eliminate redo/undo, which is fine, but I would think I could get it lower than it currently is.
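One sanity check worth adding (a sketch with an invented table name): verify the inserts really are direct-path, because a direct-path-modified table cannot be queried in the same transaction, so the follow-up SELECT should fail with ORA-12838. If the SELECT succeeds, the APPEND_VALUES hint was silently ignored (enabled triggers or certain constraints will do that). Also note that direct path skips undo only for the table data; index maintenance still generates both undo and redo, which may account for the I/O that remains.
-- Hypothetical table; run inside one transaction without committing.
INSERT /*+ APPEND_VALUES */ INTO call_quality_stats (call_id, mos_score)
VALUES (1, 4.1);
SELECT COUNT(*) FROM call_quality_stats;  -- ORA-12838 expected if direct-path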
View 26 Replies
Dec 29, 2012
I am aware that from 11g, MEMORY_TARGET is sufficient for memory management between the SGA and PGA.
What happens if MEMORY_TARGET is set to a non-zero value and SGA_TARGET is set to zero in an 11g database? Does it enable automatic memory management within the SGA?
We are regularly hit by ORA-04031 errors. Also, the memory target advisory (V$MEMORY_TARGET_ADVICE) does not show any advisory information.
For example:
memory_max_target = 500m
memory_target = 500m
and
sga_max_size=500m
sga_target=0
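With those settings, automatic memory management is active for the whole instance, and since SGA_TARGET is zero there is no enforced minimum for the SGA as a whole, so the SGA/PGA split floats; the SGA components themselves are still auto-tuned. A sketch for watching what was actually allocated:
-- Current sizes of the auto-tuned memory components under AMM.
SELECT component, ROUND(current_size / 1024 / 1024) AS mb
FROM v$memory_dynamic_components
WHERE current_size > 0
ORDER BY mb DESC;
A common mitigation for recurring ORA-04031 under AMM is to set a floor such as SHARED_POOL_SIZE, which AMM treats as a minimum rather than a fixed size.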
View 6 Replies
May 5, 2011
Why is the elapsed time different in Oracle 9i and Oracle 10g for the same query?
View 2 Replies
Apr 27, 2012
I have a query (a report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scan reads more than 2,000 million rows in the second schema, but only 440 million in the first, which is why the first is fast. I am expecting the same in the second schema.
I have verified the following:
- All records in the tables are the same in both schemas.
- All indexes are the same.
- The tables have been analyzed.
- Histograms have been gathered on all the columns, as per the first schema.
But I still have the same problem and don't know what the cause could be.
Table_name            Num_Rows     Blocks
PRPSL_LST_T              586610       7159
PRPSL_WKFLW_ACTVTY_T     582990       4030
ITEM_CHR_VAL_T        513434010    4049020
ITEM_RGN_ASSN_T         8571220     137215
Also attached are 2 screenshots of the OEM plans.
View 2 Replies
Jul 23, 2010
I am facing a performance issue in which the query's cost is very low compared to its CPU cost, and as a result the CPU graph is always high. I am attaching the gv$sql and gv$sql_plan data for this query.
This is the query:
SELECT PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE) AS STARTDATE,
       MAX(STARTDATE) AS ENDDATE
FROM PROJECTTIMELOGSSTAGE PTLS, PROJECTTIMESHEETITEM PTSI, TIMESHEET TS
WHERE PTLS.PROJECTID = :B2
  AND TS.TIMESHEETID = PTSI.TIMESHEETID
  AND TS.USERID = :B1
  AND PTSI.TIMESHEETID = PTLS.TIMESHEETID
  AND PTSI.ITEMTYPE = PTLS.ITEMTYPE
  AND PTSI.ITEMID = PTLS.ITEMID
  AND (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
  AND PTLS.ITEMTYPE NOT IN ('OtherTsk', 'NewTsk', 'Loc', 'Glb')
  AND (PTLS.ITEMTYPE, PTLS.ITEMID) IN
      (SELECT ITEMTYPE, ITEMID
       FROM PROJECTTIMELOGSSTAGE PTLS1
       WHERE PTLS1.PROJECTID = :B2
         AND PTLS1.TIMESHEETID = :B3)
GROUP BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
View 17 Replies
Jan 22, 2009
How can I reduce the CPU cost of a query at the query level?
View 10 Replies
Feb 13, 2013
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert the result into a stage table using the code below.
The issue is that table1 has 50 million records and table2 is empty, so the first time we execute this, the collections v_collection1 and v_collection2 will hold 50 million records in memory. I don't think this is good, because holding them in memory will eat memory and resources during sorting and other activities. After fetching the records into the collections, we insert them into the stage table and then COMMIT; I don't think that will be good either, because inserting 50 million rows will generate a large amount of redo.
Below is a snippet of my code:
DECLARE
  TYPE lst_collection1 IS
    TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
  TYPE lst_collection2 IS
[code].......
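For what it's worth, this pattern usually needs no PL/SQL collections at all; a set-based sketch (the column list is abbreviated, and names follow the post):
-- Let the database compute the difference and insert it directly:
-- no 50-million-row collections in session memory. APPEND also keeps
-- redo low if stage_table is NOLOGGING and the DB isn't FORCE LOGGING.
INSERT /*+ APPEND */ INTO stage_table (col1, col2, col3 /* ...all 15 */)
SELECT col1, col2, col3 /* ...all 15 */ FROM table1
MINUS
SELECT col1, col2, col3 /* ...all 15 */ FROM table2;
COMMIT;
On the redo concern: the redo comes from the inserts themselves, not from the single COMMIT, so committing once at the end is fine.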
View 4 Replies
Jun 4, 2010
The production statistics have been implemented in development. The stats were gathered 2 months ago on dev, while in production the stats were gathered 2 weeks ago.
My question: shouldn't the high volume of data cause plan changes in both environments? My thinking is that the plans can differ, since the high volume of changing data in prod may lead to a different plan.
View 6 Replies
Apr 13, 2011
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days ago, but nine days ago it jumped and has consistently stayed in double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization showing the top events below:
EVENT                          WAITS    TIME(S)  AVG WAIT(MS)  %TOTAL CALL TIME  WAIT CLASS
CPU time                                5,802                  34.9
RFS ping                       15       5,118    33,671        30.8              Other
Log file sequential read       234,831  5,036    21            30.3              System I/O
SQL*Net more data from client  24,171   1,087    45            6.5               Network
Db file sequential read        130,939  453      3             2.7               User I/O
Findings:
In the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer?
2) I feel the network waits can be reduced by tweaking the SDU and TDU values based on the MTU.
View 2 Replies
Feb 4, 2011
How can I find the tables in the database against which heavy DML is being fired?
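One standard source (a sketch; DBA_TAB_MODIFICATIONS is fed by table monitoring, so flush the in-memory counters first):
-- Push the in-memory DML counters to the dictionary, then rank tables.
EXEC DBMS_STATS.flush_database_monitoring_info;
SELECT table_owner, table_name, inserts, updates, deletes
FROM dba_tab_modifications
ORDER BY (inserts + updates + deletes) DESC;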
View 5 Replies
Mar 28, 2013
How can I find the particular Oracle session which was consuming high memory in the past?
- I can't get the data from v$sesstat.
- I am unable to get the information from AWR.
- dba_hist_active_sess_history has no field indicating memory-related information.
Shall I concentrate on the EVENT column in dba_hist_active_sess_history, looking for sessions that continuously show sort or direct path read waits? Or should I locate the sql_id in dba_hist_sqlstat with a high SORTS_DELTA for snapshots belonging to the problematic time period, and then query dba_hist_active_sess_history using that sql_id?
Which approach should I take to find the session that consumed the most memory in the past?
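One more option, provided the database is 11.2 or later (an assumption; the columns below do not exist in earlier releases): the ASH samples in DBA_HIST_ACTIVE_SESS_HISTORY carry PGA_ALLOCATED and TEMP_SPACE_ALLOCATED, which answer the question directly. The time window is a placeholder.
-- Sessions ranked by peak sampled PGA during the problem window (11.2+).
SELECT session_id, session_serial#, sql_id,
       ROUND(MAX(pga_allocated) / 1024 / 1024)        AS max_pga_mb,
       ROUND(MAX(temp_space_allocated) / 1024 / 1024) AS max_temp_mb
FROM dba_hist_active_sess_history
WHERE sample_time BETWEEN TIMESTAMP '2013-03-27 09:00:00'
                      AND TIMESTAMP '2013-03-27 12:00:00'
GROUP BY session_id, session_serial#, sql_id
ORDER BY max_pga_mb DESC;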
View 8 Replies
Mar 2, 2013
How can I find the time log for a query or a procedure (start time, end time, and total time) so that I can tune those queries properly?
Also, how can we find a query's estimated running time?
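For anything long enough to register (scans, sorts, and hash joins running more than about six seconds), V$SESSION_LONGOPS shows both elapsed time so far and an estimate of time remaining; a sketch:
-- Progress and time estimates for currently running long operations.
SELECT sid, opname, target, sofar, totalwork,
       elapsed_seconds, time_remaining
FROM v$session_longops
WHERE totalwork > 0
AND sofar < totalwork;
For after-the-fact start and end times of individual statements, V$SQL's FIRST_LOAD_TIME and LAST_ACTIVE_TIME together with ELAPSED_TIME/EXECUTIONS give an average per execution.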
View Replies