Performance Tuning :: Servers Will Be Running SELECT Which Returns Zero Rows All Time
Feb 11, 2011
Our application servers run a SELECT that returns zero rows every time. The SELECT is part of a package that the application servers call very frequently, which is causing unnecessary CPU usage.
Original query and plan
SQL> SELECT SEGMENT_JOB_ID, SEGMENT_SET_JOB_ID, SEGMENT_ID, TARGET_VERSION
       FROM AIMUSER.SEGMENT_JOBS
      WHERE SEGMENT_JOB_ID NOT IN
            (SELECT SEGMENT_JOB_ID
               FROM AIMUSER.SEGMENT_JOBS);
[code]....
Which option would be better, or do we have other options? They need to pass the columns, with zero rows, to a ref cursor.
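If the result set genuinely must be empty every time, one cheaper pattern (a minimal sketch, assuming the package only needs the column shape for the ref cursor; the procedure and parameter names here are hypothetical) is a predicate that can never be true, which lets the optimizer skip the self-anti-join entirely:

PROCEDURE get_empty_segment_jobs (p_result OUT SYS_REFCURSOR) IS
BEGIN
  -- The filter below is always false, so the cursor carries the right
  -- column structure but returns zero rows with almost no work.
  OPEN p_result FOR
    SELECT segment_job_id, segment_set_job_id, segment_id, target_version
      FROM aimuser.segment_jobs
     WHERE 1 = 0;
END get_empty_segment_jobs;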
View 6 Replies
Apr 27, 2012
I have a query (report) that runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scan reads more than 2,000 million records in the second schema but only about 440 million in the first schema, which is why the first is fast. I am expecting the same behaviour in the second schema.
I have verified the following:
All records in the tables are the same in both schemas.
All indexes are the same.
The tables have been analyzed.
Histograms were gathered on all columns, matching the first schema.
But I still have the same problem and don't know what the cause could be.
Table_name    Num_Rows    Blocks
PRPSL_LST_T5866107159
PRPSL_WKFLW_ACTVTY_T5829904030
ITEM_CHR_VAL_T5134340104049020
ITEM_RGN_ASSN_T8571220137215
I have also attached two screenshots of the OEM plans.
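One way to narrow this down (a sketch, assuming both schemas sit in the same database; the schema names are placeholders) is to compare the optimizer statistics the two schemas actually carry, since identical data does not guarantee identical statistics:

-- Table-level statistics side by side
SELECT owner, table_name, num_rows, blocks, last_analyzed
  FROM dba_tab_statistics
 WHERE owner IN ('SCHEMA1', 'SCHEMA2')
   AND table_name IN ('PRPSL_LST_T', 'PRPSL_WKFLW_ACTVTY_T', 'ITEM_CHR_VAL_T', 'ITEM_RGN_ASSN_T')
 ORDER BY table_name, owner;

-- Index-level statistics, where a different CLUSTERING_FACTOR or BLEVEL often explains a different plan
SELECT owner, index_name, blevel, leaf_blocks, distinct_keys, clustering_factor
  FROM dba_ind_statistics
 WHERE owner IN ('SCHEMA1', 'SCHEMA2')
 ORDER BY index_name, owner;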
View 2 Replies
View Related
Aug 25, 2010
Is there any way I can see how many rows an UPDATE statement has processed while the UPDATE is still running?
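There is no exact per-statement row counter for plain DML, but a rough proxy (a sketch, assuming a single transaction is doing the update; :updating_sid is a placeholder bind) is the undo the transaction has generated, since for a simple UPDATE the used undo records track closely with the rows changed so far:

SELECT s.sid, s.serial#, t.used_urec, t.used_ublk, t.start_time
  FROM v$transaction t
  JOIN v$session     s ON s.taddr = t.addr
 WHERE s.sid = :updating_sid;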
View 2 Replies
View Related
Sep 25, 2013
select serialnumber
  from product
 where productid in
       (select /*+ full parallel(producttask 16) */ productid
          from producttask
         where startedtimestamp > to_date('2013-07-04 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
           and startedtimestamp < to_date('2013-07-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
           and producttasktypeid in
[code]....
Explain plan output:
Plan hash value: 2779236890
-----------------------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows | Bytes | Cost (%CPU)| Time      | Pstart | Pstop |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |    1 |    29 |  9633M (8) | 999:59:59 |        |       |
|*  1 |  FILTER               |      |      |       |            |           |        |       |
|   2 |   PARTITION RANGE ALL |      | 738M |   19G |  6321K (1) | 21:04:17  |      1 |  6821 |
[code]....
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (<not feasible>)
4 - filter("PRODUCTID"=:B1)
5 - filter(ROWNUM<100)
12 - access("MODELID"=:B1)
[code]....
Note: - SQL profile "SYS_SQLPROF_014153616b850002" used for this statement
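The FILTER at step 1 means the IN subquery is being re-executed as a correlated EXISTS for each of the 738M outer rows, which is where the huge cost comes from. One sketch of a rewrite (an assumption, not a tested fix for this system) is to turn the subquery into a distinct inline view and join to it, so the optimizer can hash-join once instead of probing per row:

select p.serialnumber
  from product p
  join (select distinct productid
          from producttask
         where startedtimestamp > to_date('2013-07-04 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
           and startedtimestamp < to_date('2013-07-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
           /* ... plus the same producttasktypeid filter as the original ... */
       ) t
    on t.productid = p.productid;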
View 2 Replies
View Related
May 23, 2012
select rl.org_rollup_skey from (select fc.org_skey as "FC_ORG_SKEY" from IA.HIST_FCT_FCST_SLS fc
inner join IA.DIM_ORG do
on fc.org_skey = do.org_skey
where do.org_nam IN ('101', '485','486')) p
INNER JOIN IA.DIM_ORG_HIER h
ON p.fc_org_skey = h.desc_org_skey
inner join IA.FCT_FCST_SLS_ORG_ROLLUP rl
on h.GPRNT_ORG_SKEY = rl.org_rollup_skey
The join above runs forever, even though the subquery
(select fc.org_skey as "FC_ORG_SKEY" from IA.HIST_FCT_FCST_SLS fc
inner join IA.DIM_ORG do
on fc.org_skey = do.org_skey
where do.org_nam IN ('101', '485','486'))
returns no rows, and the subquery on its own returns in about 10 seconds. In my view the full query should not take more than 20 seconds.
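One sketch worth trying (the hint and structure are an assumption, not a verified fix) is to factor the empty subquery out and ask the optimizer to drive from it, so the two joins see zero driving rows and finish immediately:

with p as (
  select fc.org_skey as fc_org_skey
    from IA.HIST_FCT_FCST_SLS fc
    join IA.DIM_ORG do on fc.org_skey = do.org_skey
   where do.org_nam in ('101', '485', '486')
)
select /*+ LEADING(p h rl) */ rl.org_rollup_skey
  from p
  join IA.DIM_ORG_HIER h             on p.fc_org_skey = h.desc_org_skey
  join IA.FCT_FCST_SLS_ORG_ROLLUP rl on h.gprnt_org_skey = rl.org_rollup_skey;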
View 1 Replies
View Related
Jan 12, 2011
How do I enable tracing for sessions connected through shared servers?
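With shared servers a session hops between server processes, so its SQL ends up scattered across several trace files. The usual pattern (a sketch using DBMS_MONITOR; 'trace_me' is a placeholder identifier) is to tag the sessions, trace by that tag, and then stitch the files back together with trcsess:

-- In the client session you want traced:
EXEC DBMS_SESSION.SET_IDENTIFIER('trace_me');

-- From a DBA session: enable and later disable tracing for that client identifier
EXEC DBMS_MONITOR.CLIENT_ID_TRACE_ENABLE(client_id => 'trace_me', waits => TRUE, binds => FALSE);
EXEC DBMS_MONITOR.CLIENT_ID_TRACE_DISABLE(client_id => 'trace_me');

-- At the OS level, merge the shared-server trace files into one:
-- trcsess output=combined.trc clientid=trace_me *.trc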
View 1 Replies
View Related
Nov 17, 2011
I am getting the temporary tablespace error "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP" for the following code.
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no,
tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1, vendor_data vd2, vendor_data vd3,
(SELECT rownumber, MAX (DECODE (control_column_seq_no, 91150, original_value, NULL)) AS value1,
[code]...
The tables used currently have the following row counts:
SELECT COUNT(*) FROM vendor_data --292890442
SELECT COUNT(*) FROM temp_vendor_data --0
SELECT COUNT(*) FROM temp_vendor_record --0
This query is part of an application, but it is consuming too much temporary tablespace (68 GB allocated). I found this out using the query below:
select * from v$session a, v$sql b
where a.sql_id=b.sql_id
and status = 'ACTIVE'
I am not sure why this problem is occurring.
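To see which sessions and statements actually hold the temporary space while the error builds up, a sketch against V$TEMPSEG_USAGE (the 8K block size in the size calculation is an assumption):

SELECT s.sid, s.serial#, s.sql_id, u.tablespace, u.segtype,
       ROUND(u.blocks * 8192 / 1024 / 1024) AS temp_mb
  FROM v$tempseg_usage u
  JOIN v$session s ON s.saddr = u.session_addr
 ORDER BY u.blocks DESC;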
View 5 Replies
View Related
Feb 19, 2012
How do I find a particular SQL statement, or set of SQL statements, executed against a given (user-identified) table, either those executed very frequently against that table or those with a high impact on it? I am currently looking through AWR reports to go through all the queries, but I was wondering whether there are any dictionary views where this information can be found.
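Both the cursor cache and AWR keep the plan lines, so SQL that touches a given table can be searched by object name (a sketch; MYUSER and MY_TABLE are placeholders):

-- SQL currently in the shared pool that references the table
SELECT DISTINCT p.sql_id, s.executions, s.elapsed_time
  FROM v$sql_plan p
  JOIN v$sqlarea  s ON s.sql_id = p.sql_id
 WHERE p.object_owner = 'MYUSER'
   AND p.object_name  = 'MY_TABLE';

-- Historical SQL captured by AWR that referenced the table
SELECT DISTINCT hp.sql_id
  FROM dba_hist_sql_plan hp
 WHERE hp.object_owner = 'MYUSER'
   AND hp.object_name  = 'MY_TABLE';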
View 2 Replies
View Related
Jul 31, 2013
I have a datamart database whose volume is continuously increasing. I run a daily optimize job to have the data analysed, using the commands below:
EXECUTE dbms_stats.gather_schema_stats(ownname=> 'xxx' ,estimate_percent=> 25 , DEGREE => 4, cascade=> TRUE );
analyze table xxx.abc compute statistics;
But this optimization itself takes nearly 4 hours to complete, and I can't afford that delay.
Is there a better way of running this optimization?
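One sketch that usually shortens the window (assuming table monitoring is on, which is the default from 10g): drop the separate ANALYZE, since DBMS_STATS supersedes it, let Oracle choose the sample size, and only gather objects whose statistics have gone stale:

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'XXX',
    options          => 'GATHER STALE',               -- only objects whose stats are stale
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- typically faster than a fixed 25%
    degree           => 4,
    cascade          => TRUE);
END;
/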
View 9 Replies
View Related
Oct 25, 2011
We have a program that takes about 13-14 hours to run, and we need to generate traces to see where the time goes. I usually use event 10046 for tracing; I'm wondering whether the traces can be built incrementally so that the output doesn't become one huge trace file.
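One sketch (an approach, not a guaranteed fit for this job): rather than tracing all 13-14 hours, switch 10046-level tracing on and off in windows from a second session with DBMS_MONITOR, and cap how large any one trace file can grow:

-- From a second session, trace only the window you want to sample
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => :sid, serial_num => :serial, waits => TRUE, binds => FALSE);
-- ... let the program run through the interesting phase ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => :sid, serial_num => :serial);

-- Optionally cap the size of any single trace file
ALTER SYSTEM SET max_dump_file_size = '500M';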
View 14 Replies
View Related
Nov 13, 2012
The scale of the tests that generate the following scenario is not huge right now, only 50 simulated users (you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) all run against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of I/O (5 GB per request), making requests queue for disk availability (the disks are set up as RAID 0/1); it's noticeable even for as few as 3 threads.
On one occasion I tuned the SQL to execute in 13 seconds for a single user, but this appears to have been short-lived: when statistics were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, that increasing the PGA/SGA (currently 4 GB/6 GB respectively) will only delay the queuing for a bit, and that compression won't help either. In short, it looks as though we've already hit hardware restrictions for this particular scenario.
As I can't really explain why my tuned query no longer takes 13 seconds, it's niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising these types of queries against such wide tables, in other words possibilities that we haven't considered.
Attached is the query and plan.
View 9 Replies
View Related
Sep 26, 2013
How can I check whether paging is happening while the query runs? I have a 4 GB PGA target, but the query runs in parallel with a hash join and is taking a long time.
How do I check for paging in 11gR2?
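At the SQL level, "paging" shows up as work areas (hash joins, sorts) that no longer fit in the PGA and spill to TEMP as one-pass or multi-pass operations. A sketch to check this while the query is running:

-- Active work areas: anything with NUMBER_PASSES > 0 or a TEMPSEG_SIZE is spilling to disk
SELECT sid, operation_type, policy,
       ROUND(actual_mem_used / 1024 / 1024) AS mem_mb,
       ROUND(tempseg_size / 1024 / 1024)    AS temp_mb,
       number_passes
  FROM v$sql_workarea_active
 ORDER BY tempseg_size DESC NULLS LAST;

-- Instance-wide counts of optimal / one-pass / multi-pass executions
SELECT name, value
  FROM v$pgastat
 WHERE name LIKE 'workarea executions%';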
View 3 Replies
View Related
May 2, 2012
How can we check the completion status of a running SQL statement, i.e. what percentage has completed?
SQL> begin
2 delete from gsmcrmdw.wc_loy_txn_f_aa
3 where integration_id in
4 ( select integration_id
5 from support_olap.recover_wc_loy_txn_f_953to955
6 );
7 commit;
8 end;
9 /
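Oracle does not expose an exact percentage for DML, but the long-running pieces of the plan (full scans, sorts, hash joins) do report progress, so a sketch is to watch V$SESSION_LONGOPS for the deleting session:

SELECT sid, serial#, opname, target, sofar, totalwork,
       ROUND(sofar / NULLIF(totalwork, 0) * 100, 1) AS pct_done,
       time_remaining
  FROM v$session_longops
 WHERE sofar <> totalwork
 ORDER BY time_remaining DESC;

The undo generated so far (USED_UREC in V$TRANSACTION) is another rough proxy for how many rows the delete has processed.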
View 35 Replies
View Related
Aug 23, 2010
We were trying to insert approximately 76 million records, about 4 GB in total, into a table when the insert operation failed with an archive log error.
The database hung and we restarted it. There are currently no records in the table, but when we run a select
select count(*)
from table
it takes approximately 2 minutes to return the result.
We checked and found that there are no locks on the table currently.
What do we need to do to get the performance back?
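The likely cause is that the failed bulk insert still raised the table's high-water mark, and COUNT(*) has to scan every block below it even though the table is empty. A sketch for resetting it (assuming the table can be taken offline briefly; the table name is a placeholder):

-- If the table really holds no rows, TRUNCATE resets the high-water mark directly
TRUNCATE TABLE my_schema.my_table;

-- Otherwise shrink or move the segment and rebuild any indexes left UNUSABLE
ALTER TABLE my_schema.my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_schema.my_table SHRINK SPACE;   -- requires an ASSM tablespace
-- or: ALTER TABLE my_schema.my_table MOVE;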
View 5 Replies
View Related
Nov 23, 2010
I have one query in production that is consuming a lot of CPU time; while that statement is executing, CPU usage rises above 90%.
I am attaching the sql query and indexes on the table.
View 4 Replies
View Related
Sep 26, 2012
An UPDATEXML call is eating up a lot of time. Is there any way to tune this?
SELECT UPDATEXML(:B3 , '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' || :B2 ||
'"]', :B1 )
FROM
DUAL
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 499 0.44 0.90 0 3 0 0
Fetch 499 1.49 2.87 0 0 0 499
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 999 1.93 3.77 0 3 0 499
The real code:
SELECT updatexml(l_xml,
'/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' ||
upper(replace(cspkes_misc.fn_getparam(p_parent_list,
l_parent_list_clob,
'Y',
l_cnt,
'>'),
'-',
'_')) || '"]/text()',
l_fn_str)
INTO l_xml
FROM dual;
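The tkprof output shows 499 separate executions of a single-pair UPDATEXML, so the document is reparsed and rewritten once per field. Since UPDATEXML accepts several XPath/value pairs in one call, one sketch (the field names here are hypothetical) is to batch the replacements and touch the document once per group:

SELECT UPDATEXML(l_xml,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD_A"]/text()', l_val_a,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD_B"]/text()', l_val_b,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD_C"]/text()', l_val_c)
  INTO l_xml
  FROM dual;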
View 4 Replies
View Related
Apr 12, 2013
How can I check the average time taken by each execution plan? I have a very big query that changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time the query takes when it runs with each of the different execution plans.
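AWR keeps per-plan execution statistics, so the average elapsed time per execution for each plan the statement has used can be pulled from DBA_HIST_SQLSTAT (a sketch; :sql_id is the statement in question):

SELECT st.plan_hash_value,
       SUM(st.executions_delta)                                      AS execs,
       ROUND(SUM(st.elapsed_time_delta) / 1e6, 1)                    AS elapsed_sec,
       ROUND(SUM(st.elapsed_time_delta) / 1e6 /
             NULLIF(SUM(st.executions_delta), 0), 2)                 AS avg_sec_per_exec
  FROM dba_hist_sqlstat st
 WHERE st.sql_id = :sql_id
 GROUP BY st.plan_hash_value
 ORDER BY avg_sec_per_exec;

Once the best PLAN_HASH_VALUE is known, a SQL plan baseline (DBMS_SPM, 11g onwards) is the supported way to pin it.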
View 7 Replies
View Related
Dec 12, 2011
Up to Statspack we had:
elapsed time = time spent on waits + time the CPU was used
Total time during snaps = elapsed time + (possibly) time waited for CPU. In AWR, is it possible to draw up such an equation? I can see that the AWR report has the following elements:
1) End Snap time - Begin Snap time
2) DB time - as mentioned at the top of AWR report
3) DB CPU - in "Top 5 Timed Foreground Events" (I assume this is the 'CPU used by session' timing in Statspack)
4) Total of time for all Statistics in "Time Model Statistics"
5) BUSY_TIME + IDLE_TIME - "Operating System Statistics"
Is it the time between the two snapshots, or something else? Also, by how many seconds should 'DB Time(s)' per second and 'DB CPU(s)' per second in the Load Profile be multiplied to get the DB time and CPU time?
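The Load Profile's 'DB Time(s) per second' and 'DB CPU(s) per second' are the DB time and DB CPU accumulated between the two snapshots divided by the wall-clock seconds of the snapshot interval, so multiplying them back by that interval recovers the totals. The raw deltas can also be read directly from the time-model history (a sketch, assuming no instance restart between the snapshots):

-- Values in DBA_HIST_SYS_TIME_MODEL are cumulative microseconds since instance startup
SELECT stat_name,
       ROUND((MAX(value) - MIN(value)) / 1e6, 1) AS seconds_between_snaps
  FROM dba_hist_sys_time_model
 WHERE snap_id IN (:begin_snap, :end_snap)
   AND stat_name IN ('DB time', 'DB CPU')
 GROUP BY stat_name;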
View 2 Replies
View Related
Feb 13, 2013
For example, we have a table ACCOUNT (a snowflake dimension containing other dimension keys) and many fact tables based on this dimension. Normally a data warehouse load works like this: first the dimensions are loaded, then the facts. Our load frequency is 30 minutes.
To make data available in the facts sooner (it's a financial application), I am considering splitting this into two batches, one for the dimension and another for the fact (I came to this conclusion because there is no hard dependency requiring dimensions to load before the fact), accepting that an update might occasionally be missed. But if I do that, while the dimension is being loaded it will be read by the fact load in another session. Will this affect performance?
In other words: loading (insert/update) and selecting data from a table at the same time, will it affect performance in any way?
View 1 Replies
View Related
Jul 9, 2012
I understand that when data is read from disk, I/O is done, and when computations are done, CPU is used. Where, then, does the following equation fit?
DB Time = sum of database CPU time + waits
Is I/O considered part of CPU time?
Does this equation change with a SAN or with OS caching?
View 3 Replies
View Related
Feb 2, 2013
Elapsed Time (s) | CPU Time (s) | Executions | Elap per Exec (s) | % Total DB Time | SQL Id | SQL Module | SQL Text
3,263 | 32 | 1 | 3263.49 | 2.79 | 3ta0ms19fvgds | httpd.exe | SELECT CASE WHEN COUNT >= 1...
6,360 | 164 | 2 | 3180.17 | 5.44 | 51jx99dm0swv7 | cpm_srvscript@ahcaxasmil1b (TNS V1-V3) | SELECT /*+ CCL<PFT_GET_RP...
In the AWR report I see two statements that are out of the ordinary, and I want to make sure that I interpret them correctly.
1) "Elap per Exec (s)" shows 3263.49 with 1 "Executions".
2) "Elap per Exec (s)" shows 3180.17 each execution with 2 "Executions".
Does this mean that the first statement ran for ~54 minutes (3263.49 s / 60) in its single execution, and the second for ~53 minutes (3180.17 s / 60) per execution? I need to understand what "Elapsed Time (s), CPU Time (s), Executions, Elap per Exec (s), % Total DB Time" represent.
View 9 Replies
View Related
Jun 2, 2011
We have a person running a query, and the following is the explain plan:
explain plan for
select distinct(extractvalue(xmltype(a.email_variables), '/CalliopeData/Attributes/HOTEL_BRAND')) as ThisBrand
from hh.t_ecomm_mem_relations a
where extractvalue(xmltype(a.email_variables), '/CalliopeData/Attributes/HOTEL_BRAND') not in (select b.code_brand from hh.t_pr_brand b)
and a.code_corr_ecat = 'PREA'
and a.status = 'S'
and a.audit_time > sysdate - 1
;
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1904775187
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 32018 | 25 (4)| 00:00:01 | | |
| 1 | HASH UNIQUE | | 14 | 32018 | 25 (4)| 00:00:01 | | |
|* 2 | FILTER | | | | | | | |
| 3 | PARTITION RANGE ITERATOR | | 14 | 32018 | 17 (0)| 00:00:01 | KEY | 13 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| T_ECOMM_MEM_RELATIONS | 14 | 32018 | 17 (0)| 00:00:01 | KEY | 13 |
|* 5 | INDEX RANGE SCAN | X_ECOMM_MEM_RELATIONS3 | 15 | | 3 (0)| 00:00:01 | KEY | 13 |
|* 6 | INDEX FULL SCAN | I_PR_BRAND | 1 | 3 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "HH"."T_PR_BRAND" "B" WHERE
LNNVL("B"."CODE_BRAND"<>EXTRACTVALUE("XMLTYPE"."XMLTYPE"(:B1),'/CalliopeData/Attributes/HOTEL_BRAND'))))
4 - filter("A"."STATUS"='S')
5 - access("A"."AUDIT_TIME">SYSDATE@!-1 AND "A"."CODE_CORR_ECAT"='PREA')
filter("A"."CODE_CORR_ECAT"='PREA')
6 - filter(LNNVL("B"."CODE_BRAND"<>EXTRACTVALUE("XMLTYPE"."XMLTYPE"(:B1),'/CalliopeData/Attributes/HOTEL_BRAND')))
=========================
I tried NOT EXISTS and some anti-join hints in the subquery that is used in the NOT IN filter. I tried MINUS too.
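The LNNVL in the predicate section shows the NOT IN is still being run as a correlated filter, with the XML re-parsed for every outer row. One sketch (assuming CODE_BRAND is never NULL, so NOT IN and NOT EXISTS return the same rows) is to extract the brand once per row in an inline view and anti-join on the extracted value:

select distinct x.this_brand
  from (select extractvalue(xmltype(a.email_variables),
                            '/CalliopeData/Attributes/HOTEL_BRAND') as this_brand
          from hh.t_ecomm_mem_relations a
         where a.code_corr_ecat = 'PREA'
           and a.status = 'S'
           and a.audit_time > sysdate - 1) x
 where not exists (select null
                     from hh.t_pr_brand b
                    where b.code_brand = x.this_brand);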
View 15 Replies
View Related
Oct 14, 2010
I'm trying to write a query that counts how many sessions are active during a 1 second time interval, then returns the maximum number of sessions active during any time interval, and all the time intervals that hit that max.
Here's a sample of the inner query results:
"INTERVAL_VALUE""SESSIONS"
"13:14:47" 13
"13:14:52" 13
"13:14:54" 13
"13:19:05" 4
"13:19:28" 4
[code]....
The max(sessions) is 13, so what I want the final output to be is:
"INTERVAL_VALUE""SESSIONS"
"13:14:47" 13
"13:14:52" 13
"13:14:54" 13
Here is the create sql for the test data:
CREATE TABLE "SESSION_TABLE"
(
"SESSIONKEY" NUMBER,
"SESSION_START_TIME" TIMESTAMP,
"SESSION_END_TIME" TIMESTAMP,
CONSTRAINT "PK_SESSIONKEY" PRIMARY KEY ("SESSIONKEY")
);
[code]....
Here is my query that works:
SELECT
maxval.interval_value,
allval.sessions,
licenselimit
FROM
(SELECT
[code]....
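For comparison, the same result can usually be produced in a single pass over the inner query with an analytic MAX, which avoids joining the aggregate back (a sketch; the innermost view is a placeholder for the poster's interval query):

SELECT interval_value, sessions
  FROM (SELECT interval_value,
               sessions,
               MAX(sessions) OVER () AS max_sessions
          FROM ( /* inner query producing interval_value, sessions */
                 SELECT interval_value, sessions FROM my_intervals
               ))
 WHERE sessions = max_sessions
 ORDER BY interval_value;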
View 2 Replies
View Related
Sep 4, 2011
If a table (with a primary key) is empty (after a truncate), DML (insert, update) against it is very quick, but if the table has many rows, around 10,000,000, the DML is very slow. Why?
View 6 Replies
View Related
Apr 27, 2012
I have a table which contains 821,177 rows in total. Now I am trying to delete around 484,000 rows from this table using just one filter, i.e. my query is something like the one below.
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This will delete 484,000 rows of data, but my current issue is that the delete is taking a very long time; to be precise, almost 25 hours. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532
| Id  | Operation        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | DELETE STATEMENT |          |   499 | 20459 |    39   (0)| 00:00:01 |
|   1 |  DELETE          | RESOURCE |       |       |            |          |
[code]....
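The plan's 499-row estimate against an actual 484,000 rows also suggests the statistics on CREATED_BY are stale. Beyond that, when well over half the table is being removed, a copy-and-swap is usually far cheaper than a parallel DELETE (a sketch, assuming a maintenance window and that indexes, constraints and grants are recreated afterwards):

-- Keep the ~337,000 surviving rows, then swap the tables over
CREATE TABLE resource_keep NOLOGGING PARALLEL 4 AS
  SELECT * FROM resource
   WHERE created_by <> 'MIGN';   -- add OR created_by IS NULL if the column is nullable

-- after validating the copy:
-- RENAME resource TO resource_old;
-- RENAME resource_keep TO resource;
-- then recreate indexes, constraints and grants on the new table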
View 26 Replies
View Related
Jul 11, 2013
The problem can be described as follows:
- First execution: all indexes on both tables are used.
- Second execution: only the indexes on the first table are used; the other table gets a full table scan.
- Third execution: full table scans on both tables.
Now I show the objects and related information here:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
COUNT(*)
----------
272708
system@dbwap> select count(*) from my_wap.news_content;
COUNT(*)
----------
95092
system@dbwap> desc my_wap.news_content;
Name Null? Type
----------------------------------------------------- -------- ----------------
ID NOT NULL NUMBER(11)
SUBJECT NOT NULL VARCHAR2(500)
TITLE VARCHAR2(4000)
STATE NUMBER(1)
IMGPATH VARCHAR2(500)
ALIGN VARCHAR2(10)
[Code]....
View 7 Replies
View Related
Apr 15, 2011
The Test1 table has around 385,772,300 rows. The delete and select statements below are taking a lot of time.
The select statement takes more than 1 hour.
SELECT TO_NUMBER(MAX(f.T3))
--INTO v_FISCAL_MONTH_ID
FROM Test1 f;
The delete statement takes more than 2 hours.
DELETE FROM TEST1 WHERE TRUNC(T10) < TRUNC(ADD_MONTHS(SYSDATE,-36));
CREATE TABLE Test1
(
[Code].....
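One thing worth checking: wrapping the column in TRUNC disables any ordinary index on T10. The predicate is equivalent to a plain range comparison against the midnight cutoff, which at least gives the optimizer the option of an index range scan (a sketch; for a recurring 36-month purge on a table this size, range partitioning on T10 and dropping old partitions is the longer-term answer):

DELETE FROM test1
 WHERE t10 < TRUNC(ADD_MONTHS(SYSDATE, -36));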
View 4 Replies
View Related
Sep 11, 2013
Yesterday there was a performance issue on the server. Today, when generating a report for that particular period, I found that the snapshot IDs are sequential but an hourly snapshot was skipped: instead of being generated at 15:30, the next snapshot was generated at 16:30.
Enter value for num_days: 2
Listing the last 2 days of Completed Snapshots
Snap
Instance DB Name Snap Id Snap Started Level
------------ ------------ --------- ------------------ -----
tagidev TAGIDEV 2857 10 Sep 2013 00:30 1
2858 10 Sep 2013 01:30 1
2859 10 Sep 2013 02:30 1
2860 10 Sep 2013 03:30 1
2861 10 Sep 2013 04:30 1
2862 10 Sep 2013 05:31 1
[code]....
Below are the details at alert log -
Tue Sep 10 14:28:20 2013
Thread 1 cannot allocate new log, sequence 7029
Checkpoint not complete
Current log# 2 seq# 7028 mem# 0: E:APPORACLEORADATATAGIDEVREDO02.LOG
Thread 1 advanced to log sequence 7029 (LGWR switch)
[code]....
1) Why didn't a snapshot start at 15:30?
2) The database had only just started at the scheduled AWR snapshot time, yet the snapshot was generated at 16:32 instead of 16:30, even though the last service, "SMCO", started at 16:42. How was the snap ID generated for this particular time?
3) What does "kewastUnPackStats(): bad magic 1 (0x000000001B3CE48D, 0)" mean?
View 7 Replies
View Related
Aug 17, 2010
Is there any way to reduce index creation time? I have one table with 7,700,000 records. Every day this table is truncated and recreated with a CREATE TABLE AS SELECT statement, and then we create its 4 indexes. Each index takes 5 minutes, so in total index creation takes 20 minutes.
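Index builds scale well with parallelism and skip most redo when built NOLOGGING (a sketch; index and column names are placeholders, and NOLOGGING assumes you are prepared to rebuild the indexes after media recovery rather than recover them):

CREATE INDEX my_idx1 ON my_table (col1)
  PARALLEL 8 NOLOGGING;

-- return the index to its normal settings so later queries are not surprised by parallel plans
ALTER INDEX my_idx1 NOPARALLEL LOGGING;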
View 2 Replies
View Related
May 5, 2011
Why is there a different elapsed time in Oracle 9i and Oracle 10g for the same query?
View 2 Replies
View Related