Performance Tuning :: Alternate Query Instead Of Querying Table Twice?
Nov 11, 2012
The Item data for individual cycles is as below.
Item_tbl

Item  Rundate    Stddate    Status
P1    03-Nov-12  03-Nov-12  A
P1    04-Nov-12  04-Nov-12  D
P2    04-Nov-12  03-Nov-12  A
The requirement is that I have to get the details of the previous active cycle (status A) for an item that became disabled (status D) on the input date.
In the above case, since item P1 has status D on cycle date 04-Nov-12, I have to consider its previous active cycle, which is 03-Nov-12. Based on that std date, the data is queried from another table to get all the items. Item P2 should not be considered in this case.
Below is the code I have written, which takes the rundate as an input parameter.
-- To get the Items disabled for Input date
with Itemdisabled as
(
select item,stddate maxcycledate
from Item_tbl
where rundate = stddate
[code]....
With this approach I'm querying Item_tbl twice: once to get the disabled items and once to get the previous active cycle.
Is there any way to query the table only once and get the required results, for example using the LAG/LEAD analytic functions?
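A minimal sketch of the single-pass idea, assuming the column names from the sample data above and an :input_date bind in place of the rundate parameter (this is only an illustration, not the thread's answer):

-- Carry each item's previous cycle status/std date forward with LAG, then
-- keep the rows where a 'D' cycle on the input date follows an 'A' cycle.
with cycles as (
  select item,
         rundate,
         stddate,
         status,
         lag(status)  over (partition by item order by rundate) as prev_status,
         lag(stddate) over (partition by item order by rundate) as prev_stddate
  from   item_tbl
)
select item,
       prev_stddate as maxcycledate
from   cycles
where  rundate     = :input_date
and    status      = 'D'
and    prev_status = 'A';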
View 5 Replies
Jan 27, 2012
When we use AUTOTRACE / EXPLAIN PLAN we can see the (estimated) best execution plan the optimizer found for our SQL command. Is there a way to display all the alternate execution plans the optimizer considered?
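One hedged sketch of an approach, assuming access to the session's trace files: the optimizer trace (event 10053) dumps the join orders and costings the CBO worked through during a hard parse, which is the closest thing to seeing the alternatives it considered; it is a raw trace file, not a formatted list of plans. The statement below is only a placeholder.

alter session set events '10053 trace name context forever, level 1';
select * from emp where deptno = 10;   -- statement of interest (placeholder)
alter session set events '10053 trace name context off';
-- The trace file is written to the session's trace directory (user_dump_dest
-- or diagnostic_dest, depending on version).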
View 1 Replies
View Related
Apr 24, 2012
I am inserting data into a global temporary table and then using a 'parallel' hint to query from this temporary table. I remember reading that queries on the temporary table may not run in parallel because the parallel sessions may not be able to see the data in the temporary table.
However, the execution plan as well as v$px_session and v$sql indicate that the query on the temporary table did in fact run in parallel.
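For context, a minimal sketch of the setup being described; the GTT definition is assumed, and only the parallel query itself appears in the plan output that follows.

create global temporary table dbo_gtt
( id        number,
  object_id number )
on commit preserve rows;

insert into dbo_gtt
select rownum, object_id from all_objects;
commit;  -- PRESERVE ROWS keeps the rows visible to this session after commit

select /*+ gather_plan_statistics parallel(t,4) */ *
from   dbo_gtt t
order  by id, object_id;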
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------
SQL_ID 7d68g52g0mskz, child number 0
-------------------------------------
select /*+ gather_plan_statistics parallel(t,4) */ * from dbo_gtt t order by id,object_id
Plan hash value: 5815349
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |Used-Tmp|
--------------------------------------------------------------------------------------------------------------------------
| 1 | PX COORDINATOR | | 1 | | 99999 |00:00:01.46 | 3 | | | | |
| 2 | PX SEND QC (ORDER) | :TQ10001 | 0 | 1 | 0 |00:00:00.01 | 0 | | | | |
| 3 | SORT ORDER BY | | 0 | 1 | 0 |00:00:00.01 | 0 | 11M| 1311K| 424K (0)| |
| 4 | PX RECEIVE | | 0 | 1 | 0 |00:00:00.01 | 0 | | | | |
| 5 | PX SEND RANGE | :TQ10000 | 0 | 1 | 0 |00:00:00.01 | 0 | | | | |
| 6 | PX BLOCK ITERATOR | | 0 | 1 | 0 |00:00:00.01 | 0 | | | | |
|* 7 | TABLE ACCESS FULL| DBO_GTT | 0 | 1 | 0 |00:00:00.01 | 0 | | | | |
--------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
7 - access(:Z>=:Z AND :Z<=:Z)
select px_servers_executions from v$sql where sql_text like 'select%dbo_gtt t%';
PX_SERVERS_EXECUTIONS
---------------------
0
4
select sid, qcsid, server#, degree from v$px_session where qcsid = 228;
SID QCSID SERVER# DEGREE
---------- ---------- ---------- ----------
247 228 1 4
196 228 2 4
224 228 3 4
234 228 4 4
226 228 1 4
252 228 2 4
212 228 3 4
229 228 4 4
228 228
9 rows selected.
Unfortunately I do not have access to generate a trace (tkprof) report. What must have happened during the execution?
View 3 Replies
View Related
Feb 10, 2012
We have a table with a huge amount of data that is skewed on a 'status' column. The 'status' column has 6 distinct values, with one particular value accounting for 80-85% of the records.
In the batch process we query the data by status and process the retrieved records. My senior is insisting on partitioning, which I don't see as feasible considering the cost implications for just one part of the functionality.
There are 6 statuses, 'A','B','C','D','E','F', with 'A' accounting for 80% of the records and 'B' to 'F' each accounting for roughly 2% to 14% of the records in the table. I am considering the following options:
1) Create a conditional index on status (using CASE) that contains the records of all statuses except 'A', then use an IF-ELSE structure (a sketch of such an index follows option 3 below):
IF input parameter is 'A'
select /*+ FULL Parallel(t) */ * from t where status='A';
ELSE
Select /*+ INDEX (t conditional_index) */ * from t where status in ('B','C');
END IF;
I want to create the conditional index here for 2 reasons:
1] Since it will only hold values for statuses other than 'A', there is no chance the index will be picked when status='A' is queried, which would make performance worse (status 'A' accounts for 80% of the records); the IF-ELSE is additional protection.
2] Less impact on DML, as the index will not cover the status='A' rows, which make up the large chunk of records.
2) Populate a dummy table containing rowid and status. Since the business closes at 21:00 and the batch process starts at 21:30, refresh the dummy table every day between those times using MERGE (to catch the business transactions made during the day). Then, during the batch process, retrieve records from the main table using the rowids in the dummy table, depending on the input status value.
3) Create an index on status, make sure hard-coded status values are used in the database procedures, gather stats with histograms, and leave it to the optimizer to choose the best possible path.
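For option 1, a hedged sketch of the kind of 'conditional' (function-based) index being described, with the table and column names assumed from the pseudo-code above; rows whose CASE expression evaluates to NULL simply get no index entry.

-- Only statuses other than 'A' are stored, so the index stays small and
-- cannot be chosen for a status='A' query.
create index conditional_index
  on t (case when status <> 'A' then status end);

-- For the index to be usable, the ELSE branch has to filter on the same
-- expression (or on a virtual column defined with it), e.g.:
--   where (case when status <> 'A' then status end) in ('B','C')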
View 3 Replies
View Related
Jun 24, 2011
The below query is taking a long time:
select gam.SOL_ID,COUNT(gam.FORACID) from gam,smt where
gam.ACID=smt.ACID and gam.ACID NOT IN(select ACID from imt) and
gam.SCHM_TYPE in('SBA','CCA','CAA','ODA') and GAM.ACCT_CLS_FLG='N' and
gam.SOL_ID IN(select SOL_ID from IMT) group by gam.SOL_ID
/
attached is the explain plan.
The index on the IMT table is not used, and the query is doing a full table scan (FTS) on the IMT table. What needs to be done to avoid the FTS on the IMT table?
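Not from the thread, but a commonly suggested rewrite, sketched here under the assumption that IMT.ACID contains no NULLs (NOT IN and NOT EXISTS differ when NULLs are present): expressing the subqueries as NOT EXISTS / EXISTS lets the optimizer drive anti- and semi-joins that can use indexes on IMT(ACID) and IMT(SOL_ID) if they exist.

select gam.sol_id, count(gam.foracid)
from   gam, smt
where  gam.acid = smt.acid
and    not exists (select 1 from imt where imt.acid = gam.acid)     -- anti-join
and    gam.schm_type in ('SBA','CCA','CAA','ODA')
and    gam.acct_cls_flg = 'N'
and    exists (select 1 from imt where imt.sol_id = gam.sol_id)     -- semi-join
group  by gam.sol_id;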
View 10 Replies
View Related
Oct 9, 2013
I have a table with 129 million records.
A simple select count(*) on the table takes more than a minute in SQL Developer.
The table structure is as below; the primary key is a sequence, and there are 3 foreign keys and one non-unique index on the date column.
<Table_Name>
column1 NOT NULL NUMBER ( Primary Key)
column2 NOT NULL NUMBER ( FK1)
[Code].....
View 1 Replies
View Related
Sep 23, 2010
When I run a script that does a select from a single table (the table has 33,521,868 records), the query executes in about 0.094 seconds. When I use the exact same query to insert into a temporary table, it takes 10 minutes or more.
What should I be doing to speed up this process? I have also tried using hints, and they do not speed up the insert.
View 3 Replies
View Related
Feb 19, 2012
How do I find a particular SQL statement, or a set of SQL statements, executed against a user-identified table that is either very frequently executed against that table or has a high impact on it? I am currently looking through the AWR reports to go through all the queries, but I was wondering if there are any dictionary views where we can find this information.
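A hedged sketch of the dictionary-view route (cursor cache only; the owner and table name are placeholders): V$SQL_PLAN records which objects each cached cursor touches, so it can be joined back to V$SQL to rank the statements hitting a given table.

select s.sql_id,
       s.executions,
       s.buffer_gets,
       s.elapsed_time
from   v$sql s
where  s.sql_id in ( select p.sql_id
                     from   v$sql_plan p
                     where  p.object_owner = 'MY_SCHEMA'
                     and    p.object_name  = 'MY_TABLE' )
order  by s.buffer_gets desc;

For history beyond what is still cached, the AWR views DBA_HIST_SQL_PLAN and DBA_HIST_SQLSTAT can be joined the same way, licensing permitting.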
View 2 Replies
View Related
Jul 11, 2013
The below query is degrading the performance of the database. As we know, without a WHERE clause a query does a full table scan. The query is written to generate the sequence number.
SQL> explain plan for
2 SELECT NVL(MAX(P.NUM_SERIAL_NO), 0) + 1 FROM CNFGTR_IRDA_ENVELOPE_DTLS P
3 /
Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3345343365
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
[code].....
No index is created on the column.
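Two hedged sketches of the usual fixes, with the table and column names taken from the statement above and the index/sequence names invented for illustration:

-- 1) With an index on the column, MAX() can be answered by an
--    INDEX FULL SCAN (MIN/MAX) instead of a full table scan.
create index cnfgtr_irda_env_dtls_srl_ix
  on cnfgtr_irda_envelope_dtls (num_serial_no);

-- 2) Or generate the number from a sequence instead of MAX()+1, which also
--    avoids the concurrency problems of MAX()+1 (in practice the sequence
--    would be created to start above the current maximum value).
create sequence cnfgtr_irda_env_dtls_seq start with 1;
select cnfgtr_irda_env_dtls_seq.nextval from dual;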
View 6 Replies
View Related
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed over many extents, the scan process takes a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed I will re-create, move, or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
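A hedged sketch of one rough check, which relies on reasonably fresh optimizer statistics and is only a heuristic (the schema name is a placeholder): compare the space each table segment actually occupies with a rough estimate of the space its rows need; a large gap is a candidate for a closer look with DBMS_SPACE or a segment advisor run.

select t.table_name,
       round(s.bytes / 1024 / 1024)                      as allocated_mb,
       round(t.num_rows * t.avg_row_len / 1024 / 1024)   as estimated_data_mb
from   dba_tables   t
join   dba_segments s
  on   s.owner        = t.owner
 and   s.segment_name = t.table_name
 and   s.segment_type = 'TABLE'
where  t.owner = 'MY_SCHEMA'
order  by s.bytes desc;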
View 2 Replies
View Related
Aug 9, 2010
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment and, once the desired execution plan is achieved, adjust the 'statistics' so that the plan is followed without hints.
Q1. If it is true what statistics do we adjust for influencing the execution plan and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the join taking place is a nested loop.
Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table then how can I adjust the statistics to achieve that?
Q 3. If I want to use a hash join instead of a nested loop join, then how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
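On the mechanics, a hedged sketch of what "adjusting statistics" usually refers to: the DBMS_STATS SET_TABLE_STATS / SET_COLUMN_STATS / SET_INDEX_STATS procedures overwrite the stored figures (row counts, block counts, distinct values) that the optimizer costs with, so making one table look much larger or smaller can change the join order and join method it picks. The exact values that flip a given plan depend on the data; the numbers below are purely illustrative.

begin
  -- Make DEPT look tiny and EMP look big, nudging the optimizer toward
  -- driving from DEPT; the figures are illustrative only.
  dbms_stats.set_table_stats(ownname => user, tabname => 'DEPT',
                             numrows => 4, numblks => 1);
  dbms_stats.set_table_stats(ownname => user, tabname => 'EMP',
                             numrows => 1000000, numblks => 20000);
end;
/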
View 6 Replies
View Related
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which takes 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
View 39 Replies
View Related
Jun 21, 2011
I use Oracle 11g R2.
I have created a table for objects:
create table table_t
(Record_no number,
t_no number,
position SDO_GEOMETRY
, occupation_time number);
and inserted many values (example below):
INSERT INTO tra VALUES
(2,
20503,
SDO_GEOMETRY
(2001,
NULL,
SDO_POINT_TYPE (1387, 0, NULL),
NULL,
NULL),
23037
)
/
and (position) is indexed with an R-tree spatial index.
Now when I run a spatial query such as
SELECT * FROM tra t WHERE
SDO_FILTER(t.position, MDSYS.SDO_GEOMETRY(2001,NULL,NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,3),
MDSYS.SDO_ORDINATE_ARRAY(0,0,9000,0)), 'querytype=WINDOW') = 'TRUE' and t.position.sdo_point.X=1;
I do not know how much I/O occurred.
I tried SET AUTOTRACE ON,
but the physical reads are 0, which seems impossible because I have more than 100,000 objects there, all indexed with the R-tree.
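A hedged sketch of how to see the work actually done: physical reads of 0 usually just mean every block was already in the buffer cache, so the logical reads (consistent gets) are the figure to look at; SET AUTOTRACE ON also prints them under "consistent gets".

-- I/O statistics recorded for the current session (standard statistic names).
select sn.name, ms.value
from   v$mystat   ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name in ('session logical reads', 'consistent gets', 'physical reads');

Running this before and after the spatial SELECT and taking the difference gives the I/O for that one statement.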
View 2 Replies
View Related
Dec 14, 2010
In SQL, almost everything that is possible with a join is also possible with a subquery, and vice versa.
So when should I use a subquery and when should I go for a join?
View 9 Replies
View Related
Jan 13, 2009
Is there any way to tune the following query, which is using a lot of CPU?
select description, time_stamp, user_id
from bhi_tracking
where description like 'Multilateral:%'
The explain plan for this query is:
---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 178K| 6609K| 129K|
| 1 | TABLE ACCESS FULL| BHI_TRACKING | 178K| 6609K| 129K|
----------------------------------------------------------------
Bhi_tracking is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then insert the rows into it, to enhance performance?
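A hedged sketch of the index option (the index name is invented, and an index may not help if all 178K rows genuinely have to be returned): a LIKE with a constant leading prefix such as 'Multilateral:%' can be satisfied by an ordinary b-tree index on the column as a range scan.

create index bhi_tracking_descr_ix on bhi_tracking (description);

-- The optimizer can then turn the predicate into a bounded range scan,
-- roughly description >= 'Multilateral:' and description < the next prefix.
select description, time_stamp, user_id
from   bhi_tracking
where  description like 'Multilateral:%';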
View 14 Replies
View Related
Dec 26, 2011
I am executing the query below:
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
vendor_data vd2,
vendor_data vd3,
(SELECT rownumber,
[code]....
It takes different approaches (execution plans) when executing with the same set of parameters, due to which it sometimes executes successfully but sometimes fills all the TEMP space and fails. I am pasting both execution plans (different from the explain plan) below:
I. Successful Execution Plan:
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 65612 (100)| | | |
|* 1 | HASH JOIN | | 1 | 6121 | 65612 (1)| 00:13:08 | | |
[code]....
II. Failed with TEMP space Execution Plan:
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 1967 (100)| | | |
|* 1 | FILTER | | | | | | | |
| 2 | SORT GROUP BY | | 1 | 8233 | 1967 (3)| 00:00:24 | | |
|* 3 | HASH JOIN | | 1 | 8233 | 1966 (3)| 00:00:24 | | |
[code]....
View 8 Replies
View Related
Jun 25, 2010
We are facing performance issues on our production instance: 10g (10.2.0.4), 32-bit OEL Linux 5.3, 2GB SGA. The performance problem is mainly related to one table, which is about 32GB in size. We have rebuilt the indexes as well, but the problem still persists. We are considering pinning in the shared pool the SQL statements that hit this table frequently. But as far as we have found, we can only pin procedures or functions in the shared pool. True or false? If we can, how do we pin a SQL statement in the shared pool? If we cannot, is there any other way?
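A hedged sketch of what pinning a statement usually means: DBMS_SHARED_POOL.KEEP accepts cursors as well as PL/SQL objects, identified by 'address,hash_value' from V$SQLAREA with the 'C' flag (the package may need to be installed from dbmspool.sql and needs appropriate privileges). Note that pinning only stops the cursor being aged out of the shared pool; it does not by itself make the SQL run faster.

-- Find the cursor, then keep it.
select address, hash_value, sql_text
from   v$sqlarea
where  sql_text like 'SELECT ...%';      -- placeholder for the statement text

begin
  dbms_shared_pool.keep('00000000862A5AD0,316363988', 'C');  -- example values
end;
/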
View 3 Replies
View Related
Nov 27, 2012
A change (running a statistics collection package) was made, and now a query is running impossibly slowly, disabling the use of an application function.
This is caused by a SQL statement whose sql_id is known to me: 4zty77fkf907j. When I run
select * from GV$SQL_PLAN_STATISTICS where SQL_ID = '4zty77fkf907j'
I get that the plan hash value is 3405747833.
I ran SELECT Q.PARSING_SCHEMA_NAME
,Q.SQL_ID,ROUND(Q.ELAPSED_TIME / Q.EXECUTIONS)/1000000 AS AVG_USECS
,Q.CHILD_NUMBER
,Q.HASH_VALUE
,Q.PLAN_HASH_VALUE
,'''' || REPLACE(Q.SQL_FULLTEXT
[code]....
and saw that the result was 2 rows (for the 2 RAC nodes), with a figure of hundreds of seconds of execution time, meaning a bad plan.
when I run
select * from dba_hist_sql_plan where sql_id='4zty77fkf907j'
I get a plan hash value of 3390684693. I assume this might be the "good plan".
For a start, how can I hint my query to use that "good plan" before I use stored outlines to implement it?
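A hedged sketch of one way to force the good plan, using SQL plan baselines (DBMS_SPM) rather than the stored outlines mentioned, and assuming an 11g database with the good plan still present in the cursor cache; if the plan only exists in AWR it first has to be loaded into a SQL tuning set and from there into a baseline.

declare
  n pls_integer;
begin
  -- Capture the known-good plan for this sql_id as an accepted baseline.
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => '4zty77fkf907j',
         plan_hash_value => 3390684693);
  dbms_output.put_line('plans loaded: ' || n);
end;
/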
View 2 Replies
View Related
Mar 21, 2013
I have the following query which takes some minutes to execute. I want to tune it so that it executes faster.
Query :
SELECT a.CHDR_EXCH_CD ,TMHR_EXCH_TM_CD,'S' Sec_type,
round(SUM (Decode(csdt_Depo_Typ,'I',(Cal_Scheme_Rate(csdt_rsm_code,TO_DATE(:P_DT_FR,'DD-MM-RR'),csdt_stsc_cd,csdt_scp_qty)*csdt_scp_qty)-
(Cal_Scheme_Rate(csdt_rsm_code,TO_DATE(:P_DT_FR,'DD-MM-
[Code]...
Explain Plan Result :
Plan
SELECT STATEMENT ALL_ROWS
Cost: 1,669 Bytes: 67 Cardinality: 1
15 HASH GROUP BY
Bytes: 67 Cardinality: 1
14 CONCATENATION
[Code].....
Looking at the result, step 4 in the explain plan shows a TABLE ACCESS FULL. I want an index to be used there; how do I do this?
The table MG_COLL_SCP_DTL has an index like this:
CREATE UNIQUE INDEX CSDT_PK ON MG_COLL_SCP_DTL
(CSDT_CHDR_TRANS_NUM, CSDT_PROD_TYP, CSDT_TRAN_SR_NO, CSDT_CHDR_CDTL_COLL_TYP, CSDT_CHDR_CDTL_COLL_TYP_CD,
CSDT_STSC_CD, CSDT_CHDR_CLNT_CD, CSDT_CHDR_CLNT_TM_CD)
[Code]....
How can I reduce the cost?
View 4 Replies
View Related
Aug 17, 2012
I am facing a weird situation wherein the explain plan of the same SQL in SIT and PROD is different. In fact the explain plan is very costly in Prod, even though the DB version of SIT and PROD is the same.
Below is the sql and corresponding explain plan in Prod and SIT respectively.
Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl)Nxt_comp_item_type,lvl,bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'
[code]...
The tables referenced in the above query are small, containing around 10k records. They are partitioned on region and not indexed.
Explain Plan in Prod: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240
[code]...
Explain Plan in SIT: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120
[code]...
I am not able to explain why there is such a huge change in cost between SIT and Prod. The job now runs for 3-5 hours, whereas it used to complete within 20 minutes in SIT.
View 5 Replies
View Related
Feb 21, 2013
I have a query that is optimized with respect to its indexes, and it runs immediately when the result is only a few records, in both SQL Server and Oracle. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of type VARCHAR(250) and some VARCHAR(2000); I understand this may be part of the problem, but it cannot be the whole story, because for similar results (20,000 records) SQL Server runs in 2 seconds while Oracle takes around 30 seconds to return the full data. The problem really is in fetching all of these fields, since the same query returning only a numeric field completes in 2 seconds. I have tested through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or an issue with data flow over the network; I would like to think there is some setting that accounts for this much difference between Oracle and SQL Server. The databases are Oracle 10g and SQL Server 2008.
View 1 Replies
View Related
Dec 8, 2010
There is a table in the database with millions of records and a query --- Select rowid, ANI, DNIS, message from tbl_sms_talkies where rownum <= :"SYS_B_0" --- that is using high CPU and also has a high number of executions.
View 10 Replies
View Related
May 5, 2011
Why is the elapsed time different in Oracle 9i and Oracle 10g for the same query?
View 2 Replies
View Related
Jun 1, 2012
I don't have any DBA privileges. Is there a script which can tell how many blocks my query is fetching, with or without indexes? How do I also get the buffer hits, and how can I get I/O figures without SQL trace, since I don't have access to the dump destination?
I have a below query
SELECT DISTINCT ser_id AS STA_ser_id, rct_name AS STA_name
FROM sd_servicecalls, rep_codes, rep_codes_text
WHERE ser_sta_oid = rcd_oid
AND rcd_oid = rct_rcd_oid
AND rct_name IN ('New', 'Awaiting Approval', 'Approved', 'In Progress', 'Awaiting Supplier', 'Awaiting RFC', 'Awaiting Release', 'Pending Release', 'On Hold', 'Resolved', 'Implemented', 'Closed');
Does a large hash value in the explain plan mean more resources are needed and more time to execute the query? How can I use ADDM for the above SQL?
View 7 Replies
View Related
May 18, 2012
I am working with the following select clause:
select distinct S.ID ID
from
ods.hso_Scheduled H,
ods.SO_SCHEDULED S
where
S.insertion_date >= to_date('01-DEC-2011') and S.insertion_date < to_date('01-FEB-2012')
and H.ID=S.ID
Of the two tables involved, HSO_SCHEDULED has 15 million records and SO_SCHEDULED has 7 million records.
I have created the following indexes on these tables:
Indexes on SO_SCHEDULED:
Index name            Columns
SS_IDX1               ID, SO_SUB_ITEM__ID
SS_IDX2               INSERTION_DATE
SS_IDX3               ID, INSERTION_DATE
SS_IDX4               ID, SO_SUB_ITEM__ID, INSERTION_DATE
SO_SCHEDULED_ID_PK    ID
Indexes on HSO_SCHEDULED:
HSS_IDX1              ID, SO_SUB_ITEM__ID, LAST_UPDATING_DATE
HSS_IDX2              ID, LAST_UPDATING_DATE
HSS_IDX3              ID
My problem is that despite the relevant indexes being present, my query is not hitting them, and hence the performance is very bad.
Explain Plan:
Execution Plan
----------------------------------------------------------
Plan hash value: 574170360
--------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 814K| 38M| | 9574 (1)| 00:02:15 | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | 814K| 38M| | 9574 (1)| 00:02:15 | Q1,02 | P->S | QC (RAND) |
| 3 | HASH UNIQUE | | 814K| 38M| 185M| 9574 (1)| 00:02:15 | Q1,02 | PCWP | |
|* 4 | HASH JOIN | | 2653K| 124M| | 9564 (1)| 00:02:14 | Q1,02 | PCWP | |
| 5 | PX JOIN FILTER CREATE| :BF0000 | 814K| 22M| | 3903 (1)| 00:00:55 | Q1,02 | PCWP | |
| 6 | PX RECEIVE | | 814K| 22M| | 3903 (1)| 00:00:55 | Q1,02 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | 814K| 22M| | 3903 (1)| 00:00:55 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | 814K| 22M| | 3903 (1)| 00:00:55 | Q1,00 | PCWC | |
|* 9 | TABLE ACCESS FULL| SO_SCHEDULED | 814K| 22M| | 3903 (1)| 00:00:55 | Q1,00 | PCWP | |
| 10 | PX RECEIVE | | 14M| 272M| | 5654 (1)| 00:01:20 | Q1,02 | PCWP | |
| 11 | PX SEND HASH | :TQ10001 | 14M| 272M| | 5654 (1)| 00:01:20 | Q1,01 | P->P | HASH |
| 12 | PX JOIN FILTER USE | :BF0000 | 14M| 272M| | 5654 (1)| 00:01:20 | Q1,01 | PCWP | |
| 13 | PX BLOCK ITERATOR | | 14M| 272M| | 5654 (1)| 00:01:20 | Q1,01 | PCWC | |
| 14 | TABLE ACCESS FULL| HSO_SCHEDULED | 14M| 272M| | 5654 (1)| 00:01:20 | Q1,01 | PCWP | |
--------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("H"."ID"="S"."ID")
9 - filter("S"."INSERTION_DATE">=TO_DATE(' 2011-12-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"S"."INSERTION_DATE"<TO_DATE(' 2012-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
View 6 Replies
View Related
Apr 27, 2012
I have a query (report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scans more than 2,000 million records in the second schema, but only 440 million in the first schema, which is why the first is fast. I expect the same to happen in the second schema.
I have verified the following:
All records in the tables in the 2 schemas are the same.
All indexes are the same.
The tables have been analyzed.
Histograms have been gathered on all the columns as per the first schema.
But I still have the same problem, and I don't know what the cause could be.
Table_name  Num_Rows  Blocks
PRPSL_LST_T5866107159
PRPSL_WKFLW_ACTVTY_T5829904030
ITEM_CHR_VAL_T5134340104049020
ITEM_RGN_ASSN_T8571220137215
Also attached are 2 screenshots of the OEM plans.
View 2 Replies
View Related
Aug 10, 2012
The below query takes more than 30 minutes to return data. All the objects used are views; there is no direct reference to any table. The views with _mnth_ in the name have data for 7 distinct months. The base tables for all the views have a composite PK on the columns AR_ID (or ACCT_AR_ID) and MSRMNT_PRD_ID.
I need the ORDER BY, as the query is part of Informatica code and the ordering is used in further processing.
SELECT ac.ar_id AS acct_ar_id, m.msrmnt_prd_dt AS msrmnt_prd_dt
--removed the rest of column list to reduce size of code.
FROM edxf.ar_rsrv_mnth_v ac,
edxf.crdt_acct_mnth_v c,
edxf.crdt_acct_v ca,
(SELECT msrmnt_prd_id, msrmnt_prd_dt
FROM edxf.msrmnt_prd_v
WHERE msrmnt_prd_id =
[code]....
Also, the count of data in the views is as below.
View  Total count  Count for 1 msrmnt_prd_id
---------------------------------------------------------
ar_rsrv_mnth_v1841892281945
crdt_acct_mnth_v664941457087369
crdt_acct_v12258728NA
View 7 Replies
View Related
Aug 10, 2011
When I try to use NVL in one condition it takes a lot of time to execute, but when I remove the NVL function the query executes in 2 minutes. The condition is given below:
(HOI2.ORG_INFORMATION1)=nvl(TO_CHAR(:p_set_of_books_id) , HOI2.ORG_INFORMATION1)
But when I use the condition as below, the query executes in 2 minutes:
(HOI2.ORG_INFORMATION1)=TO_CHAR(:p_set_of_books_id)
My query is given below:
(SELECT cust.customer_number cust_no, cust.customer_name customer,
cnv.item_no, SUM(wd.shipped_quantity) shp_qty_nos,
0 rtn_qty_nos,
ROUND(SUM(cnv.cnf * wd.shipped_quantity), 3) shp_qty_tons,
0 rtn_qty_tons, 0 net_shp_qty_nos, 0 net_shp_qty_tons
[code]...
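Not from the thread, but a hedged sketch of the usual workaround for the optional-bind NVL pattern, with a generic table name and select list as placeholders: split the two cases with UNION ALL so that the branch with the plain equality can use its index, instead of one plan having to cover both the 'bind supplied' and 'bind null' cases.

select t.*
from   some_table t
where  :p_set_of_books_id is not null
and    t.org_information1 = to_char(:p_set_of_books_id)
union all
select t.*
from   some_table t
where  :p_set_of_books_id is null;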
View 30 Replies
View Related
Sep 14, 2012
I have a query which takes 5 minutes when run through the Java app, which uses Hibernate. I've cut and pasted the SQL directly from the Hibernate trace file and run it in SQL*Plus/SQL Developer, and it runs instantly (0.01 seconds); it uses the index and the explain plan looks good (see below). I don't know how to get the execution plan when it's running through the app, or why it should be any different anyway, as the query is identical.
My query is as follows:
SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */ SPD.*
FROM SEQ_ADDR_DATA SPD, SEQ_ADDR_LEVELS SPL
WHERE SPD.SPVR_ID = '10'
AND SPL.SPLE_ID = SPD.SPLE_ID
AND SPL.SPLE_LEVEL <= '2' AND SPDA_ID NOT IN
[code]....
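A hedged sketch of how to see the plan the application's execution actually used (it needs SELECT privilege on the V$ views): find the statement's sql_id in V$SQL while or shortly after the app runs it, then pull that cursor's real plan with DBMS_XPLAN.DISPLAY_CURSOR; comparing it with the SQL*Plus plan often exposes a bind-type or bind-peeking difference.

-- Locate the cursor the application created.
select sql_id, child_number, plan_hash_value, executions, elapsed_time
from   v$sql
where  sql_text like 'SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */%';

-- Show the plan of that exact cursor (substitute the sql_id found above).
select * from table(dbms_xplan.display_cursor('&sql_id', null));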
View 7 Replies
View Related
Sep 1, 2010
For a query, the cost was 16 lakhs and it was taking 30 minutes; I brought the cost down to 1.5 lakhs, but it still takes 30 minutes.
There were many outer joins, and the same table was used (in the FROM clause) 5 times in the query. I introduced a WITH clause and brought down the cost.
View 7 Replies
View Related