Performance Tuning :: Index 15 Columns When You Cannot Predict Which Columns Will Be Used In The Where Clause?
Apr 4, 2011
I am running a fairly busy Oracle 10gR2 database. One of the tables has about 120 columns and receives on average 1,500 inserts per second. The table is partitioned, and the partitioning is based on the more important of its two timestamp columns. There are two timestamps; they hold different times.
Out of these 120 columns, about 15 need to be indexed. Two of the 15 are the timestamps, and at least one of these two timestamp columns is always in the where clause of the queries.
Now the challenge is that the queries we run can have any combination of the 13 other columns plus one timestamp. In reality the queries never have more than 7 or 8 columns in the where clause, but even with only 4 columns in the where clause we would still have the same problem.
So if I create one concatenated index over all these columns it will not be very efficient, because after the 4th or 5th column the ordering is no longer very useful and I believe the optimiser would simply not use the rest of the index. Queries that use the leading columns of the index in sequence work well, but if I need to query the 10th column then I have performance issues.
Now, if I create multiple single-column indexes, Oracle will have to work a lot harder to maintain all these indexes and it will create performance issues (I have tried that). Besides, if I have multiple single-column indexes the optimiser will do nested loops two or three times and will hit only the first few columns of the where clause, so I think it will end up much the same as the long concatenated index.
What I am trying to do is exactly what a bitmap index would do; it would be very good if I could use the AND combination that a bitmap index allows. This way I could have any number of single-column indexes from which the optimiser could pick exactly the ones each query needs. But unfortunately using bitmap indexes here is not an option, given the large number of inserts this table receives.
I have been looking for alternatives. I have considered creating multiple shorter concatenated indexes, but this still would not address the issue, since many queries would still not be served properly and would therefore take a very long time to complete.
What I had in mind would be some sort of multidimensional index; I am not even sure such a thing exists. But essentially it would be an index that could serve a query efficiently regardless of whether the where clause has the 1st, 3rd and last columns of the index.
So considering how widely used Oracle is and how many super large databases there are out there, this problem must be common.
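For reference, a rough sketch of the B-tree equivalent of the bitmap AND I am after (the table and column names below are made up): Oracle can combine several single-column B-tree indexes at query time through BITMAP CONVERSION (FROM ROWIDS), and the INDEX_COMBINE hint asks the optimiser to try it, so only the indexes matching the actual where clause get used. Whether the maintenance cost of that many indexes is bearable at 1,500 inserts per second is exactly the open question.
-- Hypothetical names; one single-column B-tree index per filterable column.
CREATE INDEX ix_big_ts1 ON big_table (ts1) LOCAL;
CREATE INDEX ix_big_c03 ON big_table (col03) LOCAL;
CREATE INDEX ix_big_c07 ON big_table (col07) LOCAL;
-- INDEX_COMBINE lets the CBO AND the row sources together via
-- BITMAP CONVERSION (FROM ROWIDS), picking only the indexes this query needs.
SELECT /*+ INDEX_COMBINE(t) */ *
FROM big_table t
WHERE t.ts1 >= :ts_from AND t.ts1 < :ts_to
AND t.col03 = :v3
AND t.col07 = :v7;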
View 12 Replies
Mar 5, 2012
I have two tables like the ones below:
CREATE TABLE prod_vendor_record (
vendor_record_seq_no NUMBER NOT NULL,
study_seq_no NUMBER NOT NULL,
vendor_subject_seq_no NUMBER NULL,
control_dataset_seq_no NUMBER NOT NULL,
checksum NUMBER NOT NULL,
processing_flag VARCHAR2(1) NULL,
[code]....
and I am executing the below query on those tables:
insert into prod_temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
select vr.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U' from
prod_vendor_record vr, prod_temp_vendor_record_20000 tvr
where vr.study_seq_no=25707
and vr.control_dataset_seq_no=3910
and vr.key_hash=tvr.key_hash
and dbms_lob.compare(vr.key_col_val, tvr.key_col_val) = 0
and tvr.error_flag is null;
Let me know which columns of the PROD_VENDOR_RECORD table to index to make processing faster. I tried to build an index like below:
CREATE INDEX idx_prod_vendor_record
ON prod_vendor_record (
study_seq_no,
control_dataset_seq_no,
key_hash
)
/
But it is not being used by the above query (see execution plan):
--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 5168 (100)| | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | 212 | 445K| 5168 (1)| 00:01:03 | Q1,02 | P->S | QC (RAND) |
[code]....
View 15 Replies
View Related
Dec 14, 2010
I have been reading the Oracle documentation about access paths and got stuck at the concept of 'leading columns in an index'. What is meant by the 'leading columns of an index', and how do I identify which columns they are?
The URL where I am reading is [URL]...
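As a concrete illustration of the term (the emp table and its index below are made up):
-- The column listed first in the index definition is the leading column.
CREATE INDEX emp_dept_job_ix ON emp (deptno, job, hire_date);
-- Predicates on the leading column(s), left to right, can drive an index range scan:
SELECT * FROM emp WHERE deptno = 10 AND job = 'CLERK';   -- uses the leading edge
SELECT * FROM emp WHERE job = 'CLERK';                   -- cannot; at best a skip scan
-- Column order (and hence the leading column) of existing indexes:
SELECT index_name, column_position, column_name
FROM user_ind_columns
WHERE table_name = 'EMP'
ORDER BY index_name, column_position;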
View 4 Replies
View Related
Mar 30, 2006
We have Oracle 8i on Windows NT (production server running 24x7).
On doing an explain plan on long-running queries, I realized a couple of columns that were used in the WHERE condition did not have any indexes.
More importantly, here we do not have a Test environment.
I would like to know if I can go ahead and create indexes for those 2 columns by issuing
CREATE INDEX index_name on table_name(Column_name);
on our PDB.
Would this have any impact on our PDB? Should I check anything else before building these indexes, or do I have to create them during PDB downtime?
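What I have in mind, as a sketch (the table and column names are placeholders, and ONLINE needs Enterprise Edition, which we would have to confirm on 8i):
-- Plain CREATE INDEX holds a share lock on the table for the whole build and
-- blocks DML, so on a 24x7 box the ONLINE clause is the safer option:
CREATE INDEX orders_status_ix ON orders (status) ONLINE;
-- Even so, the new index adds overhead to every future insert/update/delete on
-- the table, so the explain plan should be rechecked to confirm it is actually used.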
View 4 Replies
View Related
Jan 28, 2011
I came across a situation where a nullable column is not using an index for the 'order by' clause. I added a NOT NULL condition in the 'where' clause, but it wasn't useful. I didn't want to make a composite index with a non-nullable column or with a constant, or modify the column to NOT NULL.
So I carried out test cases, during which I found that in one case the SQL statement does a 'fast full scan' for data access but does not use the index for 'order by' sorting.
Here are the steps.
Initially I kept the column nullable.
SQL> create sequence s5;
Sequence created.
SQL> create table t5 as select s5.nextval id,a.* from dba_objects a where rownum<1001;
Table created.
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
SQL> create index i5 on t5(id);
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'T5',cascade=>true);
PL/SQL procedure successfully completed.
exit
SQL> alter session set events '10046 trace name context forever, level 12';
select *
from
t5 where id is not null order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 16 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.01 0.00 0 16 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 SORT ORDER BY (cr=16 pr=0 pw=0 time=4771 us)
1000 TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1157 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 49.49 49.72
********************************************************************************
select /*+ index(t i5) */ *
from
t5 t where id is not null order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 150 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.00 0.00 0 150 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=5167 us)
1000 INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3141 us)(object id 4673065)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 69 0.00 0.00
SQL*Net message from client 69 22.89 28.04
Now I modified the 'id' column to Not Null
SQL> alter table t5 modify id not null;
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
COLUMN_NAME N
------------------------------ -
ID N
OWNER Y
OBJECT_NAME Y
SUBOBJECT_NAME Y
OBJECT_ID Y
DATA_OBJECT_ID Y
OBJECT_TYPE Y
CREATED Y
LAST_DDL_TIME Y
TIMESTAMP Y
STATUS Y
TEMPORARY Y
GENERATED Y
SECONDARY Y
14 rows selected.
select *
from
t5 order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 29 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 16 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.01 0.01 0 45 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 SORT ORDER BY (cr=16 pr=0 pw=0 time=2398 us)
1000 TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1152 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 37.74 37.91
********************************************************************************
select /*+ index(t i5) */ *
from
t5 t order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 150 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.00 0.00 0 150 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=4166 us)
1000 INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3142 us)(object id 4673065)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 8.28 8.45
select id
from
t5 order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 6 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.00 0.00 0 6 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 SORT ORDER BY (cr=6 pr=0 pw=0 time=1342 us)
1000 INDEX FAST FULL SCAN I5 (cr=6 pr=0 pw=0 time=1093 us)(object id 4673065)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 1.88 1.89
Questions are
1) Why wasn't adding 'where id is not null' enough for the index to get used for the 'order by'?
2) While we got a 'fast full scan', why wasn't the index used for the 'order by' clause?
3) Do we need the indexed column in the where clause for it to be used in the 'order by' clause too?
4) Do we need the 'order by' clause if we are selecting only the indexed column with sequence-generated values?
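One observation of my own, as a hedged sketch rather than an answer: under ALL_ROWS the optimiser costs the whole result set, and 16 buffer gets plus an in-memory sort is simply cheaper than the ~150 gets of the hinted INDEX FULL SCAN, so the full scan plus SORT ORDER BY wins. Asking for rows back early changes that trade-off:
select /*+ first_rows(10) */ *
from t5
where id is not null
order by id;
-- With a FIRST_ROWS goal the CBO is more willing to walk the index in key order
-- and skip the blocking SORT ORDER BY, at the cost of more total logical I/O.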
View 5 Replies
View Related
Mar 28, 2012
What could be the strategy for deciding which columns to create partitions on? I understand that to decide this, we first need to know the columns we use in the WHERE clause.
Consider the following scenario, and assume that the emp table is very large.
(1)
Query - select * from emp where empno=<pk_value>
What could the partitioning column be here?
This is confusing, as we access with quite selective criteria here but we access a lot of data. There is no particular date range, and no particular flag or value to check against! Would a hash partition on the pk_column be useful here?
(2)
select * from emp where empno=<pk_value> and deptno=<some value>
What could the partitioning column be here? I assume deptno here. Right?
In general, what could be the considerations in deciding the partitioning columns? Whether the column is not a unique key column, or the column is preferable if it is used in joins, or the column is not updateable?
Finally, will pruning take place if the query spans multiple partitions, though not all partitions?
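As a sketch of option (1), hash partitioning on the primary key still prunes to a single partition for an equality on empno (this is an illustration, not a recommendation):
CREATE TABLE emp_part (
  empno  NUMBER NOT NULL,
  deptno NUMBER,
  ename  VARCHAR2(30)
)
PARTITION BY HASH (empno) PARTITIONS 16;
-- "where empno = :pk" hashes to exactly one partition, so the plan shows
-- PARTITION HASH SINGLE; a query without empno must touch all 16 partitions.
As far as I understand, pruning also works when a range or list query hits several but not all partitions; the plan then shows PARTITION RANGE ITERATOR with the matching Pstart/Pstop values instead of ALL.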
View 21 Replies
View Related
Feb 14, 2013
We have a table emp_details with 23,772,889 records. Our requirement is to increase the size of a few of the columns in emp_details. We are using the below alter statement, which is taking around 2 hours.
ALTER TABLE emp_details
MODIFY
(
address char(90)
,department char(30)
)
/
Is there any way to improve the above query performance?
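One thing I am considering, hedged: the slowness is typical of widening CHAR columns, because every one of the ~23.7 million rows has to be re-blank-padded to the new length. If the columns could live as VARCHAR2 instead, widening becomes a data-dictionary-only change and is near-instant (this is only a sketch; moving away from CHAR changes trailing-blank semantics and would need testing with the application):
ALTER TABLE emp_details MODIFY
(
 address    VARCHAR2(90)
,department VARCHAR2(30)
)
/
-- Widening a VARCHAR2 column only updates the dictionary; no rows are rewritten.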
View 2 Replies
View Related
Sep 25, 2013
DB Used : Oracle 10g.
A table X : NUM, INST are column names
NUM ----- INST
1234 ----- 23,22,21,78
2235 ----- 20,7,2,1
1298 ----- 23,22,21,65,98
9087 ----- 20,7,2,1
-- Based upon the requirement:
1) Split the values from the "INST" column: suppose 23.
2) Find all values from the "NUM" column for the above split value, i.e. 23.
Eg:
For INST 23,
its corresponding "NUM" values are: 1234, 1298
3) Save these values into
A table Y : INST, NUM are column names.
INST NUM
23 1234,1298
1) I have a thousand records in Table X, and for all of those records I need to split and save data into Table Y. Hence, I need to do this task with the best possible performance.
2) After this, whenever new data comes into Table X, the above 'split & save' operation should automatically be called and append the corresponding data wherever possible, as sketched below.
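A sketch of how I imagine it (it assumes no INST list has more than 50 entries, and that LISTAGG is available, i.e. 11gR2; on 10g a SYS_CONNECT_BY_PATH trick or a custom aggregate would be needed to rebuild the comma-separated list):
INSERT INTO y (inst, num)
SELECT inst_value,
       LISTAGG(num, ',') WITHIN GROUP (ORDER BY num)   -- 11gR2+; see note above for 10g
FROM (
      -- split every comma-separated INST value into one row per value
      SELECT t.num,
             TO_NUMBER(REGEXP_SUBSTR(t.inst, '[^,]+', 1, c.n)) AS inst_value
      FROM   x t,
             (SELECT LEVEL n FROM dual CONNECT BY LEVEL <= 50) c
      WHERE  REGEXP_SUBSTR(t.inst, '[^,]+', 1, c.n) IS NOT NULL
     )
GROUP BY inst_value;
For requirement 2, an AFTER INSERT trigger on X (or a scheduled job working off the newly arrived rows) could run the same split-and-merge logic.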
View 4 Replies
View Related
Feb 6, 2011
I am trying to update columns of Table A with the columns of Table B. Both these tables have 60,000 rows each. I tried this operation using the following 2 queries:
Query 1
Update TableA A
set
(A.col1,A.col2,A.col3)=(select B.col1,B.col2,B.col3
from TableB B
where A.CODE=B.CODE)
Query 2
Update TableA A
set
(A.col1,A.col2,A.col3)=(select B.col1,B.col2,B.col3
from TableB B
where A.CODE=B.CODE)
where exists
(select B.code
from TableB B
where A.code=B.code)
When I execute these two queries, they keep executing indefinitely.
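A hedged alternative I would try (it assumes CODE is unique in TableB, and that both CODE columns are indexed; without an index the correlated form above scans TableB once per row of TableA): a single MERGE instead of the row-by-row subquery.
MERGE INTO TableA a
USING TableB b
ON (a.code = b.code)
WHEN MATCHED THEN UPDATE SET
  a.col1 = b.col1,
  a.col2 = b.col2,
  a.col3 = b.col3;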
View 4 Replies
View Related
Jun 19, 2013
select * from nrc_trans_descr where type_id_nrc=60013 -- the table has 18 columns and I have hard-coded 60013 for simplification here. 60013 is derived from 3 other tables. The output is below (it can have many rows too; typically for each type_id_nrc there is one row).
TYPE_ID_NRC TRIGGER_STATUS INSTALLMENT_TYPE_ID_NRC
---------------------------------------------------------------------
60013 0 61013
I have to pass TYPE_ID_NRC and INSTALLMENT_TYPE_ID_NRC to the restriction_id column in a different table. Currently I am doing it like this:
select * FROM DISCOUNT_RESTRICTIONS WHERE discount_id in (12085,12086)
and (restricted_id in ( select type_id_nrc from nrc_trans_descr where type_id_nrc=60013)
or restricted_id in ( select installment_type_id_nrc from nrc_trans_descr where type_id_nrc=60013));
I am using Oracle 10gR2 (a solution for 11gR2 is welcome too).
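One possible rewrite (same tables and columns as above): it reads nrc_trans_descr only once and compares restricted_id against both of its columns in a single join condition. It works on both 10gR2 and 11gR2, but if a type_id_nrc can return several rows a DISTINCT would be needed to avoid duplicates.
SELECT dr.*
FROM   discount_restrictions dr,
       nrc_trans_descr nd
WHERE  dr.discount_id IN (12085, 12086)
AND    nd.type_id_nrc = 60013
AND    dr.restricted_id IN (nd.type_id_nrc, nd.installment_type_id_nrc);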
View 16 Replies
View Related
Mar 18, 2011
I want to suppress the dummy columns from the UNION clause of the query.
SELECT
'HEADER' AS record_type
,null C2
,null C3
,null C4
,null C5
FROM dual
UNION ALL
SELECT empid
[code]....
Now I need to remove all the dummy fields associated with the "HEADER" and "TRAILER" records.
View 2 Replies
View Related
Jun 28, 2011
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly, and the explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
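If the queries also filter on the partition key, one hedged option is a LOCAL index (the names below are placeholders), which is equipartitioned with the table and lets the plan show real Pstart/Pstop values. Without a predicate on the partition key every index partition still has to be probed, and the global index may remain the better choice.
CREATE INDEX big_tab_c1234_lix
  ON big_tab (col1, col2, col3, col4)
  LOCAL PARALLEL 4 NOLOGGING;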
View 6 Replies
View Related
May 9, 2012
this is my test data :
create table tab_1(c1 number,c2 number,c3 number);
create table tab_2(c1 number,c2 number,c3 number,c4 number,c5 number,c6 number);
insert into tab_1 values(1,1,1);
insert into tab_1 values(2,2,2);
insert into tab_2 values(1,1,1,3,3,3);
insert into tab_2 values(2,2,2,4,4,4);
The final result would be: update columns c1,c2,c3 of table tab_1 with the values of columns c4,c5,c6 from tab_2 where tab_1(c1,c2,c3) exists in tab_2(c1,c2,c3).
My first approach was:
merge into tab_1
using (select c1,c2,c3,c4,c5,c6 from tab_2) tab_2
on (tab_1.c1=tab_2.c1 and
tab_1.c2=tab_2.c2 and
tab_1.c3=tab_2.c3
)
[code]....
but it gets error ORA-01779. So I tried this instead:
create table temp_table as
select tab1.c1,
tab1.c2,
tab1.c3,
tab2.c4 new_1,
[code]....
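A workaround I have seen for the restriction on updating the join columns is to key the MERGE on rowid instead (a sketch against the test tables above):
MERGE INTO tab_1 t
USING (SELECT t1.rowid AS rid, t2.c4, t2.c5, t2.c6
       FROM   tab_1 t1, tab_2 t2
       WHERE  t1.c1 = t2.c1
       AND    t1.c2 = t2.c2
       AND    t1.c3 = t2.c3) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN UPDATE SET
  t.c1 = s.c4,
  t.c2 = s.c5,
  t.c3 = s.c6;
-- c1/c2/c3 no longer appear in the ON clause, so updating them is allowed.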
View 1 Replies
View Related
Nov 19, 2010
I have a query which had a join:
a.c1=b.c1 and a.c2=@var
where @var is user-supplied input at runtime... We had an index on a.c2. The CBO would use this index to generate an optimised query plan. We found some records from table "b" were being dropped due to the inner join, so we made a change to the join. It is now like:
a.c1(+)=b.c1 and nvl(a.c2,@var)=@var
This query is no longer using the index; instead it is doing a full table scan, causing the query to slow down. I have tried creating an index on nvl(a.c2,'31-dec-9999'),
but the CBO won't use it. Is there any way to create an index on this column so that the full table scan can be avoided?
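One hedged rewrite to test: the function-based index can never match here, because the indexed expression nvl(a.c2,'31-dec-9999') is not the same expression as nvl(a.c2,@var). Expressing the logic with OR keeps the plain index on a.c2 usable for the equality branch through OR expansion (the IS NULL branch may still need its own access path, e.g. a full scan or an index that adds a constant second column):
select b.*, a.*
from b, a
where a.c1(+) = b.c1
and (a.c2 = :var or a.c2 is null);   -- logically the same as nvl(a.c2,:var) = :var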
View 2 Replies
View Related
Aug 9, 2013
How do I force an index if the table is not using the index?
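In case it is useful, the usual way is an INDEX hint (the table, alias and index names below are placeholders):
SELECT /*+ INDEX(e emp_deptno_ix) */ *
FROM   emp e
WHERE  e.deptno = 10;
-- If the hint is still ignored, the index usually cannot serve the predicate at
-- all (wrong leading column, a function or implicit conversion wrapped around
-- the column) or the statistics are stale.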
View 10 Replies
View Related
May 4, 2010
The below SQL gives a different number of rows when further columns are added to the select clause, i.e. after adding columns 4, 5 and 6 in the below query it gives a different result-set count. In this case the result-set count would be 5.
Before adding columns 4, 5 and 6, the result-set count was 11.
SELECT PAYMENT_METHOD_MAP.NETTINGGROUP_ID,
PAYMENT_METHOD_MAP.CREDITPAYMENTMETHOD_CD,
PAYMENT_METHOD_MAP.DEBITPAYMENTMETHOD_CD,
PAYMENT_METHOD_MAP.AGENT_ID,
SETTLEMENT.NETTINGGROUP_ID,
SETTLEMENT.SETTLEMENTDATE
[code]....
View 8 Replies
View Related
Oct 17, 2011
The following query gets input parameters from the front-end application, which users run to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND etc. The user may or may not select values from the drop-down boxes.
If the user selects one or more values (against each drop-down box) it has to fetch all matching values from the DB. If the user doesn't select any values it has to fetch all the records; in this case the application will send the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.
To get this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which gets the values selected by the user as a comma-separated string; here V_SELLOB is such a variable and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
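For comparison, a hedged sketch of the alternative I am weighing, simplified to a single-valued V_SELLOB (splitting the comma-separated string would need an extra step), still inside the procedure:
OPEN v_refcursor FOR
  SELECT *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob);
-- The OR form lets the optimiser see the "no filter" case directly; with many
-- optional drop-downs, assembling the WHERE clause with native dynamic SQL and
-- bind variables is the other common approach.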
View 9 Replies
View Related
Dec 17, 2012
Suppose I have two columns, ID and Status (or any number of columns), belonging to table customer. Now I want to create the below index:
Ex. create index test1 on customer (ID , Status )
But before creating this index I want to check whether there is already an index on these 2 columns, maybe under another name?
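A sketch of the dictionary check I have in mind (run as the table owner; otherwise use ALL_IND_COLUMNS with an extra OWNER filter):
SELECT index_name, column_position, column_name
FROM   user_ind_columns
WHERE  table_name = 'CUSTOMER'
ORDER  BY index_name, column_position;
-- Any index whose column_position 1 and 2 rows are ID and STATUS already covers
-- the proposed index, whatever it happens to be called.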
View 23 Replies
View Related
Sep 30, 2010
How does the length (width) of a column affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).
What performance increase could I expect if, in the "job" column, I stored not names but numbers identifying the job names?
For example, I would store "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'.
View 24 Replies
View Related
Sep 13, 2013
I have a table with a non-unique index consisting of three columns. The first column is not null while the remaining two are nullable. Queries using this index will chiefly be made in two ways.
1. Column one and two having values. Column three is null.
2. Column one and three having values. Column two is null.
In both cases I expect a range scan will be used, since the index is non-unique. In the first case the scan will be on the values in columns one and two. But what happens in case two? Will the range be on column one, column two (being null) and column three? Or will it be on just column one, since the second column is null? I have done some testing and I can see, using EXPLAIN PLAN, that a range scan is used in both cases. How exactly is the index used?
Are there any other drawbacks with an index like this?
View 13 Replies
View Related
Mar 6, 2013
I somehow cannot understand why the index is not used
in Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
1. Included only indexed column and got a perfect plan
explain plan for
2 select s.x_cnt
3 from reported_summary s
4 where s.x_cnt>0;
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2674489506
-------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
-------------------------------------------------------
| 0 | SELECT STATEMENT | | 306 (8)|
|* 1 | INDEX FAST FULL SCAN| S_NUI01 | 306 (8)|
-------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("s"."x_CNT">0)
2. Included some other column and got TABLE ACCESS FULL
explain plan for
2 select s.x_cnt,s.ru_id
3 from reported_summary s
4 where s.x_cnt>0;
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2142873335
---------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2421 (3)|
|* 1 | TABLE ACCESS FULL| REPORTED_SUMMARY | 2421 (3)|
---------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("s"."x_CNT">0)
3. Included all other columns and got TABLE ACCESS FULL as well.
explain plan for
2 select s.x_cnt,s.*
3 from reported_summary s
4 where s.x_cnt>0;
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2142873335
---------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)|
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2421 (3)|
|* 1 | TABLE ACCESS FULL| REPORTED_SUMMARY | 2421 (3)|
---------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("s"."x_CNT">0)
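My working theory, sketched out: the first query is answered entirely from the index, but once ru_id is selected every candidate row needs a table visit, and because x_cnt > 0 is not very selective the full scan is the cheaper plan. A covering index would make the two-column query index-only again (only worth it if that query is frequent):
CREATE INDEX s_nui02 ON reported_summary (x_cnt, ru_id);
-- "select s.x_cnt, s.ru_id ... where s.x_cnt > 0" can then be served by an
-- index fast full / range scan without touching the table; the "select s.*"
-- case would need every column in the index, so a full table scan is usually
-- the right plan there anyway.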
View 5 Replies
View Related
Jun 16, 2010
I want to create a report using one field and one text item as the column names in the layout, but display all the columns. I mention the 5 column names in the query. How can I write a function in a summary column?
View 4 Replies
View Related
Jul 19, 2012
What should be the order of columns when creating a non-unique b-tree index?
Low cardinality first
or
High cardinality first
View 15 Replies
View Related
Jan 21, 2011
I have two questions.
Question 1: How do I select all columns from a table except those columns which I type in the query?
Question 2: How do I select all columns from a table where all the columns are not null, without typing each column name?
View 5 Replies
View Related
Mar 31, 2012
This query is taking 7 hours to execute, as I am retrieving data from the history table dept_hist.
select count(distinct empid), e.group_nm, d.date
from emp e, dept_hist d
where e.deptno = d.deptno
and e.up_ts > sysdate -30
[Code]...
It is taking 7 hours to execute. How can I restructure this query?
View 4 Replies
View Related
Jun 4, 2013
How do I avoid duplication of the **where** clause in my query?
In my query below, the **JOIN** condition is the same for both queries and the **WHERE** condition is also the same, except for this clause: "and code.code_name="transaction_1". In the **IF** condition only credit and debit are swapped between the two queries. Because of this **Credit and Debit** swap and the where clause "and code.code_name="transaction_1", I am duplicating the query. How can I avoid this duplication? I am using Oracle 11g.
SELECT day AS business_date,
SUM(amount) AS AMOUNT,
type_amnt AS amount_type,
[Code]....
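A generic sketch of what I am aiming for (txn and code here are hypothetical stand-ins for my real tables): one join and one WHERE clause, with CASE doing the credit/debit swap that the IF condition used to control.
SELECT t.business_date,
       t.amount_type,
       SUM(CASE WHEN c.code_name = 'transaction_1' THEN t.credit ELSE t.debit END) AS amount
FROM   txn t
JOIN   code c ON c.code_id = t.code_id
GROUP  BY t.business_date, t.amount_type;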
View 2 Replies
View Related
Dec 27, 2010
We have a high number of executions of specific types of queries which use only a rownum clause. For example:
select ani, rowid from tbl_smschat_upuor where rownum<=:"SYS_B_0";
The DB has a high number of executions of this type of query, and when I check the execution plan for them it shows a full table scan.
======================execution plan for above query
1000 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 91289622
--------------------------------------------------------------------------------
[code]....
View 3 Replies
View Related
Dec 8, 2010
There is a table in the database with millions of records and a query --- Select rowid, ANI, DNIS, message from tbl_sms_talkies where rownum<=:"SYS_B_0" --- which uses high CPU; this query also has a high number of executions.
View 10 Replies
View Related
Jul 11, 2013
The below query is degrading the performance of the database. As we know, without a where clause the query does a full table scan. It is written to generate the sequence number.
SQL> explain plan for
2 SELECT NVL(MAX(P.NUM_SERIAL_NO), 0) + 1 FROM CNFGTR_IRDA_ENVELOPE_DTLS P
3 /
Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3345343365
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
[code].....
Index is not created on the column.
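Two hedged options (the sequence and index names below are placeholders): the usual fix is to stop computing MAX()+1 and take the number from a sequence, which removes the scan entirely and is safe under concurrent sessions; if the MAX()+1 query has to stay, an index on NUM_SERIAL_NO lets Oracle answer it with an INDEX FULL SCAN (MIN/MAX) instead of a full table scan.
CREATE SEQUENCE cnfgtr_irda_env_seq START WITH 100001;  -- start value is a guess
SELECT cnfgtr_irda_env_seq.NEXTVAL FROM dual;           -- replaces NVL(MAX(...),0)+1

CREATE INDEX cnfgtr_irda_env_sno_ix
  ON cnfgtr_irda_envelope_dtls (num_serial_no);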
View 6 Replies
View Related
Mar 2, 2011
In my query example below, I have to pass more than 4000-5000 parameters in "a1.num". What is the best way to handle this? Also, if I pass more than 2000 parameters, the query takes a long time to execute. How can we solve the performance issue, and how can I pass more parameters?
SELECT c1, c2,
TO_CHAR (c3, 'HH24:MI'),
c4,
c5,
[code]...
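A hedged sketch of the direction I am considering (the table and column names are placeholders, and a literal IN list is capped at 1,000 elements anyway): load the values once into a global temporary table, or bind a collection, and join to it.
CREATE GLOBAL TEMPORARY TABLE gtt_num_filter (num NUMBER)
ON COMMIT DELETE ROWS;

-- the application batch-inserts the 4000-5000 values into gtt_num_filter, then:
SELECT a1.c1, a1.c2, TO_CHAR(a1.c3, 'HH24:MI'), a1.c4, a1.c5
FROM   my_table a1, gtt_num_filter f
WHERE  a1.num = f.num;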
View 12 Replies
View Related