Performance Tuning :: Fragmentation Can Reduce Performance In Query Times

Jun 16, 2010

I have a question about database fragmentation. I know that fragmentation can reduce performance in query times: the blocks are spread across many extents, so scans take a long time because the Oracle engine has to locate the address of each next extent.

I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed, I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.

Is there any useful script or query to do this, or any interesting Oracle system view?
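A commonly used starting point (a rough estimate based on optimizer statistics, so gather fresh stats first; the 8K block size below is an assumption) is to compare the blocks allocated in DBA_TABLES with the space the rows actually need:

select owner, table_name,
       round(blocks * 8 / 1024, 2)                                            "ALLOCATED_MB",  -- assumes an 8K block size
       round(num_rows * avg_row_len / 1024 / 1024, 2)                         "ACTUAL_MB",
       round((blocks * 8 / 1024) - (num_rows * avg_row_len / 1024 / 1024), 2) "WASTED_MB"
from   dba_tables
where  owner = 'SCOTT'                                                        -- placeholder schema
order  by 5 desc;

DBA_SEGMENTS/DBA_EXTENTS (extent counts) and INDEX_STATS after ANALYZE INDEX ... VALIDATE STRUCTURE are the other views usually consulted for this kind of check.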

View 2 Replies


Performance Tuning :: How To Reduce Cost Of Query

Mar 21, 2013

I have the following query, which takes several minutes to execute; I want to tune it so that it runs faster.

Query :

SELECT a.CHDR_EXCH_CD ,TMHR_EXCH_TM_CD,'S' Sec_type,
round(SUM (Decode(csdt_Depo_Typ,'I',(Cal_Scheme_Rate(csdt_rsm_code,TO_DATE(:P_DT_FR,'DD-MM-RR'),csdt_stsc_cd,csdt_scp_qty)*csdt_scp_qty)-
(Cal_Scheme_Rate(csdt_rsm_code,TO_DATE(:P_DT_FR,'DD-MM-

[Code]...

Explain Plan Result :

Plan
SELECT STATEMENT ALL_ROWS
Cost: 1,669 Bytes: 67 Cardinality: 1
15 HASH GROUP BY
Bytes: 67 Cardinality: 1
14 CONCATENATION

[Code].....

In the result, step 4 of the explain plan shows TABLE ACCESS FULL. I want to create an index to avoid that; how do I do this?

The table MG_COLL_SCP_DTL already has an index like this:

CREATE UNIQUE INDEX CSDT_PK ON MG_COLL_SCP_DTL
(CSDT_CHDR_TRANS_NUM, CSDT_PROD_TYP, CSDT_TRAN_SR_NO, CSDT_CHDR_CDTL_COLL_TYP, CSDT_CHDR_CDTL_COLL_TYP_CD,
CSDT_STSC_CD, CSDT_CHDR_CLNT_CD, CSDT_CHDR_CLNT_TM_CD)

[Code]....

How can I reduce the cost?
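A minimal sketch of adding an index for the step that does the full scan; the column below is purely a hypothetical placeholder, since the filter column of step 4 is not visible in the truncated query, and an index only helps if that predicate is selective:

create index mg_coll_scp_dtl_ix1 on mg_coll_scp_dtl (csdt_depo_typ);   -- hypothetical column choice
-- refresh statistics so the optimizer will consider the new index
exec dbms_stats.gather_table_stats(user, 'MG_COLL_SCP_DTL', cascade => true);

Note that a full table scan is not automatically bad; if the query touches a large fraction of the rows, the scan can be cheaper than an index range scan.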

View 4 Replies View Related

Performance Tuning :: Remove Table Fragmentation

Dec 29, 2011

I have tried the steps below to remove table fragmentation, but for some tables I am not getting good results.

1. Collect the tables that have more than 100 MB of fragmentation.

select owner, table_name, blocks, num_rows, avg_row_len,
       round((blocks*8/1024),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'MB' "ACTUAL_SIZE",
       round((blocks*8/1024)-(num_rows*avg_row_len/1024/1024),2)||'MB' "FRAGMENTED_SPACE"
from   dba_tables
where  owner in ('a','b','c','d')
and    round((blocks*8/1024)-(num_rows*avg_row_len/1024/1024),2) > 100
order  by 8 desc;

2. Then move the table (within the same tablespace):

alter table abc move;
alter table bcd move;
alter table efg move;

3. Also rebuild the dependent indexes, which are left UNUSABLE by the move:

alter index abc_PK rebuild online;

4. Then gather statistics on the tables that had more than 100 MB of fragmentation:

exec dbms_stats.gather_table_stats('a','abc');
exec dbms_stats.gather_table_stats('b','bcd');
exec dbms_stats.gather_table_stats('c','cdf');

After that, when I check the table fragmentation again, I get the same result that I collected with the first query.
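One note on step 1: the formula is only an approximation, since num_rows * avg_row_len ignores block header overhead, PCTFREE and row pieces, so some apparent "fragmentation" will remain no matter how often the table is rebuilt. Also, as an alternative to MOVE (which leaves dependent indexes UNUSABLE until rebuilt), here is a sketch of an online segment shrink, assuming the tablespace uses Automatic Segment Space Management:

alter table abc enable row movement;
alter table abc shrink space cascade;   -- CASCADE also shrinks the dependent indexes
exec dbms_stats.gather_table_stats('a', 'ABC');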

View 7 Replies View Related

Performance Tuning :: How To Reduce Clustering_factor Value

Jun 7, 2012

How can I reduce the clustering_factor value that appears in the USER_INDEXES view?

In my table, the clustering factor is very high:

SQL> SELECT UI.clustering_factor,UI.num_rows,UI.index_type,UI.distinct_keys FROM USER_INDEXES UI WHERE UI.table_name = 'TAWB_AWB';

CLUSTERING_FACTOR NUM_ROWS INDEX_TYPE DISTINCT_KEYS
----------------- ---------- --------------------------- -------------
83609 187603 NORMAL 187603

and its number of blocks is 5063:

SQL> SELECT COUNT(DISTINCT DBMS_ROWID.rowid_block_number(ROWID)) BLOCK_NUM FROM TAWB_AWB A;
BLOCK_NUM
----------
5063
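The clustering factor measures how well the table rows are physically ordered with respect to the index key, so the only real way to lower it for a given index is to rebuild the table sorted by that key; rebuilding the index itself does not change it. A minimal sketch (the ORDER BY column is an assumed index key, and this improves one index at the possible expense of others):

create table tawb_awb_sorted as
  select * from tawb_awb order by awb_no;      -- awb_no is an assumed leading index column
-- then swap the tables, re-create indexes, constraints and grants, and gather statistics
exec dbms_stats.gather_table_stats(user, 'TAWB_AWB_SORTED', cascade => true);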

View 10 Replies View Related

Performance Tuning :: How To Reduce Consistent Gets Of SQL

Jun 27, 2011

How can I reduce the consistent gets of the sql:

select b.OFFER_ID,
b.OFFER_CODE,
b.OFFER_NAME,
b.OFFER_COMMENTS,
b.BAND_ID,
b.CAN_BE_BUY_ALONE,
b.PRICING_PLAN_ID,
b.PRIORITY,
b.STATE,
[code]...

I have already added indexes for the outer SQL to reduce the cost from 374 to 150, but the consistent gets did not drop much. I notice that the function costs 6 consistent gets per execution, which does not seem very high. But when this function is included in the SQL, the statement incurs very high consistent gets; if I remove the function, the SQL costs only 1,903 consistent gets.

So I am thinking there should be two ways to reduce the consistent gets: the first is to reduce the recursive calls of the SQL, and the second is to reduce the consistent gets of the function (although the consistent gets inside the function already seem very low, only 6).
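If the function is called once per row of the outer SQL, even 6 gets per call multiplies up quickly. Two common ways to cut the number of calls (sketched here with placeholder names, since the function is not shown in the post) are wrapping the call in a scalar subquery so Oracle can cache results for repeated inputs, and declaring the function DETERMINISTIC if its result depends only on its arguments:

-- scalar subquery caching: repeated input values reuse the cached result
select b.offer_id,
       (select my_func(b.offer_id) from dual) as func_value   -- my_func is a placeholder name
from   offer_tbl b;                                           -- offer_tbl is a placeholder name

-- or mark the function itself
create or replace function my_func(p_id number) return number deterministic is
begin
  return 0;   -- body omitted; placeholder
end;
/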

View 39 Replies View Related

Performance Tuning :: How To Reduce Dml Locks

May 24, 2012

After running a DB health check, my database report gives the following details:

dml_locks OK. dml_locks = 3396, transactions = 849.

View 6 Replies View Related

Performance Tuning :: Number Of Times Sql Had Been Executed / Run?

Oct 25, 2011

How can I find how many times a SQL statement has been executed in an Oracle session? Is EXECUTIONS in V$SQL the right place to look? Will it give the number of times the SQL has been executed in a particular session?
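EXECUTIONS in V$SQL is cumulative for the whole instance since the cursor was loaded into the shared pool, not per session. A sketch of the usual starting point (the SQL_ID is a placeholder):

select sql_id, child_number, executions, parse_calls, last_active_time
from   v$sql
where  sql_id = 'abcd1234efgh5';   -- placeholder sql_id

For a per-session count, the usual approach is to enable SQL trace (event 10046) for that session and count the EXEC calls for the statement in the tkprof output.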

View 3 Replies View Related

Performance Tuning :: ASH Report - SQL Has Been Executed For 1567 Times?

Sep 12, 2012

In ASH report

there is a section that goes like this

SQL ID | Plan hash | Sampled # of Executions | % Activity | Event | % Event | Top Row Source
fdy93qpr1227 | 1567 | 7.58 | direct path read | 3.65 | TABLE ACCESS - FULL

Does this suggest that this SQL has been executed 1567 times? Is that correct?

View 7 Replies View Related

Performance Tuning :: How To Reduce Index Creation Time

Aug 17, 2010

Is there any way to reduce index creation time? I have one table with 7,700,000 records. Every day this table gets truncated, we re-create it with a CREATE TABLE AS SELECT statement, and then create 4 indexes. Each index takes 5 minutes, so index creation takes 20 minutes in total.
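Two options that usually shorten bulk index builds are building in parallel and skipping redo with NOLOGGING (acceptable here since the whole table is re-created daily anyway). A sketch with placeholder names:

create index big_tbl_ix1 on big_tbl (col1)
  parallel 4      -- several PX slaves share the scan and sort
  nologging;      -- no redo for the build; re-run the build after a restore if needed
alter index big_tbl_ix1 noparallel;   -- reset the degree once the build is done

A larger PGA_AGGREGATE_TARGET also helps, because index creation is mostly a big sort and spills to temp when the work area is too small.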

View 2 Replies View Related

Performance Tuning :: WINDOW SORT - Reduce Time?

Nov 30, 2011

In my insert query, the WINDOW SORT step takes the longest time, i.e. 93% of the total execution time. How do I reduce this time? Are there any tuning parameters available for this?

View 5 Replies View Related

Performance Tuning :: Query Performance Gain Using Statistics?

Aug 9, 2010

Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment and, once we achieve the desired execution plan, adjust the 'statistics' so that plan is followed without hints.

Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?

For example, I have the following simple query:

select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;

The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.

If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the join taking place is a nested loop.

Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q 3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?

I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan compared to hints.
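For completeness, statistics can be set by hand with DBMS_STATS; a hedged sketch of nudging the optimizer's idea of table sizes (the numbers are arbitrary examples, and this is normally done only on test systems):

-- make DEPT look large and EMP look small, then re-check which table drives the join
exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'DEPT', numrows => 1000000, numblks => 10000);
exec dbms_stats.set_table_stats(ownname => 'SCOTT', tabname => 'EMP',  numrows => 10,      numblks => 1);

Column and index statistics can be adjusted in the same spirit with SET_COLUMN_STATS and SET_INDEX_STATS. In practice, though, hints, SQL profiles or stored outlines are the usual ways to pin a plan; faked statistics affect every query that touches those tables.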

View 6 Replies View Related

Performance Tuning :: How Length Of Column Width Effects Index Performance

Sep 30, 2010

How does the length (width) of a column affect index performance?

For example, if I had an IOT table emp_iot with these columns:
(id number,
job varchar2(20),
time date,
plan number)

The table key consists of (id, job, time).

Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).

What performance increase could I expect if, in the "job" column, I stored not names but numeric codes identifying the job names?
For example, I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.

View 24 Replies View Related

Performance Tuning :: How To Show Right IO Of Query

Jun 21, 2011

I use Oracle 11g R2.

I have created a table for objects:

create table table_t
(Record_no number,
t_no number,
position SDO_GEOMETRY
, occupation_time number);

and inserted many values (example below):

INSERT INTO tra VALUES
(2,
20503,
SDO_GEOMETRY
(2001,
NULL,
SDO_POINT_TYPE (1387, 0, NULL),
NULL,
NULL),
23037
)
/

The position column is indexed with an R-tree spatial index.

Now when I run a spatial query such as

SELECT * FROM tra t WHERE
SDO_FILTER(t.position, MDSYS.SDO_GEOMETRY(2001,NULL,NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,3),
MDSYS.SDO_ORDINATE_ARRAY(0,0,9000,0)), 'querytype=WINDOW') = 'TRUE' and t.position.sdo_point.X=1;

I do not know how much I/O was incurred.

I tried SET AUTOTRACE ON,

but the physical reads are 0. This seems impossible, because I have more than 100,000 objects there, all indexed with the R-tree.
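Physical reads can legitimately be 0 when every block the query needs is already in the buffer cache; the logical I/O figure to look at is "consistent gets". Two common ways to see it, sketched here:

-- in SQL*Plus, show only the execution statistics, then run the spatial query
set autotrace traceonly statistics

-- or read the session counters before and after running the query
select sn.name, ms.value
from   v$mystat ms
join   v$statname sn on sn.statistic# = ms.statistic#
where  sn.name in ('consistent gets', 'db block gets', 'physical reads');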

View 2 Replies View Related

Performance Tuning :: Query Doing FTS On A Big Table

Jun 24, 2011

The query below is taking a long time...

select gam.SOL_ID,COUNT(gam.FORACID) from gam,smt where
gam.ACID=smt.ACID and gam.ACID NOT IN(select ACID from imt) and
gam.SCHM_TYPE in('SBA','CCA','CAA','ODA') and GAM.ACCT_CLS_FLG='N' and
gam.SOL_ID IN(select SOL_ID from IMT) group by gam.SOL_ID
/

attached is the explain plan.

The index on the IMT table is not used, and the query is doing a full table scan (FTS) on IMT. What needs to be done to avoid the FTS on the IMT table?
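One rewrite that is often worth testing with this shape of query is turning the NOT IN / IN subqueries on IMT into (NOT) EXISTS, which gives the optimizer more freedom to use anti-/semi-joins and any index on IMT(ACID) or IMT(SOL_ID). A hedged sketch using the column names from the post:

select gam.sol_id, count(gam.foracid)
from   gam, smt
where  gam.acid = smt.acid
and    not exists (select 1 from imt where imt.acid = gam.acid)     -- note: differs from NOT IN if imt.acid can be NULL
and    gam.schm_type in ('SBA','CCA','CAA','ODA')
and    gam.acct_cls_flg = 'N'
and    exists (select 1 from imt where imt.sol_id = gam.sol_id)
group  by gam.sol_id;

Whether the optimizer then uses the IMT index or still prefers a full scan depends on the size of IMT and the selectivity of the join columns.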

View 10 Replies View Related

Performance Tuning :: When To Use Sub-query And When To Use Join

Dec 14, 2010

In SQL, almost everything that is possible with a join is also possible with a sub-query, and vice versa.

So when should I use a sub-query, and when should I go for a join?
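As a tiny illustration with the standard demo schema, the two forms below ask the same question, and the optimizer frequently transforms one into the other; the practical advice is to write whichever reads more clearly, then compare the execution plans:

-- join form
select e.ename
from   emp e join dept d on d.deptno = e.deptno
where  d.dname = 'SALES';

-- sub-query form
select e.ename
from   emp e
where  e.deptno in (select d.deptno from dept d where d.dname = 'SALES');

A join is needed when columns from both tables must appear in the output; a sub-query (IN / EXISTS / NOT EXISTS) often reads better for pure existence checks.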

View 9 Replies View Related

Performance Tuning :: Performance Standard Edition Without Partitioning?

Jun 16, 2011

How many records could I have in a single table without performance degradation on Standard Edition (no partitioning), with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?

Are 300 million rows in a single table, with 500K transactions/day, too much?

Simple database with simple schema.

How many records begin to be too many?

View 2 Replies View Related

Performance Tuning :: Procedure Performance On New Database Import?

Nov 15, 2010

Testing our 9i to 11g upgrade, we've imported the entire DB onto the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Simply altering the procedure and recompiling does NOT work.

The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.

Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?

View 13 Replies View Related

Performance Tuning :: Checking Delete Performance In Package

Apr 12, 2013

I need to check the package's performance and improve it.

1. How can I check the package performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I've observed that the delete takes a long time to remove all the rows in the table (about 7,000,000 records). This is a staging table; the data needs to be cleared daily before new data is inserted. What can I use instead of DELETE? (See the sketch below.)
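For clearing a staging table, TRUNCATE is the usual replacement, since it deallocates the segment instead of deleting row by row; note that it is DDL, so it commits and cannot be rolled back. For per-statement timings inside the package, DBMS_PROFILER (or DBMS_HPROF on 11g) is the standard tool. A minimal sketch with a placeholder table name:

truncate table stg_data;                  -- stg_data is a placeholder name
-- or keep the already-allocated extents for the next daily load:
truncate table stg_data reuse storage;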

View 13 Replies View Related

Performance Tuning :: How To Improve The Performance Of Export Job (expdp)

Dec 6, 2011

I have an issue with export (expdp).

When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command I use for the export:

expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk

Do I need to look into any memory parameters for this?
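The one memory area that is specific to Data Pump is the streams pool, which it uses for its internal queues; everything else is the normal buffer cache and PGA usage of the SELECTs it runs. A hedged sketch of checking and (if there is SGA headroom) sizing it, with an example value:

show parameter streams_pool_size
alter system set streams_pool_size = 64m;   -- example value; with automatic SGA management Oracle resizes this itself

A load average of 5 during a 180 GB export is not unusual by itself; if it must be reduced, scheduling the job in a quieter window or constraining it with a Resource Manager consumer group is the more direct lever.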

View 1 Replies View Related

Performance Tuning :: Query With High Cpu Usage?

Jan 13, 2009

Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id from bhi_tracking where description like 'Multilateral:%'

The explain plan for this query is:

---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 178K| 6609K| 129K|
| 1 | TABLE ACCESS FULL| BHI_TRACKING | 178K| 6609K| 129K|
----------------------------------------------------------------

BHI_TRACKING is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then insert the rows into it, to enhance performance?
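Because the predicate is a left-anchored LIKE ('Multilateral:%'), an ordinary B-tree index on DESCRIPTION can be used as a range scan; whether that actually beats the full scan depends on how selective the prefix is (the plan estimates 178K matching rows out of millions). A sketch:

create index bhi_tracking_desc_ix on bhi_tracking (description);
-- conceptually the optimizer can then treat the predicate as
--   description >= 'Multilateral:' and description < the next higher prefix

On the archiving question: truncating (or exporting, dropping and re-importing) and re-inserting one year of data will compact the segment, but partitioning by date, where it is available, is the cleaner long-term way to keep only a year online.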

View 14 Replies View Related

Performance Tuning :: Same Query Using Different Execution Plans?

Dec 26, 2011

I am executing the query below:

INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
vendor_data vd2,
vendor_data vd3,
(SELECT rownumber,

[code]....

It takes different approaches (execution plans) when executing with the same set of parameters. Because of this, sometimes it executes successfully, but sometimes it fills all the TEMP space and fails. I am pasting both execution plans (different from the explain plan) below:

I. Successful Execution Plan:

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 65612 (100)| | | |
|* 1 | HASH JOIN | | 1 | 6121 | 65612 (1)| 00:13:08 | | |

[code]....

II. Failed with TEMP space Execution Plan:

--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 1967 (100)| | | |
|* 1 | FILTER | | | | | | | |
| 2 | SORT GROUP BY | | 1 | 8233 | 1967 (3)| 00:00:24 | | |
|* 3 | HASH JOIN | | 1 | 8233 | 1966 (3)| 00:00:24 | | |

[code]....

View 8 Replies View Related

Performance Tuning :: Pin Sql Query In Shared Pool?

Jun 25, 2010

We are facing performance issues on our production instance: 10g (10.2.0.4) 32-bit, OEL 5.3, 2 GB SGA. The problem is mainly related to one table sized at about 32 GB. We have rebuilt the indexes as well, but the problem still persists. We are considering pinning the SQL statement that hits this table frequently in the shared pool. But as far as we have found, we can only pin procedures or functions in the shared pool. True or false? If we can, how do we pin a SQL statement in the shared pool? If we cannot, is there any other way?
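For the record, DBMS_SHARED_POOL.KEEP can pin cursors as well as PL/SQL objects: for a SQL statement you pass its ADDRESS and HASH_VALUE from V$SQLAREA and the flag 'C'. A hedged sketch (the SQL text filter is only a placeholder way of locating the cursor, and the DBMSPOOL package must be installed and granted):

declare
  v_name varchar2(100);
begin
  select address || ',' || hash_value
    into v_name
    from v$sqlarea
   where sql_text like 'select /* big_table_query */%'   -- placeholder filter
     and rownum = 1;
  dbms_shared_pool.keep(v_name, 'C');                    -- 'C' = cursor
end;
/

Keep in mind that pinning a cursor only avoids re-parsing; it does nothing for the buffer-cache or disk I/O cost of scanning a 32 GB table, which is more likely the real bottleneck here.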

View 3 Replies View Related

Performance Tuning :: Run Query With Different Plan_hash Value Then In GV$SQL_PLAN?

Nov 27, 2012

A change (running a statistics collection package) was made, and now a query runs impossibly slowly, disabling use of an application function.

This is caused by a SQL statement whose SQL_ID is known to me: 4zty77fkf907j. When I run

select * from GV$SQL_PLAN_STATISTICS where SQL_ID = '4zty77fkf907j'

I get that the plan hash value is 3405747833.

I ran SELECT Q.PARSING_SCHEMA_NAME

,Q.SQL_ID,ROUND(Q.ELAPSED_TIME / Q.EXECUTIONS)/1000000 AS AVG_USECS
,Q.CHILD_NUMBER
,Q.HASH_VALUE
,Q.PLAN_HASH_VALUE
,'''' || REPLACE(Q.SQL_FULLTEXT

[code]....

and saw that the result was 2 rows (for the 2 RAC nodes), with a figure of hundreds of seconds of execution time, meaning a bad plan.

when I run

select * from dba_hist_sql_plan where sql_id='4zty77fkf907j'

I get a plan hash value of 3390684693. I assume this might be the "good plan".

For a start, how can I hint my query to use that "good plan" before I use stored outlines to implement it?
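Before forcing anything, it may be worth rendering both plans to confirm which one is actually the "good" one; DBMS_XPLAN can display a specific plan hash from AWR and the currently cached plan for comparison:

select * from table(dbms_xplan.display_awr('4zty77fkf907j', 3390684693));
-- the plan currently being used from the cursor cache:
select * from table(dbms_xplan.display_cursor('4zty77fkf907j', null));

As a quick test, re-gathering (or restoring the previous) statistics on the tables involved often brings the old plan back; stored outlines, or SQL plan baselines where the version supports them, are then the durable way to lock it in.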

View 2 Replies View Related

Performance Tuning :: Same Query With Different Explain Plan In Two Database?

Aug 17, 2012

I am facing a weird situation wherein the explain plan of the same SQL in SIT and PROD is different. In fact, the explain plan is very costly in Prod, even though the DB version of both SIT and PROD is the same.

Below is the sql and corresponding explain plan in Prod and SIT respectively.

Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl)Nxt_comp_item_type,lvl,bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'

[code]...

The tables referred to in the above query are small, containing around 10k records. They are partitioned on Region and not indexed.

Explain Plan in Prod: COST CARDINALITY BYTES

SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240

[code]...

Explain Plan in SIT: COST CARDINALITY BYTES

SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120

[code]...

I cannot explain why there is such a huge change in cost between SIT and Prod. The job now runs for 3-5 hours, whereas it used to complete within 20 minutes in SIT.

View 5 Replies View Related

Performance Tuning :: Insert Into Table Slows Down Query

Sep 23, 2010

When I run a script that does a SELECT from a single table (the table has 33,521,868 records), the query executes in about 0.094 seconds. When I use the exact same query to insert into a temporary table, it takes 10 minutes or more.

What should I be doing to speed up this process? I also tried using hints, and they did not speed up the insert.
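Note that 0.094 seconds is most likely the time to return the first array of rows to the client, not to produce all 33 million; the insert has to materialize every row. For a copy of this size the usual levers are a direct-path insert and, where the data can be re-created, minimizing undo/redo. A hedged sketch with placeholder table names:

insert /*+ append */ into tgt_copy_tbl         -- direct-path: writes above the high-water mark, bypasses the buffer cache
select /*+ parallel(src 4) */ *
from   src_big_tbl src;
commit;                                        -- the loaded segment cannot be queried in-session until commit

If the target really is a global temporary table, APPEND still applies, but also check whether the temporary tablespace and its tempfiles are sized and laid out sensibly.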

View 3 Replies View Related

Performance Tuning :: Query Sql Server Faster Than Oracle?

Feb 21, 2013

I have a query that is optimized in terms of indexes and runs immediately when the result is only a few records, in both SQL Server and Oracle. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of type VARCHAR(250) and some VARCHAR(2000); I understand the problem may be here. For similar results (20,000 records) SQL Server runs in 2 seconds, while Oracle takes around 30 seconds to return the full data. The problem really is in returning all of these fields: if the same query returns only a single numeric field, it completes in 2 seconds. I have tested through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or network data flow. I suspect it is some setting, given such a difference between Oracle and SQL Server. The databases are Oracle 10 and SQL Server 2008.

View 1 Replies View Related

Performance Tuning :: Query Using Row Num In Where Clause With Millions Of Records

Dec 8, 2010

There is a table in the database with millions of records, and a query --- Select rowid, ANI, DNIS, message from tbl_sms_talkies where rownum<=:"SYS_B_0" --- that is using high CPU and also has a high number of executions.

View 10 Replies View Related

Performance Tuning :: Alternate Query Instead Of Querying Table Twice?

Nov 11, 2012

The Item data for individual cycles is as below.

Item_tbl
Item  Rundate    Stddate    Status
P1    03-Nov-12  03-Nov-12  A
P1    04-Nov-12  04-Nov-12  D
P2    04-Nov-12  03-Nov-12  A

The requirement is that, for the input date, I have to get the details of the previous active cycle (status 'A') of each item that became disabled (status 'D').

In the above case, since for item P1 the status on cycle date 04-Nov-12 is D, I have to consider the previous active cycle, which is 03-Nov-12. Based on that stddate, another table is queried to get all the items. Item P2 should not be considered in this case.

Below is the code I have written, which takes the rundate as an input parameter.

-- To get the Items disabled for Input date
with Itemdisabled as
(
select item,stddate maxcycledate
from Item_tbl
where rundate = stddate

[code]....

In the above case, I'm querying Item_tbl twice: once to get the disabled items and once to get the previous active cycle.

Is there any way to query the table only once and get the required results using LAG/LEAD functions, etc.?
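A single-pass sketch of the idea (column names are taken from the post; IGNORE NULLS is available with LAST_VALUE from 10g, and with LAG only from 11.2): carry forward the stddate of the most recent prior 'A' row for each item, then keep only the 'D' rows for the input date:

select item, rundate, stddate, status, prev_active_stddate
from  (select item, rundate, stddate, status,
              last_value(case when status = 'A' then stddate end ignore nulls)
                over (partition by item
                      order by rundate
                      rows between unbounded preceding and 1 preceding) as prev_active_stddate
       from   item_tbl)
where  status  = 'D'
and    rundate = :input_date;

The prev_active_stddate of each returned row can then be joined to the other table directly, so item_tbl is read only once.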

View 5 Replies View Related

Performance Tuning :: How To Find Top Query Running On A Table

Feb 19, 2012

How do I find a particular SQL statement, or a set of SQL statements, executed against a given (user-identified) table, that is either very frequently executed against that table or a high-impact SQL against it? I am currently looking through AWR reports to go through all the queries, but I was wondering whether there are any dictionary views where this information can be found.
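One dictionary-view approach is to join the plan views to the SQL views: any cached cursor whose plan touches the table appears in V$SQL_PLAN, and historical ones in DBA_HIST_SQL_PLAN. A sketch with placeholder owner/table names:

select s.sql_id, s.executions, s.buffer_gets, s.elapsed_time / 1e6 as elapsed_secs
from   v$sql s
where  s.sql_id in (select p.sql_id
                    from   v$sql_plan p
                    where  p.object_owner = 'APP_OWNER'    -- placeholder
                    and    p.object_name  = 'MY_TABLE')    -- placeholder
order  by s.elapsed_time desc;

For activity that has already aged out of the cursor cache, the same join can be done between DBA_HIST_SQL_PLAN and DBA_HIST_SQLSTAT, which is essentially what the AWR "SQL ordered by ..." sections are built from.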

View 2 Replies View Related

Performance Tuning :: Why Different Elapsed Time In Oracle9i And 10g For Same Query

May 5, 2011

Why is the elapsed time different in Oracle 9i and Oracle 10g for the same query?

View 2 Replies View Related






