Using Rownum In PL/SQL Can Significantly Reduce Performance And Throughput Of Queries
Nov 27, 2010
Using rownum in PL/SQL can significantly reduce performance and throughput of queries.
For example,
select *
from (select ...
      from ...
      join ... on ...
      join ... on ...
      left join ... on ...
      where ...
      group by ...)
where rownum < 500
takes much more time on a heavily loaded DB than
select Y.*
from (select X.*, rank() over (order by ...) rnk
      from (select ...
            from ...
            join ... on ...
            join ... on ...
            left join ... on ...
            where ...
            group by ...) X) Y
where rnk < 500
I suspect it's because of the Oracle optimizer goals ALL_ROWS and FIRST_ROWS.
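If the optimizer goal is indeed the difference, one hedged experiment is to steer the ROWNUM form toward a first-rows plan explicitly. A minimal sketch against Oracle's HR sample schema (the post elides the real tables, so these names are stand-ins):

select /*+ first_rows(500) */ *
from (select d.department_name, count(*) cnt
      from employees e
      join departments d on d.department_id = e.department_id
      group by d.department_name)
where rownum < 500;

If this runs like the rank() version does, the inner query's default all_rows goal was likely the culprit.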
View 2 Replies
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can reduce query performance: when a segment's blocks are spread across many extents, a scan takes longer because the Oracle engine has to locate the address of each next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Any useful script or query to do this, or any interesting Oracle system view?
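As a hedged starting point, the extent count per segment is visible in USER_EXTENTS (or DBA_EXTENTS); this is a rough indicator rather than a definitive fragmentation measure:

select segment_name,
       segment_type,
       count(*)    as extent_count,
       sum(blocks) as total_blocks
from user_extents
group by segment_name, segment_type
order by extent_count desc;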
View 2 Replies
Dec 27, 2010
We have a high number of executions of a specific type of query that uses only a ROWNUM clause. For example:
select ani, rowid from tbl_smschat_upuor where rownum<=:"SYS_B_0";
The DB has a high number of executions of this type of query, and when I check the execution plan for them, they do a full table scan.
Execution plan for the above query:
1000 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 91289622
--------------------------------------------------------------------------------
[code]....
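For what it's worth, a ROWNUM-only predicate with no other filter can only be satisfied by reading the table, but the plan should contain a COUNT STOPKEY step, meaning the scan stops as soon as the requested number of rows has been fetched. A hedged way to confirm from SQL*Plus:

set autotrace traceonly explain
select ani, rowid from tbl_smschat_upuor where rownum <= 1000;
-- look for COUNT STOPKEY above the TABLE ACCESS FULL step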
View 3 Replies
Jun 7, 2012
How can I reduce the CLUSTERING_FACTOR value that appears in the USER_INDEXES view?
In my table, the clustering factor's value is very high:
SQL> SELECT UI.clustering_factor,UI.num_rows,UI.index_type,UI.distinct_keys FROM USER_INDEXES UI WHERE UI.table_name = 'TAWB_AWB';
CLUSTERING_FACTOR   NUM_ROWS   INDEX_TYPE   DISTINCT_KEYS
-----------------   --------   ----------   -------------
            83609     187603   NORMAL              187603
and its block count is 5063:
SQL> SELECT COUNT(DISTINCT DBMS_ROWID.rowid_block_number(ROWID)) BLOCK_NUM FROM TAWB_AWB A;
BLOCK_NUM
----------
5063
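A hedged note: the clustering factor measures how closely the table's physical row order follows the index key order; its best case is roughly the block count (5063 here) and its worst case roughly the row count (187603), so 83609 sits in between. The usual way to lower it for one index is to rebuild the table sorted by that index's key, for example:

create table tawb_awb_sorted as
  select * from tawb_awb
  order by awb_no;   -- awb_no stands in for the real index key column

followed by swapping the tables and rebuilding the index. Row order can only favour one index at a time, so this helps the chosen index at the possible expense of others.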
View 10 Replies
Jun 27, 2011
How can I reduce the consistent gets of this SQL?
select b.OFFER_ID,
b.OFFER_CODE,
b.OFFER_NAME,
b.OFFER_COMMENTS,
b.BAND_ID,
b.CAN_BE_BUY_ALONE,
b.PRICING_PLAN_ID,
b.PRIORITY,
b.STATE,
[code]...
I have already added indexes for the outer SQL to reduce the cost from 374 to 150, but the consistent gets did not drop that much. I notice the function costs 6 consistent gets per execution, which does not seem very high on its own; but when the function runs inside the SQL, the total consistent gets become very high, and if I remove the function, the SQL costs only 1903 consistent gets.
So I am thinking there should be two ways to reduce the consistent gets. The first is to reduce the recursive calls of the SQL. The second is to reduce the consistent gets of the function itself (but those already seem very low, only 6).
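For the first route, one commonly suggested trick is to wrap the function call in a scalar subquery, which lets Oracle cache results for repeated inputs and thereby cuts the number of recursive executions. A hedged sketch (the function and driving column are placeholders, since the post truncates the query):

select b.offer_id,
       (select my_func(b.pricing_plan_id) from dual) as plan_value
from offer b;

The caching pays off only when the function's input values repeat across many rows.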
View 39 Replies
May 24, 2012
After running a DB health check, my database report gives the following details:
dml_locks OK. dml_locks = 3396, transactions = 849.
View 6 Replies
Mar 21, 2013
I have the following query, which takes some minutes to execute. I want to tune it so that it executes faster.
Query:
SELECT a.CHDR_EXCH_CD ,TMHR_EXCH_TM_CD,'S' Sec_type,
round(SUM (Decode(csdt_Depo_Typ,'I',(Cal_Scheme_Rate(csdt_rsm_code,TO_DATE(:P_DT_FR,'DD-MM-RR'),csdt_stsc_cd,csdt_scp_qty)*csdt_scp_qty)-
(Cal_Scheme_Rate(csdt_rsm_code,TO_DATE(:P_DT_FR,'DD-MM-
[Code]...
Explain plan result:
Plan
SELECT STATEMENT ALL_ROWS
Cost: 1,669 Bytes: 67 Cardinality: 1
15 HASH GROUP BY
Bytes: 67 Cardinality: 1
14 CONCATENATION
[Code].....
In the result, step 4 of the explain plan shows TABLE ACCESS FULL. I want to create an index to avoid that; how do I do this?
The table MG_COLL_SCP_DTL already has an index like this:
CREATE UNIQUE INDEX CSDT_PK ON MG_COLL_SCP_DTL
(CSDT_CHDR_TRANS_NUM, CSDT_PROD_TYP, CSDT_TRAN_SR_NO, CSDT_CHDR_CDTL_COLL_TYP, CSDT_CHDR_CDTL_COLL_TYP_CD,
CSDT_STSC_CD, CSDT_CHDR_CLNT_CD, CSDT_CHDR_CLNT_TM_CD)
[Code]....
How can I reduce the cost?
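A hedged sketch: to remove the full scan at step 4, index the columns that step actually filters on. From the visible fragment, CSDT_DEPO_TYP and CSDT_RSM_CODE look like candidates, but that is an assumption; verify against the full predicate list first:

create index csdt_filter_idx
  on mg_coll_scp_dtl (csdt_depo_typ, csdt_rsm_code);

Bear in mind an index only wins if the predicate is selective; when most rows qualify, the full scan can be the cheaper plan.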
View 4 Replies
Aug 17, 2010
Is there any way to reduce index creation time? I have one table with 7,700,000 records; every day this table gets truncated, re-created with a CREATE TABLE AS SELECT statement, and then we create the 4 indexes. Each index takes 5 minutes, so in total index creation takes 20 minutes.
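A hedged sketch: since the table is rebuilt from scratch daily anyway, building each index with PARALLEL and NOLOGGING, then resetting both attributes, usually cuts build time considerably (names below are placeholders):

create index my_idx on my_table (my_col)
  parallel 4 nologging;
alter index my_idx noparallel;  -- reset so later queries aren't parallelized unexpectedly
alter index my_idx logging;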
View 2 Replies
Nov 30, 2011
In my INSERT query, the WINDOW SORT step takes the longest time, i.e. 93% of total execution time. How do I reduce this time? Are there any tuning parameters available for this?
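One hedged avenue: WINDOW SORT time balloons when the sort spills to the temporary tablespace, so giving the loading session a larger sort workarea can shrink it. A sketch with illustrative values (a session-level trade-off, not a blanket recommendation):

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 104857600;  -- 100 MB per sort workarea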
View 5 Replies
May 23, 2011
Recently upgraded from Oracle 10.2.0.4 to 11g with a few bumps in the road, most of which I've been able to resolve, but there's one that continues to confuse me.
It's a pretty vanilla INSERT statement whose SELECT portion on its own runs in about 2 to 5 seconds (all results returned) on a facility-by-facility basis. When I combine it with the INSERT statement, it ends up running for 12+ minutes per facility. The explain plan looks good, and I've even tried emptying the target table prior to running the INSERT.
I've gathered schema/table statistics to no avail. I also tried it as a CREATE TABLE AS statement and it still takes 12 minutes per facility.
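As a hedged diagnostic rather than a fix: if a direct-path insert behaves differently, the slowdown is probably on the write side (index maintenance, triggers, or buffer cache contention on the target). Table and column names below are placeholders:

insert /*+ append */ into facility_stage
  select facility_id, visit_date, total_charges
  from facility_source;
commit;  -- a direct-path insert must be committed before the table can be re-read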
View 3 Replies
Jun 30, 2013
It seems certain queries search by the number of days to ship (the number of days between the order and shipping dates). What kind of index would improve the performance of these queries?
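A hedged sketch: because the predicate is on a computed value, a plain index on either date column won't be used; a function-based index on the difference itself can be (table and column names are assumptions):

create index orders_days_to_ship_idx
  on orders (ship_date - order_date);

-- queries must then filter on exactly the same expression, e.g.:
select order_id from orders where ship_date - order_date > 7;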
View 2 Replies
May 24, 2011
I am stuck with a query which is taking a lot of time to execute. Below is the pseudo code of the same:
SELECT TAB_ALIAS1.COL1,TAB_ALIAS1.COL2,TAB_ALIAS1.COL3
FROM TABLE1 TAB_ALIAS1
WHERE TAB_ALIAS1.COL4 = <INPUT PARAMETER1>
AND TRUNC(TAB_ALIAS1.ELAP_TIME) =
(
SELECT MAX(ELAP_TIME)
[code]....
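A hedged rewrite idea, assuming the correlated MAX subquery reads TABLE1 a second time: a single pass with an analytic function removes the extra scan:

SELECT col1, col2, col3
FROM  (SELECT tab_alias1.col1, tab_alias1.col2, tab_alias1.col3,
              RANK() OVER (ORDER BY TRUNC(tab_alias1.elap_time) DESC) rnk
       FROM   table1 tab_alias1
       WHERE  tab_alias1.col4 = :input_parameter1)
WHERE rnk = 1;

RANK() keeps every row tied on the latest truncated ELAP_TIME, which appears to match the original intent; verify against the full subquery before adopting it.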
View 6 Replies
Jan 28, 2011
We are using the below query to create a Materialized View, but it has been running for 3 hours now. It is an Oracle 9i database running on HP-UX. The query is as follows:
(SELECT
/*+ use_nl(A) parallel (A,4)*/
A.ICD_CODE AS ICD_CODE,
A.ICD_DESC AS ICD_DESC,
A.PROC_GROUP as PROC_GROUP,
B.COMPL_ICD_CODE AS COMPL_ICD_CODE,
B.COMPL_GRP_TXT AS COMPL_GRP_TXT,
C.PROC_TYPE AS PROC_TYPE ,
[code]....
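While it runs, a hedged way to see whether the statement is progressing at all is V$SESSION_LONGOPS:

select sid, opname, target, sofar, totalwork,
       round(sofar / totalwork * 100, 1) as pct_done
from   v$session_longops
where  totalwork > 0
and    sofar <> totalwork;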
View 21 Replies
Oct 28, 2010
Name some database tool with which I can check the SQL queries that my application is running.
NOTE: I do not want to check the queries I am executing at the SQL command prompt, but the queries being run by my application at the backend.
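Besides GUI tools, a hedged sketch of doing it straight from the dictionary: join V$SESSION to V$SQL for what application sessions are currently executing (add a filter on whatever MODULE or PROGRAM your application reports):

select s.sid, s.username, s.module, q.sql_text
from   v$session s
join   v$sql q on q.sql_id = s.sql_id
              and q.child_number = s.sql_child_number
where  s.type = 'USER';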
View 4 Replies
Jan 31, 2012
In search queries we generally select 10-25 columns (more can't be displayed on the screen) from 5-10 tables.
Say, in the case of an insurance application, the search might be on policy number, policy holder's first name, policy holder's last name, region, policy type, etc.
And we don't display too many columns on the screen: say 4 tables collectively have 4 * 20 = 80 columns, and we display around 12-15 columns, 2-3 of which carry aggregates.
Since the search criteria (e.g. first name, last name, policy number, etc.) are not known until the last moment, it will be a generic dynamic query.
Is it possible that instead we create a materialized view whose query has only the join conditions (no filter conditions) and selects only the columns to be displayed on the screen? We would then refresh the materialized view (to pick up recent business transactions) and fire the refined query, with the filter criteria, against this materialized view:
Select col1,col2,col3,col4,col5
From tab1,tab2,tab3,tab4
Where tab1.col1=tab2.col1
And tab2.col2=tab3.col2
And tab2.col2=tab4.col2;
Will it improve the performance of the search functionality?
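A hedged sketch of the idea as stated, using the placeholder names from the example above; whether it pays off depends mainly on whether the join is the expensive part of the search and the refresh is cheap enough:

create materialized view search_mv
  build immediate
  refresh force on demand
as
select tab1.col1, tab2.col2, tab3.col3, tab4.col4, tab1.col5
from   tab1, tab2, tab3, tab4
where  tab1.col1 = tab2.col1
and    tab2.col2 = tab3.col2
and    tab2.col2 = tab4.col2;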
View 2 Replies
Sep 9, 2011
Here we have a scenario where we want to find all the SQL statements that were executed in a particular time window. The SQL statements are executed via our application. I tried the AWR report, but it shows only the queries that took a long time to execute. I also tried V$SESSION and V$SQLAREA. How can I view the SQL statements executed in a particular session / the current session?
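A hedged starting point: V$SQL only keeps what is still cached in the shared pool, but its LAST_ACTIVE_TIME column lets you filter by time window; for example, statements active in the last 30 minutes:

select sql_id, last_active_time, executions, sql_text
from   v$sql
where  last_active_time > sysdate - 30/1440
order  by last_active_time desc;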
View 3 Replies
Aug 28, 2013
Does the PARALLEL hint work in cursor queries? The cursor query is something like:
cursor c is
  select /*+ parallel(s,8) */ ...
  from ref_tab s
  <where condition>;
The table ref_tab holds data for a single day at any point in time and gets truncated before the next day's data is loaded. On average the table holds around 7 million rows and doesn't contain any index (we think that's fine, as we load the whole set in one go). And we are using bulk logic with SAVE EXCEPTIONS to open the cursor and load the data into the target table.
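For what it's worth, a PARALLEL hint in an explicit cursor's SELECT is honored like in any other query. A hedged, minimal sketch of the whole shape (ref_tab's columns and the target table are placeholders):

declare
  cursor c is
    select /*+ parallel(s,8) */ s.id, s.val
    from   ref_tab s;
  type t_rows is table of c%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    fetch c bulk collect into l_rows limit 10000;
    exit when l_rows.count = 0;
    forall i in 1 .. l_rows.count save exceptions
      insert into target_tab values l_rows(i);
  end loop;
  close c;
  commit;
exception
  when others then
    if c%isopen then close c; end if;
    raise;  -- on ORA-24381, the per-row details are in sql%bulk_exceptions
end;
/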
View 13 Replies
Feb 11, 2011
I have a few queries on PGA memory management. Since these queries are based on 2-3 examples that are not exactly the same by nature, I am summarising them per my understanding, as follows.
As I understand it, many workareas can be allocated to a single SQL statement, and the number and sizes of these workareas are controlled internally by Oracle when automatic memory management is in effect (PGA_AGGREGATE_TARGET set and WORKAREA_SIZE_POLICY=AUTO). Since many sessions share the PGA memory, the amount of memory available to each session may vary, and if too little memory is available to a session for sorting, the TEMP tablespace is used.
[1] Can we say paging happens, and can it be checked, at this time?
[2] Is there a difference in how memory is handled while populating PL/SQL tables?
I ask because I have encountered ORA-04030 while some of our developers were populating PL/SQL tables, but never for sorting, hash joins, etc. Though I don't remember the width of the PL/SQL table, I am sure the developer used a LIMIT clause during BULK COLLECT and still faced the issue.
With a single session on the server, I noticed that the values displayed by the Linux 'free' command and the output values from sesstat did not match at all, even though no heavy OS process was involved during the period. I was expecting the 'used' and 'free' values displayed by the free command to change so that their difference approximately equals the before-and-after values of 'session pga memory'.
[3] Isn't it expected to match?
[4] Can we say that in dedicated server mode, at any moment in time, the SUM of 'session pga memory' represents all the memory used by Oracle apart from the SGA at that point in time?
select sum(value)/1024/1024 "memory in MB" from v$sesstat where statistic#=20;
During one of the tests I got the following output (I divide the value by 10 in the query for visibility and to avoid formatting problems):
SQL> select a.name, to_char(b.value/10, '999,999,999') value
  2    from v$statname a, v$mystat b
  3   where a.statistic# = b.statistic#
  4     and a.name like '%ga memory%';
[code]...
The above query still shows these values even though the PL/SQL block finished executing 30 minutes ago.
[5] Do we call this a 'memory leak', where memory is not released even though some time has passed since the session did anything? Of course I am not checking at the OS level; as mentioned under question [3] above, the values won't match!
Still, the output of the free command for reference (after the PL/SQL block executed):
SQL> !free
             total       used       free     shared    buffers     cached
Mem:       3016796    2999660      17136          0       4308    1173260
-/+ buffers/cache:    1822092    1194704
Swap:      1048568     636124     412444
(After the PL/SQL block executed:)
SQL> select * from v$pgastat;

NAME                                    VALUE  UNIT
aggregate PGA target parameter      524288000  bytes
aggregate PGA auto target           456256512  bytes
global memory bound                  26214400  bytes
total PGA inuse                      17328128  bytes
[code]...
[6] What could be the significance of negative values of 'session pga memory/max'?
Lastly: we have an OLTP system, and at night we run batch processes in 2-4 sessions.
Suppose I have 10 GB RAM and a PGA setting of 3.5 GB. I want the batch-process sessions to use the maximum possible memory during the night, and to toggle the setting back in the morning.
[7] With the above settings (10 GB RAM and 3.5 GB PGA), how can I divide the memory among the 4 sessions?
Shall I set 1) PGA_AGGREGATE_TARGET=0, 2) WORKAREA_SIZE_POLICY=MANUAL, 3) SORT_AREA_SIZE, 4) HASH_AREA_SIZE?
[8] What would be approximate values for parameters 3 and 4? Will it be a straight 3.5 GB / 4?
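For [7]/[8], a hedged sketch of the manual route, set per batch session rather than instance-wide. Values are illustrative; note that SORT_AREA_SIZE and HASH_AREA_SIZE apply per workarea, and one session can hold several workareas at once, so the split is not a straight 3.5 GB / 4:

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 524288000;   -- ~500 MB per sort workarea
alter session set hash_area_size = 1048576000;  -- ~1 GB per hash workarea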
View 8 Replies
Nov 17, 2011
We have a big hierarchical query which is now running for a long time (around 6 hours; earlier it ran in 3 hours). We have to tune this query so that we run the jobs within the stipulated time frame.
The query inserts around 42 million records into the table WK_ACCT_WSTORE; I have attached it in a text file.
View 4 Replies
Jul 4, 2013
I would like to know if using a VARCHAR parameter in SQL queries against a NUMBER column can degrade performance.
Ex:
procedure testa (myparam varchar) is
  var1 tab.col1%type;  -- declaration added; "tab" stands in for the real table name
begin
  select col1 into var1 from tab where colno = myparam;
end;
Here COLNO is a NUMBER column and MYPARAM is a VARCHAR. I feel it's better to change the parameter to NUMBER.
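A hedged note: with a NUMBER column compared to a VARCHAR value, Oracle implicitly converts the value side, so the bigger risks are ORA-01722 on non-numeric input and skewed optimizer estimates rather than a lost index; still, making the conversion explicit (or changing the parameter's datatype) is cleaner. A sketch, reusing the placeholder names above:

procedure testa (myparam in varchar2) is
  var1 tab.col1%type;
begin
  select col1 into var1
  from   tab
  where  colno = to_number(myparam);  -- or simply declare myparam as number
end testa;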
View 7 Replies
Nov 7, 2012
I would like to know how we can predict how many rows are fetched per second for a particular query, and which factors are responsible for this. How does the whole process of fetching records from the source happen? That is, when a query is fired, how does it access the table and fetch records, which factors influence this, and how can we predict how many rows can be fetched per second?
View 8 Replies
Feb 22, 2013
The metric "Loader Throughput (rows per second)" raises a notification on my Oracle 11gR2 instance:
Loader Throughput (rows per second) for <hostname>:1158_Management_Service,XMLLoader0 exceeded the critical threshold (3000). Current value: 3695.652173913043478260869565217391304348
My problem is that I can't interpret this value, and I don't know whether it is really so bad that I have to do something.
To what does this value belong, and what does it mean?
View 3 Replies
Jan 28, 2011
When I view Database Throughput metric data, e.g. Consistent Read Gets (per second) using "Real Time: 30 Second Refresh", the timestamps appear to be 5 hours behind the current time. This can also be observed in the "Last Upload" timestamp in the "All Metrics" view of the database instance for the Throughput metrics.
It was not like this in EM GC 10.2.0.2. I'm wondering if anything changed in GC 11g.
I even tried setting low alarm thresholds for the "Consistent Read Gets" metric; that doesn't seem to affect the timestamps of the real-time data.
View 1 Replies
Jan 23, 2007
Our system has always been running on a MySQL database, and recently we switched to Oracle. As the current system is coded using MySQL query syntax, when I run the program against the Oracle database I get an error. The language I'm using is JSP.
This is the error message:
The following queries could not run on Oracle. How do I convert these MySQL queries to Oracle-compatible queries?
SELECT productID,productName FROM products order by productName;
select newsID,newsDate,newsHeadLine1 from news order by newsDate Desc limit 3
SELECT fuji_products.productID, productName_Display FROM products,products_availability where products_availability.productID=products.productID and (product_status='enabled' or product_status='all') AND category='12'
SELECT catID, catSub1 from category where catSub = '"+ prodCat +"' AND catSub1 is not null group by catSub1 order by catSub1
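As a hedged sketch for the one construct above that older Oracle versions flatly reject, MySQL's LIMIT: the usual rewrite is ROWNUM over an ordered inline view:

select newsID, newsDate, newsHeadLine1
from  (select newsID, newsDate, newsHeadLine1
       from   news
       order  by newsDate desc)
where rownum <= 3;

The last statement may also fail for a different reason: it selects catID while grouping only by catSub1, which MySQL tolerates but Oracle rejects with ORA-00979.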
View 6 Replies
Nov 22, 2012
I tried to make my query as simple as possible while still containing the problem:
SELECT A.Id,A.Attachment,A.CreateDateTime,A.No
,rownum as RowNumber
FROM (
SELECT A.Id,A.Attachment,A.CreateDateTime,A.No FROM GOutgoingLetter A ,
(
SELECT B.Id, B.createdatetime as of0 FROM GOutgoingLetter B
WHERE
Exists
[code]...
When I use both ORDER BY and ROWNUM in my query, two columns of the final SELECT are null: NO and Attachment, although these columns aren't empty. When I comment out either "ORDER BY B.of0" or ", rownum as RowNumber", everything is correct!
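A hedged workaround sketch: computing the row number analytically, with the ordering stated once inside OVER(), avoids combining ROWNUM with a separate ORDER BY altogether (aliases follow the posted query):

select A.Id, A.Attachment, A.CreateDateTime, A.No,
       row_number() over (order by A.CreateDateTime) as RowNumber
from   GOutgoingLetter A;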
View 1 Replies
Mar 29, 2010
select rownum, CATR_ID, CAT_ID, CATR_REG_COPY, CATR_REG_LABEL, CATR_ACQUIRED_DATE, CATR_REG_DATE, CATR_MEDIA_COMMENTS, CATR_WITH_DIGITAL,
CATR_ORIGINAL, CATR_LINK, CATR_CREATED_BY, CATR_CREATED_DATE, CATR_MODIFIED_BY, CATR_MODIFIED_DATE, CATR_CHECKOUT, Available,
CATR_RETURN_DATE, LOCN_ID, LOCN_SITE, LOCN_LOCATION, MTYPE_GROUP, MTYPE_NAME, ACCESS_LEVEL, DESCRIPTION, CAT_TITLE, CAT_DESCRIPTION,
CATEGORY_ID, CAT_AUTHOR, CAT_PUBLISHED_DATE, CAT_PUBLISHER, CAT_EVAL_RELEVANT_KEYWORDS, CAT_REG_NUMBER, CAT_REG_SUBNUMBER, U_NAME
[Code]..
There are over 1500 records, but this query does not return any rows. If I change rownum >= 100 to rownum <= 100, it returns the first hundred records though... What is wrong here?
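The hedged explanation: ROWNUM is assigned as rows are returned, so the first candidate row gets ROWNUM = 1, fails the ROWNUM >= 100 test, is discarded, and the next candidate gets ROWNUM = 1 again; no row can ever reach 100. Materialize the row number in an inline view first, then filter (my_table stands in for the real query):

select *
from  (select t.*, rownum rn from my_table t)
where rn >= 100;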
View 12 Replies
Jan 15, 2013
In my SQL query, how can I fetch the row with the max row count? The query has around 10 columns.
View 2 Replies
Apr 4, 2010
When I execute a query organised as in the example below, it retrieves data:
select * from (
select col1, col2, col3, col4....coln
from TABLE_ONE left outer join TABLE_TWO
-- some conditions and group by clause
order by 1 asc
)
where rownum <=1000;
But if I use the column alias col1 in the ORDER BY clause, the query won't retrieve data.
Also, if I use ORDER BY 4 instead of ORDER BY 1, the query won't return data:
select * from (
select col1, col2, col3, col4....coln
from TABLE_ONE left outer join TABLE_TWO
-- some conditions and group by clause
order by 4 asc
)
where rownum <=1000;
The whole issue revolves around the inner ORDER BY clause and the external ROWNUM condition. If I eliminate either of the two, the query works fine. I am not sure whether indexes have some role to play in it.
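A hedged workaround while the root cause is investigated: compute the row number analytically, so the sort and the top-N filter are expressed in one place (names mirror the example; the join key is assumed):

select *
from  (select t1.col1, t1.col2, t1.col3, t1.col4,
              row_number() over (order by t1.col4 asc) rn
       from   TABLE_ONE t1
       left outer join TABLE_TWO t2 on t2.col1 = t1.col1)
where rn <= 1000;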
View 1 Replies
Jan 25, 2013
I'm on Oracle Database 10g Enterprise Edition Release 10.2.0.4.0
I am working on a project where we have lots of views on a different schema. For performance reasons, we create tables from those views and index them.
The application that uses these tables requires a numeric primary key of a specific length, e.g. NUMBER(10). Not all tables have a natural key that matches this requirement, so I added a ROWNUM to the query. I had hoped that casting the ROWNUM to a NUMBER(10) would result in the same datatype once the table is created.
e.g.
SQL> create or replace view rownum_to_number10_vw as
  2  select cast(rownum as number(10)) objectid, dummy from dual;
View created
SQL> describe rownum_to_number10_vw;
Name Type Nullable Default Comments
-------- ---------------- -------- ------- --------
OBJECTID NUMBER(10) Y
DUMMY VARCHAR2(1 BYTE) Y
Perfect! Now create a table based on this view:
SQL> create table rownum_to_number10_tb as
  2  select * from rownum_to_number10_vw;
Table created
SQL> describe rownum_to_number10_tb;
Name Type Nullable Default Comments
-------- ---------------- -------- ------- --------
OBJECTID NUMBER Y
DUMMY    VARCHAR2(1 BYTE) Y
Oracle does not pick up on the NUMBER(10) cast!
How can I force Oracle to create a column with the same datatype as the underlying query?
PS: I know that the 10 in NUMBER(10) is more like a constraint than a datatype, but the application that uses this table will create an additional column if the datatype is > 10. I want to prevent that from happening...
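A hedged workaround: skip deriving the structure from the query altogether; pre-create the table with explicit datatypes and load it with a plain INSERT, which preserves NUMBER(10):

create table rownum_to_number10_tb (
  objectid number(10),
  dummy    varchar2(1 byte)
);

insert into rownum_to_number10_tb
  select objectid, dummy from rownum_to_number10_vw;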
View 8 Replies
Feb 18, 2013
I have a requirement in SQL to number each row, hence I thought of using ROWNUM. But the SQL query I'm using contains a UNION operator, so I used it like this:
select a, b, rownum as field1 from table1
union
select c, d, 1 as field1 from table2
Will the above query solve my purpose?
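A hedged answer sketch: as written, ROWNUM is evaluated inside the first branch before the UNION, and UNION's duplicate elimination can reorder rows, so it won't number the combined result. Wrapping the UNION and numbering outside is the usual shape:

select u.*, rownum as field1
from  (select a, b from table1
       union
       select c, d from table2) u;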
View 11 Replies