SQL & PL/SQL :: Know Execution Time Without Running Query?
Jul 13, 2008
Is there a way to know how much time a query will take to execute without actually running it, similar to using the AUTOTRACE (TRACEONLY) and EXPLAIN PLAN utilities?
View 16 Replies View Related
I want to know whether there is any way to find out the execution time of a SQL query.
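For reference, a minimal sketch of getting the optimizer's estimated run time without executing the statement (the table and predicate are placeholders; the Time column in the plan output is only the optimizer's estimate derived from the cost, not a measured value):
EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10;   -- placeholder statement

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- The Time column of the displayed plan shows the estimated elapsed time per step.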
View 1 Replies View Related
I am facing a very strange issue with one of our Oracle queries. The query usually completes in a minute or two. The execution plan is good and the query performs as expected most of the time, fetching about 1,000-2,000 records each day.
But on a given day the query takes about 30-40 minutes to complete. Checking the load on the DB server shows no other processes running that could affect its run time, and the record counts fetched are almost the same as on other days. There is no pattern to when this happens; it occurs only once in a while.
The configuration is Oracle 10g in a RAC environment on Linux.
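One way to check whether the slow runs used a different plan or simply waited longer is to compare elapsed time and plan hash value per AWR snapshot for that statement (a sketch; querying AWR views requires the Diagnostics Pack license, and the sql_id below is a placeholder):
SELECT st.snap_id,
       st.plan_hash_value,
       st.executions_delta                AS executions,
       ROUND(st.elapsed_time_delta / 1e6) AS elapsed_secs,
       st.disk_reads_delta                AS disk_reads
FROM   dba_hist_sqlstat st
WHERE  st.sql_id = '&sql_id'              -- placeholder: the statement's sql_id
ORDER  BY st.snap_id;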
The query below is consuming very high CPU (almost 98%) and taking a long time to execute.
SELECT ancestor,
Max(D.alarmstate) ALARMSTATE,
Max(D.sialarmstate) SIALARMSTATE,
Max(D.uncralarmstate) UNCRALARMSTATE,
Max(M.commstate) COMMSTATE,
Max(M.nncommstate) NNCOMMSTATE,
Max(M.servicestate) SERVICESTATE,
Max(M.abnormal) ABNORMAL,
CASE
[code]....
I have a query (report) that runs in under 5 minutes in one schema but runs for a long time in a second schema. I have identified that an index scan processes more than 2,000 million records in the second schema but only about 440 million in the first, which is why the first is fast. I expect the same behavior in the second schema.
I have verified the following
All records in tables in 2 schemas are same.
All indexes are same
Analyzed the tables
Gathered histograms on all the columns, matching the first schema.
But I still have the same problem and do not know what the cause could be.
Table_name    Num_Rows    Blocks
PRPSL_LST_T5866107159
PRPSL_WKFLW_ACTVTY_T5829904030
ITEM_CHR_VAL_T5134340104049020
ITEM_RGN_ASSN_T8571220137215
I have also attached two screenshots of the OEM plans.
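A quick way to see why the index does so much more work in the second schema is to compare its statistics side by side; a sketch (the index and schema names are placeholders):
SELECT owner, index_name, num_rows, distinct_keys,
       leaf_blocks, clustering_factor, last_analyzed
FROM   all_indexes
WHERE  index_name = 'MY_INDEX'                -- placeholder: the index being scanned
AND    owner IN ('SCHEMA_1', 'SCHEMA_2');     -- the two schemas being compared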
I am using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
I have 2 schemas in my application.
1. Application schema
2. EOD(End of day) schema.
The End of Day schema is populated from the Application schema whenever a user runs the EOD process. The tables are pulled like this:
1. Master tables: always deleted and reinserted at each EOD process.
2. Log tables for each transaction table: the delta between the last EOD and the current EOD is pulled and used to populate the transaction tables.
3. Transaction tables: these are populated from the log tables pulled in the previous step. The logic is as follows.
Now, based on these tables, about 30 reports are generated in the EOD schema. Note that each transaction table has an EOD_ID, and every report uses the condition <transaction table>.EOD_ID = <current EOD_ID>.
My log table is contract_log and my transaction table is contract in the EOD schema.
contract_log table has data like this
contract_number  contract_date  customer_id  qty   rate  effective_date
1                1/Jan/2010     CUST-01      10    10    1/Jan/2010
1                NULL           NULL         NULL  11    2/Jan/2010
EOD on 1st Jan 2010 constructs contract table as
contract_number  contract_date  customer_id  qty  rate  eod_id
1                1/Jan/2010     CUST-01      10   10    EOD-1
(The rate change to 11 is not visible on 1st Jan 2010 because it is effective only from 2nd Jan 2010.)
EOD on 2nd Jan 2010 constructs contract table as
contract_number  contract_date  customer_id  qty  rate  eod_id
1                1/Jan/2010     CUST-01      10   11    EOD-2
(The rate change to 11 is visible on 2nd Jan 2010.)
This logic works fine, but once we run more than 20-30 EODs the processing time increases to 10-15 hours.
It took some time to figure out the issue: a single query runs in a few seconds from Toad or PL/SQL Developer, but as part of the whole package each query takes 2-3 hours.
The problem we found was that the Oracle execution plans go bad once the process starts, so we now analyze the tables after they are pulled. This solved the problem: currently the whole process takes only about 12-13 minutes, of which about 3 minutes is spent analyzing tables and indexes. I know this is a temporary solution, as I need to get away from analyzing tables and indexes online.
My code for gathering table and index statistics is below:
PROCEDURE sp_gather_table_index_stats(pc_table_name VARCHAR2) IS
  -- All indexes belonging to the table whose statistics are being refreshed
  CURSOR cur_ind IS
    SELECT index_name
    FROM   user_indexes
    WHERE  table_name = pc_table_name;
BEGIN
  -- DBMS_STATS can be called directly; dynamic SQL is not needed here
  DBMS_STATS.gather_table_stats(ownname => USER, tabname => pc_table_name);

  -- Refresh statistics for each index on the table
  FOR cur_ind_rows IN cur_ind LOOP
    DBMS_STATS.gather_index_stats(ownname => USER, indname => cur_ind_rows.index_name);
  END LOOP;
END;
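As a possible simplification (a sketch only, not tested against this schema): DBMS_STATS can gather the index statistics in the same call via the CASCADE option, which removes the need for the cursor loop:
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname => USER,
    tabname => 'CONTRACT',     -- placeholder table name
    cascade => TRUE);          -- also gathers statistics on all of the table's indexes
END;
/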
In our production environment some stored procedures take a long time to execute, but when the same procedure is run from the PL/SQL Developer client it completes very quickly.
View 3 Replies View Related
How can I check the average time taken by an execution plan? I have a very big query that changes its execution plan quite often, and we would like to lock in the best plan. To find it, I would like to know the average execution time of the query under each of the different execution plans it has used.
View 7 Replies View Related
I have a huge package containing numerous procedures. One wrapper procedure is invoked from the front end and in turn calls other stored procedures within the package; those procedures return their results to the wrapper, which then calls further procedures.
The ultimate result is returned to the calling environment by the wrapper procedure itself. I need to know the total time taken by the wrapper procedure along with the individual execution time of each procedure it calls. I am not allowed to modify the code to add timestamp capturing, though.
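Since the package code cannot be changed, one hedged option is the PL/SQL profiler, which only needs a wrapper block around a single call; a sketch (the procedure name is a placeholder, and the PLSQL_PROFILER_* tables must already have been created with proftab.sql):
DECLARE
  l_status BINARY_INTEGER;
BEGIN
  l_status := DBMS_PROFILER.start_profiler(run_comment => 'wrapper timing run');
  my_pkg.wrapper_proc;                     -- placeholder: the wrapper procedure call
  l_status := DBMS_PROFILER.stop_profiler;
END;
/
-- Per-unit and per-line times are then available in PLSQL_PROFILER_UNITS and PLSQL_PROFILER_DATA.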
The problem can be described as follows:
- First execution: uses all indexes on both tables
- Second execution: uses only the indexes on the first table, with a full table scan on the other
- Third execution: full table scans on both tables
Now I show the objects and related information here:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
COUNT(*)
----------
272708
system@dbwap> select count(*) from my_wap.news_content;
COUNT(*)
----------
95092
system@dbwap> desc my_wap.news_content;
Name Null? Type
----------------------------------------------------- -------- ----------------
ID NOT NULL NUMBER(11)
SUBJECT NOT NULL VARCHAR2(500)
TITLE VARCHAR2(4000)
STATE NUMBER(1)
IMGPATH VARCHAR2(500)
ALIGN VARCHAR2(10)
[Code]....
In my code I am using a DELETE statement which is taking too much time to execute.
Statement is as follow:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');
Tables used:
- TRADE_ORDER_EMP_ALLOCATION: row count 329,525,880
- LOAD_TRADE_ORDER: row count 29,281
Every column in the IN clause and the SELECT clause has an index on it.
The number of rows to be deleted varies each run (it may be in the hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though the column contains only a couple of distinct values.
The table TRADE_ORDER_EMP_ALLOCATION is RANGE partitioned on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with their indexes and partitions.
Is there a way to make the above DELETE statement execute faster?
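As one thing worth trying (a sketch only, not verified against these tables): driving the delete with a correlated EXISTS against the small LOAD_TRADE_ORDER table is logically equivalent to the multi-column IN and sometimes gives the optimizer a better path into the large partitioned table, so it is worth comparing the plans of both forms:
DELETE FROM trade_order_emp_allocation t
WHERE  EXISTS (
         SELECT 1
         FROM   load_trade_order l
         WHERE  l.ind_is_bad_record        = 'N'
         AND    l.artemis_source_system_id = t.artemis_source_system_id
         AND    l.nm_artemis_source_system = t.nm_artemis_source_system
         AND    l.cd_book_key              = t.cd_book_key
         AND    l.activity_dt              = t.activity_dt);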
Is there any Oracle dictionary view that captures the queries being run by users on the database and the time taken to execute those queries? We need to find the OS user, not the database user, since we have to identify the users who are executing long-running queries. We basically require this to monitor long-running queries on the database.
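For statements that are currently running, V$SESSION exposes the OS user and how long the current call has been active; a sketch:
SELECT s.osuser,
       s.username,
       s.sid,
       s.last_call_et AS seconds_in_current_call,
       q.sql_id,
       q.sql_text
FROM   v$session s
JOIN   v$sql     q ON q.sql_id       = s.sql_id
                  AND q.child_number = s.sql_child_number
WHERE  s.status = 'ACTIVE'
AND    s.type   = 'USER'
ORDER  BY s.last_call_et DESC;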
View 2 Replies View Related
I'm planning to decrease execution time by managing the redo log files, but I'm stuck on a couple of points:
- Why is my OPTIMAL_LOGFILE_SIZE showing NULL?
- I tried resizing the log files from 100M to 200M and also added one more log group with a 200M capacity, but that did not decrease my execution time.
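For what it's worth, OPTIMAL_LOGFILE_SIZE in V$INSTANCE_RECOVERY is only populated once FAST_START_MTTR_TARGET is set to a non-zero value, so NULL usually just means that parameter is unset; a quick check:
SELECT optimal_logfile_size, target_mttr, estimated_mttr
FROM   v$instance_recovery;

SHOW PARAMETER fast_start_mttr_target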
View 12 Replies View Related
The attached query gives a consistent execution plan but different timings across runs.
SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/
DISTINCT odr.dve_id
FROM company_requirements crt, orders odr, lelo_products la_pct
WHERE crt.qtn_cun_id = 10035637--10000021--10035667
AND crt.ID = odr.crt_id_quote_implemented
AND NVL (odr.cancellation_date, '31-Dec-9999') = '31-Dec-9999'
[code]....
We have 4 databases, 2 on each server: db1 and db2 on server1, and db3 and db4 on server2.
Below is the record count for the filter column of the biggest table in the query, taken on all 4 databases (the column is nullable):
select count(*) from company_requirements crt WHERE crt.qtn_cun_id = 10035637
db1 = 73335
db2 = 89073
db3 = 81182
db4 = 82936
First I executed the query on db1 and db2 while there wasn't any user logged on to the system
db1
**********
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 17.47 473.39 85704 1508102 0 0
[code]...
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 85704 0.31 460.55
latch free 1 0.00 0.00
SQL*Net message from client 1 14.98 14.98
[code]...
Why did the elapsed time change when neither the data nor the plan changed at all? Also, why does the plan have different statistics for rounds 1 and 2 on db1 and db2?
I ran it twice per round on each database, so hard parsing should not be the issue. Also, why is the number of rows accessed different between db1/db2 and db3/db4, especially for step 1, when the count for crt.qtn_cun_id is similar?
In fact, when the query was taking long I was the only user on the system. I also used hard-coded values (no bind variables at all).
I also checked num_rows and distinct_keys, which are quite similar across all 4 databases, and no statistics were gathered during query execution.
What should I have checked or monitored?
We have a bunch of jobs scheduled using DBMS_JOB (yes, I know I should be using DBMS_SCHEDULER, but we haven't migrated there yet). We are running Oracle 11.2.0.3 on Windows Server 2003 x64.
For example, we have a job that is supposed to run every Wednesday at 20:00. The interval we have set up is "NEXT_DAY(TRUNC(SYSDATE), 'WEDNESDAY')+20/24". This has been working as intended. Today (Monday), however, the job kicked off at 11:52, which was the wrong day and the wrong time.
I don't see anything weird in my alert log. Where else should I check to figure out why this job ran today?
JOB LAST_DATE LAST_SEC NEXT_DATE NEXT_SEC INTERVAL WHAT
293 1/7/2013 11:52:46 AM 11:52:46 1/9/2013 8:00:00 PM 20:00:00 NEXT_DAY(TRUNC(SYSDATE), 'WEDNESDAY')+20/24 ACQUISITIONS.WORKLOAD_STATUS_UPDATE_NOTIF;
When I execute the query below I get a different count every time and cannot work out what is happening.
SELECT
count(1)
FROM table1 where table1.LAST_UPDATE_DATE >= current_date - interval '9' day
I have the WHERE clause below, and I would like to know how Oracle decides which predicate to evaluate first.
WHERE S.PERSPECTIVE = 'S'
AND s.shipment_gid = sb.shipment_gid
AND SB.BILL_GID = CBIL.INVOICE_GID
AND inv.invoice_gid = cbil.invoice_gid
AND S.SOURCE_LOCATION_GID = LC.LOCATION_GID
AND l.location_gid = lc.location_gid
AND TRUNC(cbil.insert_date) = TRUNC(tc.tesco_cal_date)
AND (lc.location_gid = 'N' OR lc.corporation_gid = 'TESCO.10719')
AND s.source_location_gid = lc.location_gid
AND tc.tesco_year = '2013'
AND tc.tesco_period = 6
AND tc.tesco_week_number = 23
When I run a form, no information shows up until I click Execute Query. I need the information to appear automatically so the user can browse with the Previous and Next buttons.
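A common approach (a sketch; the block name is a placeholder) is to fire the query automatically from a form-level WHEN-NEW-FORM-INSTANCE trigger using the standard Forms built-ins:
-- WHEN-NEW-FORM-INSTANCE trigger (form level)
BEGIN
  GO_BLOCK('MY_DATA_BLOCK');   -- placeholder: navigate to the base-table block
  EXECUTE_QUERY;               -- populate the block as soon as the form opens
END;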
View 3 Replies View Related
I have the queries below, and the ename column has an index. As far as I know, queries 1 and 2 will not use the index. Hence I tried the 3rd statement, but it does not use the index either. Finally I tried the 4th query, but even that does not use the index. How do I make this query use my index? Do I need to create a function-based index for this?
1. select * from emp where ename !='BH' ;
2. select * from emp where ename <> 'BH';
3. select * from emp where ename not in ('BH');
4. select * from emp where ename < 'BH' or ename > 'BH';
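As a quick check (a sketch; the alias and index name emp_ename_idx are placeholders), you can hint the index on form 4 and compare plans; for a not-equal condition the optimizer often still prefers a full scan because it expects nearly every row to qualify:
EXPLAIN PLAN FOR
SELECT /*+ INDEX(e emp_ename_idx) */ *
FROM   emp e
WHERE  e.ename < 'BH' OR e.ename > 'BH';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);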
Our application servers will be running a SELECT which returns zero rows all the time. This SELECT is put into a package, and the package will be called by the application servers very frequently, which is causing unnecessary CPU usage.
Original query and plan
SQL> SELECT SEGMENT_JOB_ID, SEGMENT_SET_JOB_ID, SEGMENT_ID, TARGET_VERSION
FROM AIMUSER.SEGMENT_JOBS
WHERE SEGMENT_JOB_ID NOT IN
(SELECT SEGMENT_JOB_ID
FROM AIMUSER.SEGMENT_JOBS);
[code]....
Which option would be better, or do we have other options? They need to pass the columns, with zero rows, to a ref cursor.
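If the intent is simply to hand the ref cursor the right columns with zero rows, a cheap hedged alternative is a predicate that is always false, which avoids scanning SEGMENT_JOBS twice; and where the two sources really do differ, a NOT EXISTS anti-join is usually cheaper than NOT IN (assuming SEGMENT_JOB_ID is NOT NULL):
-- Always-empty result with the same column shape
SELECT segment_job_id, segment_set_job_id, segment_id, target_version
FROM   aimuser.segment_jobs
WHERE  1 = 0;

-- NOT EXISTS form of the original anti-join
SELECT s.segment_job_id, s.segment_set_job_id, s.segment_id, s.target_version
FROM   aimuser.segment_jobs s
WHERE  NOT EXISTS (SELECT 1
                   FROM   aimuser.segment_jobs o
                   WHERE  o.segment_job_id = s.segment_job_id);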
I am executing the query below:
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
vendor_data vd2,
vendor_data vd3,
(SELECT rownumber,
[code]....
It takes different approaches (execution plans) when executing with the same set of parameters. As a result it sometimes executes successfully, but sometimes it fills all of the TEMP space and fails. I am pasting both execution plans (which differ from the explain plan) below:
I. Successful Execution Plan:
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 65612 (100)| | | |
|* 1 | HASH JOIN | | 1 | 6121 | 65612 (1)| 00:13:08 | | |
[code]....
II. Execution Plan that Failed on TEMP Space:
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 1967 (100)| | | |
|* 1 | FILTER | | | | | | | |
| 2 | SORT GROUP BY | | 1 | 8233 | 1967 (3)| 00:00:24 | | |
|* 3 | HASH JOIN | | 1 | 8233 | 1966 (3)| 00:00:24 | | |
[code]....
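On 10g, one hedged way to pin the plan that works is a stored outline captured while the good plan is in the cursor cache; a sketch (the category name is a placeholder, and the statement text must match the application's text exactly):
-- 1. Find the cursor for the good run
SELECT hash_value, child_number, plan_hash_value
FROM   v$sql
WHERE  sql_text LIKE 'INSERT INTO temp_vendor%';

-- 2. Create an outline from that cursor
BEGIN
  DBMS_OUTLN.create_outline(hash_value   => :hash_value,
                            child_number => :child_number,
                            category     => 'VENDOR_LOAD');   -- placeholder category
END;
/

-- 3. Sessions must enable the category for the outline to be used
ALTER SESSION SET use_stored_outlines = VENDOR_LOAD;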
I have three fields: the first is START TIME, the second is END TIME, and the third is DURATION. Given the START TIME and END TIME, I get the duration in minutes by using the code below:
""SELECT TO_CHAR
(TRUNC (SYSDATE)
+ (TO_DATE (:T_DONATION_END_TIME, 'HH24MI') - TO_DATE (:T_DONATION_START_TIME, 'HH24MI')),
'HH24MI')
INTO :T_DONATION_DURATION
[code].......
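If the requirement is the duration as a number of minutes rather than an HH24MI string, a simpler hedged sketch is plain date arithmetic (a date difference is in days, so multiply by 24*60; this assumes the end time is not past midnight):
SELECT (TO_DATE(:T_DONATION_END_TIME,   'HH24MI')
      - TO_DATE(:T_DONATION_START_TIME, 'HH24MI')) * 24 * 60 AS duration_minutes
FROM   dual;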
I want to execute a DML statement with EXECUTE IMMEDIATE. The DML text exceeds 4,000 characters and contains XQuery-related conditions, so I cannot split the query. When I try to execute it I get "string literal too long". I also tried DBMS_SQL.PARSE() and DBMS_SQL.EXECUTE, but they give the same error. I have to execute this DML inside a procedure. We are using Oracle 10g.
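The error normally comes from a single quoted literal in the statement exceeding the limit rather than from the overall statement length, so one hedged workaround is to build the text in a PL/SQL variable from several shorter literals (and to bind long values instead of inlining them); a sketch with placeholder chunks:
DECLARE
  l_stmt VARCHAR2(32767);   -- a PL/SQL variable can hold far more than a single SQL literal
BEGIN
  -- Assemble the statement from short literals instead of one long one
  l_stmt := 'UPDATE my_xml_table t SET t.doc = ... ';                   -- placeholder chunk 1
  l_stmt := l_stmt || ' WHERE existsNode(t.doc, ''/root/item'') = 1 ';  -- placeholder chunk 2
  l_stmt := l_stmt || ' AND t.id = :1 ';
  EXECUTE IMMEDIATE l_stmt USING 42;   -- bind values rather than embedding them as literals
END;
/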
View 13 Replies View Related
I am facing a GROUP BY issue when running the query below. The error is ORA-00979: not a GROUP BY expression. I suspect it is caused by the subquery in the SELECT clause.
SELECT
WHINR100.COMPANY,
WHINH210.SEQN,
FXINH039.OTBP,
WHINR100.BPID,
TCCOM100.NAMA,
WHINR100.ITEM,
WHINH210.RCNO,
FXINH051.btch,
case when (whinh210.ORNO in (select fxinh033.pdno from fxinh033))
[code]...
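In general, every non-aggregated expression in the SELECT list of a GROUP BY query, including a CASE that wraps a subquery, must either appear in the GROUP BY clause or be wrapped in an aggregate. A hedged sketch of one common fix is to compute the CASE in an inline view first and aggregate outside it (the table and column names below are simplified placeholders based on the query above):
SELECT v.company,
       v.item,
       MAX(v.has_pd_order) AS has_pd_order,
       COUNT(*)            AS row_cnt
FROM  (SELECT w.company,
              w.item,
              CASE WHEN w.orno IN (SELECT f.pdno FROM fxinh033 f)
                   THEN 'Y' ELSE 'N' END AS has_pd_order
       FROM   whinh210 w) v
GROUP  BY v.company, v.item;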
I'm trying to create a report in the following format
Year Name Reg1 Reg2 Reg3
2001 Al 3 4 5
2001 Le 4 1 1
2001 7 5 6
2002 Sue 2 4 1
2002 Al 1 3 6
2002 Jim 6 1 3
2002 16 15 16
2003 Jim 4 -3 2
2003 Le -2 4 5
2003 20 16 23
Note that the totals are accumulating horizontally, broken on year.
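A possible starting point is GROUP BY with ROLLUP on the name, which produces the per-year total rows with a blank name (a sketch; the table and column names are placeholders). If the year totals really need to accumulate from one year to the next, the subtotal rows would instead need an analytic running SUM ... OVER (ORDER BY year):
SELECT yr,
       name,
       SUM(reg1) AS reg1,
       SUM(reg2) AS reg2,
       SUM(reg3) AS reg3
FROM   region_figures                -- placeholder table
GROUP  BY yr, ROLLUP(name)           -- detail rows per name plus one total row per year
ORDER  BY yr, GROUPING(name), name;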
My DB Version: 10.2.0
OS: Windows Server 2003
I am trying to import a table from an export dump file that I previously took with expdp on the same host,
using the command below:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
After that I zipped the dump and moved it to an external USB drive. Now I need that table again, so I copied the dump back and unzipped it.
The command I am using for the import is:
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
But the import is still running and is not showing any count of rows imported.
I have already created the tablespace the table was in previously before it was dropped, but when I check the tablespace, no space is being consumed either. One error I got previously while performing this task is:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_03": cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
[code]....
I checked streams_pool_size; it showed zero, so I set it to 48M, and after that:
SQL> show parameter streams_pool_size;
NAME TYPE VALUE
-----------
streams_pool_size big integer 48M
But it still takes a long time.
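One way to see whether the Data Pump job is actually making progress (a sketch using standard dictionary views):
-- Current Data Pump jobs and their state
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;

-- Long-running operations of the worker sessions, with completion estimates
SELECT sid, opname, target, sofar, totalwork, time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar < totalwork;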
I should say that I am an absolute noob when it comes to SQL. I need a script to run in TOAD that will reference a CSV file saved on my local hard drive. I'll try to describe exactly what I need to do.
The current script, which I use via TOAD on our company's READ ONLY database, is this:
SELECT d.number_id,
d.status_id
FROM table.number_t d
WHERE d.number_id IN ('1230001', '1230002', '1230003')
This will return a result for each number that exists within the table.number_t table, along with the status of each number, i.e. active or inactive. A very basic query.
What I need to be able to do is run that query, but instead of having to copy each number into TOAD manually, I need TOAD to check a .csv file of said numbers and then return the results. So I imagine the query would look something like:
SELECT d.number_id,
d.status_id,
FROM table.number_t d
WHERE d.number_id IN (check
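The usual approach is an external table over the CSV, which can then be joined to number_t instead of maintaining an IN list by hand. A hedged sketch is below (directory path, file name, and column layout are placeholders); note that the file must sit on the database server, not the client PC, and creating the directory and table needs privileges you may not have on a read-only database, in which case TOAD's own import wizard into a scratch table is the practical alternative:
CREATE OR REPLACE DIRECTORY csv_dir AS '/data/csv_files';   -- placeholder server-side path

CREATE TABLE number_list_ext (
  number_id VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('numbers.csv')   -- placeholder file name
);

SELECT d.number_id, d.status_id
FROM   table.number_t  d
JOIN   number_list_ext e ON e.number_id = d.number_id;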
Provide me the query to check whether our database is running with an spfile or with a pfile.
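A common check is below: if VALUE is non-null the instance was started with an spfile, and if it is null a pfile was used.
SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') AS parameter_file_type,
       value                                  AS spfile_location
FROM   v$parameter
WHERE  name = 'spfile';

-- or simply:
SHOW PARAMETER spfile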
View 2 Replies View Related
How do I find a particular SQL statement, or a set of statements, executed against a given table that is either executed very frequently against that table or is high-impact SQL against that table? I am currently looking through the AWR reports to go through all the queries, but I was wondering whether there are any dictionary views where this information can be found.
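A sketch using the cursor cache is below (the owner and table names are placeholders); DBA_HIST_SQL_PLAN and DBA_HIST_SQLSTAT can be joined the same way for AWR history:
SELECT s.sql_id,
       s.executions,
       s.buffer_gets,
       s.elapsed_time,
       s.sql_text
FROM   v$sqlarea s
WHERE  s.sql_id IN (SELECT p.sql_id
                    FROM   v$sql_plan p
                    WHERE  p.object_owner = 'SCOTT'    -- placeholder owner
                    AND    p.object_name  = 'EMP')     -- placeholder table
ORDER  BY s.executions DESC;   -- or by buffer_gets / elapsed_time for "high impact"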
View 2 Replies View Related
I have a query similar to the one below embedded in a Pro*C program. The query works fine when run by the Pro*C program against an 8i database, but fails with "ORA-02015: cannot select FOR UPDATE from remote table" when run against a 10g database. The Pro*C program executes the SQL using "EXEC SQL".
QUERY: Select last_name, first_name from Member
where ....
FOR UPDATE OF LAST_NAME;
The other thing to note is that this SQL query works fine via SQL*Plus in a 10g environment.
ADDITIONAL DETAILS: The above query is selecting data from a base table via a user view
VIEW: select * from
otherschema.member@connection_identifier;
This view was created in this manner to allow the user account access to the underlying table without granting explicit permissions.