Longer SP Execution Time
May 5, 2011. In our production environment some SPs take much longer to execute, but when the same SP is executed from the PL/SQL Developer client it runs very quickly.
Is there any way to know the execution time of a SQL query?
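A quick sketch of two common approaches; the /* my_query */ comment tag below is a hypothetical marker used only to find the statement afterwards. SQL*Plus can time a statement as it runs, and V$SQL records cumulative elapsed time per cursor:

SET TIMING ON
SELECT /* my_query */ COUNT(*) FROM employees;   -- employees is a stand-in table; elapsed time prints after the fetch

SELECT sql_id,
       executions,
       ROUND(elapsed_time / 1e6 / NULLIF(executions, 0), 3) AS avg_elapsed_sec
FROM   v$sql
WHERE  sql_text LIKE '%my_query%';   -- elapsed_time is stored in microseconds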
Is there a way to know how much time a query will take to execute without running it, just like using the autotrace (traceonly) and explain plan utilities?
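For reference, a minimal sketch of both utilities (employees/department_id are stand-in names); note that the time and cost figures in the plan output are only the optimizer's estimates, not a guaranteed runtime:

SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM employees WHERE department_id = 10;   -- parsed and explained, not executed

EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);            -- shows the estimated plan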
View 16 Replies View RelatedI am facing a very strange issue with one of our Oracle query. The query is usually completes in a minute or two. Even the execution plan of the query is good and it works perfect most of the times, as expected. The query fetches about 1000-2000 records each day.
But on a given day, the query takes about 30-40 mins to execute completely. Upon checking the load on DB server, there are no other processes running which can impact the run time of this query. Moreover, the record counts fetched are almost same as compared to other days. There is no pattern observed as that this phenomenon occurs. it all happens once in a while.
Configuration is Oracle 10g with RAC environment on LINUX
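One way to investigate, as a sketch: assuming the slow statement's SQL_ID is known and AWR is available (it requires the Diagnostics Pack license), compare per-snapshot statistics for the statement on the slow day against a normal day:

SELECT s.snap_id,
       q.executions_delta                                          AS execs,
       ROUND(q.elapsed_time_delta / 1e6, 1)                        AS elapsed_sec,
       ROUND(q.disk_reads_delta / NULLIF(q.executions_delta, 0))   AS disk_reads_per_exec,
       q.plan_hash_value
FROM   dba_hist_sqlstat  q
       JOIN dba_hist_snapshot s ON s.snap_id = q.snap_id
WHERE  q.sql_id = '&slow_sql_id'
ORDER  BY s.snap_id;

A jump in disk reads per execution with an unchanged plan_hash_value would point at cache or I/O effects rather than a plan change.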
How can I check the average time taken by each execution plan? I have a very big query that changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time of the query under each of the different plans it has used.
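A sketch using AWR history (Diagnostics Pack required); it groups the recorded workload for one SQL_ID by plan:

SELECT plan_hash_value,
       SUM(executions_delta) AS total_execs,
       ROUND(SUM(elapsed_time_delta) / 1e6
             / NULLIF(SUM(executions_delta), 0), 2) AS avg_elapsed_sec
FROM   dba_hist_sqlstat
WHERE  sql_id = '&my_sql_id'
GROUP  BY plan_hash_value
ORDER  BY avg_elapsed_sec;

Once the best plan_hash_value is known, a SQL plan baseline or stored outline can be used to pin it.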
I have a huge package containing numerous procedures. There is one wrapper procedure which is invoked from the front end and which in turn calls other stored procedures within the package. The called procedures return their results to the wrapper procedure, which in turn calls further stored procedures.
The ultimate result is returned to the calling environment by the wrapper procedure itself. I need to know the total time taken by the wrapper procedure along with the individual execution time of each procedure it calls. I am not allowed to modify the code to add timestamp capturing, though.
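On 11g and later, the hierarchical profiler fits this no-code-change constraint (on 10g, DBMS_PROFILER is the rough equivalent): profiling is switched on around the call from the invoking session, so the package itself stays untouched. A sketch, assuming a directory object PROF_DIR exists and my_pkg.wrapper_proc stands in for the real wrapper:

BEGIN
  DBMS_HPROF.START_PROFILING(location => 'PROF_DIR', filename => 'wrapper.trc');
END;
/
EXEC my_pkg.wrapper_proc;   -- the call being measured
BEGIN
  DBMS_HPROF.STOP_PROFILING;
END;
/
-- wrapper.trc can then be analyzed with DBMS_HPROF.ANALYZE or the plshprof
-- command-line tool to get total and per-subprogram elapsed times.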
The problem is as described below:
- First execution: uses all indexes on both tables
- Second execution: uses only the indexes on the first table, with a full table scan on the other
- Third execution: full table scans on both tables
Now, here are the objects and related information:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
COUNT(*)
----------
272708
system@dbwap> select count(*) from my_wap.news_content;
COUNT(*)
----------
95092
system@dbwap> desc my_wap.news_content;
Name Null? Type
----------------------------------------------------- -------- ----------------
ID NOT NULL NUMBER(11)
SUBJECT NOT NULL VARCHAR2(500)
TITLE VARCHAR2(4000)
STATE NUMBER(1)
IMGPATH VARCHAR2(500)
ALIGN VARCHAR2(10)
[Code]....
In my code I am using a DELETE statement which is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');
Tables used:
- TRADE_ORDER_EMP_ALLOCATION: row count 329,525,880
- LOAD_TRADE_ORDER: row count 29,281
Every column in the IN clause and the SELECT list has an index on it.
The number of rows to be deleted varies each run (it may be in the hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it contains only a few distinct values.
The table TRADE_ORDER_EMP_ALLOCATION also has a RANGE partition on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions defined on it.
Is there a way to make the above DELETE statement execute faster?
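One rewrite worth testing, as a sketch: expressing the lookup as a correlated EXISTS is semantically equivalent here, and the optimizer can often unnest it into a hash semi-join driven by the small LOAD_TRADE_ORDER table instead of probing the 329M-row table through indexes row by row:

DELETE FROM trade_order_emp_allocation t
WHERE  EXISTS (SELECT 1
               FROM   load_trade_order l
               WHERE  l.ind_is_bad_record        = 'N'
               AND    l.artemis_source_system_id = t.artemis_source_system_id
               AND    l.nm_artemis_source_system = t.nm_artemis_source_system
               AND    l.cd_book_key              = t.cd_book_key
               AND    l.activity_dt              = t.activity_dt);

Since ARTEMIS_SOURCE_SYSTEM_ID is the range partition key, a plan that prunes to only the partitions present in LOAD_TRADE_ORDER is what to look for in the explain output.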
Is there any Oracle dictionary view which captures the queries being run by users on the database and the time taken to execute them? We need to find the OS user, not the database user, since we have to identify the users who are executing long-running queries. We basically need this to monitor long-running queries on the database.
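V$SESSION carries the OS user name and can be joined to V$SQL for the statement text; a sketch that lists user sessions active on their current call for more than a minute:

SELECT s.osuser,
       s.username,
       s.sid,
       s.last_call_et AS active_seconds,   -- seconds spent in the current call
       q.sql_text
FROM   v$session s
       JOIN v$sql q ON q.sql_id       = s.sql_id
                   AND q.child_number = s.sql_child_number
WHERE  s.status = 'ACTIVE'
AND    s.type   = 'USER'
AND    s.last_call_et > 60;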
I'm planning to decrease the time taken to execute data loads by managing the redo log files, but I'm stuck on a couple of points: why is my OPTIMAL_LOGFILE_SIZE showing NULL? Also, I tried resizing the log files from 100M to 200M and adding one more log group, also at 200M, but that didn't decrease my execution time.
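On the NULL question: V$INSTANCE_RECOVERY.OPTIMAL_LOGFILE_SIZE is only populated when FAST_START_MTTR_TARGET is set to a nonzero value. A quick check, as a sketch (300 seconds is just an example target):

SHOW PARAMETER fast_start_mttr_target

ALTER SYSTEM SET fast_start_mttr_target = 300;

SELECT optimal_logfile_size FROM v$instance_recovery;   -- suggested size, in MB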
The query below is taking very high CPU, almost 98%, and a long time to execute.
SELECT ancestor,
Max(D.alarmstate) ALARMSTATE,
Max(D.sialarmstate) SIALARMSTATE,
Max(D.uncralarmstate) UNCRALARMSTATE,
Max(M.commstate) COMMSTATE,
Max(M.nncommstate) NNCOMMSTATE,
Max(M.servicestate) SERVICESTATE,
Max(M.abnormal) ABNORMAL,
CASE
[code]....
The attached query gives a consistent execution plan but different timings across runs.
SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/
DISTINCT odr.dve_id
FROM company_requirements crt, orders odr, lelo_products la_pct
WHERE crt.qtn_cun_id = 10035637--10000021--10035667
AND crt.ID = odr.crt_id_quote_implemented
AND NVL (odr.cancellation_date, '31-Dec-9999') = '31-Dec-9999'
[code]....
We have 4 databases, 2 on each server: db1 and db2 on server1, db3 and db4 on server2.
For reference, here is the count of records for a column of the biggest table in the query, taken on all 4 databases (the column is nullable):
select count(*) from company_requirements crt WHERE crt.qtn_cun_id = 10035637
db1 = 73335
db2 = 89073
db3 = 81182
db4 = 82936
First I executed the query on db1 and db2 while no users were logged on to the system.
db1
**********
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 17.47 473.39 85704 1508102 0 0
[code]...
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 85704 0.31 460.55
latch free 1 0.00 0.00
SQL*Net message from client 1 14.98 14.98
[code]...
Why did the elapsed time change when the data and the plan hadn't changed at all? Also, why does the plan have different stats for rounds 1 and 2 on db1 and db2?
I ran it twice per round on each database, so hard parsing should not be the issue. Also, why is the number of rows accessed different between db1/db2 and db3/db4, especially for step 1, when the count of crt.qtn_cun_id is similar?
In fact, when the query was taking long I was the only user on the system. Also, I used hard-coded values (no bind variables at all).
I checked num_rows and distinct keys as well, which are quite similar across all 4 databases. Also, no stats were gathered during the query execution.
What should I have checked or monitored?
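Given that nearly all of the 473.39 s elapsed time in the trace above went to db file sequential read waits (460.55 s over 85,704 reads), a useful next check is the per-step actual rows and buffer/disk figures, which would show whether the extra time is physical I/O that a warm cache avoids elsewhere. A sketch:

-- add the GATHER_PLAN_STATISTICS hint to the posted query and re-run it, e.g.
--   SELECT /*+ GATHER_PLAN_STATISTICS INDEX (CRT CRT_CUN_FK_I) */ DISTINCT odr.dve_id ...
-- then, in the same session, pull the actual row counts and reads per plan step:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));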
I have a piece of code that joins the same table onto itself twice in order to get the previous, current and future years into columns in the same output.
Up until recently this had been working fine, but the most recent data has just been uploaded into the table and now it raises an error.
On the second (left outer) join it now says that a column is ambiguously defined (ORA-00918). It doesn't matter which order the joins are in; it is always the second join that the error pops up on.
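ORA-00918 in a double self-join usually means some column reference can resolve to more than one copy of the table; giving every instance its own alias and qualifying every column normally resolves it. A minimal sketch under assumed names (yearly_data, id, yr and val are all hypothetical):

SELECT cur.id,
       prv.val AS previous_year,
       cur.val AS current_year,
       nxt.val AS next_year
FROM       yearly_data cur
LEFT JOIN  yearly_data prv ON prv.id = cur.id AND prv.yr = cur.yr - 1
LEFT JOIN  yearly_data nxt ON nxt.id = cur.id AND nxt.yr = cur.yr + 1;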
I'm trying to select only codes from a column that are above a certain length. How would this be achieved? I've tried char_length(fieldname) > x in the WHERE clause, but I'm getting the error ORA-00904: "char_length": invalid identifier.
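CHAR_LENGTH is not an Oracle built-in; Oracle's equivalent is LENGTH (or LENGTHB for a byte count). A sketch, with my_table and code as stand-in names:

SELECT code
FROM   my_table
WHERE  LENGTH(code) > 10;   -- LENGTHB(code) > 10 would compare bytes instead of characters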
I am facing a problem in my database: whenever I load any kind of database object into one particular tablespace, it gives:
ORA-08103: object no longer exists
If I load into any other tablespace, everything works fine. I get the error on export/import, creating a table, etc. How can I resolve it? I was wondering whether there is a problem writing to a particular datafile.
The SELECT query runs fine on its own, but when we run the full query (INSERT + SELECT) we get the error below after 8-10 hours. No rows are inserted into the table, but the table does exist.
Oracle Database Version: 11.2.0.3
17:06:30 SQL> @PNLP.sql
17:06:46 128 /
insert into SN.I$_GLBL
*
ERROR at line 1:
ORA-08103: object no longer exists
[code]....
Note: PNLP.sql contains the INSERT + SELECT statement. The SELECT query alone has no issue, but while running the full query we get the above error.
Problem: printing the ref cursor gives an error (ORA-08103: object no longer exists) and the message "no rows selected" when the ref cursor is based on a GTT and a commit is issued in the procedure.
CREATE GLOBAL TEMPORARY TABLE gtt_RB
(
A number
)
/
CREATE TABLE t_RB
(
[code]....
Now, with a commit:
create or replace procedure p_RB ( p_in in number, c out sys_refcursor)
as
begin
update t1_RB set a = p_in ;
insert into gtt_RB(A) values (1);
[code]....
no rows selected
1/ Once values are selected into the ref cursor, why does it not show the result set outside?
2/ Are ref cursors bound to temp tables?
3/ If 'YES', should the same apply to normal tables?
4/ A commit is necessary to preserve the incremental value in t1_rb; how do we commit while using ref cursors?
5/ Is it a bug?
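A likely explanation, consistent with documented GTT behavior: the table was created without an ON COMMIT clause, so it defaults to ON COMMIT DELETE ROWS, and the commit inside the procedure purges the rows out from under the still-open ref cursor, which then raises ORA-08103. Declaring the GTT to preserve rows across commits avoids this; a sketch:

CREATE GLOBAL TEMPORARY TABLE gtt_rb
(
  a NUMBER
)
ON COMMIT PRESERVE ROWS;   -- rows survive COMMIT and last for the session,
                           -- so the ref cursor can still fetch them

This affects only GTTs; a normal heap table's committed rows remain visible to an open cursor, which answers question 3/.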
We recently upgraded from Oracle 10.2.0.4 to 11g with a few bumps in the road, most of which I've been able to resolve, but there's one that continues to confuse me.
It is a pretty vanilla INSERT statement in which the SELECT portion on its own runs in about 2 to 5 seconds (all results returned) on a facility-by-facility basis. When I combine it with an INSERT statement, it ends up running for 12+ minutes per facility. The explain plan looks good, and I've even tried emptying the target table prior to running the INSERT.
I've gathered schema/table statistics to no avail. I also tried running it as a CREATE TABLE AS statement and it still takes the 12 minutes per facility.
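Two low-risk things to compare, as a sketch (target_table, source_view and :fac are stand-ins for the real statement): the plan actually chosen for the INSERT, which can differ from the plan of the bare SELECT, and a direct-path insert, which bypasses conventional buffer-cache maintenance:

-- 1. Capture the plan used for the INSERT rather than the standalone SELECT
EXPLAIN PLAN FOR
INSERT INTO target_table SELECT * FROM source_view WHERE facility_id = :fac;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- 2. Try a direct-path insert; note it locks the table and needs a COMMIT
--    before the table can be queried again in the same session
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM source_view WHERE facility_id = :fac;
COMMIT;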
I have a form reading record information from a flat file and inserting data into a number of varied tables. At the end of the file (after the last line is read), the form issues a commit to make all inserts/updates permanent.
One particular record disappears, and this happens every time I re-run the test. In debug mode, I find it in the right table every time I query that table, right up until the moment the commit is issued in the code, which is the opposite of what should happen. I understand that for some reason a rollback happens for that record (the others are fine after the commit), but my point is that if the record did not qualify for some constraint reason, an error should have popped up right from the INSERT that added the record, right? Then how come I find it in the table up until the moment of the commit? Is the deferrable constraint property a clue worth digging into?
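The deferrable-constraint hypothesis does fit the symptoms: a constraint created DEFERRABLE INITIALLY DEFERRED is checked at COMMIT rather than at the INSERT, so the row stays visible to the inserting session until the commit fails. A minimal sketch of that behavior (demo_t is a hypothetical table):

CREATE TABLE demo_t
(
  id NUMBER,
  CONSTRAINT demo_pk PRIMARY KEY (id) DEFERRABLE INITIALLY DEFERRED
);

INSERT INTO demo_t VALUES (1);
INSERT INTO demo_t VALUES (1);   -- succeeds: the check is deferred
SELECT * FROM demo_t;            -- both rows visible to this session

COMMIT;                          -- fails with ORA-02091 (transaction rolled back),
                                 -- nested with ORA-00001 (unique constraint violated)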
I tried to apply a SQL CASE statement in a SQL*Loader control file (inside " "). The load succeeds with a CASE or DECODE statement of up to 258 characters, but when I add more cases it returns SQL*Loader-350: token longer than max.
LOAD DATA
INFILE 'F:Vouvou20110613_102_951454.unl'
BADFILE 'F:Vouvou.pad'
DISCARDFILE 'F:Vouvou.dic'
replace
INTO TABLE vou_test_2
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
[code].......
The above succeeds. It fails when I edit the CASE statement as follows:
ACCOUNT_2001 "CASE WHEN:ACCOUNTTYPE1='2001'THEN :REWARDAMOUNT1 WHEN :ACCOUNTTYPE2='2001' THEN :REWARDAMOUNT2
WHEN :ACCOUNTTYPE3='2001' THEN :REWARDAMOUNT3
WHEN :ACCOUNTTYPE4='2001' THEN :REWARDAMOUNT4
WHEN :ACCOUNTTYPE5='2001' THEN :REWARDAMOUNT5
WHEN :ACCOUNTTYPE6='2001' THEN :REWARDAMOUNT6
WHEN :ACCOUNTTYPE7='2001' THEN :REWARDAMOUNT7
WHEN :ACCOUNTTYPE8='2001' THEN :REWARDAMOUNT8
WHEN :ACCOUNTTYPE9='2001' THEN :REWARDAMOUNT9
WHEN :ACCOUNTTYPE10='2001' THEN :REWARDAMOUNT10 END"
I then get the error SQL*Loader-350: token longer than max allowable length of 258 chars.
Note: I can't modify the table structure to shorten the column names.
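SQL*Loader treats the whole quoted SQL string as a single token, capped at 258 characters in this release, so a common workaround is to move the CASE logic into a stored function and keep the control-file expression short. A sketch with three of the ten pairs (pick_reward_2001 is a hypothetical helper; extend the parameter list to all ten pairs):

CREATE OR REPLACE FUNCTION pick_reward_2001 (
  p_t1 IN VARCHAR2, p_a1 IN NUMBER,
  p_t2 IN VARCHAR2, p_a2 IN NUMBER,
  p_t3 IN VARCHAR2, p_a3 IN NUMBER
) RETURN NUMBER DETERMINISTIC
IS
BEGIN
  RETURN CASE
           WHEN p_t1 = '2001' THEN p_a1
           WHEN p_t2 = '2001' THEN p_a2
           WHEN p_t3 = '2001' THEN p_a3
         END;
END;
/

The control-file line then stays well under the token limit:

ACCOUNT_2001 "pick_reward_2001(:ACCOUNTTYPE1, :REWARDAMOUNT1, :ACCOUNTTYPE2, :REWARDAMOUNT2, :ACCOUNTTYPE3, :REWARDAMOUNT3)"

Calling PL/SQL per row adds some overhead, but it keeps each control-file token short.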
I am trying to generate DDL for a baseline created in Oracle Enterprise Manager 10g Release 5 Grid Control 10.2.0.5.0. I used to be able to do this in the prior OEM version, 10.2.0.4.0. Since the upgrade I am no longer able to view a baseline that has been generated successfully, and I can also no longer generate DDL from a baseline capture. There are no errors; it just seems to time out.
I was a very contented software tester with SQL*Plus skills when I had access to the classic version. I guess all things must come to an end, as we have started seeing our systems under test migrate to 11g... and along comes the DOS-prompt SQL*Plus.
Where I could previously ignore the cries that TOAD was better, I no longer have my right-click copy/paste option, and I don't know a lot about the DOS-prompt command line.
Our test lab administrator would like us to start working with SQL Developer for our testing needs, I am guessing because it installs automatically with the database. I read that TOAD has a bit more functionality at this point (correct?).
What is better for a general database user with a background in classic SQL*Plus: the DOS-prompt SQL*Plus or the more visual SQL Developer?
I have a query which had a join:
a.c1 = b.c1 and a.c2 = @var
where @var is user-supplied input at runtime. We had an index on a.c2, and the CBO would use this index to generate an optimized query plan. We found some records from table "b" were being dropped due to the inner join, so we changed the join to:
a.c1(+) = b.c1 and nvl(a.c2, @var) = @var
This query no longer uses the index; instead it does a full table scan, causing the query to slow down. I have tried creating an index on nvl(a.c2, '31-dec-9999'),
but the CBO won't use it. Is there any way to create an index on this column so that the full table scan can be avoided?
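The function-based index can't be used because the indexed expression nvl(a.c2, '31-dec-9999') never appears in the query; the predicate puts the runtime variable inside the NVL. One approach, as a sketch (it assumes the intent of nvl(a.c2, @var) = @var is "c2 equals the variable, or c2 is null"), is to state that intent explicitly and index c2 with a trailing constant so rows with NULL c2 are also indexed:

CREATE INDEX a_c2_ix ON a (c2, 0);   -- the constant 0 makes rows with NULL c2 indexable

SELECT b.c1, a.c2
FROM   a, b
WHERE  a.c1(+) = b.c1
AND    (a.c2 = :var OR a.c2 IS NULL);

The OR form is logically equivalent to the NVL predicate and may let the optimizer use OR-expansion against the index instead of a full table scan.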
I created a data block on a table. When I insert new records, only one record gets saved; the last record no longer exists.
When I try to extract the date tag value from XML data, the time is stored in 20120602153021 format, i.e. YYYYMMDDHH24MISS. The following statement extracts only the date, as 02-JUN-12, and does not extract the time part.
If I try the same in SQL*Plus with TO_DATE it works, but it fails in PL/SQL.
XML data:
<?xml version="1.0"?>
<RECORD>
<REGTIMESTAMP>20120601130010</REGTIMESTAMP>
</RECORD>
PL/SQL Extract:
CURSOR c_xml_record
IS
SELECT extract(value(d), '//ACTIVATIONTS/text()').getStringVal() AS REGTIMESTAMP,
FROM t_xml_data x,
[code].......
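An Oracle DATE always stores the time component; what usually hides it is the session's default NLS_DATE_FORMAT (e.g. DD-MON-RR) when the value is displayed. Converting the extracted string with the full mask and displaying it explicitly shows the time survives; a sketch using the sample value from the XML above:

SELECT TO_CHAR(
         TO_DATE('20120601130010', 'YYYYMMDDHH24MISS'),
         'DD-MON-YYYY HH24:MI:SS') AS regtimestamp
FROM   dual;
-- 01-JUN-2012 13:00:10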
How do I set the interval in DBMS_JOB to every 4 hours, but with a starting time of 3:00 am?
I set trunc(sysdate) + 4/24, but then it runs starting at 12:00, 4:00, and so on.
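A sketch (my_proc is a hypothetical procedure): schedule the first run at 03:00 and compute each next run from the top of the current hour, so the pattern stays 03:00, 07:00, 11:00, 15:00, 19:00, 23:00 as long as each run starts close to its scheduled time:

VARIABLE jobno NUMBER

BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'my_proc;',
    next_date => TRUNC(SYSDATE) + 1 + 3/24,          -- first run: tomorrow 03:00
    interval  => 'TRUNC(SYSDATE, ''HH'') + 4/24');   -- 4 hours after the top of the current hour
  COMMIT;
END;
/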
I have one inline view query which shows exec/fetch: 2 sec/19 sec. It gives 500 rows as its final output. When I add rownum < 100 it shows exec/fetch: 1 sec/0 sec, but I cannot use this rownum < 100 alternative because this is an inline subquery of a big query.
What do these exec and fetch times mean?
How can I improve the fetch time, especially with a subquery?
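Roughly, exec covers the work done at execute time and fetch covers the row-retrieval round trips to the client; when most of the time sits in fetch, the client's fetch array size is a common lever. A quick experiment in SQL*Plus (a sketch, not a fix for the underlying query cost):

SET ARRAYSIZE 500   -- fetch 500 rows per round trip instead of the default 15
SET TIMING ON
-- re-run the query; fewer client round trips usually shrinks the fetch
-- phase for result sets in the hundreds of rows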
I have three fields: the first is START TIME, the second END TIME, and the third DURATION. Given the START TIME and END TIME, I get the duration in minutes using this code:
SELECT TO_CHAR
(TRUNC (SYSDATE)
+ (TO_DATE (:T_DONATION_END_TIME, 'HH24MI') - TO_DATE (:T_DONATION_START_TIME, 'HH24MI')),
'HH24MI')
INTO :T_DONATION_DURATION
[code].......
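If a numeric minute count is wanted rather than an HH24MI string, date arithmetic gives it directly; a sketch, assuming the end time is on the same day and not before the start time:

SELECT (TO_DATE(:t_donation_end_time,   'HH24MI')
      - TO_DATE(:t_donation_start_time, 'HH24MI')) * 24 * 60 AS duration_minutes
FROM   dual;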
I have a table which stores appointment start times and appointment end times. For the sake of this thread I will call them appt.start_time and appt.end_time. I then have a check-in time and a check-out time for the customer. The only thing is, the ONLY way to distinguish between a check-in time and a check-out time is which one is earlier and which one is later. Obviously the earlier time will be the check-in and the later time will be the check-out.
This is fine; however, sometimes they may forget to check a person in or out, and I need to determine whether the time should be inserted into the check_in column or the check_out column. To do this I was thinking of comparing the time with the appointment start and end times: if it is closer to the appointment start time, put it into the check_in column, and if it is closer to the appointment end time, put it into the check_out column. But I was wondering how I would go about doing this.
The time I want to compare against the appointment start and end times I will store in a variable called v_time and have it as part of my query; I'm just unsure how to write the query to check whether the time is closer to the start or the end time.
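A sketch of the closest-time test (appt and its columns follow the naming in the post; appt.id and :appt_id are assumed identifiers, and ties go to check-in here): subtracting two DATE values yields a difference in days, and ABS of that difference measures closeness:

SELECT CASE
         WHEN ABS(v_time - appt.start_time) <= ABS(v_time - appt.end_time)
         THEN 'CHECK_IN'
         ELSE 'CHECK_OUT'
       END AS target_column
FROM   appt
WHERE  appt.id = :appt_id;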
I have this query:
select asl1.agentsessionid, asl1.endtime, asl2.starttime, 127 as agentstatus
from
(
select asl1.agentsessionid as sessionid1, min(asl2.agentsessionid) as sessionid2
from cti.agentsessionlog asl1
[code]...
As you can see from my WHERE statement, I want to compare the end time with the start time. This query returns zero results. Is there a way to write the WHERE statement differently so that I get results?
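Since the goal appears to be pairing each session's end time with the next session's start time, an analytic LEAD over the session log may avoid the self-join and its empty-result pitfalls entirely. A sketch against the posted table (it assumes sessions are ordered by starttime; add a PARTITION BY clause, e.g. per agent, if sequences should be tracked per agent):

SELECT agentsessionid,
       endtime,
       LEAD(starttime) OVER (ORDER BY starttime) AS next_starttime,
       127 AS agentstatus
FROM   cti.agentsessionlog;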