I am facing a very strange issue with one of our Oracle queries. The query usually completes in a minute or two, the execution plan is good, and it performs as expected most of the time. The query fetches about 1000-2000 records each day.
But on a given day, the query takes about 30-40 minutes to complete. Upon checking the load on the DB server, there are no other processes running which could impact the run time of this query. Moreover, the record counts fetched are almost the same as on other days. There is no pattern to when this phenomenon occurs; it just happens once in a while.
The configuration is Oracle 10g in a RAC environment on Linux.
Is there a way to know how much time a query will take to execute without running it, just like using the autotrace (traceonly) and explain plan utilities?
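For what it's worth, a minimal SQL*Plus sketch (EMP and DEPTNO are placeholder names): with TRACEONLY EXPLAIN, SELECT statements are parsed but not executed, so you get the estimated plan and cost without paying the runtime.

SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM emp WHERE deptno = 10;      -- plan is shown, query is not run

EXPLAIN PLAN FOR
SELECT * FROM emp WHERE deptno = 10;      -- likewise never executed
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Note that both give the optimizer's estimates only; there is no utility that predicts actual wall-clock time without running the statement.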
In our production environment some stored procedures execute for a long duration, but when the same SP is executed from the PL/SQL Developer client it runs very quickly.
We have an Oracle 10g database with RAC and Data Guard. When we look at the AWR report, the wait time shown by Oracle for this database is very high.

Service Time: 15.36%
Wait Time:    84.64%

This would imply Oracle is waiting for resources 85% of the time and only processing SQL queries during 15% of its non-idle time. However, when we check the OS (RHEL), the iowait is only about 10% and the CPU is 80% idle. This means that processing horsepower is available.
As such, the results from the OS and from the Oracle database (AWR report) seem contradictory: the OS says we have CPU/IO capacity, whereas Oracle says we don't.
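One way to cross-check, sketched below, is to break the waits down by class with V$SYSTEM_WAIT_CLASS (available in 10g); the AWR ratio says nothing about which resource is being waited on, and user I/O or cluster waits can dominate even while the CPUs sit idle.

SELECT wait_class, total_waits, time_waited   -- TIME_WAITED is in centiseconds
FROM   v$system_wait_class
ORDER  BY time_waited DESC;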
How can I check the average time taken by an execution plan? I have a very big query and it changes its execution plan very often; we would like to lock in the best execution plan, and to find it I would like to know the average execution time the query takes when it runs with each different execution plan.
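Assuming the statement's SQL_ID is known, a sketch like the following groups the cursor cache by plan: V$SQL keeps one row per child cursor, and PLAN_HASH_VALUE identifies the plan, so the average elapsed time per plan falls out directly (DBA_HIST_SQLSTAT gives the same picture historically).

SELECT plan_hash_value,
       SUM(executions) AS execs,
       ROUND(SUM(elapsed_time) / NULLIF(SUM(executions), 0) / 1e6, 2) AS avg_secs
FROM   v$sql
WHERE  sql_id = '&sql_id'          -- substitute the statement's SQL_ID
GROUP  BY plan_hash_value;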
I have a huge package containing numerous procedures. In it there is one wrapper procedure which is invoked from the front end and which in turn calls other stored procedures within the package. The called stored procedures return their results to the wrapper procedure, which in turn calls other stored procedures.
The ultimate result is returned to the calling environment by the wrapper procedure itself. I need to know the total time taken by the wrapper procedure along with the individual execution time of each procedure it calls. However, I am not allowed to modify the code to add timestamp capturing.
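Since the package source cannot be touched, one option, sketched here, is DBMS_PROFILER (shipped with 10g; the PLSQL_PROFILER_* tables are created by proftab.sql). The wrapper call below is a hypothetical placeholder.

BEGIN
  DBMS_PROFILER.START_PROFILER('wrapper timing');
  my_pkg.wrapper_proc;               -- hypothetical wrapper invocation
  DBMS_PROFILER.STOP_PROFILER;
END;
/

-- Then aggregate the time per program unit (TOTAL_TIME is in nanoseconds):
SELECT u.unit_name, ROUND(SUM(d.total_time) / 1e9, 3) AS secs
FROM   plsql_profiler_units u
JOIN   plsql_profiler_data  d
       ON d.runid = u.runid AND d.unit_number = u.unit_number
GROUP  BY u.unit_name
ORDER  BY secs DESC;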
Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id from bhi_tracking where description like 'Multilateral:%'

The explain plan for this query is:

Bhi_tracking is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then insert the rows into it, to enhance performance?
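Two hedged sketches (the index name is an assumption): because the LIKE pattern has a literal prefix, an ordinary B-tree index on DESCRIPTION can serve it; and after a large purge, moving the table in place reclaims space more cheaply than an export/import round trip.

CREATE INDEX bhi_tracking_desc_idx ON bhi_tracking (description);

-- After deleting old rows, compact the segment instead of export/import:
ALTER TABLE bhi_tracking MOVE;
ALTER INDEX bhi_tracking_desc_idx REBUILD;   -- MOVE invalidates the indexes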
- First execution: using all indexes on the 2 tables
- Second execution: using only the indexes on the first table, full table scan on the other
- Third execution: full table scan on both tables.
Now, I show the objects and related information here:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
  COUNT(*)
----------
    272708
system@dbwap> select count(*) from my_wap.news_content;
  COUNT(*)
----------
     95092
system@dbwap> desc my_wap.news_content;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 ID                                        NOT NULL NUMBER(11)
 SUBJECT                                   NOT NULL VARCHAR2(500)
 TITLE                                              VARCHAR2(4000)
 STATE                                              NUMBER(1)
 IMGPATH                                            VARCHAR2(500)
 ALIGN                                              VARCHAR2(10)
In my code I am using a delete statement which is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT)
   IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT
       FROM LOAD_TRADE_ORDER
       WHERE IND_IS_BAD_RECORD = 'N');
Every column in the IN clause and in the select list has an index on it.
The number of rows to be deleted varies each time (it may be hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it holds only a few distinct values.
The table TRADE_ORDER_EMP_ALLOCATION also has a RANGE partition on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with its indexes and partitions.
Is there a way to make the above delete statement execute faster?
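One rewrite worth testing, as a sketch only, is the correlated EXISTS form; it is semantically equivalent to the IN form and often lets the optimizer drive a hash semi-join across the partitions instead of probing indexes row by row.

DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE EXISTS (
  SELECT 1
  FROM   LOAD_TRADE_ORDER L
  WHERE  L.IND_IS_BAD_RECORD        = 'N'
  AND    L.ARTEMIS_SOURCE_SYSTEM_ID = T.ARTEMIS_SOURCE_SYSTEM_ID
  AND    L.NM_ARTEMIS_SOURCE_SYSTEM = T.NM_ARTEMIS_SOURCE_SYSTEM
  AND    L.CD_BOOK_KEY              = T.CD_BOOK_KEY
  AND    L.ACTIVITY_DT              = T.ACTIVITY_DT
);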
Is there any Oracle dictionary view which captures the queries being run by users on the database and the time taken to execute those queries? We need to find the OS user, not the database user, since we have to identify the users who are executing long-running queries. We require this basically to monitor long-running queries on the database.
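One approach, sketched here: V$SESSION carries OSUSER and can be joined to the SQL currently executing (the 300-second threshold is an arbitrary assumption).

SELECT s.osuser, s.username, s.sid, s.serial#,
       s.last_call_et AS secs_in_call,    -- seconds since the current call started
       q.sql_text
FROM   v$session s
JOIN   v$sqlarea q ON q.sql_id = s.sql_id
WHERE  s.status = 'ACTIVE'
AND    s.last_call_et > 300;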
I'm planning to decrease execution time by managing the redo log files, but I'm stuck on a couple of points:
- Why is my OPTIMAL_LOGFILE_SIZE showing NULL?
- I resized the log files from 100M to 200M and also added one more log group with 200M capacity, but it turned out that didn't decrease my execution time.
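On the NULL question, one known cause: OPTIMAL_LOGFILE_SIZE in V$INSTANCE_RECOVERY is only populated while the MTTR advisor is active, i.e. when FAST_START_MTTR_TARGET is set to a non-zero value. A sketch (300 seconds is just an example target):

ALTER SYSTEM SET fast_start_mttr_target = 300;        -- enables the MTTR advisor
SELECT optimal_logfile_size FROM v$instance_recovery; -- in MB; no longer NULL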
The below query is utilizing more than 17 GB of temp space, but it is still failing due to insufficient temp space. Is there any way to rewrite this query to reduce the temp utilization?
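To see which session and segment type is actually eating the space while the statement runs, a monitoring sketch over V$TEMPSEG_USAGE (the block-size lookup assumes the temp tablespace uses the default block size):

SELECT s.sid, s.username, u.tablespace, u.segtype,
       ROUND(u.blocks * p.value / 1024 / 1024) AS mb_used
FROM   v$tempseg_usage u
JOIN   v$session s ON s.saddr = u.session_addr
CROSS JOIN (SELECT value FROM v$parameter WHERE name = 'db_block_size') p
ORDER  BY mb_used DESC;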
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                        1        0.00          0.00
  db file sequential read                      85704        0.31        460.55
  latch free                                       1        0.00          0.00
  SQL*Net message from client                      1       14.98         14.98
[code]...
Why did the elapsed time change when the data and the plan haven't changed at all? Also, why does the plan have different stats for rounds 1 and 2 on db1 and db2?
I ran it 2 times in each round on each database, so hard parsing should not be the issue. Also, why is the number of rows accessed different in db1, db2 and db3, db4, especially for step 1, when the count of crt.qtn_cun_id is similar?
In fact, when the query was taking long I was the only user on the system. Also, I used hard-coded values (no bind variables at all).
I checked num_rows and distinct keys as well, which are quite similar across all 4 databases. Also, no stats were gathered during the query execution.
I have the WHERE clause below; I would like to know how Oracle decides which predicate to evaluate first.
WHERE S.PERSPECTIVE = 'S'
  AND s.shipment_gid = sb.shipment_gid
  AND SB.BILL_GID = CBIL.INVOICE_GID
  AND inv.invoice_gid = cbil.invoice_gid
  AND S.SOURCE_LOCATION_GID = LC.LOCATION_GID
  AND l.location_gid = lc.location_gid
  AND TRUNC(cbil.insert_date) = TRUNC(tc.tesco_cal_date)
  AND (lc.location_gid = 'N' OR lc.corporation_gid = 'TESCO.10719')
  AND s.source_location_gid = lc.location_gid
  AND tc.tesco_year = '2013'
  AND tc.tesco_period = 6
  AND tc.tesco_week_number = 23
When I run a form, no information shows up until I click Execute Query. I need the info to be there automatically, so I can browse with the Previous and Next buttons.
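A common Oracle Forms pattern, sketched here, is to fire the query at startup from a WHEN-NEW-FORM-INSTANCE trigger (MY_BLOCK is a placeholder for your data block):

-- WHEN-NEW-FORM-INSTANCE trigger
GO_BLOCK('MY_BLOCK');   -- navigate to the block to be queried
EXECUTE_QUERY;          -- same effect as pressing Execute Query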
I have the below query, for which the ename column has an index. To my knowledge, the 1st and 2nd queries below will not use the index. Hence I used the 3rd statement, but that too is not using the index. Finally I used the 4th query, but even the 4th query is not using the index. How do I make this query use my index? Do I need to create a function-based index for this?
1. select * from emp where ename != 'BH';
2. select * from emp where ename <> 'BH';
3. select * from emp where ename not in ('BH');
4. select * from emp where ename < 'BH' or ename > 'BH';
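For what it's worth, an inequality rejects only one value, so the optimizer usually judges a full scan cheaper than reading nearly the whole index. You can test that assumption with an INDEX hint and compare the plans (EMP_ENAME_IDX is an assumed index name):

SELECT /*+ INDEX(e emp_ename_idx) */ *
FROM   emp e
WHERE  e.ename < 'BH' OR e.ename > 'BH';   -- form 4, now hinted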
It takes different approaches (execution plans) while executing with the same set of parameters, due to which it sometimes executes successfully but sometimes fills all the TEMP space and fails. I am pasting both execution plans (different from the explain plan) below:
I have three fields, the first for START TIME, the second for END TIME and the third for DURATION, and given START TIME and END TIME I am computing the duration in minutes in code.
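For reference, if the fields are DATE values the arithmetic is direct, since a DATE difference is expressed in days (SHIFTS and the column names are placeholders):

SELECT start_time, end_time,
       (end_time - start_time) * 24 * 60 AS duration_minutes
FROM   shifts;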
I want to execute a DML query with an execute immediate statement. The DML query's length exceeds 4000 characters. This query has XQuery-related conditions, so I cannot split the query. When I try to execute it, it gives "string literal too long". I tried with DBMS_SQL.PARSE() and DBMS_SQL.EXECUTE as well, but it gives the same error. I have to execute this DML query inside a procedure. We are using Oracle 10g.
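One possible cause, hedged: "string literal too long" (ORA-01704) is about a single quoted literal exceeding the limit, not about the statement length, so assembling the text from shorter literals into a PL/SQL VARCHAR2(32767) variable often gets around it without splitting the query itself. A sketch with placeholder fragments:

DECLARE
  v_sql VARCHAR2(32767);                     -- PL/SQL limit, well above 4000
BEGIN
  -- Each quoted chunk stays short; the full statement only ever exists in
  -- the variable. Table name and predicate here are hypothetical.
  v_sql := 'UPDATE my_table SET status = ''DONE'''
        || ' WHERE id = 1';
  EXECUTE IMMEDIATE v_sql;
END;
/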
I'm trying to figure out, based on total scheduled shift time and scheduled breaks, what the effective scheduled time is by hour for a particular employee.
So I have a shift query that gives me this (using a cross-day shift here because they do happen; not sure if that will impact things):
I want to know how I can find which query is taking more time. For example, some queries are run from Unix, Java, TOAD and SQL*Plus, and one query takes much more time to execute; how can I get that query and all its details?
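As a starting point, a sketch over V$SQL: the MODULE column usually identifies the client (SQL*Plus, TOAD, JDBC Thin Client, a Unix batch program), and sorting by total elapsed time surfaces the expensive statements.

SELECT *
FROM  (SELECT sql_id, module, executions,
              ROUND(elapsed_time / 1e6, 1) AS total_secs,
              SUBSTR(sql_text, 1, 100)     AS sql_text
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE rownum <= 10;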
I have a query which executes fast in the dev environment but takes a very long time in the qa environment. What criteria apply when this behaviour occurs? Though qa has more data than dev, it is still taking a long time even for 1 row, when I use rownum <= 1 in the query. So what should I check for this?