I have a session on a system here that has been stuck on a DELETE statement for a very long time, and the session is pegging the CPU. Using TOAD, here is the "current statement":
DELETE FROM WORKFLOW
WHERE ID = :B1
ID is the primary key of the table. Here are some relevant stats, also from TOAD's session browser:
Elapsed time = 35507986900
CPU time = 35531815481
Buffer gets = 972040769
Disk reads = 951289273
Executions = 71462
I'm not sure I understand "executions", because from the information I have from the people who initiated this, this particular delete should only be occurring 30 times... maybe that stat means something other than what I think it does. I also ran a trace for 30 seconds using:
TKPROF: Release 11.2.0.1.0 - Development on Wed Feb 27 04:04:50 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
The resulting file is attached. Now the offending query in the tkprof, based on my interpretation, is the select from the CMTE00 table, which contains 2344482 rows and has no index on the workflow_id column. The relationship between the CMTE00 and WORKFLOW tables is 1 to 1. There is a foreign key on CMTE00 pointing to the primary key of WORKFLOW, which is what I assume initiated this query - I assume this is Oracle checking referential integrity, since our code is not executing that statement. Also of interest: prior to this delete statement, the corresponding entry in CMTE00 was deleted in the same transaction. Google searching "db file scattered read" led me to one of your (Don - if you read this) articles, which appears to indicate that individual blocks are being fetched off the disk, and this is what is taking up all the time.
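If the foreign key column on CMTE00 were indexed, Oracle could verify referential integrity for each parent-row delete with an index probe instead of a full scan of the child table. A minimal sketch, assuming the foreign key column really is named WORKFLOW_ID (the index name is hypothetical):

CREATE INDEX cmte00_workflow_id_ix    -- hypothetical name
    ON cmte00 (workflow_id);          -- the unindexed FK column described above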
In my code I am using a delete statement which is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
 WHERE (ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT) IN
       (SELECT ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT
          FROM LOAD_TRADE_ORDER
         WHERE IND_IS_BAD_RECORD = 'N');
Every column in the IN clause and in the select list has an index on it.
The number of rows to be deleted varies each time (it may be in the hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though the column contains distinct values.
The table TRADE_ORDER_EMP_ALLOCATION also has a RANGE partition on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions on it.
Is there a way to get fast execution of the above delete statement?
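One common reformulation is to drive the delete with a correlated EXISTS instead of a multi-column IN, so the optimizer can run a semi-join against the indexed key columns. A sketch, assuming the column names above are correct and none of the key columns are nullable:

DELETE FROM trade_order_emp_allocation t
 WHERE EXISTS (
        SELECT 1
          FROM load_trade_order l
         WHERE l.ind_is_bad_record = 'N'
           AND l.artemis_source_system_id = t.artemis_source_system_id
           AND l.nm_artemis_source_system = t.nm_artemis_source_system
           AND l.cd_book_key = t.cd_book_key
           AND l.activity_dt = t.activity_dt);

Whether this beats the IN form depends on the plan the optimizer picks; comparing the two execution plans is the quickest way to tell.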
We have data archive scripts; these scripts move data for a date range to a different table. The script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, i.e. around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is on the primary key. More info below:
Plan hash value: 2798378986

-------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 | 1239K |    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
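Note that the plan's own estimate (cost 87, 00:00:02) is nowhere near the observed 2-3 hours, so one first check is whether optimizer statistics on MON_TXNS are current; stale statistics can make a full scan look cheaper than it is. A minimal sketch, assuming the table lives in the connected schema:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,        -- assumes MON_TXNS is in the connected schema
    tabname => 'MON_TXNS',
    cascade => TRUE);       -- refresh index statistics as well
END;
/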
I am using JDBC to run a few queries from my Java program (a multi-threaded one). I am facing an issue where a select statement is blocking a delete statement. From the Java code's point of view, there are 2 different threads accessing the same tables (with different DB connection objects).
When the block occurs (which I was able to detect from the Java thread dump; there is a lock in Oracle), the output below is what I see:
SQL> SELECT TO_CHAR(sysdate, 'DD-MON-YYYY HH24:MI:SS')
         || ' User ' || s1.username || '@' || s1.machine
         || ' ( SID= ' || s1.sid || ' ) with the statement: ' || sqlt2.sql_text
         || ' is blocking the SQL statement on ' || s2.username || '@'
         || s2.machine || ' ( SID=' || s2.sid || ' ) blocked SQL -> '
         || sqlt1.sql_text AS blocking_status
       FROM v$lock l1, v$session s1, v$lock l2, v$session s2, v$sql sqlt1, v$sql sqlt2
      WHERE s1.sid = l1.sid
        AND s2.sid = l2.sid
        AND sqlt1.sql_id = s2.sql_id
        AND sqlt2.sql_id = s1.prev_sql_id
        AND l1.block = 1
        AND l2.request > 0
        AND l1.id1 = l2.id1
        AND l1.id2 = l2.id2;
From the above it can be seen that a select statement is blocking a delete. Unless the select is a SELECT FOR UPDATE, it should not block other statements, should it?
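Note that the query above prints s1.prev_sql_id for the blocker, so the "blocking" SQL it shows may simply be the last statement the blocking session ran (often a harmless SELECT), while the lock was actually taken by earlier DML in the same open transaction. One way to cross-check, on 10g and later, is v$session's blocking_session column; a minimal sketch:

SELECT sid, serial#, blocking_session, sql_id, event
  FROM v$session
 WHERE blocking_session IS NOT NULL;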
We are firing a normal DROP command on our database, and the database version is 10.2.0.4. The database is running on AIX v5. The command is taking more time than usual.
When I am monitoring the session I can see that a call is being made to the procedure "aw_drop_proc". Could I ask you if this is something that would take more time than usual?
We do not have any partitions on the nested tables. We have a pack of tables, and we are dropping this pack through a procedure. The pack comprises nested tables and normal tables. Dropping a nested table (with no rows) takes around 6 seconds, and a normal table (with no rows) takes 17 milliseconds. We have a partition on the normal table.
The same operation on Windows takes much less time compared to AIX.
select rl.org_rollup_skey
  from (select fc.org_skey as "FC_ORG_SKEY"
          from IA.HIST_FCT_FCST_SLS fc
         inner join IA.DIM_ORG do on fc.org_skey = do.org_skey
         where do.org_nam IN ('101', '485', '486')) p
 inner join IA.DIM_ORG_HIER h on p.fc_org_skey = h.desc_org_skey
 inner join IA.FCT_FCST_SLS_ORG_ROLLUP rl on h.GPRNT_ORG_SKEY = rl.org_rollup_skey
The join above runs forever, even though the subquery
(select fc.org_skey as "FC_ORG_SKEY" from IA.HIST_FCT_FCST_SLS fc inner join IA.DIM_ORG do on fc.org_skey = do.org_skey where do.org_nam IN ('101', '485','486'))
returns no rows and, by itself, gives its result in 10 seconds. By my estimate, the full query should not take more than 20 seconds.
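A quick way to see where the time is going is to compare the optimizer's plan for the full statement against the fast subquery; a minimal sketch using DBMS_XPLAN:

EXPLAIN PLAN FOR
select rl.org_rollup_skey
  from (select fc.org_skey as "FC_ORG_SKEY"
          from IA.HIST_FCT_FCST_SLS fc
         inner join IA.DIM_ORG do on fc.org_skey = do.org_skey
         where do.org_nam IN ('101', '485', '486')) p
 inner join IA.DIM_ORG_HIER h on p.fc_org_skey = h.desc_org_skey
 inner join IA.FCT_FCST_SLS_ORG_ROLLUP rl on h.GPRNT_ORG_SKEY = rl.org_rollup_skey;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);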
I am trying to run the following SQL query, but it is throwing the following error.
SQL> delete from b$gc_count_temp a
     INNER JOIN COMPLEMENTS ON b$gc_count_temp.CONNECTION_ID_TEMP = COMPLEMENTS.feature_conn_id
     WHERE a.current_designation is null and a.current_low is null and a.current_high is null;

delete from b$gc_count_temp a INNER JOIN COMPLEMENTS ON b$gc_count_temp.CONNECTION_ID_TEMP = COMPLEMENTS.feature_conn_id
                              *
ERROR at line 1:
ORA-00933: SQL command not properly ended
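Oracle's DELETE does not accept JOIN syntax, which is why ORA-00933 is raised. A hedged rewrite using a subquery instead, assuming CONNECTION_ID_TEMP is the join column on b$gc_count_temp as shown:

delete from b$gc_count_temp a
 where a.current_designation is null
   and a.current_low is null
   and a.current_high is null
   and a.connection_id_temp in (select c.feature_conn_id from complements c);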
I am getting ORA-00933 after running the delete statement below:
DELETE FROM REPOSITORY.MEDIASEGMENT MS
 INNER JOIN REPOSITORY.ROUTINGEVENT RE ON TRIM(MS.Segment_Key) = TRIM(RE.uuid)
 INNER JOIN REPOSITORY.TEMPCONTACT TC ON TRIM(RE.Contact_Key) = TRIM(TC.vduid)
 WHERE TC.CREATETIME BETWEEN (TO_DATE('04/24/2008 00:00:00','MM/DD/YYYY, HH24:MI:SS'))
                         AND (TO_DATE('04/30/2008 23:59:00','MM/DD/YYYY, HH24:MI:SS'))
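As above, DELETE with INNER JOIN is not valid Oracle syntax. A hedged rewrite using EXISTS, assuming the keys shown; note also that the format mask must match the literal, so the comma in 'MM/DD/YYYY, HH24:MI:SS' would itself raise an error against a literal that contains a space instead:

DELETE FROM REPOSITORY.MEDIASEGMENT MS
 WHERE EXISTS (
        SELECT 1
          FROM REPOSITORY.ROUTINGEVENT RE
         INNER JOIN REPOSITORY.TEMPCONTACT TC ON TRIM(RE.Contact_Key) = TRIM(TC.vduid)
         WHERE TRIM(MS.Segment_Key) = TRIM(RE.uuid)
           AND TC.CREATETIME BETWEEN TO_DATE('04/24/2008 00:00:00','MM/DD/YYYY HH24:MI:SS')
                                 AND TO_DATE('04/30/2008 23:59:00','MM/DD/YYYY HH24:MI:SS'));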
DELETE FROM sre_t WHERE TO_CHAR(end_dt,'yyyy') < '2000' OR TO_CHAR(start_dt,'yyyy') < '2000';
It executes for 15 to 20 minutes, and after that I get the error "session timed out". The table has four crore (40 million) records. The delete statement is deleting 12,00,000 (1.2 million) records.
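When a single long-running delete hits a client timeout, one common workaround is to delete in bounded batches with intermediate commits. A generic sketch; the 100,000 batch size is an arbitrary choice:

BEGIN
  LOOP
    DELETE FROM sre_t
     WHERE (TO_CHAR(end_dt,'yyyy') < '2000' OR TO_CHAR(start_dt,'yyyy') < '2000')
       AND ROWNUM <= 100000;          -- cap each pass at 100k rows
    EXIT WHEN SQL%ROWCOUNT = 0;       -- stop when nothing is left to delete
    COMMIT;                           -- release undo and locks between passes
  END LOOP;
  COMMIT;
END;
/

Comparing the raw dates instead (e.g. end_dt < DATE '2000-01-01') would also allow an index on the date columns to be used, since wrapping the column in TO_CHAR defeats a normal index.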
I have to write a procedure that accepts a schema name, a table name, and a column value as parameters.... I know that I need to use metadata to do that rather than deleting manually.
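A sketch of the dynamic-SQL shape such a procedure usually takes; the procedure name and parameters are hypothetical, and DBMS_ASSERT.ENQUOTE_NAME guards the identifier parameters against injection:

CREATE OR REPLACE PROCEDURE delete_by_value (
  p_schema IN VARCHAR2,   -- schema owning the table
  p_table  IN VARCHAR2,   -- table to delete from
  p_column IN VARCHAR2,   -- column to filter on (assumed parameter)
  p_value  IN VARCHAR2    -- value identifying the rows to delete
) AS
BEGIN
  EXECUTE IMMEDIATE
       'DELETE FROM ' || DBMS_ASSERT.ENQUOTE_NAME(p_schema, FALSE)
    || '.'            || DBMS_ASSERT.ENQUOTE_NAME(p_table, FALSE)
    || ' WHERE '      || DBMS_ASSERT.ENQUOTE_NAME(p_column, FALSE)
    || ' = :v'
    USING p_value;    -- bind the value; never concatenate it
END;
/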
How can I use the variable P_TMPLID in the following statement?
TYPE typ_unrecon IS TABLE OF REC_' || P_TMPLID ||'_UNRECON%ROWTYPE index by binary_integer;
because it is throwing an error while compiling,
and also in the statement:

FORALL i IN unrecondata.FIRST .. unrecondata.LAST SAVE EXCEPTIONS
  --STRSQL := '';
  --STRSQL := ' INSERT INTO REC_' || P_TMPLID ||'_UNRECON VALUES ' || unrecondata(i);
  --EXECUTE IMMEDIATE STRSQL;
  INSERT INTO REC_' || P_TMPLID ||'_UNRECON VALUES unrecondata(i); -- throwing error on this statement
  COMMIT;
  --dbms_output.put_line(unrecondata(2).TRANSID);
EXCEPTION
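Because the table name is built at run time, neither the %ROWTYPE declaration nor a static INSERT can name it; both have to live inside a single dynamically executed PL/SQL block. A sketch of that approach; the local p_tmplid stands in for the procedure parameter, src_unrecon is a placeholder for wherever unrecondata is really populated from, and the SAVE EXCEPTIONS handler is omitted for brevity:

DECLARE
  p_tmplid     VARCHAR2(30) := '42';   -- stands in for the procedure parameter
  plsql_block  VARCHAR2(4000);
BEGIN
  plsql_block := '
    DECLARE
      TYPE typ_unrecon IS TABLE OF REC_' || p_tmplid || '_UNRECON%ROWTYPE
        INDEX BY BINARY_INTEGER;
      unrecondata typ_unrecon;
    BEGIN
      SELECT * BULK COLLECT INTO unrecondata
        FROM src_unrecon;              -- placeholder for the real row source
      FORALL i IN 1 .. unrecondata.COUNT SAVE EXCEPTIONS
        INSERT INTO REC_' || p_tmplid || '_UNRECON VALUES unrecondata(i);
      COMMIT;
    END;';
  EXECUTE IMMEDIATE plsql_block;
END;
/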
In the following merge statement, in the USING clause... I am using a select statement on one schema, WEDB. But that same select statement should take data from 30 schemas and then check the condition below:
ON(source.DNO = target.DNO AND source.BNO=target.BNO);
I thought of using UNION ALL for the select statements over the schemas, as below.
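A sketch of that shape, where target_tab, src_tab, and col1 are hypothetical names, the WEDB branch comes first, and there is one UNION ALL branch per schema:

MERGE INTO target_tab target
USING (SELECT dno, bno, col1 FROM wedb.src_tab
       UNION ALL
       SELECT dno, bno, col1 FROM sch02.src_tab
       UNION ALL
       SELECT dno, bno, col1 FROM sch03.src_tab
       -- ...repeat for all 30 schemas
      ) source
   ON (source.DNO = target.DNO AND source.BNO = target.BNO)
 WHEN MATCHED THEN
   UPDATE SET target.col1 = source.col1
 WHEN NOT MATCHED THEN
   INSERT (dno, bno, col1) VALUES (source.dno, source.bno, source.col1);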
I have a table called transaction_dw, and I need to select all records that have had an account balance below 0 in the past 6 months. The initial query I tried was:
select account_balance, timestamp from transaction_dw where account_balance < 0 and timestamp between sysdate and sysdate - 6;
but this only takes 6 days off sysdate rather than 6 months. How can I get it to take off 6 months?
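A sketch of the month-based form using ADD_MONTHS; note also that BETWEEN expects the lower bound first, so the original sysdate-first ordering would match nothing:

SELECT account_balance, timestamp
  FROM transaction_dw
 WHERE account_balance < 0
   AND timestamp BETWEEN ADD_MONTHS(SYSDATE, -6) AND SYSDATE;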
In one of my applications the Oracle Net connection is taking 30 to 40 ms (tnsping servicename), but the actual connection always takes more than 45 ms, and my application requires below 10 to 20 ms.
Also, can I use any other connection method, like hostname or EZConnect?
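For reference, EZConnect skips tnsnames.ora lookup entirely; its syntax (user, host, port, and service name below are placeholders) is:

sqlplus scott/tiger@//dbhost:1521/myservice

It requires EZCONNECT in NAMES.DIRECTORY_PATH in sqlnet.ora. Whether it shaves off the milliseconds you need only a test in your environment can tell; most of the connect cost is usually session creation on the server, not name resolution.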
I am working with a new client and am still waiting on access to systems, so I'm a bit hampered in looking at the details of the database and environment. The database is OLTP, and typical response time for returning result sets is under 2 seconds, often less than 1 second.
A simple SELECT on a few columns of v$session_longops (no subqueries, group by, having, etc) ran for more than 5 minutes.
An external reporting application (SQL Server Reporting Services) sends in some comma-separated parameter values, which have to be queried against a table with 6-10 million records.
The length of the comma-separated value can go up to 5000-6000 characters, based on user input in the front-end application. The application sends the value comma-separated, i.e., like:
'AA1-11101,AA2-34346,AA4-534399,.....' At a time the application can send up to 500 values, each of length 10, i.e., the maximum length can be up to 5000. I used a CLOB to handle this, since the length can be 5000 and VARCHAR2 literals can only be 4000 characters long in SQL.
But the time taken to verify the CLOB against the table using INSTR is more than with VARCHAR2.
I found that VARCHAR2 doesn't take much time. Is it a good idea to have VARCHAR2 as the parameter in the PL/SQL procedure instead of CLOB, since a PL/SQL VARCHAR2 can handle values up to 32767 bytes long?
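One common alternative to INSTR probing is to split the list into rows once and join, which lets an index on the lookup column do the work. A sketch using 11g's REGEXP_COUNT, with :p_list as the bound parameter and big_table(code) as hypothetical names:

SELECT t.*
  FROM big_table t
  JOIN (SELECT TRIM(REGEXP_SUBSTR(:p_list, '[^,]+', 1, LEVEL)) AS code
          FROM dual
       CONNECT BY LEVEL <= REGEXP_COUNT(:p_list, '[^,]+')) vals
    ON t.code = vals.code;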
I want to know how I can find which query is taking more time. For example, some queries are run from Unix, Java, TOAD, and SQL*Plus, and one query is taking much more time to execute; how can I get that query and all its details?
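A starting point is v$sql, which tracks cumulative statistics per statement regardless of which client ran it; a minimal top-10-by-elapsed-time sketch (elapsed_time and cpu_time are in microseconds):

SELECT *
  FROM (SELECT sql_id, executions,
               ROUND(elapsed_time / 1e6, 1) AS elapsed_secs,
               ROUND(cpu_time / 1e6, 1)     AS cpu_secs,
               sql_text
          FROM v$sql
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 10;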
I have a query which executes fast in the dev environment but takes a very long time in the QA environment. What are the criteria when this behaviour occurs? QA has more data than dev, but it is still taking a long time even for 1 row, when I use rownum <= 1 in the query. So what should I check for this?