In my code I am using a DELETE statement which is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT) IN
      (SELECT ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT
       FROM LOAD_TRADE_ORDER
       WHERE IND_IS_BAD_RECORD = 'N');
Every column in the IN clause and in the subquery's select list has an index on it.
The number of rows to be deleted varies each run (it may be in the hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it contains only a few distinct values.
The table TRADE_ORDER_EMP_ALLOCATION is also RANGE partitioned on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions on it.
Is there a way to make the above DELETE statement execute faster?
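One option worth testing is rewriting the IN subquery as a correlated EXISTS, which often lets the optimizer choose a hash semi-join against LOAD_TRADE_ORDER instead of probing indexes row by row. This is only a sketch based on the column names above, not a tested solution:

DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE EXISTS
      (SELECT 1
       FROM LOAD_TRADE_ORDER L
       WHERE L.IND_IS_BAD_RECORD = 'N'
         AND L.ARTEMIS_SOURCE_SYSTEM_ID = T.ARTEMIS_SOURCE_SYSTEM_ID
         AND L.NM_ARTEMIS_SOURCE_SYSTEM = T.NM_ARTEMIS_SOURCE_SYSTEM
         AND L.CD_BOOK_KEY = T.CD_BOOK_KEY
         AND L.ACTIVITY_DT = T.ACTIVITY_DT);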
select serialnumber
from product
where productid in
      (select /*+ full parallel(producttask 16) */ productid
       from producttask
       where startedtimestamp > to_date('2013-07-04 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
         and startedtimestamp < to_date('2013-07-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
         and producttasktypeid in
I am facing a very strange issue with one of our Oracle queries. The query usually completes in a minute or two. The execution plan of the query is good, and it works perfectly most of the time, as expected. The query fetches about 1,000-2,000 records each day.
But on a given day, the query takes about 30-40 minutes to complete. Upon checking the load on the DB server, there are no other processes running which could impact the run time of this query. Moreover, the record counts fetched are almost the same as on other days. There is no observable pattern to when this phenomenon occurs; it just happens once in a while.
The configuration is Oracle 10g RAC on Linux.
How can I check the average time taken by an execution plan? I have a very big query and it changes its execution plan very often. We would like to lock the best execution plan, and to find it I would like to know the average execution time of the query when it runs under each of the different execution plans.
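A sketch of one way to get this, assuming AWR (DBA_HIST_SQLSTAT) is available and licensed; it averages elapsed time per PLAN_HASH_VALUE for a given SQL_ID:

SELECT plan_hash_value,
       SUM(executions_delta) AS executions,
       ROUND(SUM(elapsed_time_delta) / 1e6
             / NULLIF(SUM(executions_delta), 0), 2) AS avg_elapsed_sec
FROM   dba_hist_sqlstat
WHERE  sql_id = '&sql_id'
GROUP  BY plan_hash_value
ORDER  BY avg_elapsed_sec;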
Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id
from bhi_tracking
where description like 'Multilateral:%';

The explain plan for this query is:
BHI_TRACKING is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then insert the rows back into it, to enhance performance?
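Since only one year of data is kept and the rest is deleted, one option to consider (a sketch only; the column types here are assumptions, not the real DDL) is interval partitioning by month on time_stamp, so old data can be removed with a cheap DROP PARTITION instead of a large DELETE:

CREATE TABLE bhi_tracking_part (
  description VARCHAR2(4000),
  time_stamp  DATE,
  user_id     NUMBER
  -- remaining columns as in the existing table
)
PARTITION BY RANGE (time_stamp)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01'));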
- First time to execute: Using all indexes on 2 tables
- Second time to execute: Using only indexes on first table, full table scan on the other
- Third time to execute: Full table scan (FTS) on both tables.
Now I show the objects and related information here:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
  COUNT(*)
----------
    272708
system@dbwap> select count(*) from my_wap.news_content;
  COUNT(*)
----------
     95092
system@dbwap> desc my_wap.news_content;
 Name                                  Null?    Type
 ------------------------------------- -------- ----------------
 ID                                    NOT NULL NUMBER(11)
 SUBJECT                               NOT NULL VARCHAR2(500)
 TITLE                                          VARCHAR2(4000)
 STATE                                          NUMBER(1)
 IMGPATH                                        VARCHAR2(500)
 ALIGN                                          VARCHAR2(10)
The query below is using more than 17 GB of temp space, but it is still failing due to insufficient temp space. Is there any way to rewrite this query to reduce the temp usage?
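Without the query text it is hard to say, but as a first diagnostic step (a sketch using standard dynamic views) you can watch which work areas are actually spilling to temp while the statement runs, and focus the rewrite on that operation:

SELECT sid,
       operation_type,
       ROUND(actual_mem_used / 1024 / 1024) AS mem_used_mb,
       ROUND(tempseg_size    / 1024 / 1024) AS temp_mb
FROM   v$sql_workarea_active
ORDER  BY tempseg_size DESC;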
I'm planning to decrease the execution time by managing the redo log files, but I'm stuck on a couple of points:
- Why is my OPTIMAL_LOGFILE_SIZE showing NULL?
- I resized the log files from 100M to 200M and also added one more log group with 200M capacity, but that didn't decrease my execution time.
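For the first point, a likely explanation (worth verifying on your version) is that OPTIMAL_LOGFILE_SIZE in V$INSTANCE_RECOVERY stays NULL until FAST_START_MTTR_TARGET is set to a non-zero value. A minimal sketch:

-- 300 seconds is only an example value for the MTTR target
ALTER SYSTEM SET fast_start_mttr_target = 300;

SELECT optimal_logfile_size
FROM   v$instance_recovery;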
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  db file sequential read                     85704        0.31        460.55
  latch free                                      1        0.00          0.00
  SQL*Net message from client                     1       14.98         14.98
[code]...
Why did the elapsed time change when the data and the plan haven't changed at all? Also, why does the plan have different stats for round 1 and round 2 on db1 and db2?
I ran it twice in each round on each database, so hard parsing should not be the issue. Also, why is the number of rows accessed different on db1/db2 versus db3/db4, especially for step 1, when the count of crt.qtn_cun_id is similar?
In fact, when the query was taking long I was the only user on the system. Also, I used hard-coded values (no bind variables at all).
I checked num_rows and distinct keys as well, and they are quite similar across all 4 databases. Also, no stats were gathered during the query execution.
I wish to run a SQL query and measure the elapsed time, then compare the values to Oracle DBs at other companies. That would give me a feeling for whether our DB performs well. For example, in the UNIX world you can create a random 4 GB file to measure I/O throughput and compare the values (for example 4 MB/sec).
What's the simplest way to compare DB response times from forum members with our own DB? I don't need 100% accurate numbers.
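A minimal sketch of a repeatable test in SQL*Plus; the table name is a placeholder and this is only a rough comparison, not a proper benchmark:

SET TIMING ON
-- force a full scan of a reasonably large table so the test is I/O-bound
SELECT /*+ FULL(t) */ COUNT(*) FROM some_large_table t;
-- run it a second time to see the cached (buffer cache) timing as well
SELECT /*+ FULL(t) */ COUNT(*) FROM some_large_table t;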
It takes different approaches (execution plans) when executing with the same set of parameters. Because of this it sometimes executes successfully, but sometimes it fills all the TEMP space and fails. I am pasting both execution plans (different from the explain plan) below:
...and the rest of them are small tables. All the indexes are in place and I have tried a few hints, but this query is slow.
WITH REPS AS
 (SELECT DISTINCT REP_SET.FILTER_TOKEN
    FROM (SELECT /*+ INDEX (wdsd WEBDATASETDTL_PK_TEAM) */
                 DISTINCT WDSD.DATA_SETTING_ID, WDSD.FILTER_TOKEN
            FROM WEB_DATA_SETTING_DETAIL WDSD,
[code]....
We are using the 11g AMM feature with MEMORY_TARGET set to 96GB, and the total RAM on the server is 128GB. Now top and free show only about 200MB of memory free on the system.
There are two processes, dbw0 and dbw1, which consume the most memory, at about 30GB per dbw.
Why is the dbw process taking up so much memory when there is not much load on the database?
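One thing worth checking (a sketch, not a diagnosis): on Linux, top usually attributes the shared SGA pages touched by each background process to that process, so dbw0/dbw1 showing ~30GB each is often double-counted SGA rather than private memory. The views below separate the SGA from per-process PGA:

SELECT name, ROUND(bytes / 1024 / 1024 / 1024, 1) AS gb
FROM   v$sgainfo
WHERE  name IN ('Maximum SGA Size', 'Free SGA Memory Available');

-- top 10 processes by private (PGA) memory actually in use
SELECT *
FROM  (SELECT spid, ROUND(pga_used_mem / 1024 / 1024) AS pga_used_mb
       FROM   v$process
       ORDER  BY pga_used_mem DESC)
WHERE  ROWNUM <= 10;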
My Oracle database version is 11.2.0.3.0. I have one schema, and in that schema I have 3 tables with the same structure and the same data, but with different names.
The problem is that a select query on the first table takes 5 seconds, on the second table it takes 0 seconds, and on the third table it takes 10 seconds.
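Since the data is the same, one comparison worth making (a sketch, with placeholder table names) is the segment size of the three tables: deletes and direct loads can leave very different high-water marks, and a full scan reads everything below the high-water mark regardless of how many rows remain.

SELECT segment_name,
       blocks,
       ROUND(bytes / 1024 / 1024) AS mb
FROM   user_segments
WHERE  segment_name IN ('TABLE_A', 'TABLE_B', 'TABLE_C');  -- placeholder names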
We have a query which makes Oracle behave very strangely. It is a straightforward join between four tables of about 30,000 rows each, with some simple comparisons and some NOT LIKEs.
When we run this query, it takes either about 1 second or more than 1,000 seconds to run and return the approximately 5,000 rows of the result. If we run the same query over and over again, it fluctuates back and forth between two different execution plans, apparently at random, selecting the 1,000-second version 3 times out of 4 and the 1-second version 1 time out of 4.
There are no other connections to the database, the schema is not modified, the data is identical, the query is identical, and the response is identical, but the execution time alternates between 1 second and 1,000 seconds. On the same database instance we have another schema which is identical but with slightly less data, which is used for development. The 1,000-second run times did not happen in that schema, only in the test system's database.
Therefore we would REALLY like to understand what happens and why, so that we can avoid triggering this in the future. We could try locking the 1-second execution plan (see the sketch after the schema excerpt below), but then we're afraid of doing the same thing wrong again in the future.
Here are the two execution plans that Oracle switches between, more or less at random:
select g.ucid, a.ucid
  from account a, groups g, group_members gm, group_groups_flat ggf
 where a.ucid = gm.ucid_member
   and gm.ucid_group = ggf.ucid_member
[code]...
And excerpts from the schema:

CREATE TABLE "PDB"."GROUPS" (
  "UCID"     VARCHAR2(256 BYTE),
  "UNIX_GID" NUMBER(*,0),
  [...]
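If locking the 1-second plan turns out to be the way to go, here is a sketch of doing that with a SQL plan baseline; the sql_id and plan_hash_value placeholders would come from V$SQL after a fast execution:

DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                sql_id          => '&sql_id',           -- placeholder
                plan_hash_value => &plan_hash_value);   -- placeholder
END;
/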
I am building a database to store call quality statistics for VOIP networks. It is a very insert-heavy application, and data reliability is of relatively minimal importance (in the sense that a few corrupt call records here and there don't matter the way corruption does in, for example, a bank's database). Long-term storage is also unimportant: most customers only wish to keep 3 months of data readily available in the database, and most do not even archive the older data.
To that end I am searching for every possible way to improve my insert performance, and the internet has turned me onto the idea of NOLOGGING. These are the steps I have taken to reduce the I/O consumed by the redo and undo logs:
1. I am inserting with the APPEND_VALUES hint.
2. I have disabled force logging at the database level
3. I have disabled force logging at the tablespace level
4. I have disabled logging on the relevant table and each of its indices
As best I can tell this is all I can do to minimize redo/undo, but based on my observations of the disk portion of the Windows Server 2008 Performance Monitor, it has made little to no change in the amount of I/O to my REDO and UNDO files. I/O to the .dbf containing my table makes up less than 20% of the total disk I/O for oracle.exe; the rest is the REDO and UNDO logs.
The above article is a little over my head, but I can extract from it that I will never entirely eliminate REDO/UNDO, which is fine, but I would think I could get it lower than it currently is.
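For reference, a sketch of the pattern APPEND_VALUES is mainly intended for (call_stats here is a stand-in for the real table): as far as I know, the hint only takes the direct-path, minimal-redo route for the table data when it is used with a bulk bind, and NOLOGGING on the indexes only affects index creation/rebuild, so ordinary index maintenance during the inserts is still fully logged, which may explain why the redo volume barely changed.

DECLARE
  TYPE t_calls IS TABLE OF call_stats%ROWTYPE;   -- call_stats is a placeholder
  l_calls t_calls := t_calls();
BEGIN
  -- ... fill l_calls from the application ...
  FORALL i IN 1 .. l_calls.COUNT
    INSERT /*+ APPEND_VALUES */ INTO call_stats VALUES l_calls(i);
  COMMIT;  -- a direct-path insert must be committed before the table can be re-read
END;
/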
I am aware that from 11g, MEMORY_TARGET is sufficient for memory management between the SGA and PGA.
What happens if MEMORY_TARGET is set to a non-zero value and SGA_TARGET is set to zero in an 11g database? Does it enable automatic memory management within the SGA?
We regularly get hit by ORA-4031 errors. Also, the memory target advisory (V$MEMORY_TARGET_ADVICE) does not show any advisory information.
For example: memory_max_target = 500m, memory_target = 500m.
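A sketch for checking how the memory is actually being split while MEMORY_TARGET is in effect (with SGA_TARGET = 0, the individual SGA pools should still be auto-tuned within MEMORY_TARGET, and this view shows their current sizes):

SELECT component,
       ROUND(current_size / 1024 / 1024) AS current_mb,
       ROUND(min_size     / 1024 / 1024) AS min_mb,
       ROUND(max_size     / 1024 / 1024) AS max_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0
ORDER  BY current_size DESC;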
I have a query (report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scan reads more than 2,000 million rows in the second schema, but only 440 million in the first schema, which is why it is fast there. I am expecting the same behaviour in the second schema.
I have verified the following: all records in the tables in the two schemas are the same, all indexes are the same, the tables have been analyzed, and histograms were gathered on all the columns as in the first schema.
But I still have the same problem and don't know what the cause could be.
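One more comparison worth making (a sketch; substitute the real index and schema names) is the stored index statistics themselves, especially CLUSTERING_FACTOR: the same index definition over the same data can still end up with very different statistics if the rows are physically ordered differently in the two schemas.

SELECT owner, index_name, blevel, leaf_blocks, distinct_keys,
       clustering_factor, num_rows, last_analyzed
FROM   dba_indexes
WHERE  index_name = '&index_name'
AND    owner IN ('&schema1', '&schema2');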
I am facing a performance issue in which the query cost is very low compared to the CPU cost, and as a result the CPU always shows a high graph. I am also attaching the GV$SQL and GV$SQL_PLAN data for this query.
This is the query:
SELECT PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE) AS STARTDATE,
       MAX(STARTDATE) AS ENDDATE
FROM   PROJECTTIMELOGSSTAGE PTLS, PROJECTTIMESHEETITEM PTSI, TIMESHEET TS
WHERE  PTLS.PROJECTID = :B2
AND    TS.TIMESHEETID = PTSI.TIMESHEETID
AND    TS.USERID = :B1
AND    PTSI.TIMESHEETID = PTLS.TIMESHEETID
AND    PTSI.ITEMTYPE = PTLS.ITEMTYPE
AND    PTSI.ITEMID = PTLS.ITEMID
AND    (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
AND    PTLS.ITEMTYPE NOT IN ('OtherTsk','NewTsk','Loc','Glb')
AND    (PTLS.ITEMTYPE, PTLS.ITEMID) IN
       (SELECT ITEMTYPE, ITEMID
        FROM   PROJECTTIMELOGSSTAGE PTLS1
        WHERE  PTLS1.PROJECTID = :B2
        AND    PTLS1.TIMESHEETID = :B3)
GROUP BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert the result into a stage table using the code below.
The issue is that table1 has 50 million records and table2 is empty,
so the first time we execute this, the collections v_collection1 and v_collection2 will hold 50 million records in memory. I think this is not good, because holding them in memory will eat memory and resources during sorting and other activities.
After fetching the records into the collections we insert them into the stage table and then COMMIT, which I also think won't be good, because committing 50 million rows will generate a large amount of redo.
Below is a snippet of my code:
DECLARE
  TYPE lst_collection1 IS TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
  TYPE lst_collection2 IS
[code].......
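A sketch of the set-based alternative (column list abbreviated; the stage table name is a placeholder): a single direct-path INSERT ... SELECT with MINUS lets the database do the sort/anti-join itself instead of pulling 50 million rows into PL/SQL collections, with one commit at the end.

INSERT /*+ APPEND */ INTO stage_table (col1, col2 /* , ... all 15 columns */)
SELECT col1, col2 /* , ... all 15 columns */ FROM table1
MINUS
SELECT col1, col2 /* , ... all 15 columns */ FROM table2;

COMMIT;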
The prod stats have been implemented in development. The stats were gathered 2 months ago on dev, while in production the stats were gathered 2 weeks ago.
My question is: shouldn't the high volume of data cause plan changes in both environments? My thinking is that the plans can differ because the high volume of data changing in prod may lead to a different plan there.
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days back, but from the ninth day back it jumped and has consistently stayed at double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization, showing the top events below:
Event                           Waits    Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                   5,802                 34.9
RFS ping                           15      5,118        33,671   30.8               Other
log file sequential read      234,831      5,036            21   30.3               System I/O
SQL*Net more data from client  24,171      1,087            45    6.5               Network
db file sequential read       130,939        453             3    2.7               User I/O
Findings: In the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer? 2) I feel the network wait can be reduced by tweaking the SDU and TDU values based on the MTU.
How can I find out which particular Oracle session was consuming high memory in the past?
I can't get the data in V$SESSTAT, and I'm unable to get the information from AWR.
DBA_HIST_ACTIVE_SESS_HISTORY does not have a field indicating memory-related information. Should I concentrate on the EVENT column in DBA_HIST_ACTIVE_SESS_HISTORY, looking for sessions that continuously had sort and direct path read waits? Or should I locate the SQL_ID from DBA_HIST_SQLSTAT with a high SORTS_DELTA for snapshots belonging to the problematic time period, and then use that SQL_ID to query DBA_HIST_ACTIVE_SESS_HISTORY?
Which approach should I take to find the session which consumed the most memory in the past?
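A sketch of the DBA_HIST_SQLSTAT route mentioned above (the snapshot IDs are placeholders): rank the SQL by sort activity in the problem window first, then take the top SQL_IDs into DBA_HIST_ACTIVE_SESS_HISTORY to see which sessions were running them.

SELECT sql_id,
       SUM(sorts_delta)      AS sorts,
       SUM(executions_delta) AS executions,
       SUM(disk_reads_delta) AS disk_reads
FROM   dba_hist_sqlstat
WHERE  snap_id BETWEEN &begin_snap AND &end_snap
GROUP  BY sql_id
ORDER  BY sorts DESC;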