Can this be optimized? In DEV and IST we didn't notice the problem since there were only 1,000 rows, but in PERF there are 2 million rows and this is taking a long time.
SET SERVEROUTPUT ON
DECLARE
  counter NUMBER := 0;
  CURSOR insertValues IS
    SELECT roleid, productcode, functioncode, typecode, restrictiontype, value1
      FROM restrictions
     WHERE actionmode = 'INSERT';
[code]...
Can this be done in a single UPDATE, since the SELECTs and UPDATEs are happening on the same table?
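A minimal sketch of the set-based alternative, assuming the omitted loop body simply inserts each fetched row into a target table (ROLE_RESTRICTIONS and its column list are hypothetical; substitute the real target from the code above):

INSERT INTO role_restrictions
       (roleid, productcode, functioncode, typecode, restrictiontype, value1)
SELECT roleid, productcode, functioncode, typecode, restrictiontype, value1
  FROM restrictions
 WHERE actionmode = 'INSERT';

COMMIT;

A single INSERT ... SELECT (or a MERGE, if existing rows must also be updated) processes all 2 million rows in one SQL execution instead of millions of PL/SQL-to-SQL context switches, which is usually where a row-by-row cursor loop loses its time.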
How can I check the average time taken by an execution plan? I have a very big query and it changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time the query takes when it runs under each of its different execution plans.
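A sketch of one way to get that average per plan from AWR, assuming the query's SQL_ID is known and the period of interest is still within AWR retention (the DBA_HIST_* views require the Diagnostics Pack licence):

SELECT plan_hash_value,
       SUM(executions_delta)                                  AS execs,
       ROUND(SUM(elapsed_time_delta) / 1e6, 2)                AS total_elapsed_s,
       ROUND(SUM(elapsed_time_delta) / 1e6
             / NULLIF(SUM(executions_delta), 0), 2)           AS avg_elapsed_s_per_exec
  FROM dba_hist_sqlstat
 WHERE sql_id = '&sql_id'
 GROUP BY plan_hash_value
 ORDER BY avg_elapsed_s_per_exec;

Once the cheapest PLAN_HASH_VALUE is identified, a SQL plan baseline (DBMS_SPM) or a SQL profile is the usual mechanism for locking the query to it.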
elapsed time = time spent on waits + time CPU was used
Total time during snaps = Elapsed time + (maybe) time waited for CPU. Is it possible to draw such an equation from AWR? I can see that the AWR report has the following elements:
1) End Snap time - Begin Snap time
2) DB time - as mentioned at the top of the AWR report
3) DB CPU - in "Top 5 Timed Foreground Events" (I assume this is the 'CPU used by session' timing in Statspack)
4) Total of time for all statistics in "Time Model Statistics"
5) BUSY_TIME + IDLE_TIME - "Operating System Statistics"
Is the elapsed time simply the time between the two snapshots, or something else? Also, by how many seconds should 'DB Time(s)' per second and 'DB CPU(s)' per second in the Load Profile be multiplied to get the DB time and the CPU time?
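As a worked example of that multiplication (the figures below are hypothetical), assuming the two snapshots are 60 minutes apart:

snapshot interval = 60 min x 60 s                     = 3,600 s  (wall-clock seconds between snapshots)
DB time           = 2.5 'DB Time(s)' per second x 3,600 s = 9,000 s
DB CPU            = 0.8 'DB CPU(s)' per second  x 3,600 s = 2,880 s

i.e. the multiplier is the number of wall-clock seconds in the snapshot interval, which is what the Load Profile rates are normalised by.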
For example, we have a table ACCOUNT (a snowflake dimension containing other dimension keys) and I have many fact tables based on this dimension. Normally a data warehouse load happens like this: first the dimensions are loaded, then the facts. Our load frequency is 30 minutes.
To make the data available in the facts sooner (as it is a financial application), I am considering having two batches: one for the dimensions and another for the facts. I came to this conclusion because there is no hard dependency requiring the dimensions to be loaded before the facts; at worst an update might occasionally be missed. But if I do that, while a dimension is being loaded it will be read by the fact load in another session. Will this affect performance?
Loading (insert/update) and selecting data from a table at the same time: will it affect performance in any way?
I understand that when data is read from disk, I/O is done, and when computations are done, CPU is used. So where does an equation like the one above (elapsed time = time spent on waits + CPU time used) fit in?
In the AWR report, I see two SQL statements that are out of the ordinary, and I want to make sure that I interpret them correctly.
1) "Elap per Exec (s)" shows 3263.49 with 1 "Executions". 2) "Elap per Exec (s)" shows 3180.17 each execution with 2 "Executions".
Does this mean that the first statement ran for ~54 minutes (3263.49 / 60) in its single execution, and the second for ~53 minutes (3180.17 / 60) per execution? I need to understand what "Elapsed Time (s)", "CPU Time (s)", "Executions", "Elap per Exec (s)", and "% Total DB Time" represent.
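For reference, a small worked check of how those columns relate, using the two rows quoted above:

Elap per Exec (s) = Elapsed Time (s) / Executions
statement 1: 3263.49 s / 1 execution                 = 3263.49 s  (~54.4 minutes for the single run)
statement 2: 3180.17 s per execution x 2 executions  = 6360.34 s total elapsed (~53 minutes each)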
I am looking at an existing utility which inserts data into configuration tables. The utility is fairly basic: you simply add the UPDATE / INSERT / DELETE SQL commands to a .sql file, set up a few params in a .sh script to tell it which database/schema to run against, and away it goes, doing some logging, etc. on the way.
Most of the time this is fine. However there is one table that causes big performance problems. This large table holds rating data and it has two large triggers on it. It also gets updated quite a bit with new rating tariffs.
The triggers check that many fields are not null or hold certain values... but they also check that the dates of the rates do not overlap, etc. So, in short, they do a lot of work. I can see that these are the main performance obstacle. I have no ability to alter or disable these triggers; this is a core table supplied by the vendor and as such I cannot manipulate it.
So looking at the things I can change, what am I left with? ... only the way I load the data.
I can consider using SQL*Loader to handle the INSERTs, or using the APPEND hint to perform a direct-path insert rather than issuing individual INSERT statements.
I can try to ensure that my data is sorted along the same lines as the index on the table, so that I am updating the index nodes in as streamlined a way as possible. Can I improve performance still more, or even circumvent the drag of the triggers?
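A minimal sketch of the direct-path form mentioned above, with hypothetical staging-table and column names; note that, as far as I know, Oracle silently falls back to a conventional insert when the target table has enabled triggers or referential constraints, so it is worth checking the plan for a LOAD AS SELECT step:

-- stage the new tariffs first (e.g. via SQL*Loader or an external table), then:
INSERT /*+ APPEND */ INTO rating_tariffs     -- hypothetical name for the vendor table with the triggers
SELECT *
  FROM stg_rating_tariffs                    -- hypothetical staging table
 ORDER BY tariff_id, valid_from;             -- hypothetical index columns, to load in index order

COMMIT;                                      -- direct-path inserted rows are only visible after commit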
I have a table which contains 821,177 rows in total. Now I am trying to delete around 484,000 rows from this table using just one filter, i.e. my query is something like the one below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 484,000 rows of data. But my current issue is that this is taking a very long time; to be precise, it takes almost 25 hours to delete this data. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532

--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
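If the delete itself cannot be made fast enough, a common alternative (only a sketch, assuming an outage window exists and no dependent triggers or foreign keys prevent it) is to keep the ~337,000 surviving rows rather than deleting 484,000:

CREATE TABLE resource_keep NOLOGGING PARALLEL 4 AS
SELECT * FROM resource WHERE created_by <> 'MIGN' OR created_by IS NULL;

-- then, in the outage window: drop (or rename away) RESOURCE, rename RESOURCE_KEEP to RESOURCE,
-- and recreate the indexes, constraints and grants.

Note also that a PARALLEL hint on a DELETE only parallelises the DML itself if parallel DML has been enabled in the session (ALTER SESSION ENABLE PARALLEL DML); otherwise only the read side runs in parallel.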
- First execution: uses all the indexes on the two tables
- Second execution: uses indexes only on the first table, with a full table scan on the other
- Third execution: full table scans on both tables
The objects and related information are shown below:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
  COUNT(*)
----------
    272708
system@dbwap> select count(*) from my_wap.news_content;
  COUNT(*)
----------
     95092
system@dbwap> desc my_wap.news_content;
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ----------------
 ID                                                    NOT NULL NUMBER(11)
 SUBJECT                                               NOT NULL VARCHAR2(500)
 TITLE                                                          VARCHAR2(4000)
 STATE                                                          NUMBER(1)
 IMGPATH                                                        VARCHAR2(500)
 ALIGN                                                          VARCHAR2(10)
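A sketch of two checks that often help when the plan keeps changing like this (the schema and table names are taken from the output above; the DBMS_STATS parameters are only an example):

-- which plans has the statement actually used, and how did they perform?
SELECT sql_id, plan_hash_value, executions, elapsed_time
  FROM v$sql
 WHERE lower(sql_text) LIKE '%news_relation%';

-- refresh optimizer statistics so the estimates match the current data
BEGIN
  dbms_stats.gather_table_stats(ownname => 'MY_WAP', tabname => 'NEWS_RELATION', cascade => TRUE);
  dbms_stats.gather_table_stats(ownname => 'MY_WAP', tabname => 'NEWS_CONTENT',  cascade => TRUE);
END;
/

If the plan still flips even with fresh statistics, binding the query to the good plan with a SQL plan baseline (DBMS_SPM) is the usual next step.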
Yesterday there was a performance issue on the server. Today, when generating the AWR report for that period, I found that the snapshot IDs are sequential but one hourly snapshot time was skipped: instead of a snapshot at 15:30, the next one was generated at 16:30.
Enter value for num_days: 2
Listing the last 2 days of Completed Snapshots

                                                        Snap
Instance     DB Name        Snap Id    Snap Started    Level
------------ ------------ --------- ------------------ -----
tagidev      TAGIDEV           2857 10 Sep 2013 00:30      1
                               2858 10 Sep 2013 01:30      1
                               2859 10 Sep 2013 02:30      1
                               2860 10 Sep 2013 03:30      1
                               2861 10 Sep 2013 04:30      1
                               2862 10 Sep 2013 05:31      1
[code]....
Below are the details from the alert log:
Tue Sep 10 14:28:20 2013
Thread 1 cannot allocate new log, sequence 7029
Checkpoint not complete
  Current log# 2 seq# 7028 mem# 0: E:\APP\ORACLE\ORADATA\TAGIDEV\REDO02.LOG
Thread 1 advanced to log sequence 7029 (LGWR switch)
[code]....
1) Why didn't a snapshot start at 15:30?
2) The database only restarted around the scheduled AWR snapshot time, yet the snapshot was generated at 16:32 instead of 16:30, even though the last service, SMCO, only started at 16:42. How was the snap ID generated for that particular time?
3) What does "kewastUnPackStats(): bad magic 1 (0x000000001B3CE48D, 0)" mean?
Is there any way to reduce index creation time? I have one table with 7,700,000 records; every day this table gets truncated, we recreate it with a CREATE TABLE AS SELECT statement, and then create its 4 indexes. Each index takes 5 minutes, so index creation takes 20 minutes in total.
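A minimal sketch of the usual levers for speeding up such a rebuild (the index and table names are hypothetical, and the degree of parallelism is only an example):

CREATE INDEX big_table_ix1 ON big_table (col1)
  PARALLEL 8 NOLOGGING;

-- reset the attributes afterwards so ordinary queries do not inherit them
ALTER INDEX big_table_ix1 NOPARALLEL;
ALTER INDEX big_table_ix1 LOGGING;

Building the four indexes from separate sessions at the same time, and giving the building session plenty of PGA for the sort, are the other common options.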
Elapsed Time (s)  CPU Time (s)  Executions  Elap per Exec (s)  % Total DB Time  SQL Id         SQL Module              SQL Text
2,423             1             3,919       0.62               1.83             gt49gg0fnc5x8  srv_dr@ahs (TNS V1-V3)  UPDATE /*+ CCL<OENDB_FILE...
2,227             14            1           2227.16            1.68             bggfx8a04prj9  SQL*Plus                select * from (select n.source...
...
In [SQL ordered by Elapsed Time], [SQL Module] indicates which process executed a SQL statement outside of SQL*Plus (e.g. srv_dr@ahs). If [SQL Module] shows [SQL*Plus], does it mean the query was run manually or directly in SQL*Plus? I have the SQL ID; how do I find out who ran it, how, and exactly when?
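A sketch of one way to answer the last part from AWR's sampled session history, assuming the Diagnostics Pack is licensed and the SQL ID (e.g. bggfx8a04prj9 from the report above) is still within the retention period:

SELECT h.sample_time, u.username, h.session_id, h.program, h.module
  FROM dba_hist_active_sess_history h
  LEFT JOIN dba_users u ON u.user_id = h.user_id
 WHERE h.sql_id = 'bggfx8a04prj9'
 ORDER BY h.sample_time;

Because ASH is sampled, very short executions may not appear at all; if this needs to be tracked reliably going forward, database auditing is the more dependable route.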
In my insert query, the WINDOW SORT step takes the longest time, i.e. 93% of the total execution time. How do I reduce this time? Are there any tuning parameters available for this?
In my code I am using a delete statement which is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
 WHERE (ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT) IN
       (SELECT ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT
          FROM LOAD_TRADE_ORDER
         WHERE IND_IS_BAD_RECORD = 'N');
Every column in the IN clause and in the subquery's select list has an index on it.
The number of rows to be deleted varies on each run (it may be hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it contains only a small number of distinct values.
The table TRADE_ORDER_EMP_ALLOCATION is also RANGE partitioned on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions defined on it.
Is there a way to make the above delete statement execute faster?
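One thing worth comparing (a sketch only; whether it helps depends entirely on the plan) is the same delete written as a correlated EXISTS, so its plan can be checked against the IN version with DBMS_XPLAN.DISPLAY_CURSOR:

DELETE FROM trade_order_emp_allocation t
 WHERE EXISTS (SELECT 1
                 FROM load_trade_order l
                WHERE l.ind_is_bad_record        = 'N'
                  AND l.artemis_source_system_id = t.artemis_source_system_id
                  AND l.nm_artemis_source_system = t.nm_artemis_source_system
                  AND l.cd_book_key              = t.cd_book_key
                  AND l.activity_dt              = t.activity_dt);

If both forms spend their time maintaining the many indexes on TRADE_ORDER_EMP_ALLOCATION rather than finding the rows, the indexes themselves are the place to look, rather than the statement text.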
I have a query (report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified an index scan that reads more than 2,000 million rows in the second schema but only 440 million in the first schema, which is why the first one is fast. I am expecting the same behaviour in the second schema.
I have verified the following: all records in the tables in the two schemas are the same, all indexes are the same, the tables have been analyzed, and histograms have been gathered on all the columns as in the first schema.
But I still have the same problem and don't know what the cause could be.
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by SET TIMING ON was 39.5 seconds.
I also took a trace (tkprof) of the same run. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
We have a scenario where we want to find all the SQL statements that were executed in a particular time window. The SQL statements are executed via our application. I tried the AWR report, but it shows only the queries that took a long time to execute, and I also tried V$SESSION and V$SQLAREA. How can I view the SQL statements executed in a particular session, or in the current session?
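A sketch of two places to look, assuming enough history is still available (V$SQLAREA only covers statements still in the shared pool, and DBA_HIST_ACTIVE_SESS_HISTORY needs the Diagnostics Pack and is sampled, so very short statements can be missed; the time window below is only an example):

-- cached statements that were last active in the window
SELECT sql_id, parsing_schema_name, module, last_active_time, executions, sql_text
  FROM v$sqlarea
 WHERE last_active_time BETWEEN TO_DATE('2013-09-10 14:00', 'YYYY-MM-DD HH24:MI')
                            AND TO_DATE('2013-09-10 15:00', 'YYYY-MM-DD HH24:MI');

-- sampled history of what each session was running in the window
SELECT sample_time, session_id, sql_id, module
  FROM dba_hist_active_sess_history
 WHERE sample_time BETWEEN TIMESTAMP '2013-09-10 14:00:00'
                       AND TIMESTAMP '2013-09-10 15:00:00'
 ORDER BY sample_time, session_id;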
We have a big hierarchical query which now runs for a long time (around 6 hours; earlier it ran for 3 hours). We have to tune this query so that the jobs finish within the stipulated time frame.
The query below inserts around 42 million records into the table WK_ACCT_WSTORE. I have attached it in the text file.
Our application servers run a SELECT which returns zero rows every time. This SELECT is inside a package, and the package is called by the application servers very frequently, which causes unnecessary CPU usage.
Original query and plan
SQL> SELECT SEGMENT_JOB_ID, SEGMENT_SET_JOB_ID, SEGMENT_ID, TARGET_VERSION
  2    FROM AIMUSER.SEGMENT_JOBS
  3   WHERE SEGMENT_JOB_ID NOT IN
  4         (SELECT SEGMENT_JOB_ID
  5            FROM AIMUSER.SEGMENT_JOBS);
[code]....
Which option would be better, or do we have other options? They need to pass these columns, with zero rows, to a ref cursor.
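If the requirement really is just to hand back an empty result set with the right column shape, a sketch of a cheaper form (the NOT IN of SEGMENT_JOBS against itself above cannot return rows anyway, so a constant-false predicate avoids scanning the table twice; P_RESULT is a hypothetical OUT SYS_REFCURSOR parameter of the packaged procedure):

OPEN p_result FOR
  SELECT segment_job_id, segment_set_job_id, segment_id, target_version
    FROM aimuser.segment_jobs
   WHERE 1 = 0;   -- same column list, guaranteed zero rows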
select serialnumber
  from product
 where productid in
       (select /*+ full parallel(producttask 16) */ productid
          from producttask
         where startedtimestamp > to_date('2013-07-04 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
           and startedtimestamp < to_date('2013-07-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
           and producttasktypeid in
I'm planning to decrease the execution time by managing the redo log files, but I'm stuck on a couple of points: > Why is my OPTIMAL_LOGFILE_SIZE showing NULL? > I tried resizing the log files from 100M to 200M and also added one more log group of 200M, but it turned out that this didn't decrease my execution time.
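On the first point, a sketch of where that value comes from; as far as I know, OPTIMAL_LOGFILE_SIZE is derived from V$INSTANCE_RECOVERY and stays NULL unless FAST_START_MTTR_TARGET is set to a non-zero value:

-- NULL is expected here when FAST_START_MTTR_TARGET is not set
SELECT optimal_logfile_size FROM v$instance_recovery;

-- hypothetical example: set an MTTR target (in seconds), then re-check
ALTER SYSTEM SET fast_start_mttr_target = 300;
SELECT optimal_logfile_size FROM v$instance_recovery;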
I wish to run a SQL query and measure its elapsed time, then compare the values with other Oracle DBs from other companies. That would give me a feel for whether our DB performs well. For example, in the UNIX world you can create a random 4GB file to measure I/O throughput and compare the values (for example 4MB/sec).
What's the simplest way to compare DB response time between forum members' databases and our own? I don't need 100% accurate numbers.
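A deliberately crude sketch that anyone could paste into SQL*Plus to swap numbers (one CPU-bound PL/SQL loop and one dictionary query; it says nothing about the I/O subsystem, so treat the timings as rough indicators only):

SET TIMING ON

-- CPU-only baseline: ten million loop iterations
DECLARE
  v NUMBER := 0;
BEGIN
  FOR i IN 1 .. 10000000 LOOP
    v := v + i;
  END LOOP;
END;
/

-- dictionary-based query baseline (needs no application tables)
SELECT COUNT(*) FROM all_objects;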