Performance Tuning :: Loading And Reading At Same Time

Feb 13, 2013

For example, we have a table ACCOUNT (a snowflake dimension containing other dimension keys) and many fact tables based on this dimension. Normally a data warehouse load works like this: the dimensions are loaded first, and then the facts. Our load frequency is 30 minutes.

To make data available in the facts sooner (it is a financial application), I am considering splitting the load into two batches, one for the dimension and one for the facts. I came to this conclusion because there is no hard dependency that the dimension must finish before the fact load starts; at worst an update might occasionally be missed. But if I do that, the dimension will be read by the fact load in another session while it is still being loaded. Will this affect performance?

In short: will LOADING (insert/update) and selecting from the same table at the same time affect performance in any way?
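A minimal two-session sketch (the ACCOUNT column names below are hypothetical) of what happens when a reader hits the table mid-load: in Oracle, readers do not block writers and writers do not block readers, so the concurrent SELECT is not blocked; read consistency means it simply sees only the rows committed before it started.

-- Session 1 (dimension load): new rows inserted, not yet committed
INSERT INTO account (account_key, customer_key, branch_key)
VALUES (1001, 55, 7);

-- Session 2 (fact load, running at the same time): not blocked by the
-- uncommitted insert above; it sees the table as of its own start time
SELECT COUNT(*) FROM account;

-- Session 1 commits; queries started after this point see the new row
COMMIT;

The cost of reading during the load is the extra work of constructing consistent-read blocks from undo, which is usually modest; whether it is noticeable at your volumes is something only a test under the 30-minute load cycle can confirm.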

View 1 Replies



Performance Tuning :: Sql Taking More CPU Time?

Nov 23, 2010

I have one query in production which is taking a lot of CPU time. When that statement is executing, CPU usage goes above 90%.

I am attaching the sql query and indexes on the table.
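The attachment is not reproduced here, but as a hedged first step (a sketch, assuming access to the V$ views) the statement burning the CPU can be confirmed directly from V$SQL before looking at its indexes:

-- Top 10 statements by CPU time since they were loaded into the shared pool
SELECT *
FROM  (SELECT sql_id,
              executions,
              ROUND(cpu_time / 1e6, 1)     AS cpu_seconds,
              ROUND(elapsed_time / 1e6, 1) AS elapsed_seconds,
              buffer_gets,
              sql_text
       FROM   v$sql
       ORDER  BY cpu_time DESC)
WHERE  ROWNUM <= 10;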

View 4 Replies View Related

Performance Tuning :: Update XML Eating Up A Lot Of Time

Sep 26, 2012

UPDATEXML is eating up a lot of time; is there any way to tune it?

SELECT UPDATEXML(:B3, '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' || :B2 || '"]', :B1)
FROM DUAL

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute    499      0.44       0.90          0          3          0          0
Fetch      499      1.49       2.87          0          0          0        499
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total      999      1.93       3.77          0          3          0        499

The real code:
SELECT updatexml(l_xml,
'/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' ||
upper(replace(cspkes_misc.fn_getparam(p_parent_list,
l_parent_list_clob,
'Y',
l_cnt,
'>'),
'-',
'_')) || '"]/text()',
l_fn_str)
INTO l_xml
FROM dual;
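Since the trace shows 499 executions of essentially the same UPDATEXML call (presumably once per field), one hedged idea, assuming the field names and values can be gathered first, is to pass several XPath/value pairs to a single UPDATEXML call and pay the parse and XML-rebuild cost once instead of hundreds of times. The FIELD_A/B/C names and bind variables below are illustrative only:

-- Sketch: update several FN elements in one call instead of one call per field
SELECT UPDATEXML(l_xml,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD_A"]/text()', :val_a,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD_B"]/text()', :val_b,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD_C"]/text()', :val_c)
  INTO l_xml
  FROM dual;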

View 4 Replies View Related

Performance Tuning :: Avg Time Taken By Execution Plan

Apr 12, 2013

How can I check the average time taken by an execution plan? I have a very big query and it changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time of the query under each of the different plans it has used.
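A hedged sketch (assuming AWR / the Diagnostics Pack is available and the SQL_ID is known; :sql_id is a placeholder) that averages the elapsed time per execution for each plan the query has used, straight from the AWR history:

SELECT plan_hash_value,
       SUM(executions_delta) AS execs,
       ROUND(SUM(elapsed_time_delta) / 1e6
             / NULLIF(SUM(executions_delta), 0), 2) AS avg_elapsed_seconds
FROM   dba_hist_sqlstat
WHERE  sql_id = :sql_id
GROUP  BY plan_hash_value
ORDER  BY avg_elapsed_seconds;

Once the best plan_hash_value is identified, it can be pinned with a SQL plan baseline (11g) or a stored outline / SQL profile on earlier releases.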

View 7 Replies View Related

Performance Tuning :: Time Calculation In AWR Report

Dec 12, 2011

With Statspack we had:

elapsed time = time spent on waits + time CPU was used

Total time between snaps = elapsed time + (maybe) time spent waiting for CPU. In AWR, is it possible to draw up such an equation? I can see that the AWR report has the following elements:

1) End Snap time - Begin Snap time
2) DB time - as mentioned at the top of AWR report
3) DB CPU - in "Top 5 Timed Foreground Events" (I assume this is 'CPU used by sesson timing' in statspack)
4) Total of time for all Statistics in "Time Model Statistics"
5) BUSY_TIME + IDLE_TIME - "Operating System Statistics"

Is the elapsed time simply the time between the 2 snapshots, or something else? Also, by which number of seconds should 'DB Time(s)' per second and 'DB CPU(s)' per second in the Load Profile be multiplied to get total DB time and CPU time?
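A hedged sketch (the snap IDs are placeholders) that pulls the same time-model figures the report summarises, so the relationship can be checked by hand between any two snapshots; values in DBA_HIST_SYS_TIME_MODEL are cumulative microseconds, hence the delta and the division:

SELECT b.stat_name,
       ROUND((e.value - b.value) / 1e6, 1) AS seconds_between_snaps
FROM   dba_hist_sys_time_model b,
       dba_hist_sys_time_model e
WHERE  b.snap_id         = :begin_snap
AND    e.snap_id         = :end_snap
AND    b.dbid            = e.dbid
AND    b.instance_number = e.instance_number
AND    b.stat_id         = e.stat_id
AND    b.stat_name IN ('DB time', 'DB CPU', 'background elapsed time');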

View 2 Replies View Related

Performance Tuning :: Database Time In AWR Report

Jul 9, 2012

I understand that when data is read from disk, I/O is done, and when computations are done, CPU is used. So where does the following equation fit?

DB Time = sum of database CPU time + waits

Is I/O considered as a part of CPU time?

Does this equation change with a SAN or with OS caching?

View 3 Replies View Related

Performance Tuning :: SQL Ordered By Elapsed Time

Feb 2, 2013

Elapsed Time (s)  CPU Time (s)  Executions  Elap per Exec (s)  % Total DB Time  SQL Id         SQL Module                              SQL Text

3,263             32            1           3263.49            2.79             3ta0ms19fvgds  httpd.exe                               SELECT CASE WHEN COUNT >= 1...
6,360             164           2           3180.17            5.44             51jx99dm0swv7  cpm_srvscript@ahcaxasmil1b (TNS V1-V3)  SELECT /*+ CCL<PFT_GET_RP...

In the AWR report I see two scripts that are out of the ordinary, and I want to make sure that I interpret them correctly.

1) "Elap per Exec (s)" shows 3263.49 with 1 "Executions".
2) "Elap per Exec (s)" shows 3180.17 each execution with 2 "Executions".

Does this mean the first script ran for about 54 minutes (3263.49 s / 60) in its single execution, and the second for about 53 minutes (3180.17 s / 60) per execution? I need to understand what "Elapsed Time (s)", "CPU Time (s)", "Executions", "Elap per Exec (s)" and "% Total DB Time" represent.

View 9 Replies View Related

Performance Tuning :: Lot Of Time Being Taken To Delete Data From A Table

Apr 27, 2012

I have a table which contains 8,21,177 rows in total. Now I am trying to delete around 4,84,000 rows from this table using just one filter, i.e. my query is something like the one below:

DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'

This will delete 4,84,000 rows of data, but it is taking a very long time; to be precise, almost 25 hours. The created_by column is indexed. (A possible alternative approach is sketched after the execution plan below.)

Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532

--------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |          |   499 | 20459 |    39   (0)| 00:00:01 |
|   1 |  DELETE            | RESOURCE |       |       |            |          |
[code]....
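One hedged alternative, assuming the table can be taken offline briefly and that indexes, constraints, grants and triggers are recreated afterwards (the _keep name and parallel degree are illustrative): when more than half the rows are being removed, it is often far cheaper to keep the wanted rows than to delete the unwanted ones.

-- Sketch only: rebuild the table without the rows to be removed
CREATE TABLE resource_keep NOLOGGING AS
SELECT /*+ parallel(r 4) */ *
FROM   resource r
WHERE  created_by <> 'MIGN'
   OR  created_by IS NULL;

-- After validating the copy: rename the old table away, rename
-- resource_keep to resource, then recreate indexes, constraints,
-- triggers and grants, and gather statistics.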

View 26 Replies View Related

Performance Tuning :: Lost Index From Second Execution Time?

Jul 11, 2013

The problem is described below:

- First execution: all indexes on both tables are used.

- Second execution: only the indexes on the first table are used; the other table gets a full table scan.

- Third execution: full table scans (FTS) on both tables.

Now, I show the objects and relate information here:

The Tables:

system@dbwap> select count(*) from my_wap.news_relation;

COUNT(*)
----------
272708

system@dbwap> select count(*) from my_wap.news_content;

COUNT(*)
----------
95092

system@dbwap> desc my_wap.news_content;
Name Null? Type
----------------------------------------------------- -------- ----------------
ID NOT NULL NUMBER(11)
SUBJECT NOT NULL VARCHAR2(500)
TITLE VARCHAR2(4000)
STATE NUMBER(1)
IMGPATH VARCHAR2(500)
ALIGN VARCHAR2(10)

[Code]....

View 7 Replies View Related

Performance Tuning :: Max And Delete Statement Taking Lot Of Time?

Apr 15, 2011

The Test1 table has around 385,772,300 rows. The delete and select statements below are taking a lot of time (a possible approach is sketched after the table script below).

The select statement takes more than 1 hour.

SELECT TO_NUMBER(MAX(f.T3))
--INTO v_FISCAL_MONTH_ID
FROM Test1 f;

The delete statement takes more than 2 hours.

DELETE FROM TEST1 WHERE TRUNC(T10) < TRUNC(ADD_MONTHS(SYSDATE,-36));
CREATE TABLE Test1
(

[Code].....
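A hedged sketch, assuming T3 and T10 can be indexed: an index on T3 lets the MAX be answered by an index MIN/MAX scan instead of a full scan of 385 million rows, and moving TRUNC off the column lets the delete use an index range scan on T10. The two date predicates are equivalent here because the right-hand side is already truncated to midnight. Index names are illustrative:

-- Sketch only
CREATE INDEX test1_t3_idx  ON test1 (t3);
CREATE INDEX test1_t10_idx ON test1 (t10);

-- MAX can now be satisfied by an INDEX FULL SCAN (MIN/MAX)
SELECT TO_NUMBER(MAX(f.t3)) FROM test1 f;

-- Equivalent predicate without wrapping the column in TRUNC,
-- so the optimizer can range-scan the index on T10
DELETE FROM test1
WHERE  t10 < TRUNC(ADD_MONTHS(SYSDATE, -36));

For a purge of this size, partitioning Test1 by month on T10 and dropping old partitions would avoid the big delete altogether, at the cost of a one-off reorganisation.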

View 4 Replies View Related

Performance Tuning :: Snapshot Skipped In Between Scheduled Time?

Sep 11, 2013

Yesterday there was a performance issue on the server. So today, when generating a report for that particular period, I found that the snapshot IDs are sequential but one hourly snapshot was skipped: instead of a snapshot at 15:30, the next one was taken at 16:30.

Enter value for num_days: 2

Listing the last 2 days of Completed Snapshots
Snap
Instance DB Name Snap Id Snap Started Level
------------ ------------ --------- ------------------ -----
tagidev TAGIDEV 2857 10 Sep 2013 00:30 1
2858 10 Sep 2013 01:30 1
2859 10 Sep 2013 02:30 1
2860 10 Sep 2013 03:30 1
2861 10 Sep 2013 04:30 1
2862 10 Sep 2013 05:31 1

[code]....

Below are the details at alert log -

Tue Sep 10 14:28:20 2013
Thread 1 cannot allocate new log, sequence 7029
Checkpoint not complete
Current log# 2 seq# 7028 mem# 0: E:\APP\ORACLE\ORADATA\TAGIDEV\REDO02.LOG
Thread 1 advanced to log sequence 7029 (LGWR switch)

[code]....

1) Why didn't the snapshot start at 15:30?

2) The database had just started around the scheduled AWR snapshot time, yet the snapshot was generated at 16:32 instead of 16:30, even though the last service, "SMCO", started at 16:42. How was the snap ID generated for this particular time?

3) what does "kewastUnPackStats(): bad magic 1 (0x000000001B3CE48D, 0)" mean?

View 7 Replies View Related

Performance Tuning :: How To Reduce Index Creation Time

Aug 17, 2010

Is there any way to reduce index creation time? I have one table with 7,700,000 records; every day this table gets truncated and recreated with a CREATE TABLE AS SELECT statement, and then we create 4 indexes. Each index takes 5 minutes, so index creation takes 20 minutes in total.
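A hedged sketch (the degree and index name are illustrative, and whether NOLOGGING is acceptable depends on your backup and standby strategy): building each index in parallel and with minimal redo, then resetting the attributes, usually shortens the build considerably on a 7.7 million row table:

-- Sketch: build with parallel slaves and minimal redo generation
CREATE INDEX my_table_col1_idx ON my_table (col1)
  PARALLEL 4
  NOLOGGING;

-- Reset so later queries and DML behave as before
ALTER INDEX my_table_col1_idx NOPARALLEL LOGGING;

A larger session sort area (or PGA_AGGREGATE_TARGET) during the build also helps, since index creation is essentially a big sort.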

View 2 Replies View Related

Performance Tuning :: Why Different Elapsed Time In Oracle9i And 10g For Same Query

May 5, 2011

Why is the elapsed time different in Oracle 9i and Oracle 10g for the same query?

View 2 Replies View Related

Performance Tuning :: SQL Ordered By Elapsed Time And SQL Module?

Feb 24, 2013

Elapsed Time (s)  CPU Time (s)  Executions  Elap per Exec (s)  % Total DB Time  SQL Id         SQL Module              SQL Text
2,423             1             3,919       0.62               1.83             gt49gg0fnc5x8  srv_dr@ahs (TNS V1-V3)  UPDATE /*+ CCL<OENDB_FILE...
2,227             14            1           2227.16            1.68             bggfx8a04prj9  SQL*Plus                select * from (select n.source...
.........

In [SQL ordered by Elapsed Time], [SQL Module] indicates which process executed a SQL statement (e.g. srv_dr@ahs) outside of SQL*Plus. If [SQL Module] shows [SQL*Plus], does it mean the query was run in SQL*Plus manually or directly? I have the SQL ID. How do I find out who ran it, how, and exactly what time it was run?
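A hedged sketch (requires the Diagnostics Pack; :sql_id is a placeholder) that pulls from the ASH history which user, program and module were seen running the statement, and over what time range samples were captured:

SELECT MIN(h.sample_time) AS first_seen,
       MAX(h.sample_time) AS last_seen,
       u.username,
       h.program,
       h.module
FROM   dba_hist_active_sess_history h
       LEFT JOIN dba_users u ON u.user_id = h.user_id
WHERE  h.sql_id = :sql_id
GROUP  BY u.username, h.program, h.module
ORDER  BY first_seen;

Note that ASH is sampled, so this shows when the statement was active, not a complete audit trail; for the latter, database auditing would be needed.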

View 5 Replies View Related

Performance Tuning :: WINDOW SORT - Reduce Time?

Nov 30, 2011

In my insert query, the window sort takes most of the time, i.e. 93% of the total execution time. How do I reduce this time? Are there any tuning parameters available for this?
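A hedged sketch: a WINDOW SORT that dominates an INSERT ... SELECT is usually constrained by the PGA workarea and spills to temp. Assuming session settings can be changed just for this load (the sizes below are illustrative), the sort can be given more memory either instance-wide or manually for the one session:

-- For the loading session only (sizes are illustrative)
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 536870912;   -- 512 MB for the window sort

-- ... run the INSERT ... SELECT containing the analytic functions here ...

-- Or instance-wide, keep automatic management but with a bigger target
-- ALTER SYSTEM SET pga_aggregate_target = 4G;

Checking in V$SQL_WORKAREA whether the sort is running one-pass or multi-pass, or whether the analytic function's PARTITION BY / ORDER BY can be simplified, would come before changing any parameters.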

View 5 Replies View Related

Performance Tuning :: Delete Statement Is Taking More Time For Execution?

Mar 9, 2010

In my code I am using a delete statement which is taking too much time to execute.

Statement is as follow:

DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');

Tables used:
- TRADE_ORDER_EMP_ALLOCATION: row count 329,525,880
- LOAD_TRADE_ORDER: row count 29,281

Every column in "IN" clause and select clause is containing index on it

The number of rows to be deleted varies every time (it may be hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, though it contains distinct values.

Even table "TRADE_ORDER_EMP_ALLOCATION" is containing "RANGE" PARTITION over it on the column "ARTEMIS_SOURCE_SYSTEM_ID". With this I am enclosing table scripts with Indexes and Partitions over it.

Is there a way to make the above delete statement execute faster?
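A hedged sketch of one common rewrite, assuming a composite index exists on TRADE_ORDER_EMP_ALLOCATION over (ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT): a correlated EXISTS is semantically the same as the IN, but makes it easier for the optimizer to drive from the ~29,000-row LOAD_TRADE_ORDER table and probe the 329-million-row table through that index; whether it actually chooses that join order needs to be confirmed from the plan:

DELETE FROM trade_order_emp_allocation t
WHERE  EXISTS (SELECT 1
               FROM   load_trade_order l
               WHERE  l.ind_is_bad_record        = 'N'
               AND    l.artemis_source_system_id = t.artemis_source_system_id
               AND    l.nm_artemis_source_system = t.nm_artemis_source_system
               AND    l.cd_book_key              = t.cd_book_key
               AND    l.activity_dt              = t.activity_dt);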

View 4 Replies View Related

Performance Tuning :: Query Running For A Long Time In Second Schema

Apr 27, 2012

I have a query (report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scans more than 2,000 million records in the second schema, but only 440 million in the first schema, which is why the first is fast. I expect the same behaviour in the second schema.

I have verified the following
All records in tables in 2 schemas are same.
All indexes are same
Analyzed the tables
Gathered Histogram on all the columns as per the first schema.

But I still have the same problem and don't know what the cause could be.

Table_nameNum_RowsBlocks
PRPSL_LST_T5866107159
PRPSL_WKFLW_ACTVTY_T5829904030
ITEM_CHR_VAL_T5134340104049020
ITEM_RGN_ASSN_T8571220137215

Also attached 2 screen shots of OEM Plans..

View 2 Replies View Related

Performance Tuning :: Elapsed Time Not Accounted In Tkprof File

Jan 13, 2011

Please find attached the tkprof'd file of the session.

I started the trace after the query started (upon user's complaint)

However, even after tracing the session for more than 30 minutes, I cannot see where those 30 minutes are accounted for in this file.

View 11 Replies View Related

Performance Tuning :: Elapsed Time In Trace File Output

Nov 16, 2011

I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.

I also took a trace (tkprof) for the same run. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.06          0         10          0          0
Fetch      758      0.03       1.77          0          0          0      11345
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total      760      0.03       1.83          0         10          0      11345
[code]....

View 1 Replies View Related

Performance Tuning :: How To Find Out Queries That Are Executed In Particular Range Of Time

Sep 9, 2011

Here we have a scenario where we want to find all the SQL statements that were executed during a particular time window. The SQL statements are executed via our application. I tried the AWR report, but it shows only the SQL that took a long time to execute. I also tried V$SESSION and V$SQLAREA. How can I view the SQL statements executed in a particular range of time, or in the current session?
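A hedged sketch (Diagnostics Pack licensing applies; the timestamps are placeholders): ASH keeps a sampled history of what was running, so the distinct SQL captured in a given window can be listed, with the caveat that statements too short to be sampled will not appear:

SELECT s.sql_id,
       s.sql_text
FROM   dba_hist_sqltext s
WHERE  s.sql_id IN (SELECT h.sql_id
                    FROM   dba_hist_active_sess_history h
                    WHERE  h.sample_time BETWEEN TIMESTAMP '2011-09-09 10:00:00'
                                             AND TIMESTAMP '2011-09-09 11:00:00');

For the most recent hour or so, V$ACTIVE_SESSION_HISTORY can be used the same way without waiting for AWR snapshots.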

View 3 Replies View Related

Performance Tuning :: Cost Of Query Is Brought Down But Taking Same Time

Sep 1, 2010

For one query, the cost was 16 lakhs and it was taking 30 minutes; I brought the cost down to 1.5 lakhs, but it is still taking 30 minutes.

There were many outer joins, and the same table was used (in the FROM clause) 5 times in the query. I introduced a WITH clause and brought the cost down.

View 7 Replies View Related

Performance Tuning :: Hierarchical Queries - Run Jobs Within Stipulated Time Frame

Nov 17, 2011

We have a big hierarchical query which now runs for a long time (around 6 hours; earlier it ran for 3 hours). We have to tune this query so that the jobs run within a stipulated time frame.

The query below inserts around 42 million records into the table WK_ACCT_WSTORE. I have attached it in the text file.

View 4 Replies View Related

Performance Tuning :: Servers Will Be Running SELECT Which Returns Zero Rows All Time

Feb 11, 2011

Our application servers run a SELECT which returns zero rows all the time. This SELECT is put into a package, and this package is called by the application servers very frequently, which is causing unnecessary CPU usage.

Original query and plan

SQL> SELECT SEGMENT_JOB_ID, SEGMENT_SET_JOB_ID, SEGMENT_ID, TARGET_VERSION
  2  FROM AIMUSER.SEGMENT_JOBS
  3  WHERE SEGMENT_JOB_ID NOT IN
  4  (SELECT SEGMENT_JOB_ID
  5  FROM AIMUSER.SEGMENT_JOBS);
[code]....

Which option would be better, or do we have other options? They need to pass the columns with zero rows to a ref cursor.
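If the callers only need an empty result set with the right column shape, one hedged sketch (the procedure name is illustrative) is to open the ref cursor over a query the optimizer can prove returns no rows, so SEGMENT_JOBS is never actually scanned:

CREATE OR REPLACE PROCEDURE get_empty_segment_jobs (p_rc OUT SYS_REFCURSOR) AS
BEGIN
  OPEN p_rc FOR
    SELECT segment_job_id, segment_set_job_id, segment_id, target_version
    FROM   aimuser.segment_jobs
    WHERE  1 = 2;   -- always false: same columns, zero rows, no real work
END;
/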

View 6 Replies View Related

Performance Tuning :: Select Query Taking Time Even After Using Parallel Hint?

Sep 25, 2013

select
serialnumber from product where productid in
(select /*+ full parallel(producttask 16) */productid from producttask where
startedtimestamp > to_date('2013-07-04 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and startedtimestamp < to_date('2013-07-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
and producttasktypeid in

[code]....

Explain plan output:

Plan hash value: 2779236890
---------------------------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows | Bytes | Cost (%CPU)| Time      | Pstart | Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |    1 |    29 | 9633M  (8) | 999:59:59 |        |       |
|*  1 |  FILTER               |      |      |       |            |           |        |       |
|   2 |   PARTITION RANGE ALL |      | 738M |   19G | 6321K  (1) | 21:04:17  |      1 |  6821 |

[code]....

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (<not feasible>)
4 - filter("PRODUCTID"=:B1)
5 - filter(ROWNUM<100)
12 - access("MODELID"=:B1)

[code]....

Note: - SQL profile "SYS_SQLPROF_014153616b850002" used for this statement

View 2 Replies View Related

Performance Tuning :: Increased LOGFILE Capacity Not Decrease Execution Time?

Aug 6, 2012

I'm planning to decrease the time taken to load data by managing the redo log files, but I'm stuck on a couple of points: why is my OPTIMAL_LOGFILE_SIZE showing NULL? And I tried resizing the log file capacity from 100M to 200M, and also adding one more log group with 200M capacity, but that didn't decrease my execution time.
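A hedged note on the NULL: OPTIMAL_LOGFILE_SIZE in V$INSTANCE_RECOVERY is normally populated only once FAST_START_MTTR_TARGET is set to a non-zero value. A sketch (the 900-second target is illustrative):

-- Give Oracle an MTTR target so it can compute a redo log size recommendation
ALTER SYSTEM SET fast_start_mttr_target = 900;

-- Recommended redo log size (in MB), and how often the logs actually switch
SELECT optimal_logfile_size FROM v$instance_recovery;
SELECT sequence#, first_time FROM v$log_history ORDER BY first_time DESC;

Resizing or adding online logs mainly removes waits such as 'log file switch (checkpoint incomplete)'; if the load was not waiting on those events, a bigger or extra log group would not be expected to change the execution time.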

View 12 Replies View Related

Performance Tuning :: Standard SQL Query To Benchmark Oracle DB Response Time?

Oct 26, 2013

I wish to run a SQL query and measure its elapsed time, then compare the values with other Oracle DBs from other companies. That will give me a feeling for whether our DB performs well. For example, in the UNIX world you can create a random 4GB file to measure I/O throughput and compare the values (for example 4MB/sec).

What's the simplest way to compare DB response time between forum members' databases and our own? I don't need 100% accurate numbers.
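One hedged sketch of a data-independent yardstick (purely CPU-bound, so it says nothing about the I/O subsystem and is only a rough comparison): time a fixed amount of generated work and compare the elapsed figures between databases:

SET TIMING ON

-- Generate and aggregate 10 million rows; no application tables are touched
SELECT COUNT(*), ROUND(SUM(SQRT(level))) AS chk
FROM   dual
CONNECT BY level <= 10000000;

SET TIMING OFF

For an I/O-oriented comparison, a full scan of a table of known size (or Oracle's own I/O calibration in 11g) would be a better yardstick, though results still depend heavily on caching.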

View 1 Replies View Related

Performance Tuning :: Query Taking High CPU And Execution Time In Oracle 11gR2

Dec 24, 2012

The query below is taking high CPU, almost 98%, and a long time to execute.

SELECT ancestor,
Max(D.alarmstate) ALARMSTATE,
Max(D.sialarmstate) SIALARMSTATE,
Max(D.uncralarmstate) UNCRALARMSTATE,
Max(M.commstate) COMMSTATE,
Max(M.nncommstate) NNCOMMSTATE,
Max(M.servicestate) SERVICESTATE,
Max(M.abnormal) ABNORMAL,
CASE
[code]....

View 15 Replies View Related

Performance Tuning :: Real Time Speed Improvements And Auto-trace Results

Feb 4, 2011

I've got a query running a select count(*) over a table. The default plan takes in the order of 15 minutes to return, while a plan hinted to use a different index takes 3 minutes to return.

Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running autotrace against the two queries, I see fairly different values, as one would expect.

Query

select count (*) from fulfilmentitem bfi where created >= sysdate-30 AND bfi.status = 'FA' AND bfi.fulfilmentmethod = 'D'
Slow run
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 33119 (1)|
| 1 | SORT AGGREGATE | | 1 | 15 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM | 12525 | 183K| 33119 (1)|
|* 3 | INDEX RANGE SCAN | IDX_FULFIL_METHODSTATUS | 250K| | 1786 (1)|
----------------------------------------------------------------------------------------------
[code]....

IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD & STATUS in that order.
IDX_BFI_CREATED is on CREATED and is approx 70% of the size of the other index

The row counts estimated in the explain plan are out; the count(*) comes in at 32.8k rows. As you will have seen, the fast run shows a pretty significant increase in consistent gets compared to the slow run, and a decent, though not dramatic, drop in physical reads.

My uncertainty is around if these changes in consistent get/phys read values would typically be enough to suggest the real time improvements I'm observing or if other (albeit perhaps temporary) factors are involved. It is a prod OLTP environment so the data will be rapidly changing and that may be a factor.

I know it can never be an exact science without intimately knowing the hardware/current loads etc but I also know that there's enough experience on these boards to have a loose handle on if the time shifts between queries are likely (or not) to be reflective of the stat changes or if those differences alone shouldn't (or typically wouldn't account) for it.

I'm thinking about instructing the query to ignore its original plan, but am hesitant to do so without being a little more confident that it's not just a timing thing, or something other than the change of index, that is causing the improvement. Based on the autotrace stat changes observed, I couldn't put my hand on my heart and say "yup - that change is good, ignore the default index all the time for this job".

View 11 Replies View Related

Performance Tuning :: Why Elapsed Time Changed While Execution Plan And Stats Remains The Same

Jun 4, 2010

The attached query gives a consistent execution plan but different timings across runs.

SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/
DISTINCT odr.dve_id
FROM company_requirements crt, orders odr, lelo_products la_pct
WHERE crt.qtn_cun_id = 10035637--10000021--10035667
AND crt.ID = odr.crt_id_quote_implemented
AND NVL (odr.cancellation_date, '31-Dec-9999') = '31-Dec-9999'

[code]....

We have 4 databases, 2 on each server: db1 and db2 on server1, and db3 and db4 on server2.

Below is the count of records for the filter column of the biggest table in the query, taken on all 4 databases (the column is nullable):

select count(*) from company_requirements crt WHERE crt.qtn_cun_id = 10035637
db1 = 73335
db2 = 89073
db3 = 81182
db4 = 82936

First I executed the query on db1 and db2 while there wasn't any user logged on to the system

db1
**********
call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.06       0.08          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1     17.47     473.39      85704    1508102          0          0

[code]...

Elapsed times include waiting on the following events:

Event waited on                            Times Waited  Max. Wait  Total Waited
----------------------------------------   ------------  ---------  ------------
SQL*Net message to client                             1       0.00          0.00
db file sequential read                           85704       0.31        460.55
latch free                                            1       0.00          0.00
SQL*Net message from client                           1      14.98         14.98

[code]...

Why did the elapsed time change when neither the data nor the plan changed at all? Also, why does the plan have different stats for round 1 and round 2 on db1 and db2?

I ran it 2 times per round on each database, so hard parsing should not be the issue. Also, why is the number of rows accessed different on db1/db2 versus db3/db4, especially for step 1, when the count of crt.qtn_cun_id is similar?

In fact, when the query was taking long I was the only user on the system. Also, I used a hard-coded value (no bind variables at all).

I also checked num_rows and distinct keys, which are quite similar across all 4 databases, and no stats were gathered during the query execution.

What should I have checked or monitored?

View 10 Replies View Related

Forms :: Slow Performance Using OLE2 Reading Xls File

Oct 10, 2011

I am trying to import some information from Excel into Oracle using OLE2 in Oracle Forms 6i, but it is very slow when the import approaches 10k lines. Is there anything I can do to optimize this? The code used follows...

application OLE2.Obj_Type;
workbooks OLE2.Obj_Type;
workbook OLE2.Obj_Type;

[Code]....

View 2 Replies View Related






