Performance Tuning :: Increased LOGFILE Capacity Does Not Decrease Execution Time?
Aug 6, 2012
I'm trying to reduce data-load execution time by managing the redo log files, but I'm stuck on a couple of points: > Why is my OPTIMAL_LOGFILE_SIZE showing NULL? > I resized the LOGFILE capacity from 100M to 200M and also added 1 more LOG GROUP of 200M, but it turned out that didn't decrease my execution time.
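A hedged note that may explain the NULL (a minimal sketch, assuming access to the V$ views): OPTIMAL_LOGFILE_SIZE is exposed through V$INSTANCE_RECOVERY and, as far as I know, it is only populated when FAST_START_MTTR_TARGET is set to a non-zero value, so resizing the redo logs alone will not fill it in.

SHOW PARAMETER fast_start_mttr_target

-- OPTIMAL_LOGFILE_SIZE tends to stay NULL while FAST_START_MTTR_TARGET is unset/zero
SELECT optimal_logfile_size, target_mttr, estimated_mttr
  FROM v$instance_recovery;

-- Example only - pick a target that matches your recovery SLA before trying this:
-- ALTER SYSTEM SET fast_start_mttr_target = 300;

Larger or additional redo logs mainly reduce waits such as "log file switch (checkpoint incomplete)"; if the load was not waiting on those events to begin with, resizing the logs would not be expected to change execution time.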
View 12 Replies
Apr 12, 2013
How can I check the average time taken by an execution plan? I have a very big query that changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time the query takes when it runs with each of the different execution plans.
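A hedged starting point (a sketch, assuming the AWR views are licensed and substituting the real SQL_ID): both V$SQL and DBA_HIST_SQLSTAT record elapsed time and executions per plan_hash_value, so the average elapsed time per plan can be derived roughly like this:

-- Average elapsed seconds per execution, broken down by plan, from AWR history.
-- '&sql_id' is a placeholder for the SQL_ID of the query in question.
SELECT plan_hash_value,
       SUM(executions_delta)                        AS executions,
       ROUND(SUM(elapsed_time_delta) / 1e6
             / NULLIF(SUM(executions_delta), 0), 2) AS avg_elapsed_sec
  FROM dba_hist_sqlstat
 WHERE sql_id = '&sql_id'
 GROUP BY plan_hash_value
 ORDER BY avg_elapsed_sec;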
View 7 Replies
View Related
Jul 11, 2013
The problem can be described as follows:
- First execution: all indexes on both tables are used
- Second execution: only the indexes on the first table are used, with a full table scan on the other
- Third execution: full table scans on both tables
The objects and related information are shown here:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
COUNT(*)
----------
272708
system@dbwap> select count(*) from my_wap.news_content;
COUNT(*)
----------
95092
system@dbwap> desc my_wap.news_content;
Name Null? Type
----------------------------------------------------- -------- ----------------
ID NOT NULL NUMBER(11)
SUBJECT NOT NULL VARCHAR2(500)
TITLE VARCHAR2(4000)
STATE NUMBER(1)
IMGPATH VARCHAR2(500)
ALIGN VARCHAR2(10)
[Code]....
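One hedged first step, since the plan degrades across executions of the same statement (a sketch only; schema and table names are taken from the post, and stale or missing statistics are just one possible cause):

-- Refresh optimizer statistics on both tables, then re-check the plan actually
-- used at runtime (from the cursor cache, not just EXPLAIN PLAN).
EXEC dbms_stats.gather_table_stats('MY_WAP', 'NEWS_RELATION', cascade => TRUE);
EXEC dbms_stats.gather_table_stats('MY_WAP', 'NEWS_CONTENT',  cascade => TRUE);

-- '&sql_id' is a placeholder for the statement's SQL_ID:
SELECT * FROM TABLE(dbms_xplan.display_cursor(sql_id => '&sql_id', format => 'ALLSTATS LAST'));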
View 7 Replies
View Related
Mar 9, 2010
In my code I am using a DELETE statement which is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');
Tables Used:
- TRADE_ORDER_EMP_ALLOCATION: row count 329,525,880
- LOAD_TRADE_ORDER: row count 29,281
Every column in the IN clause and the SELECT clause has an index on it.
The number of rows to be deleted varies every time (it may be hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it contains distinct values.
The table TRADE_ORDER_EMP_ALLOCATION also has RANGE partitioning on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions on it.
Is there a way to make the above DELETE statement execute faster?
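One hedged option to test (a sketch, not a guaranteed fix): since LOAD_TRADE_ORDER is tiny compared with TRADE_ORDER_EMP_ALLOCATION, the same predicate can be expressed as a correlated EXISTS, which sometimes gives the optimizer a better join choice than the multi-column IN:

-- Equivalent rewrite of the IN as EXISTS; verify the plan and the deleted row
-- counts on a test copy before relying on it.
DELETE FROM trade_order_emp_allocation t
 WHERE EXISTS (SELECT 1
                 FROM load_trade_order l
                WHERE l.ind_is_bad_record        = 'N'
                  AND l.artemis_source_system_id = t.artemis_source_system_id
                  AND l.nm_artemis_source_system = t.nm_artemis_source_system
                  AND l.cd_book_key              = t.cd_book_key
                  AND l.activity_dt              = t.activity_dt);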
View 4 Replies
View Related
Dec 24, 2012
The query below is taking high CPU (almost 98%) and a long time to execute.
SELECT ancestor,
Max(D.alarmstate) ALARMSTATE,
Max(D.sialarmstate) SIALARMSTATE,
Max(D.uncralarmstate) UNCRALARMSTATE,
Max(M.commstate) COMMSTATE,
Max(M.nncommstate) NNCOMMSTATE,
Max(M.servicestate) SERVICESTATE,
Max(M.abnormal) ABNORMAL,
CASE
[code]....
View 15 Replies
View Related
Jun 4, 2010
The attached query gives a consistent execution plan but different timings across runs.
SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/
DISTINCT odr.dve_id
FROM company_requirements crt, orders odr, lelo_products la_pct
WHERE crt.qtn_cun_id = 10035637--10000021--10035667
AND crt.ID = odr.crt_id_quote_implemented
AND NVL (odr.cancellation_date, '31-Dec-9999') = '31-Dec-9999'
[code]....
We have 4 databases, 2 on each server: db1 and db2 on server1, db3 and db4 on server2.
Below are the record counts for the relevant column of the biggest table in the query, taken on all 4 databases (the column is nullable):
select count(*) from company_requirements crt WHERE crt.qtn_cun_id = 10035637
db1 = 73335
db2 = 89073
db3 = 81182
db4 = 82936
First I executed the query on db1 and db2 while no user was logged on to the system.
db1
**********
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 17.47 473.39 85704 1508102 0 0
[code]...
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 85704 0.31 460.55
latch free 1 0.00 0.00
SQL*Net message from client 1 14.98 14.98
[code]...
Why did the elapsed time change when the data and the plan haven't changed at all? Also, why does the plan have different stats for rounds 1 and 2 on db1 and db2?
I ran it twice per round on each database, so hard parsing should not be an issue. Also, why is the number of rows accessed different between db1/db2 and db3/db4, especially for step 1, when the count for crt.qtn_cun_id is similar?
In fact, when the query was taking long I was the only user on the system. Also, I used hard-coded values (no bind variables at all).
I checked num_rows and distinct keys as well, which are quite similar across all 4 databases. Also, no stats were gathered during the query execution.
What should I have checked or monitored?
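A hedged observation from the trace above: roughly 460 of the 473 elapsed seconds were spent on "db file sequential read" (85,704 single-block reads), so the most likely variable between runs is physical I/O versus blocks already in the buffer cache, not the plan. A sketch of one thing to compare per run and per database:

-- If disk_reads drops on the repeat run while buffer_gets stays about the same,
-- the elapsed-time difference is caching, not a plan change.
SELECT sql_id, plan_hash_value, executions, buffer_gets, disk_reads,
       ROUND(elapsed_time / 1e6, 1) AS elapsed_sec
  FROM v$sql
 WHERE sql_text LIKE 'SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/%';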
View 10 Replies
View Related
Mar 19, 2013
We are installing Oracle for a new data center (still in planning). It will be an OLTP shop working 24/7, with 250 concurrent users.
1. Can it, in theory run with good performance on a quad processor(something like Intel Xeon Series 56XX) , with 32GB of memory?
2. What is the Oracle Database Edition required for this configuration?
3. What additional things should one take into account to plan the most cost-effective configuration?
View 2 Replies
View Related
Jul 20, 2013
Why is the query behaving differently (different execution plan) on a different database?
The production database has been replicated to a new schema on the same database instance. I tried running the query in both environments: in prod the index is used, but in newdev it is not. In that case the existing primary key index was not used.
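A hedged check (a sketch; owner and table names are placeholders): a common cause for this is that the replicated schema has different or missing optimizer statistics, which can be compared directly:

-- Compare table and index statistics between the two schemas.
SELECT owner, table_name, num_rows, blocks, last_analyzed
  FROM dba_tab_statistics
 WHERE table_name = 'YOUR_TABLE'
   AND owner IN ('PROD_SCHEMA', 'NEWDEV_SCHEMA');

SELECT owner, index_name, num_rows, distinct_keys, clustering_factor, last_analyzed
  FROM dba_ind_statistics
 WHERE table_name = 'YOUR_TABLE'
   AND owner IN ('PROD_SCHEMA', 'NEWDEV_SCHEMA');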
View 6 Replies
View Related
Dec 26, 2011
I am executing the query below:
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
vendor_data vd2,
vendor_data vd3,
(SELECT rownumber,
[code]....
It takes different approaches (execution plans) when executing for the same set of parameters. Because of this it sometimes executes successfully, but sometimes it fills all the TEMP space and fails. I am pasting both execution plans (different from the explain plan) below:
I. Successful Execution Plan:
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 65612 (100)| | | |
|* 1 | HASH JOIN | | 1 | 6121 | 65612 (1)| 00:13:08 | | |
[code]....
II. Execution Plan That Failed With TEMP Space:
--------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 1967 (100)| | | |
|* 1 | FILTER | | | | | | | |
| 2 | SORT GROUP BY | | 1 | 8233 | 1967 (3)| 00:00:24 | | |
|* 3 | HASH JOIN | | 1 | 8233 | 1966 (3)| 00:00:24 | | |
[code]....
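A hedged way to see which plan a particular run really used, and whether its sort/hash work areas spilled to TEMP (a sketch; '&sql_id' is a placeholder for the INSERT's SQL_ID):

-- Runtime plan from the cursor cache after a run:
SELECT * FROM TABLE(dbms_xplan.display_cursor(sql_id => '&sql_id', format => 'ALLSTATS LAST'));

-- Work areas for the statement and how much TEMP they used last time:
SELECT sql_id, operation_type, last_memory_used, last_tempseg_size
  FROM v$sql_workarea
 WHERE sql_id = '&sql_id';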
View 8 Replies
View Related
May 2, 2012
We have performance issues with the SQL below. After going through the execution plan we found the reason, but we have not been able to change the execution plan the way we want.
If we could join the
HRMGR.HR_EXPANDED_BOOK table with MISBOMGR.ibm_client_mgr7_empid and MISBOMGR.ibm_client_mgr6_empid at an earlier stage (before HRMGR.HR_EMP_STATUS_LOOKUP), the issue would be solved, but somehow the optimizer is not considering that path. I have even added the push_subq hint, which should push the subqueries to execute at an earlier stage, but it is of no use. Why is the push_subq hint not working in this scenario, and what alternative is there to change the driving path?
Query :-
select /*+ push_subq */CEMP.EMP_ID,
CEMP.EMP_STATUS_CD,
EMP_STATUS_DESC,
MGR_6_EMP_ID,
MGR_7_EMP_ID
FROM
[code]........
Execution plan :-
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 16958 | 927K| 12008 (2)| 00:02:25 | | |
|* 1 | FILTER | | | | | | | |
| 2 | MERGE JOIN OUTER | | 173K| 9511K| 12008 (2)| 00:02:25 | | |
| 3 | REMOTE | HR_EXPANDED_BOOK | 173K| 7303K| 12005 (2)| 00:02:25 | INFODB | R->S |
|* 4 | SORT JOIN | | 11 | 143 | 3 (34)| 00:00:01 | | |
| 5 | REMOTE | HR_EMP_STATUS_LOOKUP | 11 | 143 | 2 (0)| 00:00:01 | INFODB | R->S |
|* 6 | TABLE ACCESS FULL| IBM_CLIENT_MGR7_EMPID | 1 | 8 | 2 (0)| 00:00:01 | | |
|* 7 | TABLE ACCESS FULL| IBM_CLIENT_MGR6_EMPID | 1 | 8 | 3 (0)| 00:00:01 | | |
------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (SELECT /*+ USE_HASH ("IBM_CLIENT_MGR7_EMPID") */ 0 FROM
"MISBOMGR"."IBM_CLIENT_MGR7_EMPID" "IBM_CLIENT_MGR7_EMPID" WHERE "MGR_7_EMP_ID"=:B1) OR EXISTS
(SELECT 0 FROM "MISBOMGR"."IBM_CLIENT_MGR6_EMPID" "IBM_CLIENT_MGR6_EMPID" WHERE "MGR_6_EMP_ID"=:B2))
4 - access("CEMP"."EMP_STATUS_CD"="EMPLU"."EMP_STATUS_CD"(+))
filter("CEMP"."EMP_STATUS_CD"="EMPLU"."EMP_STATUS_CD"(+))
6 - filter("MGR_7_EMP_ID"=:B1)
7 - filter("MGR_6_EMP_ID"=:B1)
Remote SQL Information (identified by operation id):
----------------------------------------------------
3 - SELECT "EMP_ID","EMP_STATUS_CD","MGR_6_EMP_ID","MGR_7_EMP_ID" FROM
"HRMGR"."HR_EXPANDED_BOOK" "SYS_ALIAS_2" WHERE "EMP_STATUS_CD"='P' (accessing 'INFODB' )
5 - SELECT "EMP_STATUS_CD","EMP_STATUS_DESC" FROM "HRMGR"."HR_EMP_STATUS_LOOKUP" "EMPLU"
(accessing 'INFODB' )
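Two hedged observations (a sketch, not a definitive answer). First, from 10g onwards PUSH_SUBQ is a query-block-level hint, so it generally belongs inside each subquery (or must reference the subquery block via @qb_name) rather than in the outer SELECT. Second, if memory serves, the documentation notes that PUSH_SUBQ has no effect when the subquery is applied to a remote table or to a table joined with a merge join - and this plan shows both a REMOTE HR_EXPANDED_BOOK and a MERGE JOIN OUTER, which may be the real blocker. A simplified illustration of the hint placement (aliases m7/m6 and the correlation columns are assumptions, and the db link is ignored here since the full query is elided above):

SELECT /* illustration only */ cemp.emp_id
  FROM hrmgr.hr_expanded_book cemp
 WHERE cemp.emp_status_cd = 'P'
   AND ( EXISTS (SELECT /*+ PUSH_SUBQ */ 0
                   FROM misbomgr.ibm_client_mgr7_empid m7
                  WHERE m7.mgr_7_emp_id = cemp.mgr_7_emp_id)
      OR EXISTS (SELECT /*+ PUSH_SUBQ */ 0
                   FROM misbomgr.ibm_client_mgr6_empid m6
                  WHERE m6.mgr_6_emp_id = cemp.mgr_6_emp_id) );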
View 3 Replies
View Related
Mar 25, 2012
I have some questions about the execution plan of a SQL statement.
Following is the example:
-- sequences s1 and s2 are used below but were not created in the original post; create them first so the script runs
create sequence s1;
create sequence s2;
create table t1 as select s1.nextval id,a.* from dba_objects a;
create table t2 as select s2.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
commit;
create index i1 on t1(id);
create index i2 on t2(id);
create index i11 on t1(object_type);
exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);
exec dbms_stats.gather_table_stats(user,'T2',cascade=>true);
select count(*) from t1 where object_type='VIEW';
COUNT(*)
----------
8934
set autotrace traceonly explain
Can we say that, in the following case:
(1) First the index on object_type is accessed to get rowids - t1.object_type='VIEW'
(2) Then the filter on owner is applied - t1.owner='SYS'
(3) Then the table T1 is accessed to fetch data for the rowids returned by the index I11, and the filter is applied - TABLE ACCESS BY INDEX ROWID
Though I am unable to understand how a filter can be applied to the rowids retrieved from the index, we can see from the plan below that the rows accessed have been reduced from 8550 to 1221 before we access the table... Thus the filter "t1.owner='SYS'" is applied somewhere in between. Right?
Another question is:
Case 1 - Do we retrieve a rowid from the index for a given value, then retrieve the required values from the table for that rowid?
Thus one row at a time in both, in a loop,
OR
Case 2 - Do we first fetch all rowids from the index and then retrieve values from the table one row at a time from the collection of rowids fetched?
Suppose Case 1 is what is happening; can we then say that the two steps with Ids 2 and 3 in the plan below are executed exactly the same number of times, and that the filter "t1.owner='SYS'" is applied at some later stage? Of course, in that case the values in the Rows column would be misleading.
select * from t1,t2 where t1.id = t2.id and t1.object_type='VIEW' and t1.owner='SYS';
Execution Plan
----------------------------------------------------------
Plan hash value: 26873579
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 1 | HASH JOIN | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1221 | 116K| 381 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | I11 | 8550 | | 24 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | T2 | 161K| 15M| 533 (1)| 00:00:07 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."ID"="T2"."ID")
2 - filter("T1"."OWNER"='SYS')
3 - access("T1"."OBJECT_TYPE"='VIEW')
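A hedged way to answer the Case 1 / Case 2 question empirically (a minimal sketch): run the statement once with rowsource statistics enabled, then look at the Starts and A-Rows columns, which show how many times each plan step really executed and how many rows it really produced:

SELECT /*+ gather_plan_statistics */ *
  FROM t1, t2
 WHERE t1.id = t2.id
   AND t1.object_type = 'VIEW'
   AND t1.owner = 'SYS';

-- Actual execution statistics for the last run of the cursor:
SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));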
View 7 Replies
View Related
Oct 31, 2012
So the situation is like this
- Database A (20 tables)
- Database B (20 tables)
- Both A and B are Oracle 11gR2
- Both of these databases run on different hardware (A is a VM, B is on a physical host)
- The 20 tables in A and B have exactly the same number of rows, and after preparing the data the schemas were analysed using the same DBMS_STATS parameters
Despite this, the execution plans appear to be quite different for the same queries between A and B.
I imagine there is something outside of the Oracle table rowcounts, table stats, column stats, index stats that's resulting in the different execution plans.
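A hedged place to look next (a sketch): with identical object statistics, the remaining inputs that commonly differ are the optimizer-related parameters and the system statistics (CPU speed, single- and multi-block read times), and the latter can legitimately differ between a VM and a physical host:

-- 1) Optimizer environment for the current session, to diff between A and B:
SELECT name, value
  FROM v$ses_optimizer_env
 WHERE sid = SYS_CONTEXT('USERENV', 'SID')
 ORDER BY name;

-- 2) System statistics used by the cost model:
SELECT pname, pval1
  FROM sys.aux_stats$
 WHERE sname = 'SYSSTATS_MAIN';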
View 3 Replies
View Related
Apr 30, 2012
Refer to the 2 queries below and their execution plans:
First Query
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */
vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
[code]...
Second Query
SELECT vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM ( select * from vendor_data vd1
where vd1.study_seq_no = 99903
AND vd1.control_column_seq_no = 435361232
[code]...
Both achieve the same output but are written in different ways. Can I get the same execution plan for the 1st query as for the 2nd by using hints?
View 10 Replies
View Related
Mar 5, 2013
One of our clients is using Rule Based Optimizer on Oracle 10.2.0.3.0
2-3 weeks back, during a performance issue with one of the SQL queries, one of our team members ran the tuning advisor for it and created a SQL profile, and the subsequent execution of the SQL did not take much time (less I/O). It then took hardly a minute to execute.
When this happened I checked that the SQL profile forced that particular query to use the CBO (say the plan_hash_value is PHV1 here). Yesterday the same query again took 15-20 minutes to execute. I checked that even for this execution the query used the same SQL profile, but this time with a different plan_hash_value - say PHV2.
Today again the query executed in less than a minute and used the plan_hash_value as PHV1.
select distinct plan_hash_value,timestamp from dba_hist_sql_plan where sql_id='mysqlid' order by 1,2;
PLAN_HASH_VALUE TIMESTAMP
--------------- --------------------
890360113 20-feb-2013 16:38:39
3736413466 04-mar-2013 08:12:52
1237282258 03-jan-2013 17:15:02
I confirmed from awrsqrpt as well that different plans were used for different plan_hash_values and every time same SQL profile was used
SQL> select name,CATEGORY,SIGNATURE,CREATED,LAST_MODIFIED,TYPE,STATUS,FORCE_MATCHING from dba_sql_profiles;
NAME CATEGORY SIGNATURE CREATED LAST_MODIFIED TYPE STATUS FOR
------------------------------ ------------------------------ ---------- -------------------- -------------------- --------- -------- ---
SYS_SQLPROF_015ffffcc3e1c5b000 DEFAULT 1.5512E+19 20-feb-2013 16:30:48 20-feb-2013 16:30:48 MANUAL ENABLED NO
I am unable to understand how the execution plan, and thus the plan_hash_value, is changing for the same SQL profile. I read that a SQL profile (unlike a stored outline) keeps up with increasing data volume but may not keep up with changing data distribution.
I checked that the values of 4 bind variables out of 81 differ between today's and yesterday's runs (queried v$sql_bind_capture based on last_captured).
My questions are
1) Do the different plan_hash_values, with different execution plans, for a query using the same SQL profile mean that the query was hard parsed multiple times and still used the same SQL profile?
2) If that is the case, why did I never see child_number = 1 in any of the views for the same sql_id? I checked repeatedly over the last 2 weeks and always found child_number=0 in v$sql (also loaded_versions=1).
3) Are the different bind variable values causing this flip-flop of the plans? How can I confirm this?
I have 2 plans with 2 different plan_hash_values, and I know which one is better. How can I force the SQL to use the better of the two plans in this case, where I am using the Rule Based Optimizer and have a SQL profile created? If that is not possible, how can I create a stored outline from the existing plan (without waiting for a subsequent execution to take place)?
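For the last question, one hedged option on 10.2 (a sketch; the hash_value and child_number must come from the cursor that produced the good plan, and that cursor must still be in the shared pool): DBMS_OUTLN.CREATE_OUTLINE can build a stored outline directly from an existing child cursor, without waiting for another execution.

-- Find the child cursor that used the preferred plan (PHV1):
SELECT hash_value, child_number, plan_hash_value
  FROM v$sql
 WHERE sql_id = 'mysqlid';

BEGIN
  dbms_outln.create_outline(hash_value   => &hash_value,    -- values from the query above
                            child_number => &child_number,
                            category     => 'MYCAT');
END;
/

-- Outline usage then has to be switched on; note this setting does not persist
-- across instance restarts:
ALTER SYSTEM SET use_stored_outlines = 'MYCAT';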
View 6 Replies
View Related
Apr 3, 2012
We have a query which makes Oracle behave very strangely. It is a straight-forward join between four tables of about 30.000 rows each, with some simple comparisons and some NOT LIKE:s.
When we run this query, it either takes about 1 second or more than 1.000 seconds to run and return the approximately 5.000 rows of the result. If we run the same query over and over again, it fluctuates back and forth between two different execution plans, apparently at random, 3 times out of 4 selecting the 1.000 second version and 1 time out of 4 the 1 second version.
There are no other connections to the database, the schema is not modified, the data is identical, the query is identical, and the response is identical, but the execution time alternates between 1 second and 1.000 seconds.On the same database instance we have another schema which is identical, but with slightly less data, which is used for development. The 1.000 second run times did not happen in that schema, but only in the test system's database.
Therefore we would REALLY like to understand what happens and why, so that we can avoid triggering this in the future. We could try locking the 1 second execution plan, but then we're afraid of doing the same thing wrong again in the future.
Here are the two execution plans that Oracle switches between, more or less at random:
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15663 pr=10536 pw=0 time=855673 us cost=82273 size=2707430769293 card=14028138701)
79272 79272 79272 TABLE ACCESS FULL GROUPS (cr=1008 pr=0 pw=0 time=22154 us cost=277 size=10693 card=289)
[code]...
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15664 pr=0 pw=0 time=778178696 us cost=30838477 size=741611997206725 card=3842549208325)
375411 375411 375411 TABLE ACCESS FULL GROUP_GROUPS_FLAT (cr=3782 pr=0 pw=0 time=51533 us cost=1029 size=25152738 card=375414)
[code]...
The query:
select g.ucid, a.ucid
from account a, groups g, group_members gm, group_groups_flat ggf
where a.ucid = gm.ucid_member
and gm.ucid_group = ggf.ucid_member
[code]...
And excerpts from the schema:
CREATE TABLE "PDB"."GROUPS"
(
"UCID" VARCHAR2(256 BYTE),
"UNIX_GID" NUMBER(*,0),
[...]
[code]...
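One hedged way to pin the fast plan rather than guessing at the cause (a sketch, assuming 11g or later since the version is not stated): load the 1-second plan from the cursor cache into a fixed SQL plan baseline. The huge cardinality estimates in the slow plan (card in the billions) also suggest the optimizer's estimates are badly off, which is worth investigating in parallel.

-- sql_id and plan_hash_value are placeholders for the cursor of the fast run.
DECLARE
  n PLS_INTEGER;
BEGIN
  n := dbms_spm.load_plans_from_cursor_cache(sql_id          => '&sql_id',
                                             plan_hash_value => &good_plan_hash_value,
                                             fixed           => 'YES');
  dbms_output.put_line(n || ' plan(s) loaded');
END;
/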
View 4 Replies
View Related
Jan 27, 2012
When we use AUTOTRACE / EXPLAIN PLAN we can see the (estimated) best execution plan the optimizer found for our SQL command. Is there a way to display all the alternative execution plans the optimizer has considered?
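A hedged pointer (a sketch): the closest thing available is the optimizer trace (event 10053), which dumps the join orders and costs the CBO evaluated while hard parsing the statement; it is verbose and only written when a hard parse actually happens.

ALTER SESSION SET tracefile_identifier = 'cbo_trace';
ALTER SESSION SET events '10053 trace name context forever, level 1';

-- Run the statement of interest here; it must be hard parsed while the event is
-- set (e.g. change a literal or add a comment to force a new parse).

ALTER SESSION SET events '10053 trace name context off';
-- The considered join orders and their costs are in the resulting trace file
-- under user_dump_dest (or the diag trace directory on 11g).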
View 1 Replies
View Related
Feb 8, 2011
Refer to the following SQL statements and code:
Session 1
create table tab1 as select * from dba_objects where object_id is not null;
alter session set events '10046 trace name context forever, level 12';
declare
x number;
begin
for i in 1..4
loop
[code]....
Session 2
after "starting" the above pl/sql block from Session 1, I keep on querying tab2 from Session 2 And as soon as 2 records are inserted in tab2, I create index from Session 2
select * from tab2;
select * from tab2;
select * from tab2;
N
----------
1
2
create index i on tab1(object_id);
As I have tested from a single session (just before this test), such an index is used for the SQL statement:
select count(1) into x from tab1 where object_id=2331;
However, when I checked the trace file I did not get the results I expected.
I was expecting 4 execution plans - 2 full table scans and 2 index access scans - and for this I issued the following command:
tkprof dst1_ora_7369.trc dst1_ora_7369.txt aggregate=no sys=no
But unfortunately I got the following output:
SELECT COUNT(1)
FROM
TAB1 WHERE OBJECT_ID=2331
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 4 0.00 0.00 0 2 0 0
[code]....
1) Why am I unable to see 4 execution plans - 2 with full table scans and 2 with index access - when I specified 'aggregate=no'?
2) Will the index i be used for the last 2 iterations, after the first 2 iterations of full table scans?
If the answer to question 2) is 'No':
By what method can I force an ongoing SQL statement in a loop to take a different execution path? Of course I can't hard parse the SQL in 'that' current session. Will flushing the shared pool work in the above case?
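A hedged check before digging further (a sketch): tkprof prints one row source plan per cursor, so whether a second plan ever existed can be confirmed from the cursor cache; the index DDL should mark the cursor for invalidation, but a new child cursor and plan only appear if the statement is actually re-parsed afterwards.

SELECT sql_id, child_number, plan_hash_value, executions,
       invalidations, loaded_versions
  FROM v$sql
 WHERE sql_text LIKE 'SELECT COUNT(1) FROM TAB1 WHERE OBJECT_ID=2331%';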
View 6 Replies
View Related
Jul 21, 2010
I have two Oracle instances that are set up identically. When I run a query on one of them it takes around 3 seconds; on the other it takes around 200 seconds.
I have looked at the explain plans, and they have shown me what I think is the problem. On one instance, it does a join on two tables first and then runs the other filter/access predicates. On the other instance it runs the filter/access predicates first, then does the expensive join. The one that does the join first is the one that takes around 200 seconds. How can I tell Oracle to perform this join after running the other predicates?
View 15 Replies
View Related
May 18, 2010
Can we have the same execution plan for a CREATE TABLE statement where the name of the table changes every time, as follows:
create table test
as
select * from t1
Here the table name changes from test to another table name the next time.
View 6 Replies
View Related
Nov 23, 2010
I have one query in production which is taking a lot of CPU time. When that statement is executing, CPU usage goes above 90%.
I am attaching the sql query and indexes on the table.
View 4 Replies
View Related
Mar 20, 2012
Which step in the following plan is the first step of execution?
I reckon it is "TABLE ACCESS BY INDEX ROWID| BANK_BATCH_STATE"
Is that correct?
In the "Predicate Information (identified by operation id):"
section the predicates - access and filter for the step "TABLE ACCESS FULL | PYMNT_DUES" are displayed first
Isn't there any relation between the order of execution steps and the order in which predicates are displayed?
Execution Plan
----------------------------------------------------------
Plan hash value: 538700484
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2364 | 15 (14)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 1 | 2364 | 15 (14)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 2364 | 14 (8)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 2313 | 13 (8)| 00:00:01 |
| 5 | NESTED LOOPS | | 1 | 2281 | 12 (9)| 00:00:01 |
| 6 | NESTED LOOPS OUTER | | 1 | 2255 | 11 (10)| 00:00:01 |
|* 7 | HASH JOIN | | 1 | 175 | 6 (17)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | INDX_2 | 12 | 612 | 2 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | PYMNT_DUES | 43 | 5332 | 3 (0)| 00:00:01 |
| 10 | VIEW PUSHED PREDICATE | | 1 | 2080 | 5 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 154 | 5 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 103 | 4 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID| BANK_BATCH_STATE | 1 | 32 | 2 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | INDX_BBS_1 | 3 | | 1 (0)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID| DAILY_CHECK | 1 | 71 | 2 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | INDX_SEARCH | 1 | | 1 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | INDX_2 | 1 | 51 | 1 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | INDX_IAM_SR_NO | 1 | 26 | 1 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | INDX_2 | 1 | 32 | 1 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | INDX_2 | 1 | 51 | 1 (0)| 00:00:01 |
-----------------------------------------------------------------
View 3 Replies
View Related
Sep 26, 2012
UPDATEXML is eating up a lot of time; is there any way to tune this?
SELECT UPDATEXML(:B3 , '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' || :B2 ||
'"]', :B1 )
FROM
DUAL
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 499 0.44 0.90 0 3 0 0
Fetch 499 1.49 2.87 0 0 0 499
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 999 1.93 3.77 0 3 0 499
The real code:
SELECT updatexml(l_xml,
'/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' ||
upper(replace(cspkes_misc.fn_getparam(p_parent_list,
l_parent_list_clob,
'Y',
l_cnt,
'>'),
'-',
'_')) || '"]/text()',
l_fn_str)
INTO l_xml
FROM dual;
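A hedged thought (a sketch; the surrounding loop is not shown): UPDATEXML accepts several XPath/value pairs in a single call, so if the loop above is calling it once per field on the same document, collapsing those 499 calls into far fewer ones may cut the repeated parse/serialize cost.

-- Illustration only - the TYPE values and bind values are placeholders:
SELECT updatexml(l_xml,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD1"]/text()', l_val1,
                 '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="FIELD2"]/text()', l_val2)
  INTO l_xml
  FROM dual;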
View 4 Replies
View Related
Dec 12, 2011
Up to Statspack we had:
elapsed time = time spent on waits + time the CPU was used
Total time during snaps = elapsed time + (maybe) time waited for CPU... In AWR, is it possible to draw up such an equation? I can see that the AWR report has the following elements:
1) End Snap time - Begin Snap time
2) DB time - as mentioned at the top of AWR report
3) DB CPU - in "Top 5 Timed Foreground Events" (I assume this is 'CPU used by sesson timing' in statspack)
4) Total of time for all Statistics in "Time Model Statistics"
5) BUSY_TIME + IDLE_TIME - "Operating System Statistics"
Is it the time between 2 snapshots, or something else? Also, by which number of seconds should 'DB Time(s)' per second and 'DB CPU(s)' per second in the Load Profile be multiplied to get the DB time and CPU time?
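A hedged sketch of how the pieces can be tied together from the AWR tables themselves (AWR licensing assumed; &begin_snap / &end_snap are placeholders): the wall-clock time is simply the gap between the two snapshots, and DB time / DB CPU for the interval come from the time model, so 'DB Time(s)' per second in the Load Profile multiplied by that wall-clock gap in seconds should roughly return the total DB time.

-- Wall-clock seconds between the two snapshots:
SELECT (CAST(e.end_interval_time AS DATE) - CAST(b.end_interval_time AS DATE)) * 86400
         AS wall_clock_sec
  FROM dba_hist_snapshot b, dba_hist_snapshot e
 WHERE b.snap_id = &begin_snap AND e.snap_id = &end_snap
   AND b.dbid = e.dbid AND b.instance_number = e.instance_number;

-- DB time and DB CPU accumulated in that interval (time model values are in microseconds):
SELECT e.stat_name,
       ROUND((e.value - b.value) / 1e6, 1) AS seconds_in_interval
  FROM dba_hist_sys_time_model b, dba_hist_sys_time_model e
 WHERE b.snap_id = &begin_snap AND e.snap_id = &end_snap
   AND b.stat_id = e.stat_id
   AND e.stat_name IN ('DB time', 'DB CPU')
   AND b.dbid = e.dbid AND b.instance_number = e.instance_number;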
View 2 Replies
View Related
Feb 13, 2013
For example, we have a table ACCOUNT (a snowflake dimension containing other dimension keys) and many fact tables based on this dimension. Normally a data warehouse load works so that the dimensions need to be loaded first and then the facts. Our load frequency is 30 minutes.
To increase the rate at which data becomes available in the facts (as it is a financial application), I am considering having two batches, one for the dimension and another for the fact (I came to this conclusion because there is no hard dependency that the dimensions must be loaded before the fact); only an update might get missed sometimes. But if I do that, while the dimension is being loaded it will be read by the fact load in another session. Will this affect performance?
Loading (insert/update) and selecting data from a table at the same time - will it affect performance in any way?
View 1 Replies
View Related
Jul 9, 2012
I understand that when data is read from disk, I/O is done, and when computations are done, CPU is used. So where does the following equation fit in?
DB Time = sum of database CPU time + waits
Is I/O considered part of CPU time?
Does this equation change with SAN or OS caching?
View 3 Replies
View Related
Feb 2, 2013
Elapsed Time (s) | CPU Time (s) | Executions | Elap per Exec (s) | % Total DB Time | SQL Id | SQL Module | SQL Text
3,263 | 32 | 1 | 3263.49 | 2.79 | 3ta0ms19fvgds | httpd.exe | SELECT CASE WHEN COUNT >= 1...
6,360 | 164 | 2 | 3180.17 | 5.44 | 51jx99dm0swv7 | cpm_srvscript@ahcaxasmil1b (TNS V1-V3) | SELECT /*+ CCL<PFT_GET_RP...
In this AWR report I see two scripts that are out of the ordinary, and I want to make sure that I interpret them correctly.
1) "Elap per Exec (s)" shows 3263.49 with 1 "Executions".
2) "Elap per Exec (s)" shows 3180.17 each execution with 2 "Executions".
Does this mean that the first script ran for ~54 minutes (3263.49 seconds / 60) in its single "Execution", and the second for ~53 minutes (3180.17 seconds / 60) per execution? I need to understand what "Elapsed Time (s)", "CPU Time (s)", "Executions", "Elap per Exec (s)" and "% Total DB Time" represent.
View 9 Replies
View Related
Apr 27, 2012
I have a table which contains 821,177 rows in total. Now I am trying to delete around 484,000 rows from this table using just one filter, i.e. my query is something like the one below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 484,000 rows of data, but my current issue is that it is taking a lot of time; to be precise, it is taking almost 25 hours to delete this data. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
|
--------------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | 499 | 20459 | 39 (0)| 00:00:
01 |
| 1 | DELETE | RESOURCE | | | |
[code]....
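One hedged option to test, alongside checking for ON DELETE triggers and unindexed foreign keys that reference RESOURCE (both are common reasons a delete of this size crawls), and remembering that a parallel hint is ignored for the DML part unless parallel DML has been enabled for the session: a batched delete that commits in chunks, sketched below (the batch size is arbitrary).

BEGIN
  LOOP
    DELETE FROM resource
     WHERE created_by = 'MIGN'
       AND ROWNUM <= 10000;      -- delete in 10,000-row batches
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/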
View 26 Replies
View Related
Apr 15, 2011
The Test1 table has around 385,772,300 rows. The DELETE and SELECT statements below are taking a lot of time.
The SELECT statement takes more than 1 hour:
SELECT TO_NUMBER(MAX(f.T3))
--INTO v_FISCAL_MONTH_ID
FROM Test1 f;
The DELETE statement takes more than 2 hours:
DELETE FROM TEST1 WHERE TRUNC(T10) < TRUNC(ADD_MONTHS(SYSDATE,-36));
CREATE TABLE Test1
(
[Code].....
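Two hedged ideas (a sketch; the column data types are not shown in the post, so these are assumptions): an index on T3 normally lets MAX(T3) be answered with an index MIN/MAX scan instead of a full table scan, and for a rolling 36-month purge, range partitioning the table on T10 would let old data be removed by dropping partitions instead of deleting row by row.

-- 1) Index so that SELECT MAX(T3) can use an index full scan (MIN/MAX):
CREATE INDEX test1_t3_idx ON test1 (t3);

-- 2) Illustration only: with monthly range partitions on T10, the 36-month purge
--    becomes a metadata operation (the partitioned copy would have to be built
--    first, e.g. with DBMS_REDEFINITION):
-- ALTER TABLE test1 DROP PARTITION p_2008_03;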
View 4 Replies
View Related
Sep 11, 2013
Yesterday there was a performance issue on the server. So today, when I generated the report for that particular period, I found that the snapshot IDs are sequential but one hourly snapshot time was skipped: instead of a snapshot at 15:30, it was generated at 16:30.
Enter value for num_days: 2
Listing the last 2 days of Completed Snapshots
Snap
Instance DB Name Snap Id Snap Started Level
------------ ------------ --------- ------------------ -----
tagidev TAGIDEV 2857 10 Sep 2013 00:30 1
2858 10 Sep 2013 01:30 1
2859 10 Sep 2013 02:30 1
2860 10 Sep 2013 03:30 1
2861 10 Sep 2013 04:30 1
2862 10 Sep 2013 05:31 1
[code]....
Below are the details at alert log -
Tue Sep 10 14:28:20 2013
Thread 1 cannot allocate new log, sequence 7029
Checkpoint not complete
Current log# 2 seq# 7028 mem# 0: E:\APP\ORACLE\ORADATA\TAGIDEV\REDO02.LOG
Thread 1 advanced to log sequence 7029 (LGWR switch)
[code]....
1) Why didn't the snapshot start at 15:30?
2) The database had just started around the scheduled AWR snapshot time, but the snapshot was generated at 16:32 instead of 16:30, even though the last service, "SMCO", only started at 16:42. How was the snap ID generated for this particular time?
3) What does "kewastUnPackStats(): bad magic 1 (0x000000001B3CE48D, 0)" mean?
View 7 Replies
View Related
Aug 17, 2010
Is there any way to reduce index creation time? I have one table with 7,700,000 records; every day this table gets truncated, we recreate it with a CREATE TABLE AS SELECT statement, and then create the 4 indexes. Each index takes 5 minutes, so in total index creation takes 20 minutes.
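A few hedged levers (a sketch; index and column names are placeholders): building each index in parallel and with NOLOGGING usually cuts the build time considerably, and the attributes can be reset afterwards.

-- Example for one of the four indexes; degree 4 is an arbitrary choice.
CREATE INDEX my_table_idx1 ON my_table (col1)
  PARALLEL 4 NOLOGGING;

-- Reset the attributes once the build is done:
ALTER INDEX my_table_idx1 NOPARALLEL LOGGING;

-- Optionally give the creating session a larger sort area (values are examples):
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;   -- 100 MB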
View 2 Replies
View Related