Performance Tuning :: Run Explain Plan Syntax - Show Error?
Aug 3, 2010
When I run the EXPLAIN PLAN display syntax, it shows an error:
running --- SELECT * FROM TABLE(dbms_xplan.display);
ERROR: an uncaught error in function display has happened;
please contact Oracle support
Please provide also a DMP file of the used plan table
PLAN_TABLE
ORA-00904: "OTHER_TAG": invalid identifier
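A likely cause (an assumption, since the full setup is not shown) is a stale or hand-built PLAN_TABLE that lacks columns dbms_xplan expects, such as OTHER_TAG. A minimal sketch of the usual fix:

-- Assumes 10g or later, where the public synonym PLAN_TABLE points at SYS.PLAN_TABLE$
DROP TABLE plan_table;                  -- remove the outdated private copy
-- or rebuild it from the shipped script instead:
-- @?/rdbms/admin/utlxplan.sql
EXPLAIN PLAN FOR SELECT * FROM dual;
SELECT * FROM TABLE(dbms_xplan.display);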
View 4 Replies
May 24, 2012
What privilege is required for a user to execute EXPLAIN PLAN? I get the error below when I try to run it.
SQL> explain plan for SELECT /*+ FULL(t) */ COUNT(*) FROM "DREAM"."CONSUMER.TAB" t WHERE ROWNUM <= 1000000;
explain plan for SELECT /*+ FULL(t) */ COUNT(*) FROM "DREAM"."CONSUMER.TAB" t WHERE ROWNUM <= 1000000
*
ERROR at line 1:
ORA-01031: insufficient privileges
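For reference, EXPLAIN PLAN needs the same privileges required to run the statement being explained, plus INSERT on the plan table. A hedged sketch (the grantee name is illustrative):

GRANT SELECT ON "DREAM"."CONSUMER.TAB" TO app_user;   -- privilege on the base table being explained
GRANT INSERT ON plan_table TO app_user;               -- only needed if a private plan table is used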
View 9 Replies
Aug 17, 2012
I am facing a weird situation wherein the explain plan for the same SQL differs between SIT and PROD. In fact the explain plan is very costly in PROD, even though the DB version of SIT and PROD is the same.
Below is the sql and corresponding explain plan in Prod and SIT respectively.
Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl)Nxt_comp_item_type,lvl,bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'
[code]...
The tables referred to in the above query are small, containing around 10k records each. They are partitioned on Region and not indexed.
Explain Plan in Prod: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240
[code]...
Explain Plan in SIT: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120
[code]...
I am not able to explain why there is such a huge change in cost between SIT and PROD. The job now runs for 3-5 hours in PROD, whereas it used to complete within 20 minutes in SIT.
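A first check when the same SQL costs out so differently in two environments (a sketch; the owner and table names are placeholders for the real partitioned tables) is whether the optimizer statistics behind the two plans actually match:

SELECT table_name, partition_name, num_rows, blocks, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'APP_OWNER'
AND    table_name IN ('BOM_TAB1', 'BOM_TAB2')
ORDER  BY table_name, partition_name;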
View 5 Replies
Oct 24, 2011
I was able to identify a poorly performing query that seemed to be maxing out our CPU, and I have been trying to understand its explain plan. The plan below is from our test system, which has considerably less data in the tables than our PROD system.
I can see there are a bunch of table scans at the end which may indicate missing indexes, but I am unclear on whether this is actually a problem as the %CPU seems to be worse for the JOIN near the top of the plan.
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1870M| 3018G| | 677M (1)|999:59:59 | | |
| 1 | SORT ORDER BY | | 1870M| 3018G| 3567G| 677M (1)|999:59:59 | | |
[code]...
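To confirm which statement is actually burning the CPU (rather than judging from the %CPU column of a single plan), a sketch against the cursor cache:

SELECT *
FROM  (SELECT sql_id, plan_hash_value, executions,
              ROUND(cpu_time / 1e6)     cpu_sec,
              ROUND(elapsed_time / 1e6) elapsed_sec,
              buffer_gets
       FROM   v$sql
       ORDER  BY cpu_time DESC)
WHERE rownum <= 10;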
View 4 Replies
Dec 14, 2010
I need to warn readers that I am not a DBA but am heavily involved in application development. Whatever I know about database tuning is whatever I've managed to pick up via self-learning, and I must admit that the sum total of my knowledge isn't a lot.
Anyway, our "DBAs" recently did an upgrade to our 10g database, going from version 10.2.0.2.0 to 10.2.0.4.0. Immediately after the upgrade, a particular query has started to under-perform. The query itself was not altered in any way during the upgrade.
We have two explain plans for the query, a before and an after plan. The two plans are similar but not identical. The plans are too massive to post here, so I hope the following synopsis of the differences will do.
The 10.2.0.2.0 plan:
shows a HASH GROUP BY
has a TempSpc column in the explain plan
shows a particular table (EMP_HISTORY) as having ~1700 rows
The 10.2.0.4.0 plan:
shows SORT GROUP BY instead of HASH GROUP BY
does not show a TempSpc column in the explain plan
shows the EMP_HISTORY table as having only 25 rows
Other than these points, no other discernible differences can be noted. I'm wondering what would cause HASH to change to SORT. I'm told that stats are up-to-date.
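Two quick checks worth making (a sketch, not the confirmed cause): whether the EMP_HISTORY statistics really reflect ~1700 rows, and whether the plan reverts when the pre-upgrade optimizer behaviour is requested:

SELECT num_rows, sample_size, last_analyzed
FROM   dba_tab_statistics
WHERE  table_name = 'EMP_HISTORY';

-- ask the 10.2.0.4 optimizer to behave as it did in 10.2.0.2, for this session only
ALTER SESSION SET optimizer_features_enable = '10.2.0.2';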
View 5 Replies
Jul 21, 2010
I have two Oracle instances that are set up identically. When I run a query on one of them, it takes around 3 seconds; on the other it takes around 200 seconds.
I have looked at the explain plans, and they have shown me what I think is the problem. On one instance, Oracle joins two tables first and then applies the other filter/access predicates. On the other instance it applies the filter/access predicates first, then does the expensive join. The one that does the join first is the one that takes around 200 seconds. How can I tell Oracle to perform this join after applying the other predicates?
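One hedged way to influence this is with join-order hints; a sketch on generic tables (emp and dept are stand-ins for the real tables), where LEADING fixes which row source is driven first so its filter is applied before the join:

SELECT /*+ LEADING(e d) USE_NL(d) */ e.ename, d.dname
FROM   emp e
JOIN   dept d ON d.deptno = e.deptno
WHERE  e.sal > 1000;     -- the selective predicate on e is applied while scanning e, before joining to d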
View 15 Replies
Dec 12, 2011
Since the optimizer (during EXPLAIN PLAN) assumes all bind variables to be of VARCHAR type, when checking the plan for a SQL statement that uses numeric and date bind variables, should we convert (typecast) them as follows?
variable n_sal number
variable dt_joining varchar2(11)    -- SQL*Plus VARIABLE has no DATE type, so a string is bound and converted below
exec :n_sal := 1000
exec :dt_joining := '12-dec-2005'
select first_name from emp_data where sal = to_number(:n_sal) and joining = to_date(:dt_joining, 'dd-mon-yyyy');
View 12 Replies
May 12, 2011
I have created a non-unique index lk_fein on lookup_fein(code, map_id, trash). When I check the explain plan it does a full table scan on lookup_fein. If I force it to use the index with a hint, it does, and the cost also decreases.
SQL> SELECT WORK_FEIN,
2 NON_FEIN ,
3 FI_FEIN ,
4 MFEIN ,
5 TOTAL_FEIN ,
[code]...
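The hint form that forces the new index, as a sketch (the predicate value is illustrative; whether the optimizer's full-scan choice is really wrong depends on how selective the predicate is):

SELECT /*+ INDEX(lf lk_fein) */ work_fein, non_fein, fi_fein, mfein, total_fein
FROM   lookup_fein lf
WHERE  code = '12345';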
View 1 Replies
Aug 16, 2012
I have an APP that truncates tables and loads data, which in turn makes the stats stale. I ran the query advisor (see attachment) and of course it recommends gathering stats or accepting a profile. I really don't want to do that as it may cause a load on my DB.
Instead, I would like to consider having my APP team change the query to pass a hint that reproduces the good attached plan. What is the syntax to pass that hint? Or is this a bad way to proceed?
select /*+ INDEX_FFS(q PK_PLACEMENT_REQUEST_QUEUE) */   -- the '+' right after '/*' is required, otherwise the hint is only a comment
sum(lastshares) as "ROSEN"
from nyeo.fix_exec_reports fer, nyeo.placement_request_queue q,
nyeo.nyeo_block_control bc
where fer.clordid = q.sequence_number
and q.blockid = bc.blockid
and upper(bc.deskname) like '%ROSEN%'
View 2 Replies
Jul 20, 2013
Why is the query behaving differently (producing a different execution plan) in different databases?
The production schema was replicated to a new schema on the same database instance. I ran the queries in both environments: in PROD the index is used, but in NEWDEV it is not. In this case the existing primary key index is not being used.
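A sketch of the first things to compare in the new schema (owner and table names are placeholders): whether the primary-key index exists, is usable, and has statistics at all.

SELECT index_name, status, num_rows, last_analyzed
FROM   dba_indexes
WHERE  table_owner = 'NEWDEV'
AND    table_name  = 'MY_TABLE';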
View 6 Replies
Apr 12, 2013
How can I check the average time taken by an execution plan? I have a very big query that changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time the query takes when it runs under each of the different execution plans.
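A sketch of how to get the average elapsed time per plan from the cursor cache (the sql_id is a placeholder); dba_hist_sqlstat can be grouped the same way for history that has aged out of the shared pool:

SELECT plan_hash_value,
       SUM(executions) executions,
       ROUND(SUM(elapsed_time) / NULLIF(SUM(executions), 0) / 1e6, 3) avg_elapsed_sec
FROM   v$sql
WHERE  sql_id = 'abcd1234efgh5'
GROUP  BY plan_hash_value;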
View 7 Replies
Mar 25, 2012
I have some questions about the execution plan of a SQL statement.
Following is the example:
create sequence s1;
create sequence s2;
create table t1 as select s1.nextval id,a.* from dba_objects a;
create table t2 as select s2.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
commit;
create index i1 on t1(id);
create index i2 on t2(id);
create index i11 on t1(object_type);
exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);
exec dbms_stats.gather_table_stats(user,'T2',cascade=>true);
select count(*) from t1 where object_type='VIEW';
COUNT(*)
----------
8934
set autotrace traceonly explain
Can we say in the following case, that,
(1) First index on object_type is accessed to get rowids - t1.object_type='VIEW'
(2) Then the filter on owner is applied - t1.owner='SYS'
(3) Then the table T1 is accessed to fetch data for the rowids returned by the index I11 and the filter application - TABLE ACCESS BY INDEX ROWID
Though I am unable to understand how a filter can be applied to the rowids retrieved from the index, we can see from the plan below that the rows accessed are reduced from 8550 to 1221 before we access the table... Thus the filter "t1.owner='SYS'" is applied in between. Right?
another question is
Case 1 - do we retrieve a rowid from index for a given value, then retrieve required values from table for that rowid
Thus row at a time in both ... in loop
OR
Case 2 - we first fetch all rowids from index and then retrieve values from table one row at a time from the collection of rowids fetched?
Supposing Case 1 is what is happening, can we say that both the steps with Ids 2 and 3 in the plan below are executed exactly the same number of times, and the filter "t1.owner='SYS'" is applied at some later stage? Of course, in that case the values in the Rows column are misleading.
select * from t1,t2 where t1.id = t2.id and t1.object_type='VIEW' and t1.owner='SYS';
Execution Plan
----------------------------------------------------------
Plan hash value: 26873579
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 1 | HASH JOIN | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1221 | 116K| 381 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | I11 | 8550 | | 24 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | T2 | 161K| 15M| 533 (1)| 00:00:07 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."ID"="T2"."ID")
2 - filter("T1"."OWNER"='SYS')
3 - access("T1"."OBJECT_TYPE"='VIEW')
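One way to answer the Case 1 / Case 2 question empirically (a sketch): run the statement once with rowsource statistics and read the Starts and A-Rows columns, which show how many times each step really executed and how many rows survived each filter.

SELECT /*+ gather_plan_statistics */ *
FROM   t1, t2
WHERE  t1.id = t2.id AND t1.object_type = 'VIEW' AND t1.owner = 'SYS';

SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));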
View 7 Replies
Oct 31, 2012
So the situation is like this
- Database A (20 tables)
- Database B (20 tables)
- Both A and B are Oracle 11gR2
- Both of these databases run on different hardware (A is a VM, B is on a physical host)
- The 20 tables in A and B have exactly same number of rows and after preparing the data, the schemas were analysed using the same DBMS_STATS parameters
Despite this, the execution plans appear to be quite different for the same queries between A and B
I imagine there is something outside of the Oracle table rowcounts, table stats, column stats, index stats that's resulting in the different execution plans.
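Beyond object statistics, the optimizer also reads instance parameters and system (CPU/IO) statistics, and a VM and a physical host can easily differ there. A sketch of what to compare on A and B:

SELECT name, value
FROM   v$parameter
WHERE  name LIKE 'optimizer%'
OR     name IN ('db_file_multiblock_read_count', 'cursor_sharing');

SELECT sname, pname, pval1 FROM sys.aux_stats$;   -- system statistics used for costing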
View 3 Replies
Apr 30, 2012
Refer to the 2 queries below and their execution plans:
First Query
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */
vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
[code]...
Second Query
SELECT vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM ( select * from vendor_data vd1
where vd1.study_seq_no = 99903
AND vd1.control_column_seq_no = 435361232
[code]...
Both achieve the same output but are written in different ways. Can I get the same execution plan for the 1st query as for the 2nd by using hints?
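One hedged approach: pull the full hint set (the outline) of the plan you want from the cursor cache and embed those hints in the first query. The sql_id below is a placeholder; the Outline Data section of the output lists the hints that reproduce that plan.

SELECT * FROM TABLE(dbms_xplan.display_cursor('fh2xy8a9qstz1', NULL, 'TYPICAL +OUTLINE'));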
View 10 Replies
Jul 1, 2013
We have recently upgraded the application (from Oracle Applications 11.5.9 to 12.1.3) and the database (from 9.2.0.5.0 to 11.2.0.3.0). Since we are confronting performance issues, I have tried to analyse some queries whose explain plans seem strange (in my opinion). Studying one of them, I discovered the following case, which I can't explain:
-- Just bind variables --
select *
from MTL_MATERIAL_TRANSACTIONS mmt
where 1 = 1
and mmt.INVENTORY_ITEM_ID = :p1
and mmt.ORGANIZATION_ID = :p2
and mmt.TRANSACTION_REFERENCE = :p3
--and mmt.SUBINVENTORY_CODE = :p4
Plan
SELECT STATEMENT ALL_ROWS Cost: 5 Bytes: 361 Cardinality: 1
  2 TABLE ACCESS BY INDEX ROWID TABLE INV.MTL_MATERIAL_TRANSACTIONS Cost: 5 Bytes: 361 Cardinality: 1
    1 INDEX RANGE SCAN INDEX XXSPE.XXSPE_MTL_MATERIAL_TRANSAC_N99 Cost: 3 Cardinality: 2
-- NVL on bind variable --
select *
from MTL_MATERIAL_TRANSACTIONS mmt
where 1 = 1
and mmt.INVENTORY_ITEM_ID = :p1
and
[code]...
View 3 Replies
Mar 5, 2013
One of our clients is using Rule Based Optimizer on Oracle 10.2.0.3.0
2-3 weeks back, during a performance issue with one of the SQL queries, one of our team members ran the tuning advisor for it and created a SQL profile, and the subsequent executions of the SQL did not take much time (less I/O). It then took hardly a minute to execute.
When this happened I checked that the SQL profile forced that particular query to use CBO (say plan_hash_value is PHV1 here). Yesterday the same query again took 15-20 minutes for execution. I checked that even for this execution the query used the same SQL profile but "this time" with different plan_hash_value - say PHV2.
Today again the query executed in less than a minute and used the plan_hash_value as PHV1.
select distinct plan_hash_value,timestamp from dba_hist_sql_plan where sql_id='mysqlid' order by 1,2;
PLAN_HASH_VALUE TIMESTAMP
--------------- --------------------
890360113 20-feb-2013 16:38:39
3736413466 04-mar-2013 08:12:52
1237282258 03-jan-2013 17:15:02
I confirmed from awrsqrpt as well that different plans were used for different plan_hash_values and every time same SQL profile was used
SQL> select name,CATEGORY,SIGNATURE,CREATED,LAST_MODIFIED,TYPE,STATUS,FORCE_MATCHING from dba_sql_profiles;
NAME CATEGORY SIGNATURE CREATED LAST_MODIFIED TYPE STATUS FOR
------------------------------ ------------------------------ ---------- -------------------- -------------------- --------- -------- ---
SYS_SQLPROF_015ffffcc3e1c5b000 DEFAULT 1.5512E+19 20-feb-2013 16:30:48 20-feb-2013 16:30:48 MANUAL ENABLED NO
I am unable to understand how execution plan and thus plan_hash_value is changing for the same SQL Profile. I read that SQL Profile (unlike stored outline) keeps up with increasing data volume and may not keep up with changing data distribution.
I checked that values for 4 bind variables out of 81 are different for execution between today and yesterdays' run(queried v$sql_bind_capture based on last_captured)
My questions are
1) does the different plan_hash_values with different execution plans for query using same SQL profile mean the query was hard parsed multiple times and still used the same SQL profile?
2) If that is the case why I never saw child_number = 1 in any of the views for the same sql_id. I tried it repeatedly over last 2 weeks and always found child_number=0 in v$sql (also loaded_versions=1)
3) Does the different values of bind variable are causing this flip-flop of the plans? How can I conclude this?
I have 2 plans with 2 different plan_hash_values, and I know which one is better. How can I force the SQL to use the better of the two plans in this case, where I am using the Rule Based Optimizer and have a SQL profile created? If this is not possible, how can I create a stored outline from the existing plan (without waiting for a subsequent execution to take place)?
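On 10g, one hedged option for pinning the known-good plan without waiting for another execution is to capture a stored outline straight from the cursor that produced it. A sketch; the hash_value and child_number are placeholders taken from V$SQL (note this is V$SQL.HASH_VALUE, not the plan_hash_value):

BEGIN
  DBMS_OUTLN.create_outline(hash_value   => 1234567890,   -- V$SQL.HASH_VALUE of the good cursor
                            child_number => 0,
                            category     => 'DEFAULT');
END;
/
ALTER SYSTEM SET use_stored_outlines = TRUE;   -- outlines are only used while this is enabled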
View 6 Replies
Oct 29, 2013
The types of query I refer to in the title are of this pseudo-code ilk:
select t.column_value
from table1 o, xmltable('for $co in $data
where $co/path1=$bind1
and $co/path2=$bind2
passing o.field as "data", :b1 as "bind1", :b2 as "bind2") t
where o.field = :b3
They're querying a table with a (binary) XMLType column that has a path/domain index over it. As those who have had the (mis)fortune to run into these will know, the queries are extensively rewritten under the covers to access the XML via the paths supplied.
How do I get a baseline to work with queries like this? I was suspicious because, whilst I can hint it to pick a certain access path first (leading()), the plan hashes remain the same.
I'm not sure, however, if I'm simply "doing it wrong" or it is just not possible with the level of recursive rewriting going on.NB: I consider myself reasonably competent in applying baselines to "traditional" queries...
View 7 Replies
Jul 3, 2009
How do I find the explain plan of a query in Oracle using TOAD (any version) and SQL Developer (any version)?
If there is a function key to do it, let me know.
View -1 Replies
Feb 8, 2011
Refer to the following SQL statements and code:
Session 1
create table tab1 as select * from dba_objects where object_id is not null;
alter session set events '10046 trace name context forever, level 12';
declare
x number;
begin
for i in 1..4
loop
[code]....
Session 2
After "starting" the above PL/SQL block from Session 1, I keep querying tab2 from Session 2, and as soon as 2 records are inserted into tab2, I create the index from Session 2.
select * from tab2;
select * from tab2;
select * from tab2;
N
----------
1
2
create index i on tab1(object_id);
As I have tested from a single session (just before this test) such index is used for the sql statement
select count(1) into x from tab1 where object_id=2331;
However, when I checked the trace file I did not get the results I expected.
I am expecting 4 execution plans - 2 FTS and 2 Index Access scans and for this I am issuing following command
tkprof dst1_ora_7369.trc dst1_ora_7369.txt aggregate=no sys=no
But unfortunately I am getting following output
SELECT COUNT(1)
FROM
TAB1 WHERE OBJECT_ID=2331
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 4 0.00 0.00 0 2 0 0
[code]....
1) Why am I unable to see 4 execution plans - 2 with FTS and 2 with index access - when I specified 'aggregate=no'?
2) Will the index i be used for the last 2 iterations, after the first 2 iterations of FTS?
If answer to above question 2) is 'No'
By which method can I force an ongoing SQL statement in a loop to take a different execution path? Of course I can't hard parse the SQL in 'that' current session. Will flushing the shared pool work in the above case?
View 6 Replies
May 18, 2010
Can we have the same execution plan for a CREATE TABLE statement where the name of the table changes every time, as follows:
create table test
as
select * from t1
Here table name changes from test to another table name next time
View 6 Replies
Jun 4, 2010
The prod stats have been implemented in development. The stats were gathered 2 months back on dev, while in production the stats were gathered 2 weeks back.
My question: shouldn't the high volume of data cause plan changes in both environments? My thinking is that the plans can differ, since the high volume of data changing in prod may lead to a different plan.
View 6 Replies
Sep 7, 2011
1. In the previous version of Toad (9.7.2), while executing a statement I was able to click on 'Explain plan current statement'. In this version of Toad (10.6.0.42) it is not possible to do so. In the Toad options I have ticked "use a separate connection when Toad itself is generating transactions".
2. While executing a statement I still see the 'clock' cursor. Is it possible to stop it appearing?
View 1 Replies
Jun 4, 2010
The attached query gives a consistent execution plan but different timings across runs.
SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/
DISTINCT odr.dve_id
FROM company_requirements crt, orders odr, lelo_products la_pct
WHERE crt.qtn_cun_id = 10035637--10000021--10035667
AND crt.ID = odr.crt_id_quote_implemented
AND NVL (odr.cancellation_date, '31-Dec-9999') = '31-Dec-9999'
[code]....
We have 4 databases, 2 on each server: db1 and db2 on server1, and db3 and db4 on server2.
Refer to the count of records for the filter column of the biggest table in the query, taken on all 4 databases (the column is nullable):
select count(*) from company_requirements crt WHERE crt.qtn_cun_id = 10035637
db1 = 73335
db2 = 89073
db3 = 81182
db4 = 82936
First I executed the query on db1 and db2 while there wasn't any user logged on to the system
db1
**********
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 17.47 473.39 85704 1508102 0 0
[code]...
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 85704 0.31 460.55
latch free 1 0.00 0.00
SQL*Net message from client 1 14.98 14.98
[code]...
Why did the elapsed time change when the data and the plan haven't changed at all? Also, why does the plan have different stats for rounds 1 and 2 on db1 and db2?
I ran it twice per round on each database, so hard parsing should not be an issue. Also, why is the number of rows accessed different between db1/db2 and db3/db4, especially for step 1, when the count for crt.qtn_cun_id is similar?
In fact, when the query was taking long I was the only user on the system. Also, I used hard-coded values (no bind variables at all).
I checked num_rows and distinct keys as well, which are quite similar across all 4 databases. Also, no stats were gathered during the query execution.
What should I have checked or monitored?
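Given that almost all of the elapsed time is in db file sequential read waits, a sketch of what to compare across runs and databases is physical versus logical I/O for the statement; a warm buffer cache (or better clustering of the table rows against the hinted index) makes the very same plan far cheaper:

SELECT sql_id, child_number, plan_hash_value, executions,
       disk_reads, buffer_gets, ROUND(elapsed_time / 1e6, 1) elapsed_sec
FROM   v$sql
WHERE  sql_id = '&sql_id';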
View 10 Replies
Jun 21, 2011
I use Oracle 11g R2.
I have created a table for objects:
create table tra
(Record_no number,
t_no number,
position SDO_GEOMETRY
, occupation_time number);
and inserted many values (examples below )
INSERT INTO tra VALUES
(2,
20503,
SDO_GEOMETRY
(2001,
NULL,
SDO_POINT_TYPE (1387, 0, NULL),
NULL,
NULL),
23037
)
/
and (position) is indexed with an R-tree spatial index.
Now when I run a spatial query such as:
SELECT * FROM tra t WHERE
SDO_FILTER(t.position, MDSYS.SDO_GEOMETRY(2001,NULL,NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,3),
MDSYS.SDO_ORDINATE_ARRAY(0,0,9000,0)), 'querytype=WINDOW') = 'TRUE' and t.position.sdo_point.X=1;
I do not know how many I/Os occurred.
I tried SET AUTOTRACE ON,
but the physical reads are 0. This seems impossible because I have more than 100,000 objects there, all indexed with the R-tree.
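Autotrace counts only disk reads as "physical reads"; once the table and R-tree index blocks are already in the buffer cache, the work shows up under consistent gets instead. A sketch for a test system only:

ALTER SYSTEM FLUSH BUFFER_CACHE;   -- never on production; empties the cache so disk I/O becomes visible again
SET AUTOTRACE ON STATISTICS
SELECT count(*) FROM tra;          -- re-run the spatial query the same way and compare the statistics section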
View 2 Replies
Mar 20, 2012
Which step in the following plan is the first step of execution?
I reckon it is "TABLE ACCESS BY INDEX ROWID| BANK_BATCH_STATE"
Is that correct?
In the "Predicate Information (identified by operation id):"
section the predicates - access and filter for the step "TABLE ACCESS FULL | PYMNT_DUES" are displayed first
Isn't there any relation between the order of execution steps and the order in which predicates are displayed?
Execution Plan
----------------------------------------------------------
Plan hash value: 538700484
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2364 | 15 (14)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 1 | 2364 | 15 (14)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 2364 | 14 (8)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 2313 | 13 (8)| 00:00:01 |
| 5 | NESTED LOOPS | | 1 | 2281 | 12 (9)| 00:00:01 |
| 6 | NESTED LOOPS OUTER | | 1 | 2255 | 11 (10)| 00:00:01 |
|* 7 | HASH JOIN | | 1 | 175 | 6 (17)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | INDX_2 | 12 | 612 | 2 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | PYMNT_DUES | 43 | 5332 | 3 (0)| 00:00:01 |
| 10 | VIEW PUSHED PREDICATE | | 1 | 2080 | 5 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 154 | 5 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 103 | 4 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID| BANK_BATCH_STATE | 1 | 32 | 2 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | INDX_BBS_1 | 3 | | 1 (0)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID| DAILY_CHECK | 1 | 71 | 2 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | INDX_SEARCH | 1 | | 1 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | INDX_2 | 1 | 51 | 1 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | INDX_IAM_SR_NO | 1 | 26 | 1 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | INDX_2 | 1 | 32 | 1 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | INDX_2 | 1 | 51 | 1 (0)| 00:00:01 |
-----------------------------------------------------------------
View 3 Replies
Jul 16, 2013
A SQL query is taking much longer than usual and not completing even when left running for hours! The query joins a table with a quite complex view.
The same query in a test database completes in less than 2 mins.
I would like to export the SQL plan from the test database to the prod database.
How can I export/import the execution plan of a particular SQL statement in version 10.2.0.4?
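In 10.2 there is no direct plan export, but if the good plan on test is captured as a SQL profile (or a stored outline), it can be staged into a table, moved with export/import or Data Pump, and unpacked on prod. A sketch of the SQL profile route; the profile, schema, and staging table names are placeholders:

BEGIN
  DBMS_SQLTUNE.create_stgtab_sqlprof(table_name => 'PROF_STG', schema_name => 'APPS');
  DBMS_SQLTUNE.pack_stgtab_sqlprof(profile_name       => 'my_good_profile',
                                   staging_table_name => 'PROF_STG');
END;
/
-- after importing PROF_STG into prod:
-- EXEC DBMS_SQLTUNE.unpack_stgtab_sqlprof(replace => TRUE, staging_table_name => 'PROF_STG');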
View 2 Replies
Jan 27, 2011
DBMS_STATS.CREATE_STATS_TABLE
DBMS_STATS.EXPORT_SCHEMA_STATS
DBMS_STATS.GATHER_TABLE_STATS
I have used the above to get a copy of schema stats and gather new stats for specific tables into a STATS TABLE in my personal schema. What I want to do now is use this stats table to generate plans for queries where I believe stats are off. Is it even possible? To be clear, I do not want to import stats because this replaces the stats currently there. I just want to point the CBO to my stats table for generating plans.
I hoped there was a session parameter I could set to tell Oracle to use my stats table when generating plans, or an EXPLAIN PLAN clause I could use, or a DBMS_XPLAN parameter I could provide that would tell these tools to use my stats table when generating a plan, or even some way to tell autotrace. But I have found none of these.
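There is no supported way to point the CBO directly at a private stats table, but on 11g pending statistics behave much like what is described (a sketch; the owner and table names are placeholders): the new stats are gathered without being published, and only sessions that opt in see them when plans are generated.

EXEC DBMS_STATS.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'FALSE');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMP');        -- goes to pending stats, not the dictionary
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
EXPLAIN PLAN FOR SELECT * FROM scott.emp WHERE empno = 7839;
SELECT * FROM TABLE(dbms_xplan.display);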
View 2 Replies
Aug 31, 2011
I am trying to process a trace file with TKPROF. It throws an error:
MEMORY FAULT
The size of the trace file is 8 MB. I tried some other trace files bigger (10 MB) than the above file, and they work fine. I checked whether any particular permission on the trace file is required for TKPROF to execute, but it has the same permissions as the other trace files.
The problem is that it is not showing any error number or any other Error Description other than "MEMORY FAULT".
View 1 Replies
Jul 5, 2012
SQL>
create table oldemp8
(
fname varchar2(30),
salary number(6),
job_id varchar2(20)
)
[code].......
Table created.
SQL> select * from oldemp8;
select * from oldemp8
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "(": expecting one of: "comma, defaultif,
nullif, )"
KUP-01007: at line 7 column 16
ORA-06512: at "SYS.ORACLE_LOADER", line 19
SQL>
What is the syntax error in the above command? I placed the notepad (data) file properly. I have created external tables many times before but have never hit this type of error.
OS: Windows XP/2000. Oracle 10g (10.2.0.1.0).
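KUP-01005 complaining about "(" in the access parameters usually means database datatypes such as VARCHAR2(30) were written in the access-parameters field list, where only SQL*Loader-style types (e.g. CHAR) are allowed. A hedged sketch of a working definition; the directory and file names are placeholders:

CREATE TABLE oldemp8_ext (
  fname   VARCHAR2(30),
  salary  NUMBER(6),
  job_id  VARCHAR2(20)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
    (fname CHAR(30), salary CHAR(10), job_id CHAR(20))   -- loader types here, not VARCHAR2(30)
  )
  LOCATION ('emp.txt')
)
REJECT LIMIT UNLIMITED;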
View 7 Replies
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas the database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle supplied) for these types of tuning scenarios?
View 1 Replies