Execution Plan Randomly Changes After Statistics Collection
Oct 10, 2012
I don't know if this is the intended behavior of Oracle or not, but I noticed that my queries' execution plans randomly change after statistics collection. Several tables are truncated after the daily run at 8 AM, and statistics are then gathered for all the tables in that schema.
However, the execution plans for 2-3 SQL statements always change after this, and performance is only brought back to normal by calling the procedure explicitly from the command line with literal arguments instead of bind variables.
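One thing worth checking is whether the 8 AM statistics gather is what invalidates the cached cursors and triggers the re-optimization. A minimal sketch of a gather call that leaves existing cursors (and their plans) alone - the schema name is a placeholder:

begin
  dbms_stats.gather_schema_stats(
    ownname       => 'MYSCHEMA',   -- hypothetical schema name
    cascade       => true,         -- include index statistics
    no_invalidate => true);        -- keep existing cursors and their plans valid
end;
/

Whether keeping the old plans is actually desirable depends on how closely the reloaded data still resembles what the old statistics described.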
View 3 Replies
Jun 15, 2012
What is the most accurate/efficient way of obtaining the execution plan for a piece of running SQL in Oracle 9i? In 10g and 11g, dbms_xplan.display_cursor(sql_id) can obviously be used.
How can this be achieved in 9i? Currently I am simply obtaining the SQL_TEXT and then running an explain plan ("EXPLAIN PLAN FOR ...") - I believe this is not necessarily the same plan that is actually being used by the executing SQL, though.
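In 9i the plan a cached cursor is actually using can be read from V$SQL_PLAN instead of re-explaining the text. A rough sketch, assuming the ADDRESS/HASH_VALUE have already been found in V$SQL (or V$SESSION for a running session); the available columns vary a little between 9i releases:

select lpad(' ', depth) || operation || ' ' || options as plan_step,
       object_name, cost, cardinality
from   v$sql_plan
where  address      = :address        -- from v$sql.address / v$session.sql_address
and    hash_value   = :hash_value     -- from v$sql.hash_value / v$session.sql_hash_value
and    child_number = :child_number
order  by id;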
View 7 Replies
Oct 8, 2010
[font="Lucida Console"][/font]
Issue: For this SQL statement the client keeps changing the date literal. The SQL runs fine in development but takes a long time in production. So I created a SQL profile and pushed it to production; for EFFDT <= '25-APR-2010' it ran fine, as the plan was the same as in development. But then the client changed the literal to EFFDT <= '28-AUG-2010', the plan ignored my SQL profile because of the hardcoded value, and the SQL was parsed again.
How can we fix this plan? Their application is built so that it will always pass hardcoded values like this, so they cannot use bind variables. They fire the SQL from one session - can we set something at session level, like cursor sharing, or use some hints to get the development plan? (See the sketch after the statement below.) *for proper formatting see the attached file* Statement:
SELECT
DECODE(SUBSTR(JL.PAYGROUP,1,1),'P','P','T','P','C'), JL.DEPTID_CF, JL.OPERATING_UNIT, JL.FUND_CODE,
JL.CLASS_FLD, JL.PROGRAM_CODE, PC.EMPLID, PC.EMPL_RCD, BUGL.BUSINESS_UNIT, JL.ACCOUNT,
SUM(DECODE(JL.GL_NBR,'REGER',JL.AMOUNT,0)), 'Regular Earnings', SUM(DECODE(JL.GL_NBR,'OTERN',JL.AMOUNT,0)), 'Overtime', SUM(DECODE(JL.GL_NBR,'NRTAL',JL.AMOUNT,0)),
[code]....
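Two options that are commonly tried for this literal-value problem, sketched with placeholder task/profile names (not taken from the attached file):

-- Option 1: session-level cursor sharing, so the changing literal is replaced by a system bind
alter session set cursor_sharing = force;

-- Option 2: recreate the SQL profile with force matching, so it applies regardless of the literal
declare
  l_name varchar2(30);
begin
  l_name := dbms_sqltune.accept_sql_profile(
              task_name   => 'MY_TUNING_TASK',     -- hypothetical tuning task name
              name        => 'PROFILE_EFFDT_SQL',  -- hypothetical profile name
              force_match => true);                -- normalize literals when matching the SQL
end;
/

force_match => true is aimed exactly at statements that differ only in literal values, which is the failure mode described above.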
View 3 Replies
Mar 6, 2012
I am executing the query below, but the optimizer is generating 2 different plans for it. I don't want to use SQL profiles to fix the execution plan (a possible alternative is sketched after the query).
Query
SELECT R.VENDOR_RECORD_SEQ_NO ,
R.VENDOR_SUBJECT_SEQ_NO ,
NVL(D.RESOLVED_VALUE, D.ORIGINAL_VALUE) VAL,
D.CONTROL_COLUMN_SEQ_NO
[code]....
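If SQL profiles are ruled out, a SQL plan baseline (SPM, 11g onwards) is another way to pin the preferred plan; a sketch with placeholder values:

declare
  l_loaded pls_integer;
begin
  l_loaded := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => 'abcd1234efgh5',  -- hypothetical sql_id of this statement
                plan_hash_value => 123456789,        -- the plan you want to keep
                fixed           => 'YES');           -- mark the baseline plan as fixed
end;
/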
View 3 Replies
Jul 20, 2013
Why is the query behaving differently (a different execution plan) in a different database?
Whatever the production database has was replicated to a new schema on the same database instance. I ran both queries in both environments: in prod the index is used, but in newdev it is not - in this case the existing primary key index was not used.
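A first check is usually whether the two schemas really have comparable optimizer statistics for the table and its primary key index; a comparison sketch with placeholder names:

select owner, table_name, num_rows, blocks, last_analyzed
from   all_tables
where  table_name = 'MY_TABLE'                    -- hypothetical table name
and    owner in ('PROD_SCHEMA', 'NEWDEV_SCHEMA'); -- hypothetical schema names

select owner, index_name, num_rows, distinct_keys, clustering_factor, last_analyzed
from   all_indexes
where  table_name = 'MY_TABLE'
and    owner in ('PROD_SCHEMA', 'NEWDEV_SCHEMA');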
View 6 Replies
Feb 1, 2011
I am using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
I have 2 schemas in my application.
1. Application schema
2. EOD(End of day) schema.
The End of day schema is populated from the Application schema whenever a user runs the EOD process. The tables are pulled like this:
1. Master tables: always deleted and reinserted at each EOD process.
2. Log tables for each transaction table: the delta between the last EOD and the current EOD is pulled and used to populate the transaction tables.
3. Transaction tables: these are populated from the log tables pulled in the previous step. The logic is like this.
Based on these tables about 30 reports are generated in the EOD schema. Note that each transaction table has an EOD_ID, and every report uses the condition <transaction table>.EOD_ID = <current EOD_ID>.
My log table is contract_log and my transaction table is contract in the EOD schema.
The contract_log table has data like this:
contract_number  contract_date  customer_id  qty   rate  effective_date
1                1/Jan/2010     CUST-01      10    10    1/Jan/2010
1                NULL           NULL         NULL  11    2/Jan/2010
The EOD on 1st Jan 2010 constructs the contract table as:
contract_number  contract_date  customer_id  qty  rate  eod_id
1                1/Jan/2010     CUST-01      10   10    EOD-1
(The rate change to 11 is not visible on 1st Jan 2010 because it is effective only from 2nd Jan 2010.)
The EOD on 2nd Jan 2010 constructs the contract table as:
contract_number  contract_date  customer_id  qty  rate  eod_id
1                1/Jan/2010     CUST-01      10   11    EOD-2
(The rate change to 11 is visible on 2nd Jan 2010.)
This logic works fine, but once we run more than 20-30 EODs the processing time increases to 10-15 hours.
It took some time to figure out the issue, because a single query run from Toad or PL/SQL Developer finishes in a few seconds, yet as part of the whole package it takes 2-3 hours (each query).
The problem we found was that the Oracle execution plans go bad once the process starts (presumably because the freshly repopulated tables no longer match their statistics). So what we did was analyze the tables right after they are pulled. This solved the problem: the whole process now takes only about 12-13 minutes, of which about 3 minutes are spent analyzing tables and indexes. I know this is a temporary solution, as I need to get away from analyzing tables and indexes online.
My code for regenerating table and index statistics is below:
PROCEDURE sp_gather_table_index_stats(pc_table_name VARCHAR2) IS
CURSOR cur_ind IS
SELECT index_name
FROM user_indexes
WHERE table_name = pc_table_name;
BEGIN
EXECUTE IMMEDIATE ' begin DBMS_STATS.gather_table_stats(user,' || '''' ||
pc_table_name || '''' || '); end;';
FOR cur_ind_rows IN cur_ind LOOP
EXECUTE IMMEDIATE ' begin DBMS_STATS.gather_index_stats(user,' || '''' ||
cur_ind_rows.index_name || '''' || '); end;';
END LOOP;
END;
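As a side note, the dynamic SQL and the per-index loop are not strictly necessary; the same effect can be sketched with one direct call (same signature as the procedure above):

PROCEDURE sp_gather_table_index_stats(pc_table_name VARCHAR2) IS
BEGIN
  -- gather table, column and index statistics in a single call
  DBMS_STATS.gather_table_stats(ownname => USER,
                                tabname => pc_table_name,
                                cascade => TRUE);
END;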
View 1 Replies
Apr 12, 2013
How can I check the average time taken by each execution plan? I have a very big query and it changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time the query takes when it runs under each of the different plans.
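If the statement is captured in AWR (and AWR is licensed), the average elapsed time per plan can be pulled from DBA_HIST_SQLSTAT; a sketch, with the sql_id as a placeholder:

select plan_hash_value,
       sum(executions_delta) executions,
       round(sum(elapsed_time_delta) / 1e6
             / nullif(sum(executions_delta), 0), 3) avg_elapsed_sec
from   dba_hist_sqlstat
where  sql_id = 'abcd1234efgh5'   -- hypothetical sql_id of the big query
group  by plan_hash_value
order  by avg_elapsed_sec;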
View 7 Replies
Mar 25, 2012
I have some questions on the execution plan of a SQL statement.
Following is the example:
create sequence s1;   -- sequences s1 and s2 are assumed by this test case
create sequence s2;
create table t1 as select s1.nextval id,a.* from dba_objects a;
create table t2 as select s2.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
commit;
create index i1 on t1(id);
create index i2 on t2(id);
create index i11 on t1(object_type);
exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);
exec dbms_stats.gather_table_stats(user,'T2',cascade=>true);
select count(*) from t1 where object_type='VIEW';
COUNT(*)
----------
8934
set autotrace traceonly explain
Can we say that, in the following case:
(1) First the index on object_type is accessed to get rowids - t1.object_type='VIEW'
(2) Then the filter on owner is applied - t1.owner='SYS'
(3) Then the table T1 is accessed to fetch data for the rowids returned by index I11 and the filter application - TABLE ACCESS BY INDEX ROWID
Though I am unable to understand how a filter can be applied to the rowids retrieved from the index, we can see from the plan below that the rows have already been reduced from 8550 to 1221 before we access the table... thus the filter "t1.owner='SYS'" is applied somewhere in between. Right?
Another question:
Case 1 - do we retrieve a rowid from the index for a given value and then retrieve the required values from the table for that rowid,
thus one row at a time for both, in a loop?
OR
Case 2 - do we first fetch all rowids from the index and then retrieve the values from the table one row at a time from the collection of rowids fetched?
Suppose Case 1 is what happens: can we then say that both the steps with Ids 2 and 3 in the plan below are executed exactly the same number of times, and that the filter "t1.owner='SYS'" is applied at some later stage? Of course, in that case the values in the Rows column would be misleading. (See the sketch after the plan below.)
select * from t1,t2 where t1.id = t2.id and t1.object_type='VIEW' and t1.owner='SYS';
Execution Plan
----------------------------------------------------------
Plan hash value: 26873579
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 1 | HASH JOIN | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1221 | 116K| 381 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | I11 | 8550 | | 24 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | T2 | 161K| 15M| 533 (1)| 00:00:07 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."ID"="T2"."ID")
2 - filter("T1"."OWNER"='SYS')
3 - access("T1"."OBJECT_TYPE"='VIEW')
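One way to settle the estimate-versus-actual question is to execute the statement once with rowsource statistics enabled and compare Starts, E-Rows and A-Rows per step (10g onwards); a sketch:

select /*+ gather_plan_statistics */ *
from   t1, t2
where  t1.id = t2.id
and    t1.object_type = 'VIEW'
and    t1.owner = 'SYS';

select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

The Starts and A-Rows columns for Ids 2 and 3 show how often each step really ran and where the OWNER filter actually discards rows.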
View 7 Replies
Oct 31, 2012
So the situation is like this
- Database A (20 tables)
- Database B (20 tables)
- Both A and B are Oracle 11gR2
- Both of these databases run on different hardware (A is a VM, B is on a physical host)
- The 20 tables in A and B have exactly the same number of rows, and after preparing the data the schemas were analysed using the same DBMS_STATS parameters
Despite this, the execution plans for the same queries appear to be quite different between A and B.
I imagine there is something outside of the table rowcounts, table stats, column stats and index stats that is causing the different execution plans.
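Beyond object statistics, two inputs that commonly differ between a VM and a physical host are the optimizer-related initialization parameters and the system (CPU/I-O) statistics used for costing; a comparison sketch to run on both A and B:

-- optimizer-related parameters in effect
select name, value
from   v$parameter
where  name like 'optimizer%'
or     name in ('db_file_multiblock_read_count', 'cursor_sharing');

-- system statistics (CPU speed, single- and multi-block read times)
select pname, pval1
from   sys.aux_stats$
where  sname = 'SYSSTATS_MAIN';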
View 3 Replies
Apr 30, 2012
Refer to the 2 queries below and their execution plans:
First Query
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */
vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
[code]...
Second Query
SELECT vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM ( select * from vendor_data vd1
where vd1.study_seq_no = 99903
AND vd1.control_column_seq_no = 435361232
[code]...
Both are meant to produce the same output but are written in different ways. Can I get the same execution plan for the 1st query as for the 2nd by using hints?
View 10 Replies
Mar 5, 2013
One of our clients is using the Rule Based Optimizer on Oracle 10.2.0.3.0.
2-3 weeks back, during a performance issue with one of the SQL queries, one of our team members ran the tuning advisor for it and created a SQL profile, and subsequent executions of the SQL did not take much time (less I/O). It then took hardly a minute to execute.
When this happened I checked that the SQL profile forced that particular query to use the CBO (say the plan_hash_value is PHV1 here). Yesterday the same query again took 15-20 minutes to execute. I checked that even for this execution the query used the same SQL profile, but this time with a different plan_hash_value - say PHV2.
Today the query again executed in less than a minute and used plan_hash_value PHV1.
select distinct plan_hash_value,timestamp from dba_hist_sql_plan where sql_id='mysqlid' order by 1,2;
PLAN_HASH_VALUE TIMESTAMP
--------------- --------------------
890360113 20-feb-2013 16:38:39
3736413466 04-mar-2013 08:12:52
1237282258 03-jan-2013 17:15:02
I confirmed from awrsqrpt as well that different plans were used for the different plan_hash_values, and that the same SQL profile was used every time.
SQL> select name,CATEGORY,SIGNATURE,CREATED,LAST_MODIFIED,TYPE,STATUS,FORCE_MATCHING from dba_sql_profiles;
NAME CATEGORY SIGNATURE CREATED LAST_MODIFIED TYPE STATUS FOR
------------------------------ ------------------------------ ---------- -------------------- -------------------- --------- -------- ---
SYS_SQLPROF_015ffffcc3e1c5b000 DEFAULT 1.5512E+19 20-feb-2013 16:30:48 20-feb-2013 16:30:48 MANUAL ENABLED NO
I am unable to understand how the execution plan, and thus the plan_hash_value, can change under the same SQL profile. I have read that a SQL profile (unlike a stored outline) keeps up with increasing data volume but may not keep up with changing data distribution.
I checked that the values of 4 bind variables out of 81 differ between today's and yesterday's runs (queried v$sql_bind_capture based on last_captured).
My questions are:
1) Do the different plan_hash_values, with different execution plans, for a query using the same SQL profile mean that the query was hard parsed multiple times and still used the same SQL profile?
2) If that is the case, why did I never see child_number = 1 in any of the views for the same sql_id? I tried repeatedly over the last 2 weeks and always found child_number = 0 in v$sql (and loaded_versions = 1).
3) Are the different bind variable values causing this flip-flop of the plans? How can I confirm this?
I have 2 plans with 2 different plan_hash_values, and I know which one is better. How can I force the SQL to use the better of the two plans in this case, where I am using the Rule Based Optimizer and a SQL profile has been created? If this is not possible, how can I create a stored outline from the existing plan (without waiting for a subsequent execution to take place)?
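On the last point: a stored outline can be created directly from a cursor that is still in the shared pool, without waiting for another execution. A sketch, where the hash_value and child_number are placeholders to be taken from V$SQL (note this is the cursor's HASH_VALUE, not the plan_hash_value):

begin
  dbms_outln.create_outline(
    hash_value   => 1234567890,   -- hypothetical v$sql.hash_value of the good child cursor
    child_number => 0,
    category     => 'DEFAULT');
end;
/

The cursor whose plan you want to keep must still be cached when this is run.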
View 6 Replies
Feb 8, 2011
Refer to the following SQL statements and code.
Session 1
create table tab1 as select * from dba_objects where object_id is not null;
alter session set events '10046 trace name context forever, level 12';
declare
x number;
begin
for i in 1..4
loop
[code]....
Session 2
after "starting" the above pl/sql block from Session 1, I keep on querying tab2 from Session 2 And as soon as 2 records are inserted in tab2, I create index from Session 2
select * from tab2;
select * from tab2;
select * from tab2;
N
----------
1
2
create index i on tab1(object_id);
As I tested from a single session (just before this test), such an index is used for the SQL statement
select count(1) into x from tab1 where object_id=2331;
However, when I checked the trace file I did not get the results I expected.
I am expecting 4 execution plans - 2 full table scans and 2 index access scans - and for this I am issuing the following command:
tkprof dst1_ora_7369.trc dst1_ora_7369.txt aggregate=no sys=no
But unfortunately I am getting the following output:
SELECT COUNT(1)
FROM
TAB1 WHERE OBJECT_ID=2331
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 4 0.00 0.00 0 2 0 0
[code]....
1) Why am I unable to see 4 execution plans - 2 with a full table scan and 2 with index access - when I specified 'aggregate=no'?
2) Will the index i be used for the last 2 iterations, after the first 2 iterations have used a full table scan?
If the answer to question 2) is 'No':
By which method can I force an ongoing SQL statement in a loop to take a different execution path? Of course, I can't hard parse the SQL in 'that' current session. Will flushing the shared pool work in the above case?
View 6 Replies
Jul 21, 2010
I have two Oracle instances that are set up identically. When I run a query on one of them it takes around 3 seconds; on the other it takes around 200 seconds.
I have looked at the explain plans, and they show me what I think is the problem. On one instance it does a join on two tables and then applies the other filter/access predicates. On the other instance it applies the filter/access predicates first and then does the expensive join. The one that does the join first is the one that takes around 200 seconds. How can I tell Oracle to perform this join after applying the other predicates?
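As a short-term workaround while the statistics difference between the two instances is investigated, the join order can be pinned with hints; a generic sketch with hypothetical table aliases and columns:

select /*+ leading(t1 t2) use_nl(t2) */   -- drive from the filtered table, join afterwards
       t1.col1, t2.col2                   -- hypothetical column list
from   t1, t2
where  t1.some_filter = :value            -- selective predicate applied before the join
and    t1.id = t2.id;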
View 15 Replies
May 18, 2010
Can we have the same execution plan for a CREATE TABLE statement when the name of the table changes every time, as follows:
create table test
as
select * from t1
Here the table name changes from test to some other table name the next time.
View 6 Replies
Jun 4, 2010
The attached query gives a consistent execution plan but different timings across runs.
SELECT /*+ INDEX (CRT CRT_CUN_FK_I)*/
DISTINCT odr.dve_id
FROM company_requirements crt, orders odr, lelo_products la_pct
WHERE crt.qtn_cun_id = 10035637--10000021--10035667
AND crt.ID = odr.crt_id_quote_implemented
AND NVL (odr.cancellation_date, '31-Dec-9999') = '31-Dec-9999'
[code]....
We have 4 databases, 2 on each server: db1 and db2 on server1, and db3 and db4 on server2.
Refer to the record counts for the column of the biggest table in the query, taken on all 4 databases (the column is nullable):
select count(*) from company_requirements crt WHERE crt.qtn_cun_id = 10035637
db1 = 73335
db2 = 89073
db3 = 81182
db4 = 82936
First I executed the query on db1 and db2 while no other user was logged on to the system.
db1
**********
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.06 0.08 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 17.47 473.39 85704 1508102 0 0
[code]...
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
db file sequential read 85704 0.31 460.55
latch free 1 0.00 0.00
SQL*Net message from client 1 14.98 14.98
[code]...
Why did the elapsed time change when neither the data nor the plan changed at all? Also, why does the plan have different stats for rounds 1 and 2 on db1 and db2?
I ran it twice per round on each database, so hard parsing should not be an issue. Also, why is the number of rows accessed different between db1/db2 and db3/db4, especially for step 1, when the count for crt.qtn_cun_id is similar?
In fact, when the query was taking long I was the only user on the system. Also, I used hardcoded values (no bind variables at all).
I checked num_rows and distinct_keys as well, which are quite similar across all 4 databases, and no stats were gathered during the query execution.
What should I have checked or monitored?
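Almost all of the elapsed time in the trace above is spent under the db file sequential read wait, so the variation is more likely in single-block I/O latency and caching than in the plan itself. Two quick checks that could be run on each database (the sql_id is a placeholder):

-- how much of this statement's work was physical I/O versus cached reads
select sql_id, child_number, executions, buffer_gets, disk_reads,
       round(elapsed_time / 1e6, 1) elapsed_sec
from   v$sql
where  sql_id = 'abcd1234efgh5';   -- hypothetical sql_id of the query

-- average single-block read latency on the instance since startup
select event, total_waits,
       round(time_waited_micro / 1e6, 1) seconds_waited,
       round(time_waited_micro / nullif(total_waits, 0) / 1000, 2) avg_ms
from   v$system_event
where  event = 'db file sequential read';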
View 10 Replies
Mar 20, 2012
Which step in the following plan is the first step of execution?
I reckon it is "TABLE ACCESS BY INDEX ROWID | BANK_BATCH_STATE".
Is that correct?
In the "Predicate Information (identified by operation id):"
section, the access and filter predicates for the step "TABLE ACCESS FULL | PYMNT_DUES" are displayed first.
Isn't there any relation between the order of the execution steps and the order in which the predicates are displayed?
Execution Plan
----------------------------------------------------------
Plan hash value: 538700484
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2364 | 15 (14)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | HASH GROUP BY | | 1 | 2364 | 15 (14)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 2364 | 14 (8)| 00:00:01 |
| 4 | NESTED LOOPS | | 1 | 2313 | 13 (8)| 00:00:01 |
| 5 | NESTED LOOPS | | 1 | 2281 | 12 (9)| 00:00:01 |
| 6 | NESTED LOOPS OUTER | | 1 | 2255 | 11 (10)| 00:00:01 |
|* 7 | HASH JOIN | | 1 | 175 | 6 (17)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | INDX_2 | 12 | 612 | 2 (0)| 00:00:01 |
|* 9 | TABLE ACCESS FULL | PYMNT_DUES | 43 | 5332 | 3 (0)| 00:00:01 |
| 10 | VIEW PUSHED PREDICATE | | 1 | 2080 | 5 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 154 | 5 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 103 | 4 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID| BANK_BATCH_STATE | 1 | 32 | 2 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | INDX_BBS_1 | 3 | | 1 (0)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID| DAILY_CHECK | 1 | 71 | 2 (0)| 00:00:01 |
|* 16 | INDEX RANGE SCAN | INDX_SEARCH | 1 | | 1 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | INDX_2 | 1 | 51 | 1 (0)| 00:00:01 |
|* 18 | INDEX RANGE SCAN | INDX_IAM_SR_NO | 1 | 26 | 1 (0)| 00:00:01 |
|* 19 | INDEX RANGE SCAN | INDX_2 | 1 | 32 | 1 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | INDX_2 | 1 | 51 | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------
View 3 Replies
Dec 8, 2011
What would drive a different execution plan from one user to another for the same query against the same objects and owners - in fact, everything is the same?
I have found that only one particular user gets a different execution plan, while all the remaining users get the same plan.
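One common cause is a session-level optimizer setting that differs for that one user (for example set by a logon trigger). The optimizer environments of two sessions can be compared with a sketch like this (the SIDs are placeholders):

select sid, name, value
from   v$ses_optimizer_env
where  sid in (123, 456)        -- hypothetical SIDs: the odd user and a normal user
and    isdefault = 'NO'
order  by name, sid;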
View 14 Replies
Jun 19, 2013
I need a function to pick a record from the DB in a random manner. For example, say we have 500 records and the input value to the function is 5; it should then display a record chosen randomly from between 1 and 5.
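A minimal sketch using DBMS_RANDOM, assuming a hypothetical table my_table with a numeric id column and a col_value column, and that "between 1 and 5" refers to the id range:

create or replace function pick_random_rec(p_max in number)
  return varchar2
is
  v_val varchar2(100);
begin
  -- among the records with id between 1 and p_max, pick one at random
  select col_value
    into v_val
    from (select col_value
            from my_table                 -- hypothetical table and column names
           where id between 1 and p_max
           order by dbms_random.value)
   where rownum = 1;
  return v_val;
end;
/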
View 12 Replies
Jun 19, 2013
I need a function to pick a record from the DB in a random manner.
For example, say we have 500 records and the input value to the function is 5; it should then display a record chosen randomly from between 1 and 5.
If the value input by the user is 10, it should display one record from between 1 and 10.
View 1 Replies
Sep 26, 2013
We have an accounts table that basically represents hotel chains & brands; an example is shown below.
account_id  chain_id  brand_id  service
1           NULL      NULL      11
1           NULL      NULL      12
2           NULL      NULL      11
Here I want to update the chain_id & brand_id, which are currently NULL, in order to make every row eligible for further processing.
There is another table (say chain_brand) which maintains the relationship between chain_id and brand_id; one chain_id can have multiple brand_ids, e.g.:
chain_id  brand_id
101       2011
101       2012
102       2020
Now I need a script that randomly picks values from the chain_brand table and updates the accounts table. The condition is that the picked values must be the same for every row of a given account_id, e.g.:
account_id  chain_id  brand_id  service
1           101       2011      11
1           101       2011      12
2           102       2020      11
So each account can be attached to only one chain_id and one brand_id.
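A possible sketch of such a script, assuming the tables are named accounts and chain_brand as above: pick one random chain/brand pair per account, then apply that same pair to every row of the account with a MERGE.

merge into accounts a
using (
  select account_id, chain_id, brand_id
  from  (select acc.account_id, cb.chain_id, cb.brand_id,
                row_number() over (partition by acc.account_id
                                   order by dbms_random.value) rn
         from  (select distinct account_id
                from   accounts
                where  chain_id is null) acc
               cross join chain_brand cb)
  where rn = 1
) pick
on (a.account_id = pick.account_id)
when matched then update
  set a.chain_id = pick.chain_id,
      a.brand_id = pick.brand_id;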
View 2 Replies
Mar 31, 2009
My question is on the TRUNCATE statement.
The TRUNCATE syntax goes like this:
TRUNCATE { TABLE [ schema. ]table
             [ { PRESERVE | PURGE } MATERIALIZED VIEW LOG ]
         | CLUSTER [ schema. ]cluster
         }
[ { DROP | REUSE } STORAGE ]
So it is really not mandatory to add DROP STORAGE to the truncate statement, knowing full well that the high-water mark is reset anyway. Lately some of my truncate statements that are part of a PL/SQL package have been failing randomly with an ORA-03291 error. I iteratively walk through a list to truncate some tables, and 1 or 2 out of 10 randomly fail but work fine when I rerun. It has been a nightmare for a couple of weeks now. This code had been working fine for several years, in 10g and previously in 9i. Do you believe some change to settings on the Oracle database is to blame?
ORA-03291: Invalid truncate option - missing STORAGE keyword
ORA-06512: at "xxxx.xxxxxxxxx", line 35
ORA-06512: at line 2
View 7 Replies
Sep 20, 2012
There is a nested table within a nested table type; I want to print a value and then assign a new value to the next subscript. I have tried a lot but couldn't find a solution.
declare
type type_name is table of varchar2(10);
type type_name1 is table of type_name;
names type_name1:=type_name1(type_name('hello'));
begin
-----HOW TO PRINT A VALUE--------
-----HOW TO ASSIGN A NEW VALUE TO NEW SUBSCRIPT
null;
end;
1) I need to print the values of names(1).
2) Assign a value to names(2).
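A minimal sketch of both operations, assuming serveroutput is enabled and that the second subscript should hold a new inner table:

declare
  type type_name  is table of varchar2(10);
  type type_name1 is table of type_name;
  names type_name1 := type_name1(type_name('hello'));
begin
  -- 1) print the first value of the first inner table
  dbms_output.put_line(names(1)(1));
  -- 2) extend the outer table to create subscript 2, then assign a new inner table to it
  names.extend;
  names(2) := type_name('world');
  dbms_output.put_line(names(2)(1));
end;
/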
View 3 Replies
Mar 22, 2013
I am describing a SQL statement to get its column list:
DECLARE
cur NUMBER;
col_cnt INTEGER;
rec_tab DBMS_SQL.DESC_TAB;
[Code]....
Now I need to get the column list out of rec_tab.col_name and put it into my my_colls collection. Does Oracle have any built-in to do that?
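As far as I know there is no single built-in that copies the described column names into a user collection, so a simple loop is the usual approach; a sketch, assuming my_colls is a nested table of varchar2 and that DBMS_SQL.DESCRIBE_COLUMNS(cur, col_cnt, rec_tab) has already been called:

for i in 1 .. col_cnt loop
  my_colls.extend;
  my_colls(my_colls.count) := rec_tab(i).col_name;   -- copy each described column name
end loop;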
View 4 Replies
Jan 6, 2012
I have 2 tables with the columns below:
Create table rent (customer_id number(10),
doc_num varchar2(20)
);
Create table doc_id (doc_num varchar2(20));
Insert into rent(customer_id) values (1);
Insert into rent(customer_id) values (2);
Insert into rent(customer_id) values (3);
Insert into rent(customer_id) values (4);
[code]...
Now my requirement is that I need to randomly assign doc_num values from the doc_id table to the 4 customers in the rent table. I mean, update doc_num in the rent table from the doc_id table randomly. How do I write the update statement?
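One possible sketch: number both row sets in a random order and pair them up, assuming doc_id holds at least as many doc_num values as there are customers in rent:

merge into rent r
using (
  select c.customer_id, d.doc_num
  from  (select customer_id,
                row_number() over (order by dbms_random.value) rn
         from   rent) c
        join
        (select doc_num,
                row_number() over (order by dbms_random.value) rn
         from   doc_id) d
        on c.rn = d.rn
) m
on (r.customer_id = m.customer_id)
when matched then update set r.doc_num = m.doc_num;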
View 8 Replies
Jan 21, 2011
How can we check the I/O statistics on an ASM diskgroup?
Is there any command which can be run at the SQL*Plus prompt after logging into the ASM instance?
What steps can we take to increase the read speed and the write speed of the ASM diskgroup?
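From the ASM instance, the cumulative I/O counters per disk are exposed in V$ASM_DISK_STAT (which, unlike V$ASM_DISK, does not rescan the disks) and can be rolled up per diskgroup; a sketch:

select g.name                 diskgroup,
       sum(d.reads)           reads,
       sum(d.writes)          writes,
       sum(d.read_time)       read_time,
       sum(d.write_time)      write_time,
       sum(d.bytes_read)      bytes_read,
       sum(d.bytes_written)   bytes_written
from   v$asm_disk_stat d
       join v$asm_diskgroup g on g.group_number = d.group_number
group  by g.name;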
View 1 Replies
May 16, 2011
I want to take the statistics of my Production database and import them into my local database, so that the local database works with the Production statistics. I used statistics=compute to export the statistics. In the log file, for some tables, there was a warning like "EXP-00091 exporting questionable statistics".
Will this dump be useful for reproducing the production statistics? Which option should I use while importing: statistics=recalculate or statistics=safe?
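An alternative that sidesteps the EXP-00091 warning is to move the statistics with DBMS_STATS instead of relying on exp/imp; a sketch with placeholder schema and staging-table names:

-- on production: stage the schema statistics into a regular table, then exp/imp that table
begin
  dbms_stats.create_stat_table(ownname => 'MYSCHEMA', stattab => 'STATS_STAGE');
  dbms_stats.export_schema_stats(ownname => 'MYSCHEMA', stattab => 'STATS_STAGE');
end;
/

-- on the local database: once the STATS_STAGE table has been imported, publish the stats
begin
  dbms_stats.import_schema_stats(ownname => 'MYSCHEMA', stattab => 'STATS_STAGE');
end;
/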
View 5 Replies
Aug 28, 2010
1) Why do we need to compute statistics on an index?
2) Is it only so the optimizer can make a better plan?
3) If yes, does that mean the optimizer is not able to collect statistics by itself?
4) If I don't collect statistics, will the optimizer do it or skip it?
View 5 Replies
Sep 12, 2012
I am running Oracle 9i RAC on IBM AIX.
I have large partitioned tables (4 partitions are added every month). Is it possible to do incremental statistics gathering on these objects in 9i? If I collect stats with granularity => 'ALL' and estimate_percent => 100 the stats are accurate, but it takes a very long time.
One way may be to collect stats with granularity => 'PARTITION' for each new partition (this is quite fast), but what about the global table stats?
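9i has no true incremental statistics, so a common compromise is to compute the new partitions fully and refresh the global stats separately with a sample; a sketch, with schema, table and partition names as placeholders:

begin
  -- the newly added partition only, full compute
  dbms_stats.gather_table_stats(ownname          => 'MYSCHEMA',
                                tabname          => 'MY_PART_TABLE',
                                partname         => 'P_2012_09',
                                granularity      => 'PARTITION',
                                estimate_percent => 100,
                                cascade          => true);
  -- refresh the global (table-level) statistics with a sample to keep it fast
  dbms_stats.gather_table_stats(ownname          => 'MYSCHEMA',
                                tabname          => 'MY_PART_TABLE',
                                granularity      => 'GLOBAL',
                                estimate_percent => 10);
end;
/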
View 7 Replies
May 12, 2013
How can I see the query plan?
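The simplest way is EXPLAIN PLAN plus DBMS_XPLAN; a sketch with a hypothetical statement:

explain plan for
select * from emp where deptno = 10;   -- hypothetical statement

select * from table(dbms_xplan.display);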
View 1 Replies
Aug 13, 2012
After we upgraded our database from 10g to 11gR2, one of the SQL statements started running very slowly. Is there any way to use the 10g SQL plan for this query in 11gR2? The 10g plan still shows up in the history tables along with its hash value.
We have tried using SPM, but because select_workload_repository is set to basic (the default) in 10g, the plans are not getting populated.
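If the 10g plan really is present in AWR (DBA_HIST_SQL_PLAN / DBA_HIST_SQLSTAT), one route that can be sketched is to pack the statement from the workload repository into a SQL tuning set and load that into an SPM baseline. The snapshot range, STS name and sql_id below are placeholders, and the approach only works if the plan rows were actually captured:

declare
  cur      dbms_sqltune.sqlset_cursor;
  l_loaded pls_integer;
begin
  dbms_sqltune.create_sqlset(sqlset_name => 'STS_10G_PLAN');

  open cur for
    select value(p)
    from   table(dbms_sqltune.select_workload_repository(
                   begin_snap   => 1000,                              -- hypothetical snapshot range
                   end_snap     => 1100,
                   basic_filter => 'sql_id = ''abcd1234efgh5''')) p;  -- hypothetical sql_id

  dbms_sqltune.load_sqlset(sqlset_name => 'STS_10G_PLAN', populate_cursor => cur);

  l_loaded := dbms_spm.load_plans_from_sqlset(sqlset_name => 'STS_10G_PLAN');
end;
/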
View 1 Replies