Automated Gathering Of Optimizer Stats
Sep 10, 2013
I am quite confused about the optimizer stats collection job on 11g. When I run the following query, I see statistics collection enabled.
SQL> select client_name, status from dba_autotask_client;
CLIENT_NAME STATUS
---------------------------------------------------------------- --------
auto optimizer stats collection ENABLED
auto space advisor ENABLED
sql tuning advisor ENABLED
I can gather statistics manually using DBMS_STATS, but there is no automated gathering of optimizer stats. How can I have the system run and collect statistics on a daily basis?
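Since DBA_AUTOTASK_CLIENT already shows the task ENABLED, the usual next step is to confirm that the maintenance windows it runs in are themselves enabled and long enough; re-running the enable call is harmless if the task is already on. A minimal sketch against the 11g API:
BEGIN
  -- re-enable the automated optimizer stats task (no-op if it is already enabled)
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/
-- check the maintenance windows the task runs in
SELECT window_name, repeat_interval, duration, enabled
  FROM dba_scheduler_windows;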
View 1 Replies
Sep 20, 2012
In my database the auto optimizer stats collection job is scheduled and runs successfully; I am confirming this by viewing DBA_AUTOTASK_JOB_HISTORY. My doubt is whether the stats gathering job collects statistics when 10% (the default staleness threshold) of the data has been modified in the whole table or in individual table partitions.
Below is the output from USER_TAB_MODIFICATIONS. I can see two entries for the same table, one at table level and one at partition level.
SQL> select TABLE_NAME,PARTITION_NAME,INSERTS,UPDATES,DELETES,TIMESTAMP,TRUNCATED from USER_tab_modifications where table_name='DLM_PERFORMANCE_DATA';
TABLE_NAME PARTITION_NAME INSERTS UPDATES DELETES TIMESTAMP TRU
------------------------------ ------------------------------ ---------- ---------- ---------- ----------- ---
DLM_PERFORMANCE_DATA 169812174 0 0 20-SEP-2012 NO
DLM_PERFORMANCE_DATA SYS_P2663580 4946409 0 0 20-SEP-2012 NO
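As a side note, modifications are tracked at both the table and the partition level, which is why both rows appear. A hedged way to see what the job would currently consider stale (same table name as above) is to flush the monitoring info and query the STALE_STATS column:
exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
select table_name, partition_name, stale_stats, last_analyzed
  from user_tab_statistics
 where table_name = 'DLM_PERFORMANCE_DATA';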
View 4 Replies
View Related
Oct 1, 2012
I am on 11.2 on Linux, looking into a performance issue. The issue is around one particular SQL statement involving about 5 tables. I re-gathered statistics on the 2 main tables in the query (out of the 5).
When I say re-gathered, I first did DBMS_STATS.DELETE_TABLE_STATS and then did DBMS_STATS.GATHER_TABLE_STATS.
Earlier, we had histograms on these tables, which I removed and gathered stats without generating histograms.
SQL> select table_name, num_rows, sample_size, last_analyzed from user_tables where
2 table_name in ( 'DETAIL_TABLE','MASTER_TABLE');
TABLE_NAME NUM_ROWS SAMPLE_SIZE LAST_ANALYZED
------------------------------ ---------- ----------- -------------------
MASTER_TABLE 50615338 50615338 01/10/2012 11:09:27
DETAIL_TABLE 353550440 353550440 01/10/2012 11:10:05
2 rows selected.
Then I ran the SQL again a couple of times (actually, that SQL is in a stored procedure, which I ran a couple of times). I found a wonderful SQL on the internet which tells me when a statement ran and which plan (identified by its hash value) it used. Using it, I tried to check whether my SQL ran with a different plan, but it used exactly the same plan it used before I re-gathered the stats. See the last-analyzed time above and begin_interval_time below: the same SQL ran before and after stats collection, with the same plan_hash_value.
SQL> select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
2 nvl(executions_delta,0) execs,
3 (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
4 (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
5 from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
[code]....
7 rows selected.
My question is: when I re-gathered stats on 2 of the 5 tables in that SQL, are the existing plans not flushed out of the SGA? I was expecting that, at the very least, a new plan hash value would show up for my SQL after the stats collection.
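One thing worth checking, offered as an assumption rather than a diagnosis: by default DBMS_STATS uses NO_INVALIDATE => DBMS_STATS.AUTO_INVALIDATE, so dependent cursors are invalidated gradually over a window rather than immediately, and even after invalidation the optimizer may simply arrive at the same plan (and hence the same plan_hash_value). A hedged sketch that forces immediate cursor invalidation when gathering (table name taken from the example above):
begin
  dbms_stats.gather_table_stats(
    ownname       => user,
    tabname       => 'MASTER_TABLE',
    no_invalidate => FALSE,   -- invalidate dependent cursors right away
    cascade       => TRUE);
end;
/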
View 8 Replies
View Related
Oct 26, 2010
Oracle 10g has the automatic stats gathering feature. In that case, is it still necessary to run DBMS_STATS on tables manually? Do the gathered stats become stale when the auto stats job runs?
View 1 Replies
View Related
Jun 2, 2011
I am gathering stats using the block below, for some 3 million records, and there are 6 indexes on the table. What is the relevance of the value 4 here (i.e., method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4')? If I increase 4 to 250, will the speed of stats gathering change? My intention is to speed up the gathering of stats.
begin
dbms_stats.gather_table_stats(
ownname => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'),
tabname => 'LEGAL_VIEW_TARGET',
method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4',
cascade => TRUE
);
END;
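For context, and as a hedged suggestion rather than a definitive answer: the SIZE n clause is the maximum number of histogram buckets per column, not a sample size, so raising it from 4 to 250 would, if anything, add work. Options that more commonly speed up gathering are sampling and parallelism, sketched below (the DEGREE value is illustrative):
begin
  dbms_stats.gather_table_stats(
    ownname          => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'),
    tabname          => 'LEGAL_VIEW_TARGET',
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE 1',   -- SIZE 1 = no histograms
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,        -- sample instead of reading every row
    degree           => 4,                                  -- parallel gathering (illustrative value)
    cascade          => TRUE
  );
end;
/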
View 12 Replies
View Related
May 16, 2011
I have several databases that I've recently upgraded from 9i to 11g. With all of them, the automatic stats gathering process has worked just fine every night during the maintenance window.
However, I have this other database that I created, and it seems that the only stats being gathered are on the SYS and SYSTEM schemas, not the actual schema that holds all of our tables.
I did some searching, but I'm not sure I was using the right search terms, because I came up empty.
BANNER
-----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
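One common cause worth ruling out (an assumption, not a diagnosis from this thread) is that statistics on the application schema are locked, which makes the automatic task skip those tables silently. A minimal check, with APP_OWNER as a placeholder schema name:
select table_name, last_analyzed, stattype_locked
  from dba_tab_statistics
 where owner = 'APP_OWNER'
 order by last_analyzed nulls first;   -- never-analyzed tables sort first; STATTYPE_LOCKED set means the auto task skips them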
View 16 Replies
View Related
Jan 26, 2012
I have created a table like the one below:
PROMPT CREATE TABLE tst_fetch_vendor_data
CREATE TABLE tst_fetch_vendor_data (
vendor_data_seq_no NUMBER NOT NULL,
study_seq_no NUMBER NOT NULL,
vendor_record_seq_no NUMBER NOT NULL,
control_column_seq_no NUMBER NOT NULL,
resolved_value VARCHAR2(4000) NULL,
original_value VARCHAR2(4000) NULL,
transaction_user VARCHAR2(30) NOT NULL,
[code]....
It's just a temporary table in which data comes and goes; I am using it in the middle of a process, like below:
--EXECUTE IMMEDIATE 'TRUNCATE TABLE TST_FETCH_VENDOR_DATA DROP STORAGE';
insert /*+ append */ into tst_fetch_vendor_data
(select * from vendor_data vd
where vd.control_column_seq_no in
(select control_column_seq_no from temp_control_column));
dbms_stats.gather_table_stats('EPDSYSREP','TST_FETCH_VENDOR_DATA',ESTIMATE_PERCENT=>100,
METHOD_OPT=>'for all indexed columns size auto',CASCADE=>True);
then comes the code that uses the table. This table can contain anywhere from 0 to 108000000 records. Now my questions are:
1. What sampling size should I select (currently it is 100%)? Can I use DBMS_STATS.AUTO_SAMPLE_SIZE, and what would the effect be? (A sketch follows this list.)
2. Is DBMS_STATS a good approach here, or should I use dynamic sampling?
3. What about using CTAS instead of loading the data through INSERT?
4. What about a PL/SQL table with an index, or a WITH clause query?
5. Do I need to rebuild the indexes after inserting data into the table?
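Regarding question 1, a hedged sketch of the same call using automatic sampling (DBMS_STATS then picks a sample size based on the table's size) is below; whether it is faster and accurate enough for this data would need testing:
dbms_stats.gather_table_stats('EPDSYSREP','TST_FETCH_VENDOR_DATA',
    ESTIMATE_PERCENT => dbms_stats.auto_sample_size,   -- let DBMS_STATS choose the sample
    METHOD_OPT       => 'for all indexed columns size auto',
    CASCADE          => True);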
View 3 Replies
View Related
Jan 27, 2011
DBMS_STATS.CREATE_STATS_TABLE
DBMS_STATS.EXPORT_SCHEMA_STATS
DBMS_STATS.GATHER_TABLE_STATS
I have used the above to get a copy of schema stats and gather new stats for specific tables into a STATS TABLE in my personal schema. What I want to do now is use this stats table to generate plans for queries where I believe stats are off. Is it even possible? To be clear, I do not want to import stats because this replaces the stats currently there. I just want to point the CBO to my stats table for generating plans.
I was hoping there was a session parameter I could set to tell Oracle to use my stats table when generating plans, or an EXPLAIN PLAN clause or DBMS_XPLAN parameter that would tell these tools to use my stats table, or even some way to tell AUTOTRACE, but I have found none of these.
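One approach that can come close, assuming the database is 11g (an assumption; the thread does not give a version), is pending statistics: gather into the pending area and enable it only for the current session, leaving the published stats untouched. Schema and table names below are placeholders.
exec dbms_stats.set_table_prefs('MY_SCHEMA', 'MY_TABLE', 'PUBLISH', 'FALSE');
exec dbms_stats.gather_table_stats('MY_SCHEMA', 'MY_TABLE');   -- lands in pending stats only

alter session set optimizer_use_pending_statistics = true;
-- explain plan for / run the query here; this session's plans use the pending stats
alter session set optimizer_use_pending_statistics = false;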
View 2 Replies
View Related
Sep 11, 2012
While stats gathering was running for a table, I unknowingly deleted the old stats using EXEC DBMS_STATS.DELETE_TABLE_STATS. Will this affect the stats gathering job currently running for the table, and will the stats still be gathered successfully?
View 5 Replies
View Related
Jul 18, 2012
Query -
SELECT *
FROM sysadm.ps_tmtl_post_vw a
WHERE a.month_prepared_for = 'JUNE,2012'
AND a.ca_status = 'P5 CUST GO AHEAD'
[code]...
When I try this with SQL Tuning Sets, it throws the following error:
ADDITIONAL INFORMATION SECTION
-------------------------------------------------------------------------------
- The optimizer could not merge the view at line ID 2 of the execution plan.
The optimizer cannot merge a view that contains a set operator.
I read an earlier forum thread which says the optimizer is unable to merge views containing constructs like ORDER BY, set operators, and so on. There is one view being used in the query; when I did SELECT * FROM that view, it took more than 16 hours to complete (a bad view).
Attached File(s)
exec_plan.txt ( 2.06MB )
view_def.txt ( 14.12K )
View 5 Replies
View Related
Jun 3, 2013
What is the Oracle Cost-Based Optimizer? Is there any material on it that is easy to follow?
View 3 Replies
View Related
Jan 22, 2009
How can I reduce the CPU cost of a query at the query level?
View 10 Replies
View Related
Jul 23, 2013
The full statement is:
SELECT v.key$ FROM VERSION_TABLE v, DOCUMENT_TABLE d, CLASS_TABLE z WHERE
v.documentKey = d.key$ AND
d.classKey = z.key$ AND
z.key$ IN (SELECT zz.key$ FROM CLASS_TABLE zz
START WITH zz.name = 'esDTTemplate'
CONNECT BY PRIOR zz.key$ = zz.parentKey) AND
v.ESGROUP = 'SearchOperatorsMapping' ORDER BY d.name
Now I noticed that the subquery is never used to seed the join: indexes, if any, are used; otherwise a full table scan is performed. In the example, if ESGROUP is indexed, then it is chosen to start the join evaluation; if not, a full table scan is performed. Is there any way to suggest to the optimizer that it use the subquery as a fallback when there are no indexes?
In the above example, where VERSION_TABLE contains nearly two million records, the no-index case takes 60 seconds vs. less than 1 second in the indexed case. Wrapping the hierarchical query in an inline view leads to the same result.
PS: the execution plan (without index) is:
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 9 (100)| |
| 1 | SORT ORDER BY | | 1 | 171 | 9 (23)| 00:00:01 |
|* 2 | HASH JOIN SEMI | | 1 | 171 | 8 (13)| 00:00:01 |
[code]...
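One option to experiment with, purely as a sketch and not something from the thread: combine the inline-view rewrite already tried with LEADING and USE_NL hints so the hierarchical subquery drives the join. Whether this actually helps depends on the cardinalities involved.
SELECT /*+ LEADING(zz) USE_NL(z d v) */ v.key$
FROM  (SELECT zz.key$
         FROM CLASS_TABLE zz
        START WITH zz.name = 'esDTTemplate'
      CONNECT BY PRIOR zz.key$ = zz.parentKey) zz,
      CLASS_TABLE z, DOCUMENT_TABLE d, VERSION_TABLE v
WHERE z.key$ = zz.key$
  AND d.classKey = z.key$
  AND v.documentKey = d.key$
  AND v.ESGROUP = 'SearchOperatorsMapping'
ORDER BY d.name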
View 3 Replies
View Related
Oct 14, 2013
I'm looking to see if there is a way (fully expecting it to be an underscore parameter, or two...) to force the optimizer to keep churning until all permutations are exhausted. I'm aware that, to paraphrase, it cuts out when it has spent more time parsing than it estimates it would take to just run the statement.
I've got some irritating problems with XML rewrite, XML indexes and access paths/cardinalities etc., and I really need the entire thing considered as a one-off for debugging this. I've already cranked the maximum permutations up to the max but it's not enough; it shorts out after 5041 permutations (I'd set the maximum to 80000).
I know you wouldn't want to do this in the real world, but I can't get the damned thing to run the plan I want in a 10053 trace so I can see the values it has there. I know I can hint it, but I'm trying to ascertain why it's not even considering it in a "normal" parse.
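For reference, a sketch of the session-level settings being discussed (the hidden parameter name is the commonly cited one and, like any underscore parameter, belongs on a test system only):
alter session set "_optimizer_max_permutations" = 80000;
alter session set events '10053 trace name context forever, level 1';
-- hard-parse the statement here (e.g. via EXPLAIN PLAN or by changing a literal)
alter session set events '10053 trace name context off';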
View 6 Replies
View Related
Sep 12, 2012
I am running Oracle 9i RAC on IBM AIX.
I have large partitioned tables (4 partitions are added every month). Is it possible to do incremental statistics gathering on these objects in 9i? If I collect stats with GRANULARITY => 'ALL' and ESTIMATE_PERCENT => 100, the stats are accurate but it takes a very long time.
One way may be to collect stats with GRANULARITY => 'PARTITION' for each new partition (this is quite fast), but what about the global table stats?
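Incremental maintenance of global stats via synopses only arrived in 11g, so on 9i a common compromise is a fast per-partition gather for each new partition plus a less frequent global gather. A hedged sketch of the per-partition call (table and partition names are placeholders):
begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'BIG_PART_TABLE',
    partname         => 'P_2012_09',
    granularity      => 'PARTITION',
    estimate_percent => 10,            -- sample rather than 100%
    cascade          => true);
end;
/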
View 7 Replies
View Related
Dec 24, 2012
Is it possible to gather stats for a schema while it is in use? When I try to analyze the tables of the schema, it shows that the statistics for the table are locked. So, instead of analyzing the tables one by one, can I gather the schema stats while the objects of that schema are still in use (i.e., while DML or SELECT statements are being issued against those objects)?
DB version : 10.2.0.4
OS version : RHEL 5.8
DB type : RAC
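Gathering stats does not need exclusive access to the objects; concurrent DML and queries are fine. The "statistics locked" message comes from a separate, deliberate setting (DBMS_STATS.LOCK_TABLE_STATS / LOCK_SCHEMA_STATS) that someone has applied. A hedged sketch for unlocking and then gathering only what is stale (APP_OWNER is a placeholder):
exec dbms_stats.unlock_schema_stats('APP_OWNER');
begin
  dbms_stats.gather_schema_stats(
    ownname => 'APP_OWNER',
    options => 'GATHER STALE',   -- only objects whose stats are stale
    cascade => true);
end;
/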
View 12 Replies
View Related
Mar 4, 2013
I have a form based on a table where I have two columns as the primary key (col1, col2), i.e., a composite PK. Sample values are:
col1 col2
1 21
2 21
62 21
62 1
The issue arises at the point where the automatic row fetching is executed [Automated Row Processing section on the edit page]. The primary key column is col1 (only one column is allowed as the primary key in the APEX Automated Row Fetch) and the secondary key column is col2.
For example, when I access the row for col1 = 62, it fetches 2 rows where the value is 62 (based on col1 only) and I get the error: exact fetch returns more than 1 row.
How can I avoid this?
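If the built-in process cannot be made to honour both key items, one fallback, sketched here with placeholder table and item names, is to replace the Automated Row Fetch with a manual PL/SQL fetch process that filters on both columns:
begin
  select col3, col4                  -- the non-key columns the form displays
    into :P1_COL3, :P1_COL4
    from my_table                    -- placeholder table name
   where col1 = :P1_COL1
     and col2 = :P1_COL2;
end;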
View 1 Replies
View Related
Nov 16, 2011
The manual workaround of populating child tables for testing purposes takes a long time and is very painful work. So I am looking for a tool that takes a parent table name and a child table name as input and produces INSERT statements for the child table, with matching foreign key values, as output.
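Whatever generates the INSERTs would first need the foreign key column mapping between the two tables; a hedged sketch of that lookup against the data dictionary (CHILD_TABLE and PARENT_TABLE are placeholders):
select cc.column_name  as child_column,
       rcc.column_name as parent_column
  from user_constraints c
  join user_cons_columns cc
    on cc.constraint_name = c.constraint_name
  join user_constraints p
    on p.constraint_name = c.r_constraint_name      -- the referenced PK/UK constraint
  join user_cons_columns rcc
    on rcc.constraint_name = p.constraint_name
   and rcc.position = cc.position
 where c.constraint_type = 'R'
   and c.table_name = 'CHILD_TABLE'
   and p.table_name = 'PARENT_TABLE';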
View 5 Replies
View Related
Sep 3, 2013
I need to view database statistics after executing SQL> exec dbms_stats.gather_database_stats; Is that really possible? How can I check it?
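One simple check, offered as a sketch: the gathered statistics themselves are visible in the dictionary, so listing the most recently analyzed objects confirms the run:
select owner, table_name, num_rows, last_analyzed
  from (select owner, table_name, num_rows, last_analyzed
          from dba_tab_statistics
         where last_analyzed is not null
         order by last_analyzed desc)
 where rownum <= 20;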
View 7 Replies
View Related
Oct 9, 2012
The database is 11.2.0.3 on a Linux machine. I issued the following command, but the session was a little slow. The table is about 50 GB and has 3 indexes. I specified degree=>8 for parallel processing.
When gathering statistics on the table, parallel slaves were invoked and the table portion finished quickly enough. However, when it moved on to gathering statistics on the indexes, only one active session was used, so the degree=>8 option appeared to be ignored.
My question is:
Do I need to use DBMS_STATS.GATHER_INDEX_STATS instead of the cascade option in order to gather statistics on the indexes in parallel?
exec dbms_stats.gather_table_stats(ownname=>'SDPSTGOUT',tabname=>'OUT_SDP_CONTACT_HIS',estimate_percent=>10, degree=>8 , method_opt=>'FOR ALL COLUMNS SIZE 1',Granularity=>'ALL',cascade=>TRUE)
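A hedged sketch of that alternative, looping over the table's indexes and passing DEGREE to each GATHER_INDEX_STATS call (owner and table names taken from the command above):
begin
  for ix in (select index_name
               from dba_indexes
              where owner = 'SDPSTGOUT'
                and table_name = 'OUT_SDP_CONTACT_HIS') loop
    dbms_stats.gather_index_stats(
      ownname          => 'SDPSTGOUT',
      indname          => ix.index_name,
      estimate_percent => 10,
      degree           => 8);
  end loop;
end;
/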
View 2 Replies
View Related
Feb 27, 2012
I have a table with 15 columns containing millions of rows, which is updated every day. I have created a similar report table with only the columns I need to report on from the main table. What PL/SQL script do I need to write (or is there a better alternative) to copy the data for the columns I need from the main table into the new report table? The new report table will be updated every day at 01:00 AM with the data coming from the main table, and the update is automated.
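A minimal sketch of one way to do this, with all table, column and job names as placeholders, is a DBMS_SCHEDULER job that refreshes the report table nightly (a materialized view with a scheduled refresh would be the other common alternative):
begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_REPORT_TABLE',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          execute immediate ''truncate table report_table'';
                          insert into report_table (col1, col2, col3)
                            select col1, col2, col3 from main_table;
                          commit;
                        end;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=1;BYMINUTE=0',
    enabled         => TRUE);
end;
/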
View 3 Replies
View Related
May 10, 2011
I have to send several reports to various branches via email. I created a branch table in which each branch's email address is stored in a column. Now, when a report is generated for a particular branch, its PDF should also be sent via email to that branch. How can this be done in a WHEN-BUTTON-PRESSED trigger?
View 1 Replies
View Related
Apr 28, 2008
Is there a way to query an Oracle database in an automated fashion by a timestamp field, relative to the current timestamp, like: 04/29/08 00:00:00 minus 72 hours?
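A minimal sketch of the filter itself (table and column names are placeholders); scheduling the query is then a separate matter of a job or cron entry:
select *
  from my_events
 where event_ts >= systimestamp - interval '72' hour;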
View 3 Replies
View Related
Dec 14, 2012
Is it possible for Access to extract data from an Oracle database and upload it directly?
Currently we have a business process where data is being extracted in scheduled queries (30+) to Excel spreadsheets, then manually edited to remove heading lines and imported to an Access database. I see an opportunity to automate a time consuming manual activity by having the Access db extract the data and directly upload it.
View 3 Replies
View Related
Jul 17, 2012
OS: RHEL 5.7 64 bit
DB: 11.2.0.2 Standard Edition 64 bit
Every day an EOD (end of day) process is run, and after the EOD, users are requesting to receive a mail from the database confirming the same. For this we need to configure an automated email which will be sent to a list of users' email IDs immediately after the EOD is done.
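One way to sketch this, with the SMTP host, sender and recipients as placeholders: install UTL_MAIL (utlmail.sql and prvtmail.plb under $ORACLE_HOME/rdbms/admin), point SMTP_OUT_SERVER at the mail relay, and call UTL_MAIL.SEND as the final step of the EOD job.
alter system set smtp_out_server = 'mailhost.example.com:25' scope = both;   -- assumes an spfile

begin
  utl_mail.send(
    sender     => 'db@example.com',
    recipients => 'user1@example.com,user2@example.com',
    subject    => 'EOD completed',
    message    => 'EOD finished at ' || to_char(sysdate, 'DD-MON-YYYY HH24:MI'));
end;
/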
View 3 Replies
View Related
Jul 5, 2011
I need to exclude a single schema from the automatic stats gathering feature in 11g. The tables in this schema are analyzed at the appropriate time via the application code. The auto stats gathering job sometimes kicks in at a time when the tables are being updated or loaded, which can skew explain plans during the updates/inserts.
I've searched through the Oracle documentation and cannot find a way to simply "exclude" the schema without locking it. I see it is possible to disable the auto stats gathering for the entire database but not at the schema level.
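For completeness, the commonly cited workaround is still the lock-based one this post hopes to avoid: lock the schema's stats so the auto task skips them, and have the application's own gathering pass FORCE => TRUE to override the lock. A hedged sketch (APP_OWNER and the table name are placeholders):
exec dbms_stats.lock_schema_stats('APP_OWNER');

begin
  dbms_stats.gather_table_stats(
    ownname => 'APP_OWNER',
    tabname => 'SOME_TABLE',
    force   => TRUE);    -- gathers even though the stats are locked
end;
/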
View 5 Replies
View Related
Apr 3, 2013
I have a form on a table that allows me to edit, delete, or create records. If I choose to edit a record from the report, the form gets populated correctly and edits or deletes work fine. My problem is when I choose to create a new record. I get a blank form, which is what I want, but once I fill it out and click the Create button, the Automated Row Fetch fires again and gives me a "no data found" error. How do I get it to not fire when the Create button is pressed?
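One pattern that usually handles this, sketched with a placeholder item name: put a condition on the Automated Row Fetch process so it only runs when the primary key item already holds a value (for a brand-new record the item is still null, so the fetch is skipped). As a PL/SQL Expression condition it would be simply:
:P10_RECORD_ID is not null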
View 2 Replies
View Related
Oct 18, 2012
I want to know how the Oracle optimizer chooses join methods and join order and applies them while executing a query, so that I can be confident about the joins the optimizer will use before writing any query.
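The most direct way to see the join order and join methods chosen for a given statement is its execution plan; a minimal sketch using the classic EMP/DEPT demo tables:
explain plan for
  select e.ename, d.dname
    from emp e
    join dept d on d.deptno = e.deptno;

select * from table(dbms_xplan.display);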
View 2 Replies
View Related
Nov 8, 2011
I need an automated shell script for renaming an Oracle 10gR2 database; right now this is a daily manual job for us.
View 4 Replies
View Related
Mar 14, 2012
How do we set up the database for automated RMAN backups? For example, if we want a backup to run at exactly 8:00 PM IST, how do we configure that?
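One common way to sketch this (script path and job name are placeholders; cron is an equally common choice): a DBMS_SCHEDULER job of type EXECUTABLE that calls an RMAN wrapper script every day at 20:00 server time (if the server is not on IST, adjust the schedule or the job's start_date time zone accordingly):
begin
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_RMAN_BACKUP',
    job_type        => 'EXECUTABLE',
    job_action      => '/home/oracle/scripts/rman_backup.sh',
    repeat_interval => 'FREQ=DAILY;BYHOUR=20;BYMINUTE=0',
    enabled         => TRUE);
end;
/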
View 6 Replies
View Related