The database is 11.2.0.3 on a Linux machine. I issued the following command, but the session was a little slow. The table is about 50 GB and has 3 indexes. I specified degree=>8 for parallel processing.
When gathering statistics on the table, parallel slaves were invoked and the table statistics finished quickly enough. However, when it moved on to gathering statistics on the indexes, only one active session was invoked, so the degree=>8 option was apparently ignored.
My question is: do I need to use dbms_stats.gather_index_stats instead of the cascade option in order to gather statistics on the indexes with parallelism?
exec dbms_stats.gather_table_stats(ownname=>'SDPSTGOUT', tabname=>'OUT_SDP_CONTACT_HIS', estimate_percent=>10, degree=>8, method_opt=>'FOR ALL COLUMNS SIZE 1', granularity=>'ALL', cascade=>TRUE)
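For reference, a minimal sketch of the per-index route (the index name here is hypothetical):

exec dbms_stats.gather_index_stats(ownname => 'SDPSTGOUT', indname => 'OUT_SDP_CONTACT_HIS_IX1', estimate_percent => 10, degree => 8)

Whether cascade honors the degree for the index passes seems to vary by version and patch level, so gathering each index explicitly with its own degree is a common workaround.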
I have large partitioned tables (4 partitions are added every month). Is it possible to collect incremental statistics on these objects (9i)? If I collect stats with granularity => 'ALL' and estimate_percent => 100 the stats are accurate, but it takes so much time.
One way may be to collect stats with granularity => 'PARTITION' for each new partition (this is quite fast), but what about the global table stats?
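For what it's worth, incremental maintenance of global statistics only arrived in 11g; on 9i the partition-level gather keeps the new partition current, but the global stats need their own pass, typically sampled to keep it cheap. A minimal sketch (table and partition names hypothetical):

exec dbms_stats.gather_table_stats(ownname => 'APP', tabname => 'BIG_FACT', partname => 'P_2004_JAN', granularity => 'PARTITION', estimate_percent => 100)
exec dbms_stats.gather_table_stats(ownname => 'APP', tabname => 'BIG_FACT', granularity => 'GLOBAL', estimate_percent => 10)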
Is it possible to gather stats for a schema while it is in use? When I try to analyze the tables of the schema, it shows that the statistics for the table are locked. So, instead of analyzing tables one by one, can I gather the schema stats while the objects of that schema are still in use (with DML or select statements being issued against those objects)?
DB version: 10.2.0.4, OS version: RHEL 5.8, DB type: RAC
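Gathering statistics does not block DML or queries, so a schema-wide gather on a live schema is fine; the "statistics locked" message comes from an explicit DBMS_STATS lock, which has to be removed first. A minimal sketch, assuming the schema is APP (hypothetical):

exec dbms_stats.unlock_schema_stats('APP')
exec dbms_stats.gather_schema_stats(ownname => 'APP', estimate_percent => dbms_stats.auto_sample_size, degree => 4, cascade => TRUE)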
We have a bunch of tables with FKs, but we do not have the ON DELETE CASCADE option on these FKs. If I want to delete records in the database, do I have to write delete statements for each table starting with the lowest child, or is there a cascade option I can use with the DELETE statement?
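There is no cascade clause on the DELETE statement itself; without ON DELETE CASCADE on the FKs, the usual pattern is child-first deletes in one transaction. A minimal sketch with hypothetical tables, where order_items references orders(order_id):

DELETE FROM order_items WHERE order_id = 42;
DELETE FROM orders WHERE order_id = 42;
COMMIT;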
We have used ALTER TABLE to change a constraint to ON DELETE CASCADE. There are two options; what is the difference between them? Example:

1) ALTER TABLE t3 ADD CONSTRAINT t2_f FOREIGN KEY (id) REFERENCES t2 (id, column2, column3) ON DELETE CASCADE;
2) ALTER TABLE t3 ADD CONSTRAINT t2_f FOREIGN KEY (id) REFERENCES t2 ON DELETE CASCADE;

I know that with the first one, the named columns are tied to ON DELETE CASCADE. What is the use of the second one, where no column name follows "FOREIGN KEY (id) REFERENCES t2 ON DELETE CASCADE"?
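For reference: when the column list after REFERENCES is omitted, the foreign key implicitly references the parent table's primary key; when present, it must name a declared primary or unique key with the same number of columns as the FK list (so the first form above, one FK column against three referenced columns, would be rejected). A minimal sketch with hypothetical tables:

CREATE TABLE t2 (id NUMBER PRIMARY KEY, col2 NUMBER);

-- explicit: the list must match a PK/UK of t2 column-for-column
ALTER TABLE t3 ADD CONSTRAINT t3_fk1 FOREIGN KEY (id) REFERENCES t2 (id) ON DELETE CASCADE;

-- implicit: defaults to t2's primary key, so this is equivalent here
ALTER TABLE t3 ADD CONSTRAINT t3_fk2 FOREIGN KEY (id) REFERENCES t2 ON DELETE CASCADE;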
I am quite confused with the optimizer stats collection job on 11g. When I run the following query, I see that the collection is enabled.
SQL> select client_name, status from dba_autotask_client;
CLIENT_NAME                                                      STATUS
---------------------------------------------------------------- --------
auto optimizer stats collection                                  ENABLED
auto space advisor                                               ENABLED
sql tuning advisor                                               ENABLED
I can gather statistics manually using DBMS_STATS, but there is no automated gathering of optimizer stats. How can I have the system run and collect statistics on a daily basis?
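If the client shows ENABLED but nothing is collected, it is worth checking whether the task actually ran in the maintenance windows, and re-enabling it if not. A minimal diagnostic sketch using the standard 11g views and API:

select client_name, window_name, window_start_time, jobs_created, jobs_started, jobs_completed
from dba_autotask_client_history
where client_name = 'auto optimizer stats collection';

exec dbms_auto_task_admin.enable(client_name => 'auto optimizer stats collection', operation => NULL, window_name => NULL)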
I am on 11.2 on Linux. I am looking into a performance issue around one particular SQL, involving about 5 tables. I re-gathered statistics on the 2 main tables in the query (out of the 5).
When I say re-gathered, I first did DBMS_STATS.DELETE_TABLE_STATS and then did DBMS_STATS.GATHER_TABLE_STATS.
Earlier, we had histograms on these tables, which I removed, and I gathered stats without generating histograms.

SQL> select table_name, num_rows, sample_size, last_analyzed from user_tables where
  2  table_name in ('DETAIL_TABLE', 'MASTER_TABLE');

2 rows selected. Then I ran the SQL again a couple of times (actually, that SQL is in a stored procedure, which I ran a couple of times). I found this wonderful SQL on the internet, which tells me when a SQL ran and which plan (identified by its hash value) it used. Using it I tried to check whether my SQL ran with a different plan, but it used exactly the same plan as before I re-gathered the stats. See the last_analyzed time above and begin_interval_time below: the same SQL ran before and after stats collection, with the same plan_hash_value.
My question is: when I re-gathered stats on 2 of the 5 tables in a given SQL, are the plans not flushed out of the SGA? I was expecting that at least a new plan hash value would show up for my SQL after stats collection.
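One possible explanation (an assumption, since the gather calls are not shown): since 10g, DBMS_STATS defaults to no_invalidate => DBMS_STATS.AUTO_INVALIDATE, which invalidates dependent cursors gradually over a time window rather than immediately, so the existing plan can keep being used for a while after a re-gather. Also note that a fresh hard parse can legitimately arrive at the same plan_hash_value if the new stats lead the optimizer to the same choice. A sketch forcing immediate cursor invalidation:

exec dbms_stats.gather_table_stats(ownname => user, tabname => 'MASTER_TABLE', no_invalidate => FALSE)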
Oracle 10g has the feature of automatic stats gathering; in that case, is it necessary to run DBMS_STATS on tables manually? Do the manually gathered stats become stale when the auto stats job runs?
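For context, the 10g auto job only re-gathers objects whose stats are missing or stale (roughly 10% of the rows changed since the last gather). A minimal sketch to see what the job would pick up; the schema name is hypothetical:

select table_name, stale_stats, last_analyzed
from dba_tab_statistics
where owner = 'APP'
and (stale_stats = 'YES' or last_analyzed is null);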
I am gathering stats using the block below, i.e., for some 3 million records, and there are 6 indexes on the table. What is the relevance of the value 4 here (i.e., method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4')? If I increase 4 to 250, will the speed of gathering stats change? My intention is to speed up the gathering of stats.
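For context: in method_opt, SIZE n is the maximum number of histogram buckets per column, not a sampling knob, so raising 4 to 250 asks for larger histograms (the cap in this era is 254 buckets) and if anything adds work. The usual speed-oriented choice is to skip histograms entirely; a sketch with a hypothetical table name:

exec dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_TABLE', method_opt => 'FOR ALL INDEXED COLUMNS SIZE 1', cascade => TRUE)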
I need to exclude a single schema from the autostats gathering feature in 11g. The tables in this schema are analyzed at the appropriate time via the application code. The autostats gathering job sometimes kicks in at a time when the tables are being updated or loaded, which can skew explain plans during the updates/inserts.
I've searched through the Oracle documentation and cannot find a way to simply exclude the schema without locking it. I see it is possible to disable autostats at the entire DB level, but not at the schema level.
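As far as I know, 11g has no supported schema-level exclusion, so the usual workaround is the very locking mentioned above, made livable by having the application's own gather calls override the lock with force => TRUE. A sketch with hypothetical names:

exec dbms_stats.lock_schema_stats('APP')

-- in the application's own stats job, override the lock:
exec dbms_stats.gather_table_stats(ownname => 'APP', tabname => 'T1', force => TRUE)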
I have several databases that I've recently upgraded from 9i to 11g. With all of them, the automatic stats gathering process has worked just fine every night during the maintenance window.
However, I have this other database that I created, and it seems that the only stats being gathered are on the SYS and SYSTEM schemas, not the actual schema that holds all of our tables.
I did some searching, but I'm not sure I was using the right search terms, because I came up empty.
BANNER
-----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
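Two things worth checking here (assumptions, since nothing else is shown): whether the application schema's statistics are locked, and whether anything in it has ever been analyzed at all. A diagnostic sketch with a hypothetical schema name:

select table_name, last_analyzed, stattype_locked
from dba_tab_statistics
where owner = 'APP_SCHEMA';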
PROMPT CREATE TABLE tst_fetch_vendor_data
CREATE TABLE tst_fetch_vendor_data (
  vendor_data_seq_no     NUMBER         NOT NULL,
  study_seq_no           NUMBER         NOT NULL,
  vendor_record_seq_no   NUMBER         NOT NULL,
  control_column_seq_no  NUMBER         NOT NULL,
  resolved_value         VARCHAR2(4000) NULL,
  original_value         VARCHAR2(4000) NULL,
  transaction_user       VARCHAR2(30)   NOT NULL,
  [code]....
It's just a temporary table, in which data comes and goes. I am using it in the middle of a process, like below:
--EXECUTE IMMEDIATE 'TRUNCATE TABLE TST_FETCH_VENDOR_DATA DROP STORAGE';

insert /*+ append */ into tst_fetch_vendor_data
  (select * from vendor_data vd
   where vd.control_column_seq_no in
         (select control_column_seq_no from temp_control_column));

dbms_stats.gather_table_stats('EPDSYSREP', 'TST_FETCH_VENDOR_DATA',
  estimate_percent => 100,
  method_opt => 'for all indexed columns size auto',
  cascade => TRUE);
...then comes the code that uses the table. This table can contain anywhere from 0 to 108,000,000 records. Now my questions are:
1. How big should the sampling size be (currently it's 100%)? Can I use dbms_stats.auto_sample_size, and what would the effect be? (See the sketch after this list.)
2. Is dbms_stats a good approach, or should I use dynamic sampling?
3. What about using CTAS instead of loading the data through INSERT?
4. What about a PL/SQL table with an index, or a WITH clause query?
5. Do I need to rebuild the indexes after inserting data into the table?
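On question 1, a minimal sketch of the auto_sample_size variant (same owner and table as above); this lets Oracle choose the sample size per column instead of reading all rows at 100%:

exec dbms_stats.gather_table_stats('EPDSYSREP', 'TST_FETCH_VENDOR_DATA', estimate_percent => dbms_stats.auto_sample_size, method_opt => 'for all indexed columns size auto', cascade => TRUE)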
I have to write a procedure that accepts schema name, table name, and column value as parameters.... I know that I need to use the data dictionary metadata to do the deletes manually.
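A minimal sketch of such a procedure, under heavy assumptions: single-column foreign keys, only one level of child tables (no recursion), the column belongs to the parent table, and all inputs are trusted (a real version should validate the names with DBMS_ASSERT to avoid SQL injection). The procedure name and logic are hypothetical:

CREATE OR REPLACE PROCEDURE delete_with_children (
  p_owner  IN VARCHAR2,
  p_table  IN VARCHAR2,
  p_column IN VARCHAR2,
  p_value  IN VARCHAR2
) AUTHID CURRENT_USER AS
BEGIN
  -- find child tables whose FKs reference p_owner.p_table (single-column FKs assumed)
  FOR child IN (
    SELECT fk.owner        AS child_owner,
           fk.table_name   AS child_table,
           fkc.column_name AS fk_column,
           pkc.column_name AS pk_column
    FROM   all_constraints  fk
    JOIN   all_cons_columns fkc
           ON fkc.owner = fk.owner
          AND fkc.constraint_name = fk.constraint_name
    JOIN   all_cons_columns pkc
           ON pkc.owner = fk.r_owner
          AND pkc.constraint_name = fk.r_constraint_name
          AND pkc.position = fkc.position
    WHERE  fk.constraint_type = 'R'
    AND    pkc.owner = UPPER(p_owner)
    AND    pkc.table_name = UPPER(p_table)
  ) LOOP
    -- delete child rows pointing at the parent rows we are about to delete
    EXECUTE IMMEDIATE
      'DELETE FROM ' || child.child_owner || '.' || child.child_table ||
      ' WHERE ' || child.fk_column || ' IN (SELECT ' || child.pk_column ||
      ' FROM ' || UPPER(p_owner) || '.' || UPPER(p_table) ||
      ' WHERE ' || p_column || ' = :v)'
      USING p_value;
  END LOOP;
  -- finally delete the parent rows themselves
  EXECUTE IMMEDIATE
    'DELETE FROM ' || UPPER(p_owner) || '.' || UPPER(p_table) ||
    ' WHERE ' || p_column || ' = :v'
    USING p_value;
END delete_with_children;
/

Usage would then be something like: exec delete_with_children('APP', 'ORDERS', 'ORDER_ID', '42')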
I want to take the statistics of my production database and import them into my local database, to reproduce the production statistics. I used statistics=compute to export the statistics. In the log file, for some tables, there was a warning like "EXP-00091: Exporting questionable statistics".
Will this dump be useful for reproducing the production statistics? Which option do I have to use while importing: statistics=recalculate or statistics=safe?
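A sketch of the import side (file and user names hypothetical): statistics=safe keeps the exported optimizer statistics except for objects flagged questionable, which are recalculated, while statistics=recalculate regenerates statistics for everything on import. Given the EXP-00091 warnings, safe is the natural first try:

imp system/password file=prod.dmp fromuser=app touser=app statistics=safe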
In the article regarding gathering CBO statistics, it states: "When an Oracle database is created, a job will be scheduled that will generate the database statistics for you. You will still need to collect system statistics, however, as these are not collected by the automatic statistics gathering mechanism."
What is the difference between "database statistics" and "system statistics"? In other words, do I need to run this script for each schema owner in my 10g/11g instance?
variable whoami varchar2(20)
begin
  select user into :whoami from dual;
end;
/
exec dbms_stats.gather_schema_stats( -
  ownname => :whoami, -
  options => 'GATHER AUTO', -
  estimate_percent => 15, -
  cascade => true);
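On the system statistics part of the quote: those are gathered once per database (not per schema) with DBMS_STATS.GATHER_SYSTEM_STATS, and they describe CPU speed and I/O characteristics rather than the contents of tables. A minimal sketch of both documented modes:

-- workload mode: capture while a representative load is running
exec dbms_stats.gather_system_stats('START')
-- ... let the normal workload run for a while ...
exec dbms_stats.gather_system_stats('STOP')

-- or, without a workload:
exec dbms_stats.gather_system_stats('NOWORKLOAD')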
I am using Oracle 11g R2. I want to import the DB statistics, but I get an exception when I execute the command DBMS_STATS.IMPORT_SCHEMA_STATS('user1', 'STATS_INFO', '', '', TRUE, FALSE).
The error is:
ORA-20000: no statistics are imported
ORA-06512: at "SYS.DBMS_STATS", line 10603
ORA-06512: at line 1
The privileges ANALYZE ANY and ANALYZE ANY DICTIONARY have already been granted to the user. I also executed the command as SYS, but the error still occurs.
The same command executes successfully in Oracle 10g. Is there any difference in importing statistics between Oracle 10g and 11g?
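One likely cause (an assumption, but it fits the 10g-versus-11g symptom): a user statistics table created under an older release must be upgraded before a newer release can import from it. A minimal sketch:

exec dbms_stats.upgrade_stat_table(ownname => 'USER1', stattab => 'STATS_INFO')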
I am updating the statistics for a table (with GATHER_TABLE_STATS) and then using NUM_ROWS. This works fine as long as I am the owner of the table, but when someone else is, I always get this error: ORA-20000: Table does not exist or insufficient privileges. What privileges do I need to use GATHER_TABLE_STATS on all tables, whichever user created them?
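The system privilege that covers gathering statistics on other users' tables is ANALYZE ANY, granted by a DBA. A minimal sketch with a hypothetical grantee:

GRANT ANALYZE ANY TO stats_user;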
When I tried to use ANALYZE TABLE TEST_TABLE COMPUTE STATISTICS on a certain table, I got the following error: ORA-01702: a view is not appropriate here. The strange thing is, TEST_TABLE is not a view (at least it is not listed in ALL_VIEWS and is listed in ALL_TABLES, so it can't be a view, right?).
Besides, is there another way to gather table statistics (not using ANALYZE TABLE or GATHER_TABLE_STATS)?
When I launch it from the command line, it stops with an "insufficient privileges" error message and prompts me for the user, so I enter 'PDMUSER' (my user); it also asks for the password, I enter it, and then it works.
I have a table which has 300+ columns and 13 million rows, on a 32 KB block size. This is a table in a data warehouse environment. The number of rows in the table hasn't changed much, but the time taken to collect statistics has increased significantly: initially it took only 15 minutes (with the same 13M rows), and now it runs for 4+ hours. The maximum number of parallel servers is 4 (unchanged). The table is not partitioned.
OS: HP-UX Itanium. Database: Oracle 11g (11.2.0.2).
The command is: exec dbms_stats.gather_table_stats(ownname=>'ABC', tabname=>'ABC_LOAD', estimate_percent=>dbms_stats.auto_sample_size, cascade=>TRUE, degree=>dbms_stats.auto_degree)
I would like to understand:
1) What could have caused this change in elapsed time from 15 minutes to 4+ hours?
2) How can we gather statistics on a huge table at a faster rate?
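On question 2, a sketch of the usual knobs (the method_opt and degree choices here are assumptions, not a diagnosis): with the default method_opt of FOR ALL COLUMNS SIZE AUTO, a 300+ column table can spend most of its time evaluating and building histograms, so skipping them where they are not needed is often the biggest win:

exec dbms_stats.gather_table_stats( -
  ownname => 'ABC', -
  tabname => 'ABC_LOAD', -
  estimate_percent => dbms_stats.auto_sample_size, -
  method_opt => 'FOR ALL COLUMNS SIZE 1', -
  degree => 4, -
  cascade => TRUE)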
I have to create some indexes in a production database. Do I need to compute statistics after creating the indexes, or are they computed automatically when I create them?
The version I'm using is:
Oracle Database 10g Release 10.2.0.5.0 - 64bit Production
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for 64-bit Windows: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
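From 10g onward, CREATE INDEX (and ALTER INDEX ... REBUILD) collects index statistics automatically, so a separate compute is normally unnecessary. A quick way to verify, with hypothetical names:

create index emp_name_ix on emp (ename);

select index_name, num_rows, last_analyzed
from user_indexes
where index_name = 'EMP_NAME_IX';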
I have created a materialized view which holds a few million records. Do I have to analyze the view and compute statistics after I create the materialized view?
Also, just in case I need further indexing, do I have to gather the statistics for the table again?
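A materialized view's container is just a table as far as DBMS_STATS is concerned, so the usual gather applies after creation or after adding indexes (the MV name here is hypothetical):

exec dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_MV', cascade => TRUE)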
I have gathered a frequency histogram manually on one column of a table, to provide more information to the optimizer for a better cardinality estimate.
Now I have a weekend job that gathers stats at the schema level with method_opt 'FOR ALL COLUMNS SIZE REPEAT'. But I don't want the stats on the above column to be overridden by the stats job. I don't want to lock the statistics of the whole table; I just want to lock the column-level stats for this table.
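As far as I know there is no column-level statistics lock. Two points, though: SIZE REPEAT already re-gathers a histogram on any column that currently has one, and since 11g a table-level preference can pin the method_opt so that gathers which don't pass one explicitly keep a histogram on just that column. A sketch with hypothetical names:

exec dbms_stats.set_table_prefs( -
  ownname => user, -
  tabname => 'MY_TABLE', -
  pname => 'METHOD_OPT', -
  pvalue => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS MY_COL SIZE 254')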
I don't know if this is the intended behavior of Oracle or not, but I've noticed that my queries' execution plans change seemingly at random after statistics collection. Several tables are truncated after the daily run at 8 AM, and statistics are then gathered for all the tables in that schema.
However, the execution plans for 2-3 SQL statements always change after this, and performance is brought back to normal by executing the procedure explicitly from the command line with literal arguments instead of bind variables.
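That pattern (plans flipping after stats on freshly truncated tables, cured by literals) often points at bind peeking: after the cursors are invalidated, the first hard parse optimizes for whatever bind values it happens to peek. On 11g it is worth at least checking whether adaptive cursor sharing is in play; a sketch with a hypothetical sql_id:

select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions
from v$sql
where sql_id = '7b2twsn8vgfsc';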