I am gathering stats using the block below for some 3 million records, and there are 6 indexes on the table. What is the relevance of the value 4 here (i.e., method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4')? If I increase 4 to 250, will there be any change in the speed of gathering stats? My intention is to speed up the gathering of stats.
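For context, SIZE in method_opt controls the maximum number of histogram buckets per indexed column (capped at 254), not the sample size, so raising it tends to add work rather than remove it. A minimal sketch of the kind of call being discussed; the owner and table names are placeholders:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'MY_SCHEMA',                         -- placeholder owner
    tabname    => 'MY_TABLE',                          -- placeholder table
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE 250',  -- SIZE = max histogram buckets (1..254)
    cascade    => TRUE);                               -- gather index stats as well
END;
/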
Oracle 10g has the feature of automatic stats gathering; in this case, is it necessary to run DBMS_STATS on tables manually? Do manually gathered stats become stale when the auto stats job runs?
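One way to check staleness yourself, a sketch assuming the DBA_TAB_STATISTICS view available from 10gR2 onwards (the schema name is a placeholder):

SELECT owner, table_name, stale_stats, last_analyzed
  FROM dba_tab_statistics
 WHERE owner = 'MY_SCHEMA'       -- placeholder schema
   AND stale_stats = 'YES';      -- tables whose stats the auto job would refresh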
I have several databases that I've recently upgraded from 9i to 11g. With all of them, the automatic stats gathering process has worked just fine every night during the maintenance window.
However, I have this other database that I created, and it seems that the only stats being gathered are on the SYS and SYSTEM schemas, not on the actual schema that holds all of our tables.
I did some searching, but I'm not sure I was using the right search terms, because I came up empty.
BANNER
-----------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Solaris: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
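Two quick checks that may narrow this down (a sketch, not from the original post): whether the application schema's stats are locked, and when its tables were last analyzed. The schema name is a placeholder.

SELECT table_name, last_analyzed, stattype_locked
  FROM dba_tab_statistics
 WHERE owner = 'APP_SCHEMA'          -- placeholder application schema
 ORDER BY last_analyzed NULLS FIRST;

-- If STATTYPE_LOCKED shows ALL, the auto job skips those tables; unlock with:
-- EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('APP_SCHEMA');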
I have used the above to take a copy of schema stats and gather new stats for specific tables into a stats table in my personal schema. What I want to do now is use this stats table to generate plans for queries where I believe stats are off. Is it even possible? To be clear, I do not want to import the stats, because that replaces the stats currently there. I just want to point the CBO at my stats table when generating plans.
I was hoping there was a session parameter I could set to tell Oracle to use my stats table when generating plans, or an EXPLAIN PLAN clause I could use, or a DBMS_XPLAN parameter I could provide that would tell these tools to use my stats table when generating a plan, or even some way to tell AUTOTRACE. But I have found none of this.
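The CBO cannot read a user stats table directly, but 11g pending statistics get close to this workflow: new stats stay private until published, and a session parameter makes the optimizer use them. A sketch under that assumption (object names are placeholders):

-- Keep newly gathered stats private instead of publishing them:
EXEC DBMS_STATS.SET_TABLE_PREFS('MY_SCHEMA', 'MY_TABLE', 'PUBLISH', 'FALSE');
EXEC DBMS_STATS.GATHER_TABLE_STATS('MY_SCHEMA', 'MY_TABLE');  -- lands in pending stats

-- Generate plans against the pending stats without affecting other sessions:
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
EXPLAIN PLAN FOR SELECT * FROM my_schema.my_table WHERE id = 1;  -- placeholder query

-- Publish or discard once satisfied:
-- EXEC DBMS_STATS.PUBLISH_PENDING_STATS('MY_SCHEMA', 'MY_TABLE');
-- EXEC DBMS_STATS.DELETE_PENDING_STATS('MY_SCHEMA', 'MY_TABLE');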
I have two design alternatives and need to understand how expensive (in terms of speed) one is against the other for a medium-size table (100K-200K records):
create table xyz (
  f1 number not null,
  f2 varchar2(20) not null,
  f3 number not null,
  f4 varchar2(50),
[code]....
The idea is to optimize the design by using a PK instead of the 3 keys; there is a debate over whether searching a unique index field (2nd scenario) is the same speed as searching a PK field (1st scenario).
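Since the second alternative was elided above ([code]....), here is a sketch of one reading of the two scenarios; the surrogate key column and constraint names are my assumptions, not from the post:

-- Scenario 1 (one reading): a single surrogate PK replacing the 3-column key
create table xyz_v1 (
  id number primary key,    -- assumed surrogate key
  f1 number not null,
  f2 varchar2(20) not null,
  f3 number not null,
  f4 varchar2(50)
);

-- Scenario 2 (one reading): the 3-column natural key, enforced by a unique index
create table xyz_v2 (
  f1 number not null,
  f2 varchar2(20) not null,
  f3 number not null,
  f4 varchar2(50),
  constraint xyz_v2_uk unique (f1, f2, f3)
);

Both designs resolve a single-row lookup through a unique index scan, so the comparison largely comes down to key width and join convenience rather than index type.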
I've got a query running a select count (*) over a table. The default plan takes in the order of 15 minutes to return, a hinted plan to use a different index takes 3 minutes to return.
Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running autotrace against the two queries I see fairly different values, as one would expect.
Query
select count (*)
from fulfilmentitem bfi
where created >= sysdate-30
  AND bfi.status = 'FA'
  AND bfi.fulfilmentmethod = 'D'

Slow run
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
| Id | Operation                   | Name                    | Rows  | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                         |     1 |    15 | 33119   (1)|
|  1 |  SORT AGGREGATE             |                         |     1 |    15 |            |
|* 2 |  TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM          | 12525 |  183K | 33119   (1)|
|* 3 |   INDEX RANGE SCAN          | IDX_FULFIL_METHODSTATUS |  250K |       |  1786   (1)|
----------------------------------------------------------------------------------------------
[code]....
IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD & STATUS, in that order. IDX_BFI_CREATED is on CREATED and is approx 70% of the size of the other index.
The row counts estimated in the explain plan are off; the count(*) comes in at 32.8k rows. As you will have seen, the fast run shows a pretty significant consistent get increase compared to the slow run, and a decent though not dramatic drop in physical reads.
My uncertainty is around whether these changes in consistent get/physical read values would typically be enough to explain the real-time improvements I'm observing, or whether other (albeit perhaps temporary) factors are involved. It is a prod OLTP environment, so the data is rapidly changing and that may be a factor.
I know it can never be an exact science without intimately knowing the hardware, current loads, etc., but I also know there's enough experience on these boards to have a loose handle on whether the time shifts between queries are likely (or not) to reflect the stat changes, or whether those differences alone typically wouldn't account for it.
I'm thinking about instructing the query to ignore its original plan, but I am hesitant to do so without being a little more confident that it's not just a timing thing, or something other than the change of index approach, that is causing the improvement. On the autotrace stat changes observed alone, I couldn't put my hand on heart and say "yup - that change is good, ignore the default index all the time for this job".
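For reference, the hinted variant being compared presumably looks something like this (a sketch; the hint syntax is standard, but that this exact hint was used in the fast run is my assumption):

select /*+ INDEX(bfi IDX_BFI_CREATED) */ count(*)   -- force the CREATED index
from fulfilmentitem bfi
where created >= sysdate-30
  and bfi.status = 'FA'
  and bfi.fulfilmentmethod = 'D';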
I am quite confused by the optimizer stats collection job on 11g. When I run the following query I see that the statistics tasks are enabled.
SQL> select client_name, status from dba_autotask_client;
CLIENT_NAME                                                      STATUS
---------------------------------------------------------------- --------
auto optimizer stats collection                                  ENABLED
auto space advisor                                               ENABLED
sql tuning advisor                                               ENABLED
I can gather statistics manually using DBMS_STATS, but there is no automated gathering of optimizer stats. How can I have the system run and collect statistics on a daily basis?
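If the client shows ENABLED but nothing runs, the maintenance windows themselves may be disabled. A sketch of the checks and the enable call, assuming the standard 11g autotask setup:

-- Are the maintenance windows enabled?
SELECT window_name, enabled FROM dba_scheduler_windows;

-- (Re-)enable the stats collection client across all maintenance windows:
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);   -- NULL = all maintenance windows
END;
/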
I am on 11.2 on Linux. I am looking into a performance issue. The issue is around 1 particular SQL, involving about 5 tables. I re-gathered statistics on the 2 main tables in the query (out of the 5 tables).
When I say re-gathered, I first did DBMS_STATS.DELETE_TABLE_STATS and then did DBMS_STATS.GATHER_TABLE_STATS.
Earlier, we had histograms on these tables, which I removed, and gathered stats without generating histograms.

SQL> select table_name, num_rows, sample_size, last_analyzed from user_tables where
  2  table_name in ( 'DETAIL_TABLE','MASTER_TABLE');
2 rows selected.

Then I ran the SQL again a couple of times (actually, that SQL is in a stored procedure, which I ran a couple of times). I found this wonderful SQL on the internet, which tells me when a SQL ran and which plan (identified by its hash value) it used. Using this SQL I tried to check whether my SQL was run using any different plan, but it used exactly the same plan it used before I re-gathered the stats. See the last analyzed time above and begin_interval_time below; the same SQL ran before and after stats collection, with the same plan_hash_value.
My question is: when I re-gathered stats on 2 of the 5 tables in a given SQL, are the plans not flushed out of the SGA? I was expecting that at least a new plan hash value would show up against my SQL after stats collection.
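One possibility worth ruling out (my suggestion, not from the post): by default GATHER_TABLE_STATS uses NO_INVALIDATE => DBMS_STATS.AUTO_INVALIDATE, which invalidates dependent cursors gradually rather than immediately. A sketch of forcing immediate invalidation; the owner name is a placeholder:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'MY_SCHEMA',    -- placeholder owner
    tabname       => 'MASTER_TABLE',
    no_invalidate => FALSE);         -- invalidate dependent cursors right away
END;
/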
I have set incremental stats for my partitioned table, as it takes more than 20 minutes to gather; though INCREMENTAL is set to 'TRUE', the table is still getting analyzed completely.
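Incremental gathering only kicks in when certain preferences line up. A sketch of the commonly required combination (owner and table names are placeholders):

BEGIN
  DBMS_STATS.SET_TABLE_PREFS('MY_SCHEMA', 'MY_PART_TABLE', 'INCREMENTAL', 'TRUE');
  -- Incremental stats generally require the defaults below; overriding them
  -- (e.g. a fixed estimate_percent) can silently force a full re-analyze:
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MY_SCHEMA',
    tabname          => 'MY_PART_TABLE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    granularity      => 'AUTO');
END;
/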
Is it possible for the DBMS_STATS "LIST STALE" command to show a stale partition but NOT have its table show as stale?
I had a scenario where the table itself AND 1 partition showed as stale. I ran an fnd_stats gather table stats on just that 1 partition. Once it completed, it showed the partition to no longer be stale; it also showed that the table was no longer stale. So I guess I do not need to run stats on the whole table as well?
So if this is the case, when would I need to run stats on the full partitioned table, if running them on the partitions themselves removes the staleness of the table?
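A way to watch this directly, a sketch assuming the 11g dictionary views (names are placeholders); the row with a NULL PARTITION_NAME represents the table-level (global) stats:

SELECT partition_name, stale_stats, last_analyzed
  FROM dba_tab_statistics
 WHERE owner = 'MY_SCHEMA'
   AND table_name = 'MY_PART_TABLE'
 ORDER BY partition_name NULLS FIRST;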
Our UNDO space remains at a high level, 85 to 95 percent. We keep adding database files and it doesn't seem to go down significantly. When we do a backup of the system where we shut the database down, it does go down some, but then within a week or so it is back up again.
PROMPT CREATE TABLE tst_fetch_vendor_data
CREATE TABLE tst_fetch_vendor_data (
  vendor_data_seq_no    NUMBER NOT NULL,
  study_seq_no          NUMBER NOT NULL,
  vendor_record_seq_no  NUMBER NOT NULL,
  control_column_seq_no NUMBER NOT NULL,
  resolved_value        VARCHAR2(4000) NULL,
  original_value        VARCHAR2(4000) NULL,
  transaction_user      VARCHAR2(30) NOT NULL,
[code]....
It's just a temporary table, in which data comes and goes. I am using it in the middle of a process, like below:
--EXECUTE IMMEDIATE 'TRUNCATE TABLE TST_FETCH_VENDOR_DATA DROP STORAGE';
insert /*+ append */ into tst_fetch_vendor_data
(select * from vendor_data vd
  where vd.control_column_seq_no in
        (select control_column_seq_no from temp_control_column));

dbms_stats.gather_table_stats('EPDSYSREP','TST_FETCH_VENDOR_DATA',
    ESTIMATE_PERCENT=>100,
    METHOD_OPT=>'for all indexed columns size auto',
    CASCADE=>True);
...then code to use that table. This table can contain anywhere from 0 to 108,000,000 records. Now my questions are:
1. What sampling size should I select (currently it's 100%)? Can I use dbms_stats.auto_sample_size, and what will the effect be? (See the sketch after this list.)
2. Is dbms_stats a good approach, or should I use dynamic sampling?
3. What about using CTAS instead of loading the data through an insert?
4. What about a PL/SQL table with an index, or a WITH clause query?
5. Do I need to rebuild the indexes after inserting data into the table?
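Regarding question 1, a sketch of the same gather call using the automatic sample size instead of a fixed 100% (same owner and table as above):

BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'EPDSYSREP',
    tabname          => 'TST_FETCH_VENDOR_DATA',
    estimate_percent => dbms_stats.auto_sample_size,  -- let Oracle pick the sample
    method_opt       => 'for all indexed columns size auto',
    cascade          => TRUE);
END;
/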
I load a table through SQL*Loader, which takes nearly 14 minutes for 8-9 million records. Once the load completes, I run ANALYZE TABLE ... COMPUTE STATISTICS to gather stats, and it takes nearly 15 minutes. Is there any way to reduce the stats timing? The stats collection command runs from another schema, not from the one where the table resides.
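ANALYZE ... COMPUTE reads the whole table; DBMS_STATS with sampling and parallelism is the usual faster replacement. A sketch, with owner/table names and the parallel degree as placeholders:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DATA_OWNER',                 -- placeholder owner
    tabname          => 'LOADED_TABLE',               -- placeholder table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- sample instead of full compute
    degree           => 4,                            -- parallel gather; match your CPUs
    cascade          => TRUE);
END;
/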
Elapsed times include waiting on following events:
Event waited on                             Times   Max. Wait  Total Waited
----------------------------------------   Waited  ----------  ------------
SQL*Net message to client                        1        0.00          0.00
db file sequential read                      85704        0.31        460.55
latch free                                       1        0.00          0.00
SQL*Net message from client                      1       14.98         14.98
[code]...
Why did the elapsed time change when the data and plan haven't changed at all? Also, why does the plan have different stats for round 1 and round 2 on db1 and db2?
I ran it 2 times each round on each database, so hard parsing should not be an issue. Also, why is the number of rows accessed different on db1, db2 and db3, db4, especially for step 1, when the count of crt.qtn_cun_id is similar?
In fact, when the query was taking long, I was the only user on the system. Also, I used hard-coded values (no bind variables at all).
I checked num_rows and distinct keys as well, which are quite similar across all 4 databases. Also, no stats were gathered during the query execution.
Is there any way to speed up the performance of the GO_RECORD built-in, or is there an alternative way to do it?
I have a table with nearly 30,000 rows and I would like to implement a text field that will allow the user to jump to a specified record. The only problem is that if they try to jump too far away it takes a long time to load (going from the beginning to the end of 30,000 records takes over a minute).
This problem doesn't arise if all the records, or all the records up to the one they are jumping to, have already been fetched, but even if I fetch all records at the beginning it still takes a long time to initially load them.
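An alternative worth considering, a Forms sketch assuming the block is based on a keyed table and that jumping by key rather than by row position is acceptable (the block, item, and column names are placeholders): re-query the block down to the requested record instead of fetching 30,000 rows to reach it.

-- Triggered from the 'jump to' text item; :CTRL.TARGET_ID is a placeholder item.
DECLARE
  v_where VARCHAR2(200);
BEGIN
  v_where := 'record_id = ' || TO_NUMBER(:CTRL.TARGET_ID);  -- placeholder key column
  GO_BLOCK('MY_BLOCK');                                     -- placeholder block name
  SET_BLOCK_PROPERTY('MY_BLOCK', DEFAULT_WHERE, v_where);   -- narrow the query
  EXECUTE_QUERY;                                            -- fetches only the target row(s)
END;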
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is memory based structures (logical) where as database consists of physical structures.
However, how does one tune the database, i.e. the physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
During a stats gather running for a table, I unknowingly deleted the old stats using EXEC DBMS_STATS.DELETE_TABLE_STATS. I would like to know whether it will affect the stats gather job currently running for the table, and whether my stats will be gathered successfully.
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which runs for 13 hours to update records - quite a long time.
SQL> explain plan for
  MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq
  USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
How does column width affect index performance?
For example, if I had an IOT table emp_iot with the columns (id number, job varchar2(20), time date, plan number).
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if in column JOB I stored not names but numbers identifying the job names, e.g. storing 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'?
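A sketch of the numeric-code variant being considered; the lookup table and all constraint names are my assumptions, not from the post. The narrower NUMBER key shortens every key entry in the IOT, since the key columns are stored in every index block:

-- Lookup table mapping codes to job names (assumed design)
create table job_codes (
  job_id   number       primary key,
  job_name varchar2(20) not null unique
);

-- IOT keyed on the short numeric code instead of the job name
create table emp_iot (
  id     number,
  job_id number references job_codes,  -- 1 = 'ANALYST', 2 = 'NIGHT_WORKED', ...
  time   date,
  plan   number,
  constraint emp_iot_pk primary key (id, job_id, time)
) organization index;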
I have a question about database fragmentation. I know that fragmentation can reduce performance in query times: the blocks are distributed over many extents and the scan process takes a long time, since the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If needed, I will re-create, move or rebuild the table or index, but first I want to know if the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
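A rough first pass, a sketch comparing the space a table's rows should need against the space its segment actually occupies; it assumes reasonably fresh optimizer stats and an 8K block size, so treat the ratio as a hint rather than a verdict:

SELECT t.table_name,
       t.num_rows * t.avg_row_len                      AS approx_row_bytes,
       s.blocks * 8192                                 AS segment_bytes,   -- assumes 8K blocks
       ROUND(s.blocks * 8192 /
             NULLIF(t.num_rows * t.avg_row_len, 0), 1) AS bloat_ratio      -- >> 1 suggests wasted space
  FROM user_tables t
  JOIN user_segments s
    ON s.segment_name = t.table_name
   AND s.segment_type = 'TABLE'
 WHERE t.num_rows > 0
 ORDER BY bloat_ratio DESC;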
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have just forgotten the name and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation with Standard Edition, without partitioning, on a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Is 300 million rows in a single table, with 500K transactions/day, too much?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
I need to check the package's performance and improve it.
1. How do I check the package performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I have observed that the delete takes a long time to remove all the records in the table (the table has 7,000,000 records). This is like a staging table: the data needs to be cleaned daily before new data is inserted. What can I use instead of DELETE? (See the sketch after this list.)
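For question 2, a common alternative when the whole table is emptied each day is TRUNCATE, which deallocates the segment instead of logging row-by-row deletes. A sketch (note TRUNCATE is DDL: it commits and cannot be rolled back, so it only fits if that is acceptable; the table name is a placeholder):

-- Instead of: DELETE FROM staging_table;
TRUNCATE TABLE staging_table;    -- near-instant, resets the high-water mark

-- From inside PL/SQL, TRUNCATE must be issued as dynamic SQL:
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE staging_table';
END;
/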
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the 'statistics' so the optimizer follows that plan without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname from emp e, dept d where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions, with respect to the above query:
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I could put the ORDERED and USE_HASH hints in to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
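For experimentation, DBMS_STATS lets you set table-level statistics directly rather than gathering them. A sketch of nudging the optimizer by overstating emp's size so that the join order and method get re-costed (the numbers are arbitrary illustrations, and this is for test systems, not a production recommendation):

BEGIN
  -- Pretend EMP is much larger than it really is:
  DBMS_STATS.SET_TABLE_STATS(
    ownname => USER,
    tabname => 'EMP',
    numrows => 1000000,   -- inflated row count (illustrative)
    numblks => 50000);    -- inflated block count (illustrative)
END;
/

-- Re-check the plan afterwards:
EXPLAIN PLAN FOR
  select e.empid, e.ename, d.dname
  from emp e, dept d
  where e.deptno = d.deptno;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);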
When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND etc., and the user may or may not select values from the drop-down boxes.
If the user selects one or more values (against each drop-down box), it has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which gets the values selected by the user as a comma-separated string; here V_SELLOB is one such variable and LOB_DESC is a column in the DB.
OPEN v_refcursor FOR
  SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
  FROM items a
  WHERE a.sku_status = 'A'
    AND DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
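For comparison, the same "optional filter" intent is often written with a plain OR predicate instead of DECODE, which the optimizer can sometimes cost more accurately. A sketch of the single-value case only (same names as above; splitting the comma-separated multi-value string would still need separate handling):

SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
FROM items a
WHERE a.sku_status = 'A'
  AND (V_SELLOB = 'DEFAULT'        -- no selection: condition is true for every row
       OR LOB_DESC = V_SELLOB);    -- selection made: filter on it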
What are the principal things to look at when we get different performance results for the same query? I have 2 different databases: the plan and the data are the same, but the performance results are very different.