Auto-estimate Percent And Histograms Or Small Est Percent No Histograms By Default?
Jul 20, 2013
I've used custom-written statistics gathering scripts that by default gather statistics on large tables with a small estimate percentage and FOR ALL COLUMNS SIZE 1. They allow the estimate percentage to be set higher on individual tables and let me choose individual columns to have histograms with the maximum number of buckets. The nice thing about this approach was that the statistics ran very efficiently by default and could be dialed up as needed to tune individual queries.

But with 11g you can set preferences at the table level so that the automatic stats job, or even a manual run of gather_table_stats, will use your settings. Also, there is a special sampling algorithm associated with AUTO_SAMPLE_SIZE that you miss out on if you manually set the estimate percentage. So I could change my approach to query tuning and statistics gathering to use AUTO_SAMPLE_SIZE and FOR ALL COLUMNS SIZE AUTO by default, and then override these when needed to tune a query or to make the statistics run in an acceptable length of time.

I work in a largish company with a team of about 10 DBAs, a bunch of developers, and over 100 Oracle databases, so we can't really keep careful track of each system. Having the defaults be less resource intensive saves me the grief of having stats jobs run too long, but it requires intervention when a query needs to be tuned. Also, with my custom scripts I get a lot of hassle from people wanting to know why I'm not using the Oracle default settings, so that is a negative. But I've seen the default settings, even on 11.2, run much too long on large tables. Typically our systems start small, and then data grows and grows until things start breaking, so the auto stats may seem fine at first but eventually start running too long. And when the stats jobs run forever, or the automatic jobs don't finish in the maintenance window, I get complaints about the stats taking too long or not being updated. So either direction has its pros and cons as far as I can tell.
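For what it's worth, here is the shape of the table-level preference calls I'm considering, as a minimal sketch (the schema and table names are made up for illustration):

BEGIN
  -- default a table to Oracle's automatic behavior
  DBMS_STATS.SET_TABLE_PREFS('APP_SCHEMA', 'BIG_TABLE',
    'ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');
  DBMS_STATS.SET_TABLE_PREFS('APP_SCHEMA', 'BIG_TABLE',
    'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO');

  -- or dial a problem table down to a small sample with no histograms
  DBMS_STATS.SET_TABLE_PREFS('APP_SCHEMA', 'BIG_TABLE',
    'ESTIMATE_PERCENT', '1');
  DBMS_STATS.SET_TABLE_PREFS('APP_SCHEMA', 'BIG_TABLE',
    'METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
END;
/

Both the nightly automatic stats job and manual GATHER_TABLE_STATS runs pick these preferences up, so the dial-up/dial-down decision only has to be recorded once per table.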
We recently migrated our database to Oracle 11g, i.e. 11.2.0.3 (from 10.2.0.3). Now we are facing application slowness on 11g; we found the issue is with missing histograms, or some unnecessary histograms in place. Given below are my details.
When we were on 10g, we had a schema-level gather stats job with METHOD_OPT set to 'AUTO', running each weekend and gathering only STALE stats. We had faced some performance issues due to the optimizer following the wrong execution path because of some missing histograms, and some extra ones too. So we had done a skewness analysis (for each column of each table in the database) using the WIDTH_BUCKET function manually on each column, and we deleted a number of unnecessary histograms and created some needed ones on the 10g DB. Then we updated the default schema-level gather stats job with METHOD_OPT set to 'FOR ALL COLUMNS SIZE REPEAT'. After that we had a stable environment.
Now we have just migrated, and it was done by our techops (Oracle) team. I am confused about how the histograms were changed/modified for columns during the migration, which is causing headaches now. So below are my plans and the fears associated with them.
1. I could make the histograms exactly the same in both environments, i.e. delete histograms which are present in the new DB (11g) but were not in the old DB (10g), and create those which were there in the old DB (10g) but are now missing in the new DB (11g); see the sketch after these options. My question is: will it be safe to do this, or may it interact badly with the 11g optimizer features in place?
2. Or should I first try the easier option of setting METHOD_OPT for the schema-level stats job to 'AUTO' on 11g, since I believe 11g is more stable and the optimizer is equipped with more intelligence when gathering stats using AUTO?
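If option 1 is chosen, a minimal sketch of dropping and creating individual histograms via DBMS_STATS could look like this (schema, table, and column names are made up):

BEGIN
  -- remove an unwanted histogram: regather the column with a single bucket
  DBMS_STATS.GATHER_TABLE_STATS('APP_SCHEMA', 'ORDERS',
    method_opt => 'FOR COLUMNS STATUS SIZE 1');

  -- create a histogram on a known-skewed column with the maximum bucket count
  DBMS_STATS.GATHER_TABLE_STATS('APP_SCHEMA', 'ORDERS',
    method_opt => 'FOR COLUMNS CUST_ID SIZE 254');
END;
/

Comparing DBA_TAB_COL_STATISTICS.HISTOGRAM between the 10g and 11g databases would give the list of columns to fix, and 'FOR ALL COLUMNS SIZE REPEAT' afterwards preserves whatever is in place.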
What is the equivalent of Top N Percent in Oracle SQL 11g? Here is my requirement:
I have to find the stores contributing the top 20% of sales:

Store  Sales  Percentage
ABC    200    (200/380)*100 = 52%
XYZ    100    (100/380)*100 = 26%
PQR    50     (50/380)*100 = 13%
dddd   20     (20/380)*100 = 5%
rrrr   10     (10/380)*100 = 2%
In the above example I have to get only store ABC, as this store alone is contributing more than the top 20%. If I change the requirement to top 70%, I have to get stores ABC and XYZ.
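There is no built-in TOP n PERCENT clause in Oracle SQL, but a running total computed with analytic functions gives the same result. A sketch, assuming a store_sales table with store and sales columns (both names are made up):

SELECT store, sales
FROM  (SELECT store,
              sales,
              SUM(sales) OVER (ORDER BY sales DESC) AS cum_sales,   -- running total, biggest first
              SUM(sales) OVER ()                    AS total_sales  -- grand total
       FROM   store_sales)
WHERE  cum_sales - sales < total_sales * 0.20   -- change 0.20 to 0.70 for top 70%
ORDER  BY sales DESC;

A store is kept while the running total before it is still under the threshold, so the store that crosses the line is included: with 0.20 only ABC comes back, and with 0.70 ABC and XYZ, matching the example.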
In my database, STALE_PERCENT is set to 10, and I have a partitioned table. I dropped a table partition which held 10% of the data. I would like to know whether Oracle considers only inserts/updates/deletes toward the stale percentage, or whether it also includes the data from the dropped partition. I ask because my stats gathering is not running: when I include the dropped partition's data it exceeds the 10% STALE_PERCENT, but excluding the dropped partition it does not exceed 10%.
TABLE_NAME        PARTITION_NAME  SUBPARTITION_NAME  INSERTS    UPDATES  DELETES  TIMESTAMP    TRU  DROP_SEGMENTS
----------------  --------------  -----------------  ---------  -------  -------  -----------  ---  -------------
sample_DATA_DATA                                     235825577  0        0        11-NOV-2012  NO   3
test_DATA_DATA                                       811618472  0        0        11-NOV-2012  NO   12
sample_DATA_DATA  SYS_P2665099                       3005966    0        0        11-NOV-2012  NO   0
sample_DATA_DATA  SYS_P2665119                       3873671    0        0        11-NOV-2012  NO   0
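For reference, a sketch of the kind of check the auto job performs, comparing tracked DML against the dictionary row count (the join is an approximation for illustration, not Oracle's actual internal staleness logic):

SELECT m.table_name,
       m.inserts + m.updates + m.deletes AS dml_changes,
       t.num_rows,
       ROUND(100 * (m.inserts + m.updates + m.deletes)
             / NULLIF(t.num_rows, 0), 1) AS pct_changed
FROM   dba_tab_modifications m
JOIN   dba_tab_statistics t
       ON  t.owner      = m.table_owner
       AND t.table_name = m.table_name
WHERE  m.table_name = 'SAMPLE_DATA_DATA'
AND    m.partition_name IS NULL
AND    t.partition_name IS NULL;

DBA_TAB_STATISTICS in 11g also has a STALE_STATS column, which shows Oracle's own YES/NO verdict and removes the guesswork about what was counted.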
Is there no format mask for the percent sign? My customer is creating a computed field and wants to have a percent sign added to the result (like a dollar sign). I saw some posts about adding jQuery. Is this going to be added to the APEX code some day?
Just like the money format, FML999G999G999G999G990D00: where did FML come from? It translates to a dollar sign. Isn't there something that could translate to a percent sign?
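There is no percent element in Oracle's number format models, so the usual workaround is to append the sign yourself. A minimal sketch:

SELECT TO_CHAR(52.6, 'FM990D00') || '%' AS pct FROM dual;

-- PCT
-- ------
-- 52.60%

In APEX the same concatenation can be done in the computed field's SQL, which avoids waiting for a native mask or a jQuery workaround.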
I was installing the Oracle 10g Client on my PC, but after specifying the Home Details I was unable to proceed. The installation hangs on the Loading Product Information form.
If the temp space left is 0%, i.e. all temp space is used up, is it possible to make a new DB connection (can new users still connect to the DB)?
Or, rephrasing the question: how much temp space (if any) is required for a new user to log in to the DB? Think of SORT_AREA_SIZE in the PGA. If the memory sort area is already used up (temp space is 100% full), can the DB accept more new connections?
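As far as I understand, logging in does not itself allocate temp space; temp segments are used by sorts and hash joins that spill out of the PGA, so new connections generally still succeed when temp is 100% "full" (it is the spilling statements that fail with ORA-01652). A sketch of how to watch actual temp usage:

SELECT tablespace_name, total_blocks, used_blocks, free_blocks
FROM   v$sort_segment;

-- sessions currently holding temp space
SELECT username, tablespace, segtype, blocks
FROM   v$tempseg_usage
ORDER  BY blocks DESC;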
I created a database manually in 10g. After successfully creating the DB, I created a single user: LAMS. Now I notice that my USERS tablespace is currently at 99.96% usage:
SQL> @check_space_used.sql
Monday, March 14, 2011 2:46:22 PM SGT
I'm facing a problem while inserting millions of records from table to table: the undo tablespace reaches 100% full and execution aborts. How can I free the undo tablespace? Many of the extents are offline. Will it flush automatically, or what should I do?
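Undo extents are reused automatically once they expire; nothing needs to be freed by hand. A quick sketch to see how much of the undo tablespace is truly busy versus reusable:

SELECT status, ROUND(SUM(bytes) / 1024 / 1024) AS mb
FROM   dba_undo_extents
GROUP  BY status;

Only ACTIVE extents (open transactions) block reuse; EXPIRED space is reclaimed as needed. Committing the big insert in batches keeps the active undo footprint down, and otherwise the fix is simply a larger undo tablespace.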
I'm trying to create a sequence for a primary key that simply auto-increments by the default of 1. I have a SQL script written to generate my tables, and I'm not sure how to modify the script to include the sequence. I also just want the sequence for a specific column, i.e. one PK, not the PK in all tables.
Here's a snippet from my script:
create table image (
  image_id  int NOT NULL,
  source_id int NOT NULL,
  CONSTRAINT image_id_pk PRIMARY KEY (image_id),
  CONSTRAINT fk_source_id FOREIGN KEY (source_id) REFERENCES source(source_id)
);
Would I add the CREATE SEQUENCE statement right after the CREATE TABLE, and if so, how do I apply the sequence to only one table and a single column?
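Yes, the CREATE SEQUENCE can go right after the CREATE TABLE, and since a sequence is a standalone object it is "applied" to one column simply by being used there and nowhere else. A minimal sketch against the image table above (the sequence and trigger names are made up):

create sequence image_seq start with 1 increment by 1;

-- option 1: reference the sequence explicitly in inserts
insert into image (image_id, source_id)
values (image_seq.nextval, 1);

-- option 2: let a trigger fill the PK automatically
create or replace trigger image_bir
before insert on image
for each row
begin
  if :new.image_id is null then
    select image_seq.nextval into :new.image_id from dual;
  end if;
end;
/

The SELECT ... FROM dual form works on 10g; from 11g onward :new.image_id := image_seq.nextval can be assigned directly.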
We load a large amount of data into multiple tables using sqlldr. The amount of data we need to load varies with the situation. We want to estimate the tablespace usage growth due to this data load, so we can verify/extend the tablespaces before the load. Though setting AUTOEXTEND would work in this case, we want to avoid extending the tablespace while sqlldr is executing, for performance reasons.
Our initial attempt was to note the tablespace size before and after executing sqlldr and use the delta. But this delta was not consistent across environments for the same amount of data. Different environments means different Oracle servers, different existing tablespace sizes, one data file vs. multiple data files, etc.
How do we reliably estimate how much tablespace we need for the given amount of data?
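One approach that stays consistent across environments is to estimate from row shape rather than from measured datafile deltas, since it depends only on the data, not on how the existing files are laid out. A rough sketch (the table name and the bind variable are placeholders, and the 1.2 factor for block overhead and PCTFREE is an assumption to be calibrated, not an Oracle figure):

SELECT table_name,
       avg_row_len,
       ROUND(:expected_rows * avg_row_len * 1.2 / 1024 / 1024) AS est_mb
FROM   user_tables
WHERE  table_name = 'MY_TABLE';

Summing that estimate over all the tables loaded into a tablespace gives the amount to pre-extend by before running sqlldr.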
I have a 10g database, and on a specific day of every month a query appears to cause a performance issue. As far as I have analyzed, there are two small tables of 11MB in size, and the query shows a higher number of executions/gets/reads/elapsed time etc. As the tables are very small, I am planning to pin both of them in the SGA buffer pool KEEP memory. I need a few clarifications (see the sketch after these questions)...
*These tables are dynamic in nature and the row data tends to change every day. If I pin these objects in the KEEP buffer pool, will the dynamic changes made to the tables also be reflected in the pinned buffer pool?
*Will this have any adverse effect?
*My SGA_TARGET is set equal to SGA_MAX_SIZE, which is 1532MB. Do I need to change any memory setting before pinning these objects?
*On every restart of the database, do I need to execute the pin commands for these objects again?
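For reference, a sketch of the KEEP-pool setup being considered (the table names are made up; the 32M figure is just illustrative for two 11MB tables):

-- carve out a KEEP pool; with SGA_TARGET set, this comes out of the auto-tuned buffer cache
ALTER SYSTEM SET db_keep_cache_size = 32M;

ALTER TABLE small_table1 STORAGE (BUFFER_POOL KEEP);
ALTER TABLE small_table2 STORAGE (BUFFER_POOL KEEP);

KEEP-pool buffers are ordinary cache buffers, so DML against the tables is reflected there like anywhere else in the cache, and because BUFFER_POOL KEEP is a persistent storage attribute of the table, it survives restarts with nothing to re-run.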
I did some Google searches about large numbers of extents and ASSM. I see bits and pieces on the web. This is something I need to look at while testing an application. I'm not looking to go into 'why' I would use smaller extents; I just want to make sure I have what I need to look for during testing. Issues with massive numbers of extents:
1. DBA_EXTENTS queries are really slow.
2. Issues truncating tables (due to having to read lots of extents).
3. Issues splitting MAXVALUE partitions and with dropping partitions.
4. If I stay away from ASSM, would this reduce these issues?

Are there any other performance issues or other issues I need to know about to check when I do tests?
Any issues with query or insert wait time? The tables that would get smaller extents would have thousands of partitions/sub-partitions, and most of these sub-partitions will be rather small. I just want to test for a variety of different cases. The 'why' will come out during testing.
I read the website URL... and am now searching for an alternative for a small Oracle DB with around 120-250 GB of data. I can't find any DRAM SSD that seems to suit the needs of such a small amount of data at a worthwhile price.
The only HDD I found is URL..., but I'm not sure if that's the right thing and whether it's still possible to buy such a device. Where do we get such a device in Europe (Switzerland), and what would the price be?
Do I need any special setup within Oracle (11g R2) to get the maximum out of such a DRAM SSD?
I'm trying to create a table with a SELECT statement, populating the new table with the aggregated value from a VIEW. Following is the code used for creating the VIEW:
create or replace view FINAL_WEB_LOG as
select SESSION_ID,
       SESSION_DT,
       C_IP,
       CS_USER_AGENT,
       tab_to_string(CAST(COLLECT(web_link) AS t_varchar2_tab)) WEBLINKS
from   web_views_tab
group by C_IP, CS_USER_AGENT, SESSION_DT;
I want to create a table with WEBLINKS and SESSION_ID which is a sequence from another table.
CREATE TABLE FINAL_WEB AS SELECT weblinks FROM final_web_log UNION SELECT session_id FROM WEB_VIEWS_TAB;
This now gives me the following error,
SQL Error: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
This has to do with the WEBLINKS field; it does contain longer values.
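If the concatenated link list can exceed 4000 characters, a VARCHAR2-based tab_to_string will overflow regardless of what the outer statement does; the usual workaround is to aggregate into a CLOB instead. A sketch using XMLAGG (assuming the goal is one row per session with its links; adjust the grouping to match the view):

CREATE TABLE final_web AS
SELECT session_id,
       RTRIM(XMLAGG(XMLELEMENT(e, web_link || ',')
                    ORDER BY web_link).EXTRACT('//text()').GETCLOBVAL(), ',') AS weblinks
FROM   web_views_tab
GROUP  BY session_id;

Separately, note that UNION stacks the two selects as extra rows of a single column (and requires matching datatypes), rather than producing two columns side by side, which is probably not the intent.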
How can I increase SGA_TARGET in this case? I'm unable to create a pfile.
SQL> startup
ORA-01078: failure in processing system parameters
ORA-00821: Specified value of sga_target 2048M is too small, needs to be at least 4112M
SQL> startup nomount
[code]...
The $ORACLE_HOME/dbs location has a spfile.
more initQR01MRA1.ora
SPFILE='+DB_DATA/QR01MRA/spfile'
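One way out of this chicken-and-egg situation (the instance won't start, so ALTER SYSTEM isn't available) is that CREATE PFILE ... FROM SPFILE works against an idle instance. A sketch using the spfile path above (the /tmp path is arbitrary, and 4112M is the minimum from the error message):

SQL> CREATE PFILE='/tmp/initQR01MRA1.ora' FROM SPFILE='+DB_DATA/QR01MRA/spfile';

-- edit /tmp/initQR01MRA1.ora, raising sga_target (and sga_max_size) to at least 4112M

SQL> STARTUP PFILE='/tmp/initQR01MRA1.ora';
SQL> CREATE SPFILE='+DB_DATA/QR01MRA/spfile' FROM PFILE='/tmp/initQR01MRA1.ora';

The last command writes the corrected settings back so the normal spfile startup works again.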
"This is just for testing 123. This is just for testing 45654. This is just for testing 5567876. This is just for testing 53456547. This is just for testing 123423. This is just for testing 98090. This is just for testing 099473. This is just for testing. This is just for testing. This is just for testing 3. This is just for testing 34983245983. This is just for testing 6432."
I need to divide this string after every 100 characters, as the length of the column to insert into is 100, and I do not want to modify the column as that would have a great impact. I need to divide the string such that each piece is less than 100 characters and no sentence is cut in between.
like: first string: "This is just for testing 123. This is just for testing 45654. This is just for testing 5567876."
then 2nd string: "This is just for testing 53456547. This is just for testing 123423. This is just for testing 98090."
then 3rd string: "This is just for testing 099473. This is just for testing. This is just for testing. This is just for testing 3."
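A minimal PL/SQL sketch of that splitting logic, breaking at the last period (falling back to the last space) inside each 100-character window; the DBMS_OUTPUT call is a stand-in for the real insert, and the shortened literal is just for illustration:

DECLARE
  v_text  VARCHAR2(4000) := 'This is just for testing 123. This is just for testing 45654. ...';
  v_chunk VARCHAR2(100);
  v_cut   PLS_INTEGER;
BEGIN
  WHILE v_text IS NOT NULL LOOP
    IF LENGTH(v_text) <= 100 THEN
      v_chunk := v_text;
      v_text  := NULL;
    ELSE
      v_cut := INSTR(SUBSTR(v_text, 1, 100), '.', -1);    -- last period in the window
      IF v_cut = 0 THEN
        v_cut := INSTR(SUBSTR(v_text, 1, 100), ' ', -1);  -- fall back to the last space
      END IF;
      IF v_cut = 0 THEN
        v_cut := 100;                                     -- no break point at all: hard cut
      END IF;
      v_chunk := SUBSTR(v_text, 1, v_cut);
      v_text  := LTRIM(SUBSTR(v_text, v_cut + 1));
    END IF;
    DBMS_OUTPUT.PUT_LINE(TRIM(v_chunk));
  END LOOP;
END;
/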
While taking a backup of a schema using expdp, I got the following error on one of the tables in the schema, due to which the export for this table could not be done.
ORA-01555: snapshot too old: rollback segment number 2 with name "_SYSSMU2$" too small.
The undo retention time is 14400 sec and the undo tablespace is autoextensible. The table to be exported contains 100 crore (1 billion) rows.
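ORA-01555 during an export usually means the table's read took longer than the undo could be kept. A quick sketch of the usual check:

SELECT MAX(maxquerylen)         AS longest_query_sec,
       MAX(tuned_undoretention) AS tuned_retention_sec
FROM   v$undostat;

If the export of this table runs longer than the 14400-second retention, the options are raising UNDO_RETENTION (and letting the autoextensible undo tablespace grow) or exporting the table in smaller pieces, e.g. with a QUERY clause or partition-level export.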
I noticed my DB is generating a lot of "small" .arc files and I am unsure why. As you can see from the v$log query, my log file size is set to 50MB, yet BLOCKS*BLOCK_SIZE never adds up to 50MB.
Is there anything else I can look into to see how to make the .arc files larger?
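Logs only reach full size if they fill up before something forces a switch, so two things are worth checking, as a sketch:

-- a non-zero value forces a log switch every N seconds regardless of how full the log is
SELECT value FROM v$parameter WHERE name = 'archive_lag_target';

-- recent archive sizes and the switch cadence
SELECT sequence#,
       ROUND(blocks * block_size / 1024 / 1024, 1) AS mb,
       completion_time
FROM   v$archived_log
ORDER  BY completion_time DESC;

Scheduled ALTER SYSTEM SWITCH LOGFILE or ARCHIVE LOG CURRENT commands (common in backup scripts) have the same effect of producing small archives.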
I have researched this problem and checked my variable sizes over and over again. I have tested the procedure within the Oracle Express environment and it works fine; HOWEVER, when the procedure is called from my C# app it produces the ORA-06502 error.
The stored procedure signature looks like this...
create or replace procedure save_new_project (
  p_custorgid    in number,
  p_title        in varchar2,
  p_AOIName      in varchar2,
  p_description  in varchar2,
  p_receiveddate in date,
  p_deadlinedate in date,
  p_startdate    in date,
[code]....
The OracleParameter in my C# app for the last out param is defined as such...
As I said at the beginning of this post, the procedure works fine in the Oracle environment. So why is it not working when simply called from C#? I've tried changing the OracleDbType to Clob, which eliminates the error but returns a bizarre result: the string "Oracle.DataAccess.Types.OracleClob".
Since CLOB doesn't really work either, I switched back to Varchar2 and specified a size of 5000 (in the database the field I am querying is defined as Varchar(30)). I still get the ORA-06502 error.
I am clueless as to what the problem is. It should work and it does if I run a series of SQL statements in an Oracle SQL Command window. The test that works fine looks like this...
declare
  v_projid            projects.projectid%type;
  v_statustypedescrip projectstatustypes.type%type; /* this is a varchar(30) */
begin
  save_new_project(2, 'Some input text goes here', 'More input text', 'And more again',
                   '26-APR-2007', '26-APR-2007', '26-APR-2007', '26-APR-2007', 'users name as inpujt text
[code]....
But calling save_new_project from C# throws ORA-06502. It identifies line 40 of my stored procedure. This is line 40...
I get an error when I want to create a table: when I run my procedure I receive "PL/SQL: numeric or value error: character string buffer too small".
create or replace PROCEDURE p_create_tmp_tables (p_result OUT NUMBER)
IS
  string_sql varchar2(1000);
  result     NUMBER;
BEGIN
  string_sql := 'CREATE TABLE TMP_CATEGORIES (CODE_CATEGORY NUMBER(6,0), NAME_CATEGORY VARCHAR2(25 BYTE))';
  execute immediate string_sql;
[code]...
One of the users received the error ora-01555: rollback segment too small
I have read about the error and saw that it is caused by transactions made upon the same data, filling one of the UNDOTBS rollback segments to its maximum. It happens only once in a while, each time on a different rollback segment.
I've read that I should do a few things to enlarge those segments, such as: 1. bring a segment offline, 2. drop it, and then 3. create it with a bigger size and possibly a bigger extent setting.
I've also read that those actions aren't relevant if you have the parameter UNDO_MANAGEMENT set to AUTO, which is actually the case. Is that why, when I execute ALTER ROLLBACK SEGMENT RB_SEG_NAME_11$ OFFLINE;
1. How do I solve my issue with the rollback segment being too small to contain the snapshot with the requested SCN?
2. What does the parameter UNDO_MANAGEMENT mean? Should I change it in order to adjust my rollback segment attributes and prevent my error from re-occurring?
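With UNDO_MANAGEMENT=AUTO, Oracle creates, sizes, and drops the undo segments itself, which is why the manual offline/drop/recreate procedure (and the ALTER ROLLBACK SEGMENT above) no longer applies; the knobs are the undo parameters and the size of the undo tablespace. A sketch of the usual checks and adjustments (the datafile path and sizes are placeholders):

SELECT name, value
FROM   v$parameter
WHERE  name IN ('undo_management', 'undo_retention', 'undo_tablespace');

-- give snapshots more time and more room instead of resizing segments
ALTER SYSTEM SET undo_retention = 14400;
ALTER DATABASE DATAFILE '/u01/oradata/UNDOTBS01.dbf' RESIZE 8G;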
I want to pass a damn long query (which includes a lot of column names, tables, joins, unions, where clauses etc., and whose length is more than 120,000 characters) to a ref cursor; that is far beyond the 32767 limit. The query is stored in a LONG variable V_QRY in the stored procedure, and I am opening the ref cursor like below:
OPEN P_RPT_TEST FOR V_QRY;
At run time it gives a "string buffer too small" error.
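A PL/SQL LONG variable tops out around 32KB anyway, so a 120,000-character statement needs to live in a CLOB, and OPEN ... FOR does not accept a CLOB directly. On 11g one workaround is to parse the CLOB with DBMS_SQL and convert the handle to a ref cursor; a minimal sketch (the procedure name is made up, and the short literal stands in for the real 120K query):

CREATE OR REPLACE PROCEDURE open_big_query (p_rpt_test OUT SYS_REFCURSOR) IS
  v_qry CLOB;
  v_cur INTEGER;
  v_ret INTEGER;
BEGIN
  v_qry := 'SELECT ...';                          -- build the full statement in the CLOB
  v_cur := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(v_cur, v_qry, DBMS_SQL.NATIVE);  -- the CLOB overload has no 32767 limit
  v_ret := DBMS_SQL.EXECUTE(v_cur);
  p_rpt_test := DBMS_SQL.TO_REFCURSOR(v_cur);     -- hand back an ordinary ref cursor
END;
/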