In my database, STALE_PERCENT is set to 10, and I have a partitioned table. I dropped a partition that held about 10% of the table's data. I would like to know whether Oracle counts only inserts, updates, and deletes toward the stale percentage, or whether it also counts the data removed by dropping the partition, because my stats gather job is not running. When I include the dropped partition's data the change exceeds the 10% STALE_PERCENT threshold, but excluding the dropped partition it does not exceed 10%.
TABLE_NAME        PARTITION_NAME  SUBPARTITION_NAME  INSERTS    UPDATES  DELETES  TIMESTAMP    TRU  DROP_SEGMENTS
----------------  --------------  -----------------  ---------  -------  -------  -----------  ---  -------------
sample_DATA_DATA                                     235825577  0        0        11-NOV-2012  NO   3
test_DATA_DATA                                       811618472  0        0        11-NOV-2012  NO   12
sample_DATA_DATA  SYS_P2665099                       3005966    0        0        11-NOV-2012  NO   0
sample_DATA_DATA  SYS_P2665119                       3873671    0        0        11-NOV-2012  NO   0
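To see exactly what the staleness calculation has to work with, one option is to flush the in-memory monitoring data and put the tracked changes next to the row counts stored with the current statistics. A minimal sketch (the owner filter is a placeholder, not something from the post):

-- Flush the in-memory DML monitoring data so *_TAB_MODIFICATIONS is current.
exec dbms_stats.flush_database_monitoring_info;

-- Tracked changes next to the row counts recorded with the current stats.
select m.table_name,
       m.partition_name,
       m.inserts, m.updates, m.deletes, m.drop_segments,
       s.num_rows,
       round(100 * (m.inserts + m.updates + m.deletes)
             / nullif(s.num_rows, 0), 1) as pct_dml
from   dba_tab_modifications m
       join dba_tab_statistics s
         on  s.owner = m.table_owner
         and s.table_name = m.table_name
         and nvl(s.partition_name, '~') = nvl(m.partition_name, '~')
where  m.table_owner = 'YOUR_SCHEMA'     -- placeholder owner
and    m.subpartition_name is null
and    s.subpartition_name is null;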
I have to drop some partitions from a table in a production environment (to free up space). The environment has to be continuously available. I was considering ALTER TABLE ... DROP PARTITION ... UPDATE INDEXES, but it is slow because of the UPDATE INDEXES clause. Is there another way to remove this data?
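One alternative, if a window of unusable global indexes is acceptable, is to drop all the partitions without maintaining the indexes and rebuild them once at the end. A rough sketch with hypothetical table and index names:

-- Hypothetical names: sales_hist with a global index sales_hist_gidx.
alter table sales_hist drop partition p_2012_q1;
alter table sales_hist drop partition p_2012_q2;
-- ... remaining partitions ...
alter index sales_hist_gidx rebuild online;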
Is there a way of writing a trigger to drop partitions with zero rows that are older than 6 months, drop the local indexes, and rebuild the global indexes, for any schema in a database?
I have tried the code below:
declare
  v_statement varchar2(600);
  v_rows      number;
begin
  for x in (select * from dba_tab_partitions
  [code]........
I want to avoid using row number and also want to dynamically select a schema when executing the script.
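A rough sketch of the loop, for comparison. It uses optimizer statistics (NUM_ROWS and LAST_ANALYZED) as a proxy for "empty and older than six months", so it only makes sense if stats are gathered regularly; the schema comes from a SQL*Plus substitution variable rather than being hard-coded, and since local index partitions are dropped along with the table partition, only unusable global indexes are rebuilt afterwards:

declare
  p_owner constant varchar2(30) := upper('&schema_name');   -- schema chosen at run time
begin
  for p in (select table_owner, table_name, partition_name
            from   dba_tab_partitions
            where  table_owner = p_owner
            and    num_rows = 0                              -- relies on reasonably fresh stats
            and    last_analyzed < add_months(sysdate, -6))
  loop
    execute immediate 'alter table "' || p.table_owner || '"."' || p.table_name ||
                      '" drop partition "' || p.partition_name || '"';
  end loop;

  -- Rebuild any global (non-partitioned) indexes left unusable by the drops.
  for i in (select owner, index_name
            from   dba_indexes
            where  table_owner = p_owner
            and    partitioned = 'NO'
            and    status = 'UNUSABLE')
  loop
    execute immediate 'alter index "' || i.owner || '"."' || i.index_name || '" rebuild';
  end loop;
end;
/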
11.2.0.3. This is for a build. We are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects. That allows us to test the build all the way through; it's our process. This user has some tables with several thousand partitions. I ran a 10046 trace and Oracle is using PL/SQL loops to do DML against the data dictionary. Is there any way to speed this up? I am going to turn off the recycle bin during the build and turn it back on afterwards. Anything else I can do? Right now I just issue DROP USER ... CASCADE. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this) and we do fairly frequent builds. I can't change the build process; my only option is to try to make this run a little faster.
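If pre-dropping the worst offenders counts as "the same process, just faster", one thing that sometimes helps (a sketch with a hypothetical schema name, not a guaranteed win) is to drop the heavily partitioned tables explicitly with PURGE before the DROP USER CASCADE, with the recycle bin off, so the cascade has less dictionary work left to do:

alter session set recyclebin = off;

begin
  -- BUILD_OWNER is a hypothetical name for the build schema.
  for t in (select owner, table_name
            from   dba_part_tables
            where  owner = 'BUILD_OWNER')
  loop
    execute immediate 'drop table "' || t.owner || '"."' || t.table_name || '" purge';
  end loop;
end;
/

drop user build_owner cascade;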
I've used custom-written statistics gathering scripts that by default gather statistics on large tables with a small estimate percentage and FOR ALL COLUMNS SIZE 1. They allow the estimate percentage to be set higher on individual tables and allow me to choose individual columns to have histograms with the maximum number of buckets. The nice thing about this approach was that the statistics ran very efficiently by default and could be dialed up as needed to tune individual queries.

But with 11g you can set preferences at the table level so that the automatic stats job, or even a manual run of gather_table_stats, will use your settings. Also, there is a special sampling algorithm associated with AUTO_SAMPLE_SIZE that you miss out on if you manually set the estimate percentage. So I could change my approach to query tuning and statistics gathering to use AUTO_SAMPLE_SIZE and FOR ALL COLUMNS SIZE AUTO by default, and then override these if needed to tune a query or to make the statistics run in an acceptable length of time.

I work in a largish company with a team of about 10 DBAs, a bunch of developers, and over 100 Oracle databases, so we can't really keep careful track of each system. Having the defaults be less resource intensive saves me the grief of stats jobs running too long, but it requires intervention when a query needs to be tuned. Also, with my custom scripts I get a lot of hassle from people wanting to know why I'm not using the Oracle default settings, so that is a negative.

But I've seen the default settings, even on 11.2, run much too long on large tables. Typically our systems start small and then data grows and grows until things start breaking, so the auto stats may seem fine at first but eventually start running too long. And when the stats jobs run forever, or the automatic jobs don't finish in the window, I get complaints about the stats taking too long or not being updated. So either direction has its pros and cons as far as I can tell.
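For the 11g per-table preference approach, the calls look roughly like this (owner, table, and column names here are placeholders, not anything from a real system):

-- Schema-level defaults: let Oracle pick the sample size and histograms ...
exec dbms_stats.set_schema_prefs('APPOWNER', 'ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');
exec dbms_stats.set_schema_prefs('APPOWNER', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO');

-- ... then dial individual large tables back down where that runs too long.
exec dbms_stats.set_table_prefs('APPOWNER', 'BIG_FACT', 'ESTIMATE_PERCENT', '1');
exec dbms_stats.set_table_prefs('APPOWNER', 'BIG_FACT', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS STATUS_CODE SIZE 254');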
I have a table that has 2 columns of type nested table. Now, in the purge process, when I try to truncate or drop a partition from this table, I get an error saying I can't do this (because the table has nested table columns). How will I be able to truncate/drop a partition from this table? If I change the column types from nested table to VARRAY, will it work?
Also, is there any short method of moving existing data from a nested table column to a VARRAY column (having the same fields as the nested table)?
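For moving the data, one pattern is to unnest the old column and cast the result into the VARRAY type. A sketch with hypothetical table, column, and type names, assuming the VARRAY is declared over the same element type as the nested table:

-- old_tab.nt_col is the nested table column, new_tab.va_col the VARRAY column,
-- addr_varray_t the hypothetical VARRAY type declared over the same object type.
insert into new_tab (id, va_col)
select o.id,
       cast(multiset(select value(e)
                     from   table(o.nt_col) e) as addr_varray_t)
from   old_tab o;
-- For a nested table of a scalar type, select COLUMN_VALUE instead of VALUE(e).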
I have used the above to get a copy of schema stats and gather new stats for specific tables into a STATS TABLE in my personal schema. What I want to do now is use this stats table to generate plans for queries where I believe stats are off. Is it even possible? To be clear, I do not want to import stats because this replaces the stats currently there. I just want to point the CBO to my stats table for generating plans.
I was hoping there was a session parameter I could set to tell Oracle to use my stats table when generating plans, or an EXPLAIN PLAN clause I could use, or a DBMS_XPLAN parameter I could provide that would tell these tools to use my stats table when generating a plan, or even some way to tell AUTOTRACE. But I have found none of these.
While a stats gather was running for the table, I unknowingly deleted the old stats using EXEC DBMS_STATS.DELETE_TABLE_STATS. Will this affect the stats gather job currently running for the table, and will my stats still be gathered successfully?
I have created a FAST refresh materialized view. It is eligible for FAST refresh: when I query USER_MVIEWS I can see FAST_REFRESHABLE = DIRLOAD_DML, but STALENESS = STALE and it is not getting refreshed (through DBA_JOBS). Why is it in STALE status and how do I resolve it? Also, one of my materialized views is in NEEDS_COMPILE status.
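Two commands worth running by hand (a sketch with hypothetical materialized view names): they won't explain why the DBA_JOBS refresh isn't firing, but they surface any refresh error directly and clear the NEEDS_COMPILE status:

-- Force a fast refresh by hand; any error the job was hiding shows up here.
exec dbms_mview.refresh('SALES_MV', method => 'F');

-- NEEDS_COMPILE usually follows DDL on a base table; recompiling clears it.
alter materialized view order_summary_mv compile;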
From the query below I found that there are stale stats for 3 tables.
select table_name, stale_stats, last_analyzed
from   dba_tab_statistics
where  owner = 'SYSADM'
and    stale_stats = 'YES'
order  by last_analyzed desc;
I collected stats for those 3 tables with dbms_stats.gather_table_stats(), but no luck. Immediately after collecting the stats I ran the query above again, but it still shows stale stats for the 3 tables. How can I get the STALE_STATS flag to clear so that the optimizer can use the updated stats efficiently?
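One thing worth checking (a sketch; SYSADM comes from the query above, the table names are placeholders) is that the in-memory DML monitoring counters have been flushed, since STALE_STATS is derived from the *_TAB_MODIFICATIONS data and can lag behind a fresh gather:

exec dbms_stats.flush_database_monitoring_info;

select table_name, stale_stats, last_analyzed
from   dba_tab_statistics
where  owner = 'SYSADM'
and    table_name in ('TAB1', 'TAB2', 'TAB3')   -- placeholder table names
order  by last_analyzed desc;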
What is the equivalent of Top N Percent in Oracle SQL 11g? Here is my requirement:
I have to find the stores contributing the top 20% of sales:

Store  Sales  Percentage
ABC    200    (200/380)*100 = 52%
XYZ    100    (100/380)*100 = 26%
PQR    50     (50/380)*100  = 13%
dddd   20     (20/380)*100  = 5%
rrrr   10     (10/380)*100  = 2%
In the above example I have to get only store ABC, as this store alone contributes more than the top 20%. If I change the requirement to top 70%, I have to get stores ABC and XYZ.
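One way to express this with analytics: keep every store whose running share of sales, before its own row is added, is still below the threshold. A sketch assuming a table stores(store, sales); change the 20 to 70 for the second requirement:

select store, sales, pct
from (
  select store,
         sales,
         pct,
         nvl(sum(pct) over (order by sales desc
                            rows between unbounded preceding and 1 preceding), 0) as pct_before
  from (
    select store,
           sales,
           round(100 * ratio_to_report(sales) over (), 2) as pct
    from   stores
  )
)
where  pct_before < 20    -- 20 returns only ABC; 70 returns ABC and XYZ
order  by sales desc;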
Is there no format mask for a percent sign? My customer is creating a computed field and wants to have the percent sign added to the result (like a dollar sign). I saw some posts about adding jQuery. Is this going to be added to the APEX code some day?
Just like the money format, FML999G999G999G999G990D00 — where did FML come from? It translates to a dollar sign. Isn't there something that could translate to a percent sign?
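As far as I know there is no number-format element for a percent sign, so the usual workaround is to format the number and append the literal yourself. A sketch with hypothetical column and table names:

-- Number format models have no percent element, so append the sign explicitly.
select to_char(completion_ratio * 100, 'FM990D00') || '%' as pct_display
from   project_status;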
I was installing the Oracle 10g client on my PC, but after the Specify Home Details step I was unable to proceed; the installation hangs on the Loading Product Information screen.
If the temp space left is 0%, i.e. all temp space is used up, is it still possible to make a new DB connection (can new users still connect to the DB)?
Or, rephrasing the question: how much temp space (if any) is required for a new user to log in to the DB? Something like SORT_AREA_SIZE in the PGA. So, given that the sort area is already used up (temp space is 100% full), can the DB still make new connections?
I manually created a database in 10g. After successfully creating the DB, I created a single user, LAMS. Now I notice that my USERS tablespace is currently at 99.96% usage:
SQL> @check_space_used.sql Monday, March 14, 2011 2:46:22 PM SGT
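The contents of check_space_used.sql aren't shown, but a typical way to confirm the figure (a sketch; it ignores autoextensible files) is to compare allocated and free space per tablespace:

select d.tablespace_name,
       round(sum(d.bytes) / 1024 / 1024)                         as alloc_mb,
       round(nvl(f.free_bytes, 0) / 1024 / 1024)                 as free_mb,
       round(100 * (1 - nvl(f.free_bytes, 0) / sum(d.bytes)), 2) as pct_used
from   dba_data_files d
       left join (select tablespace_name, sum(bytes) as free_bytes
                  from   dba_free_space
                  group  by tablespace_name) f
         on f.tablespace_name = d.tablespace_name
group  by d.tablespace_name, f.free_bytes
order  by pct_used desc;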
I'm facing a problem: while inserting millions of records from one table to another, the undo tablespace reaches 100% full and the execution aborts. How can I free the undo tablespace? Many of the extents are offline. Will it flush automatically, or what should I do?
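Before adding space it is worth seeing how much of the undo is actually reusable (a sketch): EXPIRED extents are reused automatically as new transactions need them, so they do not have to be flushed by hand, while ACTIVE extents cannot be reused at all and UNEXPIRED ones are still inside the undo retention period.

select tablespace_name,
       status,                               -- ACTIVE / UNEXPIRED / EXPIRED
       round(sum(bytes) / 1024 / 1024) as mb
from   dba_undo_extents
group  by tablespace_name, status
order  by tablespace_name, status;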
I have many partitions in one table and I need the metadata for only some of the partitions. Is there any option to get the metadata for specific partitions only? When I extract the metadata for the whole table I get a very long script.
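If what is needed is the per-partition details rather than runnable DDL, the dictionary views can be filtered to just the partitions of interest. A sketch with placeholder owner, table, and partition names:

select partition_name,
       high_value,
       tablespace_name,
       num_rows
from   dba_tab_partitions
where  table_owner    = 'APPOWNER'
and    table_name     = 'SALES'
and    partition_name in ('P201301', 'P201302')
order  by partition_position;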
I have a day-based partitioned table TAB1. For August 2013, say, the partitions are P010813, P020813, P030813, up to P310813. When the gather stats job runs I want to analyze only the last 3 used partitions based on the current date, which would be P180813, P190813, P200813. How do I write a query which will give the last 3 used partitions? For example:
If the query is run on 20-AUG-2013 it should return P180813, P190813, P200813; if run on 25-AUG-2013 it should return P230813, P240813, P250813.
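With that naming convention (P followed by DDMMYY), the last three names can be generated straight from SYSDATE. A minimal sketch:

-- Partition names for today and the two previous days,
-- e.g. on 20-AUG-2013: P200813, P190813, P180813.
select 'P' || to_char(trunc(sysdate) - level + 1, 'DDMMYY') as partition_name
from   dual
connect by level <= 3;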
ALTER TABLE table_name DROP PARTITION partition_1000;
ALTER TABLE table_name DROP PARTITION partition_1001;
...
ALTER TABLE table_name DROP PARTITION partition_1320;
It is a delta-partitioned table, so I am trying to remove 320 partitions at once in PL/SQL Developer for a single table. I have to do the same for more than 15 tables, one by one. Will this affect the database, for example by filling up the archive log destination with extra redo? What kind of problems am I going to face, given that I am doing this directly on the production box?
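DROP PARTITION is DDL and generates comparatively little redo by itself, though global index maintenance adds to it; to see the actual impact on the archive destination, one option is to watch the log switch rate before and during the drops (a sketch):

-- Log switches per hour over the last day; compare before and during the drops.
select to_char(trunc(first_time, 'HH24'), 'DD-MON-YYYY HH24:MI') as switch_hour,
       count(*)                                                  as log_switches
from   v$log_history
where  first_time > sysdate - 1
group  by trunc(first_time, 'HH24')
order  by trunc(first_time, 'HH24');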
I have more than 100 partitions in a table and I would like to query just 10 of those partitions in a single statement, the same way it is possible to query the data for a single partition. What is the syntax for this? If I try to query all the partitions the query hangs because of the large amount of data, and if I go partition by partition it takes more than a day. So I would like to query the data for 10 partitions in a single SELECT.
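As far as I know there is no multi-partition form of the PARTITION () clause in a SELECT, so the usual options are a predicate on the partition key that covers exactly those partitions, or a UNION ALL of partition-extended names. A sketch of the latter with placeholder names:

select * from sales partition (p_2013_01)
union all
select * from sales partition (p_2013_02)
union all
select * from sales partition (p_2013_03);
-- ... extend the UNION ALL for the remaining partitions, or generate the
-- statement from USER_TAB_PARTITIONS.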
I'm looking for a script to partition the data into sections where the VALUE stays the same over a continuous period of time with no breaks. I'd like to give each partition a value to identify it by.
So the outcome of the script would be the following -
I was trying to do something with trunc(date_time), but that didn't work out right, as the blocks of data can carry over several days, as seen in the rows with IDENTIFIER = 8.
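This is the classic "gaps and islands" pattern. A sketch below, assuming a table like readings(date_time, value): it flags every row where VALUE differs from the previous row in time order, and the running total of those flags becomes the identifier of each unbroken block:

select date_time,
       value,
       -- Running count of changes = identifier of the unbroken block.
       sum(change_flag) over (order by date_time) as identifier
from (
  select date_time,
         value,
         case
           when value = lag(value) over (order by date_time) then 0
           else 1
         end as change_flag
  from   readings
)
order by date_time;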
My developer came up with a requirement to create partitions on a table which has 40 million records. His exact requirement is to create as many partitions as needed so that no partition exceeds 5k-10k records, and the records in each partition should have been inserted/updated on the same date (using a source_timestamp column as the key). How can I accomplish this?
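If the grouping really is by date, daily interval partitioning is the usual way to express it (a sketch with hypothetical names). Note that the per-partition row count then depends entirely on the daily volume, so the 5k-10k cap is only met if each day's inserts happen to fall in that range:

create table txn_history (
  txn_id           number,
  payload          varchar2(100),
  source_timestamp date not null
)
partition by range (source_timestamp)
interval (numtodsinterval(1, 'DAY'))
( partition p_initial values less than (date '2013-01-01') );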
I have tables in production with a huge number of partitions (say more than 100), but I would like to extract the table definition along with only a few specified partitions (say 10 partitions). How can I do that, and which is the best way to extract the DDL in a clean format? When I use the DBMS_METADATA package the format of the extracted DDL is not good; is there a way to extract the table definition with only the named partitions?
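DBMS_METADATA can at least be made to produce much cleaner DDL through session transform parameters (a sketch with placeholder table and owner names). I'm not aware of a transform that limits the output to named partitions, so trimming it to 10 partitions would still mean editing the result or rebuilding the partition clause from DBA_TAB_PARTITIONS:

begin
  -- Suppress the noisy storage/segment clauses and add statement terminators.
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
end;
/

select dbms_metadata.get_ddl('TABLE', 'SALES', 'APPOWNER') from dual;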