1.2 million chained rows, 1.7 million blocks, etc. The initial extent for this table is 64 KB and the next extent 1 MB. I would try to size this better for efficiency and performance; it will not be efficient as it stands. How do I calculate the size?
I need to create table A, which is going to have more than 8L (8 lakh, i.e. 800,000) records. Daily this table A will be truncated and all 8L records reinserted. Also, the number of records (8L) will increase by about 50K per month. What should the storage clause parameters be, mainly the INITIAL and NEXT extent?
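Purely as a back-of-the-envelope illustration (the row size below is an assumption, not a figure from the post): if an average row were around 100 bytes, 800,000 rows would need roughly 80 MB plus block overhead, so an INITIAL extent big enough for one full daily load avoids constant extent allocation after every truncate/reload. A sketch with hypothetical names and sizes:

-- all names and numbers are illustrative placeholders
CREATE TABLE table_a (
    id      NUMBER,
    payload VARCHAR2(100)
)
TABLESPACE app_data
STORAGE (INITIAL     100M   -- enough for one full daily load
         NEXT        10M    -- headroom for the ~50K rows added each month
         PCTINCREASE 0);

Note that in a locally managed tablespace these values are treated only as a sizing hint.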
I've read the documentation that describes storage management. I create a tablespace as:
CREATE TABLESPACE MY_TABLESPACE_NAME
  DATAFILE 'path/filename1.dbf' SIZE 3000M AUTOEXTEND ON NEXT 200M MAXSIZE 4000M
  LOGGING ONLINE PERMANENT
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  BLOCKSIZE 8K
  SEGMENT SPACE MANAGEMENT AUTO
  FLASHBACK ON;
As the extent management is local, does it mean that any storage clause of the objects (tables, indexes etc.) placed in it isn't taken into consideration? I mean in the case of placing a table in the mentioned tablespace that has storage parameters defined as follows:
1. What will the initial extent size be: 1M or 100M?
2. What will the next extent size be: 1M or 20M?
3. Will the MAXEXTENTS parameter be taken into consideration?
4. When I'm sure the tablespace is dedicated to keeping only one object [MY_TABLE], what should the relation be between the initial datafile size [filename1.dbf] and the initial extent size? Should they be equal, or doesn't it matter?
5. As SEGMENT SPACE MANAGEMENT is AUTO, the PCTFREE parameter doesn't make sense, right?
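One way to see which values actually took effect is simply to create the table and look at its extents; in a UNIFORM SIZE 1M locally managed tablespace every extent should come back as 1 MB regardless of the table's STORAGE clause. A small check query (owner and table names are placeholders):

SELECT segment_name,
       extent_id,
       bytes / 1024 / 1024 AS extent_mb
FROM   dba_extents
WHERE  owner = 'MY_OWNER'
AND    segment_name = 'MY_TABLE'
ORDER  BY extent_id;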
I have executed the statement below to move a number of tables from the source tablespace SHOP3_DATA to the destination tablespace LINUX_CATA. Before executing the statement the source tablespace was showing an 85% full message. While executing the statement I get an error for the source tablespace, which means space is being consumed in the source TS. Currently no one is using the tablespace SHOP3_DATA. If I add a datafile to SHOP3_DATA the problem may be solved, but why is space being consumed in the source? Now the tablespace SHOP3_DATA is 95% full.
SQL> alter table LINUX_CATA.TST_FOLDERS move tablespace LINUX_CATA;

Table altered.

SQL> alter table LINUX_CATA.TST_SEARCH_TESTS move tablespace LINUX_CATA;
alter table LINUX_CATA.TST_SEARCH_TESTS move tablespace LINUX_CATA
*
ERROR at line 1:
ORA-01658: unable to create INITIAL extent for segment in tablespace SHOP3_DATA
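As a generic diagnostic sketch (nothing here is specific to this system), it may help to look at what still lives in SHOP3_DATA and how much free space each tablespace has; for example, LOB segments are not relocated by a plain ALTER TABLE ... MOVE and would still be created in their original tablespace:

-- free space per tablespace
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
FROM   dba_free_space
WHERE  tablespace_name IN ('SHOP3_DATA', 'LINUX_CATA')
GROUP  BY tablespace_name;

-- segments still stored in the source tablespace, largest first
SELECT owner, segment_name, segment_type,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  tablespace_name = 'SHOP3_DATA'
ORDER  BY bytes DESC;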
While importing data I got this error in my log file, and I cannot import my data successfully. In the log file the error looks like this:
ORA-01658: unable to create INITIAL extent for segment in tablespace USERS
IMP-00017: following statement failed with ORACLE error 1658:
IMP-00003: ORACLE error 1658 encountered
ORA-01658: unable to create INITIAL extent for segment in tablespace USERS
IMP-00017: following statement failed with ORACLE error 1658:
I import my data using the imp utility with this syntax:
C:\Users\Administrator>imp tiger/****@tcs file=E:\DUMP\s.dmp log=E:\DUMP\s.log fromuser=tiger121 touser=tiger statistics=none
This is the default tablespace my user tiger uses; it is a locally managed tablespace with autoextend on, and I have enough space on my drive as well.
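As a generic check (not specific to this environment), ORA-01658 with autoextend enabled is often either a MAXSIZE that has already been reached or an INITIAL extent larger than any contiguous chunk of free space. Something like the following shows the current size, autoextend flag and ceiling for the USERS datafiles:

SELECT file_name,
       ROUND(bytes / 1024 / 1024)    AS size_mb,
       autoextensible,
       ROUND(maxbytes / 1024 / 1024) AS max_mb
FROM   dba_data_files
WHERE  tablespace_name = 'USERS';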
I have two tables, DWH_BILL_DET with 113M records and prd_rerate_chg_que with 103M, and I'm running the following merge query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
I am trying to update columns of Table A with the columns of Table B. Both tables have 60,000 rows each. I tried this operation using the following 2 queries:
Query 1
Update TableA A set (A.col1,A.col2,A.col3) = (select B.col1,B.col2,B.col3 from TableB B where A.CODE=B.CODE)
Query 2
Update TableA A set (A.col1,A.col2,A.col3) = (select B.col1,B.col2,B.col3 from TableB B where A.CODE=B.CODE) where exists (select B.code from TableB B where A.code=B.code)
When I execute either of the two queries above, it keeps executing indefinitely.
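Purely as a sketch of an alternative sometimes tried for this pattern, and assuming CODE is unique in TableB, the correlated update can be rewritten as a MERGE so that each table is read once:

MERGE INTO TableA a
USING TableB b
ON (a.code = b.code)
WHEN MATCHED THEN
  UPDATE SET a.col1 = b.col1,
             a.col2 = b.col2,
             a.col3 = b.col3;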
select gam.SOL_ID, COUNT(gam.FORACID)
from gam, smt
where gam.ACID = smt.ACID
  and gam.ACID NOT IN (select ACID from imt)
  and gam.SCHM_TYPE in ('SBA','CCA','CAA','ODA')
  and GAM.ACCT_CLS_FLG = 'N'
  and gam.SOL_ID IN (select SOL_ID from IMT)
group by gam.SOL_ID
/
Attached is the explain plan, in which the index on the IMT table is not used and the query is doing a full table scan (FTS) on IMT. What needs to be done to avoid the FTS on the IMT table?
I am using a script to delete records from a table; it is taking 1 hour to delete.
declare
  cursor c1 is select ownerid, ownertype from nightly_metric_projects;
  v1 c1%rowtype;
begin
  open c1;
  loop
    fetch c1 into v1;
    exit when c1%notfound;
    delete from DGT_ITEMEFFORTDATA
    where OWNERTYPE = v1.OWNERTYPE
      and OWNERID   = v1.OWNERID;
  end loop;
  close c1;
  commit;
end;
/
nightly_metric_projects -- 1,200 records
DGT_ITEMEFFORTDATA -- 13,200,000 records
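As a sketch of the usual set-based alternative (assuming the goal really is to delete every DGT_ITEMEFFORTDATA row whose owner appears in nightly_metric_projects), the row-by-row loop can be collapsed into a single statement:

DELETE FROM DGT_ITEMEFFORTDATA d
WHERE EXISTS (SELECT 1
              FROM   nightly_metric_projects p
              WHERE  p.ownertype = d.ownertype
              AND    p.ownerid   = d.ownerid);
COMMIT;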
I have normal tables with huge data and would like to increase the performance by the following means:
1) Add a new column in each table, say IS_LIVE. This new column has only two values: 1 (LIVE) or 0 (NOT LIVE).
2) Change the normal tables to partitioned tables. There would be only two partitions in each table, the partition key column would be IS_LIVE, and the two partitions' records would be in two different tablespaces.
3) Add a policy function to these partitioned tables to always add a query predicate of '1' to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not have this new column in it.
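Just to make the design concrete, a minimal sketch of what such a table could look like; all names, types and tablespaces are hypothetical placeholders:

CREATE TABLE orders_part (
    order_id   NUMBER PRIMARY KEY,   -- the PK deliberately does not include IS_LIVE
    order_data VARCHAR2(200),
    is_live    NUMBER(1) NOT NULL
)
PARTITION BY LIST (is_live) (
    PARTITION p_live     VALUES (1) TABLESPACE live_ts,
    PARTITION p_not_live VALUES (0) TABLESPACE archive_ts
);

For what it's worth, because the primary key does not contain the partition key, its supporting unique index has to be a global index, and a local index on IS_LIVE alone would hold a single distinct value per partition, so it adds little beyond what partition pruning already provides.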
Objective: to find a solution to archive data from 2 big tables which occupy the maximum size in the database. With the current data (from Jan 2005 to Sept 2011) the record counts are as mentioned below:
We need to load data and run monthly batches from October 2011 to the current month, which will increase this space further.
1. The issue is that we will not have that much space.
2. Maintenance of such a table is difficult now, and there is a huge impact on performance. Can we think of partitioning the table based on date, as we query the 1st table based on a certain date range? (A sketch follows after this list.)
3. Most reports use this table, and it is creating performance issues.
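Purely as an illustration of what date-based range partitioning might look like here (every name, column and boundary is a hypothetical placeholder, since the real table structure is not shown):

CREATE TABLE billing_hist (
    txn_date   DATE NOT NULL,
    txn_id     NUMBER,
    txn_amount NUMBER
)
PARTITION BY RANGE (txn_date) (
    PARTITION p_2005 VALUES LESS THAN (TO_DATE('01-01-2006','DD-MM-YYYY')),
    PARTITION p_2006 VALUES LESS THAN (TO_DATE('01-01-2007','DD-MM-YYYY')),
    PARTITION p_2007 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY')),
    PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

Old partitions can then be archived or dropped individually, and date-range queries only touch the partitions they need through partition pruning.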
We have a few tables in our production database which are huge in size and will keep growing in the future too, so as part of the corrective measures we have jotted down the below 3 methods to manage the size of those tables:
1> Partition the tables, take an export of the identified partitions and, after that, truncate those partitions (see the sketch after this list).
2> Create history tables and move not-so-current data from the original tables to the history tables.
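A rough sketch of option 1>, assuming the table is already range-partitioned on a date column; every owner, table, partition, directory and file name below is a placeholder:

-- export one old partition with Data Pump (run from the OS prompt)
expdp system/**** directory=DP_DIR tables=APP_OWNER.BIG_TABLE:P_2005_Q1 dumpfile=big_table_p_2005_q1.dmp logfile=big_table_p_2005_q1.log

-- after the dump file has been verified, empty that partition
ALTER TABLE app_owner.big_table TRUNCATE PARTITION p_2005_q1 UPDATE GLOBAL INDEXES;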
I have tried the steps below for removing table fragmentation, but for some tables I am not getting good results.
1. Collect the tables which have more than 100MB of fragmentation:
select owner, table_name, blocks, num_rows, avg_row_len,
       round(((blocks*8/1024)),2) || 'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2) || 'MB' "ACTUAL_SIZE",
       round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) || 'MB' "FRAGMENTED_SPACE"
from dba_tables
where owner in ('a','b','c','d')
  and round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) > 100
order by 8 desc;
2. Then move those tables within the same tablespace:
alter table abc move;
alter table bcd move;
alter table efg move;
3. Also rebuild the dependent objects (indexes):
alter index abc_PK rebuild online;
4. Then analyze the tables which had more than 100MB of fragmentation.
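Step 4 as a concrete statement, sketched with DBMS_STATS (owner and table name are placeholders); gathering fresh statistics after the move keeps the blocks/num_rows figures used by the query in step 1 accurate:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'A',
    tabname          => 'ABC',
    cascade          => TRUE,                         -- also gather index statistics
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/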
From the query below I found that there are stale stats for 3 tables.
select table_name, stale_stats, last_analyzed
from dba_tab_statistics
where owner = 'SYSADM'
  and stale_stats = 'YES'
order by last_analyzed desc;
I collected stats for those 3 tables with dbms_stats.gather_table_stats(), but no luck. Immediately after collecting the stats I ran the above query again, but it is still showing stale stats for the 3 tables.
How can I change the STALE_STATS status so that the optimizer can use the updated stats efficiently?
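One general behaviour worth checking (not specific to this system): STALE_STATS is derived from the table-monitoring counters, which are only flushed to the dictionary periodically, so the view can lag behind a fresh gather. Flushing the monitoring information before re-querying sometimes makes the flag update:

BEGIN
  DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
END;
/

select table_name, stale_stats, last_analyzed
from dba_tab_statistics
where owner = 'SYSADM'
  and stale_stats = 'YES';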
If a table (with a primary key) is empty (after a truncate), DML (insert, update) against it is very quick, but if the table has many rows, about 10,000,000 rows, the DML is very slow. Why?
Is it possible for the DBMS_STATS "LIST STALE" command to show a stale partition but NOT have its table show as stale?
I had a scenario where the table itself AND 1 partition showed as stale. I ran an fnd_stats gather table stats just on that 1 partition. Once it was completed, it showed the partition to no longer be stale. It also showed that the table was no longer stale. So I guess I do not need to run stats on the whole table as well?
So if this is the case, when would I need to run stats on the full partitioned table, if running them on the partitions themselves removes the staleness of the table?
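For reference, the granularity argument is what controls whether a gather touches only a partition or also the table-level (global) statistics; a sketch with dbms_stats (schema, table and partition names are placeholders, and fnd_stats is essentially a wrapper around dbms_stats in EBS environments):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'APPS',
    tabname     => 'MY_PART_TABLE',
    partname    => 'P_2011_09',
    granularity => 'PARTITION');   -- partition-level stats only; 'AUTO' would maintain global stats too
END;
/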
I am working on an online application with the database on Oracle 10g. We have a table with 10 million rows and this table is expected to grow in the future as well. Moreover, we cannot archive some of these rows, as the records are required for referencing.
We have all the necessary indexes on the table, but querying this table takes a lot of time, especially when it is joined with other tables. What are some methods with which I can manage this table in a better way, so that queries joining this table execute faster?
We are on Oracle 10.2.0.4 on Solaris 10. There is a table in my production db that has 872,944 rows. Most of its data is now unnecessary; based on a date column in the table we need to retain just the last one month's data and delete the rest. After that, the table will have just 3,000 rows.
However, as the table was huge earlier (872K rows prior to the delete), does the delete of the data release its Oracle blocks and does the size of the table reduce? If not, should I rebuild the table online (online redefinition) so that the query that does a full scan on this table goes faster?
I checked using an example table that just deleting the data does not remove the Oracle blocks; they remain in user_tables for that table and the cost of the full table scan remains the same. We have a query that does the full table scan, so I am thinking that after this delete I should do an online table redefinition. Is that the right decision?
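A minimal sketch of two common ways to give the space back after such a delete (all object names are placeholders; the shrink variant needs the tablespace to use automatic segment space management):

-- option 1: shrink the segment in place
ALTER TABLE my_big_table ENABLE ROW MOVEMENT;
ALTER TABLE my_big_table SHRINK SPACE CASCADE;   -- CASCADE also shrinks dependent indexes

-- option 2: rebuild the table and its indexes (the table is locked while it moves)
ALTER TABLE my_big_table MOVE;
ALTER INDEX my_big_table_pk REBUILD ONLINE;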
When I run a script that does a select from a single table (the table has 33,521,868 records), the query is executed in about 0.094 seconds. When I use the exact same query to insert into a temporary table, it takes 10 minutes and more.
What should I be doing to speed up this process? I also tried using hints and they do not speed up the insert.
I have a table which contains 8,21,177 rows in total. Now I am trying to delete around 4,84,000 rows from this table using just one filter, i.e. my query is something like the one below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 4,84,000 rows of data, but my current issue is that it is taking a lot of time. To be precise, it is taking almost 25 hours to delete this data. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532

--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
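One general point worth illustrating (the statement below just repeats the one from the post): a parallel hint on a DELETE only parallelises the DML itself if parallel DML has been enabled in the session first; otherwise at most the scan runs in parallel:

ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN';

COMMIT;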
I am inserting data using a procedure for the 2012 and 2013 years, which uses partitioned tables containing crores of rows per partition, and it is taking a lot of time, even months. Is there any other way by which I can insert the data faster from our query?
By default the DBMS_STATS package runs once every 24 hours to collect statistics for database objects and Oracle collects new statistics when enough of the data (about 10%) has changed.
My question here is: how do I check whether a table has changed by 10% in the database?
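For illustration, the change counters behind that 10% threshold are visible in DBA_TAB_MODIFICATIONS, so one common check compares them with the row count from the last gather, after flushing the in-memory counters; the schema name below is a placeholder:

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT m.table_name,
       t.num_rows,
       m.inserts, m.updates, m.deletes,
       ROUND(100 * (m.inserts + m.updates + m.deletes) / NULLIF(t.num_rows, 0), 1) AS pct_changed
FROM   dba_tab_modifications m
JOIN   dba_tables t
  ON   t.owner = m.table_owner
 AND   t.table_name = m.table_name
WHERE  m.table_owner = 'SCOTT';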
The requirement is that I have to get the details of all data of the previous active cycle (status A) when the item became disabled (status = D) for the input date.
In the above case, since for item P1 the status on cycle date 04-Nov-12 is D, I have to consider the previous active cycle, which is 03-Nov-12. Based on the above std date, the data is queried from another table to get all the items. Item P2 should not be considered in the above case.
Below is the code which I have written, which takes the rundate as an input parameter.
-- To get the Items disabled for Input date
with Itemdisabled as (
    select item, stddate maxcycledate
    from Item_tbl
    where rundate = stddate
[code]....
In the above case, I'm querying Item_tbl twice: once to get the disabled items and once to get the previous cycle which is active.
Is there any way to query the above only once and get the required results, using LAG/LEAD functions etc.?
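As a sketch only (the status column name and the exact filters are assumptions, since the full table structure is not visible in the post), an analytic-function version reads the table once and carries the latest prior active cycle date along with each row:

select item, stddate, prev_active_date
from (
    select item,
           stddate,
           status,
           max(case when status = 'A' then stddate end)
               over (partition by item
                     order by stddate
                     rows between unbounded preceding and 1 preceding) as prev_active_date
    from   Item_tbl
)
where status = 'D'
and   stddate = :rundate;   -- the input date as a bind variable

The MAX ... OVER window is used here instead of LAG so that intervening non-active rows are skipped without needing the IGNORE NULLS option.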
How do I find a particular SQL, or a set of SQLs, executed against a table (a user-identified table) that is either a very frequently executed query against that table or a high-impact SQL against that table? I am currently looking through the AWR reports to go through all the queries, but I was wondering if there are any dictionary views where we can find this information.
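As a generic sketch of the dictionary-view route (the table name is a placeholder): statements still in the shared pool can be pulled from V$SQL, and older history can be searched the same way in DBA_HIST_SQLSTAT joined to DBA_HIST_SQLTEXT:

SELECT s.sql_id,
       s.executions,
       s.buffer_gets,
       ROUND(s.elapsed_time / 1e6, 1) AS elapsed_sec,
       s.sql_text
FROM   v$sql s
WHERE  UPPER(s.sql_fulltext) LIKE '%MY_TABLE%'
ORDER  BY s.executions DESC;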