Objective: to find a solution for archiving data from 2 big tables that occupy the most space in the database. With the current data (from Jan 2005 to Sept 2011) the tables hold the record counts mentioned below:
We need to load data and run monthly batches from October 2011 up to the current month, which will increase this space further.
1. The issue is that we will not have that much space available.
2. Maintenance of such a table is difficult now, and there is a huge impact on performance. Can we consider partitioning the table by date, since we query the 1st table on a certain date range? (A sketch follows this list.)
3. Most reports use this table, which is creating performance issues.
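As a minimal sketch of the partitioning idea in point 2, assuming a hypothetical transaction table and date column (the names are not from the actual schema), a range-partitioned layout lets date-range queries prune to the relevant partitions and lets old data be archived and dropped partition by partition:

-- Hypothetical sketch only: table and column names are assumptions.
CREATE TABLE txn_history (
  txn_id   NUMBER        NOT NULL,
  txn_date DATE          NOT NULL,
  txn_data VARCHAR2(100)
)
PARTITION BY RANGE (txn_date) (
  PARTITION p_2005 VALUES LESS THAN (TO_DATE('01-JAN-2006','DD-MON-YYYY')),
  PARTITION p_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','DD-MON-YYYY')),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- An old partition can be archived (exported) and then dropped on its own:
ALTER TABLE txn_history DROP PARTITION p_2005;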
We have a table emp_details with 23,772,889 records. Our requirement is to increase the size of a few columns in emp_details. We are using the ALTER statement below, which is taking around 2 hours.
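The actual ALTER statement is not shown in the post, so the following is only a hypothetical sketch of the usual form (column names and new sizes are assumptions). Note that simply widening a VARCHAR2 column is normally a dictionary-only change that completes almost instantly; a 2-hour runtime usually points at something else, such as a datatype change, adding a default, or waiting on locks.

-- Hypothetical sketch only: column names and sizes are assumptions.
ALTER TABLE emp_details MODIFY (emp_name VARCHAR2(200), emp_address VARCHAR2(500));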
I am running Oracle 10.2.0.1.0 on MS Windows 2003 Server 64-bit with 16 GB RAM.
Here are the findings for my Oracle database.
SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      1293560 No
Redo Buffers                        7094272 No
Buffer Cache Size                 830472192 Yes
[code]...
I find that the SGA component "Buffer Cache" has been decreasing from 1.8 GB at the start down to 0.8 GB now. On the other hand, the component "Shared Pool" has been increasing from 0.3 GB at the start to 1.2 GB now. I also notice about 100 shrink operations on the buffer cache, with matching growth of the shared pool, in Oracle every day. Is that an indicator that I should raise SGA_MAX_SIZE?
I tried to increase SGA_MAX_SIZE to 4 GB, but I could not start Oracle afterwards. Is that a limitation of MS Windows (the OS) or of Oracle? I then set SGA_MAX_SIZE to 3 GB, and this time the instance started. What is the optimum/maximum value I can set for SGA_MAX_SIZE? Is there any adverse effect or concern when setting SGA_MAX_SIZE to more than 2 GB?
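As a hedged starting point, the resize history that automatic SGA management has already performed can be inspected directly, and if more total SGA really is needed both the hard limit and the target can be raised (the values below are examples only, not a recommendation for this server):

-- Sketch: list the automatic grow/shrink operations ASMM has performed.
SELECT component, oper_type, oper_mode,
       initial_size, target_size, final_size, status, end_time
  FROM v$sga_resize_ops
 ORDER BY end_time;

-- Example values only; the instance must be restarted for sga_max_size to take effect.
ALTER SYSTEM SET sga_max_size = 3G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 3G SCOPE = SPFILE;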
The redo log file size is an important performance consideration when the database runs in archivelog mode; if the database runs in noarchivelog mode, the redo log file size does not impact performance in the same way.
I would like to make a change on the live system. I have read in a book that the redo log file size has an impact on database performance. My current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What redo log file size is best for my Oracle database?
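A minimal sketch of acting on the advisor's figure, assuming hypothetical file paths, group numbers, and a 2 GB size: check V$INSTANCE_RECOVERY, add new larger groups, switch logs until the old groups are INACTIVE, then drop them.

-- Sketch only; paths, group numbers, and sizes are examples.
SELECT optimal_logfile_size FROM v$instance_recovery;

ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 2G;

-- switch until the old groups show STATUS = 'INACTIVE' in v$log, then:
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE GROUP 1;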
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     96000
undo_tablespace                      string      undo
Following are the details from the AWR report (00:00 to 01:00 on 21-Apr-2013); note that the error was produced at 00:42.
Undo Segment Summary                 DB/Inst: DBCPY/dbcpy01  Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
[code]....
The Undo Advisor information taken 'now' is as follows:
SQL> select dbms_undo_adv.longest_query(sysdate-2,sysdate) from dual;

DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
                                        379650

SQL> select dbms_undo_adv.required_retention from dual;
[code]....
In the above situation, what should be my first choice (assuming increasing space is not an issue): increase the undo tablespace, or increase undo retention?
If the latter is the choice, then what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto-tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case, shall I set it to something greater than max(maxquerylen), i.e. 379,650 + X? Or shall I increase the undo tablespace size instead?
From the Undo Advisor output it looks to me that even if I increase undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DML). Is that right?
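One way to sanity-check that, offered only as a sketch: compare the tuned retention and the longest query the instance has actually seen in V$UNDOSTAT, and (hedged, since DBMS_UNDO_ADV is an undocumented package already used above) ask it how much undo a given retention would require.

-- Sketch: what retention has the instance been tuning to, and how long was the longest query?
SELECT MAX(tuned_undoretention) AS max_tuned_retention,
       MAX(maxquerylen)         AS max_query_len
  FROM v$undostat;

-- Hedged: required_undo_size(retention_in_seconds) is part of the undocumented
-- DBMS_UNDO_ADV package and reports the undo size (in MB) needed for that retention.
SELECT dbms_undo_adv.required_undo_size(379650) AS required_mb FROM dual;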
All the analysis so far on our system shows that it is clearly I/O bound, and the "db file sequential read" wait is the biggest culprit.
We have even identified the index that is driving the sequential reads. I am thinking of creating a new tablespace with a 32K block size (currently all tablespaces are 8K) and migrating this index to the new tablespace. That way, Oracle will have to do fewer reads to get the required data.
But is there anything wrong in having just one tablespace with a different block size? Or is there anything I have to be watchful about while doing it?
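For reference, a minimal sketch of the mechanics (names and sizes are examples only): a non-default block size needs its own buffer cache parameter before such a tablespace can be created, and the index is then rebuilt into it.

-- Sketch only; cache size, file path, and index name are assumptions.
ALTER SYSTEM SET db_32k_cache_size = 512M SCOPE = BOTH;

CREATE TABLESPACE idx_32k
  DATAFILE '/u01/oradata/orcl/idx_32k_01.dbf' SIZE 4G
  BLOCKSIZE 32K;

ALTER INDEX my_hot_index REBUILD TABLESPACE idx_32k ONLINE;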
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE to update records. It runs for 13 hours, which is quite a long time.
SQL> explain plan for
  MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq
  USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
I am trying to update columns of Table A with the columns of Table B. Both tables have 60,000 rows each. I tried this operation using the following two queries:
Query 1
Update TableA A
   set (A.col1, A.col2, A.col3) = (select B.col1, B.col2, B.col3
                                     from TableB B
                                    where A.CODE = B.CODE)
Query 2

Update TableA A
   set (A.col1, A.col2, A.col3) = (select B.col1, B.col2, B.col3
                                     from TableB B
                                    where A.CODE = B.CODE)
 where exists (select 1 from TableB B where A.code = B.code)
When I execute these two queries, they keep executing indefinitely.
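As a hedged alternative, assuming CODE is unique in TableB, a single MERGE touches each TableA row once and usually finishes far faster than a correlated UPDATE:

-- Sketch: set-based MERGE; assumes CODE is unique in TableB.
MERGE INTO TableA a
USING TableB b
   ON (a.code = b.code)
 WHEN MATCHED THEN UPDATE
  SET a.col1 = b.col1,
      a.col2 = b.col2,
      a.col3 = b.col3;

If CODE is not unique in TableB, the MERGE raises ORA-30926 ("unable to get a stable set of rows"), which is itself a useful check on the data.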
select gam.SOL_ID, COUNT(gam.FORACID)
  from gam, smt
 where gam.ACID = smt.ACID
   and gam.ACID NOT IN (select ACID from imt)
   and gam.SCHM_TYPE in ('SBA', 'CCA', 'CAA', 'ODA')
   and GAM.ACCT_CLS_FLG = 'N'
   and gam.SOL_ID IN (select SOL_ID from IMT)
 group by gam.SOL_ID
/
Attached is the explain plan, in which the index on the IMT table is not used and the query is doing a full table scan (FTS) on IMT. What needs to be done to avoid the FTS on the IMT table?
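One thing to try, offered only as a sketch: rewriting the NOT IN / IN subqueries as NOT EXISTS / EXISTS often lets the optimizer drive them through indexes on IMT(ACID) and IMT(SOL_ID) instead of full-scanning IMT. Whether it actually helps depends on the statistics and on which indexes exist, and note that NOT IN and NOT EXISTS are not equivalent if IMT.ACID can be NULL.

-- Sketch of the rewrite; verify the result set matches before relying on it.
SELECT gam.sol_id, COUNT(gam.foracid)
  FROM gam, smt
 WHERE gam.acid = smt.acid
   AND gam.schm_type IN ('SBA', 'CCA', 'CAA', 'ODA')
   AND gam.acct_cls_flg = 'N'
   AND NOT EXISTS (SELECT 1 FROM imt i WHERE i.acid   = gam.acid)
   AND EXISTS     (SELECT 1 FROM imt i WHERE i.sol_id = gam.sol_id)
 GROUP BY gam.sol_id;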
I am using a script to delete records from a table, and it's taking 1 hour to complete.
declare
  cursor c1 is select ownerid, ownertype from nightly_metric_projects;
  v1 c1%rowtype;
begin
  open c1;
  loop
    fetch c1 into v1;
    exit when c1%notfound;
    delete from DGT_ITEMEFFORTDATA
     where OWNERTYPE = v1.ownertype
       and OWNERID   = v1.ownerid;
  end loop;
  close c1;
  commit;
end;
/
nightly_metric_projects -- 1,200 records
DGT_ITEMEFFORTDATA      -- 13,200,000 records
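As a hedged alternative to the row-by-row loop, a single set-based DELETE removes the same rows in one statement; with only 1,200 driving rows, most of the remaining cost depends on whether DGT_ITEMEFFORTDATA has a usable index on (OWNERTYPE, OWNERID).

-- Sketch: one set-based DELETE instead of a cursor loop.
DELETE FROM dgt_itemeffortdata d
 WHERE EXISTS (SELECT 1
                 FROM nightly_metric_projects n
                WHERE n.ownertype = d.ownertype
                  AND n.ownerid   = d.ownerid);
COMMIT;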
I have normal (non-partitioned) tables with huge data volumes and would like to increase performance by the following means:
1) Add a new column to each table, say IS_LIVE. This new column will have only two values: 1 (LIVE) or 0 (NOT LIVE).
2) Convert the normal tables to partitioned tables. There would be only two partitions in each table, with IS_LIVE as the partition key and the two partitions stored in two different tablespaces.
3) Add a POLICY function to these partitioned tables to always add a query predicate of IS_LIVE = 1 to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not include this new column.
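A minimal sketch of the design described above, with hypothetical table, column, and tablespace names (only IS_LIVE comes from the original post):

-- Sketch only: everything except IS_LIVE is an assumption.
CREATE TABLE orders_part (
  order_id   NUMBER PRIMARY KEY,
  order_data VARCHAR2(100),
  is_live    NUMBER(1) NOT NULL
)
PARTITION BY LIST (is_live) (
  PARTITION p_live     VALUES (1) TABLESPACE ts_live,
  PARTITION p_not_live VALUES (0) TABLESPACE ts_hist
);

-- A local index is equipartitioned with the table, so queries that the policy
-- restricts to IS_LIVE = 1 only touch the live partition's index segment.
CREATE INDEX orders_part_ix1 ON orders_part (order_data) LOCAL;

A separate index on IS_LIVE itself adds little here: with only two values, partition pruning on IS_LIVE = 1 already does that job. And because the primary key does not contain IS_LIVE, its enforcing unique index has to be a global index rather than a local one.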
We have a few tables in our production database that have grown huge and will keep growing, so as part of corrective measures we have jotted down the three methods below to manage their size:
1> Partition the table, take an export of the identified partitions, and after that truncate those partitions (see the sketch below).
2> Create history tables and move not-so-current data from the original table to the history table.
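A minimal sketch of method 1, with hypothetical schema, table, partition, and directory names; the Data Pump export keeps a copy of the partition before it is truncated:

-- Hypothetical names: APP.BIG_TABLE, partition P_2010, directory DATA_PUMP_DIR.
-- 1) Export only the identified partition with Data Pump:
--      expdp system tables=APP.BIG_TABLE:P_2010 directory=DATA_PUMP_DIR
--            dumpfile=big_table_p_2010.dmp logfile=big_table_p_2010.log
-- 2) Then truncate it; UPDATE GLOBAL INDEXES keeps any global indexes usable:
ALTER TABLE app.big_table TRUNCATE PARTITION p_2010 UPDATE GLOBAL INDEXES;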
I have tried the steps below for removing table fragmentation, but for some tables I am not getting good results.
1. Collect the tables that have more than 100 MB of fragmentation:
select owner, table_name, blocks, num_rows, avg_row_len,
       round(((blocks*8/1024)),2)||'MB' "TOTAL_SIZE",
       round((num_rows*avg_row_len/1024/1024),2)||'Mb' "ACTUAL_SIZE",
       round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2)||'MB' "FRAGMENTED_SPACE"
  from dba_tables
 where owner in ('a','b','c','d')
   and round(((blocks*8/1024)-(num_rows*avg_row_len/1024/1024)),2) > 100
 order by 8 desc;
2. Then move the object (table) within the same tablespace:
alter table abc move;
alter table bcd move;
alter table efg move;
3. Also rebuild the dependent indexes, since they become UNUSABLE after the move:
alter index abc_PK rebuild online;
4. Then re-gather statistics on the tables that had more than 100 MB of fragmentation (see the sketch below).
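For completeness, a sketch of step 4, using the same placeholder owner and table names as above:

-- Sketch: regather statistics so the optimizer sees the new row/block figures.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'A', tabname => 'ABC', cascade => TRUE);
END;
/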
From the query below I found that there are stale statistics for 3 tables:
select table_name, stale_stats, last_analyzed
  from dba_tab_statistics
 where owner = 'SYSADM'
   and stale_stats = 'YES'
 order by last_analyzed desc;
I gathered stats for those 3 tables with dbms_stats.gather_table_stats(), but no luck. Immediately after collecting the stats I ran the above query again, and it still shows stale stats for the 3 tables.
How can I clear the STALE_STATS flag, so that the optimizer can use the updated stats efficiently?
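One thing worth trying, offered as a hedged sketch: STALE_STATS is derived from the DML-monitoring counters, which are only flushed to the dictionary periodically, so flushing them manually before re-checking sometimes clears a flag that looks stuck.

-- Sketch: flush the DML monitoring info, then re-check the staleness flag.
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_name, stale_stats, last_analyzed
  FROM dba_tab_statistics
 WHERE owner = 'SYSADM'
   AND stale_stats = 'YES';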
If a table (with a primary key) is empty (after a truncate), DML (insert, update) against it runs very quickly; but if the table has many rows, around 10,000,000, the DML is very slow. Why?
Is it possible for the DBMS_STATS "LIST STALE" command to show a stale partition but NOT have its table show as stale?
I had a scenario where the table itself AND one partition showed as stale. I ran an fnd_stats gather table stats on just that one partition. Once it completed, the partition no longer showed as stale, and the table no longer showed as stale either. So I guess I do not need to run stats on the whole table as well?
So if this is the case, when would I need to run stats on the full partitioned table, if running them on the partitions themselves removes the staleness of the table?
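A hedged sketch for checking this without gathering anything: the LIST STALE option only reports the stale objects, so it is easy to see whether the table, a partition, or both are flagged.

SET SERVEROUTPUT ON
DECLARE
  l_objs DBMS_STATS.objecttab;
BEGIN
  -- LIST STALE does not gather stats; it just returns the stale objects.
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER,
                                 options => 'LIST STALE',
                                 objlist => l_objs);
  FOR i IN 1 .. l_objs.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_objs(i).objname || ' / ' ||
                         NVL(l_objs(i).partname, '(table level)'));
  END LOOP;
END;
/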
There are 1.2 million chained rows, 1.7 million blocks, etc. The initial extent for this table is 64K and the next extent 1 MB. I would try to size this out better for efficiency and performance; it will not be efficient as it stands. Calculate the size.
I am working with an online application with the database on Oracle 10g. We have a table with 10 million rows, and this table is expected to grow further. Moreover, we cannot archive some of these rows, as the records are required for referencing.
We have all the necessary indexes on the table, but querying it takes a lot of time, especially when it is joined with other tables. What are some methods with which I can manage this table better, so that queries joining it would execute faster?
We are on Oracle 10.2.0.4 on Solaris 10. There is a table in my production database that has 872,944 rows. Most of its data is now unnecessary; based on a date column we need to retain just the last month's data and delete the rest. After that, the table will have just about 3,000 rows.
However, as the table was huge earlier (872K rows prior to the delete), does deleting the data release its Oracle blocks, and does the size of the table reduce? If not, should I rebuild the table online (online redefinition) so that the query that does a full scan on this table goes faster?
I checked using an example table that just deleting data does not release the Oracle blocks: they remain allocated to the table in USER_TABLES, and the cost of a full table scan stays the same. We have a query that does a full table scan on this table, so I am thinking that after this delete I should do an online table redefinition. Is that the right decision?
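A minimal sketch of the online-redefinition route, using a hypothetical APP.BIG_TABLE (the table needs a primary key for the default PK-based method); a simpler offline alternative is ALTER TABLE ... MOVE followed by index rebuilds, if a short outage is acceptable.

-- Sketch only; schema and table names are assumptions.
CREATE TABLE app.big_table_interim AS
  SELECT * FROM app.big_table WHERE 1 = 0;

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'BIG_TABLE');
  DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'BIG_TABLE', 'BIG_TABLE_INTERIM',
                                          num_errors => l_errors);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
END;
/
DROP TABLE app.big_table_interim PURGE;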
When I run a script that does a SELECT from a single table (the table has 33,521,868 records), the query executes in about 0.094 seconds. When I use the exact same query to insert into a temporary table, it takes 10 minutes or more.
What should I be doing to speed up this process? I also tried using hints, and they do not speed up the insert.
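One hedged thing to check, with hypothetical table and column names: whether the statement is running as a conventional-path insert. A direct-path INSERT /*+ APPEND */ ... SELECT writes new blocks above the high-water mark and generates minimal undo for the table data, which often makes a large insert dramatically faster.

-- Sketch only; names are assumptions, and APPEND applies to INSERT ... SELECT.
ALTER SESSION ENABLE PARALLEL DML;   -- optional, only if a parallel insert is wanted

INSERT /*+ APPEND */ INTO temp_results
SELECT col1, col2
  FROM big_source_table;

COMMIT;  -- required before this session can query the table it direct-loaded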
I have a table which contains 821,177 rows in total. Now I am trying to delete around 484,000 rows from this table using just one filter, i.e. my query is something like below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 484,000 rows of data, but my current issue is that it is taking a lot of time; to be precise, almost 25 hours. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532

--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
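One detail worth checking, offered as a sketch: the PARALLEL hint on a DELETE only produces parallel DML if the session has enabled it first; without that, only the scan can run in parallel and the delete itself stays serial.

-- Sketch: enable parallel DML before the hinted delete.
ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ parallel(resource, 4) */ FROM resource
 WHERE created_by = 'MIGN';

COMMIT;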
I am inserting data for the years 2012 and 2013 using a procedure that loads partitioned tables, with crores of rows per partition, and it is taking a very long time (months). Is there any other way by which I can insert the data from our query faster?
By default, the automatic statistics gathering job (which uses the DBMS_STATS package) runs once every 24 hours to collect statistics for database objects, and Oracle considers new statistics necessary when enough of the data (about 10%) has changed.
My question here is: how do I check whether a table has changed by 10% in the database?
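A hedged sketch of one way to check it: compare the DML-monitoring counters against the row count the last statistics saw (the counters are only flushed to the dictionary periodically, so flush them first). The owner used below is just the example schema mentioned earlier; substitute the real one.

-- Sketch: approximate percentage of the table changed since the last stats gather.
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT m.table_name,
       m.inserts, m.updates, m.deletes,
       t.num_rows,
       ROUND(100 * (m.inserts + m.updates + m.deletes) / NULLIF(t.num_rows, 0), 1) AS pct_changed
  FROM dba_tab_modifications m
  JOIN dba_tables t
    ON t.owner = m.table_owner
   AND t.table_name = m.table_name
 WHERE m.table_owner = 'SYSADM';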
The requirement is that I have to get the details of the previous active cycle (status 'A') for the cases where the item became disabled (status = 'D') on the input date.
In the above case, since for item P1 the status on cycle date 04-Nov-12 is 'D', I have to consider the previous active cycle, which is 03-Nov-12. Based on that std date, the data is queried from another table to get all the items. Item P2 should not be considered in the above case.
Below is the code which I have written, which takes the rundate as an input parameter.
-- To get the Items disabled for the input date
with Itemdisabled as (
  select item, stddate maxcycledate
    from Item_tbl
   where rundate = stddate
[code]....
In the above case, I'm querying Item_tbl twice: once to get the disabled items and once to get the previous cycle which is active.
Is there any way to query the above only once and get the required results, using LAG/LEAD functions etc.?
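A sketch under stated assumptions (the column names item, stddate, status and the bind :rundate are guesses based on the description, not the real table definition): a running analytic MAX, in the same family as LAG/LEAD, carries the latest prior active-cycle date onto each row, so Item_tbl is read only once.

-- Sketch only: one pass over Item_tbl; prev_active_date is the last 'A' cycle
-- before the current row for the same item.
WITH cycles AS (
  SELECT item,
         stddate,
         status,
         MAX(CASE WHEN status = 'A' THEN stddate END)
           OVER (PARTITION BY item
                 ORDER BY stddate
                 ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS prev_active_date
    FROM item_tbl
)
SELECT item, prev_active_date
  FROM cycles
 WHERE status  = 'D'
   AND stddate = :rundate;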