Archiving And Purging Data From Huge Tables?

Apr 22, 2013

I'm currently working on a project to archive old data and then purge that same data from the main tables.

Here is a detailed description:

There are around 50-odd tables from which I would need to archive the old data (matching certain filter conditions, not date based). That means I have to store the data in a temp table, and once it is stored in the temp table I would have to delete those rows from the main table. The temp table will later be exported and stored in an archive database (a separate database). These tables are very large; one of them is actually 250 GB in size, and all of them have many indexes built on them, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of a total of 540 million rows. On this table alone there are 50 bitmap indexes and 2 normal indexes. The table is partitioned on a date column, but this date column is not used in (or useful for) identifying the old data. There are around 20 tables which are quite similar in size to the one described above; the rest are somewhat smaller.

We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle this activity? Most importantly, we should be able to complete it within the specified 48-hour window.

The solution we are now thinking of is:

1. Create the temp table: CREATE TABLE tmp_tbl AS SELECT * FROM main_table WHERE <<conditions identifying old data>>

2. Once the temp table is created, save the DDL of the indexes that exist on the main table and then drop them.

3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows.

4. Once the bulk delete is finished, recreate the indexes on the main table using the DDL saved in the earlier step.

Our main worry is step #4. Considering the size of these tables and the number of indexes to be built, we are not sure how long the index re-creation will run for each table.
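
Roughly, steps 1 and 3 would look something like the sketch below (main_table, tmp_tbl and the status = 'OLD' predicate are just placeholders for the real names and filter conditions):

-- Step 1: stage the rows to be archived (NOLOGGING to reduce redo; predicate is a placeholder)
CREATE TABLE tmp_tbl NOLOGGING AS
  SELECT * FROM main_table
  WHERE  status = 'OLD';

-- Step 3: delete the archived rows in batches, committing every 100,000 rows
BEGIN
  LOOP
    DELETE FROM main_table
    WHERE  status = 'OLD'            -- same placeholder predicate as step 1
    AND    ROWNUM <= 100000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/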

Depending on the possibilities, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then, we are not sure whether we will be able to pull off this activity.

The database we are using is Oracle 10g.

View 1 Replies



Performance Tuning :: LONG To CLOB Conversion In Huge Tables

Jan 25, 2013

We have a huge table in production with a LONG column, and we are trying to change its datatype to CLOB. The table has 120 million records and is 270 GB in size.

We tried the Oracle expdp/impdp route for the conversion in our performance environment. With a parallelism of 32, the export completed in 1.5 hours; however, the import took 13 hours.

I also tried the TO_LOB option using inserts; it ran for 20 hours and I killed the process. Are there any ways to improve the performance of LONG to CLOB conversion on huge tables?
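
Just for reference, the two variants I have been comparing look roughly like this (old_table/new_table and the column names are placeholders); as far as I know the in-place ALTER also converts a LONG to a CLOB, though it rewrites every row of the table:

-- TO_LOB only works inside INSERT ... SELECT (or CREATE TABLE ... AS SELECT), not in a plain query
INSERT /*+ APPEND */ INTO new_table (id, big_col)
  SELECT id, TO_LOB(long_col)
  FROM   old_table;
COMMIT;

-- Direct in-place conversion of the column's datatype
ALTER TABLE old_table MODIFY (long_col CLOB);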

View 6 Replies View Related

Inserting Data In New Column Where Table Has Huge Data

May 26, 2013

I am trying to add a new column to a table and populate it with data from another column of the same table.

alter table POSITION add INT_MK_DATA_ID number(10,0) null;
update POSITION set INT_MK_DATA_ID = INST_MARKET_DATA_ID;
commit;

As there is a huge number of records in the POSITION table, it is taking forever to execute this query.

View 1 Replies View Related

PL/SQL :: Queries To Get Data In Batches From A Huge Data Table

Jan 3, 2013

Here is my problem: I need to create some files in my own format (let's say 5,000 records each) from a huge data table (it may contain 5 million records), and I want this creation to be multi-threaded.

So how can I form queries to efficiently fetch records 1..5000, then 5001..10000, and so on? I can form something like select * from table where rownum < 5000 and not exists (already fetched records), but that is not efficient.
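
For illustration, this is the kind of range query I mean, numbering the rows once over a deterministic ordering so that each thread can pick its own non-overlapping batch (big_table and id are placeholder names):

-- rows 1..5000 go to batch 0, rows 5001..10000 to batch 1, and so on
SELECT *
FROM  (SELECT t.*,
              ROW_NUMBER() OVER (ORDER BY t.id) AS rn
       FROM   big_table t)
WHERE  rn >  :batch_no * 5000
AND    rn <= (:batch_no + 1) * 5000;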

View 5 Replies View Related

Modifying Fields With Huge Data

Aug 26, 2011

I have two questions; because they can be inter-related, I am posting them in a single post. These queries are related to Oracle (PL/SQL).

1. I am trying to increase the size of a field in a table which has almost 2 million records; the alteration runs for almost an hour and then rolls back. I am wondering whether there is a better way of doing it.

2. I have modified the size of a field in a table from VARCHAR2(10) to VARCHAR2(20); now, when I try to roll back the modification, it is not letting me change the size from VARCHAR2(20) back to VARCHAR2(10). No data has been inserted after the modification.

View 1 Replies View Related

Datatype Modification For Table With Huge Data

Aug 11, 2011

Below is my requirement.

I need to change the precision of a column in an existing table. Some statistics about the table:

* has over 130 columns
* More than 300 million records
* Column to modify is #121 which has data
* No primary key defined

Since the column has data, it is not possible to modify it with a simple ALTER.

The second option is to create a temp column in the same table, update it from the original, null out the original, alter the original column, update it back from the temp column, and drop the temp column. This approach is very expensive and time consuming.
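
Spelled out, that second option would be roughly the following sequence (my_table, col121 and the NUMBER(12,4) precision are placeholder names/values):

ALTER TABLE my_table ADD (col121_tmp NUMBER(12,4));   -- temp column with the new precision
UPDATE my_table SET col121_tmp = col121;              -- copy the data out
UPDATE my_table SET col121 = NULL;                    -- empty the original so it can be altered
COMMIT;
ALTER TABLE my_table MODIFY (col121 NUMBER(12,4));    -- change the precision
UPDATE my_table SET col121 = col121_tmp;              -- copy the data back
COMMIT;
ALTER TABLE my_table DROP COLUMN col121_tmp;          -- clean up; the original column is never
                                                      -- dropped, so it keeps column ID #121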

Also, the column ID needs to be preserved as #121.

View 3 Replies View Related

Bulk Deletes Of Data From Huge Table

Jun 13, 2013

I am trying to delete 3 million records from a huge table which already contains 3 billion records.

This is hitting the performance of the DB and halting other activities of my users. Is there any easy way to delete such data quickly? I have tried a FORALL delete, but even that is taking a lot of time.

View 5 Replies View Related

SQL & PL/SQL :: Group By Gives Wrong Value In Huge Data Records?

Jun 18, 2012

I have a table which contains a huge amount of data, around 12 lakh (1.2 million) records. When I use the SUM function grouped by accountname and docdate, it gives the wrong value. Once I restart the server, it gives the correct value; for one or two days it keeps giving the correct value, and after that I get the same problem again. If I restart again, it gives the correct value.

I use Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 64 bit server on Linux.

View 6 Replies View Related

Server Utilities :: Exporting Huge Amount Of Data?

Jul 25, 2011

I have to extract a huge amount of data from a couple of views. The problem is that they want it in TXT files with a fixed record length. There will be about 6 files, for a total of about 10 GB.

How can I export those views in the fastest possible way? If I'm not mistaken, exp and expdp can't create TXT files, so do I really need to use UTL_FILE or spool?
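
If spool turns out to be the way to go, a minimal SQL*Plus sketch for a fixed-record-length extract would be something like this (my_view and the column names/widths are placeholders):

SET PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON FEEDBACK OFF HEADING OFF
SPOOL /tmp/extract_01.txt
SELECT RPAD(col1, 20)                    -- pad each field to its fixed width
    || RPAD(col2, 50)
    || LPAD(TO_CHAR(amount), 15, '0')    -- numeric field left-padded with zeros
FROM   my_view;
SPOOL OFF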

View 1 Replies View Related

Forms :: Handle Huge Number Of Rows In Data Block

Jun 14, 2011

I need to read a huge number of rows, say in the lakhs (hundreds of thousands), and then populate them into a data block. Since the data volume is huge, I am never able to run the form; it hangs after some time. When I test with a few rows it works, so there is no problem in the coding.

View 4 Replies View Related

Oracle 11g Express Edition - Load Huge Data Into Table

Nov 6, 2012

I am using Oracle 11g Express Edition. I have a .csv file with about 500 MB of data which needs to be loaded into an Oracle table.

Which would be the best method to load the data into the table? The data is employee ticket history, which is huge.

How do I do a mass upload of data into an Oracle table?
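
I understand SQL*Loader is the usual tool for this kind of CSV load; a minimal sketch of what I am thinking of, assuming a hypothetical tickets.csv file and TICKET_HISTORY table:

-- tickets.ctl
LOAD DATA
INFILE 'tickets.csv'
APPEND INTO TABLE ticket_history
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(employee_id, ticket_id, status, created_date DATE "YYYY-MM-DD")

-- run from the command line, e.g.:
-- sqlldr userid=scott/tiger control=tickets.ctl log=tickets.log direct=true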

View 3 Replies View Related

Adrci Purging Of Command?

Sep 27, 2011

I am using the following commands within a shell script to purge files with adrci:

adrci exec="show homes"|grep -v : | while read file_line
do
echo "adrci purging diagnostic destination " $file_line
echo "purging ALERT older than $Minutes ..."
adrci exec="set homepath $file_line;purge -age $Minutes -type ALERT"

[code]...

Is there a command I can use to see what will be purged or what has been purged?

View 2 Replies View Related

Implicit Commit While Purging

Oct 24, 2011

I have written a purge package that deletes records older than 10 years. Since the data volume is huge, the purging was taking 14+ hours. To improve performance, I disabled the constraints, deleted the records and then re-enabled them. This was quite quick, but the only problem is rollback: if for some reason enabling the constraints fails, there is no way to roll back, as enabling and disabling constraints is DDL and therefore does an implicit commit.
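
The sequence in question looks roughly like this (child_table, fk_parent and the date predicate are placeholder names); each ALTER is DDL, so it commits whatever came before it:

ALTER TABLE child_table DISABLE CONSTRAINT fk_parent;   -- DDL: implicit commit

DELETE FROM child_table
WHERE  created_date < ADD_MONTHS(SYSDATE, -120);        -- the actual purge (older than 10 years)

ALTER TABLE child_table ENABLE CONSTRAINT fk_parent;    -- DDL: commits the delete first, so if
                                                        -- enabling fails there is nothing left to roll back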

View 1 Replies View Related

Purging Table Which Is Having 98 M Rows

Dec 3, 2012

I want to purge a table which has more than 98M rows. Here are the details.

Purge Process I followed
---------------------------------------------
Step 1. Created backup table from Main table with required data only
create table abc_98M_purge as
select * from abc_98M where trunc(tran_date) >= '01-jul-2011';
-> table created with 5,325,411 rows

Step 2. truncate table abc_98M

Step 3. Inserted all 5,325,411 rows back into abc_98M from abc_98M_purge using the procedure below

DECLARE
TYPE ARRROWID IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
tbrows ARRROWID;
row PLS_INTEGER;
cursor cur_insert is select rowid from abc_98M_purge order by rowid;
BEGIN
open cur_insert;
loop
[code]....
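
The truncated part of the loop presumably bulk-collects the rowids and re-inserts in batches; purely as a guess at the missing body (the original is cut off here), it would be something like:

  fetch cur_insert bulk collect into tbrows limit 10000;  -- batch size is a guess
  exit when tbrows.count = 0;
  forall i in 1 .. tbrows.count
    insert into abc_98M
      select * from abc_98M_purge where rowid = tbrows(i);
  commit;
end loop;
close cur_insert;
END;
/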

View 4 Replies View Related

Redo Multiplexing And Archiving

Aug 21, 2012

Database version
SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE    11.1.0.7.0      Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

The redo logs are multiplexed.
[code]....

+ When redo log group 5 is archived, how does the archiving process work?

Let's say both log members are clean; in that case, which one will be archived, 5a or 5b?

+ I can also see that only one archive log is created during a log switch.
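
A couple of dictionary queries make this easier to see; as I understand it, the archiver reads from whichever member of the group is healthy and writes a single archived copy per log sequence per destination:

-- list the multiplexed members of each redo log group
SELECT group#, member, status
FROM   v$logfile
ORDER  BY group#, member;

-- one archived log per sequence (per destination), regardless of how many members the group has
SELECT sequence#, dest_id, name, completion_time
FROM   v$archived_log
ORDER  BY sequence#, dest_id;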

View 2 Replies View Related

Alert Log And Listener Log File Purging In 11g?

Mar 6, 2013

How do I purge the alert log and listener log files in 11g?

OS:IBM/AIX RISC System/6000
DB version: 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

View 8 Replies View Related

Partitions For Archiving - Date With Completed Flag?

Jan 29, 2011

I am working on an archiving strategy. I want to roll off transactions that are older than seven days, but only if they are flagged as Completed. The number of transactions is very large, so this is a worthwhile venture.

The only strategy I have been able to come up with so far is to partition on date. Then, when the 7 days are up, sweep the about-to-be-archived day for the few remaining not-Completed transactions, put those into a new table (a new version of this partition) and switch partitions. Each day I do this until the older partitions are empty.
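
The "switch partitions" step would presumably be a partition exchange; a rough sketch, with trans, p_day7 and trans_keep as placeholder names:

-- stage the rows that must NOT be rolled off yet
CREATE TABLE trans_keep AS
  SELECT * FROM trans PARTITION (p_day7)
  WHERE  status <> 'COMPLETED';

-- swap segments: the partition now holds only the not-Completed rows, while trans_keep
-- ends up holding the old day's full data, from which the Completed rows can be archived
ALTER TABLE trans EXCHANGE PARTITION p_day7 WITH TABLE trans_keep;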

View 7 Replies View Related

Disable Archiving In Session Level Oracle 11g Rel 2?

Jul 30, 2013

I just want to write some data into a particular table, but I don't want it to be archived. So is it possible to disable archiving at session level in Oracle 11g Release 2?

View 3 Replies View Related

RMAN :: ORA-00258 / Manual Archiving In NOARCHIVELOG Mode Must Identify Log

Jan 27, 2013

My RMAN backup failed with the error below.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 01/26/2013 22:48:56
RMAN-11003: failure during parse/execution of SQL statement: alter system archive log current
ORA-00258: manual archiving in NOARCHIVELOG mode must identify log
RMAN>

Recovery Manager complete.

View 2 Replies View Related

Maintain Large Tables / Cleanup Data From Our Tables

May 18, 2011

I have to clean up data from our tables (production environment) that contain millions of rows. The question is: apart from the partitioned-tables solution, what alternative solution does Oracle recommend?

Should we delete from these tables using a cursor in a PL/SQL block, or export and re-import the database, using the QUERY option of the Data Pump utility for the tables from which we want to remove the old rows?

I have used both ways, and I have to admit that the Data Pump solution is much, much faster than the deletion, which suffers from disk I/O. The question again is which of these two methods is more reliable and less risky for the health of the database.
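
For reference, the Data Pump variant I mean is along these lines (the schema, table and date predicate are placeholders); only the rows matching the QUERY predicate end up in the dump file:

# keep_rows.par -- a parameter file avoids shell-quoting issues with QUERY
directory=DP_DIR
dumpfile=orders_keep.dmp
logfile=orders_keep.log
tables=APP.ORDERS
query=APP.ORDERS:"WHERE order_date >= ADD_MONTHS(SYSDATE, -12)"

# run with: expdp app_user/password parfile=keep_rows.par
# then truncate (or drop and recreate) the table and impdp the kept rows back in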

View 5 Replies View Related

Implementation Where Data From DB2 Tables Are Moved To Oracle Tables

Sep 3, 2012

I came across an implementation where data from DB2 tables is moved to Oracle tables, for a BI solution, using some Oracle procedures called from MS SQL DTS packages which run as scheduled jobs. Just being curious: can this be done using OWB or ODI rather than the above detour? I suppose some transformations are applied in those procedures before the data is loaded into the Oracle tables; can't this be done using OWB/ODI? Can it be scheduled as jobs using OWB/ODI too?

View 1 Replies View Related

Oracle Data Pump - Export Data From Schemas Or Tables

Oct 11, 2012

I need to export only the data from schemas or tables; how do I do that with Oracle Data Pump? When we use the SCHEMAS parameter, this exports the whole schema, not only the data, right?
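
From what I can tell, the CONTENT parameter controls this; would something like the following be the right approach (schema/table names are placeholders)?

# export rows only, no DDL metadata
expdp app_user/password schemas=APP directory=DP_DIR dumpfile=app_data.dmp content=DATA_ONLY

# the same works at table level
expdp app_user/password tables=APP.EMP,APP.DEPT directory=DP_DIR dumpfile=emp_dept_data.dmp content=DATA_ONLY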

View 7 Replies View Related

Data Warehouse - How To Insert / Access Data In The Tables

May 30, 2011

I created a data warehouse in Oracle 10g with three dimensions and one cube; after that it creates 4 tables. How do I use an INSERT SQL statement to insert data into those tables, and how do I access them?

View 7 Replies View Related

Dropping Column From Huge Table

Aug 9, 2012

I am trying to drop a column from a huge partitioned table (non-compressed, including its partitions). For information, I am working on an 11gR2 database.

I used the approach below:

1. alter table <tab_name> set unused column t1;
2. alter table <tab_name> drop unused columns;

Then I got the below error message:

ORA-39726: unsupported add/drop column operation on compressed tables

The first statement did work; I was able to add another column with the same name, but I still don't want the unused column left on the table.
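
Since ORA-39726 complains about compressed tables, it may be worth checking what the dictionary actually reports for the table and its partitions; a quick check (the table name is a placeholder):

SELECT partition_name, compression, compress_for
FROM   user_tab_partitions
WHERE  table_name = 'TAB_NAME'
ORDER  BY partition_name;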

View 5 Replies View Related

Deleting Huge Records From Table?

Apr 29, 2013

Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, D are dependent on table A (via foreign key constraints). When I delete records from all the tables, tables B, C, D take at most 30-40 seconds each, while table A takes 30-40 minutes. All the tables have indexes.

Method I have used:

1. Created Temp table

2. Then deleted all records from B, C, D, E, F matching the records in the temp table, in batches of 500.

delete from B where exists (select 1 from temp where b.col1=temp.col1);

3. Why is it taking so much time to delete records from table A?

View 5 Replies View Related

PL/SQL :: Oracle 11g - Huge Table Statistics

Jun 13, 2012

I have a table which has 300+ columns and 13 million rows. It uses a 32 KB block size, and it is a table in a data warehouse environment. The number of rows in the table hasn't changed much, but I see that the time taken to collect statistics has increased significantly: initially it took only 15 minutes (with the same 13M rows), and now it runs for 4+ hours. The maximum number of parallel servers is 4 (which is unchanged). The table is not partitioned.

OS: HP UX Itanium
Database: Oracle 11g (11.2.0.2)

Command is:
exec dbms_stats.gather_table_stats(ownname=>'ABC',tabname=>'ABC_LOAD',estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE,DEGREE=>dbms_stats.auto_degree);

I would like to understand:

1) What could have caused this change in elapsed time, from 15 minutes to 4+ hours?
2) How can we gather statistics on a huge table at a faster rate?

View 1 Replies View Related

SQL & PL/SQL :: To Verify Huge Number Of Records In Two Different Databases

Jun 5, 2012

I need to verify a huge number of records in two different databases. Basically, I want to check whether the same records exist in the other database's table or not, but as the number of records runs into the billions, how would I verify them?

Checking the records one by one would be hectic and time consuming. Is there any other option?
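
One set-based approach I am considering, assuming a database link can be created between the two instances (remote_db, big_table and the column names are placeholders), is to compare whole result sets rather than individual rows:

-- rows present locally but missing or different on the remote side
SELECT key_col, col1, col2 FROM big_table
MINUS
SELECT key_col, col1, col2 FROM big_table@remote_db;

-- and the reverse direction
SELECT key_col, col1, col2 FROM big_table@remote_db
MINUS
SELECT key_col, col1, col2 FROM big_table;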

View 11 Replies View Related

How To Rebuild Huge Indexes Upto 30gb In Size

Sep 7, 2011

I have some indexes of up to 40 GB in size. I need to rebuild them with only a 15 GB temp tablespace and without downtime.
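
For the no-downtime part, an online rebuild seems the obvious starting point (the index and tablespace names are placeholders), though as far as I know the rebuild still needs sort/temp space for the whole index:

ALTER INDEX big_idx REBUILD ONLINE PARALLEL 4 NOLOGGING;

-- or rebuild into a different tablespace while staying online
ALTER INDEX big_idx REBUILD ONLINE TABLESPACE new_idx_ts;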

View 5 Replies View Related

Query Optimization On Join With A View On Huge Table?

Jun 22, 2011

I have this table

create table ACTIONARI_ARH
(
actionar_id NUMBER(10) not null,
id VARCHAR2(20) not null,
id_2 VARCHAR2(20),
tip VARCHAR2(1),
nume VARCHAR2(100),
prenume VARCHAR2(100),
adresa VARCHAR2(200),

[code]....

and this view

CREATE OR REPLACE VIEW ACTIONARI AS
SELECT "ACTIONAR_ID","ID","ID_2","TIP","NUME","PRENUME","ADRESA","LOCALITATE","JUDET","TARA","CERT_DECES","DATA_REGISTRU" Data_operare,"USER_MODIF","DATA_MODIF","REZIDENT"
FROM (
select

[code]....

The table has about 30 million records and holds persons' names, addresses, a personal ID (id), an internal ID (actionar_id) and the date on which a new address was added.

The view is meant to return only the most recent info for each person (actionar_id).

If I run:

a) select * from actionari a where a.actionar_id = 'nnnnnnn', the result is returned immediately; Oracle uses the index and does not do a full table scan.

b) select * from actionari a where a.actionar_id in ('nnnnnnn','mmmmmm','ooooooo'), the result is returned immediately; Oracle uses the index and does not do a full table scan.

My problem is when I use this view in a join. Let's assume I have another table with no more than 500 records, something like:

create table SMALL_TABLE
(
actionar_id NUMBER(10) not null,
......
);

and if I run:

select *
from SMALL_TABLE s
join actionari a
on a.actionar_id = s.actionar_id;

it takes forever to process; "forever" here means 1~3 minutes. Looking at the execution plan, Oracle does a full table scan, materializes the view for all 7 million unique persons, and only then joins the result with the actionar_ids in the small table to return the desired 500-record result. I am using Oracle 10g.

View 2 Replies View Related






