I am trying to delete 3 million records from a huge table that already contains 3 billion records.
This is hurting database performance and halting other activities for my users. Is there an easy way to delete this much data quickly? I have tried a FORALL delete, but even that takes a lot of time.
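Roughly what I have tried so far looks like this (the table name and filter are placeholders, not my real ones):

DECLARE
  CURSOR c_old IS
    SELECT ROWID rid
    FROM   big_table            -- placeholder for my 3-billion-row table
    WHERE  purge_flag = 'Y';    -- placeholder filter for the 3 million rows to delete
  TYPE t_rid IS TABLE OF ROWID;
  l_rids t_rid;
BEGIN
  OPEN c_old;
  LOOP
    FETCH c_old BULK COLLECT INTO l_rids LIMIT 10000;
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      DELETE FROM big_table WHERE ROWID = l_rids(i);
    COMMIT;                     -- commit per batch to keep undo small
  END LOOP;
  CLOSE c_old;
END;
/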
I was given a task by my manager to keep track of changes on a given table, including the OS user who made them. Should I create a trigger on it (on any update, insert, delete, etc.), or is there a better way of doing it? I think some of this information may already be available in data dictionary views or something similar.
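What I have in mind is something along these lines; the table and audit-table names here are only examples, and SYS_CONTEXT('USERENV','OS_USER') is what I would use to capture the OS user:

CREATE TABLE emp_audit (
  change_date DATE,
  os_user     VARCHAR2(100),
  db_user     VARCHAR2(100),
  action      VARCHAR2(10)
);

CREATE OR REPLACE TRIGGER trg_emp_audit
  AFTER INSERT OR UPDATE OR DELETE ON emp   -- emp is a placeholder for the tracked table
  FOR EACH ROW
BEGIN
  INSERT INTO emp_audit (change_date, os_user, db_user, action)
  VALUES (SYSDATE,
          SYS_CONTEXT('USERENV', 'OS_USER'),   -- OS user of the session
          USER,                                -- database user
          CASE WHEN INSERTING THEN 'INSERT'
               WHEN UPDATING  THEN 'UPDATE'
               ELSE 'DELETE' END);
END;
/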
Here is my problem: I need to create some files in my own format (say 5,000 records each) from a huge data table (it may contain 5 million records), and I want this file creation to be multi-threaded.
So how can I form queries efficiently to fetch records 1..5000, then 5001..10000, and so on? I can write something like select * from table where rownum < 5000 and not exists (already fetched records), but that is not efficient.
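What I am looking for is something along these lines, assuming there is a column I can order by (id below is just a placeholder key):

SELECT *
FROM  (SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.id) rn
       FROM   huge_table t)                      -- huge_table is a placeholder name
WHERE  rn BETWEEN :start_row AND :end_row;       -- e.g. 1..5000, then 5001..10000, ...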
I need to change the precision of a column in an existing table. Statistics about the table:
* Has over 130 columns
* More than 300 million records
* The column to modify is #121, which contains data
* No primary key defined
Since the column has data, it is not possible to modify it with a simple ALTER.
The second option is to create a temp column in the same table, update it from the original column, set the original to NULL, alter the original column, update it back from the temp column, and drop the temp column. This approach is very expensive and time consuming.
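For reference, the second option I am describing is roughly this (the table name, column name, and target precision are placeholders):

ALTER TABLE big_table ADD (col121_tmp NUMBER);        -- temp column
UPDATE big_table SET col121_tmp = col121;             -- copy the data out
COMMIT;
UPDATE big_table SET col121 = NULL;                   -- empty the original column
COMMIT;
ALTER TABLE big_table MODIFY (col121 NUMBER(12,4));   -- change the precision (example precision)
UPDATE big_table SET col121 = col121_tmp;             -- copy the data back
COMMIT;
ALTER TABLE big_table DROP COLUMN col121_tmp;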
EMP_TEST
-----------------------------------------------------
ENO  EFIRSTNAME  ESECONDNAME  DEPTNO
1    JOHN        PAI          10
2    ABC         DEF          20
3    EFG         GHI          30
Now, the primary key on the table above is pk_emp_test_eno on eno.
I have a requirement where I need to insert dummy data (600,000,000 rows) into emp_test based on the existing data, without disabling the constraints (the unique constraint must hold for every record). While inserting, I want to commit after every 1,000 rows.
I need help bulk inserting this dummy data into the table, as it is taking far too long to insert.
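What I am attempting is roughly the following sketch; everything beyond EMP_TEST's own columns (the generated values and how the loop count is handled) is made up for illustration:

DECLARE
  TYPE t_emp IS TABLE OF emp_test%ROWTYPE;
  l_rows    t_emp := t_emp();
  l_max_eno emp_test.eno%TYPE;
BEGIN
  SELECT NVL(MAX(eno), 0) INTO l_max_eno FROM emp_test;
  FOR i IN 1 .. 600000000 LOOP
    l_rows.EXTEND;
    l_rows(l_rows.COUNT).eno         := l_max_eno + i;       -- keeps eno unique for the PK
    l_rows(l_rows.COUNT).efirstname  := 'FNAME' || i;
    l_rows(l_rows.COUNT).esecondname := 'LNAME' || i;
    l_rows(l_rows.COUNT).deptno      := MOD(i, 3) * 10 + 10; -- cycles through 10, 20, 30
    IF MOD(i, 1000) = 0 THEN
      FORALL j IN 1 .. l_rows.COUNT
        INSERT INTO emp_test VALUES l_rows(j);
      COMMIT;                                                -- commit after every 1,000 inserts
      l_rows.DELETE;
    END IF;
  END LOOP;
  IF l_rows.COUNT > 0 THEN                                   -- flush any remaining rows
    FORALL j IN 1 .. l_rows.COUNT
      INSERT INTO emp_test VALUES l_rows(j);
    COMMIT;
  END IF;
END;
/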
The table contains 10,000 records, and we are inserting the data into another table with FORALL / BULK COLLECT using LIMIT 1000. If I use a limit of 10,000 it completes faster than with a limit of 1,000. Can you tell me which limit is better?
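The pattern I am using looks like this, with the LIMIT value being the thing in question (source_table and target_table are placeholders and are assumed to have the same structure):

DECLARE
  CURSOR c_src IS SELECT * FROM source_table;
  TYPE t_rows IS TABLE OF source_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;   -- 1000 vs 10000 is the question
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_table VALUES l_rows(i);
    COMMIT;
  END LOOP;
  CLOSE c_src;
END;
/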
I have two questions; because they may be inter-related, I am posting them in a single post. These queries relate to Oracle (PL/SQL).
1. I am trying to increase the size of a field in a table which has almost 2 million records, and the ALTER statement runs for almost an hour and then rolls back. I am wondering whether there is a better way of doing it.
2. I have modified the size of a field in a table from VARCHAR2(10) to VARCHAR2(20). Now, when I try to roll back the modification, it will not let me change the size from VARCHAR2(20) back to VARCHAR2(10), even though no data has been inserted after the modification.
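For reference, the statements in question are along these lines (table and column names are placeholders):

ALTER TABLE my_table MODIFY (my_col VARCHAR2(20));   -- widening the column (question 1)
ALTER TABLE my_table MODIFY (my_col VARCHAR2(10));   -- attempting to shrink it back (question 2)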
I'm currently working on a project to archive old data and then purge that same data from the main table.
Here is a detailed description:
There are around 50-odd tables from which I need to archive the old data (matching certain filter conditions, not date based). That means I have to store the data in a temp table; once it is stored there, I have to delete those rows from the main table. The temp table will later be exported and stored in an archive database (a separate database).

These tables are very large. One of them is 250 GB in size, and all of these tables have many indexes built on them, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of a total of 540 million rows. On this table alone there are 50 bitmap indexes and 2 normal indexes. The table is partitioned on a date column, but that date column is not useful for identifying the old data. There are around 20 tables quite similar in size to this one; the rest are somewhat smaller.
We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle this activity? Most importantly, we must be able to complete it within the specified 48-hour window.
The solution we are now thinking of is:
1. Create the temp table: create tmp_tbl as select * from main_table where <<conditions identifying old data>>
2. Once the temp table is created, take a copy of the index definitions that exist on the main table and then drop those indexes.
3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows (see the sketch after this list).
4. Once the bulk delete is finished, recreate the indexes on the main table using the definitions copied in the earlier step.
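Step 3 would look roughly like this (main_table, tmp_tbl, and pk_col are placeholders for the real names and join key):

BEGIN
  LOOP
    DELETE FROM main_table m
    WHERE  EXISTS (SELECT 1 FROM tmp_tbl t WHERE t.pk_col = m.pk_col)
    AND    ROWNUM <= 100000;            -- at most 100,000 rows per pass
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;                             -- commit each batch
  END LOOP;
  COMMIT;
END;
/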
Our main worry is step #4. Considering the size of these tables and the number of indexes to be rebuilt, we are not sure how long the index re-creation will run for each table.
Depending on the possibilities, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then, we are not sure whether we will be able to pull off this activity.
I have a table which contains a huge amount of data, around 12 lakh (1.2 million) records. When I use the SUM function on accountname and docdate, it gives a wrong value. Once I restart the server it gives the correct value. For one or two days it keeps giving the correct value, and after that I get the same problem again; if I restart again, it gives the correct value.
I use Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, 64-bit, on a Linux server.
I have materialized view replication set up in an Oracle 10g environment. Inserts and updates are being propagated as expected. However, when a record in the master table is deleted, no entries are written to the materialized view logs, and hence the delete is not propagated to the materialized view on the remote server.
The materialized view is defined with FAST refresh, and the refresh occurs every 5 minutes. This allows me to see what entries are written to the MV logs before they are consumed. I thought that all DML statements get written to the MV logs and am at a loss to explain this behavior. The master table definition is not very complex: it has a PK defined and two UK constraints, but does not even have any FK constraints defined. There are a couple of triggers on this table for insert, update, and delete which write audit-table records, but I can't see how this could affect things. There are no errors being generated on the master table side, either in the DB or seen by the application.
Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, and D are dependent on table A (with foreign key constraints). When I delete records from all the tables, tables B, C, and D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.
The method I have used:
1. Created a temp table.
2. Then deleted all records from B, C, D, E, and F for the records in the temp table, in batches limited to 500 rows, e.g.:
delete from B where exists (select 1 from temp where b.col1=temp.col1);
3. Why is it taking so much time to delete records from table A?
I have a table which has 300+ columns and 13 million rows. It is on a 32 KB block size. This is a table in a data warehouse environment. The number of rows in the table hasn't changed much, but I see that the time taken to collect statistics has increased significantly. Initially it took only 15 minutes (with the same 13M rows); now it runs for 4+ hours. The maximum number of parallel servers is 4 (which is unchanged). The table is not partitioned.
OS: HP UX Itanium Database: Oracle 11g (11.2.0.2)
The command is: exec dbms_stats.gather_table_stats(ownname => 'ABC', tabname => 'ABC_LOAD', estimate_percent => dbms_stats.auto_sample_size, cascade => TRUE, degree => dbms_stats.auto_degree);
I would like to understand:
1) What could have caused this change in time, from 15 minutes to 4+ hours?
2) How can we gather statistics on a huge table at a faster rate?
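The command I run is the one shown above. For comparison I could also try pinning the options explicitly instead of the AUTO settings; the values below are only examples:

BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'ABC',
    tabname          => 'ABC_LOAD',
    estimate_percent => 1,      -- fixed small sample instead of AUTO_SAMPLE_SIZE
    degree           => 4,      -- explicit parallel degree instead of AUTO_DEGREE
    cascade          => TRUE);
END;
/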
I have to extract a huge amount of data from a couple of views. The problem is that they want it in TXT files with a fixed record length. There will be about 6 files, for a total of roughly 10 GB.
How can I export those tables in the fastest possible way? If I'm not mistaken, exp and expdp can't create TXT files, so do I really need to use UTL_FILE or spool?
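If spool is the way to go, what I would try is a plain SQL*Plus script along these lines (the view, columns, and field widths are placeholders):

SET PAGESIZE 0 LINESIZE 500 TRIMSPOOL ON FEEDBACK OFF HEADING OFF
SPOOL /tmp/extract_01.txt
SELECT RPAD(col1, 20)
       || RPAD(col2, 30)
       || LPAD(TO_CHAR(amount), 15, '0')   -- pad every field so each record has a fixed length
FROM   my_view;
SPOOL OFF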
I need to read a huge number of rows, in the lakhs, and then populate them into a data block. Because of the huge volume of data I am never able to run the form; it hangs after some time. When I test with a few rows it works, so there is no problem in the code.
create table ACTIONARI_ARH (
  actionar_id NUMBER(10) not null,
  id          VARCHAR2(20) not null,
  id_2        VARCHAR2(20),
  tip         VARCHAR2(1),
  nume        VARCHAR2(100),
  prenume     VARCHAR2(100),
  adresa      VARCHAR2(200),
[code]....
and this view
CREATE OR REPLACE VIEW ACTIONARI AS
SELECT "ACTIONAR_ID", "ID", "ID_2", "TIP", "NUME", "PRENUME", "ADRESA", "LOCALITATE", "JUDET", "TARA",
       "CERT_DECES", "DATA_REGISTRU" Data_operare, "USER_MODIF", "DATA_MODIF", "REZIDENT"
FROM ( select
[code]....
The table has about 30 million records and holds persons' names, addresses, personal id (id), internal id (actionar_id), and the date when a new address was added.
The view returns only the most recent information for each person (actionar_id).
If I run:
a) select * from actionari a where a.actionar_id = 'nnnnnnn', the result is returned immediately; Oracle uses the index and does not do a full table scan.
b) select * from actionari a where a.actionar_id in ('nnnnnnn','mmmmmm','ooooooo'), the result is returned immediately; Oracle uses the index and does not do a full table scan.
My problem is when I use this view in a join. Let's assume I have another table with no more than 500 records, something like:
create table SMALL_TABLE ( actionar_id NUMBER(10) not null, ...... );
and if I run
select * from SMALL_TABLE s join actionari a on a.actionar_id = s.actionar_id;
it takes forever to process, where forever means 1-3 minutes. Looking at the execution plan, Oracle does a full table scan, builds the view for all 7 million unique persons, and only then joins the result with the actionar_ids in the small table to return the desired 500-record result. I am using Oracle 10g.
I want to drop a column in a huge table which contains about 420,000,000 rows. I used the ALTER TABLE ... DROP COLUMN command and found that it takes a long time and generates a huge amount of redo.
Is there a quicker way to drop a column in a huge table?
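One variant I have come across, sketched here with placeholder names, is to mark the column unused first (a quick dictionary-only change) and do the physical drop later:

ALTER TABLE big_table SET UNUSED COLUMN old_col;             -- fast: only marks the column as unused
ALTER TABLE big_table DROP UNUSED COLUMNS CHECKPOINT 50000;  -- physical drop later; CHECKPOINT limits undo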
I need to load (using SQL*Loader) a huge XML file, with several hundred records, into an Oracle table. The XML file schema is pretty simple, and it looks something like this:
<dataroot>
  <record>
    <companyname>LimitSoft S.A.</companyname>
    <address>Street Number 1</address>
[code]...
I'm trying to use the help included in this link [URL]...
When they refer to a schema [URL]...., what should I use? I do not need to use the Oracle website to register anything, right?
PROCEDURE Return_Summary(Pi_WX IN dbms_sql.varchar2_table, Po_WX OUT SYS_REFCURSOR) IS
BEGIN
  FOR i IN 1 .. Pi_WX.COUNT LOOP
    /* I need to put these results into a temp table or a table object.
       Can I use a temp table for this, or is there another recommended method?
       The loop might execute at most 10 times, and each run might return 100-200 records. */
    SELECT WX_NM, WX_NUM
    FROM   TAB A, TAB B, TAB C
    WHERE  A.KEY1 = B.KEY1
    AND    B.KEY1 = C.KEY1
    AND    C.WX   = Pi_WX(i);
  END LOOP;
END;
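What I have in mind with the temp-table option is roughly this; the GTT name and column datatypes are assumptions:

CREATE GLOBAL TEMPORARY TABLE summary_gtt (
  wx_nm  VARCHAR2(100),
  wx_num NUMBER
) ON COMMIT PRESERVE ROWS;                 -- created once, outside the procedure

-- inside the loop, instead of the bare SELECT:
INSERT INTO summary_gtt (wx_nm, wx_num)
  SELECT WX_NM, WX_NUM
  FROM   TAB A, TAB B, TAB C
  WHERE  A.KEY1 = B.KEY1
  AND    B.KEY1 = C.KEY1
  AND    C.WX   = Pi_WX(i);

-- after the loop, hand the collected rows back through the ref cursor:
OPEN Po_WX FOR SELECT wx_nm, wx_num FROM summary_gtt;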
Please help me optimize this code. The scenario: I have to update about 40 million rows to a static value, committing 1 million rows per loop iteration. The first 1 million rows get updated very fast, probably within 2 minutes; after that the code just hangs and I don't see the committed row count increase.
Declare cursor c1 is select rowid from t1 where c1 is not null;
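The overall shape of the code is roughly the following, completed here around the cursor above (the static value is a placeholder):

DECLARE
  CURSOR c1 IS SELECT ROWID rid FROM t1 WHERE c1 IS NOT NULL;
  TYPE t_rid IS TABLE OF ROWID;
  l_rids t_rid;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 BULK COLLECT INTO l_rids LIMIT 1000000;   -- one million rows per pass
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE t1 SET c1 = 'STATIC_VALUE' WHERE ROWID = l_rids(i);
    COMMIT;                                            -- commit each million
  END LOOP;
  CLOSE c1;
END;
/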
I want to insert bulk records into the table: date rows for the next 50 years (from year 2001 to year 2050). I have the following columns in my table:
YYYYMMDD
MM/DD/YYYY
Day of the week (Monday, Tuesday, etc.)
JulianDate
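What I am planning is a single INSERT ... SELECT that generates one row per day; the table and column names below are placeholders standing in for the columns listed above:

INSERT INTO calendar_tab (yyyymmdd, mm_dd_yyyy, day_of_week, julian_date)
SELECT TO_CHAR(d, 'YYYYMMDD'),
       TO_CHAR(d, 'MM/DD/YYYY'),
       TO_CHAR(d, 'fmDay'),                            -- Monday, Tuesday, ...
       TO_CHAR(d, 'J')                                 -- Julian day number
FROM  (SELECT DATE '2001-01-01' + LEVEL - 1 AS d
       FROM   dual
       CONNECT BY LEVEL <= DATE '2051-01-01' - DATE '2001-01-01');  -- every day from 2001 through 2050
COMMIT;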
I need a procedure that gets the records from the table for every SKU in the XML message and takes one record per SKU based on an order of precedence, then sends the resulting set of records to the .NET code. The order of precedence is:
If MODEL is not null, that record has the highest precedence, and the maximum LEAD_TIME should be considered. If MODEL and VK_UNIT are NULL, then precedence goes to STATE_ID.
The parameters of the procedure are:
CREATE OR REPLACE PROCEDURE sku_proc (
  p_bu_id      IN number,
  p_model      IN varchar2,
  p_vk_unit    IN XMLTYPE,
  p_country_id IN varchar2,
[code].......
Query to get the data from the table based on the parameters.
Assume that, based on the above query, we got the following records:
BU_ID  MODEL  VK_UNIT    COUNTRY_ID  STATE_ID  LEAD_TIME  FC
0      M123   210-39348  AB          0         20         A
0      M123   210-39348  AB          0         30         B
0      NULL   210-39348  AB          0         10         C
0      M123   405-12132  AB          0         10         A
0      NULL   340-30904  AB          0         30         C
0      M123   340-30904  AB          0         20         B
0      M123   340-30904  AB          0         10         A
0      NULL   403-10890  AB          0         10         B
0      M123   403-10890  AB          0         20         A
0      M123   709-10007  AB          0         10         B
0      NULL   NULL       AB          0         20         B
0      NULL   NULL       AB          1         30         A
The final query has to return the following result through the OUT parameters p_Lead_Time and p_FC of the procedure:
LEAD_TIME  FC
30         B
10         A
20         B
20         A
10         B
30         A
How do I implement the above requirement using BULK COLLECT?
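A rough sketch of the direction I am thinking of; the table name, the partitioning columns, and the ordering expression below only approximate the precedence rule and are assumptions, not the final logic:

DECLARE
  TYPE t_num IS TABLE OF NUMBER;
  TYPE t_chr IS TABLE OF VARCHAR2(10);
  l_lead_time t_num;
  l_fc        t_chr;
BEGIN
  SELECT lead_time, fc
  BULK COLLECT INTO l_lead_time, l_fc
  FROM (SELECT s.lead_time, s.fc,
               ROW_NUMBER() OVER (
                 PARTITION BY s.vk_unit, s.country_id, s.state_id
                 ORDER BY CASE WHEN s.model IS NOT NULL THEN 0 ELSE 1 END,  -- non-null MODEL first
                          s.lead_time DESC                                  -- then the max lead time
               ) rn
        FROM   sku_tab s)                                                   -- sku_tab is a placeholder
  WHERE rn = 1;
END;
/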