Large Rows Committing Delete
Nov 8, 2010
I was asked in a telephone interview about committing a delete (a few million rows) in an Oracle log table which had a trillion rows in total. The interviewer said that the delete took 2 days.
Then he asked: assuming huge rollback segments are allocated, how long would the commit take?
View 5 Replies
Aug 8, 2013
We have data archive scripts that move data for a date range to a different table, so each script has two parts: first copy the data from the original table to the archive table, then delete the copied rows from the original table. The first part executes very fast, but the deletion takes too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and goes into a full table scan, but the predicate itself is the primary key. More info below.
CREATE TABLE "APP"."MON_TXNS" ( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE, "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE, "ID_PAYER" NUMBER(12,0), "ID_PAYER_PI" NUMBER(12,0), "ID_PAYEE" NUMBER(12,0), "ID_PAYEE_PI" NUMBER(12,0), "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE, "STR_TEXT" VARCHAR2(60 CHAR), "DAT_MERCHANT_TIMESTAMP" DATE, "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE), "DAT_EXPIRATION" DATE, "DAT_CREATION" DATE, "STR_USER_CREATION" VARCHAR2(30 CHAR), "DAT_LAST_UPDATE"
[Code]...
Data is first moved to the table schema3.OTW, and then we delete all the rows in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
  2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);

Explained.

SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT--------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2798378986
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT        |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   1 |  DELETE                 | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI  |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN | OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL    | MON_TXNS   | 14260 | 1239K |    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
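At these cardinalities (2,520 rows deleted out of roughly 14,260) the hash-join full scan is often the plan the optimizer is right to pick for an IN-list semi-join. If an indexed, row-by-row path is worth testing anyway, a minimal sketch that drives from the small OTW list (it assumes ID_TXN is the primary key of MON_TXNS, as stated above):

begin
  for r in (select id_txn from schema3.otw) loop
    delete from schema1.mon_txns
    where  id_txn = r.id_txn;      -- single primary-key lookup per row
  end loop;
  commit;
end;
/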
View 6 Replies
Nov 29, 2012
I have a query that returns 11 million rows, but not all of them can be displayed in SQL Developer or DbVisualizer because of limited memory or similar issues. I need to copy the entire result set to Excel for further calculations.
Is there any way that I can select N rows at a time out of my actual result set?
For example:
a) A result set contains 10 million rows in total.
b) I want to display the first 5 million rows by executing a query.
c) Then I want to display the remaining 5 million rows by executing the query again with changed parameters.
So all I want is to extract the rows of my actual result set in two or more executions, depending on the number of rows.
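One common way to page through a large result set is to number the rows with ROW_NUMBER over a deterministic ordering and fetch them in slices; a sketch, with table and column names as placeholders (the ordering column should be unique so the slices are stable):

select *
from  (select t.*,
              row_number() over (order by t.id) rn
       from   my_big_table t)
where  rn between :start_rn and :end_rn;   -- e.g. 1..5000000, then 5000001..10000000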
View 3 Replies
Apr 30, 2013
Consider tables A, B, C, D, E, F, each with 100,000+ records. Tables B, C and D depend on table A (foreign key constraints). When I delete records from all the tables, B, C and D each take at most 30-40 seconds, while table A takes 30-40 minutes. All the tables have indexes.
Method I have used:
1. Created Temp table
2. Then deleted all records from B, C, D, E and F matching the temp table, in batches of 500:
delete from B where exists (select 1 from temp where b.col1=temp.col1);
3. Why is it taking so much time to delete the records in table A?
Is it that when deleting from such a master table, Oracle checks all the dependent tables even if no dependent data is present?
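A classic cause of exactly this symptom is unindexed foreign-key columns on the child tables: every row deleted from the parent forces a scan of each child to verify no dependent rows exist, even when no matches are present. A hedged dictionary check for single-column foreign keys with no index on the leading column:

select c.table_name, cc.column_name
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    not exists (select 1
                   from   user_ind_columns ic
                   where  ic.table_name      = cc.table_name
                   and    ic.column_name     = cc.column_name
                   and    ic.column_position = 1);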
View 12 Replies
Feb 6, 2011
I am having a problem with duplicate rows in a query.
What I have so far is:
SELECT D.Student_Id,
E.Latest_Reg_Date,
E.Attendance_Count,
[Code].....
View 1 Replies
Oct 19, 2011
I am using the proc below to delete some records.
1)select client_id,count(*) from TCLIENT_NOTIFICATION_PACK where client_id=1620560178 group by client_id order by 2 desc;
client_id    count(*)
-----------  ---------
1620560178        5128
2)select client_id, count(*) from TCLIENT_NOTIFICATION_PACK
where client_id=1620560178
group by client_id
having count(*) > 40
order by 2 desc
client_id    count(*)
-----------  ---------
1620560178        5128
3) select client_id,clnt_notification_pack_tid
-- bulk collect into v_client_id,v_notif_tid
from (select clnt_notification_pack_tid,
client_id,
clnt_notification_pack_typ_tid,
crte_dt,
[code]....
4) I am using the proc below to delete the rows from the table, except the 4 rows returned above:
declare
v_clnt_notification_pack_tid TCLIENT_NOTIFICATION_PACK.CLNT_NOTIFICATION_PACK_TID%type;
tYPE t_client_id is table of TCLIENT_NOTIFICATION_PACK.client_id%type;
tYPE t_notif_tid is table of TCLIENT_NOTIFICATION_PACK.clnt_notification_pack_tid%type;
v_client_id t_client_id;
v_notif_tid t_notif_tid;
[code]....
5) After running this procedure, I should see 5124 records, but I see zero records.
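A single-statement alternative that keeps the newest 4 notifications per client is sketched below; it assumes CRTE_DT (which appears in the query above) reflects recency:

delete from tclient_notification_pack
where clnt_notification_pack_tid in (
  select clnt_notification_pack_tid
  from  (select clnt_notification_pack_tid,
                row_number() over (partition by client_id
                                   order by crte_dt desc) rn
         from   tclient_notification_pack
         where  client_id = 1620560178)
  where rn > 4
);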
View 23 Replies
Oct 25, 2006
How do I delete the even rows of a table, or rather the alternate rows of a table?
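Tables have no inherent row order, so "alternate" has to be defined by some ordering. A sketch that deletes every second row when the rows are ordered by ROWID (the table name and the ordering choice are placeholders):

delete from t
where rowid in (
  select rid
  from  (select rowid rid,
                row_number() over (order by rowid) rn
         from   t)
  where mod(rn, 2) = 0
);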
View 31 Replies
Jun 13, 2011
I have a table with n rows, and I want to delete 1% of them.
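A sketch using NTILE to split the rows into 100 equal buckets and remove one of them (the table name is a placeholder):

delete from t
where rowid in (
  select rid
  from  (select rowid rid,
                ntile(100) over (order by rowid) bucket
         from   t)
  where bucket = 1
);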
View 26 Replies
Jan 4, 2013
I have a table with 3 columns, id, name and time, where time is of datatype TIMESTAMP and stores the time the row was inserted. I need a query which accepts 2 parameters as input, e.g. Start_Time and End_Time, and deletes all the rows whose time falls between those two timestamps.
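A minimal sketch as a stored procedure with bind parameters (the table name is a placeholder; the column is called time as in the question):

create or replace procedure delete_by_range (
  p_start in timestamp,
  p_end   in timestamp
) as
begin
  delete from my_table                     -- my_table is a placeholder
  where  time between p_start and p_end;
end;
/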
View 8 Replies
Mar 18, 2013
I have deleted 20 lakh (2 million) rows from a table which has 30 lakh (3 million) rows,
and I want to release the fragmented space. What is the procedure, other than exp/imp or ALTER TABLE ... MOVE?
And what is the recommended way to do this in a prod environment (coalesce / alter move etc.)?
The DB version is 11.2.
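On 11.2 the usual online alternative to MOVE is segment shrink, which requires row movement and an ASSM tablespace; a sketch, with the table name as a placeholder:

alter table t enable row movement;
alter table t shrink space cascade;   -- cascade shrinks dependent indexes too
alter table t disable row movement;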
View 7 Replies
Mar 1, 2010
I want to delete the duplicate rows in a table without using ROWID.
I have the following data.
SNO SNAME SECTION
1 RAM A
2 SHYAM B
2 SHYAM B
3 KISHOR C
3 KISHOR D
4 RAMESH E
5 RAJESH F
5 RAJESH F
The Output Should be like this.
SNO SNAME SECTION
1 RAM A
2 SHYAM B
3 KISHOR C
3 KISHOR D
4 RAMESH E
5 RAJESH F
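Without ROWID, one workable pattern rebuilds the table from its distinct rows; a sketch that assumes a maintenance window and that no constraints or grants block the truncate (the table name student is a placeholder, the columns come from the sample data):

create table student_dedup as
  select distinct sno, sname, section from student;

truncate table student;
insert into student select * from student_dedup;
drop table student_dedup;
commit;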
View 8 Replies
Mar 18, 2013
CREATE TABLE "TEST_JET" ("K1" NUMBER, "K2" NUMBER, "K3" NUMBER, "K4" VARCHAR2(1)) ;
REM INSERTING into TEST_JET
Insert into TEST_JET (K1,K2,K3,K4) values (1,2,3,'I');
Insert into TEST_JET (K1,K2,K3,K4) values (1,2,3,'U');
Insert into TEST_JET (K1,K2,K3,K4) values (1,2,3,'D');
Insert into TEST_JET (K1,K2,K3,K4) values (1,2,2,'U');
Insert into TEST_JET (K1,K2,K3,K4) values (1,2,2,'D');
Insert into TEST_JET (K1,K2,K3,K4) values (1,3,5,'I');
Insert into TEST_JET (K1,K2,K3,K4) values (1,6,7,'U');
Insert into TEST_JET (K1,K2,K3,K4) values (1,6,7,'D');
Insert into TEST_JET (K1,K2,K3,K4) values (1,6,7,'T');
[code]....
Based on the above result set, for a particular group only the op that comes out of the query should be retained. Say, for example, we got 1,2,3,'D' for group (1,2,3).
Since we got the D operation from the above query, I don't need the other two rows, i.e.:
(1,2,3,'I');
(1,2,3,'U');
What is the best way to delete the data we don't want, retaining the rows we do want, using a single SQL statement? Also, for the result-set row 7,7,7,T I first need to delete the group containing the T operation and insert two new rows, i.e. 7,7,7,D and 7,7,7,I.
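For the first part, a hedged sketch, assuming that whenever a group has a 'D' row the other ops in that group should go (the 'T' case from the question needs its own handling):

delete from test_jet t
where  t.k4 <> 'D'
and    exists (select 1
               from   test_jet d
               where  d.k1 = t.k1
               and    d.k2 = t.k2
               and    d.k3 = t.k3
               and    d.k4 = 'D');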
View 14 Replies
Jun 1, 2012
There are four columns as follows, but I need to delete rows based on the third column only, where the second letter of the word is a vowel. For example, I want to delete only the rows having the words 'Ramu' and 'Ravi'.
A B C D
xx y Ramu xx
yu ut Ameer uui
rtt iw Ravi iwoow
fgsg isd Intel jjuiw
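A sketch, with table and column names as placeholders (c stands for the third column holding the words):

delete from t
where upper(substr(c, 2, 1)) in ('A', 'E', 'I', 'O', 'U');

For the sample data this removes 'Ramu' and 'Ravi' (second letter 'a') and keeps 'Ameer' and 'Intel' (second letters 'm' and 'n').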
View 4 Replies
Apr 5, 2013
How can I delete the two rows returned by this query?
select s.reg_no, s.course_code,
       s.section src_sec, a.section a_sec, a.att_date, a.att_flag
from   attendance a, src s
where  a.semester_code = 1
and    a.semester_year = 2013
and    s.semester_code = 1
[code]....
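One general pattern is to reuse the identifying SELECT as a ROWID subquery; a sketch (the remaining join predicates live in the truncated part of the query above):

delete from attendance
where rowid in (
  select a.rowid
  from   attendance a, src s
  where  a.semester_code = 1
  and    a.semester_year = 2013
  and    s.semester_code = 1
  -- ... plus the remaining predicates from the original query ...
);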
View 6 Replies
Jun 18, 2012
The following query takes a lot of time when executed, even after I drop the indexes. Is there a better way to write it?
DELETE from pwr_part
where ft_src_ref_id in (select ft_src_ref_id
from pwr_purge_ft);
--Table:pwr_part
--UIP10371 foreign key (FT_SRC_REF_ID, FT_DTL_SEQ)
[Code]...
Explain Plan:
Description                         Object owner  Object name      Cost    Cardinality  Bytes
SELECT STATEMENT, GOAL = ALL_ROWS                                  224993  5492829      395483688
 HASH JOIN RIGHT SEMI                                              224993  5492829      395483688
  INDEX FAST FULL SCAN              PWR_OWNER     PWR_PURGE_FT_PK  43102   23803770     142822620
  PARTITION HASH ALL                                               60942   27156200     1792309200
   TABLE ACCESS FULL                PWR_OWNER     PWR_PART         60942   27156200     1792309200
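The plan is two full scans feeding a hash semi-join, which is usually the right shape for a purge of this size; most of the time typically goes into the 27-million-row scan of PWR_PART plus the undo and redo of one huge transaction. A batched variant (a sketch) bounds undo and allows a restart after failure, at the price of scanning more than once:

begin
  loop
    delete from pwr_part
    where  ft_src_ref_id in (select ft_src_ref_id from pwr_purge_ft)
    and    rownum <= 50000;           -- batch size is arbitrary
    exit when sql%rowcount = 0;
    commit;
  end loop;
end;
/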
View 6 Replies
Aug 13, 2011
I need to delete all the records where table 1 joins table 2 on 3 fields, for example:
delete from tabla1 t1
where t1.campo1 in (select distinct tr.campo1
                    from   tabla1 tr,
                           tabla2 t2
                    where  t2.error  = 0
                    and    tr.campo1 = t2.campo1
                    and    tr.campo2 = t2.campo2
[Code]...
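Since the match is on 3 fields, a correlated EXISTS against the composite key avoids the self-join entirely; a sketch (campo3 is assumed to be the third join field, which sits in the truncated part above):

delete from tabla1 t1
where exists (select 1
              from   tabla2 t2
              where  t2.error  = 0
              and    t2.campo1 = t1.campo1
              and    t2.campo2 = t1.campo2
              and    t2.campo3 = t1.campo3);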
View 4 Replies
Jul 23, 2012
I want to compare two tables and delete the common rows from the first table.
Here is what I have done:
Create table test1(Test1C1 Number,
Test1C2 varchar2(50));
Create table test2(Test2C1 Number,
Test2C2 varchar2(50));
Insert into test1 values(1,'testdata1');
Insert into test1 values(2,'testdata2');
Insert into test1 values(3,'testdata3');
[code].......
It deletes all the records from table test1. What should I modify here, or should I write a different query?
The desired contents of table test1 would be:
2 testdata2
4 testdata4
6 testdata6
8 testdata8
10 testdata10
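If "common" means both columns must match, a multi-column IN keeps the comparison row-wise (a sketch); a subquery correlated on neither column would match every row, which would explain test1 being emptied:

delete from test1
where (test1c1, test1c2) in (select test2c1, test2c2 from test2);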
View 5 Replies
Dec 14, 2007
I joined the forum just today. I need some tips on deleting millions of rows from a huge table having 25 million rows.
View 4 Replies
Mar 15, 2013
I know how to select the last N sets of rows using DENSE_RANK - where multiple rows can have the same timestamp - but I want to select only those rows which do NOT have the top 2 unique timestamps.
i.e.:
SELECT *
FROM  (SELECT DENSE_RANK() OVER (ORDER BY myTimestamp DESC) DENSE_RANK,
              HISTORYID, USER_ID, myTimestamp, STATUS
       FROM   TXN_HIST)
WHERE DENSE_RANK > 2
ORDER BY myTimestamp DESC, HISTORYID, USER_ID;
But how do I DELETE these same rows?
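The same ranking can drive the delete through ROWID; a sketch:

DELETE FROM txn_hist
WHERE rowid IN (
  SELECT rid
  FROM  (SELECT rowid rid,
                DENSE_RANK() OVER (ORDER BY myTimestamp DESC) dr
         FROM   txn_hist)
  WHERE dr > 2
);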
View 3 Replies
Oct 9, 2012
I have to write a procedure that accepts a schema name, a table name and a column value as parameters. I know that I need to use the metadata to build the delete dynamically rather than doing it manually.
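A minimal dynamic-SQL sketch. It assumes the column name is also passed in (the question mentions only the column value), and it runs the identifiers through DBMS_ASSERT before concatenating them:

create or replace procedure delete_rows (
  p_schema in varchar2,
  p_table  in varchar2,
  p_column in varchar2,   -- assumed extra parameter, not in the original question
  p_value  in varchar2
) as
begin
  execute immediate
      'delete from '
      || dbms_assert.enquote_name(p_schema, false) || '.'
      || dbms_assert.enquote_name(p_table,  false)
      || ' where '
      || dbms_assert.enquote_name(p_column, false) || ' = :v'
    using p_value;
end;
/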
View 9 Replies
Mar 9, 2011
1. When querying the alert_log table I created from the alert log using the script below, 2 new files were created: ALERT_LOG_30499.bad and ALERT_LOG_30499.log.
The ALERT_LOG_30499.log contains this error message:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 82, maximum: 80)
The ALERT_LOG_30499.bad, so far, only contains datafile resize information. The datafiles have plenty of space, and there is plenty of space on the SAN slice where the datafiles reside.
2. Each time I recreate the table with an increased varchar2 size, the "actual" size in the log file increases as well:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log ORA-12899: value too large for column MSG (actual: 92, maximum: 90)
3. When I increased the varchar2 size to 120+, it gave me this:
[oracle@tds_dw bdump]$ cat ALERT_LOG_30715.log
LOG file opened at 03/09/11 14:46:20
Field Definitions for table ALERT_LOG
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
MSG CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
TABLE DDL:
create table
alert_log ( msg varchar2(80) )
organization external (
type oracle_loader
default directory BDUMP
access parameters (
records delimited by newline
)
location('alert_damistst.log')
)
reject limit 1000;
**** QUESTION
I can still query the alert_log table in SQL*Plus, but those log and bad files are generated - is this an issue?
Example of a piece of the results from select * from alert_log:
MSG
--------------------------------------------------------------------------------
Thread 1 advanced to log sequence 5254 (LGWR switch)
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Thread 1 cannot allocate new log
Checkpoint not complete
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Wed Mar 9 14:33:09 2011
Thread 1 advanced to log sequence 5255 (LGWR switch)
Current log# 2 seq# 5255 mem# 0: /tds_oradata/redo02a.log
Current log# 2 seq# 5255 mem# 1: /u02/damistst/REDO_LOGS/redo02b.log
13076 rows selected.
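The .log and .bad files themselves are generally not a problem: they are the external table's load diagnostics, rewritten every time the table is queried, and the ORA-12899 rows are simply alert-log lines longer than MSG, which get rejected. A sketch that widens the column and suppresses the files via the access parameters:

create table alert_log ( msg varchar2(512) )
organization external (
  type oracle_loader
  default directory BDUMP
  access parameters (
    records delimited by newline
    nobadfile
    nologfile
  )
  location ('alert_damistst.log')
)
reject limit unlimited;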
View 7 Replies
Jun 12, 2008
I keep getting "ORA-01401: inserted value too large for column". No biggie - I've dealt with this multiple times before (but obviously not enough in this instance).
The data being entered is a SINGLE digit number - a number like 1, 2 or 3 - nothing fancy, just a plain, straight, everyday single-digit number. The field in question was set as field type Integer, and there is no set field size for integers - not in Oracle, anyway. Since it wasn't happy, I tried field types of Number and also Varchar2 set to 10 bytes. I have deleted the column from the table and re-created it as well.
Here's the even more puzzling bit: I can INSERT data into this field, BUT I cannot UPDATE the field with the exact same data. The data is being inserted from a csv file. The exact same csv file used for the insert works, but the same data in the same file will not update only that particular column.
If I delete the specific column's data from the csv file, everything goes through fine. If I hard-code the update for the field (e.g. SET field2 = '1' or even SET field2 = ' ') it still doesn't work, so I know it is not the csv file that is causing problems. I deleted all data from the csv file except the field in question - still no luck.
So after eliminating:
1. The field type
2. The field length
3. The data being inserted
4. The external source of the data
What else could possibly be the problem?
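Given that even a hard-coded SET field2 = '1' fails, the rejected value may not be the one in the statement at all. One possibility worth checking: a BEFORE UPDATE trigger writing some other (audit) column can raise ORA-01401 on updates while leaving inserts alone. A quick look, with the table name as a placeholder:

select trigger_name, triggering_event, status
from   user_triggers
where  table_name = 'MY_TABLE';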
View 1 Replies
Jan 21, 2011
I have declared a variable:
var1 varchar2(20000);
But when I assign some strings to that variable I still get a "value too large" message. What should I do?
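If the assigned string is longer than 20000 bytes, the declared size itself is the limit being hit. In PL/SQL a VARCHAR2 variable may be declared up to 32767 bytes, so enlarging the declaration may be all that is needed; a sketch:

declare
  var1 varchar2(32767);            -- PL/SQL maximum for varchar2
begin
  var1 := rpad('x', 25000, 'x');   -- this would overflow a varchar2(20000)
end;
/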
View 2 Replies
Jul 29, 2011
I am using the TRIM function in my SELECT query, but I still get whitespace in my output, and because of it I get the error "value too large for column ..." when I load the data into a table through SQL*Loader.
define APPName="&1"
set heading off;
set verify off;
set newpage 0
set feedback off;
set rtrimspool on;
set termout off;
set pagesize 40000;
[code].....
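One thing to note: TRIM (and RTRIM without a second argument) removes only blanks, so tabs, carriage returns and other whitespace pass straight through into the spool file. A hedged alternative that strips the whole trailing whitespace class (table and column names are placeholders):

select regexp_replace(col1, '[[:space:]]+$') as col1
from   t;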
View 3 Replies
Feb 6, 2012
I have a 27 million row table in the following format:
MEDCLM_MTH_SUM_KEY PRIMARY_DIAG_CD DIAG_CD2 DIAG_CD3 DIAG_CD4 DIAG_CD5 DIAG_CD6 DIAG_CD7 DIAG_CD8 DIAG_CD9 DIAG_CD10
2212990780 5552 78907 53170 5368
2231127242 V5481 7812 71595 4019 2761 2859 496 V4364 30501
I need to unpivot this data to get it to look like this:
MEDCLM_MTH_SUM_KEY DIAG_CD_LEVEL DIAG_CD
2212990780 PRIMARY_DIAG_CD 5552
2212990780 DIAG_CD2 78907
2212990780 DIAG_CD3 53170
[code]...
I was wondering if there was a quicker, more efficient way to do this.
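On 11g and later the UNPIVOT operator does this in a single pass over the table, which is usually quicker than ten UNION ALL branches; a sketch, with the table name as a placeholder:

select medclm_mth_sum_key, diag_cd_level, diag_cd
from   medclm_mth_sum
unpivot exclude nulls (
  diag_cd for diag_cd_level in (
    primary_diag_cd, diag_cd2, diag_cd3, diag_cd4, diag_cd5,
    diag_cd6, diag_cd7, diag_cd8, diag_cd9, diag_cd10
  )
);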
View 3 Replies
Sep 30, 2011
I need to add a new column to a very large table and update it with 'N' (similar to specifying DEFAULT 'N'). I'm using Oracle 9i. Which method is best, regarding speed, to update this column on the entire table? The table contains ~30 million records. I've read that parallel DML (here UPDATE) does not work on unpartitioned tables, and my table is not partitioned. If I specify:
update /*+ full(p) parallel(p,10) */ my_table p set p.my_column = 'N';
this, I think, will not speed up the operation on 9i. Our business does not accept CREATE TABLE AS SELECT followed by renaming the table and recreating all the indexes and so on.
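With parallel DML unavailable, a batched serial update at least bounds undo and can resume after an interruption; a sketch (it relies on the new column being NULL until set):

begin
  loop
    update my_table
    set    my_column = 'N'
    where  my_column is null
    and    rownum <= 100000;     -- batch size is arbitrary
    exit when sql%rowcount = 0;
    commit;
  end loop;
end;
/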
View 21 Replies
Apr 27, 2013
I'm trying to identify which PL/SQL unit or exactly what code is causing massive UGA consumption. How can I identify it?
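One common starting point is to rank sessions by their UGA high-water mark in v$sesstat and then look at what those sessions are executing; a sketch:

select s.sid, s.username, s.module, st.value as uga_bytes
from   v$sesstat  st
join   v$statname n on n.statistic# = st.statistic#
join   v$session  s on s.sid = st.sid
where  n.name = 'session uga memory max'
order  by st.value desc;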
View 5 Replies
Apr 26, 2013
Can an undo tablespace be too large and actually hurt performance? I have seen a system with a dedicated 1 TB drive for the undo tablespace (no guarantee) and an undo_retention of 7 days. Would this hurt performance? What about setting an undo_retention of 24 hours with no guarantee? The only mention I could find online said that it would not hurt performance, but I wanted to double-check. You would think Oracle does not care whether it deletes the undo after 15 minutes or at a later date, such as 7 days later, and performance should stay the same.
View 3 Replies
Aug 6, 2013
I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not enough room in the buffer cache, so how does that work? Will the transaction succeed or fail? And I have another doubt: does Oracle read data from memory only, or can it also read directly from disk?
View 11 Replies
Jan 23, 2012
How do I process large data FOR UPDATE in a table's CLOB column?
claimClob clob:=claim; -- claim large data
v_buf varchar2(1000);
amount binary_integer:=1000;
position binary_integer:=1;
[Code]....
Why does the FOR UPDATE not do anything?
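A LOB locator can only be written through if the row was locked when the locator was selected, so the SELECT ... FOR UPDATE has to happen in the same transaction before DBMS_LOB.WRITE. A sketch, with table and key names as placeholders:

declare
  claimclob clob;
  v_buf     varchar2(1000);
  amount    binary_integer := 1000;
  position  binary_integer := 1;
begin
  select claim into claimclob
  from   claims
  where  id = 1
  for update;                                          -- lock the row first

  dbms_lob.read(claimclob, amount, position, v_buf);   -- read one 1000-byte chunk
  v_buf := upper(v_buf);                               -- some change to the chunk
  dbms_lob.write(claimclob, amount, position, v_buf);  -- write it back in place
  commit;
end;
/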
View 4 Replies