Deleting Huge Number Of Records From Table?

Apr 29, 2013

Consider tables A, B, C, D, E, and F, each having 100,000+ records. Tables B, C, and D are dependent on table A (with foreign key constraints). When I delete records from all the tables, tables B, C, and D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.

Method I have used:

1. Created Temp table

2. Then deleted all records from B, C, D, E, and F matching the records in the temp table, in batches of 500:

delete from B where exists (select 1 from temp where b.col1=temp.col1);

3. Why is it taking so much time to delete the records in table A?
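A common cause is unindexed foreign key columns on the child tables: for every parent row deleted from A, Oracle must check each child table for referencing rows, and without an index on the FK column that check is a full scan per parent row. A minimal sketch, assuming B, C, and D reference A through col1 (the index names are hypothetical):

CREATE INDEX b_col1_idx ON B (col1);
CREATE INDEX c_col1_idx ON C (col1);
CREATE INDEX d_col1_idx ON D (col1);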

View 5 Replies



SQL & PL/SQL :: Deleting Duplicate Combination Of Records From Table?

Sep 29, 2011

How can I delete duplicate combinations of records from the table below?

CREATE TABLE test
(
gid NUMBER(10),
pid NUMBER(10)
);
INSERT INTO test VALUES (10,20);
INSERT INTO test VALUES (20,10);
INSERT INTO test VALUES (25,46);

[code]....

The condition: if one row's GID/PID pair is the reverse of another row's (GID = other PID and PID = other GID), then only one of the two records should be retained. For example, out of 10-20 and 20-10 only one record should be retained.

Expected result after deletion

GID PID
---------- ----------
10 20
25 46
89 64
15 16
19 26
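One common pattern is to normalise each pair with LEAST/GREATEST and keep only the first ROWID per normalised pair; a hedged sketch:

DELETE FROM test t
WHERE  t.ROWID > (SELECT MIN(t2.ROWID)
                  FROM   test t2
                  WHERE  LEAST(t2.gid, t2.pid)    = LEAST(t.gid, t.pid)
                  AND    GREATEST(t2.gid, t2.pid) = GREATEST(t.gid, t.pid));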

View 5 Replies View Related

JDeveloper, Java & XML :: Load Huge XML File With Hundreds Of Records Into Oracle Table?

Jun 29, 2011

I need to load (using SQL*Loader) a huge XML file, with several hundred records, into an Oracle table. The XML file schema is pretty simple and looks something like this:

<dataroot>
<record>
<companyname>LimitSoft S.A.</companyname>
<address>Street Number 1</address>

[code]...

I'm trying to use the help included in this link [URL]...

When they refer to the schema [URL]..., what should I use? I do not need to use the Oracle website to register anything, right?
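One alternative route (not necessarily what the linked page describes) is to get the document into an XMLTYPE column and shred it with XMLTABLE; a hedged sketch with hypothetical table and column names:

INSERT INTO company (companyname, address)
SELECT x.companyname, x.address
FROM   xml_stage s,
       XMLTABLE('/dataroot/record' PASSING s.doc
                COLUMNS companyname VARCHAR2(100) PATH 'companyname',
                        address     VARCHAR2(200) PATH 'address') x;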

View 4 Replies View Related

SQL & PL/SQL :: Group By Gives Wrong Value In Huge Data Records?

Jun 18, 2012

I have a table which contains huge data, around 1.2 million (12 lakh) records. When I use the SUM function grouped by accountname and docdate it gives the wrong value. Once I restart the server it gives the correct value. For one or two days it gives the correct value; after that I get the same problem again. If I restart again it gives the correct value.

I use Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 64 bit server on Linux.

View 6 Replies View Related

SQL & PL/SQL :: To Verify Huge Number Of Records In Two Different Databases

Jun 5, 2012

I need to verify a huge number of records in two different databases. Basically I want to check whether the same record exists in the other database's table or not. But as the number of records is in the billions, how would I verify them?

Checking records one by one would be hectic and time consuming. Is there any other option?
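If the databases can see each other over a database link, set comparison pushes the work to the servers; a hedged sketch (remote_db and the column names are assumptions):

SELECT key_col, col1, col2 FROM local_table
MINUS
SELECT key_col, col1, col2 FROM remote_table@remote_db;

Rows returned exist locally but not remotely; swap the operands to check the opposite direction.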

View 11 Replies View Related

PL/SQL :: Deleting Duplicate Records

Dec 20, 2012

I have a table that contains duplicate records.

For one particular employee, there are multiple records with the same data in the table:

EMPNO  ENAME  JOB  SAL  DOB
-----  -----  ---  ---  ----
1      A      X    100  1956
2      B      Y    200  1974
1      A      X    100  1956
3      C      Z    300  1920

[Code]....

Like this, I have the duplicates multiple times.

I have written the query below to delete the duplicate records, but it deletes only one record (if there are 5 duplicates it deletes only 1). What I want is: if there are 5 duplicates, delete 4 of them and keep 1 record in the table.

The query I am using to delete the duplicates is:

DELETE FROM Table1 a 
WHERE ROWID IN (SELECT MAX(ROWID) 
                FROM Table1 b
                WHERE a.ID = b.ID); 

It deletes only one row, but I want to delete 4 records out of 5 and keep one record.
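The usual pattern keeps the lowest ROWID per duplicate group and deletes everything above it; a hedged sketch, assuming EMPNO identifies a duplicate group:

DELETE FROM Table1 a
WHERE  a.ROWID > (SELECT MIN(b.ROWID)
                  FROM   Table1 b
                  WHERE  b.empno = a.empno);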

View 12 Replies View Related

Forms :: Deleting Records From Block

Aug 2, 2011

I want to delete records from a block in the form. Could you explain where (in which trigger) I should call SET_BLOCK_PROPERTY?
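One common arrangement (a hedged sketch; the block name is hypothetical) is a WHEN-BUTTON-PRESSED trigger that enables deletes on the block and then removes the current record:

BEGIN
  -- allow deletes on the block, then delete the record the cursor is on
  SET_BLOCK_PROPERTY('EMP_BLOCK', DELETE_ALLOWED, PROPERTY_TRUE);
  GO_BLOCK('EMP_BLOCK');
  DELETE_RECORD;
END;

The actual delete is posted to the database when the form commits.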

View 9 Replies View Related

SQL & PL/SQL :: Deleting Multiple Records Based On Conditions

Jun 28, 2010

I have a requirement where I need to retain the latest 3 records (based on creation date) for each customer_id and delete the older records. The customer_id and contract_number data in the test table are not unique.

Sample Table Script:

CREATE TABLE TEST
(
CUSTOMER_ID VARCHAR2(120 BYTE) NOT NULL,
CONTRACT_NUMBER VARCHAR2(120 BYTE) NOT NULL,
CREATION_DATE DATE NOT NULL
);
[code]...
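A hedged sketch of the usual analytic approach: number each customer's rows newest-first, then delete everything past the third:

DELETE FROM test
WHERE ROWID IN (
  SELECT rid
  FROM  (SELECT ROWID AS rid,
                ROW_NUMBER() OVER (PARTITION BY customer_id
                                   ORDER BY creation_date DESC) AS rn
         FROM   test)
  WHERE rn > 3);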

View 8 Replies View Related

SQL & PL/SQL :: Delete Statement Is Deleting 1200000 Records

Sep 7, 2010

I executed the following delete statement.

DELETE FROM sre_t
WHERE TO_CHAR(end_dt,'yyyy') < '2000'
   OR TO_CHAR(start_dt,'yyyy') < '2000';

It executed for 15 to 20 minutes, after which I got the error "session timed out". The table has 40 million (four crore) records, and the delete statement deletes 1.2 million records.
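To avoid one long-running transaction (and the timeout), the delete can be done in committed batches; a hedged sketch, with the batch size an arbitrary assumption:

BEGIN
  LOOP
    DELETE FROM sre_t
    WHERE (TO_CHAR(end_dt,'yyyy') < '2000' OR TO_CHAR(start_dt,'yyyy') < '2000')
      AND ROWNUM <= 10000;          -- one batch per iteration
    EXIT WHEN SQL%ROWCOUNT = 0;     -- nothing left to delete
    COMMIT;
  END LOOP;
  COMMIT;
END;
/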

View 4 Replies View Related

SQL & PL/SQL :: Deleting Duplicate Records Based On Repetition Of Certain Fields

Sep 15, 2011

I have the following situation and need support:

create table try_x
(a number PRIMARY KEY,
b NUMBER,
c NUMBER,
f_text VARCHAR2(10));

insert ALL
into try_x values (0,1,1,'abc')
into try_x values (1,1,1,'abc')
into try_x values (2,1,1,'xyz')
into try_x values (3,1,2,'abc')
into try_x values (4,1,2,'abc')
into try_x values (5,1,2,'abc')
into try_x values (6,1,3,'abc')
into try_x values (7,1,3,'abc1')
into try_x values (8,1,3,'abc2')
into try_x values (9,1,3,'abc2')
select * from DUAL;

Although a is the PK, records with identical b, c, and f_text are considered redundant. I need to delete all occurrences in the table where b, c, and f_text are repeated and leave only the unique ones. So I need the result to look like:

a b c f_text
-----------------
0 1 1 abc
2 1 1 xyz
3 1 2 abc
6 1 3 abc
7 1 3 abc1
8 1 3 abc2
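A hedged sketch that produces exactly this result: within each (b, c, f_text) group, keep the row with the smallest primary key a and delete the rest:

DELETE FROM try_x t
WHERE  t.a > (SELECT MIN(t2.a)
              FROM   try_x t2
              WHERE  t2.b = t.b
              AND    t2.c = t.c
              AND    t2.f_text = t.f_text);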

View 15 Replies View Related

Dropping Column From Huge Table

Aug 9, 2012

I am trying to drop a column from a huge partitioned table (not compressed, including the partitions). For information, I am working on an 11gR2 database.

i used below approach

1. alter table <tab_name> set unused column t1;
2. alter table <tab_name> drop unused columns;

then i got the below error message

ORA-39726: unsupported add/drop column operation on compressed tables

The first statement did work (I was able to add another column with the same name), but I still don't want the unused column on the table.
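ORA-39726 usually means at least one partition is marked compressed, even if the table looks uncompressed overall. A hedged sketch to check before retrying the drop:

SELECT partition_name, compression, compress_for
FROM   user_tab_partitions
WHERE  table_name = 'TAB_NAME';    -- substitute the real table name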

View 5 Replies View Related

PL/SQL :: Oracle 11g - Huge Table Statistics

Jun 13, 2012

I have a table which has 300+ columns and 13 million rows, on a 32 KB block size. This is a table in a data warehouse environment. The number of rows in the table hasn't changed much, but I see that the time taken to collect statistics has increased significantly. Initially it took only 15 minutes (with the same 13M rows); now it runs for 4+ hours. The max parallel servers is 4 (which is unchanged). The table is not partitioned.

OS: HP UX Itanium
Database: Oracle 11g (11.2.0.2)

Command is:
exec dbms_stats.gather_table_stats(ownname => 'ABC', tabname => 'ABC_LOAD', estimate_percent => dbms_stats.auto_sample_size, cascade => TRUE, degree => dbms_stats.auto_degree);

I would like to understand:

1) What could have caused this change in time, from 15 minutes to 4+ hours?
2) How can we gather statistics on a huge table at a faster rate?
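On question 2, one frequent culprit on very wide tables is histogram gathering across all 300+ columns. As a test, statistics can be gathered without histograms; a hedged sketch (whether SIZE 1 is acceptable depends on how the columns are used in predicates):

BEGIN
  dbms_stats.gather_table_stats(
    ownname    => 'ABC',
    tabname    => 'ABC_LOAD',
    method_opt => 'FOR ALL COLUMNS SIZE 1',  -- base stats, no histograms
    cascade    => TRUE,
    degree     => 4);
END;
/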

View 1 Replies View Related

Datatype Modification For Table With Huge Data

Aug 11, 2011

Below is my requirement.

Need to change the precision of a column in an existing table. Statistics about the table:

* has over 130 columns
* More than 300 million records
* Column to modify is #121 which has data
* No primary key defined

Since the column has data, it is not possible to modify with a simple Alter.

Second option: create a temp column in the same table, update it from the original, set the original to null, alter the original column, update it back from the temp, and drop the temp column. This approach is very expensive and time consuming.

Also, the column ID needs to be preserved as #121.
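One alternative worth evaluating is online redefinition, which copies the data once into an interim table defined with the new precision (and with the column kept in position 121). A hedged sketch only: the schema and table names are hypothetical, and the dependent-object and sync steps are omitted. Since there is no primary key, ROWID-based redefinition is used:

BEGIN
  DBMS_REDEFINITION.can_redef_table('SCOTT', 'BIG_TABLE',
                                    DBMS_REDEFINITION.cons_use_rowid);
  DBMS_REDEFINITION.start_redef_table(
    uname        => 'SCOTT',
    orig_table   => 'BIG_TABLE',
    int_table    => 'BIG_TABLE_INTERIM',
    options_flag => DBMS_REDEFINITION.cons_use_rowid);
  -- ... copy dependent objects and sync here ...
  DBMS_REDEFINITION.finish_redef_table('SCOTT', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
END;
/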

View 3 Replies View Related

Bulk Deletes Of Data From Huge Table

Jun 13, 2013

I am trying to delete 3 million records from a huge table which contains 3 billion records.

This is hitting DB performance and halting my users' other activities. Is there an easy way to delete such data quickly? I have tried a FORALL delete, but even that takes a lot of time.
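If the delete has to stay a delete, parallel DML at least spreads the work across PX servers; a hedged sketch, where the table name, criterion, and degree are all assumptions:

ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ PARALLEL(t, 8) */ FROM big_t t
WHERE  purge_flag = 'Y';   -- hypothetical delete criterion

COMMIT;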

View 5 Replies View Related

SQL & PL/SQL :: Deleting Partition Of A Table

Nov 19, 2012

create or replace
Procedure ReadingsPurge
As
v_sql varchar2(500);
v_date date;
p_count NUMBER;

[Code]...

-- Code below drops partitions that are older than the NoOfDays Parameter
OPEN c1;
LOOP
FETCH c1 INTO v_partition_name, v_high_value;
EXIT WHEN c1%NOTFOUND;

[Code]....

Above code is compiling successfully.

After I added the new lines (marked in red in the original post), when I tried to execute the stored procedure I got an error:

Error starting at line 1 in command:
execute ReadingsPurge
Error report:
ORA-00933: SQL command not properly ended
ORA-06512: at "CDC_USER.READINGSPURGE", line 30
ORA-06512: at line 1
00933. 00000 - "SQL command not properly ended"
*Cause:
*Action:
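One classic cause of ORA-00933 in a procedure like this is a trailing semicolon inside a dynamically built statement: DDL strings passed to EXECUTE IMMEDIATE must not end in ';'. A hedged sketch of the drop inside the loop (the table name is hypothetical, and this assumes the added lines build dynamic SQL):

v_sql := 'ALTER TABLE readings DROP PARTITION ' || v_partition_name;  -- no trailing ';'
EXECUTE IMMEDIATE v_sql;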

View 2 Replies View Related

Query Optimization On Join With A View On Huge Table?

Jun 22, 2011

I have this table

create table ACTIONARI_ARH
(
actionar_id NUMBER(10) not null,
id VARCHAR2(20) not null,
id_2 VARCHAR2(20),
tip VARCHAR2(1),
nume VARCHAR2(100),
prenume VARCHAR2(100),
adresa VARCHAR2(200),

[code]....

and this view

CREATE OR REPLACE VIEW ACTIONARI AS
SELECT "ACTIONAR_ID","ID","ID_2","TIP","NUME","PRENUME","ADRESA","LOCALITATE","JUDET","TARA","CERT_DECES","DATA_REGISTRU" Data_operare,"USER_MODIF","DATA_MODIF","REZIDENT"
FROM (
select

[code]....

The table has about 30 million records and holds people's names, addresses, personal id (id), and internal id (actionar_id), plus the date when a new address was added.

The view returns only the most recent info for each person (actionar_id).

If I run:

a) select * from actionari a where a.actionar_id = 'nnnnnnn' - the result is returned immediately; Oracle uses the index and does not do a full table scan.

b) select * from actionari a where a.actionar_id in ('nnnnnnn','mmmmmm','ooooooo') - the result is returned immediately; Oracle uses the index and does not do a full table scan.

My problem is when I use this view in a join. Let's assume I have another table with no more than 500 records, something like:

create table SMALL_TABLE
(
actionar_id NUMBER(10) not null,
......
);

and if i run

select *
from SMALL_TABLE s
join actionari a
on a.actionar_id = s.actionar_id;

it takes forever to process ("forever" meaning 1-3 minutes). Looking at the execution plan, Oracle does a full table scan, builds the view for all 7 million unique persons, and only then joins the result with the actionar_ids in the small table to return the desired 500-record result. I am using Oracle 10g.
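One workaround is to apply the small table's id list to the base table before the most-recent-row logic runs, instead of materializing the view for all 7 million persons. A hedged sketch that assumes the view's "most recent" logic is a ROW_NUMBER over data_registru (column list abbreviated):

SELECT *
FROM  (SELECT t.actionar_id, t.id, t.nume, t.prenume, t.adresa,
              ROW_NUMBER() OVER (PARTITION BY t.actionar_id
                                 ORDER BY t.data_registru DESC) AS rn
       FROM   actionari_arh t
       WHERE  t.actionar_id IN (SELECT actionar_id FROM small_table))
WHERE  rn = 1;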

View 2 Replies View Related

Server Administration :: How To Drop A Column In Huge Table

Aug 8, 2011

I want to drop a column in a huge table which contains about 420,000,000 rows. I used the ALTER TABLE ... DROP COLUMN command and found that it takes a long time and generates huge redo.

Is there any quicker way to drop a column in a huge table?
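A hedged sketch of the two-step alternative (table and column names are hypothetical): marking the column unused is a quick dictionary-only change, and the physical drop can then be done later with periodic checkpoints to cap undo/redo:

ALTER TABLE big_t SET UNUSED COLUMN old_col;             -- fast, metadata only
ALTER TABLE big_t DROP UNUSED COLUMNS CHECKPOINT 10000;  -- slow, but checkpointed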

View 5 Replies View Related

Backup & Recovery :: Taking A Dump Of Huge Table

Jul 10, 2012

Is it possible to take the dump of a huge table, say 500 GB, into multiple files and then import it into another database? If yes, how can we do it?

Note that it is a single table of 500 GB.
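Data Pump supports exactly this; a hedged sketch, where the credentials, directory object, and table name are all assumptions. FILESIZE splits the dump into multiple files and %U numbers them:

expdp system/password TABLES=scott.big_t DIRECTORY=dp_dir DUMPFILE=big_t_%U.dmp FILESIZE=20G PARALLEL=4

impdp system/password TABLES=scott.big_t DIRECTORY=dp_dir DUMPFILE=big_t_%U.dmp PARALLEL=4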

View 1 Replies View Related

SQL & PL/SQL :: Script Hangs When Deleting From Table

May 24, 2010

I'm testing a procedure which loads data into my database, and after each test I want to empty some of the tables and reset the sequences. I have this script to do that...

DELETE FROM COM_MERGE;
COMMIT;
DELETE FROM COM_TITLE;
COMMIT;
DELETE FROM COM_ISSUE;
COMMIT;
DELETE FROM COM_PAGE_ELEMENT;
COMMIT;
DELETE FROM COM_ELEMENT;
COMMIT;
DELETE FROM COM_STORY_TITLE;
COMMIT;
BEGIN
COM_RESET_SEQUENCES;
END;

Today I added the call to the sequences procedure to my script, but I have been using the script to delete from tables for a number of days without problem. However, today I am finding that when I run the script it works OK the first couple of times, but when I try running it a third time, it hangs after the second delete (in other words it stops when it gets to the delete from COM_ISSUE). After this happened the first couple of times I stopped the db and restarted it; then the script was OK twice, but again the script hangs. There is no error message, but the script fails to complete.

I didn't know if it was because originally I had one commit at the end of the script, so I added commits after each delete, but that didn't solve it. I am using SQL Developer, but I have found the same problem when running the script from SQL*Plus. This is the definition of the COM_ISSUE table (just in case the table is the source of the problem). There is only one row in COM_ISSUE.

CREATE TABLE "BILL"."COM_ISSUE"
(
"CI_ID" NUMBER NOT NULL ENABLE,
"CI_TITLE" NUMBER NOT NULL ENABLE,
"CI_DATE" NUMBER NOT NULL ENABLE,
"CI_PRICE" NUMBER NOT NULL ENABLE,
"CI_PUBLISHER" NUMBER NOT NULL ENABLE,
CONSTRAINT "COM_ISSUE_PK" PRIMARY KEY ("CI_ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536
[code]....
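A hang on DELETE with no error usually means another session holds a lock on the rows (an uncommitted transaction from an earlier run or another tool, for example). A hedged sketch to look for the blocker the next time it hangs:

SELECT sid, serial#, blocking_session, event
FROM   v$session
WHERE  blocking_session IS NOT NULL;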

View 9 Replies View Related

Oracle 11g Express Edition - Load Huge Data Into Table

Nov 6, 2012

I am using Oracle 11g Express Edition. I have a .csv file containing 500 MB of data which needs to be uploaded into an Oracle table.

Which would be the best method to upload the data into the table? The data is employee ticket history, which is huge.

How do I do a mass upload of data into an Oracle table?
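SQL*Loader is the usual tool for this; a hedged sketch of a control file, where the file, table, and column names are all hypothetical:

LOAD DATA
INFILE 'tickets.csv'
APPEND
INTO TABLE ticket_history
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(emp_id, ticket_id, status, created_date DATE "YYYY-MM-DD")

Run with direct path for speed: sqlldr scott/tiger control=tickets.ctl direct=true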

View 3 Replies View Related

PL/SQL :: Deleting Large Number Of Rows From Table

Apr 30, 2013

Consider tables A, B, C, D, E, and F, each having 100,000+ records. Tables B, C, and D are dependent on table A (with foreign key constraints). When I delete records from all the tables, tables B, C, and D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.

Method I have used:

1. Created Temp table

2. Then deleted all records from B, C, D, E, and F matching the records in the temp table, in batches of 500:
delete from B where exists (select 1 from temp where b.col1=temp.col1);

3. Why is it taking so much time to delete the records in table A?

Is it the case that when deleting data from such a master table, Oracle checks all the dependent tables even if no dependent data is present?
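That is effectively what happens when a child table's foreign key column is unindexed: the per-row check against each child becomes a full scan. A hedged sketch to list single-column foreign keys pointing at A whose columns have no index:

SELECT cc.table_name, cc.column_name
FROM   user_constraints c
JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE  c.constraint_type = 'R'
AND    c.r_constraint_name IN (SELECT constraint_name
                               FROM   user_constraints
                               WHERE  table_name = 'A')
AND    NOT EXISTS (SELECT 1
                   FROM   user_ind_columns ic
                   WHERE  ic.table_name      = cc.table_name
                   AND    ic.column_name     = cc.column_name
                   AND    ic.column_position = 1);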

View 12 Replies View Related

SQL & PL/SQL :: How To Restrict User (Schema) From Deleting Data From Table

Nov 2, 2012

I want to know how to restrict a user (schema) from deleting the values from a table created in the same schema.

Below is the example.

I have created a table employee in abc schema which has two values.

EMPLOYEE
ABC
XYZ

In the above scenario, the abc user should only be able to run SELECT queries on the EMPLOYEE table:

SELECT * FROM EMPLOYEE;

He should not be able to use any other DML command on that table. If he does, an "insufficient privileges" error should be thrown.
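Privileges cannot do this for the owner (an owner always has DML rights on his own objects), but a trigger can block the statements; a hedged sketch, where the error number and message are arbitrary:

CREATE OR REPLACE TRIGGER employee_no_dml
BEFORE INSERT OR UPDATE OR DELETE ON employee
BEGIN
  RAISE_APPLICATION_ERROR(-20001, 'DML on EMPLOYEE is not allowed');
END;
/

Note this raises ORA-20001 rather than the "insufficient privileges" error, and the owner can still drop or disable the trigger.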

View 10 Replies View Related


SQL & PL/SQL :: Oracle 10g - Update Records In Target Table Based On Records Coming In From Source

Jun 1, 2010

I am trying to update records in the target table based on the records coming in from the source. For instance, if an incoming record is present in the target table I update it in the target; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on the calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log I find that the Informatica code is fine, but it is the update part that takes a long time (more than 5 days to update one million records). Below are the TARGET TABLE definition and the UPDATE query.

TARGET TABLE:
CREATE TABLE OPERATIONS.DENIAL_REGRET_FACT
(
CALENDAR_KEY INTEGER NOT NULL,
DAY_TIME_KEY INTEGER NOT NULL,
SITE_KEY NUMBER NOT NULL,
RESERVATION_AGENT_KEY INTEGER NOT NULL,
LOSS_CODE VARCHAR2(30) NOT NULL,
PROP_ID VARCHAR2(5) NOT NULL,
[code].....
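For upserts at this scale, a single set-based MERGE usually beats row-by-row update/insert logic; a hedged sketch, with the staging table name, join keys, and column lists abbreviated and assumed:

MERGE INTO operations.denial_regret_fact t
USING src_stage s
ON (t.calendar_key = s.calendar_key AND t.day_time_key = s.day_time_key)
WHEN MATCHED THEN
  UPDATE SET t.loss_code = s.loss_code
WHEN NOT MATCHED THEN
  INSERT (calendar_key, day_time_key, loss_code)
  VALUES (s.calendar_key, s.day_time_key, s.loss_code);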

View 9 Replies View Related

Inserting Data In New Column Where Table Has Huge Data

May 26, 2013

I am trying to add a new column to a table and populate it with data from another column of the same table:

alter table POSITION add INT_MK_DATA_ID number(10,0) null;
update POSITION set INT_MK_DATA_ID = INST_MARKET_DATA_ID;
commit;

As there are a huge number of records in the POSITION table, it is taking forever to execute this update.
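If INT_MK_DATA_ID is only ever meant to mirror INST_MARKET_DATA_ID, a virtual column (11g) avoids the backfill entirely; a hedged sketch, and only an option if the two columns never need to diverge:

ALTER TABLE position ADD (int_mk_data_id NUMBER(10)
  GENERATED ALWAYS AS (inst_market_data_id) VIRTUAL);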

View 1 Replies View Related

PL/SQL :: Queries To Get Data In Batches From A Huge Data Table

Jan 3, 2013

Here is my problem: I need to create some files in my own format (let's say 5,000 records each) from a huge data table (which may contain 5 million records), and I want this creation to be multi-threaded.

So how can I form queries to efficiently fetch records 1..5000, then 5001..10000, and so on? I can form something like select * from table where rownum < 5000 and not exists (already fetched records), but that is not efficient.
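A hedged sketch of one way to make the batches stable and non-overlapping: assign every row a batch number with ROW_NUMBER, then let each thread bind its own batch (id is an assumed unique ordering key):

SELECT *
FROM  (SELECT t.*,
              CEIL(ROW_NUMBER() OVER (ORDER BY t.id) / 5000) AS batch_no
       FROM   big_t t)
WHERE  batch_no = :batch_no;   -- each thread binds a different batch_no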

View 5 Replies View Related

Server Administration :: Reorganize A Table And Index After The Deletion Of Records From Table?

Feb 7, 2012

We deleted millions of records from a table.

1. Is it necessary to reorganize a table and index after the deletion of records from the table? I ask because I see some change in table size after table and index reorganization.

2. Will reorganizing the table and index improve database performance?
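Rather than a full reorganization, segment shrink can reclaim the space below the high water mark after a mass delete; a hedged sketch (requires an ASSM tablespace, and the table name is hypothetical):

ALTER TABLE big_t ENABLE ROW MOVEMENT;
ALTER TABLE big_t SHRINK SPACE CASCADE;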

View 7 Replies View Related

PL/SQL :: Selecting Records From 125 Million Record Table To Insert Into Smaller Table?

Jul 17, 2013

Oracle 11g. I have a large table of 125 million records - t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company. I need to select records from this large table that fit certain demographic criteria and insert them into a smaller table - T3_Leads - that will be updated with regard to when the lead is mailed and with other relevant information. How do I best select records from this 125-million-record table to insert into the smaller table?

I have tried a variety of things - views, materialized views, direct insert into the smaller table... I think I am probably missing other approaches. My current attempt has been to create a view using the query that selects the records, as shown below, then use a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.

CREATE VIEW v_market AS
WITH got_pairs AS
(
  SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */
         l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address,
         l.city, l.state, l.household_key, l.hh_type AS l_hh_type,
         l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender,
         l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data,
         l.p1_seq_no, l.p2_seq_no,
         ROW_NUMBER() OVER (PARTITION BY l.address_key
                            ORDER BY l.hh_verification_date DESC) AS r_num
  FROM   t3_universe e
  JOIN   t3_universe l
         ON  l.address_key = e.address_key
         AND l.zip_code    = e.zip_code
         AND l.p1_gender  != e.p1_gender

[code]....
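For what it's worth, a WITH clause does work inside INSERT ... SELECT, so the intermediate view is not required; a hedged sketch with the column lists abbreviated (APPEND requests a direct-path insert):

INSERT /*+ APPEND */ INTO t3_leads (zip_code, address_key, household_key)
WITH got_pairs AS
(
  SELECT l.zip_code, l.address_key, l.household_key,
         ROW_NUMBER() OVER (PARTITION BY l.address_key
                            ORDER BY l.hh_verification_date DESC) AS r_num
  FROM   t3_universe e
  JOIN   t3_universe l
         ON  l.address_key = e.address_key
         AND l.zip_code    = e.zip_code
         AND l.p1_gender  != e.p1_gender
)
SELECT zip_code, address_key, household_key
FROM   got_pairs
WHERE  r_num = 1;

COMMIT;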

View 2 Replies View Related

SQL & PL/SQL :: Insert All Records From External Table Into Export Table

Mar 25, 2013

Following is the requirement:

External Table: WKSHT_FILE_EXT
wksht_line

Export Table: Wksht_export
global_id   varchar2(10)
wksht_line  varchar2(250)
[code]....

Step 1. Insert all records from the external table into the export table. Truncate the export table first. (See the sketch after the sample datasheet below.)

Step 2. Read in a record from the export map table.

Step 3. Search through the export table records looking for the key words BRANCH =. Compare the branch code with the branch code from the map table.

Step 4. If a match is found, mark all records in the export table for that worksheet with the global ID from the export map table, as follows. The first line of a worksheet is marked by the word WKSHTS. The last line of the worksheet is marked by the words COMPANY CONFIDENTIAL. We will need to capture the line break, so also mark the next line after the COMPANY CONFIDENTIAL line.

Step 5. Continue with steps 2 - 4 until all records have been processed from the export map table.

First I have to create a procedure to insert data from the external table into the export table. Global ID will be blank; it will be updated with the mapping table's Global ID when the EB column's data (i.e. 8P, 2B, etc.) matches the BRANCH = NA, 2B, etc. of the datasheet loaded from the external table. Following is the sample datasheet:

WKSHTS AAAAA BBBBBBBBBBB ELECTRONICS INC. TIME REPORT-DATE PAGE
SORT - BR, SLSREP AEC FIELD SALES REPRESENTATIVE 16:14 09/21/12 1
BRANCH = 2B
EMPLOYEE NAME SALVAAG, GREGG Days in the Month 28
[code]....

There are 2 pages. I have to split this long report, stored in the WKSHT_LINE column of the export table, into 2 records; likewise, 500 pages would mean 500 records. Then find BRANCH = and the word that follows it (i.e. NA, 2B, etc.); if it matches the mapping table's EB column data, then the mapping table's GLOBAL_ID will be written to the export table's GLOBAL_ID, which is blank.
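A hedged sketch of step 1 only (truncate, then copy from the external table, leaving GLOBAL_ID blank for the later matching pass):

TRUNCATE TABLE wksht_export;

INSERT INTO wksht_export (global_id, wksht_line)
SELECT NULL, wksht_line
FROM   wksht_file_ext;

COMMIT;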

View 1 Replies View Related

Modifying Fields With Huge Data

Aug 26, 2011

I have 2 questions; because they may be inter-related I am posting them in a single post. These queries are related to Oracle (PL/SQL).

1. I am trying to increase the size of a field in a table which has almost 2 million records, and the alteration query runs for almost an hour and then rolls back. I am wondering if there is a better way of doing it.

2. I have modified the size of a field in a table from VARCHAR2(10) to VARCHAR2(20). Now, when I try to roll back the modification, it is not letting me change the size from VARCHAR2(20) back to VARCHAR2(10). No data has been inserted after the modification.
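On point 1, note that merely widening a VARCHAR2 is a dictionary-only change and should be near-instant; an hour-long run suggests the ALTER is doing more than a length increase, or is blocked by locks. On point 2, shrinking the column back only succeeds when every stored value already fits the smaller size; a hedged sketch with hypothetical names:

ALTER TABLE t MODIFY (f VARCHAR2(20));   -- instant, metadata only
SELECT MAX(LENGTH(f)) FROM t;            -- must be <= 10 ...
ALTER TABLE t MODIFY (f VARCHAR2(10));   -- ... for this to succeed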

View 1 Replies View Related






