Data Archive Script Is Taking Too Long To Delete A Large Table
Aug 8, 2013
We have data archive scripts that move data for a date range to a different table. Each script has two parts: first it copies the data from the original table to the archive table, and then it deletes the copied rows from the original table. The first part executes very fast, but the deletion takes too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, even though the predicate is the primary key. More info below:
CREATE TABLE "APP"."MON_TXNS" ( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE, "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE, "ID_PAYER" NUMBER(12,0), "ID_PAYER_PI" NUMBER(12,0), "ID_PAYEE" NUMBER(12,0), "ID_PAYEE_PI" NUMBER(12,0), "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE, "STR_TEXT" VARCHAR2(60 CHAR), "DAT_MERCHANT_TIMESTAMP" DATE, "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE), "DAT_EXPIRATION" DATE, "DAT_CREATION" DATE, "STR_USER_CREATION" VARCHAR2(30 CHAR), "DAT_LAST_UPDATE"
[Code]...
Data is first moved to the table schema3.OTW, and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
  2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2798378986

--------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT        |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   1 |  DELETE                 | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI  |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN | OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL    | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
--------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
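For a driving set of only a few thousand IDs, one way to test whether per-row primary key access beats the hash semi join is a batched PL/SQL delete. A minimal sketch, assuming the primary key index on ID_TXN is usable; each delete then drives off the PK index:

DECLARE
  TYPE t_ids IS TABLE OF schema3.otw.id_txn%TYPE;
  l_ids t_ids;
BEGIN
  -- fetch the archived ids once
  SELECT id_txn BULK COLLECT INTO l_ids FROM schema3.otw;
  -- delete via the primary key, in a single bulk-bound call
  FORALL i IN 1 .. l_ids.COUNT
    DELETE FROM schema1.mon_txns WHERE id_txn = l_ids(i);
  COMMIT;
END;
/

If the delete is still slow, the time often goes into maintaining unindexed foreign keys on child tables or firing triggers, rather than into the table access itself.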
Sep 14, 2010
I have upgraded an Oracle database from 9i to 11g using the export and import utilities. After the migration we are facing a performance issue in report generation: the first execution of a report takes a very long time, and when we generate the same report 2-3 times the execution time improves considerably compared with the first run.

Two days back I restarted the database and found the same issue. There are around 300 reports, and it is not possible to generate all of them 2-3 times every time we restart the database.
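Since export/import does not reliably carry optimizer statistics across versions, one common first step after such a migration is to re-gather statistics so that first executions do not pay for dynamic sampling or bad plans. A minimal sketch; the schema name APP is a placeholder:

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APP',                        -- placeholder schema
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);                        -- include indexes
END;
/

Note that part of the difference between the first and later runs is simply the buffer and library caches warming up after a restart, which statistics will not eliminate.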
Sep 20, 2010
We are firing a normal DROP command on our database; the database version is 10.2.0.4 and it runs on AIX v5. The command is taking more time than usual.

When I monitor the session I can see that a call is being made to the procedure "aw_drop_proc". Could this be what is taking more time than usual?

We do not have any partitions on the nested tables. We have a pack of tables that we drop through a procedure; the pack comprises nested tables and normal tables. Dropping an empty nested table takes around 6 seconds, while dropping an empty normal table takes 17 milliseconds. We have a partition on the normal table.

The same operation takes much less time on Windows than on AIX.
Jun 18, 2011
We are planning to set up a Data Guard (maximum performance) configuration between two Oracle 9i databases on two different servers.

The archive logs on the primary server are deleted via an RMAN job based on a policy; I am wondering how I should delete the archive logs that are shipped to the standby.

Is putting a cron job on the standby to delete archive logs that are, say, 2 days old the proper approach, or is there a built-in Data Guard option that automatically deletes archive logs that are no longer needed or are two days old?
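On 9i there is no built-in Data Guard deletion policy, so a cron job on the standby is the usual approach. A minimal sketch, assuming the applied logs land in a single directory; the path, file pattern, and 2-day retention are placeholders, and this is only safe if redo apply never lags longer than the retention window:

# crontab entry on the standby (placeholder path, pattern, and retention):
0 3 * * * find /u01/oradata/stby/arch -name '*.arc' -mtime +2 -exec rm -f {} \;

From 10g onwards, RMAN's archivelog deletion policy handles this more safely.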
Mar 17, 2012
I have the following query about Data Guard: if I take an RMAN archive log backup with the DELETE INPUT option, how will the archive logs be copied to the standby database?

For example, I am taking the archive backup as:
RMAN>backup archivelog all delete input;
Here, consider that a few archive logs were not copied to the standby database (due to a network issue): how will the standby receive these missing archives when they have already been deleted by the RMAN backup on the primary side?

I have not been able to find any documentation related to this.
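From 10g onwards, RMAN can be told not to delete archive logs that have not yet been applied on the standby, which prevents this situation; and if a needed log has already been backed up and deleted on the primary, it must be restored there so it can be shipped again. A minimal sketch; the sequence numbers are placeholders:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 1234 UNTIL SEQUENCE 1240;

With the policy in place, BACKUP ARCHIVELOG ALL DELETE INPUT skips logs the standby still needs.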
Sep 27, 2011
How can we tune our RMAN DUPLICATE, which is taking 10+ hours to finish? The current production database size is 800 GB, and the approximate duration of the RMAN online backup (FULL, including archived logs) is 4 to 5 hours.
Here's the rman backup script we used:
sql 'alter system archive log current';
list archivelog all;
run
{
show all;
report schema;
backup database plus archivelog delete all input;
}
exit
Here's the rman duplicate command we used:
run
{
set until time "to_date(to_char(sysdate,'Mon DD YYYY') || ' 02:30:00', 'Mon DD YYYY HH24:MI:SS')";
allocate auxiliary channel ch1 type disk;
duplicate target database to testdb;
}
exit
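DUPLICATE throughput usually scales with the number of auxiliary channels restoring from the backup. A minimal variation of the same script; four channels is an assumption to tune against the available I/O and CPU:

run
{
set until time "to_date(to_char(sysdate,'Mon DD YYYY') || ' 02:30:00', 'Mon DD YYYY HH24:MI:SS')";
allocate auxiliary channel ch1 type disk;
allocate auxiliary channel ch2 type disk;
allocate auxiliary channel ch3 type disk;
allocate auxiliary channel ch4 type disk;
duplicate target database to testdb;
}
exit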
Feb 5, 2013
I have the stored procedure below:
create or replace
PROCEDURE TESTPERFORMANCE (
o_statuscode OUT NUMBER,
o_statusdescription OUT VARCHAR2,
starttime out timestamp,
time_after_query_TESTJOB out timestamp,
[Code]...
This procedure takes around 35 minutes when the cursor loops over 35,000 records and the TESTJOBTRANSACTIONS table has 90,000 records. How can I reduce the execution time?
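Without the full body it is hard to be specific, but row-by-row loops at this scale usually respond well to bulk binds. A hypothetical sketch only; the cursor, the column names, and the UPDATE stand in for whatever the real loop does per row:

DECLARE
  CURSOR c IS SELECT id FROM testjob;            -- placeholder cursor
  TYPE t_ids IS TABLE OF NUMBER;
  l_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_ids LIMIT 1000;  -- fetch in batches of 1000
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT                 -- one bulk-bound DML per batch
      UPDATE testjobtransactions                 -- placeholder DML
         SET processed_flag = 'Y'
       WHERE job_id = l_ids(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/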
Aug 23, 2012
Expdp directory=xxx.dmp dumpfile=aaa.dmp logfile=xxx.log FULL=Y
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24.87 MB
Processing object type SCHEMA_EXPORT/USER
[code]...
Then my export hangs. I checked the alert log and found nothing, killed the job and reran it, but the same thing happened. I checked the status and it says EXECUTING.
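One way to see what a seemingly hung Data Pump job is actually doing is to attach to it and watch the worker status. A minimal sketch; the job name is a placeholder, so look up the real one first:

SQL> SELECT job_name, state FROM dba_datapump_jobs;

$ expdp system/password attach=SYS_EXPORT_FULL_01

Export> status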
Jul 17, 2013
11.2.0.3. This is for a build; we are still in development, so there is no risk of data loss. As part of the build I drop the user, re-create it, and re-create the objects, which lets us test the build all the way through. It's our process. This user has some tables with several thousand partitions. I ran a 10046 trace, and Oracle is using PL/SQL loops to do DML against the data dictionary. Any way to speed this up?

I am going to turn off the recycle bin during the build and turn it back on afterwards; anything else I can do? Right now I just issue 'drop user cascade'. Part of it is the weak hardware we have in the development environment: it takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this), and we do fairly frequent builds. I can't change the build process; my only option is to try to make this run a little faster.
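Most of the time in a cascade drop of a heavily partitioned schema goes into per-object dictionary work. One thing that sometimes helps, besides disabling the recycle bin, is dropping the big partitioned tables explicitly with PURGE before the cascade. A minimal sketch, with the schema name as a placeholder:

BEGIN
  FOR t IN (SELECT table_name
              FROM dba_tables
             WHERE owner = 'BUILD_USER'    -- placeholder schema
               AND partitioned = 'YES')
  LOOP
    EXECUTE IMMEDIATE 'DROP TABLE BUILD_USER.' || t.table_name || ' PURGE';
  END LOOP;
END;
/

DROP USER build_user CASCADE;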
Feb 26, 2013
I have a session on a system here that has been stuck on a DELETE statement for a very long time, and the session is pegging the CPU. Using TOAD, here is the "current statement":
DELETE FROM WORKFLOW
WHERE ID = :B1
ID is the primary key of the table. Here are some relevant stats, also from TOAD's session browser:
Elapsed time = 35507986900
CPU time = 35531815481
Buffer gets = 972040769
Disk reads = 951289273
Executions = 71462
I'm not sure I understand "executions", because from the information I have from the people who initiated this, this particular delete should only be occurring 30 times... maybe that stat means something other than what I think it does. I also ran a trace for 30 seconds using:
SQL> begin dbms_monitor.session_trace_enable(session_id=>97, serial_num=>15,
  2  waits=>true, binds=>true); end;
  3  /
PL/SQL procedure successfully completed.
SQL> begin dbms_monitor.session_trace_disable(session_id=>97, serial_num=>15); end;
  2  /
PL/SQL procedure successfully completed.
I ran tkprof over the resulting trc file:
[root@localhost trace]# /oradb/devmain/product/11.2.0/dbhome_1/bin/tkprof ltw35qa1_ora_19558.trc
output = delete_scattered2.txt
TKPROF: Release 11.2.0.1.0 - Development on Wed Feb 27 04:04:50 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
[root@localhost trace]#
The resulting file is attached. The offending query in the tkprof output, based on my interpretation, is the select from the CMTE00 table, which contains 2,344,482 rows and has no index on the workflow_id column. The relationship between the CMTE00 and WORKFLOW tables is 1 to 1. There is a foreign key on CMTE00 pointing to the primary key of WORKFLOW, which is what I assume initiated this query: I assume this is Oracle checking referential integrity, since our code is not executing that statement. Also of interest, prior to this delete statement the corresponding entry in CMTE00 was deleted in the same transaction. Google searching for "db file scattered read" led me to one of your (Don, if you read this) articles, which appears to indicate that blocks are being fetched off the disk, and this is what is taking up all the time.
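If CMTE00.workflow_id really is an unindexed foreign key, every delete from WORKFLOW forces a full scan of the 2.3M-row child table for the integrity check. The usual fix is simply to index the FK column; a minimal sketch, with a placeholder index name:

CREATE INDEX cmte00_workflow_id_ix ON cmte00 (workflow_id);

With the index in place, the recursive referential check becomes an index probe instead of a repeated full scan.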
Nov 8, 2010
I was asked in a telephone interview about committing a delete (a few million rows) in an Oracle log table which had a trillion rows in total. He said that the delete took 2 days.

He then asked: if the commit is performed afterwards (assuming huge rollback segments are allocated), how long does that commit take?
Apr 26, 2010
I have a query on how to view sample data from a very large table (more than 10 million rows). I just need to see some sample data to get an idea of the application-related data it holds.

My question is:
Select *
from Sample_table
where rownum < 10
Is this a good way to view the sample data?

My understanding is that ROWNUM is assigned to the rows once all the rows are retrieved. So what is the best way to view a sample? I am not sure of any condition to apply at the initial time of querying.
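A ROWNUM < 10 predicate actually stops the query after the first few rows are produced, so it is cheap; but if the goal is a spread of rows from across the table rather than the first ones found, the SAMPLE clause is an alternative. A minimal sketch; the percentage is a placeholder to tune:

SELECT *
  FROM sample_table SAMPLE (0.01);   -- reads roughly 0.01% of the rows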
Mar 9, 2010
In my code I am using a delete statement which is taking too much time to execute.
Statement is as follow:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');
Tables Used:
TRADE_ORDER_EMP_ALLOCATION - row count 329,525,880
LOAD_TRADE_ORDER - row count 29,281

Every column in the "IN" clause and the select clause has an index on it.

The number of rows to be deleted varies every run (it may be hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it contains distinct values.

The table TRADE_ORDER_EMP_ALLOCATION is RANGE partitioned on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions.

Is there a way to make the above delete statement execute faster?
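With roughly 329 million rows in the target and only about 29 thousand driving rows, one thing worth testing is an EXISTS rewrite together with a composite index covering all four join columns, so the optimizer can drive from LOAD_TRADE_ORDER into index probes instead of scanning the large table. A minimal sketch; the index name is a placeholder:

CREATE INDEX toea_purge_ix
    ON trade_order_emp_allocation
       (artemis_source_system_id, nm_artemis_source_system, cd_book_key, activity_dt)
    LOCAL;

DELETE FROM trade_order_emp_allocation t
 WHERE EXISTS (SELECT 1
                 FROM load_trade_order l
                WHERE l.ind_is_bad_record = 'N'
                  AND l.artemis_source_system_id = t.artemis_source_system_id
                  AND l.nm_artemis_source_system = t.nm_artemis_source_system
                  AND l.cd_book_key = t.cd_book_key
                  AND l.activity_dt = t.activity_dt);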
Oct 18, 2010
Scenario:
Our application uses two instances: one for the live, active data and the other for reports data. We have a process that moves data from the live instance to the reports instance every night. In a single-database environment the process works without any issues; however, when we move to the RAC environment, inserts into the large table in the reports database get locked and we are unable to insert data into the reports db.
What we are performing is:
Insert into my_table_rpt select * from may_table_live@db_link_to_livedb;
Issues:
my_table_rpt gets locked.

We have found a workaround: disable locking on the destination table, and re-enable it after the insert.
ALTER TABLE my_table_rpt DISABLE TABLE LOCK;
Insert the data to the reports database table
Then
ALTER TABLE my_table_rpt ENABLE TABLE LOCK
Question:
Why does the large destination table (my_table_rpt) get locked in the RAC environment?
Oct 8, 2013
I have a report with a single row having a large number of columns, and I have to use a scroll bar to see all of them. Is it possible to design the report in the format below (half the columns on one side of the page, half on the other)?
Column1 Data    Column11 Data
Column2 Data    Column12 Data
Column3 Data    Column13 Data
Column4 Data    Column14 Data
Column5 Data    Column15 Data
Column6 Data    Column16 Data
Column7 Data    Column17 Data
Column8 Data    Column18 Data
Column9 Data    Column19 Data
Column10 Data   Column20 Data

I am using APEX version 4.2.3 on Oracle 11g XE.
Oct 12, 2012
We are working on a performance tuning issue where a table has a LONG datatype column and the number of round trips increases with the number of rows fetched, i.e. one round trip for every row.
Background.
1. created a table with LONG data type.
2. inserted bulk load of data.
3. set auto trace on and executed below tests..
SQL> select count(*) from test_long;
COUNT(*)
----------
6110
SQL> desc test_long
Name Null? Type
----------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------
ID NUMBER
TESTLONG LONG
[code].....
But on the LONG column, irrespective of the array size, the round trips do not reduce.
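LONG columns are fetched one row per round trip by design, so the array size cannot help; converting the column to a CLOB restores array fetching. The conversion can be done in place; a minimal sketch:

ALTER TABLE test_long MODIFY (testlong CLOB);

After the conversion, multi-row array fetches work again, since clients receive LOB locators along with the rows.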
Jul 7, 2011
I have a partitioned table that is streamed to another database. I need to archive data on that table. That is I need to add a partition and remove a partition.
If I make those changes to the source table, will it stream over to the destination table?
If not, can I ...
Pause streaming, make the changes to the source table, make the same changes to the destination table, then re-enable streaming. I know making data changes to the destination table can screw up Streams, but I am not sure whether that holds for DDL.
Sep 16, 2010
I am not able to delete a one-month-old archive log file on the production database, even though it has already been applied on the standby database. While deleting the files, it throws the error "Another program is using it".
Version:10.2.0.4
OS: Windows server 2003.
Jul 26, 2013
I am trying to remove archive log files on my standby site, but I am getting the following message. There are lots of archive logs present on the storage, but RMAN is unable to display them.
RMAN> list archivelog all;
specification does not match any archived log in the repository

RMAN>
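That message usually means the standby controlfile has no record of those files, not that the files are gone. Cataloguing the directory (10g onwards) makes RMAN see them; a minimal sketch with a placeholder path:

RMAN> CATALOG START WITH '/u01/app/oracle/standby_arch/';
RMAN> LIST ARCHIVELOG ALL;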
Aug 30, 2012
My oracle database is 11.2.0.2 RAC RDBMS on RHEL 5.6
We recently created a physical standby database. How do we delete standby archive logs from the physical standby?
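On 11.2 the usual pattern is to set a deletion policy on the standby so that only applied logs are eligible, and then delete through RMAN. A minimal sketch, run on the standby; the 2-day window is a placeholder:

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-2';

If the logs are kept in the fast recovery area, the same policy also lets Oracle age them out automatically under space pressure.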
Jul 24, 2012
How can I insert data into a table so that previously entered data is deleted and only the current data is kept? For example:
CREATE TABLE test
(
name VARCHAR2(20),
id NUMBER
)
INSERT INTO test VALUES ('aaa',5500);
[code]....
I get two rows. Now, when I run the insert statements, I want to delete the previously stored data and insert only the current data:
INSERT INTO test VALUES ('aaa',8);
INSERT INTO test VALUES ('aaa',9);
It must show ('aaa', 8) and ('aaa', 9), but not the previous values.

NOTE: we cannot do something like UPDATE ... SET ... WHERE id = ..., because the values are dynamic.
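If the rule is that the latest batch of inserts replaces whatever was stored for that name, one option is to run the delete and the inserts as a single transaction; a minimal sketch:

BEGIN
  DELETE FROM test WHERE name = 'aaa';   -- clear the previous rows for this key
  INSERT INTO test VALUES ('aaa', 8);
  INSERT INTO test VALUES ('aaa', 9);
  COMMIT;
END;
/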
Apr 13, 2011
We are getting the error below:
ora-00257 archiver error. connect internal only until freed
When we tried to remove the unwanted archive files through ASMCMD, we got the errors below:
ASMCMD> rm -ef 2011_04_05/
Unknown option: e
usage: rm [-rf] <name1 name2 . . .>
ASMCMD> rm -rf 2011_04_05/
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_2_seq_27215.1143.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_3_seq_21762.826.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15177: cannot operate on system aliases (DBD ERROR: OCIStmtExecute)
Further, we checked the FRA usage:
SQL> select * from v$flash_recovery_area_usage;
FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE 0 0 0
ONLINELOG 0 0 0
ARCHIVELOG 3.19 0 38
BACKUPPIECE 0 0 0
IMAGECOPY 0 0 0
FLASHBACKLOG 0 0 0
We also checked whether any ARC process was holding a lock on the archive files:
> ps -ef | grep -i ora_arc*
oracle 6989 1 0 15:07 ? 00:00:00 ora_arc0_TXCOM1
oracle 6991 1 0 15:07 ? 00:00:00 ora_arc1_TXCOM1
oracle 12246 12164 0 15:17 pts/4 00:00:00 grep -i ora_arc*
oracle 13452 1 0 Mar23 ? 00:01:07 ora_arc0_TWEBAPPS1
oracle 13454 1 0 Mar23 ? 00:00:30 ora_arc1_TWEBAPPS1
oracle 15402 1 0 Mar23 ? 00:00:50 ora_arc0_SXCOM1
[Code] ........
But we were not able to remove those .arc files from that folder. Finally, we shut down all the instances and deleted those files manually.
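Files that ASMCMD refuses to drop because an ARCn process still has them open can usually be removed through RMAN instead, which updates the controlfile and lets ASM release the files cleanly, without bouncing the instances. A minimal sketch; the 1-day window is a placeholder:

RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-1';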
Jan 9, 2013
I have a table with around 650,000,000 rows. We need to delete about 60,000,000 rows at the end of every month, and the same number of rows accumulates over the following month. The deletion usually takes all night. We are using 10gR2 on IBM AIX. The procedure we use to delete is:
declare
ln_count number:=0;
begin
for i in (select rowid from table1 where some_id<2012090000)
loop
delete from table1
[code]...
When this procedure is running, I mostly see that the session is busy in user I/O waits on 'db file sequential read'. Will using a cursor instead give better results?
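For a fixed monthly purge of this size, the usual long-term fix is range partitioning on the purge key, so that the monthly delete becomes a partition drop instead of 60 million row-by-row deletes. A hypothetical sketch, assuming the table can be range-partitioned by some_id per month; the partition name is a placeholder:

ALTER TABLE table1 DROP PARTITION p_2012_08 UPDATE GLOBAL INDEXES;

Dropping a partition is a dictionary operation that takes seconds regardless of the row count; UPDATE GLOBAL INDEXES keeps any global indexes usable during the drop.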
Oct 28, 2008
I need to create a stored procedure in Oracle 9i which will automatically delete data one by one from a particular table and then, by means of the same procedure, insert records one by one into the same table.
Apr 13, 2012
How can I delete data that is related to many child tables, honouring the ON DELETE SET NULL and ON DELETE CASCADE rules, by using a subprogram?
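The SET NULL and CASCADE behaviour lives on the foreign key constraints themselves; once they are declared, a single delete on the parent does all the child maintenance, whether or not it is wrapped in a subprogram. A minimal sketch with placeholder tables:

CREATE TABLE parent (
  id NUMBER PRIMARY KEY
);

CREATE TABLE child_cascade (
  id        NUMBER PRIMARY KEY,
  parent_id NUMBER REFERENCES parent (id) ON DELETE CASCADE
);

CREATE TABLE child_setnull (
  id        NUMBER PRIMARY KEY,
  parent_id NUMBER REFERENCES parent (id) ON DELETE SET NULL
);

-- one statement: child_cascade rows are deleted, child_setnull rows get NULL parent_id
DELETE FROM parent WHERE id = 1;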
Nov 19, 2010
How can I check for and delete duplicate rows in a table?
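The classic pattern keeps one ROWID per duplicate group and deletes the rest. A minimal sketch; t, col1, and col2 are placeholders for the table and the columns that define a duplicate:

DELETE FROM t
 WHERE rowid NOT IN (SELECT MIN(rowid)
                       FROM t
                      GROUP BY col1, col2);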
Nov 10, 2012
I am using oracle 11g database.
Unfortunately I deleted the data from the main table, and then I ran an ALTER statement. Now how do I retrieve the data?
Apr 27, 2012
I have a table which contains 821,177 rows in total. Now I am trying to delete around 484,000 rows from this table using just one filter, i.e. my query is something like below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 484,000 rows of data, but it is taking a very long time; to be precise, almost 25 hours. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532
--------------------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | DELETE STATEMENT   |          |   499 | 20459 |    39   (0)| 00:00:01 |
|   1 |  DELETE            | RESOURCE |       |       |            |          |
[code]....
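When more than half of a table is being removed, it is often faster to keep the survivors than to delete the victims: copy the rows to keep, then swap the tables. A minimal sketch; the names are placeholders, indexes, constraints, and grants must be recreated, and note that created_by <> 'MIGN' also excludes rows where created_by is NULL:

CREATE TABLE resource_keep AS
  SELECT * FROM resource WHERE created_by <> 'MIGN';

-- after validating the copy:
DROP TABLE resource PURGE;
RENAME resource_keep TO resource;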
May 10, 2012
Can we delete from multiple tables through a single query?

Suppose we have two tables, the first one emp and the second client. I want to delete all data from both emp and client through a single statement.
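A single DELETE statement works on one table only, but one PL/SQL block (or one script) can do both as a unit; a minimal sketch:

BEGIN
  DELETE FROM emp;
  DELETE FROM client;
  COMMIT;   -- both deletes commit together
END;
/

For removing all rows, TRUNCATE TABLE emp and TRUNCATE TABLE client are much faster, though they cannot be rolled back.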
Sep 24, 2010
I am considering the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether importing to tape is possible, and if so, whether the data would be accessible if needed later.