Expdp Slow With Error ORA-01555 - Snapshot Too Old On 11g?
May 15, 2013
Data Pump export is very slow. A 50GB export has taken more than 24 hours and fails with the error below:
Database Version:11.2.0.2.0
OS: Windows server 2008 r2
I increased RAM by 10GB and the CPU count from 6 to 8, but the issue is the same.
Error:
ORA-31693: Table data object "BNCSDB"."MS_DATA_PTORE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 20 with name "_SYSSMU20_4037596720$" too small
Export log:
Export: Release 11.2.0.2.0 - Production on Tue May 14 20:03:25 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@orcl dumpfile=BCSDB04_19.dmp logfile=BCSDB04_19.log
I have a problem with Oracle Flashback Query. If I flash back to two hours earlier, I receive an ORA-01555: snapshot too old error. My question is: how can I extend the window that flashback can reach, in terms of hours or days?
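For what it's worth, the reach of a flashback query is bounded by how long undo is retained, not by a flashback-specific setting. A minimal sketch (table and undo tablespace names are illustrative):

SELECT * FROM my_table AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '2' HOUR);
-- the query above can only go back as far as undo is still available
ALTER SYSTEM SET UNDO_RETENTION = 86400;        -- retention target of 24 hours, in seconds
ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;  -- optional: never overwrite unexpired undo

The undo tablespace still has to be large enough to hold that much undo. Note that FLASHBACK DATABASE (as opposed to a flashback query) is governed by DB_FLASHBACK_RETENTION_TARGET and the fast recovery area instead.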
We are running a query in one of our databases over a database link. The query fails in the middle with "ORA-01555: snapshot too old". Which database does this error message refer to? Is it the database we are logged into, or the database the db link points to?
We are getting the errors below while migrating partitioned tables using expdp.
The source and target databases are both running 10.2.0.5, and importantly the source database has no active sessions: it is a clone of a production database and no one is accessing it.
ORA-31693: Table data object "DPMMGR"."WHSE_CTNR_EVNT_W":"MSG_PRCS_N"."MSG_PRCS_N_DC556" failed to load/unload and is being skipped due to error: ORA-02354: error in exporting/importing data ORA-01555: snapshot too old: rollback segment number 31 with name "_SYSSMU31$" too small ORA-31693: Table data object "DPMMGR"."RLTM_PRDCT_LOG":"RPL_20120814" failed to load/unload and is being skipped due to error: ORA-02354: error in exporting/importing data ORA-01555: snapshot too old: rollback segment number 14 with name "_SYSSMU14$" too small
The undo tablespace has enough free space, but the expdp still fails.
SQL> /

TABLESPACE  Totalspace(MB)  Used Space(MB)  Freespace(MB)  % Used  % Free
----------  --------------  --------------  -------------  ------  ------
UNDO01              145096          115338          29758   79.49   20.51

SQL> show parameter undo
[code]....
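Free space in the undo tablespace is often not the limiting factor for ORA-01555; what matters is whether unexpired undo is kept for at least as long as the longest-running export query. A quick check (V$UNDOSTAT keeps one row per 10-minute interval):

SELECT begin_time,
       maxquerylen,          -- longest running query (seconds) in the interval
       tuned_undoretention,  -- retention Oracle actually managed to keep
       ssolderrcnt           -- ORA-01555 count in the interval
FROM   v$undostat
ORDER  BY begin_time;

If ssolderrcnt rows line up with intervals where maxquerylen exceeds tuned_undoretention, the undo is being overwritten too soon even though the tablespace shows free space.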
I am having problems with a procedure that refreshes materialized views. This is the error I am getting: ORA-12008: error in materialized view refresh path ORA-01555: snapshot too old: rollback segment number 14...
I have two databases with the same procedure and the same objects. They run at the same time; however, there are times when one database gets the error while the other runs the procedure without any error.
From what I have read, this error is caused by stale data, and the suggested solution is to make the source refresh a predecessor of my procedure/job. But then how was it possible for the other database to run it completely even though it is not linked as a successor of the source?
I am getting the ORA-01555 snapshot too old error while executing a procedure that runs in an OLTP system. The procedure works on the master table of my database, which holds 1.2 GB of data. It uses a MERGE statement that inserts or updates rows in the master table, and the MERGE condition uses a subquery over two tables accessed through a database link (one of which is updated quite frequently). The procedure runs every 6 hours, i.e. 4 times a day, scheduled as a job; the procedure is attached.
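One common workaround for ORA-01555 on long MERGEs over a database link is to copy the remote rows into a local staging table first, so the long-running part of the statement no longer depends on undo at either end staying available. Purely a sketch; all object names below are hypothetical:

-- one-time setup: empty staging table with the same shape as the remote source
CREATE TABLE stg_remote_src AS
  SELECT key_col, val_col FROM remote_table@remote_db WHERE 1 = 0;

-- at the start of each run: short remote query, then commit
TRUNCATE TABLE stg_remote_src;
INSERT INTO stg_remote_src SELECT key_col, val_col FROM remote_table@remote_db;
COMMIT;

-- drive the MERGE from the local copy instead of the db link
MERGE INTO master_table m
USING stg_remote_src s
ON (m.key_col = s.key_col)
WHEN MATCHED THEN UPDATE SET m.val_col = s.val_col
WHEN NOT MATCHED THEN INSERT (key_col, val_col) VALUES (s.key_col, s.val_col);
COMMIT;

This keeps the distributed query itself short, which is usually what lets the ORA-01555 creep in when the remote table is updated frequently.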
We take a full logical backup (export) during a freeze window (no user logins), but it still raises the error "ORA-01555: snapshot too old: rollback segment number 7 with name "_SYSSMU7$" too small".
I ran into a problem recently in an Oracle 11g R2 RAC database. Normally, exporting the sample user SCOTT takes hardly one minute, but in our RAC environment this export runs for 20 to 40 minutes.
Here is the output:
---------------------------------------------------------------
[oracle@rac2 dump]$ expdp system/sys123 directory=test_dir dumpfile=scott1.dmp schemas=scott
Export: Release 11.2.0.1.0 - Production on Mon Jan 23 09:30:26 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=test_dir dumpfile=scott1.dmp schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
[Code] .......
On another machine (where I configured RAC again on Linux) I got the same problem. I also can't find any relevant documents on My Oracle Support (Metalink). My host information:
OS: AIX 6.1
Storage: IBM (using ASM)
Database: Oracle 11g R2
How do I tackle the error ORA-01555: snapshot too old: rollback segment number XX with name NNN too small? I am using Oracle 8i. Are there ways to tackle it without extending the rollback segments?
we are getting "snapshot too old" error when we executing a procedure. The error details are as below:
"ORA-01555: snapshot too old: rollback segment number 208 with .."
The code framework is below. Basically we have 3 cursors selecting data, and the base tables seem to be updated frequently, so we understand why this error can occur. Is there a better way to restructure the code framework below?
Create or replace procedure proc1 as
  CURSOR C1 is select a from tab1;
  CURSOR C2 (p1 VARCHAR2) is select * from tab2;
[code]...
Since the DBA is not willing to change UNDO_RETENTION or the other UNDO* parameters, we need to work around the problem by restructuring the code, as in the sketch below.
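As a rough illustration of one restructuring option (object names are hypothetical, and whether it fits depends on which table is being updated underneath you): copy the driving rows into a stable work table with a short query, then run the long loop against the copy, so no cursor has to stay read-consistent against the hot table for the whole run.

-- one-time setup: a private work copy of the driving set
CREATE GLOBAL TEMPORARY TABLE tab1_work ON COMMIT PRESERVE ROWS
  AS SELECT a FROM tab1 WHERE 1 = 0;

CREATE OR REPLACE PROCEDURE proc1_restructured AS
BEGIN
  DELETE FROM tab1_work;
  INSERT INTO tab1_work SELECT a FROM tab1;   -- short read of the volatile table
  COMMIT;

  FOR r IN (SELECT a FROM tab1_work) LOOP     -- long loop reads the stable copy
    NULL;  -- existing per-row logic (cursor C2 etc.) goes here
  END LOOP;
END;
/

The same idea can be applied to whichever of the three cursors reads the most volatile base table; the point is to stop fetching for a long time from a table that other sessions keep changing.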
But it gives the following error: ORA-31693: Table data object "LOG"."DAILY_LOG" failed to load/unload and is being skipped due to error: ORA-00904: "YYYYMMDD": invalid identifier
I tried a simple SQL with YYYYMMDD and it works fine; entry_date is a CHAR field. Where am I going wrong in the QUERY clause?
I want to take an export of the table MESSAGE, filtered for the day of 17 JUL 2013 (just to limit the size). I used the following expdp command, but it's not working.
expdp SYSTEM directory=DATA_PUMP_DIR dumpfile=DB_16_08_2013.dmp logfile=FA0001P_BG_16_08_2013.log TABLES=schema.MESSAGE QUERY=schema.MESSAGE:where created_on between to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') and to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')
But with a SELECT query I am able to retrieve the rows for that date.
select * from MESSAGE where created_on between to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') and to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')

Here is the command with the syntax error:
[oracle@orcl log]$ expdp SYSTEM directory=DATA_PUMP_DIR dumpfile=DB_16_08_2013.dmp logfile= DB_16_08_2013.log TABLES=schema.MESSAGE QUERY=schema.MESSAGE:where created_on between to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') and to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')
-bash: syntax error near unexpected token `('
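The -bash error is the shell choking on the unquoted parentheses and spaces in the QUERY clause, not Data Pump rejecting it. One way around it (the file name below is just an example) is to move everything into a parameter file so the shell never parses the clause:

Contents of exp_message.par:
directory=DATA_PUMP_DIR
dumpfile=DB_16_08_2013.dmp
logfile=FA0001P_BG_16_08_2013.log
tables=schema.MESSAGE
query=schema.MESSAGE:"where created_on between to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') and to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')"

Then run:
expdp SYSTEM parfile=exp_message.par

The earlier ORA-00904: "YYYYMMDD": invalid identifier usually points at the same kind of problem: shell quoting turned part of the date expression into something the parser read as an identifier. A parameter file avoids that as well.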
For some days I have been getting this error during Data Pump export:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31687: error creating worker process with worker id 1
ORA-31687: error creating worker process with worker id 1
ORA-31688: Worker process failed during startup.
The error is random; if I retry after a few minutes, the expdp works correctly.
I've been using datapump for a long time now but I have not come across this problem before.
Importing just two tables:
Table1: data = 100MB, 11 million rows
Table2: data = 4.2GB, 19.6 million rows
Table1 ran for approx. 5 hours; Table2 ran for approx. 15 hours.
If I run the impdp importing both tables with the same par file, the default tablespace of the user running the import runs out of space with ORA-01691: unable to extend lob segment <owner>.SYS_LOG0001175799C00045$$ by 512 in tablespace USERS. I do not understand why it is creating objects in my schema in order to import tables into someone else's schema.
The environment is Red Hat Linux 4.1.2-51 running Oracle 11.2.0.1 (Oracle 11gR2). This is a 9-node RAC using ASM.
One of the users received the error ORA-01555: rollback segment too small.
I have read about the error and saw that it is caused by transactions against the same data filling one of the UNDOTBS rollback segments to its maximum. It happens only once in a while, each time on a different rollback segment.
I've read that I should enlarge those segments by 1. bringing the segment offline, 2. dropping it, and 3. recreating it with a bigger size and possibly bigger extent settings.
I've also read that those actions aren't relevant if the parameter UNDO_MANAGEMENT is set to AUTO, which is actually my case. Is that why my ALTER ROLLBACK SEGMENT RB_SEG_NAME_11$ OFFLINE; command doesn't behave as expected?
1. How do I solve my issue with the rollback segment being too small to contain the snapshot at the requested SCN?
2. What does the parameter UNDO_MANAGEMENT mean? Should I change it in order to adjust my rollback segment attributes and prevent the error from recurring?
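In short, with UNDO_MANAGEMENT = AUTO the _SYSSMU... segments are created, sized and dropped by the instance itself, which is why manual ALTER ROLLBACK SEGMENT commands are not the tool. The knobs you do have are the undo tablespace size and the retention target; as a sketch (file path and sizes are illustrative):

SHOW PARAMETER undo
ALTER SYSTEM SET UNDO_RETENTION = 10800;   -- retention target of 3 hours, in seconds
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' RESIZE 8G;
-- or add a datafile / enable AUTOEXTEND on the undo tablespace instead

With more undo space and a retention target at least as long as the longest-running query, the automatically managed segments stop recycling undo that old snapshots still need.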
I am getting the error below in the alert log file when my application calls a procedure.
ORA-01555 caused by SQL statement below (Query Duration=1576 sec, SCN: 0x09a2.5dda3165):
Fri Sep 16 16:33:40 2011
UPDATE SSPT_NETWORK_DETAILS SET INCLUDE_OFFERS = 'Yes' WHERE SESS_ID = SESS_ID
There is no ROLLBACK statement in my procedure. As per my understanding, the ORA-01555 error occurs when:
1. the required old image is no longer in the undo when we roll back the transaction, or
2. a SELECT query hits it because of the delayed block cleanout mechanism.
But I don't know why this UPDATE statement is causing the ORA-01555 error.
I am getting the ORA-01555 error below in the alert log every day.
ORA-01555 caused by SQL statement below (SCN: 0x09ad.86a4562a):
Sat Jan 21 08:39:45 2012
SELECT (NVL(MAX(BLOCK_ID + BLOCKS ),0) * :b1 ) / 1024 FROM DBA_EXTENTS WHERE TABLESPACE_NAME = :b2 AND FILE_ID = :b3
I cannot find the sql_id here, so how can I determine which process or session is issuing this query? Before increasing the undo size, I need to analyze why it is occurring.
NAME                 TYPE     VALUE
-------------------- -------- ------------------------------
undo_management      string   AUTO
undo_retention       integer  96000
undo_tablespace      string   undo
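To find where that DBA_EXTENTS statement comes from, one option is to look it up in the shared pool by its text and then map the sql_id to sessions; a sketch:

SELECT sql_id, executions, module, sql_text
FROM   v$sql
WHERE  UPPER(sql_text) LIKE '%FROM DBA_EXTENTS WHERE TABLESPACE_NAME%';

SELECT sid, serial#, username, program, module
FROM   v$session
WHERE  sql_id IN (SELECT sql_id FROM v$sql
                  WHERE UPPER(sql_text) LIKE '%FROM DBA_EXTENTS WHERE TABLESPACE_NAME%');

If the statement has already aged out of the shared pool, DBA_HIST_SQLTEXT and DBA_HIST_ACTIVE_SESS_HISTORY (Diagnostics Pack) keep a history that can be searched the same way.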
Following are the details from the AWR report (00:00 till 01:00 on 21-Apr-2013); note that the error was produced at 00:42.
Undo Segment Summary    DB/Inst: DBCPY/dbcpy01    Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
[code]....
Undo Advisor information taken 'now' is as follows:
SQL> select dbms_undo_adv.longest_query(sysdate-2,sysdate) from dual;

DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
                                        379650

SQL> select dbms_undo_adv.required_retention from dual;
[code]....
In the above situation, what should be my first choice (assuming increasing space is not an issue): increase the undo tablespace or increase undo retention?
If the latter is the choice, then what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case, shall I set it to something greater than max(maxquerylen), i.e. 379,650 + X? Or shall I increase the undo tablespace size instead?
From the Undo Advisor output it looks to me that even if I increase the undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DML). Is that right?
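For the sizing side of that question, the same (undocumented) DBMS_UNDO_ADV package already used above has a function that estimates the undo space needed for a given retention; treating its output as a rough estimate only:

SELECT dbms_undo_adv.required_undo_size(400000) AS required_undo_size FROM dual;
-- 400000 is roughly the observed longest query (379,650 s) plus some headroom

If the reported size fits in the current undo tablespace, raising UNDO_RETENTION toward that value is the simpler change; if not, the tablespace has to grow first, since the tuned retention can only stretch as far as the available space allows.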
I have a requirement to create a generic script that generates an AWR report from two timestamps. We are planning to run a load test on multiple servers. The user will key in the start time and the end time of the load test, and the script should round the user-entered times to the nearest snapshot ids and generate an AWR report. I tried using the dba_hist_snapshot fields begin_interval_time and end_interval_time, but it failed.
The servers are running 11g as well as 10g, and a few are RAC while others are single instances.
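As a sketch of the lookup part (bind names are illustrative): take as the begin snapshot the last one that ended at or before the user's start time, and as the end snapshot the first one that ended at or after the user's end time. The pair can then be fed to ?/rdbms/admin/awrrpti.sql or to dbms_workload_repository.awr_report_html from the wrapper script.

SELECT MAX(snap_id) AS begin_snap
FROM   dba_hist_snapshot
WHERE  dbid = :dbid AND instance_number = :inst
AND    end_interval_time <= TO_TIMESTAMP(:load_start, 'DD-MON-YYYY HH24:MI');

SELECT MIN(snap_id) AS end_snap
FROM   dba_hist_snapshot
WHERE  dbid = :dbid AND instance_number = :inst
AND    end_interval_time >= TO_TIMESTAMP(:load_end, 'DD-MON-YYYY HH24:MI');

The same queries work on 10g and 11g; on RAC they are simply run once per instance_number.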
I am trying to generate an AWR report for database observation, but no snapshots are listed. Below is the output of awrrpt.sql:
SQL> @?/rdbms/admin/awrrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 1140984076 AFCCV               1 afccv

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: html
Type Specified: html
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
* 1140984076        1 AFCCV        afccv        SERVICEDB1

Using 1140984076 for database Id
Using          1 for instance number

Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent (n) days of snapshots being listed.
Pressing <return> without specifying a number lists all completed snapshots.
Enter value for num_days: 3
Listing the last 3 days of Completed Snapshots
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap:
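When awrrpt.sql reaches this prompt with no snapshots listed, it usually means AWR has not been collecting. A few checks and a manual snapshot, as a sketch:

SHOW PARAMETER statistics_level
-- must be TYPICAL or ALL; with BASIC, automatic AWR snapshots are disabled
SELECT COUNT(*) FROM dba_hist_snapshot WHERE dbid = 1140984076;
SELECT snap_interval, retention FROM dba_hist_wr_control;
EXEC dbms_workload_repository.create_snapshot;

It can also simply mean the most recent snapshots are older than the 3 days requested at the num_days prompt, in which case pressing return there to list all completed snapshots will show them.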
In the AWR report I saw some segments in the Segments by ITL Waits section. The number of waits reaches 100 in an 8-hour snapshot window. Is that a small or big number? Should I consider increasing INITRANS, or is there nothing to worry about because values of 80-100 are not too high?
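If the decision is to raise INITRANS, keep in mind it only applies to blocks formatted after the change, so the existing segment has to be rebuilt to benefit; a sketch with illustrative names:

ALTER TABLE app.hot_table INITRANS 8;
ALTER TABLE app.hot_table MOVE;                  -- rebuild existing blocks with the new setting
ALTER INDEX app.hot_table_pk REBUILD INITRANS 8; -- indexes go UNUSABLE after the MOVE and need rebuilding anyway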
Materialized views are normally used for summarized data access.
CREATE MATERIALIZED VIEW mv_snapshot_A
REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 20/1440
WITH PRIMARY KEY
AS SELECT * FROM A;
That does not seem to be the case here, as the materialized view is just a full select. The overhead of the snapshot logs is a concern for this core table. Can we turn off logging in 10g? The materialized view is defined as fast refresh / build immediate.
The main requirement here is to refresh the snapshot every 15 minutes so that users can see the updated information (the flow of data from one location to another).
Users get the location-wise count of data and can drill down into details such as the system-wise count within each location. As the base table is volatile, the materialized view is used so that when the user clicks for location-wise details the data stays static for 15 minutes and the user doesn't get confused.
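One detail worth noting on the logging question: REFRESH FAST requires a materialized view log on the master table, so the log cannot simply be switched off while fast refresh is in use. For the primary-key example above it would look roughly like the first statement below; the alternative, if the log overhead on the core table is unacceptable, is a complete refresh on the same 15-minute schedule, which needs no log:

CREATE MATERIALIZED VIEW LOG ON A WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW mv_snapshot_A
REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 15/1440
AS SELECT * FROM A;

Whether the complete refresh is actually cheaper depends on the table size; for a large core table, rewriting the whole MV every 15 minutes can easily cost more than maintaining the log.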