Server Utilities :: Slow Expdp After Upgrade
Oct 1, 2012
We were running database 10.2.0.4 on an AIX 570 machine, and our nightly expdp took about 2 hours to complete.
We have now upgraded to 10.2.0.5 on a 770 machine, and the same command takes 6 hours, even though both the database and the hardware were upgraded.
The command is:
expdp T24SILK/oracle directory=backup dumpfile=exp_beod_T24_%U_$dt.dmp logfile=exp_T24_$dt.log EXCLUDE=TABLE:"LIKE '%TRACE'" parallel=6
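A minimal sketch of the same export driven through a parfile, which keeps the EXCLUDE quoting away from the shell (the backup directory object and the $dt date variable are assumed from the command above):

dt=$(date +%Y%m%d)
cat > exp_t24.par <<EOF
directory=backup
dumpfile=exp_beod_T24_%U_${dt}.dmp
logfile=exp_T24_${dt}.log
EXCLUDE=TABLE:"LIKE '%TRACE'"
parallel=6
EOF
expdp T24SILK/oracle parfile=exp_t24.par

One first thing to compare after such a move is whether parallel=6 still matches the CPU count and I/O layout of the new machine.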
View 1 Replies
May 29, 2013
I have a question about expdp and RMAN. Does expdp take its backup at the block level the way RMAN does? Which one is faster, expdp or RMAN?
View 16 Replies
Nov 7, 2012
All my SYS dictionary views are very slow after upgrading my database from 10.2.0.4 to 11.2.0.3 on AIX 6.1.
For example
select * from ALL_TAB_COLUMNS; -- takes 19 seconds in 11.2.0.3 versus a few milliseconds in 10.2.0.4
I have deleted and regathered both the fixed object and the dictionary statistics, but I still face this issue.
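For reference, the standard way to refresh both sets of statistics after an upgrade (which the poster reports already having tried) is roughly:

sqlplus -s / as sysdba <<'EOF'
-- regather optimizer statistics on the data dictionary and on the
-- X$ fixed tables; both are commonly stale right after an upgrade
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
EOF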
View 5 Replies
May 15, 2013
Our Data Pump export is very slow: a 50 GB export has taken more than 24 hours and fails with the error below.
Database Version:11.2.0.2.0
OS: Windows server 2008 r2
We added 10 GB of RAM and increased CPUs from 6 to 8, but the issue is the same.
Error:
ORA-31693: Table data object "BNCSDB"."MS_DATA_PTORE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 20 with name "_SYSSMU20_4037596720$" too small
Export log:
Export: Release 11.2.0.2.0 - Production on Tue May 14 20:03:25 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@orcl dumpfile=BCSDB04_19.dmp logfile=BCSDB04_19.log
[code]...
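ORA-01555 on a long-running export normally points at undo capacity rather than CPU or RAM, which is consistent with the extra hardware making no difference. A hedged sketch of the usual first checks (the 86400 value is just an illustrative 24-hour retention):

sqlplus -s / as sysdba <<'EOF'
-- how long committed undo is currently retained, in seconds
SHOW PARAMETER undo_retention
-- raise retention to cover the whole export window (example: 24 hours);
-- the undo tablespace must also be large enough or autoextensible
ALTER SYSTEM SET undo_retention = 86400;
EOF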
View 12 Replies
Jan 23, 2012
I recently ran into a problem with an Oracle 11g R2 RAC database. Normally, exporting the sample user SCOTT takes barely a minute, but in our RAC environment the same export runs for 20 to 40 minutes.
Here the output :
---------------------------------------------------------------
[oracle@rac2 dump]$ expdp system/sys123 directory=test_dir dumpfile=scott1.dmp schemas=scott
Export: Release 11.2.0.1.0 - Production on Mon Jan 23 09:30:26 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=test_dir dumpfile=scott1.dmp schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
[Code] .......
On another machine (where I configured RAC again, on Linux), I hit the same problem. I also cannot find any definitive documents on Metalink. My host information:
OS : AIX 6.1
Storage : IBM (using ASM)
Database : Oracle 11g R2
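On 11.2, one hedged thing to try is pinning the Data Pump job to the local instance with CLUSTER=N, since worker coordination across RAC nodes (and dump directories not visible from every node) is a common cause of exactly this slowdown; reusing the poster's own command:

expdp system/sys123 directory=test_dir dumpfile=scott1.dmp schemas=scott cluster=N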
View 4 Replies
Apr 13, 2011
I would like to export specific tables (not the entire schema) including their metadata. I am using a parameter file for expdp.
Tables=emp,dept
Does this also include all the metadata, or should I also add an INCLUDE clause like the one below to the parfile?
INCLUDE=Indexes,Sequences,Procedures,Views
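A table-mode export brings along the named tables' dependent objects (indexes, constraints, triggers, grants, table statistics) by default, so no INCLUDE should be needed for those. Sequences, procedures, and views are schema-level objects and, as far as I know, are not picked up by a table-mode export at all. A minimal parfile sketch (credentials and directory are placeholders):

cat > emp_dept.par <<'EOF'
directory=DATA_PUMP_DIR
dumpfile=emp_dept.dmp
logfile=emp_dept.log
tables=emp,dept
EOF
expdp scott parfile=emp_dept.par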
View 3 Replies
May 27, 2011
I am trying to export a schema using the expdp command, but it hangs after a few minutes; it seems to get stuck somewhere. Even when I try with the standard SCOTT schema, it hangs.
View 16 Replies
Jun 4, 2010
When I export a table using the exp utility, it takes 30 minutes to complete. The same export with the expdp utility takes 10 minutes.
How does that happen?
View 3 Replies
Jun 3, 2010
While trying to run expdp with a QUERY clause, I get the syntax-related errors shown below:
expdp system/xxxx SCHEMAS=LOG NETWORK_LINK=DBLINK1 INCLUDE=TABLE:"IN('DAILY_LOG')" QUERY=LOG.DAILY_LOG:"where entry_date< to_char(sysdate -1,'yyyymmdd')" DIRECTORY=dump DUMPFILE=log_exp.dmp logfile=log_exp.log
But gives the following error
ORA-31693: Table data object "LOG"."DAILY_LOG" failed to load/unload and is being skipped due to error:
ORA-00904: "YYYYMMDD": invalid identifier
I tried the same predicate in plain SQL with YYYYMMDD and it works fine (entry_date is a CHAR field). Where am I going wrong in the QUERY clause?
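The ORA-00904 strongly suggests the shell stripped the single quotes around 'yyyymmdd' before expdp ever saw them, so the database parsed yyyymmdd as a column name. Moving the QUERY into a parfile sidesteps shell parsing entirely; a sketch reusing the poster's clause:

cat > log_exp.par <<'EOF'
schemas=LOG
network_link=DBLINK1
include=TABLE:"IN('DAILY_LOG')"
query=LOG.DAILY_LOG:"WHERE entry_date < to_char(sysdate - 1, 'yyyymmdd')"
directory=dump
dumpfile=log_exp.dmp
logfile=log_exp.log
EOF
expdp system/xxxx parfile=log_exp.par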
View 4 Replies
Oct 5, 2013
I want to exclude only the data of some particular tables, not the complete table objects, when exporting with expdp.
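EXCLUDE removes whole objects, so it cannot keep a table's DDL while dropping its rows. A common workaround, sketched below with hypothetical schema and table names, is a per-table QUERY clause that matches no rows: the table definitions are still exported, but zero rows are unloaded for just those tables:

cat > no_data.par <<'EOF'
schemas=MYSCHEMA
directory=DATA_PUMP_DIR
dumpfile=myschema.dmp
logfile=myschema.log
# these tables keep their DDL in the dump but contribute no rows
query=MYSCHEMA.BIG_LOG_TABLE:"WHERE 1 = 0"
query=MYSCHEMA.AUDIT_HISTORY:"WHERE 1 = 0"
EOF
expdp system parfile=no_data.par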
View 13 Replies
Aug 26, 2012
I have a server configured for German and English. When I connect with SQL*Plus I get German-language server output, but "alter session set nls_language='AMERICAN'" solves the issue for me.
I need the same for the expdp command, but I don't know how to do it. I tried adding an nls_language parameter, but expdp doesn't recognize it. Is it possible to get the server output of expdp written to the log file in English?
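expdp takes its message language from the client environment rather than from a command-line parameter, so setting NLS_LANG in the shell before invoking it is the usual fix. A sketch (the character-set part of the value is a placeholder for whatever the database actually uses):

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
expdp system directory=DATA_PUMP_DIR dumpfile=exp.dmp logfile=exp.log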
View 5 Replies
Jun 16, 2011
I succeeded in running expdp to an ASM diskgroup, as follows:
create directory asmexpdir as '+RECO/FILTDB/EXPDP';
grant read,write on directory asmexpdir to oraasfs;
expdp oraasfs/oraasfs2301 directory=asmexpdir dumpfile=SBSR_EXP.dmp tables=TM_SFS_CUST_01 logfile=EXPDP_LOG:SBSR_EXP.log
SUCCESS MESSAGE
. . exported "ORAASFS"."TM_SFS_CUST_01" 387.2 MB 817684 rows
Master table "ORAASFS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ORAASFS.SYS_EXPORT_TABLE_01 is:
+RECO/filtdb/expdp/sbsr_exp.dmp
Job "ORAASFS"."SYS_EXPORT_TABLE_01" successfully completed at 03:34:59
I would like to run this daily and delete dumps older than 14 days, but my script shows an error. What would a working version of this script look like?
#!/bin/bash
#Script to Perform Datapump Export backup Every Day
################################################################
#Change History
[code]...
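Without seeing the failing script, one likely culprit: OS-level tools such as find and rm cannot see files inside an ASM diskgroup, so the 14-day cleanup has to go through asmcmd. A sketch, assuming date-stamped dump names and a shell environment where asmcmd points at the ASM instance:

#!/bin/bash
# daily export to ASM plus 14-day cleanup (OS find/rm cannot reach +RECO)
dt=$(date +%Y%m%d)
old=$(date -d "14 days ago" +%Y%m%d)
expdp oraasfs directory=asmexpdir dumpfile=SBSR_EXP_${dt}.dmp \
      tables=TM_SFS_CUST_01 logfile=EXPDP_LOG:SBSR_EXP_${dt}.log
# remove the dump taken exactly 14 days ago, if it is still there
asmcmd rm +RECO/FILTDB/EXPDP/SBSR_EXP_${old}.dmp 2>/dev/null || true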
View 9 Replies
Aug 10, 2013
I just did a 112 GB file migration of production data using ORACLE_DATAPUMP external tables, so I know this works in principle. When I tried it on my test instance, I am seeing results like this:
[oracle@aggs00.test for_test]$ ls -l aggs_day_conversion_agg_2419
-rw-r----- 1 oracle oracle 15917056 Aug 10 09:06 aggs_day_conversion_agg_2419
CREATE TABLE IMP_3251198_2419(
PARTITION_DATE DATE,
USER_ID NUMBER,
SID NUMBER,
[code]....
Executed in 1800.642 seconds
Why could it be taking 1800 seconds to select one record from a not-very-big table? File corruption? Disk fragmentation? Oracle instance configuration?
View 29 Replies
Aug 17, 2013
I want to take an export of the table MESSAGE and filter it to the day of 17 Jul 2013 (just to limit the size). I used the following expdp command, but it's not working.
expdp SYSTEM directory=DATA_PUMP_DIR dumpfile=DB_16_08_2013.dmp logfile=FA0001P_BG_16_08_2013.log TABLES=schema.MESSAGE QUERY=schema.MESSAGE:where created_on between to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') and to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')
With a plain SELECT query, however, I am able to retrieve the rows for that specific date.
select * from MESSAGE where created_on between to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') and to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')
Here is the command with the syntax error:
[oracle@orcl log]$ expdp SYSTEM directory=DATA_PUMP_DIR dumpfile=DB_16_08_2013.dmp logfile= DB_16_08_2013.log TABLES=schema.MESSAGE QUERY=schema.MESSAGE:where created_on between to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') and to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')
-bash: syntax error near unexpected token `('
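The bash error comes from the unquoted parentheses in to_date(...): the shell tries to interpret them before expdp ever runs. As in the earlier QUERY example, moving the clause into a parfile sidesteps shell parsing; a sketch with the poster's predicate:

cat > msg_exp.par <<'EOF'
directory=DATA_PUMP_DIR
dumpfile=DB_16_08_2013.dmp
logfile=DB_16_08_2013.log
tables=schema.MESSAGE
query=schema.MESSAGE:"WHERE created_on BETWEEN to_date('17-July-13 00:00:00','DD-Mon-YY hh24:MI:SS') AND to_date('17-July-13 23:59:00','DD-Mon-YY hh24:MI:SS')"
EOF
expdp SYSTEM parfile=msg_exp.par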
View 3 Replies
Jun 5, 2012
I have two servers, 'A' and 'B'. On server A there is a schema named "test" containing a table "t1". I want to import this t1 table into server B.
Is it possible to export a dump to a remote host using expdp?
I found that there is an option for this, NETWORK_LINK. To test it, I created a database link named "vxmldb" from server B to server A.
When I run the command below on server B, I get the following error:
C:>expdp directory=data_pump_dir logfile=test.log network_link=vxmldb schemas=test dumpfile=test.dmp
Export: Release 11.1.0.6.0 - 64bit Production on Tuesday, 05 June, 2012 14:22:07
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Username: system/vxmldb@vxmldb
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39170: Schema expression 'TEST' does not correspond to any schemas.
In the above command:
directory ---> location on server B
network_link ---> name of the dblink created on server B to access server A
schemas ---> schema to be exported; it exists in the server A database
username/password ---> a highly privileged account on server A
@connectString ---> connects to server A
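ORA-39170 usually means the account the database link lands on at server A either cannot see the TEST schema or lacks the privilege to export schemas other than its own (note too that the SCHEMAS value is matched in uppercase). A hedged sketch of a link definition that avoids both problems; the password and TNS alias are placeholders:

sqlplus system@B <<'EOF'
-- run on server B: the link must land on a server-A account that is
-- allowed to export other schemas (e.g. granted DATAPUMP_EXP_FULL_DATABASE)
CREATE DATABASE LINK vxmldb
  CONNECT TO system IDENTIFIED BY password_here
  USING 'tns_alias_for_A';
EOF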
View 15 Replies
May 29, 2012
We are DB users (not DBAs) and have always used exp/imp before application upgrades.
Googling around, I read something like "Oracle Data Pump - Time to let go of Exp / Imp". It seems exp/imp is obsolete.
Our system doesn't have an expdp command:
> find . -name expdp
>
Is this because our SQL*Plus is too old?
> sqlplus
SQL*Plus: Release 8.1.7.0.0 - Production on Tue May 29 16:05:28 2012
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Enter user-name: ^C^C
- Does our DBA need to give us privileges to run expdp/impdp?
- Is it true that an expdp/impdp dump ends up on the Oracle server (not on the client machine)?
View 4 Replies
Feb 1, 2012
I have one production and one development database, with a Streams setup on production.
I have to refresh development from production, but if I do a full expdp, the db_links also get imported into development and may cause problems there.
Is there any way with expdp/impdp to exclude the Streams objects and database links during the import?
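There is no single Streams switch, but the import side can skip whole object types with EXCLUDE, and database links are one of the excludable paths. A hedged sketch (check the exact path names against DATABASE_EXPORT_OBJECTS on your version):

impdp system full=y directory=DATA_PUMP_DIR dumpfile=prod_full.dmp \
      logfile=imp_devl.log exclude=DB_LINK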
View 1 Replies
Dec 2, 2010
For some days now I have been getting this error during Data Pump export:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31687: error creating worker process with worker id 1
ORA-31687: error creating worker process with worker id 1
ORA-31688: Worker process failed during startup.
This error is random; if I retry after a few minutes, expdp works correctly.
View 8 Replies
May 1, 2010
If I use expdp with REMAP_SCHEMA, will it also remap grants and synonyms?
View 5 Replies
Apr 13, 2012
How can we overwrite an existing dump file with expdp in Oracle 10g? Every time we execute expdp and the dump file already exists, we get the error below:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\scott_emp.dmp"
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
11g has the REUSE_DUMPFILES=Y option, but it doesn't work in 10g. Is there something that can overwrite an existing dump file in 10g?
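On 10g the usual workaround is to delete the old file at OS level before the export starts, from the same path the directory object points to. A minimal wrapper sketch for Unix (on Windows the same idea works with del; the path below is a placeholder):

#!/bin/bash
# emulate 11g reuse_dumpfiles=y on 10g by removing the old file first
DUMP_DIR=/u01/app/oracle/dpdump   # filesystem path behind the directory object
rm -f ${DUMP_DIR}/scott_emp.dmp
expdp scott directory=DATA_PUMP_DIR dumpfile=scott_emp.dmp logfile=scott_emp.log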
View 1 Replies
Aug 4, 2011
I am using expdp/impdp to migrate a 4 TB database from Solaris to Linux, but the import process is taking forever.
View 13 Replies
Nov 11, 2011
I have had the following problem open with Oracle support since March 2011 (8 months), and still no resolution.
When I export all our schemas on Sunday night it takes about 1 hour 50 minutes. When I export the same schemas on any other night it takes 7 hours. The only difference is that on Sunday at 04:00 we drop all connections in the connection pools and establish new ones. Then, 19 hours later on Sunday at 23:00, we perform the exports, which take only 2 hours to complete.
I have also tried recreating the connections in the connection pools during the week, and the exports have then taken only 2 hours to complete. But the following night, after the connections have been used during the day, the exports again take 7 hours. So it appears the export speed gets significantly slower when there are many open connections that have been used and not closed.
From the Statspack report I found 2 SQL statements, internal to the export command, whose elapsed execution times differed by an order of magnitude between the fast export and the slow export (see below).
How can I speed up the exports without having to drop and recreate the database connections in the connection pools each night?
FAST:
elapsed_time: 430.90
executions: 161,388
Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL
WHERE CNO = :1 ORDER BY COLNO
elapsed_time: 264.29
executions: 50,349
Module: exp@Oracle1 (TNS V1-V3)
SELECT TOWNER, TNAME, NAME, LENGTH, PRECISION, SCALE, TYPE, ISNULL, CONNAME, COLID, INTCOLID, SEGCOLID, COMMENT$, DEFAULT$, DFLTLEN, ENABLED, DEFER, FLAGS, COLPROP, ADTNAME, ADTOWNER, CHARSETID, CHARSETFORM, FSPRECISION, LFPRECISION, CHARLEN, TFLAGS, 100 FROM SYS.EXU8COL
WHERE TOBJID = :1 ORDER BY INTCOLID
SLOW:
elapsed_time: 8264.16
executions: 124,662
Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL
WHERE CNO = :1 ORDER BY COLNO
elapsed_time: 3877.78
executions: 38,813
Module: exp@Oracle1 (TNS V1-V3)
SELECT TOWNER, TNAME, NAME, LENGTH, PRECISION, SCALE, TYPE, ISNULL, CONNAME, COLID, INTCOLID, SEGCOLID, COMMENT$, DEFAULT$, DFLTLEN, ENABLED, DEFER, FLAGS, COLPROP, ADTNAME, ADTOWNER, CHARSETID, CHARSETFORM, FSPRECISION, LFPRECISION, CHARLEN, TFLAGS, 100 FROM SYS.EXU8COL
WHERE TOBJID = :1 ORDER BY INTCOLID
I use the following export command for each schema:
$ORACLE_HOME/bin/exp user/pass file=somefile.dmp owner=$SCHEMA log=somelog.log buffer=9000000
I have an Oracle Standard Edition 11.1.0.7 database on 64-bit Linux with a 7 GB SGA. I currently export approx. 200 schemas each night (I use exp, not Data Pump, because Data Pump is a lot slower and we can't use its parallel processing features on a Standard Edition database). The export normally takes 1 hour 50 minutes, which is approximately 2 schemas exported every minute. When the exports run slowly, each export takes almost 2 minutes to complete.
The database has about 20 GB of data and 50 GB of indexes, and approximately 500 connections via TopLink connection pools from 8 application servers.
View 2 Replies
Mar 30, 2007
I'm taking an export dump with expdp of some schemas totaling 300 GB. This is the parfile:
DIRECTORY=expdp
FILESIZE=32212254720
DUMPFILE=expdp_schema01.dmp,expdp_schema02.dmp,expdp_schema03.dmp,expdp_schema04.dmp,expdp_schema05.dmp,expdp_schema06.dmp,expdp_schema07.dmp,expdp_schema08.dmp,expdp_schema09.dmp,expdp_schema10.dmp,expdp_schema11.dmp,expdp_schema12.dmp,expdp_schema13.d
[code]....
The biggest schema here is 250 GB, and the total size of all the schemas is 300 GB. The filesystem where I am writing the dump has 350 GB of space, but even then expdp failed with:
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Why did it fail, and how can I restart it and make sure it runs to completion without error?
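ORA-39095 is raised when the job needs another dump file but every name in a fixed DUMPFILE list has already been used up at its FILESIZE cap. The usual fix is a %U template, which lets Data Pump generate numbered files on demand; a sketch of the relevant parfile lines:

DIRECTORY=expdp
FILESIZE=32212254720
DUMPFILE=expdp_schema%U.dmp

A stopped job can also be picked up again with expdp attach=<job_name> and then START_JOB at the interactive prompt, rather than starting the whole export over.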
View 4 Replies
Aug 6, 2012
After upgrading an 11gR1 database (11.1.0.7.0) to 11gR2 (11.2.0.3.0), the Data Pump exports have been taking quite a bit longer. On 11gR1, a full expdp took approx. 40-45 minutes. After the upgrade, it takes approx. 1 hour 40-50 minutes. These times were with parallel=4. I tried parallel=8 and parallel=12; both took around 1 hour 5-10 minutes, better but still quite a bit slower than before the 11gR2 upgrade. I tried exclude=statistics, index_statistics, indexes; it still took approx. 1 hour 40-45 minutes. This is a PeopleSoft database, so there are many, many objects to export. The database was upgraded using dbua.
View 1 Replies
Oct 11, 2012
We have a function that is called from various other PL/SQL packages, and performance has always been very good. On 29th Sept we upgraded our DB to 10.2.0.5.0, and since then a package that calls the function has gone from ~4 mins to ~2.5 hrs to run.
In PL/SQL Developer, a simple select that calls the function has gone from ~0.5 secs to retrieve the first 100 rows to ~12 secs. I ran a profile of the main package, which highlighted where the bottleneck was (a fetch from an explicit cursor). Running an explain plan on the cursor SQL doesn't really show up anything untoward.
However, I found that if I subtly changed the cursor SQL (so that it did the same thing but was written differently), it fixed the performance problems:
where ade_start_date between cpDate-cpDays and cpDate-1
/*and ade_start_date < cpDate
and ade_start_date >= (cpDate-cpDays)*/
From this, we thought there may have been a bad cached execution plan, which the code change forced to be recalculated. However, about 2 hours later, the changed code ran slowly again. So a further subtle change was made, which fixed the issue again. Until this morning, when it was running slowly again.
This feels like it is potentially CBO/statistics related, but unfortunately that is outside my area of knowledge. We have our DBA investigating, but there may be things I can test to narrow down the possibilities in the meantime.
View 5 Replies
Oct 27, 2010
I created this same thread on DBA Village; I need to attach my log file.
I want to upgrade my database from 8i to 10g. For testing purposes I installed 10g with a starter database, created the needed users, and imported using a dump:
imp system/passwd file=dumpfile.dmp log=logfile.log rows=y ignore=y statistics=none full=y
The log file gave some errors and warnings; I will attach it.
View 5 Replies
Jul 27, 2011
I have an export script which performs a normal (exp) export of the full database. Now I want to convert the same script to expdp (Data Pump), so what changes do I need to take care of? As far as I know, I have to perform the following tasks:
1. Create a directory object.
2. Change the command from exp to expdp, with the directory name specified (see the sketch below).
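Roughly, yes; those two steps are the core of it, plus granting access to the directory object. A minimal sketch of the conversion with placeholder paths and account names:

# step 1: create the directory object once, as a DBA, and grant it
sqlplus -s / as sysdba <<'EOF'
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO system;
EOF
# step 2: old style: exp system full=y file=full.dmp log=full.log
#         new style:
expdp system full=y directory=dp_dir dumpfile=full.dmp logfile=full.log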
View 2 Replies
Apr 27, 2012
I have a nightly import (about 20 tables) and it takes up to 5 hours. We have one table of about 800,000 rows and the rest are between 1,000 and 200,000 rows, so this is very slow. When I monitor the import I see a very long amount of wait on SQL*Net message from client,
even though I run the import on the database server itself. If I check the current statement, I see it moving from one to the next; for instance I have:
SELECT /*+ all_rows ordered */
"A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001329497'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (bonus_nat) <= 31)
then
SELECT /*+ all_rows ordered */
"A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001684584'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (outcome_cd) <= 1)
and so on, and it takes hours. The DB is on Windows 2003 running Oracle RDBMS 9.2.0.7, and meanwhile the import screen shows 185,000 rows imported. I also see this session's consistent gets rising sharply at that time. Would it be better to export/import without statistics?
I should also mention that the dump file comes from a Linux-hosted database, though I don't think that makes a difference for exp/imp. It's a PeopleSoft database, so there are a lot of tables (more than 15,000), and if I take the table mentioned above and want to check its constraints, it takes ages for Toad to display them. I have seen that we have an incredible number of constraints on those tables, which might be the reason.
I just wonder if the system catalog needs to be tuned. /* Update */ Now the huge number of waits shows up as "library cache lock".
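Those SELECT ... WHERE NOT (LENGTH(...) <= n) statements are not statistics work: they are imp re-validating check constraints as it enables them, so statistics=none alone would not remove them. A hedged sketch of the two classic imp switches that matter here (file names and schema are placeholders; with constraints=n the constraints must then be re-created or enabled separately, e.g. with ENABLE NOVALIDATE):

imp system file=dumpfile.dmp log=import.log fromuser=REPORT touser=REPORT \
    statistics=none constraints=n buffer=9000000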
View 7 Replies
May 25, 2013
I am using 11gR2 on Windows Server. The query below runs many times a day and badly affects database performance. I don't know much about this query.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS failed_count
FROM sys.dba_audit_session
WHERE returncode != 0 AND TO_CHAR(timestamp, 'YYYY-MM-DD HH24:MI:SS') >= TO_CHAR(current_timestamp - TO_DSINTERVAL('0 0:30:00'), 'YYYY-MM-DD HH24:MI:SS')
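This looks like the Enterprise Manager failed-login metric, which reads DBA_AUDIT_SESSION (SYS.AUD$ underneath) and typically slows down as AUD$ grows. A hedged sketch of keeping the audit trail pruned with the 11g DBMS_AUDIT_MGMT package (a one-time INIT, then periodic purges; archive first if the audit rows must be kept):

sqlplus -s / as sysdba <<'EOF'
-- one-time initialization for the standard audit trail
BEGIN
  DBMS_AUDIT_MGMT.INIT_CLEANUP(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    default_cleanup_interval => 24);
END;
/
-- purge the standard audit trail (no archive timestamp honored here)
BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => FALSE);
END;
/
EOF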
View 1 Replies
Mar 2, 2012
I have a situation where, when I log in to my DB as a regular user via SQL*Plus (no service name), it takes about 20 seconds to connect, yet when I log in as a user with DBA privileges, the login is immediate.
Is there something I can do to trace what is happening behind the scenes to determine the cause of the login delay?
View 9 Replies