Expdp On 10G AIX / Impdp 11G Linux?
May 4, 2013. First of all, I'm not a DBA. When I try to import a dump file generated on AIX with Oracle 10g into an 11g database on a RHEL6 machine, I get a lot of errors. How should I do this?
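For what it's worth, Data Pump dump files are platform-independent, so a 10g dump taken on AIX can normally be imported straight into 11g on Linux using the 11g impdp. A minimal sketch, assuming the dump was copied (in binary mode, not ASCII, if using FTP) to a path that an Oracle directory object points to; names here are illustrative:

impdp system/password DIRECTORY=dp_dir DUMPFILE=prod10g.dmp FULL=Y LOGFILE=imp10g.log

The usual trip-ups are a dump corrupted by an ASCII-mode transfer, a missing or unwritable directory object, and missing target tablespaces; the actual error messages would narrow it down.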
I am using expdp/impdp to migrate a 4 TB database from Solaris to Linux, but the import process is taking forever.
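One hedged suggestion for a move of that size: write the export to multiple dump files and run the import in parallel, since a single dump file throttles the parallel workers. A sketch with illustrative names:

expdp system/password DIRECTORY=dp_dir DUMPFILE=mig_%U.dmp PARALLEL=8 FULL=Y LOGFILE=exp_mig.log
impdp system/password DIRECTORY=dp_dir DUMPFILE=mig_%U.dmp PARALLEL=8 FULL=Y LOGFILE=imp_mig.log

Excluding indexes from the import and rebuilding them afterwards with a generated script is another common way to shorten the critical path.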
I have a process that exports a schema with expdp and imports it with impdp. Everything creates successfully except for a trigger, which fails with an error that the table or view does not exist. The account I use for the import is not the schema owner, but it is a highly privileged account. I noticed that the schema in the CREATE OR REPLACE TRIGGER line is remapped (I am using REMAP_SCHEMA in the impdp syntax), but the rest of the trigger's code (it is just a sequence trigger for a primary key column) has no schema qualifier at all. To work around this, my bash script logs in as the schema user after the import and re-executes the trigger code. Why do I have to do this for trigger code but not for other objects, such as views, which create just fine?
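A hedged explanation of the behaviour: REMAP_SCHEMA rewrites the schema name in the DDL header, but PL/SQL source is exported as literal text, so nothing inside the trigger body is rewritten and unqualified references resolve against whoever compiles it. One workaround sketch is to pull the trigger DDL into a script and fix it before running it (names illustrative):

impdp system/password DIRECTORY=dp_dir DUMPFILE=schema.dmp SQLFILE=triggers.sql INCLUDE=TRIGGER REMAP_SCHEMA=olduser:newuser

Then edit triggers.sql as needed and run it as the target schema owner, which is essentially what your bash script already automates.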
I ran into a problem recently with an Oracle 11g R2 RAC database. Normally, exporting the sample user SCOTT takes barely a minute, but in our RAC environment the same export runs for 20 to 40 minutes.
Here is the output:
---------------------------------------------------------------
[oracle@rac2 dump]$ expdp system/sys123 directory=test_dir dumpfile=scott1.dmp schemas=scott
Export: Release 11.2.0.1.0 - Production on Mon Jan 23 09:30:26 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=test_dir dumpfile=scott1.dmp schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
[Code] .......
On another machine (where I configured RAC again, on Linux), I got the same problem. I also could not find any definitive documents on Metalink. My host information:
First machine -
OS: AIX 6.1
Storage: IBM (using ASM)
Database: Oracle 11g R2
Grid version: 11.2.0.3
Second machine -
OS: Red Hat Enterprise Linux 5.4
A few months back in our RAC cluster, while taking an expdp backup to a locally formatted Linux filesystem, I got some errors. I don't quite remember the error code or the scenario now, as I had too much work that day. The issue was fixed only when we used an ACFS filesystem location as the directory object for expdp.
Today, in the same RAC cluster, to reproduce that issue, I tested taking an expdp backup in a local Linux formatted file system ( /home/oracle/pumpDir ) and the expdp completed without any issues.
Are there known issues with expdp/impdp in a RAC cluster environment when using a local Linux filesystem?
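One hedged explanation: in a RAC database the Data Pump workers can be started on any instance, and a directory that exists only on one node's local filesystem is not visible to workers that land on other nodes, so a job can fail or succeed depending on where it happens to run; an ACFS location works because it is visible cluster-wide. On 11.2 the CLUSTER parameter pins the job to the instance you are connected to:

expdp system/password DIRECTORY=local_dir DUMPFILE=scott.dmp SCHEMAS=scott CLUSTER=N

With CLUSTER=N (or a service that maps to a single instance), a node-local directory should be safe.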
I'm curious to know if expdp or impdp is able to change object names during the process. What I mean by this is... can I export out procedures:
procedure1
procedure2
procedure3
Then import them like this:
test_procedure1
test_procedure2
test_procedure3
I'm not sure whether expdp or impdp has that ability, but I could have missed it. I know how to remap a schema, but that only changes the schema name.
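For what it's worth: Data Pump gained REMAP_TABLE in 11g for renaming tables during import, but as far as I know there is no equivalent remap for procedures. A hedged workaround is to dump the DDL to a script and rename there (names illustrative):

impdp system/password DIRECTORY=dp_dir DUMPFILE=procs.dmp SQLFILE=procs.sql INCLUDE=PROCEDURE

Edit procs.sql, changing procedure1 to test_procedure1 and so on, then run the script in the target schema.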
For now we have a directory defined as /exp where we do the export using expdp. Can we define a new directory and use that directory for expdp?
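Yes; as far as I know any number of directory objects can coexist, and expdp just uses whichever one is named in DIRECTORY. A sketch, assuming the OS path already exists and using the illustrative name exp_new:

CREATE OR REPLACE DIRECTORY exp_new AS '/exp_new';
GRANT READ, WRITE ON DIRECTORY exp_new TO scott;

expdp scott/tiger DIRECTORY=exp_new DUMPFILE=scott.dmp LOGFILE=scott.log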
I am trying to export a table using Data Pump in Oracle 10g. The expdp takes 5 hours, so I want to use the PARALLEL keyword.
My question is: how do I know how many parallel processes I can use?
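A commonly cited rule of thumb (not a hard rule) is to start with PARALLEL set to the number of CPU cores, or up to twice that, and watch whether the workers are actually busy. Each worker also needs its own dump file, which the %U substitution variable provides; with a single dump file the parallel workers serialize on it:

expdp scott/tiger DIRECTORY=dp_dir DUMPFILE=bigtab_%U.dmp PARALLEL=4 TABLES=bigtab LOGFILE=bigtab.log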
I have a client database on a remote server. I take a daily export dump of this database. Now I have created a new Oracle 11g XE database on my local machine, and I want to import that dump into the XE database.
I will FTP the dump file to my local machine; no issues there. I just want to know whether the import into the XE database will be successful. I mean, can we import data into an XE database?
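To the best of my knowledge, yes: 11g XE includes Data Pump, so a normal impdp works, subject to XE's limits (roughly 11 GB of user data, 1 GB of RAM, one CPU). If the source database is a newer release than XE, the export would need a matching VERSION parameter. A sketch with illustrative names:

impdp system/password DIRECTORY=dp_dir DUMPFILE=client.dmp REMAP_SCHEMA=clientuser:localuser LOGFILE=imp_xe.log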
While trying to run expdp against a network drive, I get the error below. How can we perform an expdp to a network drive?
Network location: \\tsclient\p\expdp
1.) SQL> show user
USER is "SYS"
SQL> create or replace directory exp as '\\tsclient\p\expdp';
Directory created.
SQL> grant read, write on directory exp to system;
2.) expdp system/xxxxx@orcl directory=exp dumpfile=EXP_orcl_072013.dmp logfile=EXP_72013_1.log schemas=('IIMS','CMMN')
Export: Release 11.2.0.2.0 - Production on Tue Jul 23 13:48:07 2013
Copyright © 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
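A hedged reading of the error: \\tsclient\... is a Remote Desktop drive redirection that exists only inside your RDP session, so the Oracle server processes (running under the Windows service account) cannot open anything there, and UTL_FILE fails with ORA-29283 as soon as it tries to create the log file. Pointing the directory object at a path the server itself can reach is the usual fix (path illustrative):

CREATE OR REPLACE DIRECTORY exp AS 'D:\expdp';
GRANT READ, WRITE ON DIRECTORY exp TO system;

A permanent UNC share can also work, but only if the Oracle service account (not your interactive login) has rights to it.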
For a few days I have been desperately trying to import an Oracle 10.2.0.5 database from a Red Hat Enterprise Linux Server release 4.1 machine into 11.2.0.1 on my Windows Server 2008 R2 x64 machine. The expdp export runs without errors with the following options:
DUMPFILE = expinclude.dmp
DIRECTORY = exp_dir
LOGFILE = expinclude.log
TABLES = (list of tables with partitions)
ESTIMATE = STATISTICS
When I try to import my dump, I get a multitude of errors like this:
ORA-39083: Object type TABLE:"VL_DATA"."LOG_VOUCHERS_MESSAGES" failed to create with error:
ORA-02219: invalid NEXT storage option value
Failing sql is:
After scouring the forums on the net I have found very little information on this error (besides the error text itself, which appears on a thousand sites). I have tried multiple combinations for the import: excluding the indexes, importing only the structure, importing only the data, etc., without success.
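One hedged lead: ORA-02219 complains about the NEXT extent value in a storage clause, so the DDL in the dump is carrying storage attributes the target refuses. A sidestep worth trying is to drop the segment attributes on import so the target's defaults apply (and, if re-exporting is an option, using the default ESTIMATE=BLOCKS instead of ESTIMATE=STATISTICS to rule that setting out):

impdp system/password DIRECTORY=exp_dir DUMPFILE=expinclude.dmp TRANSFORM=SEGMENT_ATTRIBUTES:N LOGFILE=impinclude.log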
I'm a SQL DBA, so pretty new to Oracle.
I'm trying to restore a database using impdp (it was created using expdp).
Here's my impdp statement:
impdp system/oracle full=y directory=ATA_PUMP_IR dumpfile=IRISexp%U.dmp remap_datafile='+PDATA/iris/datafile/undotbs2.302.699230857':'D:\NickL\App\oradata\DT_IRIS_EXPORT_2012\DATAFILE\undotbs2.302.699230857'
This is the error I get:
ORA-39083: Object type TABLESPACE failed to create with error:
ORA-01276: Cannot add file +PATA/iris/datafile/undotbs2.302.6992308597. File has an Oracle Managed Files file name.
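A hedged reading of ORA-01276: '+PDATA/...' is an ASM Oracle Managed Files name, and Oracle refuses to create a file under an explicitly supplied OMF-style name. Two usual ways around it: remap to an ordinary file name of your own choosing, or set DB_CREATE_FILE_DEST on the Windows target and drop the remap entirely, letting the tablespaces be created as OMF there (values illustrative):

ALTER SYSTEM SET db_create_file_dest = 'D:\NickL\App\oradata' SCOPE=BOTH;

Then run the impdp without the remap_datafile clause and OMF places the files itself.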
I am trying to use the NETWORK_LINK option in Data Pump to import a table from one server to another. I ran the command below:
C:\>impdp example/example@db DIRECTORY=DATA_PUMP_DIR NETWORK_LINK=db.legal.regn.net remap_schema=BI:example tables=BI.BI_DIRECT dumpfile=BI.dmp logfile=BI.log
Got the following errors :
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Is this error related to permissions at the OS level (Windows 7 in my case)? I manually created a folder called 'DATA_PUMP_DIR' in the specified directory path. Although the folder shows read-only on the General tab of its properties, I am able to create files under it.
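One hedged note: the DIRECTORY parameter names an Oracle directory object inside the database, not a folder name on disk, so creating a Windows folder called DATA_PUMP_DIR does not by itself help; the object must exist in the database and point at a path the Oracle service account can write to. Worth checking where it actually points:

SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'DATA_PUMP_DIR';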
I am trying to do an impdp using a network link and it fails with ORA-31626: job does not exist. It worked with a different database on the same server. The network link is there, the Data Pump directory exists, and the read and write privileges are granted to the oracle user. There are no other Data Pump jobs running:
SQL> select JOB_NAME,STATE from DBA_DATAPUMP_JOBS;
no rows selected
My database details:
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
The whole error is listed below:
omdx16dd$ impdp oracle@xtst080 SCHEMAS=BRM REMAP_SCHEMA=BRM:MBR_SLN CONTENT=METADATA_ONLY DIRECTORY=DATA_PUMP_DIR NETWORK_LINK=po02
Import: Release 10.2.0.4.0 - 64bit Production on Tuesday, 03 April, 2012 17:20:33
Copyright © 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-00942: table or view does not exist
ORA-00942: table or view does not exist
[code]...
After upgrading 11gR1 database (11.1.0.7.0) to 11gR2 (11.2.0.3.0), the datapump exports have been taking quite a bit longer. When database was 11gR1, a full expdp took approx. 40-45 minutes. After upgrade, it takes approx. 1 hour 40-50 minutes. These times were with parallel=4. I tried with parallel=8 and parallel=12, both of these took around 1 hour 5-10 minutes, better but still quite a bit slower than pre-11gR2 upgrade. I tried with exclude=statistics, index_statistics, indexes; it still took approx. 1 hour 40-45 minutes. This is a PeopleSoft database so there are many, many objects to be exported. The database was upgraded using dbua.
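A hedged suggestion that has helped others after upgrades: Data Pump spends much of its time in dictionary queries, and stale dictionary or fixed-object statistics can slow those dramatically, so regathering them is a cheap first experiment:

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;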
Data Pump export is very slow. A 50 GB export has taken more than 24 hours, with the error below:
Database Version:11.2.0.2.0
OS: Windows server 2008 r2
I increased RAM by 10 GB and went from 6 CPUs to 8, but the issue remains.
Error:
ORA-31693: Table data object "BNCSDB"."MS_DATA_PTORE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 20 with name "_SYSSMU20_4037596720$" too small
Export log:
Export: Release 11.2.0.2.0 - Production on Tue May 14 20:03:25 2013
Copyright © 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@orcl dumpfile=BCSDB04_19.dmp logfile=BCSDB04_19.log
[code]...
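For the ORA-01555 itself, a hedged pointer: the export's long-running table reads need their undo kept for the whole job, so a bigger undo tablespace and a higher UNDO_RETENTION are the usual remedies (value illustrative, sized to the job's runtime):

ALTER SYSTEM SET undo_retention = 28800 SCOPE=BOTH;

If the error keeps hitting the same table no matter how much undo is available, corrupt LOB segments in that table are another classic cause of ORA-01555 during export.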
I am facing an issue of block corruption in my export backup, which I take with the expdp command (refer to the attached screenshot of the error).
I want to know a few things about the block corruption:
1. Why does block corruption occur?
2. How can I resolve this?
3. Can I resolve this by deleting the record on which the error is raised? If yes, how can I track down that row in the table?
I already tried the DBVerify utility. It shows the result below.
===================================================================
C:\Documents and Settings\Administrator>dbv file='E:\ORADATA\AFCCV1\MONETA01.DBF' blocksize=8192
DBVERIFY: Release 11.1.0.6.0 - Production on Wed Feb 16 10:13:11 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
DBVERIFY - Verification starting : FILE = E:\ORADATA\AFCCV1\MONETA01.DBF
DBV-00600: Fatal Error - [28] [27070] [0] [0]
C:\Documents and Settings\Administrator>
================================================================
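On questions 2 and 3, a hedged sketch: on 11g, RMAN can validate the datafile and record any bad blocks in V$DATABASE_BLOCK_CORRUPTION, which gives the file and block numbers needed to map the damage to a specific segment (path as in the dbv run above):

RMAN> VALIDATE DATAFILE 'E:\ORADATA\AFCCV1\MONETA01.DBF';
SQL> SELECT * FROM v$database_block_corruption;

That said, a DBV-00600 fatal error may point at an OS-level I/O problem rather than an ordinary corrupt block, so the result of the RMAN validate is worth having before deleting anything.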
I would like to export specific tables (not the entire schema), including metadata. I am using a parameter file for expdp.
Tables=emp,dept
Does this also include all the metadata, or should I also add the INCLUDE below to the parfile?
INCLUDE =Indexes,Sequences,Procedures,Views
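For what it's worth: in table mode, Data Pump exports the named tables together with their dependent objects (indexes, constraints, triggers, grants, table statistics) by default, so no INCLUDE is needed for indexes. Sequences and standalone procedures are not dependent on a table and, as far as I know, cannot be picked up in table mode at all; they need a schema-mode job. A minimal parfile sketch:

DIRECTORY=dp_dir
DUMPFILE=emp_dept.dmp
LOGFILE=emp_dept.log
TABLES=emp,dept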
We had AIX on a 570 machine and database 10.2.0.4, and the expdp we took every night completed in 2 hours.
Now we have upgraded to 10.2.0.5 on a 770 machine, and the same command takes 6 hours, even though both the database and the hardware were upgraded.
The command is:
expdp T24SILK/oracle directory=backup dumpfile=exp_beod_T24_%U_$dt.dmp logfile=exp_T24_$dt.log EXCLUDE=TABLE:"LIKE '%TRACE'" parallel=6
I am trying to export a schema using the expdp command, but it hangs after a few minutes; it seems to get stuck somewhere. Even when I try the plain SCOTT schema, it hangs.
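A hedged way to tell whether the job is truly hung or just grinding: attach to it from a second session and ask for its status (the job name below is illustrative; the real one is in DBA_DATAPUMP_JOBS):

expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01
Export> STATUS

If the workers show no progress at all, the waits in V$SESSION_WAIT for the Data Pump sessions usually say what they are stuck on.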
I export a table using the exp utility and it takes 30 minutes to complete. The same export done with the expdp utility takes 10 minutes.
How does that happen?
While trying to run expdp with the QUERY option, I get the syntax-related errors shown below:
expdp system/xxxx SCHEMAS=LOG NETWORK_LINK=DBLINK1 INCLUDE=TABLE:"IN('DAILY_LOG')" QUERY=LOG.DAILY_LOG:"where entry_date< to_char(sysdate -1,'yyyymmdd')" DIRECTORY=dump DUMPFILE=log_exp.dmp logfile=log_exp.log
But it gives the following error:
ORA-31693: Table data object "LOG"."DAILY_LOG" failed to load/unload and is being skipped due to error:
ORA-00904: "YYYYMMDD": invalid identifier
I tried the same condition in plain SQL with YYYYMMDD and it works fine; entry_date is a CHAR field. Where am I going wrong in the QUERY clause?
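A hedged diagnosis: the shell is eating the single quotes inside the QUERY value, so the database sees to_char(sysdate -1,yyyymmdd) and treats YYYYMMDD as an identifier, hence ORA-00904. Moving everything into a parfile avoids shell quoting entirely (sketch, reusing your values):

SCHEMAS=LOG
NETWORK_LINK=DBLINK1
INCLUDE=TABLE:"IN('DAILY_LOG')"
QUERY=LOG.DAILY_LOG:"where entry_date < to_char(sysdate -1,'yyyymmdd')"
DIRECTORY=dump
DUMPFILE=log_exp.dmp
LOGFILE=log_exp.log

Then: expdp system/xxxx parfile=log_exp.par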
I am trying to import data from a dmp file created using expdp, running on Oracle 11g Express, and getting the following error. I have tried to fix it without success. The tables exist in the POI schema and I am trying to import them into the GHI schema. The dump file was created from the POI schema with the two tables 'REL20_AU_POI' and 'ARCHIVE_POI', and these tables do not exist in the GHI schema.
declare
  l_handle NUMBER;
begin
  l_handle := DBMS_DATAPUMP.open(
    operation => 'IMPORT',
    job_mode  => 'TABLE',
[code].....
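For reference, a self-contained sketch of what the whole block might look like; the directory object DP_DIR and dump file name are assumptions, and the REMAP_SCHEMA call is what makes the POI tables land in GHI:

declare
  l_handle NUMBER;
  l_state  VARCHAR2(30);
begin
  -- open a file-based table-mode import job
  l_handle := DBMS_DATAPUMP.open(operation => 'IMPORT', job_mode => 'TABLE');
  -- point the job at the dump file (DP_DIR and file name are illustrative)
  DBMS_DATAPUMP.add_file(handle => l_handle, filename => 'poi_tables.dmp',
                         directory => 'DP_DIR',
                         filetype  => DBMS_DATAPUMP.ku$_file_type_dump_file);
  -- restrict the job to the two tables in the dump
  DBMS_DATAPUMP.metadata_filter(handle => l_handle, name => 'NAME_EXPR',
                                value => 'IN (''REL20_AU_POI'',''ARCHIVE_POI'')');
  -- retarget everything from POI to GHI
  DBMS_DATAPUMP.metadata_remap(handle => l_handle, name => 'REMAP_SCHEMA',
                               old_value => 'POI', value => 'GHI');
  DBMS_DATAPUMP.start_job(l_handle);
  DBMS_DATAPUMP.wait_for_job(handle => l_handle, job_state => l_state);
end;
/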
My database is around 2900 GB on AIX 6.1; the database version is 10.2.0.3. Every day I need to take an expdp dump backup of a single table that is only 57 MB in size, and it takes around 55 minutes to complete.
I have noticed that when the backup starts, the first phase does a table scan (we have 330,000 tables), and only then does the actual backup begin. My questions:
1. How can I make my dump backup faster?
2. Is there any way to skip the table scan?
I have been asked to migrate from Oracle 10.2 to 11.2 on Red Hat. In 10g, the database has around 50 users, including default users like SCOTT, XDB, etc. 1. Should I skip those users when I import? The schema MAHM05 owns the tables; no one has direct access privileges on this schema.
All the users access it through synonyms.
2. Which schema should I import first?
3. What things do I need to check?
I did an expdp on the prod DB and have been doing a straight impdp as a test, just to have data to work with, but it spews 214 errors. Mostly these:
ORA-31684 lots
ORA-39083 some
ORA-39151 lots
ORA-39082 lots
ORA-39111 lots
ORA-39112 lots
I can see that I can use REUSE_DATAFILES and TABLE_EXISTS_ACTION to overwrite tables by default, but is there a recognised way of replacing the entire DB with impdp? Do I just create the instance (with the init file) and not build the tables, or what? I'll experiment, but I'm interested in whether there is a DBA best practice for this sort of thing.
I want to know how the REMAP_DATAFILE parameter of impdp works in Oracle 10g. Give me the whole scenario for two databases, one on a Linux platform and the other on a Windows platform. I want to import the tablespace data from the Linux database into the Windows Server database.
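A minimal sketch of how REMAP_DATAFILE is typically used (paths illustrative): during a full import it rewrites datafile names wherever they appear in the DDL (CREATE TABLESPACE and the like), which is exactly what a Linux-to-Windows move needs, since the path syntax differs between the platforms:

impdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp REMAP_DATAFILE='/u01/oradata/prod/users01.dbf':'D:\oradata\prod\USERS01.DBF'

The quoting is fragile on a command line, so putting REMAP_DATAFILE in a parfile is the usual advice. Note that it renames files in the DDL only; it does not move any data around.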
I am getting the error below while running impdp:
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema ADAS is needed to import this object, but is unaccessible
ORA-28031: maximum of 148 enabled roles exceeded
[code]...
ORA-06512: at "SYS.KUPW$WORKER", line 1342
ORA-06512: at line 2
Job "SYS"."SYS_IMPORT_FULL_01" stopped due to fatal error at 17:13:38
I am getting ORA-39127 while taking a full database export using expdp on Oracle 10g Enterprise Edition 10.1.0.2. This is a production database.
Details of the errors are:
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OBJECT_GRANT
ORA-39127: unexpected error from call to export_string := SYS.LT_EXPORT_PKG.system_info_exp(0,dynconnect,'10.01.00.02.00',newblock)
ORA-06537: OUT bind variable bound to an IN position
ORA-06512: at "SYS.DBMS_METADATA", line 5107
Processing object type DATABASE_EXPORT/DE_SYSTEM_PROCOBJACT/DE_PRE_SYSTEM_ACTION
[code]....
The Data Pump export completes with these 2 errors and the dump file is generated. Are these errors problematic? Will they cause problems when importing from the dump file?
Can we create a flat file from a dump file created by expdp?
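To the best of my knowledge, no: the dump is a proprietary binary format, and neither expdp nor impdp can unload it to a flat file directly. What impdp can do is extract all the DDL into a readable script; for the row data you would import into a database and unload from there (SQL*Plus spool, external tables, and so on). The DDL extraction sketch:

impdp system/password DIRECTORY=dp_dir DUMPFILE=exp.dmp SQLFILE=ddl.sql FULL=Y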