Server Utilities :: Missing Objects During Data Pump Import
May 18, 2011
I did a Data Pump export and import from one schema to a new schema in the same database. I had to use a different tablespace. I used the following parameters in the parfiles:
We have a QA database on a VM server running Windows 2003 with Oracle 10.2.0.1 installed, and limited disk space. We received an expdp file from a client that is large enough (40GB) that we had to copy it to a network drive. I created a new directory called IMPDMP with the directory path (using UNC pathing) set to \\server\share\folder\subfolder (our network-mapped P drive; yes, I included the backslash, but I have tried without it also). I also included the parfile here. I checked the grants and they seem to be fine:
SQL> select * from session_roles where role like '%DATABASE' or role like 'DBA';
ROLE
------------------------------
DBA
EXP_FULL_DATABASE
IMP_FULL_DATABASE
SQL> select * from session_privs where privilege like '%DICT%';
PRIVILEGE
----------------------------------------
SELECT ANY DICTIONARY
ANALYZE ANY DICTIONARY
My questions are these:
1) In interactive mode, does a dummy file expdat.dmp have to exist in the DATA_PUMP_DIR directory?
2) Does my export have to reside in the DATA_PUMP_DIR directory? (Again, there is no disk space to handle the dump file; one of the hard drives is just big enough for it, but since it also holds datafiles, the import would crash when they try to extend.)
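For reference, a minimal sketch (with hypothetical server, share, user and file names) of how a directory object pointing at a UNC path is usually created, granted and then referenced by the import; note that the path is opened by the Oracle service's own account on the database server, so that account, not the interactive login, typically needs permission on the share:

SQL> CREATE OR REPLACE DIRECTORY impdmp AS '\\server\share\folder\subfolder';
SQL> GRANT READ, WRITE ON DIRECTORY impdmp TO scott;

impdp scott/password@qadb DIRECTORY=IMPDMP DUMPFILE=client_export.dmp LOGFILE=client_import.log SCHEMAS=client_schema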
It imports the data fine up to a certain stage; after that, Oracle gives the following error:
Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE
ORA-39097: Data Pump job encountered unexpected error -1423
ORA-39065: unexpected master process exception in DISPATCH
ORA-01423: error encountered while checking for extra rows in exact fetch
ORA-04030: out of process memory when trying to allocate 123404 bytes (QERHJ has h-joi,kllcqas:kllsltba)
ORA-39014: One or more workers have prematurely exited.
Job "SYSTEM"."SYS_IMPORT_SCHEMA_04" stopped due to fatal error at 11:42:03
I thought it was due to lack of memory, so I increased pga_aggregate_target from 512MB to 600MB, but I am still getting the same error.
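A jump from 512MB to 600MB is quite small; if a larger value is worth trying, a minimal sketch (assuming an spfile is in use and the host has memory to spare) is:

SQL> SHOW PARAMETER pga_aggregate_target
SQL> ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE = BOTH;

Note that ORA-04030 means the process could not get more memory from the operating system, so OS- or instance-level limits can also be the real cause rather than the PGA target itself.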
I'm trying to import schemas from a dump file that came from a different environment.
What I have is:
1. the dump file
2. the log file of the export
I'm trying to import the file (containing three schemas) with REMAP_SCHEMA, and it fails with a lot of ORA-00959: tablespace 'string' does not exist.
Now, I've read in OTN:
[URL]
that what you need to do in that case is to use the REMAP_TABLESPACE option to redirect the objects to a different tablespace.
I don't see the name of the tablespace I'm getting the error for in the export log, and I don't know whether there are more tablespaces I will have to redirect with REMAP_TABLESPACE.
I don't want to run this three times, hit an error, only then find out which tablespace needs redirection next, and start over again...
How can I know, from the dump file and the log file, which tablespace names I need to redirect to my own? Or is the tablespace giving me the error the only one in the dump file?
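One way to see every tablespace referenced in the dump, without running the import three times, is to have impdp write the DDL to a script instead of executing it, and then search that script for TABLESPACE clauses. A minimal sketch, with hypothetical directory and file names:

impdp system/password DIRECTORY=dp_dir DUMPFILE=exp_file.dmp SQLFILE=ddl_preview.sql FULL=Y

and then search the generated script (it is written into the dp_dir directory) for the word TABLESPACE, for example:

grep -i tablespace ddl_preview.sql | sort -u

SQLFILE only writes the DDL; nothing is imported, so this is safe to run against the real target.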
We are trying to import data into existing tables in a schema using Data Pump.
However, the foreign-key (child) tables are being imported first and then the master table data, thus violating the constraints.
Apparently the larger tables are being imported first regardless of the referential integrity constraints, which causes the constraint violations (contrary to my understanding).
Is this normal behaviour during a Data Pump import?
Is it possible that the keys being sequence-generated are causing this?
As I understand it, import commits after each table. In that case, can we defer the commit at all (at the expense of large undo), set the constraints to deferrable, and try the import?
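Data Pump generally orders table data by size rather than by parent/child relationships, so when loading into pre-existing tables a common workaround (sketched here with hypothetical table, constraint and schema names) is to disable the foreign keys, load data only, and re-enable them afterwards. Deferrable constraints alone would not span the loads, because the import commits per table:

SQL> ALTER TABLE app.child_table DISABLE CONSTRAINT fk_child_master;

impdp system/password DIRECTORY=dp_dir DUMPFILE=exp.dmp SCHEMAS=app CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND

SQL> ALTER TABLE app.child_table ENABLE CONSTRAINT fk_child_master;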
Is it possible to import a dump file using the impdp Data Pump utility on Oracle 10g when the export dump was taken using the traditional exp utility, and vice versa?
We were trying to refresh the QA database from the production export dump. After the import we checked DBA_RULES in the QA database and it does not show any rules. This seems to be the reason many of the packages became invalid after the import. I checked the production database, which has 16 rules defined in DBA_RULES. I am not sure about the syntax for creating rules. Is there any way I can take a backup of the rules from PROD, or build a rule-creation script, so that I can create similar rules in QA?
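One hedged way to rebuild the rules by hand is to read their definitions from DBA_RULES on PROD and recreate them in QA with DBMS_RULE_ADM; a sketch with a hypothetical rule name and condition (the real condition text would be copied from the PROD query, and the exact DBA_RULES column list may vary by release):

SQL> SELECT rule_owner, rule_name, rule_condition FROM dba_rules;

BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'APP_OWNER.SAMPLE_RULE',   -- hypothetical name
    condition => ':evt.amount > 1000');     -- condition copied from PROD
END;
/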
I get an import error while trying to import objects into a schema.
Export file created by EXPORT:V11.02.00 via direct path
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
. importing DEMO's objects into TEST
. . importing table "TAB1"
IMP-00058: ORACLE error 1950 encountered
ORA-01950: no privileges on tablespace 'USERS'
I understand we need to grant the user a space quota on the tablespace, as below:
ALTER USER <user> QUOTA UNLIMITED ON <tablespace_name>;
My other question is: can we grant QUOTA UNLIMITED on <tablespace_name> to a user?
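For reference, both usual forms look like this (user and tablespace names are placeholders). There is no GRANT QUOTA statement; the per-tablespace quota is set with ALTER USER, while GRANT UNLIMITED TABLESPACE is the system privilege that covers every tablespace:

SQL> ALTER USER demo QUOTA UNLIMITED ON users;   -- quota on one tablespace
SQL> GRANT UNLIMITED TABLESPACE TO demo;         -- system privilege, all tablespaces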
I am new to Oracle DBA and I'm facing one problem. I exported and imported my data from one Oracle DB to another using exp/imp. In both the exp and imp log files, it shows the following messages at the end:
"Import terminated successfully with warnings." "Export terminated successfully with warnings."
but when I count the rows, some are missing in some tables...
What could be the cause? Is there any other way to cross-check whether the export/import was successful?
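One simple cross-check is to generate per-table counts on both databases and compare the outputs; a minimal sketch, assuming it is run as a DBA with your own schema name substituted:

SET PAGESIZE 0 FEEDBACK OFF
SELECT 'SELECT '''||table_name||''', COUNT(*) FROM '||owner||'.'||table_name||';'
  FROM dba_tables
 WHERE owner = 'APP_SCHEMA'
 ORDER BY table_name;

Spool the generated statements, run them on source and target, and diff the two result files. The "terminated with warnings" messages also mean the logs contain specific errors, so searching them for IMP- and ORA- lines usually shows which rows were rejected and why.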
How can I get (or have a script that gets), in PL/SQL, the parameters that were given on a Data Pump command (export or import): the mode (easy), the tables/schemas list, the exclude/include values, and so on?
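There does not appear to be a simple documented view for the original command line, but the job's master table (which has the same name as the job) holds the job's details while it exists; a hedged sketch, noting that the master table's layout is undocumented and varies by release, and that the job/table name below is hypothetical:

SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;
SQL> SELECT * FROM "SYSTEM"."SYS_IMPORT_SCHEMA_04";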
I have met some problems with the Data Pump tools. We have a large DB (Oracle 10.2.0.5, single instance on HP-UX 11.11, about 8TB of data) and we want to migrate to 10.2.0.5 RAC on Linux IA 64-bit. The migration window is around 20 hours, and it cannot be assigned more hours.
We figure that if impdp and expdp could run at the same time, that would save a lot of time. So I wonder if there is any way to implement this, or any other way to speed things up while keeping the data consistent?
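If the 20-hour window is the constraint, one way to overlap the unload and the load completely is to skip the dump file altogether and pull the data across a database link with impdp's NETWORK_LINK parameter, combined with PARALLEL on the RAC side. A minimal sketch with hypothetical link, directory and password values (a DIRECTORY is still needed for the log file, and some data types such as LONG cannot be moved over a link):

impdp system/password FULL=Y NETWORK_LINK=hpux_src PARALLEL=8 DIRECTORY=dp_dir LOGFILE=migration_full.log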
Why is Data Pump faster than normal exp? One answer I know is that Data Pump uses block mode and exp uses byte mode. Is there any other major reason? Also, say I have a database of 10GB and want to take a Data Pump backup, but the condition is that each dump file can only be 2GB in size. Is there any way to take a full backup of the database in parts?
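For the 2GB-per-file condition, expdp can split the dump set itself with FILESIZE plus the %U substitution variable in DUMPFILE; a minimal sketch with hypothetical names:

expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_%U.dmp FILESIZE=2G LOGFILE=full_exp.log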
I have database A (working in the live environment) and database B, a copy of database A (not live). Last week I restored the whole RMAN backup of database A onto database B. Now I don't want to change anything in any schema; I only want to import the updated and new records into the tables in database B.
There are around 20 schemas. I already have everything in the new database B, all the required database objects such as procedures, functions and packages, with indexes on all tables and data in the tables; I just want to add the new and updated data.
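Data Pump itself only appends, truncates, replaces or skips whole tables (TABLE_EXISTS_ACTION); it does not merge changed rows. One alternative, sketched here per table with hypothetical table, column and database-link names, is a MERGE run on B over a database link to the live database A:

MERGE INTO app.customers b
USING app.customers@lnk_dba a            -- link pointing at live database A
   ON (b.customer_id = a.customer_id)
 WHEN MATCHED THEN UPDATE SET b.name = a.name, b.updated_on = a.updated_on
 WHEN NOT MATCHED THEN INSERT (customer_id, name, updated_on)
                       VALUES (a.customer_id, a.name, a.updated_on);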
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether importing to tape is possible, and if so, whether the data would be accessible if needed later.
I'm installing a new application-testing server. I have installed the 11g R2 Instant Client and the SQL*Plus client.
When I try to run an expdp command, I get this:
'expdp' is not recognized as an internal or external command
Now, I understand this is because I don't have the bin directory of a client installation in my OS PATH. My question is: which client exactly do I need in order to use the Data Pump utility, and where do I download it?
I've found lots of posts around the web from people who had issues with defining $ORACLE_HOME/bin in $PATH, or with client incompatibility, but no answer to my specific question.
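As far as I can tell, expdp and impdp are not included in the Instant Client at all; they ship with the full client (Administrator install type) or with a database server installation, and the Data Pump job itself always runs on the server side. Once a full client is installed, something along these lines puts its bin directory on the PATH on Windows (the install path shown is hypothetical):

set PATH=C:\oracle\product\11.2.0\client_1\bin;%PATH%
expdp system/password@testdb SCHEMAS=app_schema DIRECTORY=DATA_PUMP_DIR DUMPFILE=app.dmp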
ORA-31693: Table data object "033"."EZMILRIKUZ" failed to load/unload and is being skipped due to error: ORA-00922: missing or invalid option
My backup syntax is:
033/******@INTORCL DIRECTORY=exp_dir DUMPFILE=033.dmp LOGFILE=033.LOG FULL=N REUSE_DUMPFILES=Y FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"
I thought there was a problem with the table, so I created a new one, and now I'm getting the same error on a different table (the third one in the list):
. . exported "033"."PRQ"      192.9 KB     479 rows
. . exported "033"."EZMIL"    558.8 KB    1229 rows
ORA-31693: Table data object "033"."MIL" failed to load/unload and is being skipped due to error:
ORA-00922: missing or invalid option
When I take off the FLASHBACK_TIME parameter it works fine, but I need this parameter.
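ORA-00922 with FLASHBACK_TIME is often just the operating-system shell stripping the double quotes before expdp sees the value; putting the parameter in a parameter file avoids the shell entirely. A sketch, reusing the parameters above in a hypothetical parfile:

# exp_033.par
DIRECTORY=exp_dir
DUMPFILE=033.dmp
LOGFILE=033.LOG
FULL=N
REUSE_DUMPFILES=Y
FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"

expdp 033/******@INTORCL PARFILE=exp_033.par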
I want to do this connected to Windows 2008 R2 with Oracle 11g R2: execute an import, which will do a full import, from a Linux server with Oracle 10g called "SUPORTE1".
ORA-39001: invalid argument value
ORA-39200: Link name "SUPORTE1;" is invalid.
ORA-44004: invalid qualified SQL name
I tested the connection and the DB link, and created the directory.
create directory asmexpdir as '+RECO/FILTDB/EXPDP';
grant read,write on directory asmexpdir to oraasfs;

expdp oraasfs/oraasfs2301 directory=asmexpdir dumpfile=SBSR_EXP.dmp tables=TM_SFS_CUST_01 logfile=EXPDP_LOG:SBSR_EXP.log
Success message:
. . exported "ORAASFS"."TM_SFS_CUST_01"    387.2 MB    817684 rows
Master table "ORAASFS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ORAASFS.SYS_EXPORT_TABLE_01 is:
  +RECO/filtdb/expdp/sbsr_exp.dmp
Job "ORAASFS"."SYS_EXPORT_TABLE_01" successfully completed at 03:34:59
I would like to run this daily and delete the dump files after 14 days, but the script shows an error. What can be the solution to get this script running?
#!/bin/bash
#Script to Perform Datapump Export backup Every Day
################################################################
#Change History
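The fragment above is cut off, so here is a hedged sketch of the general shape such a job can take, assuming the dump files land in a normal file-system directory; if they are written into ASM, as in the example above, the old files would need to be removed with asmcmd rather than find. All paths, directory-object names and credentials here are hypothetical:

#!/bin/bash
# Daily Data Pump export with 14-day retention
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
export ORACLE_SID=FILTDB
export PATH=$ORACLE_HOME/bin:$PATH

DAY=$(date +%Y%m%d)
DUMP_DIR=/u01/exports            # must match the DP_FS_DIR directory object

expdp oraasfs/password directory=DP_FS_DIR \
      dumpfile=SBSR_EXP_${DAY}.dmp logfile=SBSR_EXP_${DAY}.log \
      tables=TM_SFS_CUST_01

# purge dumps and logs older than 14 days
find ${DUMP_DIR} -name 'SBSR_EXP_*' -mtime +14 -exec rm -f {} \;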
Export: Release 10.2.0.1.0 - Production on Saturday, 25 December, 2010 5:10:06
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "TEST1"."SYS_EXPORT_TABLE_01": test1/******** DIRECTORY=datapump DUMPFILE=expfull-3.dmp query=auth_test:"where TXNREQDTTIME<20-MAY-10" tables=auth_test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
I need to use Data Pump for the first time on my production database. Currently, on the testing database, when I take a schema-level export there are no errors or warnings in the log file, but when I import it, the following ORA warning appears in the import log file. I searched on Google and the only fix I found is to recompile the invalid objects. How can I avoid these warnings in the log file?
"ORA-39082: Object type ALTER_PROCEDURE:"QUANTISV4"."P_CTM_ABN_INVST_EQUITY" created with compilation warnings"