I have more than 100 dump files to import into my Oracle 11g database. I know how to import (impdp) when the dumps share the same name, but here the dump file names are all different (e.g. aa.dmp, bb.dmp, ...).
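A minimal sketch of one way to script this on Linux, assuming the files sit in a directory already mapped to a Data Pump directory object (the DMP_DIR object, the credentials, and the log-file naming are all placeholders):

for f in /u01/dumps/*.dmp; do
  # one impdp run per file; each dump gets its own log
  impdp system/password DIRECTORY=DMP_DIR DUMPFILE=$(basename "$f") LOGFILE=$(basename "$f" .dmp)_imp.log
done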
I have 3 dump files: A.dmp, B.dmp, C.dmp. Can I use multiple REMAP_TABLESPACE entries in a par file to remap the tablespaces for the above dump files?
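A parameter file can carry several REMAP_TABLESPACE entries, one per source:target pair, alongside a comma-separated DUMPFILE list. A sketch, where the directory object and tablespace names are hypothetical:

DIRECTORY=DMP_DIR
DUMPFILE=A.dmp,B.dmp,C.dmp
REMAP_TABLESPACE=TS_OLD1:TS_NEW1
REMAP_TABLESPACE=TS_OLD2:TS_NEW2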
I am exporting a big table (roughly 3,000,000 rows). Using the exp command, the error returned is "EXP-00028: failed to open expdat.dmp for write". Is there a way to export this table into multiple files (i.e. to split the dump)?
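The original exp utility can split a dump across several files: give FILE a comma-separated list and set FILESIZE, and exp moves on to the next file as each one fills. A sketch, with placeholder credentials, table name, and sizes:

exp scott/tiger TABLES=big_table FILE=exp1.dmp,exp2.dmp,exp3.dmp FILESIZE=2G LOG=exp_split.log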
I have the following situation: in a directory named /dat/global/stock/ I receive files with varying names, for example abcdef.112, dfgrt.2, and so on.
I want to load these files one by one into an external table and then generate another file based on some enrichment.
Step 1: load the first file into the external table. Step 2: enrichment. Step 3: file generation.
The problem is that I usually get around 1,000 files in that directory, so I need to pick them up one at a time and move each to another directory once processed. How can I fetch the files one by one and generate the output using the ORACLE_LOADER external table driver? (A sketch follows.)
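One possible shape for the driver script, assuming an external table EXT_STOCK whose directory object points at /dat/global/stock/, a placeholder enrichment query, and a processed/ subdirectory for handled files (all names are assumptions):

for f in /dat/global/stock/*; do
  fname=$(basename "$f")
  sqlplus -s user/password <<EOF
ALTER TABLE ext_stock LOCATION ('$fname');
SPOOL /dat/global/out/$fname.out
SELECT * FROM ext_stock;   -- placeholder for the real enrichment query
SPOOL OFF
EOF
  mv "$f" /dat/global/stock/processed/   # archive the file once it has been handled
done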
I am facing a problem importing a DMP file into 11g. While importing, it gives a "not responding" error. I have attached a JPG file to show exactly what goes wrong during the import. My dump is from 9i and I want to import it into 11g R2.
I want to import a dump file while leaving out 2 tables. The dump file contains 100 tables plus indexes and constraints, so out of the 100 tables I want to import 98 (excluding the other 2).
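With Data Pump this is what EXCLUDE is for (with the original imp you would instead have to list the 98 wanted tables in TABLES). A parfile sketch, where the two table names are placeholders; a parfile avoids the shell-quoting problems the filter clause otherwise causes:

DIRECTORY=DMP_DIR
DUMPFILE=full.dmp
EXCLUDE=TABLE:"IN ('TABLE_A','TABLE_B')"
LOGFILE=imp_98tables.log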
I tried to import into 11g a dump that was taken on Oracle 9i. The import starts but hangs after some time; to be exact, it checks only the character sets of the databases and then hangs. Let me know if there are any specific procedures for importing a dump from 9i into 11g directly.
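Note that a dump written by the 9i exp utility can only be read by the original imp utility, never by impdp, even on 11g. A sketch of the direct route, with placeholder credentials and file names:

imp system/password FILE=expdat9i.dmp FULL=Y LOG=imp_9i_to_11g.log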
We have two databases running 10.2.0.4 and 9.2.0.8, both holding the same unpartitioned table of 80 GB. I export the table on 10g using PARALLEL=8 and a DUMPFILE with the %U option; that takes around 4 hours.
On 9.2.0.8 I export using the parameters below, which takes around 5 hours:
buffer=2000000 recordlength=64000
What options can I try to speed up the export on both versions? (Two ideas are sketched below.)
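Two commonly cited knobs, sketched with placeholder names. On 10g, spreading the %U dump files across separate directory objects (ideally on separate disks) lets the parallel workers write without contending; on 9i, DIRECT=y bypasses the SQL evaluation layer:

expdp system/password TABLES=big_table PARALLEL=8 DUMPFILE=dp_dir1:big_%U.dmp,dp_dir2:big_%U.dmp LOGFILE=big_exp.log

exp system/password TABLES=big_table DIRECT=y RECORDLENGTH=65535 FILE=big.dmp LOG=big_exp9i.log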
I want to take a schema-level export. The schema is 115 GB. Do we need the same amount of free space on the server side (where the dump is written) as the schema size, or is less or more space required?
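Rather than guessing, expdp can report the expected dump size without writing anything. A sketch, with placeholder credentials and schema name:

expdp system/password SCHEMAS=big_schema ESTIMATE_ONLY=Y ESTIMATE=BLOCKS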
We are DB users (not DBAs) and have always used exp/imp before application upgrades.
I was googling around and read something like "Oracle Data Pump - Time to let go of Exp / Imp". It seems exp/imp is obsolete.
Our system doesn't have an "expdp" command:
> find . -name expdp >
Is this because our SQL*Plus is too old?
> sqlplus
SQL*Plus: Release 8.1.7.0.0 - Production on Tue May 29 16:05:28 2012
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Enter user-name: ^C^C
- Does our DBA need to give us privileges to run expdp/impdp?
- Is it true that an expdp/impdp dump is written on the Oracle server (not on the client machine)?
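For reference, expdp only ships with 10g and later installations (an 8.1.7 client predates Data Pump entirely), and Data Pump dumps are indeed written on the server, into a directory object the DBA grants access to. A sketch of what the DBA would typically run, with a placeholder path and user:

CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO app_user;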
Export: Release 10.2.0.3.0 - Production on Thu Dec 6 15:06:21 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Release 10.2.0.3.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
About to export specified users ...
EXP-00010: DISTESTA is not a valid username
Export terminated successfully with warnings.
C:\Documents and Settings\Administrator>
When I connect from SQL*Plus, it doesn't give any error. What could be the reason?
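One quick check, assuming access to the dictionary views: exp's OWNER parameter must match the stored username exactly, so a user created with a quoted mixed-case name will not be found under its uppercase form.

SELECT username FROM dba_users WHERE UPPER(username) = 'DISTESTA';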
I am trying to import a dump into my Oracle 10.2.0.3.0 database on my Windows 7 Professional laptop. The dump was exported from my Windows XP desktop PC running Oracle 10.2.0.1.0.
Below is the error I get:
Import: Release 10.2.0.3.0 - Production on Fri Nov 12 15:57:52 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Username: system/password@orcl
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production With the Partitioning, OLAP and Data Mining options
Import file: EXPDAT.DMP > F:\Personal\DPISIMBA.dmp
Enter insert buffer size (minimum is 8192) 30720>
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no >
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no >
Username: paymaster
Enter table(T) or partition(T:P) names. Null list means all tables for user
Enter table(T) or partition(T:P) name or . if done:
When I press the Enter key, the console hangs, a window appears saying "Console Window Host has stopped working", and then the console closes prematurely.
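One workaround worth trying is to run imp non-interactively, so the Windows console prompt loop is avoided entirely. A sketch using the path from the transcript (the FROMUSER/TOUSER mapping is an assumption):

imp system/password@orcl FILE=F:\Personal\DPISIMBA.dmp FROMUSER=paymaster TOUSER=paymaster LOG=imp_paymaster.log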
I have a full export dump file. From it, I need to import only one procedure belonging to schema IC_MIGR_DATA, and I need to import it into schema REP_USER.
Import: Release 10.2.0.2.0 - 64bit Production on Friday, 20 June, 2008 16:19:46
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-31694: master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" failed to load/unload
ORA-31644: unable to position to block number 30698425 in dump file "/AIIC_backup/expbkp/dumps/IC/IC_FULL_19062008.dmp"
How can I import this one procedure, JET_UPLIFT, into the REP_USER schema? The owner of the procedure is IC_MIGR_DATA.
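Provided the dump itself is readable (the ORA-31644 above suggests the file may be corrupt or truncated), a single procedure can be pulled out with INCLUDE and redirected with REMAP_SCHEMA. A parfile sketch, with a placeholder directory object:

DIRECTORY=DMP_DIR
DUMPFILE=IC_FULL_19062008.dmp
SCHEMAS=IC_MIGR_DATA
INCLUDE=PROCEDURE:"= 'JET_UPLIFT'"
REMAP_SCHEMA=IC_MIGR_DATA:REP_USER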
I want to know how to add a date/time stamp to the export dump file name on Linux using a parfile. I keep getting the error "contains an invalid substitution variable".
My parfile has:
DUMPFILE=Daily_Full_%U_`date "+%Y%m%d%H%S"`.dmp
or
DUMPFILE=Daily_Full_%U_`%date%`.dmp
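A parfile is read by Data Pump itself, not by the shell, so backticks are never expanded, and % is reserved for the %U substitution variable, hence the error. The usual workaround is to build the timestamp in the shell and pass DUMPFILE on the command line; a sketch, with placeholder credentials and parfile name:

STAMP=$(date +%Y%m%d%H%M%S)
expdp system/password PARFILE=daily.par DUMPFILE=Daily_Full_%U_${STAMP}.dmp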
I'm trying to import schemas from a dump file that came from a different environment.
What I have is:
1. the dump file
2. the log file of the export
I'm trying to import the file (containing three schemas) with REMAP_SCHEMA, and it fails with a lot of ORA-00959: tablespace 'string' does not exist errors.
Now, I've read in OTN:
[URL]
that what you need to do in that case is to use the REMAP_TABLESPACE option to redirect the objects to a different tablespace.
I don't see the name of the tablespace I'm getting the error for in the export log, and I don't know whether there are more tablespaces I have to redirect with REMAP_TABLESPACE.
I don't want to run this three times, hit an error, discover the next tablespace needing redirection that way, and only then start over...
How can I tell, from the dump file and the export log, which tablespace names I need to redirect to my own? Or is the tablespace giving me the error the only one in the dump file?
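One way to list every tablespace referenced in the dump without importing anything is to have impdp write the DDL to a script and then search it. A sketch, with placeholder names:

impdp system/password DIRECTORY=DMP_DIR DUMPFILE=exp.dmp SQLFILE=ddl.sql FULL=Y
grep -o 'TABLESPACE "[^"]*"' ddl.sql | sort -u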
When I import each succeeding dump, I drop the existing schema ("SQL> drop user username cascade;") and import the dump with "impdp system ...". Now I would like to import a dump into an existing instance, but import the data only, leaving the current packages and other metadata untouched and unchanged.
1. Do I need to drop the user before the import, given the requirements above?
2. If I do need to drop the user, what should the script be?
3. For the import itself, what parameters should I use?
4. What do I need to consider before doing the import? (A data-only sketch follows.)
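A sketch of the data-only variant, with placeholder names: CONTENT=DATA_ONLY skips all DDL (so the user is not dropped and packages stay untouched), and TABLE_EXISTS_ACTION controls what happens to the rows already in each table:

impdp system/password DIRECTORY=DMP_DIR DUMPFILE=exp.dmp SCHEMAS=app_user CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=TRUNCATE LOGFILE=data_only_imp.log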
Is it possible to import a dump file using the impdp Data Pump utility on Oracle 10g when the export dump was taken using the traditional exp utility, and vice versa?
Here the biggest schema is 250 GB and the total size of all the schemas is 300 GB. The filesystem where I am writing the dump has 350 GB free, but even then the expdp failed, saying:
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Why did it fail, and how do I restart it and make sure it runs to completion without error?
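ORA-39095 usually means the job ran out of dump files to write into (for example a FILESIZE cap without enough %U files). A stopped Data Pump job is resumable: attach to it, add a file, and restart it. A sketch, assuming the default job name:

expdp system/password ATTACH=SYS_EXPORT_SCHEMA_01
Export> ADD_FILE=extra_%U.dmp
Export> START_JOB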
I have taken an expdp dump from an 11g database running in the development environment. Now I want to import this dump into a 10g database running in the QA environment.
While taking the export from the 11g database I used this command, and the backup was successful:
expdp system TABLES=sss_exp_test.EXP_SB_HEADER_TMP VERSION=10.2 DIRECTORY=RMSDEV_IMP_DIR DUMPFILE=EXP_SB_HEADER_TMP.dmp LOGFILE=EXP_SB_HEADER_TMP_expdp.log
When I try to import this dump into 10g I get an error:
impdp system TABLES=sss_exp.EXP_SB_HEADER_TMP DIRECTORY=RMS_DATA_PUMP DUMPFILE=EXP_SB_HEADER_TMP.dmp LOGFILE=EXP_SB_HEADER_TMP_impdp.log
Import: Release 10.2.0.5.0 - 64bit Production on Sunday, 14 October, 2012 19:58:53
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39040: Schema expression "SCHEMAS" must identify exactly one schema.
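Note the export was taken from schema SSS_EXP_TEST while the import references SSS_EXP; the TABLES clause has to name the schema as it exists inside the dump, with REMAP_SCHEMA redirecting it to the target. A sketch of the corrected call, assuming the table should land in SSS_EXP:

impdp system TABLES=sss_exp_test.EXP_SB_HEADER_TMP REMAP_SCHEMA=sss_exp_test:sss_exp DIRECTORY=RMS_DATA_PUMP DUMPFILE=EXP_SB_HEADER_TMP.dmp LOGFILE=EXP_SB_HEADER_TMP_impdp.log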