Export/Import/SQL Loader :: IMPDP Into Schema With Truncated Tables
Aug 2, 2012
I've got a schema in which I've truncated all the tables. I have a full schema export that I took a while back, and I want to import it into the schema to basically 'reset' it.
On the first run, I got this:
ORA-39151: Table "xyz.tablename" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
I've been reading around, and I see suggestions to add this to the par file:
CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND
And I've seen others use:
table_exists_action=replace
I basically want to put the data back into the tables and have the indexes rebuilt.
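For what it's worth, a minimal par file sketch for this kind of reset (directory object and dump file names are placeholders; it assumes the truncated tables and their indexes still exist, so only the rows need to come back):

# directory object and dump file name are placeholders
DIRECTORY=dump_dir
DUMPFILE=schema_full.dmp
SCHEMAS=xyz
# rows only -- the table and index DDL already exists in the target
CONTENT=DATA_ONLY
# harmless here, since the tables are already empty
TABLE_EXISTS_ACTION=TRUNCATE

With CONTENT=DATA_ONLY the indexes are maintained during the inserts rather than rebuilt afterwards; TABLE_EXISTS_ACTION=REPLACE would instead drop the tables and re-create them (and their indexes) from the dump.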
I have a process to export a schema using expdp and import it using impdp. Everything creates successfully except for a trigger. The trigger gives an error that the table or view does not exist. The account that I use to import the schema is different from the schema user, but it is a highly privileged account. I notice that the schema in the CREATE OR REPLACE TRIGGER line of code is remapped (I am using remapping in the impdp syntax), while the rest of the trigger's code (which is just a sequence trigger for a primary key column) does not have the schema. To work around the issue, I have my bash script log into Oracle as the schema user after the import of the schema and execute the trigger code. Why do I have to do this for trigger code but not for other objects, like views, that create just fine?
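If I had to guess at the mechanics (the trigger below is invented for illustration): only the trigger name on the CREATE line carries the remapped schema, so the unqualified table in the ON clause resolves against the session performing the import rather than the remapped schema, which would explain the "table or view does not exist" error under the privileged account:

CREATE OR REPLACE TRIGGER "NEWSCHEMA"."EMP_BI_TRG"
BEFORE INSERT ON emp   -- unqualified: resolves in the importing session's schema
FOR EACH ROW
BEGIN
  SELECT emp_seq.NEXTVAL INTO :NEW.emp_id FROM dual;
END;
/

Running the same DDL as the schema owner (as the bash script does) makes the unqualified name resolve correctly, which would fit the observed behaviour.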
I have imported schemas (impdp) from a production database (10gR2, RHEL 64-bit). One of the schemas has a db_link. The db_link points to a database that exists on the same server - both on the production server and also on the new server where I imported the schemas. When I run a simple query in production using this db_link it works, but when I run the same query on the test server (where I imported the schemas), it gives me the following error: ORA-02019: connection description for remote database not found. I run this in the prod database:

Select count(1) from SOME_TABLE@my_link;

When I run it in the new database, it gives the above ORA error - even if I qualify the table and db_link with the schema owner, like this:

Select count(1) from the_owner.some_table@the_owner.my_link;

NOTE: I am not running these queries as the schema owner - I do not know the password. I am able to connect to both databases like this from the command prompt:

$ sqlplus user/password@db1
$ sqlplus user/password@db2

Does this mean that I need to recreate the db_link - perhaps every time I import?
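One thing worth checking (a sketch; the link name and credentials below are placeholders): the connect string stored with the link has to resolve on the test server too, and a private link can only be recreated by its owner:

-- compare the stored connect string on both servers (run as a DBA)
SELECT owner, db_link, host FROM dba_db_links WHERE db_link LIKE 'MY_LINK%';

-- if the HOST value does not resolve in the test server's tnsnames.ora,
-- recreating the link (as its owner) with a descriptor valid there is one fix
CREATE DATABASE LINK my_link
  CONNECT TO remote_user IDENTIFIED BY remote_pwd
  USING 'db2';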
I have a dumpfile from a database with hundreds of tablespaces. Do I need to remap all of them on impdp, or is there a way to point all tables to a default tablespace? I mean, the source database has 200 tablespaces; the target database has just 1.
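One commonly suggested alternative to writing 200 REMAP_TABLESPACE entries (a sketch; directory and file names are placeholders): strip the segment attributes so every object is created in the importing user's default tablespace:

DIRECTORY=dump_dir
DUMPFILE=full.dmp
# removes the TABLESPACE and STORAGE clauses from the DDL, so objects land
# in each user's default tablespace
TRANSFORM=SEGMENT_ATTRIBUTES:N

REMAP_TABLESPACE can also be specified multiple times in one par file if only a handful of explicit mappings are really needed.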
Should I be able to use NETWORK_LINK and REMAP_DATA together? I am trying to do three things at once: copy data from one DB to another (prod to test), change the schema, and mask sensitive data. I would like to avoid using dump files; importing over a network link would be nice. My initial testing suggests the answer is no.
When I use impdp with the following parameter file, sourcetable.column_to_remap is transferred without modifications; no data remapping is done.

userid=importinguser/XX@orcl
tables=sourceuser.sourcetable
table_exists_action=REPLACE
remap_data=sourceuser.sourcetable.column_to_remap:remappkg.remapfn
network_link=source_link
remap_schema=sourceuser:importinguser
directory=dumpdir
logfile=log_of_imp

If I dump the data into a file first and then import from the file, the data remapping is done as expected, so I believe my remappkg is correct.
Hopefully there is something wrong with my parameter file and NETWORK_LINK & REMAP_DATA do work together!
Edit: Testing was done with 11.2.0.1.0 on Windows Server 2008 R2.
Edit2: More testing: 11.1.0.7 on Linux - success!
Edit3: When writing "Edit2" I thought it also worked with 11.2.0.1 on Linux, but I was wrong (the source data was already remapped). ==> So the answer is: yes, they work together in version 11.1.0.7, and they don't seem to work in 11.2.0.1.
I've been using datapump for a long time now but I have not come across this problem before.
Importing just two tables:

Table1: data = 100MB, 11 million rows
Table2: data = 4.2GB, 19.6 million rows

Table1 ran for approx. 5 hours; Table2 ran for approx. 15 hours.
If I run the impdp importing both tables in the same par file, the default tablespace of the user the import is running as runs out of space due to: ORA-01691: unable to extend lob segment <owner>.SYS_LOG0001175799C00045$$ by 512 in tablespace USERS. I do not understand why it is creating objects in order to import tables into someone else's schema.
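I'm not certain what the SYS_LOG%$$ segment is, but segments created on behalf of the job (including the Data Pump master table, SYS_IMPORT_*) go to the default tablespace of the user running the import, which would explain USERS filling up. A quick way to see what is actually growing there (plain dictionary query):

SELECT owner, segment_name, segment_type, tablespace_name,
       ROUND(bytes/1024/1024) AS mb
FROM   dba_segments
WHERE  tablespace_name = 'USERS'
ORDER  BY bytes DESC;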
The environment is Red Hat Linux 4.1.2-51 running Oracle 11.2.0.1 (11gR2). This is a 9-node RAC using ASM.
We are having trouble with the import of a transportable tablespace. When I try to import it as SYSTEM the import completes correctly; when I try to use another user, we receive this message:
ORA-31626: job does not exist
ORA-31633: unable to create master table "BMCESE.SYS_IMPORT_TRANSPORTABLE_05"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-00959: tablespace 'RMCCO_RMC_UTZ_PP080531' does not exist
This happens only on the development server; it works on the test server, so I don't think the problem is in the grants (I checked that they are the same, and I tried with the DBA grant too and we still received the error). Another strange thing is that tablespace 'RMCCO_RMC_UTZ_PP080531' is not included in the dump that I am trying to import, and it doesn't exist in the database.
I'm writing a job to export data on a weekly basis and archive those dumps in case they need to be re-imported in the future; it is important that we are able to import successfully if ever needed.
Can impdp perform checks to see if dumps are valid? How about the old imp? If not, will it be safe enough to rely on the export log file? In other words, is it safe to assume all is well if there is no error or warning in the exp log?
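Two read-only checks I'm aware of (user, directory and file names below are placeholders); note that both exercise the dump's metadata rather than verifying every data block:

# Data Pump: reads the dump and writes the DDL it contains to a script,
# without importing anything
impdp system/pwd DIRECTORY=dump_dir DUMPFILE=weekly.dmp SQLFILE=weekly_ddl.sql

# classic imp: lists the dump's contents without importing
imp system/pwd FILE=weekly.dmp FULL=Y SHOW=Y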
I'm going to import a single-instance database on a filesystem into a RAC on ASM; both are on the same server, running Oracle 11.2.0.3. So I was wondering: can I use the network link mode of impdp without setting up a listener on the 10Gb Ethernet interface?
I don't want to incur the overhead of the TCP network layer, because both the single-instance DB and the RAC are on the same server.
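As far as I know a database link (and therefore NETWORK_LINK) still needs a listener endpoint, but the listener can listen on IPC rather than TCP, which keeps the traffic off the network stack. A sketch with placeholder names (the KEY value must match an IPC address configured in the target's listener.ora):

# tnsnames.ora on the importing side
ORCL_IPC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = orcl_ipc))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

-- database link for impdp NETWORK_LINK (credentials are placeholders)
CREATE DATABASE LINK source_link
  CONNECT TO system IDENTIFIED BY pwd USING 'ORCL_IPC';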
I'm currently moving an IOT from one database to another using expdp/impdp. The IOT is non-partitioned and about 100GB in size, containing ~1.1 billion rows.
The dumpfile contains nothing else but the IOT. I'm importing with no special parameters, no pre-created IOT, just an ordinary dumpfile import:

impdp username/password dumpfile=impdp:iot.dmp nologfile=y
During the import I got "unable to extend TEMP" errors from impdp:
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
I had to add 2 additional files to my TEMP tablespace (96GB of temp in total) before the import could finish.
Is this temp usage to be expected when importing IOTs?
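Sorting ~1.1 billion rows into the index structure would plausibly need that much temp. For anyone wanting to watch the usage live during the import, a sketch using the standard dictionary views (8K block size assumed in the MB arithmetic):

SELECT s.sid, u.tablespace, u.segtype,
       ROUND(u.blocks * 8192 / 1024 / 1024) AS mb   -- assumes 8K blocks
FROM   v$tempseg_usage u
JOIN   v$session s ON s.saddr = u.session_addr;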
I have a schema-level export for user SAMPLE1 (default tablespace USERS) on an Oracle 9.2.0.1 production database. I want to import it into another 9i database on another server, so do I need to create the SAMPLE1 user and the USERS tablespace in the new database again?
I am using the Oracle 10g Data Pump export utility, expdp. What I am trying to do is to export a single schema, except for a certain partition P in table T.
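I don't believe EXCLUDE can name a single partition directly; one workaround sketch uses QUERY to filter out the rows of that partition (the schema, table, partition-key column and boundary values below are all invented for illustration):

SCHEMAS=MYSCHEMA
DIRECTORY=dump_dir
DUMPFILE=myschema_no_p.dmp
# keep every row of T except those falling into partition P
QUERY=MYSCHEMA.T:"WHERE part_date < DATE '2012-01-01' OR part_date >= DATE '2012-02-01'"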
Imagine you have 100 schemas backed up (expdp) in a dumpfile and you want to import just one schema from that dumpfile into a DB. You can specify just the one schema you want using the SCHEMAS parameter of impdp. But things are not so straightforward when you want to use REMAP_SCHEMA.
Here is my scenario: ===================
I took the expdp dump of schemas A and B in one go, so the dumpfile has objects from both A and B. The dumpfile name is schemas_AandB.dmp. Now I want to create schema C from A using the REMAP_SCHEMA parameter:
-- Putting each parameter on a separate line for readability
impdp PSTREF/PSTREF_123
DIRECTORY=ADET_EFX_DIR
DUMPFILE=schemas_AandB.dmp
LOGFILE=CreatingCfromA-Impdp.log
REMAP_SCHEMA=A:C

Everything goes fine. Schema C is created from schema A in the dumpfile.
But impdp tries to create schema B as well, because schema B was present in the dumpfile. Since schema B and its objects are already in the DB, I get the following errors:
ORA-31684: Object type USER:"B" already exists ORA-31684: Object type PROCEDURE:"B"."SP_CLEAREXPIREDSESSIONDATA" already exists ORA-31684: Object type PROCEDURE:"B"."SP_DELETESESSIONDATA" already exists ORA-31684: Object type PROCEDURE:"B"."SP_DELETESTATECONTEXTINFO" already exists
[code]...
I tried to prevent schema B in the dumpfile from being imported by specifying SCHEMAS, but I got the following error:

ORA-39065: unexpected master process exception in MAIN
ORA-12801: error signaled in parallel query server PZ99, instance oracth214:HEWRAC1 (1)
ORA-01460: unimplemented or unreasonable conversion requested

Maybe the REMAP_SCHEMA and SCHEMAS parameters won't work together.
Is there any way to prevent impdp from importing user B and its objects?
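I can't say whether it behaves better than SCHEMAS on that version, but phrasing the filter the other way round with EXCLUDE is worth a try (a sketch against the same dump):

DIRECTORY=ADET_EFX_DIR
DUMPFILE=schemas_AandB.dmp
LOGFILE=CreatingCfromA-Impdp.log
REMAP_SCHEMA=A:C
# skip everything that belongs to B instead of listing the schemas wanted
EXCLUDE=SCHEMA:"= 'B'"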
AIX 6.1, 11.2.0.3. I have an expdp dump from prod to be imported into our test database. I have imported it using impdp, but to my surprise the tables were imported while lots of indexes were not created, even though I used TRANSFORM=SEGMENT_ATTRIBUTES:N just to use the default USERS tablespace. How do I import the indexes separately, skipping the tables and other objects?
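A second pass restricted to indexes is one option (a sketch; the user, directory and file names are placeholders):

# create only the indexes from the dump; tables and other objects are skipped
impdp user/pwd DIRECTORY=dump_dir DUMPFILE=prod.dmp LOGFILE=idx_only.log INCLUDE=INDEX TRANSFORM=SEGMENT_ATTRIBUTES:N

Adding SQLFILE=idx.sql to the same command would instead write the index DDL to a script for inspection, without creating anything.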
I have a .dmp file and I want to use the data in this file for my further practice. So, I need to load the data in the .dmp file into some schema that exists in the database.
How can I exclude all objects of a certain type, e.g. statistics, during the import (with impdp)? I tried STATISTICS=NONE, but this option seems to be ignored during the import.
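STATISTICS=NONE is the old imp syntax; the Data Pump way is an EXCLUDE filter (a sketch; user, directory and file names are placeholders):

impdp user/pwd DIRECTORY=dump_dir DUMPFILE=exp.dmp EXCLUDE=STATISTICS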
I have the following situation: there is a directory named /dat/global/stock/, and inside it I will get files named differently, for example abcdef.112, dfgrt.2, ...
Here I want to load these files one by one into an external table and generate one more file based on some enrichment.
Step 1: Take the first file and load it into the external table.
Step 2: Enrichment.
Step 3: File generation.
Now I am facing a problem: in that particular directory I usually get 1000 files, so I need to take the files one by one and then put each into one more directory. How can I pick up the files one by one and generate a file using the Oracle loader?
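One common pattern (a sketch; the directory object, column list and delimiter are invented, and the file names come from the example above): define the external table once, then repoint it at each file before loading:

CREATE OR REPLACE DIRECTORY stock_dir AS '/dat/global/stock';

CREATE TABLE stock_ext (
  item_code  VARCHAR2(100),
  item_value NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY stock_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('abcdef.112')
);

-- before each load, point the same table at the next file
ALTER TABLE stock_ext LOCATION ('dfgrt.2');
INSERT INTO stock_work SELECT * FROM stock_ext;  -- stock_work is a hypothetical staging table

The per-file loop itself (pick a file, load, move it to the archive directory) can then be driven from a shell script or PL/SQL that issues the ALTER TABLE before each load.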
The tables have the same column names but different index structures, and the target tables are to be partitioned, hence I only want to import the content. Each table on the source database has a seq_num column, and I only want to extract the last few months of data.
TABLES=table1,table2...
DUMPFILE=dump_dir
CONTENT=data_only
QUERY=table1:"WHERE seq_num >100"

I want to use expdp but I am not sure how to ensure all tables get the WHERE seq_num >100 condition. If I leave "table1:" out and just have QUERY="WHERE seq_num >100", will this condition be applied to all tables, which is what we want?
I'm assuming I can also use impdp CONTENT=data_only?
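From my reading of the docs, a QUERY without a table prefix applies to every table in the job, so a sketch along these lines should do it (directory and dump file names are placeholders; note the dump file goes in DUMPFILE and the directory object in DIRECTORY):

TABLES=table1,table2
DIRECTORY=dump_dir
DUMPFILE=extract.dmp
CONTENT=data_only
# no "table1:" prefix, so the filter applies to all tables in the job
QUERY="WHERE seq_num > 100"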
Why does export/import require temporary tablespace? Since export/import behaves like DML, when does the Data Pump utility need temporary tablespace?
How do I map the student_newId field to the student_id field in the STUDENT DB table, so that the new id is inserted into the student_id column? How do I specify the mapping in the control file? I don't want to create a new column in the student table. In the control file I will specify the below. Is this the best approach? Do we have any other way?
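A minimal SQL*Loader sketch (the data file name, delimiter and second column are assumed): the control file's field list names target columns in the order the fields appear in the data, so the field carrying student_newId is simply listed under the target column name student_id:

LOAD DATA
INFILE 'students.dat'
APPEND
INTO TABLE student
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
  student_id,    -- the incoming student_newId field lands here
  student_name   -- hypothetical second field
)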
But the import is still running, even though it is not showing any rows being imported.
I already created the tablespace in which the table previously resided before being dropped, but when I check the space of that tablespace, it is not being consumed either. One error I got previously while performing this task is:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_03": cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
[code]....
I checked streams_pool_size; it showed zero, so I set it to 48M, and after that:
SQL> show parameter streams_pool_size;

NAME                TYPE        VALUE
------------------- ----------- -----
streams_pool_size   big integer 48M
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

OS version:
Windows 7 64-bit. I have a schema (scott) exported at schema level and imported with a different name (scott1). At regular intervals I need to import scott into scott1 without affecting existing records, such that: *1. Newly created records are appended.* *2. Updated records are appended.*
For the above requirement I did it the following way. Exported with:

expdp xxxx/******** schemas=SCOTT directory=dumpdir dumpfile=SCOTT_28-SEP-2012.dmp logfile=exp_SCOTT_28-SEP-2012.log

and imported with:

impdp xxxx/******** AS SYSDBA REMAP_SCHEMA=SCOTT:SCOTT1 directory=DUMPDIR dumpfile=SCOTT_28-SEP-2012.dmp logfile=imp_SCOTT2_28-09-2012.log TRANSFORM=SEGMENT_ATTRIBUTES:n TABLE_EXISTS_ACTION=APPEND
The problem is I couldn't append the records to the existing tables; the log shows errors such as:
ORA-31684: Object type USER:"SCOTT1" already exists Processing object type SCHEMA_EXPORT/SYSTEM_GRANT Processing object type SCHEMA_EXPORT/ROLE_GRANT Processing object type SCHEMA_EXPORT/DEFAULT_ROLE Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA [code].....