Server Utilities :: Using Expdp / Impdp To Migrate 4 TB Database From Solaris To Linux
Aug 4, 2011: I am using expdp/impdp to migrate a 4 TB database from Solaris to Linux, but the import process is taking forever.
We are planning to migrate an Oracle database from Linux 10gR2 (non-ASM) to Solaris (ASM) and then upgrade that database from 10gR2 to 11gR2 on Solaris.
For some days now I have been getting this error during a Data Pump export:
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31687: error creating worker process with worker id 1
ORA-31687: error creating worker process with worker id 1
ORA-31688: Worker process failed during startup.
The error is random; if I retry after a few minutes, the expdp works correctly.
First of all, I'm not a DBA. When I try to import a dmp file generated on AIX with Oracle 10g into an 11g database on a RHEL6 machine, I get a lot of issues. How should this be done?
I need to migrate a 500 GB 10.2 database from Windows to Linux. Of course I found Alejandro Vargas' paper on how to do that ("Database Migration From Windows to Linux Using RMAN"). My problem is that the source database will stay up for another 7 days, and I need to apply all changes made to it during that time to the migrated Linux database.
Can I just take all redo log files generated on the source database and apply them to the destination database? I might have a problem with (a) file conversion and (b) the redo logs not being in sync with the target database (different checkpoints?). I found this on an obscure website: "The contents of a redo log file depend on a combination of Oracle version, operating system and server architecture. In general redo logs written on one architecture cannot be read on another. There are a few exceptions to this rule. For example, in Oracle 10.2 a redo log written in Linux can be read by a Windows database."
So if this is true, does it also work the other way, and will the migration from Windows to Linux create a clone of the database to which those redo log files can actually be applied?
I need to migrate a database from Windows to Linux. The current size is ~50 GB.
Current Env:
OS = Windows 2003
DB version = 10.2.0.3
Proposed Env:
OS = Linux
DB version = 11.2.0.2
Would Data Pump be the correct choice for this migration? Also, do the steps below seem correct?
01. Pre-create tablespaces on target 11g database
02. Export full database of source 10g database
03. Copy dumpfile to Destination Server
04. Grant IMPORT_FULL_DATABASE system privilege to user SYSTEM of target 11g database
05. Import full database to target 11g database
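A minimal sketch of the commands behind steps 02-05, assuming a directory object named DP_DIR exists on both servers and points at a suitable path (the name and the passwords are placeholders). A full dump taken with 10.2 expdp can be imported directly by 11.2 impdp:
# on the 10.2.0.3 source (Windows)
expdp system/<password> FULL=Y DIRECTORY=DP_DIR DUMPFILE=full_src.dmp LOGFILE=full_exp.log
# copy full_src.dmp to the Linux server, then on the 11.2.0.2 target
impdp system/<password> FULL=Y DIRECTORY=DP_DIR DUMPFILE=full_src.dmp LOGFILE=full_imp.log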
I have one doubt about expdp and RMAN. Does expdp take its backup at the block level the way RMAN does? Which one is faster, expdp or RMAN?
View 16 Replies View RelatedI have a process to export a schema using expdp and import using impdp. Everything creates successfully except for a trigger. The trigger gives and error that the table or view does not exist. The account that I use to import the schema is different than the schema user but is a highly privileged account. I notice that the schema in the create or replace trigger line of code is remapped (I am using remapping in the impdp syntax) and the rest of the syntax of the trigger (which is just a sequence trigger for a primary key column) does not have the schema. In order to fix the issue, I have my bash script log into oracle as the schema user after the import of the schema and execute the trigger code. why do I have to do this for trigger code but not for other objects like views that create just fine.
I recently ran into a problem on an Oracle 11g R2 RAC database. Normally, when I export the sample user SCOTT, it takes hardly a minute, but in our RAC environment the export runs for 20 to 40 minutes.
Here is the output:
---------------------------------------------------------------
[oracle@rac2 dump]$ expdp system/sys123 directory=test_dir dumpfile=scott1.dmp schemas=scott
Export: Release 11.2.0.1.0 - Production on Mon Jan 23 09:30:26 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=test_dir dumpfile=scott1.dmp schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
[Code] .......
On another machine (where I configured RAC again on Linux), I got the same problem. I also can't find any good documents on Metalink. My host information:
OS : AIX 6.1
Storage : IBM (using ASM)
Database : Oracle 11g R2
After querying the view v$db_transportable_platform, I cannot see any entry for Solaris SPARC, only for Solaris x86. Can I migrate the database to Solaris SPARC? What should I do to perform the RMAN CONVERT?
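It may help to compare the two views: V$DB_TRANSPORTABLE_PLATFORM only lists targets with the same endian format as the source, which is why big-endian Solaris SPARC is missing, while V$TRANSPORTABLE_PLATFORM lists every platform RMAN CONVERT understands. A quick check:
SELECT platform_id, platform_name, endian_format FROM v$transportable_platform ORDER BY platform_name;
Crossing endian formats generally means converting individual tablespaces (transportable tablespaces with RMAN CONVERT) rather than transporting the whole database.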
Getting the error below while running impdp:
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema ADAS is needed to import this object, but is unaccessible
ORA-28031: maximum of 148 enabled roles exceeded
[code]...
ORA-06512: at "SYS.KUPW$WORKER", line 1342
ORA-06512: at line 2
Job "SYS"."SYS_IMPORT_FULL_01" stopped due to fatal error at 17:13:38
Grid version: 11.2.0.3
OS : Red Hat Enterprise Linux 5.4
A few months back in our RAC cluster, while taking an expdp backup to a locally formatted Linux filesystem, I got some errors. I don't quite remember the error code or the scenario now, as I had too much work that day. The issue was fixed only when we used an ACFS filesystem location as the directory object for expdp.
Today, in the same RAC cluster, I tried to reproduce that issue by taking an expdp backup to a local Linux-formatted filesystem ( /home/oracle/pumpDir ), and the expdp completed without any issues.
Are there any known issues with expdp/impdp in a RAC cluster environment caused by using a local Linux filesystem?
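One plausible cause (a guess, since the original error is not available) is that in 11.2 Data Pump can start worker processes on other RAC instances by default, and those workers cannot see a directory that exists only on the local node's filesystem. Restricting the job to the connected instance is a common workaround; a sketch with placeholder names:
expdp system/<password> DIRECTORY=local_dir DUMPFILE=scott.dmp SCHEMAS=scott CLUSTER=N
Whether the problem reproduces can then depend on which instance happens to run the workers, which would also explain why today's test completed cleanly.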
I would like to export specific tables (not the entire schema) including their metadata. I am using a parameter file for expdp.
Tables=emp,dept
Does this also include all metadata, or should I also add the INCLUDE below to the parfile?
INCLUDE =Indexes,Sequences,Procedures,Views
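A sketch of a table-mode parameter file (directory and file names are placeholders): in table mode, objects that belong to the listed tables, such as indexes, constraints, triggers and grants, are exported by default, so no INCLUDE is needed for them. Sequences, views and procedures are separate schema-level objects and, as far as I know, cannot be pulled in with INCLUDE in table mode; they would need a schema-mode export.
DIRECTORY=dp_dir
DUMPFILE=emp_dept.dmp
LOGFILE=emp_dept.log
TABLES=emp,dept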
We had AIX on a 570 machine and database 10.2.0.4. The nightly expdp took about 2 hours to complete.
Now we have upgraded to 10.2.0.5 on a 770 machine, and the same command takes 6 hours to complete, even though both the database and the hardware have been upgraded.
The command is:
expdp T24SILK/oracle directory=backup dumpfile=exp_beod_T24_%U_$dt.dmp logfile=exp_T24_$dt.log EXCLUDE=TABLE:"LIKE '%TRACE'" parallel=6
I am trying to export a schema using the expdp command, but it hangs after a few minutes; it seems to get stuck somewhere. Even when I try with the normal SCOTT schema it hangs.
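One way to see where it is stuck (a sketch; the job name is a placeholder, the real one is printed when expdp starts and is listed in DBA_DATAPUMP_JOBS) is to attach to the job from another session and ask for its status:
expdp system/<password> ATTACH=SYS_EXPORT_SCHEMA_01
Export> STATUS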
I export a table using the exp utility and it takes 30 minutes to complete. The same export done with expdp takes 10 minutes.
How does that happen?
While trying to run expdp with QUERY logic, I get the syntax-related errors shown below:
expdp system/xxxx SCHEMAS=LOG NETWORK_LINK=DBLINK1 INCLUDE=TABLE:"IN('DAILY_LOG')" QUERY=LOG.DAILY_LOG:"where entry_date< to_char(sysdate -1,'yyyymmdd')" DIRECTORY=dump DUMPFILE=log_exp.dmp logfile=log_exp.log
It gives the following error:
ORA-31693: Table data object "LOG"."DAILY_LOG" failed to load/unload and is being skipped due to error:
ORA-00904: "YYYYMMDD": invalid identifier
I tried the same predicate in plain SQL with 'yyyymmdd' and it works fine; entry_date is a CHAR field. Where am I going wrong in the QUERY clause?
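The usual explanation is that the operating-system shell strips the single quotes around 'yyyymmdd' on the command line, so the database sees yyyymmdd as an identifier, hence ORA-00904. Moving the QUERY clause into a parameter file avoids the shell quoting entirely; a sketch using the same names as above:
SCHEMAS=LOG
NETWORK_LINK=DBLINK1
INCLUDE=TABLE:"IN ('DAILY_LOG')"
QUERY=LOG.DAILY_LOG:"where entry_date < to_char(sysdate - 1, 'yyyymmdd')"
DIRECTORY=dump
DUMPFILE=log_exp.dmp
LOGFILE=log_exp.log
Then run: expdp system/xxxx parfile=log_exp.par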
When I export a schema that owns several jobs (visible in DBA_JOBS) using expdp and then import it into another database, the jobs change ownership to the account I used for the import.
For example:
the log_user, priv_user and schema_user used to be schema 'A',
but after the impdp (assuming I imported as SYSTEM), the log_user, priv_user and schema_user are now SYSTEM!
Is this expected behavior? How can I avoid it? The job will fail when executing as SYSTEM, and I need to manually recreate it as user A again.
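Two common workarounds are to run the impdp connected as user A rather than SYSTEM, or to re-submit the job as A afterwards, since DBMS_JOB jobs belong to the session user that submits them. A sketch of the second option (the WHAT text and interval are placeholders for the real job definition):
CONNECT a/<password>
VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(:jobno, what => 'my_package.my_proc;', next_date => SYSDATE, interval => 'SYSDATE + 1');
  COMMIT;
END;
/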
I have a question regarding user creation during export and import.
Will the user get created, along with roles and privileges, by default when using the impdp command?
I did an export using the parfile below. I want to import all the objects associated with this schema into another DB, but I don't want to overwrite any of the permissions, such as grants.
Is there a way to get the grants into a SQL file before I do the import? If so, provide an example.
cat exp_par
DUMPFILE=exp.dmp
LOGFILE=exp.log
DIRECTORY=DBBACKUP
schemas=t1
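A sketch using the directory, dump file and schema from the parameter file above: the first impdp writes the grant DDL to a script without executing anything, and the second imports the schema while skipping grants so existing permissions are not touched.
impdp system/<password> DIRECTORY=DBBACKUP DUMPFILE=exp.dmp SQLFILE=grants.sql INCLUDE=OBJECT_GRANT
impdp system/<password> DIRECTORY=DBBACKUP DUMPFILE=exp.dmp SCHEMAS=t1 EXCLUDE=GRANT LOGFILE=imp_t1.log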
I'm curious to know if expdp or impdp is able to change object names during the process. What I mean by this is... can I export out procedures:
procedure1
procedure2
procedure3
Then import them like this:
test_procedure1
test_procedure2
test_procedure3
I'm not sure whether expdp or impdp has that ability, but I could have missed it. I know how to remap a schema, but that only changes the schema name.
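As far as I know Data Pump has REMAP_TABLE for renaming tables (11g onwards) but nothing that renames procedures on the fly. One workaround sketch (placeholder names) is to extract the procedure DDL to a script, edit the names, and run the script instead of letting impdp create the objects:
impdp system/<password> DIRECTORY=dp_dir DUMPFILE=exp.dmp INCLUDE=PROCEDURE SQLFILE=procs.sql
# edit procs.sql to rename procedure1 to test_procedure1, etc., then run it in SQL*Plus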
I want to exclude only the data of some particular tables, not the complete table objects, when exporting with expdp.
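One sketch, assuming the goal is a schema export where a few named tables come across empty (schema, directory and table names are placeholders): a QUERY clause that matches no rows keeps the table definitions but ships zero rows for them.
SCHEMAS=scott
DIRECTORY=dp_dir
DUMPFILE=scott_nodata.dmp
QUERY=scott.big_audit_tab:"WHERE 1=0"
QUERY=scott.big_log_tab:"WHERE 1=0"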
I have a server configured for German and English. When I connect with SQL*Plus I get German server output, but running "alter session set nls_language='AMERICAN'" solves the issue for me.
I need the same for the expdp command, but I don't know how to do it. I tried adding an nls_language parameter, but expdp doesn't recognize it. Is it possible to get the expdp output, and its log file, written in English?
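The usual suggestion is to set the client-side NLS_LANG environment variable before starting expdp rather than passing a parameter; the Data Pump session then reports in that language. A sketch for a Unix-style shell (the character-set part should match your client setup, and the directory and schema names are placeholders):
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
expdp system/<password> DIRECTORY=dp_dir DUMPFILE=exp.dmp SCHEMAS=scott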
I succeeded in running expdp to an ASM diskgroup, like this:
create directory asmexpdir as '+RECO/FILTDB/EXPDP';
grant read,write on directory asmexpdir to oraasfs;
expdp oraasfs/oraasfs2301 directory=asmexpdir dumpfile=SBSR_EXP.dmp tables=TM_SFS_CUST_01 logfile=EXPDP_LOG:SBSR_EXP.log
SUCCESS MESSAGE
. . exported "ORAASFS"."TM_SFS_CUST_01" 387.2 MB 817684 rows
Master table "ORAASFS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ORAASFS.SYS_EXPORT_TABLE_01 is:
+RECO/filtdb/expdp/sbsr_exp.dmp
Job "ORAASFS"."SYS_EXPORT_TABLE_01" successfully completed at 03:34:59
I would like to run this daily and delete the dumps after 14 days, but the script below gives an error. What can be the solution to make this script work?
#!/bin/bash
#Script to Perform Datapump Export backup Every Day
################################################################
#Change History
[code]...
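A sketch of such a script, with placeholder paths, SID and credentials, assuming the dumps go to a normal filesystem directory: if the dumps stay inside ASM (+RECO), the find/rm cleanup below cannot see them and the old files would have to be removed with asmcmd instead.
#!/bin/bash
# daily Data Pump export plus 14-day cleanup (sketch)
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=FILTDB
export PATH=$ORACLE_HOME/bin:$PATH
dt=$(date +%Y%m%d)
dump_dir=/u01/exports            # must match the EXP_DIR directory object
expdp oraasfs/<password> directory=EXP_DIR dumpfile=SBSR_EXP_${dt}.dmp logfile=SBSR_EXP_${dt}.log tables=TM_SFS_CUST_01
# remove dumps and logs older than 14 days
find "$dump_dir" -name 'SBSR_EXP_*' -mtime +14 -type f -exec rm -f {} \;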
How can I migrate a table from a 10g XE database to an Oracle 9i database? Both databases are standalone and we cannot create a DB link.
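A sketch under the assumption that a 9i client can reach the 10g XE database over SQL*Net (no DB link is needed for this): Data Pump dump files cannot be imported into 9i, so the original exp/imp pair, run from the lower (9i) release, is the usual route. TNS aliases, the table name and the credentials are placeholders.
exp scott/<password>@xe10g TABLES=my_table FILE=my_table.dmp LOG=exp_my_table.log
imp scott/<password>@db9i TABLES=my_table FILE=my_table.dmp LOG=imp_my_table.log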
I am trying to use the Data Pump tools to migrate a 10g DB to 11g. Everything works fine except for the "nameless" check constraints.
In my present environment, Oracle runs on Solaris 10 and I am planning to restore it to Linux 5. I have read in the Oracle docs and on Metalink that a cross-platform restore between different endian formats can be done with transportable tablespaces (TTS), but it says the tablespaces have to stay read-only until they are plugged into the destination server.
Also, my DB size will be 800-1000 GB.
What are the reported issues when migrating Oracle 11g from Solaris to Linux?
I'm trying to do a network-mode Data Pump import between Oracle databases, and it seems to hang when it gets to the point where it should be processing table data.
C:>impdp DP_USER/DP_USER parfile=sde_webmap_2.par
Import: Release 11.1.0.7.0 - 64bit Production on Wednesday, 26 May, 2010 17:42:03
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "DP_USER"."SYS_IMPORT_FULL_01": DP_USER/******** parfile=sde_webmap_2.par
Estimate in progress using BLOCKS method...
[code]...
It just sits at this point indefinitely. The parfile, for those interested:
directory=datapumps
logfile=sde_webmap_2.log
network_link=backup
full=y
INCLUDE=SCHEMA:"IN ('SDE_WEBMAP_BUSINESS','SDE_WEBMAP_BUSINESS_A','SDE_WEBMAP_BUSINESS_B')"
And the results from V$SESSION_LONGOPS
69 SYS_IMPORT_FULL_01 IMPORT 0 1031 MB 5/26/2010 5:50:37 PM 5/26/2010 6:03:29 PM
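A sketch of a check to see what the Data Pump sessions are actually waiting on while the job appears to hang (run on the importing database):
SELECT s.sid, s.serial#, s.event, s.seconds_in_wait, d.job_name
FROM dba_datapump_sessions d JOIN v$session s ON s.saddr = d.saddr
ORDER BY s.sid;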
As we know, the original imp utility has an indexes-related parameter that is used to generate the CREATE INDEX DDL. Is there a comparable parameter in impdp?
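The closest equivalent I know of is impdp's SQLFILE parameter: combined with INCLUDE=INDEX it writes the CREATE INDEX statements to a script instead of executing them, much like imp's INDEXFILE. A sketch with placeholder names:
impdp system/<password> DIRECTORY=dp_dir DUMPFILE=exp.dmp INCLUDE=INDEX SQLFILE=create_indexes.sql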