I ran into a problem recently with an Oracle 11g R2 RAC database. Normally, when I export the sample user 'SCOTT', it takes barely a minute. But in our RAC environment the same export takes 20 to 40 minutes.
Here is the output:
---------------------------------------------------------------
[oracle@rac2 dump]$ expdp system/sys123 directory=test_dir dumpfile=scott1.dmp schemas=scott
Export: Release 11.2.0.1.0 - Production on Mon Jan 23 09:30:26 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** directory=test_dir dumpfile=scott1.dmp schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
[Code] .......
On another machine (where I configured RAC again, on Linux), I got the same problem. I also cannot find any useful documents on Metalink. My host information:
OS: AIX 6.1
Storage: IBM (using ASM)
Database: Oracle 11g R2
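For what it's worth, one workaround I am considering, since the same export is fast on a single instance, is pinning the Data Pump job to the instance I am connected to (the CLUSTER parameter exists in 11.2; the directory, dump file and schema below are simply the ones from my test above):

# run the job only on the local instance instead of spreading workers across the cluster
expdp system/sys123 directory=test_dir dumpfile=scott1.dmp schemas=scott cluster=n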
Import: Release 11.2.0.1.0 - Production on Fri Feb 10 09:49:50 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "NTEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "NTEST"."SYS_IMPORT_FULL_01": ntest/******** directory=test_dir dumpfile=JBLLIVE.31Jan2012.11.50AM.dmp remap_schema=JBLLIVE:NTEST logfile=ntest_10feb.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NTEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE

In this situation I observed the worker status and saw that some tables and some LOB objects, including LOB indexes, were being imported. The worker process does this in the background, but it does not show up in the import log file (I don't understand why it is not shown there). It imports one table, one LOB, one LOB index, then again one table, one LOB, one LOB index, and so on.
My observation is that it first inserts data into the LOB segments and only then into the ordinary table, and only when it starts inserting into the ordinary table does that table appear in the import log file.
First of all, I'm not a DBA. When I try to import a dmp file that was generated on AIX with Oracle 10g into an 11g database on a RHEL6 machine, I get a lot of issues. How should I go about this?
I have a process to export a schema using expdp and import it using impdp. Everything is created successfully except for one trigger, which fails with an error that the table or view does not exist. The account I use for the import is different from the schema user, but it is a highly privileged account. I notice that the schema in the CREATE OR REPLACE TRIGGER line is remapped (I am using REMAP_SCHEMA in the impdp syntax), while the rest of the trigger body (it is just a sequence trigger for a primary key column) does not carry the schema. To work around it, my bash script logs into Oracle as the schema user after the import and re-executes the trigger code. Why do I have to do this for trigger code but not for other objects, such as views, which are created just fine?
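For illustration, a sketch of the kind of trigger I mean; the table, sequence and column names are made up, only the structure matches my real one:

CREATE OR REPLACE TRIGGER targetschema.orders_bi   -- this schema name is remapped by impdp
BEFORE INSERT ON orders                            -- unqualified table name
FOR EACH ROW
BEGIN
  -- unqualified sequence reference; resolves in whatever schema runs the CREATE
  SELECT orders_seq.NEXTVAL INTO :new.id FROM dual;
END;
/

My assumption is that, because I run the import as a different privileged user, the unqualified names inside the body resolve against the wrong schema at creation time, but I would like to confirm that.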
Data Pump export is very slow. A 50 GB export has taken more than 24 hours, with the one error below:
Database version: 11.2.0.2.0. OS: Windows Server 2008 R2. I increased RAM by 10 GB and CPUs from 6 to 8, but the issue remains the same.
Error:
ORA-31693: Table data object "BNCSDB"."MS_DATA_PTORE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 20 with name "_SYSSMU20_4037596720$" too small
Export log:
Export: Release 11.2.0.2.0 - Production on Tue May 14 20:03:25 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;; Connected to: Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/********@orcl dumpfile=BCSDB04_19.dmp logfile=BCSDB04_19.log
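For reference, ORA-01555 during a run this long is often tied to undo retention being shorter than the export window; this is the check and the kind of change I have in mind (the 86400-second value is only an illustration, not a recommendation for this system):

-- current undo retention in seconds
SHOW PARAMETER undo_retention
-- raise it to cover the export window (value is illustrative)
ALTER SYSTEM SET undo_retention = 86400;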
Grid version: 11.2.0.3. OS: Red Hat Enterprise Linux 5.4
A few months back, in our RAC cluster, I got some errors while taking an expdp backup to a locally formatted Linux filesystem. I don't quite remember the error code or the exact scenario now, as I had too much work that day. The issue was fixed only when we used an ACFS filesystem location as the directory object for expdp.
Today, in the same RAC cluster, I tried to reproduce that issue by taking an expdp backup to a local Linux filesystem ( /home/oracle/pumpDir ), and the expdp completed without any problems.
Can problems occur with expdp/impdp in a RAC cluster environment simply because a local Linux filesystem is used?
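For reference, these are the two directory definitions I have been comparing; the ACFS mount point is an example name, not the real path:

-- local ext3 path, only visible from the node I am logged into
CREATE OR REPLACE DIRECTORY pump_dir AS '/home/oracle/pumpDir';
-- ACFS path, visible from every node in the cluster
CREATE OR REPLACE DIRECTORY pump_dir_acfs AS '/u02/acfs_mount/pumpDir';

My working assumption is that a failure would only show up when Data Pump workers start on a different instance, where the local path does not exist, which might also explain why today's single test went through cleanly.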
I've been using datapump for a long time now but I have not come across this problem before.
Importing just two tables:
Table1: data = 100 MB = 11 million rows
Table2: data = 4.2 GB = 19.6 million rows
Table1 ran for approx. 5 hours; Table2 ran for approx. 15 hours.
If I run the impdp importing both tables from the same par file, the default tablespace of the user the import runs as runs out of space with ORA-01691: unable to extend lob segment <owner>.SYS_LOG0001175799C00045$$ by 512 in tablespace USERS. I do not understand why it is creating objects there in order to import tables into someone else's schema.
The environment is Red Hat Linux 4.1.2-51 running Oracle 11gR2 (11.2.0.1). This is a 9-node RAC using ASM.
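In case it helps frame the question, these are the check and the stop-gap I have been looking at; the '+DATA' diskgroup name and the 4G size are placeholders for my environment:

-- how much room is left in USERS
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'USERS'
GROUP BY tablespace_name;
-- give USERS more room so the import can finish (diskgroup and size are placeholders)
ALTER TABLESPACE users ADD DATAFILE '+DATA' SIZE 4G;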
Environment: Oracle RDBMS 10g R2. DB OS: HP Itanium
We use Oracle EBS R12.1.2 in our company, and one of the analysts reported a performance problem when saving the configuration in the Pricing module. The common fix is to gather statistics on the BOM_EXPLOSIONS table. Recently, when the issue occurred, I collected statistics on the table, but performance didn't improve. I went ahead and decided to trace the Oracle Forms session using the profile 'Initialization SQL Statement - Custom' at user level.
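For reference, this is roughly how I gathered the statistics; I am assuming the table is owned by BOM, as is usual in EBS, so the owner may need adjusting:

BEGIN
  -- cascade => TRUE also refreshes the index statistics on the table
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'BOM', tabname => 'BOM_EXPLOSIONS', cascade => TRUE);
END;
/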
I also monitored the session in OEM 10g Grid Control. The analyst performed the same set of steps and the performance was normal and acceptable. The analyst tried again and performance still matched expectations. I then cleared the trace profile and the analyst tried again; this time performance was as bad as the original issue. The problem resolved itself later in the day. This has made me curious, so I thought I would discuss it here.
I have had a similar experience with 10g and 11g: when I enable tracing the issue cannot be reproduced, and when tracing is off the issue pops back up.
My database is in NOARCHIVELOG mode. I took a whole database backup (cold). Then, just half an hour later, I ran the following script:
RMAN> RUN {
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN;
}
Starting restore at 24-FEB-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=133 device type=DISK
channel ORA_DISK_1: starting datafile backup set restore
[code]...
Starting recover at 24-FEB-12 using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/oradata/PROD/redo02.log
archived log file name=/u01/app/oracle/oradata/PROD/redo02.log thread=1 sequence=2
media recovery complete, elapsed time: 00:00:01
[code]...
Why do I need to specify an option in the first place? As my redo is intact, this is not incomplete recovery, and I do not want to generate a new incarnation of my database. Why is Oracle not simply opening my database?
I am trying to export a table using Data Pump in Oracle 10g. The expdp takes 5 hours, so I want to use the PARALLEL keyword in expdp. My question is: how do I know how many parallel processes I can use?
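For reference, this is the syntax I am considering; the table and directory names are placeholders, degree 4 is only an example, and the guidance I have read is to keep the degree around the number of CPUs and to supply at least as many dump files (the %U wildcard takes care of that):

expdp system/****** directory=dp_dir dumpfile=mytab_%U.dmp logfile=mytab.log tables=owner.mytab parallel=4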
I have a client database on a remote server, and I take a daily export dump of it. Now I have created a new Oracle 11g XE database on my local machine and want to import that dump into the XE database.
I will FTP the dump file to my local machine, no issues with that. I just want to know whether the import into the XE database will be successful. I mean, can we import data into an XE database?
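For reference, the command I intend to run on the XE side; directory, file and schema names are placeholders:

impdp system/****** directory=dp_dir dumpfile=client_daily.dmp logfile=client_imp.log schemas=client_schema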
I have been trying desperately for a few days to import an Oracle 10.2.0.5 database from a Red Hat Enterprise Linux Server release 4.1 box onto my Windows Server 2008 R2 x64 machine running 11.2.0.1. The expdp export is performed without errors with the following options:
DUMPFILE = expinclude.dmp DIRECTORY = exp_dir LOGFILE = expinclude.log TABLES = (list of tables with partitions) ESTIMATE = STATISTICS
When I try to import my dump, I get a multitude of errors like this:
ORA-39083: Object type TABLE:"VL_DATA"."LOG_VOUCHERS_MESSAGES" failed to create with error:
ORA-02219: invalid NEXT storage option value
Failing sql is:
After scouring the forums on the net, I have found very little info on this error (besides the error code itself, which turns up on a thousand sites). I tried multiple combinations for the import: excluding the indexes, importing only the structure, importing only the data, etc., without success.
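One thing I am considering trying, since the failing SQL involves a storage clause, is having impdp strip the segment and storage attributes from the generated DDL; TRANSFORM is a documented impdp option, but whether it avoids ORA-02219 in this case is an assumption on my part:

impdp system/****** directory=exp_dir dumpfile=expinclude.dmp logfile=impinclude.log transform=segment_attributes:n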
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Is this error related to permissions at the OS level (Windows 7 in my case)? I manually created the folder 'DATA_PUMP_DIR' in the specified directory path. Although the directory I created (DATA_PUMP_DIR) shows as read-only in the General tab of its Properties, I am able to create files under it.
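For reference, this is the database-side setup I believe is needed alongside the OS folder; the path and the grantee are placeholders for mine:

CREATE OR REPLACE DIRECTORY data_pump_dir AS 'C:\oracle\dpdump';
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO scott;  -- grantee is a placeholder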
I am trying to do an impdp over a network link and it fails with ORA-31626: job does not exist. It worked with a different database on the same server. The network link is there, the Data Pump directory exists, and the read and write privileges are granted to the Oracle user. There are no other Data Pump jobs running:
SQL> select JOB_NAME,STATE from DBA_DATAPUMP_JOBS;
no rows selected
My database details:
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-00942: table or view does not exist
ORA-00942: table or view does not exist
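For completeness, this is the shape of the command that fails (the link, directory and schema names are placeholders for the real ones), plus a privilege check I have seen suggested for ORA-31626, since the job has to create its master table in the importing user's own schema:

impdp myuser/****** network_link=src_db_link directory=dp_dir schemas=src_schema logfile=netimp.log
-- run as the importing user to confirm the master table can be created
SELECT * FROM session_privs WHERE privilege = 'CREATE TABLE';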
After upgrading 11gR1 database (11.1.0.7.0) to 11gR2 (11.2.0.3.0), the datapump exports have been taking quite a bit longer. When database was 11gR1, a full expdp took approx. 40-45 minutes. After upgrade, it takes approx. 1 hour 40-50 minutes. These times were with parallel=4. I tried with parallel=8 and parallel=12, both of these took around 1 hour 5-10 minutes, better but still quite a bit slower than pre-11gR2 upgrade. I tried with exclude=statistics, index_statistics, indexes; it still took approx. 1 hour 40-45 minutes. This is a PeopleSoft database so there are many, many objects to be exported. The database was upgraded using dbua.
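One thing I have not tried yet, and which is often suggested after an upgrade when the slowdown is in the metadata phase, is refreshing dictionary and fixed-object statistics; whether it applies to my case is an assumption on my part:

EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;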
I am trying to export a schema using the expdp command, but it hangs after a few minutes; it seems to get stuck somewhere. Even when I try with the plain SCOTT schema, it also hangs.
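For reference, this is how I have been checking what the job is doing while it appears hung; the job name in the attach is whatever the first query reports:

SELECT owner_name, job_name, state FROM dba_datapump_jobs;
-- then, from another session, attach to the job to see the worker status
expdp system/****** attach=SYS_EXPORT_SCHEMA_01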
I export a table using the exp utility and it takes 30 minutes to complete. The same export done with the expdp utility takes 10 minutes.
But it gives the following error:
ORA-31693: Table data object "LOG"."DAILY_LOG" failed to load/unload and is being skipped due to error:
ORA-00904: "YYYYMMDD": invalid identifier
I tried the same condition in plain SQL with YYYYMMDD and it works fine; entry_date is a CHAR field. Where am I going wrong in the QUERY clause?
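In case it matters, this is the kind of QUERY I am using, moved into a parameter file so the operating-system shell cannot strip the quotes; the WHERE condition is close to mine but still only a placeholder. Contents of daily_log.par (the file name is also a placeholder):

tables=LOG.DAILY_LOG
query=LOG.DAILY_LOG:"WHERE entry_date >= to_char(sysdate-1,'YYYYMMDD')"

and it is invoked as:

expdp system/****** directory=dp_dir parfile=daily_log.par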
I am trying to import data from a dmp file created using expdp, running on Oracle 11g Express, and I am getting the following error; I have tried to fix it but could not succeed. The tables exist in the POI schema and I am trying to import them into the GHI schema. The dmp file was created from the POI schema with two tables, 'REL20_AU_POI' and 'ARCHIVE_POI', and these tables do not exist in the GHI schema.
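For reference, the command I am attempting looks roughly like this; the directory and dump file names are placeholders, while the schema and table names are the real ones:

impdp system/****** directory=dp_dir dumpfile=poi_tables.dmp logfile=poi_imp.log tables=POI.REL20_AU_POI,POI.ARCHIVE_POI remap_schema=POI:GHI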
My database is around 2900 GB, on AIX 6.1, database version 10.2.0.3. Every day I need to take an expdp dump backup of a single table which is only 57 MB in size. It takes around 55 minutes to complete.
I have noticed that when the backup starts, in the first phase it scans the tables (we have 330,000 tables), and only then does the actual backup begin. My questions are:
1. How can I make my dump backup faster?
2. Is there any way to skip the table scan?
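For reference, this is the kind of command I am running, plus the narrowing options I am thinking of; the table and directory names are placeholders, and whether EXCLUDE=STATISTICS makes any difference here is an assumption on my part:

expdp system/****** directory=dp_dir dumpfile=single_tab.dmp logfile=single_tab.log tables=owner.small_table exclude=statistics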
I have been asked to migrate from Oracle 10.2 to 11.2 on Red Hat. In 10g, the database has around 50 users, including default users like SCOTT, XDB, etc. 1. Should I skip those users when I import? The schema MAHM05 owns the tables; nobody has direct access privileges on this schema.
All the users access it through synonyms. 2. Which schema should I import first? 3. What things do I need to check?
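For reference, this is the shape of the export I have in mind; every schema name except MAHM05 is a placeholder for the application accounts, and the default accounts are simply left off the list:

expdp system/****** directory=dp_dir dumpfile=mig.dmp logfile=mig_exp.log schemas=MAHM05,APPUSER1,APPUSER2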
I can see that I can use REUSE_DATAFILES and TABLE_EXISTS_ACTION to overwrite tables by default, but is there a recognised way of replacing an entire database with impdp? Do I just create the instance (with the init file) and not build/initialise the tables, or what? I'll experiment, but I'm interested in whether there is a DBA best practice for this sort of thing.
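For reference, the kind of command I have in mind for the wholesale replace; the directory and file names are placeholders, and whether this is the accepted practice is exactly what I am asking:

impdp system/****** full=y directory=dp_dir dumpfile=full_db.dmp logfile=full_imp.log table_exists_action=replace reuse_datafiles=y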