Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
ORA-39083: Object type PROCACT_SCHEMA failed to create with error:
ORA-31625: Schema ADAS is needed to import this object, but is unaccessible
ORA-28031: maximum of 148 enabled roles exceeded
[code]...
ORA-06512: at "SYS.KUPW$WORKER", line 1342
ORA-06512: at line 2
Job "SYS"."SYS_IMPORT_FULL_01" stopped due to fatal error at 17:13:38
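For context on the ORA-28031 above, one quick way to see how many roles a grantee has is to count the grants in dba_role_privs; a minimal sketch, run as a DBA (note that nested role grants also count toward the limit):

[code]
-- Count direct role grants per grantee; ORA-28031 fires when the number of
-- roles enabled in a session exceeds the instance limit (148 here).
SELECT grantee, COUNT(*) AS direct_role_grants
FROM   dba_role_privs
GROUP  BY grantee
ORDER  BY direct_role_grants DESC;
[/code]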
I exported three databases, servicedesk, report and mostcmdb, on server A to a dump file, and moved them to a similar dump location on server B. Now I want to import them into B by using:
The result:
;;; Import: Release 10.2.0.4.0 - Production on Thursday, 15 September, 2011 17:47:16
Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;; Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "DATAPUMP"."SYS_IMPORT_SCHEMA_29" successfully loaded/unloaded
But I am getting the error below for one or two tables, and if I import those tables separately they import successfully. I do not get this error every time.
ORA-31693: Table data object "PROD"."DAS_ID_GENERATOR" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (PROD.DAS_ID_GENERATOR_P) violated
ORA-31693: Table data object "PROD"."TKT_DIST_SRV_STAT" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (PROD.SERVER_STATS_P) violated
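For reference, ORA-31693 with ORA-00001 during an import means the rows being loaded collide with rows already in the target table; what happens to pre-existing tables is controlled by impdp's TABLE_EXISTS_ACTION parameter. A hedged sketch for reloading just those two tables (credentials, directory and dump file names are placeholders):

[code]
# table_exists_action=TRUNCATE clears the existing rows before the load;
# the default is SKIP (or APPEND when CONTENT=DATA_ONLY, which is what
# raises ORA-00001 on duplicate keys).
impdp system/password directory=dp_dir dumpfile=prod_exp.dmp \
      tables=PROD.DAS_ID_GENERATOR,PROD.TKT_DIST_SRV_STAT \
      table_exists_action=TRUNCATE logfile=reload_two_tabs.log
[/code]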
I am having an issue with impdp and Oracle virtual columns. I have the following table with a virtual column defined as NOT NULL. expdp runs fine without any issue.
DDL:
------
CREATE TABLE alert_hist (
  alertky         INTEGER NOT NULL,
  alertcreatedttm TIMESTAMP(6) DEFAULT systimestamp NOT NULL,
  alertcreatedt   DATE GENERATED ALWAYS AS (To_date(Trunc("alertcreatedttm"))) VIRTUAL NOT NULL
);
When I do the import (impdp), it fails with the following error.
. . imported "TESTSCHEMA"."VALART"    359.1 KB    4536 rows
ORA-31693: Table data object "TESTSCHEMA"."ALERT_HIST" failed to load/unload and is being skipped due to error:
ORA-39097: Data Pump job encountered unexpected error -1
After that I dropped the NOT NULL virtual column and recreated it as nullable.
DDL:
-----
alter table alert_hist drop column alertcreatedt;
alter table alert_hist add alertcreatedt DATE GENERATED ALWAYS AS (To_date(Trunc("alertcreatedttm"))) VIRTUAL;
After that I ran expdp and impdp again, and it went through without any issue.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39071: Value for EXCLUDE is badly formed.
ORA-00920: invalid relational operator
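For reference, ORA-39071 together with ORA-00920 usually means the EXCLUDE name clause reached the server with its quotes mangled (typically by operating-system shell escaping). A hedged sketch of a well-formed EXCLUDE placed inside a parfile, where no shell escaping is needed (object names are placeholders):

[code]
# fragment of a parfile: the name clause keeps its quotes intact here
DIRECTORY=dp_dir
DUMPFILE=expdat.dmp
EXCLUDE=TABLE:"IN ('TEMP_LOAD','SCRATCH_TAB')"
EXCLUDE=STATISTICS
[/code]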
We have the following requirement for creating a materialized view.
In database "DB1", a table "abc" exists in the "user1" schema. In database "DB2", from "user2", we access "abc" through DB link "DBL1" to DB1: select * from user1.abc@DBL1; and we have created a public synonym "sabc" for "user1.abc@DBL1".
In database "DB3", we need to create a materialized view that accesses "abc" in DB1 through DB2. We are not allowed to access "DB1" directly from "DB3", so we created a DB link "DBL2" in "DB3" pointing to "DB2".
When I try to create the materialized view as below: create materialized view abc_mv as select * from sabc@DBL2;
I get the error "synonym translation invalid". When I run the SQL statement on its own, select * from sabc@DBL2, it works, but in the materialized view it throws that error. I even tried qualifying with the schema name, like "user2.sabc@DBL2", but that throws the same error.
Is there any option available for creating this type of materialized view?
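To restate the object chain from the question as DDL (the object names are the ones given above; the password and connect string are placeholders):

[code]
-- On DB2 (as a DBA or as user2): the synonym over the remote table in DB1
CREATE PUBLIC SYNONYM sabc FOR user1.abc@DBL1;

-- On DB3: a link to DB2, and the materialized view selecting through the synonym
CREATE DATABASE LINK DBL2 CONNECT TO user2 IDENTIFIED BY password USING 'DB2';
CREATE MATERIALIZED VIEW abc_mv AS
  SELECT * FROM sabc@DBL2;
[/code]

The failing statement is the last CREATE MATERIALIZED VIEW; the plain SELECT through the same synonym chain works.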
Enter user-name: sys as sysdba
Enter password:
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
SVR4 Error: 2: No such file or directory
I have a daily hot backup using the expdp command on Oracle 10g R2 installed on a Linux server, and I'm trying to move the resulting dump file to a directory on a Windows Server 2003 machine over the network, using an FTP script that runs automatically after the export process finishes.
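A minimal shell sketch of that export-then-transfer flow, assuming the Data Pump directory object maps to /bak; host names, credentials and schema names are placeholders:

[code]
#!/bin/sh
# Nightly export, then push the dump file to the Windows server over FTP.
DMP=daily_$(date +%Y%m%d).dmp

expdp system/password directory=dp_dir dumpfile=$DMP schemas=APP \
      logfile=daily_exp.log

ftp -n winserver <<EOF
user ftpuser ftppass
binary
lcd /bak
put $DMP
bye
EOF
[/code]

Binary mode matters here; an ASCII-mode transfer corrupts the dump file.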
When I export a schema that owns several DBA_JOBS jobs using expdp and then import it into another database, the jobs change ownership to the schema I used for the import.
For example: the log_user, priv_user and schema_user used to be schema 'A', but after the impdp, assuming I imported as SYSTEM, the log_user, priv_user and schema_user are now SYSTEM!
Is this expected behavior? How can I avoid it? The job will fail when executing as SYSTEM, and I need to manually recreate it as user A.
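For reference, these are the ownership columns being described; a quick check on the target after the import (run as a DBA):

[code]
-- Compare job ownership before and after the import
SELECT job, log_user, priv_user, schema_user, what
FROM   dba_jobs
ORDER  BY job;
[/code]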
I did an export using the following parfile (see below). I want to import all the objects associated with this schema into another DB, but I don't want to overwrite any of the permissions, such as grants.
Is there a way I can get the grants into a SQL file before I do the import? If so, provide an example.
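One approach often used for this is impdp's SQLFILE mode, which writes DDL to a file without importing anything; a hedged sketch (directory object and dump file names are placeholders):

[code]
# Writes only the grant statements from the dump into grants_only.sql;
# nothing is actually imported when SQLFILE is specified.
impdp system/password directory=dp_dir dumpfile=schema_exp.dmp \
      sqlfile=grants_only.sql include=GRANT
[/code]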
I'm going to import a single-instance database on a filesystem into a RAC database on ASM; both are on the same server, running Oracle 11.2.0.3. So I was wondering: can I use the network link mode of impdp without setting up a listener on the 10 Gb Ethernet interface?
I don't want to incur the overhead of the TCP network layer, because both the single-instance DB and the RAC DB are on the same server.
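For reference, network-mode import is driven by a database link on the target that points back at the source, so the connection path is whatever that link's connect string resolves to; a minimal sketch with placeholder names:

[code]
-- On the target (RAC/ASM) database:
CREATE DATABASE LINK src_link
  CONNECT TO dp_user IDENTIFIED BY dp_pass
  USING 'SRCDB';   -- connect descriptor / TNS alias for the source instance

-- The import then pulls directly over that link, with no dump file:
--   impdp dp_user/dp_pass network_link=src_link schemas=APP directory=dp_dir logfile=net_imp.log
[/code]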
I want to create two or three schemas on my production server that are exact copies of schemas on another, second production server. I access this second server through a VPN connection using Toad 9.0.1, and I access my production server through VNC viewer and its database through Toad.
How could I create the schemas on my first production server from the second server?
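A hedged sketch of the usual dump-and-load approach for copying schemas between servers; every name, credential and path here is a placeholder:

[code]
# On the second (source) production server:
expdp system/password directory=dp_dir dumpfile=src_schemas.dmp \
      schemas=APP1,APP2 logfile=src_exp.log

# Copy src_schemas.dmp to the first server's dump directory (scp/ftp), then:
impdp system/password directory=dp_dir dumpfile=src_schemas.dmp \
      schemas=APP1,APP2 logfile=copy_imp.log
[/code]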
I'm trying to do a network-mode Data Pump import between Oracle databases, and it seems to hang indefinitely when it gets to the point where it should be processing table data.
C:\>impdp DP_USER/DP_USER parfile=sde_webmap_2.par
Import: Release 11.1.0.7.0 - 64bit Production on Wednesday, 26 May, 2010 17:42:03
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "DP_USER"."SYS_IMPORT_FULL_01":  DP_USER/******** parfile=sde_webmap_2.par
Estimate in progress using BLOCKS method...
[code]...
It just sits at this point indefinitely. The parfile, for those interested:
I was running impdp for my company's database. Everything is fine except one problem: a big table (61.1 GB) couldn't be imported. The warning message shows "imported TBL_XXXX 61.1G 0 out of 147653981 rows". The alert log had no warning message, and I don't know why.
I got:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31619: invalid dump file "C:\P7DB\fpac052912_dp01.dmp"
Done in AIX:
create directory dp as '/bak';
grant read, write on directory dp to public;
grant exp_full_database to username;
Done in Windows:
create directory dp as 'C:\P7DB';
grant read, write on directory dp to public;
grant exp_full_database to username;
grant imp_full_database to username;
Import: Release 11.2.0.1.0 - Production on Fri Feb 10 09:49:50 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "NTEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "NTEST"."SYS_IMPORT_FULL_01":  ntest/******** directory=test_dir dumpfile=JBLLIVE.31Jan2012.11.50AM.dmp remap_schema=JBLLIVE:NTEST logfile=ntest_10feb.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NTEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE

In this situation I observed the worker status and saw that some tables and some LOB objects, including LOB indexes, were being imported. The worker process does this in the background, but it does not show up in the import log file (I don't understand why it is not shown in the log file). It imports one table, one LOB, one LOB index, then again one table, one LOB, one LOB index, and so on.
My observation is that it first inserts data into the LOB segments and then into the normal table, and only when it starts inserting data into the normal table is that table shown in the import log file.
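For reference, the worker activity described above can be watched from another session; a minimal sketch joining the Data Pump session view to v$session (run as a DBA):

[code]
-- Shows which sessions belong to the running Data Pump job and what they
-- are currently waiting on.
SELECT dp.job_name, dp.session_type, s.sid, s.serial#, s.status, s.event
FROM   dba_datapump_sessions dp
JOIN   v$session s ON s.saddr = dp.saddr;
[/code]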
We need to set up a test system using data from one of the offsite customer locations. The option we would like to use is impdp with a network link. The steps given below should make the import faster by excluding indexes and constraints (a rough parfile sketch follows the steps).
Steps:
1. Import the schema with metadata only, excluding constraints, ref_constraints and indexes.
2. Import the schema with data only.
3. Spool the DDL from the source using dbms_metadata.get_ddl for the constraints, ref_constraints and indexes.
4. Execute the index creation DDL on the destination.
5. Execute the constraints and ref_constraints DDL on the destination.
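A hedged sketch of the first two steps as an impdp parfile; the link, schema and directory names are placeholders:

[code]
# pass 1: metadata only, without indexes or constraints
NETWORK_LINK=cust_site_link
SCHEMAS=APPSCHEMA
DIRECTORY=dp_dir
CONTENT=METADATA_ONLY
EXCLUDE=INDEX
EXCLUDE=CONSTRAINT
EXCLUDE=REF_CONSTRAINT
LOGFILE=meta_imp.log

# pass 2: the same parfile with CONTENT=DATA_ONLY and no EXCLUDE lines
[/code]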
I have a bit of an issue with Oracle datapump dump files.
Currently I manage the export and import of Oracle dump files. As part of the batch export process I have a script which essentially says:
For each schema related to my application in THIS instance, export the schema via the system user (the system user gives me privileges on all schemas).
On the import UI side of things, I am able to run a "head -20" command on the dmp file and determine the export client version, the date the schema was dumped, and what schema it was dumped from - all useful info presented in my UI, in that I allow the importation of production schemas into test schemas (contained in a different tablespace). Based on the naming convention I can determine the schema type (production or test). Additionally, and probably most importantly, I am assured of where the data has come from.
Looking at expdp and its dump file using the same method as above, it appears the Data Pump dump file DOES NOT carry similar readable headers. Because of this, I can extract very little useful info from the dump file.
I realize I could run impdp with "sqlfile=myfile.sql" and then interrogate the SQL file for the info, but on large dump files this would be fairly time consuming compared to a "head -20" on a dump file.
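One other avenue worth checking: the DBMS_DATAPUMP package has a GET_DUMPFILE_INFO procedure that reads a dump file's header server-side. A rough PL/SQL sketch follows; the file and directory names are placeholders, and the collection attribute names are from memory, so verify them against the package documentation for your release:

[code]
SET SERVEROUTPUT ON
DECLARE
  v_info     sys.ku$_dumpfile_info;   -- collection of item_code / value pairs
  v_filetype NUMBER;
BEGIN
  dbms_datapump.get_dumpfile_info(
    filename   => 'schema_exp.dmp',
    directory  => 'DP_DIR',
    info_table => v_info,
    filetype   => v_filetype);
  dbms_output.put_line('File type: ' || v_filetype);
  FOR i IN 1 .. v_info.COUNT LOOP
    dbms_output.put_line(v_info(i).item_code || ': ' || v_info(i).value);
  END LOOP;
END;
/
[/code]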
I have more than 100 dump files to import into my Oracle 11g database. I know how to import (impdp) dumps that share the same name, but here all the dump file names are totally different (e.g. aa.dmp, bb.dmp, ...).
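Assuming each .dmp file is a separate export (pieces of a single multi-file export job could instead be listed together in one DUMPFILE parameter), a hedged shell sketch that loops over them is one way to drive this; the directory object and credentials are placeholders:

[code]
#!/bin/sh
# One impdp run per dump file found in the dump directory.
for f in /bak/*.dmp; do
  name=$(basename "$f")
  impdp system/password directory=dp dumpfile="$name" full=y \
        logfile="imp_${name%.dmp}.log"
done
[/code]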
I am trying to import a 4 GB dump into Oracle 11gR2. It contains around 9000+ package bodies, which take much longer to import than the other objects (about 8 to 12 hours), and the import also consumes a lot of system space (roughly about 10 GB).
I have tried both parallel and non-parallel imports. How can I improve the speed of the package body import?
Details about the schema and import
Number of objects in the schema:
SQL> select object_type,count(1) from user_objects GROUP BY ROLLUP( object_type);
OBJECT_TYPE           COUNT(1)
------------------- ----------
FUNCTION                   248
INDEX                     5161
JAVA CLASS                 471
JAVA RESOURCE                1
JAVA SOURCE                 16
LIBRARY                      1