We have a 10g physical standby set up in our environment and we are migrating to 11g now. We want to use the Active Data Guard feature of 11g to run the live reports on the standby rather than on production. The questions I have are:
1) On our current 10g standby environment we use db_name=cusms, which exactly matches the production database name. I don't see that we are using db_unique_name on our standby, but I have read several blogs where everyone talks about setting db_unique_name on the standby while keeping db_name identical to production on 11g. I wanted to know: is db_unique_name a new requirement on 11g? Can I go ahead without db_unique_name and just have db_name exactly matching production? What are the implications of doing so? The reason we want to stick to this is that in case of failover we want the database name to be the same. But I want to hear your thoughts on this.
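For reference, the kind of settings those blogs describe looks roughly like this (a sketch only; the standby unique name cusms_stby and the DG_CONFIG entries are placeholders, not our actual configuration):

# primary init.ora / spfile
db_name=cusms
db_unique_name=cusms
log_archive_config='DG_CONFIG=(cusms,cusms_stby)'

# standby init.ora / spfile
db_name=cusms                 # still matches the primary
db_unique_name=cusms_stby     # only the unique name differs
log_archive_config='DG_CONFIG=(cusms,cusms_stby)'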
2) While building the standby, I noticed a few things and would like your clarification on them:
a. On the standby database, should I mount the instance using a pfile or an spfile, or does it not matter?
b. Let's say I use either an spfile or a pfile: can I just have db_unique_name in that file, start the instance in nomount, and do the duplicate from RMAN?
c. As soon as the "duplicate target database for standby from active database" finishes, I usually exit the RMAN session, go to SQL*Plus, and shut down the standby instance. (Is this OK to do?)
d. Then I start the standby instance with startup (mount and open the database); this should open the standby database in read-only mode. Following that, I issue "alter database recover managed standby database using current logfile disconnect" to put the database in recovery mode. (Are any steps missing here?)
e. Then I go to the primary, do a few log switches, and come back to the standby to see whether the primary changes made it across or not.
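Put together, the sequence I am following looks like this (a sketch of my own steps; the pfile path, passwords and the TNS aliases cusms_prim/cusms_stby are placeholders, not a definitive procedure):

On the standby host, start the instance with a minimal pfile:
SQL> startup nomount pfile='/u01/app/oracle/dbs/initcusms.ora';

From RMAN, connected to the primary as target and the standby as auxiliary:
RMAN> duplicate target database for standby from active database dorecover nofilenamecheck;

Back on the standby, after exiting RMAN:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect;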
But here is what I have observed:
a. The duplicate runs successfully, but during the course of the duplicate the primary generates a few archives that are neither shipped to nor applied on the standby. When I go to the standby to recover the database, it says media recovery is needed and asks for those archive files, and I have to move the files from primary to standby manually to apply them. Isn't this taken care of automatically?
b. I also noticed that I cannot open the standby database in read-only mode right after the duplicate command; when I try to open it, it says media recovery is needed. What is the best procedure to open the database in read-only mode immediately?
c. In my standby init.ora, let's say I use db_unique_name: where would my control file be placed? Will Oracle create the standby control file from the primary, put it on my standby database, and update the entry in my pfile or spfile?
I have database A (working in the live environment) and database B, a copy of database A (not live). Last week I restored the whole RMAN backup of database A onto database B. Now I don't want to change anything in any schema; I only want to import the updated and new records into the tables in database B.
There are around 20 schemas. In other words, database B already has everything it needs: all the required database objects such as procedures, functions and packages, indexes on all tables, and data in the tables. I just want to bring in the new and updated data.
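What I have been looking at so far is a data-only import along these lines (just a sketch that assumes a fresh Data Pump export of database A exists; the directory, dump file and schema names are placeholders, and TABLE_EXISTS_ACTION=TRUNCATE replaces the whole table contents rather than merging only the changed rows):

impdp system/*** DIRECTORY=dp_dir DUMPFILE=db_a_export.dmp SCHEMAS=schema1,schema2 CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=TRUNCATE LOGFILE=refresh_b.log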
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether importing to tape is possible, and if so, whether the data would still be accessible if needed later.
I need to export only the data from schemas or tables; how do I do that with Oracle Data Pump? When we use the SCHEMAS parameter it exports the whole schema, not only the data, right?
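From the documentation it looks like the CONTENT parameter is what controls this; is something along these lines (the schema, directory and dump file names are placeholders) the right way to get rows only, without the DDL?

expdp system/*** SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_data.dmp CONTENT=DATA_ONLY LOGFILE=hr_data.log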
I would like to ask whether it is possible, using the Data Pump export utility, to export my full database plus some partitioned tables by selecting specific partitions. Can I have all of these criteria in one single Data Pump export? If yes, is there any example?
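For the partition part on its own, the syntax I have found is along these lines (a sketch with made-up schema, table and partition names); what I cannot tell is whether it can be combined with a full export in the same job:

expdp system/*** DIRECTORY=dp_dir DUMPFILE=parts.dmp TABLES=sales_owner.sales:sales_q1,sales_owner.sales:sales_q2 LOGFILE=parts.log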
I am running Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production on RHEL5. I am busy with a Data Pump import, and from the log I can see that the import is currently working on the constraints.
I am using the parameter EXCLUDE=INDEX during the import, and I have generated the index DDL separately.
Now I want to create the indexes manually while the import is still running.
Would this be advisable, and what would the impact be?
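For context, the import was started roughly like this (everything except EXCLUDE=INDEX is a placeholder), and the idea is to run the pre-generated index DDL from a separate SQL*Plus session while it is still going:

impdp system/*** DIRECTORY=dp_dir DUMPFILE=prod_schema.dmp SCHEMAS=app_owner EXCLUDE=INDEX LOGFILE=imp_noindex.log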
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user KAILAS
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 600
ORA-39080: failed to create queues "KUPC$C_1_20111001165007" and "KUPC$S_1_20111001165007" for Data Pump job
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPC$QUE_INT", line 1555
ORA-00832: no streams pool created and cannot automatically create one
[oracle@localhost dbs]$
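The ORA-00832 at the bottom of that stack seems to be the real complaint: there is no streams pool and the instance cannot carve one out automatically. The change I am considering is along these lines (a sketch only; the 48M figure is an arbitrary placeholder):

SQL> alter system set streams_pool_size=48M scope=both;

or, if it cannot be changed online:

SQL> alter system set streams_pool_size=48M scope=spfile;

followed by a restart of the instance.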
How can I get, in PL/SQL (or with a script), the parameters that were given on a Data Pump command (export or import): the mode (easy), the tables/schemas list, the exclude/include values, and so on?
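The closest I have come so far is the documented DBA_DATAPUMP_JOBS view for the basics, plus looking directly at the job's master table (the job and schema names below are placeholders, and the master table's columns are something I still need to verify with DESCRIBE):

SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
SQL> describe "SCOTT"."SYS_EXPORT_SCHEMA_01"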
I have run into some problems with the Data Pump tools. We have a large database (Oracle 10.2.0.5, single instance on HP-UX 11.11, about 8 TB of data) that we want to migrate to a 10.2.0.5 RAC on Linux Itanium 64-bit. The migration window is around 20 hours and cannot be extended.
We are wondering whether impdp and expdp can run together, which would save a lot of time. Is there any way to implement this, or any other way to speed things up while keeping the data consistent?
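One direction we are considering is skipping the dump file entirely and pulling the data straight over a database link with impdp, so the "export" and "import" effectively run as a single job (a sketch; the link name hpux_src, the directory and the parallel degree are placeholders, and the restrictions on network-mode full imports still have to be checked):

impdp system/*** NETWORK_LINK=hpux_src FULL=Y PARALLEL=8 DIRECTORY=dp_dir LOGFILE=mig_full.log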
Why is Data Pump faster than conventional exp? One answer I know is that Data Pump uses block mode while exp uses byte mode; is there any other major reason? Also, say I have a database of 10 GB and want to take a Data Pump backup, but the constraint is that each dump file can be at most 2 GB. Is there any way to take a full backup of the database in parts like that?
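On the 2 GB limit specifically, what I have found so far is the FILESIZE parameter together with the %U substitution variable, roughly like this (directory and file names are placeholders):

expdp system/*** FULL=Y DIRECTORY=dp_dir DUMPFILE=full_%U.dmp FILESIZE=2G LOGFILE=full_exp.log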
We have a QA database on a VM server running Windows 2003 and Oracle 10.2.0.1, with limited disk space. We received an expdp file from a client that is large enough (40 GB) that we had to copy it to a network drive. I created a new directory object called IMPDMP with the directory path (using UNC pathing) \\server\share\folder\subfolder (our network-mapped P drive; yes, I included the leading backslashes, but I have also tried without them). I also included the parfile here. I checked the grants and they seem to be fine:
SQL> select * from session_roles where role like '%DATABASE' or role like 'DBA';
ROLE
------------------------------
DBA
EXP_FULL_DATABASE
IMP_FULL_DATABASE
SQL> select * from session_privs where privilege like '%DICT%';
PRIVILEGE
----------------------------------------
SELECT ANY DICTIONARY
ANALYZE ANY DICTIONARY
My questions are:
1) In interactive mode, does a dummy file expdat.dmp have to exist in the DATA_PUMP_DIR directory?
2) Does my dump file have to reside in the DATA_PUMP_DIR directory? (Again, there is no local disk space to hold the DMP file; one of the hard drives is just big enough, but since it also holds datafiles, the import would crash when they try to extend.)
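For completeness, the directory setup I used looks roughly like this (the server/share names are the same generic placeholders as above, and the dump file and grantee names are made up):

SQL> create or replace directory IMPDMP as '\\server\share\folder\subfolder';
SQL> grant read, write on directory IMPDMP to app_user;

impdp app_user/*** DIRECTORY=IMPDMP DUMPFILE=client_export.dmp LOGFILE=client_imp.log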
I have an 11g Data Pump dump file supplied by another party. I am on Windows 7 (x64). I have experience with other databases, but not Oracle, and the complexity of it all is a bit overwhelming...
I downloaded and installed [URL]. I used the Database Configuration Assistant to create a database:
Template: Data Warehouse
Name/SID: database0
Password: password0
I then used the 'database0' Enterprise Manager:
Logged in as SYSTEM/password0 (Normal)
Import from Export Files
Entire Files
Host Credentials: myself (I am the Windows administrator)
All the rest: defaults
The job appears to finish successfully. When I look at the schema (using RazorSQL), most tables seem to be there; however, a significant number are not. When I open the dump file in a text editor, those missing tables are clearly there - definitions and data. When I look in import.log, there are errors of the type:
error in creating database file '/db02/oradata/database0/stuff.dbf'
file create error, unable to create file
unable to open file
(OS 3) The system cannot find the path specified.
Failing sql is:
CREATE TABLESPACE "STUFF" DATAFILE '/db01/oradata/database0/stuff.dbf'
-- followed by the associated table creation errors.
So, does this mean that Unix paths are hardcoded into the dump file, and that it is therefore incompatible with an import into a Windows-based system? Or are the paths symbolic, internal representations used by Oracle, and these errors a symptom of an earlier, undisclosed problem?
The thing is, when I view the schema, the tablespace "STUFF" exists, just none of its tables.
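If I end up re-running this from the command line instead of the Enterprise Manager wizard, the parameter I keep reading about is REMAP_DATAFILE, something like the following (the Windows target path, directory object and dump file names are all guesses on my part):

impdp system/*** DIRECTORY=dp_dir DUMPFILE=supplied.dmp FULL=Y REMAP_DATAFILE='/db01/oradata/database0/stuff.dbf':'C:\oradata\database0\stuff.dbf' LOGFILE=imp_remap.log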
I'm setting up a new application-testing server; I have installed the 11gR2 Instant Client and the SQL*Plus client.
When I try to run an expdp command, I get this:
'expdp' is not recognized as an internal or external command
Now, I understand this is because I don't have the bin directory of a client installation in my OS PATH. My question is: which client exactly do I need in order to use the Data Pump utilities, and where do I download it?
I've found lots of posts around the web from people who had issues with putting $ORACLE_HOME/bin in the $PATH, or with client incompatibilities, but no answer to my specific question.
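In case it helps pin down what I am missing, this is the sort of setup I expected to need once the right client software is installed (the install path and connection details are just guesses):

set ORACLE_HOME=C:\oracle\product\11.2.0\client_1
set PATH=%ORACLE_HOME%\bin;%PATH%
expdp system/***@testdb DIRECTORY=dp_dir DUMPFILE=test.dmp SCHEMAS=app LOGFILE=test.log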
ORA-31693: Table data object "033"."EZMILRIKUZ" failed to load/unload and is being skipped due to error: ORA-00922: missing or invalid option
My backup syntax is:

033/******@INTORCL DIRECTORY=exp_dir DUMPFILE=033.dmp LOGFILE=033.LOG FULL=N REUSE_DUMPFILES=Y FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"
I thought there was a problem with the table, so I created a new one, and now I'm getting the same error on a different table (the third one in the list):

. . exported "033"."PRQ"     192.9 KB     479 rows
. . exported "033"."EZMIL"   558.8 KB    1229 rows
ORA-31693: Table data object "033"."MIL" failed to load/unload and is being skipped due to error:
ORA-00922: missing or invalid option
When I take off the FLASHBACK_TIME parameter it works fine, but I need this parameter.
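One variation I still want to try is moving everything into a parameter file, so the operating-system shell cannot mangle the quotes around FLASHBACK_TIME (just a guess that quoting is the culprit; the parfile name is made up). Contents of 033_exp.par:

DIRECTORY=exp_dir
DUMPFILE=033.dmp
LOGFILE=033.LOG
FULL=N
REUSE_DUMPFILES=Y
FLASHBACK_TIME="TO_TIMESTAMP(TO_CHAR(SYSDATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS')"

and then:

expdp 033/******@INTORCL PARFILE=033_exp.par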
I want to do the following: connected on Windows 2008 R2 with Oracle 11gR2, execute an import that does a full import from a Linux server running Oracle 10g, over a database link called "SUPORTE1".
ORA-39001: valor de argumento inválido (invalid argument value)
ORA-39200: O nome do link "SUPORTE1;" é inválido (the link name "SUPORTE1;" is invalid)
ORA-44004: nome SQL qualificado inválido (invalid qualified SQL name)
I tested the connection and the db link, and I created the directory.
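The command I am running is essentially this (the directory object name is a placeholder); the only thing that stands out to me now is the trailing semicolon that ended up in the link name in the error above:

impdp system/*** NETWORK_LINK=SUPORTE1 FULL=Y DIRECTORY=dp_dir LOGFILE=suporte1_full.log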
create directory asmexpdir as '+RECO/FILTDB/EXPDP';
grant read,write on directory asmexpdir to oraasfs;

expdp oraasfs/oraasfs2301 directory=asmexpdir dumpfile=SBSR_EXP.dmp tables=TM_SFS_CUST_01 logfile=EXPDP_LOG:SBSR_EXP.log
SUCCESS MESSAGE
. . exported "ORAASFS"."TM_SFS_CUST_01"    387.2 MB    817684 rows
Master table "ORAASFS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ORAASFS.SYS_EXPORT_TABLE_01 is:
  +RECO/filtdb/expdp/sbsr_exp.dmp
Job "ORAASFS"."SYS_EXPORT_TABLE_01" successfully completed at 03:34:59
I would like to run this daily and delete the dump files after 14 days, but my script shows an error. What could the solution be to make this script run?
#!/bin/bash
# Script to perform a Data Pump export backup every day
################################################################
# Change History
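The rest of the script got cut off when pasting; the shape of what I am trying to run is roughly this (a sketch with placeholder paths, SID, credentials and directory object, and it assumes the dump files land on a normal file system rather than inside ASM):

#!/bin/bash
# Daily Data Pump export plus a 14-day purge of old dump files
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=FILTDB
export PATH=$ORACLE_HOME/bin:$PATH

STAMP=$(date +%Y%m%d)
DP_DIR=/u01/dpump    # file-system path behind the DAILY_DP_DIR directory object

expdp oraasfs/*** DIRECTORY=DAILY_DP_DIR DUMPFILE=sbsr_exp_${STAMP}.dmp LOGFILE=sbsr_exp_${STAMP}.log TABLES=TM_SFS_CUST_01

# remove dump files and logs older than 14 days
find "$DP_DIR" -name 'sbsr_exp_*.dmp' -mtime +14 -delete
find "$DP_DIR" -name 'sbsr_exp_*.log' -mtime +14 -delete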
We have to migrate our 10gR2 single-instance database on a conventional file system to a two-node 11gR2 RAC on ASM (on the same Windows Server platform).
How can I migrate my production database using Data Pump? I have a full Data Pump export, but I don't know how to import it: schema by schema or as a full import? Do I need to create the tablespaces manually on the destination first? Should I exclude the indexes, constraints, and statistics?
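The kind of invocation I have in mind, if a straight full import is the right approach, is something like this (directory, dump file and log names are placeholders; whether to pre-create the tablespaces and whether to exclude statistics is exactly what I am unsure about):

impdp system/*** FULL=Y DIRECTORY=dp_dir DUMPFILE=prod_full.dmp LOGFILE=prod_full_imp.log EXCLUDE=STATISTICS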
Export: Release 10.2.0.1.0 - Production on Saturday, 25 December, 2010 5:10:06
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "TEST1"."SYS_EXPORT_TABLE_01": test1/******** DIRECTORY=datapump DUMPFILE=expfull-3.dmp query=auth_test:"where TXNREQDTTIME<20-MAY-10" tables=auth_test
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
I need to use Data Pump for the first time on my production database. Currently, on the testing database, when I take a schema-level export there are no errors or warnings in the log file, but when I import it, the import log file contains the following ORA warning. I searched on Google and the only fix I found is to recompile the invalid objects. How can I avoid these warnings in the log file?
"ORA-39082: Object type ALTER_PROCEDURE:"QUANTISV4"."P_CTM_ABN_INVST_EQUITY" created with compilation warnings"
It imports the data fine up to a certain stage, and after that Oracle gives the following error:
Processing object type SCHEMA_EXPORT/JAVA_SOURCE/JAVA_SOURCE
ORA-39097: Data Pump job encountered unexpected error -1423
ORA-39065: unexpected master process exception in DISPATCH
ORA-01423: error encountered while checking for extra rows in exact fetch
ORA-04030: out of process memory when trying to allocate 123404 bytes (QERHJ has h-joi,kllcqas:kllsltba)
ORA-39014: One or more workers have prematurely exited.
Job "SYSTEM"."SYS_IMPORT_SCHEMA_04" stopped due to fatal error at 11:42:03
I thought it was due to a lack of memory, so I increased pga_aggregate_target from 512 MB to 600 MB, but I am still getting the same error.
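This is how I have been checking the memory side of it (standard views, nothing exotic), in case the numbers help:

SQL> show parameter pga_aggregate_target
SQL> select name, round(value/1024/1024) mb from v$pgastat where name in ('total PGA allocated', 'maximum PGA allocated');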