Server Utilities :: Parameters Of Exp And Imp Command
May 31, 2004
What are the parameters of the exp and imp commands?
How can I get, in PL/SQL (or with a script), the parameters that were given on a Data Pump command (export or import): the mode (easy), the tables/schemas list, the exclude/include values, and so on?
10.2 (preferred) or 11.2, whichever you prefer.
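As a starting point only, a minimal sketch: while a Data Pump job exists, its basics are visible in DBA_DATAPUMP_JOBS; finer detail such as include/exclude values lives in the job's master table, whose layout varies by release, so that part is left out here.

select owner_name, job_name, operation, job_mode, state
from dba_datapump_jobs;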
I had specified the following:
exp scott/tiger owner=scott tables=(T1)
The error message is: conflicting modes specified.
Q1: Can we combine the two parameters (owner and tables)? If not, what is the way to specify it?
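For illustration, a hedged sketch of the usual workaround: owner mode and table mode cannot be combined, but a privileged user can schema-qualify the table name in table mode. The connect string and file names here are hypothetical.

exp system/manager tables=scott.T1 file=scott_t1.dmp log=scott_t1.log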
Q2: What privilege is needed to export another schema's tables?
Q3: What is the use of exporting a table with its indexes and so on, but without ROWS?
As we know, the original imp has a parameter named INDEXES that is used when generating CREATE INDEX DDL. Is there a comparable parameter in impdp?
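A hedged sketch of the usual Data Pump equivalent: the SQLFILE parameter writes the DDL to a file instead of executing it, and INCLUDE can restrict it to indexes. The directory object, dump file and output file names below are assumptions, not taken from the post.

impdp scott/tiger directory=dp_dir dumpfile=expdat.dmp sqlfile=indexes.sql include=INDEX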
I receive the following error message
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "binary_double,
[code].....
when I select count(*) on the external table created below.
SQL> CREATE TABLE cac_500_load
2 (
3 EMAILADDRESS VARCHAR2(80),
4 FIRSTNAME VARCHAR2(60),
5 LASTNAME VARCHAR2(60),
6 STREETADDRESS VARCHAR2(100),
[code].....
Here is the db version info:
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
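The KUP-00554/KUP-01005 messages mean the parser stopped inside the ACCESS PARAMETERS clause, so that is the part worth comparing. As a point of reference only, a minimal external table definition that parses cleanly; the directory object, file name and delimiter are assumptions:

CREATE TABLE cac_500_load
( emailaddress  VARCHAR2(80),
  firstname     VARCHAR2(60),
  lastname      VARCHAR2(60) )
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL )
  LOCATION ('cac_500.csv') )
REJECT LIMIT UNLIMITED;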
I am new to SQL*Loader and I would like to know the maximum number of ROWS that can be loaded into a conventional-path bind array when specifying the command-line parameter.
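As far as I know there is no hard documented ceiling on ROWS itself in conventional path; the effective limit is whatever number of rows fits into the bind array sized by BINDSIZE, so the two are usually set together. A hedged sketch with hypothetical names and sizes:

sqlldr scott/tiger control=load.ctl rows=5000 bindsize=20000000 readsize=20000000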
I am trying to run DBCA from the Unix command line, but it gives me an error like this:
dbca[158]: /home/oracle/product/10.2.0/jdk/jre/bin/java: not found.
I also tried to open some other utilities, like netca; it gives me the same error.
I ran "exp" command to take a back of Oracle Db based on user and later imported(using "imp" command) the dump into another db. Its seen that some the tables are not exported during exp command run. Can I use exp command on Oracle 11.2 version?
or should I always be using expdp command?
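One possibility worth checking: classic exp skips tables that have never had a segment allocated (the deferred segment creation default in 11.2), while Data Pump does not have that problem. For comparison, a hedged sketch of a Data Pump schema export with hypothetical names:

expdp system/manager schemas=scott directory=dp_dir dumpfile=scott.dmp logfile=scott_exp.log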
How can I load/unload a CSV file using a SQL command?
I have taken a database backup using the exp command, and when I try to import it on another PC the foreign keys are not imported. It gives an error message saying there is no matching unique or primary key for this column list.
How can I take a backup that includes the primary keys?
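For reference, a hedged sketch of an owner-mode export that explicitly asks for constraint DDL (primary, unique and foreign keys ride along with CONSTRAINTS=Y); the user and file names are hypothetical:

exp scott/tiger owner=scott file=scott.dmp log=scott.log rows=y constraints=y indexes=y grants=y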
Below is my import command for importing a specific function from the export file, but I am getting the errors below.
impdp system/PASSWORD schemas=TNC6 directory=dumpdir dumpfile=FULL01-02-2011.dmp logfile=IMP.log include=FUNCTION:"IN ('TNC_IS_NUMBER')"
ORA-39001: invalid argument value
ORA-39071: Value for INCLUDE is badly formed.
ORA-00936: missing expression
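ORA-39071 in this shape is very often a shell/command-prompt quoting problem rather than a Data Pump problem. One common workaround, sketched under that assumption, is to move INCLUDE into a parameter file so the quotes reach Data Pump untouched (the parfile name is hypothetical).

Contents of imp_func.par:
schemas=TNC6
directory=dumpdir
dumpfile=FULL01-02-2011.dmp
logfile=IMP.log
include=FUNCTION:"IN ('TNC_IS_NUMBER')"

Then run:
impdp system/PASSWORD parfile=imp_func.par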
I want to take an export of schema JACK, which is 700 MB in size and contains the following objects.
SQL> select count(*),object_type from dba_objects where owner='JACK' group by object_type;
COUNT(*) OBJECT_TYPE
---------- -------------------
207 INDEX
4 PROCEDURE
190 TABLE
80 VIEW
3 SYNONYM
67 SEQUENCE
6 rows selected.
The export command i am going to use is as below.
exp system/oracle@ORCL1 file=schemaexp.dmp log=schemaexp.log owner=JACK rows=y direct=y
grants=N constraints=y COMPRESS=N buffer=100000000 RECORDLENGTH=64000
Is it possible to take this schema export from the Windows command prompt, and is there any way to estimate how long the export would take to complete? Based on the time it takes, I am going to perform the export from the Windows command prompt.
When I try to start the CSS service to create a new ASM instance on my own PC for testing purposes, I get the error below: "'localconfig' is not recognized as an internal or external command, operable program or batch file."
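A hedged sketch, assuming a 10g-style single-instance ASM setup on Windows where localconfig.bat lives under ORACLE_HOME\bin and the error simply means it is not on the PATH:

cd /d %ORACLE_HOME%\bin
localconfig add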
I have had the following problem open with Oracle support since March 2011 (8 months), and there is still no resolution.
When I export all our schemas on Sunday night it takes about 1 hour 50 minutes. When I export the same schemas on any other night it takes 7 hours. The only difference is that on Sunday at 4:00am we drop all connections in the connection pools and establish new ones. Then, 19 hours later, on Sunday at 23:00, we perform the exports, which take only 2 hours to complete.
I have also tried recreating the connections in the connection pools during the week, and the exports have then taken only 2 hours to complete. But the following night, after the connections have been used during the day, the exports again take 7 hours. So it appears the export speed gets significantly slower when there are many open connections that have been used and not closed.
From the statspack report I found two SQL statements, internal to the export command, that showed an order-of-magnitude difference in elapsed execution time between the fast export and the slow export (see below).
How can I speed up the exports without having to drop and recreate the database connections in the connection pools each night?
FAST:
elapsed_time: 430.90
executions: 161,388
Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL
WHERE CNO = :1 ORDER BY COLNO
elapsed_time: 264.29
executions: 50,349
Module: exp@Oracle1 (TNS V1-V3)
SELECT TOWNER, TNAME, NAME, LENGTH, PRECISION, SCALE, TYPE, ISNULL, CONNAME, COLID, INTCOLID, SEGCOLID, COMMENT$, DEFAULT$, DFLTLEN, ENABLED, DEFER, FLAGS, COLPROP, ADTNAME, ADTOWNER, CHARSETID, CHARSETFORM, FSPRECISION, LFPRECISION, CHARLEN, TFLAGS, 100 FROM SYS.EXU8COL
WHERE TOBJID = :1 ORDER BY INTCOLID
SLOW:
elapsed_time: 8264.16
executions: 124,662
Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL
WHERE CNO = :1 ORDER BY COLNO
elapsed_time: 3877.78
executions: 38,813
Module: exp@Oracle1 (TNS V1-V3)
SELECT TOWNER, TNAME, NAME, LENGTH, PRECISION, SCALE, TYPE, ISNULL, CONNAME, COLID, INTCOLID, SEGCOLID, COMMENT$, DEFAULT$, DFLTLEN, ENABLED, DEFER, FLAGS, COLPROP, ADTNAME, ADTOWNER, CHARSETID, CHARSETFORM, FSPRECISION, LFPRECISION, CHARLEN, TFLAGS, 100 FROM SYS.EXU8COL
WHERE TOBJID = :1 ORDER BY INTCOLID
I use the following export command for each schema:
$ORACLE_HOME/bin/exp user/pass file=somefile.dmp owner=$SCHEMA log=somelog.log buffer=9000000
I have an Oracle Standard Edition 11.1.0.7 database on 64-bit Linux with a 7GB SGA. I currently export (I use exp, not Data Pump, because Data Pump is a lot slower and we can't use the parallel processing features of Data Pump on a Standard Edition database) approx 200 schemas each night. The export normally takes 1 hour 50 minutes, which is approximately 2 schemas exported every minute. When the exports run slowly, each export takes almost 2 minutes to complete.
The database has about 20 GB of data and 50 GB of indexes. The database also has approx 500 connections via TopLink connection pools from 8 application servers.
I'm working with sqlldr and I am trying to insert data from a CSV file using a CTL (control) file. One field of my table holds 5 characters, but one row has 6 characters in this field, so it is rejected by Oracle. (Logical: you can't insert 6 characters into a 5-character field.)
An error is visibly returned, so I wondered how you can catch the value of this error. Is it a code? A message?
I'd like to add a condition to my script so that the rest of the script continues even if this error is returned for that CTL execution.
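SQL*Loader signals the outcome through its exit code rather than through a single catchable error: on Unix, 0 is success, 2 is a warning (for example, some rows rejected), and 1/3 are failure/fatal. A hedged shell sketch of continuing past a rejected-rows warning; file names are hypothetical:

sqlldr scott/tiger control=load.ctl log=load.log
rc=$?
# 0 = success, 2 = warning (e.g. rejected rows), 1/3 = failure/fatal (Unix values)
if [ "$rc" -eq 0 ] || [ "$rc" -eq 2 ]; then
  echo "load finished with exit code $rc - continuing"
else
  echo "load failed with exit code $rc - stopping"
  exit "$rc"
fi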
I had a requirement to execute a long-running SQL query. But the SQL script has some parameters to be passed in, and at some points I need to press "Enter". I want to use the nohup command and run the SQL script from a shell script. How can I pass the parameters and run it under nohup?
[URL]
How can I keep the above SQL in a shell script and run it through the nohup command?
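A minimal sketch of one common pattern, assuming the values the script prompts for can instead be supplied as positional substitution variables (&1, &2 inside the script); the script name and argument values are hypothetical:

#!/bin/sh
# run_report.sh -- wrapper so the SQL script can run unattended
sqlplus -s scott/tiger << EOF
@long_report.sql '2011-01-01' 'SALES'
exit
EOF

Then launch it in the background with nohup:
nohup ./run_report.sh > run_report.out 2>&1 &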
I am working on setting up connections between a Windows 2008 server and a pair of Oracle 11g DBs in a RAC cluster. One database (let's say DatabaseA) is in one data center, and the other (DatabaseB) is a secondary, backup database. The RAC cluster is all set up and working fine. However, I need to set up the machine.config file on my Windows server so that it connects only to DatabaseA unless it fails, in which case we want it to connect to DatabaseB. I think we could do this if the host app server were Linux/Unix, but it is Windows, and I just don't have the background as to which parameters to set up in the machine.config file. They are similar, but different, and we want a very specific behavior (use DatabaseA; only if it fails, use DatabaseB). The application is a .NET 4.0 app.
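On the Oracle side, the piece that can be sketched with some confidence is the connect descriptor: an ADDRESS_LIST with FAILOVER=on and LOAD_BALANCE=off tries the first address and only falls through to the second on failure. The host names and service name below are placeholders, and how the string is wired into machine.config depends on the .NET data provider in use:

MYAPP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = on)
      (LOAD_BALANCE = off)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost-a)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost-b)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = myservice)))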
How can I find the versions of the exp and imp utilities of the database server from the Windows command prompt?
Note: Currently I have the 10.2.0.10 Oracle software installed on my local machine.
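One simple check, for what it's worth: both utilities print their own version in the banner, so running them with HELP=Y from the command prompt is usually enough:

C:\> exp help=y
C:\> imp help=y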
I recently discovered that there was a difference in my QA and prod environments, which I have since rectified.
SQL> select DBMS_STATS.GET_PARAM('METHOD_OPT') from dual;
DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE 1
SQL> select DBMS_STATS.GET_PARAM('METHOD_OPT') from dual;
DBMS_STATS.GET_PARAM('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO
I found a script that allows me to compare values from v$parameter
set pagesize 1000
col name format a28
col local format a20
col remote format a20
select local.name, local.value local,
remote.value remote,
[code]....
Is there other SQL code, or another method, that would find differences between my DBs such as the method_opt setting, which don't appear in v$parameter?
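For the DBMS_STATS settings specifically, one hedged option is to compare GET_PARAM locally and remotely in a single query. This assumes a database link to the other environment exists (the link name below is my assumption) and that the remote call is allowed from SQL in your release:

select dbms_stats.get_param('METHOD_OPT')          as local_value,
       dbms_stats.get_param@prodlink('METHOD_OPT') as remote_value
from dual;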
I have a question about expdp and RMAN. Does the expdp utility back up at the block level, as RMAN does? Which one is faster, expdp or RMAN?
I created a view with the parameters l_id and l_name; how can I find them in the Oracle view?
Create Table tb_test
(
Id Number,
Name Varchar2(64)
);
Create Or Replace View vw_tb_test(l_id,l_name)
As
Select Id,Name From tb_test;
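A small point of clarification, with a sketch: a view has no parameters in the procedural sense; l_id and l_name in that CREATE VIEW statement become the view's column names, so they show up in the dictionary as columns.

select column_name, data_type
from user_tab_columns
where table_name = 'VW_TB_TEST';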
What is the function of the size_clause specified in the method_opt parameter?
dbms_stats.gather_table_stats
method_opt := FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
FOR COLUMNS [size_clause]
column|attribute [size_clause]
[,column|attribute [size_clause] ... ]
size_clause := SIZE [integer | auto | skewonly | repeat],
where integer is between 1 and 254
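For illustration, a hedged example of how a size_clause is typically passed (the table name is taken from the tb_test example earlier in this thread; the choice of SIZE AUTO is arbitrary):

begin
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'TB_TEST',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');  -- let Oracle decide histogram bucket counts
end;
/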
I cannot change the RESOURCE_MANAGER_PLAN parameter. At first I set the parameter to DAYTIME, but when I restart my DB the parameter holds the old value, MAXCAP_PLAN. Why?
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
[code]....
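One common cause, offered as an assumption rather than a diagnosis: a Scheduler window opening or closing can switch the active plan back. If that is what is happening, the FORCE: prefix pins the plan so windows cannot change it (scope=both assumes an spfile is in use):

alter system set resource_manager_plan = 'FORCE:DAYTIME' scope=both;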
When I was going through Enterprise Manager Grid Control, I found an error, so I looked at the trace file, and it said:
ORA-07445: exception encountered: core dump ACCESS_VIOLATION unable_to_trans_pc PC:0x7C81BD02 ADDR:0x49444E49 UNABLE_TO_READ]
I searched and found that it has something to do with the SGA parameters. I saw that the shared_pool_size and sga_target parameters are set to 0. Also, certain SQL statements are hanging at some point. I thought I should change the above-mentioned parameters.
My question now is: can I use ALTER SYSTEM statements from SQL*Plus to change these parameters, and do they change immediately, or do I need to restart the Oracle instance for the changes to take effect? I would like to do:
alter system set sga_target=400m;
alter system set shared_pool_size=200m;
Would these work and take effect immediately?
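Both parameters are dynamic, so ALTER SYSTEM can change them without a restart, provided the instance runs on an spfile and the new sga_target does not exceed sga_max_size; the SCOPE clause decides whether the change also survives the next startup. A hedged sketch:

alter system set sga_target=400m scope=both;        -- takes effect in memory now and is written to the spfile
alter system set shared_pool_size=200m scope=both;  -- with sga_target set, this acts as a minimum for the shared pool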
Is there a way to show session parameters, such as:
CONSTRAINTS
USE_STORED_OUTLINES
ROW ARCHIVE VISIBILITY
CURRENT_SCHEMA
I can detect CURRENT_SCHEMA with a query against the userenv context, but I can't find any of the others there. Could there be an issue with these values being stored in the PGA, and therefore not visible through any regular views? I did find an article that showed a query against an x$ structure which revealed something for different settings of CONSTRAINTS, but I can't find it again.
What is the relationship between sga_max_size, sga_target, shared_pool_size, pga_aggregate_target and the server memory?
In short, how should I choose the above parameters for a server with a fixed amount of RAM?
I want to load lakhs of records into a table. My problem is that after loading about a quarter of the records, my process abends due to the size of my rollback segment area. I don't have the option to increase it. So, is there any way to perform intermediate commits when using the imp or sqlldr utilities, so that the entire data set loads without an abend?
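Both tools have a knob for this. A hedged sketch with hypothetical file and user names: for imp, COMMIT=Y commits after each buffer-full of rows (sized by BUFFER); for SQL*Loader conventional path, ROWS= sets the commit interval.

imp scott/tiger file=expdat.dmp fromuser=scott touser=scott commit=y buffer=1000000
sqlldr scott/tiger control=load.ctl rows=5000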
We are running an SAP application against an Oracle database. Say I use brspace or brtools (from the SAP side) to shut down or start up the database or to collect stats; does this mean it is not recommended to use Oracle commands to shut down/start up and collect stats?
I have newly installed an Oracle 11g database for testing and learning purposes.
I don't know what happened to it after the successful installation, but I am not able to start the database.
The following error is appearing; how do I resolve this?
ORA-01078: failure in processing system parameters
I can only log in to an idle instance.
I have a small problem with creating the SPFILE.
These are the commands I had issued:
1. startup nomount
2. create SPFILE from PFILE ;
Then I got an error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\ORACLE\ORA92\DATABASE\INITORCL.ORA'
Ever since this happened, I have been unable to connect to Oracle under any schema!
I get the following error:
SQL> conn scott/tiger
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Warning: You are no longer connected to ORACLE.
Why does this happen, and how can I overcome it?
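For what it's worth, a hedged sketch of the usual way around LRM-00109: point CREATE SPFILE at the pfile's full path explicitly, then start the instance again. The path below is a guess at the standard 9.2 Windows layout, not taken from the post.

SQL> create spfile from pfile='C:\oracle\ora92\database\initORCL.ora';
SQL> startup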