Server Utilities :: Exporting Table Structure From A Particular DB?
Nov 9, 2010
I'd like to know the process of exporting only the table structure of a database, without the actual content.
Note: I don't know how many tables are present in the DB.
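Two standard ways to do this, sketched under the assumption that the classic exp client and, for the Data Pump variant, an existing directory object are available; both skip row data and pick up every table automatically, so the table count does not matter:

exp system/pwd full=y rows=n file=structure.dmp log=structure.log
expdp system/pwd full=y content=metadata_only directory=dpump_dir dumpfile=structure.dmp logfile=structure.log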
I am exporting a table that is 3 GB in size and is also partitioned, with the option NOCOMPRESS specified.
Now, when I export it with the COMPRESS=N option of the exp utility, it should take 3 GB on the target server. Will exporting it with COMPRESS=Y save some storage during import, or does the NOCOMPRESS option specified on the partition make the exp utility's COMPRESS=Y option irrelevant, so that it takes 3 GB of space in both cases?
Is it true that whether you specify COMPRESS=N or COMPRESS=Y during export does not matter, and the size will always be 3 GB after import?
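For reference, exp's COMPRESS parameter is about extent allocation, not data compression: COMPRESS=Y only rewrites the storage clause so that the allocated extents are consolidated into one large initial extent at import time; the row data itself is unchanged, so roughly 3 GB of data stays roughly 3 GB either way. A quick way to compare after each import, with a hypothetical table name:

SELECT segment_name, ROUND(SUM(bytes)/1024/1024/1024, 2) AS gb
FROM   dba_segments
WHERE  segment_name = 'MY_PART_TABLE'
GROUP  BY segment_name;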
I am trying to export a partition of a table and import it to another database. I get the below error when I try to import.
ORA-14400: inserted partition key does not map to any partition
If I export the table (for that particular partition) and import it (after dropping the table) at the destination, the partitions and subpartitions are created without any problem.
The table is range partitioned and subpartitioned by list. So I had to perform the operations below if I wanted to retain the other data in the destination table:
1. Drop the existing partition
2. Create the partition and sub partition, same as source
3. Execute imp
In fact I had to perform step 2 because, if I split the partition instead, the subpartitions get replicated into the new partition, which again throws the same error. Is there a better way of managing the partitions and subpartitions at the destination with the exp/imp utility, so that I need not perform steps 1 and 2 manually?
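For illustration, a sketch of steps 1 to 3 with hypothetical names (a range partition by date with list subpartitions by region); the point is that the target partition must exist with the same shape as the source before imp runs:

ALTER TABLE sales DROP PARTITION p_2010_q1;

ALTER TABLE sales ADD PARTITION p_2010_q1
  VALUES LESS THAN (TO_DATE('2010-04-01','YYYY-MM-DD'))
  (SUBPARTITION p_2010_q1_east VALUES ('EAST'),
   SUBPARTITION p_2010_q1_west VALUES ('WEST'));

-- then: imp user/pwd file=sales_part.dmp tables=sales:p_2010_q1 ignore=y

I am not aware of an exp/imp switch that creates missing partitions for you; ALTER TABLE ... EXCHANGE PARTITION with a staging table is the usual heavier alternative.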
How can I export all table structures as a script?
Note: I have multiple schemas, not just one.
I use:
SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name)
FROM DBA_TABLES u;
but I need it to work for all schemas.
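A hedged fix, assuming SELECT access on DBA_TABLES and EXECUTE on DBMS_METADATA: pass the owner as GET_DDL's third argument, otherwise tables outside the current schema raise ORA-31603 (schema names below are hypothetical):

SET LONG 100000
SET PAGESIZE 0
SELECT DBMS_METADATA.GET_DDL('TABLE', t.table_name, t.owner)
FROM   dba_tables t
WHERE  t.owner IN ('SCHEMA1', 'SCHEMA2');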
I am trying to export my database, and whenever I try to log in it gives ORA-01017: invalid username/password.
If I log in as system/manager it is accepted, but then I am not able to export all of my database.
I am using a schema for which I need to take a backup of the metadata only, using the exp utility:
exp shan/shan@shan file=/backup_dump/shan.dmp log=/backup_dump/shan.log owner=shan rows=n
But it returns the errors below. I only have access to the user shan; our client cannot allow me to use the SYSTEM or SYSDBA schema or grant any other required privileges. So is there any way to take a metadata backup of user shan from user shan?
EXP-00008: ORACLE error 942 encountered
ORA-00942: table or view does not exist
EXP-00024: Export views not installed, please notify your DBA
EXP-00000: Export terminated unsuccessfully
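EXP-00024 indicates the export catalog views (created by catexp.sql) are missing, which only a DBA can fix. As a fallback that stays entirely within the shan account, a hedged sketch that spools the schema DDL with DBMS_METADATA instead of exp (object types are limited to ones whose names match between USER_OBJECTS and GET_DDL):

SET LONG 100000
SET PAGESIZE 0
SET TRIMSPOOL ON
SPOOL /backup_dump/shan_ddl.sql
SELECT DBMS_METADATA.GET_DDL(object_type, object_name)
FROM   user_objects
WHERE  object_type IN ('TABLE', 'INDEX', 'VIEW', 'SEQUENCE', 'PROCEDURE', 'FUNCTION');
SPOOL OFF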
I would like to export a few tables from a physical standby which is in read-only mode.
I have tried both the exp and expdp methods, and I could successfully export and import the tables from the physical standby using exp; unfortunately, expdp does not allow this from a read-only database.
Does this mean we still have to use exp instead of expdp?
Note: I would expect a proper response from experts, and no unwanted comments like "Contact Oracle support", "Paste the entire command here", "Read the manuals", or "Why are you exporting from the standby and not from the primary?".
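expdp fails here because it must create its master table in the source database, which a read-only standby forbids. A commonly cited workaround, sketched under the assumption of a separate read-write database and appropriate grants: run expdp there with NETWORK_LINK pointing at the standby, so the master table lives on the read-write side (link and connect string names are hypothetical):

-- on the read-write database:
CREATE DATABASE LINK standby_link CONNECT TO scott IDENTIFIED BY tiger USING 'STDBY';
-- then: expdp system/pwd NETWORK_LINK=standby_link TABLES=scott.emp DIRECTORY=dpump_dir DUMPFILE=emp.dmp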
I need to extract a huge amount of data from a couple of views. The problem is that they want it in TXT files with a fixed record length. There will be about 6 files, for a total of about 10 GB.
How can I export those tables in the fastest possible way? If I'm not mistaken, exp and expdp can't create TXT files, so do I really need to use UTL_FILE or spool?
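exp and expdp indeed write only their own dump format. A minimal SQL*Plus spool sketch for fixed-length records, with a hypothetical view and field widths: RPAD/LPAD pin every field to a constant width, and LINESIZE is set to exactly the record length with TRIMSPOOL OFF so every line keeps it:

SET PAGESIZE 0
SET LINESIZE 55
SET TRIMSPOOL OFF
SET FEEDBACK OFF
SPOOL /tmp/extract1.txt
SELECT RPAD(NVL(cust_name, ' '), 40) || LPAD(TO_CHAR(balance), 15, '0') AS rec
FROM   my_view;
SPOOL OFF

For volumes like 10 GB, a large SET ARRAYSIZE helps; UTL_FILE from a PL/SQL loop is the main alternative, and both are legitimate.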
I am taking an export using the consistent parameter. Theoretically I understand it; practically, I couldn't see how it works.
For example:
I am updating the tab1 table under the sams user. The table has one lakh records.
While the update is in progress, I run the export with consistent=y and consistent=n. I mean:
exp sams/sams file=cons.dmp owner=sams consistent=y
exp sams/sams file=cons2.dmp owner=sams consistent=n
Then both files are imported into separate users (sam and san).
The updated info is not visible in either the san or the sam user.
I want to know practically how it works; I need a concrete example of the difference between consistent=y and consistent=n.
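Two details make this behave as described. First, exp in either mode only ever sees committed data, so an uncommitted UPDATE never reaches the dump, which is why the change showed up in neither sam nor san. Second, the actual difference: consistent=y reads every table as of the SCN at which the export started, while consistent=n reads each table at the moment that table is exported. A hedged two-session demo (column names hypothetical):

-- session 1: start a schema export that runs for a while
--   exp sams/sams file=cons.dmp owner=sams consistent=y
-- session 2: while session 1 is still running, commit a change
--   UPDATE tab1 SET status = 'X' WHERE id = 1;
--   COMMIT;
-- with consistent=y the committed change is NOT in cons.dmp (snapshot at start);
-- with consistent=n it may appear, if tab1 had not yet been exported
-- when the commit happened.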
Export/Import
==============
While exporting schemas,
I couldn't get the dump files exported to the exact location. I mean, see the following query:
QUERY
=====
exp file=backup/expfile1.dmp,backup/expfile2.dmp,backup/expfile3.dmp
owner=(order,purchase) filesize=5m
At the OS level, I found those dump files in the home directory:
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile1.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile2.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile3.dmp
[oracle@localhost ~]$ pwd
/home/oracle
When I list the home directory:
-rw-r--r-- 1 oracle oinstall 72 Jun 20 21:17 afiedt.buf
drwxr-xr-x 3 oracle oinstall 4096 Jun 17 10:07 Desktop
-rw-r--r-- 1 oracle oinstall 71 Jun 19 20:42 ed.hup
drwxr-xr-x 2 oracle oinstall 4096 Aug 6 19:38 backup
-rw-r--r-- 1 oracle oinstall 2826240 Aug 6 19:39 expdat.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile1.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile2.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile3.dmp
The dump files go to the home path even though I mentioned an appropriate location.
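A hedged guess at avoiding this: relative paths resolve against the client's current working directory, so giving exp absolute paths pins the files down regardless of where the command is launched from:

exp file=/home/oracle/backup/expfile1.dmp,/home/oracle/backup/expfile2.dmp,/home/oracle/backup/expfile3.dmp owner=(order,purchase) filesize=5m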
What is the syntax for exporting only the procedures of a particular user?
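Classic exp has no object-type filter for this, but Data Pump does; a sketch, assuming expdp is available and a directory object exists (schema name hypothetical):

expdp system/pwd SCHEMAS=scott INCLUDE=PROCEDURE DIRECTORY=dpump_dir DUMPFILE=procs.dmp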
I'm trying to export a relatively large database, but it's a bit more complicated than that. For one schema I need a full export/import (data included).
Another 10 schemas I need empty, with the exception of a table in some of them which needs to be exported/imported with all the data inside. Is it possible to do this with the Data Pump utilities (expdp, impdp)?
Afterwards I will be running some scripts to populate the DB instance with critical data/metadata.
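One hedged way to slice it, with hypothetical schema and table names (three of the ten schemas shown), is three expdp passes: full data for the one schema, metadata only for the other ten, and a table-mode pass for the handful of tables that keep their rows:

expdp system/pwd SCHEMAS=app_main DIRECTORY=dpump_dir DUMPFILE=app_main.dmp
expdp system/pwd SCHEMAS=s01,s02,s03 CONTENT=METADATA_ONLY DIRECTORY=dpump_dir DUMPFILE=empty_schemas.dmp
expdp system/pwd TABLES=s01.ref_codes,s03.lookup_vals DIRECTORY=dpump_dir DUMPFILE=kept_tables.dmp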
I have taken a database backup using the exp command, and when I try to import it on another PC the foreign keys are not imported. It gives an error saying there is no matching unique or primary key for the column list.
How can I take a backup that includes the primary keys?
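exp includes constraints by default (CONSTRAINTS=Y), so the usual suspect is that the referenced parent tables, or their primary keys, are missing at the destination when the foreign keys are created. A hedged sketch making the defaults explicit and importing everything in one pass so parents arrive with their keys (user and schema names hypothetical):

exp user/pwd file=full_schema.dmp owner=app constraints=y grants=y indexes=y
imp user/pwd file=full_schema.dmp fromuser=app touser=app ignore=y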
Suddenly my export hangs at 'exporting cluster definitions'. I have been using this database for the last 4 years and it never caused a problem or hung at this step. Here I'm pasting my screen output; it is my production DB.
[oracle1@wbh_as1 smbshare]$ exp wb/wb
Export: Release 9.2.0.1.0 - Production on Thu Dec 23 00:02:44 2010
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Enter array fetch buffer size: 4096 >
Export file: expdat.dmp > wb
(1)E(ntire database), (2)U(sers), or (3)T(ables): (2)U >
Export grants (yes/no): yes >
Export table data (yes/no): yes >
Compress extents (yes/no): yes >
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user WB
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user WB
About to export WB's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether writing the dump files directly to tape is possible. If so, would the data be accessible if needed later?
I am stuck on a requirement to show the organization structure as a top-down organizational chart.
Is there any way to build such a well-structured report or form?
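For the data side, a minimal sketch using Oracle's hierarchical query syntax, assuming an emp-style table where mgr holds the manager's empno; the report or form can then render the indented result:

SELECT LPAD(' ', 2 * (LEVEL - 1)) || ename AS org_chart
FROM   emp
START WITH mgr IS NULL
CONNECT BY PRIOR empno = mgr;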
What command is used to create a table by copying the structure of another table, including constraints?
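For reference, the usual CREATE TABLE ... AS SELECT trick copies column definitions and NOT NULL constraints only; I am not aware of a single command that also copies the other constraints. A sketch of the common two-step approach (table names hypothetical):

CREATE TABLE emp_copy AS SELECT * FROM emp WHERE 1 = 0;
-- primary/foreign key and check constraints are not copied by CTAS;
-- their DDL can be extracted and replayed, e.g.:
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP') FROM dual;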
--this is for txn details
CREATE TABLE txn_det(
txnid VARCHAR2(10) PRIMARY KEY,  -- holds values like 'tx0045' below, so a character type, not NUMBER
amount NUMBER,
status VARCHAR2(50),
cust_id NUMBER);
----this is for customer details
CREATE TABLE cust_det(
cust_id NUMBER PRIMARY KEY,
cust_name VARCHAR2(50),
cust_acc number(15));
--data to insert for customer table
INSERT INTO cust_det VALUES(101,'Miller',12345);
INSERT INTO cust_det VALUES(201,'Scott',45678);
----data to insert for txn table
INSERT INTO txn_det VALUES('tx0045',123.00,'success',101);
INSERT INTO txn_det VALUES('tx0046',4512.50,'success',101);
INSERT INTO txn_det VALUES('tx0049',78.12,'success',101);
INSERT INTO txn_det VALUES('tx0055',123.12,'success',201);
Now the problem is that the cust_id column in txn_det may contain duplicates (the same customer appears on many transactions). So I thought of adding the txn_id column to the cust_det table, but I know that encourages redundancy.
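A hedged note on the conventional design: repeated cust_id values in txn_det are normal in a one-to-many relationship. What keeps them safe is a foreign key back to the unique customer row, rather than copying txn_id into cust_det:

ALTER TABLE txn_det
  ADD CONSTRAINT fk_txn_cust FOREIGN KEY (cust_id) REFERENCES cust_det (cust_id);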
How do I get a table's structure in Oracle? Actually I got it through this command: "SELECT dbms_metadata.get_ddl(a.object_type, a.object_name) FROM user_objects a WHERE a.object_type != 'PACKAGE BODY'"
Is there any other way to get it? I need something like table name, field name, data type.
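If a plain table/column/datatype listing is all that is needed, the dictionary view user_tab_columns already holds it:

SELECT table_name, column_name, data_type, data_length
FROM   user_tab_columns
ORDER  BY table_name, column_id;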
How can we escape a comma while exporting data from a table into a CSV file?
CREATE TABLE emp
(
EMPNO NUMBER(4) NOT NULL,
ENAME VARCHAR2(10 BYTE),
JOB VARCHAR2(9 BYTE),
MGR NUMBER(4),
HIREDATE DATE,
address varchar2(100),
[code].......
I have to export data from the emp table, which has an address column, and the address column contains commas. When I run the script below, the part of the address after a comma shifts into the next column in the CSV file. Is there any way to avoid this shifting so that the complete address stays in one column?
set echo off
set verify off
set termout on
set heading off
set pages 50000
[code]....
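The usual CSV convention handles this: wrap any field that may contain a comma in double quotes, doubling any embedded quotes. A hedged sketch of the SELECT inside such a spool script:

SELECT empno || ',' ||
       ename || ',' ||
       '"' || REPLACE(address, '"', '""') || '"' AS csv_line
FROM   emp;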
create table my_rows
(
my_env varchar2(100),
a      number(2),
b      number(2)
)
/
insert into my_rows values ('A', 10, 20);
insert into my_rows values ('A', 10, 20);
insert into my_rows values ('A', 10, 20);
insert into my_rows values ('A', 10, 20);
insert into my_rows values ('A', 10, 20);
insert into my_rows values ('A', 10, 20);
insert into my_rows values ('A', 10, 20);
insert into my_rows values ('A', 10, 20);
[code]....
The first row means that the value 10 represents 40% within the couple (10,20): if I have 100 rows with the couple (10,20), 40 rows will be marked with the value 10 and 60 will be marked with the value 20. To do this, I used to create a temporary table with the same structure as my_rows plus a new column, "the_value", and I populated that column with a PL/SQL FOR loop. But I think it is doable in a single SQL statement.
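A single-SQL sketch of the idea, under an explicit assumption: the 40% figure has to come from somewhere, so a hypothetical pct column (part of the elided table definition) is taken to hold the percentage of rows that get value a. Rows are numbered within each group and compared against the group's threshold:

SELECT my_env, a, b,
       CASE
         WHEN ROW_NUMBER() OVER (PARTITION BY my_env, a, b ORDER BY ROWID)
              <= ROUND(COUNT(*) OVER (PARTITION BY my_env, a, b) * pct / 100)
         THEN a
         ELSE b
       END AS the_value
FROM   my_rows;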
I would like to view the structure of all tables of a schema in a file; I need to take a printout of it for some analysis.
I did it like this:
exp user/pwd file=dumpfile.dmp full=y rows=n log=logfilename.log
imp user/pwd file=dumpfile.dmp full=y indexfile=indexfile.sql logfile=logfilename.log
but
the output looks like the following, and I could not format it:
REM CREATE TABLE "SYSTEM"."DEF$_AQCALL" ("Q_NAME" VARCHAR2(30), "MSGID"
REM RAW(16), "CORRID" VARCHAR2(128), "PRIORITY" NUMBER, "STATE" NUMBER,
REM "DELAY" TIMESTAMP (6), "EXPIRATION" NUMBER, "TIME_MANAGER_INFO"
REM TIMESTAMP (6), "LOCAL_ORDER_NO" NUMBER, "CHAIN_NO" NUMBER, "CSCN"
REM NUMBER, "DSCN" NUMBER, "ENQ_TIME" TIMESTAMP (6), "ENQ_UID" NUMBER,
[Code] ..........
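A hedged alternative that skips the REM-prefixed indexfile output entirely: spool the DDL from DBMS_METADATA, optionally trimmed with session transform parameters so the printout stays readable:

BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
END;
/
SET LONG 100000
SET PAGESIZE 0
SET TRIMSPOOL ON
SPOOL table_structures.sql
SELECT DBMS_METADATA.GET_DDL('TABLE', table_name) FROM user_tables;
SPOOL OFF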
I am receiving two large export files from a vendor, so I have no control over their contents. I need to import these into our database. The two export files are very similar, except that one has slightly different columns in it. So export file 1 may have a table:
COLUMN_A
COLUMN_B
COLUMN_C
The second file may have:
COLUMN_A
COLUMN_B
COLUMN_D
At the destination, I have a table that has:
COLUMN_A
COLUMN_B
COLUMN_C
COLUMN_D
Is there a parameter that would let me interchangeably import either (or both) files into this destination table? This is my first attempt at Data Pump, but I know classic import has caused me issues with this, and I am not sure whether the same limitations exist. Will the missing columns cause it to fail?
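A hedged sketch, on the assumption (worth verifying on a test copy first) that an import into a pre-created table matches columns by name, so a destination holding the superset of columns takes either file and leaves the absent column NULL:

impdp user/pwd DIRECTORY=dpump_dir DUMPFILE=vendor_file1.dmp TABLES=the_table TABLE_EXISTS_ACTION=APPEND
impdp user/pwd DIRECTORY=dpump_dir DUMPFILE=vendor_file2.dmp TABLES=the_table TABLE_EXISTS_ACTION=APPEND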
My work relates to Oracle's Data Pump.
I would like to use a command such as:
expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp COMPRESSION=ALL LOGFILE=export.log SCHEMAS=hr
But for some tables in schema HR I don't want to export the data, just the table structure.
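Data Pump can do this with a filter on TABLE_DATA. A hedged sketch with hypothetical table names, using a parameter file to sidestep shell quoting of the name clause:

# exclude_data.par
DIRECTORY=dpump_dir1
DUMPFILE=expdat.dmp
COMPRESSION=ALL
LOGFILE=export.log
SCHEMAS=hr
EXCLUDE=TABLE_DATA:"IN ('BIG_TAB1','BIG_TAB2')"

expdp hr/hr PARFILE=exclude_data.par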
We have a daily-partitioned table, and for backups we use Data Pump (expdp). Our policy is to drop each partition after it is backed up (archiving).
We have archived dump files going back one year. A few days ago a developer changed the table structure: they added one new column to the table.
Now we are unable to restore old partitions. Is there a way to restore a partition if a new column has been added to, or dropped from, the current table?
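One hedged workaround (staging names hypothetical, and REMAP_TABLE needs a Data Pump version that supports it): import the old partition into a staging table that still matches the old structure, then append into the live table while supplying a value for the new column:

impdp user/pwd DIRECTORY=dpump_dir DUMPFILE=old_part.dmp REMAP_TABLE=app.daily_tab:daily_tab_stage

INSERT INTO daily_tab (col1, col2, new_col)
SELECT col1, col2, NULL FROM daily_tab_stage;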
A table's structure is modified every now and then, because of which a few packages become invalid. Is there any way to monitor which user has changed the table structure?
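A standard sketch, assuming auditing is enabled (AUDIT_TRAIL=DB) and access to the audit views: statement auditing on ALTER TABLE records who ran the DDL:

AUDIT ALTER TABLE;

SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_trail
WHERE  action_name = 'ALTER TABLE'
ORDER  BY timestamp DESC;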
I need to create a trigger on certain columns of a table.
SQL> desc MXMS_BF_TXN_DTL_T
Name Null? Type
----------------------------------------- -------- ----------------------------
DOC_NO NOT NULL VARCHAR2(200)
SEQ_NO NOT NULL NUMBER(24)
GL_CODE VARCHAR2(200)
TXN_NATURE VARCHAR2(200)
TXN_TYPE_CODE VARCHAR2(200)
[code].....
I need to collect the new and old data whenever an UPDATE statement fires on the DOC_NO, POLICY_KEY, or CRT_USER columns. I have created only an audit table for this, with the structure below.
Name Null? Type
----------------------------------------- -------- ----------------------------
TIMESTAMP DATE
WHO VARCHAR2(30)
CNAME VARCHAR2(30)
OLD VARCHAR2(2000)
NEW VARCHAR2(2000)
Description: TIMESTAMP is for when the modification happened.
WHO is for the username.
CNAME is for the column which was modified.
OLD is for the old value of the modified column.
NEW is for the new value of the modified column.
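A minimal sketch of such a trigger, with a hypothetical audit table name audit_tab; POLICY_KEY and CRT_USER are assumed to sit in the elided part of the DESC output, per the post:

CREATE OR REPLACE TRIGGER trg_audit_bf_txn
AFTER UPDATE OF doc_no, policy_key, crt_user ON mxms_bf_txn_dtl_t
FOR EACH ROW
BEGIN
  IF UPDATING('DOC_NO') THEN
    INSERT INTO audit_tab (timestamp, who, cname, old, new)
    VALUES (SYSDATE, USER, 'DOC_NO', :OLD.doc_no, :NEW.doc_no);
  END IF;
  IF UPDATING('POLICY_KEY') THEN
    INSERT INTO audit_tab (timestamp, who, cname, old, new)
    VALUES (SYSDATE, USER, 'POLICY_KEY', :OLD.policy_key, :NEW.policy_key);
  END IF;
  IF UPDATING('CRT_USER') THEN
    INSERT INTO audit_tab (timestamp, who, cname, old, new)
    VALUES (SYSDATE, USER, 'CRT_USER', :OLD.crt_user, :NEW.crt_user);
  END IF;
END;
/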
I have a requirement that needs to be coded like this:
A function that returns a PL/SQL table (I can't use a ref cursor) whose columns vary every time it runs. That
means:
type pl_tab_type is object(col1 varchar2(1000), col2 varchar2(1000));
type pl_tab is table of pl_tab_type;
func f return pl_tab
as
...
end;
Note: pl_tab_type will vary for each run of function f.
For example, pl_tab_type could change to the following:
type pl_tab_type is object(col1 varchar2(1000), col2 varchar2(1000), col3 varchar2(1000));
How can I return a PL/SQL table of a dynamic type from the function?
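SQL object types cannot change shape between calls, so a common hedged workaround is to declare the type once with the maximum number of columns the function may ever need and leave the unused ones NULL:

CREATE OR REPLACE TYPE pl_tab_type AS OBJECT
  (col1 VARCHAR2(1000), col2 VARCHAR2(1000), col3 VARCHAR2(1000));
/
CREATE OR REPLACE TYPE pl_tab AS TABLE OF pl_tab_type;
/
CREATE OR REPLACE FUNCTION f RETURN pl_tab AS
  t pl_tab := pl_tab();
BEGIN
  t.EXTEND;
  t(1) := pl_tab_type('a', 'b', NULL);  -- this run only needs two columns
  RETURN t;
END;
/

Truly dynamic shapes are possible with the ODCI/ANYDATASET interfaces, at considerably more effort.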
I have a requirement to import text files generated from the 3D modelling software xsteel, where it records all the geometric information, and I want to import this information into an Oracle table.
CREATE TABLE dstv_head ( wo_no VARCHAR2(12),struct VARCHAR2(12),rev_no NUMBER,
mark VARCHAR2(12),pos VARCHAR2(12),grade VARCHAR2(12),qty NUMBER,PROFILE VARCHAR2(24),TYPE VARCHAR2(12),
len NUMBER,width_web NUMBER,width_bottom NUMBER,flange_thk NUMBER,web_thk NUMBER,radius NUMBER,kgm NUMBER,
kgm1 NUMBER,kgm2 NUMBER,bevel_plus NUMBER,bevel_minus NUMBER,holes_yn VARCHAR2(1),holes_v_yn VARCHAR2(1),
hole_x_dim NUMBER,hole_y_dim NUMBER,hole_dia NUMBER,no_of_holes NUMBER)
-- All the data has to go under a specific field; for example, **9005.nc1 will go into the wo_no field and 1239401A will go under struct.
ST
** 9005.nc1 --WO_NO
1239401A - STRUCT
1 -REV_NO
9005 -MARK
9005 --POS
S275JR --GRADE
2 --QTY
[code]....
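Since the sample shows one value per line rather than columns, a rough PL/SQL sketch using UTL_FILE (the directory object DATA_DIR and the file name are hypothetical, and only the first few fields are shown; the rest follow the same one-line-per-value pattern):

DECLARE
  f      UTL_FILE.FILE_TYPE;
  l      VARCHAR2(4000);
  v_wo   dstv_head.wo_no%TYPE;
  v_st   dstv_head.struct%TYPE;
  v_rev  dstv_head.rev_no%TYPE;
BEGIN
  f := UTL_FILE.FOPEN('DATA_DIR', 'model.nc1', 'R');
  UTL_FILE.GET_LINE(f, l);                                 -- 'ST' block header
  UTL_FILE.GET_LINE(f, l); v_wo  := TRIM(LTRIM(l, '* '));  -- '** 9005.nc1'
  UTL_FILE.GET_LINE(f, l); v_st  := TRIM(l);               -- '1239401A'
  UTL_FILE.GET_LINE(f, l); v_rev := TO_NUMBER(TRIM(l));    -- '1'
  -- ... remaining fields continue one line per value ...
  INSERT INTO dstv_head (wo_no, struct, rev_no) VALUES (v_wo, v_st, v_rev);
  COMMIT;
  UTL_FILE.FCLOSE(f);
END;
/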
I am working on an index issue and need to validate the structure of the index. Does validation lock the index?
Do we need downtime, or can we do it during normal working hours?
ANALYZE INDEX index_name VALIDATE STRUCTURE;
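For what it's worth, a hedged note: in its default form, VALIDATE STRUCTURE locks the underlying table against concurrent DML for the duration, so on a busy system it effectively needs a quiet window. The ONLINE variant (version-dependent) avoids that lock, at the cost of not populating INDEX_STATS:

ANALYZE INDEX index_name VALIDATE STRUCTURE ONLINE;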