Server Utilities :: Exporting Huge Amount Of Data?

Jul 25, 2011

I need to extract a huge amount of data from a couple of views. The problem is that they want it in TXT files with a fixed record length. There will be about 6 files, for a total of roughly 10 GB.

How can I export those tables in the fastest possible way? If I'm not mistaken, exp and expdp can't create TXT files, so do I really need to use UTL_FILE or spool?
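
One option that avoids PL/SQL entirely is a SQL*Plus spool script that pads every column to a fixed width; a minimal sketch, with the view name and columns as placeholders:

set pagesize 0 heading off feedback off termout off
set linesize 75 trimspool off
spool /u01/export/orders.txt
select rpad(order_no, 10) || rpad(customer_name, 50) || lpad(to_char(amount), 15, '0')
  from orders_v;
spool off

With trimspool off and linesize equal to the record length, every line comes out exactly 75 bytes. For 10 GB, UTL_FILE driven from PL/SQL usually runs faster than spooling across SQL*Net, but both approaches work.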

View 1 Replies


Server Utilities :: Data Pump For Exporting And Importing Extremely Large Data Files

Sep 24, 2010

I am considering the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether the dump files can go to tape. If so, would the data be accessible if needed later?
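
Data Pump itself can write only to disk files through a directory object, so expdp dump files cannot go straight to a tape device; they would have to be staged on disk (or restored from tape to disk) before impdp could read them. With the original exp utility, a classic Unix workaround is a named pipe feeding the tape drive; a sketch, with the device name and credentials as placeholders:

mkfifo /tmp/exp_pipe
dd if=/tmp/exp_pipe of=/dev/rmt/0 bs=1024k &   # background: stream the pipe to tape
exp system/manager full=y file=/tmp/exp_pipe   # exp writes into the pipe

The data stays accessible later by reversing the flow: dd reads the tape into a pipe that imp uses as its FILE= argument.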

View 4 Replies View Related

Server Utilities :: Exporting Database Schema / Tables Without Data

Sep 22, 2010

I'm trying to export a relatively large database, but it's a bit more complicated than that. For one schema I need a full export / import (data included).

For another 10 schemas I need them empty, with the exception of a table in some of them which needs to be exported / imported with all its data. Is it possible to do this with the Data Pump utilities (expdp, impdp)?

Afterwards I will be running some scripts to populate the DB instance with critical data / metadata.
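
This looks feasible with Data Pump in a few passes: a normal schema export for the one schema that keeps its data, a CONTENT=METADATA_ONLY export for the ten schemas that must arrive empty, and a table-mode export for the tables that keep their rows. A sketch, with all schema and table names as placeholders:

expdp system/*** schemas=APP_MAIN directory=DP_DIR dumpfile=app_main.dmp
expdp system/*** schemas=S1,S2,S3 content=metadata_only directory=DP_DIR dumpfile=empty_schemas.dmp
expdp system/*** tables=S2.KEEP_TAB,S3.KEEP_TAB directory=DP_DIR dumpfile=keep_tabs.dmp

On the import side the corresponding impdp runs, with TABLE_EXISTS_ACTION=APPEND for the table-mode dump if the empty tables were already created by the metadata pass.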

View 1 Replies View Related

Server Utilities :: Exporting Database?

May 31, 2010

I am trying to export my database, but whenever I try to log in it gives ORA-01017: invalid username/password.

If I log in as SYSTEM with the MANAGER password it accepts the login, but I am still not able to export my whole database.
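
Setting the ORA-01017 problem aside, exporting the whole database from an ordinary account needs the EXP_FULL_DATABASE role (SYSTEM already has it). If a dedicated account is wanted, a sketch with placeholder names:

SQL> grant exp_full_database to scott;

exp scott/tiger full=y file=full.dmp log=full.log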

View 9 Replies View Related

Server Utilities :: Exporting Table Structure From A Particular DB?

Nov 9, 2010

I'd like to know the process of exporting only the table structure of a database, without its actual content.

Note: I don't know how many tables are present in the DB.
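
With classic exp, ROWS=N in full mode exports the DDL for every object without needing the table names; Data Pump's equivalent is CONTENT=METADATA_ONLY. A sketch:

exp system/*** full=y rows=n file=structure.dmp log=structure.log
expdp system/*** full=y content=metadata_only directory=DP_DIR dumpfile=structure.dmp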

View 1 Replies View Related

Server Utilities :: Exporting Metadata Backup

Jul 19, 2011

I am using a schema for which I need to take a metadata-only backup, using the exp utility:

exp shan/shan@shan file=/backup_dump/shan.dmp log=/backup_dump/shan.log owner=shan rows=n

But it returns the errors below. I only have access to user shan; our client cannot allow me to use SYSTEM, SYSDBA, or any other account with the required grants or privileges. So is there any way to take a metadata backup of user shan from user shan?

EXP-00008: ORACLE error 942 encountered
ORA-00942: table or view does not exist
EXP-00024: Export views not installed, please notify your DBA
EXP-00000: Export terminated unsuccessfully
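
EXP-00024 usually means the export data-dictionary views were never installed in that database, and no grant on shan will change that; someone with SYSDBA access has to run catexp.sql once. Shown for reference, since it does require a privileged login (the path is the standard one under ORACLE_HOME):

SQL> connect / as sysdba
SQL> @?/rdbms/admin/catexp.sql

After that, the exp command above should work without any extra privileges, since a user may always export their own schema.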

View 3 Replies View Related

Server Utilities :: Exporting A Table That Is 3 GB In Size

Mar 22, 2011

I am exporting a table that is 3 GB in size and also partitioned, with the NOCOMPRESS option specified.

Now, when I export it with the COMPRESS=N option of the exp utility, it should take 3 GB on the target server. But will exporting it with COMPRESS=Y save some storage during import, or does the NOCOMPRESS option on the partitions mean the exp COMPRESS=Y option has no effect, so that it will take 3 GB in both cases?

Is it true that whether you specify COMPRESS=N or COMPRESS=Y during export does not matter, and the size will always be 3 GB after import?
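
COMPRESS=Y does not compress the dump or the data; it only rewrites the INITIAL extent clause in the generated DDL so the segment is created as one large initial extent. The row data itself occupies roughly the same space after import either way. One way to verify is to compare segment sizes after each import; a sketch with placeholder names:

select segment_name, sum(bytes)/1024/1024 as mb
  from dba_segments
 where owner = 'SCOTT' and segment_name = 'BIG_PART_TAB'
 group by segment_name;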

View 6 Replies View Related

Server Utilities :: Exporting From Physical Standby

Aug 3, 2012

I would like to export a few tables from a physical standby which is in read-only mode.

I have tried both the exp and expdp methods. I could successfully export and import the tables from the physical standby using exp; unfortunately expdp does not allow this from a read-only database.

Does this mean that we still have to use exp instead of expdp?

Note: I would expect a proper response, and no unwanted comments like "Contact Oracle Support", "Paste the entire command here", "Read the manuals", or "Why am I exporting from the standby and not from the primary?".
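
expdp fails on a read-only database because it must create its master table there. One documented workaround is to run expdp against a read-write database and pull the rows from the standby over a database link using NETWORK_LINK; a sketch, with the link name, TNS alias, and table names as placeholders:

-- on a read-write database:
CREATE DATABASE LINK stby_link CONNECT TO app_user IDENTIFIED BY pwd USING 'STBY_TNS';

expdp app_user/pwd network_link=stby_link tables=t1,t2 directory=DP_DIR dumpfile=stby_tabs.dmp logfile=stby_tabs.log

The master table then lives in the read-write database while the data is read from the standby.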

View 1 Replies View Related

Server Utilities :: Exporting Schema Using Consistent Parameter

Aug 5, 2012

I am taking an export using the CONSISTENT parameter. Theoretically I understand it; practically, I couldn't understand how it works.

For example:

I am updating the tab1 table under the sams user (the table has one lakh, i.e. 100,000, records) while the export is running, once with consistent=y and once with consistent=n. I mean:

exp sams/sams file=cons.dmp owner=sams consistent=y
exp sams/sams file=cons2.dmp owner=sams consistent=n

Then both files were imported into separate users (sam and san).
The updated info is not visible in either the sam or the san user.

I want to know practically how it works. I need a concrete example of the difference between using consistent=y and consistent=n.
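
A practical way to see the difference is to commit a change while the export is still running and then check whether the dump contains it; a sketch of such a test (the STATUS column is a placeholder):

# session 1: start a consistent export in the background
exp sams/sams owner=sams file=cons.dmp consistent=y &

# session 2: while the export is running, change and commit data
sqlplus -s sams/sams <<EOF
update tab1 set status = 'CHANGED';
commit;
EOF

With consistent=y the whole dump reflects the database as of the moment the export started, so the committed update is absent from cons.dmp. With consistent=n each table is read as of the moment exp reaches it, so tables read after the commit already contain the new values, and different tables can come from different points in time. If the export read tab1 before your update committed, the update will be missing from both dumps, which would explain what you saw.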

View 2 Replies View Related

Server Utilities :: Exporting / Importing Partitioned Table

Sep 2, 2013

I am trying to export a partition of a table and import it into another database. I get the error below when I try to import:

ORA-14400: inserted partition key does not map to any partition

If I export the table (for that particular partition) and import the table (after dropping it) in the destination, the partitions and sub-partitions are created without any problem.

The table is range partitioned and sub-partitioned by list. So I had to perform the operations below to retain the other data in the destination table:

1. Drop the existing partition
2. Create the partition and sub partition, same as source
3. Execute imp

In fact I had to perform step #2 because, if I split the partition instead, the sub-partition gets replicated in the new partition, which again throws the same error. Is there a better way of managing the partitions and sub-partitions in the destination with the exp/imp utilities, so that I need not perform steps #1 and #2 manually?
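
For reference, exp and imp accept a partition qualifier, so a single partition can be exported and imported into the existing table (names here are placeholders); the destination partitions must already map the incoming keys, which is why step #2 is unavoidable when the layouts differ:

exp scott/tiger tables=big_tab:p_2013q1 file=p_2013q1.dmp
imp scott/tiger tables=big_tab ignore=y file=p_2013q1.dmp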

View 11 Replies View Related

Server Utilities :: Exporting Schema Using Filesize Parameter?

Aug 6, 2012

Export / Import
===============

While exporting schemas, I couldn't get the dump files written to the exact location I specified; see the following query:

QUERY
=====

exp file=(ackupfile1.dmp, ackupfile2.dmp, ackupfile3.dmp) owner=(order,purchase) filesize=5m

I found those dump files in the home directory:

-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile1.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile2.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile3.dmp
[oracle@localhost ~]$ pwd
/home/oracle

And when I list the home directory contents:

-rw-r--r-- 1 oracle oinstall 72 Jun 20 21:17 afiedt.buf
drwxr-xr-x 3 oracle oinstall 4096 Jun 17 10:07 Desktop
-rw-r--r-- 1 oracle oinstall 71 Jun 19 20:42 ed.hup
drwxr-xr-x 2 oracle oinstall 4096 Aug 6 19:38 backup
-rw-r--r-- 1 oracle oinstall 2826240 Aug 6 19:39 expdat.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile1.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile2.dmp
-rw-r--r-- 1 oracle oinstall 5242880 Aug 6 19:38 expfile3.dmp

The dump files go to the home path even though I mentioned the appropriate location.
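
Classic exp has no DIRECTORY parameter: the FILE= names are resolved on the client against the current working directory, which is why the dumps landed in /home/oracle. Absolute paths (or a cd before running exp) put them where intended; a sketch using the backup directory from the listing:

exp file=(/home/oracle/backup/expfile1.dmp, /home/oracle/backup/expfile2.dmp, /home/oracle/backup/expfile3.dmp) owner=(order,purchase) filesize=5m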

View 7 Replies View Related

Server Utilities :: Syntax For Exporting Only Procedures Of Particular User

Mar 7, 2011

What is the syntax for exporting only the procedures of a particular user?
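
Classic exp has no object-type filter, so procedures alone cannot be singled out with it; Data Pump's INCLUDE parameter does exactly this, and DBMS_METADATA can spool the source as an alternative. Sketches, with the schema name as a placeholder:

expdp system/*** schemas=SCOTT include=PROCEDURE directory=DP_DIR dumpfile=procs.dmp

select dbms_metadata.get_ddl('PROCEDURE', object_name)
  from user_objects
 where object_type = 'PROCEDURE';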

View 1 Replies View Related

Server Utilities :: Impdp Takes Huge Time In Oracle11gR2

Feb 13, 2012

1) My database dump is about 4 GB in size, and was provided by the vendor.
2) The dump contains 364,949 objects in total, of which:

Table : 121316
LOB object : 121315
(Normal+LOB) indexes : 122317

3) Now, when I run the import as SYSTEM or another user, it hangs at the stage below for 70+ hours:

impdp ntest/ntest directory=test_dir dumpfile=JBLLIVE.31Jan2012.11.50AM.dmp remap_schema=JBLLIVE:NTEST logfile=ntest_10feb.log

Import: Release 11.2.0.1.0 - Production on Fri Feb 10 09:49:50 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "NTEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "NTEST"."SYS_IMPORT_FULL_01": ntest/******** directory=test_dir dumpfile=JBLLIVE.31Jan2012.11.50AM.dmp remap_schema=JBLLIVE:NTEST logfile=ntest_10feb.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NTEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
----
In this situation I observed the worker status and saw that some tables and some LOB objects, including LOB indexes, had been imported. The worker process does this in the background, but it does not show up in the import log file (I don't understand why it is not shown there). It imports one table, one LOB, one LOB index; then again one table, one LOB, one LOB index; and so on.

My observation is that it first inserts the data into the LOB segments and then into the normal table, and only when it starts inserting data into the normal table does that table appear in the import log file.

An example of our data types:

Objects :
===================================================
LOB_FD17_RGS_TSTCD2 LOB
FD17_RGS_VERSION TABLE

(Here I see that each table has one LOB segment; in this way 121,316 tables have 121,316 LOBs.)

SQL> desc FD17_RGS_VERSION
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 RECID                                              VARCHAR2(255)
 XMLRECORD                                          BLOB

Our observation is that inserting the BLOBs is perhaps the main cause of the slowness. Is there any patch, or any known bug, regarding BLOB/LOB objects in Oracle 11gR2?
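
To see what the workers are actually doing while the log seems stuck, the running job can be attached to from a second session, using the job name shown in the log above:

impdp ntest/ntest attach=SYS_IMPORT_FULL_01
Import> status

The same information is visible from SQL:

select owner_name, job_name, state from dba_datapump_jobs;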

View 6 Replies View Related

Server Utilities :: Primary Keys Are Not Exporting When Export Using EXP Command

Dec 27, 2011

I have taken a database backup using the exp command, and when I try to import it on another PC the foreign keys are not imported. It gives an error message saying there is no matching unique or primary key for the columns.

How will I take a backup that includes the primary keys?
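
For what it's worth, exp exports constraints by default (CONSTRAINTS=Y), so the usual culprits are importing child tables without their parent tables or a partial table list. Making the settings explicit, a sketch:

exp scott/tiger owner=scott constraints=y file=scott.dmp log=exp.log
imp scott/tiger file=scott.dmp full=y constraints=y log=imp.log

If only some tables are imported, the tables owning the referenced primary keys must be imported first (or in the same run) for the foreign keys to be created.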

View 7 Replies View Related

Server Utilities :: Export Hangs At Exporting Cluster Definitions?

Dec 22, 2010

Suddenly my exports hang at 'exporting cluster definitions'. I have been using this database for the last 4 years and it never caused a problem or hung at this stage. Here I'm pasting my screen output; it is my production DB.

[oracle1@wbh_as1 smbshare]$ exp wb/wb

Export: Release 9.2.0.1.0 - Production on Thu Dec 23 00:02:44 2010

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Enter array fetch buffer size: 4096 >

Export file: expdat.dmp > wb

(1)E(ntire database), (2)U(sers), or (3)T(ables): (2)U >

Export grants (yes/no): yes >

Export table data (yes/no): yes >

Compress extents (yes/no): yes >

Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user WB
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user WB
About to export WB's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions

View 11 Replies View Related

Server Utilities :: Impdp - Package Body Import Taking Huge Time?

Sep 13, 2012

I am trying to import a 4 GB dump into Oracle 11gR2. It contains around 9,000+ package bodies, which take far longer than the other objects (about 8 to 12 hours), and the import also demands a lot of system space (roughly 10 GB).

I have tried both parallel and non-parallel runs. How can I improve the speed of the package body import? (One idea is sketched after the details below.)

Details about the schema and import - number of objects in the schema:

SQL> select object_type,count(1) from user_objects GROUP BY ROLLUP( object_type);

OBJECT_TYPE COUNT(1)
------------------- ----------
FUNCTION 248
INDEX 5161
JAVA CLASS 471
JAVA RESOURCE 1
JAVA SOURCE 16
LIBRARY 1

ORA-00933: SQL command not properly ended
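
One approach worth testing (an assumption, not a guaranteed fix) is to keep the package bodies out of the long main pass, load them separately, and recompile at the end with the standard utlrp.sql script:

impdp system/*** schemas=APP exclude=PACKAGE_BODY directory=DP_DIR dumpfile=app.dmp logfile=pass1.log
impdp system/*** schemas=APP include=PACKAGE_BODY directory=DP_DIR dumpfile=app.dmp logfile=pass2.log

SQL> connect / as sysdba
SQL> @?/rdbms/admin/utlrp.sql

At minimum this separates the slow step so it can be timed and tuned on its own.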

View 3 Replies View Related

Inserting Data In New Column Where Table Has Huge Data

May 26, 2013

I am trying to add a new column to a table and populate it with data from another column of the same table:

alter table POSITION add INT_MK_DATA_ID number(10,0) null;
update POSITION set INT_MK_DATA_ID = INST_MARKET_DATA_ID;
commit;

As there are a huge number of records in the POSITION table, it is taking forever to execute this update.
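
A single UPDATE of that size generates an enormous amount of undo. A common alternative is to update in batches and commit between them; a minimal sketch, assuming INST_MARKET_DATA_ID is never NULL (otherwise the IS NULL test below would never converge):

BEGIN
  LOOP
    UPDATE position
       SET int_mk_data_id = inst_market_data_id
     WHERE int_mk_data_id IS NULL
       AND ROWNUM <= 50000;           -- one batch per iteration
    EXIT WHEN SQL%ROWCOUNT = 0;       -- nothing left to update
    COMMIT;                           -- release undo between batches
  END LOOP;
END;
/

Each pass re-scans for remaining NULLs, so on a very large table a CTAS into a new table with the extra column computed, followed by a rename, can be faster still.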

View 1 Replies View Related

PL/SQL :: Queries To Get Data In Batches From A Huge Data Table

Jan 3, 2013

Here is my problem: I need to create some files in my own format (say 5,000 records each) from a huge table (it may contain 5 million records), and I want this creation to be multi-threaded.

So how can I form queries efficiently to fetch records like 1..5000, then 5001..10000, and so on? I can form something like select * from table where rownum < 5000 and not exists (already fetched records), but that is not efficient.
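
An analytic row number gives each thread a stable, non-overlapping slice without re-scanning what was already fetched; a sketch, with the ordering key as a placeholder:

select *
  from (select t.*,
               row_number() over (order by t.id) as rn   -- id: any stable unique key
          from big_table t)
 where rn between :lo and :hi;   -- thread 1: 1..5000, thread 2: 5001..10000, ...

On 11gR2 and later, DBMS_PARALLEL_EXECUTE can also carve the table into ROWID chunks for exactly this kind of parallel batch work.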

View 5 Replies View Related

Large Amount Of Data

Aug 6, 2013

I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not enough memory in the buffer cache, so how will that work? Will the transaction succeed or fail? And I have another doubt: does Oracle read data only from memory, or can it also read directly from disk?

View 11 Replies View Related

Replication :: How Much Amount Of Data Over Network

Mar 1, 2011

I have implemented multi-master replication between two servers.

How much data is transferred over the network? How do I calculate this value?
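
There is no single "replication traffic" counter, but v$sysstat tracks the bytes a database sends and receives over database links, which is the path multi-master replication uses; sampling the values at the start and end of an interval gives the volume for that interval. A sketch:

select name, value
  from v$sysstat
 where name in ('bytes sent via SQL*Net to dblink',
                'bytes received via SQL*Net from dblink');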

View 2 Replies View Related

Modifying Fields With Huge Data

Aug 26, 2011

I have 2 questions; because they can be inter-related, I am posting them in a single post. These queries are related to Oracle (PL/SQL).

1. I am trying to increase the size of a field in a table which has almost 2 million records. The alteration query runs for almost an hour and then rolls back; I wonder if there is a better way of doing it.

2. I have modified the size of a field in a table from VARCHAR2(10) to VARCHAR2(20). Now, when I try to roll back the modification, it does not let me change the size from VARCHAR2(20) back to VARCHAR2(10), even though no data has been inserted after the modification.
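
On the second point, Oracle only refuses to shrink a VARCHAR2 when some existing value would no longer fit, raising ORA-01441; if every value fits (or the column is empty) the shrink succeeds. A quick illustration with placeholder names:

alter table t modify (col varchar2(20));   -- widening is always allowed
alter table t modify (col varchar2(10));   -- ORA-01441 if any existing value is longer than 10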

View 1 Replies View Related

SQL & PL/SQL :: Reg Data Exporting To Excel From Oracle?

Dec 8, 2010

We now have 100+ SQL queries, and we are making all of those queries into procedures. After that we want to schedule those procedures and export the data into an Excel file.

So we are planning to use UTL_FILE to export the data to Excel. We may have 30,000 or more rows. Will UTL_FILE be able to handle all these rows, and will any performance issue come up?
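
UTL_FILE handles 30,000 rows comfortably; the usual pattern is to write CSV, which Excel opens directly, into a directory object. A minimal sketch, with the directory, file, and query as placeholders:

DECLARE
  f utl_file.file_type;
BEGIN
  f := utl_file.fopen('EXPORT_DIR', 'report.csv', 'w', 32767);  -- EXPORT_DIR: a CREATE DIRECTORY object
  utl_file.put_line(f, 'EMPNO,ENAME,SAL');                      -- header row
  FOR r IN (SELECT empno, ename, sal FROM emp) LOOP
    utl_file.put_line(f, r.empno || ',' || r.ename || ',' || r.sal);
  END LOOP;
  utl_file.fclose(f);
END;
/

Performance is rarely an issue at this volume; quoting fields that may contain commas is the main thing to watch.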

View 4 Replies View Related

Archiving And Purging Data From Huge Tables?

Apr 22, 2013

I'm currently working on a project to archive old data and then purge that same data from the main table.

Here is a detail description:

There are around 50-odd tables from which I would need to archive the old data (matching certain filter conditions, not date-based). That means I have to store the data in a temp table; once it is stored there, I have to delete those rows from the main table. The temp table will later be exported and stored in an archive database (a separate database).

These tables are huge. One of them is actually 250 GB in size, and all of these tables have many indexes built, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of 540 million rows in total. On this table alone there are 50 bitmap indexes and 2 normal indexes. It is partitioned on a date column, but that date column is not useful in identifying the old data. There are around 20 tables quite similar in size to this one; the rest are a little smaller.

We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle this activity? Most importantly, we should be able to complete it within the specified 48-hour window.

The solution what we are now thinking of is:

1. Create the temp table --- CREATE TABLE tmp_tbl AS SELECT * FROM main_table WHERE <<conditions identifying old data>>

2. Once the temp table is created, make a copy of the definitions of the indexes that exist on the main table, and then drop the indexes.

3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows.

4. Once the bulk delete is finished, recreate the indexes on the main table using the definitions saved in the earlier step.

Our main worry is step #4. Considering the size of these tables and the number of indexes to be built, we are not sure how long the index re-creation will run for each table (one alternative is sketched below).

Depending on the possibilities, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then we are not sure whether we will be able to pull off this activity.

The database we are using is Oracle 10g.
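
For steps #2-#4, one variation to consider: marking each index UNUSABLE instead of dropping it keeps its definition in the dictionary (with skip_unusable_indexes=true, the 10g default, DML proceeds while they are unusable), and the rebuilds can run NOLOGGING and in parallel. A hedged sketch for one index, with placeholder names; the rebuild timings still have to be proven on a test copy:

alter index big_tbl_bix01 unusable;                     -- before the bulk delete
-- ... bulk delete and commit cycles run here ...
alter index big_tbl_bix01 rebuild nologging parallel 8;
alter index big_tbl_bix01 noparallel;                   -- reset the setting afterwards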

View 1 Replies View Related

Datatype Modification For Table With Huge Data

Aug 11, 2011

Below is my requirement.

I need to change the precision of a column in an existing table. Statistics about the table:

* The table has over 130 columns
* More than 300 million records
* The column to modify is #121, which has data
* No primary key is defined

Since the column has data, it is not possible to modify with a simple Alter.

Second option: create a temp column in the same table, update it from the original column, set the original to NULL, alter the original, update it back from the temp, and drop the temp column. This approach is very expensive and time-consuming.

Also, the column ID needs to be preserved as #121.
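
Given there is no primary key, one avenue that preserves the column position without a visible temp column is online redefinition keyed by ROWID; a hedged sketch with placeholder names (dependent index/grant copying via COPY_TABLE_DEPENDENTS is omitted for brevity):

-- interim table: identical shape, but column #121 at the new precision
create table big_tbl_int as select * from big_tbl where 1 = 0;
alter table big_tbl_int modify (col121 number(12,4));

begin
  dbms_redefinition.can_redef_table('APP', 'BIG_TBL', dbms_redefinition.cons_use_rowid);
  dbms_redefinition.start_redef_table('APP', 'BIG_TBL', 'BIG_TBL_INT', NULL, dbms_redefinition.cons_use_rowid);
  dbms_redefinition.finish_redef_table('APP', 'BIG_TBL', 'BIG_TBL_INT');
end;
/

The copy still moves 300 million rows, but the table largely stays available for DML while it happens, unlike the in-place temp-column approach.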

View 3 Replies View Related

Bulk Deletes Of Data From Huge Table

Jun 13, 2013

I am trying to delete 3 million records from a huge table which contains 3 billion records.

This is hurting DB performance and halting other activities of my users. Is there any easy way to delete such data fast? I have tried a FORALL delete, but even that takes a lot of time.
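
Three million out of three billion is a tiny fraction, so the priority is a selective access path for the doomed rows plus modest transaction sizes; a sketch with a placeholder predicate (an index on the filter column matters more than the delete mechanism):

BEGIN
  LOOP
    DELETE FROM big_tbl
     WHERE created_dt < DATE '2010-01-01'   -- placeholder filter selecting the 3M rows
       AND ROWNUM <= 100000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;                                 -- keeps undo and lock time per batch small
  END LOOP;
END;
/

Running it off-peak keeps the impact on other users down.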

View 5 Replies View Related

SQL & PL/SQL :: Group By Gives Wrong Value In Huge Data Records?

Jun 18, 2012

I have a table which contains huge data, around 12 lakh (1.2 million) records. When I use the SUM function grouped by accountname and docdate, it gives a wrong value. Once I restart the server it gives the correct value; for one or two days it gives the correct value, and after that I get the same problem again. If I restart again, it gives the correct value.

I am using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, 64-bit, on a Linux server.
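
Intermittent wrong results from GROUP BY that disappear after a restart match known 10.2.0.1 bugs in the then-new hash GROUP BY aggregation; upgrading to a later 10.2 patch set is the real fix. A commonly cited session-level workaround is the underscore parameter below, but treat this as an assumption to confirm with Oracle Support before relying on it:

alter session set "_gby_hash_aggregation_enabled" = false;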

View 6 Replies View Related

Exporting Data Objects Into SQL File Using Exp (export)?

Oct 31, 2012

I need to take a backup of only the schema objects, without data, using exp (export) into a .sql file, and then run that .sql file on the target, because I don't have exp/imp privileges on the target database.

NOTE: I am using only export (exp), not Data Pump.
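
exp itself writes only a binary dump, but imp can turn that dump into DDL text without touching the target database: SHOW=Y prints the statements into the log file, and INDEXFILE writes table/index DDL into a script. A sketch:

exp scott/tiger owner=scott rows=n file=meta.dmp
imp scott/tiger file=meta.dmp full=y show=y log=meta.sql
imp scott/tiger file=meta.dmp full=y indexfile=meta_idx.sql

The SHOW=Y output needs light cleanup (line wraps and quoting) before it runs as a script; DBMS_METADATA.GET_DDL is the cleaner source if querying the source database directly is an option.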

View 1 Replies View Related

PL/SQL :: Oracle Database 11g - Add Partition Based On Amount Of Data To Be Populated

Oct 30, 2013

I'm using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production and TNS for Linux: Version 11.2.0.3.0 - Production. The requirement is to create a script to add a LIST partition to selected tables in a schema (the tables have no data and are not partitioned). There are about 300 such tables (the number can vary), and their names are maintained in a separate table. Example - existing table:

CREATE TABLE test_part (
  id           NUMBER(2),
  name         VARCHAR2(20),
  audit_userid NUMBER(9));

Expected table:

CREATE TABLE test_part (
  id           NUMBER(2),
  name         VARCHAR2(20),
  audit_userid NUMBER(9))
PARTITION BY LIST (audit_userid)
  (PARTITION p1_audit_userid VALUES (1));

The ultimate goal is to add more partitions later, based on the amount of data to be populated.
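
Since the tables are empty, each one can simply be rebuilt as a partitioned copy in a loop driven by the table that holds the 300 names; a hedged sketch (PART_CANDIDATES is a placeholder for that driving table, the names must leave room for the _OLD suffix, and indexes, grants, and constraints would still need to be carried over):

begin
  for t in (select table_name from part_candidates) loop
    execute immediate 'alter table ' || t.table_name ||
                      ' rename to ' || t.table_name || '_OLD';
    execute immediate 'create table ' || t.table_name ||
                      ' partition by list (audit_userid)' ||
                      ' (partition p1_audit_userid values (1))' ||
                      ' as select * from ' || t.table_name || '_OLD';
  end loop;
end;
/

Adding further partitions later is then a plain ALTER TABLE ... ADD PARTITION per list value.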

View 1 Replies View Related

Exporting Data From Tables To External Text File?

Apr 29, 2008

Actually, what I am trying to do is extract data from tables and place it in an external text file. I wrote the following code:

FUNCTION

create or replace
FUNCTION dump_data ( p_query     in varchar2,
                     p_separator in varchar2,

[Code].....
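
For comparison, here is a self-contained version of such a function built on DBMS_SQL, which handles any query without knowing its columns in advance (the directory parameter must name an existing Oracle DIRECTORY object; all names are placeholders):

create or replace function dump_data ( p_query     in varchar2,
                                       p_separator in varchar2 default ',',
                                       p_dir       in varchar2,
                                       p_filename  in varchar2 )
  return number
is
  l_cursor  integer := dbms_sql.open_cursor;
  l_colcnt  number;
  l_desc    dbms_sql.desc_tab;
  l_value   varchar2(4000);
  l_line    varchar2(32767);
  l_file    utl_file.file_type;
  l_rows    number := 0;
  l_status  integer;
begin
  l_file := utl_file.fopen(p_dir, p_filename, 'w', 32767);
  dbms_sql.parse(l_cursor, p_query, dbms_sql.native);
  dbms_sql.describe_columns(l_cursor, l_colcnt, l_desc);
  for i in 1 .. l_colcnt loop
    dbms_sql.define_column(l_cursor, i, l_value, 4000);  -- fetch every column as text
  end loop;
  l_status := dbms_sql.execute(l_cursor);
  while dbms_sql.fetch_rows(l_cursor) > 0 loop
    l_line := null;
    for i in 1 .. l_colcnt loop
      dbms_sql.column_value(l_cursor, i, l_value);
      l_line := l_line || case when i > 1 then p_separator end || l_value;
    end loop;
    utl_file.put_line(l_file, l_line);
    l_rows := l_rows + 1;
  end loop;
  dbms_sql.close_cursor(l_cursor);
  utl_file.fclose(l_file);
  return l_rows;                                         -- rows written
end;
/

A call such as select dump_data('select * from emp', ',', 'EXPORT_DIR', 'emp.txt') from dual would then write the whole table as delimited text.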

View 3 Replies View Related






