Server Utilities :: Total Estimation Using BLOCKS Method - Datapump
Apr 5, 2013
I have noticed that the Data Pump estimate is 9.902 GB, but when I check the size of the .dmp file it shows only 1.44 GB.
Export: Release 11.2.0.1.0 - Production on Fri Apr 5 02:00:05 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** dumpfile=expdp_LVGITRN_30_24_050413.dmp directory=DP_DIR logfile=expdp_LVGITRN_30_24_050413.log full=y exclude=statistics
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 9.902 GB
Why does Data Pump estimate so much more than the actual dump size?
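The BLOCKS estimate is derived from the blocks allocated to the segments, not from the rows actually written to the dump file, so tables carrying a lot of allocated-but-unused space can show an estimate far above the final file size. A minimal sketch for comparing the two figures, assuming DBA views are accessible:
SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) AS allocated_gb
FROM   dba_segments
WHERE  segment_type LIKE 'TABLE%';
Re-running with ESTIMATE=STATISTICS (and ESTIMATE_ONLY=Y so no dump is written) usually gives a figure closer to the real row volume, provided optimizer statistics are reasonably fresh:
expdp system/******** full=y estimate_only=y estimate=statistics directory=DP_DIR logfile=estimate.log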
View 8 Replies
Jun 15, 2011
I am trying to export the database using the Data Pump API (DBMS_DATAPUMP). I want to export the entire database excluding some schemas.
My question is: how do I exclude some schemas using the Data Pump API?
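A minimal sketch of one approach, assuming a user with the EXP_FULL_DATABASE role and a directory object DP_DIR: run a SCHEMA-mode job and apply a SCHEMA_EXPR filter with NOT IN, so every schema except the listed ones is exported (verify the filter semantics on your release):
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.add_file(handle => h, filename => 'all_but_some.dmp', directory => 'DP_DIR');
  DBMS_DATAPUMP.add_file(handle => h, filename => 'all_but_some.log', directory => 'DP_DIR',
                         filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
  -- keep every schema except the ones listed here
  DBMS_DATAPUMP.metadata_filter(handle => h, name => 'SCHEMA_EXPR',
                                value  => 'NOT IN (''SYS'',''SYSTEM'',''OUTLN'')');
  DBMS_DATAPUMP.start_job(h);
  DBMS_DATAPUMP.detach(h);
END;
/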
View 1 Replies
May 11, 2011
I got an assignment to create an Oracle 11g database. I will be provided with a full Data Pump export dump of an Oracle 10g database on Linux, and I need to import it into the 11g database on Windows. I have no information about the tablespaces, users, etc., so I have created the database with the SYSTEM, SYSAUX, UNDOTBS, TEMP and USERS tablespaces.
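A hedged sketch of how the dump can be inspected first and then mapped onto the tablespaces that do exist; the dump file, directory and tablespace names are placeholders:
impdp system/***** directory=DP_DIR dumpfile=prod10g.dmp full=y sqlfile=ddl_preview.sql
impdp system/***** directory=DP_DIR dumpfile=prod10g.dmp full=y remap_tablespace=OLD_DATA_TBS:USERS
The first command writes the DDL contained in the dump to ddl_preview.sql without importing anything, which reveals the tablespaces and users the dump expects; REMAP_TABLESPACE (which can be repeated) then redirects segments into the tablespaces that were created.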
View 28 Replies
Jan 2, 2012
As part of our backup we used to export the production data every day using the original export utility, but from 11g the original export utility is desupported and Data Pump does not support XML objects. Is there any other way to export the full database, or any option to export XML objects using Data Pump?
View 2 Replies
Mar 31, 2010
We have three instances in the RAC. When I create a directory, is there a default instance that it gets created on? When I execute a Data Pump export on instance 1 it works, but on instances 2 and 3 it fails with errors relating to the directory.
Error -
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Is my only option to use shared storage space?
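A directory object lives in the data dictionary and is visible from every instance; what differs per node is whether the operating-system path behind it exists and is writable there. A small sketch, with an example path, of checking the definition and pointing it at storage every node can reach (a cluster file system, ACFS or an NFS mount):
SELECT directory_name, directory_path FROM dba_directories;

CREATE OR REPLACE DIRECTORY dp_dir AS '/shared/export';
GRANT READ, WRITE ON DIRECTORY dp_dir TO system;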
View 6 Replies
Jan 13, 2012
Currently we are using the "exp" and "imp" utilities to unload from production and load into the Dev server. While importing, we follow the steps below:
(1) Load only data [by specifying INDEXES=N in the par file]
(2) Unlock statistics
(3) Load indexes, other objects [by specifying ROWS=N]
After these steps the data, indexes and other objects are all loaded. To verify the indexes, we check DBA_INDEXES.
DBA_INDEXES :
-------------
OWNER INDEX_NAME TABLE_NAME STATUS LAST_ANALYZED
----- ---------- ---------- ------ -------------
MYSCH CP_INDEX_1 CP_TABLE_1 VALID 14/JAN/12
Question :-
(1) Does the imp utility rebuild the indexes while loading data, or does it simply take the rows from the dump and load them into the test system without building the indexes from scratch?
(2) I am trying to replace 'exp' and 'imp' with the Data Pump utilities, but I am confused about the parameters to be used.
(a) Can I load both data and metadata at the same time (using the CONTENT=ALL option)?
(b) I am planning to implement this in two steps :
first load only metadata using - CONTENT=METADATA_ONLY TABLE_EXISTS_ACTION=REPLACE
then, load data - CONTENT=DATA_ONLY.
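A minimal sketch of the two-step Data Pump run described above, with placeholder dump-file and directory names; note that with CONTENT=DATA_ONLY the only valid TABLE_EXISTS_ACTION values are APPEND and TRUNCATE:
impdp system/******** directory=DP_DIR dumpfile=prod.dmp content=METADATA_ONLY table_exists_action=REPLACE
impdp system/******** directory=DP_DIR dumpfile=prod.dmp content=DATA_ONLY table_exists_action=TRUNCATE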
View 1 Replies
Jan 3, 2012
I am trying to use the datapump tool to migrate a 10g db to 11g. Everything works fine except for the "nameless" check constraints.
View 7 Replies
Oct 25, 2010
I have a requirement to load .dmp files into existing staging tables, and there is a package that loads the ODS tables from staging. So I thought of using the DBMS_DATAPUMP utility to import the data from the .dmp files into the tables, and this needs to be automated.
--Create Directory
CREATE OR REPLACE DIRECTORY test_dir AS 'C:\Test';
--grant Access to the User
GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
--Script to import
DECLARE
l_dp_handle1 NUMBER;
BEGIN
l_dp_handle1 := dbms_datapump.OPEN(operation => 'IMPORT',
[code]...
Errors
ERROR at line 1:
ORA-31634: job already exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 938
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4590
ORA-06512: at line 4
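ORA-31634 usually means the master table of a previous run with the same (or default) job name still exists, for example after a job was killed rather than allowed to complete. A hedged sketch: check for leftover jobs, drop the orphaned master table if there is one, and give each run its own job name (the names below are examples only):
SELECT owner_name, job_name, state FROM dba_datapump_jobs;

-- if an old NOT RUNNING job is listed, its master table can be dropped, e.g.:
-- DROP TABLE scott.SYS_IMPORT_FULL_01 PURGE;

l_dp_handle1 := DBMS_DATAPUMP.open(operation => 'IMPORT',
                                   job_mode  => 'FULL',
                                   job_name  => 'ODS_LOAD_' || TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS'));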
View 6 Replies
Mar 18, 2013
I have a Datapump Export File which was created in Schema mode.
I have to import the tables into a new database where I have to use the REMAP_SCHEMA parameter.
Additionally, I would like to add a prefix to the table names.
For example:
original tablename: THE_TABLE
Name after import: IMP_THE_TABLE
Is there a way to add a prefix while using Datapump Import?
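There is no wildcard rename in Data Pump, but from 11g the impdp REMAP_TABLE parameter renames individual tables during import; a hedged sketch with placeholder schema and file names:
impdp system/***** directory=DP_DIR dumpfile=schema.dmp remap_schema=OLDUSER:NEWUSER remap_table=THE_TABLE:IMP_THE_TABLE
REMAP_TABLE can be repeated per table; for a long table list the entries can be generated with a query such as SELECT 'remap_table=' || table_name || ':IMP_' || table_name FROM dba_tables WHERE owner = 'OLDUSER', or the tables can simply be renamed with ALTER TABLE ... RENAME TO ... after the import.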
View 5 Replies
Feb 7, 2011
I'm trying to deploy the schema using the Data Pump API. The user from which the schema gets deployed has the CREATE USER privilege granted directly (not through a role), but I got an insufficient-privileges error.
Processing object type SCHEMA_EXPORT/USER
ORA-31685: Object type USER:"SCOTT1" failed due to insufficient privileges. Failing sql is:
CREATE USER "SCOTT1" IDENTIFIED BY VALUES '4EBB0DDE3C79FE47' DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP2" PROFILE "APP_PROFILE".
But the user gets created successfully when I run the CREATE statement manually. I created the user manually and ran the deployment procedure again, and got the error below for ROLE_GRANTS.
Processing object type SCHEMA_EXPORT/ROLE_GRANT
ORA-39083: Object type ROLE_GRANT failed to create with error:
ORA-01932: ADMIN option not granted for role 'EXP_FULL_DATABASE'
Failing sql is:
GRANT "EXP_FULL_DATABASE" TO "SCOTT1"
The user has EXP_FULL_DATABASE with the ADMIN option and IMP_FULL_DATABASE with the ADMIN option as direct privileges. Which privileges does the user need to deploy the schema successfully?
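Worth checking (a hedged note): privileges that arrive through roles are disabled inside definer's-rights stored procedures, so a grant that works in an interactive session can be missing when the same call runs from a deployment procedure. A small sketch for confirming exactly how the importing user holds its privileges; DEPLOY_USER is a hypothetical name:
SELECT grantee, granted_role, admin_option, default_role
FROM   dba_role_privs
WHERE  grantee = 'DEPLOY_USER';

SELECT grantee, privilege
FROM   dba_sys_privs
WHERE  grantee = 'DEPLOY_USER';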
View 1 Replies
Jul 31, 2011
I am using Oracle version 11.1.0.7.0 on HP-UX.
Which export/import method should I use when I need to export/import at table level, tablespace level, schema level or full database level?
1.Datapump or
2.Traditional export/ import ?
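For reference, a sketch of the Data Pump syntax at each of those levels (usernames, directory and file names are placeholders):
expdp scott/tiger directory=DP_DIR dumpfile=tabs.dmp tables=EMP,DEPT
expdp scott/tiger directory=DP_DIR dumpfile=sch.dmp schemas=SCOTT
expdp system/***** directory=DP_DIR dumpfile=tbs.dmp tablespaces=USERS
expdp system/***** directory=DP_DIR dumpfile=full.dmp full=y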
View 2 Replies
May 23, 2012
I am using Data Pump import over a database link to import an entire schema from another server, but it gives issues with constraints. I tried to first import only the metadata, then disable the constraints, import the data and re-enable the constraints, but in that case the temp tablespace keeps filling up and I run out of space. Is there any method to do a full import including constraints and indexes?
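A hedged sketch of a single-pass network import; SRC_DB, the schema names and the directory are placeholders. Constraint validation and index builds are what consume temp space, so pointing the importing user at a larger temporary tablespace, or importing indexes in a second pass with INCLUDE=INDEX, is the usual workaround:
impdp newuser/***** network_link=SRC_DB schemas=APPUSER remap_schema=APPUSER:NEWUSER exclude=statistics directory=DP_DIR logfile=imp_app.log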
View 7 Replies
Aug 23, 2012
Expdp directory=xxx.dmp dumpfile=aaa.dmp logfile=xxx.log FULL=Y
: :: : : :: : : : ;
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24.87 MB
Processing object type SCHEMA_EXPORT/USER
[code]...
Then my export hangs. I checked the alert log and found nothing, killed the job and reran it, but the result is the same. When I check the status it says EXECUTING.
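A small sketch of where to look when a Data Pump job appears to hang, assuming DBA views are available; the client can also re-attach to the job and ask for a detailed status (expdp system/******** attach=<job_name>, then STATUS at the prompt):
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs;

SELECT opname, target_desc, sofar, totalwork, time_remaining
FROM   v$session_longops
WHERE  opname LIKE 'SYS_EXPORT%' AND sofar <> totalwork;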
View 15 Replies
Jun 9, 2010
I have a problem with DBMS_DATAPUMP.metadata_filter. Let's suppose that I need to export a huge list of tables (a, b, c, d, e, f, g, h, i, ...), and that the list is dynamic, so I do NOT want to use
DBMS_DATAPUMP.metadata_filter (handle => h1,
NAME => 'NAME_EXPR',
VALUE => 'IN (''a'', ''b'', ...)',
object_type => NULL);
In my_export_table there is the list:
CREATE TABLE my_export_table
(
EXPORT_OBJECT_NAME VARCHAR2(50 BYTE)
)
Now I'm trying to use this form:
DBMS_DATAPUMP.metadata_filter (handle => h1,
NAME => 'NAME_EXPR',
VALUE => 'IN (SELECT a.export_object_name FROM my_export_table a, user_objects b WHERE a.export_object_name = b.object_name AND b.object_type = ''TABLE'')',
object_type => NULL
);
but it results in an error.
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-00942: table or view does not exist
[code]...
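The filter expression is evaluated inside the Data Pump worker, not in the session that calls the API, so an unqualified reference to my_export_table typically raises ORA-00942 there. Two hedged workarounds: schema-qualify the table in the expression (e.g. VALUE => 'IN (SELECT export_object_name FROM scott.my_export_table)'), or build a literal IN-list first, as in this sketch (LISTAGG needs 11.2; on older releases a simple loop works, and the list must stay under the VARCHAR2 limit):
DECLARE
  h      NUMBER;
  v_list VARCHAR2(32767);
BEGIN
  SELECT LISTAGG('''' || export_object_name || '''', ',')
           WITHIN GROUP (ORDER BY export_object_name)
    INTO v_list
    FROM my_export_table;

  h := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'TABLE');
  DBMS_DATAPUMP.add_file(handle => h, filename => 'tabs.dmp', directory => 'DP_DIR');
  DBMS_DATAPUMP.metadata_filter(handle => h,
                                name   => 'NAME_EXPR',
                                value  => 'IN (' || v_list || ')');
  DBMS_DATAPUMP.start_job(h);
  DBMS_DATAPUMP.detach(h);
END;
/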
View 1 Replies
Oct 12, 2011
We have a database running Oracle 10.2.0.4 on Solaris 5.10 [SUN SPARC v240] where the assigned TEMP tablespace size looks quite huge (8 GB!). There are not many SQL queries being run on the database, but what can I check or investigate to be sure how much TEMP space is actually required for the application to run OK?
Note: I have seen that the previous DBAs set AUTOEXTEND = ON for the TEMP datafiles as well.
Separate question (but linked to the TEMP tablespace):
Even if the TEMP datafiles are created with AUTOEXTEND = ON, sometimes I am seeing 'ORA-01652: unable to extend temp segment by 128'.
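A small sketch for sizing TEMP from observed usage, assuming the default 8K block size for the temporary tablespace (adjust the multiplier otherwise): v$sort_segment keeps the peak usage since instance startup, and v$tempseg_usage shows who is consuming temp space right now.
SELECT tablespace_name,
       total_blocks    * 8192/1024/1024 AS total_mb,
       used_blocks     * 8192/1024/1024 AS used_mb,
       max_used_blocks * 8192/1024/1024 AS peak_used_mb
FROM   v$sort_segment;

SELECT s.sid, s.username, u.tablespace, u.segtype, u.blocks
FROM   v$tempseg_usage u
JOIN   v$session s ON s.saddr = u.session_addr;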
View 4 Replies
Jul 12, 2011
How can I get the total time taken when exporting a table?
for example:
exp userid=test/test file=d:\test.dmp tables=(tb_test)
View 4 Replies
Jun 12, 2012
After importing my dump, I noticed that the ARGUMENT$ segment takes more than 9 GB of my SYSTEM tablespace. I believe the ARGUMENT$ table is used only to store procedure/package parameter details, but I am not sure why it has taken so much space.
Is there any way we can reduce the SYSTEM tablespace, given the details below?
Import Details:
--------------
1) Imported using IMP DP. List of parameters used are userid, logfile, dumpfile, directory, job_name and remap_schema.
2) Dump file size is 3GB
3) The list below shows the number of objects imported from my dump.
OBJECT_TYPE COUNT(1)
------------------- ----------
DATABASE LINK 1
FUNCTION 246
INDEX 4742
[code]...
4) The list below shows the amount of space occupied by the segments in SYSTEM.
col owner form a5 word wrap
col segment_name form a15 word wrap
col segment_type form a15 word wrap
select owner,segment_name,segment_type
,bytes/(1024*1024) size_m
[code]...
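ARGUMENT$ holds one row for every argument of every procedure, function and packaged subprogram (including overloads), so schemas with very large package specifications can make it grow noticeably. A hedged sketch for confirming how much of SYSTEM it really occupies; the index names shown are the usual SYS names but may differ by version:
SELECT segment_name, segment_type, ROUND(bytes/1024/1024) AS mb
FROM   dba_segments
WHERE  owner = 'SYS'
AND    segment_name IN ('ARGUMENT$', 'I_ARGUMENT1', 'I_ARGUMENT2')
ORDER  BY bytes DESC;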
View 4 Replies
Mar 30, 2007
I'm taking an export dump using expdp of some schemas whose total size is 300 GB. This is the par file:
DIRECTORY=expdp
FILESIZE=32212254720
DUMPFILE=expdp_schema01.dmp,expdp_schema02.dmp,expdp_schema03.dmp,expdp_schema04.dmp,expdp_schema05.dmp,expdp_schema06.dmp,expdp_schema07.dmp,expdp_schema08.dmp,expdp_schema09.dmp,expdp_schema10.dmp,expdp_schema11.dmp,expdp_schema12.dmp,expdp_schema13.d
[code]....
Here the biggest schema is 250 GB and the total size of all the schemas is 300 GB. The location where I am taking the dump has 350 GB of space, but even then the expdp failed with:
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Why did it fail, and how can I restart it and make sure it runs successfully without error?
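ORA-39095 is typically raised when FILESIZE is set and every dump file named in DUMPFILE is already full, so the job has nowhere left to write. A hedged sketch of a par file using the %U substitution variable, which lets Data Pump create additional files (up to 99 per template) as needed:
DIRECTORY=expdp
FILESIZE=32212254720
DUMPFILE=expdp_schema%U.dmp
LOGFILE=expdp_schemas.log
A job stopped by this error can usually be resumed instead of rerun: re-attach with expdp system/******** attach=<job_name>, add files with ADD_FILE if needed, then CONTINUE_CLIENT.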
View 4 Replies
Sep 9, 2012
While increasing the tablespace I am getting the error below. How should I handle this?
SQL> set lin 300
SQL> col TABLESPACE_NAME for a25
SQL> col FILE_NAME for a65
SQL> select TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE,sum(BYTES/1024/1024) MB
2 from dba_data_files where TABLESPACE_NAME='SYSAUX' group by TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE order by sum(BYTES/1024/1024) DESC,file_name;
TABLESPACE_NAME FILE_ID FILE_NAME AUT MB
------------------------- ---------- ----------------------------------------------------------------- --- ----------
SYSAUX 3 /ora2/oradata/dbname/sysaux_01.dbf NO 300
SQL> Alter database datafile 3 RESIZE 60000M;
Alter database datafile 3 RESIZE 60000M
*
ERROR at line 1: ORA-01144: File size (7680000 blocks) exceeds maximum of 4194303 block
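With an 8K block size a smallfile datafile cannot exceed 4194303 blocks (roughly 32 GB), so a 60000M resize is rejected. A hedged sketch of the usual alternatives: stay under the per-file limit and add another datafile, or (for new tablespaces) use a bigfile tablespace; the path below is an example:
ALTER DATABASE DATAFILE 3 RESIZE 30000M;
ALTER TABLESPACE SYSAUX ADD DATAFILE '/ora2/oradata/dbname/sysaux_02.dbf' SIZE 10000M AUTOEXTEND ON MAXSIZE 30000M;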
View 3 Replies
Sep 27, 2011
I have two Oracle servers: one with SuSE Linux (Oracle 10.2.0.4.0) and one with Windows 2003 Server x64 (Oracle 11.2.0.1.0).
I made a mistake by installing the 32-bit Oracle 11 on the x64 server. Nevertheless it worked for about half a year; then my Data Pump backups stopped working. I reduced SGA_TARGET to 1024M and Data Pump works again. Now I want to retire the Windows server and import the Data Pump schemas into the Linux server.
Exp: expdp SCHEMA1/****@db1 DIRECTORY=dmpdir DUMPFILE=SCHEMA1_EXPDAT.DMP LOGFILE=SCHEMA1_EXP.LOG VERSION=10.2.0.3 REUSE_DUMPFILES=YES
Imp: impdp SCHEMA1/**** DIRECTORY=dmpdir DUMPFILE=SCHEMA1_EXPDAT.DMP LOGFILE=SCHEMA1_IMP.LOG
Log:
..importing SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
..importing SCHEMA_EXPORT/SEQUENCE/SEQUENCE
[code]...
View 1 Replies
Oct 22, 2010
We have been suffering from very bad application response for the last few days. When I tried to drill down into where the actual contention is, I found there may be contention on data blocks, which may be a prime reason for the degraded performance. Below I am pasting the actual stats gathered from v$waitstat. I have gone through some AskTom threads and found that there may be a problem with freelists or segment space management. My data tablespace uses SEGMENT SPACE MANAGEMENT = MANUAL.
My main question is
1) Should I increase the FREELISTS value (right now it is 1)?
2) Or should I move to SEGMENT SPACE MANAGEMENT = AUTO?
SQL> select * from v$waitstat;
CLASS COUNT TIME
------------------ ---------- ----------
data block 2022 4052
sort block 0 0
save undo block 0 0
segment header 1 1
save undo header 0 0
free list 0 0
extent map 0 0
1st level bmb 0 0
2nd level bmb 0 0
3rd level bmb 0 0
bitmap block 0 0
bitmap index block 0 0
file header block 0 0
unused 0 0
system undo header 0 0
system undo block 0 0
undo header 6 0
undo block 0 0
18 rows selected.
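Both options from the question, as hedged sketches with placeholder object and file names: raising FREELISTS only applies to manually managed segments, while moving to ASSM means relocating the hot segments into a tablespace created with SEGMENT SPACE MANAGEMENT AUTO (after a MOVE the table's indexes become UNUSABLE and need rebuilding):
ALTER TABLE app.hot_table STORAGE (FREELISTS 4);

CREATE TABLESPACE data_assm DATAFILE '/u01/oradata/db/data_assm01.dbf' SIZE 2G
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
ALTER TABLE app.hot_table MOVE TABLESPACE data_assm;
ALTER INDEX app.hot_table_pk REBUILD;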
View 1 Replies
Jan 17, 2012
What is the best way to rebuild an index and a table, and why would we need to use that method?
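For reference, a minimal sketch of the common commands (placeholder names); a table MOVE reorganizes the segment but leaves its indexes UNUSABLE, which is why a rebuild normally follows:
ALTER INDEX app.cp_index_1 REBUILD ONLINE;
ALTER TABLE app.cp_table_1 MOVE;
ALTER INDEX app.cp_index_1 REBUILD;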
View 10 Replies
Mar 22, 2012
When we want to gather schema statistics, which method should we follow, and why?
exec dbms_utility.compile_schema('SHIKHAR');
or
BEGIN
SYS.DBMS_STATS.GATHER_SCHEMA_STATS (
OwnName => 'JACK'
[Code].....
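A note and a sketch: DBMS_UTILITY.COMPILE_SCHEMA only recompiles invalid PL/SQL objects and does not gather optimizer statistics, so DBMS_STATS.GATHER_SCHEMA_STATS is the supported way to collect them. A minimal example with assumed parameter values:
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'JACK',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/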
View 3 Replies
Jul 26, 2011
We have 5 tempfiles (each of 65 GB) allocated to the TEMP tablespace, and still we are running short of space. When I checked the TEMP segment usage I could see many FREE blocks. How can that space be released?
TABLESPACE_N FILE_ID FILE_NAME Size(MB)
------------ ---------- ------------------------------------------- ----------
TEMP 1 +DATA/tedw/tempfile/temp.3043.727779755 65535.9688
TEMP 2 +DATA/tedw/tempfile/temp.3042.727779749 65535.9688
TEMP 3 +DATA/tedw/tempfile/temp.3041.727779741 65535.9688
TEMP 4 +DATA/tedw/tempfile/temp.4065.730387401 65535.9688
TEMP 5 +DATA/tedw/tempfile/temp.4075.731586241 65535.9688

SELECT tablespace_name,
total_blocks,
used_blocks,
free_blocks,
total_blocks*16/1024 as total_MB,
used_blocks*16/1024 as used_MB,
free_blocks*16/1024 as free_MB
FROM v$sort_segment;
TABLESPACE_N TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS TOTAL_MB USED_MB FREE_MB
------------ ------------ ----------- ----------- ---------- ---------- ----------
TEMP 9994624 1007360 8987264 156166 15740 140426
1 row selected.
Further, when I checked the details of the sessions using the TEMP segment, I got the output below:
SELECT b.tablespace, b.segfile#, b.segblk#, b.blocks, a.sid, a.serial#,a.username, a.osuser, a.status
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr
ORDER BY b.tablespace, b.segfile#, b.segblk#, b.blocks;
TABLESPACE SEGFILE# SEGBLK# BLOCKS SID SERIAL# USERNAME OSUSER STATUS
------------------------------- ---------- ---------- ---------- ---------- ---------- ------------------------------ ------------------------------ --------
TEMP 15001 3549184 576 475 1237 EQUIPMENT infa ACTIVE
TEMP 15001 4002368 64 796 4677 CRM infa ACTIVE
TEMP 15002 580608 20352 868 615 EDW infa ACTIVE
TEMP 15002 3962112 832 92 1065 EDWSTG infa ACTIVE
TEMP 15002 4021120 576 1236 7257 EQUIPMENT infa ACTIVE
TEMP 15003 23936 64 819 5586 EDW infa ACTIVE
TEMP 15003 3798400 832 855 1801 EDWSTG infa ACTIVE
TEMP 15004 205056 21632 795 8171 EDW infa ACTIVE
TEMP 15004 4031488 832 403 1299 EDWSTG infa ACTIVE
TEMP 15004 4131456 576 19 6802 EQUIPMENT infa ACTIVE
TEMP 15005 3617856 832 1166 6204 EDWSTG infa ACTIVE
TEMP 15005 3741760 576 862 953 EQUIPMENT infa ACTIVE
TEMP 15005 4042752 18176 1226 5379 CDM infa ACTIVE
13 rows selected.
If I kill SID 1226, will those temp blocks (18,176 blocks) be released so that other sessions can use that space?
There is also the SEGBLK# column: what is the exact meaning of this column?
To reclaim the space, should I issue the command below?
sql>alter tablespace TEMP coalesce;
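A hedged note with a sketch: extents shown as FREE in v$sort_segment are already reusable by other sessions; they simply stay cached in the sort segment rather than being handed back to the tempfiles, so the files never shrink on their own. From 11g the tablespace can be shrunk directly; on earlier releases the usual workaround is to add a fresh, smaller tempfile and drop the oversized one once it is no longer in use. File names and sizes below are examples:
ALTER TABLESPACE temp SHRINK SPACE KEEP 20G;

ALTER TABLESPACE temp ADD TEMPFILE SIZE 20G;
ALTER DATABASE TEMPFILE '+DATA/tedw/tempfile/temp.3043.727779755' DROP INCLUDING DATAFILES;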
View 3 Replies
Jan 10, 2011
If the data blocks in the buffer cache are continuously updated, such that they never reach the least recently used end of the LRU list, when will they be written to disk?
View 19 Replies
Aug 8, 2013
How can I repair blocks that DBVERIFY reports as corrupted?
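A hedged sketch, assuming the database is in ARCHIVELOG mode and valid backups exist: RMAN block media recovery can repair individual corrupt blocks in place (the file and block numbers are placeholders; the first form is the 10g syntax, the second the 11g syntax). Note that V$DATABASE_BLOCK_CORRUPTION is populated by RMAN backup/validate operations rather than by DBVERIFY itself.
RMAN> BLOCKRECOVER DATAFILE 6 BLOCK 128;
RMAN> RECOVER DATAFILE 6 BLOCK 128;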
View 8 Replies
Jul 16, 2011
From the view v$flash_recovery_area_usage, the output below shows that the total number of files is 24+4=28.
FILE_TYPE NUMBER_OF_FILES
------------ ---------------
CONTROLFILE 0
ONLINELOG 0
ARCHIVELOG 24
BACKUPPIECE 4
IMAGECOPY 0
FLASHBACKLOG 0
From the view v$recovery_file_dest, the number of files shown is 49.
NAME NUMBER_OF_FILES
------------ ----------------
/dba/backup 49
Why is there a discrepancy?
View 1 Replies
Feb 23, 2012
How do I find the versions of the exp and imp utilities of the database server from the Windows command prompt?
Note: Currently I have the 10.2.0.10 Oracle software installed on my local machine.
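A small sketch: both client utilities print their release in the banner when invoked with HELP=Y, and the server-side version can be read from v$version.
exp help=y
imp help=y

SELECT banner FROM v$version;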
View 4 Replies
Sep 15, 2011
Is there any way to tell the total data growth of some tables compared with last year?
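A hedged sketch using the AWR segment statistics (this requires the Diagnostics Pack licence, and the default AWR retention of around 8 days would need to have been raised to hold a full year of history; object names are resolved through DBA_OBJECTS):
SELECT o.owner, o.object_name,
       ROUND(SUM(s.space_allocated_delta)/1024/1024) AS growth_mb
FROM   dba_hist_seg_stat s
JOIN   dba_hist_snapshot sn
       ON sn.snap_id = s.snap_id
      AND sn.dbid = s.dbid
      AND sn.instance_number = s.instance_number
JOIN   dba_objects o
       ON o.object_id = s.obj#
WHERE  sn.begin_interval_time > ADD_MONTHS(SYSDATE, -12)
GROUP  BY o.owner, o.object_name
ORDER  BY growth_mb DESC;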
View 4 Replies
May 4, 2012
Is there any view in Oracle for getting the maximum number of processes allowed and the number currently used? I am using Oracle 11.2.0.1.0.
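A minimal sketch: v$resource_limit shows the current, high-water-mark and limit values for both processes and sessions.
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');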
View 3 Replies