I recently migrated from Oracle 9i SE (9.2.0.1.0) to Oracle 11gR2 EE (11.2.0.1.0). Previously I took an export of some of my schemas and the file size was around 1 GB (with the exp utility of Oracle 9i). Following the same practice, I now take an export of the same schema, with the same number of objects and the same data volume, and the export file size on the Oracle 11gR2 database has gone down significantly, to around 825 MB (with the expdp utility of Oracle 11g).
So I would like to know why there is a difference in the size of the export (.dmp) files between the two Oracle versions. I have cross-checked the objects and the table row counts; they are exactly the same.
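For reference, a minimal sketch of the two commands being compared; the schema name SCOTT, the directory object DUMP_DIR and the credentials are placeholders rather than the actual values used:

# Oracle 9i: classic export
exp system/password owner=SCOTT file=scott_9i.dmp log=scott_9i.log

# Oracle 11gR2: Data Pump export (needs a directory object for the dump file)
expdp system/password schemas=SCOTT directory=DUMP_DIR dumpfile=scott_11g.dmp logfile=scott_11g.log

Note that exp and expdp write different dump file formats, so their .dmp sizes are not directly comparable even for identical data.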
I have had the following problem open with Oracle support since March 2011 (8 months), and still no resolution.
When I export all our schemas on Sunday night it takes about 1 hour 50 minutes. When I export the same schemas on any other night it takes 7 hours. The only difference is that on Sunday at 4:00am we drop all connections in the connection pools and establish new ones. Then, 19 hours later, on Sunday at 23:00, we perform the exports, which take only about 2 hours to complete.
I have also tried recreating the connections in the connection pools during the week, and the exports then took only 2 hours to complete. But the following night, after the connections had been used during the day, the exports again took 7 hours. So it appears the export speed gets significantly slower when there are many open connections that have been used and not closed.
From the statspack report I found two SQL statements internal to the export command that differed by an order of magnitude in elapsed execution time between the fast export and the slow export (see below).
How do I speed up the exports without having to drop and recreate the database connections in the connection pools each night?
FAST: elapsed_time: 430.90 executions: 161,388 Module: exp@Oracle1 (TNS V1-V3) SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL WHERE CNO = :1 ORDER BY COLNO
I use the following export command for each schema: $ORACLE_HOME/bin/exp user/pass file=somefile.dmp owner=$SCHEMA log=somelog.log buffer=9000000
I have an Oracle Standard Edition 11.1.0.7 database on 64-bit Linux with a 7 GB SGA. I currently export approximately 200 schemas each night (I use exp, not Data Pump, because Data Pump is a lot slower and we can't use its parallel processing features on a Standard Edition database). The export normally takes 1 hour 50 minutes, which is approximately 2 schemas exported every minute. When the exports run slowly, each export takes almost 2 minutes to complete.
The database has about 20 GB of data and 50 GB of indexes. It also has approximately 500 connections via TopLink connection pools from 8 application servers.
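For context, the nightly job is essentially a loop over the schema list, along the lines of the sketch below; the schemas.txt file name is a placeholder for however the schema list is actually supplied:

# run the same exp command once per schema, roughly 200 times a night
while read SCHEMA; do
  $ORACLE_HOME/bin/exp user/pass file=${SCHEMA}.dmp owner=$SCHEMA log=${SCHEMA}.log buffer=9000000
done < schemas.txt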
I have a question regarding a difference in character sets when taking an export (logical backup) of the database directly on the server (Linux RHEL 2.1 AS) versus on a client (a Windows XP Professional machine where only an Oracle 9i client is installed). On the server it seems fine, but on the client node I get the following error for almost all tables.
EXP-00091: Exporting questionable statistics.
My questions are:
[1] Will it create any sort of problem if I later import the data that was exported from the client node?
[2] Why is there a (marginal) difference in the dump (.dmp) file size?
[3] Is there any way to overcome it, or is this just its natural behaviour and not a problem?
[4] If I'm using a LONG or BLOB datatype in some of my tables, will they have any problem if I carry on like this?
Additional information about the character sets. On the server node:
Export done in US7ASCII character set and AL16UTF16 NCHAR character set server uses WE8ISO8859P1 character set (possible charset conversion)
On the client node:
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set server uses US7ASCII character set (possible charset conversion)
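To see exactly which conversion exp will perform, you can compare the database character set with the client-side NLS_LANG before running the export. A sketch; the NLS_LANG value shown is only an example and should be matched to whatever the database actually uses:

-- run against the database to see its character sets
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

On the Windows client, the "Export done in ..." character set reported by exp is driven by the NLS_LANG environment variable (or registry setting), for example set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1 in the command prompt before running exp.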
I have an exp dump of size 1 GB, but when I tried to imp it, it showed a space error, asking for 4 GB of space. I have only 1 GB on the C: drive and 32 GB on D:. Can I add a datafile on the D: location, and what is the maximum size I can assign to that datafile?
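A minimal sketch of adding a datafile on D:, assuming the tablespace that ran out of space is called USERS; the tablespace name, path and sizes are placeholders:

-- add a datafile on the D: drive and let it grow up to a cap
ALTER TABLESPACE users
  ADD DATAFILE 'D:\ORADATA\ORCL\USERS02.DBF' SIZE 4G
  AUTOEXTEND ON NEXT 256M MAXSIZE 16G;

A smallfile datafile can hold at most 4,194,303 blocks, which is roughly 32 GB with the common 8 KB block size (more with larger block sizes or a bigfile tablespace).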
I am exporting a table that is 3 GB in size and is also partitioned, with the NOCOMPRESS option specified.
Now, if I export it with the COMPRESS=N option of the exp utility, it should take 3 GB on the target server. But will exporting it with COMPRESS=Y save some storage during the import, or does the NOCOMPRESS option specified on the partitions mean the exp COMPRESS=Y option has no impact, so the table will take 3 GB of space in both cases?
Is it true that whether you specify COMPRESS=N or COMPRESS=Y during the export does not matter, and the size will always be 3 GB after the import?
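For reference, the two invocations being compared are simply the following (the user and table names are placeholders); note that exp's COMPRESS parameter only changes the storage clause written into the generated CREATE TABLE DDL, it does not compress the data inside the dump file:

# COMPRESS=Y: the generated DDL consolidates the existing extents into one large INITIAL extent
exp user/pass tables=BIG_PART_TAB file=tab_cy.dmp compress=y

# COMPRESS=N: the generated DDL keeps the table's current storage settings
exp user/pass tables=BIG_PART_TAB file=tab_cn.dmp compress=n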
I want to take a schema-level export. The schema is 115 GB in size. Do we require the same amount of free space on the server side (where we are writing the dump) as the schema size, or is less or more space required?
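A query along these lines shows how the 115 GB breaks down by segment type, which gives a feel for how much of it is table data; replace MYSCHEMA with the real owner:

-- rough size of the schema by segment type
SELECT segment_type, ROUND(SUM(bytes)/1024/1024/1024, 1) AS gb
  FROM dba_segments
 WHERE owner = 'MYSCHEMA'
 GROUP BY segment_type
 ORDER BY gb DESC;

Indexes go into the dump as DDL only, so the dump file is usually smaller than the total segment size, but by how much depends on the data.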
We are working on migrating from 9.2.0.4 to 11.2, and we've set up a test machine so that we can test the install and the import (as well as test additional 11g features that we want to begin using).
So we created the database and created all of the tablespaces beforehand.
However, when we run the import, we get errors like this:
Import: Release 11.2.0.1.0 - Production on Tue Oct 5 15:01:19 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V09.02.00 via conventional path
[code]....
First of all, the block size in our newly created tablespaces is 8192, and these statements are obviously trying to recreate the tablespaces with a block size of 2048.
1) Why is it not ignoring these CREATE TABLESPACE commands when those tablespaces already exist?
2) How in the world do we get around the block size issue? We've tried nearly everything we could find, but we've still not had any luck.
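Two approaches that are commonly tried for this situation, sketched below with arbitrary values; whether either fits depends on whether the 2 KB block tablespaces from the 9i source are actually wanted on the new database:

-- option 1: give the instance a 2 KB buffer cache so 2 KB block tablespaces can be created at all
ALTER SYSTEM SET db_2k_cache_size = 64M SCOPE=BOTH;

# option 2: keep the pre-created 8 KB tablespaces and let imp skip the failed creates,
# loading the objects into the tablespaces that already exist
imp system/password full=y file=exp9i.dmp log=imp.log ignore=y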
Here the biggest schema is 250 GB and the total size of all the schemas is 300 GB. The location where I am writing the dump has 350 GB of space, but even then the expdp failed, saying:
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Why did it fail, and how do I restart it and make sure it runs successfully without error?
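A sketch of how such a job is usually given room to grow and then resumed; the schema, directory, file and job names are placeholders:

# let expdp create as many dump files as it needs instead of a single fixed file
expdp system/password schemas=BIGSCHEMA directory=DUMP_DIR dumpfile=bigschema_%U.dmp filesize=20G logfile=bigschema.log job_name=BIGSCHEMA_EXP

# if the job stops on ORA-39095, re-attach to it once space has been freed or added
expdp system/password attach=BIGSCHEMA_EXP
# then, at the Export> prompt:
#   ADD_FILE=bigschema_extra_%U.dmp
#   START_JOB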
I want to load lakhs of records into a table. My problem is that after loading a quarter of the records, my process abends due to the size of my rollback segment area. I don't have the option of increasing it. So, is there any way to perform intermediate commits when I am using the imp or sqlldr utilities, so that the entire data loads without abending?
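Both utilities do have options for intermediate commits; a minimal sketch, with placeholder file names and an arbitrary row count:

# imp: COMMIT=Y commits after each buffer-full of rows instead of once per table
imp user/pass file=data.dmp fromuser=SRC touser=DST commit=y buffer=1000000

# sqlldr (conventional path): ROWS= sets how many rows go into each commit
sqlldr user/pass control=load.ctl log=load.log rows=5000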
I am familiar with the netca tool. However, there is one more utility, netmgr, that exists for the same functionality. I have checked with many DBAs for the exact difference, but I did not get a good answer from them. I have also searched Google without finding it. Please list the exact difference between these two tools (netca, netmgr).
I am working to understand the space allocation of a table relative to the length we provide in the data type. For that I created a table with a VARCHAR2 column of length 50. The size of the table created is 65536 bytes. This is before any rows have been inserted. Later, when we insert some rows, the total size of the segment still remains the same, 65536 bytes.
Now, when I create a table with a VARCHAR2 column of length 500, it is still created with the same size, 65536 bytes. So can you explain what the segment size depends on, and how the column length affects the size and space allocation?
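You can see where the 65536 bytes comes from by looking at the segment's extents and the tablespace defaults rather than at the column definition; the table name TEST_TAB is a placeholder:

-- the initial segment size comes from extent allocation, not from the VARCHAR2 length
SELECT segment_name, bytes, blocks, extents
  FROM user_segments
 WHERE segment_name = 'TEST_TAB';

SELECT tablespace_name, initial_extent, next_extent, extent_management, allocation_type
  FROM user_tablespaces;

In a locally managed tablespace with AUTOALLOCATE, the first extent is typically 64 KB (65536 bytes) regardless of column lengths; the segment only grows as inserted rows actually fill blocks.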
I am trying to increase the size of the SGA, or you could say I want to put my SGA under automatic memory management. The following are the steps I am trying.
SQL> show parameter sga_max_size;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_max_size                         big integer 96M
SQL>
After that I try to increase the size:
SQL> alter system set sga_max_size = 200m;
alter system set sga_max_size = 200m
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
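sga_max_size is a static parameter, so (assuming the instance is started from an spfile) the change has to go to the spfile and take effect at the next restart; a sketch:

SQL> -- requires an spfile; with a pfile, edit the init.ora instead
SQL> alter system set sga_max_size = 200m scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> -- optional: let Oracle manage the SGA components within the new limit
SQL> alter system set sga_target = 200m;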
Name                           Null     Type
------------------------------ -------- ------------------------
ENTITY_ID                      NOT NULL VARCHAR2(100 CHAR)
ENTITY_TYPE_ID                 NOT NULL NUMBER
SOURCE_ID                      NOT NULL VARCHAR2(512 CHAR)
XML_SCHEMA_ID                  NOT NULL NUMBER
JOB_ID                         NOT NULL NUMBER
FINGERPRINT                    NOT NULL VARCHAR2(100 CHAR)
ENTITY_XML_DATA                         CLOB()
ARCHIVED                                NUMBER(1)
CREATION_DATE                           TIMESTAMP(6)
MODIFICATION_DATE                       TIMESTAMP(6)
ARCHIVING_DATE                          TIMESTAMP(6)
CREATED_BY                              VARCHAR2(50 CHAR)
MODIFIED_BY                             VARCHAR2(50 CHAR)
The problem is that the table's data is 40 GB, while in the database the table occupies 400 GB! How can I shrink and reuse that space, other than by drop/recreate or drop/import?
The table has no initial data, so I can play with the INITIAL parameter. Data is inserted, updated and deleted all the time. I have run DBMS_ADVISOR, which recommended shrinking the table. I have performed the shrink:
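For reference, a typical shrink of both the table and its CLOB column looks roughly like the sketch below (not necessarily the exact statements that were run); the table name ENTITY_DATA is a placeholder:

-- row movement must be enabled before a segment shrink
ALTER TABLE entity_data ENABLE ROW MOVEMENT;

-- shrink the table segment and release the freed space back to the tablespace
ALTER TABLE entity_data SHRINK SPACE;

-- the CLOB segment is shrunk separately
ALTER TABLE entity_data MODIFY LOB (entity_xml_data) (SHRINK SPACE);

SHRINK SPACE on the LOB segment applies to basicfile LOBs; for securefile LOBs on some versions the usual alternative is to move the LOB (ALTER TABLE ... MOVE LOB (entity_xml_data) STORE AS (TABLESPACE ...)), which also needs temporary space.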
As the undo segments are used in round-robin fashion, is it possible that, with varying load (concurrent users, size and number of transactions), the size of the undo tablespace on a particular day is less than the undo tablespace size a few days earlier?
As a basic understanding, I know that undo is preserved for read consistency and for transaction and instance recovery. So if there are a lot of transactions on the database on 05 Feb and before, but there aren't any transactions on the 6th, 7th, 8th or 9th, then on 10 Feb can we see the undo tablespace size being less than it was on 05 Feb?
In the following case, when the data belonging to a table is not required for any queries or transactions, even then the undo size is not restored upon dropping the table.
As such, for large operations and batch processes, should we keep the undo tablespace with its files set to 'Autoextend' and 'Maxsize' set to 'Unlimited'?
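One way to watch this in practice is to compare the physical size of the undo tablespace with how much of it is active, unexpired or expired at a point in time. A sketch; the LIKE filter assumes the undo tablespace name starts with UNDO:

-- physical size of the undo datafiles
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS mb
  FROM dba_data_files
 WHERE tablespace_name LIKE 'UNDO%'
 GROUP BY tablespace_name;

-- how that space is currently being used
SELECT status, ROUND(SUM(bytes)/1024/1024) AS mb
  FROM dba_undo_extents
 GROUP BY status;

The datafile size itself does not shrink on its own; what changes from day to day is the mix of active, unexpired and expired extents inside it.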
I want to increase the size of a tablespace, but when I log in as sysdba or as the admin user I can see only 21 tablespaces in dba_tablespaces or user_tablespaces. I want to see the tablespaces related to the application.
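To list the tablespaces an application schema actually uses (rather than every tablespace visible in the dictionary), something along these lines works; the owner name APP_OWNER is a placeholder:

-- tablespaces that hold segments owned by the application schema
SELECT DISTINCT tablespace_name
  FROM dba_segments
 WHERE owner = 'APP_OWNER';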
I noticed my DB is generating a lot of "small" .arc files and I am unsure why. As you can see from the v$log query, my log file size is set to 50 MB. Yet BLOCKS*BLOCK_SIZE never adds up to 50 MB.
Is there anything else I can look into to see how to make the .arc files larger?
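Two things worth checking are how big each archived log actually is and whether something is forcing log switches before the online logs fill. A sketch:

-- size of each archived log versus the 50 MB online log size
SELECT sequence#, ROUND(blocks * block_size / 1024 / 1024, 1) AS mb, completion_time
  FROM v$archived_log
 ORDER BY completion_time DESC;

-- a non-zero archive_lag_target forces a log switch every N seconds regardless of how full the log is
SHOW PARAMETER archive_lag_target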
We have a tablespace of size 900 GB where 90% of the space is occupied by two tables holding BLOB data. Now I need to drop these two tables and then recover the space, so I need to resize the tablespace (datafiles).
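A sketch of the sequence, with placeholder table, tablespace and file names:

-- drop the two LOB tables and bypass the recycle bin so their space is freed immediately
DROP TABLE blob_tab1 PURGE;
DROP TABLE blob_tab2 PURGE;

-- check the current datafile sizes for the tablespace, then shrink each file
SELECT file_name, ROUND(bytes/1024/1024/1024) AS gb
  FROM dba_data_files
 WHERE tablespace_name = 'BLOB_TS';

-- the resize fails with ORA-03297 if any remaining extent still sits beyond the new size
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/blob_ts01.dbf' RESIZE 50G;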
I am using Oracle 10g with sga_max_size = 4 GB and a DB block size of 16 KB. Now I am creating a tablespace with a block size of 32 KB; what value should I select for the parameter db_32k_cache_size?
Is there any standard way to calculate the value of this parameter?
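The parameter sizes a separate buffer cache for 32 KB blocks, carved out of the same SGA, so it has to fit alongside the default 16 KB cache. A sketch with an arbitrary starting value; the 512M figure and the file path are examples only, not recommendations:

-- must be set before any 32 KB block tablespace can be created
ALTER SYSTEM SET db_32k_cache_size = 512M SCOPE=BOTH;

CREATE TABLESPACE ts_32k
  DATAFILE '/u01/oradata/ORCL/ts_32k01.dbf' SIZE 10G
  BLOCKSIZE 32K;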