I asked the Infrastructure DBA to move a table of 126 GB (as shown in the Stats/Size tab in TOAD) from one tablespace to another. This was to free up the large amount of space in the first tablespace, which I wanted to use for creating another table.
But when the table was moved to the other tablespace, to my surprise the size of the table had come down from 126 GB to 8 GB. A point to note is that physical deletes happen on this table every day.
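For comparison's sake, a minimal sketch of how the allocated size can be checked around such a move; the owner, table and index names (APP_OWNER, BIG_TAB, BIG_TAB_PK) and the target tablespace NEW_TS are placeholders:

-- allocated space as the data dictionary (and TOAD's Stats/Size tab) reports it
SELECT segment_name, ROUND(bytes/1024/1024/1024, 1) AS size_gb
  FROM dba_segments
 WHERE owner = 'APP_OWNER'
   AND segment_name = 'BIG_TAB';

-- the move rebuilds the segment, so blocks emptied by the daily deletes
-- (space below the old high-water mark) are simply not allocated again
ALTER TABLE app_owner.big_tab MOVE TABLESPACE new_ts;

-- indexes are left UNUSABLE by a move and have to be rebuilt
ALTER INDEX app_owner.big_tab_pk REBUILD;

That would explain the drop from 126 GB to 8 GB: the figure is allocated space, and the daily deletes had left most of the old segment empty.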
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export the data and then import it back.
Does anyone know of a way to move a partitioned table between tablespaces of different block sizes?
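Since export/import looks like the route anyway, a rough Data Pump sketch of it; the directory DP_DIR, the table APP_OWNER.BIG_PART_TAB and the tablespaces OLD_TS/NEW_TS are placeholders:

expdp system DIRECTORY=dp_dir DUMPFILE=big_part_tab.dmp LOGFILE=big_part_tab_exp.log TABLES=app_owner.big_part_tab

Then drop (or rename) the original table and bring it back into the new tablespace:

impdp system DIRECTORY=dp_dir DUMPFILE=big_part_tab.dmp LOGFILE=big_part_tab_imp.log TABLES=app_owner.big_part_tab REMAP_TABLESPACE=old_ts:new_ts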
Both source and target DBs are on Grid Infrastructure version 11.2.0.3.
Planning to migrate from RHEL 5.4 to Solaris 10.
Currently we have 3 schemas with a combined size of 4 TB in an 11.2.0.3 RAC ASM database on RHEL 5.4. All 3 schemas share one tablespace, which has around 150 datafiles.
We want to move these 3 schemas to a Solaris RAC database with ASM.
If we use transportable tablespaces, should there be any downtime for the target DB?
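For what it's worth, a rough sketch of the cross-platform transportable tablespace flow; the tablespace APP_DATA, directory DP_DIR and file names are placeholders, and whether the RMAN CONVERT step is needed depends on the endian formats listed in V$TRANSPORTABLE_PLATFORM for Linux x86-64 and the target Solaris platform.

On the source, the tablespace has to stay read only while its files are copied:
ALTER TABLESPACE app_data READ ONLY;

Export the transport metadata with Data Pump:
expdp system DIRECTORY=dp_dir DUMPFILE=tts_app_data.dmp TRANSPORT_TABLESPACES=app_data

If the endian formats differ, convert the datafiles, e.g. in RMAN on the source:
CONVERT TABLESPACE app_data TO PLATFORM 'Solaris[tm] OE (64-bit)' FORMAT '/stage/%U';

Copy the dump file and the (converted) datafiles to the target, plug them in, then reopen:
impdp system DIRECTORY=dp_dir DUMPFILE=tts_app_data.dmp TRANSPORT_DATAFILES='+DATA/...'
ALTER TABLESPACE app_data READ WRITE;

The read-only window hits the source tablespace; plugging the files in is an import into the open target database, so the target itself does not have to come down for it.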
I want to drop some old partitions from a big table, but that will not increase free space on disk. So I want to move the table with its indexes to other tablespaces. What is the fastest way to do that? ALTER TABLE ... MOVE TABLESPACE ...? CTAS? Or something else?
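Since the table has partitions, a single ALTER TABLE ... MOVE TABLESPACE will not work on it directly; the usual route is to move it partition by partition and rebuild whatever indexes go UNUSABLE. A sketch with placeholder names (APP_OWNER.BIG_TAB, partition P_2011_01, tablespace NEW_TS, global index BIG_TAB_PK):

ALTER TABLE app_owner.big_tab MOVE PARTITION p_2011_01 TABLESPACE new_ts NOLOGGING;
ALTER TABLE app_owner.big_tab MODIFY PARTITION p_2011_01 REBUILD UNUSABLE LOCAL INDEXES;

-- global indexes can be rebuilt (and relocated) once all partitions are moved
ALTER INDEX app_owner.big_tab_pk REBUILD TABLESPACE new_ts;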
I want to move the data in the VARRAY column BT_DETAIL to another table. I have created a staging table BT_STG which contains a surrogate key column and the columns from the VARRAY. I am creating this staging table at run time.
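A minimal sketch of filling such a staging table by unnesting the VARRAY with TABLE(); the source table BT, its key column BT_ID, the sequence BT_STG_SEQ and the element attributes ATTR1/ATTR2 are assumptions standing in for the real structure:

INSERT INTO bt_stg (stg_id, bt_id, attr1, attr2)
SELECT bt_stg_seq.NEXTVAL,  -- surrogate key
       t.bt_id,
       d.attr1,             -- one row per VARRAY element
       d.attr2
  FROM bt t,
       TABLE(t.bt_detail) d;

If the VARRAY elements are plain scalars rather than an object type, select d.COLUMN_VALUE instead of the individual attributes.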
We got a request to take a few unused partitions offline and move them to another, less used drive. Please set the following partitions offline in PreProd:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
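Partitions themselves cannot be taken offline, but a tablespace holding them can, so one way is to move those partitions into a dedicated tablespace whose datafile sits on the less used drive and then offline that tablespace. A rough sketch; the tablespace ARCH_TS, the datafile path, APP_OWNER.BIG_TAB and partition P_2010_Q1 are placeholders:

CREATE TABLESPACE arch_ts DATAFILE '/u05/oradata/PREPROD/arch_ts01.dbf' SIZE 20G;

ALTER TABLE app_owner.big_tab MOVE PARTITION p_2010_q1 TABLESPACE arch_ts;
ALTER TABLE app_owner.big_tab MODIFY PARTITION p_2010_q1 REBUILD UNUSABLE LOCAL INDEXES;

-- once every partition in the request has been moved, the whole tablespace can go offline
ALTER TABLESPACE arch_ts OFFLINE NORMAL;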
The problem table resides in a locally managed tablespace. About 10 million records are added to this table every day. After 36 hours all of these records are moved to another (partitioned) table, so the amount of data in the problem table is always about 75 GB. But the size of the table has reached 157 GB today, and it is still growing. The results of dbms_space.space_usage are shown below:

Size of blocks with:
0-25% free space:   4726784
25-50% free space:  17301504
50-75% free space:  24920064
75-100% free space: 102418669568
full blocks:        54761594880

Thus a lot of blocks have 75-100% free space, yet the table keeps growing: during the last 9 days the size increased from 123 GB to 157 GB.
How can I stop the table from growing? Is there any way to limit the table size in a locally managed tablespace?
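One thing worth checking is whether the daily 10 million rows arrive via direct-path (APPEND hint / direct-path SQL*Loader) inserts: those always load above the high-water mark and never reuse the space freed below it, which would match a segment full of 75-100% free blocks. That is only a guess from the numbers. To compact what is already there, a minimal sketch, assuming the table is APP_OWNER.STAGE_TAB (a placeholder) and its tablespace uses automatic segment space management, which SHRINK requires:

ALTER TABLE app_owner.stage_tab ENABLE ROW MOVEMENT;
-- compacts the segment and its indexes and lowers the high-water mark
ALTER TABLE app_owner.stage_tab SHRINK SPACE CASCADE;

If SHRINK is not available, an ALTER TABLE ... MOVE followed by index rebuilds achieves the same compaction.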
I have a partitioned table (one partition per month). Every month about 1 GB of data is added. What extent size should I set? Will 1 GB be OK?
What if the data turns out to be greater than 1 GB? Adding a new 1 GB extent probably takes a lot of time, and clients may see delays while they are inserting at that moment (it is an OLTP system).
When is a new extent allocated? Exactly at the moment the existing extent runs out of space, or before? Partitions are dropped after one year, so free space is not a problem.
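In a locally managed tablespace a new extent is taken only when an insert cannot fit into the extents already allocated, and the allocation itself is a quick bitmap update, so OLTP sessions are unlikely to notice it unless the datafile has to autoextend at that exact moment. Rather than sizing a single 1 GB extent per partition, uniform smaller extents are a common choice; a sketch with placeholder names, paths and sizes:

CREATE TABLESPACE part_data
  DATAFILE '/u02/oradata/ORCL/part_data01.dbf' SIZE 20G AUTOEXTEND ON NEXT 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M;

-- future partitions of the (placeholder) table inherit the tablespace and its 64 MB extents
ALTER TABLE app_owner.sales MODIFY DEFAULT ATTRIBUTES TABLESPACE part_data;

A roughly 1 GB monthly partition then ends up with around 16 extents, which is nothing to worry about, and the space is still released cleanly when old partitions are dropped.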
I'm trying to install Grid Infrastructure for Oracle RAC 11g. I followed each step of the pre-installation tasks, but when I run runInstaller it blocks without any error message.
What I see in my shell is this: ./runInstaller Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 17276 MB    Passed
Checking swap space: must be greater than 150 MB. Actual 23999 MB    Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-01-11_03-35-57AM. Please wait... [oracle@ssts2cora1 grid]$
Everything seems OK, but the Oracle installation window does not appear. The Java process is active:
but it does nothing! Nothing appears at all. I attach the log file generated by the installation process.
Using paramFile: /home/software_ssts/grid/install/oraparam.ini
Checking Temp space: must be greater than 120 MB. Actual 17276 MB    Passed
Checking swap space: must be greater than 150 MB. Actual 23999 MB    Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216    Passed
The commandline for unzip:
/home/software_ssts/grid/install/unzip -qqqo ../stage/Components/oracle.jdk/1.5.0.17.0/1/DataFiles/\*.jar -d /tmp/OraInstall2012-01-11_03-35-57AM
The umask value '022' from oraparam.ini will be used
Execvp of the child jre : the cmdline is /tmp/OraInstall2012-01-11_03-35-57AM/jdk/jre/bin/java, and the argv is /tmp/OraInstall2012-01-11_03-35-57AM/jdk/jre/bin/java -Doracle.installer.library_loc=/tmp/OraInstall2012-01-11_03-35-57AM/oui/lib/linux -Doracle.installer.oui_loc=/tmp/OraInstall2012-01-11_03-35-57AM/oui -Doracle.installer.bootstrap=TRUE -Doracle.installer.startup_location=/home/software_ssts/grid/install -Doracle.installer.jre_loc=/tmp/OraInstall2012-01-11_03-35-57AM/jdk/jre ...
I have a huge problem trying to install Grid Infrastructure and Oracle DB. On its own, the Oracle DB installation works fine and goes through without any problems; the grid, not so much.
I have a VMware host 4.1 with several VMs on top. One of them is Openfiler, which presents 2 LUNs for the installation. Two other VMs are Windows 2008 R2 hosts (it comes only in 64-bit; I know many of you know that, but just to be clear). Those two Windows hosts should be joined in an Oracle cluster and should host one Oracle DB afterwards. I already have an AD domain, free disks and all the required stuff arranged and prepared.
I won't bother you about previous problems, but the current one is that I have no clue what is going wrong. I have two log files, which I paste below, but there is no error message, just a warning that, as far as I can tell, is not even described properly.
Attached File(s): installtion_logs.zip (66.62K)
When trying to calculate the occupied space for a table, I'm using DBA_SEGMENTS, which works fine as long as the table does not have a BLOB column.
As far as I can tell, the size of the BLOB data is stored under SEGMENT_TYPE = 'LOBSEGMENT', but I cannot find a view that tells me which DBA_SEGMENTS row belongs to the BLOB column of the table I'm checking.
To give you an example:
select sum(BYTES) from DBA_SEGMENTS where owner = user and segment_name = 'MY_TABLE' group by SEGMENT_NAME
returns 262144
running:
SELECT sum(length(blob_column)) FROM my_table
returns 821333
There are entries in DBA_SEGMENTS for my user with the type LOBSEGMENT, but I cannot find a way to map the correct DBA_SEGMENTS row to the table I am checking.
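For reference, DBA_LOBS (or USER_LOBS / ALL_LOBS) records the table name, column name, LOB segment name and LOB index name for each LOB column, and it joins to DBA_SEGMENTS; a sketch for the MY_TABLE example above:

SELECT l.column_name,
       s.segment_name,
       s.segment_type,   -- LOBSEGMENT for the data, LOBINDEX for its index
       s.bytes
  FROM dba_lobs l
  JOIN dba_segments s
    ON s.owner = l.owner
   AND s.segment_name IN (l.segment_name, l.index_name)
 WHERE l.owner = user
   AND l.table_name = 'MY_TABLE';

The 262144 bytes from the first query are only the table segment; BLOB values larger than roughly 4 KB are stored out of line in the LOBSEGMENT, and DBA_SEGMENTS reports allocated (not used) space, so the totals will not match SUM(LENGTH(...)) exactly.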
How can I reduce the width of the "-------------" heading underline when the column is null? I'm in SQL*Plus and I type: SELECT A, B, C, D, F, G, H FROM SOMEHERE WHERE B = 'GOAT1';
A is 10 characters long, B is 50, C is 10, D is 30, E is 10 and F is 50.
Even if one of those columns has no data, the output still shows the full-width underline (50 dashes for B), which covers the whole screen. How can I make it show less when the value is null?
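The dashes are just the column heading underline, and their width follows the column's display width (50 characters for a VARCHAR2(50)). A minimal SQL*Plus sketch that narrows it; the widths chosen are only examples:

-- show column B in 20 characters instead of 50 (longer values wrap by default)
COLUMN B FORMAT A20
COLUMN D FORMAT A15
-- optional: print a marker instead of blanks for NULL values
SET NULL '(null)'
SELECT A, B, C, D, F, G, H FROM SOMEHERE WHERE B = 'GOAT1';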
Name                Null?     Type
------------------  --------  ------------------------
ENTITY_ID           NOT NULL  VARCHAR2(100 CHAR)
ENTITY_TYPE_ID      NOT NULL  NUMBER
SOURCE_ID           NOT NULL  VARCHAR2(512 CHAR)
XML_SCHEMA_ID       NOT NULL  NUMBER
JOB_ID              NOT NULL  NUMBER
FINGERPRINT         NOT NULL  VARCHAR2(100 CHAR)
ENTITY_XML_DATA               CLOB
ARCHIVED                      NUMBER(1)
CREATION_DATE                 TIMESTAMP(6)
MODIFICATION_DATE             TIMESTAMP(6)
ARCHIVING_DATE                TIMESTAMP(6)
CREATED_BY                    VARCHAR2(50 CHAR)
MODIFIED_BY                   VARCHAR2(50 CHAR)
The problem is that the actual data in the table is about 40 GB, while the table occupies 400 GB in the database! How can I shrink and reuse that space, other than drop/recreate or drop/import?
The table started with no data, so I could play with the INITIAL parameter. Data is inserted, updated and deleted all the time. I have run DBMS_ADVISOR, which recommended shrinking the table. I have performed the shrink:
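For reference, on a table with a CLOB the shrink usually has to cover the LOB segment as well, or most of the space stays where it is. A sketch, with ENTITY_DATA as a placeholder table name and the ENTITY_XML_DATA column from the describe above; the LOB shrink applies to BASICFILE LOBs, while a SECUREFILE LOB is normally rebuilt with ALTER TABLE ... MOVE LOB ... instead:

ALTER TABLE entity_data ENABLE ROW MOVEMENT;
ALTER TABLE entity_data SHRINK SPACE CASCADE;
ALTER TABLE entity_data MODIFY LOB (entity_xml_data) (SHRINK SPACE);

DBA_SEGMENTS joined to DBA_LOBS will show whether the 400 GB sits in the table segment or in the CLOB segment, which tells which of the statements above actually matters.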
Objective: to find a solution for archiving data from 2 big tables which occupy the most space in the database. With the current data (from Jan 2005 to Sept 2011) the record counts are as mentioned below:
We need to load data and run monthly batches from October 2011 to the current month, which will increase this space further.
1. The issue is that there will not be enough space for that.
2. Maintenance of such a table is difficult now, and there is a huge impact on performance. Can we consider partitioning the table by date, since we query the first table on certain date ranges? (See the sketch after this list.)
3. Most reports use this table, which creates performance issues.
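Regarding point 2, a rough sketch of date-range partitioning (monthly interval partitions, so 11g) for one of the tables; the table TXN_HISTORY and its date column TXN_DATE are placeholders for the real names:

CREATE TABLE txn_history_part
  PARTITION BY RANGE (txn_date)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  ( PARTITION p_pre_2005 VALUES LESS THAN (DATE '2005-01-01') )
AS SELECT * FROM txn_history;

-- old months can then be archived and dropped individually, e.g.:
ALTER TABLE txn_history_part DROP PARTITION FOR (DATE '2005-01-15') UPDATE INDEXES;

Queries and reports that filter on a date range then only touch the relevant partitions (partition pruning), which should help with point 3, and dropping archived months releases their space, which addresses point 1.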