I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export and then import the data back.
Does anyone know if there is any way to move a partitioned table between tablespaces of different block sizes?
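For what it's worth, moving partition by partition is rejected (ORA-14519, if I remember the error number right) because all partitions of one table must live in tablespaces of a single block size, so the table has to be rebuilt in one shot. Besides export/import, a CTAS rebuild is one option; a minimal sketch with invented table, column and tablespace names:

[code]
-- Build a copy of the partitioned table directly in the new block-size tablespace
CREATE TABLE sales_new
PARTITION BY RANGE (sale_date)
  (PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01') TABLESPACE ts_32k,
   PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01') TABLESPACE ts_32k)
AS SELECT * FROM sales;

DROP TABLE sales;
RENAME sales_new TO sales;
-- indexes, constraints, grants and statistics have to be recreated afterwards
[/code]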
I find it posted and written in many places that the DB block size should be a multiple of the OS block size. I can't find any information, however, on how to find out what the OS block size actually is. How do I find the OS block size on Windows and on UNIX systems (Solaris, Linux, and HP-UX)?
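I can't speak for every platform, but these are the commands I usually see suggested for checking the filesystem block/cluster size (the paths are placeholders, and the Solaris/HP-UX answers vary by filesystem type):

[code]
# Windows (NTFS) - look at "Bytes Per Cluster":
fsutil fsinfo ntfsinfo c:

# Linux - "Block size" of the filesystem holding the datafiles:
stat -f /u01/oradata

# Solaris - df -g prints the block size per mounted filesystem:
df -g /u01/oradata
[/code]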
If I have NO datafiles other than those of the default block size, would I need to define a size for those other buffer pools? Is there any process that would benefit from these pools?
I am using Oracle 10g with sga_max_size = 4GB and a DB block size of 16k. Now I am creating a tablespace with a block size of 32k; what value should I select for the parameter db_32k_cache_size?
Is there any standard way to calculate the value of this parameter?
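As far as I know there is no standard formula: the pool only needs to exist at all if you have (or plan to have) datafiles of that block size, and it only has to be big enough to cache the hot blocks of the 32k tablespace. V$DB_CACHE_ADVICE reports on non-default block sizes too, so it can be used to right-size the pool after some running time. A starting sketch, with 512M purely as a placeholder and not a recommendation:

[code]
-- The 32k cache must exist before a 32k tablespace can be created
ALTER SYSTEM SET db_32k_cache_size = 512M SCOPE=BOTH;

CREATE TABLESPACE ts_32k
  DATAFILE '/u01/oradata/ts_32k_01.dbf' SIZE 1G
  BLOCKSIZE 32K;
[/code]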
We are working on migrating from 9.2.0.4 to 11.2 and we've set up a test machine so that we could test the install and the import (as well as test additional 11g features that we want to begin using).
So we created the database and created all of the tablespaces beforehand.
However, when we run the import, we get errors like the following:
Import: Release 11.2.0.1.0 - Production on Tue Oct 5 15:01:19 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V09.02.00 via conventional path
[code]....
First of all, the block size in our newly created tablespaces is 8192, and these statements are obviously trying to recreate the tablespaces with a block size of 2048.
1) Why is the import not ignoring these CREATE TABLESPACE commands when those tablespaces already exist?
2) How in the world do we get around the block size issue? We've tried nearly everything we could find, but we've still not had any luck.
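In case it helps: the original imp utility only skips "object already exists" errors when told to with IGNORE=Y; otherwise the CREATE statements stored in the 9.2 dump (including CREATE TABLESPACE with the source database's 2048 block size) fail and abort that object. A sketch, with invented credentials and file names:

[code]
imp system/manager FILE=exp_920.dmp FULL=Y IGNORE=Y LOG=imp_full.log
[/code]

With IGNORE=Y the failed CREATE TABLESPACE commands are reported but skipped, and the rows should be loaded into the pre-created 8192-block tablespaces instead.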
All the analysis on our system so far shows that it is clearly I/O bound, and "db file sequential read" is the biggest culprit.
We have even identified the index being hit by these sequential reads. I am thinking of creating a new tablespace with a 32K block size (currently all tablespaces are 8K) and migrating this index to the new tablespace. That way, Oracle will have to do fewer reads to get the required data.
But is there anything wrong with having just one tablespace with a different block size? Or is there anything I have to be watchful about while doing it?
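A minimal sketch of the migration I have in mind, assuming db_32k_cache_size has already been set and with all names invented:

[code]
CREATE TABLESPACE idx_32k
  DATAFILE '/u01/oradata/idx_32k_01.dbf' SIZE 2G
  BLOCKSIZE 32K;

-- ONLINE keeps the index available to queries during the rebuild
ALTER INDEX hot_index REBUILD TABLESPACE idx_32k ONLINE;
[/code]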
I have written this code and I am facing the error ORA-04030: out of process memory when trying to allocate 16408 bytes.
/* Formatted on 2011/11/26 11:52 (Formatter Plus v4.8. */
DECLARE
   row_id       VARCHAR2 (50);
   v_batch_id   temp.batch_id%TYPE;
   v_slab_id    temp.slab_id%TYPE;
   flag         NUMBER (2);
   num          VARCHAR2 (50) := '&row_id';  -- the substitution variable needs quotes for a VARCHAR2 default
   -- (the executable part of the block was truncated in the post)
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
The problem table resides in a locally managed tablespace. About 10 million records are added to this table every day. After 36 hours all of these records are moved to another (partitioned) table, so the amount of data in the problem table is always about 75 GB. But the size of the table reached 157 GB today, and it is still growing. The results of dbms_space.space_usage are shown below:
Size of blocks with:
0-25% free space: 4726784
25-50% free space: 17301504
50-75% free space: 24920064
75-100% free space: 102418669568
full blocks: 54761594880

Thus, a lot of blocks have 75-100% free space, but the table is constantly growing: during the last 9 days the size increased from 123 to 157 GB.
How do I stop the table from growing? Is there any way to limit the table size in a locally managed tablespace?
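One suggestion I have seen for exactly this symptom (many blocks 75-100% free while the high-water mark keeps climbing) is an online segment shrink, assuming the tablespace uses automatic segment space management; the table name is a placeholder:

[code]
ALTER TABLE problem_table ENABLE ROW MOVEMENT;
ALTER TABLE problem_table SHRINK SPACE CASCADE;  -- compacts the rows, lowers the HWM, shrinks dependent indexes
[/code]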
I have a partitioned table (one partition per month). Every month about 1GB of data is added. What extent size should I set? Would 1GB be OK?
What if the data grows beyond 1GB? Adding a new 1GB extent probably takes a lot of time, and clients may see delays while they're inserting during that time (it's an OLTP system).
When is a new extent allocated? Exactly at the moment the existing extents run out of space, or earlier? Partitions are dropped after one year, so free space isn't a problem.
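For reference, a locally managed tablespace with small uniform extents sidesteps most of this: allocating an extent is a cheap bitmap update, so inserts should not stall the way a 1GB allocation might. A sketch with invented names and sizes:

[code]
CREATE TABLESPACE monthly_data
  DATAFILE '/u01/oradata/monthly_01.dbf' SIZE 2G AUTOEXTEND ON NEXT 512M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M;
[/code]

As far as I know a new extent is allocated at the moment an insert finds no room in the existing ones, not ahead of time.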
When trying to calculate the occupied space for a table, I'm using DBA_SEGMENTS, which works fine as long as the table does not have a BLOB column.
As far as I can tell, the size of the BLOB data is stored in segments with SEGMENT_TYPE = 'LOBSEGMENT', but I cannot find a view that tells me which DBA_SEGMENTS row belongs to the BLOB column of the table I'm checking.
To give you an example:
select sum(BYTES) from DBA_SEGMENTS where owner = user and segment_name = 'MY_TABLE' group by SEGMENT_NAME
returns 262144
running:
SELECT sum(length(blob_column)) FROM my_table
returns 821333
There are entries in DBA_SEGMENTS for my user with the type LOBSEGMENT, but I cannot find a way to map the correct DBA_SEGMENTS rows to the table I am checking.
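The missing link appears to be DBA_LOBS, which maps each LOB column of a table to the names of its LOBSEGMENT and LOBINDEX; something along these lines should total a table's real footprint:

[code]
-- Table segment plus the LOB segments and LOB indexes belonging to it
SELECT SUM(s.bytes) AS total_bytes
FROM   dba_segments s
WHERE  s.owner = USER
AND   (   s.segment_name = 'MY_TABLE'
       OR s.segment_name IN (SELECT l.segment_name FROM dba_lobs l
                             WHERE  l.owner = USER AND l.table_name = 'MY_TABLE')
       OR s.segment_name IN (SELECT l.index_name FROM dba_lobs l
                             WHERE  l.owner = USER AND l.table_name = 'MY_TABLE'));
[/code]

Note that the two numbers will never match exactly anyway: DBA_SEGMENTS counts allocated space in CHUNK-sized pieces, while SUM(LENGTH(blob_column)) counts stored bytes, and LOBs under roughly 4000 bytes may be stored inline in the table segment itself.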
How can I reduce the width of the ------------- separators when a column is null? I am in SQL*Plus and I type:
SELECT A,B,C,D,F,G,H FROM SOMEHERE WHERE B='GOAT1';
where A is 10 characters long, B is 50, C is 10, D is 30, E is 10 and F is 50.
If any of those columns has no data, the output still prints the full ----------------------------- (50 characters) for B, and that covers the whole screen. How can I make it show less when the value is null?
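The dashes are just the heading underline, and the column width comes from the VARCHAR2 definition rather than from the data, so it stays at 50 even for NULLs; in SQL*Plus the display width can be narrowed per column:

[code]
COLUMN B FORMAT A20      -- display column B in 20 characters instead of 50
SET NULL '(null)'        -- optional: print a marker instead of blanks for NULLs
SELECT A,B,C,D,F,G,H FROM SOMEHERE WHERE B='GOAT1';
[/code]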
I have asked the infrastructure DBA to move a table of size 126 GB (as shown in the stats/size tab in TOAD) from one tablespace to another, to free up the huge space in the first tablespace, which I wanted to use for creating another table.
But when the table was moved to the other tablespace, surprisingly for me, I saw that the size of the table had come down from 126 GB to 8 GB. A point to note is that physical deletes happen on this table every day.
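My guess would be that nothing was lost: the daily deletes leave empty blocks below the high-water mark, TOAD and DBA_SEGMENTS report allocated space rather than actual data, and ALTER TABLE ... MOVE rebuilds the segment compactly. A way to sanity-check it, with invented names:

[code]
-- Allocated size as the dictionary sees it, before and after the move
SELECT ROUND(bytes/1024/1024/1024, 1) AS size_gb
FROM   dba_segments
WHERE  owner = 'APP' AND segment_name = 'BIG_TABLE';

-- A MOVE leaves the table's indexes UNUSABLE until they are rebuilt
SELECT index_name, status FROM dba_indexes WHERE table_name = 'BIG_TABLE';
[/code]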
Name                Null      Type
------------------  --------  ------------------------
ENTITY_ID           NOT NULL  VARCHAR2(100 CHAR)
ENTITY_TYPE_ID      NOT NULL  NUMBER
SOURCE_ID           NOT NULL  VARCHAR2(512 CHAR)
XML_SCHEMA_ID       NOT NULL  NUMBER
JOB_ID              NOT NULL  NUMBER
FINGERPRINT         NOT NULL  VARCHAR2(100 CHAR)
ENTITY_XML_DATA               CLOB()
ARCHIVED                      NUMBER(1)
CREATION_DATE                 TIMESTAMP(6)
MODIFICATION_DATE             TIMESTAMP(6)
ARCHIVING_DATE                TIMESTAMP(6)
CREATED_BY                    VARCHAR2(50 CHAR)
MODIFIED_BY                   VARCHAR2(50 CHAR)
The problem is that the data in the table is 40GB, while on the DB the table holds 400GB! How can I shrink and reuse that space, other than drop/recreate or drop/import?
The table has no initial data, so I can play with the INITIAL parameter. Data is inserted, updated and deleted all the time. I have run DBMS_ADVISOR, which recommended to SHRINK the table. I have performed the shrink:
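The shrink statements themselves are cut off above; for a table dominated by a CLOB, the sequence I would expect (table name invented, column name taken from the DESC output) is:

[code]
ALTER TABLE entity_table ENABLE ROW MOVEMENT;
ALTER TABLE entity_table SHRINK SPACE CASCADE;
-- the LOB segment can be shrunk explicitly as well (BASICFILE LOBs):
ALTER TABLE entity_table MODIFY LOB (entity_xml_data) (SHRINK SPACE);
[/code]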
Objective: to find a solution for archiving data from 2 big tables which occupy the most space in the database. With current data (from Jan 2005 to Sept 2011) the record counts are as mentioned below:
We need to load data and run monthly batches from October 2011 to the current month, which will increase this space further.
1. The issue is that there will not be that much space available.
2. Maintenance of such tables is difficult now, and there is a huge impact on performance. Can we think of partitioning the tables by date, as we query the first table based on certain date ranges? (A sketch follows this list.)
3. Most reports use this table, which creates performance issues.
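A sketch of the date-range partitioning idea from point 2 (all names and dates invented); it would also turn archiving into a metadata operation, dropping or exchanging old partitions instead of running big deletes:

[code]
CREATE TABLE txn_history (
  txn_id    NUMBER,
  txn_date  DATE,
  payload   VARCHAR2(4000)
)
PARTITION BY RANGE (txn_date) (
  PARTITION p2005 VALUES LESS THAN (DATE '2006-01-01'),
  PARTITION p2006 VALUES LESS THAN (DATE '2007-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);

-- archiving a year then becomes:
ALTER TABLE txn_history DROP PARTITION p2005;
[/code]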
I am exporting a table that is 3 GB in size and is also partitioned, with the NOCOMPRESS option specified.
Now, when I export it with the COMPRESS=N option of the exp utility, it should take 3 GB on the target server. But will exporting it with COMPRESS=Y save some storage during import, or does the NOCOMPRESS option specified on the partitions mean exp's COMPRESS=Y has no impact, so it will take 3 GB of space in both cases?
Is it true that whether you specify COMPRESS=N or COMPRESS=Y during export does not matter, and the size will always be 3 GB after import?
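As I understand it, exp's COMPRESS parameter has nothing to do with the table's COMPRESS/NOCOMPRESS segment attribute and does not compress the dump file either: it only rewrites the INITIAL storage clause in the extracted DDL so that the import allocates one extent sized to hold the current data, so the imported data should occupy roughly the same 3 GB either way. A sketch of both runs, with invented credentials and file names:

[code]
# keeps the table's original storage clauses:
exp scott/tiger TABLES=big_part_tab FILE=tab_n.dmp COMPRESS=N

# folds the allocated extents into one large INITIAL extent at import time:
exp scott/tiger TABLES=big_part_tab FILE=tab_y.dmp COMPRESS=Y
[/code]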
I'm using TimesTen Release 11.2.1.9.8 (64-bit Linux/x86_64). 1. Is there any limit on the size of a single table? How far can a table size grow? 2. Is there any limit on the number of records in a table?
- we have 55 blocks allocated to the table (still)
- 35 blocks are totally empty (above the HWM)
- 19 blocks contain data (the other block is used by the system)
- we have an average of about 2.8k free on each used block

Therefore, our table:

- consumes 19 blocks of storage in total
- of which 19 blocks * 8k block size - 19 blocks * 2.8k free = ~98k is used for our data

I'm not too sure this calculation is accurate for getting the size (data) of the table.
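The arithmetic itself looks right, for what it's worth: 19 x (8k - 2.8k) is about 99k, though since the 2.8k is an average from statistics it's an estimate rather than an exact figure. One way to cross-check the block counts is DBMS_SPACE.UNUSED_SPACE, which reports the blocks above the HWM directly (table name is a placeholder):

[code]
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks  NUMBER;  l_total_bytes  NUMBER;
  l_unused_blocks NUMBER;  l_unused_bytes NUMBER;
  l_file_id       NUMBER;  l_block_id     NUMBER;  l_last_block NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => USER,
    segment_name              => 'T',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('allocated: ' || l_total_blocks ||
                       ' blocks, above HWM: ' || l_unused_blocks);
END;
/
[/code]

For the example above it should report 55 allocated blocks and 35 blocks above the HWM.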