SQL> Alter Table tb_hxl_user Shrink Space Cascade;
Alter Table tb_hxl_user Shrink Space Cascade
*
ERROR at line 1:
ORA-10635: Invalid segment or tablespace type
SQL> desc tb_hxl_user;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STATEDATE                                 NOT NULL DATE
 USERNUMBER                                NOT NULL VARCHAR2(13)
 PROVCODE                                  NOT NULL NUMBER
Name                           Null?    Type
------------------------------ -------- ------------------------
ENTITY_ID                      NOT NULL VARCHAR2(100 CHAR)
ENTITY_TYPE_ID                 NOT NULL NUMBER
SOURCE_ID                      NOT NULL VARCHAR2(512 CHAR)
XML_SCHEMA_ID                  NOT NULL NUMBER
JOB_ID                         NOT NULL NUMBER
FINGERPRINT                    NOT NULL VARCHAR2(100 CHAR)
ENTITY_XML_DATA                         CLOB
ARCHIVED                                NUMBER(1)
CREATION_DATE                           TIMESTAMP(6)
MODIFICATION_DATE                       TIMESTAMP(6)
ARCHIVING_DATE                          TIMESTAMP(6)
CREATED_BY                              VARCHAR2(50 CHAR)
MODIFIED_BY                             VARCHAR2(50 CHAR)
The problem is that the table's data amounts to 40GB, while the table occupies 400GB in the database! How can I shrink and reuse that space, other than by drop/recreate or drop/import?
The table had no initial data, so I can play with the INITIAL parameter. Data is inserted, updated, and deleted all the time. I have run DBMS_ADVISOR, which recommended to SHRINK the table. I have performed the shrink shown above.
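ORA-10635 usually means the segment does not live in a tablespace with automatic segment space management (ASSM), which SHRINK SPACE requires. A quick check along these lines (a sketch, using the table name from the session above) would confirm or rule that out:

-- SHRINK SPACE works only on segments in ASSM tablespaces;
-- SEGMENT_SPACE_MANAGEMENT must be AUTO
SELECT tablespace_name, segment_space_management
FROM   dba_tablespaces
WHERE  tablespace_name = (SELECT tablespace_name
                          FROM   dba_tables
                          WHERE  table_name = 'TB_HXL_USER');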
It is not possible to run SHRINK SPACE against a table with a function-based index. This is documented, and I've tested it on the current release. I've reverse engineered it a bit, and the issue is in fact that you cannot SHRINK SPACE if there is an index on a virtual column:

SQL> create table t1 (c1 number, c2 as (c1*2)) segment creation immediate;

Table created.

SQL> alter table t1 enable row movement;

Table altered.

SQL> alter table t1 shrink space;

Table altered.

SQL> create index i1 on t1(c2);

Index created.

SQL> alter table t1 shrink space;
alter table t1 shrink space
*
ERROR at line 1:
ORA-10631: SHRINK clause should not be specified for this object.
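If the virtual-column index is what blocks you, one workaround, sketched below with the t1/i1 names from the test case above, is to drop the index, shrink, and recreate it:

SQL> drop index i1;

Index dropped.

SQL> alter table t1 shrink space;

Table altered.

SQL> create index i1 on t1(c2);

Index created.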
I need to resize my datafiles, as I allocated more space than needed (the data load is complete now). My tablespace currently has 11.74 GB free, spread across 3 datafiles.
TABLESPACE                    TOTAL       USED       FREE   PCT_FREE    LARGEST  FRAGMENTS
------------------------ ---------- ---------- ---------- ---------- ---------- ----------
CFC_DATA                     150528   138780.6    11747.4 7.80412946       1251        992
TABLESPACE_NAME       FILE_ID FILE_NAME                                     Size(MB)
------------------ ---------- --------------------------------------------- ----------
CFC_DATA                   71 +DATA/dedw/datafile/cfc_data.4074.731085435   65535.9688
CFC_DATA                  334 +DATA/dedw/datafile/cfc_data.4473.757566557        20480
CFC_DATA                 1710 +DATA/dedw/datafile/cfc_data.2012.728095695        64512

I used the below script to find out the HWM in order to resize the datafiles. db_block_size is 16KB. [code]....
In TOAD, there is a "Minimum size" button against each datafile. What is the SQL that runs behind the scenes when this button is pressed in TOAD?
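I can't say exactly what TOAD executes, but a widely used query for the minimum resize target computes each file's high-water mark from DBA_EXTENTS; a sketch, using the 16KB block size from the post:

DEFINE blksize=16384

SELECT a.file_id,
       a.file_name,
       CEIL((NVL(b.hwm, 1) * &&blksize) / 1024 / 1024) AS min_size_mb,
       CEIL( a.blocks      * &&blksize  / 1024 / 1024) AS curr_size_mb
FROM   dba_data_files a,
       (SELECT file_id, MAX(block_id + blocks - 1) AS hwm
        FROM   dba_extents
        GROUP  BY file_id) b
WHERE  a.file_id = b.file_id(+)
AND    a.tablespace_name = 'CFC_DATA';

A file cannot be resized below its min_size_mb; anything between that and curr_size_mb can be released with ALTER DATABASE DATAFILE ... RESIZE.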
I've heard that this statement causes a table lock, but I can't find any information on it. If that is so, is it a write lock, or also a read lock on the table?
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export the data and then import it back.
Does anyone know a way to move a partitioned table between tablespaces of different block sizes?
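For reference, ORA-14519 is the error raised when a partitioned table's segments would span tablespaces with conflicting block sizes, and the block sizes involved are easy to verify first (a sketch; the tablespace names are placeholders). Online redefinition (DBMS_REDEFINITION) into a table created in the target tablespace is sometimes suggested as an alternative to export/import, since the interim table is an independent object.

SELECT tablespace_name, block_size
FROM   dba_tablespaces
WHERE  tablespace_name IN ('TS_SOURCE', 'TS_TARGET');  -- placeholder names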
1. Is it necessary to reorganize a table and its indexes after deleting records from the table? I see some change in table size after reorganizing the table and indexes.
2. Will reorganizing the table and indexes improve database performance?
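For context, the usual reorganization after heavy deletes looks like the sketch below (names are illustrative). MOVE rebuilds the table below a new HWM but leaves its indexes UNUSABLE, so they must be rebuilt afterwards; on an ASSM tablespace, SHRINK SPACE achieves a similar end without invalidating indexes:

ALTER TABLE my_table MOVE;          -- compacts the table, resets the HWM
ALTER INDEX my_table_ix REBUILD;    -- required after MOVE

-- or, keeping indexes usable:
ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE;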
If the command succeeds:
>alter table my_table shrink space;
the segment will be defragmented and the High Water Mark (HWM) will be moved. But what is the importance of the HWM?
What is the difference between these commands?
>alter table my_table shrink space;         -- moves the HWM
>alter table my_table shrink space compact; -- does not move the HWM
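The two-phase form below illustrates the difference (a sketch, using the table name from the question). COMPACT does the long row-relocation work without moving the HWM, so it can run ahead of time; the final SHRINK SPACE then needs only a brief exclusive lock to reset the HWM and release the space:

ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE COMPACT;  -- phase 1: relocate rows, HWM unchanged
ALTER TABLE my_table SHRINK SPACE;          -- phase 2: move the HWM, release space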
I have a tablespace with around 32 GB allocated, but only 16 GB of it is actually used. When I tried to resize the datafile, it threw this error:
ORA-03297: file contains used data beyond requested RESIZE value
As I understand it, the used blocks are not contiguous in the datafile (due to fragmentation), and that is why it cannot be resized. If I export the tablespace's contents with Data Pump and re-import them, that would release the space.
But I want to know if there are any alternative ways to do the same.
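One alternative is to find the segments sitting at the tail of the datafile, since those are what block the resize, and move or rebuild just them; a sketch (FILE_ID 5 is a placeholder):

SELECT owner, segment_name, segment_type,
       MAX(block_id + blocks - 1) AS last_block
FROM   dba_extents
WHERE  file_id = 5  -- placeholder file id
GROUP  BY owner, segment_name, segment_type
ORDER  BY last_block DESC;

Tables near the end of the file can then be relocated with ALTER TABLE ... MOVE and indexes with ALTER INDEX ... REBUILD, after which the RESIZE can go further.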
We are running 11g (11.2.0.3). We have a "working table" that is empty at the beginning of the day. Then we start adding rows (insert) with a key column called STATE set to 100. At the same time, other apps pick up data in state 100, process it, and change the state to 200 or 300. Another app picks up data in state 200, processes it, and changes the state to 300 or 400.
So in summary, the table starts the day empty, then all the rows are in state 100; they slowly move through the other states (200, 300, etc.), and by the end of the day they are all in state 400.
My question is: what would be the best way to collect stats on this table?
I was thinking of creating an hourly job to collect stats on that table: exec dbms_stats.gather_table_stats ( ownname => 'SCOTT', tabname => 'WORK_T
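A sketch of such an hourly job with DBMS_SCHEDULER follows (the table name is cut off in the post, so WORK_TABLE below is a guess). For a table this volatile, another approach often suggested is to gather stats once at a representative point in the day and lock them with DBMS_STATS.LOCK_TABLE_STATS, so the optimizer always sees the same picture.

BEGIN
  dbms_scheduler.create_job(
    job_name        => 'GATHER_WORK_TABLE_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[begin dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'WORK_TABLE'); end;]',  -- table name assumed
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/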
I want to get the stale stats for tables residing in the APPS schema. Is there a table or view that holds these details, something like DBA_STALE_STATS? Currently I am checking the LAST_ANALYZED column of DBA_TABLES.
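There is no DBA_STALE_STATS, but DBA_TAB_STATISTICS has a STALE_STATS column that answers exactly this; a sketch:

-- Flush the in-memory DML monitoring info first so STALE_STATS is current
EXEC dbms_stats.flush_database_monitoring_info;

SELECT owner, table_name, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'APPS'
AND    stale_stats = 'YES';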
Why do we gather table statistics manually? Is it because of database performance?
I know that in Oracle Database 10g, Automatic Optimizer Statistics Collection reduces the likelihood of poorly performing SQL statements due to stale or invalid statistics, and enhances SQL execution performance by providing optimal input to the query optimizer.
I also know that statistics are considered stale, and are re-gathered by the automatic job, once 10% of a table's rows have changed.
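That 10% figure is the default STALE_PERCENT preference; from 11g onwards it can be read and changed per table, which is one reason manual control is sometimes wanted (a sketch; SCOTT.EMP is illustrative):

SELECT dbms_stats.get_prefs('STALE_PERCENT', 'SCOTT', 'EMP') FROM dual;

EXEC dbms_stats.set_table_prefs('SCOTT', 'EMP', 'STALE_PERCENT', '5');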
3) As a workaround, I compressed these 2 SWP tables with the OLTP option, and then I was able to drop the column from both SWP tables.
4) Is the below statement correct? IF A TABLE IS USING BLOCK-LEVEL COMPRESSION, THEN this error will come: ORA-39726: unsupported add/drop column operation on compressed tables.
If the above statement is correct, then how do I find out whether a table's data is using block-level compression? (See the sketch after the transcripts below.)
5) We have DBMS_COMPRESSION.GET_COMPRESSION_TYPE. Using this I just tried to find out, but I am getting "1" as the output, and I do not know the exact meaning of it.
Please confirm what the conclusion on this is.
SQL> declare
  2    rid rowid;
  3    n   number;
  4  begin
  5    select max(rowid) into rid from NOVAR.PAYMENT_SWP;
  6    n := dbms_compression.get_compression_type('NOVAR','PAYMENT_SWP',rid);
  7    dbms_output.put_line(n);
  8  end;
  9  /

PL/SQL procedure successfully completed.

SQL> SET SERVEROUTPUT ON
SQL> /
1

PL/SQL procedure successfully completed.
SQL> SELECT max(rowid) from NOVAR.PAYMENT_SWP;
MAX(ROWID)
------------------
AAsz4fAHSAAAD3IABs
(ii) 2nd table
SQL> set serveroutput on
SQL> declare
  2    rid rowid;
  3    n   number;
  4  begin
  5    select max(rowid) into rid from NOVAR.PREPAYMENT_SWP;
  6    n := dbms_compression.get_compression_type('NOVAR','PREPAYMENT_SWP',rid);
  7    dbms_output.put_line(n);
  8  end;
  9  /
1

PL/SQL procedure successfully completed.
SQL> SELECT max(rowid) from NOVAR.INVOICELINE_SWP;
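Two things may help here. Table-level compression settings are visible in DBA_TABLES, and the GET_COMPRESSION_TYPE return codes map to constants in DBMS_COMPRESSION: on 11.2, a result of 1 is COMP_NOCOMPRESS, meaning the sampled row itself is not stored compressed (rows inserted before compression was enabled, or through code paths that bypass it, report this). A sketch:

-- How the tables are defined (COMPRESSION / COMPRESS_FOR)
SELECT owner, table_name, compression, compress_for
FROM   dba_tables
WHERE  owner = 'NOVAR'
AND    table_name IN ('PAYMENT_SWP', 'PREPAYMENT_SWP');

-- Confirm what return code 1 means on this release
BEGIN
  dbms_output.put_line('comp_nocompress = ' || dbms_compression.comp_nocompress);
  dbms_output.put_line('comp_for_oltp   = ' || dbms_compression.comp_for_oltp);
END;
/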
- we have 55 blocks allocated to the table (still)
- 35 blocks are totally empty (above the HWM)
- 19 blocks contain data (the remaining block is used by the system)
- we have an average of about 2.8k free on each used block

Therefore, our table:

- consumes 19 blocks of storage in total
- of which 19 blocks * 8k block size - 19 blocks * 2.8k free = ~98k is used for our data

I am not too sure this calculation is accurate for getting the (data) size of the table.
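One way to cross-check figures like these on an ASSM tablespace is DBMS_SPACE.SPACE_USAGE, which reports the block-level breakdown of a segment directly (a sketch; the owner and table names are placeholders):

SET SERVEROUTPUT ON
DECLARE
  l_unf  NUMBER; l_unfb  NUMBER;
  l_fs1  NUMBER; l_fs1b  NUMBER;
  l_fs2  NUMBER; l_fs2b  NUMBER;
  l_fs3  NUMBER; l_fs3b  NUMBER;
  l_fs4  NUMBER; l_fs4b  NUMBER;
  l_full NUMBER; l_fullb NUMBER;
BEGIN
  dbms_space.space_usage('SCOTT', 'MY_TABLE', 'TABLE',
                         l_unf, l_unfb, l_fs1, l_fs1b, l_fs2, l_fs2b,
                         l_fs3, l_fs3b, l_fs4, l_fs4b, l_full, l_fullb);
  dbms_output.put_line('full blocks          = ' || l_full);
  dbms_output.put_line('75-100% free blocks  = ' || l_fs4);
  dbms_output.put_line('unformatted blocks   = ' || l_unf);
END;
/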
I have a partitioned table that has close to 2 billion rows and a PK over all columns. Because of time constraints, my app team wants the PK disabled while they pump in hundreds of thousands of rows with a batch process.
Now I am finding that when I enable the PK, it eats up close to 200 GB of temp space.
Is there something I can do to reduce the amount of temp space being used?
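The temp usage comes from the sort that builds the PK's unique index across all ~2 billion rows. One approach worth trying (a sketch; the table, constraint, and column names are placeholders) is to create the index yourself in parallel first, then enable the constraint against it; each parallel slave gets its own sort area, which can reduce spilling to temp, and NOLOGGING cuts the redo:

CREATE UNIQUE INDEX big_tab_pk_ix
  ON big_tab (col1, col2, col3)        -- placeholder column list
  PARALLEL 8 NOLOGGING;

ALTER TABLE big_tab
  ENABLE CONSTRAINT big_tab_pk USING INDEX big_tab_pk_ix;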
Well, I have an Oracle Database 10g, and the INDX tablespace had grown to 32 GB. I have now added a second datafile to the tablespace, but can I shrink this space? As I understand it, this tablespace is responsible for indexes, right? Is there a command to rebuild the indexes, or is there another trick?
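If INDX really holds only indexes, rebuilding them compacts their segments, after which the datafile can be shrunk down to its high-water mark; a sketch (the index name, file path, and target size are placeholders):

ALTER INDEX scott.my_index REBUILD ONLINE;   -- repeat for each index in INDX

ALTER DATABASE DATAFILE '/u01/oradata/db/indx01.dbf' RESIZE 10G;  -- placeholder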