Server Administration :: How To Drop A Column In Huge Table
Aug 8, 2011
I want to drop a column in a huge table which contains about 420,000,000 rows. I used the ALTER TABLE ... DROP COLUMN command and found that it takes a long time and generates huge redo.
Is there any quicker way to drop a column in a huge table?
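One common workaround, sketched below with illustrative table and column names: SET UNUSED is a data-dictionary-only operation that removes the column logically without rewriting the 420,000,000 rows, so the physical removal can be deferred to a maintenance window.

-- mark the column unused: near-instant, minimal redo
ALTER TABLE big_tab SET UNUSED COLUMN old_col;
-- later, in a maintenance window, physically reclaim the space
ALTER TABLE big_tab DROP UNUSED COLUMNS;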
I have a question about the recycle bin. When I drop a table, the table is moved to the recycle bin and its name is changed to BIN$..., but the constraints built on the table aren't shown in the recycle bin even though their names are also changed to BIN$... Why?
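A sketch for inspecting this behaviour: the RECYCLEBIN view lists only objects such as dropped tables and their dependent indexes, while the renamed constraints remain visible through the constraint views.

SELECT object_name, original_name, type FROM recyclebin;
SELECT constraint_name, table_name FROM user_constraints WHERE constraint_name LIKE 'BIN$%';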
I am facing a problem with the user_dump_dest directory. I have noticed a lot of trace files of huge size (many MBs each). I cleaned the directory, and after 4 days it had grown to 40 GB again.
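As damage control, a sketch (the value is illustrative): capping the size of individual trace files limits the growth while the root cause, such as a trace event left enabled, is tracked down.

ALTER SYSTEM SET max_dump_file_size = '100M';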
What happens if you mark a column unused in a compressed table and then run ALTER TABLE ... DROP UNUSED COLUMNS? We had a customer do this and Oracle threw an ORA-03113 (end of communication) error. They did a system restore before contacting us and blew away any evidence in the alert logs/trace files. They did this on a 400 GB compressed table.
My question is: when you drop an unused column from a compressed table, does it uncompress? Where does this uncompression occur? In the instance's default tablespace? In the tablespace configured for the table?
Basically, we are wondering whether the error was due to poor error handling when the system ran out of space during decompression, and we are trying to see if we can reproduce it. This was on an 11.1.0.7 system.
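A sketch of a minimal repro attempt on a scratch 11.1.0.7 instance (names illustrative). Note that basic compression normally restricts column drops, so this may raise ORA-39726 rather than the ORA-03113 the customer saw:

CREATE TABLE comp_test COMPRESS AS SELECT * FROM all_objects;
ALTER TABLE comp_test SET UNUSED COLUMN object_name;
ALTER TABLE comp_test DROP UNUSED COLUMNS;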
I cannot drop a datafile in a tablespace. How can I do it?
SQL> Alter Database Datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf' Offline;
Database altered.
SQL> Alter Tablespace tps_hxl Drop Datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf';
Alter Tablespace tps_hxl
*
ERROR at line 1:
ORA-03264: cannot drop offline datafile of locally managed tablespace
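A sketch of the usual way out (paths taken from the post): an offline datafile in a locally managed tablespace cannot be dropped directly, so recover it and bring it online first; ALTER TABLESPACE ... DROP DATAFILE also requires the file to be empty.

RECOVER DATAFILE '/u02/app/oracle/oradata/oracl/hxl06.dbf';
ALTER DATABASE DATAFILE '/u02/app/oracle/oradata/oracl/hxl06.dbf' ONLINE;
ALTER TABLESPACE tps_hxl DROP DATAFILE '/u02/app/oracle/oradata/oracl/hxl06.dbf';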
I created a new database. Now I want to drop the database, so I logged in as the SYS user and executed SHUTDOWN IMMEDIATE. The database was closed. Then I tried to execute STARTUP RESTRICT MOUNT so that I could run the DROP DATABASE command, but it gives the following error:
ORA-12154: TNS:could not resolve the connect identifier specified
I tried restarting the service, and also checked the lsnrctl command and the tnsnames.ora file. Everything is fine, but it still gives the same error as above.
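ORA-12154 at this point usually means the connection is going through a TNS alias rather than a local (bequeath) connection. A sketch of the sequence with a local connection, assuming a SID of orcl (illustrative):

$ export ORACLE_SID=orcl
$ sqlplus / as sysdba
SQL> startup restrict mount;
SQL> drop database;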
On a SAP system, I am trying to drop six indexes; the largest is 300 MB and the smallest is 50 MB.
I tried running DROP INDEX sapusername.index_name on the 50 MB index via SQL*Plus and it seems to be taking forever. Is there anything I can check on the database to see why it is taking such a long time?
I can leave it to run overnight, but I am worried that when I come back the next day it will still be hung. Is there any quick way of dropping the index, i.e. some kind of immediate drop?
I am not using SAP's BRTOOLS as the drop also hangs from there, and the SAP admin has approved the DBA dropping it from our end instead.
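A hung DROP INDEX is typically waiting on a lock held by another session rather than doing work. A sketch for checking this with the standard dynamic views:

SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;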
I want to drop some users that are no longer in use. What precautions do I need to take before I drop the users? I have taken a logical backup (export) of the users I want to drop. Is there anything I have missed?
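A sketch of pre-drop checks (OLD_USER is a placeholder): confirm what the user owns, that no sessions are connected as it, and that no other schema depends on its objects.

SELECT object_type, COUNT(*) FROM dba_objects WHERE owner = 'OLD_USER' GROUP BY object_type;
SELECT sid, serial#, status FROM v$session WHERE username = 'OLD_USER';
SELECT owner, name, type FROM dba_dependencies
WHERE  referenced_owner = 'OLD_USER' AND owner <> 'OLD_USER';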
I have one user, CD_APP, and one partitioned table, CD.T_FCDR_DT. The user has ALTER/INSERT/UPDATE/DELETE/SELECT privileges on the table.
Now when I try to drop a partition, I get the error below:

SQL> show user
USER is "CD_APP"
SQL> ALTER TABLE CD.T_FCDR_DT DROP PARTITION D01 UPDATE GLOBAL INDEXES;
ALTER TABLE CD.T_FCDR_DT DROP PARTITION D01 UPDATE GLOBAL INDEXES
*
ERROR at line 1:
ORA-01031: insufficient privileges

Do I have to grant some other privileges to this user?
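One possibility: dropping a partition of a table in another schema requires the DROP ANY TABLE system privilege; the ALTER object privilege alone is not enough. A DBA could try:

GRANT DROP ANY TABLE TO cd_app;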
SQL> drop MATERIALIZED view log on afccv.tbl_voicechat;
drop MATERIALIZED view log on afccv.tbl_voicechat
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
I don't want to kill the sessions or anything. Is there any way to set a priority for this task?
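A sketch, assuming Oracle 11g or later: DDL_LOCK_TIMEOUT makes the DDL wait for the lock instead of failing immediately, so no sessions need to be killed (the timeout value is illustrative).

ALTER SESSION SET ddl_lock_timeout = 300;
DROP MATERIALIZED VIEW LOG ON afccv.tbl_voicechat;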
Question 1: What is the limit on the number of columns in a single table? Is it the same for all types of tables? Question 2: How can the size of a table be estimated in advance, along with the undo generation amount, index size, and redo log size? (See the sketch after the quote below.) Question 3: What is a real-world scenario for creating a table in another schema? Question 4: I read the line below in a document, but I'm not able to understand it:
"we can't move types and extent tables to a different schema when the original data still exists in the database"?
We are unable to drop a user due to the error below. How can we drop this user without shutting down the database?
SQL> drop user mvm_2010 cascade;
drop user mvm_2010 cascade
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
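ORA-14452 suggests a session still has rows bound to one of the user's global temporary tables. A sketch for finding those sessions via the TO (temporary object) enqueue, so they can be ended without a shutdown:

SELECT s.sid, s.serial#, s.username
FROM   v$session s
JOIN   v$lock l ON s.sid = l.sid
JOIN   dba_objects o ON l.id1 = o.object_id
WHERE  o.owner = 'MVM_2010' AND l.type = 'TO';
-- then, if appropriate: ALTER SYSTEM KILL SESSION 'sid,serial#';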
I have a table with two CLOB columns and need to manually allocate space to the table and to its LOB segments. Are the following commands correct?
--to allocate an extent to the table
alter table emp allocate extent;
--the table has CLOB columns named col1 and col2
--to allocate extents to the LOB segments
alter table emp modify lob (col1) (allocate extent (size 10m))
/
alter table emp modify lob (col2) (allocate extent (size 10m))
/
I am trying to reorganise a tablespace with Enterprise Manager from manual to automatic, but script generation gives the following warning:
Quote: Reorganization includes a table with a LONG column. To support reorganization of tables with LONG columns that are greater than 32Kbytes the external procedure MGMT$REORG_MOVELONGCOMMAND must be configured properly. It has been determined this external procedure is not currently configured as expected. Configure SQL*Net appropriately to allow it to call the external procedure service process.
I made changes suggested by [URL] ..... in the section Using Reorganize Objects with LONG Columns but the warning persists. I noticed however that $ORACLE_HOME/lib/libnmuc.so is an empty file (0 bytes).
Oracle Database 10g 10.1.0.2.0
Solaris (SunOS 5.9 Generic_117171-07 (64-bit))
Oracle home: /data/lun1/oracle/product/10.1.0/db_1

{ORACLE_HOME}/network/admin/tnsnames.ora:
# tnsnames.ora Network Configuration File: /usr/oracle/product/10.1.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
[code]....
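For reference, a hedged sketch of a conventional extproc configuration for this release; the alias and key names below are the usual defaults, not taken from the poster's files, and the ORACLE_HOME is copied from the post.

# listener.ora
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /data/lun1/oracle/product/10.1.0/db_1)
      (PROGRAM = extproc)
    )
  )

# tnsnames.ora
EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA = (SID = PLSExtProc))
  )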
Through an Oracle APEX application I need to create/drop a user/schema in another Oracle database, i.e. create/drop a user remotely from an Oracle APEX application.
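One approach, sketched with a hypothetical link and user name: DDL cannot be issued directly across a database link, but DBMS_UTILITY.EXEC_DDL_STATEMENT can be invoked remotely over the link, provided the link connects as a suitably privileged user.

BEGIN
  DBMS_UTILITY.EXEC_DDL_STATEMENT@remote_db(
    'CREATE USER test_usr IDENTIFIED BY "ChangeMe_1"');
END;
/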
Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, and D are dependent on table A (with foreign key constraints). When I delete records from all the tables, B, C, and D each take at most 30-40 seconds, while table A takes 30-40 minutes. All tables have indexes.
Method I have used:
1. Created a temp table.
2. Then deleted from B, C, D, E, and F all records matching the temp table, in batches of 500:
delete from B where exists (select 1 from temp where b.col1=temp.col1);
3. Deleted from table A. Why is deleting records from table A taking so much time? (See the sketch after this list.)
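A sketch, assuming the slowdown comes from unindexed foreign key columns: each delete on parent A forces a scan of every child table whose FK column has no index. This query (simplified to single-column keys) lists FK columns lacking an index:

SELECT c.table_name, cc.column_name
FROM   user_constraints c
JOIN   user_cons_columns cc ON c.constraint_name = cc.constraint_name
WHERE  c.constraint_type = 'R'
AND NOT EXISTS (
  SELECT 1 FROM user_ind_columns ic
  WHERE  ic.table_name = cc.table_name
  AND    ic.column_name = cc.column_name
  AND    ic.column_position = cc.position);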
I have a table which has 300+ columns and 13 million rows, on a 32 KB block size, in a data warehouse environment. The number of rows in the table hasn't changed much, but I see that the time taken to collect statistics has increased significantly. Initially it took only 15 minutes (with the same 13M rows); now it runs for 4+ hours. The maximum number of parallel servers is 4 (unchanged). The table is not partitioned.
OS: HP-UX Itanium
Database: Oracle 11g (11.2.0.2)
Command is: exec dbms_stats.gather_table_stats(ownname=>'ABC',tabname=>'ABC_LOAD',estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE,DEGREE=>dbms_stats.auto_degree);
I would like to understand:
1) What could have caused this change in elapsed time, from 15 minutes to 4+ hours? 2) How can we gather statistics on a huge table at a faster rate?
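For question 2, a sketch (parameter values are illustrative): on a 300+ column table, a fixed small sample, no histograms, and an explicit parallel degree are usually much faster than the AUTO settings.

BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'ABC',
    tabname          => 'ABC_LOAD',
    estimate_percent => 5,                          -- fixed sample instead of AUTO
    method_opt       => 'FOR ALL COLUMNS SIZE 1',   -- skip histograms
    degree           => 4,
    cascade          => TRUE);
END;
/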
I need to change the precision of a column in an existing table. Facts about the table:
* Has over 130 columns
* More than 300 million records
* The column to modify is #121, which has data
* No primary key defined
Since the column has data, it is not possible to change the precision with a simple ALTER.
The second option is to create a temp column in the same table, update it from the original, null out the original, alter the original, update it back from the temp column, and drop the temp column. This approach is very expensive and time consuming.
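A possible online alternative, sketched with hypothetical names: DBMS_REDEFINITION can copy the data into an interim table that already has the new precision; with no primary key, ROWID-based redefinition is required. (COPY_TABLE_DEPENDENTS and a final sync are omitted here for brevity.)

-- interim table BIG_TAB_INTERIM has column #121 defined with the new precision
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TAB',
      DBMS_REDEFINITION.CONS_USE_ROWID);
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM',
      options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_INTERIM');
END;
/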
I am trying to delete 3 million records from a huge table which contains 3 billion records.
This is hitting the performance of the DB and halting other activities of my users. Is there any easy way to delete such data quickly? I have tried a FORALL delete, but it also takes a lot of time.
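A sketch of a batched delete with periodic commits (table, predicate, and batch size are illustrative): smaller transactions reduce undo pressure and the impact on other sessions.

DECLARE
  l_deleted PLS_INTEGER;
BEGIN
  LOOP
    DELETE FROM big_tab
    WHERE  status = 'PURGE'   -- hypothetical purge predicate
    AND    ROWNUM <= 10000;
    l_deleted := SQL%ROWCOUNT;
    COMMIT;
    EXIT WHEN l_deleted = 0;
  END LOOP;
END;
/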
I am trying to drop a column but it is taking a long time, and there is no locking session either. The table size is 500 GB. Do we have any way to drop the column fast?
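A sketch, assuming the column can first be marked unused (compare the first question in this thread): the CHECKPOINT clause makes the physical removal checkpoint every N rows, which keeps undo bounded and lets the operation be resumed after a failure.

ALTER TABLE big_tab SET UNUSED COLUMN old_col;
ALTER TABLE big_tab DROP UNUSED COLUMNS CHECKPOINT 10000;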
create table ACTIONARI_ARH
(
  actionar_id NUMBER(10) not null,
  id          VARCHAR2(20) not null,
  id_2        VARCHAR2(20),
  tip         VARCHAR2(1),
  nume        VARCHAR2(100),
  prenume     VARCHAR2(100),
  adresa      VARCHAR2(200),
[code]....
and this view
CREATE OR REPLACE VIEW ACTIONARI AS
SELECT "ACTIONAR_ID", "ID", "ID_2", "TIP", "NUME", "PRENUME", "ADRESA",
       "LOCALITATE", "JUDET", "TARA", "CERT_DECES",
       "DATA_REGISTRU" Data_operare, "USER_MODIF", "DATA_MODIF", "REZIDENT"
FROM (
  select
[code]....
The table has about 30 million records and holds people's names, addresses, a personal id (id), an internal id (actionar_id), and the date when a new address was added.
The view returns only the most recent info for each person (actionar_id).
If I run
a) select * from actionari a where a.actionar_id = 'nnnnnnn', the result is returned immediately; Oracle uses the index and does not do a full table scan.
b) select * from actionari a where a.actionar_id in ('nnnnnnn','mmmmmm','ooooooo'), the result is returned immediately; Oracle uses the index and does not do a full table scan.
My problem is when I use this view in a join. Let's assume I have another table with no more than 500 records, something like
create table SMALL_TABLE
(
  actionar_id NUMBER(10) not null,
  ......
);
and if I run
select * from SMALL_TABLE s join actionari a on a.actionar_id = s.actionar_id;
it takes forever to process; forever here means 1-3 minutes. Looking at the execution plan, Oracle does a full table scan, builds the view result for all 7 million unique persons, and only then joins it with the actionar_ids in the small table to return the desired 500-record result. I am using Oracle 10g.
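A sketch, assuming the optimizer is simply not pushing the join predicate into the view: hinting join-predicate pushdown and nested loops can force per-row index access instead of materializing the whole view.

select /*+ USE_NL(a) PUSH_PRED(a) */ *
from   SMALL_TABLE s
join   actionari a on a.actionar_id = s.actionar_id;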
I have to extract a huge amount of data from a couple of views. The problem is that they want it in TXT files with a fixed record length. There will be about 6 files, for a total of about 10 GB.
How can I export those views in the fastest possible way? If I'm not mistaken, exp and expdp can't create TXT files, so do I really need to use UTL_FILE or spool?
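A sketch of a fixed-width extract with SQL*Plus spool (view, column names, and widths are illustrative); for 10 GB, running several such spools in parallel, one per output file, is a common approach.

set pagesize 0 linesize 400 trimspool on feedback off heading off
spool /tmp/extract1.txt
select rpad(col1, 20) || rpad(col2, 30) || lpad(col3, 12)
from   some_view;
spool off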