Dropping Column From Huge Table

Aug 9, 2012

I am trying to drop a column from a huge partitioned table (not compressed, including its partitions). For information, I am working on an 11gR2 database.

I used the approach below:

1. alter table <tab_name> set unused column t1;
2. alter table <tab_name> drop unused columns;

Then I got the error message below:

ORA-39726: unsupported add/drop column operation on compressed tables

The first statement did work: I was able to add another column with the same name. But I still don't want the unused column on the table.
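
One thing worth checking (a hedged, untested sketch; table and partition names are placeholders): ORA-39726 can be raised when any individual partition has compression enabled, even if the table-level setting reports NOCOMPRESS.

select partition_name, compression, compress_for
from   user_tab_partitions
where  table_name = 'TAB_NAME';

-- if any partition reports ENABLED, moving it to NOCOMPRESS first may allow the drop
alter table tab_name move partition p1 nocompress;
alter table tab_name drop unused columns;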

View 5 Replies

Server Administration :: How To Drop A Column In Huge Table

Aug 8, 2011

I want to drop a column from a huge table which contains about 420,000,000 rows. I used the ALTER TABLE ... DROP COLUMN command and found that it takes a long time and generates huge redo.

Is there any quick way to drop a column from a huge table?
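
One commonly used alternative (a sketch, not tested against your table): mark the column unused, which is a quick data-dictionary-only change that generates almost no redo, and defer the physical removal to a maintenance window.

alter table big_table set unused (old_col);   -- fast, dictionary-only
-- later, during a quiet period:
alter table big_table drop unused columns;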

View 5 Replies View Related

Inserting Data In New Column Where Table Has Huge Data

May 26, 2013

I am trying to add a new column to a table and populate it with data from another column of the same table.

alter table POSITION add INT_MK_DATA_ID number(10,0) null;
update POSITION set INT_MK_DATA_ID = INST_MARKET_DATA_ID;
commit;

As there is a huge number of records in the POSITION table, it is taking forever to execute this update.
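
If the database edition allows it, parallel DML is one way to speed this up (an illustrative sketch; the degree of 8 is an assumption, not a recommendation):

alter session enable parallel dml;
update /*+ parallel(p 8) */ POSITION p
set    INT_MK_DATA_ID = INST_MARKET_DATA_ID;
commit;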

View 1 Replies View Related

Deleting Huge Records From Table?

Apr 29, 2013

Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, D are dependent on table A (with foreign key constraints). When I delete records from all tables, tables B, C, D take at most 30-40 seconds, while table A takes 30-40 minutes. All tables have indexes.

Method I have used:

1. Created Temp table

2. Then deleted all records from B, C, D, E, F matching the records in the temp table, in batches of 500.

delete from B where exists (select 1 from temp where b.col1=temp.col1);

3. Why is it taking so much time to delete records from table A?
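
A frequent cause of slow deletes from a parent table is unindexed foreign key columns on the child tables, which forces a scan of each child for every deleted parent row. A rough diagnostic sketch against the standard dictionary views (it is an approximation, not an exhaustive check):

select c.table_name, cc.column_name
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'
and    not exists (select 1
                   from   user_ind_columns ic
                   where  ic.table_name      = cc.table_name
                   and    ic.column_name     = cc.column_name
                   and    ic.column_position = cc.position);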

View 5 Replies View Related

PL/SQL :: Oracle 11g - Huge Table Statistics

Jun 13, 2012

I have a table which has 300+ columns and 13 million rows. It uses a 32 KB block size, in a data warehouse environment. The number of rows in the table hasn't changed much, but I see that the time taken to collect statistics has increased significantly. Initially it took only 15 minutes (with the same 13M rows); now it runs for 4+ hours. The maximum number of parallel servers is 4 (which is unchanged). The table is not partitioned.

OS: HP UX Itanium
Database: Oracle 11g (11.2.0.2)

Command is:
exec dbms_stats.gather_table_stats(ownname=>'ABC',tabname=>'ABC_LOAD',estimate_percent=>dbms_stats.auto_sample_size,cascade=>TRUE,DEGREE=>dbms_stats.auto_degree);

I would like to understand:

1) What could be the causes of this change in time, from 15 minutes to 4+ hours?
2) How can we gather statistics on a huge table at a faster rate?
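
One thing to experiment with (a hedged sketch, not a definitive fix): with 300+ columns, histogram collection driven by the default METHOD_OPT can come to dominate the runtime once column-usage data accumulates, so limiting histograms and pinning the degree is worth a test run:

begin
  dbms_stats.gather_table_stats(
    ownname          => 'ABC',
    tabname          => 'ABC_LOAD',
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'FOR ALL COLUMNS SIZE 1',   -- basic column stats, no histograms
    degree           => 4,
    cascade          => TRUE);
end;
/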

View 1 Replies View Related

Datatype Modification For Table With Huge Data

Aug 11, 2011

Below is my requirement.

I need to change the precision of a column in an existing table. Statistics about the table:

* has over 130 columns
* More than 300 million records
* Column to modify is #121 which has data
* No primary key defined

Since the column has data, it is not possible to modify with a simple Alter.

Second option: create a temp column in the same table, update it from the original, set the original to null, alter the original column, update it back from the temp, then drop the temp column. This approach is very expensive and time consuming.

Also, the column ID needs to be preserved as #121.
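
For reference, a sketch of the temp-column approach described above (column names and precisions are illustrative only); it keeps column #121 in place, at the cost of two full-table updates:

alter table big_tab add (col121_tmp number(12,4));
update big_tab set col121_tmp = col121;
update big_tab set col121 = null;
alter table big_tab modify (col121 number(12,4));
update big_tab set col121 = col121_tmp;
alter table big_tab drop column col121_tmp;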

View 3 Replies View Related

Bulk Deletes Of Data From Huge Table

Jun 13, 2013

I am trying to delete 3 million records from a huge table which consists of 3 billion records.

This is hurting the performance of the DB and halting other activities of my users. Is there any easy way to delete such data fast? I have tried a FORALL delete, but even that takes a lot of time.
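
A simple batched delete (an untested sketch; DEL_FLAG and the batch size are placeholders) that commits every N rows keeps the undo requirement small and reduces the impact on other sessions:

begin
  loop
    delete from huge_table
    where  del_flag = 'Y'
    and    rownum <= 50000;
    exit when sql%rowcount = 0;
    commit;
  end loop;
  commit;
end;
/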

View 5 Replies View Related

Query Optimization On Join With A View On Huge Table?

Jun 22, 2011

I have this table

create table ACTIONARI_ARH
(
actionar_id NUMBER(10) not null,
id VARCHAR2(20) not null,
id_2 VARCHAR2(20),
tip VARCHAR2(1),
nume VARCHAR2(100),
prenume VARCHAR2(100),
adresa VARCHAR2(200),

[code]....

and this view

CREATE OR REPLACE VIEW ACTIONARI AS
SELECT "ACTIONAR_ID","ID","ID_2","TIP","NUME","PRENUME","ADRESA","LOCALITATE","JUDET","TARA","CERT_DECES","DATA_REGISTRU" Data_operare,"USER_MODIF","DATA_MODIF","REZIDENT"
FROM (
select

[code]....

The table has about 30 million records and holds persons' names, addresses, a personal id (id), an internal id (actionar_id), and the date when a new address was added.

The view returns only the most recent info for each person (actionar_id).

If I run:

a) select * from actionari a where a.actionar_id = 'nnnnnnn', the result is returned immediately; Oracle uses the index and does not do a full table scan.

b) select * from actionari a where a.actionar_id in ('nnnnnnn','mmmmmm','ooooooo'), the result is returned immediately; Oracle uses the index and does not do a full table scan.

My problem is when I use this view in a join. Let's assume I have another table with no more than 500 records, something like:

create table SMALL_TABLE
(
actionar_id NUMBER(10) not null,
......
);

and if I run

select *
from SMALL_TABLE s
join actionari a
on a.actionar_id = s.actionar_id;

it takes forever to process, where forever means 1~3 minutes. Looking at the execution plan, Oracle does a full table scan, builds the view result for all 7 million unique persons, and only then joins that result with the actionar_ids in the small table and returns the desired 500-record result. I am using Oracle 10g.
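
One thing that sometimes helps on 10g (a hedged sketch, results depend on how the view is written): hinting join-predicate pushdown, so the view's per-person logic is evaluated only for the 500 ids in SMALL_TABLE rather than for the whole table:

select /*+ push_pred(a) */ *
from   SMALL_TABLE s
join   actionari a
on     a.actionar_id = s.actionar_id;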

View 2 Replies View Related

Backup & Recovery :: Taking A Dump Of Huge Table

Jul 10, 2012

Is it possible to take the dump of a huge table, say 500 GB, into multiple files and then import it into another database? If yes, how can we do it?

Note that it is a single table, 500 GB in size.
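
With Data Pump this is possible (a sketch; the directory object DPDIR, the file size and the degree of parallelism are assumptions, and PARALLEL needs Enterprise Edition). DUMPFILE with the %U substitution plus FILESIZE splits the export into multiple files, and the same file set is read back by impdp:

expdp system DIRECTORY=DPDIR TABLES=owner.big_table DUMPFILE=big_table_%U.dmp FILESIZE=20G PARALLEL=4 LOGFILE=big_table_exp.log

impdp system DIRECTORY=DPDIR TABLES=owner.big_table DUMPFILE=big_table_%U.dmp PARALLEL=4 LOGFILE=big_table_imp.log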

View 1 Replies View Related

Oracle 11g Express Edition - Load Huge Data Into Table

Nov 6, 2012

I am using Oracle 11g Express Edition. I have a file in .csv format, about 500 MB of data, which needs to be loaded into an Oracle table.

Which would be the best method to load the data into the table? The data is employee ticket history, which is huge.

How do I do a mass load of data into an Oracle table?
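
Two standard options are SQL*Loader with direct path, or an external table plus a direct-path insert. A sketch of the external-table route (the directory object DATA_DIR, the file name and the column list are illustrative, not your real layout):

create table ticket_history_ext (
  emp_id      number,
  ticket_id   number,
  ticket_text varchar2(200)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
    missing field values are null
  )
  location ('tickets.csv')
);

insert /*+ append */ into ticket_history select * from ticket_history_ext;
commit;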

View 3 Replies View Related

JDeveloper, Java & XML :: Load Huge XML File With Hundreds Of Records Into Oracle Table?

Jun 29, 2011

I need to load (using SQL*Loader) a huge XML file, with several hundred records, into an Oracle table. The XML file schema is pretty simple, and it looks something like this:

<dataroot>
<record>
<companyname>LimitSoft S.A.</companyname>
<address>Street Number 1</address>

[code]...

I'm trying to use the help included in this link [URL]...

When they refer to the schema [URL]..., what should I use? I do not need to use the Oracle website to register anything, right?

View 4 Replies View Related

PL/SQL :: Queries To Get Data In Batches From A Huge Data Table

Jan 3, 2013

Here is my problem: I need to create some files in my own format (let's say 5,000 records each) from a huge data table (it may contain 5 million records), and I want this creation to be multi-threaded.

So how can I form queries efficiently to fetch records like 1..5000, then 5001..10000, and so on? I can form something like select * from table where rownum < 5000 and not exists (already fetched records), but that is not efficient.
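
One way to form non-overlapping slices without re-scanning already-fetched rows (a sketch; the ordering column and bind names are illustrative):

select *
from  (select t.*,
              row_number() over (order by id) as rn
       from   huge_table t)
where  rn between :start_rn and :end_rn;   -- worker 1: 1..5000, worker 2: 5001..10000, ...

If the database is 11gR2 or later, DBMS_PARALLEL_EXECUTE can also split the table into rowid chunks for the worker threads, which avoids the sort entirely.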

View 5 Replies View Related

SQL & PL/SQL :: Dropping A Snapshot

Apr 13, 2008

I am unable to drop a snapshot. I tried even from SYS, and it gives the following error:

drop snapshot adm.dup_resource_status
9:46:29 ORA-08103: object no longer exists

But when I try to create a new snapshot with the same name:

CREATE MATERIALIZED VIEW ADM.DUP_RESOURCE_STATUS
..............
I get
ORA-12006: a materialized view with the same user.name already exists.
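
It may be worth checking which pieces of the materialized view still exist in the dictionary before retrying the drop (a diagnostic sketch, run as a privileged user), since ORA-08103 together with ORA-12006 suggests the mview metadata and its container table are out of sync:

select owner, object_name, object_type, status
from   dba_objects
where  owner = 'ADM'
and    object_name = 'DUP_RESOURCE_STATUS';

select owner, mview_name, container_name
from   dba_mviews
where  owner = 'ADM'
and    mview_name = 'DUP_RESOURCE_STATUS';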

View 31 Replies View Related

PL/SQL :: Dropping A Partition?

Jul 7, 2012

Do I have to create the indexes again if I drop a partition of a table?
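
For what it's worth: local indexes are maintained automatically, but global indexes become UNUSABLE unless the drop maintains them. A sketch (table and partition names are placeholders):

alter table sales drop partition p_2011_q1 update global indexes;

-- check whether anything was left unusable:
select index_name, status from user_indexes where status = 'UNUSABLE';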

View 6 Replies View Related

PL/SQL :: Dropping Partitions At Once

Nov 2, 2012

ALTER TABLE table_name DROP PARTITION partition_1000;
ALTER TABLE table_name DROP PARTITION partition_1001;
...
ALTER TABLE table_name DROP PARTITION partition_1320;

It is a delta partitioning scheme, so I am trying to remove 320 partitions at once, in PL/SQL Developer, for a single table.

Like this, I have to do the removal for more than 15 tables, one by one. Will this affect the database, e.g. by filling up the archive log destination with more logs?

What kind of problems am I going to face, as I am doing it on the production box directly?
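
Rather than hand-writing 320 statements, the drops can be generated in a loop (an untested sketch; the table and partition naming are assumed from the statements above):

begin
  for i in 1000 .. 1320 loop
    execute immediate
      'ALTER TABLE table_name DROP PARTITION partition_' || i;
  end loop;
end;
/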

View 5 Replies View Related

Dropping A User Profile

Jun 9, 2010

Would like to know:

When you drop a user profile, Oracle automatically assigns the default profile to the affected users, given that no other profile has been assigned to them.
Does this happen in the same session or only after a restart?

A user must have a profile at all times, so if a profile is dropped, the default profile should be assigned in the same session; otherwise, during that session the user would have no assigned profile, which shouldn't happen. Is that right?
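
A quick way to observe the behaviour on a test database (a sketch; APP_PROFILE and SOME_USER are placeholders): dropping a profile that still has users assigned requires CASCADE, and the affected users then show the DEFAULT profile straight away:

drop profile app_profile cascade;

select username, profile
from   dba_users
where  username = 'SOME_USER';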

View 2 Replies View Related

SQL & PL/SQL :: Dropping A Procedure During Execution

Apr 26, 2012

Can we drop a procedure while it is executing? Is there any way to drop the procedure with force?

View 5 Replies View Related

Dropping Users And Recreating?

Jun 18, 2012

We are refreshing the database for our application from the base load. Every time a refresh is required, we need to drop the users and recreate everything from the base load. For this we need to kill the sessions and drop the users, putting the instance in restricted mode and refreshing the DB. Sometimes during the killing and dropping process there are errors like "cannot drop a user that is currently connected".

set termout on
set echo on
spool Kill_sessions_drop_users.log
DECLARE

[code]...

How do I get rid of this?
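
A pattern that usually avoids the "currently connected" error (an untested sketch; APPUSER is a placeholder and the block needs DBA privileges) is to kill the user's sessions immediately before the drop, in the same script:

begin
  for s in (select sid, serial# from v$session where username = 'APPUSER') loop
    execute immediate
      'ALTER SYSTEM KILL SESSION ''' || s.sid || ',' || s.serial# || ''' IMMEDIATE';
  end loop;
end;
/
drop user appuser cascade;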

View 6 Replies View Related

Dropping Encrypted Tablespace

Mar 29, 2013

I created an encrypted tablespace for testing. I later dropped it but don't remember if I specified "including contents and datafiles". The tablespace was empty and there are no datafiles for it. However, the information for this dropped tablespace still shows up in v$encrypted_tablespaces. How do I get that lingering information removed?

View 3 Replies View Related

Reports & Discoverer :: Value In Table Column Based On Some Existing Column Value Automatically Without User Intervention

May 15, 2011

I have two questions.

(1) How can I fill in a value in a table column based on some existing column value automatically, without user intervention? My actual problem is that I have an 'expiry date' column and a 'status' column. The 'status' column should be filled automatically based on the current system date. For example, if the expiry date is '25-Apr-2011' and the current date is '14-May-2011', then the status should be set to 'EXPIRED'.

(2) How can I build a 'select' query in a report (Reports 6i) so that it shows me a list of items that are 'EXPIRED', 'NOT EXPIRED', or both, in a single report, based on the user's choice? 'EXPIRED' and 'NOT EXPIRED' are as defined in question 1.
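
For question 1, one option is not to store STATUS at all but derive it, via a view (or a virtual column on 11g); the same expression then serves the report query in question 2 with a user parameter. A sketch, with illustrative table and column names:

create or replace view items_with_status as
select t.*,
       case when expiry_date < trunc(sysdate) then 'EXPIRED' else 'NOT EXPIRED' end as status
from   items t;

-- report query, driven by a parameter :p_status ('EXPIRED', 'NOT EXPIRED' or 'ALL'):
select *
from   items_with_status
where  :p_status = 'ALL' or status = :p_status;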

View 3 Replies View Related

Modifying Fields With Huge Data

Aug 26, 2011

I have two questions; because they can be inter-related I am posting them in a single post. These queries are related to Oracle (PL/SQL).

1. I am trying to increase the size of a field in a table which has almost 2 million records, and the alteration query runs for almost an hour and then rolls back. Is there a better way of doing it?

2. I have modified the size of a field in a table from Varchar2(10) to Varchar2(20). Now, when I try to roll back the modification, it is not letting me change the size from Varchar2(20) back to Varchar2(10). No data has been inserted after the modification.
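
For what it's worth (a hedged note, not a definitive diagnosis): widening VARCHAR2(10) to VARCHAR2(20) is normally a dictionary-only change and near-instant, so an hour-long run suggests something else (a lock or an unrelated wait) was involved. Shrinking back to VARCHAR2(10) is only allowed when no stored value is longer than 10 characters, which can be verified first (THE_COL and BIG_TABLE are placeholders):

select max(length(the_col)) from big_table;    -- must be <= 10 before shrinking
alter table big_table modify (the_col varchar2(10));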

View 1 Replies View Related

Performance Tuning :: Update Currency Column Of One Table Using Other Table Currency Column?

May 11, 2012

I am trying to update the currency column of one table using the currency column of another table with the following SQL code.

update ODS.SO_ITEM OSI
set CURRENCY__CODE=(select currency__code from sa_sales.SO_ITEM SSI where SSI.ID=OSI.ID)

This update is taking a lot of time and never finishes.

Should I create an index on the source table (SA_SALES.SO_ITEM) or on the target table (ODS.SO_ITEM)?
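
An index on SA_SALES.SO_ITEM(ID) is what the correlated subquery needs, since it runs once per row of ODS.SO_ITEM. A MERGE is often a faster rewrite (a sketch of a near-equivalent statement; unlike the original UPDATE, it only touches rows that have a match):

merge into ods.so_item osi
using sa_sales.so_item ssi
on    (ssi.id = osi.id)
when matched then
  update set osi.currency__code = ssi.currency__code;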

View 7 Replies View Related

SQL & PL/SQL :: Dropping Users - Queue Tables

Dec 9, 2010

I am trying to drop a user but I get the following error:

*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-24005: must use DBMS_AQADM.DROP_QUEUE_TABLE to drop queue tables

A couple of questions on this error:

- I did a search on the forum and the thread [URL] appears to have a solution to it by using

DBMS_AQADM.DROP_QUEUE_TABLE(queue_table => 'DEF$_AQCALL', force =>TRUE);

What I am not sure of is: which queue table will the above statement drop if it is run as SYS and there are multiple schemas, with different schema names but the same queue table name?

- We recently rebuilt the database server, and prior to the rebuild I always managed to drop a user without the above error. Is it likely that some setting somewhere has changed?
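
On the first question: the queue table name can be schema-qualified, which removes the ambiguity when several schemas own a queue table with the same name (a sketch; SCOTT is a placeholder schema):

begin
  dbms_aqadm.drop_queue_table(
    queue_table => 'SCOTT.DEF$_AQCALL',
    force       => TRUE);
end;
/
drop user scott cascade;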

View 2 Replies View Related

SQL & PL/SQL :: Dropping Unused Columns In A Partition?

Jan 21, 2003

I have been trying to drop an unused column in a partitioned table, and the number of records stored in this unused column was very high. I kept on running into errors as follows:

ORA-01562: failed to extend rollback segment number 10

ORA-01650: unable to extend rollback segment R09 by 256 in tablespace RBS

I tried to "SET TRANSACTION USE ROLLBACK SEGMENT <name>" with a larger rollback segment, but it still did not work. Can I drop the "unused column" from each partition instead?

How to apply that? Or, what are my options besides increasing the size of the rollback segment?
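
One option that bounds the rollback requirement (a sketch; the checkpoint interval is an assumption) is the CHECKPOINT clause, which commits periodically during the drop and can be resumed if the operation is interrupted:

alter table part_table drop unused columns checkpoint 50000;
-- if the operation is interrupted part-way through:
alter table part_table drop columns continue;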

View 6 Replies View Related

SQL & PL/SQL :: Does Dropping A Schema Purge Data

Oct 21, 2011

I am working on Oracle 10g. I want to drop a schema and reuse the space in its data files for another schema.

If I drop the user with "DROP USER username CASCADE", are the tables in that schema purged by default, or do I have to explicitly drop the tables, purge the recycle bin, and then drop the schema?
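
For what it's worth (a sketch, worth verifying on a test schema first): objects removed by DROP USER ... CASCADE do not go to the recycle bin, but objects the user had dropped earlier may still sit there, so a purge afterwards makes sure the space really is free for other schemas:

drop user app_user cascade;
purge dba_recyclebin;

-- verify the freed space is now available:
select tablespace_name, round(sum(bytes)/1024/1024) as free_mb
from   dba_free_space
group  by tablespace_name;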

View 2 Replies View Related

Dropping Database Also Drop SP File?

Aug 21, 2013

When going through some white papers, I found something that confused me: it states that dropping a database will also drop the spfile.

Is that true?

The spfile is important, isn't it?

View 1 Replies View Related

PL/SQL :: Dropping Connected Users In Oracle DB?

Mar 27, 2013

I want to drop some users in an Oracle DB using SQL*Plus but I am getting an error:

SQL> DROP USER test CASCADE;
DROP USER test CASCADE
*
ERROR at line 1:
ORA-01940: cannot drop a user that is currently connected

But when I run the command below to find the connected sessions, I get no results:

SQL> select sid,serial# from v$session where username = 'test';

no rows selected
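
One likely explanation: usernames are stored in upper case in V$SESSION, so the lookup returns nothing for 'test'. A sketch of the checks with the case fixed (the kill statement is illustrative; substitute the real sid and serial#):

select sid, serial# from v$session where username = 'TEST';

-- if sessions do show up, kill them first, then retry the drop:
-- alter system kill session '<sid>,<serial#>' immediate;
drop user test cascade;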

View 3 Replies View Related

While Dropping Old Undo Tablespace / Getting Error

Dec 23, 2012

I cannot drop the old undo tablespace. While dropping the old undo tablespace we get an error:

ERROR at line 1: ORA-01548: active rollback segment '_SYSSMU77$' found, terminate dropping tablespace

SQL> select tablespace_name, status, segment_name from dba_rollback_segs where status != 'OFFLINE';

TABLESPACE_NAME STATUS SEGMENT_NAME
------------------------------ ---------------- ------------------------------
SYSTEM ONLINE SYSTEM
APPS_UNDO NEEDS RECOVERY _SYSSMU77$
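
Before forcing anything, it may help to see whether a live transaction still maps to the old undo segments (a diagnostic sketch, run as SYSDBA); a segment left in NEEDS RECOVERY usually has to be recovered or released before the tablespace drop will succeed:

select s.segment_name, s.status, t.status as txn_status, t.start_time
from   dba_rollback_segs s
left join v$transaction t on t.xidusn = s.segment_id
where  s.tablespace_name = 'APPS_UNDO';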

View 11 Replies View Related

Archiving And Purging Data From Huge Tables?

Apr 22, 2013

I'm currently working on a project which is to archive the old data and then purge the same data from the main table.

Here is a detailed description:

There are around 50-odd tables from which I would need to archive the old data (matching certain filter conditions, not date based). Meaning, I have to store the data in a temp table; once it is stored in the temp table, I then have to delete those rows from the main table. This temp table will later be exported and stored in an archive database (a separate database). These tables are very huge. One of the tables is actually 250 GB in size, and all these tables have many indexes built, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of a total of 540 million rows. On this table alone there are 50 bitmap indexes and 2 normal indexes. The table is partitioned on a date column, but this date column is not useful in identifying the old data. There are around 20 tables quite similar in size to the table described above; the rest are a little smaller.

We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle this activity? Most importantly, we should be able to complete it within the specified 48-hour window.

The solution we are now thinking of is:

1. Create the temp table: create tmp_tbl as select * from main_table where <<conditions identifying old data>>

2. Once the temp table is created, make a copy of the index definitions that exist on the main table and eventually drop those indexes.

3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows.

4. Once the bulk delete is finished, recreate the indexes on the main table using the definitions copied in the earlier step.

Our main worry is about step #4. Considering the size of these tables and the number of indexes to be rebuilt, we are not sure how long the index re-creation will run for each table.

Depending on the possibilities, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then we are not sure whether we will be able to pull off this activity.

The database we are using is Oracle 10g.
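
A sketch of how steps 1 and 2 could be scripted (the filter condition, owner and object names are illustrative; DBMS_METADATA is available on 10g), so that step 4 is just replaying the captured DDL, ideally with PARALLEL and NOLOGGING added to each rebuild:

create table tmp_tbl nologging as
  select /*+ parallel(m 8) */ *
  from   main_table m
  where  archive_flag = 'Y';          -- stand-in for the real archival conditions

-- capture the index DDL before dropping the indexes:
select dbms_metadata.get_ddl('INDEX', index_name, owner) as rebuild_ddl
from   dba_indexes
where  table_owner = 'APP_OWNER'
and    table_name  = 'MAIN_TABLE';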

View 1 Replies View Related






