Adding Column To Large Table
Aug 12, 2013

I want to add a column to a table which has a huge amount of data, and fill it with data from another table. What is the best way to do it? Is it faster to use CTAS instead of ALTER TABLE ADD COLUMN?
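For context, a minimal sketch of the two approaches, assuming a hypothetical target table big_t keyed by id and a source table src_t. CTAS is often faster for very large tables because it avoids the per-row undo/redo of a mass UPDATE and can run NOLOGGING and in parallel:

-- Option 1: ALTER then mass UPDATE (full undo/redo for every row)
alter table big_t add (new_col number);
update big_t b
set b.new_col = (select s.val from src_t s where s.id = b.id);
commit;

-- Option 2: CTAS rebuild, then swap the tables over
create table big_t_new nologging parallel 4 as
select b.*, s.val as new_col
from big_t b left join src_t s on s.id = b.id;
-- drop big_t, rename big_t_new to big_t, re-create indexes/grants/constraints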
I have two questions:
1) How can I add a new column at a particular position in an existing table, instead of at the end?
2) I created a table without specifying the datatype sizes, as below: CREATE TABLE dummy (name CHAR, age NUMBER). What default sizes are allocated for those columns?
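For what it's worth, two standard Oracle behaviours answer question 2: CHAR with no length defaults to CHAR(1), and NUMBER with no precision accepts up to the full 38 digits of precision. On question 1, Oracle only ever appends new columns at the end; placing a column in the middle requires rebuilding the table (e.g. via CTAS). A quick check of the defaults:

create table dummy (name char, age number);

select column_name, data_type, data_length, data_precision
from user_tab_columns
where table_name = 'DUMMY';
-- NAME  CHAR    1       (defaults to CHAR(1))
-- AGE   NUMBER  22      (precision is null: up to 38 significant digits)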
1. When querying the "alert_log" table I created from the alert log using the script below, 2 new files were created: ALERT_LOG_30499.bad and ALERT_LOG_30499.log.
ALERT_LOG_30499.log contains this error message:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 82, maximum: 80)
ALERT_LOG_30499.bad, so far, only contains datafile resize lines. The datafiles have plenty of space, and there is plenty of space on the SAN slice where the datafiles reside.
2. Each time I recreate the table with an increased VARCHAR2 size, the "actual" size in the log file increases as well:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log ORA-12899: value too large for column MSG (actual: 92, maximum: 90)
3. When I increased the VARCHAR2 size to 120+, it gave me this log output:
[oracle@tds_dw bdump]$ cat ALERT_LOG_30715.log
LOG file opened at 03/09/11 14:46:20
Field Definitions for table ALERT_LOG
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
MSG CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
TABLE DDL:
create table
alert_log ( msg varchar2(80) )
organization external (
type oracle_loader
default directory BDUMP
access parameters (
records delimited by newline
)
location('alert_damistst.log')
)
reject limit 1000;
**** QUESTION
I can still query the alert_log table in SQL*Plus, but those .log and .bad files are generated each time. Is this an issue?
Example of a piece of the results from "select * from alert_log;":
MSG
--------------------------------------------------------------------------------
Thread 1 advanced to log sequence 5254 (LGWR switch)
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Thread 1 cannot allocate new log
Checkpoint not complete
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Wed Mar 9 14:33:09 2011
Thread 1 advanced to log sequence 5255 (LGWR switch)
Current log# 2 seq# 5255 mem# 0: /tds_oradata/redo02a.log
Current log# 2 seq# 5255 mem# 1: /u02/damistst/REDO_LOGS/redo02b.log
13076 rows selected.
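For reference, a hedged rework of the DDL that should stop the rejects. The ORACLE_LOADER driver defaults the field to CHAR(255) terminated by ',', so any line longer than the varchar2 column lands in the .bad/.log files; widening both the column and the loader field (and optionally suppressing the files) looks roughly like this, untested against this exact log:

create table alert_log ( msg varchar2(4000) )
organization external (
  type oracle_loader
  default directory BDUMP
  access parameters (
    records delimited by newline
    nobadfile nologfile                 -- optional: suppress the .bad/.log files
    fields missing field values are null
    ( msg char(4000) )                  -- loader field past the 255-byte default
    -- note: the default field terminator is still ',', same as the original
    -- definition, so text after a comma on a line is not captured
  )
  location ('alert_damistst.log')
)
reject limit unlimited;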
I am using the TRIM function in my SELECT query, but I still get whitespace in my output. Because of this I get the error "value too large for column ..." when I load the data into a table through SQL*Loader.
define APPName="&1"
set heading off;
set verify off;
set newpage 0
set feedback off;
set trimspool on;
set termout off;
set pagesize 40000;
[code].....
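One hedged aside: TRIM with no arguments removes only the space character, so tabs, carriage returns, or non-breaking spaces survive it and can still overflow the target column in SQL*Loader. Two alternatives, with placeholder names:

-- strip all trailing whitespace variants, not just spaces
select rtrim(col1, ' ' || chr(9) || chr(10) || chr(13)) as col1
from source_table;

-- or with a regular expression (10g+)
select regexp_replace(col1, '[[:space:]]+$') as col1
from source_table;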
How do I add a column to an existing table such that, when rows are added to the table, this column gets the same default value?
alter table thetable add (new_column_name varchar2(100) default 'bbubu');
I've been trying to write some code to add a column if it does not exist, as the code will be run numerous times and will be parameterized in other software to run across multiple tables.
Here is what I have made so far:
DECLARE
COLEXISTS integer;
vCmdStr varchar2(4000);
[Code]....
I can't figure out where the invalid character is.
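A frequent cause of "invalid character" (ORA-00911) in this pattern is a trailing semicolon inside the string handed to EXECUTE IMMEDIATE; dynamic SQL statements must not end with one. A minimal sketch of the whole routine, with hypothetical table/column names:

DECLARE
  COLEXISTS integer;
  vCmdStr varchar2(4000);
BEGIN
  SELECT COUNT(*)
  INTO COLEXISTS
  FROM user_tab_columns
  WHERE table_name = 'MYTABLE'
  AND column_name = 'NEW_COL';

  IF COLEXISTS = 0 THEN
    -- note: no trailing semicolon inside the dynamic statement
    vCmdStr := 'ALTER TABLE mytable ADD (new_col VARCHAR2(100))';
    EXECUTE IMMEDIATE vCmdStr;
  END IF;
END;
/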
I created a table which has hundreds of entries in it:
create table reg_user
(
USERNAME VARCHAR2(50),
PASSWORD VARCHAR2(20)
)
***
[Code]..
1. In this table, how can I update the Nth row?
2. Now I need to add a USERID column to this table, which should take values from 1 up to the number of rows. I do not want to drop the table and re-create it with USERID, or update each row manually. Is there another way to add USERID and populate it with IDs from 1 to the maximum?
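For question 2, a sketch that avoids any rebuild. ROWNUM in a plain UPDATE hands out consecutive numbers 1..N, though in no guaranteed order; if a specific order matters, the numbering would need a MERGE driven by an ordered subquery instead:

alter table reg_user add (userid number);

-- numbers every existing row 1..N (arbitrary order)
update reg_user set userid = rownum;
commit;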
I have encountered some problems in SQL. I want to create a table with a bunch of prepared data. For ease of use, I chose to generate a SQL file which contains all the SQL statements used to create the table and insert the data, so all the data can only be inserted into the table using SQL statements.
My questions:
1) If the data for a column is large (for example, 1 MB of text), how do I insert it using SQL? Is there a piecewise method?
2) And how can I insert BLOB data using a SQL statement?
What I want is to enclose all the operations in a single SQL file; when the table is needed, just execute this SQL file.
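A hedged sketch of the piecewise idea, runnable from a plain SQL script. SQL string literals are capped at 4,000 bytes (32,767 in PL/SQL), so the large text goes in as an empty LOB that is appended chunk by chunk, and binary data can be supplied per chunk as hex via HEXTORAW. Table and column names are placeholders:

create table docs (id number, body clob, img blob);

DECLARE
  v_clob CLOB;
  v_blob BLOB;
BEGIN
  INSERT INTO docs (id, body, img)
  VALUES (1, EMPTY_CLOB(), EMPTY_BLOB())
  RETURNING body, img INTO v_clob, v_blob;

  -- append the text piecewise; keep each literal under the PL/SQL limit
  DBMS_LOB.WRITEAPPEND(v_clob, LENGTH('first chunk of text...'), 'first chunk of text...');
  DBMS_LOB.WRITEAPPEND(v_clob, LENGTH('second chunk of text...'), 'second chunk of text...');

  -- binary data as hex: 'DEADBEEF' is 4 bytes
  DBMS_LOB.WRITEAPPEND(v_blob, 4, HEXTORAW('DEADBEEF'));
  COMMIT;
END;
/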
I want to add some new columns to an existing table that has over 10 million rows, but not at the end of the existing columns; my requirement is that the new columns sit between existing columns. Is that possible?
While creating an external table, I am getting an error:
ORA-12899: value too large for column PRC_REC_TYPE (actual: 2, maximum: 1)
But when checking the CSV file, I found that prc_rec_type holds a value of length 1. I am uploading the CSV file as well.
-- Create table
create table ET_PGIT_POL_RISK_COVER
(
prc_pol_no VARCHAR2(60),
prc_end_no_idx VARCHAR2(22),
prc_sec_code VARCHAR2(12),
prc_risk_id VARCHAR2(12),
prc_smi_code VARCHAR2(12),
[code].....
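"actual: 2, maximum: 1" for a value that looks one character wide usually means an invisible second byte, most often a carriage return when a Windows-generated CSV is read on Unix (or a UTF-8 byte-order mark on the first line). A hedged sketch of the usual fix, delimiting records explicitly on CR+LF so the CR is not appended to a field:

access parameters (
  records delimited by '\r\n'       -- instead of "delimited by newline"
  fields terminated by ','
  missing field values are null
)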
I have 2 data blocks (maintenance and maintenanceparts) in one form, and I would like to add up the values of the "LineCost" attribute in the maintenanceparts data block and put the sum into the "Totalcost" attribute in the maintenance data block.
I tried this code:
"begin
select sum(maintenanceparts.LineCost)into :maintenance.totalcost
from MaintenanceParts;
end;"
But it ended up adding all the records that were saved in that attribute and putting the total into Totalcost. I am new to Oracle.
We received design advice to add columns to track the updates and deletes done on each row in our tables:
- DELETED_DATE_TIME
- DELETED_BY
- UPDATE_DATE_TIME
- UPDATED_BY
In our architecture, the application can only use functions/procedures to access or modify data. Each function logs the action, the executed SQL statement, the Oracle error, the user terminal, and the user into a unified log table, using v$ views in a general log function that is called after execution or on error.
The only advantage is that it would be easier and faster to find the delete and last-update information, versus the cost in space and design modification.
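A minimal sketch of the column additions plus a trigger that maintains the update pair; the delete pair only makes sense with soft deletes, since a hard DELETE removes the row that would carry the stamp. The table name is hypothetical:

alter table orders add (
  updated_by        varchar2(30),
  update_date_time  date,
  deleted_by        varchar2(30),
  deleted_date_time date
);

create or replace trigger orders_audit_trg
before update on orders
for each row
begin
  :new.updated_by       := user;
  :new.update_date_time := sysdate;
end;
/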
I am having a problem inserting a long file (80,000 bytes) into a CLOB column in an Oracle Database 10g.
ERROR at line 23:
ORA-06550: line 23, column 1:
PLS-00172: string literal too long
I am calling the Oracle stored procedure through a Unix shell script as follows:
#!/bin/ksh
cd /usr/home/dfusr/backup
i=`cat pe_proxy_master_2160.Thu.log | sed "s/'/''/g"`
sqlplus username/password@db 1> temp.log <<!
DECLARE
BEGIN
Write_Text_To_CLOB(1,'$i');
COMMIT;
END;
/
The file pe_proxy_master_2160.Thu.log is about 50,000 bytes.
My stored procedure is as follows:
CREATE OR REPLACE PROCEDURE Write_Text_To_CLOB (
p_id IN NUMBER
, p_clob IN VARCHAR2
)
IS
[code]....
I also tried with the ojdbc5.jar and ojdbc14.jar files in the classpath.
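PLS-00172 occurs because the whole file is interpolated into a single PL/SQL string literal, and literals are capped at 32,767 bytes. A hedged alternative lets the database read the file itself via a directory object and DBMS_LOB.LOADCLOBFROMFILE; the directory, table, and column names here are hypothetical:

create or replace directory log_dir as '/usr/home/dfusr/backup';

DECLARE
  v_bfile BFILE   := BFILENAME('LOG_DIR', 'pe_proxy_master_2160.Thu.log');
  v_clob  CLOB;
  v_dest  INTEGER := 1;
  v_src   INTEGER := 1;
  v_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  v_warn  INTEGER;
BEGIN
  INSERT INTO my_clob_table (id, data)
  VALUES (1, EMPTY_CLOB())
  RETURNING data INTO v_clob;

  DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(v_clob, v_bfile, DBMS_LOB.LOBMAXSIZE,
                            v_dest, v_src,
                            DBMS_LOB.DEFAULT_CSID, v_lang, v_warn);
  DBMS_LOB.CLOSE(v_bfile);
  COMMIT;
END;
/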
I am exporting/importing from 10.2.0.4.0 to 11.2.0.3.0, but during the import some rows are rejected:
IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column XXXX (actual: 51, maximum: 50)
Column 1 264
Column 2 123432
Column 3
Column 4 7
[code]....
Having looked at the data in the source system, I cannot see the character "Â" in column 11; I think this is what is causing the issue. Why is Oracle adding this character to the field? How can I fix this without having to modify the table in the new system to allow more characters?
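A likely explanation, offered as a hypothesis: character-set conversion. If the 10g source is single-byte (e.g. WE8ISO8859P1) and the 11g target is AL32UTF8, accented characters expand to two bytes, so 51 bytes no longer fit a VARCHAR2(50) declared with byte semantics; "Â" is typically how a stray non-breaking space (0xA0) renders after such a conversion. The declared length can stay at 50 if the column counts characters instead of bytes, which may be close enough to "no table change":

-- same declared length, character semantics instead of byte semantics
alter table xxxx modify (column11 varchar2(50 char));

-- or set the default for the session before creating objects / importing:
alter session set nls_length_semantics = char;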
Oracle 11g r2, APEX 4.1.1.00.23.
I have some classic reports.
I go to Report Attributes, then click Add Column Link in the "Tasks" menu on the right; it adds a column link, and I just add some text for the link and a page to go to. Then I run the report and I get:
report error: ORA-01403: no data found
This is tested with several classic reports on multiple pages.
Debug mode shows me :
0.43816 0.00240 ...Execute Statement: select distinct [...] order by 3,11 ,4
0.44056 0.00162 print column headings
0.44218 0.04816 rows loop: 25 row(s)
0.49037 0.00141 report error: ORA-01403: aucune donnée trouvée
0.49175 0.00078 Computation point: After Box Body
When I run the query in my favorite tool, I get the expected results.
Did I miss something?
I need to dump the contents of a very large table into text files for archiving as we retire this old DB. The table has about 16 million rows, and a few of the columns are up to 4,000 characters wide (VARCHAR2(4000)). I've got 2 problems:
1) How can I select records that occur in a certain month of a year (there is a date column) and put the selected records into a file?
2) I don't have access to the server OS, so UTL_FILE is not possible. The output is also so large that I'm having trouble with DBMS_OUTPUT.PUT_LINE.
I'm trying to get the first block of the IF working first, so the rest is just placeholders.
DECLARE
v_mm number (2);
v_yyyy number (4);
min_mm number (2);
min_yyyy number (4);
max_mm number (2);
max_yyyy number (4);
min_date date;
[code]....
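Since SPOOL writes on the client while UTL_FILE needs the server OS, one hedged approach is a plain SQL*Plus script per month, with no PL/SQL or DBMS_OUTPUT at all. Table, column, and path names are placeholders:

set pagesize 0 linesize 32767 trimspool on feedback off heading off

spool /local/archive/big_table_2011_03.txt

select col1 || '|' || col2 || '|' || to_char(date_col, 'YYYY-MM-DD')
from big_table
where date_col >= date '2011-03-01'
and date_col < date '2011-04-01';

spool off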
I have three tables, and each of them has around 30 lakh (3 million) records.
I am retrieving records from these tables using joins, but it is taking a long time to produce output.
Can partitioning improve the performance?
How do I estimate the next extent size for a very large table? What should I take into account? Is there any formula for that?
I have two large tables (rptbody and rpthead) which have millions of records or even more. Below is the table schema:
describe rpthead
Name Null Type
--------------------------- -------- -------------
RPTNO NOT NULL NUMBER
RPTDATE NOT NULL DATE
RPTD_BY NOT NULL VARCHAR2(25)
PRODUCT_ID NOT NULL NUMBER
[code]...
What I want is to get all rows from the rptbody table whose RPTNO belongs to a particular product_id in the rpthead table. Here's the SQL:
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
t0.RPTNO IN
(
SELECT t1.RPTNO FROM RPTHEAD t1 where t1.PRODUCT_ID IN ('4647')
)
)
ORDER BY t0.LINENO
Since the result set is pretty large, my application (think of it as a couple of jobs, each of which should finish within a time window) can only process a subset of the data at a time, so I need pagination so that the next job can continue the processing until all data is processed. Below is the SQL with pagination:
select * from (
select a.*, ROWNUM rnum from
(
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
[code]....
As you can see, each query takes 100 rows from the DB. The problem is that the query is taking too much time (10+ minutes). I know the slowness is due to "ORDER BY t0.LINENO", but it's required for the pagination.
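A hedged alternative is keyset ("seek") pagination: each job remembers the last LINENO it processed and asks for the next 100 rows past it, so Oracle never has to number and skip the earlier rows. This assumes LINENO is unique in the result (otherwise the key needs RPTNO plus LINENO) and works best with an index on it:

select *
from (select t0.lineno, t0.comments, t0.rptno, t0.upd_date
      from rptbody t0
      where t0.lineno > :last_lineno   -- highest value the previous batch saw
      and t0.rptno in (select t1.rptno
                       from rpthead t1
                       where t1.product_id in ('4647'))
      order by t0.lineno)
where rownum <= 100;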
I am trying to create a new index on a large table of around 100 GB, but I am getting the following error:
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP.
The TEMP tablespace size is 20 GB.
Does this mean that the whole index will be built in the TEMP tablespace first?
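Essentially yes: CREATE INDEX sorts all the keys, and whatever exceeds the session's PGA spills into TEMP, so for a 100 GB table the sort can easily outgrow 20 GB. Hedged remedies, with file name and sizes purely illustrative:

-- give TEMP more room
alter tablespace temp add tempfile '/u02/oradata/temp02.dbf' size 30g;

-- and/or build the index in parallel with nologging to spread and shrink the work
create index big_ix on big_table (col1) nologging parallel 8;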
Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, and D depend on table A (with foreign key constraints). When I delete records from all the tables, tables B, C, and D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.
Method I have used:
1. Created a temp table.
2. Then deleted all records from B, C, D, E, and F matching the temp table, in batches of 500:
delete from B where exists (select 1 from temp where b.col1=temp.col1);
3. Why is it taking so much time to delete records from table A?
Is it the case that when deleting data from such a master table, Oracle refers to all dependent tables even if no dependent data is present?
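Yes: each row deleted from the parent forces a check in every child table to verify no referencing rows remain, and if the child's foreign key columns are unindexed that check is a full scan per parent row, which fits the 30-40 minute symptom. A hedged way to find and fix the unindexed FKs (the index name and column are placeholders):

-- list the foreign key columns that reference table A
select a.table_name, a.column_name, a.constraint_name
from user_cons_columns a
join user_constraints c on c.constraint_name = a.constraint_name
where c.constraint_type = 'R'
and c.r_constraint_name in (select constraint_name
                            from user_constraints
                            where table_name = 'A');

-- then index each unindexed FK column, e.g.
create index b_col1_fk_ix on b (col1);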
I am working with an online application whose database is Oracle 10g. We have a table with 10 million rows, and this table is expected to keep growing. Moreover, we cannot archive some of these rows, as they are required for referencing.
We have all the necessary indexes on the table, but querying it takes a lot of time, especially when it is joined with other tables. What are some methods with which I can manage this table better, so that queries joining it execute faster?
SELECT
TAB1.C6,
TAB1.C8,
TAB1.C10,
TAB3.C4,
[code]....
I created a table, but I want to add a unique check to it, as I forgot to apply it when I created the table. Is it possible to make the field unique after having created the table, or do I have to drop the table and re-create it?
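No rebuild needed; a unique constraint can be bolted on afterwards, provided the existing rows hold no duplicates (names here are hypothetical):

alter table mytable add constraint mytable_col_uq unique (col);
-- fails with ORA-02299 if duplicate values already exist in col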
I've got a lookup table and I want it to display text instead of just the ID number. How would I do that? For example, if
1 = Hello
2 = Bye
it would display "Hello" instead of "1"
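A minimal sketch with hypothetical names: join the main table to the lookup table and select the text column instead of the ID:

select m.id, l.display_text
from main_table m
join lookup_table l on l.id = m.greeting_id;
-- a row with greeting_id = 1 now shows 'Hello' instead of 1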
I want to sum timestamp values in a table, i.e. a GROUP BY query.
example : Select Name, Sum(Timestamp column) from Emp group by name;
OR
I have time values in a character column of a table, i.e. table TT, column t1 CHAR(5), with values like '03:02'.
These values appear on more than one row; I want the sum of all the values, and the result should be a time value.
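SUM cannot be applied to timestamps or to 'HH:MI' strings directly; a hedged approach converts each value to minutes, sums them, and formats the total back. This assumes the column really holds 'HH24:MI' strings; totals beyond 99 hours would need a wider format:

select to_char(trunc(total_min / 60), 'FM00') || ':' ||
       to_char(mod(total_min, 60), 'FM00') as total_time
from (select sum(to_number(substr(t1, 1, 2)) * 60 +
                 to_number(substr(t1, 4, 2))) as total_min
      from tt);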
We have two databases running on 10.2.0.4 and 9.2.0.8. Both have the same unpartitioned table, 80 GB in size. I am exporting the table on 10g using PARALLEL=8 and a DUMPFILE spec with the %U option; that took around 4 hours.
On 9.2.0.8, I am exporting using the parameters below, which takes around 5 hours:
buffer=2000000
recordlength=64000
What options can I try to speed up the export in both versions?
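Two hedged levers, one per version; paths and names are illustrative. On 9i, classic exp can bypass the SQL evaluation layer with a direct-path export; on 10g Data Pump, spreading the %U dump files across directory objects on different disks helps keep the parallel slaves busy:

# 9.2.0.8: direct-path export
exp system/*** tables=big_table file=big_table.dmp direct=y recordlength=65535

# 10.2.0.4: Data Pump dump files split across two directory objects
expdp system/*** tables=big_table parallel=8 dumpfile=dp1:big_%U.dmp,dp2:big_%U.dmp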
I have a large table and want to calculate just a few values; therefore I don't want to create a new table, I want to update the existing one. Here is an example:
I want to calculate VALUE_L1 (the lagged VALUE) for rows with ID = 4 only (two values).
create table zTEST
( PRODUCT number,
ID number,
VALUE number,
VALUE_L1 number );
[Code]..
I tried this, but obviously window functions are not allowed in an UPDATE statement:
update zTEST
set VALUE_L1 = lag(VALUE) over (partition by PRODUCT order by ID)
where ID = 4
How can I do this?
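A hedged workaround is MERGE: compute the LAG over the whole table in a subquery, then join back by ROWID and update only the ID = 4 rows. A sketch against the zTEST definition above:

merge into zTEST z
using (select rowid as rid,
              ID,
              lag(VALUE) over (partition by PRODUCT order by ID) as prev_val
       from zTEST) s
on (z.rowid = s.rid and s.ID = 4)
when matched then update set z.VALUE_L1 = s.prev_val;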
I have a question on how to view sample data from a very large table (more than 10 million rows).
I just need to see some sample data from the large table (to see what kind of application-related data it contains).
My question is :
Select *
from Sample_table
where rownum < 10
Is this a good way to view the sample data?
My understanding is that ROWNUM is assigned to the rows once all the rows are retrieved.
So what is the best way to view it? I am not sure of any condition to put in at the initial time of querying.
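Two notes, hedged: ROWNUM is actually assigned as rows are produced, not after the whole set is retrieved, so WHERE ROWNUM < 10 stops the scan almost immediately and is a cheap way to peek. For a more representative spread than whichever blocks happen to come first, the SAMPLE clause reads a random subset:

-- about 0.01% of rows, chosen block-wise (fast on big tables)
select * from sample_table sample block (0.01);

-- or row-wise sampling
select * from sample_table sample (0.01);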
I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers which is the result set of a view.
I am working toward the following strategy in PL/SQL:
1 -- set up a cursor and execute the view (so I have the list of identifiers)
2 -- create a second cursor (or loop?) which accepts each of the identifiers in turn, executes a query (EXECUTE IMMEDIATE?) on the larger table, and INSERTs (or appends?) each result set into the GTT
3 -- the GTT then contains just the required subset of the larger table for further processing and eventual import into iReport for reporting.
The GTT is defined and ready to go; the larger table contains approx 40,000 rows, and I need to extract a dozen or so subsets which add up to approx 1,000 rows.
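Unless there is a reason to loop, a single set-based INSERT usually replaces both cursors: the view supplying the identifiers sits directly in the subquery, and no EXECUTE IMMEDIATE is needed. A sketch with hypothetical names:

insert into my_gtt
select t.*
from big_table t
where t.ident in (select v.ident from id_view v);
-- rows persist until commit or end of session, depending on the GTT's
-- on commit delete/preserve rows setting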
At the moment we use range-hash partitioning of a large dimension table (dimensional-model warehouse), with 2 levels: range partitioned on columns only available at the bottom level of the hierarchy, date and issue_id.
The result is a partition with a null value, and I assume we would get a null partition in the large fact table too if it were partitioned with reference to the large dimension. The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.
It has been suggested that we would get automatic partition-wise joins if we used reference partitioning. I would have thought we would get that with range-hash on both the dimension and the fact.
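For comparison, a hedged sketch of the reference-partitioned layout (11g+, names hypothetical): the fact table inherits the dimension's partitioning through the foreign key, so there is no separate null partition to manage and equi-joins on the key qualify for partition-wise joins without keeping two schemes aligned by hand:

create table dim_issue (
  issue_id   number primary key,
  issue_date date not null
)
partition by range (issue_date) (
  partition p2011q1 values less than (date '2011-04-01'),
  partition pmax    values less than (maxvalue)
);

create table fact_sales (
  issue_id number not null,
  amount   number,
  constraint fk_fact_dim foreign key (issue_id) references dim_issue (issue_id)
)
partition by reference (fk_fact_dim);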