Performance Tuning :: Unable To Update Newly Added Column In Existing Table
May 21, 2013
I am facing a challenge while running an update query on a newly added column in an existing table.
Environment Details
Oracle 9i, version 9.2.0.6
OS: Unix AIX 6.1
Number of records in table: 12,572,770
Below are the steps I followed.
1. In the table testtablename, I added a new column COLUMNNAME29 with datatype VARCHAR2(8).
2. After adding the new column, I executed the update query to populate the data from COLUMNNAME1 into COLUMNNAME29.
3. The query uses COLUMNNAME24 in the where clause, so that it is driven by the index.
SQL> desc testtablename
Name Null? Type
----------------------------------------- -------- ----------------------------
COLUMNNAME1 VARCHAR2(8)
COLUMNNAME2 CHAR(1)
COLUMNNAME3 CHAR(1)
COLUMNNAME4 VARCHAR2(8)
COLUMNNAME5 VARCHAR2(11)
[Code]...
Table altered.
SQL> select index_name, column_position, column_name from dba_ind_columns where table_name = 'TESTTABLENAME' order by index_name,column_position;
1. The update query hangs in the database; it does not progress (a single update should update approximately 40,000 records).
2. No Oracle error is thrown in the alert log or in the session where the query is being executed.
3. The wait event for the query is "db file sequential read".
4. When I update the newly added column COLUMNNAME29 with the static value '1', the update completes successfully in a few seconds.
5. When I change the static value to '1111' and execute the update statement, the query hangs in the database.
6. When I update an existing column (COLUMNNAME1) with the static value '1111', the update completes successfully.
Below are the queries that completed successfully:
Update Testtablename
Set Columnname29 = '1'
Where Columnname24 >= To_Date('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS' )
And Columnname24 < To_Date('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
[Code]...
Below are the queries that hang in the database:
Update Testtablename
Set Columnname29 = Columnname1
Where Columnname24 >= To_Date('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS' )
And Columnname24 < To_Date('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
Update Testtablename
Set Columnname29 = '1111'
Where Columnname24 >= To_Date('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS' )
And Columnname24 < To_Date('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
Below is the character set information in the database:
SQL> select * from v$nls_parameters;
PARAMETER VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
One of the redo log members was corrupted. To overcome this problem, I removed the corrupted redo log member and later added a new member to the group.
I would like to know: will the newly added log member get in sync with the existing log member? How does the newly added log member get synchronized with the existing member?
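For reference, a minimal sketch of the commands involved, with a hypothetical group number and file paths (not taken from the post):
ALTER DATABASE DROP LOGFILE MEMBER '/u01/oradata/orcl/redo03b.log';   -- remove the corrupted member
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo03b.log' TO GROUP 3;
-- The new member shows as INVALID in V$LOGFILE until the group is next written;
-- from that point LGWR writes to all members of the group in parallel, so no
-- separate synchronization step is required.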
I am trying to update columns of table A with the columns of table B. Both tables have 60,000 rows each. I tried this operation using the following two queries:
Query 1
Update TableA A
Set (A.col1, A.col2, A.col3) = (select B.col1, B.col2, B.col3 from TableB B where A.CODE = B.CODE)
Query 2
Update TableA A
Set (A.col1, A.col2, A.col3) = (select B.col1, B.col2, B.col3 from TableB B where A.CODE = B.CODE)
Where exists (select 1 from TableB B where A.code = B.code)
When I execute either of these two queries, it keeps executing indefinitely.
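As a hedged alternative sketch (not taken from the post): the same set-based update can be written as a MERGE, which touches only matching rows and avoids re-running the correlated subquery per row. It assumes CODE identifies at most one TableB row per TableA row, and an index on TableB(CODE) matters either way:
MERGE INTO TableA a
USING TableB b
ON (a.code = b.code)
WHEN MATCHED THEN
UPDATE SET a.col1 = b.col1,
           a.col2 = b.col2,
           a.col3 = b.col3;
-- Duplicate CODE values in TableB would raise ORA-30926 here.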
How does the length (width) of a column affect index performance?
For example, suppose I had an IOT table emp_iot with columns (id number, job varchar2(20), time date, plan number).
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if, in the "job" column, I stored numbers identifying the job names rather than the names themselves? For example, I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
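As a rough sketch of that idea (the lookup table name and constraints below are my own illustration, not from the post), the IOT key gets narrower because a NUMBER code is stored instead of a VARCHAR2(20) name in every index entry:
-- Hypothetical lookup table; names are illustrative.
CREATE TABLE job_codes (
  job_id   NUMBER       PRIMARY KEY,
  job_name VARCHAR2(20) NOT NULL UNIQUE
);
CREATE TABLE emp_iot (
  id     NUMBER,
  job_id NUMBER REFERENCES job_codes (job_id),
  time   DATE,
  plan   NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;
Whether the gain is measurable depends mostly on how much of each leaf entry the JOB value actually occupies.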
(1) How can I fill a value in a table column automatically, without user intervention, based on an existing column's value? My actual problem is that I have an 'expiry date' column and a 'status' column. The 'status' column should be filled automatically based on the current system date. For example, if the expiry date is '25-Apr-2011' and the current date is '14-May-2011', then the status should be filled as 'EXPIRED'. (A sketch covering both questions follows after question 2.)
(2) How can I build the 'select' query in a report (Reports 6i) so that it shows me the list of items that are 'EXPIRED', 'NOT EXPIRED', or both, in a single report, based on the user's choice? 'EXPIRED' and 'NOT EXPIRED' are as defined in question 1 above.
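A hedged sketch of one way to approach both points, assuming a hypothetical table named items with an expiry_date column (names are illustrative): the status is derived at query time rather than stored, so it never goes stale, and the report query can take a user parameter to filter.
-- Question 1: derive the status instead of storing it, so it is always
-- correct for the current SYSDATE.
SELECT item_id,
       expiry_date,
       CASE WHEN expiry_date < TRUNC(SYSDATE) THEN 'EXPIRED'
            ELSE 'NOT EXPIRED'
       END AS status
FROM items;
-- Question 2: a :p_status parameter of 'EXPIRED', 'NOT EXPIRED' or 'ALL'
-- could drive the report filter.
SELECT *
FROM (SELECT i.*,
             CASE WHEN i.expiry_date < TRUNC(SYSDATE) THEN 'EXPIRED'
                  ELSE 'NOT EXPIRED'
             END AS status
      FROM items i)
WHERE :p_status = 'ALL' OR status = :p_status;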
The query below is getting delayed because of bitmap indexes on the table. I am trying to avoid the indexes by using hints in the query but am unable to do so. Details are as follows.
explain plan for
SELECT cbu_cid, cbu_cid_customer_en_nm, COUNT (billg_acct_no) AS billg_acct_no, SUM (subscriber_cnt) AS subscriber_cnt
FROM daily_view
WHERE (billg_system_id = 'TM' AND mktg_sub_segment_a_nm = 'TM')
AND (cbu_cid NOT IN ('0001988048', '0001379962', '0001350469'))
GROUP BY cbu_cid, cbu_cid_customer_en_nm
HAVING SUM (subscriber_cnt) > 10
ORDER BY subscriber_cnt DESC;
[code]....
I have tried the ALL_ROWS and PARALLEL hints. How can I avoid those two indexes in the query?
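One hedged option is to tell the optimizer explicitly not to use them with a NO_INDEX hint. The index names below are placeholders, since the post does not name the bitmap indexes, and if DAILY_VIEW is a view rather than a table the hint has to reference the table alias inside the view (global hint syntax such as NO_INDEX(dv.t index_name)):
SELECT /*+ NO_INDEX(dv BILLG_SYSTEM_BIDX MKTG_SEGMENT_BIDX) */   -- placeholder index names
       cbu_cid, cbu_cid_customer_en_nm,
       COUNT (billg_acct_no) AS billg_acct_no,
       SUM (subscriber_cnt) AS subscriber_cnt
FROM daily_view dv
WHERE billg_system_id = 'TM'
AND mktg_sub_segment_a_nm = 'TM'
AND cbu_cid NOT IN ('0001988048', '0001379962', '0001350469')
GROUP BY cbu_cid, cbu_cid_customer_en_nm
HAVING SUM (subscriber_cnt) > 10
ORDER BY subscriber_cnt DESC;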
Can this be optimized? In DEV and IST we did not notice the problem since only 1,000 rows were there, but in PERF, with 2 million rows, this is taking a long time.
SET SERVEROUTPUT ON
DECLARE
  counter number := 0;
  CURSOR insertValues IS
    select roleid, productcode, functioncode, typecode, restrictiontype, value1
    from restrictions
    where actionmode = 'INSERT';
[code]...
Can this be done in a single update, since the selects and updates are happening on the same table?
One of my customers followed the mitigation document pertaining to vulnerability CVE-2012-3132.
New Document Mitigation steps for CVE-2012-3132 [ID 1482694.1]
This was applied to a database and it stopped the 'password' command from succeeding in SQL*Plus. Once the trigger was disabled, it worked fine.
ORA-00604: error occurred at recursive SQL level 1
ORA-06531: Reference to uninitialized collection
ORA-06512: at "SYS.NAME_SECURITY", line 165
ORA-06512: at line 2
Why is this behavior seen? By the way, he was able to change the password using TOAD; only SQL*Plus has the issue.
I am using a shell script to load Unix content into a database. I have captured the Unix data into a CSV file and I am using SQL*Loader to insert that CSV data into the database. The following is my control file content:
load data
infile data.csv
into table AVS_LOGS
fields terminated by ','
( RUNDATETIME, SERVER, DIRECTORY, FILENAME, LASTUPDATETIMESTAMP )
and I am using the SQL*Loader command in Unix: sqlldr $CLOGIN control=control.ctl log=test.log
But this works only if the table is empty. Now I am looking for something where I do not need to delete the data from the table each time; it should update the table.
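If the requirement is simply to keep loading new rows into a table that already has data, SQL*Loader's APPEND load method avoids the default INSERT behaviour (which demands an empty table). Truly updating existing rows would need a different approach, such as loading into a staging table and merging, but here is a minimal sketch of the control file with APPEND:
load data
infile data.csv
append                               -- load into the table even if it is not empty
into table AVS_LOGS
fields terminated by ','
( RUNDATETIME, SERVER, DIRECTORY, FILENAME, LASTUPDATETIMESTAMP )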
I am looking at an existing utility which inserts data into configuration tables. The utility is fairly basic: you simply add the UPDATE / INSERT / DELETE SQL commands to a .sql file, set up a few params in a .sh script to tell it which database/schema to run against, and away it goes, doing some logging, etc. on the way.
Most of the time this is fine. However there is one table that causes big performance problems. This large table holds rating data and it has two large triggers on it. It also gets updated quite a bit with new rating tariffs.
The triggers check that many fields are not null or hold certain values... but they also check that the dates of the rates do not overlap, etc. So, in short, they do a lot of work. I can see that these are the main performance obstacle. I have no ability to alter or disable these triggers; this is a core table supplied by the vendor and as such I cannot manipulate it.
So looking at the things I can change, what am I left with?... only the way I load the data..
I can consider using SQL*Loader to handle the INSERTs, or using the APPEND hint to perform a direct-path insert rather than issuing individual INSERT statements.
I can try to ensure that my data is sorted along the same lines as the index on the table, so that I am updating the index nodes in as streamlined a way as possible. Can I improve performance still further, or even circumvent the drag of the triggers?
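A sketch of the direct-path idea, with hypothetical staging/target table and column names; one caveat worth checking is that Oracle silently falls back to a conventional insert when the target table has enabled triggers, so the APPEND hint on its own may not sidestep the trigger cost:
-- Hypothetical names: RATES_STAGE would be populated first (e.g. via SQL*Loader),
-- and the ORDER BY is chosen to match the leading columns of the target index.
INSERT /*+ APPEND */ INTO rates_target
SELECT *
FROM rates_stage
ORDER BY rate_code, effective_from;
COMMIT;   -- after a direct-path insert the table cannot be queried again
          -- in the same transaction until you commit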
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
I created a new table named USERLOG with two fields from a previous VIEW. The table already consists of about 9000 records. The two fields taken from the VIEW weblog_views are C_IP (the IP address) and WEB_LINK (the URL). This is the code I used:
CREATE TABLE USERLOG AS SELECT C_IP, WEB_LINK FROM weblog_views;
I want to add another column to this table called USER_ID, which would consist of a sequence from 1 to 9000 to create a unique id for each existing row. I'm using Oracle SQL Developer: ODMiner version 3.0.04. I tried using the AUTO-INCREMENT option,
ALTER TABLE USERLOG ADD USER_ID INT UNSIGNED NOT NULL AUTO_INCREMENT;
But I get an error with this,
Error report:
SQL Error: ORA-01735: invalid ALTER TABLE option
01735. 00000 - "invalid ALTER TABLE option"
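Oracle has no AUTO_INCREMENT option (identity columns only appear in 12c), which is why ORA-01735 is raised. A hedged sketch of one way to get the same effect: add a plain NUMBER column, back-fill it, and use a sequence for future rows:
ALTER TABLE USERLOG ADD (USER_ID NUMBER);
-- Back-fill the existing ~9000 rows with 1..9000 (ROWNUM gives an arbitrary
-- but unique numbering).
UPDATE USERLOG SET USER_ID = ROWNUM;
COMMIT;
-- Sequence for rows inserted later, started past the existing values;
-- new rows would use userlog_seq.NEXTVAL (directly or from a BEFORE INSERT trigger).
CREATE SEQUENCE userlog_seq START WITH 9001;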
1) How can I add a new column at a particular position in an existing table, instead of at the end?
2) I created a table without specifying the datatype sizes, as in: Create table dummy (name char, age number). What default size will be allocated for those columns?
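On point 2, a quick sketch to check the defaults from the data dictionary; CHAR with no length defaults to CHAR(1), and NUMBER with no precision or scale simply accepts values up to NUMBER's maximum of 38 significant digits:
Create table dummy (name char, age number);
select column_name, data_type, data_length, data_precision, data_scale
from user_tab_columns
where table_name = 'DUMMY';
-- NAME : CHAR with DATA_LENGTH = 1 (i.e. CHAR(1))
-- AGE  : NUMBER with NULL precision/scale (no fixed limit below the 38-digit maximum)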
Now we are supposed to apply column-level TDE to some of our tables in the database. It will be an 'ALTER' on the columns. It involves 4 big tables, of which 3 are ~30GB each (one is a partitioned table) and another is ~800GB (not partitioned). The concern is: what is the most efficient/safest way to apply TDE to the columns? Below are the two options we have. (NOTE - We do have a downtime window during DB maintenance, but looking at the size of the tables, I suspect it might take a lot of time.)
1. Directly apply the 'ALTER' on the columns. (Note - I was testing locally; it took 3 hours for a 30GB table to ALTER the column to TDE.)
2. Use table redefinition for altering the column (creating an interim table with the column as TDE and then redefining the whole table).
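For reference, a hedged sketch of the syntax for both options, with placeholder owner/table/column names (not from the post); which one is faster for the 800GB table really has to be measured:
-- Option 1: in-place ALTER of the column.
ALTER TABLE big_table MODIFY (card_no ENCRYPT USING 'AES256');
-- Option 2: online redefinition into an interim table whose column is already
-- encrypted (interim table creation and error handling omitted).
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP_OWNER', 'BIG_TABLE',
                                    DBMS_REDEFINITION.CONS_USE_PK);
  DBMS_REDEFINITION.START_REDEF_TABLE('APP_OWNER', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP_OWNER', 'BIG_TABLE', 'BIG_TABLE_INTERIM',
                                          num_errors => l_errors);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP_OWNER', 'BIG_TABLE', 'BIG_TABLE_INTERIM');
END;
/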
SELECT CURRENTSTEP FROM (SELECT ( WFENTRY.NAME || ',' || CURRENTSTEP.STEP_ID ) AS CURRENTSTEP, (CASE WHEN WFENTRY.NAME IN
[Code]...
In this query I am concatenating two columns. I use this query as a subquery in my other queries and filter the results with: and CURRENTSTEP = ?
Here is how I use it:
select sys_audit_id from ( SELECT * FROM (SELECT F.FINDING_NUMBER,
[Code]....
I found that adding this as a subquery with the filter and CURRENTSTEP = ? slows my query down very much. As this is a derived column I cannot add an index on it, so how can I improve the performance of this subquery?
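Since the value is derived across a join, a function-based index on the concatenation is not really an option here. One hedged idea, assuming the caller can pass the two halves of the value separately, is to filter on the underlying columns (which can be indexed individually) instead of on the concatenated string; the join conditions elided from the original query still have to be kept:
-- Sketch only: :p_name and :p_step_id stand for the two halves of the value
-- currently bound to CURRENTSTEP = ?; the original join predicates between
-- WFENTRY and CURRENTSTEP must remain in place of the comment below.
SELECT WFENTRY.NAME || ',' || CURRENTSTEP.STEP_ID AS currentstep
FROM WFENTRY, CURRENTSTEP
WHERE WFENTRY.NAME = :p_name            -- can use an index on WFENTRY(NAME)
AND CURRENTSTEP.STEP_ID = :p_step_id    -- can use an index on CURRENTSTEP(STEP_ID)
/* ... plus the original join conditions ... */ ;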
We have a table called address containing the address fields, city, state, etc. The table will store a huge amount of data and we need to query it. I would like to know how we can speed up the query and improve its performance by creating indexes on these columns. The query is given below; note that the nullable columns can contain data.
I have an issue with a materialized view which has a nullable column; a query on this column takes approximately 2 minutes, whereas queries on the other, indexed columns take less than 10 seconds.
Here is the summary
SQL> Select Count (1), Count (VAT_NO) From Mv_customer;
If an index is created on VAT_NO, will that improve the performance? What kind of index can be created, considering that very few records have a VAT_NO?
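A minimal sketch, assuming a plain B-tree index is acceptable on the materialized view: single-column B-tree indexes store no entries for NULL keys, so with only a few populated VAT_NO values the index stays tiny, and COUNT(VAT_NO), which counts only non-null values, can be answered from the index alone:
-- Index name is illustrative.
CREATE INDEX mv_customer_vat_no_idx ON mv_customer (vat_no);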
I'm trying to collect histograms for column COL_C of table TAB_A (150K records), so that the index "BAD_IDX" will *not* be used in a query when the value is not selective.
This is my query:
SELECT COL_A, COL_B, COL_C, COL_D, COL_E, COL_F
FROM TAB_A
WHERE COL_A = 050
AND COL_B = 13012345
AND COL_C = 0
AND COL_D = 0
AND COL_D >= '07/23/2013 00:00:00'
ORDER BY COL_E ASC;
Now, I have the index "BAD_IDX" on columns (COL_C, COL_E), and the distribution of values looks like this:
select COL_C, count(*)   -- very unselective for 0, selective for the rest, also no histogram
FROM TAB_A
group by rollup(COL_C)
order by 2 desc;
and the result is 20k rows long (20k distinct values), so I'll post just the top part of it:
Now, the problem with the query was that "COL_B = 13012345" was the most selective predicate, and an index on it did not exist, so the index "BAD_IDX" was used, scanning 86k records (all the records with the value "0" in column COL_C)!
So, I created an index
Create index GOOD_IDX on TAB_A(COL_B) compute statistics;
However, the BAD_IDX index is still being used! I thought that maybe it is because of the lack of a histogram on column COL_C. I've also understood from the documentation I've read that the suitable histogram type is TOP-FREQUENCY, because although I have 20k distinct values for column COL_C, what makes the difference is the 86k records with value 0.
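A hedged sketch of gathering a histogram on just that column (the schema name is a placeholder); on versions before 12c this will be a height-balanced histogram since COL_C has more than 254 distinct values, and a true TOP-FREQUENCY histogram only becomes possible in 12c with the AUTO_SAMPLE_SIZE estimate:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',                  -- placeholder schema
    tabname          => 'TAB_A',
    method_opt       => 'FOR COLUMNS COL_C SIZE 254', -- histogram on COL_C
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/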
select distinct S.ID ID
from ods.hso_Scheduled H, ods.SO_SCHEDULED S
where S.insertion_date >= to_date('01-DEC-2011')
and S.insertion_date < to_date('01-FEB-2012')
and H.ID = S.ID
Of the two tables involved, HSO_SCHEDULED has 15 million records and SO_SCHEDULED has 7 million records.
I have created following indexes on these tables:
Indexes on SO_SCHEDULED:
Index name            Column name(s)
SS_IDX1               ID, SO_SUB_ITEM__ID
SS_IDX2               INSERTION_DATE
SS_IDX3               ID, INSERTION_DATE
SS_IDX4               ID, SO_SUB_ITEM__ID, INSERTION_DATE
SO_SCHEDULED_ID_PK    ID
I came across a situation where a nullable column is not using an index for the 'order by' clause. I added a NOT NULL condition to the 'where' clause but it wasn't useful. I don't want to make a composite index with a not-null column or with a constant, or modify the column to NOT NULL.
So I carried out test cases, during which I found that in one case the SQL statement does a 'fast full scan' for data access but does not use the index for the 'order by' sorting.
here are the steps
Initially I kept the column Nullable
SQL> create sequence s5;
Sequence created.
SQL> create table t5 as select s5.nextval id,a.* from dba_objects a where rownum<1001;
Table created.
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=16 pr=0 pw=0 time=4771 us)
   1000   TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1157 us)
Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------- ------------  ---------  ------------
  SQL*Net message to client                          68       0.00          0.00
  SQL*Net message from client                        68      49.49         49.72
********************************************************************************
select /*+ index(t i5) */ * from t5 t where id is not null order by id
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=5167 us)
   1000   INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3141 us)(object id 4673065)
Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------- ------------  ---------  ------------
  SQL*Net message to client                          69       0.00          0.00
  SQL*Net message from client                        69      22.89         28.04
Now I modified the 'id' column to Not Null
SQL> alter table t5 modify id not null;
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
COLUMN_NAME                    N
------------------------------ -
ID                             N
OWNER                          Y
OBJECT_NAME                    Y
SUBOBJECT_NAME                 Y
OBJECT_ID                      Y
DATA_OBJECT_ID                 Y
OBJECT_TYPE                    Y
CREATED                        Y
LAST_DDL_TIME                  Y
TIMESTAMP                      Y
STATUS                         Y
TEMPORARY                      Y
GENERATED                      Y
SECONDARY                      Y
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=16 pr=0 pw=0 time=2398 us)
   1000   TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1152 us)
Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------- ------------  ---------  ------------
  SQL*Net message to client                          68       0.00          0.00
  SQL*Net message from client                        68      37.74         37.91
********************************************************************************
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=4166 us)
   1000   INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3142 us)(object id 4673065)
Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------- ------------  ---------  ------------
  SQL*Net message to client                          68       0.00          0.00
  SQL*Net message from client                        68       8.28          8.45
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=6 pr=0 pw=0 time=1342 us)
   1000   INDEX FAST FULL SCAN I5 (cr=6 pr=0 pw=0 time=1093 us)(object id 4673065)
Elapsed times include waiting on following events:
  Event waited on                          Times Waited  Max. Wait  Total Waited
  ---------------------------------------- ------------  ---------  ------------
  SQL*Net message to client                          68       0.00          0.00
  SQL*Net message from client                        68       1.88          1.89
My questions are:
1) Why wasn't adding 'where id is not null' enough for the index to be used for the 'order by'?
2) When we got the 'fast full scan', why wasn't the index used for the 'order by' clause?
3) Do we need the indexed column in the where clause for it to be used for the 'order by' clause too?
4) Do we need the 'order by' clause at all if we are selecting only the indexed column with sequence-generated values?