Performance Tuning :: Update In A Loop?

Mar 11, 2011

Can this be optimized? In DEV and IST we didn't notice a problem since there were only 1,000 rows, but in PERF there are 2 million rows and this is taking a long time.

SET SERVEROUTPUT ON
DECLARE
  counter NUMBER := 0;
  CURSOR insertValues IS
    SELECT roleid, productcode, functioncode, typecode, restrictiontype, value1
    FROM   restrictions
    WHERE  actionmode = 'INSERT';

[code]...

Can this be done in a single UPDATE, since the SELECTs and UPDATEs are happening on the same table?
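For what it's worth, a minimal, hedged sketch of the shape that could take: the work done per row inside the loop moves into the SET clause, and the cursor's WHERE clause becomes the UPDATE's WHERE clause. Since the loop body is cut off above, the SET expression here is purely a hypothetical stand-in, not the actual logic:

UPDATE restrictions r
SET    r.value1 = UPPER(r.value1)   -- hypothetical per-row change; replace with the loop's logic
WHERE  r.actionmode = 'INSERT';

One set-based statement like this lets Oracle do the work in a single pass instead of 2 million individual fetch/update round trips.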




Performance Tuning :: FOR Loop And Driving Site?

Sep 18, 2012

Usually, when joining a local table and a much larger remote table, the best practice is to use a /*+ DRIVING_SITE(remote_table) */ hint.

*CASE1:

INSERT INTO another_local_table
SELECT /*+ DRIVING_SITE(remote_table) */ ...   -- column list
FROM   local_table,
       remote_table@remotedb
WHERE  <join condition>

*CASE2:
However, I saw this particular piece of code written as a row-by-row loop:

FOR x IN (SELECT id FROM local_table l)
LOOP
  INSERT INTO another_local_table

[code]...

So far I haven't seen the explain plans for either case, since most of the tables are global temporary tables. But in terms of the logic of the two cases and the common best practice, shouldn't CASE1 logically have better performance?

For me, it's like taking a trip to the grocery store. CASE1 buys everything you need in one trip; it might take you half a day. CASE2 is like quickly buying one grocery item at a time over several short trips. You save on gas with CASE1, right?
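To make the comparison concrete, here is a hedged reconstruction of what the CASE2 pattern usually looks like in full (the SELECT against the remote table and the column remote_col are assumptions, since that part of the code is cut off above):

BEGIN
  FOR x IN (SELECT id FROM local_table) LOOP
    INSERT INTO another_local_table (id, remote_col)
    SELECT x.id, r.remote_col
    FROM   remote_table@remotedb r
    WHERE  r.id = x.id;              -- one remote round trip per local row
  END LOOP;
  COMMIT;
END;
/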


Performance Tuning :: Cost Calculation For Nested Loop Join

Mar 27, 2012

Following is the query on TPC-H schema.

explain plan for
select count(*)
from   orders,
       lineitem
where  o_orderkey = l_orderkey;

The 10053 trace for this query (shown below) shows a nested loop join with LINEITEM as the outer table and ORDERS as the inner table. It is effectively a join on the composite index (PK_LINEITEM) of LINEITEM and the unique index (PK_ORDERKEY) of ORDERS. The cost calculation formula given in the book, "outer table cost + cardinality of outer table * inner table cost", fails here, and I am not able to understand why.

BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: LINEITEM Alias: LINEITEM
#Rows: 6001215 #Blks: 109048 AvgRowLen: 124.00
Column (#1): L_ORDERKEY(NUMBER)
AvgLen: 6.00 NDV: 1500000 Nulls: 0 Density: 6.6667e-07 Min: 1 Max: 6000000
[code]....

I would like to understand how the cost has been calculated; it does not follow the traditional nested loop cost formula as given in the book.
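For reference, the textbook formula being referred to is roughly:

    cost(nested loop join) = cost(outer table access) + cardinality(outer table) * cost(one probe of the inner table)

so with LINEITEM as the outer row source, its ~6,001,215 rows would be multiplied by the cost of one probe of the unique index PK_ORDERKEY on ORDERS.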


Performance Tuning :: Query With Nested Loop Takes 6 Hours To Complete

Jun 23, 2011

I'm joining two tables, event_types and tmp_acc.

event_types contains 2 billion records.
tmp_acc contains 20,000 records.

The result is about 300,000 rows. In event_types, the joined columns end_t and account_obj_id0 are indexed; there are no indexes on tmp_acc.

When I run the query below with a nested loop, it takes 6 hours to complete. But when I ran it with a hash join, it was still running even after 4 days. What is wrong with the hash join here? Why does it take so long? I'm joining only 20,000 rows, so I think there should be a way to get the result rows quickly.

show parameters hash_area_size

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
hash_area_size integer 2097152

explain plan for
select --+ parallel(e,6)
[code]....
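For comparison, here is a hedged sketch of how the two join methods can be forced with hints (the join and date predicates are assumptions, since the full query is cut off above):

-- nested loop: drive from the small table and probe the index on event_types
select /*+ leading(t) use_nl(e) index(e) */ count(*)
from   tmp_acc t, event_types e
where  e.account_obj_id0 = t.account_obj_id0
and    e.end_t >= :start_date;

-- hash join: build a hash table from the small table, then scan the large table
select /*+ leading(t) use_hash(e) full(e) */ count(*)
from   tmp_acc t, event_types e
where  e.account_obj_id0 = t.account_obj_id0
and    e.end_t >= :start_date;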


Performance Tuning :: How Oracle Optimizer Choose Joins (hash / Merge And Nested Loop Join)

Oct 18, 2012

I want to know how the Oracle optimizer chooses joins (hash, merge, nested loop) and applies them while executing a query, so that I can be confident about the join method the optimizer will use before writing any query.


Performance Tuning :: Update XML Eating Up A Lot Of Time

Sep 26, 2012

This UPDATEXML call is eating up a lot of time. Is there any way to tune it?

SELECT UPDATEXML(:B3 , '/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' || :B2 ||
'"]', :B1 )
FROM
DUAL

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 499 0.44 0.90 0 3 0 0
Fetch 499 1.49 2.87 0 0 0 499
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 999 1.93 3.77 0 3 0 499

The real code:
SELECT updatexml(l_xml,
'/FCUBS_RES_ENV/FCUBS_BODY/FLD/FN[@TYPE="' ||
upper(replace(cspkes_misc.fn_getparam(p_parent_list,
l_parent_list_clob,
'Y',
l_cnt,
'>'),
'-',
'_')) || '"]/text()',
l_fn_str)
INTO l_xml
FROM dual;


Performance Tuning :: Insert / Update Due To Triggers

Aug 10, 2011

I am looking at an existing utility which inserts data into configuration tables. The utility is fairly basic: you simply add the UPDATE / INSERT / DELETE SQL commands to a .sql file, set up a few params in a .sh script to tell it which database/schema to run against, and away it goes, doing some logging, etc. along the way.

Most of the time this is fine. However there is one table that causes big performance problems. This large table holds rating data and it has two large triggers on it. It also gets updated quite a bit with new rating tariffs.

The triggers check that many fields are not null or hold certain values... but they also check that the dates of the rates do not overlap, etc. So, in short, they do a lot of work. I can see that these are the main performance obstacle. I have no ability to alter or disable these triggers; this is a core table supplied by the vendor and as such I cannot manipulate it.

So, looking at the things I can change, what am I left with? Only the way I load the data.

I can consider using SQL*Loader to handle the INSERTs, or using the APPEND hint to perform a direct-path insert rather than having individual INSERT statements.

I can try to ensure that my data is sorted along the same lines as the index on the table, so that I am updating the index nodes in as streamlined a way as possible. Can I improve performance further still, or even work around the drag of the triggers?
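As a hedged illustration of the direct-path option mentioned above (the table and column names here are placeholders, not the vendor's actual objects):

INSERT /*+ APPEND */ INTO rating_table (rate_code, valid_from, valid_to)
SELECT rate_code, valid_from, valid_to
FROM   rating_staging;
COMMIT;   -- rows loaded via direct path are only visible after the commit

One caveat: Oracle silently falls back to a conventional insert when the target table has enabled triggers or foreign keys, so on this particular vendor table the APPEND hint may not buy anything.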


Performance Tuning :: Update Million Rows In One Table With Values From Another Tables?

Feb 15, 2011

I am trying to update a million rows in one table with values from another table.

Table being updated: CI_ADJ_CHAR, column CHAR_VAL_FK1
Table from which the values are taken: CK_ADJ, columns (CX_ID, CI_ID)

The CI_ADJ_CHAR.CHAR_VAL_FK1 values match CK_ADJ.CX_ID and should be updated with the value CK_ADJ.CI_ID.

The CK_ADJ table has 1.3 million rows and both columns have indexes defined. The table definition is given below.

The CI_ADJ_CHAR table has 14 million rows, of which 1 million will be updated; it has an index on the ADJ_ID column but not on the CHAR_VAL_FK1 column.
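A hedged sketch of the corresponding set-based statement (it assumes CX_ID is unique in CK_ADJ, so the scalar subquery returns a single row):

UPDATE ci_adj_char t
SET    t.char_val_fk1 = (SELECT s.ci_id
                         FROM   ck_adj s
                         WHERE  s.cx_id = t.char_val_fk1)
WHERE  EXISTS (SELECT 1
               FROM   ck_adj s
               WHERE  s.cx_id = t.char_val_fk1);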


Performance Tuning :: Get Number Of Rows Processed While Update Statement Is Still Running

Aug 25, 2010

Is there any way I can see how many rows have been processed by an UPDATE statement while the UPDATE is still running?
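One commonly used approach, offered here only as a hedged sketch, is to watch the undo record count of the running transaction from another session (:updating_sid is a placeholder for the SID of the session doing the UPDATE; for a simple update, USED_UREC roughly tracks the number of rows changed plus index entries maintained):

SELECT s.sid, t.used_urec, t.used_ublk
FROM   v$transaction t, v$session s
WHERE  s.taddr = t.addr
AND    s.sid   = :updating_sid;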


Performance Tuning :: Unable To Update Newly Added Column In Existing Table

May 21, 2013

I am facing a challenge while running an update query on a newly added column in an existing table.

Environment details:
Oracle 9i, version 9.2.0.6
OS: Unix AIX 6.1

Number of records in the table: 12,572,770

Below are the steps I followed.

1. In table testtablename, I added a new column COLUMNNAME29 with datatype VARCHAR2(8).
2. After adding the new column, I executed the update query to populate the data from COLUMNNAME1 into COLUMNNAME29.
3. The query uses COLUMNNAME24 in the WHERE clause, so that it is driven by the index.

SQL> desc testtablename
Name Null? Type
----------------------------------------- -------- ----------------------------
COLUMNNAME1 VARCHAR2(8)
COLUMNNAME2 CHAR(1)
COLUMNNAME3 CHAR(1)
COLUMNNAME4 VARCHAR2(8)
COLUMNNAME5 VARCHAR2(11)

[Code]...

Table altered.

SQL> select index_name, column_position, column_name from dba_ind_columns where table_name = 'TESTTABLENAME' order by index_name,column_position;

INDEX_NAME COLUMN_POSITION COLUMN_NAME
------------------------------ --------------- --------------------------------------------------
IDX_TESTTABLENAME 1 COLUMNNAME24

Problem faced & My analysis

1. The update query hangs in the database; it does not progress (a single update should modify approximately 40,000 records).
2. No Oracle error is thrown in the alert log or in the session where the query is being executed.
3. The wait event for the query is "db file sequential read".
4. When I update the newly added column COLUMNNAME29 with the static value "1", the update completes successfully in a few seconds.
5. When I change the static value to "1111" and execute the update statement, the query hangs in the database.
6. When I update an existing column (COLUMNNAME1) with the static value "1111", the update completes successfully.

Below are the queries that completed successfully:

Update Testtablename
Set Columnname29 = '1'
Where Columnname24 >= To_Date('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS' )
And Columnname24 < To_Date('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS')

[Code]...

Below are the queries that hang in the database:

Update Testtablename

Set Columnname29 = Columnname1
Where Columnname24 >= To_Date('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS' )
And Columnname24 < To_Date('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS')

Update Testtablename

Set Columnname29 = '1111'
Where Columnname24 >= To_Date('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS' )
And Columnname24 < To_Date('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS')

Below is the character set information in the database:

SQL> select * from v$nls_parameters;
PARAMETER VALUE
---------------------------------------------------------------- ----------------------------------------------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA

[Code]....


Performance Tuning :: Update Columns Of One Table Using Another Table

Feb 6, 2011

I am trying to update columns of Table A with the columns of Table B. Both tables have 60,000 rows each. I tried this operation using the following two queries:

Query 1

Update TableA A
set (A.col1, A.col2, A.col3) = (select B.col1, B.col2, B.col3
                                from TableB B
                                where A.CODE = B.CODE)

Query 2
Update TableA A
set (A.col1, A.col2, A.col3) = (select B.col1, B.col2, B.col3
                                from TableB B
                                where A.CODE = B.CODE)
where exists (select B.code
              from TableB B
              where A.code = B.code)

When I execute either of the two queries above, it keeps executing indefinitely.
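For reference, a hedged set-based alternative using MERGE (it requires CODE to be unique in TableB, the same requirement the scalar subquery has):

MERGE INTO TableA a
USING TableB b
ON (a.code = b.code)
WHEN MATCHED THEN
  UPDATE SET a.col1 = b.col1,
             a.col2 = b.col2,
             a.col3 = b.col3;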


Performance Tuning :: Tools For Database Tuning And Instance Tuning

Jul 12, 2010

Looking to understand the difference between instance tuning and database tuning.

What is the difference between these two tuning exercises? I understand that an instance is made up of memory-based (logical) structures, whereas a database consists of physical structures.

However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenario?


SQL & PL/SQL :: Update Two Table Using For Loop?

Feb 19, 2010

I want to update a column in table1 based on the subtraction of two columns, one from the same table and the other from a different table, and then store the result of the subtraction in table1. The number of rows in the two tables is different.

-- for r in (select t2.y - t1.y as x from table1 t1, table2 t2 where t1.x = c and t2.x = m)
declare
  i number := 1;
  c number;
  m number;

[Code]....
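A hedged sketch of a set-based version (it assumes rows should be matched on x; the result column, called z here, is also an assumption since it isn't named above):

UPDATE table1 t1
SET    t1.z = (SELECT t2.y - t1.y
               FROM   table2 t2
               WHERE  t2.x = t1.x)
WHERE  EXISTS (SELECT 1 FROM table2 t2 WHERE t2.x = t1.x);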


SQL & PL/SQL :: Update Table Using Loop

Apr 7, 2010

This is for a project I'm working on. I normally work in SQL Server, so I'm a little stuck on this one.

I have a temp table (tmp_stack) with the following columns:

Floor [varchar]
Unit [varchar]
Block [number]
BlockStart [number]
BlockEnd [number]

BlockStart and BlockEnd are currently NULL. What I need to do is loop through the table for each Floor and update BlockStart and BlockEnd for each Unit depending on how many blocks they use and how many have been used by prior units on that floor.

For example:

There are three units on Floor #1: 1A, 1B, and 1C.
1A = 5 blocks
1B = 3 blocks
1C = 2 blocks

For 1A, BlockStart should = 1 and BlockEnd should = 5
For 1B, BlockStart should = 6 and BlockEnd should = 8
For 1C, BlockStart should = 9 and BlockEnd should = 10

And everything should reset back to the beginning on successive floors.

In T-SQL, I would use a cursor, and I assume I need to do the same kind of thing in Oracle, but I can't figure out the syntax.
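A cursor loop would certainly work, but as a hedged sketch (assuming units should be numbered in alphabetical order within each floor), an analytic running total can do the whole thing in one statement:

MERGE INTO tmp_stack t
USING (SELECT rowid AS rid,
              SUM(block) OVER (PARTITION BY floor ORDER BY unit) - block + 1 AS new_start,
              SUM(block) OVER (PARTITION BY floor ORDER BY unit)             AS new_end
       FROM   tmp_stack) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.blockstart = s.new_start,
             t.blockend   = s.new_end;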


Loop Through Records In A Table And Update?

Feb 2, 2012

I'm working with Oracle 10g.

I have a table like this:

ID Amount Date
123 5000 Oct-07-2011
123 null Oct-09-2011
124 7000 Oct-14-2011
124 null Oct-17-2011
124 null Oct-24-2011

What I'm trying to do here is loop through the records and update each null amount with the previous amount for the same ID.
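A hedged, set-based sketch using an analytic function (the table is called t_amounts and the date column amt_date here; both names are assumptions, since "Date" itself cannot be used literally as a column name):

MERGE INTO t_amounts t
USING (SELECT rowid AS rid,
              LAST_VALUE(amount IGNORE NULLS)
                OVER (PARTITION BY id ORDER BY amt_date) AS carried_amount
       FROM   t_amounts) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.amount = s.carried_amount
  WHERE t.amount IS NULL;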


SQL & PL/SQL :: Loop Over Recordsets In Update Tables

Jan 21, 2011

I can't seem to get away from writing PHP scripts to handle the update. I want to learn to use procedures more. What I have is a table like this:

courses (
  id number(16) pk,
  division_title varchar2(100),
  department_title varchar2(100)
)

I also have a temp table where I upload updated data from time to time; it looks like this:

load_updates_courses(
id number(16) pk,
div_desc varchar2(100),
dep_desc varchar2(100)
)

Currently, when I need to update the courses table, I write a PHP script to do the update. But what I would really like to do is write a procedure I could call to handle this.

I figured I need to loop over the recordsets in the update table and then do an update, but I can't figure out how to get started with the PL/SQL.
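As a hedged starting point, the loop-plus-update can usually be collapsed into a single MERGE wrapped in a procedure. Whether unmatched ids should be inserted as new courses is an assumption here, so drop the WHEN NOT MATCHED branch if that isn't wanted:

CREATE OR REPLACE PROCEDURE apply_course_updates IS
BEGIN
  MERGE INTO courses c
  USING load_updates_courses u
  ON (c.id = u.id)
  WHEN MATCHED THEN
    UPDATE SET c.division_title   = u.div_desc,
               c.department_title = u.dep_desc
  WHEN NOT MATCHED THEN
    INSERT (id, division_title, department_title)
    VALUES (u.id, u.div_desc, u.dep_desc);
  COMMIT;
END apply_course_updates;
/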


PL/SQL :: Commit After 2000 Records In Update Statement But Not Using Loop

Mar 12, 2013

My Oracle version is Oracle 9i.

I need to commit after every 2,000 records. Currently I am using the statement below, without a loop. How do I do this?

Do I need to use ROWNUM?

BEGIN

UPDATE
(SELECT A.SKU,M.TO_SKU,A.TO_STORE FROM
RT_TEMP_IN_CARTON A,
CD_SKU_CONV M
WHERE

[Code].....
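A single UPDATE statement cannot commit partway through, so periodic commits normally do need a short loop. A hedged, generic sketch of the usual ROWNUM pattern (the status column is made up; the WHERE clause must be written so that already-updated rows no longer qualify, otherwise the loop never terminates):

BEGIN
  LOOP
    UPDATE rt_temp_in_carton
    SET    status = 'DONE'
    WHERE  status = 'PENDING'
    AND    ROWNUM <= 2000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
END;
/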


Performance Tuning :: Merge Statement Tuning For 100M Records In Table?

Oct 31, 2011

I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I'm running the following merge query, which takes 13 hours to update the records, which is quite a long time.

SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,

[code].....
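One hedged side note: the /*+ parallel (rq, 16) */ hint only parallelizes the DML part of the MERGE if parallel DML has been enabled in the session beforehand; whether that is already being done isn't shown above.

ALTER SESSION ENABLE PARALLEL DML;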


Performance Tuning :: How Length Of Column Width Effects Index Performance

Sep 30, 2010

How does the length (width) of a column affect index performance?

For example, if I had an IOT table emp_iot with the columns:
(id number,
job varchar2(20),
time date,
plan number)

The table key consists of (id, job, time).

Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).

What performance increase could I expect if, in the column "job", I stored not names but numeric codes identifying the job names?
E.g. I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
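For concreteness, a hedged sketch of the two variants being compared (storage clauses omitted; the lookup table in the second variant is an assumption):

-- current design: the job name itself is part of the key
CREATE TABLE emp_iot (
  id    NUMBER,
  job   VARCHAR2(20),
  time  DATE,
  plan  NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job, time)
) ORGANIZATION INDEX;

-- proposed design: a short numeric code in the key, names kept in a small lookup table
CREATE TABLE job_codes (
  job_id   NUMBER PRIMARY KEY,
  job_name VARCHAR2(20) NOT NULL
);

CREATE TABLE emp_iot2 (
  id     NUMBER,
  job_id NUMBER REFERENCES job_codes,
  time   DATE,
  plan   NUMBER,
  CONSTRAINT emp_iot2_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;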


Performance Tuning :: Fragmentation Can Reduce Performance In Query Times

Jun 16, 2010

I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are spread across many extents, so a scan takes a long time because the Oracle engine has to locate the address of the next extent.

I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed, I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.

Is there any useful script or query to do this, or any interesting Oracle system view?
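There is no single "fragmentation" column, but as a rough, hedged sketch, comparing the space a table has allocated (DBA_SEGMENTS) with the space its rows should need according to the optimizer statistics (DBA_TABLES) gives a first indication; it only makes sense if the statistics are reasonably fresh:

SELECT s.owner,
       s.segment_name,
       ROUND(s.bytes / 1024 / 1024)                       AS allocated_mb,
       ROUND((t.num_rows * t.avg_row_len) / 1024 / 1024)  AS estimated_row_data_mb
FROM   dba_segments s, dba_tables t
WHERE  t.owner        = s.owner
AND    t.table_name   = s.segment_name
AND    s.segment_type = 'TABLE'
AND    s.owner        = :schema_owner;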


Performance Tuning :: Method Of Tuning Database - Row Reduction?

Oct 20, 2010

There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.

What is this method called? I have forgotten the name and can't recall it. What is this type of row-reduction optimization called?


Performance Tuning :: Performance Standard Edition Without Partitioning?

Jun 16, 2011

How many records could I have in a single table without performance degradation on Standard Edition (no partitioning), with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?

Are 300 million rows in a single table, with 500K transactions per day, too much?

Simple database with simple schema.

How many records begin to be too many?


Performance Tuning :: Procedure Performance On New Database Import?

Nov 15, 2010

Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.

The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure; it's only a couple.

Is there any possible reason why we'd have to re-install a procedure to correct a performance problem?


Performance Tuning :: Checking Delete Performance In Package

Apr 12, 2013

I need to check the package's performance and then improve it.

1. How do I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I have observed that the delete takes a long time to remove all the records in the table (7,000,000 rows). This is a staging table: the data needs to be cleaned out daily before new data is inserted. What can I use instead of DELETE? (See the sketch below.)
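A hedged sketch of the usual alternative for a staging table that is completely emptied each day (the table name is a placeholder): TRUNCATE is DDL, generates almost no undo/redo, and deallocates the space in one go.

TRUNCATE TABLE staging_table;
-- or, to keep the already-allocated extents for the next day's load:
TRUNCATE TABLE staging_table REUSE STORAGE;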


Performance Tuning :: Query Performance Gain Using Statistics?

Aug 9, 2010

Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment and, once we have achieved the desired execution plan, adjust the 'statistics' so that the plan is followed without hints.

Q1. If that is true, which statistics do we adjust to influence the execution plan, and how?

For example, I have the following simple query:

select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;

The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.

If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.

Questions, with respect to the above query:
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?

I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan compared to hints.
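For reference, a hedged sketch of the mechanism usually meant by "adjusting statistics": DBMS_STATS can set table (and column/index) statistics explicitly. The numbers below are arbitrary, purely to show the call:

BEGIN
  DBMS_STATS.SET_TABLE_STATS(ownname => USER,
                             tabname => 'DEPT',
                             numrows => 1000000,
                             numblks => 50000);
END;
/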


Performance Tuning :: How To Improve The Performance Of Export Job (expdp)

Dec 6, 2011

I have an issue with export (expdp).

When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.

expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk

Do I need to look into any memory parameters for this?


Performance Tuning :: DECODE In WHERE CLAUSE Performance?

Oct 17, 2011

The following query gets its input parameters from the front-end application, which users query to get reports. There are many drop-down boxes such as LOB, FAMILY, BRAND, etc., and the user may or may not select values from them.

If the user selects one or more values (against each drop-down box), the query has to fetch all matching rows from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.

To get this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string (here V_SELLOB); LOB_DESC is a column in the DB.

OPEN v_refcursor FOR
  SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    DECODE(V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
[code]...


Performance Tuning :: Same Data But Different Performance Results

Sep 3, 2010

What are the principal things to look at when we get different performance results for the same query? I have two different databases: the plan and the data are the same, but the performance results are very different.


Performance Tuning :: Update Currency Column Of One Table Using Other Table Currency Column?

May 11, 2012

I am trying to update the currency column of one table using the currency column of another table with the following SQL code.

update ODS.SO_ITEM OSI
set CURRENCY__CODE=(select currency__code from sa_sales.SO_ITEM SSI where SSI.ID=OSI.ID)

This update is taking a lot of time and never finishes.

Should I create an index on the source table (SA_SALES.SO_ITEM) or on the target table (ODS.SO_ITEM)?


Performance Tuning :: DB Performance Keys?

Mar 17, 2012

What are the most important performance indicators that we have to measure or take into account to preserve or increase database performance, in terms of response times and so on?







