How To Increase DML Statement Performance
Jun 27, 2012
How do we increase DML statement performance?
I'm using an Oracle database and have a performance issue in one of my portlets. Can re-indexing help increase performance?
Additional Information
Step 1: Increased physical memory on one node from 32 GB to 48 GB.
Step 1 impact: the database ran the same as before.
Step 2: Increased the SGA from 12 GB to 15 GB.
Step 2 impact: the database ran the same as before for one day; the next day one reporting job was hanging.
Step 3: Increased DB_CACHE_SIZE from 5 GB to 7 GB.
Step 3 impact: overall CPU utilization was high and there was no effect on the reporting job.
Step 4: Decreased DB_CACHE_SIZE from 7 GB back to 5 GB.
Step 4 impact: CPU utilization came down a little, but there was still no effect on the reporting job.
Our main concern now is why CPU utilization is going so high, because we did the same thing last time and got positive results.
How can I replace the LIKE operator to improve performance? The query below takes too long and does not use the index.
SELECT *
FROM emp
WHERE ename like '%AL';
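A normal B-tree index on ename cannot be range-scanned for a leading-wildcard pattern. One hedged workaround, assuming a function-based index is acceptable and the (undocumented but widely available) REVERSE function behaves as expected on your version, is to index the reversed string so the pattern becomes a trailing wildcard; Oracle Text is the documented alternative.

-- Sketch only: index the reversed string so '%AL' turns into 'LA%',
-- which a B-tree range scan can handle. Verify REVERSE() on your version.
CREATE INDEX emp_ename_rev_idx ON emp (REVERSE(ename));

SELECT *
  FROM emp
 WHERE REVERSE(ename) LIKE 'LA%';   -- same rows as ename LIKE '%AL'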
One of my MOD_PLSQL-based Oracle APEX applications is running on the web. I have almost 1000 web users accessing the application, with at least 250-300 users always online. When I run ADDM in the Oracle EM web interface, I see suggestions like "Investigate the cause of 'SQL*Net more data to client'", and I am getting complaints about poor server response. Here is my system configuration:
Database Server Host: DBServer (Oracle Database 10.2.0.3)
HTTP Server Host: OraHTTP (Oracle Companion CD 10g)
DADS.conf HTTP Server configuration
MOD_PLSQL database access for web clients.
How can I increase database connection performance?
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1152M
sga_target big integer 0
[code]....
In the scenario above, the database is not using ASMM and runs on an spfile. If I want to increase the DB_CACHE_SIZE parameter, do I need to bounce the instance?
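For what it may be worth, with ASMM off (SGA_TARGET = 0) DB_CACHE_SIZE is still a dynamic parameter, so a bounce should not be needed as long as the new total of the manually sized SGA components stays below SGA_MAX_SIZE (1152M here). A hedged sketch, with the size as an example only:

-- Sketch only: the change fails if the new total of SGA components
-- would exceed SGA_MAX_SIZE; SCOPE=BOTH also records it in the spfile.
ALTER SYSTEM SET db_cache_size = 768M SCOPE = BOTH;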
We have a table emp_details with 23,772,889 records. Our requirement is to increase the size of a few columns in emp_details. We are using the ALTER statement below, which takes around 2 hours.
ALTER TABLE emp_details
MODIFY
(
address char(90)
,department char(30)
)
/
Is there any way to improve the performance of this ALTER statement?
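One point worth checking: widening a CHAR column forces Oracle to re-blank-pad, and therefore rewrite, every existing row, which is what makes this run for hours on roughly 23.7 million rows, whereas widening a VARCHAR2 column is a data-dictionary-only change. A hedged alternative, assuming (my assumption) the application can tolerate VARCHAR2 semantics without fixed-length blank padding:

-- Sketch only: converting to VARCHAR2 and widening avoids the per-row
-- re-padding a wider CHAR requires; test first, since trailing-blank
-- comparison semantics change.
ALTER TABLE emp_details
  MODIFY ( address    VARCHAR2(90)
         , department VARCHAR2(30) );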
We are new to Oracle 10g (10.1.2.0.2) Database and Application Server (Forms and Reports server) and have installed the application server on Windows Server 2003.
It is running, but it uses a lot of memory, so performance decreases as the number of users increases.
How can I increase data retrieval and insertion speed? My database has more than 0.5 million records and the forms sometimes respond very slowly.
I got the following error yesterday:
ORA-01555 caused by SQL statement below (SQL ID: fdxcyoin67ty8t, Query Duration=380128 sec, SCN: 0x0229.ff00afd0):
following are the existing settings
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 96000
undo_tablespace string undo
[code]....
following are the details from v$undostat
select begin_time, end_time, undotsn, undoblks, maxquerylen, maxqueryid, activeblks, unexpiredblks, expiredblks, tuned_undoretention from v$undostat
where trunc(begin_time)=trunc(sysdate)-1 order by begin_time;
BEGIN_TIME END_TIME UNDOTSN UNDOBLKS MAXQUERYLEN MAXQUERYID ACTIVEBLKS UNEXPIREDBLKS EXPIREDBLKS TUNED_UNDORETENTION
-------------- -------------- ---------- ---------- ----------- ------------- ---------- ------------- ----------- -------------------
21-04-13 00:08 21-04-13 00:18 1 12733 378446 duqnawh32hp4u 91152 7068448 225440 345600
21-04-13 00:18 21-04-13 00:28 1 8951 379047 duqnawh32hp4u 99344 7072800 225440 345600
21-04-13 00:28 21-04-13 00:38 1 14073 379650 duqnawh32hp4u 90128 7075872 234656 345600
[code]....
Following are the details from the AWR report (00:00 till 01:00 of 21-Apr-2013). Note that the error was produced at 00:42.
Undo Segment Summary DB/Inst: DBCPY/dbcpy01 Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
[code]....
Undo Advisor information taken 'now' is as follows:
SQL> select dbms_undo_adv.longest_query(sysdate-2,sysdate) from dual;
DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
379650
SQL> select dbms_undo_adv.required_retention from dual;
[code]....
In the above situation, what should be my first choice (assuming increasing space is not an issue): increase the undo tablespace or increase undo retention?
If the latter, then what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto-tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case, shall I set it to something greater than max(MAXQUERYLEN), i.e. 379,650 + X? Or shall I increase the undo tablespace size instead?
From the Undo Advisor output it looks to me as if, even if I increase undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DML). Is that right?
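If space genuinely is not an issue, a hedged approach is to do both: set UNDO_RETENTION above the longest observed query (379,650 s) so the lower bound covers it, and make sure the undo tablespace can grow enough for auto-tuning to honour that figure. The datafile name and sizes below are placeholders, not values from this system:

-- Sketch only: retention just above max(MAXQUERYLEN), plus headroom.
ALTER SYSTEM SET undo_retention = 400000 SCOPE = BOTH;

-- Placeholder datafile/size: let the UNDO tablespace grow so that
-- TUNED_UNDORETENTION can actually reach the requested value.
ALTER DATABASE DATAFILE '/u01/oradata/dbcpy/undo01.dbf'
  AUTOEXTEND ON NEXT 1G MAXSIZE 32G;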
I have the following problem. When I use fixed values in the IN clause, e.g. 197321, 197322, 197323, ..., the index i_tab2_index works fine (index range scan).
But when I use a sub-select in the IN clause, the index i_tab2_index is not used that way (fast full scan instead)! My indexes and the selects used:
CREATE INDEX i_tab1_index ON tab1 ( datum, flag_inst );
CREATE INDEX i_tab2_index ON tab2 ( tab2Idx, kontro );
SELECT count(epidx) as rowAnz
FROM tab2
WHERE tab2Idx IN ( SELECT tab1IDX FROM tab1
WHERE datum BETWEEN '20120117' AND '20120117'
AND flag_inst = '1' )
AND kontro = '9876521'
[code]...
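As a hedged experiment, an INDEX hint on tab2 can tell the optimizer to prefer i_tab2_index over the fast full scan so the two plans and costs can be compared; whether it is actually faster depends on how many tab2Idx values the sub-select returns.

-- Sketch only: same query, with a hint nudging the optimizer toward
-- index access on i_tab2_index instead of an index fast full scan.
SELECT /*+ INDEX(tab2 i_tab2_index) */
       COUNT(epidx) AS rowAnz
  FROM tab2
 WHERE tab2Idx IN ( SELECT tab1IDX
                      FROM tab1
                     WHERE datum BETWEEN '20120117' AND '20120117'
                       AND flag_inst = '1' )
   AND kontro = '9876521';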
We have a person running a query, and the following is the explain plan:
explain plan for
select distinct(extractvalue(xmltype(a.email_variables), '/CalliopeData/Attributes/HOTEL_BRAND')) as ThisBrand
from hh.t_ecomm_mem_relations a
where extractvalue(xmltype(a.email_variables), '/CalliopeData/Attributes/HOTEL_BRAND') not in (select b.code_brand from hh.t_pr_brand b)
and a.code_corr_ecat = 'PREA'
and a.status = 'S'
and a.audit_time > sysdate - 1
;
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1904775187
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 32018 | 25 (4)| 00:00:01 | | |
| 1 | HASH UNIQUE | | 14 | 32018 | 25 (4)| 00:00:01 | | |
|* 2 | FILTER | | | | | | | |
| 3 | PARTITION RANGE ITERATOR | | 14 | 32018 | 17 (0)| 00:00:01 | KEY | 13 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| T_ECOMM_MEM_RELATIONS | 14 | 32018 | 17 (0)| 00:00:01 | KEY | 13 |
|* 5 | INDEX RANGE SCAN | X_ECOMM_MEM_RELATIONS3 | 15 | | 3 (0)| 00:00:01 | KEY | 13 |
|* 6 | INDEX FULL SCAN | I_PR_BRAND | 1 | 3 | 1 (0)| 00:00:01 | | |
-------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter( NOT EXISTS (SELECT /*+ */ 0 FROM "HH"."T_PR_BRAND" "B" WHERE
LNNVL("B"."CODE_BRAND"<>EXTRACTVALUE("XMLTYPE"."XMLTYPE"(:B1),'/CalliopeData/Attributes/HOTEL_BRAND'))))
4 - filter("A"."STATUS"='S')
5 - access("A"."AUDIT_TIME">SYSDATE@!-1 AND "A"."CODE_CORR_ECAT"='PREA')
filter("A"."CODE_CORR_ECAT"='PREA')
6 - filter(LNNVL("B"."CODE_BRAND"<>EXTRACTVALUE("XMLTYPE"."XMLTYPE"(:B1),'/CalliopeData/Attributes/HOTEL_BRAND')))
=========================
I have tried NOT EXISTS and some anti-join hints in the subquery used in the NOT IN filter. I tried MINUS too.
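For reference, one hedged rewrite (assuming B.CODE_BRAND is never NULL, so NOT IN and NOT EXISTS return the same rows) computes the EXTRACTVALUE once in an inline view and anti-joins against it, which gives the optimizer a chance at a hash anti-join instead of the per-row LNNVL filter shown in the plan:

-- Sketch only: equivalent to the NOT IN form only when code_brand has no NULLs.
SELECT DISTINCT x.thisbrand
  FROM (SELECT EXTRACTVALUE(XMLTYPE(a.email_variables),
                            '/CalliopeData/Attributes/HOTEL_BRAND') AS thisbrand
          FROM hh.t_ecomm_mem_relations a
         WHERE a.code_corr_ecat = 'PREA'
           AND a.status         = 'S'
           AND a.audit_time     > SYSDATE - 1) x
 WHERE NOT EXISTS (SELECT NULL
                     FROM hh.t_pr_brand b
                    WHERE b.code_brand = x.thisbrand);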
mbr has about 60,000 rows and member has about 60,000 rows. The two tables have indexes on ssn and citi_no.
PK of mbr : mbr_id
PK of member : mbr_id
other columns are not PK, and have no index on it.
I'm wondering why the statement does not use the indexes even though ssn and citi_no are indexed.
MERGE INTO mbr t
USING (SELECT mbr_id,citi_no
FROM member) a
ON (t.ssn = a.citi_no)
WHEN MATCHED THEN
UPDATE SET t.asis_mbr_id = a.mbr_id
where t.ssn not in(select ssn from mbr group by ssn having count(*) > 1)
Is there any view or query from which I can find out how many SQL statements are using literals?
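There is no single counter for this, but a hedged way to spot literal-heavy SQL in the shared pool is to group V$SQL by FORCE_MATCHING_SIGNATURE: statements that differ only in literal values share that signature while their EXACT_MATCHING_SIGNATUREs differ, so a high count per signature points at SQL built with literals. The threshold below is an example only.

-- Sketch only: candidate "literal" SQL currently cached in the shared pool.
SELECT force_matching_signature,
       COUNT(*)    AS copies,
       MIN(sql_id) AS sample_sql_id
  FROM v$sql
 WHERE force_matching_signature <> 0
   AND force_matching_signature <> exact_matching_signature
 GROUP BY force_matching_signature
HAVING COUNT(*) > 10
 ORDER BY copies DESC;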
I have questions about the execution plan of a SQL statement.
Following is the example:
create table t1 as select s1.nextval id,a.* from dba_objects a;
create table t2 as select s2.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
commit;
create index i1 on t1(id);
create index i2 on t2(id);
create index i11 on t1(object_type);
exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);
exec dbms_stats.gather_table_stats(user,'T2',cascade=>true);
select count(*) from t1 where object_type='VIEW';
COUNT(*)
----------
8934
set autotrace traceonly explain
Can we say that, in the following case:
(1) First the index on object_type is accessed to get rowids - t1.object_type='VIEW'
(2) Then the filter on owner is applied - t1.owner='SYS'
(3) Then table T1 is accessed to fetch data for the rowids returned by index I11 and the filter application - TABLE ACCESS BY INDEX ROWID
Though I am unable to understand how a filter can be applied to the rowids retrieved from the index, we can see from the plan below that the rows accessed are reduced from 8550 to 1221 before we access the table. Thus the filter t1.owner='SYS' is applied in between. Right?
Another question:
Case 1 - do we retrieve a rowid from the index for a given value, then retrieve the required values from the table for that rowid?
Thus one row at a time for both, in a loop.
OR
Case 2 - do we first fetch all rowids from the index and then retrieve values from the table, one row at a time, from the collection of rowids fetched?
Suppose Case 1 is what happens. Can we then say that the steps with Ids 2 and 3 in the plan below are executed exactly the same number of times, and that the filter t1.owner='SYS' is applied at some later stage? Of course, in that case the values in the ROWS column would be misleading.
select * from t1,t2 where t1.id = t2.id and t1.object_type='VIEW' and t1.owner='SYS';
Execution Plan
----------------------------------------------------------
Plan hash value: 26873579
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 1 | HASH JOIN | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1221 | 116K| 381 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | I11 | 8550 | | 24 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | T2 | 161K| 15M| 533 (1)| 00:00:07 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."ID"="T2"."ID")
2 - filter("T1"."OWNER"='SYS')
3 - access("T1"."OBJECT_TYPE"='VIEW')
I'm working on a query that will show how many different SKUs we have on hand, how many of those SKUs have been cycle-counted, and how many we have yet to cycle-count. I've prepared a sample table and data:
CREATE TABLE SKU
(
ABC VARCHAR2(1 CHAR),
SKU VARCHAR2(32 CHAR) NOT NULL,
Lastcyclecount DATE,
[code]....
What I also want to do is select another column that will group by sku.abc and count the total number of A, B, and C SKUs where the lot.qty is > 0:
SELECT sk.abc AS "STRATA",
COUNT (DISTINCT sk.sku) AS "Total"
FROM sku sk,
(SELECT sku
FROM lot
WHERE qty > 0) item
WHERE item.sku = sk.sku(+)
GROUP BY sk.abc
Finally, I need the last column to display the DIFFERENCE between the two totals from the queries above (the difference between the "counted" and the "total"):
COUNT (DISTINCT sk.sku) - COUNT (DISTINCT s.sku)
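Hedged sketch of how the three figures could come out of a single pass, assuming (my assumption, not stated above) that a SKU counts as "counted" when LASTCYCLECOUNT is not null; swap in whatever condition the first query actually used:

-- Sketch only: total on-hand SKUs, those already cycle-counted, and the gap.
SELECT sk.abc                                        AS "STRATA",
       COUNT(DISTINCT sk.sku)                        AS "Total",
       COUNT(DISTINCT CASE WHEN sk.lastcyclecount IS NOT NULL
                           THEN sk.sku END)          AS "Counted",
       COUNT(DISTINCT sk.sku)
         - COUNT(DISTINCT CASE WHEN sk.lastcyclecount IS NOT NULL
                               THEN sk.sku END)      AS "Remaining"
  FROM sku sk,
       (SELECT sku FROM lot WHERE qty > 0) item
 WHERE item.sku = sk.sku(+)
 GROUP BY sk.abc;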
The Test1 table has around 385,772,300 rows. The DELETE and SELECT statements below are taking a lot of time.
The SELECT statement takes more than 1 hour:
SELECT TO_NUMBER(MAX(f.T3))
--INTO v_FISCAL_MONTH_ID
FROM Test1 f;
The DELETE statement takes more than 2 hours:
DELETE FROM TEST1 WHERE TRUNC(T10) < TRUNC(ADD_MONTHS(SYSDATE,-36));
CREATE TABLE Test1
(
[Code].....
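Two hedged observations, assuming indexes may be added: MAX(T3) against an index on T3 can become a cheap MIN/MAX index scan instead of a full table scan, and dropping TRUNC() from the column side of the DELETE predicate selects exactly the same rows while letting an index on T10 drive the delete. For a recurring 36-month purge, range partitioning on T10 and dropping old partitions is usually the longer-term fix.

-- Sketch only: index names are hypothetical.
CREATE INDEX test1_t3_idx  ON test1 (t3);
CREATE INDEX test1_t10_idx ON test1 (t10);

-- MAX over an indexed column can use a MIN/MAX index scan.
SELECT TO_NUMBER(MAX(f.t3)) FROM test1 f;

-- TRUNC(t10) < TRUNC(...) and t10 < TRUNC(...) keep/remove the same rows,
-- but only the second form can range-scan an index on t10.
DELETE FROM test1
 WHERE t10 < TRUNC(ADD_MONTHS(SYSDATE, -36));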
I am trying to analyze a query I have and noticed that it does not show the SQL_ID in V$SESSION.
Preparing a test case:
create table t1(a number, b varchar(10));
insert into t1 values(123 , 'value1');
When I execute
select count(*) from dual;
select * from dual;
select count(*) from t1;
I can see the SQL_ID by running
select
sql_id sql_id_,
sql_child_number sql_child_num,
module module_,
action action_,
logon_time lgtime,
[code]....
However, when I run
select * from t1
the SQL_ID and SQL_CHILD_NUMBER in V$SESSION appear to be NULL, and I can't analyze it.
Why are those columns NULL?
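One likely explanation (hedged): by the time V$SESSION is queried the short statement has already finished, and SQL_ID/SQL_CHILD_NUMBER describe only the statement currently executing; the statement that just completed is exposed through PREV_SQL_ID / PREV_CHILD_NUMBER instead.

-- Sketch only: the last statement the current session executed.
SELECT sid,
       sql_id            AS current_sql_id,     -- NULL when the session is idle
       prev_sql_id       AS last_sql_id,
       prev_child_number AS last_child_number
  FROM v$session
 WHERE sid = SYS_CONTEXT('USERENV', 'SID');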
This statement is taking 1 hour; can we reduce the time?
CREATE TABLE DGT_ITEMEFFORTDATA (ENTERPRISEID, OWNERTYPE, OWNERID, SUPEROWNERTYPE, SUPEROWNERID,
ITEMTYPE, ITEMID, STAGEID, USERID, DATEIDENTIFIED,
DATECLOSED, ACTIVITYCODEID, PHASEID, RELEASEID, MONTHID,
QUARTERID, INITIALEFFORT, BASELINEDEFFORT,
ACTUALEFFORT, ITEMSTATUS, ALLOCATIONSTATUS, STAGESTATUS,
OCCURANCETYPE, DSLPROJECTTYPE, METRICCALCRUNID,
[code].....
This is the explain plan of the above query
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%C
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 11M| 4137M| 46149 (
| 1 | UNION-ALL | | | |
| 2 | TABLE ACCESS FULL| DGT_ITEMEFFORTDATA_DAILY | 3455K| 428M| 14575
[code].....
These are the index details:
1DGT_ITEMEFFORTDATA_DAILYHCLT_IDX_DGT_IFDITEMID4
2DGT_ITEMEFFORTDATA_DAILYHCLT_IDX_DGT_IFDITEMTYPE3
3DGT_ITEMEFFORTDATA_DAILYHCLT_IDX_DGT_IFDOWNERID2
4DGT_ITEMEFFORTDATA_DAILYHCLT_IDX_DGT_IFDOWNERTYPE1
There is no index on DGT_ITEMEFFORTDATA_TEMP table
[code].....
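A hedged way to cut the elapsed time of a large CTAS, assuming spare CPU/IO exists and NOLOGGING is acceptable for your backup strategy, is to run the create as parallel, NOLOGGING DDL; the SELECT below is only a placeholder standing in for the elided UNION ALL shown in the plan, and the degree of 8 is an example.

-- Sketch only: parallel, NOLOGGING create; placeholder source query.
ALTER SESSION FORCE PARALLEL DDL PARALLEL 8;

CREATE TABLE dgt_itemeffortdata
  NOLOGGING
  PARALLEL 8
AS
SELECT /*+ parallel(d 8) */ *
  FROM dgt_itemeffortdata_daily d;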
In my code I am using a DELETE statement that is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');
Tables Used:
- TRADE_ORDER_EMP_ALLOCATION: row count 329,525,880
- LOAD_TRADE_ORDER: row count 29,281
Every column in the IN clause and in the select list has an index on it.
The number of rows to be deleted varies each time (it may be hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, even though it holds distinct values.
The table TRADE_ORDER_EMP_ALLOCATION also has a RANGE partition on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions on it.
Is there a way to make the above DELETE statement execute faster?
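A hedged option, assuming the maintenance window allows parallel DML on the partitioned table, is to keep the same statement but run it in parallel, and to confirm in the plan that the 29K-row LOAD_TRADE_ORDER drives a hash semi-join rather than a per-row probe of the 329M-row table. Degree 8 is an example only.

-- Sketch only: same delete, parallelised.
ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ parallel(t 8) */
  FROM trade_order_emp_allocation t
 WHERE (artemis_source_system_id, nm_artemis_source_system,
        cd_book_key, activity_dt)
    IN (SELECT artemis_source_system_id, nm_artemis_source_system,
               cd_book_key, activity_dt
          FROM load_trade_order
         WHERE ind_is_bad_record = 'N');

COMMIT;   -- required before the session touches the table again after parallel DML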
Refer to the following SQL statements and code.
Session 1
create table tab1 as select * from dba_objects where object_id is not null;
alter session set events '10046 trace name context forever, level 12';
declare
x number;
begin
for i in 1..4
loop
[code]....
Session 2
after "starting" the above pl/sql block from Session 1, I keep on querying tab2 from Session 2 And as soon as 2 records are inserted in tab2, I create index from Session 2
select * from tab2;
select * from tab2;
select * from tab2;
N
----------
1
2
create index i on tab1(object_id);
As I had tested from a single session (just before this test), such an index is used for the SQL statement
select count(1) into x from tab1 where object_id=2331;
However, when I checked the trace file I did not get the results I expected.
I am expecting 4 execution plans (2 FTS and 2 index access scans), and for this I am issuing the following command:
tkprof dst1_ora_7369.trc dst1_ora_7369.txt aggregate=no sys=no
But unfortunately I am getting the following output:
SELECT COUNT(1)
FROM
TAB1 WHERE OBJECT_ID=2331
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 1 0 0
Execute 4 0.00 0.00 0 2 0 0
[code]....
1) Why am I unable to see 4 execution plans (2 with FTS and 2 with index access) when I specified 'aggregate=no'?
2) Will the index i be used for the last 2 iterations, after the first 2 iterations of FTS?
If the answer to question 2) is 'No':
By what method can I force an ongoing SQL statement in a loop to take a different execution path? Of course I can't hard parse the SQL in 'that' current session. Will flushing the shared pool work in the above case?
Can we have the same execution plan for a CREATE TABLE statement where the name of the table changes every time, as follows:
create table test
as
select * from t1
Here the table name changes from test to another table name the next time.
I am running the following DELETE query and it has been running for over 2 hours:
delete from dw.ACCOUNT_FACT
where rowid in
(select rowid from DW.ACCOUNT_FACT
minus
select max(rowid) from DW.ACCOUNT_FACT
[Code]..
Here is the explain plan result:
explain plan for delete from dw.ACCOUNT_FACT
where rowid in
(select rowid from DW.ACCOUNT_FACT
minus
select max(rowid) from DW.ACCOUNT_FACT
group by CRTORD_FIPS_CD, LAST_PAYMENT_DT, ORDER_NUM,
[Code]....
PLAN_TABLE_OUTPUT
Plan hash value: 611392786
----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | 2604G| 260T| | 9018K (91)| 30:03:37 |
| 1 | DELETE | ACCOUNT_FACT | | | | | |
|* 2 | HASH JOIN | | 2604G| 260T| 369M|
[Code].....
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(ROWID="$kkqu_col_1")
I have all constraints disabled. How do I make this delete finish faster? We're trying to remove duplicates from this table using the criteria given in the statement.
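One hedged alternative for the same goal, assuming the GROUP BY columns shown above (the full list is partly elided) are what defines a duplicate, is an analytic ROW_NUMBER() delete, which avoids the MINUS and the enormous self-join the optimizer is currently estimating. For a one-off cleanup, creating a de-duplicated copy of the table and swapping it in is often faster still.

-- Sketch only: keep one row per duplicate group, delete the rest.
-- The PARTITION BY list must match the real duplicate definition.
DELETE FROM dw.account_fact
 WHERE rowid IN (SELECT rid
                   FROM (SELECT rowid AS rid,
                                ROW_NUMBER() OVER (
                                  PARTITION BY crtord_fips_cd, last_payment_dt, order_num
                                  ORDER BY rowid) AS rn
                           FROM dw.account_fact)
                  WHERE rn > 1);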
Is there any way I can see how many rows have been processed by an UPDATE statement while the UPDATE is still running?
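A hedged way to watch progress from another session: for a single-table UPDATE, V$TRANSACTION.USED_UREC grows by roughly one undo record per modified row (plus index entries), so joining it to V$SESSION gives a running count.

-- Sketch only: :sid is the SID of the session running the UPDATE.
SELECT s.sid,
       s.serial#,
       t.used_urec   AS undo_records_so_far,
       t.used_ublk   AS undo_blocks_so_far
  FROM v$transaction t
  JOIN v$session     s ON s.taddr = t.addr
 WHERE s.sid = :sid;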
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which takes 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
When I check the free space of the tablespaces, I find that SYSAUX grows so rapidly that I must increase it in less than a week! I checked SYSAUX using V$SYSAUX_OCCUPANTS and found the following:
select occupant_name,space_usage_kbytes from v$sysaux_occupants;
the result :
OCCUPANT_NAME SPACE_USAGE_KBYTES
SM/AWR 3159680
[code]...
I also used the package call exec dbms_workload_repository.drop_snapshot_range(), but it didn't work.
In my test database I rebuilt AWR and it worked, but is there any risk in rebuilding it in my production database?
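Since SM/AWR is the biggest occupant, a hedged first step is to shrink the AWR retention and interval so new snapshots stop accumulating, then purge the old range; note that space freed inside SYSAUX is reused but the datafile itself does not shrink. The values and snapshot ids below are examples only.

-- Sketch only: retention/interval are in minutes; check DBA_HIST_SNAPSHOT
-- for the SNAP_ID range that can safely be dropped.
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    retention => 7 * 24 * 60,   -- keep 7 days
    interval  => 60);           -- snapshot every hour
END;
/

BEGIN
  dbms_workload_repository.drop_snapshot_range(
    low_snap_id  => 10000,      -- placeholder
    high_snap_id => 18000);     -- placeholder
END;
/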
I am trying to increase a column's scale. For example, the column is currently defined as rate NUMBER(15,2) and I want to change it with ALTER TABLE table1 MODIFY (rate NUMBER(15,5)). When I make the change I get the error ORA-01440: column to be modified must be empty to decrease precision or scale. How can I change the column definition when there are records in the table?
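For what it is worth, the error comes from the integer digits rather than the scale itself: NUMBER(15,2) leaves 13 digits before the decimal point, while NUMBER(15,5) would leave only 10. A hedged fix, assuming 13 integer digits are still enough for the data, is to grow the precision along with the scale, which should work with data in place:

-- Sketch only: keeps 13 integer digits while extending the scale to 5.
ALTER TABLE table1 MODIFY (rate NUMBER(18,5));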
We have Oracle 10g R2 on Windows...
I have a table test and it contains a DATE datatype column, JDATE:
SQL> desc test
Name Null? Type
----------------------------------------- -------- --------------------
EMPNO NUMBER
EMPTYPE VARCHAR2(20)
SALARY NOT NULL NUMBER
JDATE DATE
DEPTNO NOT NULL NUMBER

SELECT TO_CHAR(JDATE,'DD/MM/YYYY HH24:MI:SS') JDATE FROM TEST;
JDATE
1/11/2010 4:17:29 PM
1/11/2010 4:15:47 PM
1/5/2010 3:50:44 PM
In the above case I want to update the test table and increase the minutes of each row by 1 minute.
For example:
for 1/11/2010 4:17:29 PM it would become 1/11/2010 4:18:29 PM;
for 1/11/2010 4:15:47 PM it would become 1/11/2010 4:16:47 PM. Can we do this?
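A hedged sketch: Oracle date arithmetic works in days, so one minute is 1/1440 of a day, or equivalently an INTERVAL literal.

-- Sketch only: adds one minute to every row; add a WHERE clause if only
-- some rows should be changed.
UPDATE test
   SET jdate = jdate + INTERVAL '1' MINUTE;   -- or: jdate + 1/1440
COMMIT;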
I have a table and its space is full; when I'm inserting records, the records are not being inserted. How can I increase the space available to the table?
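A table has no size setting of its own; it fills up when its tablespace runs out of space, so the usual fix is to add a datafile or let an existing one autoextend. A hedged sketch follows; the tablespace name, paths and sizes are placeholders.

-- Sketch only: find the tablespace the table lives in first, e.g.
--   SELECT tablespace_name FROM dba_tables WHERE table_name = 'MY_TABLE';
ALTER TABLESPACE users
  ADD DATAFILE '/u01/oradata/ORCL/users02.dbf'
  SIZE 2G AUTOEXTEND ON NEXT 256M MAXSIZE 32G;

-- or allow an existing datafile to grow:
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf'
  AUTOEXTEND ON NEXT 256M MAXSIZE 32G;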
I would like to know how to increase the result set of a SELECT statement. I ran a SELECT that should have returned 36,000 rows and got only 5,000 rows. What access level do I need, and what do I need to change? I am trying to migrate data from a delimited file to a table in Oracle, but the data has to be filtered before loading it into the table.
SQL> SELECT MIN(major_zipcode)
FROM TEMP WHERE MAJOR_CITIES IN (select distinct major_cities from temp);