Performance Tuning :: One Job Is Hanging After SGA Increase?
Mar 11, 2013
Additional Information
Step 1: Increased physical memory on one node from 32 GB to 48 GB.
Step 1 impact: DB was running the same as before.
Step 2: Increased SGA from 12 GB to 15 GB.
Step 2 impact: DB ran the same as before for one day; the next day one reporting job was hanging.
Step 3: Increased DB_CACHE_SIZE from 5 GB to 7 GB.
Step 3 impact: Overall CPU utilization was high, and there was no effect on the reporting job.
Step 4: Decreased DB_CACHE_SIZE from 7 GB to 5 GB.
Step 4 impact: CPU utilization came down a little, but there was still no effect on the reporting job.
Our main concern now is why CPU utilization is going so high, because we did the same thing last time and got positive results.
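Before tuning memory any further, it may be worth confirming what the "hanging" reporting session is actually waiting on. A minimal diagnostic sketch, assuming 10g or later where the wait columns are exposed directly in v$session; the SID is a placeholder for the reporting job's session:

SELECT sid, serial#, status, sql_id, event, state,
       wait_class, seconds_in_wait, blocking_session
FROM   v$session
WHERE  sid = 1234;   -- placeholder: the SID of the reporting job's session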
View 15 Replies
Apr 7, 2011
The following query hangs with either a 'db file sequential read' or a 'latch free' wait event. An important point is that the table which is self-joined in the subquery has no index at all. While it was hung I tried to get a trace of it and terminated it twice, so I have not got the row source generation. The table has only 120,000 records and the statement should update about 34,000 of them.
UPDATE invoice_header inv
SET inv.modified_due_date =
(SELECT inv1.btn_due_date
FROM invoice_header inv1
WHERE inv.dct_code = inv1.dct_code AND inv1.release = 'A5')
[code]...
During the 'db file sequential read' waits I used the P1/P2 values to find out what the session was reading, and found that it was reading the table itself.
During the 'latch free' waits I found the following:
SELECT name, 'Child '||child#, gets, misses, sleeps
FROM v$latch_children
WHERE addr= (select p1raw from v$session_wait where sid=18)
UNION
[code]...
However, when instead of the self join I created a global temporary table as
create global temporary table t as select * from invoice_header where release='A5'
And used it in the update as
UPDATE invoice_header inv
SET inv.modified_due_date =
(SELECT t.btn_due_date
FROM t
WHERE inv.dct_code = t.dct_code AND t.release = 'A5')
WHERE inv.release = 'A5' AND inv.btn_due_date >= TRUNC (SYSDATE)
It updated the records in a second!!
My questions are:
1) Why is it producing 'db file sequential read' waits when there is no index access? In other words, why is it doing single-block reads when a full table scan is required?
2) Why is the 'latch free' wait event occurring here, and what does it indicate with 'cache buffer handles'?
Is it because we are reading and updating the same segment?
Let me know if the DDL of the table is required. It has all nullable columns and no index at all. Since this is 9i I am unable to use MERGE effectively in this case.
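For reference, the usual way to translate the P1/P2 values of a 'db file sequential read' wait into the segment being read is a lookup against dba_extents; this is just a sketch of that check, not necessarily the exact query used above:

SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = :p1                                      -- P1 = absolute file#
AND    :p2 BETWEEN block_id AND block_id + blocks - 1;    -- P2 = block#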
View 14 Replies
Aug 23, 2012
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1152M
sga_target big integer 0
[code]....
In the scenario above, the database is not using ASMM and is using an spfile. If I want to increase the db_cache_size parameter, do I need to bounce the instance?
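With sga_target = 0 (manual SGA management) and an spfile, db_cache_size is still a dynamic parameter, so a bounce should not be required as long as the new total of the manually sized components stays below sga_max_size (1152M here); if there is not enough free space in the SGA you may first need to shrink another component such as shared_pool_size. A sketch with a placeholder value:

ALTER SYSTEM SET db_cache_size = 512M SCOPE = BOTH;  -- takes effect immediately and persists in the spfile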
View 6 Replies
Feb 14, 2013
We have a table emp_details with 23,772,889 records. Our requirement is to increase the size of a few columns in emp_details. We are using the ALTER statement below, which takes around 2 hours.
ALTER TABLE emp_details
MODIFY
(
address char(90)
,department char(30)
)
/
Is there any way to improve the performance of this statement?
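One reason this ALTER is slow is that widening a CHAR column is not a dictionary-only change: because CHAR values are blank-padded, every one of the 23M+ rows has to be rewritten. A commonly suggested alternative is sketched below, assuming an outage window exists, that emp_details can be renamed, and that its indexes, constraints, grants and triggers are recreated afterwards; the emp_id column and the parallel degree are illustrative only:

CREATE TABLE emp_details_new
PARALLEL 8 NOLOGGING
AS
SELECT /*+ PARALLEL(e, 8) */
       e.emp_id,                                    -- hypothetical key column
       CAST(e.address    AS CHAR(90)) AS address,   -- new, wider definitions
       CAST(e.department AS CHAR(30)) AS department
       -- ...list the remaining columns unchanged...
FROM   emp_details e;

-- then swap the tables and rebuild dependent objects:
-- RENAME emp_details TO emp_details_old;
-- RENAME emp_details_new TO emp_details;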
View 2 Replies
Oct 24, 2013
How can I increase data retrieval / insertion speed? My database has more than 0.5 million records, and the forms sometimes respond very slowly.
View 8 Replies
Apr 22, 2013
I got the following error yesterday:
ORA-01555 caused by SQL statement below (SQL ID: fdxcyoin67ty8t, Query Duration=380128 sec, SCN: 0x0229.ff00afd0):
The existing settings are as follows:
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 96000
undo_tablespace string undo
[code]....
The following details are from v$undostat:
select begin_time, end_time, undotsn, undoblks, maxquerylen, maxqueryid, activeblks, unexpiredblks, expiredblks, tuned_undoretention from v$undostat
where trunc(begin_time)=trunc(sysdate)-1 order by begin_time;
BEGIN_TIME END_TIME UNDOTSN UNDOBLKS MAXQUERYLEN MAXQUERYID ACTIVEBLKS UNEXPIREDBLKS EXPIREDBLKS TUNED_UNDORETENTION
-------------- -------------- ---------- ---------- ----------- ------------- ---------- ------------- ----------- -------------------
21-04-13 00:08 21-04-13 00:18 1 12733 378446 duqnawh32hp4u 91152 7068448 225440 345600
21-04-13 00:18 21-04-13 00:28 1 8951 379047 duqnawh32hp4u 99344 7072800 225440 345600
21-04-13 00:28 21-04-13 00:38 1 14073 379650 duqnawh32hp4u 90128 7075872 234656 345600
[code]....
The following details are from the AWR report (00:00 till 01:00 of 21-Apr-2013); note that the error was produced at 00:42.
Undo Segment Summary DB/Inst: DBCPY/dbcpy01 Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
[code]....
The Undo Advisor information taken 'now' is as follows:
SQL> select dbms_undo_adv.longest_query(sysdate-2,sysdate) from dual;
DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
379650
SQL> select dbms_undo_adv.required_retention from dual;
[code]....
In the above situation, what should be my first choice (assuming that adding space is not an issue): increase the undo tablespace or increase undo retention?
If the latter, what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto-tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case, should I set it to something greater than max(maxquerylen), i.e. 379,650 + X? Or should I increase the undo tablespace size instead?
From the Undo Advisor output it looks to me that even if I increase the undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DML). Is that right?
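For the sizing decision, a rough rule of thumb is: undo space needed is roughly the desired retention (seconds) times the peak undo generation rate (blocks per second) times the block size. A sketch that estimates this from v$undostat, using the longest observed query (379,650 s) plus an arbitrary one-hour margin as the target retention:

SELECT ROUND( (379650 + 3600)                                        -- target retention + margin (s)
            * MAX(undoblks / ((end_time - begin_time) * 86400))      -- peak undo blocks per second
            * (SELECT value FROM v$parameter WHERE name = 'db_block_size')
            / 1024 / 1024 / 1024, 1) AS undo_gb_needed
FROM   v$undostat;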
View 13 Replies
Feb 28, 2011
I'm using Oracle DB and have a performance issue in one of my portlets. Can re-indexing help increase the performance?
View 3 Replies
Jun 27, 2012
How do we increase DML statement performance?
View 8 Replies
Jul 16, 2012
How can I replace the LIKE operator to improve performance? It is taking a long time and not using the index.
SELECT *
FROM emp
WHERE ename like '%AL';
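There is no direct index path for a leading-wildcard LIKE, but for a pure "ends with" pattern one commonly mentioned workaround is a function-based index on the reversed string, so the wildcard moves to the end. A sketch only, assuming ename is VARCHAR2, and noting that REVERSE() is an undocumented (though long-standing) SQL function, so test it before relying on it; Oracle Text is the supported alternative:

CREATE INDEX emp_ename_rev_ix ON emp (REVERSE(ename));

SELECT *
FROM   emp
WHERE  REVERSE(ename) LIKE 'LA%';   -- 'LA%' is '%AL' reversed, so the new index can be range-scanned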
View 30 Replies
Nov 25, 2011
One of my MOD_PLSQL-based Oracle APEX applications is running on the web. I have almost 1000 web users accessing the application, with at least 250-300 users online at any time. In the Oracle EM web interface, when I run ADDM I see suggestions like "Investigate the cause of SQL*Net more data to client", and I am getting complaints about poor server response. Here is my system configuration:
Database Server Host: DBServer (Oracle Database 10.2.0.3)
HTTP Server Host: OraHTTP (Oracle Companion CD 10g)
DADS.conf HTTP Server configuration
MOD_PLSQL database access for web clients.
How can I increase database connection performance?
View 6 Replies
Aug 27, 2009
We are new to Oracle 10g (10.1.2.0.2) Database and Application Server (Forms and Reports services). We have installed the application server on Windows Server 2003.
It is running, but it uses a lot of memory, so performance decreases as the number of users increases.
View 1 Replies
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
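One prerequisite that is easy to miss (a sketch, not a full tuning of the statement): the PARALLEL hint on the MERGE target only parallelises the DML part of the statement if parallel DML has been enabled in the session first; otherwise only the query side runs in parallel and the update/insert work stays serial.

ALTER SESSION ENABLE PARALLEL DML;

-- run the original MERGE /*+ parallel (rq, 16) */ ... statement here, unchanged

COMMIT;   -- a parallel-DML transaction must commit before the session can query the modified table again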
View 39 Replies
Sep 30, 2010
How does the length (width) of a column affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if, in column "job", I stored not names but numbers identifying the job names?
For example, I would store "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'.
View 24 Replies
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed across many extents, the scan process takes a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If needed I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
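There is no single "fragmentation" view, but a quick first pass (a sketch, assuming optimizer statistics are reasonably fresh and an 8 KB block size; the schema name is a placeholder) is to compare the blocks a table occupies below its high water mark with the space its rows actually need, using dba_tables:

SELECT table_name,
       ROUND(blocks   * 8192 / 1024 / 1024)        AS used_below_hwm_mb,     -- 8192 = assumed block size
       ROUND(num_rows * avg_row_len / 1024 / 1024) AS estimated_row_data_mb
FROM   dba_tables
WHERE  owner = 'SCOTT'          -- placeholder schema
AND    num_rows > 0
ORDER  BY (blocks * 8192) - (num_rows * avg_row_len) DESC;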
View 2 Replies
Oct 20, 2010
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have forgotten the name and can't recall it. What is this type of row-reduction optimization called?
View 6 Replies
Jun 16, 2011
How many records could I have in a single table without performance degradation, using Standard Edition without partitioning, on a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) with good storage?
Are 300 million rows in a single table with 500K transactions/day too much?
Simple database with simple schema.
How many records begin to be too many?
View 2 Replies
Nov 15, 2010
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT, we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Simply altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
View 13 Replies
Apr 12, 2013
I need to check the package's performance and improve it.
1. How do I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I have observed that the DELETE takes a long time to remove all the records in the table (about 7,000,000 rows). This is a staging-like table; the data needs to be cleaned daily before new data is inserted. What can I use instead of DELETE?
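For question 1, DBMS_PROFILER (or the hierarchical profiler in 11g) can give per-line timings inside the package. For question 2, since the whole table is emptied every day, the usual replacement for DELETE is TRUNCATE: it is DDL, generates almost no undo/redo and resets the high water mark, but it cannot be rolled back and it commits implicitly. A sketch with a placeholder table name:

TRUNCATE TABLE emp_staging;   -- emp_staging is a placeholder for the 7M-row staging table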
View 13 Replies
Aug 9, 2010
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the statistics so the optimizer follows that plan without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure as found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table then how can I adjust the statistics to achieve that?
Q 3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan compared to hints.
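"Adjusting the statistics" in this sense usually means feeding the optimizer different numbers with DBMS_STATS so that its cost estimates, and therefore the join order or join method, change. A hedged sketch only; the figures are illustrative, and on most systems a hint, SQL profile or SQL plan baseline is a less fragile way to pin a specific plan:

BEGIN
  -- make DEPT look much larger than it is, so repeatedly probing it in a
  -- nested loop appears expensive and a hash join (or a different driving
  -- table) becomes the cheaper estimate
  dbms_stats.set_table_stats(ownname => USER,
                             tabname => 'DEPT',
                             numrows => 1000000,
                             numblks => 50000);
END;
/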
View 6 Replies
Dec 6, 2011
I have an issue with export (expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
View 1 Replies
Oct 17, 2011
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes such as LOB, FAMILY, BRAND, etc., and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box) it has to fetch all matching values from the DB. If the user doesn't select any values it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string; here V_SELLOB is such a variable and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
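The usual rewrite suggested for this pattern is to express "DEFAULT means no filter" with OR instead of wrapping the column in DECODE, so the optimizer sees a plain predicate on LOB_DESC, which helps cardinality estimates and (if the FULL hint is ever removed) index usage. A sketch only, assuming for simplicity that V_SELLOB holds a single value; the comma-separated-string handling of the real procedure is a separate concern:

OPEN v_refcursor FOR
  SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob);
  -- repeat the same (variable = 'DEFAULT' OR column = variable) pattern for FAMILY, BRAND, etc.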
View 9 Replies
Sep 3, 2010
What are the principal things to look at when the same query gives different performance results? I have two different databases: the plan and the data are the same, but the performance results are very different.
View 10 Replies
Mar 17, 2012
What are the most important performance metrics we have to calculate or take into account to preserve or increase DB performance in terms of response times?
View 8 Replies
Oct 6, 2010
I am running an UPDATE statement which has been running for almost an hour with no response.
I would like to know whether the query is still processing or hanging (the update is supposed to finish within a few minutes).
Is there any way, or any SQL, to find out whether the statement (my UPDATE query) is running or hanging?
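One quick check (a sketch; replace 1234 with the SID of the session running the update, assuming 9i or later): if the transaction's undo record count keeps growing, the update is making progress; if it is static, look at the wait event; if it is shrinking, the statement is rolling back.

SELECT s.sid, s.event, s.seconds_in_wait, t.used_ublk, t.used_urec
FROM   v$session s, v$transaction t
WHERE  t.addr = s.taddr
AND    s.sid  = 1234;    -- placeholder SID of the updating session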
View 4 Replies
Jan 6, 2011
I am facing a hanging problem in my Forms 10g application, but the place where it hangs varies. There is no other user using the same application.
View 2 Replies
May 30, 2011
We are using host_command to run a batch file from the database.
begin
host_command('start /min E:Deepakipacklatmailmailbatch.bat') ;
end;
The batch file contains
C:WINDOWSsystem32lat.exe -install x.com y@x.com
blat C:lat.log -to z@m.com -subject 'test' -attach "file" -body 'test' -u y@x.com -pw *****
We are sending mails using blat.exe.
The procedure is working and the mail is reaching the destination, but the database is hanging.
What could be the reason behind the hanging?
View 19 Replies
Mar 27, 2013
I am working on an assignment where the client is using Oracle 10g but is stuck on the RBO. The application team builds dynamic queries from the GUI available to them, and some of them run very slowly.
The code cannot be changed to tune the queries, nor can we tell exactly which step in the plan is the issue (being RBO). For some long-running queries the Tuning Advisor is not producing any recommendations.
Another hurdle is that all the application users are using the same application user ID, so we cannot write a logon trigger to switch to the CBO for particular queries to see what is happening in the background!
View 11 Replies
Jul 4, 2012
I want to tune the following SQL statement. In this SQL I want to get the hash_value and sql_text of the statements that are causing TX locks. Is that possible? The statement works fine, but sometimes it's slow.
SELECT DISTINCT hash_value,
sql_text
FROM gv$sql sq
WHERE hash_value IN (SELECT DISTINCT prev_hash_value
FROM gv$session se
WHERE sid IN (SELECT sid
FROM gv$lock l
WHERE type = 'TX'
AND ctime >= 2000
AND l.inst_id = se.inst_id
AND l.sid = se.sid)
AND sq.inst_id = se.inst_id);
[code]....
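A possible alternative to the gv$lock scan (a sketch, assuming 10g R2 or later, where these columns exist): gv$session exposes the blocker directly through BLOCKING_SESSION and BLOCKING_INSTANCE, so the blocker's current and previous SQL identifiers can be picked up in one pass and then resolved against gv$sql:

SELECT w.inst_id, w.sid AS waiting_sid,
       w.blocking_instance, w.blocking_session,
       b.sql_id AS blocker_current_sql, b.prev_sql_id AS blocker_prev_sql
FROM   gv$session w, gv$session b
WHERE  w.blocking_session IS NOT NULL
AND    b.sid     = w.blocking_session
AND    b.inst_id = NVL(w.blocking_instance, w.inst_id);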
View 7 Replies
Nov 2, 2012
I see one of my SQL statements, run by a user on a 10.2.0.3 database, changing its SQL_ID after some runs even though the query is not changed at all! However, the HASH_VALUE for this query remains the same.
How can the same query have different SQL_IDs but the same HASH_VALUE?
Note: Statistics are not modified on the base tables of this query.
View 10 Replies