Performance Tuning :: High CPU Utilization In RAC And Log File Sequential Read Event
Apr 13, 2011
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days ago, but about nine days back it jumped and has consistently stayed in double digits since. I ran AWR reports on all three nodes, found the one node with high CPU utilization, and it shows the top events below:
EVENT                          WAITS    TIME(S)  AVG WAIT(MS)  %TOTAL CALL TIME  WAIT CLASS
CPU time                                5,802                  34.9
RFS ping                       152      5,118    33,671        30.8              Other
log file sequential read       234,831  5,036    21            30.3              System I/O
SQL*Net more data from client  24,171   1,087    45            6.5               Network
db file sequential read        130,939  453      3             2.7               User I/O
Findings:
In the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for roughly 31% of the wait time and the "log file sequential read" wait event accounts for roughly 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer?
2) I feel the network waits can be reduced by tweaking the SDU and TDU values based on the MTU.
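One quick way to test the undersized-log-buffer theory behind question 1 is to compare redo buffer allocation retries against total redo entries; a minimal sketch, assuming access to v$sysstat (a retry ratio noticeably above ~0.01% is the usual rough sign that the log buffer is too small):

SELECT r.value AS retries,
       e.value AS redo_entries,
       ROUND(100 * r.value / NULLIF(e.value, 0), 4) AS retry_pct  -- should stay near zero
  FROM v$sysstat r, v$sysstat e
 WHERE r.name = 'redo buffer allocation retries'
   AND e.name = 'redo entries';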
View 2 Replies
Nov 27, 2012
The query below is using more than 17 GB of temp space, but it is still failing due to insufficient temp space. Is there any way to rewrite this query to reduce its temp usage?
SELECT T12.FRGHT_AMT_CURCY_CD,T23.LAST_UPD,T11.PAR_OU_ID,T9.MAIN_PH_NUM,T23.DISCNT_PERCENT,T23.X_ERROR_NUM,T18.ADDR,T14.X_ECO_B_END_1141,
T14.X_ECO_A_END_1141,T9.X_ECO_VALIDATION_FLG,T23.X_ECO_ERR_DESCR,T14.ASSET_NUM,T20.NAME,T23.X_ECO_REASON2,T14.X_ECO_B_END_ID,
T14.ASSET_NUM,T14.X_ECO_B_END_IWPC,T23.X_AE_CON_PH_NUM,T23.SHIP_ADDR_ID,T19.NAME,T23.X_BE_CON_LST_NAME,T23.CREATED_BY,T23.X_ECO_LOCATION,T8.LOC,
T3.MODIFICATION_NUM,T10.INTEGRATION_ID,T23.INTEGRATION_ID,T23.X_MESSAGE,T9.PR_ADDR_ID,T12.ACCNT_ID,T23.X_BEARERNO,T23.X_SUB_STATUS_CD,
[code]....
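While reworking the query, it helps to see where the temp space is actually going; a minimal sketch for monitoring live temp consumption per session, assuming access to v$tempseg_usage, v$session, and dba_tablespaces:

SELECT s.sid, s.sql_id, u.tablespace, u.segtype,
       ROUND(u.blocks * t.block_size / 1024 / 1024) AS mb_used
  FROM v$tempseg_usage u
  JOIN v$session s ON s.saddr = u.session_addr
  JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
 ORDER BY mb_used DESC;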
View 3 Replies
Apr 7, 2011
The following query hangs with either a 'db file sequential read' or a 'latch free' wait event. Importantly, the table that is self-joined in the subquery has no index at all. While it was hung I tried to trace it and terminated it twice, so I have not captured the row source generation. The table has only 120,000 records, and the statement should update 34,000 of them.
UPDATE invoice_header inv
SET inv.modified_due_date =
(SELECT inv1.btn_due_date
FROM invoice_header inv1
WHERE inv.dct_code = inv1.dct_code AND inv1.release = 'A5')
[code]...
During the 'db file sequential read' waits, I used the P1/P2 values to identify what the session was reading and found that it was reading the table itself.
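For reference, a minimal sketch of how the P1 (file#) and P2 (block#) values from v$session_wait can be mapped to the segment being read; the binds are placeholders for the observed values:

SELECT owner, segment_name, segment_type
  FROM dba_extents
 WHERE file_id = :p1                                    -- P1 = file#
   AND :p2 BETWEEN block_id AND block_id + blocks - 1;  -- P2 = block#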
During the 'latch free' waits I found the following:
SELECT name, 'Child '||child#, gets, misses, sleeps
FROM v$latch_children
WHERE addr= (select p1raw from v$session_wait where sid=18)
UNION
[code]...
However, instead of the self join, I created a global temporary table:
create global temporary table t as select * from invoice_header where release='A5'
and used it in the update:
UPDATE invoice_header inv
SET inv.modified_due_date =
(SELECT t.btn_due_date
FROM t
WHERE inv.dct_code = t.dct_code AND t.release = 'A5')
WHERE inv.release = 'A5' AND inv.btn_due_date >= TRUNC (SYSDATE)
It updated the records in a second!!
My questions are:
1) Why does it produce a 'db file sequential read' wait when there is no index access? In other words, why is it doing single-block reads where a full table scan is required?
2) Why is the 'latch free' wait event here, and what does it indicate with 'cache buffer handles'?
Is it because we are reading and updating the same segment?
Let me know in case the DDL of the table is required. It has all nullable columns and no index at all. Since this is 9i, I am unable to use MERGE effectively in this case.
View 14 Replies
Mar 5, 2013
How can I check database performance for non-contiguous parts of a day using the AWR tool? For example, suppose we divide a day into two parts, low-traffic time and high-traffic time, with the following intervals:
low-traffic time: 7:00am - 10:00am and 7:00pm - 11:00pm
high-traffic time: 10:00am - 1:00pm and 4:00pm - 6:00pm
I want to examine performance using snapshots gathered by AWR. If I generate two AWR reports for the low-traffic parts of the day (one for 7:00am - 10:00am, another for 7:00pm - 11:00pm) and compute the average of the values in the two reports, can I call that the database performance for the low-traffic part of the day, or not?
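A starting point is locating the snapshot IDs that bound each window; a minimal sketch, with the date assumed for illustration and access to dba_hist_snapshot:

SELECT snap_id, begin_interval_time, end_interval_time
  FROM dba_hist_snapshot
 WHERE begin_interval_time >= TIMESTAMP '2013-03-05 07:00:00'
   AND end_interval_time   <= TIMESTAMP '2013-03-05 10:00:00'
 ORDER BY snap_id;

The report for that window can then be generated with DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT using the lowest and highest snap_id returned.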
View 9 Replies
Jun 14, 2010
We are running a 2-node RAC database on Oracle 10.2.0.4 on AIX 6.3. The shared pool utilisation goes up when we run our jobs and SQL statements, but it does not come down; it has stayed at 2 GB for over 3 days with no processes running.
Is there a threshold time period after which Oracle will release the space utilisation from the shared pool?
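Since OS-level numbers cannot distinguish allocated from free shared pool memory, here is a minimal check of how much of the shared pool is actually free, assuming access to v$sgastat:

SELECT name, ROUND(bytes / 1024 / 1024) AS mb
  FROM v$sgastat
 WHERE pool = 'shared pool'
   AND name = 'free memory';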
View 9 Replies
Jul 23, 2010
I am facing a performance issue in which the query's I/O cost is very low compared to its CPU cost, and as a result the CPU consistently shows a high graph. I am also attaching the gv$sql and gv$sql_plan data for this query.
This is the query:
SELECT PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE) AS STARTDATE,
       MAX(STARTDATE) AS ENDDATE
FROM PROJECTTIMELOGSSTAGE PTLS, PROJECTTIMESHEETITEM PTSI, TIMESHEET TS
WHERE PTLS.PROJECTID = :B2
  AND TS.TIMESHEETID = PTSI.TIMESHEETID
  AND TS.USERID = :B1
  AND PTSI.TIMESHEETID = PTLS.TIMESHEETID
  AND PTSI.ITEMTYPE = PTLS.ITEMTYPE
  AND PTSI.ITEMID = PTLS.ITEMID
  AND (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
  AND PTLS.ITEMTYPE NOT IN ('OtherTsk', 'NewTsk', 'Loc', 'Glb')
  AND (PTLS.ITEMTYPE, PTLS.ITEMID) IN
      (SELECT ITEMTYPE, ITEMID
         FROM PROJECTTIMELOGSSTAGE PTLS1
        WHERE PTLS1.PROJECTID = :B2
          AND PTLS1.TIMESHEETID = :B3)
GROUP BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
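To quantify the CPU skew per execution across instances, here is a minimal sketch against gv$sql; the sql_id bind is a placeholder:

SELECT inst_id, sql_id, executions,
       ROUND(cpu_time     / 1000 / GREATEST(executions, 1)) AS cpu_ms_per_exec,
       ROUND(elapsed_time / 1000 / GREATEST(executions, 1)) AS elapsed_ms_per_exec
  FROM gv$sql
 WHERE sql_id = :sql_id;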
View 17 Replies
May 28, 2013
Why is CPU utilization high (99%) at all times on one node of a 2-node RAC, while on the other node it is very low?
View 9 Replies
Jan 13, 2009
Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id
from bhi_tracking
where description like 'Multilateral:%'

The explain plan for this query is:
----------------------------------------------------------------
| Id | Operation         | Name         | Rows | Bytes | Cost |
----------------------------------------------------------------
|  0 | SELECT STATEMENT  |              | 178K | 6609K | 129K |
|  1 | TABLE ACCESS FULL | BHI_TRACKING | 178K | 6609K | 129K |
----------------------------------------------------------------
BHI_TRACKING is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the remaining. Can I drop the table after taking an export and then import it back, or can I truncate the table and then re-insert the rows into it, to enhance performance?
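If the goal is just to reclaim space and reset the high-water mark after the yearly delete, a reorganization inside the database is a common alternative to export/import or truncate-and-reload; a minimal sketch, assuming a maintenance window and enough free space (ALTER TABLE ... MOVE leaves indexes unusable, hence the rebuild loop):

ALTER TABLE bhi_tracking MOVE;

BEGIN
  FOR i IN (SELECT index_name
              FROM user_indexes
             WHERE table_name = 'BHI_TRACKING') LOOP
    EXECUTE IMMEDIATE 'ALTER INDEX ' || i.index_name || ' REBUILD';
  END LOOP;
END;
/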
View 14 Replies
Jan 22, 2009
How can I reduce the CPU cost of a query at the query level?
View 10 Replies
Feb 13, 2013
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert the differences into a stage table using the code below.
The issue is that table1 has 50 million records while table2 is empty, so the first time we execute this, v_collection1 and v_collection2 will hold 50 million records in memory. I think this is not good, because holding that much in memory will eat memory and resources during sorting and other activities.
After fetching the records into the collections, we insert them into the stage table and then COMMIT; I think that won't be good either, because committing 50 million rows in one transaction will generate a large amount of redo and undo.
Below is a snippet of my code:
DECLARE
  TYPE lst_collection1 IS
    TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
  TYPE lst_collection2 IS
[code].......
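A common way to bound the memory footprint is to fetch with BULK COLLECT ... LIMIT and insert with FORALL in batches; a minimal single-column sketch, where col1 and stage_table are hypothetical names:

DECLARE
  CURSOR c_diff IS
    SELECT col1 FROM table1
    MINUS
    SELECT col1 FROM table2;
  TYPE t_col1 IS TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
  l_rows t_col1;
BEGIN
  OPEN c_diff;
  LOOP
    FETCH c_diff BULK COLLECT INTO l_rows LIMIT 10000; -- at most 10k rows in memory
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO stage_table (col1) VALUES (l_rows(i));
    COMMIT; -- per-batch commit keeps undo small, at the cost of restartability
  END LOOP;
  CLOSE c_diff;
END;
/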
View 4 Replies
Jun 4, 2010
The production statistics have been imported into development. The stats were gathered two months back on dev, while in production the stats were gathered two weeks back.
My question is: shouldn't the high volume of data cause plan changes in both environments? My thinking is that the plans can differ, since the high volume of changing data in prod may lead to a different plan there.
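For reference, the usual mechanism for carrying prod stats over to dev is a DBMS_STATS export/import; a minimal sketch, where SCOTT and STATS_TAB are hypothetical names:

-- On prod:
EXEC DBMS_STATS.CREATE_STAT_TABLE('SCOTT', 'STATS_TAB');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');
-- Move STATS_TAB to dev (exp/imp or Data Pump), then on dev:
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');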
View 6 Replies
Jul 31, 2012
We are using the 11g AMM feature with MEMORY_TARGET set to 96 GB, and the total RAM on the server is 128 GB. Now top and free show only 200 MB of memory free on the system.
The two processes consuming the most memory are dbw0 and dbw1, at 30 GB per DBWn process.
Why are the DBWn processes taking up so much memory when there is not much load on the database?
View 4 Replies
Feb 4, 2011
How can I find the tables in the database against which the heaviest DML is being run?
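One place to look is the DML monitoring counters that Oracle keeps for stale-statistics detection; a minimal sketch, flushing the in-memory counters first:

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_owner, table_name, inserts, updates, deletes
  FROM dba_tab_modifications
 WHERE table_owner NOT IN ('SYS', 'SYSTEM')
 ORDER BY inserts + updates + deletes DESC;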
View 5 Replies
Mar 28, 2013
How can I find out which particular Oracle session was consuming high memory in the past?
I can't get the data from v$sesstat.
I am unable to get the information from AWR.
dba_hist_active_sess_history does not have a field indicating memory usage.
Shall I concentrate on the EVENT column in dba_hist_active_sess_history, looking for sessions that continuously showed sort and direct path read activity? Or should I locate the sql_ids from dba_hist_sqlstat with high SORTS_DELTA for snapshots belonging to the problematic time period, and then query dba_hist_active_sess_history using those sql_ids?
Which approach should I take to find the session that consumed the most memory in the past?
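A minimal sketch of the second approach, with the snapshot range left as placeholder binds:

SELECT sql_id, SUM(sorts_delta) AS total_sorts
  FROM dba_hist_sqlstat
 WHERE snap_id BETWEEN :begin_snap AND :end_snap
 GROUP BY sql_id
 ORDER BY total_sorts DESC;

The top sql_ids can then be fed into a query against dba_hist_active_sess_history to recover the sessions that ran them.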
View 8 Replies
Dec 24, 2012
The query below is taking high CPU (almost 98%) and a long time to execute.
SELECT ancestor,
Max(D.alarmstate) ALARMSTATE,
Max(D.sialarmstate) SIALARMSTATE,
Max(D.uncralarmstate) UNCRALARMSTATE,
Max(M.commstate) COMMSTATE,
Max(M.nncommstate) NNCOMMSTATE,
Max(M.servicestate) SERVICESTATE,
Max(M.abnormal) ABNORMAL,
CASE
[code]....
View 15 Replies
Nov 23, 2010
We have a table (call it the PROBLEM table) that varies between 60 and 100 million records from one data-archival process to the next. The table has data from all 50 states, is range partitioned by each state's unique code, and has a primary key based on the following columns:
STATE: the state the data is from
BMRK YEAR: the year the data was last benchmarked
AREA: the location the data is from
SERIES: the industry the data belongs to
YEAR: the year the data was reported
MONTH: the month the data was reported
DATA TYPE: the type of data that was reported
ESTIMATE TYPE: the type of statistical estimate that was produced from the data
REPORT ID: a report id unique to each state
REPT SFC: the state the reporter resided in
We have developed a job that processes data from other tables and creates some statistics, resulting in approximately 2.2 million inserts, 0.5 million updates, and 0.5 million deletes on the PROBLEM table during each job run. Once this table is loaded, another process (call it PROBLEM_PROCESS) kicks off and reads the updated information to produce statistical data that is stored in another table. This data is produced by summing anywhere between 2 and 5,000 records per query from the PROBLEM table. Here is the query (call it PROBLEM_QUERY):
select round(sum(cm_value*sample_weight),3),round(sum(pm_value*sample_weight),3)
from matched_sample
where state_fips_code = :1
and area_fips_code = :2
and series_type_code = 'B'
and year = :3
and month = :4
[code]...
When we run this job it is very fast (20 minutes) until it gets to the PROBLEM_QUERY; at that point we get 'db file scattered read' waits, 'db file sequential read' waits, and async I/O waits, and the job has run for as long as 2 days. When I look at Grid Control, the PROBLEM_QUERY is doing a full table scan and ignoring the index.
Our systems people advised us to reorganize and re-index the data in the PROBLEM table after all of the DML and before PROBLEM_PROCESS kicks off. If we do this, it works and cuts our time down a lot. However, the reorganize and re-index process adds eight hours to the run. Note: when we reorganize the data, we create a copy table on another set of tablespaces and insert the data into the copy, sorting it on the primary key columns. After this we re-index the new table and drop the old table, renaming the new one. On the next run we move the data back to the other set of tablespaces; so the data moves back and forth between tablespaces A and B, so to speak.
We have Oracle installed on an M5000 server with Solaris as the OS; the binaries and data files are stored on a NetApp storage array (model 3160) of 500 1 TB SATA 7,200 RPM drives. However, there are 128 other databases on the NetApp filer as well.
In my opinion the array and the slow disks are the problem. I believe this because we are catering to the slow disks by reorganizing and re-indexing the PROBLEM table during each run, which I don't believe should be necessary. We normally reorganize and re-index our data each week in our production system, after many more transactions than this.
Our systems people say it is our application. Oracle Support tells us the statistics are out of date, but has not answered why the statistics are out of date or why the index is abandoned after one run.
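Given that Oracle Support points at stale statistics, one low-cost experiment is regathering stats on the PROBLEM table immediately after the bulk DML and before PROBLEM_PROCESS starts, instead of the full reorganize/re-index; a minimal sketch, where the owner is hypothetical and MATCHED_SAMPLE is the table from the query above:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP',            -- hypothetical owner
    tabname => 'MATCHED_SAMPLE',
    cascade => TRUE,             -- refresh index statistics too
    degree  => 4);               -- parallel degree, tune as needed
END;
/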
View 7 Replies
Sep 9, 2011
One of my procedures has recently been taking a very long time to execute. I'm attaching the AWR report, but I don't know how to read and interpret it.
View 4 Replies
Oct 1, 2013
If I use the conventional path, will SQL*Loader process a data file sequentially from top to bottom? I have a file comprised of header and detail records, with no value in the detail records that can be used to relate them to the header records. The only option is to derive a header value via a sequence (NEXTVAL) and then populate the detail records with the same value pulled from the same sequence (CURRVAL). But for this to work, SQL*Loader must process the file in the exact same order in which the data was written to the data file. I've read through the 11g Oracle Database Utilities SQL*Loader sections looking for proof that this is what happens, but haven't found it, and I don't want to assume that SQL*Loader will always process the data file records sequentially.
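For illustration, a minimal control-file sketch of the NEXTVAL/CURRVAL technique; all names (order_seq, the tables, the record layout) are hypothetical, and it assumes the conventional path evaluates the SQL expressions in file order:

LOAD DATA
INFILE 'orders.dat'
APPEND
INTO TABLE order_header
  WHEN (1:1) = 'H'
  (rec_type  POSITION(1:1)  CHAR,
   hdr_data  POSITION(2:20) CHAR,
   order_id  EXPRESSION "order_seq.NEXTVAL")
INTO TABLE order_detail
  WHEN (1:1) = 'D'
  (rec_type  POSITION(1:1)  CHAR,
   dtl_data  POSITION(2:20) CHAR,
   order_id  EXPRESSION "order_seq.CURRVAL")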
View 9 Replies
Aug 11, 2011
I need to take a database trace of each page of my web application. I am issuing the following commands:
BEGIN
dbms_monitor.session_trace_enable(session_id=>122, serial_num=>NULL, waits=>true, binds=>true);
END;
I then access the web application; once it renders completely, I execute the following:
BEGIN
dbms_monitor.session_trace_disable(session_id=>122);
END;
In the user_dump_dest folder, I expect to see a new trace file each time I execute these commands, but instead the same trace file keeps getting updated. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g on CentOS 5.x.
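For what it's worth, a minimal sketch for confirming which trace file a given session writes to (11g exposes the path in v$process; the sid is a placeholder):

SELECT p.tracefile
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 WHERE s.sid = 122;

A dedicated server process keeps writing to the same file for its lifetime; the name only changes when tracefile_identifier is set from within the traced session (ALTER SESSION SET tracefile_identifier = 'some_label').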
View 3 Replies
Apr 6, 2011
Is it true that the redo log file size affects DB performance only when the DB runs in archivelog mode, and that in noarchivelog mode the redo log file size has no impact on DB performance?
View 3 Replies
Apr 10, 2013
I want to know about trace files and TKPROF, with an example.
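A minimal end-to-end sketch; the trace path in the tkprof command is hypothetical (the actual file lands in the user_dump_dest / diag trace directory):

-- In the session to be traced:
ALTER SESSION SET tracefile_identifier = 'demo';
EXEC DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
SELECT COUNT(*) FROM dual;
EXEC DBMS_SESSION.SESSION_TRACE_DISABLE;

-- Then on the database server:
-- tkprof /u01/app/oracle/diag/rdbms/db/db/trace/db_ora_1234_demo.trc demo.prf sys=no sort=exeela

The resulting demo.prf shows parse/execute/fetch counts, CPU and elapsed times, and wait events for each statement in the trace.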
View 1 Replies
Feb 10, 2011
I would like to make a change on the live system. I have read in a book that the redo log file size impacts DB performance. My DB's current log file size is 100 MB, but Oracle 10g's redo logfile sizing advisor suggests an optimal log file size of 1845 MB. What redo log file size is best for my Oracle database?
Optimal log file size:

select optimal_logfile_size
  from v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
                1842
[code]....
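If the decision is to go bigger, redo logs are resized by adding new, larger groups and dropping the old ones once they become inactive; a sketch with hypothetical paths and group numbers, rounding the advisor value up to 2 GB:

ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/redo05.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/redo06.log') SIZE 2G;
ALTER SYSTEM SWITCH LOGFILE;         -- repeat until the old groups show INACTIVE in v$log
ALTER DATABASE DROP LOGFILE GROUP 1; -- then drop each old group in turn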
View 9 Replies
Oct 15, 2013
I am getting back into Oracle (after a long haul in an MS-only environment) and am now testing Oracle installs. I have been given the task of comparing 12c against 10gR2 (10.2). I set up two VMs (exactly the same configuration) and used the same dump file on both environments to restore the data and settings our jobs need. We have some aggregated data, and cubes with DIM tables, each run on the VMs; we run nightly jobs to rebuild our cubes.
I am supposed to analyze the value of 12c, and I understand results vary from company to company, but I am perplexed by my result: 12c is half the speed of 10.2, with both environments out of the box with the same dump file and the same hardware.
I am using the same dump file with the same jobs on each machine, with both VMs running 10.2 or 12c installed out of the box. What default Oracle settings might have changed from 10.2 to 12c that could make the exact same environment run twice as slow on 12c?
My expectation was that, out of the box, with both machines running the same jobs on the same data (from the dump files), 10.2 would be slower than 12c; instead, 12c takes twice as long to run the jobs. I have reviewed every possibility, as I know the problem is usually the person sitting in the chair and not the PC, but I confirmed everything was identical from one VM environment to the other, except the version of Oracle out of the box.
What could be done to bring the default settings back to at least equal time between the two? That would give me a great starting point; otherwise I would have to chalk this up to bloatware.
I read up a bit on the CBO and know it might have changed in 12c. Is there a way to set it back to an earlier configuration, so as to at least match the execution plans in both environments?
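One knob commonly used for exactly this kind of comparison is OPTIMIZER_FEATURES_ENABLE, which pins the optimizer to an earlier release's behavior; a sketch, assuming the 10.2 side is 10.2.0.4:

ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4' SCOPE = BOTH;

If the jobs speed up under this setting, the regression likely lies in an optimizer change rather than elsewhere in the 12c stack.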
View 19 Replies
Feb 25, 2011
I am trying to change the current redo log file locations to a multiplexed redo log configuration. I imported a backup file into the DB after making this change, but the import process runs far slower than under the old configuration, approximately 4 times slower.
When I restore the old redo log configuration, the import process runs normally. Why does this change hit the import performance? The old redo log file locations are below:
GROUP#  MEMBER                          BYTES
4       /crtest1/oradata/redo04a.log    524288000
4       /crtest1/oradata/redo04b.log    524288000
3       /crtest1/oradata/redo03a.log    524288000
3       /crtest1/oradata/redo03b.log    524288000
2       /crtest1/oradata/redo02a.log    524288000
2       /crtest1/oradata/redo02b.log    524288000
1       /crtest1/oradata/redo01a.log    524288000
1       /crtest1/oradata/redo01b.log    524288000
The new multiplexed redo log file locations are below:
GROUP#  MEMBER                          BYTES
4       /crtest1/oradata/redo04a.log    524288000
4       /opt/redolog/redo04b.log        524288000
3       /opt/redolog/redo03a.log        524288000
3       /usr/redolog/redo03b.log        524288000
2       /usr/redolog/redo02a.log        524288000
2       /disk1/redolog/redo02b.log      524288000
1       /disk1/redolog/redo01a.log      524288000
1       /crtest1/oradata/redo01b.log    524288000
I think the new configuration is better than the old one from a protection standpoint. Here are the disk partitions on the server:
-bash-3.00$ df -lh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9.6G 2.2G 7.3G 23% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
[code]....
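To confirm whether the new member locations are slowing redo writes, the redo-related wait events can be compared between the two configurations; a minimal sketch, assuming access to v$system_event (LGWR waits for the slowest member of each group):

SELECT event, total_waits,
       ROUND(time_waited_micro / 1000) AS total_ms,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
  FROM v$system_event
 WHERE event IN ('log file parallel write', 'log file sync');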
View 4 Replies
Mar 6, 2013
We are running Oracle R12.1.3 on DB version 11.2.0.3. I just migrated a custom subscription for the oracle.apps.ap.supplier.event business event; I execute a custom package when this event fires. My package is working fine and I'm getting the expected results. My problem is that I keep getting a Workflow notification saying:
Sent: Monday, March 04, 2013 9:14 PM
To: SYSADMIN
Subject: Action Required: Local Event UNEXPECTED : oracle.apps.ap.supplier.event / 36422
To SYSADMIN
Sent 04-MAR-2013 21:13:40
ID 1659929
An Error occurred in the following Event Subscription: Event Subscription
Event Error Name:
Event Error Message: No Event Subscriptions exist for this Event
Event Error Stack:
Event Data: Event Data URL
Event Details
Event Field Value
[code]...
I saw an earlier post about this and tried changing the Source Type to "External", but that didn't change anything. Why is Workflow telling me a subscription doesn't exist when my subscription is executing with no problems?
View 5 Replies
Jan 13, 2011
Please find attached the tkprof'd trace file of the session.
I started the trace after the query had already started (upon the user's complaint).
However, even after tracing the session for more than 30 minutes, I cannot see where those 30 minutes are accounted for in this file.
View 11 Replies
Nov 16, 2011
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (tkprof) of the same run. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds:
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.06 0 10 0 0
Fetch 758 0.03 1.77 0 0 0 11345
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 760 0.03 1.83 0 10 0 11345
[code]....
View 1 Replies
Apr 21, 2011
I ran an scp command to transfer a file from the local server to a remote server. When I try to kill the process, it gives an error:
ksh: kill: 19258: No such process
If I do ps -ef | grep scp, it shows the process as running, but when I try to kill it, I get the error above.
View 7 Replies
Jul 12, 2010
I am looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required or available (third party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I'm running the following MERGE query, which takes 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
View 39 Replies