Performance Tuning :: New Trace File
Aug 11, 2011
I need to take a database trace of each page of my web application. I am issuing the following commands:
BEGIN
dbms_monitor.session_trace_enable(session_id=>122, serial_num=>NULL, waits=>true, binds=>true);
END;
Then I access the web application. Once the page has rendered completely, I execute the following command:
BEGIN
dbms_monitor.session_trace_disable(session_id=>122);
END;
In the user_dump_dest directory I expect to see a new trace file each time I run these commands, but the same trace file keeps getting appended to instead. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g on CentOS 5.x.
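One possible workaround (a sketch only, not verified here): the trace file name is tied to the server process of session 122, which is why the same file keeps growing. If that session can be made to change its TRACEFILE_IDENTIFIER before each iteration (for example from the application itself, or via a logon trigger), Oracle switches to a new trace file whose name contains the identifier:
-- run inside the traced session itself; 'PAGE_1' is a made-up label,
-- use a different value for every iteration you want in a separate file
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'PAGE_1';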
View 3 Replies
Apr 10, 2013
I want to know about trace files and TKPROF. Any example?
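For reference, a minimal end-to-end sketch (the test query, the trace file name and the output file name below are all made up; the real trace file lands in user_dump_dest or the diag trace directory):
-- 1. trace your own session and run the statement of interest
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'demo';
EXEC dbms_monitor.session_trace_enable(waits => TRUE, binds => TRUE);
SELECT COUNT(*) FROM employees;
EXEC dbms_monitor.session_trace_disable;
-- 2. then, from the OS prompt, format the raw trace with TKPROF:
--    tkprof orcl_ora_1234_demo.trc demo_report.txt sys=no sort=exeela,fchela
TKPROF turns the raw parse/execute/fetch and wait data in the .trc file into the per-statement summary most people read.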
View 1 Replies
Nov 16, 2011
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (TKPROF) of the same run. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.06 0 10 0 0
Fetch 758 0.03 1.77 0 0 0 11345
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 760 0.03 1.83 0 10 0 11345
[code]....
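As a side note on the console time (a sketch; the settings below are standard SQL*Plus commands, the values are arbitrary): 11,345 rows returned in 758 fetch calls is roughly the default SQL*Plus arraysize of 15, so most of the 39.5 seconds is client round trips and screen drawing rather than database work. Something like this isolates the server-side time:
SET TIMING ON
SET ARRAYSIZE 500     -- fewer fetch round trips (default is 15)
SET TERMOUT OFF       -- only suppresses output when the statement is run from a script
SET AUTOTRACE TRACEONLY STATISTICS   -- fetch the rows but do not display them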
View 1 Replies
Jun 13, 2012
I've been searching the web for examples of how to run a trace. It is needed for a session other than the current one, at trace level 4 (I need the bind variable values in the trace).
Unfortunately, I couldn't trace with DBMS_SUPPORT.START_TRACE_IN_SESSION; I understand that this is because it was only introduced in Oracle 11g.
How can I trace another session at level 4 on Oracle 10g?
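Two possibilities that do exist in 10gR2 (a hedged sketch; the SID and SERIAL# values below are placeholders):
-- option 1: DBMS_MONITOR; binds => TRUE adds the bind values (the level-4 content)
BEGIN
  dbms_monitor.session_trace_enable(session_id => 123,
                                    serial_num => 456,
                                    waits      => FALSE,
                                    binds      => TRUE);
END;
-- option 2: set event 10046 at level 4 directly in the other session
EXEC sys.dbms_system.set_ev(123, 456, 10046, 4, '');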
View 8 Replies
Jul 13, 2013
I need to create a script that regularly (maybe twice a day) fetches the error-related details from trace files into a custom table in the DB.
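As a possible starting point, a rough PL/SQL sketch (the directory object, trace file name and target table are all hypothetical, and it handles a single named file only; a real job would loop over the files and could be scheduled twice a day with DBMS_SCHEDULER):
-- one-time setup (names are made up):
--   CREATE DIRECTORY trace_dir AS '/u01/app/oracle/diag/rdbms/db1/db1/trace';
--   CREATE TABLE trace_errors (file_name VARCHAR2(200), line_text VARCHAR2(4000), load_time DATE);
DECLARE
  f UTL_FILE.FILE_TYPE;
  l VARCHAR2(4000);
BEGIN
  f := UTL_FILE.FOPEN('TRACE_DIR', 'db1_ora_12345.trc', 'r', 4000);
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(f, l);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;       -- end of file
    END;
    IF l LIKE '%ORA-%' THEN               -- keep only the error-related lines
      INSERT INTO trace_errors VALUES ('db1_ora_12345.trc', l, SYSDATE);
    END IF;
  END LOOP;
  UTL_FILE.FCLOSE(f);
  COMMIT;
END;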
View 3 Replies
Feb 4, 2011
I've got a query running a SELECT COUNT(*) over a table. The default plan takes in the order of 15 minutes to return; a plan hinted to use a different index takes 3 minutes to return.
Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running autotrace against the two queries I see fairly different values, as one would expect.
Query
select count (*) from fulfilmentitem bfi where created >= sysdate-30 AND bfi.status = 'FA' AND bfi.fulfilmentmethod = 'D'
Slow run
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 33119 (1)|
| 1 | SORT AGGREGATE | | 1 | 15 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM | 12525 | 183K| 33119 (1)|
|* 3 | INDEX RANGE SCAN | IDX_FULFIL_METHODSTATUS | 250K| | 1786 (1)|
----------------------------------------------------------------------------------------------
[code]....
IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD & STATUS in that order.
IDX_BFI_CREATED is on CREATED and is approx 70% of the size of the other index.
The row counts estimated in the explain plan are off; the COUNT(*) actually comes in at 32.8k rows. As you will have seen, the fast run shows a pretty significant increase in consistent gets compared to the slow run, and a decent, though not dramatic, drop in physical reads.
My uncertainty is around whether these changes in consistent get/physical read values would typically be enough to explain the real-time improvement I'm observing, or whether other (albeit perhaps temporary) factors are involved. It is a production OLTP environment, so the data is changing rapidly, and that may be a factor.
I know it can never be an exact science without intimately knowing the hardware, current load, etc., but I also know there is enough experience on these boards to have a loose handle on whether the time shifts between the queries are likely (or not) to reflect the stat changes, or whether those differences alone typically wouldn't account for it.
I'm thinking about instructing the query to ignore its original plan, but am hesitant to do so without being a little more confident that it's not just a timing thing, or something other than the change of index, that is causing the improvement. From the autotrace stat changes alone I couldn't put my hand on heart and say "yup - that change is good, ignore the default index all the time for this job".
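For readers following along, a sketch of the hinted variant being compared (assuming the hint simply names the created-date index; the original post does not show the exact hint used):
select /*+ index(bfi IDX_BFI_CREATED) */ count(*)
from   fulfilmentitem bfi
where  bfi.created >= sysdate-30
and    bfi.status = 'FA'
and    bfi.fulfilmentmethod = 'D';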
View 11 Replies
Apr 6, 2011
My understanding is that the redo log file size matters for DB performance when the DB runs in ARCHIVELOG mode, but that in NOARCHIVELOG mode the redo log file size does not impact DB performance. Is that correct?
View 3 Replies
Feb 10, 2011
I would like to make a change on the live system. I have read in a book that the redo log file size has an impact on DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor says the optimal log file size is 1845 MB. What redo log file size is best for my Oracle database?
#Optimal log file size:
select optimal_logfile_size
  from v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
                1842
[code]....
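If the advisor's figure is followed, the usual mechanics look roughly like this (a sketch; the group numbers, paths and the 2G size are placeholders, since online redo logs cannot be resized in place, only replaced):
-- add new, larger groups (paths are hypothetical)
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/db1/redo04.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/db1/redo05.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/db1/redo06.log') SIZE 2G;
-- switch until each old group becomes INACTIVE, then drop it
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
SELECT group#, status FROM v$log;          -- wait for STATUS = 'INACTIVE'
ALTER DATABASE DROP LOGFILE GROUP 1;       -- repeat for the other old groups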
View 9 Replies
Oct 15, 2013
I am getting back into Oracle (after a long haul in an MS-only environment) and am now testing Oracle installs. I have been given the task of seeing the difference between 12c and 10.2g. I set up 2 VMs (exactly the same configs) and used the same dmp file (on both environments) to restore data and settings for our jobs to run. We have some aggregated data, and cubes with DIM tables, each being run on the VM machines. We run nightly jobs to rebuild our cubes.
I am supposed to see/analyze the value of 12c, and I understand things might vary from company to company, but I am perplexed at my result: 12c is half the speed of 10.2g, even though both environments are the same out of the box, with the same dmp file and the same hardware.
I am using the same dmp file, with the same jobs on each machine, with both VMs having 10.2g or 12c installed out of the box as-is. What default Oracle settings might have changed from 10.2g to 12c that could make the exact same environment run twice as slow on 12c?
The expectation was that, out of the box, with both machines running the same jobs on the same data (from the dmp files), 10.2g would be slower than 12c; instead, 12c takes twice as long to run the jobs. I have reviewed every possibility, as I know the problem is usually the person sitting in the chair and not the PC, but I confirmed everything was identical from one VM environment to the other except the version of Oracle out of the box.
What could be done to bring the defaults back to at least equal time between the two? That would give me a great starting point; otherwise I would have to chalk this up to bloatware.
I read up a bit on the CBO and know this might have changed in 12c. Is there a way to set it back to an earlier configuration, so as to at least match the execution plans of both environments?
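One parameter worth checking as part of that comparison (a hedged suggestion, not a full diagnosis): OPTIMIZER_FEATURES_ENABLE makes the 12c optimizer behave like an earlier release, which at least shows whether the slowdown is plan-related. The version string below is a placeholder for whatever the 10.2 installation actually reports:
-- instance-wide
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4' SCOPE = BOTH;
-- or just for a test session
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
If the jobs speed up with the parameter set back, comparing execution plans between the two settings narrows the search considerably.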
View 19 Replies
Feb 25, 2011
I am trying to change the current redo log file locations to a multiplexed redo log configuration, and I have made this change on the DB. I was importing a dump file into the DB when I made the change, but the import process now runs much slower than with the old configuration, approximately 4 times slower.
When I restore the old redo log file configuration, the import process runs normally again. Why does this change hurt the import performance? The old redo log file locations are below:
GROUP#  MEMBER                         BYTES
4       /crtest1/oradata/redo04a.log   524288000
4       /crtest1/oradata/redo04b.log   524288000
3       /crtest1/oradata/redo03a.log   524288000
3       /crtest1/oradata/redo03b.log   524288000
2       /crtest1/oradata/redo02a.log   524288000
2       /crtest1/oradata/redo02b.log   524288000
1       /crtest1/oradata/redo01a.log   524288000
1       /crtest1/oradata/redo01b.log   524288000
The new multiplexed redo log file locations are below:
GROUP#  MEMBER                         BYTES
4       /crtest1/oradata/redo04a.log   524288000
4       /opt/redolog/redo04b.log       524288000
3       /opt/redolog/redo03a.log       524288000
3       /usr/redolog/redo03b.log       524288000
2       /usr/redolog/redo02b.log       524288000
2       /disk1/redolog/redo02b.log     524288000
1       /disk1/redolog/redo01a.log     524288000
1       /crtest1/oradata/redo01b.log   524288000
I think the new configuration is better than the old one from a protection (multiplexing) point of view. Here are the disk partitions on the server:
-bash-3.00$ df -lh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9.6G 2.2G 7.3G 23% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
[code]....
View 4 Replies
Jan 13, 2011
Please find the attached TKPROF'ed file of the session.
I started the trace after the query had already started (upon the user's complaint).
However, even after tracing the session for more than 30 minutes, I cannot see where those 30 minutes are accounted for in this file.
View 11 Replies
Apr 13, 2011
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days back, but from nine days ago it jumped and has consistently stayed at the higher figure. I ran AWR reports on all three nodes and found one node with high CPU utilization; it shows the top events below.
EVENT                           WAITS     TIME(S)   AVG WAIT(MS)   %TOTAL CALL TIME   WAIT CLASS
CPU time                                  5,802                    34.9
RFS ping                        15        5,118     33,671         30.8               Other
log file sequential read        234,831   5,036     21             30.3               System I/O
SQL*Net more data from client   24,171    1,087     45             6.5                Network
db file sequential read         130,939   453       3              2.7                User I/O
Findings:
In the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for about 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer?
2) I feel the network wait can be reduced by tweaking the SDU & TDU values based on the MTU.
View 2 Replies
Apr 21, 2011
I ran an scp command to transfer a file from the local server to a remote server. When I try to kill the process, it gives an error.
ksh: kill: 19258: No such process
If I do a ps -ef | grep scp it shows the process running, but if I try to kill it, it shows me the error.
View 7 Replies
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas the database consists of physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
Jul 2, 2013
I am getting the below-mentioned errors in the alert log file very frequently.
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 747
ORA-06512: at "BEE_CODE_05252013.SS_INDEX_JOB_PKG", line 496
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Tue Jul 02 15:31:15 2013
[code]....
View 1 Replies
Jul 24, 2012
I am using Oracle version 10.2.0.4.0. I have trace file info as below for one of the queries. How should I interpret the trace file? What is the issue in the query, and what is the scope for improvement? I have removed the query and its plans from the trace file; I have only posted the wait sections.
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.14 0.13 0 0 1 0
Execute 1 6.63 162.12 33540 72921 383 0
Fetch 17272 178.89 1933.95 274835 3147603 20 259063
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 17274 185.66 2096.21 308375 3220524 404 259063
[code]....
View 18 Replies
Apr 18, 2013
We are running Oracle 11.2.0.3 on a Windows 2008 R2 server with an app server on the same machine. Our app has a search function that has recently started timing out on us, at first intermittently but now often. I've run a trace in our dev environment (below) in an attempt to track down the issue, but I'm finding very little useful information in relation to the errors I see in the trace file below. My experience in deciphering trace output is nil, so how do I interpret these errors?
*** 2013-04-18 14:27:23.353
*** SESSION ID:(3093.63097) 2013-04-18 14:27:23.353
*** CLIENT ID:() 2013-04-18 14:27:23.353
*** SERVICE NAME:(SYS$USERS) 2013-04-18 14:27:23.353
*** MODULE NAME:(w3pw.exe) 2013-04-18 14:27:23.353
*** ACTION NAME:() 2013-04-18 14:27:23.353
[code]....
View 3 Replies
Jul 26, 2013
As the SYS user I want to trace another user's session, so I am using:
SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>TRUE);
SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>FALSE);
to trace. But the trace file gets mixed up with the other trace files in udump. I did some googling and found that we can use
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_TEST_SESSION';
but this only tags the trace file of the SYS session itself, not the desired user session. How can we define the trace file name for the desired user session's trace?
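One commonly used approach (a sketch; the trigger name and the SCOTT user are made up) is a logon trigger that fires only for the user being investigated, so both the identifier and the trace are set from inside that user's own session:
CREATE OR REPLACE TRIGGER trace_scott_logon
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'SCOTT' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''MY_TEST_SESSION''';
    dbms_monitor.session_trace_enable(waits => TRUE, binds => TRUE);
  END IF;
END;
-- DROP TRIGGER trace_scott_logon;  -- remove it once the trace has been collected
The resulting trace files then carry MY_TEST_SESSION in their names, so they no longer get lost among the other udump files.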
View 2 Replies
Nov 19, 2011
After applying the 10.2.0.4 patch on a two-node 10gR2 cluster, I tried to start ASM on both nodes, but I found that the first node (the master node)
cannot be started, while the second node is OK. This error was found in the lmon trace file (lmon.trc):
kjxgmpoll: terminate the CGS reconfig.
Error: KGXGN polling error (15)
error 29702 detected in background process
ORA-29702: error occurred in Cluster Group Service operation
ksuitm: waiting up to [5] seconds before killing DIAG
View 3 Replies
Mar 11, 2013
Oracle version: 11.2.0.3.0 Enterprise Edition
OS - IBM/AIX RISC System/6000
I am trying to generate a trace file for a piece of code executed by the java server. What I asked the java developer to do is to place this block immediately after establishing a connection:
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''M1''';
dbms_monitor.session_trace_enable(waits => FALSE, binds => TRUE);
END;
And at the end of the logical java block of code:
BEGIN
dbms_monitor.session_trace_disable;
END;
What I want to know is how many rows the java server fetches after executing one particular SELECT statement, because they complain about receiving fewer rows from the SELECT statement than expected. For example, if I execute the same SQL query in a SQL*Plus session, I fetch, let's say, 1000 rows. When the same query is executed from the java side, fewer rows are fetched, let's say 500. Because I doubted this, I wanted to trace to see what is actually executed and how. In the excerpt of the trace file I see exactly the same query that I execute myself in a SQL*Plus session, and there is no fine-grained access control on the underlying tables in the query.
And my question is: how do I interpret the FETCH phase of the cursor (for the SELECT statement)? For example, if I see one FETCH for this cursor, does this mean that the java server fetched only one row? If I see 100 FETCHes, does this mean they fetched 100 rows from the cursor?
Here is a short excerpt from the trace file (don't crucify me for the query and the obvious denormalized design of the tables; this is not invented by me):
PARSING IN CURSOR #4573587152 len=667 dep=0 uid=737 oct=3 lid=737 tim=17685516462413 hv=954980718 ad='70000006d3e4940' sqlid='69pm96nwfrqbf'
select /* ordered */ o.id, nvl(o.par_id, -1) as par_id, o.NAME_GER, o.NAME_ENG, o.NAME_ESP, o.NAME_ITL,o.NAME_FRA, decode(lo.lflag, 'Y', 'L', 'N') as leaf_or_node, lo.distance + 1 as "LEVEL", to_char(o.beg_date,
[code]...
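For orientation, a hypothetical FETCH record in the same raw-trace format (all values invented): each FETCH line carries its own r= field, which is the number of rows returned by that single fetch call, so the rows for one execution are usually spread over many FETCH lines.
FETCH #4573587152:c=0,e=1520,p=0,cr=12,cu=0,mis=0,r=100,dep=0,og=1,plh=954980718,tim=17685516462999
Here r=100 would mean this one fetch call returned 100 rows; summing the r= values over all FETCH lines for the cursor gives the total number of rows the java server actually received.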
View 5 Replies
Nov 8, 2010
I got the below message in a trace file. What is the line "swap info: free = 0.00M alloc = 0.00M total = 0.00M" trying to say?
I have
RAM=1.5G
Swap=3.5G
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options ORACLE_HOME = /home/oracle
System name:Linux
Node name:linuxdev
Release:2.6.5-7.97-default
Version:#1 Fri Jul 2 14:21:59 UTC 2004
[Code]....
View 1 Replies
Jun 16, 2011
I am working on tuning the performance of one of the concurrent requests in our 11i ERP system, which runs on an 11.1.0.7 database.
I enabled an oradebug trace for the request and generated TKPROF output from it. For the query below, which is taking time, I found that in the generated trace the wait event is "db file sequential read" on the PO_LINES_N10 index, but in the generated TKPROF output, for the same query, a full table scan of PO_LINES_ALL is happening, as that table is 600 MB in size.
Below is the query ,
===============
UPDATE PO_LINES_ALL A
SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM FROM APPS.IRPO_IN_BPAUPDATE_TMP C WHERE BATCH_ID = :B1 AND PROCESSED_FLAG = 'P' AND ACTION = 'UPDATE' AND C.LINE_ID =A.PO_LINE_ID AND ROWNUM = 1 AND SUPPLIER_ITEM IS NOT NULL),
LAST_UPDATE_DATE = SYSDATE
===============
Index PO_LINES_N10 is on the column LAST_UPDATE_DATE; logically, for such a query, the index should not have been used, as that indexed column is not in the SELECT or WHERE clause.
Also, why is there a discrepancy between the TKPROF output and the raw trace generated for the same query?
So I decided to make the index PO_LINES_N10 INVISIBLE, but that index is still being accessed in the trace file.
I have also checked the parameter below, which is FALSE, so the optimizer should not make use of invisible indexes during query execution.
SQL> show parameter invisible
NAME                              TYPE     VALUE
--------------------------------- -------- ------
optimizer_use_invisible_indexes   boolean  FALSE
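As a sanity check (a sketch; the object names come from the post above), the dictionary view shows what Oracle currently thinks of the index:
-- confirm the index really is marked invisible (11g: DBA_INDEXES has a VISIBILITY column)
SELECT index_name, visibility
FROM   dba_indexes
WHERE  index_name = 'PO_LINES_N10';
Note also that an invisible index is still maintained by DML: the UPDATE changes LAST_UPDATE_DATE, so blocks of PO_LINES_N10 must still be read and modified for index maintenance, which can show up as "db file sequential read" on that index even though the optimizer never chose it as an access path.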
View 5 Replies
Nov 19, 2012
I am seeing enq: TX - row lock contention as a top wait event. It occurs between 10pm and 2am.
We have a SQL*Loader job running every hour (conventional path), but during that specific window I get "Global Enqueue Services Deadlock detected" (between 10 and 5). I analyzed the related trace file and it left me a little confused. I found four INSERT statements that are the culprits for this locking. Out of the four SQL statements, two were run by the same SID, and the other two inserts were run by another single SID. I am confused because how can the same SID lock itself? The trace file is below. During this period the Oracle maintenance window is active.
Trace file:
*** 2012-10-09 03:40:31.135
user session for deadlock lock 0x15365e060
sid: 1104 ser: 22256 audsid: 8797820 user: 49/iurth flags: 0x45
pid: 71 O/S info: user: oracle, term: UNKNOWN, ospid: 8601
image: oracle@sgh0909
client details:
[code]....
View 3 Replies
View Related
Oct 19, 2010
In the trace file I always get one error, "Error encountered: ORA-10980", due to which the MVs fail. How do I overcome this error?
View 2 Replies
Oct 4, 2012
In my database the SMON trace file has grown huge and now we want to delete it. I also tried an oradebug operation, but with no luck. How can I delete the trace file that SMON generates?
View 5 Replies
Jan 6, 2011
I am trying to get a readable version of a .trc file generated by Oracle 10.2.0.4.0 with TKPROF, but I have so far been unable to get this done. This is what I have already tried:
tkprof /u00/app/oracle/admin/DB/udump/*BITN1234.trc /batch/salidas/student/BITNTRACE.txt
And whether I run this from my PL/SQL process (PRO*C) or from a command line, it returns:
TKPROF: Release 10.2.0.4.0 - Production on Thu Jan 6 11:04:38 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
could not open trace file /u00/app/oracle/admin/DB/udump/BITN1234.trc
I have already (from my PRO*C code and from command line)...
- made sure that the file is in the directory.
- run this from the udump directory, where the .trc file is...didn't work.
- run this from the udump directory, specifying the complete path explicitly in the tkprof line anyway (redundant, I know)... didn't work.
- tried to copy the file to another directory in order to run tkprof, and it returns:
cp: BITN1234.trc: The file access permissions do not allow the specified action.
- tried changing privileges with:
chmod a+r BITN1234.trc
and it returns:
chmod: BITN1234.trc: Operation not permitted.
- tried to run tkprof01 instead of tkprof and it returns:
We trust you have received the usual lecture from the local System Administrator. It usually boils down to these two things:
#1) Respect the privacy of others.
#2) Think before you type.
View 3 Replies
Apr 8, 2013
I am trying to install Oracle 11g R2 (64-bit) on a Windows 7 64-bit OS. While creating the database using DBCA I am getting an error. Below is the screenshot.
Below is the error in the Trace file...
oracle.sysman.assistants.util.step.StepExecutionException:
Error in Process: C:\Oracle\app\oracle\product\11.2.0\server\bin\orapwd.exe
Unable to find error file %ORACLE_HOME%\RDBMS\opw<lang>.msb
View 3 Replies
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
View 39 Replies
Apr 9, 2012
I have just migrated a database to 11.2. The migration was successful and the database is now in open mode and working fine, but I am getting the following message in the alert log file:
"Time drift detected. Please check VKTM trace file for more details." I am using the Windows platform.
View 4 Replies
May 7, 2010
We are facing an issue on one of the databases. The database has been generating a large number of trace files (14000) over the last two days, consuming around 15G of space on the disk. And the content of the trace files does not contain any meaningful message to debug:
cat /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc
*** TRACE DUMP CONTINUES IN FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
Dump file /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (Many lines with above message)
The alert log had one repeated error yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file contains this error:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
[Code] .......
View 2 Replies