I am having enq: TX - row lock contention as the top wait event. It is occurring between 10pm and 2am.
We have a SQL*Loader job (conventional path) running every hour, but for that specific period of time (between 10 and 5) I am getting "Global Enqueue Services Deadlock detected". I analyzed the related trace file and it left me a little confused. I found four INSERT statements that are the culprits for this locking; out of the four SQLs, two of them were run by the same SID, and the other two were run by another single SID. I am confused because how can the same SID lock itself? Trace file below. During this period the Oracle maintenance window is active.
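While this window is active, I am planning to capture who is blocking whom with something like the query below (a rough sketch, assuming 10g or later so that V$SESSION exposes BLOCKING_SESSION):

SELECT s.sid,
       s.serial#,
       s.username,
       s.sql_id,
       s.event,
       s.seconds_in_wait,
       s.blocking_session      -- SID of the session holding the lock
FROM   v$session s
WHERE  s.blocking_session IS NOT NULL
ORDER  BY s.seconds_in_wait DESC;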
In my current project, I am handling an Oracle database, version 10.2.0.3. I analyzed the AWR report for the past month and saw the following wait events consistently:
CPU time - 45.6% (% Total Call Time)
db file sequential read - 30.6% (% Total Call Time)
enq: TX - row lock contention - 18.8% (% Total Call Time)
I have uploaded the report for your reference. What should be my approach to troubleshoot this?
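My tentative first step, assuming the Diagnostics Pack is licensed so that the AWR/ASH views can be queried, is to pull the SQL behind the enq: TX waits for the snapshot range covered by the report, something like:

SELECT ash.sql_id,
       COUNT(*) AS samples     -- rough indicator of time spent in this wait
FROM   dba_hist_active_sess_history ash
WHERE  ash.event = 'enq: TX - row lock contention'
GROUP  BY ash.sql_id
ORDER  BY samples DESC;

From there I would look at what those statements touch and who else updates the same rows.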
Attached file: awr_report.html (382.28K)
We have occurrences of enq: TX - index contention in the database. Using the SQL ID, we have identified the INSERT statement and the table into which it is inserting.
This table has almost 25 different indexes, some of which are unique as well. I am wondering how to identify the actual index causing the issue, out of these 25 indexes.
Is there any way to pinpoint the name of the index which is causing the lock? My plan is, once the index is identified, to check the extents, INITRANS and other attributes of this index to fix it.
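One idea I want to try first, assuming ASH is available (Diagnostics Pack), is to read CURRENT_OBJ# from the samples that waited on this event and map it back to an object name; if it points at one of the 25 indexes, that should be the one to look at:

SELECT o.owner,
       o.object_name,
       o.object_type,
       COUNT(*) AS samples
FROM   v$active_session_history ash
       JOIN dba_objects o
         ON o.object_id = ash.current_obj#
WHERE  ash.event = 'enq: TX - index contention'
GROUP  BY o.owner, o.object_name, o.object_type
ORDER  BY samples DESC;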
I am getting the below-mentioned error in the alert log file very frequently.
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 747
ORA-06512: at "BEE_CODE_05252013.SS_INDEX_JOB_PKG", line 496
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Tue Jul 02 15:31:15 2013
...
I am using Oracle version 10.2.0.4.0. I have trace file info as below for one of the queries. How should I interpret the trace file? What is the issue in the query, and what is the scope for improvement? I have removed the query and its plans from the trace file; I have only posted the wait sections.
I need to take a database trace of each page of my web application. I am issuing the following commands.
BEGIN dbms_monitor.session_trace_enable(session_id=>122, serial_num=>NULL, waits=>true, binds=>true); END;
Then I access the web application. Once it renders completely, I execute the following command.
BEGIN dbms_monitor.session_trace_disable(session_id=>122); END;
In the user_dump_dest folder, I am expecting to see a new trace file whenever I execute these commands, but the same trace file keeps getting updated. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g Release on CentOS 5.x.
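What I am thinking of trying, so that each iteration gets its own file, is to have the traced session itself change TRACEFILE_IDENTIFIER before every page; my understanding is that a new identifier makes Oracle switch to a new trace file. A minimal sketch (the PAGE_ tag is just made up, and this has to run inside the session being traced, not from my monitoring session):

BEGIN
  -- give this iteration its own trace file name suffix
  EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''PAGE_' ||
                    TO_CHAR(SYSDATE, 'HH24MISS') || '''';
END;
/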
We are running Oracle 11.2.0.3 on a Windows 2008 R2 server with an app server on the same machine. Our app has a search function that has recently started timing out on us, sometimes intermittently but often now. I've run a trace in our dev environment (code below) in an attempt to track down the issue, but I'm finding very little useful information in relation to the errors I see in the trace file below. My experience in deciphering trace output is nil, so how do I interpret these errors?
...to trace, but this trace file gets mixed in with other trace files in udump. I did some googling and found that we can use
ALTER SESSION SET TRACEFILE_IDENTIFIER = "MY_TEST_SESSION".
but this will generate the trace file for the SYS user account, not for the desired user session. How can we define the trace file name for the desired user session's trace?
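One workaround I am considering, assuming 10g or later, is a logon trigger that tags and traces only the account I care about, so the resulting trace files carry that identifier instead of being created under the SYS session I run DBMS_MONITOR from. APPUSER and the identifier below are placeholders:

CREATE OR REPLACE TRIGGER trace_appuser_logon
AFTER LOGON ON DATABASE
BEGIN
  -- only tag and trace sessions of the user we want to investigate
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APPUSER' THEN
    EXECUTE IMMEDIATE q'[ALTER SESSION SET TRACEFILE_IDENTIFIER = 'APPUSER_TRACE']';
    DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
  END IF;
END;
/

The trigger would of course have to be dropped once the investigation is over, otherwise every session of that user keeps tracing.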
After applying the 10.2.0.4 patch to both 10gR2 nodes, I tried to start ASM on the two nodes, but I found that the first node (master node) cannot be started, while the second node is OK. This error was found in the trace file lmon.trc:
kjxgmpoll: terminate the CGS reconfig.
Error: KGXGN polling error (15)
error 29702 detected in background process
ORA-29702: error occurred in Cluster Group Service operation
ksuitm: waiting up to [5] seconds before killing DIAG
Oracle version: 11.2.0.3.0 Enterprise Edition OS - IBM/AIX RISC System/6000
I am trying to generate a trace file from a piece of code executed by a Java server. What I asked the Java developer to do is to place this block immediately after establishing a connection:
BEGIN EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''M1'''; dbms_monitor.session_trace_enable(waits => FALSE, binds => TRUE); END;
And at the end of the logical Java block of code:
BEGIN dbms_monitor.session_trace_disable; END;
What I want to know is how many rows the Java server fetches after executing one particular SELECT statement, because they complain about receiving fewer rows from the SELECT statement than expected. For example, if I execute the same SQL query in a SQL*Plus session, I fetch, let's say, 1000 rows. When the same query is executed from the Java side, fewer rows are fetched, let's say 500. And because I doubted it, I wanted to trace to see what is actually executed and how. In the excerpt of the trace file I see exactly the same query that I execute myself in a SQL*Plus session. There is no fine-grained access control on the underlying tables in the query.
And my question is, how do I interpret the FETCH phase of the cursor (for the SELECT statement)? For example, if I see one FETCH for this cursor, does this mean that the Java server has fetched only one row? If I see 100 FETCHes, does this mean they fetched 100 rows from the cursor?
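As a cross-check, before counting FETCH lines by hand, I am thinking of comparing the cursor statistics in V$SQL for this SQL_ID (assuming the cursor is still in the shared pool):

SELECT sql_id,
       child_number,
       executions,
       fetches,            -- number of fetch calls
       rows_processed      -- total rows returned across all executions
FROM   v$sql
WHERE  sql_id = '69pm96nwfrqbf';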
Here is a short excerpt from the trace file (don't crucify me for the query and the obvious denormalized design of the tables, this is not invented by me):
PARSING IN CURSOR #4573587152 len=667 dep=0 uid=737 oct=3 lid=737 tim=17685516462413 hv=954980718 ad='70000006d3e4940' sqlid='69pm96nwfrqbf'
select /* ordered */ o.id, nvl(o.par_id, -1) as par_id, o.NAME_GER, o.NAME_ENG, o.NAME_ESP, o.NAME_ITL, o.NAME_FRA, decode(lo.lflag, 'Y', 'L', 'N') as leaf_or_node, lo.distance + 1 as "LEVEL", to_char(o.beg_date,
I got the below message in a trace file. What is the line "swap info: free = 0.00M alloc = 0.00M total = 0.00M" trying to say?
I have
RAM=1.5G Swap=3.5G
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options ORACLE_HOME = /home/oracle System name:Linux Node name:linuxdev Release:2.6.5-7.97-default Version:#1 Fri Jul 2 14:21:59 UTC 2004
I am working on tuning the performance of one of the concurrent requests in our 11i ERP system, which has database 11.1.0.7.
I had enabled oradebug trace for the request and generated tkprof output from it. For the below query, which is taking time, I found that in the generated trace the wait event is "db file sequential read" on the index PO_LINES_N10, but in the generated tkprof output, for the same query, a full table scan of PO_LINES_ALL is happening, as that table is 600 MB in size.
Below is the query:
UPDATE PO_LINES_ALL A
   SET A.VENDOR_PRODUCT_NUM = (SELECT SUPPLIER_ITEM
                                 FROM APPS.IRPO_IN_BPAUPDATE_TMP C
                                WHERE BATCH_ID = :B1
                                  AND PROCESSED_FLAG = 'P'
                                  AND ACTION = 'UPDATE'
                                  AND C.LINE_ID = A.PO_LINE_ID
                                  AND ROWNUM = 1
                                  AND SUPPLIER_ITEM IS NOT NULL),
       LAST_UPDATE_DATE = SYSDATE
Index PO_LINES_N10 is on the column LAST_UPDATE_DATE; logically, for such a query, the index should not get used, as that indexed column is not in the SELECT/WHERE clause.
Also, why is there a discrepancy between the tkprof output and the raw trace for the same query?
So I decided to make the index PO_LINES_N10 INVISIBLE, but that index is still getting accessed in the trace file.
I have also checked the below parameter, which is FALSE, so the optimizer should not make use of invisible indexes during query execution.
SQL> show parameter invisible
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
optimizer_use_invisible_indexes      boolean     FALSE
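What I plan to check next is whether the index is really marked invisible in the dictionary, and whether the tkprof plan might simply come from a cursor parsed before the change. A quick check, assuming access to the DBA views:

SELECT owner,
       index_name,
       visibility,      -- should show INVISIBLE after the ALTER INDEX
       status
FROM   dba_indexes
WHERE  index_name = 'PO_LINES_N10';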
I am using Oracle 11g on my Windows machine. I wrote a multithreaded program that inserts into the Oracle database, and I was getting this exception before:
ORA-12519: TNS:no appropriate service handler found
So after going through various articles on the internet, I found that the solution for the above exception is to increase the number of processes. So I did the following, and it worked fine for me after that.
cmd>sqlplus / as sysdba
sqlplus>alter system set processes=300 scope=spfile;
sqlplus>shut immediate;
sqlplus>startup
Now after some more time, I increased the processes to something like below
alter system set processes=1000000 scope=spfile;
And since then, whenever I start up my Oracle database, it gives me the exception below; this has been happening for a day now.
ORA-10997: another startup/shutdown operation of this instance inprogress
ORA-09968: unable to lock file
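My current plan to get out of this, assuming I can still connect as SYSDBA to the idle instance (the pfile path below is just a placeholder), is to rebuild the spfile with a sane processes value; if ORA-09968 persists, I suspect a half-started instance is still holding the lock file and would have to be cleared first:

cmd>sqlplus / as sysdba
sqlplus>create pfile='C:\temp\initfix.ora' from spfile;
(edit C:\temp\initfix.ora and set processes back to a sane value, e.g. processes=300)
sqlplus>startup pfile='C:\temp\initfix.ora';
sqlplus>create spfile from pfile='C:\temp\initfix.ora';
sqlplus>shutdown immediate;
sqlplus>startup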
In my database, the SMON trace file was huge. Now we want to delete it; I also tried with oradebug but no use. How can I delete the trace file which SMON generates?
And whether I run this from my PL/SQL process (PRO*C) or from a command line, it returns:
TKPROF: Release 10.2.0.4.0 - Production on Thu Jan 6 11:04:38 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
could not open trace file /u00/app/oracle/admin/DB/udump/BITN1234.trc
I have already (from my PRO*C code and from the command line):
- made sure that the file is in the directory
- run this from the udump directory, where the .trc file is... didn't work
- run this from the udump directory, specifying explicitly the complete path anyway in the tkprof line (redundant... I know)... didn't work
- tried to copy the file to another directory in order to run tkprof, and it returns:
cp: BITN1234.trc: The file access permissions do not allow the specified
I'm using SAP ECC 6.0, Oracle 10g, HP-UX B11.23. Recently I performed a database restoration from my backup tape. However, during the process of bringing up the Oracle and SAP database, I observed that some archive logs may have gone missing. I tried to check for them on my backup tape, but could not find them.
In short, my SAP database is now up and running, but I'm having another problem when I execute a "CheckDB" job in DB13. The job is unable to complete successfully.
DB13 job log:
30.08.2010 19:44:40 Job started
30.08.2010 19:44:40 Step 001 started (program RSDBAJOB, variant &0000000000061, user ID BASIS)
30.08.2010 19:44:40 Execute logical command BRCONNECT On host drqaecc
30.08.2010 19:44:40 Parameters: -u / -jid CHECK20100830194440 -c -f check
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (tkprof) of the same. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
I have just migrated the database to 11.2. The migration was successful and the database is now in open mode, working fine. BUT I am getting the following message in the alert log file:
"Time drift detected. Please check VKTM trace file for more details." I am using the Windows platform.
We are facing an issue on one of the databases. The database has been generating a large number of trace files (14,000) over the last two days, consuming around 15G of disk space, and the content of the trace files does not contain any meaningful message to debug:
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (Many lines with above message)
The alert log had one error repeated yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file has this error:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
...
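As a first check I plan to confirm whether the ORACLE_OCM objects really are invalid, with a simple dictionary query (assuming DBA view access):

SELECT owner,
       object_name,
       object_type,
       status
FROM   dba_objects
WHERE  owner = 'ORACLE_OCM'
AND    status <> 'VALID';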
I have a deadlock trace file to analyse. I used to be able to see the rowid as a 16-bit hex value in the trace file, which I could then query on to get the actual real-world row. I see that in the 11gR2 deadlock trace the formatting is different, and for the life of me I am unable to see the rowid. Has Oracle stopped reporting this now, or is there another way to get this value? The deadlock graph shows me
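If the 'Rows waited on' section of the trace still prints the object number, file, block and slot, my fallback plan is to rebuild the rowid myself with DBMS_ROWID; the substitution variables below are placeholders for the values I would read out of the deadlock graph:

SELECT dbms_rowid.rowid_create(
         rowid_type    => 1,          -- 1 = extended rowid
         object_number => &objn,      -- dictionary objn from the trace
         relative_fno  => &rfile,     -- relative file number
         block_number  => &block,
         row_number    => &slot) AS derived_rowid
FROM   dual;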
We are setting up RAC between two machines (desktop PCs connected via LAN) that have storage local to them and no shared storage, only NFS. We have shared the mount point via NFS. We do not have any shared storage, but we need to share the RAW files to set up the OCR and voting disks.
The problem is:
1. We need to keep the RAW devices for the OCR and voting disk common to both machines (desktop PCs, both Linux OEL U5). I am able to create the raw file systems, but they are local to one machine. I am not able to understand how to share the raw devices between the two machines.
2. Tried to use ISCSI utility but that did not work. (How_to_use_iSCSI_Targets_on_Linux)
Scenario: one machine has 500 GB of storage, the second machine has 80 GB. Private and public networks are set up (two network cards on the LAN). All are communicating.
Problem: we need to make 300 GB (out of the 500 GB) of machine 1's storage available as shared storage. How can we do that? (Do we need any virtualization software, e.g. VMware, for that?)
I need to copy a .CSV file from a Windows server shared path (\\hostname\output) to another server, which I believe is on Unix. The other server name is abc.hcl.com. On this server I need to put it in the root directory. I will have to use SFTP and not FTP.
Our server (server1) crashed and we recovered the same DB to server2 using an RMAN backup. But while running our scheduler for email alerts we got the error: ORA-01157: cannot identify/lock data file 201 - see DBWR trace file, ORA-01110: data file 201: 'D:ORADATAKFDBTEMP01.DBF'. Our temp file is already in the E: folder.
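Since file 201 is a temp file, my understanding is that it does not need to be restored at all and can simply be dropped and re-added at the correct location. A rough sketch of what I intend to run; the tablespace name, size and both paths are assumptions, and the real file name should be taken from V$TEMPFILE or the alert log:

-- hypothetical paths; use the exact name the control file records
ALTER DATABASE TEMPFILE 'D:\ORADATA\KFDB\TEMP01.DBF' DROP INCLUDING DATAFILES;
ALTER TABLESPACE TEMP ADD TEMPFILE 'E:\ORADATA\KFDB\TEMP01.DBF' SIZE 500M AUTOEXTEND ON;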
We know the lock mode held by a session can be analysed using the LMODE (LM) column in v$lock, but I am confused when looking at that column: it only shows numbers from 0 to 6.
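The numbers are the lock mode codes. What I am planning to do is decode them inline so the output is readable; this is my own decode list based on the documented modes 0 to 6:

SELECT sid,
       type,
       id1,
       id2,
       DECODE(lmode,   0, 'none',
                       1, 'null (NL)',
                       2, 'row share (SS)',
                       3, 'row exclusive (SX)',
                       4, 'share (S)',
                       5, 'share row exclusive (SSX)',
                       6, 'exclusive (X)',
                       TO_CHAR(lmode))   AS mode_held,
       DECODE(request, 0, 'none',
                       1, 'null (NL)',
                       2, 'row share (SS)',
                       3, 'row exclusive (SX)',
                       4, 'share (S)',
                       5, 'share row exclusive (SSX)',
                       6, 'exclusive (X)',
                       TO_CHAR(request)) AS mode_requested
FROM   v$lock
WHERE  lmode > 0
   OR  request > 0;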
I am trying to write data to a network shared folder. When I write to a local file it works perfectly. Below is my procedure.
CREATE OR REPLACE PROCEDURE nbpsbp_file AS
  TYPE r_cursor IS REF CURSOR;
  refr       r_cursor;
  tab_name   VARCHAR2(20) := NULL;
  tab_name1  VARCHAR2(20) := NULL;
  tab_name2  VARCHAR2(20) := NULL;
...
When I execute the above procedure, it gives me the following error
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
ORA-06512: at "NBPSBP_FILE", line 36
ORA-06512: at line 1
I have also set the parameter utl_file_dir = '\\10.16.10.225\temp'. When I set utl_file_dir to a local folder, for example c:\temp, and use the same path in UTL_FILE.FOPEN, it works fine and writes the desired output to the text file. But when I give it a network address, it raises the above error.
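What I am going to try next, based on my reading, is to switch from utl_file_dir to a directory object pointing at the UNC path, and to make sure the Windows service account running Oracle can actually reach that share; my understanding is that UTL_FILE works with the privileges of that service account, so if it runs as LocalSystem it will not see network shares regardless of utl_file_dir. The directory name, share path and grantee below are placeholders:

-- directory name, path and grantee are assumptions for illustration
CREATE OR REPLACE DIRECTORY nbpsbp_out AS '\\10.16.10.225\temp';
GRANT READ, WRITE ON DIRECTORY nbpsbp_out TO nbpsbp_user;

-- and in the procedure, open the file via the directory object:
-- file_handle := UTL_FILE.FOPEN('NBPSBP_OUT', 'output.csv', 'W');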