Whether I run tkprof from my Pro*C process or from the command line, it returns:
TKPROF: Release 10.2.0.4.0 - Production on Thu Jan 6 11:04:38 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
could not open trace file /u00/app/oracle/admin/DB/udump/BITN1234.trc
I have already tried the following (from my Pro*C code and from the command line):
- made sure that the file is in the directory;
- ran tkprof from the udump directory, where the .trc file is... didn't work;
- ran it from the udump directory while explicitly specifying the
complete path on the tkprof line anyway (redundant, I know)... didn't work;
- tried to copy the file to another directory in order to run tkprof there,
and it returns:
cp: BITN1234.trc: The file access permissions do not allow the specified action.
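For what it's worth, that cp error points at OS file permissions rather than at tkprof itself: trace files are often created owned by the oracle user with mode 640. A quick check along these lines (a sketch, reusing the path from above) would confirm it:

ls -l /u00/app/oracle/admin/DB/udump/BITN1234.trc
# if the file is owned by oracle and not readable by your account,
# then as the oracle user (or root):
chmod o+r /u00/app/oracle/admin/DB/udump/BITN1234.trc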
I am receiving errors when trying to load using my control file. The errors are as follows:
SQL*Loader-500: Unable to open file (homework.ctl)
SQL*Loader-553: file not found
SQL*Loader-559: System error: The system cannot find the file specified.
My control file is located directly in the C drive (C:\homework.ctl). The control file contains the following:

LOAD DATA
INFILE 'c:\country.dat'
APPEND
INTO TABLE homework
WHEN (month = 'April')
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(country, month, day)
The command I am entering is:
sqlldr system/password control=homework.ctl
I've tried c:\homework.ctl, 'c:\homework.ctl', and placing the file in the BIN folder of Oracle.
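For reference, this is the shape of the command I would expect to work once the backslashed path is given explicitly, assuming the control file really is at the root of C: (the log= parameter is optional; it is added here only to capture any further errors):

sqlldr system/password control=C:\homework.ctl log=C:\homework.log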
I'm getting an error when trying to use the new Data Pump Export/Import utility.
I am able to create a directory using SQL*Plus, and I get the "Directory created" message, but no directory actually gets created on the server.
SQL> CREATE DIRECTORY datapump AS 'C:\Inetpub\datafile\datapump';

Directory created.

But I don't see the directory created on the server.
Then on the server:
C:\Documents and Settings\Administrator>expdp ******/****** FULL=y DIRECTORY=datapump DUMPFILE=expdata.dmp LOGFILE=expdata.log

Export: Release 10.2.0.1.0 - Production on Wednesday, 01 November, 2006 1:51:55
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
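As far as I know, CREATE DIRECTORY only records a pointer in the data dictionary; Oracle never creates the operating-system folder itself, so the folder must already exist and be writable by the account the Oracle service runs as. A minimal sketch, reusing the path from above:

REM first, on the server's OS:  mkdir C:\Inetpub\datafile\datapump
SQL> CREATE OR REPLACE DIRECTORY datapump AS 'C:\Inetpub\datafile\datapump';
SQL> GRANT READ, WRITE ON DIRECTORY datapump TO system;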
We are facing an issue on one of our databases. It has been generating a large number of trace files (around 14,000) over the last two days, consuming around 15 GB of disk space, and the content of the trace files has no meaningful message to debug:
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (many more lines with the above message)
The alert log shows one repeated error from yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file shows the following:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
[Code] .......
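A first diagnostic step here might be to see whether the ORACLE_OCM package body can simply be recompiled (ORACLE_OCM belongs to Oracle Configuration Manager; if OCM is not used in this environment, disabling its scheduler jobs is another option, but that is an assumption about the site):

SELECT object_name, object_type, status
  FROM dba_objects
 WHERE owner = 'ORACLE_OCM' AND status = 'INVALID';

ALTER PACKAGE oracle_ocm.mgmt_db_ll_metrics COMPILE BODY;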
I have had the following problem open with Oracle support since March 2011 (8 months), and still no resolution.
When I export all our schemas on Sunday night it takes about 1 hour 50 minutes. When I export the same schemas on any other night it takes 7 hours. The only difference is that on Sunday at 4:00 am we drop all connections in the connection pools and establish new ones. Then 19 hours later, on Sunday at 23:00, we perform the exports, which take only 2 hours to complete.
I have also tried recreating the connections in the connection pools during the week, and the exports have then taken only 2 hours to complete. But the following night, after the connections have been used during the day, the exports again take 7 hours. So it appears the export speed gets significantly slower when there are many open connections that have been used and not closed.
From the Statspack report I found 2 SQL statements, internal to the export command, whose elapsed execution times differed by an order of magnitude between the fast export and the slow export (see below).
How can I speed up the exports without having to drop and recreate the database connections in the connection pools each night?
FAST: elapsed_time: 430.90  executions: 161,388
Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL WHERE CNO = :1 ORDER BY COLNO
I use the following export command for each schema:

$ORACLE_HOME/bin/exp user/pass file=somefile.dmp owner=$SCHEMA log=somelog.log buffer=9000000
I have an Oracle Standard Edition 11.1.0.7 database on 64-bit Linux with a 7 GB SGA. I currently export approx 200 schemas each night (I use exp, not Data Pump, because Data Pump is a lot slower and we can't use the parallel processing features of Data Pump on a Standard Edition database). The export normally takes 1 hour 50 minutes, which is approximately 2 schemas exported every minute. When the exports run slowly, each export takes almost 2 minutes to complete.
The database has about 20 GB of data and 50 GB of indexes. It also has approx 500 connections via TopLink connection pools from 8 application servers.
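Since the slow internal statement queries the SYS.EXU10CCL dictionary view, one low-risk experiment is to refresh the dictionary and fixed-object statistics shortly before the nightly run and see whether the export's internal queries pick a better plan. This is a sketch of an experiment, not a confirmed fix for this symptom:

BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS;
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/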
I am using Oracle 10g R2. Somehow the control file has become corrupted and the database will not open, and there is no backup of the control file. Now I need to open the database without recreating it.
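If no multiplexed copy of the control file survives anywhere, the usual way out is to start the instance in NOMOUNT and issue CREATE CONTROLFILE, listing every redo log and datafile. Below is only a bare sketch with hypothetical file names and sizes; the real names must come from the actual database, and trying it on a clone first would be prudent:

STARTUP NOMOUNT

CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
  LOGFILE GROUP 1 '/u01/oradata/orcl/redo01.log' SIZE 50M,
          GROUP 2 '/u01/oradata/orcl/redo02.log' SIZE 50M
  DATAFILE '/u01/oradata/orcl/system01.dbf',
           '/u01/oradata/orcl/sysaux01.dbf',
           '/u01/oradata/orcl/users01.dbf';

RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;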
I am getting the below error in the alert log file very frequently.
ORA-06512: at "CTXSYS.DRUE", line 160 ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 747 ORA-06512: at "BEE_CODE_05252013.SS_INDEX_JOB_PKG", line 496 Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc: Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc: Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc: Tue Jul 02 15:31:15 2013 [code]....
I am using Oracle version 10.2.0.4.0. I have trace file information, shown below, for one of my queries. How should I interpret the trace file? What is the issue in the query, and what is the scope for improving it? I have removed the query and its plans from the trace file; I have only posted the wait sections.
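For context, this is roughly how I would produce the summary such wait sections come from, with waits aggregated and statements sorted by elapsed time (the file names are placeholders):

tkprof mydb_ora_1234.trc mydb_ora_1234.txt sys=no waits=yes sort=exeela,fchela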
I have created the below stored procedure and I am calling it in a when-button-pressed trigger. The problem is that when I cancel the file selection in the file-open dialog box, it raises an exception.
PROCEDURE TEST IS
  buffer_lines     client_text_io.file_type;
  v_outputstr      VARCHAR2 (32767);
  p_delimiter      VARCHAR2 (10) := '","';
  v_transaction_no VARCHAR2 (10);
BEGIN
[code].......
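A guard clause usually avoids this: when the user cancels the dialog, the file-name built-in returns NULL, so the procedure should bail out before touching client_text_io. A minimal sketch, assuming the name comes from WebUtil's CLIENT_GET_FILE_NAME (the variable name is mine):

  v_filename := CLIENT_GET_FILE_NAME;              -- shows the open dialog
  IF v_filename IS NULL THEN
    RETURN;                                        -- user pressed Cancel; skip the file processing
  END IF;
  buffer_lines := client_text_io.fopen (v_filename, 'R');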
I have a requirement to read a flat text file (around 15,000 lines) residing at a client location from the DB server and write it into a table in one cell.
I tried UTL_FILE and DBMS_LOB but I am not able to access the client location to read the file, as the path is read from an Oracle directory object.
E.g. my client machine is 192.168.1.1 and my DB server is on Unix, say 192.168.1.10. The file location is \\192.168.1.1\share\abc.txt, so I created one Oracle directory MY_DIR with DIRECTORY_PATH '\\192.168.1.1\share'. But neither UTL_FILE nor DBMS_LOB is able to access the file.
Error message:
--------------
Unable to process CLOB -22288 ~ ORA-22288: file or LOB operation FILEOPEN failed
No such file or directory
A few details for reference:
----------------------------
File location: \\192.168.1.1\share\abc.txt
Unix DB server: 192.168.1.10
Table: Test (filename VARCHAR2(30), content CLOB)
Oracle dir: MY_DIR
Directory_path: \\192.168.1.1\share
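As far as I know, UTL_FILE and DBMS_LOB always open files through the database server's own operating system, so a Windows-style UNC path means nothing to a Unix kernel. The share first has to be mounted on the Unix DB server, and the directory object pointed at the mount point. A sketch (the mount type, options, and mount point are assumptions about this setup):

# as root on the DB server (192.168.1.10), e.g. a CIFS/Samba mount on Linux:
mount -t cifs //192.168.1.1/share /mnt/clientshare -o username=someuser

-- then in the database:
CREATE OR REPLACE DIRECTORY my_dir AS '/mnt/clientshare';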
I need to take a database trace of each page of my web application. I am issuing the following commands:
BEGIN dbms_monitor.session_trace_enable(session_id=>122, serial_num=>NULL, waits=>true, binds=>true); END;
Then I access the web application. Once it renders completely, I execute the following command:
BEGIN dbms_monitor.session_trace_disable(session_id=>122); END;
In the user_dump_dest folder, I expect to see a new trace file whenever I execute these commands, but the same trace file keeps getting updated. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g on CentOS 5.x.
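My understanding is that the trace file name is tied to the server process, so re-enabling the trace for the same session keeps appending to the same file; the name only changes when that session's TRACEFILE_IDENTIFIER changes. If the traced session can run one statement between iterations, something like this should split the output into separate files (the identifier value is arbitrary):

-- executed inside the traced session (SID 122) before each iteration:
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'PAGE_RUN_1';
-- then enable the trace from the monitoring session as before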
We are running Oracle 11.2.0.3 on a Windows 2008 R2 server with an app server on the same machine. Our app has a search function that has recently started timing out on us, sometimes intermittently but often now. I've run a trace in our dev environment (code below) in an attempt to track down the issue, but I'm finding very little useful information in relation to the errors I see in the trace file. My experience in deciphering trace files is nil, so how do I interpret these errors?
I want to trace a session, but its trace file gets mixed in with the other trace files in udump. After some Googling I found that we can use
ALTER SESSION SET TRACEFILE_IDENTIFIER = "MY_TEST_SESSION".
but this generates a trace file for the SYS user account, not for the desired user session. How can we define the trace file name for the desired user session's trace?
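Since ALTER SESSION only affects the session that executes it, one workaround is a logon trigger that tags every session of the target user; each of that user's trace files then carries the identifier. A sketch (trigger, user, and identifier names are placeholders):

CREATE OR REPLACE TRIGGER trg_tag_trace
  AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'APPUSER' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''APPUSER_TRACE''';
  END IF;
END;
/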
After applying patch 10.2.0.4 on a two-node 10gR2 cluster, I tried to start ASM on both nodes but found that the first node (the master node) cannot be started, while the second node is OK. This error was found in the lmon.trc trace file:
kjxgmpoll: terminate the CGS reconfig. Error: KGXGN polling error (15)
error 29702 detected in background process
ORA-29702: error occurred in Cluster Group Service operation
ksuitm: waiting up to [5] seconds before killing DIAG
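KGXGN polling errors generally point at cluster group services (CSS) membership trouble rather than ASM itself, so the clusterware state on the failing node and the ocssd logs are worth checking first (a sketch; log paths vary by install):

crsctl check crs
# then review $ORA_CRS_HOME/log/<nodename>/cssd/ocssd.log on the failing node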
Oracle version: 11.2.0.3.0 Enterprise Edition
OS: IBM/AIX RISC System/6000
I am trying to generate a trace file from a piece of code executed by a Java server. What I asked the Java developer to do is to place this block immediately after establishing a connection:
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = ''M1''';
  dbms_monitor.session_trace_enable(waits => FALSE, binds => TRUE);
END;

And at the end of the logical Java block of code:
BEGIN dbms_monitor.session_trace_disable; END;
What I want to know is how many rows the Java server fetches after executing one particular SELECT statement, because they complain about receiving fewer rows from the SELECT statement than expected. For example, if I execute the same SQL query in a SQL*Plus session, I fetch, let's say, 1000 rows. When the same query is executed from the Java side, fewer rows are fetched, let's say 500. Because I had doubts, I wanted to trace to see what is actually executed and how. In the excerpt of the trace file I see exactly the same query that I execute myself in a SQL*Plus session. There is no fine-grained access control on the underlying tables in the query.
And my question is: how do I interpret the FETCH phase of the cursor (for the SELECT statement)? For example, if I see one FETCH for this cursor, does this mean that the Java server has fetched only one row? If I see 100 FETCHes, does this mean they fetched 100 rows from the cursor?
Here is a short excerpt from the trace file (don't crucify me for the query and the obvious denormalized design of the tables; it was not invented by me):
PARSING IN CURSOR #4573587152 len=667 dep=0 uid=737 oct=3 lid=737 tim=17685516462413 hv=954980718 ad='70000006d3e4940' sqlid='69pm96nwfrqbf'
select /* ordered */ o.id, nvl(o.par_id, -1) as par_id, o.NAME_GER, o.NAME_ENG, o.NAME_ESP, o.NAME_ITL, o.NAME_FRA, decode(lo.lflag, 'Y', 'L', 'N') as leaf_or_node, lo.distance + 1 as "LEVEL", to_char(o.beg_date,
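For what it's worth, the row counts live in the FETCH lines of the raw trace: each FETCH entry carries an r= field giving the number of rows returned by that single fetch call, so the total fetched is the sum of the r= values, not the count of FETCH lines; with array fetching, one FETCH can return hundreds of rows. A hypothetical line for the cursor above would look like:

FETCH #4573587152:c=1200,e=1345,p=0,cr=25,cu=0,mis=0,r=100,dep=0,og=1,tim=17685516463000

Here r=100 means this single fetch call returned 100 rows.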
I got the below message in a trace file. What does the line "swap info: free = 0.00M alloc = 0.00M total = 0.00M" mean?
I have RAM = 1.5G and swap = 3.5G.
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /home/oracle
System name: Linux
Node name: linuxdev
Release: 2.6.5-7.97-default
Version: #1 Fri Jul 2 14:21:59 UTC 2004
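A quick cross-check against the operating system's own view of swap might rule out a reporting artifact in the trace (Linux commands, since the node runs Linux):

free -m
swapon -s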
I am working on tuning the performance of one of the concurrent requests in our 11i ERP system, which runs on database 11.1.0.7.
I enabled an oradebug trace for the request and generated tkprof output from it. For the query below, which is taking time, I found that in the generated trace the wait event is "db file sequential read" on the PO_LINES_N10 index, but in the generated tkprof output, for the same query, a full table scan of PO_LINES_ALL is happening; that table is 600 MB in size.
Below is the query:
===============
UPDATE PO_LINES_ALL A
   SET A.VENDOR_PRODUCT_NUM =
         (SELECT SUPPLIER_ITEM
            FROM APPS.IRPO_IN_BPAUPDATE_TMP C
           WHERE BATCH_ID = :B1
             AND PROCESSED_FLAG = 'P'
             AND ACTION = 'UPDATE'
             AND C.LINE_ID = A.PO_LINE_ID
             AND ROWNUM = 1
             AND SUPPLIER_ITEM IS NOT NULL),
       LAST_UPDATE_DATE = SYSDATE
===============
Index PO_LINES_N10 is on the column LAST_UPDATE_DATE; logically, for such a query the index should not be used, as that indexed column is not in the SELECT or WHERE clause.
Also, why is there a discrepancy between the tkprof output and the trace generated for the same query?
So I decided to make the index PO_LINES_N10 invisible, but that index is still being accessed in the trace file.
I have also checked the parameter below, which is FALSE, so the optimizer should not make use of invisible indexes during query execution.
SQL> show parameter invisible
NAME                              TYPE     VALUE
--------------------------------- -------- -----
optimizer_use_invisible_indexes   boolean  FALSE
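One detail that could explain this: making an index invisible only hides it from the optimizer; DML must still maintain it, so an UPDATE that sets LAST_UPDATE_DATE will keep touching PO_LINES_N10 regardless of visibility, and "db file sequential read" on the index during the UPDATE is consistent with index maintenance rather than an access path. To confirm the index state, a sketch:

SELECT index_name, visibility
  FROM dba_indexes
 WHERE index_name = 'PO_LINES_N10';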
I am seeing "enq: TX - row lock contention" as the top wait event. It occurs between 10 PM and 2 AM.
We have a SQL*Loader job (conventional path) running every hour, but for a specific period of time, between 10 and 5, I am getting "Global Enqueue Services Deadlock detected". I analyzed the related trace file and it left me a little confused: I found four INSERT statements that are the culprits for this locking. Of the four, two were run by one SID and the other two by another single SID. I am confused about how the same SID can lock itself. The trace file is below. During this period the Oracle maintenance window is active.
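When chasing this kind of TX contention on RAC (the Global Enqueue Services message suggests RAC), a quick blocking-tree snapshot taken during the window usually clarifies who waits on whom (a sketch):

SELECT inst_id, sid, serial#, username, event,
       blocking_instance, blocking_session
  FROM gv$session
 WHERE blocking_session IS NOT NULL;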
In my database the SMON trace file has grown huge and now we want to delete it. I also tried an oradebug operation, but with no luck. How can I delete the trace file that SMON generates?
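As far as I know, SMON keeps its trace file handle open, so simply removing the file may not release the space until the process exits; truncating it in place is the usual low-risk approach (the path below is a placeholder):

# as the oracle OS user:
cat /dev/null > /u01/app/oracle/admin/DB/bdump/db_smon_1234.trc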
I have a file a.txt, which is the parent file, and another, child file called b.txt. The two files are linked by a common field called "id". The interesting part is that the child file has multiple layer names associated with each id (we only know that in b.txt each id can have at most 3 layers).
They need to be loaded into a table called PARENT_TBL.
PARENT_TBL looks like: ID, NAME, SUBJECT, LAYER, LAYERNO
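A rough sketch of one way to derive LAYERNO, assuming a.txt and b.txt are first loaded as-is into staging tables STG_PARENT(id, name, subject) and STG_CHILD(id, layer) (all staging names are mine): number the layers per id and join back to the parent rows.

INSERT INTO parent_tbl (id, name, subject, layer, layerno)
SELECT p.id, p.name, p.subject, c.layer,
       ROW_NUMBER() OVER (PARTITION BY c.id ORDER BY c.layer) AS layerno
  FROM stg_parent p
  JOIN stg_child  c ON c.id = p.id;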
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (tkprof) of the same execution. My question is: why are the timings under 'Total Waited' (43.19 and 1.69) not added to the elapsed time of 1.83 seconds?