Performance Tuning :: Script To Load Trace Files Errors To A Table?
Jul 13, 2013
I need to create a script which can regularly (maybe twice a day) fetch the error-related details from trace files into a custom table in the DB.
I need to take a database trace of each page of my web application. I am issuing the following commands.
BEGIN
  dbms_monitor.session_trace_enable(session_id => 122, serial_num => NULL, waits => TRUE, binds => TRUE);
END;
/
I then access the web application. Once it renders completely, I execute the following command.
BEGIN
  dbms_monitor.session_trace_disable(session_id => 122);
END;
/
In the user_dump_dest folder, I expect to see a new trace file whenever I execute these commands, but the same trace file keeps being appended to. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g on CentOS 5.x.
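For what it's worth, Oracle writes one trace file per server process, so enabling and disabling tracing repeatedly for the same session keeps appending to that process's file. If the traced session itself can issue a statement between iterations (an assumption; the identifier value below is arbitrary), setting TRACEFILE_IDENTIFIER switches it to a new file:
ALTER SESSION SET tracefile_identifier = 'page_run_1';
-- trace output from this point goes to a new file whose name contains 'page_run_1'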
I want to know about trace files and TKPROF. Any example?
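As a starting point, a typical TKPROF run over a raw trace file looks like this (the file names are placeholders):
tkprof orcl_ora_12345.trc orcl_ora_12345.txt sys=no sort=exeela,fchela
Here sys=no suppresses recursive SYS statements, and sort=exeela,fchela lists the statements with the largest execute/fetch elapsed times first; the report shows parse/execute/fetch counts, timings and row counts for each statement in the trace.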
I've been searching the web for examples of how to run a trace. It's needed for a session other than the current one, at trace level 4 (I need the bind variable values in the trace).
Unfortunately, I couldn't trace with DBMS_SUPPORT.START_TRACE_IN_SESSION; I understand this is because it was only introduced in Oracle 11g.
How can I trace another session at level 4 on Oracle 10g?
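One commonly used route on 10g is the undocumented DBMS_SYSTEM.SET_EV call, which sets event 10046 in another session (a sketch; it is unsupported, requires access to DBMS_SYSTEM, and the SID/serial# values below are placeholders taken from v$session):
BEGIN
  -- sid 122, serial# 345 are examples: look up the target session in v$session first
  sys.dbms_system.set_ev(122, 345, 10046, 4, '');  -- level 4 = binds, no waits
END;
/
Calling it again with level 0 turns tracing off. Note that the supported DBMS_MONITOR.SESSION_TRACE_ENABLE(..., binds => TRUE), shown earlier in this thread, also exists on 10g.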
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (TKPROF) of the same query. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.06 0 10 0 0
Fetch 758 0.03 1.77 0 0 0 11345
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 760 0.03 1.83 0 10 0 11345
[code]....
I've got a query running a select count(*) over a table. The default plan takes on the order of 15 minutes to return; a plan hinted to use a different index takes 3 minutes to return.
Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running autotrace against the two queries I see fairly different values, as one would expect.
Query
select count(*)
from fulfilmentitem bfi
where created >= sysdate - 30
  and bfi.status = 'FA'
  and bfi.fulfilmentmethod = 'D'
Slow run
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 33119 (1)|
| 1 | SORT AGGREGATE | | 1 | 15 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM | 12525 | 183K| 33119 (1)|
|* 3 | INDEX RANGE SCAN | IDX_FULFIL_METHODSTATUS | 250K| | 1786 (1)|
----------------------------------------------------------------------------------------------
[code]....
IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD & STATUS in that order.
IDX_BFI_CREATED is on CREATED and is approx 70% of the size of the other index
The row counts estimated in the explain plan are out; the count(*) comes in at 32.8k rows. As you will have seen, the fast run shows a pretty significant consistent-get increase compared to the slow run, and a decent, though not dramatic, drop in physical reads.
My uncertainty is around whether these changes in consistent get/physical read values would typically be enough to explain the real-time improvements I'm observing, or whether other (albeit perhaps temporary) factors are involved. It is a prod OLTP environment, so the data will be rapidly changing, and that may be a factor.
I know it can never be an exact science without intimately knowing the hardware, current loads, etc., but I also know there's enough experience on these boards to have a loose handle on whether the time shifts between queries are likely (or not) to reflect the stat changes, or whether those differences alone typically wouldn't account for it.
I'm thinking about instructing the query to ignore its original plan, but I'm hesitant to do so without being a little more confident that it's not just a timing thing, or something other than the change of index, that is causing the improvement. From the autotrace stat changes observed, I couldn't put my hand on my heart and say "yup - that change is good, ignore the default index all the time for this job".
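For reference, the hinted variant being compared is a sketch along these lines (it assumes the alternative index is the IDX_BFI_CREATED index described above):
select /*+ index(bfi IDX_BFI_CREATED) */ count(*)
from fulfilmentitem bfi
where created >= sysdate - 30
  and bfi.status = 'FA'
  and bfi.fulfilmentmethod = 'D'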
I have an OWB mapping which takes input from a staging table and adds those rows to the cube. The underlying table behind the cube is a relational fact table joined to the dimensions using foreign keys. The explain plan behind the query has a rather high cost, and the mapping runs for 30 minutes. As you can see below, at step 17 the cost goes up to 1,396,573, which is also where nested loops start to appear. The query plan is also attached in image format.
Plan
SELECT STATEMENT ALL_ROWSCost: 1,746,526,275 Bytes: 386,835,904 Cardinality: 464,947
46 NESTED LOOPS OUTER Cost: 1,746,526,275 Bytes: 386,835,904 Cardinality: 464,947
41 NESTED LOOPS OUTER Cost: 1,744,200,663 Bytes: 380,791,593 Cardinality: 464,947
37 NESTED LOOPS OUTER Cost: 1,743,270,415 Bytes: 374,747,282 Cardinality: 464,947
34 NESTED LOOPS OUTER Cost: 1,740,476,128 Bytes: 368,702,971 Cardinality: 464,947
29 NESTED LOOPS OUTER Cost: 1,739,545,862 Bytes: 362,658,660 Cardinality: 464,947
25 NESTED LOOPS OUTER Cost: 1,710,193,475 Bytes: 356,614,349 Cardinality: 464,947
[code].........
Which tools are available for monitoring the load on the database?
We are running Oracle 11.2.0.3 on a Windows 2008 R2 server, with an app server on the same machine. Our app has a search function that has recently started timing out on us, sometimes intermittently but now often. I've run a trace in our dev environment (output below) in an attempt to track down the issue, but I'm finding very little useful information in relation to the errors I see in the trace file. My experience in deciphering trace output is nil, so how do I interpret these errors?
*** 2013-04-18 14:27:23.353
*** SESSION ID:(3093.63097) 2013-04-18 14:27:23.353
*** CLIENT ID:() 2013-04-18 14:27:23.353
*** SERVICE NAME:(SYS$USERS) 2013-04-18 14:27:23.353
*** MODULE NAME:(w3pw.exe) 2013-04-18 14:27:23.353
*** ACTION NAME:() 2013-04-18 14:27:23.353
[code]....
1) I have 5 Exported Dump files.
2) All of those 5 dump files were taken in different time periods.
3) Many of those dump files contain the same partition records.
e.g.:
Dump 1:- 01-06-2010 to 31-11-2010
Dump 2:- 01-09-2010 to 31-12-2010
4) Now I want to import all of that partition data into a single table, without any duplication.
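One possible approach, sketched with placeholder names and assuming the rows have a usable unique key: import every dump into a single staging table (appending each import), then copy across only the rows not already in the final table:
INSERT INTO final_table
SELECT *
FROM staging_table s
WHERE NOT EXISTS (SELECT 1
                  FROM final_table f
                  WHERE f.key_col = s.key_col);
COMMIT;
If there is no natural key, an INSERT ... SELECT DISTINCT from the staging table achieves the same de-duplication.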
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
Regarding the undocumented parameter _trace_files_public: I want to set this to TRUE so my app team can review trace files. Is there a better way to open read permissions on trace files for non-oracle OS users?
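For reference, setting it looks like this (underscore parameters must be double-quoted; it is a static parameter, so it only takes effect after an instance restart, and as an undocumented parameter it is normally only set on Oracle Support's advice):
ALTER SYSTEM SET "_trace_files_public" = TRUE SCOPE = SPFILE;
-- restart the instance; trace files created afterwards are world-readable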
I have a big table into which we load about 37M records. We have an Informatica ETL which loads the data in bulk mode and creates the index after completion. The data load takes about 1 hour and the index creation about half an hour; in total it takes about 90 to 95 minutes.
So I thought that if I partitioned the table and loaded in parallel, it would improve performance. We created 4 partitions, each holding about 9M records. The data load in bulk mode now completes in 25 minutes, but when I create the index over it, that takes about 40 minutes, making the total load time 65 minutes.
Is there a way I can improve performance further and complete the load in half an hour?
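One thing worth testing (a sketch; the table, column and parallel degree are placeholders): since the table is now partitioned, build the index as a LOCAL index in parallel with NOLOGGING, then reset the attributes afterwards:
CREATE INDEX big_table_ix ON big_table (load_key)
  LOCAL       -- one index segment per partition
  PARALLEL 4  -- build the partitions concurrently
  NOLOGGING;  -- skip redo generation during the build
ALTER INDEX big_table_ix NOPARALLEL;
ALTER INDEX big_table_ix LOGGING;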
I am using Oracle RAC 11gR2. I am planning to move my old trace files to an archive location.
I have the questions below concerning the users and groups used by Oracle RAC when moving the trace files. It is related to permissions, but it is really about Oracle's behaviour.
The trace files of the database (oracle user) are created as below:
284 -rw-r----- 1 oracle asmadmin 283571 Aug 20 18:08 PPRD1_mmon_13893716.trc
4 -rw-r----- 1 oracle asmadmin 125 Aug 20 18:10 PPRD1_gcr0_11730998.trm
4 -rw-r----- 1 oracle asmadmin 2833 Aug 20 18:10 PPRD1_gcr0_11730998.trc
580 -rw-r----- 1 oracle asmadmin 588489 Aug 20 18:12 PPRD1_lmhb_15859792.trm
[code]....
When the files are moved using the oracle user, they are created as oracle:oinstall and not oracle:asmadmin, since asmadmin is not one of the oracle user's groups.
1. How did Oracle create my trace and log files with ownership oracle:asmadmin?
2. How should I move these files? I moved them using the oracle user and they were created as oracle:oinstall, with the operating system error "unable to duplicate owner and mode after move".
Some trace files were deleted with the rm command. Obviously they were still active, and now the free space is not being released (over 9 GB). How can this be fixed without interrupting the database?
In our development database, we have created 5 dimensions and a cube with 2 measures in OLAP. All of these are mapped to relational tables. When we try to load this cube (the MAINTAIN CUBE option in AWM) we get the error below:
An error has occurred on the server
Error class: Express Failure
Server error descriptions:
INI: error creating a definition manager, Generic at TxsOqConnection::generic<BuildProcess>
INI: XOQ-01600: OLAP DML error "ORA-35571: The maximum number of load errors has occured." while executing DML
[code].....
What are the database parameters I should look at as a DBA?
We have Oracle 10g (10.2.0.4) RAC on AIX 5.3, with 3 RAC instances on each node. Since Oct 9th, we have found that one of the instances on node 1 generates very large trace files in udump; the largest trace file takes up 3 GB, but the $ORACLE_BASE directory is only about 15 GB, so every so often we have to delete some trace files to release the space.
Here is part of the alert log from when this issue happened:
Sun Oct 9 23:18:15 2011
Errors in file /oracle/app/oracle/admin/bzywk/udump/bzywk1_ora_3166258.trc:
ORA-00600: internal error code, arguments: [17087], [0x70000010DF9F580], [], [], [], [], [], []
Sun Oct 9 23:18:16 2011
Trace dumping is performing id=[cdmp_20111009231816]
[code]....
I checked some of the trace files and found that they all contain a huge amount of process information.
Does trcsess combine various trace files in the order in which the queries were executed?
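For context, trcsess merges per-process trace files into one stream selected by session, client id, service, module or action; a typical invocation (file names and the service value are placeholders) is:
trcsess output=combined.trc service=SYS$USERS *.trc
The combined file can then be fed to TKPROF as usual.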
I have a problem with '/' in .sql files: an anonymous block is not applied without a trailing '/', and a non-anonymous statement is applied twice when it is terminated with both ';' and '/'. I need to report such problems before the file is applied.
How can I handle these cases in SQL*Plus?
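A minimal sketch of both behaviours in SQL*Plus (the table name is a placeholder):
-- anonymous block: the final ';' only ends the PL/SQL source text;
-- the '/' on its own line is what actually executes the block
BEGIN
  NULL;
END;
/
-- plain SQL: the ';' executes the statement, and a following '/'
-- re-runs the statement still held in the SQL*Plus buffer,
-- so the row ends up inserted twice
INSERT INTO t VALUES (1);
/
One way to report such problems up front is to scan each file for a '/' line that directly follows a statement already terminated by ';' outside a PL/SQL block.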
I am facing a problem with the user_dump_dest directory: I have noticed a lot of trace files of huge size there. I clean it out, and after 4 days it has grown to 40 GB again.
I am trying to update columns of Table A with the columns of Table B. Both tables have 60,000 rows each. I tried this operation using the following 2 queries:
Query 1
UPDATE TableA A
SET (A.col1, A.col2, A.col3) = (SELECT B.col1, B.col2, B.col3
                                FROM TableB B  -- the alias B was missing in the original
                                WHERE A.CODE = B.CODE);
Query 2
UPDATE TableA A
SET (A.col1, A.col2, A.col3) = (SELECT B.col1, B.col2, B.col3
                                FROM TableB B
                                WHERE A.CODE = B.CODE)
WHERE EXISTS (SELECT 1
              FROM TableB B
              WHERE A.CODE = B.CODE);
When I execute either of the two queries above, it keeps running indefinitely.
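An alternative worth testing (a sketch, not the original approach; it assumes CODE is unique in TableB, otherwise MERGE raises ORA-30926): a single MERGE visits each row once instead of running a correlated subquery per row:
MERGE INTO TableA a
USING TableB b
ON (a.code = b.code)
WHEN MATCHED THEN
  UPDATE SET a.col1 = b.col1,
             a.col2 = b.col2,
             a.col3 = b.col3;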
The query below is taking a long time...
select gam.SOL_ID,COUNT(gam.FORACID) from gam,smt where
gam.ACID=smt.ACID and gam.ACID NOT IN(select ACID from imt) and
gam.SCHM_TYPE in('SBA','CCA','CAA','ODA') and GAM.ACCT_CLS_FLG='N' and
gam.SOL_ID IN(select SOL_ID from IMT) group by gam.SOL_ID
/
Attached is the explain plan, in which the index on the IMT table is not used and the query is doing a full table scan (FTS) on IMT. What needs to be done to avoid the FTS on the IMT table?
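One rewrite worth comparing (a sketch; it assumes an index exists, or can be created, on IMT(ACID)): NOT IN with a subquery can stop the optimizer from using an index-driven anti-join, especially when the subquery column is nullable, so the NOT EXISTS form below is often cheaper. Note the two forms are only equivalent when IMT.ACID contains no NULLs:
select gam.sol_id, count(gam.foracid)
from gam, smt
where gam.acid = smt.acid
and not exists (select 1 from imt where imt.acid = gam.acid)
and gam.schm_type in ('SBA','CCA','CAA','ODA')
and gam.acct_cls_flg = 'N'
and gam.sol_id in (select sol_id from imt)
group by gam.sol_id;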
What are the factors that decide which column we should partition a table on and which partitioning method we should choose?
I am using a script to delete records from a table; it takes 1 hour to delete.
declare
  cursor c1 is
    select ownerid, ownertype from nightly_metric_projects;
  v1 c1%rowtype;
begin  -- the begin was missing in the original
  open c1;
  loop
    fetch c1 into v1;
    exit when c1%notfound;
    delete from dgt_itemeffortdata
    where ownertype = v1.ownertype  -- was c1.ownertype: values must be read from the fetched record, not the cursor
    and ownerid = v1.ownerid;
  end loop;
  close c1;
  commit;
end;
/
nightly_metric_projects--1200 records
DGT_ITEMEFFORTDATA--13200000
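For comparison, a single set-based DELETE usually beats a row-by-row loop by a wide margin here (a sketch; it keys on the same (ownertype, ownerid) pair the loop uses):
DELETE FROM dgt_itemeffortdata d
WHERE EXISTS (SELECT 1
              FROM nightly_metric_projects n
              WHERE n.ownertype = d.ownertype
              AND n.ownerid = d.ownerid);
COMMIT;
An index on dgt_itemeffortdata(ownertype, ownerid) would speed up either version considerably.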
I have normal tables with huge data volumes and would like to increase performance by the following means:
1) Add a new column in each table, say IS_LIVE. This new column has only two values: 1 (LIVE) or 0 (NOT LIVE).
2) Change the normal tables to partitioned tables. There would be only two partitions in each table, the partition key column would be IS_LIVE, and the two sets of partitioned records would live in two different tablespaces.
3) Add a POLICY function to these partitioned tables to always add a query predicate of IS_LIVE = 1 to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not include this new column.
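A sketch of the design being described (all names are placeholders; it assumes list partitioning on the flag):
CREATE TABLE orders_part (
  order_id   NUMBER PRIMARY KEY,  -- the PK does not include is_live
  order_data VARCHAR2(100),
  is_live    NUMBER(1) NOT NULL
)
PARTITION BY LIST (is_live) (
  PARTITION p_live     VALUES (1) TABLESPACE ts_live,
  PARTITION p_not_live VALUES (0) TABLESPACE ts_hist
);
-- a local index is equipartitioned with the table; an index on is_live
-- alone adds little because each partition contains only one value
CREATE INDEX orders_part_ix ON orders_part (order_data) LOCAL;
Because is_live is not part of the key, the primary key here has to be enforced through a global index: a local unique index must include the partitioning column.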
What does analyzing a table do to existing indexes? Do I need to rebuild the indexes after a dbms_stats.gather_table_stats command?
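For reference, gathering statistics only refreshes the optimizer's metadata; it does not change the index structures, so no rebuild is needed afterwards. A call that also refreshes the index statistics (schema and table names are placeholders) looks like:
EXEC dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);
-- cascade => TRUE gathers statistics on the table's indexes as well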
Objective: to find a solution to archive data from the 2 big tables which occupy the most space in the database. With current data (from Jan 2005 to Sept 2011) they hold the record counts below:
transaction - 41687927
trnansaction_dtl - 83945934
We need to load data and run monthly batches from October 2011 to the current month, which will increase this space further.
1. The issue is that there will not be enough space.
2. Maintenance of such tables is difficult now, and there is a huge impact on performance. Can we think of partitioning the tables based on date, since we query the first table on certain date ranges? (See the sketch after this list.)
3. Most reports use these tables, which creates performance issues.
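As mentioned in point 2, here is a sketch of the date-range partitioning being considered (table and column names are placeholders based on the descriptions above); old months could then be archived by exporting and dropping individual partitions instead of deleting rows:
CREATE TABLE transaction_part (
  txn_id   NUMBER,
  txn_date DATE NOT NULL,
  txn_amt  NUMBER
)
PARTITION BY RANGE (txn_date) (
  PARTITION p2005 VALUES LESS THAN (DATE '2006-01-01'),
  PARTITION p2006 VALUES LESS THAN (DATE '2007-01-01'),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
-- archive: export the partition's data, then
ALTER TABLE transaction_part DROP PARTITION p2005;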
We have a few tables in our production database which have grown enormous and will keep growing, so as part of the corrective measures we have jotted down the below 3 methods to manage the size of those tables:
1> Partition the tables, take an export of the identified partitions, and after that truncate those partitions.
2> Create history tables and move not-so-current data from the original tables to the history tables.
I have tried the steps below for removing table fragmentation, but for some tables I am not getting good results.
1. Collect the tables which have more than 100 MB of fragmentation:
select owner,
       table_name,
       blocks,
       num_rows,
       avg_row_len,
       round(blocks * 8 / 1024, 2) || 'MB' "TOTAL_SIZE",
       round(num_rows * avg_row_len / 1024 / 1024, 2) || 'MB' "ACTUAL_SIZE",
       round(blocks * 8 / 1024 - num_rows * avg_row_len / 1024 / 1024, 2) || 'MB' "FRAGMENTED_SPACE"
from dba_tables
where owner in ('a', 'b', 'c', 'd')
and round(blocks * 8 / 1024 - num_rows * avg_row_len / 1024 / 1024, 2) > 100
order by 8 desc;
2. Then move each object (table) within the same tablespace:
alter table abc move;
alter table bcd move;
alter table efg move;
3. Also rebuild the dependent objects:
alter index abc_PK rebuild online;
4. Then gather statistics on the tables which had more than 100 MB of fragmentation:
exec dbms_stats.gather_table_stats('a','abc');
exec dbms_stats.gather_table_stats('b','bcd');
exec dbms_stats.gather_table_stats('c','cdf');
After that, when I check the table fragmentation again, I get the same result that I collected from the first query.
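Two notes on this. First, the query above estimates fragmentation from num_rows * avg_row_len against allocated blocks, so block overhead and PCTFREE alone make the figure non-zero even on a freshly moved table; the "same result" may partly be the arithmetic rather than real fragmentation. Second, an alternative worth trying (a sketch; the segment must live in an ASSM tablespace) is a segment shrink, which, unlike ALTER TABLE ... MOVE, does not leave the indexes UNUSABLE:
ALTER TABLE abc ENABLE ROW MOVEMENT;
ALTER TABLE abc SHRINK SPACE CASCADE;  -- CASCADE shrinks dependent indexes too
ALTER TABLE abc DISABLE ROW MOVEMENT;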
From the query below I found that there are stale stats for 3 tables:
=================================
select table_name, stale_stats, last_analyzed
from dba_tab_statistics
where owner= 'SYSADM' and stale_stats='YES'
order by last_analyzed desc
I collected stats for those 3 tables with dbms_stats.gather_table_stats(), but no luck; immediately after collecting the stats I ran the above query, and it still shows stale stats for the 3 tables.
How can I change the STALE_STATS status so that the optimizer can use the updated stats efficiently?
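One thing worth checking (the call itself is standard DBMS_STATS): STALE_STATS in dba_tab_statistics is computed from the DML monitoring counters, which are only flushed to disk periodically, so the flag can lag behind a fresh gather. Flushing the monitoring info manually and re-querying may clear it:
EXEC dbms_stats.flush_database_monitoring_info;
-- then re-run the dba_tab_statistics query above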