Performance Tuning :: TEMP Usage History
Jul 19, 2010
My stats jobs failed last night with an "ORA-01652: unable to extend TEMP" error.
Is there any way to check historical data to see which other sessions were using the TEMP tablespace extensively?
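For what it is worth, a sketch of the two queries I would start with for this kind of post-mortem. The first shows live TEMP consumers only; the second needs 11gR2 or later and a Diagnostics Pack licence, so treat it as an assumption about your environment and adjust the time window to the night of the failure.

-- Live TEMP consumers right now (V$TEMPSEG_USAGE is a synonym for V$SORT_USAGE):
SELECT s.sid, s.serial#, s.username, s.sql_id, u.tablespace, u.segtype,
       ROUND(u.blocks * t.block_size / 1024 / 1024) AS temp_mb
FROM   v$tempseg_usage u
       JOIN v$session s       ON s.saddr = u.session_addr
       JOIN dba_tablespaces t ON t.tablespace_name = u.tablespace
ORDER  BY u.blocks DESC;

-- Historical TEMP usage by session/SQL (11.2+, Diagnostics Pack required):
SELECT sample_time, session_id, sql_id,
       ROUND(temp_space_allocated / 1024 / 1024) AS temp_mb
FROM   dba_hist_active_sess_history
WHERE  temp_space_allocated > 0
AND    sample_time BETWEEN SYSTIMESTAMP - INTERVAL '12' HOUR AND SYSTIMESTAMP
ORDER  BY temp_space_allocated DESC;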
One of our customers has a problem with the following SQL statement:
SELECT c.table_name, c.column_name
FROM user_tab_columns c, user_tables t
WHERE c.table_name = t.table_name
AND c.data_type IN ('CLOB', 'BLOB');
During execution it consumes the entire TEMP tablespace (8GB).
I gathered dictionary statistics (dbms_stats.gather_dictionary_stats(estimate_percent=>null)), but it did not resolve the problem. The statement works fine with a RULE hint, but I want to know the reason for the temporary tablespace problem.
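Not a definitive answer, but one thing worth trying: USER_TAB_COLUMNS and USER_TABLES are built over many SYS and fixed (X$) objects, and stale fixed-object statistics can give the optimizer badly wrong cardinalities there, producing hash or merge joins that spill gigabytes into TEMP. A sketch; both procedures exist in 10g and later:

-- Refresh dictionary and fixed-object statistics, then re-check the plan:
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS(estimate_percent => NULL);
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/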
Is there any way to tune the following query, which is using a lot of CPU?
select description, time_stamp, user_id from bhi_tracking where description like 'Multilateral:%'
The explain plan for this query is:
---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------
| 0 | SELECT STATEMENT | | 178K| 6609K| 129K|
| 1 | TABLE ACCESS FULL| BHI_TRACKING | 178K| 6609K| 129K|
----------------------------------------------------------------
Bhi_tracking is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then re-insert the rows, to enhance performance?
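Since the LIKE pattern has no leading wildcard, an index on DESCRIPTION at least gives the optimizer a range-scan alternative to the full scan; whether it is actually cheaper depends on how selective the 'Multilateral:' prefix is (178K rows out of many millions may still favour the full scan). A sketch, with an invented index name:

CREATE INDEX bhi_tracking_desc_idx ON bhi_tracking (description);

-- Re-check the plan; a prefix LIKE can use an index range scan:
EXPLAIN PLAN FOR
  SELECT description, time_stamp, user_id
  FROM   bhi_tracking
  WHERE  description LIKE 'Multilateral:%';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

As for the rebuild question: export/import or truncate-and-reload mainly helps if the regular deletes have left a lot of empty space below the high-water mark; otherwise the full-scan cost stays roughly the same.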
How do I determine if a function is worth pinning in memory? I want to come up with a percentage, the idea being that if the function is already in memory 80%+ of the time then pinning is not worth it.
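A rough way to get that percentage from the shared pool (a sketch; MY_FUNCTION is a placeholder for your object name). LOADS counts how often the object had to be loaded or reloaded, so executions minus loads, over executions, approximates how often it was already cached:

SELECT owner, name, type, executions, loads, kept,
       ROUND(100 * (executions - loads) / NULLIF(executions, 0), 1) AS pct_already_cached
FROM   v$db_object_cache
WHERE  name = 'MY_FUNCTION'                       -- placeholder object name
AND    type IN ('FUNCTION', 'PROCEDURE', 'PACKAGE');

-- If the percentage is low and reloads hurt, pinning is done with:
-- EXEC DBMS_SHARED_POOL.KEEP('SCHEMA.MY_FUNCTION', 'P');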
The temp tablespace always shows 100% full; even after a database bounce, temp is not cleared and it again shows 100% full. Is there a way to tune this issue?
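One hedged note: temp extents, once allocated to the sort segment, stay assigned and are simply reused rather than released, so scripts that read DBA_TEMP_FILES will report 100% "full" even when nothing is sorting. Actual usage is better judged from V$SORT_SEGMENT (block counts; multiply by the tablespace block size for bytes):

SELECT tablespace_name, total_blocks, used_blocks, free_blocks, max_sort_blocks
FROM   v$sort_segment;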
We are using the 11g AMM feature with MEMORY_TARGET set to 96GB; total RAM on the server is 128GB. Now top and free show only 200MB of memory free on the system.
There are two processes, dbw0 and dbw1, which consume the most memory, about 30GB per DBWR.
Why is the DBWR process taking up so much memory when there is not much load on the database?
I came across a situation where a nullable column is not using its index for an ORDER BY clause. I added a NOT NULL condition to the WHERE clause, but it wasn't of any use. I did not want to create a composite index with a not-nullable column or a constant, or modify the column to NOT NULL.
So I carried out test cases, during which I found that in one case the SQL statement uses a fast full scan for data access but still does not use the index for the ORDER BY sorting.
Here are the steps.
Initially I kept the column nullable:
SQL> create sequence s5;
Sequence created.
SQL> create table t5 as select s5.nextval id,a.* from dba_objects a where rownum<1001;
Table created.
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
SQL> create index i5 on t5(id);
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'T5',cascade=>true);
PL/SQL procedure successfully completed.
exit
SQL> alter session set events '10046 trace name context forever, level 12';
select *
from
t5 where id is not null order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 16 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.01 0.00 0 16 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 SORT ORDER BY (cr=16 pr=0 pw=0 time=4771 us)
1000 TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1157 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 49.49 49.72
********************************************************************************
select /*+ index(t i5) */ *
from
t5 t where id is not null order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 150 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.00 0.00 0 150 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=5167 us)
1000 INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3141 us)(object id 4673065)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 69 0.00 0.00
SQL*Net message from client 69 22.89 28.04
Now I modified the 'id' column to Not Null
SQL> alter table t5 modify id not null;
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
COLUMN_NAME N
------------------------------ -
ID N
OWNER Y
OBJECT_NAME Y
SUBOBJECT_NAME Y
OBJECT_ID Y
DATA_OBJECT_ID Y
OBJECT_TYPE Y
CREATED Y
LAST_DDL_TIME Y
TIMESTAMP Y
STATUS Y
TEMPORARY Y
GENERATED Y
SECONDARY Y
14 rows selected.
select *
from
t5 order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 29 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 16 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.01 0.01 0 45 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 SORT ORDER BY (cr=16 pr=0 pw=0 time=2398 us)
1000 TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1152 us)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 37.74 37.91
********************************************************************************
select /*+ index(t i5) */ *
from
t5 t order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 150 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.00 0.00 0 150 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=4166 us)
1000 INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3142 us)(object id 4673065)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 8.28 8.45
select id
from
t5 order by id
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 68 0.00 0.00 0 6 0 1000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 70 0.00 0.00 0 6 0 1000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5
Rows Row Source Operation
------- ---------------------------------------------------
1000 SORT ORDER BY (cr=6 pr=0 pw=0 time=1342 us)
1000 INDEX FAST FULL SCAN I5 (cr=6 pr=0 pw=0 time=1093 us)(object id 4673065)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 68 0.00 0.00
SQL*Net message from client 68 1.88 1.89
My questions are:
1) Why was adding 'where id is not null' not enough for the index to get used for the ORDER BY?
2) While we got a fast full scan, why wasn't the index used for the ORDER BY clause?
3) Does the indexed column need to appear in the WHERE clause for it to also be used for the ORDER BY?
4) Do we need the ORDER BY clause at all if we are selecting only the indexed column with sequence-generated values?
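My hedged reading of the traces above, for questions 1 and 2: once 'id is not null' (or the NOT NULL constraint) makes the index a legal access path, the optimizer still compares costs, and with 1000 rows in 16 blocks a FULL SCAN plus SORT ORDER BY (16 buffer gets) is simply cheaper than walking the index in order (150 gets in the hinted run). The fast full scan in the last trace reads the index blocks in storage order, so its output is unordered and the SORT ORDER BY is still required. One quick way to confirm it is a cost decision rather than a legality one is to bias the optimizer toward returning the first rows quickly, as in this sketch:

-- Under a first-rows goal the ordered index walk usually wins,
-- because it avoids the blocking sort before the first row is returned.
EXPLAIN PLAN FOR
  SELECT /*+ first_rows(10) */ *
  FROM   t5
  WHERE  id IS NOT NULL
  ORDER  BY id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);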
We have 96GB of memory on the UNIX server, and 85% of its usage is attributed to Oracle processes. I want to determine which Oracle processes are taking most of the memory.
SGA is around 36G
SGA_TARGET is 40G
PGA is around 4G
A total of around 40-45 GB of usage is understandable, but what other Oracle processes are chewing up the remaining 30-40 GB on the server is not known.
load averages: 7.35, 6.46, 6.15; up 248+11:33:21 12:25:03
2202 processes: 2196 sleeping, 1 zombie, 5 on cpu
CPU states: 83.8% idle, 10.5% user, 5.8% kernel, 0.0% iowait, 0.0% swap
Memory: 96G phys mem, 15G free mem, 128G total swap, 128G free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
21720 oracle 258 0 0 40G 40G cpu/48 215:28 2.04% oracle
10709 oracle 1 0 2 1816K 1448K cpu/9 0:02 0.90% res_conf_email_
[code]......
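Two hedged observations. First, top and ps usually charge the shared SGA pages to every process that has touched them, so a DBWR resident size of 30-40GB is often mostly the SGA counted once per process rather than private memory. Second, for the genuinely private part, V$PROCESS reports per-process PGA; a sketch:

SELECT p.spid, p.program, s.sid, s.username,
       ROUND(p.pga_used_mem  / 1024 / 1024) AS pga_used_mb,
       ROUND(p.pga_alloc_mem / 1024 / 1024) AS pga_alloc_mb,
       ROUND(p.pga_max_mem   / 1024 / 1024) AS pga_max_mb
FROM   v$process p
       LEFT JOIN v$session s ON s.paddr = p.addr
ORDER  BY p.pga_alloc_mem DESC;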
I am getting the temp tablespace error "ORA-01652: unable to extend temp segment by 128 in tablespace TEMP" for the following code.
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */ vd1.vendor_record_seq_no,
tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1, vendor_data vd2, vendor_data vd3,
(SELECT rownumber, MAX (DECODE (control_column_seq_no, 91150, original_value, NULL)) AS value1,
[code]...
Right now the tables used have the following record counts:
SELECT COUNT(*) FROM vendor_data --292890442
SELECT COUNT(*) FROM temp_vendor_data --0
SELECT COUNT(*) FROM temp_vendor_record --0
This query is part of an application, but it is consuming too much temporary tablespace (68 GB allocated). I found this out using the query below:
select * from v$session a, v$sql b
where a.sql_id=b.sql_id
and status = 'ACTIVE'
I am not sure why this problem is occurring.
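Alongside the V$SESSION/V$SQL join above, a view that is useful while the statement is actually running is V$SQL_WORKAREA_ACTIVE, which shows each active sort or hash work area and how much of it has spilled to TEMP (a sketch):

SELECT s.sid, s.sql_id, w.operation_type, w.number_passes,
       ROUND(w.actual_mem_used / 1024 / 1024) AS pga_mb,
       ROUND(w.tempseg_size    / 1024 / 1024) AS temp_mb
FROM   v$sql_workarea_active w
       JOIN v$session s ON s.sid = w.sid
ORDER  BY w.tempseg_size DESC NULLS LAST;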
I run a query that takes 20 minutes or so; I traced it and can see no more than 20-30 MB of temp space required in the plan.
I developed it for use in a materialized view; however, when I create the mview with this SQL, the temp space required grows until it maxes out. I increased the existing 10GB to 50GB but it still maxed out. I took the SQL out and reran it: it ran in 20 minutes, barely scratching the temp. I ran a "create table as <select>" and got the same behaviour as the plain SQL, barely touching the temp, as per the plan. So the temp space blowout is unique to the mview create.
I have been working with mviews for years on several sites and have never seen this.
The query below is utilizing more than 17 GB of temp space, but it is still failing due to insufficient temp space. Is there any way to rewrite this query to reduce the temp utilization?
SELECT T12.FRGHT_AMT_CURCY_CD,T23.LAST_UPD,T11.PAR_OU_ID,T9.MAIN_PH_NUM,T23.DISCNT_PERCENT,T23.X_ERROR_NUM,T18.ADDR,T14.X_ECO_B_END_1141,
T14.X_ECO_A_END_1141,T9.X_ECO_VALIDATION_FLG,T23.X_ECO_ERR_DESCR,T14.ASSET_NUM,T20.NAME,T23.X_ECO_REASON2,T14.X_ECO_B_END_ID,
T14.ASSET_NUM,T14.X_ECO_B_END_IWPC,T23.X_AE_CON_PH_NUM,T23.SHIP_ADDR_ID,T19.NAME,T23.X_BE_CON_LST_NAME,T23.CREATED_BY,T23.X_ECO_LOCATION,T8.LOC,
T3.MODIFICATION_NUM,T10.INTEGRATION_ID,T23.INTEGRATION_ID,T23.X_MESSAGE,T9.PR_ADDR_ID,T12.ACCNT_ID,T23.X_BEARERNO,T23.X_SUB_STATUS_CD,
[code]....
I am building a reporting table using the COUNT analytic function in order to count up several different attributes in one statement. What I find is that this method quickly eats up my TEMP space. This is 10gR2. I have attempted to use the MANUAL workarea policy with as large a sort_area_size as possible (2G), but that does not seem to have any effect on performance or TEMP usage. The raw table is about 12G with 75 million rows. I am not that concerned about execution time, but rather TEMP usage.
--INSERT into <object>...
select distinct
file_sid,filename,control_numb,processing_date,file_class,
vendor_id,vendor_desc,
c_status_id,c_status_desc,
[code]...
I am not seeing any increase in onepass or multipass executions in the PGA statistics during execution of this statement, using the query below:
SELECT CASE WHEN low_optimal_size < 1024*1024
THEN to_char(low_optimal_size/1024,'999999') ||
'kb <= PGA < ' ||
(high_optimal_size+1)/1024|| 'kb'
ELSE to_char(low_optimal_size/1024/1024,'999999') ||
[code]...
I'd like to get a better explanation of how analytics use instance resources and TEMP space. For example, if I add a count with a different window (such as the last two columns commented out in the above query), I blow out my temp space (70G).
Is the critical factor the use of DISTINCT? Multiple windows? Or something else?
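My understanding, hedged: each distinct window specification (different PARTITION BY / ORDER BY) gets its own WINDOW SORT step, and the outer DISTINCT adds a HASH UNIQUE or SORT UNIQUE on top, so every extra window is another full-size work area that can spill to TEMP independently; the DISTINCT alone is usually not the dominant factor. The effect is easy to see in a plan, for example with this hypothetical demo table:

CREATE TABLE demo_cnt AS
  SELECT owner, object_type, object_id FROM all_objects;

EXPLAIN PLAN FOR
  SELECT DISTINCT owner,
         COUNT(*) OVER (PARTITION BY owner)                          AS cnt_w1,
         COUNT(*) OVER (PARTITION BY object_type ORDER BY object_id) AS cnt_w2
  FROM   demo_cnt;

-- Expect one WINDOW SORT per distinct window plus a HASH/SORT UNIQUE for the DISTINCT:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);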
Env: RHEL 5.8 RAC 11.2.0.2
I'm currently moving an IOT from one database to another using expdp/impdp. The IOT is non-partitioned and about 100GB in size, containing ~1.1 billion rows.
The dumpfile contains nothing else but the IOT. I'm importing with no special parameters, no pre-created IOT, just ordinary dumpfile import. (impdp username/password dumpfile=impdp:iot.dmp nologfile=y )
During import I got unable to extend TEMP errors from impdp.
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
I had to add 2 additional files to my Temp tablespace (total 96GB of temp) before the import could finish off.
Is this temp usage to be expected when importing IOTs?
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is memory based structures (logical) where as database consists of physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
I have a view which contains a UNION (the union is required because of the nature of the data fetched). This view is later joined to a global temporary table which holds, say, the employee ID the user selects.
So at runtime there is a join between the global temp table and the view, but the performance is really bad. I have tried various hints, like materialize, /*+ CARDINALITY(gtmp 1) */, etc.
When I query the view alone, the performance is good. When I remove the union, the performance is good. Somehow, with the union, there is a full table scan on one of the joining tables.
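Global temporary tables have no stored statistics by default, so the optimizer often guesses a cardinality that bears no relation to the handful of IDs actually inserted, and with a UNION view it may then decline to push the join predicate into each branch and full-scan instead. Two hedged things to try; the view, table and column names below are invented placeholders:

-- 1) Let the optimizer sample the GTT contents at hard parse time:
SELECT /*+ dynamic_sampling(gtmp 2) */ v.*
FROM   emp_union_view v,
       emp_gtt        gtmp
WHERE  v.emp_id = gtmp.emp_id;

-- 2) Explicitly ask for join-predicate pushdown into the union view:
SELECT /*+ push_pred(v) */ v.*
FROM   emp_union_view v,
       emp_gtt        gtmp
WHERE  v.emp_id = gtmp.emp_id;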
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I am running the following merge query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
How does the width of a column affect index performance?
For example if i had IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if, in the "job" column, I stored not names but numeric codes identifying the job names?
For example, I would store "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'.
Is there any query to find the CPU usage and memory usage of all the queries currently running on the DB?
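A sketch of the kind of query I would start with: join active sessions to their current SQL child cursor for cumulative CPU, and to V$PROCESS for the session's PGA. Note that CPU_TIME and ELAPSED_TIME are cumulative for the cursor, not just the current run:

SELECT s.sid, s.username, s.sql_id,
       ROUND(q.cpu_time     / 1e6, 1) AS cpu_seconds,
       ROUND(q.elapsed_time / 1e6, 1) AS elapsed_seconds,
       q.buffer_gets,
       ROUND(p.pga_used_mem / 1024 / 1024) AS pga_used_mb
FROM   v$session s
       JOIN v$sql     q ON q.sql_id = s.sql_id
                       AND q.child_number = s.sql_child_number
       JOIN v$process p ON p.addr = s.paddr
WHERE  s.status = 'ACTIVE'
AND    s.type   = 'USER';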
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed across many extents, a scan takes a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If needed I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any relevant Oracle system view?
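There is no single "fragmentation" view, but a common rough check (a sketch; it needs current statistics, and 8192 is an assumed block size) is to compare the blocks a table actually needs for its rows with the blocks below its high-water mark and the blocks allocated to the segment:

SELECT t.table_name,
       t.num_rows,
       t.blocks                                AS blocks_below_hwm,
       s.blocks                                AS blocks_allocated,
       CEIL(t.num_rows * t.avg_row_len / 8192) AS blocks_roughly_needed
FROM   user_tables t
       JOIN user_segments s ON s.segment_name = t.table_name
WHERE  t.table_name = 'MY_TABLE';              -- placeholder table name

-- For an index, VALIDATE STRUCTURE populates INDEX_STATS for the current session:
-- ANALYZE INDEX my_index VALIDATE STRUCTURE;
-- SELECT name, height, lf_rows, del_lf_rows, pct_used FROM index_stats;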
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten the name and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation, on Standard Edition without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Are 300 million rows in only one table, with 500K transactions per day, too much?
Simple database with simple schema.
How many records begin to be too many?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. But we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
I need to check the package's performance and then improve it.
1. How can I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I have observed that the delete takes a long time to remove all the rows in the table (about 7,000,000 records). This table is like a staging table: daily we need to clean out the data before inserting new data into it. What can I use instead of DELETE?
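For question 2, the usual answer for a full staging-table purge is TRUNCATE rather than DELETE: it is DDL, resets the high-water mark and generates almost no undo or redo, so it is near-instant even for 7M rows, but it commits implicitly and cannot be rolled back. For question 1, DBMS_PROFILER (or, on 11g, the hierarchical profiler DBMS_HPROF) can time the package statement by statement. A sketch with a placeholder table name:

TRUNCATE TABLE my_staging_table;                 -- placeholder name
-- or, to keep the allocated extents for the next day's load:
TRUNCATE TABLE my_staging_table REUSE STORAGE;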
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment and, on achieving the desired execution plan, adjust the statistics so that the optimizer follows that plan without hints.
Q1. If that is true, what statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions: With respect to the above query,
Q2. If I want to make dept the driving table and emp the driven table, then how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, then how can I adjust the statistics to achieve that?
I can put the ORDERED and USE_HASH hints in to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
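For Q2 and Q3, the mechanism people usually mean is DBMS_STATS.SET_TABLE_STATS / SET_INDEX_STATS / SET_COLUMN_STATS, i.e. hand-setting statistics rather than gathering them; I would add the hedge that SQL profiles, SQL plan baselines and stored outlines are the more supported ways to pin a plan. A sketch with invented numbers: making DEPT look tiny pushes the optimizer toward DEPT as the outer (driving) row source of the nested loop, and inflating both tables far enough usually tips the join into a hash join instead.

BEGIN
  -- Q2 sketch: make DEPT the obvious outer (driving) table of the nested loop ...
  DBMS_STATS.SET_TABLE_STATS(ownname => user, tabname => 'DEPT',
                             numrows => 4, numblks => 1);
  -- ... by making EMP the larger table that is probed through its DEPTNO index.
  DBMS_STATS.SET_TABLE_STATS(ownname => user, tabname => 'EMP',
                             numrows => 1000000, numblks => 20000);
  -- Q3 sketch: inflating DEPT as well (e.g. numrows => 100000, numblks => 2000)
  -- makes repeated index probes look expensive and usually tips the plan into a hash join.
END;
/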
I have an issue with export (expdp).
When I export a schema using the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
I am trying to run an Oracle report via an Oracle Applications concurrent job. The concurrent job completes with status Normal, but I don't get anything on the printout page. In the log file of this request I see the message 'MSG-01003: Errors =>ORA-01652: unable to extend temp segment by 128 in tablespace TEMP'. I have almost doubled the TEMP tablespace in size, but I am still not able to get rid of this error message.
The following query gets its input parameters from the front-end application, which users run to get reports. There are many drop-down boxes such as LOB, FAMILY, BRAND, etc. The user may or may not select values from the drop-down boxes.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.
To get this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which receives the values selected by the user as a comma-separated string; here V_SELLOB is one of those variables and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
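One commonly suggested alternative to wrapping every optional filter in DECODE is to build the statement dynamically, so that unselected filters are simply absent and the remaining predicates are applied directly to the columns. A minimal, hypothetical sketch, simplified to a single value rather than the real comma-separated list; bind the values or validate the parsed list so the dynamic SQL is not open to injection:

DECLARE
  v_refcursor SYS_REFCURSOR;
  v_sellob    VARCHAR2(100) := 'DEFAULT';   -- value passed from the front end
  v_sql       VARCHAR2(4000);
BEGIN
  v_sql := 'SELECT * FROM items a WHERE a.sku_status = ''A''';
  IF v_sellob <> 'DEFAULT' THEN
    -- add the LOB filter only when the user actually picked one
    v_sql := v_sql || ' AND a.lob_desc = :lob';
    OPEN v_refcursor FOR v_sql USING v_sellob;
  ELSE
    OPEN v_refcursor FOR v_sql;
  END IF;
END;
/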
What are the principal things to look at when we get different performance results for the same query? I have two different databases: the plan and the data are the same, but the performance results are very different.
The query below throws the error mentioned below.
My PGA_AGGREGATE_TARGET = 2GB.
The plan, error message and query are given below.
Rows  Plan
1     SELECT STATEMENT
1       HASH JOIN
1         MERGE JOIN CARTESIAN
1           TABLE ACCESS BY INDEX ROWID WAT_SOURCE_DATA
              BITMAP CONVERSION TO ROWIDS
                BITMAP INDEX SINGLE VALUE INDX_WAT_SRC_DATA_BIT
[code]....
Error Message : ORA-01652:unable to extend temp segment by 128 in tablespace TEMP
Query :
SELECT OR004.wat_id "WAT_ID",
SYSDATE "DATE_FIRST_IDENTIFIED",
SYSDATE "DATE_LAST_IDENTIFIED",
'OR-004' "RULE_REFNO",
'RISK' "RULE_TYPE",
OR004.workspace_id "WORKSPACE_ID",
OR004.workspace_name "WORKSPACE_NAME",
[code]....
I have this huge report that uses inline views. I keep getting the following error message when running the script through Toad. I was thinking about using USE_HASH hints. The SQL optimizer we use in Toad is very buggy. I'm using Oracle Database version 10.2.0.3.
I can upload explain plan if needed.
SELECT 'Project Number^Project Start Date^Project End Date^Status^Project Manager^Task Number^'||
'Task Start Date^Task Completion Date^Task Manager^Award Number^Award Short Name^Project Organization^'||
'Task Organization^Expense Code^OMB Code^Revenue Line^Burden Rate^Burden Structure^Site^Sponsor^Type^Customer^'||
'Award Type^Award Purpose^Federal Flow Thru Code^IDC Schedule Name^Total Expenditure^Direct Charges^'||
'Indirect Charges^Cost Share Charges^Total Commitments^Direct Commitments^Indirect Commitments^Cost Share Commitments^'||
[Code]...