Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id from bhi_tracking where description like 'Multilateral:%'

The explain plan for this query is:-
Bhi_tracking is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then insert the rows into it, to enhance performance?
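One option worth testing, assuming the predicate is always an anchored prefix such as 'Multilateral:%' and the query only needs these three columns (the index name below is invented for illustration), is a composite index the query can be answered from without visiting the table:

create index bhi_trk_desc_idx on bhi_tracking (description, time_stamp, user_id);

A LIKE with no leading wildcard can use an index range scan on the leading column, and carrying time_stamp and user_id in the index lets the optimizer skip the table access entirely; whether that actually reduces CPU depends on how many rows match the prefix.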
We are using the 11g AMM feature with MEMORY_TARGET set to 96GB; total RAM on the server is 128GB. Now top and free show only about 200MB of memory free on the system.
There are two processes, dbw0 and dbw1, which appear to consume the most memory: about 30GB per DBWn process.
Why is the DBWn process taking up so much memory when there is not much load on the database?
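For what it's worth, top's RES/SIZE figures for Oracle background processes normally include the shared SGA pages the process has attached to, so two DBWn processes can each appear to "own" most of a 96GB MEMORY_TARGET. A quick sanity check of the writers' real private memory (a sketch, assuming 11g where V$PROCESS has the PNAME column) is:

select pname, spid,
       round(pga_used_mem/1024/1024)  as pga_used_mb,
       round(pga_alloc_mem/1024/1024) as pga_alloc_mb
from   v$process
where  pname like 'DBW%';

If the PGA figures come back in the tens of megabytes, the 30GB shown in top is almost certainly shared SGA counted once per process rather than memory the writers have privately allocated.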
The below query is utilizing more than 17GB of temp space, but it is still failing due to insufficient temp space. Is there any way to rewrite this query to reduce the temp utilization?
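Before rewriting, it may help to confirm which session and which kind of work (sort vs. hash) is actually consuming the temp space while the statement runs; a sketch using V$TEMPSEG_USAGE:

select s.sid, s.username, u.sql_id, u.tablespace, u.segtype,
       round(u.blocks * t.block_size / 1024 / 1024) as temp_mb
from   v$tempseg_usage u
       join v$session s       on s.saddr = u.session_addr
       join dba_tablespaces t on t.tablespace_name = u.tablespace
order  by temp_mb desc;

Knowing whether the space goes to SORT or HASH segments usually points at which join or aggregation step in the plan needs attention.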
I am facing a performance issue in which the query cost is very low compared to the CPU cost, and as a result CPU usage always shows a high graph. I am also attaching the gv$sql and gv$sql_plan data for this query.
This is the query:
SELECT PTLS.ITEMTYPE,
       PTLS.ITEMID,
       PTLS.STAGEID,
       TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE)     AS STARTDATE,
       MAX(STARTDATE)     AS ENDDATE
FROM   PROJECTTIMELOGSSTAGE PTLS,
       PROJECTTIMESHEETITEM PTSI,
       TIMESHEET TS
WHERE  PTLS.PROJECTID = :B2
AND    TS.TIMESHEETID = PTSI.TIMESHEETID
AND    TS.USERID = :B1
AND    PTSI.TIMESHEETID = PTLS.TIMESHEETID
AND    PTSI.ITEMTYPE = PTLS.ITEMTYPE
AND    PTSI.ITEMID = PTLS.ITEMID
AND    (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
AND    PTLS.ITEMTYPE NOT IN ('OtherTsk','NewTsk','Loc','Glb')
AND    (PTLS.ITEMTYPE, PTLS.ITEMID) IN (SELECT ITEMTYPE, ITEMID
                                        FROM   PROJECTTIMELOGSSTAGE PTLS1
                                        WHERE  PTLS1.PROJECTID = :B2
                                        AND    PTLS1.TIMESHEETID = :B3)
GROUP  BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
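Since gv$sql and gv$sql_plan data are already being collected, the per-step runtime statistics can be pulled with DBMS_XPLAN (a sketch; the sql_id is a placeholder, and the 'ALLSTATS LAST' figures are only populated if the statement ran with the gather_plan_statistics hint or STATISTICS_LEVEL=ALL):

select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));

Comparing estimated and actual row counts line by line usually shows which step is responsible for the CPU, typically one whose cardinality estimate is far off.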
I have two tables with the same 15 columns. I am trying to find the difference between the two tables using the MINUS operator and then insert it into a stage table using the code below.
The issue is that table1 has 50 million records and table2 is empty,
so the first time we execute this, the v_collection1 and v_collection2 collections will hold 50 million records in memory. I think this is not good, because holding that much in memory will consume memory and resources during sorting and other activities.
After fetching the records into the collections we insert them into the stage table and then COMMIT, which I also think is not good, because committing 50 million rows will generate a large amount of redo.
Below is a snippet of my code:
DECLARE
   TYPE lst_collection1 IS TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
   TYPE lst_collection2 IS [code].......
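A set-based alternative worth considering (a sketch; stage_table is a hypothetical name for your stage table, and the column list is assumed to match table1) avoids materialising 50 million rows in PL/SQL collections at all, lets the MINUS sort spill to temp instead of session PGA, and leaves a single commit at the end:

insert /*+ append */ into stage_table
select col1, col2, col3 /* ...all 15 columns... */ from table1
minus
select col1, col2, col3 /* ...all 15 columns... */ from table2;

commit;

The direct-path APPEND insert also keeps the new blocks out of the buffer cache; redo for the insert itself is only reduced if the stage table is NOLOGGING and the database is not in FORCE LOGGING mode.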
How do I determine if a function is worth pinning in memory? I want to come up with a percentage, the idea being that if the function is already in memory 80%+ of the time then it is not worth it.
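One way to approximate that percentage (a sketch against V$DB_OBJECT_CACHE; substitute your function's owner and name, which are placeholders here) is to watch how often the function has to be reloaded relative to how often it is executed, and whether it is already kept:

select owner, name, type, sharable_mem, loads, executions, kept
from   v$db_object_cache
where  type = 'FUNCTION'
and    name = 'MY_FUNCTION';

If LOADS stays at 1 while EXECUTIONS climbs, the function is effectively in memory all the time and pinning it with DBMS_SHARED_POOL.KEEP is unlikely to buy much; a steadily growing LOADS count is the signal that it is being aged out.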
The production statistics have been implemented (copied) in development. The stats were gathered two months back on dev, while in production the stats were gathered two weeks back.
My question: shouldn't the high volume of data cause plan changes in both environments? My thinking is that the plans can differ, since the high volume of data changing in prod may lead to a different plan there.
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days back, but from the ninth day back it jumped and has consistently stayed in double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization, showing the top events below:
Event                           Waits     Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                  5,802                  34.9
RFS ping                        15        5,118    33,671        30.8               Other
Log file sequential read        234,831   5,036    21            30.3               System I/O
SQL*Net more data from client   24,171    1,087    45            6.5                Network
Db file sequential read         130,939   453      3             2.7                User I/O
Findings: In the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer? 2) I feel the network wait can be reduced by tweaking SDU and TDU values based on the MTU.
How can I find out which particular Oracle session was consuming high memory in the past?
I can't get the data from v$sesstat, and I am unable to get the information from AWR.
DBA_HIST_ACTIVE_SESS_HISTORY does not have a field that indicates memory-related information. Shall I concentrate on the EVENT column in DBA_HIST_ACTIVE_SESS_HISTORY for sessions that continuously waited on sorts or direct path reads? Or should I locate the sql_id from DBA_HIST_SQLSTAT with a high SORTS_DELTA for the snapshots belonging to the problematic time period, and then use that sql_id to query DBA_HIST_ACTIVE_SESS_HISTORY?
Which approach should I take to find the session that consumed the most memory in the past?
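Of those two, the DBA_HIST_SQLSTAT route is the more direct one; a sketch for the problematic window (the bind names are placeholders) would be:

select st.snap_id, st.sql_id, st.sorts_delta, st.disk_reads_delta, st.buffer_gets_delta
from   dba_hist_sqlstat st
       join dba_hist_snapshot sn
            on  sn.snap_id         = st.snap_id
            and sn.dbid            = st.dbid
            and sn.instance_number = st.instance_number
where  sn.begin_interval_time between :start_time and :end_time
order  by st.sorts_delta desc;

The top sql_ids can then be fed back into DBA_HIST_ACTIVE_SESS_HISTORY to recover the session_id/session_serial# that ran them during those snapshots. Note that if the database is 11.2 or later, DBA_HIST_ACTIVE_SESS_HISTORY also carries PGA_ALLOCATED and TEMP_SPACE_ALLOCATED columns, which answer the question directly.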
I came across a situation where a nullable column is not using its index for an 'order by' clause. I added a NOT NULL condition to the 'where' clause but it wasn't useful. I didn't want to make a composite index with a not-null column or with a constant, or modify the column to NOT NULL.
So I carried out test cases, during which I found that in one case the SQL statement does a 'fast full scan' for data access but still does not use the index for the 'order by' sorting.
here are the steps
Initially I kept the column Nullable
SQL> create sequence s5;

Sequence created.

SQL> create table t5 as select s5.nextval id, a.* from dba_objects a where rownum<1001;

Table created.

SQL> set pages 100
SQL> select column_name, nullable from user_tab_columns where table_name='T5';
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=16 pr=0 pw=0 time=4771 us)
   1000    TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1157 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68       49.49         49.72
********************************************************************************
select /*+ index(t i5) */ * from t5 t where id is not null order by id
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=5167 us)
   1000    INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3141 us)(object id 4673065)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      69        0.00          0.00
  SQL*Net message from client                    69       22.89         28.04
Now I modified the 'id' column to Not Null
SQL> alter table t5 modify id not null;
SQL> set pages 100
SQL> select column_name, nullable from user_tab_columns where table_name='T5';
COLUMN_NAME                    N
------------------------------ -
ID                             N
OWNER                          Y
OBJECT_NAME                    Y
SUBOBJECT_NAME                 Y
OBJECT_ID                      Y
DATA_OBJECT_ID                 Y
OBJECT_TYPE                    Y
CREATED                        Y
LAST_DDL_TIME                  Y
TIMESTAMP                      Y
STATUS                         Y
TEMPORARY                      Y
GENERATED                      Y
SECONDARY                      Y
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=16 pr=0 pw=0 time=2398 us)
   1000    TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1152 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68       37.74         37.91
********************************************************************************
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=4166 us)
   1000    INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3142 us)(object id 4673065)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68        8.28          8.45
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=6 pr=0 pw=0 time=1342 us)
   1000    INDEX FAST FULL SCAN I5 (cr=6 pr=0 pw=0 time=1093 us)(object id 4673065)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68        1.88          1.89
My questions are:
1) Why wasn't adding 'where id is not null' enough for the index to be used for the 'order by'?
2) When we got the 'fast full scan', why wasn't the index used for the 'order by' clause?
3) Do we need the indexed column in the where clause for it to be used for the 'order by' clause too?
4) Do we need an 'order by' clause at all if we are selecting only the indexed column with sequence-generated values?
We have 96GB of memory on the UNIX server, and 85% of its usage shows up as Oracle processes. I want to determine which Oracle processes are taking most of the memory.
The SGA is around 36G, SGA_TARGET is 40G, and the PGA is around 4G,
so a total of around 40-45GB of usage is understandable, but what other Oracle processes are chewing up the remaining 30-40GB on the server is not known.
load averages: 7.35, 6.46, 6.15;  up 248+11:33:21  12:25:03
2202 processes: 2196 sleeping, 1 zombie, 5 on cpu
CPU states: 83.8% idle, 10.5% user, 5.8% kernel, 0.0% iowait, 0.0% swap
Memory: 96G phys mem, 15G free mem, 128G total swap, 128G free swap

  PID USERNAME LWP PRI NICE   SIZE    RES STATE   TIME     CPU COMMAND
21720 oracle   258   0    0    40G    40G cpu/48  215:28  2.04% oracle
10709 oracle     1   0    2  1816K  1448K cpu/9     0:02  0.90% res_conf_email_
[code]......
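Again hedging that top counts the shared SGA once per attached process (which is why a single oracle process shows 40G RES), a quick way to see what the instance itself thinks it has handed out in private memory is V$PGASTAT, with V$PROCESS for the individual heavyweights; a sketch:

select name, round(value/1024/1024/1024, 2) as gb
from   v$pgastat
where  name in ('total PGA allocated', 'total PGA inuse');

select p.spid, p.pname, s.username, s.program,
       round(p.pga_alloc_mem/1024/1024) as pga_alloc_mb
from   v$process p
       left join v$session s on s.paddr = p.addr
order  by p.pga_alloc_mem desc;

If SGA plus "total PGA allocated" accounts for roughly the expected 40-45GB, the remaining usage reported by the OS is most likely shared memory double-counted across the server processes plus their OS-level stack and heap, not extra Oracle allocations.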
I have a RAC with 48 cores on Solaris. I check dbconsole when application performance is very slow and everyone complains, and I see that the main wait is CPU; the AWR report shows the same. However, when I check the server CPU I see about 80% idle. So how can I make Oracle use more CPU power instead of waiting for it? I don't think parallelism is an option here because I can't change the application code.
I am running Oracle Linux 5.7 on VMware Workstation 9 and trying to install Grid Infrastructure 11gR2. During installation my system freezes and I see out-of-memory errors. The Java installer is eating up too much memory, over 1GB. Is there a way I can limit the memory usage of the Java process to a certain extent?
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed over many extents, scans take a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If needed I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is actually high.
Is there any useful script or query to do this, or any interesting Oracle system view?
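There is no single "fragmentation" view, but with reasonably fresh statistics a rough check is to compare the space a table actually occupies with the space its rows need (a sketch; the table name is a placeholder and an 8K block size is assumed):

select t.table_name,
       round(t.blocks * 8192 / 1024 / 1024)            as allocated_mb,
       round(t.num_rows * t.avg_row_len / 1024 / 1024) as row_data_mb,
       t.last_analyzed
from   user_tables t
where  t.table_name = 'MY_TABLE';

A large gap between the two suggests free space below the high-water mark (for example after heavy deletes), which is when a move/shrink or rebuild pays off. For indexes, ANALYZE INDEX ... VALIDATE STRUCTURE populates INDEX_STATS (DEL_LF_ROWS vs LF_ROWS) for a similar judgement, though it blocks DML on the table while it runs.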
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and once we achieve the desired execution plan we can adjust the 'statistics' so that the plan is followed without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname from emp e, dept d where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions, with respect to the above query: Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that? Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
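For what it's worth, the usual mechanism behind that advice is the DBMS_STATS.SET_* routines, which let you overwrite the stored statistics the optimizer sees. A sketch (the numbers are invented purely for illustration) that makes DEPT look much larger than it really is, which is the kind of change that can flip the driving table or push the optimizer from a nested loop towards a hash join:

begin
  dbms_stats.set_table_stats(
    ownname       => user,
    tabname       => 'DEPT',
    numrows       => 1000000,
    numblks       => 20000,
    no_invalidate => false);
end;
/

Column-level statistics can be manipulated the same way with DBMS_STATS.SET_COLUMN_STATS (number of distinct values, density). That said, faking statistics is fragile: the next scheduled stats gathering will silently overwrite them, which is why a documented hint, a SQL profile or a SQL plan baseline is often the more maintainable way to lock in a plan.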
INSERT INTO tra VALUES (2, 20503, SDO_GEOMETRY (2001, NULL, SDO_POINT_TYPE (1387, 0, NULL), NULL, NULL), 23037 ) /
and the POSITION column is indexed with an R-tree spatial index.
Now when I run a spatial query such as:
SELECT * FROM tra t WHERE SDO_FILTER(t.position, MDSYS.SDO_GEOMETRY(2001,NULL,NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,3), MDSYS.SDO_ORDINATE_ARRAY(0,0,9000,0)), 'querytype=WINDOW') = 'TRUE' and t.position.sdo_point.X=1;
I do not know how much I/O was incurred.
I tried SET AUTOTRACE ON,
but the physical reads are 0. This does not seem possible, because I have more than 100,000 objects there, all indexed with the R-tree.
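Zero physical reads in autotrace usually just means the blocks were already cached from earlier runs, not that no I/O work was done; the number to watch is the logical I/O (consistent gets). A sketch to see both for your own session right after running the spatial query:

select n.name, s.value
from   v$mystat s
       join v$statname n on n.statistic# = s.statistic#
where  n.name in ('consistent gets', 'physical reads', 'session logical reads');

Running the query once to warm the cache, noting these values, re-running it and diffing gives the per-execution figures; alternatively SET AUTOTRACE TRACEONLY STATISTICS prints the same counters after each statement.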
select gam.SOL_ID, COUNT(gam.FORACID)
from   gam, smt
where  gam.ACID = smt.ACID
and    gam.ACID NOT IN (select ACID from imt)
and    gam.SCHM_TYPE in ('SBA','CCA','CAA','ODA')
and    GAM.ACCT_CLS_FLG = 'N'
and    gam.SOL_ID IN (select SOL_ID from IMT)
group  by gam.SOL_ID
/
Attached is the explain plan, in which the index on the IMT table is not used and the query is doing a full table scan (FTS) on the IMT table. What needs to be done to avoid the FTS on the IMT table?
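One rewrite worth testing (a sketch) replaces the NOT IN / IN subqueries with correlated (NOT) EXISTS, which gives the optimizer the option of probing indexes on IMT(ACID) and IMT(SOL_ID) instead of building the whole IMT result set; note it is only equivalent to NOT IN if IMT.ACID can never be NULL:

select gam.sol_id, count(gam.foracid)
from   gam, smt
where  gam.acid = smt.acid
and    not exists (select null from imt where imt.acid = gam.acid)
and    exists     (select null from imt where imt.sol_id = gam.sol_id)
and    gam.schm_type in ('SBA','CCA','CAA','ODA')
and    gam.acct_cls_flg = 'N'
group  by gam.sol_id
/

Whether the full scans actually disappear still depends on IMT having usable indexes on ACID and SOL_ID and on their selectivity; if IMT is tiny, a full scan of it may in fact be the cheaper plan and not worth fighting.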
It is taking different approaches (execution plans) when executed with the same set of parameters. Because of this it sometimes executes successfully, but sometimes it fills all the TEMP space and fails. I am pasting both execution plans (which differ from the explain plan) below:
We are facing performance issues on our production instance, 10g (10.2.0.4) 32-bit on OEL 5.3 with a 2GB SGA. The problem is mainly related to one table, which is about 32GB in size. We have rebuilt the indexes as well, but the problem still persists. We are considering pinning the SQL statement that hits this table frequently in the shared pool, but as far as we have found, we can only pin procedures or functions in the shared pool. True or false? If we can pin a SQL statement in the shared pool, how? If we cannot, is there any other way?
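For what it's worth, DBMS_SHARED_POOL.KEEP is not limited to procedures and functions; it also accepts a cursor identified by its address and hash value with the 'C' flag. A sketch (the literal address/hash pair below is a placeholder you would take from the first query, and the SQL comment used to find the statement is hypothetical; the package may first need to be installed from ?/rdbms/admin/dbmspool.sql as SYS):

select address, hash_value, sql_id, executions
from   v$sqlarea
where  sql_text like 'SELECT /* hot_statement */%';

begin
  dbms_shared_pool.keep('000000008FAC5E30,1234567890', 'C');
end;
/

Keeping a cursor only prevents it from being aged out and re-parsed; it does nothing for the I/O against the 32GB table itself, so it is worth confirming from AWR that parsing, rather than buffer gets or disk reads, is actually where the time goes.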
I am facing a weird situation wherein the explain plan of the same SQL in SIT and PROD is different; in fact the plan is very costly in PROD, even though the DB version is the same in SIT and PROD.
Below are the SQL and the corresponding explain plans in PROD and SIT respectively.
Query:

SELECT seq, CCN, ProcessorPart, root_item, comp_path, Item, comp_item, comp_item_type,
       lag(comp_item_type, 1, 'PART') OVER (PARTITION BY seq ORDER BY lvl) Nxt_comp_item_type,
       lvl, bom_qty,
       ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl) WHEN 0 THEN 0 ELSE 1 END
             * EXP(SUM(LN(nullif(abs(bom_qty), 0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
       'AMER'
[code]...
The tables referred to in the above query are small, containing around 10k records each. The tables are partitioned on region and are not indexed.
I am not able to work out why there is such a huge change in cost between SIT and PROD. The job now runs for 3-5 hours, whereas it used to complete within 20 minutes in SIT.
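Since the plan cost is driven almost entirely by the optimizer statistics, a first check is to compare what each environment believes about the tables and partitions involved; a sketch (the table names are placeholders for the tables the query actually touches):

select owner, table_name, partition_name, num_rows, blocks,
       avg_row_len, stale_stats, last_analyzed
from   dba_tab_statistics
where  table_name in ('TABLE_A', 'TABLE_B')
order  by owner, table_name, partition_name;

Run in both SIT and PROD, differences in num_rows/blocks at the partition level, or missing partition-level statistics on the region partitions altogether, usually explain this kind of plan divergence better than anything in the SQL itself.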
When I run a script that does a select from a single table (the table has 33,521,868 records), the query executes in about 0.094 seconds. When I use the exact same query to insert into a temporary table, it takes 10 minutes or more.
What should I be doing to speed up this process? I have also tried using hints, and they do not speed up the insert.
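A 0.094-second "select" on a 33-million-row table almost certainly only fetched the first screen of rows, whereas the insert has to produce and write every row, so the two timings are not really comparable. That said, a direct-path insert is the usual way to speed up a bulk load into a staging table; a sketch with hypothetical names:

insert /*+ append */ into stage_tmp
select /* same select as before */ *
from   big_table;

commit;

APPEND writes formatted blocks above the high-water mark and bypasses the buffer cache; combined with a NOLOGGING target table (outside FORCE LOGGING mode) it also reduces redo.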
I have a query that is well optimized as to its indexes and so on, and it responds immediately when the answer is a few records, in SQL Server as well as in Oracle. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of them VARCHAR(250) and some VARCHAR(2000); I understand the problem may lie here, but that cannot be all of it, because for similar results (20,000 records) SQL Server responds in 2 seconds while Oracle takes around 30 seconds to return the full data. The problem really is in bringing back all of these fields, since if the query returns only a single numeric field it also completes in 2 seconds. I have tested through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or the flow of data through the network. I would like to think there is some setting that accounts for this much difference between Oracle and SQL Server. The databases are Oracle 10g and SQL Server 2008.
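One setting that commonly explains a 2-second vs 30-second gap when pulling 20,000 wide rows is the client fetch size: Oracle clients default to fairly small fetch arrays (SQL*Plus fetches 15 rows per round trip by default), so wide rows mean many network round trips. A sketch of the SQL*Plus side, where some_table stands in for the real query:

set arraysize 500
set timing on
select * from some_table;

ODBC and JDBC have equivalent knobs (the driver's fetch buffer size, java.sql.Statement.setFetchSize), and raising them is usually the first thing to try before blaming the database itself.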