PL/SQL :: Add Consumer Group To System Plan
Mar 6, 2013
I am trying to write a SQL statement that will do the following:
2. Add the APPUSER consumer group to the SYSTEM_PLAN resource plan. Change the level 3 CPU resource allocation percentages: 60% for the APPUSER consumer group and 40% for the LOW_GROUP consumer group.
SQL>BEGIN
DBMS_RESOURCE_MANAGER.CREATE_SYSTEM_PLAN (SYSTEM_PLAN => 'system_plan',
CONSUMER_GROUP1 => 'APPUSER', GROUP1_CPU3 => 60,
CONSUMER_GROUP2 => 'LOW_GROUP', GROUP2_CPU3 => 40);
END;
/
When executing the statement, I get the following error message:
Error message:
ERROR at line 2:
ORA-06550: line 2, column 23:
PLS-00302: component 'CREATE_SYSTEM_PLAN' must be declared
ORA-06550: line 2, column 1:
PL/SQL: Statement ignored
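One hedged way to express this with the documented DBMS_RESOURCE_MANAGER API (the package has no CREATE_SYSTEM_PLAN procedure, which is what PLS-00302 is complaining about) is to add a plan directive for APPUSER and update the existing LOW_GROUP directive inside a pending area. This is only a sketch and assumes the APPUSER consumer group and the SYSTEM_PLAN already exist:
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
-- give APPUSER 60% of CPU at level 3 of SYSTEM_PLAN
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
plan => 'SYSTEM_PLAN',
group_or_subplan => 'APPUSER',
comment => 'APPUSER at level 3',
cpu_p3 => 60);
-- change LOW_GROUP's level-3 allocation to 40%
DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
plan => 'SYSTEM_PLAN',
group_or_subplan => 'LOW_GROUP',
new_comment => 'LOW_GROUP at level 3',
new_cpu_p3 => 40);
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/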
View 11 Replies
Jun 15, 2012
What is the most accurate/efficient way of obtaining the execution plan for a piece of running SQL in Oracle 9i? In 10g and 11g, DBMS_XPLAN.DISPLAY_CURSOR(sql_id) can obviously be used.
How can this be achieved in 9i? Currently I am simply obtaining the SQL_TEXT and then running an explain plan ("EXPLAIN PLAN FOR ...") - I believe this is not necessarily the same plan being used by the SQL that is executing, though.
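A sketch for 9i: the plan of the cursor actually in memory can be read from V$SQL_PLAN, keyed by ADDRESS and HASH_VALUE (taken from V$SESSION.SQL_ADDRESS / SQL_HASH_VALUE of the running session), since 9i has neither SQL_ID nor DBMS_XPLAN.DISPLAY_CURSOR:
SELECT id, parent_id, operation, options, object_name, cost, cardinality
FROM v$sql_plan
WHERE hash_value = :sql_hash_value
AND address = :sql_address
ORDER BY id;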
View 7 Replies
View Related
Sep 6, 2013
I am using the Release 11.2.0.3.0 - 64bit Production version of Oracle. We have a 3-tier architecture (firewall/web/app/DB). I saw some SQL queries running for up to ~10 hours in my database; they are part of the application (module JDBC THIN CLIENT). After talking to the Java guys, they asked me to kill the sessions for those queries. The queries come from a search screen in which a user enters a very large date range and then moves to another tab, but they keep running in the database even though the user is no longer interested in the result set.
So how do I avoid this? In the past our database has suffered resource contention leading to application slowness, so I was planning to set different timeouts using database resource consumer groups for online user requests and batch requests, depending on the app server (that is, by machine name) making the request.
I have done the setup below locally to test one scenario, in which I make a database call from a specific machine and it should time out after the specified duration. But it is not working as expected: the calls from the specified machine are not getting assigned to the consumer group I created.
Begin
-- create the pending area
dbms_resource_manager.create_pending_area();
END;
/
BEGIN
-- Create the consumer group
[code]....
After this, when I verify the calls from machine 'LR9XY7T8', they belong to the consumer group 'OTHER_GROUPS' and the SQL query does not get timed out within the 60 seconds I specified.
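A sketch of the two pieces that most often leave sessions stuck in OTHER_GROUPS: the machine-to-group mapping and the switch privilege. The group and user names below are hypothetical; 'LR9XY7T8' is the machine from the post.
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
-- map sessions coming from that client machine to the group
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
attribute => DBMS_RESOURCE_MANAGER.CLIENT_MACHINE,
value => 'LR9XY7T8',
consumer_group => 'ONLINE_USERS_GROUP');   -- hypothetical group name
DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- without this grant the mapping is ignored and the session falls back to OTHER_GROUPS
BEGIN
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
grantee_name => 'APP_USER',                -- hypothetical connecting user
consumer_group => 'ONLINE_USERS_GROUP',
grant_option => FALSE);
END;
/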
View 9 Replies
View Related
Apr 1, 2012
We have system.current_item and system.cursor_item.....
Both appear to represent the same thing.
Like this, we have a few other system variables as well.
what is the main difference between current_item and cursor_item?
View 2 Replies
View Related
Aug 21, 2012
A couple of questions about the parameters below: do they measure a single processor, or the average usage across all processors/CPUs? For example, will I receive an alert if CPU system time goes above 95% on a single processor, or not?
CPU Idle Time (%)
CPU System Time (%)
CPU User Time (%)
View 1 Replies
View Related
Nov 1, 2013
I'm trying to group sets of data based on time separations between records and then count how many records are in each group.
In the example below, I want to return the count for each group of data, so Group 1=5, Group 2=5 and Group 3=5
SELECT AREA_ID AS "AREA ID",
LOC_ID AS "LOCATION ID",
TEST_DATE AS "DATE",
TEST_TIME AS "TIME"
FROM MON_TEST_MASTER
WHERE AREA_ID =89
AND LOC_ID ='3015'
AND TEST_DATE ='10/19/1994';
[code]....
Group 1 = 8:00:22 to 8:41:22
Group 2 = 11:35:47 to 11:35:47
Group 3 = 15:13:46 to 15:13:46
Keep in mind the times will always change and sometimes go over the one-hour mark, but no group will have more than a one-hour separation between consecutive records.
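A gaps-and-islands sketch, assuming TEST_DATE and TEST_TIME are stored as character strings in the formats shown (adjust the conversion if they are DATE columns): flag each row whose gap from the previous row exceeds one hour, turn the running sum of flags into a group number, then count per group.
WITH base AS (
SELECT area_id, loc_id,
TO_DATE(test_date || ' ' || test_time, 'MM/DD/YYYY HH24:MI:SS') AS ts
FROM mon_test_master
WHERE area_id = 89 AND loc_id = '3015' AND test_date = '10/19/1994'
), flagged AS (
SELECT b.*,
CASE WHEN ts - LAG(ts) OVER (ORDER BY ts) > 1/24 THEN 1 ELSE 0 END AS new_grp
FROM base b
), grouped AS (
SELECT f.*, SUM(new_grp) OVER (ORDER BY ts) + 1 AS grp
FROM flagged f
)
SELECT area_id, loc_id, grp, COUNT(*) AS records_in_group
FROM grouped
GROUP BY area_id, loc_id, grp
ORDER BY grp;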
View 4 Replies
View Related
Jun 23, 2011
I read that ROWNUM is applied after the selection is made and before ORDER BY. So, in order to get the sum of salaries for all employees in all departments with a row number starting from 1, I wrote:
select ROWNUM,department_id,sum(salary) from employees group by department_id
If I remove ROWNUM, it gives the correct output. Why can't ROWNUM be used here?
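A sketch of the usual workaround: ROWNUM is assigned to each row before aggregation, so it cannot appear in the select list of a GROUP BY query unless it is itself grouped on (hence the "not a GROUP BY expression" error). Numbering the already-aggregated result in an outer query avoids that:
SELECT ROWNUM AS rn, department_id, total_salary
FROM (SELECT department_id, SUM(salary) AS total_salary
FROM employees
GROUP BY department_id
ORDER BY department_id);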
View 16 Replies
View Related
May 17, 2011
Refer to the txt file to create table and insert data.
I executed the following query-
SELECT priority, detail, COUNT(1) FROM TEST GROUP BY priority, detail
and got the following result-
PRIORITY     DETAIL                 COUNT(1)
Standard     Patch                  27
Standard     Initial TSS            1
Standard     Initial development    10
Standard     Production deployment  5
High Prior   Patch                  1
Now I want 'Initial TSS' and 'Initial development' to be combined together as 'Initial', so that I get the following result:
PRIORITY     DETAIL                 COUNT(1)
Standard     Patch                  27
Standard     Initial                11
Standard     Production deployment  5
High Prior   Patch                  1
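A sketch, assuming both values to be merged start with 'Initial': fold the DETAIL value with a CASE expression and group on the folded value.
SELECT priority,
CASE WHEN detail LIKE 'Initial%' THEN 'Initial' ELSE detail END AS detail,
COUNT(1)
FROM test
GROUP BY priority,
CASE WHEN detail LIKE 'Initial%' THEN 'Initial' ELSE detail END;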
View 3 Replies
View Related
May 12, 2013
How can I see the query plan for a statement?
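A minimal sketch (the table and predicate are just placeholders): explain the statement into the plan table, then format it with DBMS_XPLAN; on 10g and later, DBMS_XPLAN.DISPLAY_CURSOR shows the actual plan of a statement that has already executed.
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- actual plan of the last statement run in this session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);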
View 1 Replies
View Related
Aug 13, 2012
After we upgraded our database from 10g to 11gR2, one of the SQLs started running very slowly. Is there any way to use the 10g SQL plan for this query in 11gR2? The 10g plan still shows up in the history table along with its hash value.
We have tried using SPM, but because the workload repository capture (select_workload_repository) was set to BASIC (the default) in 10g, the plans are not getting loaded into the baselines.
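A sketch of one way to try this, assuming the old plan's rows really exist in DBA_HIST_SQL_PLAN for that sql_id (snapshot range, sql_id and the STS name below are placeholders): build a SQL Tuning Set from AWR and load it into a SQL plan baseline with DBMS_SPM.
BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS_10G_PLAN');
END;
/
DECLARE
cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN cur FOR
SELECT VALUE(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
begin_snap => &begin_snap,
end_snap => &end_snap,
basic_filter => 'sql_id = ''&sql_id''')) p;
DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS_10G_PLAN', populate_cursor => cur);
END;
/
DECLARE
n PLS_INTEGER;
BEGIN
n := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(sqlset_name => 'STS_10G_PLAN');
END;
/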
View 1 Replies
View Related
Dec 30, 2011
I observed a strange behaviour of a query after using a stored outline on it. I created a stored outline for the query in one database; before creating the outline I had set two session-level parameters, optimizer_index_caching and optimizer_index_cost_adj. I then took an export of the stored outline and imported it into another database, but there the plan of the query seems to be different from the database the outline was taken from. Why the change? My Oracle version is 10.2.0.5.0.
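A hedged first check in the target database: confirm the imported outline is actually being picked up. Outlines are only applied when USE_STORED_OUTLINES is enabled for the matching category, and USER_OUTLINES.USED flips to USED once the outline has been used (the outline name below is a placeholder).
ALTER SESSION SET use_stored_outlines = TRUE;   -- or the outline's category name

SELECT name, category, used, enabled
FROM user_outlines
WHERE name = 'MY_OUTLINE';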
View 3 Replies
View Related
Jul 30, 2012
I ran an exchange partition from a non-partitioned table into a partitioned table with the following options: WITHOUT VALIDATION UPDATE GLOBAL INDEXES, since we have a GLOBAL index (the GLOBAL index is a must). After the exchange, if I run a simple query on the first column of the PK, the plan is very bad and the EM Advisor advises me to build an index on that column. I'm using 11.2.0.2.
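One thing worth ruling out (a sketch only; the table and partition names are placeholders): statistics travel with the exchanged segment, so the partition and dependent indexes can be left with missing or unrepresentative stats after the exchange, which by itself can produce a bad plan.
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(
ownname => USER,
tabname => 'MY_PART_TABLE',    -- hypothetical partitioned table
partname => 'P_EXCHANGED',     -- the partition that was exchanged
granularity => 'PARTITION',
cascade => TRUE);
END;
/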
View 1 Replies
View Related
Oct 8, 2010
[font="Lucida Console"][/font]
Issue: for this SQL statement the client keeps changing the date literal. The SQL runs fine in development but takes a long time in production. So I created a SQL profile and pushed it to production; for EFFDT <= '25-APR-2010' it ran fine, as the plan was the same as in development. But then the client changed the literal to EFFDT <= '28-AUG-2010', and the plan no longer used my SQL profile because of the hardcoded value, so the SQL was parsed again.
How can we fix this plan? The application is written that way, so they are going to keep passing hardcoded values like this and cannot use bind variables. They fire the SQL from one session. Can we set something at session level, like cursor sharing or some hints, to get the development plan? *For proper formatting see the attached file.* Statement:
SELECT
DECODE(SUBSTR(JL.PAYGROUP,1,1),'P','P','T','P','C'), JL.DEPTID_CF, JL.OPERATING_UNIT, JL.FUND_CODE,
JL.CLASS_FLD, JL.PROGRAM_CODE, PC.EMPLID, PC.EMPL_RCD, BUGL.BUSINESS_UNIT, JL.ACCOUNT,
SUM(DECODE(JL.GL_NBR,'REGER',JL.AMOUNT,0)), 'Regular Earnings', SUM(DECODE(JL.GL_NBR,'OTERN',JL.AMOUNT,0)), 'Overtime', SUM(DECODE(JL.GL_NBR,'NRTAL',JL.AMOUNT,0)),
[code]....
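A sketch of two hedged options. CURSOR_SHARING=FORCE at session level makes the literals bind-equivalent, so one profile can match all variants; alternatively, if the profile came from a SQL Tuning Advisor task, it can be re-accepted with FORCE_MATCH => TRUE so it also matches statements that differ only in literal values (the task and profile names below are placeholders).
ALTER SESSION SET cursor_sharing = FORCE;

DECLARE
l_name VARCHAR2(30);
BEGIN
l_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
task_name => 'my_tuning_task',   -- hypothetical task name
name => 'PROFILE_EFFDT',         -- hypothetical profile name
replace => TRUE,
force_match => TRUE);
END;
/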
View 3 Replies
View Related
Mar 6, 2012
I am executing the query below, but the optimizer generates two different plans for it. I don't want to use SQL profiles to fix the execution plan.
Query
SELECT R.VENDOR_RECORD_SEQ_NO ,
R.VENDOR_SUBJECT_SEQ_NO ,
NVL(D.RESOLVED_VALUE, D.ORIGINAL_VALUE) VAL,
D.CONTROL_COLUMN_SEQ_NO
[code]....
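If this is 11g, one hedged alternative to a SQL profile is a SQL plan baseline: load the preferred plan from the cursor cache and fix it (the sql_id and plan_hash_value below are placeholders).
DECLARE
n PLS_INTEGER;
BEGIN
n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
sql_id => '&sql_id',
plan_hash_value => &good_plan_hash_value,
fixed => 'YES');
END;
/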
View 3 Replies
View Related
Jul 4, 2012
Which view can be used to find changes in the plans (hash_value, plan_hash_value) for a particular SQL statement?
I have a particular SQL statement for which the execution plan changes, but unfortunately I cannot find which view holds the details. V$SQL_PLAN_MONITOR is not working either.
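A sketch (these are AWR views, so the Diagnostics Pack is required; &sql_id is a placeholder): DBA_HIST_SQL_PLAN keeps every distinct plan captured for a sql_id together with the time it was first seen, while V$SQL_PLAN shows what is currently in the cursor cache.
SELECT sql_id, plan_hash_value, MIN(timestamp) AS first_seen
FROM dba_hist_sql_plan
WHERE sql_id = '&sql_id'
GROUP BY sql_id, plan_hash_value
ORDER BY first_seen;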
View 4 Replies
View Related
Jul 20, 2013
Why is the query behaving differently (a different execution plan) in the different databases?
The production data has been replicated to a new schema (newdev). I tried running the query in both environments: in prod the index is used, but in newdev it is not - the existing primary key index is not being used.
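A hedged first comparison between the two schemas (the table and schema names below are placeholders): the optimizer only sees the statistics, so stale or missing stats in newdev are the usual reason an index that is used in prod gets ignored.
SELECT owner, table_name, num_rows, last_analyzed
FROM all_tables
WHERE table_name = 'MY_TABLE'
AND owner IN ('PROD_SCHEMA', 'NEWDEV_SCHEMA');

SELECT owner, index_name, num_rows, distinct_keys, clustering_factor, last_analyzed
FROM all_indexes
WHERE table_name = 'MY_TABLE'
AND owner IN ('PROD_SCHEMA', 'NEWDEV_SCHEMA');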
View 6 Replies
View Related
Oct 1, 2012
I am on 11.2 on Linux. I am looking into a performance issue. The issue is around one particular SQL, involving about 5 tables. I re-gathered statistics on the 2 main tables in the query (out of the 5 tables).
When I say re-gathered, I first did DBMS_STATS.DELETE_TABLE_STATS and then did DBMS_STATS.GATHER_TABLE_STATS.
Earlier, we had histograms on these tables, which I removed and gathered stats without generating histograms.
SQL> select table_name, num_rows, sample_size, last_analyzed from user_tables where
2 table_name in ( 'DETAIL_TABLE','MASTER_TABLE');
TABLE_NAME NUM_ROWS SAMPLE_SIZE LAST_ANALYZED
------------------------------ ---------- ----------- -------------------
MASTER_TABLE 50615338 50615338 01/10/2012 11:09:27
DETAIL_TABLE 353550440 353550440 01/10/2012 11:10:05
2 rows selected. Then I ran the SQL again a couple of times (actually, that SQL is in a stored procedure, which I ran a couple of times). I found a wonderful SQL on the internet which tells me when the SQL ran and which plan (identified by its hash value) it used. Using this SQL I tried to check whether my SQL had run with any different plan, but it used exactly the same plan it used before I re-gathered the stats. See the last-analyzed time above and begin_interval_time below: the same SQL has run before and after the stats collection, with the same plan_hash_value.
SQL> select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
2 nvl(executions_delta,0) execs,
3 (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
4 (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio
5 from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
[code]....
7 rows selected.
My question is: when I re-gathered stats on 2 of the 5 tables in this SQL, are the existing plans not flushed out of the SGA? I was expecting at least a new plan hash value to show up for my SQL after the stats collection.
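Two hedged points. First, by default DBMS_STATS uses NO_INVALIDATE => DBMS_STATS.AUTO_INVALIDATE, so dependent cursors are invalidated gradually over a rolling window rather than immediately; forcing immediate invalidation looks like the sketch below. Second, even after invalidation the re-parse can legitimately arrive at the same plan (and therefore the same plan_hash_value) if the new statistics do not change the costing enough.
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(
ownname => USER,
tabname => 'MASTER_TABLE',
no_invalidate => FALSE);   -- invalidate dependent cursors right away
END;
/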
View 8 Replies
View Related
Oct 10, 2012
I don't know if this is the intended behaviour of Oracle or not, but I noticed that my queries' execution plans randomly change after statistics collection. Several tables are truncated after the daily run at 8 AM and statistics are gathered for all the tables in that schema.
However, the execution plans for 2-3 SQL statements always change after this, and performance is brought back to normal by executing the procedure explicitly from the command line with literal arguments instead of bind variables.
View 3 Replies
View Related
Feb 1, 2011
I am using Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
I have 2 schemas in my application.
1. Application schema
2. EOD(End of day) schema.
End of day schema is populated from Application schema whenever user runs EOD process. The tables are pulled like this.
1. Master tables : Always deleted and reinserted at each EOD process
2. Log tables for each transaction table: the delta between the last EOD and the current EOD data is pulled and used for populating the transaction tables
3. Transaction tables: These are populated from log tables pulled from previous step. The logic is like this
Now based on these tables about 30 reports are generated in EOD schema. Please note that each transaction table will have a EOD_ID and any report generated uses the where condition <transaction table>.EOD_ID = <current EOD_ID>
My log table contract_log and transaction table is contract in EOD schema.
contract_log table has data like this
contract_number  contract_date  customer_id  qty   rate  effective_date
1                1/Jan/2010     CUST-01      10    10    1/Jan/2010
1                NULL           NULL         NULL  11    2/Jan/2010
EOD on 1st Jan 2010 constructs contract table as
contract_number  contract_date  customer_id  qty  rate  eod_id
1                1/Jan/2010     CUST-01      10   10    EOD-1
(Since the change of rate 11 is not visible on 1st Jan 2010 because it is effective on 2nd Jan 2010)
EOD on 2nd Jan 2010 constructs contract table as
contract_number  contract_date  customer_id  qty  rate  eod_id
1                1/Jan/2010     CUST-01      10   11    EOD-2
(Since the change of rate 11 is visible on 2nd Jan 2010)
This logic is working fine. But once we have run more than 20-30 EODs, the processing time increases to 10-15 hours.
It took some time to figure out the issue: a single query, when run from Toad or PL/SQL Developer, runs in a few seconds, but as part of the whole package it takes 2-3 hours (each query).
The problem we found was that the execution plans become very inefficient once the process starts. So what we did was to analyze the tables
after they are pulled. This perfectly solved our problem. Currently the whole process takes only about 12-13 minutes, of which about 3 minutes are spent on analyzing tables and indexes. I know this is a temporary solution, as I need to get away from analyzing tables and indexes online.
My code for table and index regeneration is as below
PROCEDURE sp_gather_table_index_stats(pc_table_name VARCHAR2) IS
CURSOR cur_ind IS
SELECT index_name
FROM user_indexes
WHERE table_name = pc_table_name;
BEGIN
EXECUTE IMMEDIATE ' begin DBMS_STATS.gather_table_stats(user,' || '''' ||
pc_table_name || '''' || '); end;';
FOR cur_ind_rows IN cur_ind LOOP
EXECUTE IMMEDIATE ' begin DBMS_STATS.gather_index_stats(user,' || '''' ||
cur_ind_rows.index_name || '''' || '); end;';
END LOOP;
END;
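One hedged way to get away from gathering statistics online at every EOD (a sketch only; the stats table name is hypothetical and CONTRACT is just the example table from the post): capture representative statistics once into a stats table and import them after each load instead of re-gathering.
BEGIN
DBMS_STATS.CREATE_STAT_TABLE(ownname => USER, stattab => 'STATTAB_EOD');
-- run once, when the tables hold representative volumes
DBMS_STATS.EXPORT_TABLE_STATS(ownname => USER, tabname => 'CONTRACT',
stattab => 'STATTAB_EOD', cascade => TRUE);
END;
/
-- then, after each EOD load, instead of gathering:
BEGIN
DBMS_STATS.IMPORT_TABLE_STATS(ownname => USER, tabname => 'CONTRACT',
stattab => 'STATTAB_EOD', cascade => TRUE);
END;
/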
View 1 Replies
View Related
May 24, 2012
What privilege is required for a user to execute EXPLAIN PLAN? I get the error below when I try to run it.
SQL> explain plan for SELECT /*+ FULL(t) */ COUNT(*) FROM "DREAM"."CONSUMER.TAB" t WHERE ROWNUM <= 1000000;
explain plan for SELECT /*+ FULL(t) */ COUNT(*) FROM "DREAM"."CONSUMER.TAB" t WHERE ROWNUM <= 1000000
*
ERROR at line 1:
ORA-01031: insufficient privileges
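A hedged note: to explain a statement you generally need the same object privileges you would need to run it (plus a PLAN_TABLE you can insert into), so for a table in another schema the ORA-01031 usually points to a missing grant along these lines (the grantee name is a placeholder):
GRANT SELECT ON "DREAM"."CONSUMER.TAB" TO some_user;   -- some_user = the user running the explain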
View 9 Replies
View Related
Mar 14, 2012
I want to get the execution plan of a SQL statement in a standby database (open read-only), but it fails. How can I do this?
SQL> explain plan for select count(1) from hxl.tb_objects;
explain plan for select count(1) from hxl.tb_objects
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-16000: database open for read-only access
SQL> select count(1) from hxl.tb_objects;
COUNT(1)
----------
22238018
SQL> select open_mode from v$database;
OPEN_MODE
----------
READ ONLY
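A sketch of one workaround: EXPLAIN PLAN fails on a read-only standby because it has to insert into PLAN_TABLE, but pulling the plan of a cursor that has already executed only reads memory, so DBMS_XPLAN.DISPLAY_CURSOR works (run the statement first, then display its plan).
-- in SQL*Plus, SET SERVEROUTPUT OFF first so the previous cursor is the query itself
SELECT count(1) FROM hxl.tb_objects;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));

-- or, by sql_id taken from v$sql
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));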
View 4 Replies
View Related
Apr 12, 2013
How can I check the average time taken by each execution plan? I have a very big query and it changes its execution plan very often. We would like to lock in the best execution plan, and to find it I would like to know the average execution time the query takes when it runs under the different execution plans.
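A sketch against AWR (Diagnostics Pack required; &sql_id is a placeholder): grouping DBA_HIST_SQLSTAT by PLAN_HASH_VALUE gives the executions and average elapsed time per plan.
SELECT plan_hash_value,
SUM(executions_delta) AS execs,
ROUND(SUM(elapsed_time_delta) / 1e6 / NULLIF(SUM(executions_delta), 0), 2) AS avg_elapsed_secs
FROM dba_hist_sqlstat
WHERE sql_id = '&sql_id'
GROUP BY plan_hash_value
ORDER BY avg_elapsed_secs;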
View 7 Replies
View Related
Mar 25, 2012
I have queries on the execution plan of a sql statement
Following is the example
-- sequences s1 and s2 are assumed by this script; create them first
create sequence s1;
create sequence s2;
create table t1 as select s1.nextval id,a.* from dba_objects a;
create table t2 as select s2.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t1 select s1.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
insert into t2 select s2.nextval id,a.* from dba_objects a;
commit;
create index i1 on t1(id);
create index i2 on t2(id);
create index i11 on t1(object_type);
exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);
exec dbms_stats.gather_table_stats(user,'T2',cascade=>true);
select count(*) from t1 where object_type='VIEW';
COUNT(*)
----------
8934
set autotrace traceonly explain
Can we say in the following case, that,
(1) First index on object_type is accessed to get rowids - t1.object_type='VIEW'
(2) Then the filter on owner is applied - t1.owner='SYS'
(3) Then the table T1 is accessed to fetch data from the rowids returned by the index I11 and filer application - TABLE ACCESS BY INDEX ROWID
Though I am unable to understand how a filter can be applied to the rowids retrieved from the index, we can see from the plan below that the rows have reduced from 8550 to 1221 before we access the table... thus the filter "t1.owner='SYS'" is applied in between. Right?
another question is
Case 1 - do we retrieve a rowid from index for a given value, then retrieve required values from table for that rowid
Thus row at a time in both ... in loop
OR
Case 2 - we first fetch all rowids from index and then retrieve values from table one row at a time from the collection of rowids fetched?
Suppose Case 1 is what is happening; can we then say that both the steps with Ids 2 and 3 in the plan below are executed exactly the same number of times, and that the filter "t1.owner='SYS'" is applied at some later stage? Of course, in that case the values in the Rows column would be misleading.
select * from t1,t2 where t1.id = t2.id and t1.object_type='VIEW' and t1.owner='SYS';
Execution Plan
----------------------------------------------------------
Plan hash value: 26873579
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 1 | HASH JOIN | | 1221 | 233K| 915 (1)| 00:00:11 |
|* 2 | TABLE ACCESS BY INDEX ROWID| T1 | 1221 | 116K| 381 (1)| 00:00:05 |
|* 3 | INDEX RANGE SCAN | I11 | 8550 | | 24 (0)| 00:00:01 |
| 4 | TABLE ACCESS FULL | T2 | 161K| 15M| 533 (1)| 00:00:07 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("T1"."ID"="T2"."ID")
2 - filter("T1"."OWNER"='SYS')
3 - access("T1"."OBJECT_TYPE"='VIEW')
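A sketch that answers the "executed how many times" part empirically: run the statement with the GATHER_PLAN_STATISTICS hint and then display runtime statistics; the Starts and A-Rows columns show how often each step ran and how many rows it really produced, which also makes it visible at which step the OWNER filter takes effect.
SELECT /*+ gather_plan_statistics */ *
FROM t1, t2
WHERE t1.id = t2.id
AND t1.object_type = 'VIEW'
AND t1.owner = 'SYS';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));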
View 7 Replies
View Related
Oct 31, 2012
So the situation is like this
- Database A (20 tables)
- Database B (20 tables)
- Both A and B are Oracle 11gR2
- Both of these databases run on different hardware (A is a VM, B is on a physical host)
- The 20 tables in A and B have exactly same number of rows and after preparing the data, the schemas were analysed using the same DBMS_STATS parameters
Despite this, the execution plans appear to be quite different for the same queries between A and B
I imagine there is something outside of the Oracle table rowcounts, table stats, column stats, index stats that's resulting in the different execution plans.
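A sketch of what else to compare (run as a DBA): object statistics are only part of the costing inputs; system statistics and the optimizer environment also drive the plan, and both can easily differ between a VM and a physical host.
-- system statistics (CPU speed, single/multiblock read times)
SELECT sname, pname, pval1, pval2 FROM sys.aux_stats$;

-- optimizer parameters in effect
SELECT name, value FROM v$sys_optimizer_env ORDER BY name;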
View 3 Replies
View Related
Apr 30, 2012
Refer to the two queries below and their execution plans:
First Query
INSERT INTO temp_vendor(vendor_record_seq_no,checksum,rownumber,transaction_type,iu_flag)
SELECT /*+ USE_NL ( vd1 ,vd2 ,vd3 ) leading ( vd1 ,vd2 ,vd3 , tvd) */
vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM vendor_data vd1,
[code]...
Second Query
SELECT vd1.vendor_record_seq_no, tvr.checksum, tvr.rownumber, tvr.transaction_type, 'U'
FROM ( select * from vendor_data vd1
where vd1.study_seq_no = 99903
AND vd1.control_column_seq_no = 435361232
[code]...
Both are meant to produce the same output but are written in different ways. Can I get the same execution plan for the 1st query as for the 2nd by using hints?
View 10 Replies
View Related
Jul 1, 2013
We have recently upgraded the application (from Oracle Applications 11.5.9 to 12.1.3) and the database (from 9.2.0.5.0 to 11.2.0.3.0). Since we are confronting performance issues, I am trying to analyse some queries whose explain plans seem strange (in my opinion). Studying one of them I discovered the following case, which according to my logic I cannot explain:
-- Just bind variables --
select *
from MTL_MATERIAL_TRANSACTIONS mmt
where 1 = 1
and mmt.INVENTORY_ITEM_ID = :p1
and mmt.ORGANIZATION_ID = :p2
and mmt.TRANSACTION_REFERENCE = :p3
--and mmt.SUBINVENTORY_CODE = :p4

Plan
SELECT STATEMENT ALL_ROWS Cost: 5 Bytes: 361 Cardinality: 1
2 TABLE ACCESS BY INDEX ROWID TABLE INV.MTL_MATERIAL_TRANSACTIONS Cost: 5 Bytes: 361 Cardinality: 1
1 INDEX RANGE SCAN INDEX XXSPE.XXSPE_MTL_MATERIAL_TRANSAC_N99 Cost: 3 Cardinality: 2

-- NVL on bind variable --
select *
from MTL_MATERIAL_TRANSACTIONS mmt
where 1 = 1
and mmt.INVENTORY_ITEM_ID = :p1
and
[code]...
View 3 Replies
View Related
Dec 27, 2011
Why can I not find the SQL plan for an INSERT statement?
Select sql_text From v$sql a
Where a.SQL_ID = '0yungrk19a277';
-------------------------------
INSERT INTO OS_USERBILL_ACTV_READ_MON_DT
(MONTHNO, MAILCODE, OPERTYPE, PROVCODE, AREACODE, DAY_TOTALCOUNT, TOTALCOUNT,
CREATETIME, MODIFYTIME, SENDER_TYPE, SENDER_DOMAIN)
VALUES
(:B9 , :B1 , :B2 , :B3 , :B4 , :B5 , :B6 , SYSDATE, SYSDATE, :B7 , :B8 )
[code]....
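A sketch, assuming 10g or later: the cursor's plan (if it is still in the cursor cache) can be pulled by sql_id; note that for a plain INSERT ... VALUES there is very little to show - typically just the INSERT STATEMENT line (plus LOAD TABLE CONVENTIONAL on 11g).
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('0yungrk19a277', NULL, 'TYPICAL'));

SELECT id, operation, options, object_name
FROM v$sql_plan
WHERE sql_id = '0yungrk19a277';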
View 4 Replies
View Related
Mar 5, 2013
One of our clients is using Rule Based Optimizer on Oracle 10.2.0.3.0
Two or three weeks back, during a performance issue with one of the SQL queries, one of our team members executed the tuning advisor for it and created a SQL profile, and subsequent executions of the SQL did not take much time (less I/O). It then took hardly a minute to execute.
When this happened I checked that the SQL profile forced that particular query to use CBO (say plan_hash_value is PHV1 here). Yesterday the same query again took 15-20 minutes for execution. I checked that even for this execution the query used the same SQL profile but "this time" with different plan_hash_value - say PHV2.
Today again the query executed in less than a minute and used the plan_hash_value as PHV1.
select distinct plan_hash_value,timestamp from dba_hist_sql_plan where sql_id='mysqlid' order by 1,2;
PLAN_HASH_VALUE TIMESTAMP
--------------- --------------------
890360113 20-feb-2013 16:38:39
3736413466 04-mar-2013 08:12:52
1237282258 03-jan-2013 17:15:02
I confirmed from awrsqrpt as well that different plans were used for different plan_hash_values and every time same SQL profile was used
SQL> select name,CATEGORY,SIGNATURE,CREATED,LAST_MODIFIED,TYPE,STATUS,FORCE_MATCHING from dba_sql_profiles;
NAME CATEGORY SIGNATURE CREATED LAST_MODIFIED TYPE STATUS FOR
------------------------------ ------------------------------ ---------- -------------------- -------------------- --------- -------- ---
SYS_SQLPROF_015ffffcc3e1c5b000 DEFAULT 1.5512E+19 20-feb-2013 16:30:48 20-feb-2013 16:30:48 MANUAL ENABLED NO
I am unable to understand how execution plan and thus plan_hash_value is changing for the same SQL Profile. I read that SQL Profile (unlike stored outline) keeps up with increasing data volume and may not keep up with changing data distribution.
I checked that values for 4 bind variables out of 81 are different for execution between today and yesterdays' run(queried v$sql_bind_capture based on last_captured)
My questions are
1) does the different plan_hash_values with different execution plans for query using same SQL profile mean the query was hard parsed multiple times and still used the same SQL profile?
2) If that is the case why I never saw child_number = 1 in any of the views for the same sql_id. I tried it repeatedly over last 2 weeks and always found child_number=0 in v$sql (also loaded_versions=1)
3) Does the different values of bind variable are causing this flip-flop of the plans? How can I conclude this?
I have 2 plans with 2 different plan_hash_values and I know which would be better. How can I force the SQL to use the better of the two plans in this case, where I am using the Rule Based Optimizer and already have a SQL profile created? If this is not possible, how can I create a stored outline from the existing plan (without waiting for a subsequent execution to take place)?
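A sketch for the stored-outline part: DBMS_OUTLN.CREATE_OUTLINE can build an outline from a cursor that is already in the shared pool, using its HASH_VALUE (from V$SQL - not the plan_hash_value) and child number; the outline is then only used while its category is enabled.
BEGIN
DBMS_OUTLN.CREATE_OUTLINE(
hash_value => &hash_value,       -- V$SQL.HASH_VALUE of the cursor with the good plan
child_number => 0,
category => 'MYCATEGORY');       -- hypothetical category name
END;
/
ALTER SYSTEM SET use_stored_outlines = MYCATEGORY;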
View 6 Replies
View Related
Aug 17, 2012
I am facing a weird situation wherein the explain plan of the same SQL in SIT and PROD is different. In fact the explain plan is very costly in Prod. The DB version of SIT and PROD is the same.
Below is the sql and corresponding explain plan in Prod and SIT respectively.
Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl)Nxt_comp_item_type,lvl,bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'
[code]...
The tables referred to in the above query are small, containing around 10k records. The tables are partitioned on Region and not indexed.
Explain Plan in Prod: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240
[code]...
Explain Plan in SIT: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120
[code]...
I am not able to work out why there is such a huge change in cost between SIT and Prod. The job now runs for 3-5 hours, whereas it used to complete within 20 minutes in SIT.
View 5 Replies
View Related
Oct 24, 2011
I was able to identify a poorly performing query that seemed to be maxing out our CPU. I have been trying to understand the explain plan. The plan below is from our test system, which has considerably less data in its tables than our PROD system.
I can see there are a bunch of table scans at the end which may indicate missing indexes, but I am unclear on whether this is actually a problem as the %CPU seems to be worse for the JOIN near the top of the plan.
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1870M| 3018G| | 677M (1)|999:59:59 | | |
| 1 | SORT ORDER BY | | 1870M| 3018G| 3567G| 677M (1)|999:59:59 | | |
[code]...
View 4 Replies
View Related