Performance Tuning :: ORA-02017 / Integer Value Required
Nov 28, 2012
Recent events at work are forcing me to look much more closely at hash joins, to understand them beyond the surface. But my question today may be simple: I have done lots of reading and can't for the life of me figure out how to get more memory to my HASH JOINS.
Is there a way to get around this limit of 2GB on a box that has 64GB, with some 20GB not in use?
1) My databases are all using workarea_size_policy=AUTO.
2) I am not afraid to go back to =MANUAL and set my own work area sizes.
3) It seems I cannot set HASH_AREA_SIZE to more than about 2GB.
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Solaris: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
18:40:31 SQL> alter session set hash_area_size = 6000000000 ;
alter session set hash_area_size = 6000000000
*
ERROR at line 1:
ORA-02017: integer value required
I know there is a limit of about 2GB on my box for HASH_AREA_SIZE and setting it to 2GB works fine. But it is still not enough.
18:50:22 SQL> alter session set hash_area_size = 2147483647;
Session altered.
Elapsed: 00:00:00.23
Using hash_area_size at 2GB, I get better performance than with my current PGA_AGGREGATE_TARGET doing the allocation for me.
I think I'd like to get as much as 20GB to specific sessions for hash joins, but maybe that is a pipe dream?
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
_pga_max_size                        big integer 1258280K
pga_aggregate_target                 big integer 6G
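For the record, HASH_AREA_SIZE is declared as an integer parameter, so 2147483647 bytes (2^31 - 1, just under 2GB) is a hard ceiling regardless of how much RAM the box has; that is exactly what ORA-02017 is complaining about. A minimal sketch of the manual session-level setup being described, assuming the session is free to override the automatic policy:

-- Manual workarea sizing for this session only; 2147483647 is the
-- largest value the integer parameter will accept.
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET hash_area_size = 2147483647;

Under AUTO, a single workarea is further bounded by the hidden parameters shown in the listing above (_pga_max_size here is about 1.2GB), which would be consistent with the manual 2GB setting outperforming the automatic policy.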
We are experiencing network waits on one of our 2-node clustered databases. In every hour of clock time we are finding 700-900 seconds of network waits.
From the AWR data I find that "ARCH wait on SENDREQ" is one of the main constituents of these network waits, so I suspect the network between the production database and its corresponding standby database might be slow.
Question 1) Does this understanding look correct?
Question 2) Apart from the above, what could be the other causes of the network waits? Can we point to any particular area from the following AWR extract? Seeing some gc* waits, I initially thought it might be due to a slow interconnect between the cluster nodes, but some Google searching suggests that is not the case. So what could the other causes be? I mean, which network link should I check?
             Snap Id    Snap Time            Sessions  Curs/Sess
             ---------  -------------------  --------  ---------
Begin Snap:      22631  22-May-13 10:00:11        976        7.9
End Snap:        22632  22-May-13 11:00:28        978        8.1
Elapsed:         60.29 (mins)
DB Time:        795.66 (mins)
[code].....
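As a starting point for question 2, one way to see which events make up the Network wait class on each node is to break the class down directly (a generic diagnostic query, not from the post):

-- Break the Network wait class down by event and instance so that
-- "ARCH wait on SENDREQ" can be compared against the other contributors.
SELECT inst_id, event, total_waits,
       ROUND(time_waited_micro / 1e6) AS seconds_waited
  FROM gv$system_event
 WHERE wait_class = 'Network'
 ORDER BY time_waited_micro DESC;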
I am getting the below error in the alert log file when my application calls a procedure.
Fri Sep 16 16:33:40 2011
ORA-01555 caused by SQL statement below (Query Duration=1576 sec, SCN: 0x09a2.5dda3165):
UPDATE SSPT_NETWORK_DETAILS SET INCLUDE_OFFERS = 'Yes' WHERE SESS_ID = SESS_ID
There is no ROLLBACK statement in my procedure. As per my understanding, the ORA-01555 error will occur when:
1. the required old image is no longer in the undo when we roll back the transaction, or
2. a SELECT query hits the error because of delayed block cleanout.
But I don't know why this UPDATE statement is causing the ORA-01555 error.
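One detail worth noting here: an UPDATE has a consistent-read component (it must see the rows as of its start SCN), so a long-running UPDATE can raise ORA-01555 even though no explicit ROLLBACK is involved. A generic check, not from the post, comparing undo history against the failing statement's 1576-second duration:

-- If UNDO_RETENTION and the MAXQUERYLEN history are well below the failing
-- statement's duration, old row images are probably being overwritten
-- before the UPDATE's read-consistent pass finishes.
SHOW PARAMETER undo_retention

SELECT begin_time, end_time, undoblks, maxquerylen
  FROM v$undostat
 ORDER BY begin_time DESC;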
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I'm running the following MERGE query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
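One thing worth verifying first (a generic suggestion, not from the post): the PARALLEL hint by itself only parallelizes the query side of a MERGE; the DML side stays serial unless the session has parallel DML enabled.

-- Enable parallel DML for the session, then re-check the plan; without
-- this, only the SELECT side of the MERGE runs in parallel.
ALTER SESSION ENABLE PARALLEL DML;

-- After the EXPLAIN PLAN above, the Note section of the plan output
-- reports whether parallel DML was actually used:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, NULL, 'BASIC +NOTE'));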
How does the width of a column affect index performance?
For example, if I had an IOT table emp_iot with columns: (id number, job varchar2(20), time date, plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).
What performance increase could I expect if, in the JOB column, I stored not names but numeric codes identifying the job names? E.g. I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
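A sketch of the two variants being compared (the coded table name and lookup scheme are hypothetical): every key column is stored in every leaf entry of an IOT, so a NUMBER code in place of a VARCHAR2(20) name shortens the key, packs more rows per block, and can make the B-tree shallower.

-- Variant 1: key contains the VARCHAR2(20) job name, as in the post.
CREATE TABLE emp_iot (
  id    NUMBER,
  job   VARCHAR2(20),
  time  DATE,
  plan  NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job, time)
) ORGANIZATION INDEX;

-- Variant 2 (hypothetical): key contains a numeric job code instead,
-- e.g. 1 = 'ANALYST', 2 = 'NIGHT_WORKED', decoded via a small lookup table.
CREATE TABLE emp_iot_coded (
  id       NUMBER,
  job_code NUMBER,
  time     DATE,
  plan     NUMBER,
  CONSTRAINT emp_iot_coded_pk PRIMARY KEY (id, job_code, time)
) ORGANIZATION INDEX;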
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed across many extents, the scan process takes a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which I can check whether my table or index has high fragmentation. If needed I will re-create, move or rebuild the table or index, but first I want to know if the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
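One generic check (not from the post, and only as accurate as the optimizer statistics it reads) is to compare the space the rows should need against the space the segment actually has allocated:

-- Rough fragmentation check: estimated row volume (NUM_ROWS * AVG_ROW_LEN,
-- from DBMS_STATS-gathered statistics) vs. allocated segment space.
-- 'SCOTT' is a placeholder schema; a large gap suggests sparse blocks
-- worth an ALTER TABLE ... MOVE or a SHRINK SPACE.
SELECT t.table_name,
       ROUND(t.num_rows * t.avg_row_len / 1024 / 1024) AS est_used_mb,
       ROUND(s.bytes / 1024 / 1024)                    AS allocated_mb
  FROM dba_tables t
  JOIN dba_segments s
    ON s.owner = t.owner
   AND s.segment_name = t.table_name
 WHERE t.owner = 'SCOTT'
 ORDER BY s.bytes DESC;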
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten the name and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation, using Standard Edition without partitioning, on a cutting-edge server (8 or 12 cores, 72 GB RAM, FC 4 Gbit, etc...) with good storage?
Are 300 million rows in a single table, with 500K transactions per day, too much?
Testing our 9i to 11g upgrade, we've imported the entire DB onto the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
I need to check the package's performance and improve it.
1. How can I check the package performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I've observed that the delete takes a long time to remove all the records in the table (7,000,000 rows). This is like a staging table: the data needs to be cleaned daily before new data is inserted into it. What can I use instead of DELETE?
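For part 1, one generic option (not something the post mentions) is the 11g hierarchical profiler, which times every call made during a run; PROF_DIR and my_pkg.my_proc below are hypothetical placeholders:

-- Profile one run of the package with DBMS_HPROF; the trace file can then
-- be analyzed with DBMS_HPROF.ANALYZE or the plshprof utility.
BEGIN
  DBMS_HPROF.start_profiling(location => 'PROF_DIR',   -- a directory object
                             filename => 'pkg_run.trc');
  my_pkg.my_proc;                                      -- the call under test
  DBMS_HPROF.stop_profiling;
END;
/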
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the statistics so that the plan is followed without hints.
Q1. If this is true, what statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname from emp e, dept d where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions, with respect to the above query:
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again, I have heard that altering statistics is a more robust way to control an execution plan than hints are.
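For reference, these are the hinted forms of Q2 and Q3 that any statistics adjustment would be trying to reproduce (a sketch using the query from the post):

-- Q2 as hints: drive from DEPT, nested loop into EMP.
SELECT /*+ leading(d) use_nl(e) */ e.empid, e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno;

-- Q3 as hints: keep EMP leading but join with a hash join.
SELECT /*+ leading(e) use_hash(d) */ e.empid, e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno;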
When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
The following query gets its input parameters from the front-end application, which users query to get reports. There are many drop-down boxes, like LOB, FAMILY, BRAND etc., and the user may or may not select values from the drop-down boxes.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches all the records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure and receive the values selected by the user as a comma-separated string; here V_SELLOB is such a variable and LOB_DESC is a column in the DB.
DECODE(V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN ...

OPEN v_refcursor FOR
  SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
    FROM items a
   WHERE a.sku_status = 'A'
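A common alternative to the DECODE wrapper (a sketch, not from the post; it handles only the single-value case, and a comma-separated V_SELLOB would first need splitting into a collection) is to let a plain OR short-circuit on the variable so the column itself is never wrapped in a function:

OPEN v_refcursor FOR
  SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
    FROM items a
   WHERE a.sku_status = 'A'
     -- 'DEFAULT' means no filter was chosen, so the predicate passes all rows
     AND (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob);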
What are the principal things to look at when we get different performance results for the same query? I have two different databases: the plan and the data are the same, but the performance results are very different.
What are the most important performance metrics we have to calculate or take into account to preserve or increase DB performance in terms of response times?
I have a field in a table that is declared in the CREATE statement as an INT datatype. However, when I query that table using vb.net, the value comes back as a decimal.
How do you declare a field in Oracle as a true integer data type?
Using Oracle SQL Developer 2.1.1.64 to run the queries & Oracle 11g.
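For what it's worth, a quick way to see why this happens (a generic check, not from the post): INT in Oracle is only a synonym for a scale-0 NUMBER, so drivers surface it as a decimal type, and there is no native binary-integer column type to declare instead.

-- INT_DEMO is a throwaway table; the dictionary shows the column was
-- created as NUMBER with scale 0, not as a distinct integer type.
CREATE TABLE int_demo (n INT);

SELECT column_name, data_type, data_precision, data_scale
  FROM user_tab_columns
 WHERE table_name = 'INT_DEMO';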
I have two numbers in two columns of an Oracle table (column A and column B). I am trying to divide column A by column B and put the results in column C, and also in column D (all in the same table), using UPDATE commands.
Eg: UPDATE MTOTABLE_PWELD pw SET pw.WELDO=(pw.pipe_length/12000);
But here is the real issue: in column D I only need the integer portion of the division value.
For example, when I divide column A by column B, let us assume we get a value of 2.56. Then I want the value 2 to go into column D.
I tried ROUND((column A / column B), 0), but that rounds 2.56 off to 3. I don't want that; I need the exact integer portion of the value to be separated out.
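TRUNC (or FLOOR, for non-negative values) drops the fractional part instead of rounding; a sketch against the table from the post:

-- TRUNC keeps just the integer portion: TRUNC(2.56) = 2, where
-- ROUND(2.56, 0) would give 3.
UPDATE MTOTABLE_PWELD pw
   SET pw.WELDO = TRUNC(pw.pipe_length / 12000);

SELECT TRUNC(2.56) FROM dual;   -- returns 2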
        F1         F2         F3
---------- ---------- ----------
   1200015          0   1200.015
In the above result F3 represents the actual result, which is the nearest value where MOD returns 0, but I want the nearest integer value, which is 1206. How is that possible? In the above case consider 1200 as kgs and 45 as grams.
I have the following function that I am using as a template for any function that executes a SELECT statement and returns a single value as an output.
The function works, but I wanted an expert opinion on whether it can be optimized.
CREATE OR REPLACE FUNCTION AFESD.F_AGR_GET_AGREEMENT_SERIAL (I_NUMBER0 IN NUMBER, S_SUB_NUMBER VARCHAR2 DEFAULT NULL, I_TYPE_ID NUMBER)
[Code]....
In addition I want to use the parameter S_SUB_NUMBER, which can be NULL, and add it to the SELECT statement of the cursor, but I don't know how to do that in one statement.
CURSOR C_AGREEMENT IS
  SELECT AGREEMENT_SERIAL
    FROM VW_AGR_AGREEMENT
   WHERE NUMBER0 = I_NUMBER0
     AND TYPE_ID = I_TYPE_ID;
  -->and sub_number is null;
  -->and sumb_number = s_sub_number
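One way to fold both commented branches into a single statement (a sketch; it assumes the column is named SUB_NUMBER, since the post spells it two ways) is to treat a NULL parameter as "match rows whose SUB_NUMBER is NULL":

CURSOR C_AGREEMENT IS
  SELECT AGREEMENT_SERIAL
    FROM VW_AGR_AGREEMENT
   WHERE NUMBER0 = I_NUMBER0
     AND TYPE_ID = I_TYPE_ID
     -- NULL parameter matches NULL column; otherwise match exactly
     AND ((S_SUB_NUMBER IS NULL AND SUB_NUMBER IS NULL)
          OR SUB_NUMBER = S_SUB_NUMBER);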
I am working on an assignment where the client is using Oracle 10g but is stuck on the RBO. The application team builds dynamic queries from the GUI available to them, and some of them run very slowly.
The code cannot be changed to tune the queries, nor do we get the exact problem step in the plan (this being the RBO). For some long-running queries the Tuning Advisor is not producing any recommendations.
Another hurdle is that all the application users share the same application user id, so we cannot write a logon trigger to switch to the CBO for some particular queries to see what is happening in the background!
I want to tune the following SQL statement. With this SQL I want to get the hash_value and sql_text of the statements that are causing TX locks. Is that possible? The statement works fine, but sometimes it's slow.
SELECT DISTINCT hash_value, sql_text
  FROM gv$sql sq
 WHERE hash_value IN (SELECT DISTINCT prev_hash_value
                        FROM gv$session se
                       WHERE sid IN (SELECT sid
                                       FROM gv$lock l
                                      WHERE type = 'TX'
                                        AND ctime >= 2000
                                        AND l.inst_id = se.inst_id
                                        AND l.sid = se.sid)
                         AND sq.inst_id = se.inst_id);
[code]....
I see one of my SQL statements, run by a user on a 10.2.0.3 database, changing its SQL_ID after some runs even though the query has not changed a bit! However, the HASH_VALUE for this query remains the same.
How can the same query have different SQL_IDs but the same HASH_VALUE?
Note: Statistics are not modified on the base tables of this query.
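For background on how the two identifiers relate (this does not by itself explain the behavior above): both are derived from the same hash of the statement text, and HASH_VALUE is the low-order 32 bits of the value that SQL_ID encodes, so one can be computed from the other in recent releases. The SQL_ID below is a made-up example:

-- Compute the HASH_VALUE corresponding to a given SQL_ID.
SELECT DBMS_UTILITY.sqlid_to_sqlhash('9babjv8yq8ru3') AS hash_value
  FROM dual;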
I am running Oracle 10.2.0.1.0 on MS Windows 2003 server 64-bit with 16G RAM.
Here are the findings for my Oracle database.
SQL> select * from v$sgainfo;

NAME                             BYTES       RES
-------------------------------- ----------- ---
Fixed SGA Size                       1293560 No
Redo Buffers                         7094272 No
Buffer Cache Size                  830472192 Yes
[code]...
I find that the SGA component "Buffer Cache" is decreasing, from 1.8G at the start down to 0.8G now. On the other hand, the component "Shared Pool" is increasing, from 0.3G at the start to 1.2G now. I notice about 100 shrink operations on the "Buffer Cache" and growth operations on the "Shared Pool" in Oracle every day. Is this an indicator that I should raise SGA_MAX_SIZE?
I tried to increase SGA_MAX_SIZE to 4G, but I could not start Oracle afterwards. Is that a limitation of MS Windows (the OS) or of Oracle? I set SGA_MAX_SIZE to 3G, and this time I could start Oracle. What is the optimum/maximum I can set SGA_MAX_SIZE to? Is there any adverse effect or concern when setting SGA_MAX_SIZE to more than 2G?
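To see the shrink/grow pattern directly (a generic diagnostic, not from the post), the automatic resize history is exposed in a dynamic view:

-- List the automatic SGA resize operations; each buffer-cache shrink
-- should pair with a shared-pool grow of roughly the same size.
SELECT component, oper_type, initial_size, target_size, final_size,
       status, start_time
  FROM v$sga_resize_ops
 ORDER BY start_time;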
I have a three-tier application here. I want to find the host name from the SID or SQL_ID; I want to know which query runs on which host. Because I have only one user connecting from the application to the database, I want to know which query consumes more time on which host.
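A generic starting point (not from the post): V$SESSION carries the client MACHINE and PROGRAM for each session, and can be joined to the SQL currently executing.

-- Map each user session to its client machine and the SQL it is running.
SELECT s.sid, s.machine, s.program, s.sql_id, q.sql_text
  FROM gv$session s
  JOIN gv$sql q
    ON q.sql_id = s.sql_id
   AND q.inst_id = s.inst_id
 WHERE s.type = 'USER';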
I want to run some OLTP benchmarks on my system. I have looked up the TPC-E benchmarking suite, but the documentation on the site makes no sense to me.