I am using the 11g version and trying to understand memory allocation.
1) The new memory target parameters are supposed to be dynamic, aren't they? But when I set one with ALTER SYSTEM it was not changed, and I had to restart the DB.
2) With the two new parameters (MEMORY_MAX_TARGET, MEMORY_TARGET), the SGA_MAX_SIZE parameter should be 0, right? But in my case, the SGA_MAX_SIZE value is automatically assigned.
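For what it's worth, a minimal sketch of the dynamic behaviour as I understand it (sizes purely illustrative): MEMORY_TARGET can be raised on the fly only up to MEMORY_MAX_TARGET, while MEMORY_MAX_TARGET itself is static and needs a bounce.

ALTER SYSTEM SET memory_target = 600M SCOPE = BOTH;      -- dynamic, but must stay <= memory_max_target
ALTER SYSTEM SET memory_max_target = 1G SCOPE = SPFILE;  -- static: takes effect only after a restart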
I wanted to know what the standard memory allocation for SGA & PGA is in a normal installation of Oracle Database 10g or 11g. Are the figures of 270M and 90M standard for a 10g or 11g installation using the automatic memory management feature?
How do I determine if a function is worth pinning in memory? I want to come up with a percentage, the idea being that if the function is already in memory 80%+ of the time then it is not worth it.
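A sketch of the check I would start with (MY_FUNC and SCOTT are hypothetical names): v$db_object_cache tracks how often an object has been loaded versus executed, so a LOADS count close to 1 with a high EXECUTIONS count suggests the object rarely ages out and pinning buys little.

SELECT owner, name, type, kept, loads, executions
FROM   v$db_object_cache
WHERE  name = 'MY_FUNC'
AND    type IN ('FUNCTION', 'PROCEDURE', 'PACKAGE');

-- If the reload rate turns out to be high, the object can be pinned
-- (DBMS_SHARED_POOL may need installing via dbmspool.sql first):
EXEC DBMS_SHARED_POOL.KEEP('SCOTT.MY_FUNC', 'P');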
I am aware that from 11g, MEMORY_TARGET is sufficient for memory management between the SGA and PGA.
What happens if MEMORY_TARGET is set to a non-zero value and SGA_TARGET is set to zero in an 11g database? Does it enable automatic memory management within the SGA?
We are regularly hit by ORA-4031 errors. Also, the memory target advisory (v$memory_target_advice) does not show any advisory information.
e.g. memory_max_target = 500m, memory_target = 500m
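One thing worth checking first, as a guess at why the advisory is empty: the advisory views are populated only when STATISTICS_LEVEL is TYPICAL or ALL, not BASIC.

SHOW PARAMETER statistics_level

SELECT memory_size, memory_size_factor, estd_db_time
FROM   v$memory_target_advice
ORDER  BY memory_size_factor;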
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert the result into a stage table using the code below.
The issue is that table1 has 50 million records and table2 is empty,
so the first time we execute this, the v_collection1 and v_collection2 collections will hold 50 million records, all of which go into memory. I think this is not good, because holding them in memory will eat memory and resources during sorting and other activities?
After fetching the records into the collection we insert them into the stage table and then COMMIT. I think that won't be good either, because committing 50 million rows will generate a large amount of redo?
Below is a snippet of my code:
DECLARE
    TYPE lst_collection1 IS TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
    TYPE lst_collection2 IS [code].......
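A minimal sketch of the set-based alternative (stage_tab is a hypothetical name, assumed to have the same 15 columns): the MINUS and the insert can be done in one SQL statement, so no PL/SQL collection ever has to hold the 50 million rows in session memory. The redo, incidentally, comes from the insert itself rather than from the COMMIT, so commit frequency changes little; a direct-path insert into a NOLOGGING stage table is what actually reduces it.

INSERT /*+ APPEND */ INTO stage_tab   -- direct-path insert above the high-water mark
SELECT * FROM table1
MINUS
SELECT * FROM table2;
COMMIT;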
I have a question regarding memory parameters in Oracle database 9.2.0.8, especially SGA_MAX_SIZE and DB_CACHE_SIZE. The database server has 32G of RAM. The kernel parameter shmmax on the server is set to 16G. Is it reasonable to set SGA_MAX_SIZE to the same value, and DB_CACHE_SIZE to 80% of that size?
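For illustration only (not a recommendation, and assuming an spfile is in use): keeping SGA_MAX_SIZE at or below shmmax avoids spreading the SGA across multiple shared memory segments. Sizes are given in MB because I would not rely on the G suffix in 9.2.

ALTER SYSTEM SET sga_max_size = 16384M SCOPE = SPFILE;   -- static: applies after a bounce
ALTER SYSTEM SET db_cache_size = 13107M SCOPE = SPFILE;  -- roughly 80% of the SGA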
I am trying to run a trace file through TKPROF. It throws an error:
MEMORY FAULT
The size of the trace file is 8MB. I tried some other trace files bigger in size (10MB) than that file, and they work fine. I checked whether any permission is required on the trace file for TKPROF to execute, but it has the same permissions as the other trace files.
The problem is that it does not show any error number or any other error description, just "MEMORY FAULT".
I have a few queries on PGA memory management. Since these queries are based on 2-3 examples that are not exactly the same by nature, I am summarising my understanding of them.
As I understand it, many work areas can be allocated to a single SQL statement, and the number and sizes of these work areas are controlled internally by Oracle when automatic PGA memory management is in effect (PGA_AGGREGATE_TARGET set and WORKAREA_SIZE_POLICY=AUTO). Since many sessions share the PGA memory, the amount of memory available to each session may vary, and if too little memory is available for a session's sort, the TEMP tablespace is used.
[1] Can we say paging happens at this time, and can it be checked?
[2] Is there a difference in handling memory while populating pl/sql tables?
I have encountered ORA-04030 while some of our developers were populating PL/SQL tables, but never encountered this error for sorting, hash joins etc. Though I don't remember the width of the PL/SQL table, I am sure the developer used the LIMIT clause during BULK COLLECT and still faced the issue.
With a single session on the server, I noticed that the values displayed by the 'free' command in Linux and the output values from v$sesstat did not match at all, while there wasn't any heavy OS process running during the period. I was expecting the 'used' and 'free' values displayed by the free command to change, with the difference approximately equal to the 'before and after' values of session pga memory.
[3] Isn't it expected to match?
[4] Can we say that in a dedicated server configuration, at any moment in time, the SUM of 'session pga memory' represents all the PGA memory used by the Oracle instance at that point in time?
-- statistic# is not stable across versions, so look the statistic up by name:
select sum(value)/1024/1024 "memory in MB"
from v$sesstat s, v$statname n
where s.statistic# = n.statistic#
and n.name = 'session pga memory';
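An instance-wide cross-check that avoids summing per-session statistics altogether ('total PGA allocated' is a documented v$pgastat row):

select value/1024/1024 "total PGA allocated (MB)"
from v$pgastat
where name = 'total PGA allocated';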
During one of the tests I got the following output (I divide the value by 10 for visibility and to avoid formatting issues).
SQL> select a.name, to_char(b.value/10, '999,999,999') value
     from v$statname a, v$mystat b
     where a.statistic# = b.statistic#
     and a.name like '%ga memory%';
[code]...
The above query shows these values even though the PL/SQL block execution completed 30 minutes ago.
[5] Do we call this a 'memory leak', where memory is not released even though some time has passed since the session last did something? Of course I am not checking at the OS level; as mentioned in question [3] above, the values won't match!
Still, for reference, the output after the PL/SQL block executed:
SQL> select * from v$pgastat;

NAME                            VALUE      UNIT
aggregate PGA target parameter  524288000  bytes
aggregate PGA auto target       456256512  bytes
global memory bound             26214400   bytes
total PGA inuse                 17328128   bytes
[code]...
[6] What could be the significance of negative values of 'session pga memory/max'?
Lastly, we have an OLTP system, and at night we run batch processes in 2-4 sessions.
Suppose I have 10 GB of RAM and a PGA setting of 3.5 GB. Now I want the batch-process sessions to use the maximum possible memory during the night, and to toggle the setting back in the morning.
[7] With the above settings (10 GB RAM and 3.5 GB PGA), how can I divide the memory among the 4 sessions?
Shall I set 1) PGA_AGGREGATE_TARGET=0, 2) WORKAREA_SIZE_POLICY=MANUAL, 3) SORT_AREA_SIZE, 4) HASH_AREA_SIZE?
[8] What would be approximate values for parameters 3 and 4? Would it be a straight 3.5 GB / 4? (See the sketch below.)
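A sketch of the manual-policy toggle, done per batch session rather than system-wide (values purely illustrative). One caveat: these parameters size a single work area, not a session total, so one session running several concurrent sorts can use a multiple of the value, which is why a straight 3.5 GB / 4 split does not quite apply.

ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 524288000;   -- ~500 MB per sort work area
ALTER SESSION SET hash_area_size = 1048576000;  -- ~1000 MB per hash work area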
We are using the 11g AMM feature with MEMORY_TARGET set to 96GB, and the total RAM on the server is 128GB. Now top and free show only 200MB of memory free on the system.
There are 2 processes, dbw0 and dbw1, which consume the most memory: 30GB per dbw.
Why is the dbw process taking up so much memory when there is not much load on the database?
We have 96GB of memory on the UNIX server, and 85% of its usage shows as Oracle processes. I want to determine which Oracle processes are taking most of the memory.
The SGA is around 36G, SGA_TARGET is 40G, and the PGA is around 4G,
so a total of around 40-45 GB of usage is understandable, but what other Oracle processes are chewing up the remaining 30-40 GB on the server is not known.
load averages: 7.35, 6.46, 6.15; up 248+11:33:21   12:25:03
2202 processes: 2196 sleeping, 1 zombie, 5 on cpu
CPU states: 83.8% idle, 10.5% user, 5.8% kernel, 0.0% iowait, 0.0% swap
Memory: 96G phys mem, 15G free mem, 128G total swap, 128G free swap

  PID USERNAME LWP PRI NICE  SIZE   RES STATE   TIME    CPU COMMAND
21720 oracle   258   0    0   40G   40G cpu/48  215:28  2.04% oracle
10709 oracle     1   0    2 1816K 1448K cpu/9     0:02  0.90% res_conf_email_
[code]......
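One caution when reading that top output: on most Unix variants the RES figure for each Oracle server process includes the shared SGA pages the process has attached, so the per-process numbers massively over-count. The private (PGA) part per process can be seen from inside the database; a sketch:

SELECT p.spid, s.sid, s.username, s.program,
       ROUND(p.pga_alloc_mem/1024/1024) AS pga_alloc_mb
FROM   v$process p, v$session s
WHERE  s.paddr = p.addr
ORDER  BY p.pga_alloc_mem DESC;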
I have a confusion with the MEMORY_TARGET and MEMORY_MAX_TARGET parameters. If I set SGA_TARGET and SGA_MAX_SIZE along with MEMORY_TARGET and MEMORY_MAX_TARGET, how will Oracle manage the memory? Because as per my understanding, if we set MEM...
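As far as I know, when both sets of parameters are non-zero, SGA_TARGET and PGA_AGGREGATE_TARGET act as minimum sizes that AMM must honour inside MEMORY_TARGET. One way to watch what AMM actually decides (a sketch):

SELECT component,
       current_size/1024/1024 AS current_mb,
       min_size/1024/1024     AS min_mb,
       max_size/1024/1024     AS max_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0;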
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement/block sizes etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
We are using Windows Server 2008 R2 and Oracle Database version 11.2.0.2.
I have two points here: I/O performance and an out-of-memory issue.
1.) The ADDM report's recommended solution is to stripe all data files, to reduce the 'Wait class "User I/O" was consuming' finding. But I am not sure it will improve I/O performance enough to resolve the issue, and we are getting the same issue on many servers.
ADDM report snippet:
Findings and Recommendations
----------------------------
Finding 1: I/O Throughput
Impact is .28 active sessions, 44.97% of total activity.
--------------------------------------------------------
The throughput of the I/O subsystem was significantly lower than expected.

Recommendation 1: Host Configuration
Estimated benefit is .28 active sessions, 44.97% of total activity.
[code]....
2.) We got the same issue on another server; we increased the memory three times, but each time we got the same error below again, and now it is appearing on yet another server.
Alert log snippet:
Wed Jun 12 07:12:15 2013
Process startup failed, error stack:
Errors in file E:\APP\ORACLE\diag\rdbms\oemgc\oemgc\trace\oemgc_psp0_3316.trc:
ORA-27300: OS system dependent operation:CreateThread failed with status: 8
ORA-27301: OS failure message: Not enough storage is available to process this command.
ORA-27302: failure occurred at: ssthrddcr
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which runs for 13 hours to update the records, which is quite a long time.
SQL> explain plan for
     MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq
     USING (SELECT rated_que_rowid,
                   detail_rerate_flag_code,
                   rerate_sel_key,
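One assumption worth stating, since the session setup is not shown: the parallel hint on its own parallelizes only the query side of a MERGE; the DML side runs in parallel only if parallel DML has been enabled first.

ALTER SESSION ENABLE PARALLEL DML;
-- then run the MERGE as above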
How does column width affect index performance?
For example, suppose I had an IOT table emp_iot with columns (id number, job varchar2(20), time date, plan number).
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).
What performance increase could I expect if in column "job" I stored not names but numbers identifying the job names, e.g. "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'?
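A sketch of the numeric-code variant for comparison (the DDL is illustrative; a small lookup table would map codes back to names). The gain to expect: in an IOT the whole row lives in the index, so narrower key columns mean more rows per leaf block and potentially a shallower B-tree; a one-digit NUMBER occupies a couple of bytes against up to 20 for the VARCHAR2 names.

CREATE TABLE emp_iot (
    id    NUMBER,
    job   NUMBER,        -- 1 = ANALYST, 2 = NIGHT_WORKED, ...
    time  DATE,
    plan  NUMBER,
    CONSTRAINT emp_iot_pk PRIMARY KEY (id, job, time)
) ORGANIZATION INDEX;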
I have a question about database fragmentation. I know that fragmentation can increase query times: the blocks are distributed over many extents and the scan process takes a long time, since the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed I will re-create, move, or rebuild the table or index, but first I want to know if the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
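A rough sketch of the usual check (MY_TABLE is a placeholder, and it assumes reasonably fresh optimizer statistics): compare the space the segment has allocated with the space its rows should occupy; a large gap hints at free space below the high-water mark, which is what a move/rebuild reclaims.

SELECT t.table_name,
       s.bytes/1024/1024                      AS allocated_mb,
       (t.num_rows * t.avg_row_len)/1024/1024 AS estimated_row_mb
FROM   user_tables t, user_segments s
WHERE  s.segment_name = t.table_name
AND    t.table_name   = 'MY_TABLE';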
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten it and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, on a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc...) and good storage?
Are 300 million rows in just one table, with 500K transactions/day, too much?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason why we'd have to reinstall a procedure to correct a performance problem?
I need to check the package's performance and then improve it.
1. How can I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I have observed that the delete takes a long time to remove all the records in the table (7,000,000 records). This is like a staging table: the data needs to be cleaned daily before new data is inserted into it. What can I use instead of DELETE? (Sketches below.)
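On point 2, the usual replacement when the whole table is cleared is TRUNCATE, which is DDL: it resets the high-water mark and generates almost no undo/redo. For point 1, DBMS_PROFILER gives per-statement timings inside a package. Both are sketches with hypothetical names; the profiler tables come from proftab.sql.

TRUNCATE TABLE staging_table;

EXEC DBMS_PROFILER.START_PROFILER('daily load run');
EXEC my_pkg.my_proc;    -- the code being measured
EXEC DBMS_PROFILER.STOP_PROFILER;
-- timings then land in plsql_profiler_units / plsql_profiler_data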
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the 'statistics' so that the optimizer follows that plan without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname from emp e, dept d where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions, with respect to the above query:
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I could use the ORDERED and USE_HASH hints to effect this, but again, I have heard that altering statistics is a more robust way to control an execution plan than hints are.
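For reference, the hinted version of the same emp/dept query as above (LEADING avoids having to reorder the FROM clause the way ORDERED does):

SELECT /*+ LEADING(d) USE_HASH(e) */ e.empid, e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;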
When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND etc., and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches all the records.
To get this, I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which gets the values selected by the user as a comma-separated string; here V_SELLOB is one such variable and LOB_DESC is a column in the DB.
OPEN v_refcursor FOR
  SELECT /*+ FULL(a) PARALLEL(a, 5) */ *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN ( ... )
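For comparison, the rewrite usually suggested instead of DECODE for this optional-filter pattern (a sketch with the same single-value semantics; handling the comma-separated multi-value case would need more than this):

OPEN v_refcursor FOR
  SELECT *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    (V_SELLOB = 'DEFAULT' OR a.LOB_DESC = V_SELLOB);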
What are the principal things to look at when we get different performance results for the same query? I have two different databases: the plan and the data are the same, but the performance results are very different.
What are the most important performance measures we have to calculate or take into account to preserve or increase DB performance in terms of response times?
I installed Oracle 11g R2 on the server. I want to allocate dedicated memory for Oracle. How should I calculate the memory size of SGA+PGA for good performance?
NAME          TYPE         VALUE
------------  -----------  -----
lock_sga      boolean      FALSE
pre_page_sga  boolean      FALSE
sga_max_size  big integer  3G
sga_target    big integer  2G
From what I read, I believe this will initially grab 2GB of memory on startup and will grab up to 3GB of memory in total for the SGA. The "total" memory can be reallocated among different pieces of the SGA when needed but will never exceed 3GB. Is this correct, or would these settings infringe on the available memory of a system that is already tight on memory?
Secondly, what happens if both these values are set to the same value?
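The gap between the two values is exactly the online headroom: with SGA_MAX_SIZE at 3G, SGA_TARGET can be grown without a restart (illustrative):

ALTER SYSTEM SET sga_target = 3G SCOPE = BOTH;

As far as I understand, if both are set to the same value that headroom is simply zero, and any further growth means raising SGA_MAX_SIZE, which needs a restart.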
We are planning the RAC installations. We have been allocated 3TB on HP SAN storage to be used to set up a RAC environment with 2 nodes.
Questions:

1. Do we accept a single 3 TB LUN from the SAN engineers (and which RAID level)?
2. Or multiple LUNs totalling 3 TB?
3. Remember this is 11g R1: the OCR and voting disk cannot be managed by ASM.
Can I know the internal process of initializing a DB into memory in TimesTen when a new connection is being established? Will TimesTen create tables and indexes in RAM when the first connection is established, if the RAM policy is the default?
I want to know the internal functional flow of TimesTen when any command is fired against it.