Performance Tuning :: Memory Parameters In Oracle Database 9.2.0.8
Nov 18, 2012
I have a question regarding memory parameters in Oracle database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32G of RAM, and the kernel parameter shmmax on the server is set to 16G. Is it reasonable to set sga_max_size to that same value and db_cache_size to 80% of it?
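A minimal sketch of what that change might look like on 9.2 with an spfile; the figures below just illustrate the 16G / 80% split being asked about, not a recommendation, and in practice the whole SGA usually has to stay a bit below shmmax so it fits in one shared memory segment.
[code]
-- Illustrative only: sga_max_size is static, so it only takes effect after a restart.
ALTER SYSTEM SET sga_max_size  = 16384M SCOPE = SPFILE;
ALTER SYSTEM SET db_cache_size = 13107M SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
[/code]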
View 2 Replies
Dec 29, 2012
I am aware that from 11g onwards, MEMORY_TARGET is sufficient for memory management of both the SGA and the PGA.
What happens if MEMORY_TARGET is set to a non-zero value and SGA_TARGET is set to zero in an 11g database? Does that enable automatic memory management within the SGA?
We are regularly hit by ORA-4031 errors. Also, the memory target advisory (v$memory_target_advice) does not show any advisory information (a check sketch follows below).
For example:
memory_max_target = 500m
memory_target = 500m
and
sga_max_size=500m
sga_target=0
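A quick check, as a hedged sketch for this 11g setup: with MEMORY_TARGET non-zero and SGA_TARGET zero, the instance tunes the SGA components and the PGA out of the one memory pool, and the current split can be read from the dynamic-components view. The advisory view is typically only populated when STATISTICS_LEVEL is TYPICAL or ALL and some workload history has accumulated.
[code]
-- Show how MEMORY_TARGET is currently divided between SGA components and PGA.
SELECT component, current_size/1024/1024 AS current_mb
FROM   v$memory_dynamic_components
WHERE  current_size > 0;

-- Advisory rows, if any, for resizing MEMORY_TARGET.
SELECT memory_size, memory_size_factor, estd_db_time
FROM   v$memory_target_advice
ORDER  BY memory_size;
[/code]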
View 6 Replies
View Related
Jul 31, 2012
We are using the 11g AMM feature with MEMORY_TARGET set to 96GB, and the total RAM on the server is 128GB. Now top and free show only about 200MB of memory free on the system.
There are two processes, dbw0 and dbw1, that consume the most memory, about 30GB per DBWn process.
Why are the DBWn processes taking up so much memory when there is not much load on the database?
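One thing worth checking (a hedged sketch): utilities such as top often attribute the shared SGA pages a DBWn process has touched to that process as if they were private, so the 30GB per writer may largely be the buffer cache being counted against each writer. The private memory actually allocated per process can be read from v$process.
[code]
-- Private (PGA) memory really allocated by the database writer processes.
SELECT spid, program,
       ROUND(pga_alloc_mem/1024/1024) AS pga_alloc_mb,
       ROUND(pga_max_mem/1024/1024)   AS pga_max_mb
FROM   v$process
WHERE  program LIKE '%DBW%';
[/code]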
View 4 Replies
View Related
Jun 20, 2012
We have 96GB of memory on the UNIX server and 85% of its usage is attributed to Oracle processes. I want to determine which Oracle processes are taking most of the memory.
SGA is around 36G
SGA_TARGET is 40G
PGA is around 4G
A total of around 40-45 GB of usage is understandable, but it is not known which other Oracle processes are chewing up the remaining 30-40 GB on the server (a query sketch follows after the top output below).
load averages: 7.35, 6.46, 6.15; up 248+11:33:21 12:25:03
2202 processes: 2196 sleeping, 1 zombie, 5 on cpu
CPU states: 83.8% idle, 10.5% user, 5.8% kernel, 0.0% iowait, 0.0% swap
Memory: 96G phys mem, 15G free mem, 128G total swap, 128G free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
21720 oracle 258 0 0 40G 40G cpu/48 215:28 2.04% oracle
10709 oracle 1 0 2 1816K 1448K cpu/9 0:02 0.90% res_conf_email_
[code]......
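As a starting point (a hedged sketch), ranking the server processes by their PGA allocation usually shows whether the "missing" memory is really private process memory or shared SGA pages being double-counted by the OS tools:
[code]
-- Oracle processes ordered by private (PGA) memory; compare the sum with what
-- the OS attributes to the oracle user.
SELECT p.spid, s.username, s.program,
       ROUND(p.pga_alloc_mem/1024/1024) AS pga_alloc_mb
FROM   v$process p, v$session s
WHERE  s.paddr(+) = p.addr
ORDER  BY p.pga_alloc_mem DESC;
[/code]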
View 6 Replies
View Related
Sep 4, 2012
I have 3 questions.
I have 11.2.0.1 database on Windows 2008 R2 SP1.
1) Connecting to the database as sysdba (sqlplus "/as sysdba") takes about 12 to 15 seconds. What do I need to check or change so that the connection is faster?
2) I have also read somewhere that changing the SGA/memory parameters affects connecting to the database. Is that true?
3) This is not related to the above two questions: what memory parameter sizes are appropriate for an 8 GB database? (A sketch follows after the SGA listing below.)
What I have is:
Total System Global Area 1068937216 bytes
Fixed Size 2182592 bytes
Variable Size 1027605056 bytes
Database Buffers 33554432 bytes
Redo Buffers 5595136 bytes
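For question 3, there is no single right size for an 8 GB database; it depends on the workload. A hedged first step is simply to see which memory targets are in force and what the instance's own advisor suggests before resizing anything:
[code]
-- Current memory settings (which of these apply depends on the management mode in use).
SHOW PARAMETER memory_target
SHOW PARAMETER sga_target
SHOW PARAMETER pga_aggregate_target

-- SGA resize advice gathered by the instance (populated only when SGA_TARGET is set).
SELECT sga_size, sga_size_factor, estd_db_time
FROM   v$sga_target_advice
ORDER  BY sga_size;
[/code]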
View 14 Replies
View Related
Jun 10, 2013
I am using 11g version, and trying to understand the Memory Allocation.
1) The new memory target parameters are supposed to be dynamic, aren't they? But when I changed one with ALTER SYSTEM it did not take effect, and I had to restart the database.
2) With the two new parameters (MEMORY_MAX_TARGET, MEMORY_TARGET), the SGA_MAX_SIZE parameter should be 0, right? But in my case a value for SGA_MAX_SIZE is assigned automatically.
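For what it's worth, a hedged sketch of what "dynamic" means here: MEMORY_TARGET can be raised or lowered online only up to MEMORY_MAX_TARGET, while MEMORY_MAX_TARGET itself (like SGA_MAX_SIZE) is static and needs a restart, which would explain the behaviour in question 1. The values below are placeholders.
[code]
-- Dynamic, provided the new value does not exceed MEMORY_MAX_TARGET:
ALTER SYSTEM SET memory_target = 900M SCOPE = BOTH;

-- Static: recorded in the spfile now, effective only after the next restart:
ALTER SYSTEM SET memory_max_target = 2G SCOPE = SPFILE;
[/code]
As for question 2, as I understand it, when SGA_MAX_SIZE is not set explicitly it is derived from the memory target values, which would explain the automatically assigned value.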
View 5 Replies
View Related
Jul 24, 2013
How do I determine whether a function is worth pinning in memory? I want to come up with a percentage, the idea being that if the function is already in memory 80%+ of the time then pinning is not worth it.
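A hedged way to get at that percentage is to compare how often the object has had to be reloaded with how often it is executed, using the library cache view; the object name below is a placeholder, and DBMS_SHARED_POOL.KEEP would be the call to actually pin it.
[code]
-- If LOADS is high relative to EXECUTIONS, the function keeps getting aged out
-- and may be a pinning candidate; KEPT shows whether it is already pinned.
SELECT owner, name, type, kept, loads, executions
FROM   v$db_object_cache
WHERE  name = 'MY_FUNCTION'                 -- hypothetical function name
AND    type IN ('FUNCTION', 'PACKAGE', 'PACKAGE BODY');
[/code]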
View 5 Replies
View Related
Feb 13, 2013
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert it into a stage table using the code below.
The issue is that table1 has 50 million records
table2 is empty
So the first time we execute this, the v_collection1 and v_collection2 collections will hold 50 million records in memory, which I think is not good, because it will eat memory and resources during sorting and other activities?
After fetching the records into the collections we insert them into the stage table and then COMMIT, which I think won't be good either, because committing 50 million rows will generate a large amount of redo?
Below is a snippet of my code; a set-based alternative follows after it.
DECLARE
  TYPE lst_collection1 IS TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
  TYPE lst_collection2 IS
[code].......
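A hedged, set-based alternative, assuming the end goal is only to load the differences into the stage table: a single direct-path INSERT ... SELECT with MINUS lets the database do the sorting (spilling to TEMP as needed) instead of holding 50 million rows in PL/SQL collections, and it commits once at the end. Column names are placeholders for the 15 real columns.
[code]
INSERT /*+ APPEND */ INTO stage_table (col1, col2 /* , ... col15 */)
SELECT col1, col2 /* , ... col15 */ FROM table1
MINUS
SELECT col1, col2 /* , ... col15 */ FROM table2;
COMMIT;
[/code]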
View 4 Replies
View Related
Aug 31, 2011
I am trying to run a trace file through TKPROF. It throws an error:
MEMORY FAULT
The trace file is 8MB. I tried some other trace files that are bigger (10MB) than this one, and they work fine. I checked whether any special permission on the trace file is required for tkprof to execute, but it has the same permissions as the other trace files.
The problem is that it shows no error number or description other than "MEMORY FAULT".
View 1 Replies
View Related
Feb 11, 2011
I have a few questions on PGA memory management. Since they are based on 2-3 examples that are not exactly the same in nature, I am summarising them here according to my understanding.
As I understand it, many workareas can be allocated to a single SQL statement, and the number and sizes of these workareas are controlled internally by Oracle when automatic memory management is in effect (PGA_AGGREGATE_TARGET set and WORKAREA_SIZE_POLICY=AUTO). Since many sessions share the PGA memory, the amount of memory available to each session may vary, and if less memory is available to a session for sorting, the TEMP tablespace is used.
[1] Can we say paging happens and can be checked at this time?
[2] Is there a difference in handling memory while populating pl/sql tables?
I have encountered ORA-04030 while some of our developers were populating PL/SQL tables, but never for sorting, hash joins, etc. Though I don't remember the width of the PL/SQL table, I am sure the developer used the LIMIT clause during the bulk collect and still faced the issue.
With a single session on the server, I noticed that the change in the values displayed by the 'free' command in Linux and the values from v$sesstat did not match at all, even though no heavy OS process was active during the period. I was expecting the 'used' and 'free' values shown by free (Linux) to change by approximately the difference between the before and after values of 'session pga memory'.
[3] Isn't it expected to match?
[4] Can we say in dedicated server, at any moment of time, the SUM of 'session pga memory' represents all the memory used by Oracle SGA, at that point of time?
select sum(value)/1024/1024 "memory in MB" from v$sesstat where statistic#=20;
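As an aside, a hedged variant of that query: resolving the statistic by name avoids relying on statistic# = 20, which is not guaranteed to map to 'session pga memory' across versions.
[code]
SELECT ROUND(SUM(ss.value)/1024/1024) AS "memory in MB"
FROM   v$sesstat ss, v$statname sn
WHERE  ss.statistic# = sn.statistic#
AND    sn.name = 'session pga memory';
[/code]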
During one of the tests I got the following output (values divided by 10 for readability and to avoid formatting issues):
SQL> select a.name, to_char(b.value/10, '999,999,999') value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and a.name like '%ga memory%';
[code]...
The above query shows these values even though the PL/SQL block execution completed 30 minutes earlier.
[5] Do we call this a 'memory leak', where memory is not released even though some time has passed since the session last did anything? Of course I am not checking at the OS level since, as mentioned in question [3] above, the values won't match!
Still, the output of the free command for reference (after the PL/SQL block executed):
SQL> !free
total used free shared buffers cached
Mem: 3016796 2999660 17136 0 4308 1173260
-/+ buffers/cache: 1822092 1194704
Swap: 1048568 636124 412444
--(After the pl/sql block executed)
SQL> select * from v$pgastat;
NAME                               VALUE      UNIT
aggregate PGA target parameter     524288000  bytes
aggregate PGA auto target          456256512  bytes
global memory bound                26214400   bytes
total PGA inuse                    17328128   bytes
[code]...
[6] What could be the significance of negative values of 'session pga memory/max'?
Lastly, we have an OLTP system, and at night we run batch processes in 2-4 sessions.
Suppose I have 10 GB of RAM and a PGA setting of 3.5 GB. I want the batch-process sessions to use the maximum possible memory during the night and to toggle the setting back in the morning.
[7] With the above settings (10 GB RAM and 3.5 GB PGA), how can I divide the memory among the 4 sessions?
Shall I set 1) PGA_AGGREGATE_TARGET=0, 2) WORKAREA_SIZE_POLICY=MANUAL, 3) SORT_AREA_SIZE, 4) HASH_AREA_SIZE?
[8] What would be approximate values for parameters 3 and 4? Would it simply be 3.5 GB / 4? (A sketch follows below.)
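For questions [7] and [8], a hedged per-session sketch (values purely illustrative): with manual workarea sizing the limits apply per workarea, not per session, so a statement with several concurrent sorts or hash joins can use a multiple of these values, which is why a straight 3.5 GB / 4 split is usually too aggressive.
[code]
-- Set only in the overnight batch sessions, then revert in the morning.
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 524288000;   -- ~500 MB per sort workarea (illustrative)
ALTER SESSION SET hash_area_size = 524288000;   -- ~500 MB per hash workarea (illustrative)
[/code]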
View 8 Replies
View Related
Aug 11, 2010
I have OEL 5.5 installed and patched, and my SSD is mounted.
But when I ran the command 'alter system set db_flash_cache_file ='/media/ssd/' scope=spfile;',
the instance wouldn't start up, giving ORA-01078: failure in processing system parameters...
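A hedged guess at the cause, based on the command shown: db_flash_cache_file should name a file (or raw device) rather than a directory, and db_flash_cache_size has to be set as well; the path and size below are assumptions.
[code]
ALTER SYSTEM SET db_flash_cache_file = '/media/ssd/flash_cache.dbf' SCOPE = SPFILE;  -- hypothetical file
ALTER SYSTEM SET db_flash_cache_size = 20G SCOPE = SPFILE;                           -- hypothetical size
-- Restart the instance afterwards for the spfile changes to take effect.
[/code]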
View 6 Replies
View Related
Jan 15, 2011
I am confused about the MEMORY_TARGET and MEMORY_MAX_TARGET parameters. If I set SGA_TARGET and SGA_MAX_SIZE along with MEMORY_TARGET and MEMORY_MAX_TARGET, how will Oracle manage the memory? Because, as per my understanding, if we set MEM...
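As I understand it (a hedged sketch): when MEMORY_TARGET is set together with SGA_TARGET and PGA_AGGREGATE_TARGET, the latter two act as minimum sizes for their respective areas and Oracle distributes the rest of MEMORY_TARGET between them; the split actually in force can be checked as below.
[code]
SELECT component,
       current_size/1024/1024 AS current_mb,
       min_size/1024/1024     AS min_mb
FROM   v$memory_dynamic_components
ORDER  BY current_size DESC;
[/code]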
View 3 Replies
View Related
Jun 15, 2012
What value should I set for sga_target in my Oracle 10g database?
Currently -
sga_max_size = 32GB
pga_aggregate_target = 6GB
RAM on server = 64 GB
I'll need to unset the db_block_buffers parameter in order to enable sga_target, right? (A sketch follows below.)
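A hedged sketch of enabling ASMM (the 24576M figure is illustrative only, not a recommendation for this 64 GB host): sga_target must not exceed sga_max_size, and as I understand it the legacy db_block_buffers parameter should indeed be removed in favour of db_cache_size before sga_target is turned on.
[code]
-- Assumes an spfile is in use and db_block_buffers has been removed from it.
ALTER SYSTEM SET sga_target = 24576M SCOPE = BOTH;
[/code]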
View 8 Replies
View Related
Sep 2, 2011
I am querying v$sga and getting a variable size of 211337216 bytes. When querying v$sgastat I get:
java Pool : 16777216
Large Pool : 41943040
Shared pool : 398560392
But as per my knowledge the following condition should hold, yet it does not:
[code]
Variable sga = java pool + large pool + shared pool
select pool,name,sum(bytes)
from v$sgastat
where pool in ('shared pool','java pool','large pool')
group by pool,name;
Here variable size using v$sga : 211337216 bytes
and java pool + large pool + shared pool : 211302536 bytes.
[/code]
But shouldn't they match?
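A hedged explanation: the "Variable Size" row in v$sga reflects the granules allocated to those pools plus other small variable-size allocations, so a modest difference against the byte totals summed from v$sgastat is usually granule rounding rather than missing memory. A cross-check sketch:
[code]
-- Pool totals as v$sgastat sees them...
SELECT pool, SUM(bytes) AS bytes
FROM   v$sgastat
WHERE  pool IN ('shared pool', 'java pool', 'large pool')
GROUP  BY pool;

-- ...versus the summary rows in v$sga.
SELECT name, value FROM v$sga;
[/code]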
View 5 Replies
View Related
Jul 13, 2010
I have a procedure that mainly runs queries on a table with nearly 9.5 million records. This procedure takes nearly 15 minutes to complete on our main database. I exported and imported the schema to our backup database, and there the same procedure took just 3 seconds to complete.
I analyzed the table in our main database and executed the procedure again, but it showed no improvement: ANALYZE TABLE DN_ACTIONS COMPUTE STATISTICS;
I am not sure whether computing statistics for all the tables in the schema will work. I also checked that there is enough disk space where the Oracle data files are stored, and I am turning on SQL trace to see which SQL statements in the procedure take the longest.
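A hedged alternative to ANALYZE: DBMS_STATS is the supported way to gather optimizer statistics and, with CASCADE, covers the indexes as well; the owner below is an assumption.
[code]
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,        -- assumes the table is in the current schema
    tabname          => 'DN_ACTIONS',
    cascade          => TRUE,        -- gather index statistics too
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/
[/code]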
View 8 Replies
View Related
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is the memory-based (logical) structures, whereas the database consists of the physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle supplied) for these types of tuning scenarios?
View 1 Replies
View Related
May 22, 2012
The customer is sending data from a legacy system (source) via a web service, which in turn calls a package on the Oracle server (target). This package simply inserts the data passed by the legacy system into a master staging table in the Oracle database. When they started this process in Sept 2011, 4 lakh records were inserted into the staging table. In Oct 11 it was 0 records, in Nov 11 2 lakh, in Dec 11 1 lakh, in Jan 12 1 lakh, in Feb 12 73k, in Mar 12 0, and in Apr 12 52k records.
As we can see, the number of records inserted into the table has gone down over time. What should be the starting point here? Since the web service calls that package on the fly, how can I enable trace for that package? I cannot replicate this in Dev, as the process only runs in PROD.
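One hedged option for tracing just those calls in PROD, assuming 10g or later and that the web-service connections come in through an identifiable service (and, ideally, set a module name): DBMS_MONITOR can switch on SQL trace for that service/module combination only. The names below are placeholders.
[code]
BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'LEGACY_SVC',     -- hypothetical service used by the web service
    module_name  => 'STAGING_PKG',    -- hypothetical module; omit to trace the whole service
    waits        => TRUE,
    binds        => TRUE);
END;
/
-- Disable again with DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE when done.
[/code]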
View 6 Replies
View Related
Oct 20, 2010
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten the name and can't recall it. What is this type of row-reduction optimization called?
View 6 Replies
View Related
Nov 15, 2010
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure; it's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
View 13 Replies
View Related
Jul 16, 2013
An SQL query is taking much longer than usual and does not complete even when left for hours! The query joins a table to a quite complex view.
The same query in a test database completes in less than 2 mins.
I would like to export the sql plan from test database to prod database.
How do I export/import the execution plan of a particular SQL statement in version 10.2.0.4? (A sketch follows below.)
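One hedged approach that works in 10.2.0.4: accept the good plan as a SQL profile on the test database and move it to production with the DBMS_SQLTUNE staging-table routines (the table and profile names below are placeholders); a stored outline would be the other option.
[code]
-- On the test database: stage the accepted profile and export the staging table.
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF(table_name => 'PROF_STG');
  DBMS_SQLTUNE.PACK_STGTAB_SQLPROF(profile_name       => 'my_sql_profile',
                                   staging_table_name => 'PROF_STG');
END;
/

-- On production, after importing PROF_STG:
BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(replace            => TRUE,
                                     staging_table_name => 'PROF_STG');
END;
/
[/code]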
View 2 Replies
View Related
Apr 18, 2011
What is the relationship between sga_max_size, sga_target, shared_pool_size, pga_aggregate_target and the server memory?
In short, how should I choose the above parameters for a server with a fixed amount of RAM?
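A hedged illustration of how the pieces relate, for a hypothetical host with 16 GB of RAM (the figures are assumptions, not a rule): sga_target is carved out of sga_max_size and auto-distributed among the pools, shared_pool_size then only acts as a minimum, pga_aggregate_target sits outside the SGA, and the two together should leave headroom for the OS and client processes.
[code]
ALTER SYSTEM SET sga_max_size         = 8192M SCOPE = SPFILE;  -- upper bound, restart needed
ALTER SYSTEM SET sga_target           = 8192M SCOPE = SPFILE;  -- auto-managed SGA within the bound
ALTER SYSTEM SET shared_pool_size     = 0     SCOPE = SPFILE;  -- let ASMM size it
ALTER SYSTEM SET pga_aggregate_target = 3072M SCOPE = BOTH;    -- PGA, outside the SGA
[/code]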
View 1 Replies
View Related
Oct 10, 2012
Can anyone give insight into the overheads of mutually authenticated SSL for database connections? This is over a fast local network, to a RAC cluster, with a DB firewall in front. There's always a large element of "it depends".
The information I'm interested in includes latency for initial session setup and subsequent data transfer, the increase in network packet size, and the increase in CPU cost on the database server. I guess there are some implications for session memory usage as well.
View 4 Replies
View Related
Jul 4, 2012
I have one big database that I need to migrate to Oracle, because Oracle handles big databases better than other databases. I have written the transfer software and everything works well except for one more thing: during the process I found that Oracle keeps filling the redo log and undo. My question is how to migrate (or whether I can migrate) the database to Oracle without filling undo (i.e. deactivate this), and afterwards put the database back to working normally, because I just need to transfer the data as-is and from that point Oracle takes over...
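Undo generation cannot be switched off entirely, but a hedged way to keep both undo and redo for the transferred data to a minimum is a direct-path load into NOLOGGING tables, committing once per table; the names below are placeholders, and a backup is needed afterwards because NOLOGGING data cannot be recovered from the redo stream.
[code]
ALTER TABLE target_tab NOLOGGING;              -- hypothetical target table

INSERT /*+ APPEND */ INTO target_tab           -- direct-path: minimal undo, minimal redo with NOLOGGING
SELECT * FROM source_tab;                      -- hypothetical source of the migrated data
COMMIT;

ALTER TABLE target_tab LOGGING;                -- restore normal behaviour once the load is done
[/code]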
View 4 Replies
View Related
May 23, 2013
Can I know the internal process by which the database is initialised into memory in TimesTen when a new connection is being established? Will TimesTen create the tables and indexes in RAM when the first connection is established if the RAM policy is the default?
I want to know the internal functional flow of TimesTen when any command is fired against it.
View 3 Replies
View Related
Sep 1, 2013
I am using Oracle version 11.2.0.3. We recently migrated to 11g and, after a month of smooth and comparatively better performance, we are suddenly facing performance issues with our database, and it has crashed twice within 5 days, even though we didn't push any new code to the database in the recent past, at least not after the 11g migration. After getting feedback from the Oracle Corporation people, they pointed to the default database stats-gathering job, which was eating most of the CPU because of the default degree: it was running 160 parallel threads, causing resource starvation. So we reduced the degree of the stats-gathering job to 8.
But the database crashed again two days ago and rebooted back to normal within 3 minutes, even after the default degree was changed to 8. Is this happening because of some specific application-related SQL, or something else? (A sketch for capping the stats-job degree follows below.)
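For reference, a hedged sketch of capping the degree globally rather than per run, so the automatic stats job cannot fan out to 160 threads again (11g DBMS_STATS preference):
[code]
BEGIN
  -- Cap the degree used by statistics gathering, including the maintenance job.
  DBMS_STATS.SET_GLOBAL_PREFS('DEGREE', '8');
  -- Alternatively, let Oracle pick a degree based on object size and CPU count:
  -- DBMS_STATS.SET_GLOBAL_PREFS('DEGREE', TO_CHAR(DBMS_STATS.AUTO_DEGREE));
END;
/
[/code]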
View 9 Replies
View Related
Oct 14, 2010
I am doing an export and import of a database. Before loading the data I drop all the tables and then import. Is there any issue with dropping tables and importing data frequently?
View 2 Replies
View Related
Jul 9, 2012
I understand that when data is read from disk, I/O is done, and when computations are done, the CPU is used. So where does the following equation fit?
DB Time = sum of database CPU time + waits
Is I/O considered as a part of CPU time?
Does this equation change with SAN or OS caching?
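A hedged reading of the equation: DB time and DB CPU both come from the time model, and their difference is roughly the time spent in non-idle waits, so I/O wait time is part of DB time but not of CPU time; OS or SAN caching changes how long the I/O waits take, not the shape of the equation. The current figures can be pulled as follows.
[code]
-- Time model values are in microseconds.
SELECT stat_name, ROUND(value/1000000) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
[/code]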
View 3 Replies
View Related
Mar 23, 2012
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, when processing "Payments by the customer", it now takes a long time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
View 1 Replies
View Related
Aug 23, 2012
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1152M
sga_target big integer 0
[code]....
In the scenario above the database is not using ASMM, and an spfile is in use. If I want to increase the db_cache_size parameter, do I need to bounce the instance?
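A hedged sketch of the resize itself: db_cache_size is a dynamic parameter even without ASMM, so no bounce should be needed as long as the new value, together with the other SGA components, still fits under the 1152M sga_max_size shown above; the 512M figure is illustrative.
[code]
ALTER SYSTEM SET db_cache_size = 512M SCOPE = BOTH;  -- rejected if it cannot fit within sga_max_size
[/code]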
View 6 Replies
View Related
Aug 17, 2012
I am facing a weird situation wherein the explain plan of the same SQL in SIT and PROD is different. In fact, the plan is very costly in Prod, while the DB version of SIT and PROD is the same.
Below is the sql and corresponding explain plan in Prod and SIT respectively.
Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl)Nxt_comp_item_type,lvl,bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'
[code]...
The tables referenced in the above query are small, containing around 10k records each. They are partitioned on Region and not indexed.
Explain Plan in Prod: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240
[code]...
Explain Plan in SIT: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120
[code]...
I am not able to work out why there is such a huge change in cost between SIT and Prod. The job now runs for 3-5 hours, whereas it used to complete within 20 minutes in SIT.
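Since the cost and cardinality estimates differ so much for tables of the same size, a hedged first check is to compare the optimizer statistics in both environments and, if needed, transport them with DBMS_STATS; the table names below are placeholders.
[code]
-- Compare basic statistics in each environment.
SELECT table_name, num_rows, blocks, last_analyzed
FROM   user_tables
WHERE  table_name IN ('TAB1', 'TAB2');          -- hypothetical tables from the query

-- Export the statistics from one environment into a staging table for transport.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => USER, stattab => 'STATS_STG');
  DBMS_STATS.EXPORT_TABLE_STATS(ownname => USER, tabname => 'TAB1',
                                stattab => 'STATS_STG');
END;
/
[/code]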
View 5 Replies
View Related