Quote: If MEMORY_TARGET is set to a nonzero value and SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET are set to 0, then 60% of the memory specified by MEMORY_TARGET is allocated to the SGA and the remaining 40% is kept for the PGA.
So, I have set:

alter system set sga_target=0 scope=spfile;
alter system set pga_aggregate_target=0 scope=spfile;
alter system set sga_max_size=0 scope=spfile;
alter system set memory_max_target=512M scope=spfile;
alter system set memory_target=300M scope=spfile;

and then bounced the instance. After startup, I see:
SQL> startup
ORACLE instance started.

Total System Global Area  272027648 bytes
Fixed Size                  1384012 bytes
Variable Size             100663732 bytes
Database Buffers          163577856 bytes
Redo Buffers                6402048 bytes
Database mounted.
Database opened.
SQL>
But shouldn't the Total System Global Area in that case be 60% of the memory specified by MEMORY_TARGET? MEMORY_TARGET was 300M, and 60% of 300M is 180M, which is not 272027648 bytes.

I just want to use automatic memory management, so I've set the other parameters above to 0. My instance is on my local machine, just for my own personal use.
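A minimal sketch of one way to see the split AMM has actually chosen, rather than inferring it from the startup banner (assuming the 11g view v$memory_dynamic_components is available):

select component, round(current_size/1024/1024) as current_mb
from v$memory_dynamic_components
where current_size > 0
order by current_size desc;

The 60/40 figure is only an initial distribution; AMM resizes the SGA and PGA targets over time, so the startup SGA size need not be exactly 60% of memory_target.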
I have a quick question about Automatic Memory Management in 11g. I know that it combines the SGA and PGA components together, but pga_aggregate_target (in 10g) was a target rather than a strict limit. In certain circumstances you could (and we have) exceed pga_aggregate_target by quite some way.

If I use AMM, does this enforce a limit on PGA usage, or if I throw enough bind variables at it, can I still exceed the 'target'?
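A rough sketch for watching whether the PGA drifts past its target over time (statistic names as they appear in v$pgastat):

select name, round(value/1024/1024) as mb
from v$pgastat
where name in ('aggregate PGA target parameter',
               'total PGA allocated',
               'maximum PGA allocated');

If 'maximum PGA allocated' comes back well above the target, the target has been exceeded at some point since startup.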
I have a few questions on PGA memory management. Since these questions are based on 2-3 examples that are not exactly the same in nature, I am summarising my understanding of them below.

As I understand it, many workareas can be allocated to a single SQL statement, and the number and sizes of these workareas are controlled internally by Oracle when automatic PGA memory management is in effect (PGA_aggregate_target is set and workarea_size_policy=AUTO). Since many sessions share the PGA memory, the amount of memory available to each session may vary, and if less memory is available to a session for sorting, the TEMP tablespace is used.
[1] Can we say paging happens at this time, and can it be checked?
[2] Is there a difference in handling memory while populating pl/sql tables?
I ask because I have encountered ORA-04030 while some of our developers were populating PL/SQL tables, but I never encountered this error for sorting, hash joins, etc. Though I don't remember the width of the PL/SQL table, I am sure the developer used the LIMIT clause during the bulk collect and still faced the issue.

With a single session on the server, I noticed that the values displayed by the 'free' command in Linux and the output values from sesstat did not match at all, even though there wasn't any heavy OS process running during the period. I was expecting the 'used' and 'free' values displayed by free (Linux) to change, with the difference approximately equal to the before and after values of session PGA memory.
[3] Isn't it expected to match?
[4] Can we say that, in a dedicated server configuration, at any moment in time the SUM of 'session pga memory' represents all the memory used by the Oracle SGA at that point in time?
select sum(value)/1024/1024 "memory in MB" from v$sesstat where statistic#=20;
During one of the tests I got the following output (values are divided by 10 for readability and to avoid formatting issues):

SQL> select a.name, to_char(b.value/10, '999,999,999') value
     from v$statname a, v$mystat b
     where a.statistic# = b.statistic#
     and a.name like '%ga memory%';
[code]...
The above query is still showing those values even though the PL/SQL block finished executing 30 minutes ago.

[5] Do we call this a 'memory leak', where memory is not released even though some time has passed since the session last did anything? Of course I am not checking at the OS level; as mentioned in question [3] above, the values won't match!

Still, the output of the free command for reference (after the PL/SQL block executed):
SQL> select * from v$pgastat;

NAME                             VALUE       UNIT
aggregate PGA target parameter   524288000   bytes
aggregate PGA auto target        456256512   bytes
global memory bound              26214400    bytes
total PGA inuse                  17328128    bytes
[code]...
[6] What could be the significance of negative values of 'session pga memory/max'?
Lastly, we have an OLTP system, and at night we run batch processes in 2-4 sessions.

Suppose I have 10 GB of RAM and a PGA setting of 3.5 GB. I want the batch process sessions to use the maximum possible memory during the night and to toggle the setting back in the morning.

[7] With the above settings (10 GB RAM and 3.5 GB PGA), how can I divide the memory among the 4 sessions?

Shall I set 1) PGA_aggregate_target=0, 2) Workarea_size_policy=manual, 3) Sort_area_size, 4) Hash_area_size?

[8] What would be approximate values for parameters 3 and 4? Would it simply be 3.5 GB / 4?
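A minimal sketch of the session-level approach, assuming the batch jobs can issue ALTER SESSION themselves; the sizes are purely illustrative, not a recommendation:

-- switch this batch session to manual workarea sizing for the night run
alter session set workarea_size_policy = manual;
alter session set sort_area_size = 524288000;  -- illustrative: ~500 MB per sort workarea
alter session set hash_area_size = 524288000;  -- illustrative: ~500 MB per hash workarea
-- ... run the batch job ...
-- revert to automatic sizing for normal OLTP work
alter session set workarea_size_policy = auto;

Note that with the manual policy these are per-operation limits, so a statement that runs several sorts or hash joins can use a multiple of these values; a straight 3.5 GB / 4 split is therefore not a safe assumption.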
1) While connecting to the database as sysdba (sqlplus "/as sysdba"), it takes about 12 to 15 seconds. What things do I need to check/change so that the connection will be faster?

2) Also, I have read somewhere that changing the SGA/memory parameters affects how quickly you can connect to the database. Is that true?

3) This is not related to the above 2 questions. What should the memory parameter sizes be for an 8 GB database?
What I have is:
Total System Global Area 1068937216 bytes
Fixed Size                  2182592 bytes
Variable Size            1027605056 bytes
Database Buffers           33554432 bytes
Redo Buffers                5595136 bytes
I have a question regarding memory parameters in Oracle Database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32G of RAM. The shmmax kernel parameter on the server is set to 16G. Is it reasonable to set sga_max_size to the same value, and db_cache_size to 80% of that size?
NAME          TYPE         VALUE
------------  -----------  ------
lock_sga      boolean      FALSE
pre_page_sga  boolean      FALSE
sga_max_size  big integer  3G
sga_target    big integer  2G
From what I read, I believe this will initially grab 2GB of memory on startup and will grab up to 3GB of memory total for the SGA. The "total" memory can be allocated to different pieces of the SGA when needed but will never exceed 3GB. Is this correct, or would these settings infringe on the available memory on a system that is already tight on memory?
Secondly, what happens if both these values are set to the same value?
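For what it's worth, a sketch of a query that shows how the granted SGA is being carved up at run time, so the effect of sga_target versus sga_max_size can be observed directly:

select component, round(current_size/1024/1024) as current_mb
from v$sga_dynamic_components
where current_size > 0
order by current_size desc;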
How do people in production work out the exact positions of the data for fixed-format SQL*Loader loads? Isn't that critical, and error-prone, especially on a critical system with many columns to be inserted?
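A minimal sketch of a fixed-position control file; the table, columns and positions are made up for illustration. With POSITION(start:end) the byte positions are normally taken straight from the record layout specification of the feed, rather than being counted by hand:

-- hypothetical fixed-width employee feed
LOAD DATA
INFILE 'emp_fixed.dat'
INTO TABLE emp_stage
(
  empno     POSITION(1:6)    INTEGER EXTERNAL,
  ename     POSITION(7:26)   CHAR,
  hiredate  POSITION(27:36)  DATE "YYYY-MM-DD",
  sal       POSITION(37:44)  DECIMAL EXTERNAL
)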
I have installed Oracle 9i and it is working fine. I can access data from the standalone launch, but when I try to log in to Oracle Management Server I can't log in.
I am using Oracle 10.2.0.4 on Windows 2008 Server SP2. I would like to know if we can set the Segment Space Management feature to AUTO for the RBS and temporary tablespaces. As the data is not permanent in these tablespaces, will it be managed automatically?

Presently, Segment Space Management is MANUAL for the System, RBS and Temporary tablespaces.
I have created a form in Oracle Forms 6i that utilizes a tabbed interface. Each section of the form has different modules to create a quote on the end page. As far as the logic goes, I have it complete. However, I would like to choose the tab that is in focus, and I can't find a place to change this in the property panel. What am I missing? I don't want it to focus on the last tab which is the totals page, when it should focus on the first page where you enter the data.
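If the property palette route does not pan out, a common alternative (sketched here with made-up block and item names) is to navigate to an item on the desired tab page in the WHEN-NEW-FORM-INSTANCE trigger, which brings that tab into focus:

-- WHEN-NEW-FORM-INSTANCE trigger; block and item names are hypothetical
BEGIN
  GO_ITEM('QUOTE_HEADER.CUSTOMER_NAME');  -- any navigable item on the first tab page
END;

SET_CANVAS_PROPERTY with TOPMOST_TAB_PAGE can also switch the visible tab page without moving the cursor.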
I am working on setting up a connection between a Windows 2008 server and a pair of Oracle 11g DBs in a RAC cluster. One database (let's say DatabaseA) is in one data center, and the other (DatabaseB) is a secondary, backup database. The RAC cluster is all set up, working fine, etc. However, I need to set up the machine.config file on my Windows server so that it connects only to DatabaseA, unless it fails, in which case we want it to connect to DatabaseB. I think we could do this if the host app server were Linux/Unix, but it is Windows, and I just don't have the background as to the parameters to set up in the machine.config file. They are similar, but different, and we want a very specific behavior (use DatabaseA, unless it fails, then DatabaseB). The application is a .NET 4.0 app.
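One approach worth sketching is to handle this at the Oracle Net level with connect-time failover and put the resulting descriptor (or its tnsnames.ora alias) into the connection string; the host names and service name below are placeholders:

# placeholders: DatabaseA's address is listed first so it is tried first,
# and FAILOVER=ON falls through to DatabaseB only if DatabaseA is unreachable
(DESCRIPTION =
  (ADDRESS_LIST =
    (LOAD_BALANCE = OFF)
    (FAILOVER = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = databasea-host)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = databaseb-host)(PORT = 1521))
  )
  (CONNECT_DATA = (SERVICE_NAME = myservice))
)

With ODP.NET this kind of descriptor can be used directly as the Data Source value, which keeps the failover behaviour independent of what else sits in machine.config.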
I have been reading various articles about undo management. The basic concept of undo management is simple, but how Oracle implements it is a bit harder for me to grasp.
What i have read and understood is that whenever a DML(Update, Delete, Insert) statement is issued by a user, the data is fetched from datafile to database buffer cache and at the same time a copy of the original data is saved in undo segment. Now if other users requests the same data, they are presented with the unchanged copy in the undo segment.
Now I have the following questions:

1) In the case of an INSERT statement, what data is saved in the undo segment? Is it the complete data of the table into which we want to insert the new row?

2) When the user issues a DML statement, there are three copies of the same data: one in memory (which is changed and not the same as the original data), a second in the undo segment (which is the unchanged copy of the original data), and a third in the datafile (which is the original data). What is the difference between the data in the undo segment and the data in the datafile at this stage? Why are other users presented with the data from the undo segment, rather than the original data from the datafile, to maintain read consistency?

3) When the user issues a rollback, the changes made to the copy of the data in memory are undone. Are the copies of the data in memory and in the undo segment now the same? What happens to the before-change copy in the undo segment: is it still there, or is it deleted?
Is there any page where I can find a list (with descriptions) of all the bugs fixed in a particular CPU (such as July 2013, #16742095)? The readme for the patchset has "Bugs Fixed by This Patch" in section 6, but that is just some patch numbers, with no description. In oracle support, searching by bug id for any of those numbers, returns a "bug not found" page. I would like to have a formatted table with the bug numbers and a link to a detailed description for each bug.
Even the links on the patch details page go to "page not found".
Is there any possibility of creating a selection interface similar to our report or forms wizard, where the user can select single fields or multiple fields together, and when he clicks on the arrow those fields' data are inserted into another table? Please refer to the attachment.
-- this is the main table
create table batch_item (batch_no varchar2(12), item_batch varchar2(12), total_batch_qty number);

insert into batch_item values ('0001','a',300);
insert into batch_item values ('0002','b',200);
insert into batch_item values ('0003','a',102);

-- after inserting the data I should be able to move this data to the corresponding batches by manually selecting items and pushing them to a batch
create table ct_item (item varchar2(12), item_nm varchar2(20), item_qty number);

insert into om_item values ('a','alpha',2);
insert into om_item values ('b','beta',3);
insert into om_item values ('c','gama',4);

-- the left side is om_item and the right side will have the batch items, where the batch qty will be accumulated as items are chosen with the arrows
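As a sketch of what each arrow click could do on the database side (the bind variable names are made up; the item table is assumed to be om_item, as in the inserts above), a MERGE can accumulate the chosen item's quantity into the chosen batch:

merge into batch_item b
using (select :p_batch_no as batch_no, item, item_qty
       from om_item
       where item = :p_item) s
on (b.batch_no = s.batch_no and b.item_batch = s.item)
when matched then
  update set b.total_batch_qty = b.total_batch_qty + s.item_qty
when not matched then
  insert (batch_no, item_batch, total_batch_qty)
  values (s.batch_no, s.item, s.item_qty);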
Can I know the internal process of initializing a DB into memory in TimesTen when a new connection is being established? Will TimesTen create tables and indexes in RAM when the first connection is established, if the RAM policy is the default?

I want to know the internal functional flow of TimesTen when any command is fired against it.
I have one generic question about space management. I have one table with a size of 1TB. This table is stored in the ORC1 tablespace. This tablespace contains 70 datafiles.

Since it is a 10.2.0.4 database, I dropped this table using PURGE:
drop table <<table_name>> purge;
Once the table drop was completed and I checked the tablespace, it was 100% free, but due to the HWM I was unable to resize the datafiles from their current size to a smaller size. What was the reason behind this? Is there any process that needs to be followed when dropping big tables, like truncating the table first and then dropping it?

I have one tablespace called U01. This tablespace contains 31 datafiles. Due to the high water mark I was unable to resize most of the datafiles. Since my database is running an on-air application, they will not give me downtime to move the tables. Is there any way to fix the high water mark without getting a downtime window? Almost 700+ GB of space is unused, and I need to reuse it ASAP because I am running out of space in the ASM diskgroup.
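For reference, a sketch of a query to see how far the high water mark of each datafile in U01 actually extends, i.e. the smallest size each file could currently be resized to:

select f.file_id,
       f.file_name,
       round(f.bytes/1024/1024) as file_mb,
       ceil(nvl(max(e.block_id + e.blocks), 1) * t.block_size / 1024 / 1024) as hwm_mb
from dba_data_files f
join dba_tablespaces t on t.tablespace_name = f.tablespace_name
left join dba_extents e on e.file_id = f.file_id
where f.tablespace_name = 'U01'
group by f.file_id, f.file_name, f.bytes, t.block_size
order by f.file_id;

Whatever segment sits highest in each file is what pins the HWM; relocating just those segments is usually enough to make the files resizable, though a plain ALTER TABLE ... MOVE does lock the table while it runs.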
I have the following two SQL statements; one returns one row, the other returns no rows.
select v.value from v$parameter v where v.name = 'cpu_count';
return value "1"
select o.value from v$osstat o where o.stat_name = 'NUM_CPU_CORES';
No row returned.
I want to combine the above two into one SQL statement and return (1, null) or (1, 1). I assumed we could get this with a left join using the condition "o.stat_name (+) = 'NUM_CPU_CORES'", but the following SQL returns no rows. How can we get the result (1, null) in this case?

select v.value, o.value -- or NVL(o.value, 1)
from v$parameter v, v$osstat o
where v.name = 'cpu_count'
and o.stat_name (+) = 'NUM_CPU_CORES';
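One form that should return (1, null) is the ANSI syntax with the v$osstat filter moved into the ON clause, so the outer join is driven from v$parameter; a sketch:

select v.value as cpu_count,
       o.value as num_cpu_cores   -- or nvl(o.value, 1)
from v$parameter v
left join v$osstat o
       on o.stat_name = 'NUM_CPU_CORES'
where v.name = 'cpu_count';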
Oracle Version: 11.2.0.2.0. I have two explicit cursors and I would like to choose at run time which one to run. Here is a simplified code snippet of what I am doing today:
DECLARE
  CURSOR Cursor_A IS SELECT * FROM EMP_A;
  CURSOR Cursor_B IS SELECT * FROM EMP_B;
  RUNA CHAR(1) := 'Y';
[code]....
I want to avoid maintaining the same long list of transformations. I also want to avoid, if possible, an explicit FETCH INTO, because there are hundreds of fields in both tables. I'm looking for something like this (and I know this doesn't work):

DECLARE
  CURSOR Cursor_A IS SELECT * FROM EMP_A;
  CURSOR Cursor_B IS SELECT * FROM EMP_B;
  RUNA CHAR(1) := 'Y';
  CursorToRun IS REF CURSOR;
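A sketch of one common workaround, with the caveat that it still fetches, but into a single record rather than hundreds of variables: open a SYS_REFCURSOR on whichever query is needed and drive one shared copy of the transformation logic off a %ROWTYPE record. This assumes EMP_A and EMP_B have matching column lists:

DECLARE
  RUNA CHAR(1) := 'Y';
  CUR  SYS_REFCURSOR;
  REC  EMP_A%ROWTYPE;  -- also fits EMP_B rows if the two tables match structurally
BEGIN
  IF RUNA = 'Y' THEN
    OPEN CUR FOR SELECT * FROM EMP_A;
  ELSE
    OPEN CUR FOR SELECT * FROM EMP_B;
  END IF;

  LOOP
    FETCH CUR INTO REC;
    EXIT WHEN CUR%NOTFOUND;
    -- the single copy of the long list of transformations goes here, using REC.<column>
    NULL;
  END LOOP;

  CLOSE CUR;
END;
/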
I have been studying possible ways to calculate the best size for the stripe size on my new storage. From what I have seen it appears that there is no straight answer but I believe there must be good way of estimating it. I am deploying a new storage that uses 16 x 600GB 15k RPM 6Gbps SAS drives, it is supposed to be very fast but I think if I get the stripe size on the RAID10 array right I should be able to get even more out of it. I am running oracle 10.2.0.4.0 (10gR2) on 64bit Linux with 32GB of RAM, the SGA is set to 20GB.
What I read said that the "stripe size must be at least as large as the I/O size." The I/O size is calculated as DB_BLOCK_SIZE * DB_FILE_MULTIBLOCK_READ_COUNT, which in my case is 16k * 16 = 256k. The current RAID10 array that I am using, which will be decommissioned soon, has a stripe size of 128k, and if I understood it all correctly it is undersized for my Oracle instance.

With an I/O size of 256k, the smallest stripe that I should choose is 256k, but how do I know what the best combination of db_file_multiblock_read_count and db_block_size is?

We are planning to create a new tablespace with a block size of 32k, as this has been recommended by a number of DBAs, but then the I/O size becomes db_block_size (32k) * db_file_multiblock_read_count (16) = 512k.
Is this a viable size? Do you think this will actually improve performance on my system? Note that this database gets about 1500 insertions per second and peaks around 3000 insertions per second, it is a mixture of OLTP and data warehousing.
After doing a lot of research I came across an article by Don Burleson which says that, starting in Oracle 10gR2, Oracle recommends not setting the db_file_multiblock_read_count parameter, allowing Oracle to empirically determine the optimal setting. References: URLs.....
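For reference, a quick sketch that shows what the instance is actually using for these two parameters, and whether the multiblock read count was derived by Oracle or set explicitly (ISDEFAULT):

select name, value, isdefault
from v$parameter
where name in ('db_block_size', 'db_file_multiblock_read_count');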
I have a repeating frame which displays each item with its quantity and price. The second frame is a fixed frame which will display the total and the price after discount. Both frames are enclosed in the same frame. I left space between the two frames so the second frame always prints in the same place. When I run the report, the second frame appears on the second page and the first frame (the items in the invoice) appears on the first page.

I've tried changing the vertical and horizontal properties of the frames, but the report is still the same.
I have an issue with a query to fetch records between two dates for fixed timings:

Date: from 29-09-2011 to 04-10-2011
Time: from 00:00:00 hrs to 08:00:00 hrs

I tried the query below, but it doesn't work:

select a.detectorid, sum(b.totalvolume), a.updatetime, a.averagespeed
from traffic_data a
left outer join volume_data b on a.traffic_id = b.traffic_data_id
where pollinterval = 1
and detectorid = 'AIDC_0154'
and updatetime between to_date('29-aug-2011:00:00:00', 'DD-MON-YYYY:HH24:MI:SS')
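A sketch of one way to express "between the two dates, but only 00:00 to 08:00 each day"; the table aliases on pollinterval and detectorid are assumed, the date literals follow the range stated above, and a GROUP BY is added since the query aggregates:

select a.detectorid,
       sum(b.totalvolume) as totalvolume,
       a.updatetime,
       a.averagespeed
from traffic_data a
left outer join volume_data b
  on a.traffic_id = b.traffic_data_id
where a.pollinterval = 1
  and a.detectorid = 'AIDC_0154'
  and a.updatetime >= to_date('29-09-2011', 'DD-MM-YYYY')
  and a.updatetime <  to_date('05-10-2011', 'DD-MM-YYYY')
  and to_char(a.updatetime, 'HH24:MI:SS') between '00:00:00' and '08:00:00'
group by a.detectorid, a.updatetime, a.averagespeed;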