Server Administration :: Unable To Increase Size Of SGA
Feb 2, 2012
I am trying to increase the size of the SGA, or rather I want to switch my SGA to automatic memory management. Following are the steps I am trying:
SQL> show parameter sga_max_size;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 96M
SQL>
After that I try to increase the size:
SQL> alter system set sga_max_size = 200m;
alter system set sga_max_size = 200m
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
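For what it's worth, sga_max_size is a static parameter, so it can only be changed in the spfile and is picked up at the next restart (for full 11g automatic memory management you would set memory_target/memory_max_target instead). A minimal sketch, assuming an spfile is in use and the sizes shown are just illustrative:

-- change the static parameter in the spfile, then restart
SQL> alter system set sga_max_size = 200m scope=spfile;
SQL> alter system set sga_target   = 200m scope=spfile;   -- automatic SGA management
SQL> shutdown immediate
SQL> startup
SQL> show parameter sga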
I have Oracle 11gR2 running on a Windows XP machine. The Windows disk has a total size of 150 GB with 95 GB free.
I checked the size of the database I created: it showed the total size of the database as 2 GB and the used space as 2 GB. If I want to increase the total size of the database to 50 GB, what should I do? And which disk space figure matters here, the Windows one or the Oracle one?
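As a note, the 150 GB / 95 GB figures are the Windows disk, while the 2 GB is the space currently allocated to the database's datafiles; the database grows by growing or adding datafiles within the free disk space. A hedged sketch, with hypothetical file names, tablespace and sizes:

-- resize an existing datafile (path and size are illustrative)
SQL> alter database datafile 'C:\ORADATA\ORCL\USERS01.DBF' resize 10g;

-- or add another datafile to the tablespace and let it grow as needed
SQL> alter tablespace users
       add datafile 'C:\ORADATA\ORCL\USERS02.DBF' size 1g
       autoextend on next 512m maxsize 20g;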
We have a database (DB A) running version 9.2.0.6.0. This DB has multiple tables, with volumes of around 6 million rows in individual tables. Another database (DB B), also 9.2.0.6.0, has MViews pointing to DB A. The MViews are refreshed every 15 minutes, with the fast refresh option in 90% of cases and complete refresh for the rest.
Last weekend we migrated DB B to version 10.2.0.4.0 (64-bit) on another server. After the version upgrade and DB migration, a complete refresh was done once for all MViews.
Now DB A is generating a huge amount of archive log and its UNDO space is getting fully consumed, causing performance issues and DB hangs. What has gone wrong, and what can we do to improve the response of DB A and reduce the amount of archive log generated?
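One thing worth checking (an assumption, not a diagnosis) is whether the materialized view logs on DB A are still being purged now that DB B has moved: if registrations from the old server are left behind, the MLOG$ tables keep every change and every refresh has to work through them, which drives up redo and undo. Hypothetical checks on DB A:

-- are there stale registrations left over from the old DB B?
SQL> select owner, name, snapshot_site
       from dba_registered_snapshots;

-- how big have the mview logs grown on DB A?
SQL> select segment_name, round(bytes/1024/1024) mb
       from dba_segments
      where segment_name like 'MLOG$%'
      order by bytes desc;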
I need to resize my datafiles, as I have allocated more space than needed (the data load is now complete). My tablespace has 11.74 GB of free space and 3 datafiles.
TABLESPACE     TOTAL      USED       FREE       PCT_FREE     LARGEST   FRAGMENTS
------------   --------   --------   --------   ----------   -------   ---------
CFC_DATA       150528     138780.6   11747.4    7.80412946   1251      992

TABLESPACE_NAME   FILE_ID   FILE_NAME                                      Size(MB)
---------------   -------   --------------------------------------------   ----------
CFC_DATA          71        +DATA/dedw/datafile/cfc_data.4074.731085435    65535.9688
CFC_DATA          334       +DATA/dedw/datafile/cfc_data.4473.757566557    20480
CFC_DATA          1710      +DATA/dedw/datafile/cfc_data.2012.728095695    64512

I used the script below to find the HWM in order to resize the datafiles. db_block_size is 16 KB. [code]....
In TOAD there is a "Minimum size" button against each datafile. I need the SQL that runs behind the scenes when we press this button in TOAD.
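I don't know the exact statement TOAD runs, but a common way to get the smallest size each datafile can be shrunk to is to look at the highest allocated block in the file. A sketch, assuming the 16 KB block size mentioned above; the file id and resize value in the second statement are purely illustrative:

-- smallest possible size per datafile = highest used block * block size
SQL> select file_id,
            ceil(max(block_id + blocks) * 16384 / 1024 / 1024) min_size_mb
       from dba_extents
      where tablespace_name = 'CFC_DATA'
      group by file_id;

-- then shrink each file down towards that minimum
SQL> alter database datafile 334 resize 15000m;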
SQL> alter system set sga_max_size=1G scope=spfile;
System altered.
SQL> startup force
ORA-12545: Connect failed because target host or object does not exist
SQL> show parameter sga
ERROR:
ORA-03114: not connected to ORACLE

SQL> startup force
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
SQL> startup
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
SQL> startup force
ORA-24324: service handle not initialized
ORA-01041: internal error. hostdef extension doesn't exist
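These errors usually just mean the session's connection died when the instance went down, so the old handle is useless; reconnecting before retrying the startup is often enough. A hedged sequence:

SQL> exit                        -- discard the dead session
C:\> sqlplus / as sysdba         -- reconnect locally with ORACLE_SID set
SQL> startup
SQL> show parameter sga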
I have installed Oracle 9i on 32-bit Windows Server 2003, but it only recognizes 2 GB of RAM. What can I do to make it recognize more memory?
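On 32-bit Windows a process is normally limited to a 2 GB address space, so this is an OS limit rather than an Oracle one. The usual workarounds (stated as assumptions about your setup, not a recipe) are the /3GB and /PAE switches in boot.ini plus, for a larger buffer cache, the AWE mechanism with USE_INDIRECT_DATA_BUFFERS; the buffer count below is purely illustrative:

-- Windows side (illustrative): add /3GB and /PAE to the boot.ini entry, then reboot

-- Oracle side: AWE requires the older db_block_buffers parameter instead of db_cache_size
SQL> alter system set use_indirect_data_buffers = true scope=spfile;
SQL> alter system set db_block_buffers = 262144 scope=spfile;   -- value is illustrative
SQL> shutdown immediate
SQL> startup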
I am studying the Oracle® Database VLDB and Partitioning Guide 11g Release 2 (11.2), and I want to reproduce ORA-14767: Cannot specify this interval with existing high bounds using an example other than the one given in the above-mentioned document.
The example given in the document is as follows:
To increase the interval for date ranges, you need to ensure that you are at a relevant boundary for the new interval. For example, if the highest interval partition boundary in your daily interval-partitioned table transactions is January 30, 2007, and you want to change to a monthly partition interval, then the following statement results in an error:
CREATE TABLE transactions
( id NUMBER
, transaction_date DATE
, value NUMBER)
PARTITION BY RANGE (transaction_date)
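A hedged sketch of a different example that hits the same error: a daily interval table whose highest boundary falls mid-month cannot be switched straight to a monthly interval, but referencing the partition for the last day of the month pushes the high bound to the first of the next month, after which the change is accepted. Table name and dates below are illustrative:

CREATE TABLE sales_by_day
( sale_id   NUMBER
, sale_date DATE )
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTODSINTERVAL(1,'DAY'))
( PARTITION p_start VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY')) );

INSERT INTO sales_by_day VALUES (1, TO_DATE('15-01-2012','DD-MM-YYYY'));

-- highest boundary is now 16-Jan-2012, not a month boundary, so this fails:
ALTER TABLE sales_by_day SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));   -- ORA-14767

-- materialise the partition for 31-Jan so the high bound becomes 01-Feb-2012,
-- then the monthly interval is accepted
LOCK TABLE sales_by_day PARTITION FOR (TO_DATE('31-01-2012','DD-MM-YYYY')) IN SHARE MODE;
ALTER TABLE sales_by_day SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));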
I am executing a PL/SQL procedure and trying to increase the buffer size so that all characters are displayed. The procedure is given below:
create or replace procedure prc_p(prm_t1  in VARCHAR2,
                                  prm_t2  in VARCHAR2,
                                  prm_tab in varchar2) AUTHID CURRENT_USER
as
  str_sql VARCHAR2 (4000);
[code]..........
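If the truncation comes from the DBMS_OUTPUT buffer, the limit can be raised (or removed entirely in 10.2 and later) before running the procedure. A sketch; the procedure arguments are illustrative:

-- lift the client-side limit in SQL*Plus
SQL> set serveroutput on size unlimited

-- or enable an unlimited buffer from inside the session
SQL> exec dbms_output.enable(NULL);

SQL> exec prc_p('value1', 'value2', 'some_table');   -- arguments are illustrative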
We have a table emp_details with 23,772,889 records. Our requirement is to increase the size of a few of the columns in emp_details. We are using the ALTER statement below, which is taking around 2 hours.
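For what it's worth, simply widening a VARCHAR2 or NUMBER column is normally a dictionary-only change and should complete almost instantly even on 23 million rows; a long runtime usually means the session is waiting for a lock on the table rather than rewriting rows. A hedged sketch, with an illustrative column name:

-- widening a column: typically metadata-only, no row rewrite
ALTER TABLE emp_details MODIFY (emp_name VARCHAR2(200));

-- if it hangs, look for blocking transactions on the table first (10g+ view columns)
SELECT sid, serial#, blocking_session, event
  FROM v$session
 WHERE blocking_session IS NOT NULL;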
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     96000
undo_tablespace                      string      undo
Following are the details from the AWR report (00:00 till 01:00 of 21-Apr-2013). Note that the error was produced at 00:42.
Undo Segment Summary                 DB/Inst: DBCPY/dbcpy01  Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
[code]....
Undo Advisor information taken 'now' is as follows:
SQL> select dbms_undo_adv.longest_query(sysdate-2,sysdate) from dual;

DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
                                        379650

SQL> select dbms_undo_adv.required_retention from dual;
[code]....
In the above situation, what should be my first choice (assuming increasing space is not an issue): increase the undo tablespace or increase undo retention?
If the latter is the choice, then what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto-tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case, shall I set it to something greater than max(maxquerylen), i.e. 379,650 + X? Or shall I increase the undo tablespace size?
From the Undo Advisor output it looks to me that even if I increase the undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DMLs). Is that right?
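Either way, the numbers that drive the decision can be read straight from V$UNDOSTAT and the datafile settings; a hedged sketch of the checks I would run before changing anything (the tablespace name matches the parameter output above):

-- how far auto-tuning has been pushing retention, and whether any
-- ORA-01555 / out-of-space conditions were recorded in the last two days
SQL> select max(tuned_undoretention) max_tuned,
            max(maxquerylen)         max_query_sec,
            sum(ssolderrcnt)         ora1555_cnt,
            sum(nospaceerrcnt)       out_of_space_cnt
       from v$undostat
      where begin_time > sysdate - 2;

-- current size and autoextend settings of the undo datafiles
SQL> select file_name, bytes/1024/1024 mb, autoextensible, maxbytes/1024/1024 max_mb
       from dba_data_files
      where tablespace_name = 'UNDO';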
I am using APEX 4.1. I have an interactive report and I would like to make the text inside the report bigger and bolder. I have seen different examples of using CSS, but none have worked for me. Is there any other way to do this, or can you give me a detailed way of using CSS?
I am working to understand the space allocation of a table in relation to the length we specify in the data type. For that I created a table with a VARCHAR2(50) column. The size of the table created is 65536 bytes, and this is before any rows have been inserted. Later, when we insert some rows, the total size of the segment still remains the same, 65536 bytes.
Then I created the table again, this time with a VARCHAR2 length of 500, and it was still created with the same size of 65536 bytes. Can you explain what the segment size depends on, and how the declared length affects size and space allocation?
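What you are describing looks like the initial extent allocation rather than anything to do with the column definition: a segment is created with at least one extent whose size comes from the tablespace's extent management settings (with locally managed autoallocate the first extent is 64 KB = 65536 bytes), and it only grows when the inserted rows no longer fit; the declared VARCHAR2 length reserves nothing on disk. A hedged sketch of where to look; the tablespace and table names are illustrative:

-- extent sizing comes from the tablespace, not from the column lengths
SQL> select tablespace_name, extent_management, allocation_type, initial_extent
       from dba_tablespaces
      where tablespace_name = 'USERS';

-- the segment grows only as rows actually need new extents
SQL> select segment_name, extents, blocks, bytes
       from dba_segments
      where segment_name = 'MY_TEST_TABLE';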
Name               Null     Type
------------------ -------- ------------------------
ENTITY_ID          NOT NULL VARCHAR2(100 CHAR)
ENTITY_TYPE_ID     NOT NULL NUMBER
SOURCE_ID          NOT NULL VARCHAR2(512 CHAR)
XML_SCHEMA_ID      NOT NULL NUMBER
JOB_ID             NOT NULL NUMBER
FINGERPRINT        NOT NULL VARCHAR2(100 CHAR)
ENTITY_XML_DATA             CLOB()
ARCHIVED                    NUMBER(1)
CREATION_DATE               TIMESTAMP(6)
MODIFICATION_DATE           TIMESTAMP(6)
ARCHIVING_DATE              TIMESTAMP(6)
CREATED_BY                  VARCHAR2(50 CHAR)
MODIFIED_BY                 VARCHAR2(50 CHAR)
The problem is that the data in the table is 40 GB, while on the DB the table holds 400 GB! How can I shrink and reuse that space, other than drop/recreate or drop/import?
The table has no initial data, so I can play with the INITIAL parameter. Data is inserted, updated and deleted all the time. I have run DBMS_ADVISOR, which recommended to SHRINK the table, and I have performed the shrink:
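In case it helps, a shrink on a table like this has to cover the LOB segment explicitly, otherwise only the table blocks are released. A hedged sketch, assuming an ASSM tablespace; the table name is illustrative and the LOB column name is taken from the listing above:

SQL> alter table my_entities enable row movement;
SQL> alter table my_entities shrink space cascade;       -- compacts the table and dependent indexes
SQL> alter table my_entities modify lob (entity_xml_data) (shrink space);   -- compacts the LOB segment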
As the undo segments are used in a round-robin fashion, is it possible that with varying load (concurrent users, size and number of transactions), the size of the Undo tablespace on a particular day is less than the Undo tablespace size of a few days back?
As a basic understanding, I know that undo is preserved for read consistency and for transaction/instance recovery. So if there are a lot of transactions on the database on 05 Feb and before, but there aren't any transactions on the 6th, 7th, 8th and 9th, then on 10th Feb can we see the Undo tablespace size being less than that of 05 Feb?
In the following case, even when the data belonging to a table is not required for any queries or transactions, the undo size is not restored upon dropping the table.
As such, for large operations and batch processes, shall we keep the undo tablespace with files set to 'Autoextend' with 'Maxsize' as 'Unlimited'?
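Regarding autoextend, a common compromise (just a sketch of one approach, not a rule) is to let the undo datafiles autoextend but cap them with an explicit MAXSIZE so a runaway batch cannot fill the disk; the path and sizes are illustrative:

SQL> alter database datafile '/u01/oradata/DB/undotbs01.dbf'
       autoextend on next 256m maxsize 32g;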
I want to increase the size of the tablespace, but when I log in as SYSDBA or as an admin user I can see only 21 tablespaces in dba_tablespaces or user_tablespaces. I want to see the tablespaces related to the application.
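DBA_TABLESPACES lists every tablespace in the database regardless of application, so to find the ones the application actually uses you can group its segments by tablespace and then grow the relevant one. A sketch; the schema name and datafile path are illustrative:

-- which tablespaces hold the application schema's segments?
SQL> select tablespace_name, round(sum(bytes)/1024/1024) mb
       from dba_segments
      where owner = 'APP_SCHEMA'
      group by tablespace_name;

-- then grow the relevant tablespace
SQL> alter database datafile '/u01/oradata/DB/app_data01.dbf' resize 20g;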
I noticed my DB is generating a lot of "small" .arc files and I am unsure why. As you can see from the v$log query, my log file size is set to 50 MB, yet BLOCKS*BLOCK_SIZE never adds up to 50 MB.
Is there anything else I can look into to see how to make the .arc files larger?
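Two things worth checking (assumptions, since the v$log output is not shown here): whether something is forcing log switches before the 50 MB files fill, such as ARCHIVE_LAG_TARGET or a scheduled 'alter system switch logfile', and what the actual switch pattern looks like in V$ARCHIVED_LOG:

SQL> show parameter archive_lag_target

-- how full the logs were when archived, and how often switches happen
SQL> select trunc(completion_time, 'HH') hour,
            count(*) switches,
            round(avg(blocks * block_size) / 1024 / 1024) avg_mb
       from v$archived_log
      group by trunc(completion_time, 'HH')
      order by 1 desc;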
We have a tablespace of size 900 GB where 90% of the space is occupied by two tables holding BLOB data. I now need to drop these two tables and then, to recover the space, I need to resize the tablespace (datafiles).
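A hedged outline of the usual sequence, with illustrative table and file names: drop and purge the tables so the space is not held in the recycle bin, then shrink the datafiles (the minimum-size check against DBA_EXTENTS shown earlier tells you how far each file can go; extents sitting near the end of a file will block the resize until they are moved).

SQL> drop table big_blob_tab1 purge;
SQL> drop table big_blob_tab2 purge;

SQL> alter database datafile '/u01/oradata/DB/blob_ts01.dbf' resize 50g;   -- size is illustrative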
I am using Oracle 10g with sga_max_size = 4 GB and a DB block size of 16 KB. Now I am creating a tablespace with a 32 KB block size; what value should I select for the parameter db_32k_cache_size?
Is there any standard way to calculate the value of this parameter?
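As far as I know there is no formula for it: DB_32K_CACHE_SIZE just has to be large enough to cache the working set of the 32 KB-block tablespace, and it is carved out of (not added to) the memory under sga_max_size. A minimal sketch, with an illustrative size and file path:

-- reserve a separate buffer cache for 32 KB blocks (size is illustrative)
SQL> alter system set db_32k_cache_size = 256m scope=both;

-- then the 32 KB tablespace can be created
SQL> create tablespace ts_32k
       datafile '/u01/oradata/DB/ts_32k01.dbf' size 2g
       blocksize 32k;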
I want to know the size of each granule in Oracle 10g. I read about it at the following link:
[URL].........
There it is described that
Quote: The memory for dynamic components in the SGA is allocated in the unit of granules. Granule size is determined by total SGA size. Generally speaking, on most platforms, if the total SGA size is equal to or less than 1 GB, then granule size is 4 MB. For SGAs larger than 1 GB, granule size is 16 MB. Some platform dependencies may arise. For example, on 32-bit Windows NT, the granule size is 8 MB for SGAs larger than 1 GB. Consult your operating system specific documentation for more details.
Now my question is about the full list of granule sizes for different platforms, like 64-bit Windows, Unix, etc.
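Rather than relying on a per-platform table, the granule size actually chosen by a given instance can be read from the instance itself, for example:

-- granule size selected by this instance (value is in bytes)
SQL> select bytes from v$sgainfo where name = 'Granule Size';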
- we have 55 blocks allocated to the table (still)
- 35 blocks are totally empty (above the HWM)
- 19 blocks contain data (the other block is used by the system)
- we have an average of about 2.8k free on each block used.
Therefore, our table
- consumes 19 blocks of storage in total.
- of which 19 blocks * 8k blocksize - 19 blocks * 2.8k free = 98k is used for our data.
I am not too sure this calculation is accurate for getting the (data) size of the table.
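One way to cross-check that arithmetic (a sketch; it assumes the table's statistics are up to date, and the table name is illustrative) is to compare the allocated blocks with what the optimizer statistics say the rows occupy:

-- allocated space vs. space actually holding rows
SQL> select s.blocks                              allocated_blocks,
            t.blocks                              used_blocks,
            round(t.num_rows * t.avg_row_len / 1024) data_kb
       from user_segments s
       join user_tables   t on t.table_name = s.segment_name
      where s.segment_name = 'MY_TABLE';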