Server Administration :: Increasing Size Of Redolog Member
Feb 27, 2008
The size of a redo log member is 12M. Can I increase the size of that member dynamically, without adding a new member to the group and dropping the old one?
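As far as I know, an existing online redo log member cannot be resized in place. A minimal sketch of the usual workaround, with hypothetical group number, file name and target size: add a larger group, switch logs until the old group is INACTIVE, then drop it.

ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04a.log') SIZE 100M;
ALTER SYSTEM SWITCH LOGFILE;
-- repeat the switch (and wait for archiving) until group 1 shows INACTIVE in V$LOG, then:
ALTER DATABASE DROP LOGFILE GROUP 1;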
I want to increase the size of a tablespace, but when I log in as SYSDBA or an admin user I can see only 21 entries in DBA_TABLESPACES or USER_TABLESPACES. I want to see the tablespaces related to the application.
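One way to see only the tablespaces an application actually uses is to go through the segments owned by the application schema. A sketch, with a hypothetical schema name:

select distinct tablespace_name
from dba_segments
where owner = 'APP_OWNER';   -- hypothetical application schema name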
My listener.trc file has grown to 56 GB and is continuously growing with the following message:
naeshow: [05-DEC-2012 19:56:49:669]  (this entry repeats continuously)
[code]...
How can I increase the ASM DATA disk group free size? I tried deleting expired backups and old archive logs that have been backed up more than once, but still no success.
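To see how much space the disk group really has left (allowing for redundancy), the ASM views can be queried directly; a small sketch:

select name, total_mb, free_mb, usable_file_mb
from v$asm_diskgroup;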
I created a primary key and wanted to compress it as per my senior's instructions. Below are my results; the size increased after compression.
select compression from dba_indexes where index_name = 'TEST_IDX';
Compression
----------
DISABLED
select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024)size_MB
[code]....
We ran compression on the primary key index TEST_IDX:
ALTER INDEX SCOTT.TEST_IDX REBUILD INITRANS 15 TABLESPACE DATA_01 COMPRESS;
ANALYZE INDEX SCOTT.TEST_IDX VALIDATE STRUCTURE;
Now when I ran the select statement below:
select compression from dba_indexes where index_name = 'TEST_IDX';
Compression
----------
ENABLED
select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024)size_MB
[code]....
As you can see, after compression the blocks and size have increased. I ran the same steps for many other tables and indexes and there the blocks and size were reduced by 50-70%, so I am not sure why this happened with this index compression.
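One likely factor (an assumption based on how key compression works, not something visible in the listing above): a single-column primary key index has no repeating key prefix to deduplicate, so COMPRESS only adds per-block overhead, and INITRANS 15 reserves extra ITL space in every block as well. After the ANALYZE ... VALIDATE STRUCTURE, the session-level INDEX_STATS view shows how much, if anything, compression is expected to save:

select name, height, lf_blks, btree_space, used_space,
       opt_cmpr_count, opt_cmpr_pctsave
from index_stats;
-- opt_cmpr_pctsave near 0 means compression will not shrink this index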
11.2.0.2 on RHEL. 3 log groups with 1 member each; db_recovery_file_dest is /oracle/oraarch. To increase the log file size, if I use ALTER DATABASE ADD LOGFILE GROUP 1 SIZE 300M; it creates the log group with 2 members: one at /oracle/oraarch and the other at /oracle/oradata (db_create_file_dest).
We are using Oracle Managed Files (OMF). I want only 1 member, at /oracle/oraarch (to keep the previous setup intact, just increasing the size from 100M to 300M). If I manually give the path where the logfile member should be created, I get this error:
ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M
*
ERROR at line 1:
ORA-00301: error in adding log file '/oracle/oraarch/DB/onlinelog/' - file cannot be created
ORA-27038: created file already exists
Additional information: 1
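The ORA-27038 here is because the path supplied is an existing directory rather than a file name. A sketch of two ways around it, with a hypothetical group number and file name; DB_CREATE_ONLINE_LOG_DEST_1 is the parameter that places OMF online logs in a single location, giving single-member groups when only that one destination is set:

-- give a complete file name instead of a directory:
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oracle/oraarch/DB/onlinelog/redo_g4_m1.log') SIZE 300M;

-- or keep OMF naming but force single-member groups under /oracle/oraarch only:
ALTER SYSTEM SET db_create_online_log_dest_1 = '/oracle/oraarch';
ALTER DATABASE ADD LOGFILE SIZE 300M;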
I am using an Oracle 8.1.5 database and my temp01.dbf file has grown to 19.8 GB; now I want to reduce its size.
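A temp file does not shrink on its own; the approach usually suggested on old releases like 8.1.5 is to create a fresh temporary tablespace, point the users at it, and drop the bloated one. This is only a sketch with hypothetical names and sizes, and the exact syntax should be checked against the 8.1.5 documentation:

CREATE TEMPORARY TABLESPACE temp2 TEMPFILE '/u01/oradata/ORCL/temp02.dbf' SIZE 2048M;
ALTER USER scott TEMPORARY TABLESPACE temp2;   -- repeat for each user
DROP TABLESPACE temp INCLUDING CONTENTS;
-- then remove the old temp01.dbf at the operating system level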
How can I increase the allocated space for a schema in the APEX Admin section?
I know you can set this when creating a schema alongside a workspace, and it looks like the only way to do it afterwards is to raise a service request for more space, then log in as ADMIN and approve it (in increments of 500 MB).
I have this output:
GROUP#  STATUS  TYPE    MEMBER                         IS_RECOVERY_DEST_FILE
1               ONLINE  E:\orcl_FILES\orcl\REDO21.LOG  NO
1               ONLINE  E:\orcl_FILES\orcl\REDO11.LOG  NO
1               ONLINE  E:\orcl_FILES\orcl\REDO31.LOG  NO
2               ONLINE  F:\orcl_FILES\orcl\REDO12.LOG  NO
2               ONLINE  F:\orcl_FILES\orcl\REDO22.LOG  NO
2               ONLINE  F:\orcl_FILES\orcl\REDO32.LOG  NO
3               ONLINE  Q:\orcl_FILES\orcl\REDO23.LOG  NO
3               ONLINE  Q:\orcl_FILES\orcl\REDO13.LOG  NO
3               ONLINE  Q:\orcl_FILES\orcl\REDO33.LOG  NO
GROUP#  MEMBERS  STATUS   ARCHIVED
1       3        UNUSED   YES
2       3        CURRENT  NO
3       3        UNUSED   YES
when I run both of these queries:
SELECT GROUP#, MEMBERS, STATUS, ARCHIVED FROM V$LOG;
select * from v$logfile;
Why is the STATUS value null (blank) in the V$LOGFILE output, and how can I update it? And in the V$LOG output, how can I change the status of redo log group 1 to CURRENT? Do I need to shut down the database to update the status?
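For what it's worth, a blank STATUS in V$LOGFILE normally just means the member is in use and valid, and a group's V$LOG status cannot be set by hand; it becomes CURRENT when the database switches into it. A sketch, with no shutdown needed:

ALTER SYSTEM SWITCH LOGFILE;   -- repeat until group 1 becomes CURRENT
SELECT GROUP#, STATUS, ARCHIVED FROM V$LOG;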
How can I find out how much the database size increases per hour or per day in Oracle?
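If AWR is available (it requires the Diagnostics Pack license), historical tablespace usage snapshots can be summed per interval. A rough single-instance sketch, with the used size reported in database blocks:

select s.begin_interval_time,
       sum(u.tablespace_usedsize) used_blocks
from dba_hist_tbspc_space_usage u
join dba_hist_snapshot s
  on s.snap_id = u.snap_id and s.dbid = u.dbid
group by s.begin_interval_time
order by s.begin_interval_time;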
View 3 Replies View RelatedI have one tablespace PSINDEX with Maxsize of 6 GB. But when I query the tablespace its showing the BYTES is greater than MAXBYTES.
I am working to understand the space allocation of a table relative to the length we declare for its columns. I created a table with a VARCHAR2(50) column; the size of the table segment created is 65536 bytes. This is before any rows are inserted. Later, when we insert some rows, the total size of the segment still remains 65536 bytes.
When I then created a table with a VARCHAR2 column of length 500, it was still created with the same size, 65536 bytes. So can you explain what values the segment size depends on, and how the declared length affects size and space allocation?
db_block_size is 8192.
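A sketch of how I would check this (table name hypothetical). Segment space is allocated in extents, not per column definition; in a locally managed tablespace with AUTOALLOCATE the first extent is 64 KB, i.e. 8 blocks of 8 KB, which is exactly the 65536 bytes seen regardless of the VARCHAR2 length:

select segment_name, bytes, blocks, extents
from user_segments
where segment_name = 'TEST_TAB';   -- hypothetical table name
-- more rows only allocate further extents once the first extent fills up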
I am trying to increase the size of the SGA, or you could say I want to put my SGA under automatic memory management. These are the steps I am trying:
SQL> show parameter sga_max_size;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_max_size big integer 96M
SQL>
After that I try to increase the size:
SQL> alter system set sga_max_size = 200m;
alter system set sga_max_size = 200m
*
ERROR at line 1: ORA-02095: specified initialization parameter cannot be modified
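ORA-02095 appears because sga_max_size is a static parameter. A minimal sketch, assuming the instance uses an spfile: change it for the next startup and restart, optionally setting sga_target (10g automatic shared memory management) to a value no larger than sga_max_size:

ALTER SYSTEM SET sga_max_size = 200M SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 200M SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP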
I have a table: desc STG_XML
Name Null Type
------------------------------ -------- ------------------------
ENTITY_ID NOT NULL VARCHAR2(100 CHAR)
ENTITY_TYPE_ID NOT NULL NUMBER
SOURCE_ID NOT NULL VARCHAR2(512 CHAR)
XML_SCHEMA_ID NOT NULL NUMBER
JOB_ID NOT NULL NUMBER
FINGERPRINT NOT NULL VARCHAR2(100 CHAR)
ENTITY_XML_DATA CLOB()
ARCHIVED NUMBER(1)
CREATION_DATE TIMESTAMP(6)
MODIFICATION_DATE TIMESTAMP(6)
ARCHIVING_DATE TIMESTAMP(6)
CREATED_BY VARCHAR2(50 CHAR)
MODIFIED_BY VARCHAR2(50 CHAR)
The problem is that the data in the table is about 40 GB, while in the database the table occupies 400 GB! How can I shrink and reuse that space, other than drop/recreate or drop/import?
The table has no initial data, so I can play with the INITIAL parameter. Data are inserted, updated and deleted all the time. I ran DBMS_ADVISOR, which recommended shrinking the table. I performed the shrink:
alter table STG_XML shrink space COMPACT;
but I haven't gained any space.
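SHRINK SPACE COMPACT only reorganizes rows below the high water mark; it does not move the HWM or give space back. A sketch of the fuller sequence, assuming the table lives in an ASSM tablespace (and noting that shrinking the LOB segment directly applies to basicfile LOBs; securefile behaviour differs by version):

ALTER TABLE STG_XML ENABLE ROW MOVEMENT;
ALTER TABLE STG_XML SHRINK SPACE CASCADE;                            -- moves the HWM and releases space
ALTER TABLE STG_XML MODIFY LOB (ENTITY_XML_DATA) (SHRINK SPACE);     -- the CLOB segment usually holds most of the 400 GB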
As the undo segments are used in round-robin fashion, is it possible that with varying load (concurrent users, size and number of transactions) the undo tablespace is smaller on a particular day than it was a few days earlier?
As a basic understanding, I know that undo is preserved for read consistency and for transaction/instance recovery. So if there are a lot of transactions on the database on 05 Feb and before, but there are no transactions on the 6th, 7th, 8th and 9th, then on 10 Feb can we see the undo tablespace size being less than it was on 05 Feb?
Likewise, when the data belonging to a table is no longer required for any queries or transactions, the undo size is still not restored upon dropping the table.
Given that, for large operations and batch processes, should we keep the undo tablespace datafiles on AUTOEXTEND with MAXSIZE UNLIMITED?
SQL> select b.tablespace_name, Total_Kbytes_Available/1024 Tot_Mbytes_Available,
Kbytes_alloc/1024 Mbytes_allocated, kbytes_free/1024 Mbytes_Free_from_allocated,
((Kbytes_alloc - kbytes_free)*100/ Total_Kbytes_Available) Pctused
2 from ( select sum(bytes)/1024 Kbytes_free,
3 tablespace_name
4 from sys.dba_free_space
[code]....
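An undo tablespace never shrinks by itself; its extents are simply reused. Rather than unlimited autoextend, a common approach is to size it from actual usage. A sketch using V$UNDOSTAT, whose rows are 10-minute buckets:

select max(undoblks)    max_undo_blocks_per_10min,
       max(maxquerylen) longest_query_seconds
from v$undostat;
-- undo needed (bytes) is roughly peak undo rate * (undo_retention + longest query) * block size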
How can we find the size of an Oracle 11g database?
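A sketch that sums the space allocated to datafiles, tempfiles, online redo logs and control files (what "database size" means varies; sometimes only DBA_DATA_FILES is counted):

select (select sum(bytes)                       from dba_data_files)
     + (select sum(bytes)                       from dba_temp_files)
     + (select sum(bytes * members)             from v$log)
     + (select sum(block_size * file_size_blks) from v$controlfile) total_bytes
from dual;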
I noticed my DB is generating a lot of "small" .arc files and I am unsure why. As you can see from the V$LOG query, my log file size is set to 50 MB, yet BLOCKS*BLOCK_SIZE never adds up to 50 MB.
Is there anything else I can look into to see how to make the .arc files larger?
SQL> select group#, thread#, bytes from v$log;
GROUP# THREAD# BYTES
---------- ---------- ----------
1 1 52428800
2 1 52428800
3 2 52428800
4 2 52428800
select blocks, block_size, blocks*block_size from v$archived_log where sequence# between 63876 and 72851 and thread# = 1
BLOCKS BLOCK_SIZE BLOCKS*BLOCK_SIZE
---------- ---------- -----------------
28 512 14336
28 512 14336
28 512 14336
55 512 28160
[code]...
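Small archived logs usually mean the online logs are being switched before they fill: a non-zero archive_lag_target, manual or RMAN-driven switches, or a standby configuration are common causes (an assumption; the listing above does not show which applies). A quick check:

select name, value
from v$parameter
where name = 'archive_lag_target';   -- a non-zero value (seconds) forces time-based log switches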
We have a tablespace of 900 GB where 90% of the space is occupied by two tables holding BLOB data. Now I need to drop these two tables and then, to recover the space, resize the tablespace (datafiles).
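A sketch with hypothetical table names, file name and target size: drop past the recycle bin, then shrink each datafile. A datafile can only be resized down as far as its highest allocated extent, so some reorganization may still be needed if other segments sit at the end of the files:

DROP TABLE blob_tab1 PURGE;
DROP TABLE blob_tab2 PURGE;
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/big_ts01.dbf' RESIZE 50G;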
I am using Oracle 10g with sga_max_size = 4 GB and a DB block size of 16K. Now I am creating a tablespace with a block size of 32 KB; what value should I pick for the parameter db_32k_cache_size?
Is there any standard way to calculate the value of this parameter?
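There is no fixed formula; the 32K cache just has to exist before any 32K-blocksize tablespace is used, and its size depends on how much of the workload will actually read those blocks. A sketch with hypothetical values and names:

ALTER SYSTEM SET db_32k_cache_size = 512M;
CREATE TABLESPACE ts_32k DATAFILE '/u01/oradata/ORCL/ts_32k01.dbf' SIZE 1G BLOCKSIZE 32K;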
One of our Solaris machines is running Oracle 8.0.3.
A table reached 2 GB in size and Oracle failed due to the operating system's file size limitation.
The information in the table is not relevant and can be deleted, but the table contains a lot of indexes.
I would like to know the best procedure to delete the information and reduce the size of the file.
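For that 8.0.3 case, a sketch with hypothetical names: TRUNCATE with DROP STORAGE deallocates the data while keeping the table and its indexes defined, after which the datafile can be shrunk, though only down to its highest allocated extent:

TRUNCATE TABLE big_table DROP STORAGE;
ALTER DATABASE DATAFILE '/u02/oradata/PROD/big01.dbf' RESIZE 500M;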
I have Oracle 11gR2 running on a Windows XP machine. The disk has a total size of 150 GB with 95 GB free.
I checked the size of the database I created: it shows a total size of 2 GB, all of it used. If I want to increase the total size of the database to 50 GB, what should I do? And which disk space figure matters here, the Windows one or the Oracle one?
I want to know the size of each granule in Oracle 10g. I read about it at the following link:
[URL].........
There it is described that
Quote:
The memory for dynamic components in the SGA is allocated in the unit of granules. Granule size is determined by total SGA size. Generally speaking, on most platforms, if the total SGA size is equal to or less than 1 GB, then granule size is 4 MB. For SGAs larger than 1 GB, granule size is 16 MB. Some platform dependencies may arise. For example, on 32-bit Windows NT, the granule size is 8 MB for SGAs larger than 1 GB. Consult your operating system specific documentation for more details.
Now my question is about the full list of granule sizes for different platforms such as 64-bit Windows, Unix, etc.
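Rather than relying on a per-platform table, the granule size the instance is actually using can be read directly; a small sketch:

select bytes
from v$sgainfo
where name = 'Granule Size';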
ops$tkyte@DEV8I.WORLD> select blocks, empty_blocks,
2 avg_space, num_freelist_blocks
3 from user_tables
4 where table_name = 'T'
5 /
BLOCKS EMPTY_BLOCKS AVG_SPACE NUM_FREELIST_BLOCKS
---------- ------------ ---------- -------------------
19 35 2810 3
Ok, the above shows us:
- we have 55 blocks allocated to the table (still)
- 35 blocks are totally empty (above the HWM)
- 19 blocks contains data (the other block is used by the system)
- we have an average of about 2.8k free on each block used.
Therefore, our table
- consumes 19 blocks of storage in total.
- of which 19 blocks * 8k blocksize - 19 block * 2.8k free = 98k is used for our data.
I am not too sure this calculation is accurate for getting the size (data) of the table.
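A cross-check that avoids the AVG_SPACE arithmetic: DBMS_SPACE.UNUSED_SPACE reports allocated blocks and the blocks above the high water mark for a segment. A sketch, assuming the table is called T and is owned by the current user:

set serveroutput on
declare
  l_total_blocks  number;  l_total_bytes  number;
  l_unused_blocks number;  l_unused_bytes number;
  l_file_id       number;  l_block_id     number;
  l_last_block    number;
begin
  dbms_space.unused_space(
    segment_owner             => user,
    segment_name              => 'T',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  dbms_output.put_line('Allocated blocks         : ' || l_total_blocks);
  dbms_output.put_line('Blocks above HWM (unused): ' || l_unused_blocks);
end;
/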
select username,account_status,default_tablespace from dba_users;
I ran the above query and it returned 80 users, but when I ran the query below it showed just 14 rows.
select owner, sum(bytes)/(1024*1024) "MB" from dba_segments group by owner;
How can I get the size of all 80 users in the database?
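DBA_SEGMENTS only lists owners that actually own segments, which is why only 14 of the 80 users show up. A sketch using an outer join so that users with no segments appear with 0:

select u.username,
       nvl(sum(s.bytes)/1024/1024, 0) mb
from dba_users u
left join dba_segments s
  on s.owner = u.username
group by u.username
order by mb desc;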
We have a database (DB A) on version 9.2.0.6.0. This DB has multiple tables, with volumes of around 6 million rows in individual tables. Another database (DB B), also 9.2.0.6.0, has materialized views pointing to DB A. The MViews are refreshed every 15 minutes, with fast refresh in 90% of cases and complete refresh for the rest.
Last weekend we migrated DB B to version 10.2.0.4.0 64-bit on another server. After the version upgrade and migration, a complete refresh was done once for all MViews.
Now DB A is generating a huge amount of archive log, and its UNDO space is getting fully consumed, causing performance issues and DB hangs. What has gone wrong, and what can we do to improve the response of DB A and reduce the amount of archive log?
1-How can I alter/change the size of tablespaces? (See the sketch after the listing below.)
2-Will changing a tablespace's size affect overall performance?
Tablespace    Size (MB)   Free (MB)   % Free   % Used
USERS              5         4           80       20
SYSAUX           600       140.875       23       77
UNDOTBS1         640       114.125       18       82
SYSTEM           700        28.3125       4       96
TEMP              64         0            0      100
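For question 1, a tablespace grows by resizing an existing datafile or adding another one; a sketch with hypothetical file names and sizes. Resizing in itself has no direct performance effect, although running critical tablespaces (SYSTEM, UNDO, TEMP) nearly full can cause errors under load:

ALTER DATABASE DATAFILE '/u01/oradata/ORCL/sysaux01.dbf' RESIZE 1G;
ALTER TABLESPACE sysaux ADD DATAFILE '/u01/oradata/ORCL/sysaux02.dbf'
  SIZE 500M AUTOEXTEND ON NEXT 100M MAXSIZE 2G;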
How can I reduce the SYSTEM tablespace size? My system01.dbf is 6 GB and I want to reduce it from 6 GB to 2 GB.
I have checked the space of the tablespaces/datafiles in my database. I have 8 GB of space left on the database server. I can't add another hard disk as there is no slot left; we are planning to buy a new server with the latest configuration.
My question is: how can we know in advance up to what size our database can grow and when a datafile needs to be added? Sometimes, even though the datafiles have space left, we get errors that extents cannot be extended. We have coalesced the tablespaces and added a new datafile.
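"Unable to extend" errors (ORA-01653 and friends) happen when no single contiguous free chunk is big enough for the next extent, even if the total free space looks sufficient. A sketch to watch both figures per tablespace:

select tablespace_name,
       sum(bytes)/1024/1024 total_free_mb,
       max(bytes)/1024/1024 largest_free_chunk_mb
from dba_free_space
group by tablespace_name
order by tablespace_name;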
I am storing customers' snaps in a table (column data type LONG RAW) using Oracle Forms WebUtil. There are now 250 snaps in the table. The files are JPGs with an average size of 30 KB.
I made a backup using the export utility before storing these snaps, and the exported DMP file's size was 36 MB. Now, after storing just these 250 snaps of 30 KB each, the DMP file has grown to over 300 MB.
Do I need to change the column's datatype, or something in the Oracle Forms image item? On the Windows file system the total size of these files is just 8 MB.
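LONG RAW is a legacy type and tends to be handled inefficiently by export; one option often suggested (a sketch only, with hypothetical table and column names, and version support should be checked first) is to migrate the column to a BLOB:

ALTER TABLE customer_snaps MODIFY (snap_image BLOB);
-- alternative on older releases: rebuild via CREATE TABLE ... AS SELECT ..., TO_LOB(snap_image) ...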
I am having I/O issues when I create 20 GB datafiles in a smallfile tablespace. What is the maximum datafile size I can create on a Windows 2003 32-bit server?
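As a reference point (OS limits aside), a smallfile tablespace datafile is capped at 4194303 (2^22 - 1) blocks, so the ceiling depends on the block size; a quick sketch:

select value * 4194303 / 1024 / 1024 / 1024 max_datafile_gb
from v$parameter
where name = 'db_block_size';   -- roughly 32 GB with the default 8K block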