Server Administration :: XE Limitations - How To Calculate Size Of User Data
Apr 15, 2011
I have read the following statement from a link [URL]...
Oracle Database XE can be installed on any size host machine with any number of CPUs (one database per machine), but XE will store up to 4GB of user data, use up to 1GB of memory, and use one CPU on the host machine.
How can we calculate this 4GB of user data?
By simply going to the DBF files and checking their size?
or
By exporting a dump and checking the size of that dump?
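Neither measure is exact: datafile size includes free space and overhead, and a dump file is reformatted on export. One common approximation, offered as a sketch (the list of system schemas to exclude is an assumption, not an official definition), is to sum the allocated segment bytes of non-system owners:

-- Sketch: approximate "user data" as space allocated to segments owned
-- by non-system schemas (the exclusion list below is an assumption).
select round(sum(bytes)/1024/1024/1024, 2) as user_data_gb
  from dba_segments
 where owner not in ('SYS', 'SYSTEM', 'OUTLN', 'XDB', 'DBSNMP', 'CTXSYS', 'MDSYS');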
I need to resize my datafiles, as I have allocated more space than needed (the data load is complete now). My tablespace has 11.74 GB of free space and consists of 3 datafiles.
TABLESPACE                 TOTAL      USED      FREE     PCT_FREE    LARGEST  FRAGMENTS
------------------------  -------  --------  --------  ----------  -------  ---------
CFC_DATA                  150528   138780.6  11747.4   7.80412946  1251     992
TABLESPACE_NAME     FILE_ID  FILE_NAME                                    Size(MB)
------------------  -------  -------------------------------------------  ----------
CFC_DATA            71       +DATA/dedw/datafile/cfc_data.4074.731085435  65535.9688
CFC_DATA            334      +DATA/dedw/datafile/cfc_data.4473.757566557  20480
CFC_DATA            1710     +DATA/dedw/datafile/cfc_data.2012.728095695  64512

I used the script below to find the HWM in order to resize the datafiles. db_block_size is 16KB. [code]....
In TOAD there is a "Minimum Size" button against each datafile. What is the SQL that TOAD runs behind the scenes when this button is pressed?
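I cannot confirm the exact statement TOAD executes, but a sketch of the classic minimum-resize calculation looks like this: take the highest allocated block per file from DBA_EXTENTS and convert it to megabytes (16384 because db_block_size is 16KB here):

-- Sketch: smallest size each datafile can be resized to, based on the
-- highest allocated block (the HWM) in the file.
select a.file_name,
       ceil( (nvl(b.hwm, 1) * 16384) / 1024 / 1024 ) as smallest_mb
  from dba_data_files a,
       ( select file_id, max(block_id + blocks - 1) hwm
           from dba_extents
          group by file_id ) b
 where a.file_id = b.file_id(+)
   and a.tablespace_name = 'CFC_DATA';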
I am facing a problem with the user_dump_dest directory. I have noticed a lot of trace files there, each many MB in size. I cleaned it out, and after 4 days it had grown to 40GB again.
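One stop-gap, offered as a sketch (the 10M value is an assumption to tune for your environment), is to cap the size of each individual trace file; finding out which sessions or events are generating the traces is still the real fix:

-- Limit each trace file to 10MB (example value); this contains the
-- growth but does not address the root cause of the tracing.
alter system set max_dump_file_size = '10M' scope=both;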
I'm currently assessing the design/performance of a distributed system in which hundreds of field reps have local Oracle DBs (10.2.0.4) on laptops and have to update a remote database (11.2.0.1) via a PUBLIC database link. Field data (millions of records) collected daily is synched from the local to the remote DB and vice versa through this database link. I have 2 concerns here:
1. Is the database link the best option for such a configuration? (recently field reps have been complaining about the slowness in synchronizing data between local & remote DBs). If not, what other options are available for such processing?
2. I've read a lot about security concerns with using PUBLIC database links, but haven't seen any documents proving they're a major security issue. Why are PUBLIC database links considered not very secure?
- we have 55 blocks allocated to the table (still)
- 35 blocks are totally empty (above the HWM)
- 19 blocks contain data (the other block is used by the system)
- we have an average of about 2.8k free on each used block
Therefore, our table
- consumes 19 blocks of storage in total,
- of which 19 blocks * 8k blocksize - 19 blocks * 2.8k free = 98.8k is used for our data.
I am not too sure this calculation is accurate for getting the size of the data in the table.
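One way to cross-check the arithmetic, offered as a sketch (MY_TABLE is a placeholder; run as the table owner), is DBMS_SPACE.UNUSED_SPACE, which reports the blocks above and below the HWM directly:

set serveroutput on
declare
  l_total_blocks  number;  l_total_bytes  number;
  l_unused_blocks number;  l_unused_bytes number;
  l_file_id number; l_block_id number; l_last_block number;
begin
  dbms_space.unused_space(
    segment_owner             => user,
    segment_name              => 'MY_TABLE',   -- placeholder
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_file_id,
    last_used_extent_block_id => l_block_id,
    last_used_block           => l_last_block);
  -- Blocks below the HWM are the ones that hold (or have held) data.
  dbms_output.put_line('Allocated blocks : ' || l_total_blocks);
  dbms_output.put_line('Blocks above HWM : ' || l_unused_blocks);
  dbms_output.put_line('Blocks below HWM : ' || (l_total_blocks - l_unused_blocks));
end;
/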
I am having I/O issues when I create 20 GB datafiles in a smallfile tablespace. Guide me on the maximum datafile size I can create on a Windows 2003 32-bit server.
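For reference, a smallfile datafile is capped at 2^22 - 1 = 4,194,303 blocks, so the byte limit depends on db_block_size rather than on the OS alone; a sketch of the arithmetic:

-- 4194303 blocks * block size; with 8k blocks this is roughly 32GB,
-- with 16k blocks roughly 64GB.
select value as block_size,
       round((power(2,22) - 1) * to_number(value) / power(1024,3)) as max_file_gb
  from v$parameter
 where name = 'db_block_size';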
We will create a new instance in our production server, but this time part of its table structure has a BLOB data type (re: <column name> blob(3000)). It's our first time handling this kind of Oracle data type. What would be my estimated size for its default tablespace?
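A rough starting point, purely as a sketch with assumed figures, is expected row count times average BLOB bytes, plus LOB overhead (chunk rounding, the LOB index, retention/undo):

-- Example arithmetic only: 1,000,000 rows averaging 3000 BLOB bytes
-- each is about 2.8GB of raw LOB data before overhead.
select round(1000000 * 3000 / power(1024,3), 2) as rough_blob_gb from dual;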
Is there any way I can calculate the percentage of space used in a block? E.g. if a table's size is 100 blocks, how can I check the percentage of used space in each block?
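For ASSM tablespaces, DBMS_SPACE.SPACE_USAGE buckets a segment's blocks by how full they are, which is about as close as the dictionary gets to per-block usage; a sketch (MY_TABLE is a placeholder):

set serveroutput on
declare
  l_unf  number; l_unf_b  number;
  l_fs1  number; l_fs1_b  number;   -- blocks 0-25% free
  l_fs2  number; l_fs2_b  number;   -- blocks 25-50% free
  l_fs3  number; l_fs3_b  number;   -- blocks 50-75% free
  l_fs4  number; l_fs4_b  number;   -- blocks 75-100% free
  l_full number; l_full_b number;   -- full blocks
begin
  dbms_space.space_usage(user, 'MY_TABLE', 'TABLE',
    l_unf, l_unf_b, l_fs1, l_fs1_b, l_fs2, l_fs2_b,
    l_fs3, l_fs3_b, l_fs4, l_fs4_b, l_full, l_full_b);
  dbms_output.put_line('Full blocks        : ' || l_full);
  dbms_output.put_line('Blocks 0-25%  free : ' || l_fs1);
  dbms_output.put_line('Blocks 25-50% free : ' || l_fs2);
  dbms_output.put_line('Blocks 50-75% free : ' || l_fs3);
  dbms_output.put_line('Blocks 75-100% free: ' || l_fs4);
end;
/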
I was reading the documentation for Oracle 11gR2, with reference to [URL]...
The following examples show how to correctly choose the cluster key and set the HASH IS, SIZE, and HASHKEYS parameters. For all examples, assume that the data block size is 2K and that on average, 1950 bytes of each block is available data space (block size minus overhead).Note that 34 hash keys are assigned for each data block
How do they arrive at 34 hash keys? Another portion of the document states:
This space determines the maximum number of cluster or hash values stored in a data block. If SIZE is not a divisor of the data block size, then Oracle Database uses the next largest divisor.
If that is the case, then the number of hash keys should be 1900/55 = 34.55, which should have been rounded up to 35.
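One plausible reading, offered as an assumption rather than a confirmed answer: a whole hash key entry must fit inside the block, so the per-block count truncates rather than rounds up (34 keys consume 34 * 55 = 1870 bytes, and the remaining 30 bytes cannot hold a 35th key); the "next largest divisor" adjustment applies to SIZE itself, not to the key count:

-- A fractional 35th key would not fit, so the count truncates to 34.
select floor(1900 / 55) as keys_per_block from dual;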
When trying to calculate the occupied space for a table, I'm using DBA_SEGMENTS, which works fine as long as the table does not have a BLOB column.
As far as I can tell, the size of the BLOB data is stored with the SEGMENT_TYPE = 'LOBSEGMENT', but I cannot find a view that tells me which DBA_SEGMENT row belongs to the BLOB column in the table I'm checking.
To give you an example:
select sum(BYTES) from DBA_SEGMENTS where owner = user and segment_name = 'MY_TABLE' group by SEGMENT_NAME
returns 262144
running:
SELECT sum(length(blob_column)) FROM my_table
returns 821333
There are entries in DBA_SEGMENTS for my user with the type LOBSEGMENT, but I cannot find a way to map the correct DBA_SEGMENTS row to the table I am checking.
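The mapping being asked for lives in DBA_LOBS, which records the LOB segment name for each LOB column; a sketch of the join (MY_TABLE is a placeholder for the table being checked):

-- Link each LOB column to its LOBSEGMENT row in DBA_SEGMENTS.
select l.column_name,
       s.segment_name,
       sum(s.bytes) as lob_segment_bytes
  from dba_lobs l
  join dba_segments s
    on s.owner = l.owner
   and s.segment_name = l.segment_name
 where l.owner = user
   and l.table_name = 'MY_TABLE'   -- placeholder
 group by l.column_name, s.segment_name;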
I have a strange problem when creating a view in one user's schema based on another user's table.
I have a user called "Cash_tst"
Its creation syntax is:
-- Create the user
create user CASH_TST
  identified by ""
  default tablespace CASH
  temporary tablespace TEMP
  profile DEFAULT
  quota unlimited on cash;
-- Grant/Revoke object privileges
grant connect to CASH_TST;
grant dba to CASH_TST;
grant resource to CASH_TST;
-- Grant/Revoke system privileges
grant create any view to CASH_TST;
grant unlimited tablespace to CASH_TST;
I want to create a view
CREATE VIEW TAMER AS SELECT * FROM AROFL.RA_CUSTOMER_TRX_LINES_ALL_BEFO
AROFL is another user on the same database. When I try to create the view TAMER I get an "insufficient privileges" message, although I granted CREATE ANY VIEW to the user CASH_TST.
I am working to understand the space allocation of a table relative to the length we specify in the data type. For that I created a table with a varchar2(50) column. The size of the table created is 65536 bytes, and this is before any rows are inserted. Later, when we insert some rows, the total size of the segment still remains the same, 65536 bytes.
Now, when I created the table again with varchar2 length 500, it was still created with the same size, 65536 bytes. So can you explain what values the segment size depends on, and how the column length affects the size and space allocation?
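The likely explanation, offered as standard behavior rather than a diagnosis of this exact database: segment size is driven by the tablespace's extent allocation policy, not by column definitions, since a varchar2 column reserves no space until a row needs it. With a locally managed AUTOALLOCATE tablespace the first extent is typically 64KB, which matches the 65536 bytes observed. A sketch for checking:

-- EXTENT_MANAGEMENT / ALLOCATION_TYPE drive the initial segment size.
select tablespace_name, extent_management, allocation_type, initial_extent
  from dba_tablespaces
 where tablespace_name = 'USERS';   -- placeholder tablespace name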
I am trying to increase the size of the SGA, or you could say I want to move my SGA to automatic memory management. Following are the steps I am trying:
SQL> show parameter sga_max_size;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_max_size                         big integer 96M

SQL>
After that, I try to increase the size:
SQL> alter system set sga_max_size = 200m;
alter system set sga_max_size = 200m
*
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified
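The reason, as a standard explanation: sga_max_size is a static parameter, so it cannot be modified in the running instance; it has to be changed in the spfile and picked up at the next restart. A sketch:

-- Static parameter: change the spfile, then bounce the instance.
alter system set sga_max_size = 200m scope=spfile;
shutdown immediate
startup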
Name               Null      Type
-----------------  --------  ------------------------
ENTITY_ID          NOT NULL  VARCHAR2(100 CHAR)
ENTITY_TYPE_ID     NOT NULL  NUMBER
SOURCE_ID          NOT NULL  VARCHAR2(512 CHAR)
XML_SCHEMA_ID      NOT NULL  NUMBER
JOB_ID             NOT NULL  NUMBER
FINGERPRINT        NOT NULL  VARCHAR2(100 CHAR)
ENTITY_XML_DATA              CLOB()
ARCHIVED                     NUMBER(1)
CREATION_DATE                TIMESTAMP(6)
MODIFICATION_DATE            TIMESTAMP(6)
ARCHIVING_DATE               TIMESTAMP(6)
CREATED_BY                   VARCHAR2(50 CHAR)
MODIFIED_BY                  VARCHAR2(50 CHAR)
The problem is that the data in the table amounts to 40GB, while on the DB the table holds 400GB! How can I shrink and reuse that space, other than by drop/recreate or drop/import?
The table has no initial data, so I can play with the INITIAL parameter. Data are inserted, updated and deleted all the time. I have run DBMS_ADVISOR, which recommended to SHRINK the table. I have performed the shrink:
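For reference, a sketch of a shrink sequence that also covers the CLOB (the table name is a placeholder; shrink requires an ASSM tablespace, which is assumed here):

-- Row movement must be on before a shrink.
alter table my_table enable row movement;
-- Shrink the CLOB segment explicitly; SHRINK SPACE CASCADE alone does
-- not always reclaim out-of-line LOB space.
alter table my_table modify lob (entity_xml_data) (shrink space);
-- Shrink the table and its dependent segments, lowering the HWM.
alter table my_table shrink space cascade;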
As the undo segments are used in round-robin fashion, is it possible, with varying load (concurrent users, size and number of transactions), that the size of the undo tablespace on a particular day is less than the undo tablespace size a few days back?
As a basic understanding, I know that undo is preserved for read consistency and for transaction/instance recovery. So if there are a lot of transactions on the database on 05 Feb and before, but no transactions on 06-09 Feb, then on 10 Feb can we see an undo tablespace size less than that of 05 Feb?
In the following case, even when the data belonging to a table is not required for any queries or transactions, the undo size is not restored upon dropping the table.
As such, for large operations and batch processes, should we keep the undo tablespace files as 'Autoextend' with 'Maxsize' set to 'Unlimited'?
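On the sizing question, the usual behavior is that datafiles never shrink on their own: on quiet days the undo extents inside the files expire and become reusable, but the tablespace itself stays at its high-water-mark size unless the files are manually resized. A sketch for watching the extent states:

-- Break down undo space by extent state; EXPIRED space is reusable,
-- so a large EXPIRED share means the tablespace is bigger than needed.
select status, round(sum(bytes)/1024/1024) as mb
  from dba_undo_extents
 group by status;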
I want to increase the size of a tablespace, but when I log in as sysdba or an admin user I can see only 21 entries in dba_tablespaces or user_tablespaces. I want to see the tablespaces related to the application.
I noticed my DB is generating a lot of "small" .arc files and I am unsure why. As you can see from the v$log query, my log file size is set to 50MB, but BLOCKS*BLOCK_SIZE never adds up to 50MB.
Is there anything else I can look into to see how to make the .arc files larger?
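One common cause, mentioned as a guess since the v$log output is not shown here: something is forcing log switches before the files fill, such as a non-zero archive_lag_target, a scheduled "alter system switch logfile" job, or RMAN/standby activity. A cheap first check:

-- A non-zero value forces a log switch every N seconds regardless of
-- how full the current online log is.
select value from v$parameter where name = 'archive_lag_target';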
We have a tablespace of size 900 GB where 90% of the space is occupied by two tables holding BLOB data. Now I need to drop these two tables and then, to recover the space, I need to resize the tablespace (datafiles).
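A sketch of the sequence (table names and the file path are placeholders): drop the tables, then resize each datafile down toward its high-water mark; a resize below the highest allocated block fails with ORA-03297, so the HWM query from earlier in this thread applies here too:

-- PURGE skips the recycle bin so the space is freed immediately.
drop table big_blob_tab1 purge;
drop table big_blob_tab2 purge;
-- Then shrink each datafile; the target must stay above the file's HWM.
alter database datafile '+DATA/dedw/datafile/example.dbf' resize 50g;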
I am using Oracle 10g with sga_max_size = 4GB and a DB block size of 16k. Now I am creating a tablespace with a 32 kb block size; what value should I select for the parameter db_32k_cache_size?
Is there any standard way to calculate the value of this parameter?
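There is no fixed formula; the non-default block-size cache just needs to hold the working set of the objects in the 32k tablespace, carved out of the overall SGA budget. A sketch (the 256m figure is only an example starting point, not a recommendation):

-- Must be set before the 32k tablespace can be used; value is an example.
alter system set db_32k_cache_size = 256m scope=both;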
I have Oracle 11gR2 running on a Windows XP machine. The Windows XP disk has a total size of 150 GB and free space of 95 GB.
I checked the size of the database that I created. It showed the total size of the database as 2 GB and the used space as 2 GB. If I want to increase the total size of the database to 50 GB, what should I do? And which disk space applies here, Windows' or Oracle's?
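As a standard explanation: the "database size" is simply the sum of its datafiles, and it grows inside the Windows free space (the 95 GB) as files are added or extended; there is no separate 50 GB setting to flip. A sketch of letting an existing file grow on demand (the path is a placeholder):

-- Let a datafile grow automatically up to 50GB as data arrives.
alter database datafile 'C:\ORACLE\ORADATA\ORCL\USERS01.DBF'
  autoextend on next 100m maxsize 50g;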