SQL & PL/SQL :: Find Total Space Occupied On Disk By Tablespaces Of Database
Nov 3, 2010
I am trying to find the space occupied on disk by the tablespaces of the database that contain tables, some (and not all) of whose columns are encrypted. My query is like this:
select distinct a.tablespace_name, file_name, bytes /(1024*1024*1024) File_Size_In_GB
from dba_data_files a, dba_tables b,
(select distinct owner, table_name from DBA_ENCRYPTED_COLUMNS) c
where
a.tablespace_name = b.tablespace_name and
b.owner = c.owner and
b.table_name = c.table_name
order by a.tablespace_name;
The output of the query is as shown in the attached file:
Since the output (under the heading Total Size of the tablespace) is probably the sum of all the datafiles returned by the query and is obviously incorrect, I have not given the rest of it. I also tried the following:
select distinct a.tablespace_name, file_name, bytes /(1024*1024*1024) File_Size_In_GB,
sum (bytes/(1024*1024*1024))over (partition by a.tablespace_name order by file_name) "Total Size of the tablespace"
from dba_data_files a, dba_tables b,
(select distinct owner, table_name from DBA_ENCRYPTED_COLUMNS) c
where
a.tablespace_name = b.tablespace_name and
b.owner = c.owner and
b.table_name = c.table_name
order by a.tablespace_name ;
[code]...
Here, the figures under the heading "Total Size of the tablespace" are probably the sum of all the rows the query would return if DISTINCT were not used, i.e. all of the data file sizes returned by the query.
How can I tune my query to get the desired results? I think this can be achieved with GROUP BY along with ROLLUP, CUBE, ORDER BY and grouping functions, but I am not sure how to proceed. I know that I can get the results using the Enterprise Manager Console in two minutes, but I would still like to get them with queries.
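For what it's worth, a possible rewrite is sketched below. It pushes the join to DBA_TABLES/DBA_ENCRYPTED_COLUMNS into an IN subquery so each data file is counted only once, and then sums the file sizes per tablespace. Treat it as a sketch, not a tested solution:

-- total size on disk of tablespaces that contain tables with encrypted columns
select df.tablespace_name,
       round(sum(df.bytes) / power(1024, 3), 2) as total_size_gb
from   dba_data_files df
where  df.tablespace_name in
       (select t.tablespace_name
        from   dba_tables t
               join dba_encrypted_columns ec
                 on  ec.owner      = t.owner
                 and ec.table_name = t.table_name)
group by df.tablespace_name
order by df.tablespace_name;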
We want to know the number of instances, the number of RAC databases, and the total disk space used by Oracle (not the file system size).
1. Can any script be run from OEM Grid Control against all instances/databases? Or
2. We have a repository Unix server which has the tnsnames entries of all the databases; is there any script we can run from there?
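One option, sketched below, is a single query run against each instance (for example via an OEM Grid Control job, or from the repository server by looping over the tnsnames entries). It adds up datafiles, tempfiles, online redo logs and control files, so it measures space allocated by Oracle rather than file system size:

-- rough total size of one database, in GB (a sketch)
select round((
         (select nvl(sum(bytes), 0) from dba_data_files)
       + (select nvl(sum(bytes), 0) from dba_temp_files)
       + (select nvl(sum(bytes * members), 0) from v$log)
       + (select nvl(sum(block_size * file_size_blks), 0) from v$controlfile)
       ) / power(1024, 3), 2) as total_db_size_gb
from dual;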
I need to check the exact amount of space used (in bytes or MB) by a table that has a BLOB column. I tried the following query, but it is not giving the proper usage.
select segment_name, sum(bytes)
from dba_extents
where segment_type = 'TABLE'
and segment_name in ('TEST_CLOB','TEST_BLOB','TEST_CLOB_ADV','TEST_BLOB_ADV')
group by segment_name;

I even tried the following stored procedure:

create or replace procedure sp_get_table_size (p_table_name varchar2)
as
  l_segment_name        varchar2(30);
  l_segment_size_blocks number;
  l_segment_size_bytes  number;
  l_used_blocks         number;
  l_used_bytes          number;
  l_expired_blocks      number;
  l_expired_bytes       number;
  l_unexpired_blocks    number;
  l_unexpired_bytes     number;
begin
  select [code].......
But it is giving the error
Error starting at line 298 in command:
exec sp_get_table_size ('TEST_CLOB_ADV')
Error report:
ORA-03213: Invalid Lob Segment Name for DBMS_SPACE package
ORA-06512: at "SYS.DBMS_SPACE", line 210
ORA-06512: at "SYS.SP_GET_TABLE_SIZE", line 20
ORA-06512: at line 1
03213. 00000 - "Invalid Lob Segment Name for DBMS_SPACE package"
*Cause:    The Lob Segment specified in the DBMS_SPACE operation does not exist.
*Action:   Fix the Segment Specification
This happens even though the LOB segment is specified in the CREATE TABLE syntax.
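On the error itself: DBMS_SPACE.SPACE_USAGE expects the LOB segment name (the SYS_LOB... name from DBA_LOBS), not the table name, which is the usual cause of ORA-03213. For the size question, a sketch that also counts the LOB segments and LOB indexes belonging to these tables is below; the schema name is an assumption, so substitute your own:

select nvl(l.table_name, s.segment_name)     as table_name,
       round(sum(s.bytes) / 1024 / 1024, 2)  as size_mb
from   dba_segments s
       left join dba_lobs l
              on  l.owner = s.owner
              and s.segment_name in (l.segment_name, l.index_name)
where  s.owner = 'SCOTT'   -- assumption: replace with the owning schema
and   (s.segment_name in ('TEST_CLOB','TEST_BLOB','TEST_CLOB_ADV','TEST_BLOB_ADV')
       or l.table_name in ('TEST_CLOB','TEST_BLOB','TEST_CLOB_ADV','TEST_BLOB_ADV'))
group by nvl(l.table_name, s.segment_name);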
I look after a team of DBAs and I have a request to free up space on our very expensive storage system. However, the answers on how to do this differ, and I'd like to ask for external input. Not being a technical person, I see the world as quite black and white: you delete data and you free space. But after doing much reading I understand this is not the case; deleting essentially creates fragmentation within the datafile, leaving the database lots of space to write into but not actually freeing space, and even if you shrink the file it doesn't free the space without a reorg?
As an example, we have a DB with 2 billion rows of data in one table; no partitioning, just one large table. We have worked out that we can probably delete 1 billion rows, or better still, keep only a rolling 3-month window of data. What would be the suggestion for deleting this data and reclaiming the disk space so that additional disk space is actually made available at the OS level?
How about deleting the data and then reclaiming the space? From my reading it looks like it might be something like: delete, then create new tablespace partitions from the remaining data. In theory this would create a new tablespace in newly created data files, so the data is reorganised and takes up less physical space; when completed you point to the newly created partitions and drop the old tables.
How have others done this? It must be a common problem, and people must have built different solutions. What commands or procedures have been used? A rough sketch of one approach is below.
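One common pattern is to rebuild the table with only the rows you want to keep in a new tablespace, swap the tables, and then drop the old tablespace so its datafiles are actually returned to the OS. Simply deleting rows rarely helps because the freed blocks sit below the segment's high water mark inside the datafile. The sketch below uses assumed names (BIG_TABLE, a date column CREATED_ON, tablespaces OLD_TS/NEW_TS); test it on a copy first:

-- 1) copy only the rolling 3-month window into a new tablespace
create table big_table_new tablespace new_ts nologging
  as select * from big_table
     where created_on >= add_months(trunc(sysdate, 'MM'), -3);

-- 2) recreate indexes, constraints, grants and triggers on big_table_new, then swap
alter table big_table     rename to big_table_old;
alter table big_table_new rename to big_table;
drop table big_table_old purge;

-- 3) once nothing is left in the old tablespace, this releases the space at OS level
drop tablespace old_ts including contents and datafiles;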
I guessed that was because of the LUN size (it exceeded 2 TB). After that I dynamically shrank the LUN on our external storage, rebooted, and performed the cfgmgr command on both nodes, but I still do not have enough free space.
In my environment Oracle Database 11gR1 is running and Data Guard is configured, i.e. 1 primary and 1 standby. In the near future space issues will arise on the standby. I want to create one more standby with maximum disk space, but how? Active Data Guard is configured and reports are generated from the standby. What changes should be made in the primary pfile and the new standby pfile?
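For the primary side, the usual changes are just an extra archive destination and an extended DG_CONFIG. A sketch is below; the DB_UNIQUE_NAMEs prim, stby1 and stby2 are assumptions, and the same values can go straight into the pfile instead of ALTER SYSTEM if you are not using an spfile:

alter system set log_archive_config = 'DG_CONFIG=(prim,stby1,stby2)' scope=both;
alter system set log_archive_dest_3 =
  'SERVICE=stby2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby2'
  scope=both;
alter system set log_archive_dest_state_3 = 'ENABLE' scope=both;
alter system set fal_server = 'stby1,stby2' scope=both;

On the new standby, the pfile would need at least db_unique_name=stby2, fal_server pointing back to the primary, standby_file_management=AUTO, and db_file_name_convert/log_file_name_convert if the directory structure differs.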
After importing my dump, I have noticed that the ARGUMENT$ segment has taken more than 9 GB of my total SYSTEM tablespace. I believe the ARGUMENT$ table is used only to store procedure/package parameter details, but I am not sure why it has taken so much space.
Is there any way we can reduce the SYSTEM tablespace, given the details below?
Import Details:
--------------
1) Imported using IMPDP. The parameters used were userid, logfile, dumpfile, directory, job_name and remap_schema.
2) Dump file size is 3 GB.
3) The list below shows the number of objects imported from my dump.
OBJECT_TYPE           COUNT(1)
-------------------  ---------
DATABASE LINK                1
FUNCTION                   246
INDEX                     4742
[code]...
4) The list below shows the amount of space occupied by segments in the SYSTEM tablespace.
col owner form a5 word wrap
col segment_name form a15 word wrap
col segment_type form a15 word wrap
select owner, segment_name, segment_type, bytes/(1024*1024) size_m
from dba_segments
where tablespace_name = 'SYSTEM'
order by bytes desc;
We are using Oracle 10g and have 10 tablespaces defined for our database, which hold 108 tables. The size of the 108 tables is around 251 MB, as seen while importing the dump. While creating these 10 tablespaces I used the following parameters for space allocation:
SIZE 1M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 1M;
which set the initial space for the 10 tablespaces to around 1032 KB each. Now my question is: after importing the dump, how does the disk space for the 10 tablespaces increase to 398 MB in total?
Is there any relation between tablespace disk space and the actual data present in the tables?
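The size on disk is whatever the datafiles have auto-extended to, which is normally more than the data itself because of segment extents, free space inside the files and block overhead. A sketch for comparing the two per tablespace:

select df.tablespace_name,
       round(df.alloc_mb, 1)                      as allocated_on_disk_mb,
       round(nvl(sg.used_mb, 0), 1)               as used_by_segments_mb,
       round(df.alloc_mb - nvl(sg.used_mb, 0), 1) as free_inside_files_mb
from  (select tablespace_name, sum(bytes) / 1024 / 1024 as alloc_mb
       from   dba_data_files
       group by tablespace_name) df
      left join
      (select tablespace_name, sum(bytes) / 1024 / 1024 as used_mb
       from   dba_segments
       group by tablespace_name) sg
      on sg.tablespace_name = df.tablespace_name
order by df.tablespace_name;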
I would like to run a query that counts, by case_manager, the number of distinct app_ids that have a status = 'AC' in a select number of programs. All of the fields are in the same table. I want it to look like this:
Case MgrA DYYOY Jane13420 John3452 Alice1233
The fields are case_manager, status, applicant, and program, in a table called reg. I can use COUNT to find the total of all active people for each region code. What I want is the breakdown by the program the people are in.
My query for that is:

SELECT case_manager, count(*)
FROM reg
WHERE status = 'AC'
GROUP BY case_manager
ORDER BY case_manager;

and I get this:

Case Manager   Count(*)
Jane                 46
John                 14
Alice                 9
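A conditional-aggregation sketch for the breakdown is below. The program codes 'A', 'D' and 'Y' are assumptions, so substitute the real values, and use app_id or applicant, whichever column identifies the person:

select case_manager,
       count(distinct case when program = 'A' then app_id end) as prog_a,
       count(distinct case when program = 'D' then app_id end) as prog_d,
       count(distinct case when program = 'Y' then app_id end) as prog_y,
       count(distinct app_id)                                  as total_active
from   reg
where  status = 'AC'
group by case_manager
order by case_manager;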
My query is like below:

select SUBSTRING('one thousand one hundread one reuppee only asasasas aaaaa bbbbb',1,60)

and the output of the above query is:

'one thousand one hundread one reuppee only asasasas aaaaa bb'

But I want to display only up to 'one thousand one hundread one reuppee only asasasas aaaaa', because 'bbbbb' is not complete, so I want to find the space before 'bbbbb' and display only up to 'aaaaa'.
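One way (a sketch) is to cut at 60 characters first and then trim back to the last space using INSTR with a negative position, so the result ends at 'aaaaa':

select substr(txt, 1, instr(substr(txt, 1, 60), ' ', -1) - 1) as short_txt
from  (select 'one thousand one hundread one reuppee only asasasas aaaaa bbbbb' as txt
       from dual);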
We have a production Oracle 10g R2 RAC on HP-UX v2 IA64 servers. We have two disk groups, one for archive (ARC_DISK - 100 GB) and the other for the database (DATA_DISK - 1 TB). We wanted to add more space to the DATA_DISK disk group. The Unix admin configured 200 GB from the SAN and changed the ownership of the disk to oracle and the permissions to 775 on the 1st node. I opened DBCA from the 1st node and was able to see the disk under 'Show Candidate'.
I added this disk to the DATA_DISK disk group and clicked OK, but got an ORA- error with a message that some operations could not be performed. I exited DBCA. We realized that we had forgotten to change the ownership and permissions on the 2nd node, so the Unix admin changed the ownership of the disk to oracle and the permissions to 775 on the 2nd node.
I opened DBCA again from the 1st node and selected the DATA_DISK disk group, but could not find the disk in the 'Show Candidate' option. I clicked 'Show All' and the disk was shown with Header_Status = MEMBER, but not allocated to the DATA_DISK disk group. When I clicked the 'Show Member' option, this disk was not shown for the DATA_DISK disk group. As this is a critical production database, I didn't proceed any further and exited DBCA.
Now I need to add this disk to the DATA_DISK disk group, but I am not sure which option to select. I got one reply on another forum: run DBCA, select the DATA_DISK disk group, click 'Show All', select this disk (which already has MEMBER as its header status), choose the Force option and click OK to continue.
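The same thing can be done from SQL*Plus on the ASM instance, which at least lets you check the header status first. A sketch is below; the device path is an assumption, and FORCE should only be used if you are certain the MEMBER stamp is a leftover of the failed first attempt and the disk does not belong to any other disk group:

select path, name, header_status, mount_status
from   v$asm_disk;

-- normal attempt first
alter diskgroup DATA_DISK add disk '/dev/rdisk/disk10';

-- only if the header is a stale MEMBER stamp from the failed run
alter diskgroup DATA_DISK add disk '/dev/rdisk/disk10' force;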
I have a ref cursor and I have used an OPEN ... FOR statement:
CREATE OR REPLACE PACKAGE aepuser.pkg_test AS
  TYPE cur1 IS REF CURSOR;
  PROCEDURE get_empdetails (p_empno NUMBER, io_cur OUT cur1);
END;
[code]...
What I want to know is: will Oracle automatically deallocate the memory occupied by the records in the cursor area? If yes, when will it be freed in the case of OPEN ... FOR?
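Broadly, yes: the cursor work area behind a ref cursor is released when the cursor is closed, and otherwise when the session state holding it goes out of scope or the session ends, so the caller that fetches from io_cur should CLOSE it explicitly rather than rely on implicit cleanup. For completeness, a minimal sketch of a body for the spec above, assuming a standard EMP table (an assumption, not the poster's actual table):

CREATE OR REPLACE PACKAGE BODY aepuser.pkg_test AS
  PROCEDURE get_empdetails (p_empno NUMBER, io_cur OUT cur1) IS
  BEGIN
    OPEN io_cur FOR
      SELECT * FROM emp WHERE empno = p_empno;   -- EMP is an assumed table
  END get_empdetails;
END pkg_test;
/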
I configured an ASM instance and a disk group with two disks for normal redundancy.
Here, each disk is 2 GB.
The disk group has two disks...
SQL> select group_number, name, type, total_mb, free_mb
  2  from v$asm_diskgroup;

GROUP_NUMBER NAME                           TYPE   TOTAL_MB    FREE_MB
------------ ------------------------------ ------ ---------- ----------
           1 DATA                           NORMAL       4000       3898
As the group has two-way mirroring (normal redundancy), how much data (2 GB or 4 GB) can I keep in the disk group? My understanding is that I can keep 2 GB of data in the disk group (as the disk group keeps a mirror copy of every extent on the other disk).
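Right, with normal redundancy roughly half of the raw 4 GB is usable for data. From 10gR2 onwards V$ASM_DISKGROUP shows this directly; USABLE_FILE_MB also subtracts the space ASM would need to re-mirror after losing one disk:

select name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb
from   v$asm_diskgroup;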
I have 2 servers, both with Windows Server 2008 64-bit installed as the operating system. I need to install Oracle Clusterware 11g R1 on both servers, with clustering on external storage. I have configured the network (private, public and virtual) for both servers and have started the installation.
In the Oracle installation I add both servers, but then I reach a point where it asks me for a voting disk or OCR disk in the cluster configuration storage, and no disk is present. How can I create the OCR disk or voting disk on Windows Server 2008? And for the external storage, should I buy a special type of storage that supports clustering to continue my work?
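On Windows, the OCR and voting disks for 11gR1 Clusterware normally go on raw logical partitions on the shared external storage: LUNs that both nodes can see simultaneously (so yes, the storage needs to support shared access, typically an FC or iSCSI SAN), carved into logical drives inside an extended partition, with no drive letter and no file system. A rough diskpart sketch, run on one node, with the disk number and sizes as assumptions:

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> create partition extended
DISKPART> create partition logical size=512
DISKPART> create partition logical size=512
DISKPART> exit

The two logical partitions above would hold the OCR and the voting disk. Do not format them or assign drive letters (remove any letters Windows assigns, on both nodes); the Clusterware installer's storage screen should then offer them as candidates.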
We had a multiple disk failure; the database was open at the time. We have managed to recover the data from the disk and have put it on another disk. We have a new server with Oracle 9i running.
A blank database has been created. I have copied the control files and redo files onto the new server and left the datafiles (.dbf, .ora) on a separate external disk, the F: drive (because the original database resided on the F: drive). I can mount the database, but when I try to open it I get the following:
ORACLE instance started.
Total System Global Area  135338868 bytes
Fixed Size                   453492 bytes
Variable Size             109051904 bytes
Database Buffers           25165824 bytes
Redo Buffers                 667648 bytes
Database mounted.
ORA-01122: database file 1 failed verification check
ORA-01110: data file 1: 'F:\ORACLE\ORADATA\LCMPRO\SYSTEM01.DBF'
ORA-01207: file is more recent than controlfile - old controlfile
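ORA-01207 means the datafiles are ahead of the copied control file, i.e. the controlfile is stale. The usual way out (a sketch only; take a cold backup of the recovered files first) is to recover using a backup controlfile and open with RESETLOGS, provided the online redo logs or the needed archived logs from the F: drive are available. Recreating the controlfile with CREATE CONTROLFILE ... RESETLOGS listing the F: datafiles is the alternative if this controlfile does not match the file layout.

-- with the database in MOUNT state:
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- supply the online/archived redo logs it prompts for, then type CANCEL
ALTER DATABASE OPEN RESETLOGS;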
I managed to upload images to the database server, resize them, copy them to the application server, and everything worked just fine: the Apex page successfully displayed the images. Since last week, things have broken. Here is the setup: there's a directory object which points to the application server's directory:
SQL> select * from all_directories;
OWNER   DIRECTORY_NAME      DIRECTORY_PATH
------- ------------------- --------------------------------
SYS     SLIKE_4005_UPLOAD   d:\gis\slike_4005_upload          --> on the database server
SYS     SLIKE_4005          \\my-ias\d$\home\gis\slike_4005   --> on the application server
SQL>
I can use a directory located on a database server:
D:\GIS\Slike_4005_upload>dir photo_resize.*
 Volume in drive D is RAID
 Volume Serial Number is 88F2-69D2

 Directory of D:\GIS\Slike_4005_upload
[code]....
How come it doesn't work? I was absent last week; the database server was restarted for some reason (there were Windows updates which required a restart). After that, all applications (lucky us, just two of them, but in multiple procedures/functions) return FALSE from UTL_FILE.FGETATTR.
We recreated the directory objects, but that didn't help (UNC path or not, no difference). I Googled quite a lot and read Metalink notes; nothing I did solved the problem.
I don't know what these OS updates were about; maybe they are not to blame at all. Both servers (database and application) run MS Windows Server 2003 Standard Edition Service Pack 2. In the meantime, a colleague developed a workaround (it uses UTL_HTTP) which works, but it is MUCH slower than the previous UTL_FILE.FGETATTR option.
Why don't we keep these images on the database server (instead of the application server)? I was told that Apache is incapable of accessing mapped network directories, so we used what we could.
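For reference, the failing call looks roughly like the block below (the file name is made up). UTL_FILE.FGETATTR returning FALSE for a UNC directory right after a reboot is very often a Windows permissions issue rather than an Oracle one: the service account running the database (LocalSystem by default) cannot reach network shares, so if the service account or the share/NTFS permissions changed with the updates, FGETATTR will quietly report every file as missing.

DECLARE
  l_exists     BOOLEAN;
  l_length     NUMBER;
  l_block_size BINARY_INTEGER;
BEGIN
  -- 'photo_resize.jpg' is just a placeholder file name
  UTL_FILE.FGETATTR('SLIKE_4005', 'photo_resize.jpg', l_exists, l_length, l_block_size);
  DBMS_OUTPUT.PUT_LINE(CASE WHEN l_exists THEN 'found, ' || l_length || ' bytes'
                            ELSE 'not visible to the database service account' END);
END;
/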