I look after a team of DBAs and I have a request to free up space on our very expensive storage system. However, the answers on how to do this differ, and I'd like to ask for external input. Not being a technical person, I see the world as quite black and white: you delete data and you free space. After doing much reading I understand this is not the case; deleting essentially creates fragmentation within the datafile, so the database has lots more room to write into, but no space is actually returned to the OS, and even shrinking the file doesn't free space without a reorg?
As an example, we have a database with 2 billion rows of data in one table; no partitioning, just one large table. We have worked out that we can probably delete 1 billion rows, or better still keep only a rolling 3-month window of data. What would be the suggested way of deleting this data and reclaiming the disk space, so that additional disk space is actually made available at the OS level?
How about deleting the data and then reclaiming the space? From my reading it looks like it might be something like: delete, then create new tablespace partitions from the remaining data. In theory this would create a new tablespace in newly created datafiles, so the data is reorganised and takes up less physical space, and when it's complete you point to the newly created partitions and drop the old table.
I'd like to hear how others have done this, as it must be a common problem and people will have come up with different solutions. What commands and procedures have been used?
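One approach I have been reading about, sketched roughly below (all table, column and tablespace names and paths are made up, so treat this as an illustration only), is the CTAS route: copy only the rows you want to keep into a new table in a new tablespace, swap the names, then drop the old tablespace so its datafiles are returned to the OS.

    -- Hedged sketch only: every object name and path here is hypothetical.
    CREATE TABLESPACE data_new DATAFILE '/u02/oradata/db/data_new01.dbf'
      SIZE 10G AUTOEXTEND ON;

    CREATE TABLE big_table_new
      TABLESPACE data_new
      AS SELECT * FROM big_table
          WHERE created_dt >= ADD_MONTHS(TRUNC(SYSDATE), -3);

    -- Recreate indexes, constraints and grants on big_table_new, then swap names.
    DROP TABLE big_table PURGE;
    RENAME big_table_new TO big_table;

    -- Once nothing else lives in the old tablespace, dropping it returns
    -- the datafiles (and the space) to the operating system.
    DROP TABLESPACE data_old INCLUDING CONTENTS AND DATAFILES;

Is this roughly the right idea, or are there better options?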
We are using Oracle 10g and have 10 tablespaces defined for our database, which holds 108 tables. The size of the 108 tables is around 251 MB, as seen while importing the dump. When creating these 10 tablespaces I used the parameters below for space allocation:
SIZE 1M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 1M;
which set the initial space for the 10 tablespaces to around 1032 KB each. My question is: after importing the dump, how does the disk space for the 10 tablespaces increase to 398 MB in total?
Is there any relation between the disk space allocated to a tablespace and the actual data present in its tables?
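To understand it, I put together the rough query below (not verified against the docs): it compares dba_data_files, which shows the space allocated on disk, with dba_segments, which shows the space actually allocated to objects. Extents, free space inside blocks, PCTFREE and indexes all count against the tablespace, which I assume is why the datafiles can be much larger than the 251 MB of raw table data.

    SELECT df.tablespace_name,
           ROUND(df.file_bytes / 1024 / 1024, 1)        AS datafile_mb,
           ROUND(NVL(sg.seg_bytes, 0) / 1024 / 1024, 1) AS segment_mb
      FROM (SELECT tablespace_name, SUM(bytes) AS file_bytes
              FROM dba_data_files GROUP BY tablespace_name) df
      LEFT JOIN (SELECT tablespace_name, SUM(bytes) AS seg_bytes
                   FROM dba_segments GROUP BY tablespace_name) sg
        ON sg.tablespace_name = df.tablespace_name
     ORDER BY df.tablespace_name;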
I guessed that was because of the LUN size (it exceeded 2 TB). After that I dynamically shrank the LUN on our external storage, rebooted and ran the cfgmgr command on both nodes, but I still don't have enough free space.
In my environment Oracle Database 11gR1 is running and Data Guard is configured, i.e. 1 primary and 1 standby. In the near future space issues will arise on the standby. I want to create one more standby with maximum disk space, but how? Active Data Guard is configured and reports are generated from the standby. What changes should be made in the primary pfile and the new standby pfile?
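What I think is involved, as a sketch only (the db_unique_names prim, stby1 and stby2 and all paths below are made up, so please correct me), is adding a third archive destination on the primary and setting the usual convert/FAL parameters in the new standby's pfile:

    -- On the primary (hypothetical names):
    ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(prim,stby1,stby2)' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_3 =
      'SERVICE=stby2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby2'
      SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_state_3 = 'ENABLE' SCOPE=BOTH;

    -- In the new standby's pfile (again hypothetical names and paths):
    --   db_unique_name          = stby2
    --   log_archive_config      = 'DG_CONFIG=(prim,stby1,stby2)'
    --   fal_server              = 'prim'
    --   db_file_name_convert    = '/u01/oradata/prim/', '/u01/oradata/stby2/'
    --   log_file_name_convert   = '/u01/oradata/prim/', '/u01/oradata/stby2/'
    --   standby_file_management = AUTO

Is that the right set of changes, and is anything extra needed because Active Data Guard is used for reporting?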
I am trying to find the space occupied on disk by the tablespaces that contain tables in which some (but not all) columns are encrypted. My query is like this:
select distinct a.tablespace_name,
       file_name,
       bytes/(1024*1024*1024) File_Size_In_GB
  from dba_data_files a,
       dba_tables b,
       (select distinct owner, table_name from DBA_ENCRYPTED_COLUMNS) c
 where a.tablespace_name = b.tablespace_name
   and b.owner = c.owner
   and b.table_name = c.table_name
 order by a.tablespace_name;
The output of the query is as shown in the attached file:
Since the output (under the heading "Total Size of the tablespace") is probably the sum of all the datafiles returned by the query, and is obviously incorrect, I have not included the rest of it. I also tried the following:
select distinct a.tablespace_name,
       file_name,
       bytes/(1024*1024*1024) File_Size_In_GB,
       sum(bytes/(1024*1024*1024))
         over (partition by a.tablespace_name order by file_name) "Total Size of the tablespace"
  from dba_data_files a,
       dba_tables b,
       (select distinct owner, table_name from DBA_ENCRYPTED_COLUMNS) c
 where a.tablespace_name = b.tablespace_name
   and b.owner = c.owner
   and b.table_name = c.table_name
 order by a.tablespace_name; [code]...
Here, the figures under the heading "Total Size of the tablespace" are probably the sum of all the records the query would return if DISTINCT were not used, i.e. all the datafile sizes returned by the query.
How can I tune my query and get the desired results? I think this can be achieved with GROUP BY along with ROLLUP, CUBE, ORDER BY and grouping functions, but I am not sure how to proceed. I know I could get the results from the Enterprise Manager console in two minutes, but I would still like to get them with queries.
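One rewrite I was considering (just a sketch, I have not confirmed the output is exactly what I need) joins dba_data_files only to the distinct list of tablespaces that contain encrypted columns, so each datafile appears once, and then uses GROUP BY ROLLUP to get a per-tablespace subtotal plus a grand total:

    SELECT df.tablespace_name,
           df.file_name,
           SUM(df.bytes) / (1024*1024*1024) AS size_gb
      FROM dba_data_files df
     WHERE df.tablespace_name IN (
             SELECT t.tablespace_name
               FROM dba_tables t
               JOIN dba_encrypted_columns e
                 ON e.owner = t.owner
                AND e.table_name = t.table_name)
     GROUP BY ROLLUP (df.tablespace_name, df.file_name)
     ORDER BY df.tablespace_name, df.file_name;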
We want to know the number of instances, the number of RAC databases, and the total disk space used by Oracle (not the file system size). 1. Is there a script that can be run from OEM Grid Control against all instances/databases? Or 2. We have a repository Unix server which has the tnsnames entries for all the databases; is there a script we can run from there?
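For the disk-space part, the rough per-database query I had in mind (run against each instance, e.g. looped over the tnsnames entries on the repository server; not verified) sums datafiles, tempfiles and redo logs:

    SELECT d.name AS database_name,
           ROUND((df.bytes + tf.bytes + rl.bytes) / 1024 / 1024 / 1024, 1) AS total_gb
      FROM v$database d,
           (SELECT SUM(bytes) AS bytes FROM dba_data_files) df,
           (SELECT SUM(bytes) AS bytes FROM dba_temp_files) tf,
           (SELECT SUM(bytes * members) AS bytes FROM v$log) rl;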
We have a production Oracle 10g R2 RAC on HP-UX v2 IA64 servers. We have two disk groups, one for archive (ARC_DISK - 100 GB) and the other for the database (DATA_DISK - 1 TB). We wanted to add more space to the DATA_DISK disk group. The Unix admin configured 200 GB from the SAN and changed the ownership of the disk to oracle and the permissions to 775 on the 1st node. I opened DBCA from the 1st node and was able to see the disk under 'Show Candidates'.
I added this disk to the DATA_DISK disk group and clicked OK, but got an ORA- error with a message along the lines of "some operations could not be performed", so I exited DBCA. We then realized that we had forgotten to change the ownership and permissions on the 2nd node, so the Unix admin changed the ownership of the disk to oracle and the permissions to 775 on the 2nd node.
I opened DBCA again from the 1st node and selected the DATA_DISK disk group, but could not find the disk under the 'Show Candidates' option. When I clicked 'Show All', the disk was shown with Header_Status = MEMBER but not allocated to the DATA_DISK disk group, and under the 'Show Member' option this disk was not shown for the DATA_DISK disk group. As this is a critical production database I didn't proceed any further and exited DBCA.
Now I need to add this disk to the DATA_DISK disk group but am not sure which option to select. One reply on another forum was to run DBCA, select the DATA_DISK disk group, click 'Show All', select this disk (which already has MEMBER as its header status), choose the Force option and click OK to continue.
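Before doing anything, what I was planning to run on the ASM instance, as a sketch only (the device path below is made up), is a check of what the disk header actually says, and then, only if the MEMBER header is clearly a leftover from the earlier failed add and not from some other disk group, the SQL*Plus equivalent of the forum suggestion:

    -- On the ASM instance: check what the disk header actually says.
    SELECT path, header_status, mount_status, group_number
      FROM v$asm_disk
     WHERE path = '/dev/rdisk/disk10';   -- hypothetical device path

    -- Only if the header is a leftover from the failed DBCA attempt:
    ALTER DISKGROUP data_disk ADD DISK '/dev/rdisk/disk10' FORCE;

Is the FORCE option safe in this situation, or is there a better way to clear the stale header first?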
I configured an ASM instance and a disk group with two disks for normal redundancy.
Each disk is 2 GB, and the disk group has two disks:
SQL> select group_number, name, type, total_mb, free_mb
  2    from v$asm_diskgroup;

GROUP_NUMBER NAME   TYPE     TOTAL_MB    FREE_MB
------------ ------ ------ ---------- ----------
           1 DATA   NORMAL       4000       3898
As the group has two-way mirroring (normal redundancy), how much data can I keep in the disk group: 2 GB or 4 GB? My understanding is that I can keep 2 GB of data in the disk group, as the disk group keeps a mirror copy of every extent on the other disk.
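To check this on the instance itself I was going to use the query below; as far as I understand, with normal redundancy USABLE_FILE_MB already accounts for the mirror copies and for REQUIRED_MIRROR_FREE_MB, so for a 4000 MB group it should come out at roughly 2 GB:

    SELECT name, type, total_mb, free_mb,
           required_mirror_free_mb, usable_file_mb
      FROM v$asm_diskgroup;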
I have two servers, both running Windows Server 2008 64-bit. I need to install Oracle Clusterware 11g R1 on both servers, with the cluster files on external storage. I have configured the network (private, public and virtual) for both servers and have started the installation.
During the Oracle installation I add both servers, but then I reach the point where it asks for the voting disk or OCR disk in the cluster configuration storage, and no disk is present. How can I create the OCR disk or voting disk on Windows Server 2008? And for the external storage, should I buy a special type of storage that supports clustering in order to continue?
The SYSAUX tablespace is running low. We have set the AWR retention time to 60 days and we are not interested in extending the SYSAUX tablespace size. Usually we take an AWR report once a week; sometimes we also run ADDM and ASH reports.
SQL> select tablespace_name, file_name, bytes/(1024*1024),
            autoextensible, maxbytes/(1024*1024)
       from dba_data_files
      where tablespace_name = 'SYSAUX';
1. What is the best solution?
2. Can I shrink the SYSAUX tablespace?
3. I think the size of all occupants in the SYSAUX tablespace is less than 200 MB, so how can I find the actual contents of the SYSAUX tablespace?
4. What could be the reason for the growth? Is there any way to free space in the SYSAUX tablespace?
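For question 3, the query I was planning to use (if I understand the view correctly) is against v$sysaux_occupants, which lists each component's space usage and its move procedure, and should show whether AWR (SM/AWR) or the optimizer statistics history (SM/OPTSTAT) is what has grown:

    SELECT occupant_name, schema_name,
           ROUND(space_usage_kbytes / 1024) AS used_mb,
           move_procedure
      FROM v$sysaux_occupants
     ORDER BY space_usage_kbytes DESC;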
When we create a NUMBER column with a scale greater than its precision (e.g. NUMBER(2,5)), Oracle accepts the table definition, but we are then unable to insert any values into the table. How does it store the value internally?
SQL> drop table precision_test;
Table dropped
SQL> create table precision_test (name number(2,5));
Table created
SQL> insert into precision_test values (1);
insert into precision_test values (1)
[code]....
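My guess at what is happening (the truncated output above presumably ends in ORA-01438): NUMBER(2,5) allows 2 significant digits with 5 digits after the decimal point, so only values smaller than 0.001 fit and 1 is rejected. A quick test along these lines should confirm it:

    INSERT INTO precision_test VALUES (1);        -- should fail with ORA-01438
    INSERT INTO precision_test VALUES (0.00099);  -- should work: 2 significant digits, scale 5
    INSERT INTO precision_test VALUES (0.000994); -- should work: rounded to 0.00099
    SELECT name FROM precision_test;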
I have a procedure which executes every Monday. It was not executed last Monday. Can I execute the procedure on some other day without changing the actual procedure?
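Assuming the Monday run is driven by a scheduler job rather than by logic inside the procedure itself (the job and procedure names below are made up), something like this is what I had in mind:

    -- If it is a DBMS_SCHEDULER job, run it on demand without touching its schedule:
    BEGIN
      DBMS_SCHEDULER.RUN_JOB(job_name => 'WEEKLY_MONDAY_JOB', use_current_session => TRUE);
    END;
    /

    -- Or simply call the procedure directly on any day:
    EXEC my_weekly_procedure;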
I am writing a procedure for the front end. The end users need to insert multiple rows of data into history tables in the database (11g). My problem is that the number of actual parameters is not fixed: this time it could be 5, next time it could be 12. I currently pass the actual parameter (P_id) as one string, e.g. '2, 4, 5, 7, 8'; the procedure executes successfully but does not insert any data into the history table.
See my procedure below (the base table has CLOB data, so I have to use INSERT ... SELECT *). I tried using TO_NUMBER(CONTACT_MSG_ID), but it doesn't work well:
PROCEDURE ARCHIVE_XREF_CONT_EMAIL (P_ID IN VARCHAR2) IS
BEGIN
  INSERT INTO TRC_XREF_CONT_EMAIL_MSGS_HIST
    SELECT * FROM TRC_XREF_CONT_EMAIL_MSGS
    [code].......
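What I am now thinking of, under the assumption that the problem is the comma-separated string being compared as a whole against CONTACT_MSG_ID (which would explain zero rows being inserted), is splitting P_ID inside the query; the REGEXP_SUBSTR / CONNECT BY split below is only a sketch:

    PROCEDURE archive_xref_cont_email (p_id IN VARCHAR2) IS
    BEGIN
      INSERT INTO trc_xref_cont_email_msgs_hist
      SELECT *
        FROM trc_xref_cont_email_msgs
       WHERE contact_msg_id IN (
               -- split '2, 4, 5, 7, 8' into individual numbers
               SELECT TO_NUMBER(TRIM(REGEXP_SUBSTR(p_id, '[^,]+', 1, LEVEL)))
                 FROM dual
              CONNECT BY REGEXP_SUBSTR(p_id, '[^,]+', 1, LEVEL) IS NOT NULL);
    END archive_xref_cont_email;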
I can't seem to understand why the hour is incorrect. In the query below, dte_computation_on_data is produced by the old function they use to convert a date to a number before inserting it into the table. The problem is that when I revert it to the actual date, the hour is incorrect.
SELECT -- my test to revert the time and date from their function's formula
       to_char(TO_DATE('19700101', 'YYYYMMDD') + (tb1.dte_computation_on_data/86400), 'MM/DD/YYYY')
       || ' ' ||
       to_char(to_date(mod(tb1.dte_computation_on_data, 86400), 'sssss'), 'hh24:mi:ss') AS revert_test,
       systimestamp,
       tb1.dte_computation_on_data
  FROM ( -- the old function's formula: convert the date to a number for insertion
         SELECT floor((CAST(SYS_EXTRACT_UTC(systimestamp) AS DATE)
                       - TO_DATE('19700101', 'YYYYMMDD')) * 86400) AS dte_computation_on_data
           FROM dual) tb1;
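My current guess at the cause (please correct me if wrong): the stored number is seconds since 1970-01-01 UTC, because the old formula uses SYS_EXTRACT_UTC, so adding it back to 1970-01-01 gives a UTC time rather than local time, which shifts the hour by the session's timezone offset. A sketch of an explicit UTC-to-local conversion:

    SELECT FROM_TZ(
             CAST(TO_DATE('19700101', 'YYYYMMDD')
                  + (tb1.dte_computation_on_data / 86400) AS TIMESTAMP),
             'UTC') AT LOCAL AS reverted_local_time
      FROM (SELECT floor((CAST(SYS_EXTRACT_UTC(systimestamp) AS DATE)
                          - TO_DATE('19700101', 'YYYYMMDD')) * 86400)
                   AS dte_computation_on_data
              FROM dual) tb1;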
I'm using ASM on LUNs from an EMC SAN, fronted by PowerPath. Right now I have only one fibre path to the SAN, so /dev/emcpowera3 maps directly to /dev/sda3, for example. Oracle had a typo in what they told me to put in /etc/sysconfig/oracleasm*, so the scan picks up both devices.
# /etc/init.d/oracleasm querydisk -p ASMVOL_01
Disk "ASMVOL_01" is a valid ASM disk
/dev/emcpowera3: LABEL="ASMVOL_01" TYPE="oracleasm"
/dev/sda3: LABEL="ASMVOL_01" TYPE="oracleasm"
But I don't think it can be using both. How do I see which one it's actually using?
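The check I had in mind (assuming v$asm_disk is the right place to look) is to ask the ASM instance which path it actually opened for each disk:

    SELECT name, path, header_status
      FROM v$asm_disk
     ORDER BY name;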
Where are the details of the actual responder stored in the scenario below?
If a notification for one user was closed by another user through access to the first user's worklist, the name of the second user, who actually took the action, is displayed as the responder.
I am only able to find the original recipient's username in the wf_notifications table. But where are the details of the actual responder name, the one displayed in the notification, stored?
When I use the "wf_notification.Responder" function, I get the Original Recipeint username of the user but not the one who acted on the notification. Is there ANY way by which I can get the Actual responder user name who acted on the notification
select responder from wf_notifications where notification_id = <nid> also gives me the original recipient's username, not the user who acted on the notification.
Is there any way to find out the split between the time taken for query parsing, creating the execution plan, and the actual data retrieval? If I enable 'set timing on' I see the elapsed time, which is the total of all three. Some of my queries take a long time the first time I run them, so I want to know where the time goes: is it the parsing or creating the execution plan, and if so, what can I optimize?
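One thing I came across, which I have not fully verified, is SQL trace plus tkprof: the tkprof report breaks count, CPU and elapsed time down into PARSE, EXECUTE and FETCH phases per statement, which would separate parse/plan-creation cost from the actual row retrieval. A rough sketch (paths and names are placeholders):

    ALTER SESSION SET tracefile_identifier = 'parse_vs_fetch';
    ALTER SESSION SET sql_trace = TRUE;

    -- run the slow query here

    ALTER SESSION SET sql_trace = FALSE;

    -- then on the server, format the trace file:
    --   tkprof <udump_dir>/<tracefile>.trc report.txt sys=no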
I am exporting/importing from 10.2.0.4.0 to 11.2.0.3.0, but during the import some of the rows are rejected:
IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column XXXX (actual: 51, maximum: 50)
Column 1 264
Column 2 123432
Column 3
Column 4 7
[code]....
Having looked at the data in the source system, I can't see the character "Â" in column 11, so I think this is what is causing the issue. Why is Oracle adding this character to this field? How can I fix this without having to modify the table in the new system to allow more characters?
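My working assumption, which I have not yet confirmed: the target database uses a multi-byte character set (e.g. AL32UTF8) and a character that was a single byte in the source, such as a non-breaking space, expands to two bytes ("Â" plus something) on import, pushing the value past the 50-character limit. A sketch of how I plan to check in the source (table and column names are placeholders):

    SELECT LENGTH(column11)      AS char_len,
           LENGTHB(column11)     AS byte_len,
           DUMP(column11, 1016)  AS raw_bytes   -- hex dump plus character set name
      FROM source_table
     WHERE LENGTHB(column11) >= 50
        OR LENGTH(column11) <> LENGTHB(column11);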
We have a table with interval partitioning. This table is accessed very frequently. We are supposed to exchange partitions between this actual table and its corresponding staging table.
In order to keep the newly created partitions empty, is there a way to restrict other applications from using them before we push data from the staging table into the actual table?
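For the exchange step itself, what we plan to use is something like the statement below (table names and the date literal are made up); as far as I know the exchange is a dictionary-only swap, but it does not by itself stop other sessions from inserting into the new partition, which is the part I am unsure about:

    ALTER TABLE actual_table
      EXCHANGE PARTITION FOR (DATE '2014-01-01')   -- FOR clause picks the interval partition
      WITH TABLE staging_table
      INCLUDING INDEXES
      WITHOUT VALIDATION;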