If I am not using the data in many of the partitions in any way, will it affect performance when I fire a select query that only touches the data in the other, active partitions?
How can I check when a partition was last accessed, and can I take those inactive partitions offline? If so, what would be the advantages or disadvantages of doing that?
In an attempt to take older data offline and allow database refreshes to run faster, the tablespaces associated with the partitioned table data for a given time period were taken offline, leaving only the tablespaces that relate to the current time period online. In effect, the tablespaces holding the table's partitions for 2010 and earlier were taken offline.
1. Without a filter on the partition key (the business date) that restricts the scan to dates later than those in the offlined tablespace partitions, we get an ORA-376/ORA-1110 error (data file cannot be read at this time).
2. Materialized views using fast refresh or refresh on commit also fail because those partitions are offline.
Queries that hit the tables directly are manageable from an application point of view, but the materialized views failing to aggregate is a bigger problem.
How can we manage this situation? I know that I can move the partitions to a different table in a tablespace that is then taken offline, but if possible we want to solve this without doing a move partition.
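As a minimal sketch of the kind of predicate point 1 relies on, assuming a table SALES_FACT range-partitioned by BUSINESS_DATE (both names hypothetical): only queries whose partition-key filter prunes away the offlined partitions avoid touching the offline datafiles.

SELECT /* partition pruning keeps the scan inside the online 2011+ partitions */
       SUM(amount)
  FROM sales_fact
 WHERE business_date >= DATE '2011-01-01';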
I have created a user named "Raja" with a default tablespace of "Raja_TBS", along with a datafile "rajadata.dbf". I have taken the tablespace offline:
SQL> alter tablespace raja_tbs offline;
Tablespace altered. When I take a tablespace offline, I expect that I cannot read from or write to it and that it is unavailable to users, yet I am still able to create a table on "Raja_TBS" while it is offline.
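A sketch of how to check what actually happened, assuming 11gR2 or later where DEFERRED_SEGMENT_CREATION applies: the CREATE TABLE can succeed as a dictionary-only operation because no segment is allocated in the offline datafile until data is inserted.

SQL> show parameter deferred_segment_creation

-- if a segment had really been allocated in the offline file, it would show up here:
SQL> SELECT segment_name, segment_type
       FROM dba_segments
      WHERE tablespace_name = 'RAJA_TBS';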
I'm facing a problem while inserting millions of records from one table to another: the undo tablespace reaches 100% full and execution aborts. How can I free the undo tablespace? Many of the extents are offline. Will it flush automatically, or what should I do?
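A minimal sketch for seeing what is actually occupying the undo tablespace: undo extents are reused automatically once they are EXPIRED, so the usual check is how much is still ACTIVE or UNEXPIRED and what UNDO_RETENTION is set to (view and parameter names as in the data dictionary).

SELECT status, ROUND(SUM(bytes)/1024/1024) AS mb, COUNT(*) AS extents
  FROM dba_undo_extents
 GROUP BY status;

SQL> show parameter undo_retention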
My tablespace contains two datafiles, dfile1.dbf and dfile2.dbf, on the D: drive of my filesystem. I copied these .dbf files and pasted them to a location on the E: drive. Now I want my tablespace to use the .dbf files in the new location.
Will I have to use ALTER TABLESPACE ... ADD DATAFILE with the REUSE clause?
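As a sketch of the usual relocation approach (the tablespace name and paths below are hypothetical placeholders): take the tablespace offline, point the controlfile at the copies on E:, and bring it back online. ADD DATAFILE ... REUSE would add new files rather than relocate the existing ones.

SQL> ALTER TABLESPACE my_tbs OFFLINE NORMAL;
SQL> ALTER DATABASE RENAME FILE 'D:\ORADATA\DFILE1.DBF' TO 'E:\ORADATA\DFILE1.DBF';
SQL> ALTER DATABASE RENAME FILE 'D:\ORADATA\DFILE2.DBF' TO 'E:\ORADATA\DFILE2.DBF';
SQL> ALTER TABLESPACE my_tbs ONLINE;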
I cannot drop a datafile from a tablespace. How can I do it?
SQL> Alter Database Datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf' Offline;
Database altered.
SQL> Alter Tablespace tps_hxl Drop Datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf';
Alter Tablespace tps_hxl
*
ERROR at line 1:
ORA-03264: cannot drop offline datafile of locally managed tablespace
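A sketch of one way past ORA-03264, assuming the datafile is empty and can be recovered: DROP DATAFILE only works against an online, empty datafile of a locally managed tablespace, so the file has to be brought back online first (it may need media recovery after having been offlined).

SQL> RECOVER DATAFILE '/u02/app/oracle/oradata/oracl/hxl06.dbf';
SQL> ALTER DATABASE DATAFILE '/u02/app/oracle/oradata/oracl/hxl06.dbf' ONLINE;
SQL> ALTER TABLESPACE tps_hxl DROP DATAFILE '/u02/app/oracle/oradata/oracl/hxl06.dbf';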
I have Oracle 10g running on Solaris with a file system, and someone created database files with the same name but in different directories, for example data01.dbf in two different directories, say /u01/oradata/data01.dbf and /u02/oradata/data01.dbf. Now I want to find the duplicate datafiles (data01.dbf in this case) sitting in different directories. Is there any way to find this out?
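A minimal sketch for flagging datafiles that share a basename but live in different directories, using only V$DATAFILE (assuming Unix-style '/' path separators):

SELECT SUBSTR(name, INSTR(name, '/', -1) + 1) AS file_basename,
       COUNT(*)                               AS copies
  FROM v$datafile
 GROUP BY SUBSTR(name, INSTR(name, '/', -1) + 1)
HAVING COUNT(*) > 1;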
Today I am facing an error while trying to resize a datafile. Its size is 19000M, but after truncating all the tables only 112M is used; yet when I try to resize the datafile to 500M I get the error ORA-03297: file contains used data beyond requested RESIZE value.
I did the same thing a week ago without any error, but this time I get the error.
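ORA-03297 means some segment still owns an extent above the requested size, even if the total used space is small. A sketch of how to see which segments sit past the 500M mark (file_id 5 and the 8K block size are hypothetical placeholders; adjust to the actual file and db_block_size):

SELECT owner, segment_name,
       MAX(block_id + blocks - 1) * 8192 / 1024 / 1024 AS last_mb
  FROM dba_extents
 WHERE file_id = 5
 GROUP BY owner, segment_name
HAVING MAX(block_id + blocks - 1) * 8192 / 1024 / 1024 > 500
 ORDER BY last_mb DESC;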
An erroneously created datafile, "/path/../large_rbs_03.dbf", was added to the SYSTEM tablespace when it was supposed to be in the LARGE_RBS tablespace.
How do I make that datafile belong to LARGE_RBS?
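A datafile cannot be moved between tablespaces, so one sketch of a fix, assuming 10gR2 or later, that the wrongly placed file is still empty, and that it is not the first file of SYSTEM (the SIZE value is only a placeholder): drop it from SYSTEM and add a file to LARGE_RBS instead.

SQL> ALTER TABLESPACE system DROP DATAFILE '/path/../large_rbs_03.dbf';
SQL> ALTER TABLESPACE large_rbs ADD DATAFILE '/path/../large_rbs_03.dbf' SIZE 2048M;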
I have a database at branch A whose files are located on the E: drive, and I want to bring it up at branch B, so I placed all the files from branch A on the D: drive at branch B. When I started the database I got a control file error, so I made the required corrections in initorcl.ora. Now when I start the database it mounts, but I get ORA-01157: cannot identify data file 1. I tried to rename the file, but the "alter database rename file1 to file2;" option is not working.
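A sketch of the rename that usually works in this state, assuming the instance is in MOUNT and full old and new paths are given (the file names below are hypothetical; every datafile, tempfile and redo log member that moved needs its own RENAME FILE):

SQL> STARTUP MOUNT
SQL> ALTER DATABASE RENAME FILE 'E:\ORADATA\ORCL\SYSTEM01.DBF'
                             TO 'D:\ORADATA\ORCL\SYSTEM01.DBF';
SQL> ALTER DATABASE OPEN;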
Is there a way to find out when a datafile for an undo tablespace with autoextend enabled actually extended? I've done a few tests, and nothing is written to the alert log or any trace file that I've found. I can't find any V$ or DBA view that will give me the history of a file's size.
We need to find out when any datafile in a tablespace was resized (if at all). By noting the creation date from v$datafile, we used to track the data growth in a tablespace. Now that the number of datafiles has increased, we want to resize them. This diagnostic has to be done without changing or adding anything in the DB.
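There is no standard view that records each individual resize, but as a read-only sketch, comparing the size at creation with the current size in V$DATAFILE at least flags which files have changed size since they were created (it does not give the timestamp of each resize):

SELECT file#, name, creation_time,
       create_bytes/1024/1024 AS created_mb,
       bytes/1024/1024        AS current_mb
  FROM v$datafile
 WHERE bytes <> create_bytes;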
I mistakenly added a datafile to a tablespace that is on ASM; however, the datafile was created in a default filesystem location and not in the ASM disk group:
alter tablespace pdaiidata1 add datafile '<filename>' size 2048M;
What I should have done:
alter tablespace <tablespace_name> add datafile '+DATA1' size 2048M;
Is there any way to move this filesystem datafile into the ASM tablespace? In previous Oracle versions, I've taken a tablespace offline, moved a datafile, renamed it, then brought the tablespace back online. Can I do something similar in this situation?
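A sketch of one approach, assuming RMAN is available and a short outage on that one datafile is acceptable (15 is a hypothetical file number; look it up in V$DATAFILE): copy the file into the disk group, switch to the copy, then recover and bring it online.

SQL>  ALTER DATABASE DATAFILE 15 OFFLINE;
RMAN> BACKUP AS COPY DATAFILE 15 FORMAT '+DATA1';
RMAN> SWITCH DATAFILE 15 TO COPY;
RMAN> RECOVER DATAFILE 15;
SQL>  ALTER DATABASE DATAFILE 15 ONLINE;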
This facility has one last 10g database with a very problematic tablespace and the last datafile associated with it. The tablespace was set up with an INITIAL_EXTENT of 131,072 (128K) instead of the more 'normal' 4,194,304 (4M), and a NEXT_EXTENT of 262,144 (256K) instead of 4,194,304 (4M).
More worryingly, the datafile has INCREMENT_BY set to 1 (8K) instead of 1,280 (10M) or 2,048 (16M). Has anyone ever updated sys.ts$.dflinit and sys.ts$.dflincr to modify the INITIAL_EXTENT and NEXT_EXTENT, and sys.file$.inc to modify the INCREMENT_BY?
I need to resize my datafiles, as I allocated more space than needed and the data load is complete now. My tablespace currently has 11.74 GB of free space and 3 datafiles.
TABLESPACE    TOTAL      USED       FREE      PCT_FREE    LARGEST   FRAGMENTS
----------    -------    --------   -------   ---------   -------   ---------
CFC_DATA      150528     138780.6   11747.4   7.80412946  1251      992
TABLESPACE_NAME    FILE_ID  FILE_NAME                                     Size(MB)
----------------   -------  --------------------------------------------  ----------
CFC_DATA                71  +DATA/dedw/datafile/cfc_data.4074.731085435   65535.9688
CFC_DATA               334  +DATA/dedw/datafile/cfc_data.4473.757566557   20480
CFC_DATA              1710  +DATA/dedw/datafile/cfc_data.2012.728095695   64512

I used the script below to find the HWM in order to resize the datafiles. db_block_size is 16KB. [code]....
In TOAD, there is a "Minimum size" button against each datafile. I need the SQL that runs behind the scenes when we press this button in TOAD.
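A common query that approximates what such a "minimum size" button computes, based on the highest allocated extent in the file (71 is the file_id from the listing above and 16384 the 16K block size; this is an assumption about TOAD's SQL, not a capture of it):

SELECT file_id,
       CEIL(MAX(block_id + blocks - 1) * 16384 / 1024 / 1024) AS min_size_mb
  FROM dba_extents
 WHERE file_id = 71
 GROUP BY file_id;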
I have a generic question about space management. I have a table with a size of 1 TB. This table is stored in the ORC1 tablespace, which contains 70 datafiles.
Since it's a 10.2.0.4 database, I dropped this table using PURGE:
drop table <<table_name>> purge;
Once the table drop completed, the tablespace showed 100% free, but due to the HWM I was unable to resize the datafiles from their current size to a smaller size. What is the reason behind this? Is there any process to follow when dropping big tables, for example truncating the table first and then dropping it?
I've run: DROP TABLESPACE x_tbs INCLUDING CONTENTS;
I've read (I should have read it prior to taking action) [URL]
and I understand now that I should have stated "AND DATAFILES". However, it is too late now: the tablespace is removed, but the datafile for x_tbs is still present in a folder of my OS.
Is it safe to manually remove it by deleting it from the operating system?
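Before deleting anything at OS level, a quick sanity check sketch: confirm the file is no longer referenced by the controlfile or the dictionary (the filename pattern below is only a placeholder for the actual datafile name). If both queries return no rows, the database no longer knows about the file.

SELECT name      FROM v$datafile     WHERE UPPER(name)      LIKE '%X_TBS%';
SELECT file_name FROM dba_data_files WHERE UPPER(file_name) LIKE '%X_TBS%';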
The block size for my database is 4096. I have heard that the minimum size of a datafile is decided by the block size, but I want to know how it is calculated, since with the above syntax the other values take their defaults. I am trying the syntax on Oracle version 9.2.0.1.0.
I have applied offline patch *10417948* on my database. How can I see whether that patch is applied to the database/Oracle Home or not? I applied an online patch a few months ago; in that case I applied the patch to each database after installing it, using a command:
When I execute the command opatch lsinventory -details, I get the following output:
Interim patches (2) :
Patch  10417948     : applied on Sat Feb 23 13:42:49 IST 2013
Unique Patch ID:  14586154
   Created on 18 Oct 2012, 06:52:32 hrs PST8PDT
   Bugs fixed:
     10417948
[code].......
For an offline patch, is it required to enable the patch on every database?
I want to create an offline database. There is a custom application I designed, but it needs to use the internet.
Sometimes the internet connection fluctuates, so I want to implement an offline DB concept: once a day, when the internet connects, the data will be uploaded and centralized. Is it possible to do this using OAF or ADF?