If I am not using the data in many of the partitions in any way, will it affect my performance when I fire a SELECT query that uses only the other/active partitions' data?
How can I check when a partition was last accessed, and can I bring those inactive partitions offline? If we can, what would be the advantages and disadvantages of doing so?
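There is no per-partition "last accessed" timestamp, but the segment statistics views track activity per (sub)object since instance startup. A minimal sketch, assuming an owner SCOTT and a partitioned table SALES (both illustrative):

-- Physical reads per partition since instance startup; counters reset on restart.
select subobject_name as partition_name,
       value          as physical_reads
from   v$segment_statistics
where  owner          = 'SCOTT'    -- illustrative owner
and    object_name    = 'SALES'    -- illustrative table
and    statistic_name = 'physical reads'
order  by value desc;

Partitions that show no reads across a long uptime are the natural candidates for taking offline or read-only.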
In an attempt to take older data offline and allow database refreshes to be faster, the tablespaces holding partitioned table data for a given time period were taken offline, leaving only the tablespaces for the current time period online. In effect, the tablespaces related to 2010 and earlier were taken offline from a table.
1. Without a filter on the partition key (the business date) restricting the scan to dates later than those in the offlined tablespace partitions, we get an ORA-00376/ORA-01110 error (file cannot be read at this time).
2. Materialized views using fast refresh or refresh on commit will also not work because the partitions are offline.
Queries hitting the tables directly are manageable from an application point of view, but the materialized views failing to aggregate is a bigger problem.
How can we manage this situation? I know that I can move the partitions to a different table in a tablespace that is to be taken offline, but if possible we wanted to solve this without doing a move partition.
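Not from the thread, but one commonly used alternative that avoids both MOVE PARTITION and the ORA-00376 errors: leave the historical tablespaces READ ONLY instead of OFFLINE. Read-only tablespaces can still be queried, so fast-refresh and on-commit materialized views keep working, yet the files never change again. A sketch, with an illustrative tablespace name:

-- Make the 2010 data queryable but unchangeable; back it up once, then skip it.
SQL> alter tablespace ts_2010 read only;

With CONFIGURE BACKUP OPTIMIZATION ON, RMAN will then skip those files in subsequent backups once a valid backup of them exists, which serves the stated goal of faster refreshes/backups.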
I have created a user named "Raja" with a default tablespace of "Raja_TBS" and a datafile "rajadata.dbf". I have taken the tablespace offline:
SQL> alter tablespace raja_tbs offline;
Tablespace altered.
When I take a tablespace offline, that means I cannot read or write to it and it is unavailable to users. Yet I am still able to create a table on "Raja_TBS" while it is offline.
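A plausible explanation, assuming 11gR2 with deferred_segment_creation=TRUE (an assumption, since the post does not give the exact version): CREATE TABLE is then a dictionary-only operation, and the dictionary lives in the online SYSTEM tablespace, so nothing in RAJA_TBS is touched until the first row arrives. A sketch of what you would expect to see:

SQL> create table t_demo (x number) tablespace raja_tbs;  -- succeeds: no segment allocated yet
SQL> insert into t_demo values (1);                       -- fails: segment creation needs the offline tablespace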
I'm facing a problem while inserting millions of records from table to table: the undo tablespace reaches 100% full and execution aborts. How can I free the undo tablespace? Many of the extents are offline. Will it flush automatically, or what should I do?
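Undo is released automatically: once the aborted transaction finishes rolling back, its extents age from UNEXPIRED to EXPIRED after the undo retention period and are reused; nothing needs to be flushed manually. A minimal check of where the space sits, assuming the undo tablespace is named UNDOTBS1 (illustrative):

-- Breakdown of undo extents by status; EXPIRED space is reusable.
select status,
       count(*)                        as extents,
       round(sum(bytes) / 1024 / 1024) as mb
from   dba_undo_extents
where  tablespace_name = 'UNDOTBS1'    -- illustrative undo tablespace name
group  by status;

Committing the big insert in batches also keeps the active undo footprint of any single transaction smaller.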
Starting backup at 06-MAY-13
channel ch00: starting compressed incremental level 0 datafile backup set
channel ch00: specifying datafile(s) in backup set
RMAN-03009: failure of backup command on ch00 channel at 05/06/2013 07:09:16
[Code]....
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ch00 channel at 05/06/2013 07:09:16
ORA-19602: cannot backup or copy active file in NOARCHIVELOG mode
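ORA-19602 means the database is open in NOARCHIVELOG mode, where RMAN can only back up while the database is closed and mounted. A sketch of the consistent (cold) backup that this mode requires:

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> backup as compressed backupset incremental level 0 database;
RMAN> alter database open;

The alternative is to switch the database to ARCHIVELOG mode, after which open backups work (see the ARCHIVELOG questions below).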
Questions about implementing ARCHIVELOG mode in a RAC environment.
Oracle 11gR1 RAC + ASM database. I'd like to implement ARCHIVELOG mode. Each node will have its own archive log.
Does it make sense to place archive logs on ASM (shared storage)? Or would the best practice be to have a separate "backup" server on which to keep the archive logs from all the RAC nodes?
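For what it's worth, if the logs do go to ASM, every instance should archive to the same shared location so that any surviving node can read all threads during recovery. An illustrative setting (the +FRA disk group name is an assumption):

-- Same destination on every instance; SID='*' applies it cluster-wide.
alter system set log_archive_dest_1 = 'LOCATION=+FRA' scope=spfile sid='*';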
An additional question is about estimating the space required for archive logs. Is it correct to estimate it by taking the daily average number of log switches

select round(avg(n_of_switch_logs), 3) avg_num_of_switch_logs
from   (select trunc(first_time), count(*) n_of_switch_logs
        from   gv$log_history
        group  by trunc(first_time));

and multiplying that by the size of a single redo log?
I cannot drop a datafile in a tablespace; how can I do it?
SQL> Alter Database Datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf' Offline
  2
  3  /
Database altered.
SQL> Alter Tablespace tps_hxl
  2  Drop Datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf';
Alter Tablespace tps_hxl
*
ERROR at line 1:
ORA-03264: cannot drop offline datafile of locally managed tablespace
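A locally managed tablespace will not let you drop a datafile while it is offline; the usual sequence is to recover the file, bring it back online, and only then drop it. The file must also be empty and must not be the tablespace's first file. A sketch, using the names from the post:

SQL> recover datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf';
SQL> alter database datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf' online;
SQL> alter tablespace tps_hxl drop datafile '/u02/app/oracle/oradata/oracl/hxl06.dbf';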
I have Oracle 10g running on Solaris with a file system, and someone created database files with the same name but in different directories, for example data01.dbf in two different directories, say /u01/oradata/data01.dbf and /u02/oradata/data01.dbf. Now I want to find the duplicate datafiles (data01.dbf in this case) sitting in different directories. Is there any way to find this out?
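One way, since the duplication is by base filename rather than full path, is to strip the directory off each entry in DBA_DATA_FILES and group on what remains. A minimal sketch:

-- Base filenames that occur in more than one directory.
select substr(file_name, instr(file_name, '/', -1) + 1) as base_name,
       count(*)                                         as copies
from   dba_data_files
group  by substr(file_name, instr(file_name, '/', -1) + 1)
having count(*) > 1;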
Today I am facing an error when trying to resize a datafile. Its size is 19000M, but after truncating all the tables only 112M is used. When I try to resize the datafile to 500M, I get error ORA-03297: file contains used data beyond requested RESIZE value.
I did the same thing a week ago without any error, but this time I get the error.
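ORA-03297 means some extent still sits above the 500M mark even though the total used space is tiny; truncating tables deallocates space but does not move the remaining extents down the file. A hedged check of which segments are in the way, assuming file_id 7 and an 8K block size (both illustrative):

-- Segments with extents allocated beyond the requested 500M resize point.
select owner, segment_name, block_id
from   dba_extents
where  file_id = 7                                      -- illustrative file id
and    (block_id + blocks) * 8192 > 500 * 1024 * 1024;  -- 8K block size assumed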
An erroneously created datafile, "/path/../large_rbs_03.dbf", was created under the SYSTEM tablespace when it was supposed to be in the LARGE_RBS tablespace.
How do I make the said datafile belong to LARGE_RBS?
I have a database at branch A whose files are located on the E: drive, and I want to bring it up at branch B. I placed all of branch A's files on the D: drive at branch B. When I started the database I got a controlfile error, so I made the required corrections in initorcl.ora. Now when I start the database it mounts, but I get ORA-01157: cannot identify data file 1. I tried to rename the file, but the "alter database rename file1 to file2;" option is not working.
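ALTER DATABASE RENAME FILE does work for this, but only with the database mounted and with full quoted paths on both sides, and it has to be repeated for every datafile (and redo log) that moved. A sketch with illustrative Windows paths:

SQL> startup mount
SQL> alter database rename file 'E:\ORADATA\ORCL\SYSTEM01.DBF'
  2                          to 'D:\ORADATA\ORCL\SYSTEM01.DBF';
SQL> alter database open;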
Is there a way to find out when a datafile for an undo tablespace with autoextend enabled actually extended? I've done a few tests, and nothing is written to the alert log or any trace file that I've found. I can't find any V$ or DBA view that will give me the history of a file's size.
We need to find out when any datafile in a tablespace was resized (if at all). Previously, by noting the creation date from v$datafile, we could track the data growth in a tablespace. Now that the number of datafiles has increased, we want to resize them. This diagnostic has to be done without changing or adding anything in the DB.
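For this question and the undo-autoextend one above: if AWR is licensed (Diagnostics Pack), its tablespace-usage snapshots give a size history without touching the database. Datafile-level history is not kept, but a jump in tablespace size between snapshots dates a resize or autoextend. A hedged sketch; sizes in this view are in database blocks, and the tablespace name is illustrative:

select u.snap_id, u.rtime,
       round(u.tablespace_size * t.block_size / 1024 / 1024) as ts_size_mb
from   dba_hist_tbspc_space_usage u,
       v$tablespace v,
       dba_tablespaces t
where  u.tablespace_id   = v.ts#
and    t.tablespace_name = v.name
and    v.name            = 'UNDOTBS1'   -- illustrative tablespace name
order  by u.snap_id;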
Changing archive mode while the DB is running: in 9.2, ALTER SYSTEM ARCHIVE LOG START/STOP was available for this job, but in 10g/11g it raises "archive log stop has been deprecated".
However, for my practice I need to estimate the archived log size before backing the logs up, so I need to stop archiving for a while and enable it again later while the DB is running.
I checked the Oracle documentation; it only mentions changing the archive log mode with the DB mounted but not open.
I am using Oracle 10g. Is there any mechanism or parameter to enable or disable archive log mode? Can I enable archiving directly from the pfile without touching the startup process?
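As the documentation cited above says, the mode itself can only be changed at MOUNT; there is no pfile parameter in 10g that toggles it. The standard sequence, for reference:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;    -- or NOARCHIVELOG to disable
SQL> alter database open;
SQL> archive log list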
I mistakenly added a datafile to a tablespace that lives in ASM; however, the datafile was created in a default filesystem location rather than the ASM location:
alter tablespace pdaiidata1 add datafile '<filename>' size 2048M;
What I should have done:
alter tablespace <tablespace_name> add datafile '+DATA1' size 2048M;
Is there any way to move this filesystem datafile into the asm tablespace? In previous Oracle versions, I've taken a tablespace offline, moved a datafile, renamed it, then brought the tablespace back online. Can I do something similar here in this situation?
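Yes, though with ASM the offline/copy/rename dance is done through RMAN. A hedged sketch (file number 17 and disk group +DATA1 are illustrative; the database must be in ARCHIVELOG mode for the OFFLINE/RECOVER steps):

RMAN> sql 'alter database datafile 17 offline';
RMAN> backup as copy datafile 17 format '+DATA1';   -- writes the copy into ASM
RMAN> switch datafile 17 to copy;                   -- repoints the controlfile
RMAN> recover datafile 17;
RMAN> sql 'alter database datafile 17 online';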
This facility has one last 10g database with a very problematic tablespace and last datafile associated with it. The tablespace was set up with an INITIAL_EXTENT of 131,072 (128K) instead of the more "normal" 4,194,304 (4M), and a NEXT_EXTENT of 262,144 (256K) instead of 4,194,304 (4M).
More worryingly, the datafile has INCREMENT_BY set to 1 (8K) instead of 1,280 (10M) or 2,048 (16M). Has anyone ever updated sys.ts$.dflinit and sys.ts$.dflincr to modify the INITIAL_EXTENT and NEXT_EXTENT, and sys.file$.inc to modify the INCREMENT_BY?
I need to resize my datafile, as I allocated more space than needed and now want to reduce it (the data load is completed now). My tablespace has 11.74 GB of free space and three datafiles.
TABLESPACE   TOTAL      USED       FREE       PCT_FREE    LARGEST  FRAGMENTS
------------ ---------- ---------- ---------- ----------- -------- ---------
CFC_DATA     150528     138780.6   11747.4    7.80412946  1251     992
TABLESPACE_NAME  FILE_ID FILE_NAME                                    Size(MB)
---------------- ------- -------------------------------------------- ----------
CFC_DATA              71 +DATA/dedw/datafile/cfc_data.4074.731085435  65535.9688
CFC_DATA             334 +DATA/dedw/datafile/cfc_data.4473.757566557  20480
CFC_DATA            1710 +DATA/dedw/datafile/cfc_data.2012.728095695  64512

I used the script below to find the HWM in order to resize the datafile; db_block_size is 16KB. [code]....
In TOAD there is a "Minimum size" button against each datafile. I need the SQL that TOAD runs behind the scenes when this button is pressed.
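TOAD's exact SQL is not published here, but an equivalent result comes from the highest allocated extent in each file, below which a datafile cannot be shrunk. A hedged sketch:

-- Smallest size each datafile could be resized to, based on its highest extent.
select f.file_id, f.file_name,
       ceil(nvl(e.hwm, 1) * p.bs / 1024 / 1024) as min_size_mb
from   dba_data_files f,
       (select file_id, max(block_id + blocks) as hwm
        from   dba_extents
        group  by file_id) e,
       (select to_number(value) as bs
        from   v$parameter
        where  name = 'db_block_size') p
where  f.file_id = e.file_id(+);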
I have one generic question about space management. I have a table 1TB in size, stored in the ORC1 tablespace, which contains 70 datafiles.
Since it's a 10.2.0.4 database, I dropped this table using PURGE:
drop table <<table_name>> purge;
Once the table drop completed, I checked the tablespace and it was 100% free, but due to the HWM I was unable to resize the datafiles from their current size down to a smaller size. What is the reason behind this? Is there any process that needs to be followed when dropping big tables, e.g. should I truncate first and then drop?