Basically, we have a large index tablespace whose data files already span multiple mount points, and it has been extended by mistake with a new data file on the wrong mount point. Can we relocate this last datafile to a different mount point while the DB is online? Is there any special consideration for index datafiles (if such a classification is correct), or is it just like any other Oracle non-system datafile?
What will be the effect on the normal data tables which use this index if they are being accessed during the exercise? This is a two-node Oracle 9i (9.2.0.8.0) RAC on Veritas Cluster.
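An index datafile is treated like any other non-SYSTEM datafile; there is no special classification. In ARCHIVELOG mode, one common approach while the database stays open is to take just that datafile offline, copy it at OS level, rename it in the controlfile, recover it, and bring it back online. A minimal sketch, assuming ARCHIVELOG mode and hypothetical file names:

ALTER DATABASE DATAFILE '/wrong_mount/oradata/idx_ts05.dbf' OFFLINE;
-- copy the file to the correct mount point at OS level, then:
ALTER DATABASE RENAME FILE '/wrong_mount/oradata/idx_ts05.dbf'
                        TO '/right_mount/oradata/idx_ts05.dbf';
RECOVER DATAFILE '/right_mount/oradata/idx_ts05.dbf';
ALTER DATABASE DATAFILE '/right_mount/oradata/idx_ts05.dbf' ONLINE;

While the file is offline, sessions that need blocks stored in it (including index lookups) will raise ORA-00376; queries that never touch those extents are unaffected.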
I am on 11.2.0.3 Enterprise Edition. We are using the new feature "Composite Domain Index" for a Domain index on a very large table (more than 250,000,000 rows). It really works well with mixed queries. We added two NUMBER columns using FILTER BY. We have lots of DML on this table; therefore, we execute synchronize and optimize once a week. The synchronize behaves pretty normally, but "optimize_index" takes a very, very long time to complete. I have switched on logging for the optimize process. The $I table takes some time but finishes normally. But the optimization of the $S table (the table created for the CDI feature) has been running for over 12 hours now, and is far from finished. From the logfile, I can see that it optimizes 1000 rows every 20 minutes. Here is the output of the logfile:
Oracle Text, 11.2.0.3.0
14:33:05 06/26/12 begin logging
14:33:05 06/26/12 event
14:33:05 06/26/12 process $N for optimize: SEQDEV.GEN_GES_DESCRIPTION_CTX_I
14:33:16 06/26/12
14:33:16 06/26/12
...
I haven't found a recommendation from Oracle not to use "optimize_index" for Domain Indexes with CDI. But in my case, it would be much faster just to drop and recreate the Domain Index in question.
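For comparison, here is a sketch of a REBUILD-mode optimize, which rebuilds the $I token data in bulk instead of cleaning it row by row and can be much faster than FULL mode on heavily fragmented indexes. The index name is taken from the log above; I have not verified how this mode treats the CDI $S table, so test it on a copy first:

BEGIN
  ctx_ddl.optimize_index(
    idx_name => 'SEQDEV.GEN_GES_DESCRIPTION_CTX_I',
    optlevel => ctx_ddl.optlevel_rebuild
  );
END;
/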
Does copying datafiles from 10gR2 to 11gR2 work? I want to move one huge tablespace (which contains one table) from 10gR2 to 11gR2. What is the best method to do that?
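Transportable tablespaces do work from a lower release to a higher one, so that is usually the fastest method for one huge tablespace. A sketch, assuming the same endian format on both platforms and hypothetical names and paths:

-- on the 10gR2 source
ALTER TABLESPACE big_ts READ ONLY;
-- export only the metadata (run from the shell):
--   expdp system DIRECTORY=dp_dir DUMPFILE=big_ts_meta.dmp TRANSPORT_TABLESPACES=big_ts
-- copy the tablespace's datafiles plus the dump file to the 11gR2 host, then:
--   impdp system DIRECTORY=dp_dir DUMPFILE=big_ts_meta.dmp TRANSPORT_DATAFILES='/u02/oradata/big_ts01.dbf'
-- on the 11gR2 target, once imported:
ALTER TABLESPACE big_ts READ WRITE;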
I am looking for a way to monitor the creation of new datafiles/tempfiles. Let's say I am creating a new datafile of 8 GB, and it takes a few minutes to create.
I have looked at v$session_longops, and several other system views, but cannot find any that indicate how the file creation is progressing.
I want to be able to display something simple, such as "72% complete".
11.2.0.2 on Solaris. I have such a long post for a very basic space calculation.
We have several tablespaces starting with WLMCS in our DB. I just wanted to calculate the total disk space consumed by all these tablespaces combined. When I queried DBA_DATA_FILES.MAXBYTES and DBA_DATA_FILES.USER_BYTES, I noticed that:
When AUTOEXTEND is NO: MAXBYTES is 0 for these datafiles, but USER_BYTES is not 0.
When AUTOEXTEND is YES: MAXBYTES is a non-zero value for these datafiles, and USER_BYTES is not 0 either.
SYS > select tablespace_name, maxbytes/1024/1024, user_bytes/1024/1024, autoextensible from dba_data_files where tablespace_name like 'WLMCS%';
11 rows selected. (Datafile names omitted from the output for readability.) To calculate the space consumed, I made two assumptions. Are the two assumptions below right?
Assumption 1: Whenever MAXBYTES = 0, USER_BYTES should be used for the space calculation.
Assumption 2: Whenever both MAXBYTES and USER_BYTES are non-zero, MAXBYTES should be used for the space calculation. I did the calculation (adding up) based on the above assumptions. Is this calculation correct?
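For what it's worth: DBA_DATA_FILES.BYTES is the current allocated size of each file, USER_BYTES is the portion usable for segment data (roughly BYTES minus file overhead), and MAXBYTES is only the autoextend ceiling, not anything consumed today. So for "space consumed on disk right now", summing BYTES is the safer calculation; a sketch:

SELECT SUM(bytes)/1024/1024/1024                     AS allocated_gb,
       SUM(GREATEST(bytes, maxbytes))/1024/1024/1024 AS autoextend_ceiling_gb
FROM   dba_data_files
WHERE  tablespace_name LIKE 'WLMCS%';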
Due to improper documentation of a certain project, I need to drop the DEFAULT tablespace of a newly created instance, including its associated datafiles, using this command:
"DROP TABLESPACE <tablespace name> INCLUDING CONTENTS AND DATAFILES;"
The default tablespace is QWER (qwer01.dbf), and I added two datafiles to it: OPD_SML01.dbf and EXYT_SML01.dbf.
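One caveat: Oracle raises ORA-12919 if you try to drop the database's current default permanent tablespace, so the default has to be pointed elsewhere first. A sketch, assuming USERS exists and is a reasonable new default:

ALTER DATABASE DEFAULT TABLESPACE users;
DROP TABLESPACE qwer INCLUDING CONTENTS AND DATAFILES;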
Can we restore a database if we have only the datafiles available? The database has been dropped, but we have just the datafiles (no SYSTEM, SYSAUX, USERS, or UNDO datafiles) that were saved after taking the database offline and then dropping it. I know this is strange, but if possible, I want to see if we can retrieve some of the data from those datafiles into another database. BTW, there is no backup available for this database except the .dbf files at OS level.
When I install Oracle XE on my system and then try to connect to it, I get this error:
D:\oraclexe\app\oracle\product\10.2.0\server\BIN> net start OracleXETNSListener
'net' is not recognized as an internal or external command, operable program or batch file.

D:\oraclexe\app\oracle\product\10.2.0\server\BIN> net start OracleServiceXE
'net' is not recognized as an internal or external command, operable program or batch file.
When I check the path D:\oraclexe\oradata\XE, I see that the datafiles were not generated in this folder.
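The 'net' message itself is a Windows PATH problem, not an Oracle one: net.exe lives in %SystemRoot%\system32, which is apparently missing from PATH. A hedged workaround is to add it back for the session and retry:

set PATH=%PATH%;C:\WINDOWS\system32
net start OracleXETNSListener
net start OracleServiceXE

If the services then start but D:\oraclexe\oradata\XE is still empty, the database-creation step of the XE installer likely failed, and the install log is the next place to look.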
I just noticed MR (media recovery) locks in our database. A further check showed that all datafiles had this lock, since our database was shut down with ABORT but started up gracefully.
What is best practice for replacing the small D: disk? I am a beginner with Oracle; 10g on W2008. Five datafiles (all indexes, a second data file, two undo tablespaces), *.dbf (34, 30, 1, 34, 12 GB), are on D:. Some tablespaces (one data, one undo) have files on C:.
Option I:
1. Shut down the 2008 server.
2. Copy an image of D: with GHOST to USB or network.
3. Connect the new D:, create the RAID.
4. Restore the image to D:.
5. Start the 2008 server.

Option II:
1. Stop the application.
2. CONNECT AS SYSDBA.
3. SHUTDOWN NORMAL or (IMMEDIATE)?
4. Copy the *.dbf files at OS level from D: to ... USB disk, network.
5. Shut down the 2008 server.
6. Change the disks, create the RAID in the BIOS.
7. Start W2008. Is Oracle at this moment in SHUTDOWN mode?
8. Copy the *.dbf files back to the new D: (with the directory structure).
9. STARTUP Oracle.
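Option II is sound. After SHUTDOWN IMMEDIATE the datafiles are closed consistently and the database stays down until you issue STARTUP, so yes, Oracle is in shutdown mode while you swap disks. As long as every file comes back under the identical D: paths, no renaming inside Oracle is needed. A minimal sketch of the Oracle side:

CONNECT / AS SYSDBA
SHUTDOWN IMMEDIATE
-- copy all *.dbf on D: (plus any redo log and control files there) at OS level,
-- swap the disks, build the RAID, restore the directory structure, then:
STARTUP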
I need a PL/SQL stored procedure that accepts a datafile name as a parameter and dynamically builds and executes an "alter tablespace" command to add the passed datafile.
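A minimal sketch of such a procedure; the procedure name, parameters, and default size are my own choices, and real code should validate the inputs (for example with DBMS_ASSERT) before concatenating them into DDL:

CREATE OR REPLACE PROCEDURE add_datafile (
    p_tablespace IN VARCHAR2,
    p_datafile   IN VARCHAR2,
    p_size_mb    IN NUMBER DEFAULT 100
) AS
BEGIN
    -- DDL cannot run directly in PL/SQL, so build and execute it dynamically
    EXECUTE IMMEDIATE 'ALTER TABLESPACE ' || p_tablespace ||
                      ' ADD DATAFILE ''' || p_datafile ||
                      ''' SIZE ' || p_size_mb || 'M';
END add_datafile;
/

-- usage:
EXEC add_datafile('USERS', '/u02/oradata/users02.dbf', 500);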
Table D-7 shows a hierarchical file mapping of a sample OFA-compliant installation with two Oracle home directories and two databases. The database files are distributed across three mount points, /u02, /u03, and /u04. URL....
Why three mount points (/u02, /u03, and /u04) for datafiles? I don't understand this. Isn't it better to store all datafiles in one mount point?
I am using an 11.2.0.1 database and have all my datafiles, control files, and redo logs in /home/oracle/oradata, and I would like to move all the files completely to the /data/oradata folder.
What is the best method (without losing old backups, without resetlogs, no new incarnation, no new DBID)? Can it be done online or offline?
Is recreating the control file with the new locations and opening the database without resetlogs the best method?
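Recreating the controlfile is not necessary; renaming the files inside the existing controlfile keeps the DBID and incarnation, so old backups stay usable and no resetlogs is needed. It does require the database to be down briefly. A sketch, with paths abbreviated to one datafile:

SHUTDOWN IMMEDIATE
-- copy datafiles, redo logs, and controlfiles to /data/oradata at OS level
STARTUP NOMOUNT
ALTER SYSTEM SET control_files='/data/oradata/control01.ctl' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RENAME FILE '/home/oracle/oradata/system01.dbf'
                        TO '/data/oradata/system01.dbf';
-- repeat RENAME FILE for every datafile and every redo log member
ALTER DATABASE OPEN;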
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly, and the explain plan shows the use of this global index. The plan does not show Pstart and Pstop because the index is global.
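A global index is a single segment spanning all partitions, so its access path cannot be partition-pruned; only a LOCAL (equipartitioned) index gives the optimizer Pstart/Pstop. If the four columns and the partitioning key allow it, one hedged thing to test on a copy of the table (names hypothetical):

-- LOCAL makes the index equipartitioned with the table,
-- so the plan can show Pstart/Pstop pruning
CREATE INDEX big_tab_ix_local ON big_tab (col1, col2, col3, col4) LOCAL;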
I'm testing out various RMAN restore/recovery scenarios for 10g R1. I took a full level-0 backup of one of our larger databases (16 TB) and I'm trying to restore the whole thing on another server. While the backup was running (for about a day and a half), a few datafiles were added. The backup completed fine, including "backup current controlfile;". Now my restore is failing with "RMAN-06023: no backup or copy of datafile xxxx found to restore", referring to the 4 files that were added while the backup was in progress.
I guess if the datafiles had been added after the backup finished, we'd be fine, because they'd be created during recovery from our archive logs (we have all of those).
I found some suggestions for first doing a "restore controlfile;" -- did that and it worked fine -- then mounting the database and creating the datafiles manually. I added that to my RMAN restore script and tried it, like so:
run {
  allocate channel d4 type disk format '/BK01/dbX/stream1/df_%t_s%s_s%p';
  sql "alter database create datafile ''/uXX/oradata/dbX/datafile_58.dbf''";
  restore database;
}
The datafiles were created successfully, and the restore began .. but then it failed with this:
creating datafile fno=4216 name=/uXX/oradata/dbX/datafile_58.dbf
...
RMAN-03002: failure of restore command at 06/20/2013 17:27:28
ORA-01119: error in creating database file '/uXX/oradata/dbX/datafile_58.dbf'
ORA-27038: created file already exists
So RMAN seems to be telling me that if these 4 datafiles don't exist in the backupset, then RMAN won't automatically create them. But if I manually create them first, then RMAN will try to restore them and complain that they already exist. And for that matter, I would have expected that RMAN would be smart enough to detect that those 4 files were added while the backup was active, and just include them in the backup. Maybe there's something I needed to do to tell RMAN to do that?
This is just a test, so of course I can redo the backup from scratch and try again, but we have some pretty large databases which cannot be shut down for cold backups, so it's entirely possible this scenario could show up in a real disaster recovery. RMAN does such a good job of managing things.
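For what it's worth, the log above shows the restore step itself creating fno=4216, so one hedged reading is that RMAN does recreate files missing from the backupset during restore, provided it is working from the current controlfile that was backed up after the files were added, and that the manual create step simply collided with that. Under that assumption the script reduces to:

run {
  allocate channel d4 type disk format '/BK01/dbX/stream1/df_%t_s%s_s%p';
  restore database;
  # files absent from the backupset are created empty by restore,
  # then their contents are rebuilt from archived redo during recover
  recover database;
}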
I have checked the space of the tablespaces/datafiles in my database. I have 8 GB of space left on my database server. I can't add another hard disk as there are no slots left. We are planning to buy a new server with the latest config.
My question is: how can we know in advance up to what size our database can grow and when a datafile needs to be added? Sometimes, even though the datafiles have space left, we get errors about extents that cannot be extended. We have coalesced the tablespaces and added a new datafile.
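The "unable to extend" errors usually mean that no single contiguous free chunk is large enough for a segment's next extent, even when the total free space looks healthy. A hedged query to watch both numbers per tablespace:

SELECT tablespace_name,
       SUM(bytes)/1024/1024 AS total_free_mb,
       MAX(bytes)/1024/1024 AS largest_free_chunk_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY tablespace_name;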
I am facing an issue while rebuilding a DR (physical standby) site using RMAN. RMAN is failing to restore all the datafiles to the standby site.
There are 403 datafiles to be restored, but after restoring 50-60 of them the channel is abruptly terminated. Please note this exercise is not new; I have done it many times (once a year, after DR testing) and the same script is being used. The rcv file is as follows:
/u03/oracle/admin/mpr/scripts/dataguard/configure_dr.rcv:
connect auxiliary sys/XXXX@YYY
run {
  duplicate target database for standby nofilenamecheck dorecover;
}
exit
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
configure_dr.rcv: END
...
The trace file says:
krso_proc_add: Detected dead process 13828326; subsuming V$MANAGED_STANDBY slot
INFO: krsr_rfs_atc: RFS processes are temporarily disallowed
I have also started RMAN debug tracing and scanned the trace (it is huge). I have seen the following in it: "error 3015 signalled during compilation". I do not find any specific Oracle error to check on.
The only change is that the primary production server has been moved to a new datacentre, and since the move all the connections seem to be working fine.
We have lost some datafiles and some archivelogs. We have a backup of the whole database taken on Dec 20 and archivelogs until Dec 23, so we are missing one day's worth of archivelogs, which means incomplete recovery.
Now the question is: do we need to restore all datafiles, or only the lost datafiles, for incomplete recovery, since we don't have all the archivelogs? The lost datafiles belong to different tablespaces.
The datafiles that are still intact are at the current SCN; if we restore only the lost datafiles and recover until our last available archivelog, they will be at a different SCN. Will we be able to open the database, or do we need to restore the whole database in this scenario?
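For incomplete (point-in-time) recovery, every datafile must end up at the same SCN before the database can be opened, so the intact files at the current SCN cannot be mixed with files recovered only to the last available log: restore the whole database and stop before the first missing archivelog. A sketch, with a hypothetical UNTIL TIME:

run {
  set until time "to_date('2013-12-23 18:00:00','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
# then, from SQL*Plus:
# ALTER DATABASE OPEN RESETLOGS;

Opening with RESETLOGS is unavoidable after incomplete recovery.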
Usually I create Oracle databases using the CREATE DATABASE command, but this time I am trying dbca -silent -createDatabase -templateName General_Purpose.dbc. I have edited General_Purpose.dbc so that the control files, datafiles, etc. go to a location other than $ORACLE_BASE, with no luck so far; whatever I do, even changing $ORACLE_BASE, it just dumps everything into one location. How can I force/change it? This is Oracle 11gR2 on Linux.
Here is a sample of my General_Purpose.dbc:
<initParam name="control_files" value="/u01/oradata/control01.ctl, /u02/oradata/control02.ctl"/>
<initParam name="diagnostic_dest" value="/backup/oracle/oradata"/>
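Rather than editing the template, dbca has command-line switches that override the file destinations; a hedged example with placeholder database name and passwords, using options listed by dbca -help on 11gR2:

dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName mydb -sid mydb \
  -sysPassword secret -systemPassword secret \
  -datafileDestination /u02/oradata \
  -recoveryAreaDestination /backup/fra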
I am on an 11gR2 database on Linux 5.4, configuring a physical standby from scratch. Note that I don't want to use RMAN. When I copy datafiles from primary to standby, is it necessary to shut down the primary and then copy the datafiles, or can I use the scp command and copy even though the primary is up in read-write mode?
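You can copy with scp while the primary is open read-write, but only if the datafiles are in backup mode during the copy; otherwise the copies are fuzzy and unusable. A sketch with hypothetical paths, run on the primary (which must be in ARCHIVELOG mode):

ALTER DATABASE BEGIN BACKUP;
-- scp the datafiles while backup mode is on, e.g.
--   scp /u01/oradata/prod/*.dbf standby:/u01/oradata/prod/
ALTER DATABASE END BACKUP;
-- then create a standby controlfile to ship across as well:
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';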
I am facing the error "ORA-01502: index or partition of such index is in unusable state" while loading text data using SQL*Loader with the direct path option (direct=y, rows=10000). The table has a composite non-unique index. If I query DBA_INDEXES for the affected index, it shows the index status as VALID. There was no maintenance done on the affected table or index. I tried loading the same data using the conventional path and found no issues.
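One hedged check: for a partitioned index, DBA_INDEXES.STATUS shows N/A and the real state is in DBA_IND_PARTITIONS, so an index that looks fine at the top level can still have unusable partitions after a failed direct-path load. Object names below are hypothetical:

SELECT index_name, partition_name, status
FROM   dba_ind_partitions
WHERE  status = 'UNUSABLE';

-- rebuild whatever turns up, e.g.
ALTER INDEX my_composite_ix REBUILD PARTITION p1;
-- or, for a non-partitioned index:
ALTER INDEX my_composite_ix REBUILD;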