OMF - Add Data Files To A Tablespace?
Aug 6, 2012
DB version : 11.2.0.2
Platform : RHEL 5.4
We use Oracle Managed Files (OMF) to store datafiles in ASM disk groups. To add a datafile to a tablespace we usually issue:
SQL> ALTER TABLESPACE CADL_WM_TBS ADD DATAFILE '+DATA' SIZE 10G AUTOEXTEND OFF;
Tablespace altered.
The new datafile is then created at the location below:
+DATA/orcl/datafile/cadl_wm_tbs.893.767888027
But my db_create_file_dest is set only to +DATA:
SQL > show parameter db_create_file
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      +DATA
Although the above datafile got created in the desired location (i.e. inside the +DATA/<dbname>/datafile/ directory), how did this happen without us setting the db_create_file_dest parameter to +DATA/orcl/datafile?
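For reference, a minimal sketch of the behaviour in question, assuming a database named orcl: when db_create_file_dest points at a bare ASM disk group, OMF itself derives the <db_unique_name>/datafile/ subdirectory and generates the file name.
SQL> ALTER SYSTEM SET db_create_file_dest = '+DATA';
SQL> ALTER TABLESPACE cadl_wm_tbs ADD DATAFILE SIZE 10G AUTOEXTEND OFF;
-- OMF names the file +DATA/<db_unique_name>/datafile/<tablespace>.<file#>.<incarnation>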
Jan 22, 2013
Why is the total size reported for UNDOTBS1 different from the actual data file size in the operating system? (A hedged check follows after the listing below.)
select tablespace_name, sum(bytes/1024/1024) from dba_data_files
where tablespace_name like 'UNDO%'
group by tablespace_name;
TABLESPACE_NAME   TOTAL SIZE (MB)
UNDOTBS1                     2000
UNDOTBS2                     7284
[code]....
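One hedged check, assuming the difference comes from autoextend headroom rather than the current allocation (dba_data_files.bytes reports the currently allocated size, not the autoextend ceiling):
SELECT file_name,
       bytes/1024/1024    AS current_mb,   -- space currently allocated
       maxbytes/1024/1024 AS max_mb,       -- autoextend ceiling
       autoextensible
FROM   dba_data_files
WHERE  tablespace_name LIKE 'UNDO%';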
Sep 24, 2010
I am considering the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. Is it possible to write the dump files to tape? If so, would the data still be accessible later if needed?
Mar 29, 2011
I am working in a bank as a system consultant. I have a SAN storage setup and Oracle as below.
SAN 1
This holds the data files of the Oracle tablespaces.
SAN 2
SAN 1 mirrors the data files of the Oracle tablespaces to SAN 2.
1. Can I rely on real-time data recovery from SAN 2?
2. If the data files on SAN 1 are corrupted, will the SAN 2 data files be corrupted as well?
3. If SAN 2 is corrupted too, what Oracle features can be used to get uncorrupted data?
Jun 15, 2012
I have set up a cross-platform (Microsoft Windows IA 32-bit -> Linux x86 64-bit) Data Guard configuration and it worked fine. Then I did a switchover (which again worked) and found out the data is not getting replicated at all. I checked the data files visible from the new primary database and found they are still in the Windows format, as below:
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSTEM01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSAUX01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\UNDOTBS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\USERS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\RMAN\RMAN_TS01.DBF
and physically they were created under '/home/app/oracle/product/11.2.0/db_1/dbs/' with names such as the following (a hedged pointer follows after the list):
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\REDO02.LOG
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\REDO03.LOG
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\RMAN\RMAN_TS01.DBF
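A hedged pointer, not a confirmed diagnosis: on a cross-platform standby, the usual way to translate primary file paths is the *_file_name_convert parameters (both are static, so SCOPE=SPFILE and a restart are needed). The Linux target path below is a placeholder:
SQL> ALTER SYSTEM SET db_file_name_convert =
  2    'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\', '/u01/oradata/MFS/' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET log_file_name_convert =
  2    'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\', '/u01/oradata/MFS/' SCOPE=SPFILE;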
Aug 13, 2010
I'm a SAP consultant working with SQL Server on NT platforms. This is the first conversion from Oracle that I have done. My client has provided us with a "cold" backup of the Oracle database on a hard disk formatted in Unix. I have the partition mounted and I'm able to view the files, including the ORDATA folder with all the .DBF files.
Q: How do I extract the data from the .DBF files? I need to export it to something workable with SQL Server.
The original database was on Unix; I'm operating on a Windows platform.
Aug 25, 2010
We're planning a data migration from one application (Oracle-based) to another (also with an Oracle DB).
The origin is a roughly 80 GB database, so many millions of records have to be migrated (before loading records into the destination tables, they have to be transformed).
The current concept is to receive all origin data in XML files, load them into a staging area (a dedicated migration schema in Oracle), then transform and load them into the destination tables.
We have three days for the whole migration (including extract from the origin database, transform, load, and backup after completion).
My question is whether a migration with XML files is a good concept. I think XML processing is much slower than doing the same with CSV files; my proposal to migrate an Oracle dump (so we would have the original data in our staging area) was declined.
Is migrating mass data with XML files workable, or are there performance or other issues?
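For comparison, a minimal sketch of the CSV route mentioned above, assuming a directory object STAGE_DIR and hypothetical customers.csv / stage_customers names; an external table lets the staging load run as plain SQL:
CREATE OR REPLACE DIRECTORY stage_dir AS '/u01/stage';
CREATE TABLE stage_customers_ext (
  cust_id   NUMBER,
  cust_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY stage_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  )
  LOCATION ('customers.csv')
);
-- load into the staging schema with ordinary SQL
INSERT INTO stage_customers SELECT * FROM stage_customers_ext;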
Jul 3, 2013
I was trying to load data from XML files into an Oracle database table and followed the steps below. First I created XML_DIR1 as an Oracle directory object pointing to where I keep the XML files, then:
CREATE TABLE import_rpt_xml OF XMLType XMLTYPE STORE AS BINARY XML;
INSERT INTO import_rpt_xml
VALUES (XMLType(bfilename('XML_DIR1', 'I-Yamanouchi-20040525-501.xml'), nls_charset_id('AL32UTF8')));
This insert fails with the error below:
Error starting at line 80 in command:
insert into import_rpt_xml values (xmltype(bfilename('XML_DIR1', 'I-Yamanouchi-20040525-501.SGM'), nls_charset_id('AL32UTF8')))
Error report:
SQL Error: ORA-31061: XDB error: XML event error
ORA-19202: Error occurred in XML processing
In line 69 of orastream:
LPX-00217: invalid character 142 (U+008E)
I looked into my XML and found that it has some Japanese characters in it.
How do I deal with Japanese characters in XML? I don't want to lose those characters. My database NLS_CHARACTERSET is 'AL32UTF8'. (A hedged variation follows after the sample XML below.)
My sample XML file looks like this.
<ichicsr lang="ja">
<ichicsrmessageheader>
<messagetype>ichicsr</messagetype>
<messageformatversion>2.1</messageformatversion>
<messageformatrelease>2.0</messageformatrelease>
<messagenumb>US-Yamanouchi-W2004050033-4</messagenumb>
<messagesenderidentifier>Yamanouchi</messagesenderidentifier>
<messagereceiveridentifier>PMDA</messagereceiveridentifier>
[Code]...
and so on.
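A hedged variation worth trying, on the assumption that the file is actually encoded in something like Shift-JIS rather than UTF-8 (LPX-00217 frequently means the declared and actual encodings disagree); JA16SJIS below is illustrative, not confirmed:
INSERT INTO import_rpt_xml
VALUES (XMLType(bfilename('XML_DIR1', 'I-Yamanouchi-20040525-501.xml'),
                nls_charset_id('JA16SJIS')));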
Jul 24, 2012
I noticed that three data files on my standby show as corrupted. What is the easiest and fastest way to sync them up from my production database?
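One hedged approach, assuming the affected file numbers are known (5, 6 and 7 here are placeholders): take image copies on the primary, transfer them, then catalog and restore on the standby with managed recovery stopped.
-- on the primary
RMAN> BACKUP AS COPY DATAFILE 5, 6, 7 FORMAT '/tmp/df_%f.cpy';
-- transfer /tmp/df_*.cpy to the standby host, then on the standby
RMAN> CATALOG START WITH '/tmp/df_';
RMAN> RESTORE DATAFILE 5, 6, 7;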
May 13, 2004
I would like to extract some data values from Oracle to a text file, and I am not sure how to set the delimiter between the column values.
SET echo off
SET space 0
SET pagesize 0
SPOOL a.txt
SELECT emp_id, name, add
FROM table1
/
SPOOL OFF
Where do I set the delimiter?
Can I do something like in SQL*Loader?
fields terminated by ',' enclosed by '"'
I would like the text file to be displayed as below (one hedged approach follows after the sample):
"123","ABCD","123 abc road"
"234","XYZ","234 xyz road"
Jun 12, 2013
I am trying to insert data into three tables from three CSV files simultaneously. This is what I have so far (a hedged completion sketch follows after the snippet):
---insert all data from three csv files
DECLARE
--zenobject
F UTL_FILE.FILE_TYPE;
[Code]....
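Since the snippet above is truncated, here is a minimal hedged sketch of the UTL_FILE pattern for one of the files; the directory object CSV_DIR, file name zenobject.csv and staging table zenobject_stage are all assumptions:
DECLARE
  f    UTL_FILE.FILE_TYPE;
  line VARCHAR2(4000);
BEGIN
  f := UTL_FILE.FOPEN('CSV_DIR', 'zenobject.csv', 'r');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(f, line);          -- read one CSV line
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;        -- end of file reached
    END;
    INSERT INTO zenobject_stage (raw_line) VALUES (line);
  END LOOP;
  UTL_FILE.FCLOSE(f);
  COMMIT;
END;
/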
Dec 28, 2012
The DB can only be opened if all the datafiles and control files are synchronized. I wonder, if the DB crashes and we don't have any kind of backup, is there some way to synchronize the control file with the datafiles, any way at all? The DB is not idle when it crashes, either. We can manage data loss; we just want our database open.
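For what it's worth, a hedged sketch of the usual last-resort sequence when the control file and datafiles disagree; it assumes the current control file is usable and accepts possible data loss:
SQL> STARTUP MOUNT
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
SQL> -- apply whatever redo is available, then cancel
SQL> ALTER DATABASE OPEN RESETLOGS;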
Feb 24, 2013
My server hard disk crashed yesterday and I don't have any backups.
I was able to recover the .dbf files by using a recovery tool.
Is it possible to use these .dbf files on a new server and recover my data?
Dec 8, 2009
I would like to increase the size of my redo logs. For this I need to drop and recreate them. I have read the Oracle doc [URL] which covers this; however:
I have both "normal" and "standby" redo logs (see below) on my primary database. Can I drop the standby redo logs as well? Do they have a link to the redo logs on the standby database? (A hedged resize sketch follows after the listing below.)
SQL> SELECT * FROM v$logfile ORDER BY 1;
GROUP# STATUS  TYPE    MEMBER
------ ------- ------- ---------------------------------
     1         ONLINE  D:\ORACLE\ORADATA\REDO01.LOG
     2         ONLINE  D:\ORACLE\ORADATA\REDO02.LOG
     3         ONLINE  D:\ORACLE\ORADATA\REDO03.LOG
     4         STANDBY D:\ORACLE\ORADATA\STDBYREDO04.LOG
     5         STANDBY D:\ORACLE\ORADATA\STDBYREDO05.LOG
     6         STANDBY D:\ORACLE\ORADATA\STDBYREDO06.LOG
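For reference, a hedged sketch of the resize cycle for one group; it assumes the group is INACTIVE (switch and checkpoint first) and the 512M size is purely illustrative:
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;
SQL> ALTER DATABASE ADD LOGFILE GROUP 1 ('D:\ORACLE\ORADATA\REDO01.LOG') SIZE 512M REUSE;
-- standby redo logs follow the same pattern with the STANDBY keyword:
SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('D:\ORACLE\ORADATA\STDBYREDO04.LOG') SIZE 512M REUSE;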
Sep 11, 2012
I am a fairly new DBA, and one of our HP array clusters crashed to the point where Oracle will no longer start up or mount. I can, however, access the datafiles on the Linux server. Is there a way to export the datafiles to another Linux box to import into another database, or have I pretty much lost everything? (We do have an RMAN backup, but it is in a remote location and the only person who knows the password to it is unavailable.)
Oct 23, 2012
I am trying to spool data from tables into flat files. I am using the following pieces to accomplish it:
1. A cmd file (Windows) that makes a call to a SQL file
2. The SQL file, which generates another query file at run time, depending on the table name passed to it
3. The run-time query file, which executes the final query and spools the data into a pipe-delimited txt file
For example, the actual command passed is: C:\Spool_utility\spool_utility TABLE_NAME
Example of the spool utility cmd file:
@echo off
SET dbuser=XX@YY
SET dbpw=xxxx
echo %date% - %time% - Start > %1%log.txt
echo START
sqlplus -s %dbuser%/%dbpw% @spool_utility.sql %1>%1.txt
echo %date% - %time% - Done >> %1%log.txt
echo DONE
Example of spool_utility.sql:
set echo off
SET newpage 0
SET feedback off
SET linesize 32767
set pagesize 0
[code]........
The above generates a table_name.sql file with the actual table name at run time, executes it, and writes the output to table_name.txt.
This works perfectly fine. The issue is when someone passes a wrong table name, or there is an actual run-time error while executing the query: the error details themselves get written to the final spool file.
For example, if I do the following just to generate an error and execute it from the command line, the query fails and the error is written to the spool file, but at the command prompt where I ran the command I see no error at all, and the process appears to have run perfectly well:
set xxx on xxx off as above
spool &1.sql;
Prompt Select * from &1 where rownum><10   -- this (deliberately invalid) line will cause the issue
spool off
set termout ON
@ &1
EXIT
Example of the spool file generated:
from table_name WHERE rownum><=10 *
ERROR at line 62:
ORA-00936: missing expression
My question is: is there any way I can capture this run-time error, return it to my calling script spool_utility.sql, and then propagate it to the calling cmd file so I can do some cleanup, e.g. removing the spool file and writing the actual error to a log file? Basically, any way to know at the OS calling level that the entire spooling operation was unsuccessful.
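One hedged direction: make SQL*Plus exit with a non-zero code on any SQL error, then test ERRORLEVEL in the cmd file. A sketch, reusing the names above:
-- at the top of spool_utility.sql (and of the generated script):
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR  EXIT FAILURE
rem in the cmd file, immediately after the sqlplus call:
IF ERRORLEVEL 1 (
    del %1.txt
    echo %date% - %time% - FAILED >> %1%log.txt
)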
Jul 1, 2013
Trying to connect to XE 11.2 getting the following:
Unable to connect. SQLState=51000
[Oracle][ODBC][Ora]ORA-12705: Cannot access NLS data files or invalid environment specified.
I have set the environment variables using the oracle_env.sh shell script. I am trying to connect with the Instant Client running on Windows XP with the ODBC extensions installed; the host is x86_64 CentOS Linux.
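A hedged thing to check, since ORA-12705 on a Windows client very often comes down to a missing or malformed NLS_LANG in the client environment; the value below is only an example:
C:\> rem verify/set the client-side NLS environment before connecting
C:\> set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252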
Jun 29, 2011
My previous topic was locked and I was unable to respond, so I am posting it again. The user referred to in the message has its own tablespace assigned, and I am trying to import the data into that tablespace. I have noticed that within the .imp file the USERS tablespace is being referenced.
I am having an issue importing. We are currently using Oracle 10g. When I import the .imp file it places the data both into the USERS tablespace and into the tablespace of the user (SOM) that is specified. Is there a simple and easy fix for this? I checked the .imp file and it has USERS embedded in it.
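A hedged workaround that often works with classic imp: remove the user's quota on USERS so objects fall back to the user's default tablespace. SOM_TS below is a placeholder for the user's own tablespace, and this only helps if the user does not hold the UNLIMITED TABLESPACE privilege:
SQL> ALTER USER som QUOTA 0 ON users;
SQL> ALTER USER som DEFAULT TABLESPACE som_ts QUOTA UNLIMITED ON som_ts;
-- then re-run the import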
Jun 24, 2013
I am trying to back up my entire database's files:
[oracle@testing-oracle-1 ~]$ rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Mon Jun 24 16:09:54 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1341263457)
RMAN> list backup;
using target database control file instead of recovery catalog
specification does not match any backup in the repository
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT
2> FOR DEVICE TYPE DISK TO '/u01/app/oracle/backupsets/control_files/cf_%F.BCKP';
[code]....
I see that a backup set was created in "/u01/app/oracle/flash_recovery_area/ORCL/backupset/2013_06_24/", in spite of my specification.
So I asked RMAN to list all the backups I have:
RMAN> list backup;
List of Backup Sets
===================
BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
1 41.14M DISK 00:00:00 24-JUN-13
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: TAG20130624T161025
Piece Name: /u01/app/oracle/backupsets/818957425_1_%r.BCKP
[code]....
Here I see that 4 backup sets have been created. Backup sets 1 & 3, containing the archived redo logs (818957425_1_%r.BCKP and 818957451_3_%r.BCKP), are placed as I wanted in the /u01/app/oracle/backupsets/ dir.
The file containing the control file and spfile (cf_c-1341263457-20130624-00.BCKP) is placed as I wanted in the /u01/app/oracle/backupsets/control_files/ dir. However, the backup set containing the data files (o1_mf_nnnd0_TAG20130624T161026_8wjkb2tg_.bkp) is placed under /u01/app/oracle/flash_recovery_area/ORCL/backupset/2013_06_24/, and I don't understand why.
When I back up my database without the archived redo logs, it is saved correctly:
RMAN>
RMAN> delete backup;
using channel ORA_DISK_1
List of Backup Pieces
BP Key BS Key Pc# Cp# Status Device Type Piece Name
[code]....
1. Why aren't my data files backed up to the location I specified when I back up using the PLUS ARCHIVELOG syntax?
2. Why are my redo logs saved into two different backup sets and not just one?
3. Is there a way to combine the backup of all files (data files, control file, spfile, and archived redo logs) into one backup set, one file?
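A hedged sketch of one common fix, assuming the goal is to pin every backup piece (including the datafile sets created by BACKUP DATABASE PLUS ARCHIVELOG) to one directory: give the disk channel an explicit FORMAT so RMAN stops defaulting to the flash recovery area.
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/app/oracle/backupsets/%U.BCKP';
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;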
Jan 20, 2012
I want to load data into multiple tables from many files, based on the first column value, which is a FILLER field. I am trying to test this scenario with two Oracle tables with similar definitions, loading one record into each table using the WHEN/POSITION keywords. For this, I added the first column as a reference column in the data, which I have in the ctl file itself.
The 1st table loads its record, but the 2nd record is not loading. Have I missed anything with the WHEN/POSITION keywords? (A minimal control-file sketch follows after the log excerpt below.)
This is the error in log file for 2nd table(WD1):
Record 2: Rejected - Error on table WD1, column TAB.
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
Table WD1:
0 Rows successfully loaded.
1 Row not loaded due to data errors.
1 Row not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
[code]....
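For comparison, a minimal hedged sketch of the multi-table WHEN/POSITION pattern; table and column names are invented and do not match the WD1 layout above:
LOAD DATA
INFILE *
APPEND
INTO TABLE wd1 WHEN (1:1) = '1'
  (rectype FILLER POSITION(1:1) CHAR,
   val            POSITION(3:10) CHAR)
INTO TABLE wd2 WHEN (1:1) = '2'
  (rectype FILLER POSITION(1:1) CHAR,
   val            POSITION(3:10) CHAR)
BEGINDATA
1 AAAAAAAA
2 BBBBBBBB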
Jul 31, 2008
I have written makefiles that compile .pc files on Unix, for several projects that use an oralib source code directory. Just running proc on one target .pc file works fine on Unix. I am trying to use proc (Oracle 10.2.0) on Windows and I keep getting:
Quote: unable to open include file
#include <stdio.h>
and other C library headers.
I am doing all development under cygwin; this way I can write a makefile just like under Unix instead of using nmake. All C library headers are in /usr/include. When I run proc on Solaris like this:
proc program.pc
there are no problems, and I do get program.c.
However on Windows I get the previous error message. I have tried proc include=/user/include program.pc and proc include=/user/include parse=full program.pc, but I still get the same error message.
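A hedged suggestion, not a verified fix: on Windows the Pro*C precompiler often cannot locate system headers on its own, so either point sys_include at the header directory or skip parsing non-SQL code entirely with parse=none. For example:
proc parse=none include=/usr/include program.pc
proc sys_include=/usr/include include=/usr/include parse=full program.pc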
Dec 19, 2012
"All data you create in this tablespace will be encrypted using an AES256 encryption key. You cannot encrypt an existing tablespace. To encrypt data, first create an encrypted tablespace, then use alter table move, CTAS or datapump import to move your data into the encrypted space. Remember to drop the old tablespace BUT not including datafiles. Use an OS schred program to remove the old datafile. If you are on ASM you may use the including datafiles option since you can’t schred files from the OS inside an ASM instance."
But I want to know why we should NOT drop the datafiles when dropping the tablespace (so not 'drop tablespace my_tbs including contents and datafiles'). What option should we use when dropping the tablespace?
Why should we use OS capabilities to remove the datafiles?
What happens if I remove the datafiles when I drop the tablespace?
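For orientation, a hedged sketch of the move-then-drop flow described in the quote; names and sizes are placeholders, and an open encryption wallet is assumed:
SQL> CREATE TABLESPACE enc_ts DATAFILE SIZE 100M
  2  ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);
SQL> ALTER TABLE my_table MOVE TABLESPACE enc_ts;
SQL> DROP TABLESPACE old_ts INCLUDING CONTENTS;   -- datafiles left on disk for OS shredding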
Jan 4, 2013
I have a range-partitioned table with one LOB column. Each partition is on a separate tablespace, except two partitions which are on the same tablespace. Now I want to move a partition from one tablespace to another along with its LOB data. Will a simple ALTER TABLE MOVE PARTITION also move the LOB data, or is there some special procedure to follow?
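A hedged sketch of the usual form (the LOB segment needs its own STORE AS clause or it stays in its current tablespace); partition, column and tablespace names are placeholders:
SQL> ALTER TABLE my_part_tab MOVE PARTITION p1 TABLESPACE new_ts
  2  LOB (doc_col) STORE AS (TABLESPACE new_ts);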
May 15, 2010
I exported an Oracle database from one server into a dmp file, then imported it into another Oracle database server. When I looked at the imported data, the columns that store German data contained rubbish characters.
Then I remembered that the database I exported from has its NLS language set to German. I executed this statement to set the NLS on the new server:
alter system set nls_lang=german SCOPE=SPFILE
But now my database is not starting at all, always giving me the error: cannot access NLS data files or invalid environment specified.
I also set NLS_LANG=german in the Solaris environment.
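A hedged recovery sketch, assuming the bad entry was written to the spfile and is now blocking startup: rebuild the spfile from an edited pfile.
SQL> CREATE PFILE='/tmp/init_fix.ora' FROM SPFILE;
-- edit /tmp/init_fix.ora and delete the offending nls entry, then:
SQL> STARTUP PFILE='/tmp/init_fix.ora';
SQL> CREATE SPFILE FROM PFILE='/tmp/init_fix.ora';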
Jul 1, 2011
I have a 10g Express system running, with 2 tablespaces in production. When taking a backup, it terminates unsuccessfully, saying system01.dbf is damaged. The application works fine and no data loss is apparent through the application interface.
So can I move the data to a new server using the .dbf files of the tablespaces in use?
May 29, 2010
How do I transfer redo log files to a standby database?
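For context, a hedged sketch of the standard mechanism: redo is shipped by the primary over Oracle Net to an archive destination that names the standby's TNS alias (standby_tns and standby below are placeholders).
SQL> ALTER SYSTEM SET log_archive_dest_2 =
  2    'SERVICE=standby_tns ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby';
SQL> ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;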
Oct 8, 2011
1> Does Data Guard in 10g use FTP/rsh to transfer archived log files to the standby database, or some other protocol?
2> In my primary database, archives are being generated normally and there is no error in the alert log file. But the archives are not getting transferred to the standby database. I am able to connect through the SYS user from the primary server to the standby database and vice versa.
Also, tnsping is working fine.
All was working fine until 2 days back, and no parameter has been changed on the database side. I am not able to transfer files manually through FTP to the standby server. Is that the problem? Or does Data Guard not use the FTP protocol to transfer the files?
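A hedged first diagnostic, assuming destination 2 is the one shipping to the standby: check the destination status and error text from the primary.
SQL> SELECT dest_id, status, error
  2  FROM   v$archive_dest
  3  WHERE  dest_id = 2;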
Sep 13, 2011
I've successfully duplicated a standby database. From the alert log:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORA102\CTA\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
[code].....
When I tried to add the online and standby redo logs, it errored out:
SYS@CTA>select logdetail.member, loggroup.group#, loggroup.sequence#, loggroup.archived, loggroup.status lg_status, logdetail.status ld_detail, logdetail.type
2 from v$log loggroup join v$logfile logdetail
3 on loggroup.group# = logdetail.group#;
MEMBER
--------------------------------------------------------------------------------
GROUP# SEQUENCE# ARC LG_STATUS LD_DETA TYPE
---------- ---------- --- ---------------- ------- -------
[code].....
Based on my understanding from [URL] ....
Quote:
As part of the duplicating operation, RMAN automates the following steps:
Creates a control file for the duplicate database
Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available incremental backups and archived redo logs
Shuts down and starts the auxiliary instance (refer to "Task 4: Start the Auxiliary Instance" for issues relating to client-side versus server-side initialization parameter files)
Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs (except when running DUPLICATE ... FOR STANDBY, in which case RMAN does not open the database). So when duplicating for a standby database, RMAN does not create the online redo logs.
How should I add the online and standby redo logs? If I transfer the redo logs from the primary to the standby, I always encounter the following error (a hedged alternative follows after the trace excerpt):
Dump file d:\ora102\cta\dump\cta_arc0_3624.trc
Tue Sep 13 19:21:53 2011
ORACLE V10.2.0.4.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the OLAP, Data Mining and Real Application Testing options
Windows XP Version V5.1 Service Pack 2
[code].....
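Rather than copying log files from the primary (online redo logs are never shipped), a hedged sketch of creating them fresh on the duplicate; group numbers, paths and sizes are illustrative, and standby_file_management may need to be set to MANUAL while the logs are added:
SQL> ALTER DATABASE ADD LOGFILE GROUP 1 ('D:\ORA102\CTA\REDO01.LOG') SIZE 50M REUSE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('D:\ORA102\CTA\STDBYREDO04.LOG') SIZE 50M;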
Jun 5, 2013
I performed a switchover test of my Exadata databases last night. Both databases are running 11.2.0.2 (BP7) on top of GI of the same version. I'm using Data Guard Broker to administer the Data Guard configuration.
I have, as you'd expect, standby_file_management set to AUTO, so any file changes/additions/deletions made on the primary should be applied to the standby also. And they have been. Until last night.
When I had switched over to running Primary on the Standby site, I got this error message:
Tue Jun 04 22:27:12 2013
Errors in file /u01/app/oracle/diag/rdbms/exdw1pdg/exdw1pdg1/trace/exdw1pdg1_ora_26630.trc:
ORA-25153: Temporary Tablespace is Empty
I checked, and my two temp tablespaces existed but had no files in them. These files are 200GB and 448GB in size, so you'd think you'd notice them going missing. This wasn't by any means the first time we switched over (and, yes, I did create temp files for the standby when I built it and first switched over).
We've switched over to the standby multiple times and even ran a whole day's processing against it, and haven't seen this. Ultimately, it wasn't a big deal, because I just created a tempfile for each of the tablespaces and off we went. Nothing in MOS seems to mention something like this. Basically, it looks like the switchover process decided to eat my tempfiles but keep my temp tablespace definitions. Odd.
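For completeness, a hedged sketch of the fix applied (tempfile operations may not be replicated by standby_file_management=AUTO, which could be related); the size comes from the post, the OMF-style syntax is an assumption:
SQL> ALTER TABLESPACE temp ADD TEMPFILE SIZE 200G;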
Nov 6, 2013
I have taken an export of a schema using expdp; the schema's data is spread across different tablespaces. Now I want to import the data into only one tablespace.
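A hedged sketch using impdp's REMAP_TABLESPACE parameter (which can be repeated once per source tablespace); all names below are placeholders:
impdp system/password directory=DATA_PUMP_DIR dumpfile=schema.dmp remap_tablespace=TS_OLD1:TS_TARGET remap_tablespace=TS_OLD2:TS_TARGET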