I'm looking for the steps to move OMF files in ASM. I tried the following and was not successful.
RMAN> switch database to copy;
datafile 1 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/system.357.809972853"
datafile 2 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/sysaux.363.809972837"
datafile 3 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs1.365.809972737"
datafile 4 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/users.361.809972859"
datafile 5 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs2.360.809972761"
datafile 6 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs3.359.809972787"
datafile 7 switched to datafile copy "+DATA01/pa01pod_im1l059p/datafile/undotbs4.358.809972811"
RMAN> alter database open resetlogs;
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 03/13/2013 16:30:05
ORA-01139: RESETLOGS option only valid after an incomplete database recovery
RMAN> alter database open;
So the switch worked, RESETLOGS says it can't be used there, and when I just try to open, it hangs.
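ORA-01139 is expected here: RESETLOGS is only valid after incomplete recovery, and SWITCH DATABASE TO COPY is not a recovery by itself. Image copies usually still need media recovery before the database will open, which may also be behind the hang. A minimal sketch of the usual sequence, assuming the copies were taken with BACKUP AS COPY while the database was open:

RMAN> switch database to copy;
RMAN> recover database;
RMAN> alter database open;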
1) Can we fetch SELECT statements from the redo log files through the LogMiner utility or any other tool? (I think the redo log files contain only INSERT, UPDATE, DELETE and DDL/DCL commands.)
2) If "No" to the above answer then how can i fetch all select statements fired on the system for a day or particular time. (setting of sql_trace may be the one of them, but can it be possible for system level)
I have backup pieces in ASM, I guess more than 100 files. Now I need to copy all of them from ASM to the filesystem. There are two methods so far:
1. Copy from ASM to the filesystem using the cp command.
2. Copy from ASM to the filesystem using DBMS_FILE_TRANSFER.
But:
With the first method, copying one file took more than a minute, so the following script would take more than a day (I guess).
#!/bin/ksh
#
# This script copies files from FRA on ASM to local disk
#
ORACLE_SID=+ASM2
ASMLS=/vasgatedb/app/vsgbkp/asm_ls.txt   ## {ASM files list}
[Code]...
The second method, DBMS_FILE_TRANSFER, took less than 1 second to copy one file completely.
sys@VSGDB> set timing on
sys@VSGDB> exec dbms_file_transfer.COPY_FILE('asm_dir','level_0_vsgdb_9998_813844797.bkp','fs_dir','level_0_vsgdb_9998_813844797.bkp');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.38
Of course I'd like to use the second method, but as I said above, I have about ~200 files and I can't copy them one by one.
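One way around copying one by one is to drive DBMS_FILE_TRANSFER from a PL/SQL loop. A minimal sketch, assuming the piece names can be taken from V$BACKUP_PIECE and that the ASM_DIR and FS_DIR directory objects already point at the source ASM folder and the filesystem target (if the pieces sit in several ASM subfolders, you would need one directory object per folder):

BEGIN
  FOR r IN (SELECT handle FROM v$backup_piece WHERE status = 'A') LOOP
    DBMS_FILE_TRANSFER.COPY_FILE(
      source_directory_object      => 'ASM_DIR',
      source_file_name             => SUBSTR(r.handle, INSTR(r.handle, '/', -1) + 1),
      destination_directory_object => 'FS_DIR',
      destination_file_name        => SUBSTR(r.handle, INSTR(r.handle, '/', -1) + 1));
  END LOOP;
END;
/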
I have Oracle 11g Release 1. I need a recommendation: how can I safely delete content from the folder C:\oracle\diag\rdbms\instancename\instancename\incident?
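Rather than deleting under diag by hand, the supported route is the ADRCI purge command, which also keeps the ADR metadata consistent. A sketch, assuming a 7-day (10080-minute) retention; the homepath must match your own ADR home:

adrci> set homepath diag\rdbms\instancename\instancename
adrci> purge -age 10080 -type incident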
ORA-00257: archiver error. Connect internal only until freed.
When we tried to remove the unwanted archive log files through ASMCMD, we got the error below:
ASMCMD> rm -ef 2011_04_05/
Unknown option: e
usage: rm [-rf] <name1 name2 . . .>
ASMCMD> rm -rf 2011_04_05/
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_2_seq_27215.1143.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_3_seq_21762.826.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15177: cannot operate on system aliases (DBD ERROR: OCIStmtExecute)
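The ORA-15028 errors mean the archived logs are still registered and held open, so removing them behind Oracle's back through ASMCMD is the wrong layer anyway. The usual route is RMAN, which releases and deregisters them cleanly. A sketch, assuming the logs older than two days are already backed up or no longer needed:

RMAN> connect target /
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all completed before 'SYSDATE-2';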
We have Oracle 10g (10.2.0.4) RAC on AIX 5.3, with 3 RAC instances on each node. Since Oct 9th, one of the instances on node 1 has been generating large trace files in udump. The largest trace file takes up 3 GB, but the $ORACLE_BASE directory is only about 15 GB, so every so often we have to delete trace files to release the space.
Here is part of the alert log from when this issue happened:
Sun Oct 9 23:18:15 2011
Errors in file /oracle/app/oracle/admin/bzywk/udump/bzywk1_ora_3166258.trc:
ORA-00600: internal error code, arguments: [17087], [0x70000010DF9F580], [], [], [], [], [], []
Sun Oct 9 23:18:16 2011
Trace dumping is performing id=[cdmp_20111009231816]
[code]....
I checked some of the trace files and found that they all contain a huge amount of process information.
I'm facing a problem with archive log file size: archive logs are generated at only 90m, 92m or 94m (variable sizes below 100m), although I set 100m for each of my redo log files. I'm providing my create-database script here for reference. I want to know why the log switches before it reaches 100m. Is there any connection with the initial 10m of my .dbf files?
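Archived logs are routinely a little smaller than the online logs: a switch can happen before the log is full (ARCHIVE_LAG_TARGET, manual switches), and from 10g onward space is pre-reserved in each online log for private redo strands, so the archive contains only the blocks actually written. A quick sketch to compare sizes, assuming access to the v$ views; the 10m initial extent of the .dbf files is unrelated:

SELECT sequence#, blocks * block_size / 1024 / 1024 AS archived_mb
FROM   v$archived_log
ORDER  BY sequence#;

SHOW PARAMETER archive_lag_target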
I am trying to install Oracle 11g Release 2 on Red Hat Enterprise Linux 5.2 (VMware image), but after 50% of the installation it throws errors while copying files to the server. I also tried Windows 2003 Server R2 (physical machine as well as VMware image) and faced the same problems. I did an installation of Oracle 11g Release 1 and didn't find any problems.
I am facing a problem in the user_dump_dest directory: I have noticed a lot of trace files with huge sizes in MBs. I clean it out, and after 4 days there are 40 GB of them again.
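As a stopgap while chasing the root cause, the size of any single trace file can be capped; a sketch, with the 100M value purely an assumption:

ALTER SYSTEM SET max_dump_file_size = '100M' SCOPE = BOTH;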
My analysis shows the Data Pump estimate is 9.902 GB, but when I check the size of the .dmp file, it shows only 1.44 GB.
Export: Release 11.2.0.1.0 - Production on Fri Apr 5 02:00:05 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** dumpfile=expdp_LVGITRN_30_24_050413.dmp directory=DP_DIR logfile=expdp_LVGITRN_30_24_050413.log full=y exclude=statistics
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 9.902 GB
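That gap is normal: the BLOCKS method estimates from allocated blocks, so free space below the high-water mark is counted, while the dump itself stores compacted row data. ESTIMATE=STATISTICS is usually closer when the optimizer stats are current; a sketch, reusing the parameters above (ESTIMATE_ONLY cannot be combined with DUMPFILE, so it is dropped here):

expdp system/******** directory=DP_DIR full=y exclude=statistics estimate_only=y estimate=statistics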
After importing my dump, I noticed that the ARGUMENT$ segment has taken more than 9 GB of my total SYSTEM tablespace. I believe the ARGUMENT$ table is used only to store procedure/package parameter details, but I am not sure why it has taken so much space.
Is there any way we can reduce the SYSTEM tablespace, given the details below?
Import Details:
--------------
1) Imported using Data Pump import (impdp). The parameters used were userid, logfile, dumpfile, directory, job_name and remap_schema.
2) The dump file size is 3 GB.
3) The list below shows the number of objects imported from my dump.
OBJECT_TYPE           COUNT(1)
------------------- ----------
DATABASE LINK                1
FUNCTION                   246
INDEX                     4742
[code]...
4) The list below shows the amount of space occupied by the segments in SYSTEM.
col owner form a5 word wrap
col segment_name form a15 word wrap
col segment_type form a15 word wrap
select owner, segment_name, segment_type, bytes/(1024*1024) size_m
from dba_segments
where tablespace_name = 'SYSTEM'
order by bytes desc;
Here the biggest single schema is 250 GB, and the total size of all the schemas is 300 GB. The filesystem where I am writing the dump has 350 GB of space, but even then expdp failed, saying:
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
Why did it fail, and how do I restart it and make sure it runs through without error?
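ORA-39095 usually means the job ran out of writable dump files, not disk: a fixed FILESIZE with a single file name, or PARALLEL greater than the number of files, exhausts the files before all the data is written. With %U in DUMPFILE, Data Pump can create as many pieces as it needs. The failed job can also be resumed rather than rerun; a sketch, where the job name is an assumption (query DBA_DATAPUMP_JOBS for the real one):

expdp system/******** attach=SYS_EXPORT_FULL_01

Export> add_file=expdp_extra_%U.dmp
Export> continue_client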
We had a server holding an Oracle database that crashed and died. Lucky for us, we have full cold backups of the server.
I can restore all the .dbf and .ctl files to a new server, as well as the pfile. Previously, when I have done restores, I brought the database up first and ran:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
I would then delete the redo logs, tempfile and controlfiles and recreate them using the tracefile. How do I go about bringing up the database without this tracefile? Do I just keep the controlfiles, tempfile and redo logs and attempt a startup? I believe I can keep all the paths the same as they were on the other server.
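Since a cold backup taken after a clean shutdown includes consistent controlfiles and online redo logs, no recreation from trace should be needed when the paths are unchanged: restore everything and start up. A sketch, assuming the pfile is in place and the backup was consistent:

SQL> startup
-- or, stepwise:
SQL> startup nomount
SQL> alter database mount;
SQL> alter database open;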
I have written makefiles that compile .pc files on Unix, for several projects that use an oralib source code directory. Just running proc on one target .pc file works fine on Unix. I am trying to use proc (Oracle 10.2.0) on Windows and I keep getting:
"unable to open include file" for #include <stdio.h> and other C library headers.
I am doing all development under Cygwin, so that I can write a makefile just as under Unix instead of using nmake. All C library headers are in /usr/include. When I run proc on Solaris like this:
proc program.pc
there are no problems, and I do get program.c.
However, on Windows I get the previous error message. I have tried proc include=/usr/include program.pc and proc include=/usr/include parse=full program.pc, but I still get the same error message.
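The Windows proc.exe knows nothing about Cygwin mount points, so /usr/include resolves to nowhere; it needs Windows-style paths, and system headers are normally picked up via the sys_include option. A sketch, with the Cygwin install path an assumption; alternatively, parse=none stops Pro*C from parsing the C headers at all:

proc sys_include=(C:\cygwin\usr\include) include=C:\cygwin\usr\include program.pc
proc parse=none program.pc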
I have a rather complicated process to import text files into my DB. I'm given thousands of files every day, separated by "," and with 80 fields each. With a bash script I take the 45 fields I need and then split each file into x number of files, grouping the rows by three fields. Then I use SQL*Loader to insert them into the DB.
The problem is that now I must insert into two tables, and the WHEN clause doesn't allow the use of > and <.
To make things a little clearer, take this text file (already split, grouped and ready to be inserted):
...
1,1,135,1900,0,12,114,2011/08/25 17:19:00,135,...
1,1,135,1900,0,13,119,2011/08/25 17:19:00,136,...
1,1,135,1900,0,14,117,2011/08/25 17:19:00,137,...
1,1,135,1900,0,15,113,2011/08/25 17:19:00,138,...
1,1,135,1900,0,16,119,2011/08/25 17:19:00,139,...
...
When field 6 is greater than or equal to 14, it must go to table A. When field 6 is less than 14, it must go to table B. I can't use external tables, as I'm on a different server.
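One workaround that keeps SQL*Loader simple is to load everything into a single staging table and do the >=/< split inside the database afterwards, where the comparison is trivial. A sketch; table and column names are assumptions:

-- after sqlldr has loaded the staging table:
INSERT INTO table_a SELECT * FROM staging WHERE field6 >= 14;
INSERT INTO table_b SELECT * FROM staging WHERE field6 <  14;
TRUNCATE TABLE staging;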
2) Created a source directory --> DR1
3) Created a target directory --> DR2
4) Created a DB link: CREATE DATABASE LINK LINK1 CONNECT TO <username of the remote DB> IDENTIFIED BY <password> USING '<remote DB name taken in the TNS file>';
5) On the local server I wrote the command below.
create or replace procedure proc1 is
  cursor c1 is
    select recid, substr(name,37) "ARCH_FILE" from v$archived_log;
  var1 c1%rowtype;
begin
  open c1;
  fetch c1 into var1;
[code]...
It works, but it is not copying the files from local to remote.
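The snippet above only reads v$archived_log; nothing in it actually ships a file. For pushing a local file to the remote side, DBMS_FILE_TRANSFER.PUT_FILE takes the DB link as its last argument (and DR2 must exist as a directory object on the remote database, not the local one). A sketch, with the file name an assumption:

BEGIN
  DBMS_FILE_TRANSFER.PUT_FILE(
    source_directory_object      => 'DR1',
    source_file_name             => 'arch_1_100.arc',
    destination_directory_object => 'DR2',
    destination_file_name        => 'arch_1_100.arc',
    destination_database         => 'LINK1');
END;
/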
I am trying to load multiple XML files into Oracle DB using SQL*Loader. The filenames of the XML files start with a description and then numbers, where the numbers are different each time.
Here's my CTL file:
LOAD DATA
INFILE *
INTO TABLE XML_TABLE
TRUNCATE
xmltype(XML_TABLE)
FIELDS
(
[code]....
I don't want to keep having to go into the ctl file and change the numbers in the XML filename. Is there a way I could just load all .xml files that begin with 'description'? Like maybe ...
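One common way is to leave the control file alone and drive sqlldr from the shell, overriding the input file per run with the DATA command-line parameter (you may need a dummy INFILE clause rather than INFILE * for the override to apply). A ksh sketch, where the control file name and credentials are assumptions:

#!/bin/ksh
# run SQL*Loader once per matching XML file
for f in description*.xml
do
  sqlldr userid=scott/tiger control=xml_table.ctl data="$f" log="${f%.xml}.log"
done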
I am exporting a big table (many rows: 3,000,000). Using the exp command, the error message returned is "expdat.dmp > EXP-00028: failed to open expdat.dmp for write". Is there a possibility to export this table into multiple files (as a splitter)?
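The original exp does support splitting the output across several dump files with a FILE list plus FILESIZE. A sketch; names and sizes are assumptions:

exp scott/tiger tables=BIG_TABLE file=(exp1.dmp,exp2.dmp,exp3.dmp) filesize=1024m log=exp_big.log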
I want to load data into multiple tables from many files, based on the first column value, which is a FILLER field. I am trying to test this scenario with two Oracle tables with similar definitions, loading one record into each table using the WHEN/POSITION keywords. For this, I added the first column as a reference column in the data, which I have in the ctl file itself.
The 1st table loads the 1st record, but the 2nd record is not loading. Did I miss anything with the WHEN/POSITION keywords?
This is the error in the log file for the 2nd table (WD1):
Record 2: Rejected - Error on table WD1, column TAB.
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
Table WD1:
0 Rows successfully loaded.
1 Row not loaded due to data errors.
1 Row not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
[code]....
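The usual culprit with multiple INTO TABLE clauses is that the second clause keeps scanning from where the first one stopped, so every field is shifted (hence the ORA-01841 on a value that is not really a date). Putting POSITION(1) on the first field of each subsequent INTO TABLE resets the scan to the start of the record. A sketch of the pattern; table names, record-type values and formats are assumptions:

LOAD DATA
INFILE *
INTO TABLE WD
  WHEN (1:1) = 'A'
  FIELDS TERMINATED BY ','
  (rec_type FILLER POSITION(1), col1, dt DATE "YYYY/MM/DD HH24:MI:SS")
INTO TABLE WD1
  WHEN (1:1) = 'B'
  FIELDS TERMINATED BY ','
  (rec_type FILLER POSITION(1), col1, dt DATE "YYYY/MM/DD HH24:MI:SS")
BEGINDATA
A,x,2011/08/25 17:19:00
B,y,2011/08/25 17:19:00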
I have used webutil_file_transfer.Client_To_AS_with_progress to upload files from the client to the application server using Forms 10g. However, now I want to save the file on the database server, not upload it into the database as a BLOB. That is, I want to save the file from the client TO a folder on the database server. I was wondering how, since there is no documentation available on WEBUTIL.
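WebUtil itself can only reach the client and the Forms application server, so landing a file in a folder on the database server usually takes two hops: WEBUTIL_FILE_TRANSFER.CLIENT_TO_DB into a BLOB column, then a DB-side procedure that writes the BLOB out with UTL_FILE. A sketch of the second hop; the table, column and directory object names are assumptions:

CREATE OR REPLACE PROCEDURE blob_to_file (p_id IN NUMBER, p_name IN VARCHAR2) IS
  l_blob BLOB;
  l_file UTL_FILE.FILE_TYPE;
  l_len  PLS_INTEGER;
  l_pos  PLS_INTEGER := 1;
  l_amt  PLS_INTEGER := 32767;
  l_buf  RAW(32767);
BEGIN
  SELECT file_data INTO l_blob FROM uploaded_files WHERE id = p_id;
  l_len  := DBMS_LOB.GETLENGTH(l_blob);
  l_file := UTL_FILE.FOPEN('UPLOAD_DIR', p_name, 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    DBMS_LOB.READ(l_blob, l_amt, l_pos, l_buf);   -- l_amt returns bytes actually read
    UTL_FILE.PUT_RAW(l_file, l_buf, TRUE);
    l_pos := l_pos + l_amt;
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/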