How can I check for datafile corruption or a datafile error? Is there some tool in Oracle, like the Linux fsck command, that can help me check a datafile?
Is there any way to isolate the disk region that is corrupted?
If a disk error exists in an ASM disk group, how can I isolate the error from that disk group? Is there an alternative to the VALIDATE DATAFILE command in RMAN? I'm using Oracle 11gR2 on a Linux box.
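For reference, a minimal sketch of the standard checks in 11g (the file number here is hypothetical): RMAN's VALIDATE scans for physical corruption and, with CHECK LOGICAL, for intra-block logical corruption as well, and anything it finds is recorded in v$database_block_corruption.

RMAN> VALIDATE CHECK LOGICAL DATAFILE 5;
SQL> select file#, block#, blocks, corruption_type
     from v$database_block_corruption;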
I have noticed a block corruption issue in my RAC database. It's an index; how do I recover a corrupted index in a SYSTEM datafile?
SELECT tablespace_name, segment_type, owner, segment_name
FROM dba_extents
WHERE file_id = 5
AND 147551 BETWEEN block_id AND block_id + blocks - 1;

TABLESPACE_NAME  SEGMENT_TYPE  OWNER
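If the segment identified turns out to be an ordinary application index rather than a data dictionary index, the usual fix is simply to recreate it from the (intact) table data. A hedged sketch, with a hypothetical owner, index, and table name:

ALTER INDEX app_owner.bad_index REBUILD ONLINE;
-- or drop and recreate it:
DROP INDEX app_owner.bad_index;
CREATE INDEX app_owner.bad_index ON app_owner.some_table (some_col);

A dictionary-owned index inside the SYSTEM tablespace is a different matter and generally needs Oracle Support.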
Develop an Oracle stored function or procedure to confirm the availability of a data file in a specific local directory, to be read by an Oracle external table. The file name looks like filenameddmmyyyy.csv.
I have created an Oracle directory named data_dir that is mapped to the physical directory c:\mep.
Once it is confirmed that the data file is available, the ETL process is started.
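A minimal sketch of such a check, assuming today's file is the one wanted and that UTL_FILE.FGETATTR may be used against the data_dir directory object (the function and variable names here are my own):

CREATE OR REPLACE FUNCTION file_is_available (p_prefix IN VARCHAR2)
RETURN BOOLEAN
IS
  l_exists  BOOLEAN;
  l_length  NUMBER;
  l_bsize   BINARY_INTEGER;
  l_name    VARCHAR2(128);
BEGIN
  -- build e.g. filename25122013.csv from today's date
  l_name := p_prefix || TO_CHAR(SYSDATE, 'DDMMYYYY') || '.csv';
  UTL_FILE.FGETATTR('DATA_DIR', l_name, l_exists, l_length, l_bsize);
  RETURN l_exists;
END;
/

The caller can then kick off the ETL only when file_is_available('filename') returns TRUE.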
I observed logical corruption in one of the databases; while running SELECT on some tables I observed the ORA-01410: invalid ROWID error. These errors, and errors related to this logical corruption, are not reported in the alert log file.
Since our database is in NOARCHIVELOG mode and regular backups are not happening through RMAN (weekly cold mount point backup to tape), I was not able to use RMAN to investigate the block corruption.
So I used DBVERIFY on all datafiles of the database to check their consistency, and found that DBV gives an error for one of the datafiles: 'Completely zero block found during dbv:'. As I mentioned earlier, we are not taking regular backups using RMAN and the database is in NOARCHIVELOG mode.
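For reference, a typical DBVERIFY invocation looks like this (the path and block size are hypothetical; the logfile parameter captures the per-file summary):

dbv file=/u01/oradata/PROD/users01.dbf blocksize=8192 logfile=users01_dbv.log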
I am trying to load the data from a .csv file into a table using SQL*Loader.
The table has the following schema:

src_id  : number
dest_id : number
range   : intsys.interval_typ --> a type containing (lower, upper)
payload : varchar2(100)
The loader.ctl file is:
load data
infile *
append
into table sb_packet
fields terminated by "," optionally enclosed by '"'
(src_id, dest_id, range, payload)
begindata
3,32,intsys.interval_typ(10043,142703),"misc data"
When I use this ctl file to load the data, I get the following error:
SQL*Loader-418: Bad datafile datatype for column RANGE
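SQL*Loader cannot evaluate a constructor written inside the data itself; one documented pattern is to describe the object column with COLUMN OBJECT and supply its attributes as plain delimited fields. A hedged sketch, assuming intsys.interval_typ has attributes named lower and upper and that the data can be reshaped accordingly:

load data
infile *
append
into table sb_packet
fields terminated by "," optionally enclosed by '"'
(
  src_id,
  dest_id,
  range column object
  (
    lower integer external,
    upper integer external
  ),
  payload
)
begindata
3,32,10043,142703,"misc data"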
I have a SQL script in which triggers, procedures, and functions are written. The triggers are causing DB outages and are causing problems in the application as well.
I am trying my best, but with my limited experience and expertise I am not able to make good progress.
Scenario: course enrollments are inserted, deleted, and updated in the course_main and course_users tables. This is done in the GUI as well as by a background snapshot scheduler run from a cron process. The course_main table contains all course enrollments, and the course_users table has crsmain_pk1 as a foreign key.
It's quite a big file and I am not sure what I should paste here, so I am uploading the file as a .txt.
I am cloning my prod DB to test with the RMAN active clone command. I can successfully clone my DB, but after a few hours or so I see messages in the alert log that I have corrupted blocks in several datafiles. Note, I don't see these messages in my PROD DB, so I think that DB has no corruption. I have a few questions:
1) I was reading that having tables or indexes set with the NOLOGGING option can cause objects to be unrecoverable. Would this affect my active clone?
2) I know you can change either a DB or a tablespace to force logging. Is there a query I can use to determine if the DB is in force logging mode? (See the sketch after these questions.)
ALTER TABLESPACE tablespace_name FORCE LOGGING;
ALTER DATABASE FORCE LOGGING;
3) Lastly, how can I check why my clone DB would have corrupted blocks?
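Regarding question 2, the force-logging status is exposed directly in the data dictionary, e.g.:

SQL> select force_logging from v$database;
SQL> select tablespace_name, force_logging from dba_tablespaces;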
Here is the clone command I am using.
rman catalog=rman/rman@proddb target=sys/sys@proddb << EOT
connect auxiliary sys/sys@clonedb
duplicate target database to clonedb
  from active database
  nofilenamecheck
  pfile=/u01/app/oracle/product/11g/dbs/initclonedb.ora;
exit
EOT
While taking a full export I came to know there was block corruption in the SYSAUX tablespace. I don't have any cold/hot backups or RMAN backups; I have only the exp backups, and the database is in archivelog mode. Is it possible to recover from the block corruption with exp backups?
I am trying user-managed recovery using archived and redo log files. I had restored an old cold backup, then copied the latest control file, redo log files, and archived log files to their actual locations. Then I tried to recover the database using the following steps:
connect sys as sysdba
startup mount;
recover database;
auto
After some archived files had been applied, one of the archived files turned out to be corrupted and recovery was cancelled due to the error, so I shut down the database. Now I want to copy the corrupted archived file from backup and continue the recovery. So my question is: do I have to run recovery from the beginning, or can I resume the recovery from the last state of the database?
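As far as I understand it, media recovery tracks the last applied change in the datafile headers, so after replacing the bad archived log you can simply restart recovery and Oracle resumes from where it stopped rather than re-applying everything. A minimal sketch:

connect / as sysdba
startup mount
recover database;
-- answer AUTO (or supply logs) from the replaced archived log onward, then:
alter database open;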
1) What is physical/logical corruption?
2) How does it occur?
3) Does RMAN work on both types of corruption, or only physical? (My senior told me it works on both.)
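On question 3, for what it's worth: by default RMAN checks for physical corruption, and it can additionally check for logical (intra-block) corruption when asked, e.g.:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

This scans the database without producing a backup and records any corrupt blocks in v$database_block_corruption.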
The database is running in archivelog mode and we have a standby in Maximum Performance mode. There is no RMAN backup. We have noticed block corruption while accessing some tables. Now I would like to know: are the corrupted blocks also replicated to the physical standby? Is there a way to recover the data from these corrupted blocks without shutting down the database?
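One avenue to investigate (hedged, since it depends on the version and on a usable block source such as flashback logs or a readable physical standby): RMAN block media recovery repairs individual blocks while the database stays open, e.g. in 11g:

RMAN> RECOVER DATAFILE 5 BLOCK 147551;   -- hypothetical file/block numbers
RMAN> RECOVER CORRUPTION LIST;           -- repairs all blocks listed in v$database_block_corruption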
I was carrying out an experiment in order to crash the database and recover it.
The database was running; I moved the control file to another location, and to my surprise the database was still running. I tried issuing a checkpoint and a transaction, but it didn't affect database operations. I tried doing a log switch; it completed successfully. According to my understanding and the Oracle certification books, the database was supposed to crash. But it didn't.
I tried this not only on RHEL and Windows XP but also on Solaris 5.10. The database version is Oracle 10.2.0.4 Standard Edition.
I am not able to identify the relevant segment from the following information:
SQL> select segment_name, segment_type, owner
  2  from dba_extents
  3  where file_id = 4
  4  and 756652 between block_id
  5  and block_id + blocks - 1;
no rows selected
DBVERIFY Summary

DBVERIFY - Verification complete

Total Pages Examined         : 3932160
Total Pages Processed (Data) : 3119107
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 755048
[code]....
I have uploaded the complete logfile.
Below is a part of the logfile:
DBVERIFY - Verification starting : FILE = /prd/dvp/ora/oradata/LHF/disk06/gds_t01_01.dbf
Block Checking: DBA = 21728172, Block Type = KTB-managed data block
**** kdxcoavs = -84 < 0, avail = 3129
---- end index block validation
Page 756652 failed with check code 6401
## note that 756652 is the same block# mentioned in v$database_block_corruption
I'm using Oracle 8i with VB6 as the front end. When I try to connect to Oracle using RDO in VB, I get an error message from Oracle. It is: S1000: Oracle ODBC.ora Ora:1043 User Side Memory Corruption.
We are facing a block corruption error, and it refers to the SYSTEM datafile (SYSTEM01.DBF). Below is the script through which we identified the affected extent.
select segment_name, segment_type
from dba_extents
where file_id = 1
and 134144 between block_id and block_id + blocks - 1;
select owner, index_name, index_type, table_name
from dba_indexes
where index_name = 'I_CDEF3';
How do we resolve the problem, given that it is in the SYSTEM datafile? We tried to drop the index, but the system does not allow it:
ORA-00701 - object necessary for warmstarting database cannot be altered.
Errors in file /oracle/BWP/saptrace/usertrace/bwp_ora_2728058.trc:
ORA-01114: IO error writing block to file 1030 (block # 602122)
ORA-27063: number of bytes read/written is incorrect
IBM AIX RISC System/6000 Error: 28: No space left on device
Additional information: -1
Additional information: 180224
But I don't have this file_id in my database. I am running these queries:
SQL> select FILE_ID from dba_temp_files order by FILE_ID;
FILE_ID
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9
[code]....
I don't have this file_id, so why is alert.log showing it to me? Of course, nobody has created this datafile, and nobody has removed it either.
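One thing worth checking (my assumption, not confirmed by the post): for tempfiles, errors like ORA-01114 report the absolute file number as the DB_FILES parameter plus the tempfile's own file#, which would explain a number that no datafile carries. A quick sketch:

SQL> show parameter db_files
-- if db_files were 1024, file 1030 would map to tempfile 1030 - 1024 = 6:
SQL> select file_id, file_name from dba_temp_files where file_id = 6;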
I am using 11gR2 and looking for a way to extract the data file names so that the list is ordered by mount point, say data1 first and so on. A snippet from my data files looks like:

/db/ptmtrain/data1/system01.dbf
/db/ptmtrain/data1/undotbs01.dbf
/db/ptmtrain/data2/sysaux01.dbf
/db/ptmtrain/data2/rbs03.dbf
/db/ptmtrain/data2/rbs01.dbf
/db/ptmtrain/data3/tools01.dbf
/db/ptmtrain/data3/rbs02.dbf
/db/ptmtrain/data23/sans01.dbf
/db/ptmtrain/data24/users01.dbf
/db/ptmtrain/data25/users02.dbf
/db/ptmtrain/data26/users03.dbf
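A minimal sketch of such an ordering, assuming every path contains a dataN component whose numeric part should sort numerically (so data2 comes before data23):

select file_name
from dba_data_files
order by to_number(regexp_substr(regexp_substr(file_name, 'data[0-9]+'), '[0-9]+')),
         file_name;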
I have a tablespace which has around 32 GB allocated, but if I check the used space it is only 16 GB. When I tried to resize the datafile, it threw this error:
ORA-03297: file contains used data beyond requested RESIZE value

As per my understanding, the used blocks in the datafile are not contiguous, perhaps due to fragmentation, and therefore the file cannot be resized. If I export the tablespace using Data Pump and reimport it, that will release the space.
But I want to know if there are any alternative ways to do the same.
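One common alternative: find the highest allocated block in the file, shrink down to just above it, and, if a few segments sit near the end of the file, relocate them (ALTER TABLE ... MOVE / ALTER INDEX ... REBUILD) and resize again. A hedged sketch, assuming an 8 KB block size and a hypothetical file number and path:

select ceil(max(block_id + blocks) * 8192 / 1024 / 1024) as min_size_mb
from dba_extents
where file_id = 5;

-- then shrink to any value at or above min_size_mb:
alter database datafile '/u01/oradata/PROD/users01.dbf' resize 16400m;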
In my production DB, 5 datafiles were created in the same tablespace. Each datafile is 25 GB in size. Data is stored in all the datafiles, but the total data is just 5 GB. I want to move the data from the 5 datafiles into a single datafile, or a couple of datafiles.
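One way this is commonly done (a sketch only; all names and sizes here are hypothetical): create a new, right-sized tablespace, relocate the segments into it, and drop the old one.

create tablespace data_new
  datafile '/u01/oradata/PROD/data_new01.dbf' size 6g;

alter table app.some_table move tablespace data_new;
alter index app.some_index rebuild tablespace data_new;
-- repeat for each segment (dba_segments lists them), then:
drop tablespace data_old including contents and datafiles;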
I have a small problem; I will try to explain it. My tablespaces automatically extend their datafiles when full (AUTOEXTEND); the "autoextend ON" checkbox is checked.
What I need to know is when that autoextend action happened (day/hour). Is that possible?
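I'm not aware of a view that logs individual autoextend events, but growth over time can often be reconstructed from the AWR history (this requires the Diagnostics Pack license; the tablespace name below is hypothetical, and sizes are reported in database blocks):

select u.rtime, t.name as tablespace_name,
       u.tablespace_size, u.tablespace_usedsize
from dba_hist_tbspc_space_usage u
join v$tablespace t on t.ts# = u.tablespace_id
where t.name = 'USERS'
order by u.rtime;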
We have quarterly and yearly processes that delete and update millions of rows across different tables. To make room on the file system, I would like either to remove empty datafiles (my preference) or to coalesce the tablespace to compact the data and then remove the empty datafile.
Having said that, is there any query that can show me whether a datafile has any data in it, or whether a tablespace is a good candidate to be coalesced? Ideally something displaying a percentage of free versus used extents and a YES/NO column indicating whether it should be coalesced.
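For the first part, a minimal sketch: a datafile with no rows in dba_extents holds no segment data, so it is a candidate for removal (always double-check before dropping anything):

select f.file_id, f.file_name
from dba_data_files f
where not exists (select 1
                  from dba_extents e
                  where e.file_id = f.file_id);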
Suppose our database has 100 datafiles, and right now RMAN has completed the backup of 64 datafiles (from file no. 1 to file no. 64) and is in the process of backing up datafile number 65. In the meantime I executed some query that changed file no. 55. So now my question: will RMAN go and back up this datafile (datafile no. 55) again, or will it leave the file as it is?
I need to change a datafile path, and I have found some documentation, but one thing confuses me: we need to take the tablespace offline. My concern is, do all the tablespaces (SYSTEM, USERS, TEMP, etc.) need to be taken offline before altering the database?
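For what it's worth, when the SYSTEM tablespace is involved the usual approach avoids offlining tablespaces altogether: the whole database is mounted (not open) while the controlfile is repointed. A minimal sketch with hypothetical paths:

shutdown immediate
-- copy/move the datafile at the OS level to its new location, then:
startup mount
alter database rename file '/old/path/system01.dbf' to '/new/path/system01.dbf';
alter database open;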