Server Administration :: Ora-00333 / Redo Log Read Error Block 203 Count 8192
Apr 17, 2010
When I was starting my database, there was an error:
ORA-00333: redo log read error block 203 count 8192
I am in charge of several instances located on a CentOS Linux server, virtualized in an ESX 3.5 environment.
From time to time (every 4 to 5 days), I get errors in the alert.log. The last occurrence was last night:
Corrupt block relative dba: 0x01004e12 (file 4, block 19986)
Fractured block found during buffer read
Data in bad block:
type: 6 format: 2 rdba: 0x01004e12
last change scn: 0x0000.131aaa5b seq: 0x2 flg: 0x04
[Code] ........
We are doing a user-managed backup (with BEGIN/END BACKUP) every night at 8PM, ending at approximately 9PM, so the fractured blocks never occur during backups. At 1AM the maintenance window opens, which explains the GATHER_STATS_JOB job.
When I check for corruption early in the morning, I am always unable to reproduce the problem. DBV reports no issues. We have never had a problem with the data itself, whether the reported failed block belongs to a table or an index.
I would like to know what could cause these logical corruptions, and how to stop them.
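A hedged sketch of one way to follow up on such a report, assuming a 10g-or-later database where RMAN and V$DATABASE_BLOCK_CORRUPTION are available (the file number is taken from the message above):
-- Re-validate the reported datafile; fractured blocks seen during a hot
-- backup read are often transient and not genuine on-disk corruption.
-- In RMAN:
--   BACKUP VALIDATE CHECK LOGICAL DATAFILE 4;
-- Back in SQL*Plus, any corruption found by the validation is recorded here:
SELECT file#, block#, blocks, corruption_type
  FROM v$database_block_corruption;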
We are generating entity beans in a Java application, where the application connects to the DB and copies table structures into Java class files. When this process starts, the Oracle process takes 100% CPU, while there is no wait event at the DB level; only queries against the Oracle dictionary views (user_tables, user_constraints, etc.) are fired.
Based on trial and error we changed this parameter, and the Java process now works fine. How does this parameter affect queries on the Oracle system tables?
I am facing some problems with recovery on Oracle 9.2.0.1.0 Standard Edition.
Quote:
ORA-00283: recovery session canceled due to errors
ORA-00368: checksum error in redo log block
ORA-00353: log corruption near block 43200 change 1021681146 time 10/11/2012 18:25:25
ORA-00334: archived log: '/oradb/odb/archivedata/ARCH0000002343.ARC'
The error occurred during the transaction, i.e.:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4194], [89], [83], [], [], [], [], []
I have a multi-record control block (basically a text item displaying 6 records) where the user enters values, and I want to process the values using a PRE-INSERT trigger.
I want to read the value in each record and then do some tasks in a PRE-INSERT trigger before I commit the values. To navigate between the records I was using the FIRST_RECORD, NEXT_RECORD and CLEAR_RECORD built-ins, but this gives errors like "FRM-40737: illegal restricted procedure NEXT_RECORD in PRE-INSERT trigger".
I am doing an import job and the following error occurs during the index import. What is the reason for this error?
I have Oracle 9i running on HP-UX. I would like to find out how much redo we are generating in a given period of time; is there any script I can use to get this information?
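A hedged, 9i-compatible sketch: count log switches per hour from V$LOG_HISTORY and multiply by the online log size (BYTES in V$LOG) for an approximate volume; V$SYSSTAT's cumulative 'redo size' statistic is an alternative measure.
SELECT TRUNC(first_time, 'HH') AS hour,
       COUNT(*)                AS log_switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 1
 GROUP BY TRUNC(first_time, 'HH')
 ORDER BY 1;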
I learnt that the log writer writes to the redo log files when the redo log buffer is one-third full. Does that mean 66% of the redo log buffer is always empty and never used?
If so, isn't that a waste of memory (66% always empty)?
I have a situation where very little redo is generated, let us say 10 MB. Which solution will be better?
1. Create one redo log group about 12 MB in size.
2. Create two redo log groups about 5 MB each in size as recommended by Oracle.
Solution 1 also seems appropriate for me because I generate less redo than the redo log group size, so my whole redo will fit in it and I can force a checkpoint after a certain period of time, let us say every 3 seconds.
In one of our DBs I found that scenario one is implemented, so I want to know the pros and cons of both of these practices (see the sketch below).
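As a sketch under the assumption that the real goal is regular log cycling rather than scripted checkpoints: ARCHIVE_LAG_TARGET (in seconds) switches the log on a timer regardless of which group layout is chosen. The value below is only illustrative.
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;  -- illustrative: force a switch at least every 30 minutes
-- Manual one-off equivalents, if ever needed:
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;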
I have some doubts about redo log files,
1) Can we fetch SELECT statements from redo log files through the LogMiner utility or any other tool?
(I think the redo log file contains only INSERT, UPDATE, DELETE and DDL/DCL commands.)
2) If the answer to the above is "no", then how can I fetch all SELECT statements fired on the system for a day or a particular time window?
(Setting sql_trace may be one option, but can it be done at the system level?)
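Since SELECT statements do not generate redo, LogMiner cannot return them. A hedged sketch of one system-wide alternative is standard auditing (assumes the AUDIT_TRAIL parameter is set to DB, which requires a restart on older releases):
AUDIT SELECT TABLE BY ACCESS;   -- records SELECTs against users' tables
SELECT username, owner, obj_name, timestamp
  FROM dba_audit_trail
 WHERE action_name = 'SELECT'
   AND timestamp > SYSDATE - 1
 ORDER BY timestamp;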
Today I noticed a problem with my database: my redo logs switch every 3 minutes. I also noticed there are no transaction changes happening in the database, but the redo still switches.
Fri Oct 05 06:10:05 2012
Thread 1 advanced to log sequence 79244
Current log# 2 seq# 79244 mem# 0: D:\ORADATA\ORACI\REDO02.LOG
Fri Oct 05 06:12:16 2012
Thread 1 advanced to log sequence 79245
Current log# 1 seq# 79245 mem# 0: D:\ORADATA\ORACI\REDO01.LOG
Fri Oct 05 06:14:28 2012
[code]......
Why is the redo switch happening? Is there any internal problem causing the redo to switch?
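A hedged sketch of two quick checks, on the assumption that a timed switch or undersized logs (rather than an internal problem) is the usual cause of switches every few minutes:
SELECT name, value FROM v$parameter WHERE name = 'archive_lag_target';
SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;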
Regarding redo generation: I found the statement below in another forum. "The undo segment generates redo data also, because changes to the undo segment are database changes, so it generates redo data as well."
How can an undo segment generate both redo and undo data?
Redo generation is getting very high. How do I find out the reason? The database is kept under a 2-node cluster. I checked the alert log trace and log writer trace files; the content is pasted below.
--alert log trace from node1 (node2 also has the same type of message). The archive destination disk group, TXCOM_BACKUP_01, has enough space (80 GB).
Mon Jan 7 00:49:10 2013
Thread 1 advanced to log sequence 448546 (LGWR switch)
Current log# 1 seq# 448546 mem# 0: +TXCOM_DATA_01/txcom/onlinelog/group_1.274.785770579
Current log# 1 seq# 448546 mem# 1: +TXCOM_DATA_01/txcom/onlinelog/group_1.302.802265189
Mon Jan 7 00:49:10 2013
[code]...
In the alert log, I can see the archive destination disk group (TXCOM_BACKUP_01) getting DISMOUNTED and then MOUNTED again during every archive file generation:
Mon Jan 7 00:49:20 2013
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
The archive destination parameter on both nodes is not configured; it should read the disk group name (+TXCOM_BACKUP_01) and a corresponding size limit. Should I configure this?
SQL> show parameter db_recovery
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string
db_recovery_file_dest_size big integer 0
[code]...
Should I bring the database to the MOUNT stage and set log_archive_max_processes to a higher count? The current value is 2 (the default).
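A hedged sketch, not a definitive fix: both parameters below are dynamic, so the database should not need to be brought to MOUNT. The destination string and process count are assumptions to adapt to this environment.
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+TXCOM_BACKUP_01' SCOPE=BOTH SID='*';  -- assumed destination
ALTER SYSTEM SET log_archive_max_processes = 4 SCOPE=BOTH SID='*';                     -- assumed count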
SQL> update t set a = 1 where b = 2; -- must have redo record
2 rows updated.
SQL> rollback;
The redo record for the above uncommitted change must be written from the redo buffer to the online redo log file. Why does Oracle write redo records for uncommitted changes to the online redo log? When will they be used?
We had our production database hosted on Oracle 9.2.0. A few months back we migrated it to Oracle 10.2.0.4.0. After the migration I have noticed that redo generation has become very, very high: earlier the number of log files generated during production hours was around 30, whereas after the migration it has become around 200 files per day. I have run a Statspack report on this database; the report says that db block changes and disk writes have become very high. The parameter timed_statistics has also been set to FALSE, and even then there is no reduction in the number of log files generated. I used export/import to upgrade the database.
Whenever any transaction happens in the database, redo is generated for that transaction. Is a SELECT statement treated as a transaction, given that it doesn't modify anything in the database? If a SELECT statement is not a transaction, there should not be any redo generation for it.
So does a SELECT statement generate redo? If yes, then why?
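A hedged sketch of how to see this for yourself: compare the session's cumulative 'redo size' statistic before and after the query (a SELECT can generate a small amount of redo, for example through delayed block cleanout). some_table is a hypothetical name.
SELECT s.value AS redo_size
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';
SELECT COUNT(*) FROM some_table;   -- some_table is a hypothetical table
SELECT s.value AS redo_size
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';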
I am trying to understand the redo/undo concept. Refer to the following data:
create table t(n number);
insert into t values(10);
commit;
Now I update as follows:
update t set n=20;
As per my understanding, the before image (n=10) is stored in undo (to be used for rollback, transaction recovery and even instance recovery, but not media recovery), and the after image (n=20) is stored in redo (to be used for various recovery purposes, including media recovery from a consistent backup).
So redo logs are for rolling forward and undo is for rolling back, keeping the transaction and the DB consistent. If my understanding above is true, then what is meant by the term 'redo required for undo'?
Also, if there are two databases, db1 and db2, connected using a database link, and we populate table t1 in db1 from table t2 in db2 over the db link, where will redo and undo be generated: db1 or db2?
1) If I make changes to a table on the primary database and open the standby database in read-only mode, I can see those changes immediately only if Real-Time Apply is enabled. Am I correct? The database version is 10.2.0.4.
2) From 11g it is possible to apply redo while the standby is open in read-only mode; prior to 11g it was not possible. Right?
3) Should I first cancel Managed Recovery prior to issuing “ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY”?
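As a hedged sketch for question 3, these views show what the standby apply is currently doing before any switchover is attempted; the dest_id of 2 is an assumption about how the remote destination is numbered.
SELECT dest_id, status, recovery_mode
  FROM v$archive_dest_status
 WHERE dest_id = 2;   -- assumed destination number
SELECT process, status, sequence#
  FROM v$managed_standby;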
We have one primary Oracle 10.2 database and a standby database with no Data Guard. Initially we had 2 redo log groups in the primary and standby databases.
We recently added 2 more redo log groups and increased the size of the log members from 50 MB to 200 MB in the primary database. We don't have any problem in the primary database, but in the standby database we face a problem because we cannot open it; it is always in the MOUNT stage. How do we change the size of the current redo log, given that we can't run ALTER SYSTEM SWITCH LOGFILE in the MOUNT stage?
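A hedged sketch of the usual approach on a physical standby (check the documentation for your exact release before running it): the standby's online log groups can generally be dropped and recreated at the new size while the database is mounted and managed recovery is stopped. The group number and file path below are illustrative only.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER SYSTEM SET standby_file_management = 'MANUAL';
ALTER DATABASE DROP LOGFILE GROUP 3;                                   -- illustrative group number
ALTER DATABASE ADD LOGFILE GROUP 3 ('/oradata/stby/redo03.log') SIZE 200M;  -- illustrative path
ALTER SYSTEM SET standby_file_management = 'AUTO';
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;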
How do I read the treedump contents of an index, IDX_TB_TEST_N1?
SQL> select object_id,object_name from dba_objects where owner='HXL';
OBJECT_ID OBJECT_NAME
---------- ---------------------------------------------
51786 IDX_TB_TEST_N1
alter session set events 'immediate trace name treedump level 51786'
/u01/app/oracle/admin/oracl/udump/oracl_ora_2679.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
[code]...
I am getting an error through my application when I try to access a screen.
Error: No data to read from socket.
What could be the reasons for this error?
I need to display the DB parameters and status for the listener and for read-only mode.
I know those values can be obtained from the command line, but can we get the listener and read-only status via SQL/PLSQL, so that I can retrieve them through a query against the DB?
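A hedged sketch: the read-only status is exposed through V$DATABASE, and while the listener itself is an OS-level process (lsnrctl), the address the instance registers with can at least be read from the parameter views.
SELECT name, open_mode FROM v$database;
SELECT name, value FROM v$parameter WHERE name IN ('local_listener', 'remote_listener');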
I am facing a strange issue on an 11gR2 (OEL 5.4) standby open read-only with apply. It throws ORA-16000: database open for read-only access during SELECTs.
Here is a snapshot of the errors:
ORA-00604: error occurred at recursive SQL level 1
ORA-16000: database open for read-only access
SQL> l
1* SELECT t0.airportID, t0.archived, t0.assetTag, t0.bluetoothID, t0.cmBundle, t0.createdDate,
t0.currentProductTaskID, t0.ethernetID, t0.failOrReworkCount, t0.highestCompletedTaskTypeID, t0.lastModDate, t0.lastStationID,
t0.modCount, t0.modelID, t0.oemSerialNumber, t0.orgSerialNumber, t0.pdmVersion, t0.preburnComplete,
t0.productID, t0.reworked, t0.secondaryEthernetID,
t0.serialNumber, t0.shipped, t0.specialBuildTypeID,
[code]....
I have created a user named "Raja" with a default tablespace of "Raja_TBS" backed by a datafile "rajadata.dbf". I have taken the tablespace offline:
SQL> alter tablespace raja_tbs offline;
Tablespace altered. When I take a tablespace offline, I cannot read or write to it and the tablespace is unavailable to users, yet I am still able to create a table on "Raja_TBS" while it is offline.
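A hedged sketch of one way to see why the CREATE TABLE can succeed, assuming this is 11.2 or later with deferred segment creation (the DDL then only touches the data dictionary and no extent is allocated in the offline tablespace); T1 is a hypothetical table name.
SELECT table_name, tablespace_name, segment_created
  FROM user_tables
 WHERE table_name = 'T1';   -- 'T1' is a hypothetical table name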
I have tried to create the index in a different tablespace as well, but the same error occurs:
SQL> create index inx_tbl_voicechat_unsub_ani on tbl_voicechat_unsub (ani) tablespace ideadb_index;
create index inx_tbl_voicechat_unsub_ani on tbl_voicechat_unsub (ani) tablespace ideadb_index
*
ERROR at line 1:
ORA-01115: IO error reading block from file 201 (block # 144265)
ORA-27070: async read/write failed
OSD-04016: Error queuing an asynchronous I/O request.
O/S-Error: (OS 23) Data error (cyclic redundancy check).
I want to know how to count the connections that currently exist on the database, and the maximum number of connections the DB will accept.
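A hedged sketch of the usual queries for both figures; V$RESOURCE_LIMIT also keeps a high-water mark for each limit.
SELECT COUNT(*) AS current_sessions FROM v$session;
SELECT name, value FROM v$parameter WHERE name IN ('sessions', 'processes');
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name IN ('sessions', 'processes');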
I installed Oracle 11g and created a test database. The default count should be 4196, but it is 4143; some packages are missing. Even when I create a materialized view it shows an error that packages are missing. What can I do about that? Is my Oracle software corrupted? Even when I downloaded it again from the Oracle site, it shows the same count.
1) How do I find out whether my query returned its output from blocks in the BUFFER CACHE or from the DATAFILES?
2) How do I calculate the number of data blocks used to return a single output?
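A hedged sketch for both questions: the session statistics separate logical reads (buffer cache) from physical reads (datafiles), and comparing them before and after the query gives the block counts; SET AUTOTRACE ON STATISTICS in SQL*Plus shows the same figures per statement.
SELECT n.name, s.value
  FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name IN ('consistent gets', 'db block gets', 'physical reads');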
I am using Oracle 10g with sga_max_size = 4GB and a DB block size of 16K. Now I am creating a tablespace with a block size of 32 KB; what value should I choose for the parameter db_32k_cache_size?
Is there any standard way to calculate the value of this parameter?
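There is no fixed formula; a hedged sketch of the mechanics, with a purely illustrative 256M carved out of the existing SGA and a hypothetical datafile path:
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH;   -- illustrative size, tune to the workload
CREATE TABLESPACE ts_32k
  DATAFILE '/u01/app/oracle/oradata/orcl/ts_32k01.dbf' SIZE 1G   -- hypothetical path and size
  BLOCKSIZE 32K;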