Replication :: Block Corruption While Accessing Some Table
Mar 16, 2011
The database is running in ARCHIVELOG mode and we have a physical standby in maximum performance mode. There is no RMAN backup. We have noticed block corruption while accessing some tables. Are the corrupted blocks also replicated to the physical standby? And is there a way to recover the data from these corrupted blocks without shutting down the database?
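A reasonable first step, which works with the database open, is to map the damage from the dictionary; a minimal sketch:

SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;

Note this view is only populated when RMAN scans the files (for example during BACKUP VALIDATE), so it may be empty until such a scan has run.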
While taking a full export I came to know there was block corruption in the SYSAUX tablespace. I don't have any cold/hot/RMAN backups; I have only the exp dumps, and the database is in ARCHIVELOG mode. Is it possible to recover from the block corruption with exp backups?
1) What is physical/logical corruption? 2) How does it occur? 3) Does RMAN work on both types of corruption or only physical? (My senior told me it works on both.)
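On question 3: by default RMAN checks for physical corruption, and adding CHECK LOGICAL makes it look for logical (intra-block) corruption as well. A common way to scan the whole database for both:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

Corrupt blocks found this way are recorded in v$database_block_corruption.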
I have noticed a block corruption issue in my RAC database. It's an index; how do I recover a corrupted index in the SYSTEM datafile?
SELECT tablespace_name, segment_type, owner, segment_name
FROM   dba_extents
WHERE  file_id = 5
AND    147551 BETWEEN block_id AND block_id + blocks - 1;

TABLESPACE_NAME SEGMENT_TYPE OWNER
I am not able to get the relevant segment from the above information
SQL> select segment_name, segment_type, owner
  2  from dba_extents
  3  where file_id = 4
  4  and 756652 between block_id
  5  and block_id + blocks - 1;
no rows selected
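When dba_extents returns no rows, the block often falls in free space rather than in any segment; a quick check, reusing the same file and block numbers:

SELECT *
FROM   dba_free_space
WHERE  file_id = 4
AND    756652 BETWEEN block_id AND block_id + blocks - 1;

If the block is in free space, the corruption is generally harmless and the block is reformatted the next time it is allocated.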
DBVERIFY Summary

DBVERIFY - Verification complete

Total Pages Examined         : 3932160
Total Pages Processed (Data) : 3119107
Total Pages Failing (Data)   : 0
Total Pages Processed (Index): 755048
[code]....
I have uploaded the complete log file. Below is part of it:
DBVERIFY - Verification starting : FILE = /prd/dvp/ora/oradata/LHF/disk06/gds_t01_01.dbf
Block Checking: DBA = 21728172, Block Type = KTB-managed data block
**** kdxcoavs = -84 < 0, avail = 3129
---- end index block validation
Page 756652 failed with check code 6401

Note that 756652 is the same block# mentioned in v$database_block_corruption.
We are facing a block corruption error, and it refers to the SYSTEM datafile (SYSTEM01.DBF). Below is the script through which we can identify the affected extent.
select segment_name, segment_type from dba_extents where file_id=1 and 134144 between block_id and block_id+blocks-1;
select owner,index_name,index_type, table_name from dba_indexes where index_name='I_CDEF3';
How do we resolve the problem, given that it is in the SYSTEM datafile? We tried to drop the index, but the system does not allow it:
ORA-00701 - object necessary for warmstarting database cannot be altered.
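That is expected: I_CDEF3 is a data dictionary index, and objects needed for instance startup cannot be dropped or rebuilt (hence ORA-00701). If a usable RMAN backup exists, block media recovery is usually the least disruptive fix; a sketch, assuming file 1 and the block found above:

RMAN> RECOVER DATAFILE 1 BLOCK 134144;
-- 11g syntax; on 10g the equivalent is: BLOCKRECOVER DATAFILE 1 BLOCK 134144;

Without any backup, corruption in the SYSTEM tablespace generally means working with Oracle Support.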
I want to set up advanced replication for 3 master sites (multimaster). I created 3 master sites named orc1, orc2, orc3 and followed the instructions in the Oracle Advanced Replication Management API book. I created 2 tables (test1, test2) in the HR schema at all 3 master sites with the same data, then performed the following steps:
1-CONNECT repadmin/repadmin@orc1
2-Create the master group named hr_test_repg
BEGIN
   DBMS_REPCAT.CREATE_MASTER_REPGROUP(
      gname => 'hr_test_repg');
END;
/
4-Add tables test1 and test2 to the group
BEGIN
   DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
      gname => 'hr_test_repg',
      type  => 'TABLE',
      oname => 'test1',
[code]....
I could run DBMS_REPCAT.GENERATE_REPLICATION_SUPPORT for test2 but not for test1; it produces this error:
ERROR at line 1:
ORA-23309: object hr.test1 of type TABLE exists
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2552
ORA-06512: at "SYS.DBMS_REPCAT", line 562
ORA-06512: at line 2
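ORA-23309 here usually means a previous CREATE_MASTER_REPOBJECT attempt for hr.test1 partially succeeded. One way out (a sketch) is to drop the replication object while keeping the underlying table, then re-add it:

BEGIN
   DBMS_REPCAT.DROP_MASTER_REPOBJECT(
      sname        => 'hr',
      oname        => 'test1',
      type         => 'TABLE',
      drop_objects => FALSE);   -- keep the table and its data
END;
/

After that, re-run CREATE_MASTER_REPOBJECT and GENERATE_REPLICATION_SUPPORT for test1.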
I have a schema where one table is not joined with any other tables. Currently the info in that table is fetched manually (with one query) and then used in another query. Is there a way to pull the info from that table directly within the second query?
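If a single value is needed inside another statement, a scalar subquery is the usual approach; a minimal sketch with hypothetical table and column names:

SELECT o.*
FROM   orders o
WHERE  o.region_code = (SELECT l.code
                        FROM   lookup_t l
                        WHERE  l.name = 'WEST');

No join condition is required; the subquery just has to return one value.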
When I write a dump from an external table, I can read the records back from that dump. But when I try to access other dumps (created through expdp) it gives an error. The logic I am following is below:
CREATE OR REPLACE DIRECTORY "DIR_GMS" AS 'D:\Gopal_works\test_env_files';
GRANT READ ON DIRECTORY dir_gms TO gopal;
GRANT WRITE ON DIRECTORY dir_gms TO gopal;
Next, I took an export through expdp:

expdp hr/hr tables=EMPLOYEES directory=DIR_GMS dumpfile=HR_EMP.dmp logfile=expdpEMP.log

Then I created one external table to access it.
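For comparison, a dump that an external table can read has to be written by the ORACLE_DATAPUMP access driver itself; a sketch, reusing the dir_gms directory from above:

-- Unload: writes emp_ext.dmp in a format the access driver can read back
CREATE TABLE emp_ext
   ORGANIZATION EXTERNAL (
      TYPE ORACLE_DATAPUMP
      DEFAULT DIRECTORY dir_gms
      LOCATION ('emp_ext.dmp'))
   AS SELECT * FROM employees;

Dump files produced by the expdp utility are not, in general, readable through an external table, which would explain the error.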
I have a table with 2 columns: the first column holds a userId, and the other contains XML data exposed as a link. Clicking that link opens a new file containing the data in XML format.
In a trigger (on update of table t1) I am writing, I do an insert on t2 that accesses the ':new' values of the update on t1.
But in my insert statement I need to get one of the column values from another table. How can I write the insert so that it uses both the values in the ':new' pseudo-columns and a value selected from another table? Below is the insert statement in my trigger:
IF (:old.GROUP_YELLOW <> :new.GROUP_YELLOW) THEN
   INSERT INTO TEST.W_THRESHOLD_LOG
      (THRESHOLD_LOG_WID, CHANGE_DATE, MEASURE_TYPE_WID, MEASURE_NAME,
       CUSTOMER_WID, CUSTOMER_NAME, USER_ID, CHANGED_ITEM,
       PREV_VALUE, NEW_VALUE)
   VALUES
      (TEST.W_THRESHOLD_LOG_SEQ.NEXTVAL, SYSDATE, :new.MEASURE_TYPE_WID,
       'Rolling Stabilty', :new.CUSTOMER_WID, 'Customer1', 'User1',
       'GROUP_YELLOW', :old.GROUP_YELLOW, :new.GROUP_YELLOW);
END IF;
In the above code the hardcoded value 'Customer1' needs to be picked from another table, i.e.:
SELECT NAME FROM W_CUSTOMER_DIM WHERE CUSTOMER_WID = THRESHOLD.CUSTOMER_WID
How can I rewrite my statement so that the value from this select goes into my insert?
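One way is to switch from INSERT ... VALUES to INSERT ... SELECT, so the lookup and the :new/:old values travel in the same statement; a sketch, assuming W_CUSTOMER_DIM returns one row per CUSTOMER_WID:

IF (:old.GROUP_YELLOW <> :new.GROUP_YELLOW) THEN
   INSERT INTO TEST.W_THRESHOLD_LOG
      (THRESHOLD_LOG_WID, CHANGE_DATE, MEASURE_TYPE_WID, MEASURE_NAME,
       CUSTOMER_WID, CUSTOMER_NAME, USER_ID, CHANGED_ITEM,
       PREV_VALUE, NEW_VALUE)
   SELECT TEST.W_THRESHOLD_LOG_SEQ.NEXTVAL, SYSDATE, :new.MEASURE_TYPE_WID,
          'Rolling Stabilty', :new.CUSTOMER_WID, d.NAME, 'User1',
          'GROUP_YELLOW', :old.GROUP_YELLOW, :new.GROUP_YELLOW
   FROM   W_CUSTOMER_DIM d
   WHERE  d.CUSTOMER_WID = :new.CUSTOMER_WID;   -- lookup replaces 'Customer1'
END IF;

An alternative is a separate SELECT ... INTO a local variable before the insert, which makes a missing customer row raise NO_DATA_FOUND instead of silently inserting nothing.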
I have got 2 users, user1 and user2. I have run the following statements as user 'user1':
create role GENEVAOBJECTS;
grant select, insert, update, delete on PRODUCT to GENEVAOBJECTS;
grant GENEVAOBJECTS to user2;
In the above statements, PRODUCT is a table. Now I am able to access this table from user 'user2'. However, if I write a procedure in the user2 schema accessing the table PRODUCT, the procedure does not compile:
create or replace procedure test_prc as
   v_test number(9);
begin
   select product_id into v_test from PRODUCT where rownum = 1;
end;
/
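This is the classic role/PLSQL restriction: privileges received through a role are disabled inside definer's-rights stored procedures, so the grant to GENEVAOBJECTS does not help test_prc compile even though interactive queries work. The usual fix is a direct grant:

-- Run as user1; a direct object grant is visible at procedure compile time
GRANT SELECT ON user1.PRODUCT TO user2;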
Let's say we have Table A, and we would like to replicate specific row transactions to Table B.
Here are the rows in Table A (time: let's say 15:00):
A1 - just updated @15:00
A2 - just inserted @15:01
A3
B1 - daily delete row, i.e. just deleted a while back - non-scheduled process, executed by application @15:02
B2 -
B3 - daily delete row, i.e. just deleted a while back - non-scheduled process, executed by application @15:05
B4 - just recently purged (as part of 180-day purge) - scheduled process executed by operations team @15:10
B5 - just recently purged (as part of 180-day purge) - scheduled process executed by operations team @15:10
B6 - just recently purged (as part of 180-day purge) - scheduled process executed by operations team @15:10
Current Data in Table B (Before Replication) @15:00
A1 (without updates)
A3
B1
B2
B3
B4
B5
B6
Expected rows in Table B (via replication/snapshot/materialized view / or any other method)
Replication at 15:30; Table B is read-only.
Expected rows after replication:
A1 - newly updated details
A2 - newly inserted row
A3
B1 - daily delete row is expected to be replicated
B2
B3 - daily delete row is expected to be replicated
Note: row B4 is not expected to be replicated to Table B.
Questions:
1) How can we replicate the updates, inserts and daily deletes while ignoring large purges? 2) How can large purge changes later be reflected in the replicated tables as well, without affecting the daily deletes?
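One technique that fits question 1 (a sketch, assuming a Streams capture/apply setup rather than snapshot-based replication): have the purge job tag its redo so that the capture rules skip it, while untagged daily DML keeps flowing:

BEGIN
   DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));   -- '1D' is an arbitrary example tag
END;
/
-- run the 180-day purge in this tagged session (hypothetical predicate)
DELETE FROM table_a WHERE created_dt < SYSDATE - 180;
COMMIT;
BEGIN
   DBMS_STREAMS.SET_TAG(tag => NULL);             -- back to normal, replicated DML
END;
/

Rules created by DBMS_STREAMS_ADM ignore tagged changes by default; for question 2, a separate scheduled purge can later be run directly against the replica.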
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export and then import back the data.
Is there any way to move a partitioned table between tablespaces of different block sizes?
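Before planning the move it is worth confirming the mismatch, since all partitions of one table must live in tablespaces of the same block size; a quick check with hypothetical tablespace names:

SELECT tablespace_name, block_size
FROM   dba_tablespaces
WHERE  tablespace_name IN ('TS_SOURCE', 'TS_TARGET');

If the target really does use a different block size, a CREATE TABLE ... AS SELECT into the target tablespace, followed by index rebuilds and a rename, is a commonly suggested alternative to export/import.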
I have a SQL script where triggers, procedures and functions are written. The triggers are causing DB outages and causing problems in the application as well.
I am trying my best, but with my limited experience and expertise I am not able to make good progress.
Scenario: course enrollments are inserted, deleted and updated in the course_main and course_users tables. This is done in the GUI as well as by a background snapshot scheduler run from a cron process. The course_main table contains all course enrollments, and the course_users table has crsmain_pk1 as a foreign key.
It's quite a big file and I am not sure what I should paste here, so I am uploading the file as txt.
Actually I am trying to replicate two DB servers, one in Hong Kong and another in China. When I try to establish the replication, I get the error 'ORA-04052: error occurred when looking up remote object', like this...
But when I try the same thing on my local network, it works fine. I have tried schema replication through Enterprise Manager Grid Control.
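ORA-04052 usually points at the database link rather than at replication itself, so checking the link from SQL*Plus first can narrow it down; a sketch with a hypothetical link name:

SELECT * FROM dual@hk_to_cn_link;
SELECT owner, object_name FROM all_objects@hk_to_cn_link WHERE rownum <= 5;

If these fail, the tnsnames.ora entry, the GLOBAL_NAMES setting, or the remote account's privileges are the usual suspects.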
I need to set up a central server with all the master tables and two other local databases which will hold updatable materialized views of the master tables. The databases must be synchronized with the central server, and users will work on the materialized view databases.
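At each local site this maps to updatable materialized views pulled over a database link; a minimal sketch with hypothetical names:

CREATE MATERIALIZED VIEW emp_mv
   FOR UPDATE
   REFRESH FAST
   AS SELECT * FROM emp@central_db;

For the updates to be pushed back to the central server, the master table needs a materialized view log there, and the view has to be registered in a materialized view group (DBMS_REPCAT.CREATE_MVIEW_REPGROUP / CREATE_MVIEW_REPOBJECT) layered on the master group.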
I am cloning my prod DB to test with the RMAN active clone command. I can successfully clone the DB, but after a few hours or so I see messages in the alert log that I have corrupted blocks in several datafiles. Note, I don't see these messages in my PROD DB, therefore I think that DB has no corruption. I have a few questions:
1) I was reading that having tables or indexes set with the NOLOGGING option can cause objects to be unrecoverable. Would this affect my active clone?
2) I know you can set either a DB or a tablespace to force logging. Is there a query I can use to determine if the DB is in force logging mode?
ALTER TABLESPACE tablespace_name FORCE LOGGING;
ALTER DATABASE FORCE LOGGING;
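The force-logging state is exposed in the dictionary; a quick check at both levels:

SELECT force_logging FROM v$database;
SELECT tablespace_name, force_logging FROM dba_tablespaces;

With FORCE LOGGING off and NOLOGGING loads running before or during the active duplicate, the clone can legitimately end up with unrecoverable (soft-corrupt) blocks that the source itself never reports.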
3) Lastly, how can I check why my clone DB would have corrupted blocks?
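A validation pass on the clone is a reasonable starting point; a sketch:

RMAN> VALIDATE DATABASE;   -- 11g; add CHECK LOGICAL to catch intra-block corruption too

SQL> SELECT file#, block#, blocks, corruption_type
     FROM   v$database_block_corruption;

A corruption_type of NOLOGGING would tie the problem back to question 1.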
Here is the clone command I am using.
rman catalog=rman/rman@proddb target=sys/sys@proddb << EOT
connect auxiliary sys/sys@clonedb
duplicate target database to clonedb
   from active database
   nofilenamecheck
   pfile=/u01/app/oracle/product/11g/dbs/initclonedb.ora;
exit
EOT
How can I check for datafile corruption or a datafile error? Is there some tool in Oracle, like the Linux fsck command, that can help me check a datafile?
Is there any way to isolate the disk region that is corrupted?
If a disk error exists in an ASM disk group, how can I isolate the error within that disk group? Is there an alternative to the VALIDATE DATAFILE command in RMAN? I'm using Oracle 11gR2 on a Linux box.
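DBVERIFY is the closest analogue to fsck for a single datafile, and ASM disk groups have their own consistency check; a sketch with hypothetical paths and names:

$ dbv file=/u01/oradata/PROD/users01.dbf blocksize=8192

SQL> ALTER DISKGROUP data CHECK NOREPAIR;   -- results go to the ASM alert log

Note that dbv reads the file outside the instance, so it can be run against a datafile while the database is up.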
I have a scenario with two databases, DB1 and DB2, in different locations, where I need to replicate some of the tables (around 10 to 15) from DB1 to DB2 (i.e. whenever I update any table in DB1 it has to be reflected in DB2). Both DB1 and DB2 have the same database objects.
(DB version - Oracle 10g Release 10.2.0.4.0).
What are the steps, and can it be done using materialized views?
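Materialized views are a natural fit for one-way table replication on 10g; a per-table sketch with hypothetical names (run the log on DB1, the rest on DB2):

-- On DB1: enable fast refresh for the master table (assumes emp has a primary key)
CREATE MATERIALIZED VIEW LOG ON emp;

-- On DB2: link back to DB1 and build a periodically refreshing copy
CREATE DATABASE LINK db1_link CONNECT TO app_user IDENTIFIED BY pw USING 'DB1';

CREATE MATERIALIZED VIEW emp_mv
   REFRESH FAST
   START WITH SYSDATE NEXT SYSDATE + 5/1440   -- refresh every 5 minutes
   AS SELECT * FROM emp@db1_link;

Fast refresh ships only the changed rows recorded in the MV log, which keeps the 10-15 tables in near sync without full reloads.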
I am trying user-managed recovery using archived and redo log files. I had restored an old cold backup, then copied the latest control file, redo log files, and archived log files to their actual locations. Then I tried to recover the database using the following steps:
connect sys as sysdba
startup mount;
recover database;
auto
After some archived files were applied, one of the archived files turned out to be corrupted and recovery was cancelled due to the error, so I shut down the database. Now I want to copy the corrupted archived file again from backup and continue the recovery. My question is: do I have to run recovery from the beginning, or can I resume it from the last state of the database?
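Media recovery is resumable: the datafile headers record the checkpoint up to which redo has already been applied, so restarting simply prompts for the first log still needed; a sketch:

SQL> STARTUP MOUNT
SQL> RECOVER DATABASE;
-- Oracle asks for the archived log at the point recovery stopped; supply the
-- freshly restored copy (or answer AUTO) and it continues from there.

Nothing already applied is re-applied from scratch.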
I have to reorganize a table that is related to several other tables. The reorg is too slow when it runs on this table directly. I would like to create an image of the table and sync it with the original in real time; when I run the reorg, I will use the image table, which is not constrained by indexes and other objects. Once the reorg is done, I would like to rename the table. How could I do this replication in real time?
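This "image table kept in sync, then swap names" pattern is essentially what online redefinition does; a sketch with hypothetical owner and table names:

BEGIN
   DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'BIG_TABLE');
END;
/
-- Create APP.BIG_TABLE_INT (the image) without the constraining indexes, then:
BEGIN
   DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INT');
   DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INT');
   DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'BIG_TABLE', 'BIG_TABLE_INT');
END;
/

FINISH_REDEF_TABLE performs the name swap with only a brief lock, so the application keeps using the original table name throughout.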
I have two schemas on two servers set up for replication, and replication is working fine.
I exported one schema to the other, so all the tables exist at both sites. I am adding objects to the replication group using Oracle Enterprise Manager Console.
Some of the tables are added fine, but some give me an error like:
ORA-23309: object UMESH.PRODUCT_MASTER of type TABLE exists
...and it also ends up in error when generating replication support:
SQL> select status, request, message, oname from dba_repcatlog;

STATUS         REQUEST
-------------- -----------------------------
MESSAGE
--------------------------------------------------------------------------------
ONAME
------------------------------
ERROR          CREATE_MASTER_REPOBJECT
[code]....
Sometimes I get an error like:
ORA-00942: table or view does not exist
...when using the CREATE_MASTER_REPOBJECT command to create the object at the master definition site, even though the table exists at the master site. In the same situation other objects work fine.
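Since the tables were pre-created at both sites by the export, one thing to try (a sketch; the group name is an assumption) is to tell DBMS_REPCAT explicitly to reuse the existing table instead of creating and populating it:

BEGIN
   DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
      gname               => 'rep_group',
      type                => 'TABLE',
      oname               => 'PRODUCT_MASTER',
      sname               => 'UMESH',
      use_existing_object => TRUE,    -- accept the pre-created table
      copy_rows           => FALSE);  -- data was already moved by the export
END;
/

ORA-23309 can also be raised when the existing object's definition differs between sites, so comparing the table's columns and constraints at both sites is worth doing; ORA-00942 usually means the table is missing, or not visible to repadmin, at one of the master sites.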