Server Administration :: Excess Redo Generation After Migrating From 9.2.0 To 10.2.0.4
Sep 14, 2010
We had our production database hosted on Oracle 9.2.0. A few months back we migrated it to Oracle 10.2.0.4.0. After the migration I noticed that redo generation has become very high: earlier, the number of log files generated during production hours was around 30, whereas after the migration it has become around 200 files per day. I have run a Statspack report on this database, and the report says that db block changes and disk writes have become very high. The parameter timed_statistics has also been set to FALSE, yet there is no reduction in the number of log files generated. I used export/import to upgrade the database.
I have Oracle 9i running on HP-UX. I would like to find out how much redo we are generating in a given period of time. Is there any script that I can use to get this information?
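One common way to measure this, as a rough sketch rather than a definitive script, is to count log switches per hour from v$log_history and to look at the cumulative 'redo size' statistic in v$sysstat (both views exist in 9i):
select to_char(first_time, 'YYYY-MM-DD HH24') hour, count(*) log_switches
  from v$log_history
 where first_time > sysdate - 7
 group by to_char(first_time, 'YYYY-MM-DD HH24')
 order by 1;
select name, value from v$sysstat where name = 'redo size';
Multiplying the switches per hour by the online redo log size gives an approximate redo volume for that hour; the v$sysstat figure is cumulative since instance startup, so sample it twice and take the difference.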
Regarding redo generation, I found the following statement in another forum: "Undo segments generate redo data also, because undo segment changes are database changes, so they generate redo data too."
How can an undo segment generate both redo and undo data?
Redo generation is very high and I need to find out the reason. The database is on a 2-node cluster. I checked the alert log and the log writer trace files, and pasted the content below:
-- alert log trace from node1 (node2 also has the same type of message). Archive destination disk group TXCOM_BACKUP_01 has enough space (80 GB).
Mon Jan 7 00:49:10 2013
Thread 1 advanced to log sequence 448546 (LGWR switch)
  Current log# 1 seq# 448546 mem# 0: +TXCOM_DATA_01/txcom/onlinelog/group_1.274.785770579
  Current log# 1 seq# 448546 mem# 1: +TXCOM_DATA_01/txcom/onlinelog/group_1.302.802265189
Mon Jan 7 00:49:10 2013
[code]...
In the alert log, I can see the archive destination disk group (TXCOM_BACKUP_01) getting DISMOUNTED and then MOUNTED again during every archive file generation.
Mon Jan 7 00:49:20 2013
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
The archive destination parameter is not configured on either node. It should read the diskgroup name (+TXCOM_BACKUP_01) and a corresponding size limit. Should I configure this?
SQL> show parameter db_recovery
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0
[code]...
Should I bring the database to mount stage and set log_archive_max_processes to a higher count? The current value is 2 (the default).
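If the intention is to archive into the flash recovery area on +TXCOM_BACKUP_01, a hedged sketch of the parameter changes could look like the following (the 80G size is only an illustration taken from the diskgroup size mentioned above; db_recovery_file_dest_size must be set before db_recovery_file_dest, and log_archive_max_processes is dynamic, so no restart to mount stage is needed):
alter system set db_recovery_file_dest_size = 80G scope=both sid='*';
alter system set db_recovery_file_dest = '+TXCOM_BACKUP_01' scope=both sid='*';
alter system set log_archive_max_processes = 4 scope=both sid='*';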
I'm trying to migrate an Oracle Enterprise Server database v9.2.0.1.0 deployed on Windows 2000 Server 32-bit to v10.2.0.5.0 on Windows 2008 R2 64-bit. Basically, I'm following the instructions provided by this document: [URL]
1) Updated v9.2.0.1.0 to v9.2.0.6.0 on the old server
2) Backed up the database as follows:
SQLPLUS /NOLOG
CONNECT / AS SYSDBA
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
SHUTDOWN IMMEDIATE;
3) Installed Oracle Enterprise Server 10.2.0.4 on the new server and updated to v10.2.0.5
4) Copied the trace files, data files, control files, archive logs and init.ora to the new server. The redo logs were not copied, since ARCHIVELOG mode is enabled on the old server.
5) Created an Oracle service on the new server as follows:
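The actual command is not shown in the original post; on Windows the service is normally created with the oradim utility. A minimal sketch, assuming a hypothetical SID of PROD and a copied init.ora, would be:
C:\> oradim -NEW -SID PROD -SYSPWD change_me -STARTMODE manual -PFILE C:\oracle\product\10.2.0\db_1\database\initPROD.ora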
I have created 5 materialized views with the REFRESH FORCE option.
The refresh is done every 10 minutes. My destination database is MySQL; the DB link is created in Oracle 10g, and the materialized views are created over that DB link.
Since the materialized view refresh runs every 15 minutes, my redo generation has gone for a toss: the system is generating one redo file per minute, and the redo log size is 500 MB. Because the materialized views are based on MySQL tables, the system does a complete refresh every time, which generates more and more redo.
Is there any setting, option or other solution to reduce this high redo generation?
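One option worth testing, as a sketch rather than a guaranteed fix, is a non-atomic complete refresh, which lets Oracle truncate the materialized view and reload it with direct-path inserts instead of deleting and re-inserting rows, usually producing far less redo (MV_REMOTE_DATA is a placeholder name):
begin
  dbms_mview.refresh(list => 'MV_REMOTE_DATA', method => 'C', atomic_refresh => false);
end;
/
The trade-off is that the materialized view is briefly empty during the refresh, so this only suits cases where readers can tolerate that window.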
On weekends we have too many archive logs generated. I have taken the data for a week and found that the average number of archive logs generated from Monday to Friday is 7 files per day, but on Saturday and Sunday the average is 60 files and FG1 gets full. On weekends we have all types of backups running, like incremental, archival and logical backups, and on Sunday we have a full physical backup.
What is the reason for so many archive log files being generated on weekends? Is it due to the hot and logical backups, and if yes, then how?
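To confirm the pattern, a quick sketch of a query against v$archived_log that groups archive volume by day of week:
select to_char(completion_time, 'DY') day, count(*) files,
       round(sum(blocks * block_size)/1024/1024) mb
  from v$archived_log
 where completion_time > sysdate - 14
 group by to_char(completion_time, 'DY')
 order by 2 desc;
If the spike lines up with the backup windows, note that user-managed hot backups (ALTER TABLESPACE ... BEGIN BACKUP) do inflate redo, because whole block images are logged on the first change to each block while in backup mode; RMAN backups do not have that effect, and logical exports mostly add redo only through delayed block cleanout, so the numbers should show which job is the real contributor.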
I'm on 11.2.0.3, and generation of the ASH report is very slow, while AWR and ADDM reports are generated quickly. To understand what is happening, I checked the wait events on the session that is executing the ASH report, and I found that this session spends 99% of its time waiting on 'control file sequential read'. Is there any way to make the generation of the ASH report quick? Why does the generation need to access the control file?
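One way to double-check where the time is going, assuming you know the SID of the session running the report (shown here as a bind variable), is to read its cumulative wait events:
select event, total_waits, time_waited
  from v$session_event
 where sid = :ash_report_sid
 order by time_waited desc;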
We are facing an issue on one of our databases. The database has been generating a large number of trace files (14,000) over the last two days, consuming around 15 GB of disk space, and the content of the trace files does not contain any meaningful message to debug:
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (Many lines with above message)
The alert log has one error repeated since yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file has this error:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
[Code] .......
I am using Oracle 10g on Solaris 10. Currently archived logs are generated size-wise at 52 MB each. I want to know what the best practice is for archive log generation: should it be driven by a time interval or by size?
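If a time-based ceiling is wanted in addition to the size-based switching, one hedged option is archive_lag_target, which forces a log switch after the given number of seconds (1800 below is only an example value):
alter system set archive_lag_target = 1800 scope=both;
Size-based switching on its own is the common default; the time bound mainly matters when you need a predictable upper limit on how much data a lost online log could represent, for example with a standby.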
I learnt that the log writer writes to the redo log files when the redo log buffer is 1/3 full. Does that mean 66% of the redo log buffer is always empty and never used?
If not, isn't that a waste of memory (66% always empty!)?
I have a situation where very little redo is generated, let us say 10 MB. Which solution will be better?
1. Create one redo log group about 12 MB in size.
2. Create two redo log groups about 5 MB each in size, as recommended by Oracle.
Solution 1 also looks appropriate for me because I generate less redo than the redo log group size, so my whole redo will fit in it, and I can force a checkpoint after a certain period of time, let us say every 3 seconds.
In one of our databases I found that scenario one is implemented. So I want to know the pros and cons of both of these practices.
1) Can we fetch SELECT statements from redo log files through the LogMiner utility or any other tool? (I think the redo log files contain only INSERT, UPDATE, DELETE and DDL/DCL commands.)
2) If "no" to the above, then how can I fetch all SELECT statements fired on the system for a day or a particular time period? (Setting sql_trace may be one way, but can it be done at the system level?)
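Redo records describe data changes (INSERT, UPDATE, DELETE and DDL), not queries, so SELECT statements cannot be recovered from the redo logs. For the statements that are there, a minimal LogMiner sketch (the archived log path and schema name are placeholders) looks like:
begin
  dbms_logmnr.add_logfile(logfilename => '/arch/arch_1_100.arc', options => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
end;
/
select timestamp, sql_redo from v$logmnr_contents where seg_owner = 'SCOTT';
exec dbms_logmnr.end_logmnr;
To capture SELECTs instead, the usual routes are auditing (AUDIT SELECT ON schema.table) or enabling SQL trace, both of which can be configured system-wide.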
Today I noticed a problem with my database: my redo logs switch every 3 minutes. I also noticed there are no transaction changes happening in the database, but the redo still switches.
Fri Oct 05 06:10:05 2012
Thread 1 advanced to log sequence 79244
Current log# 2 seq# 79244 mem# 0: D:ORADATAORACIREDO02.LOG
Fri Oct 05 06:12:16 2012
Thread 1 advanced to log sequence 79245
Current log# 1 seq# 79245 mem# 0: D:ORADATAORACIREDO01.LOG
Fri Oct 05 06:14:28 2012
[code]......
Why is the redo switch happening? Is there some internal problem that causes the redo to switch?
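One thing worth ruling out, as a sketch, is a time-based switch being forced by the archive_lag_target parameter or by an external job; the parameter and the recent switch times can be checked with:
show parameter archive_lag_target
select sequence#, to_char(first_time, 'DD-MON HH24:MI:SS') first_time
  from v$log_history
 where first_time > sysdate - 1
 order by first_time;
If the switches are almost exactly evenly spaced regardless of load, a timer (parameter, scheduled job or backup script issuing ALTER SYSTEM SWITCH LOGFILE) is more likely than genuine redo volume.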
SQL> update t set a = 1 where b = 2;   -- must have a redo record
2 rows updated.
SQL> rollback;
The above redo record for the uncommitted change must be written from the redo buffer to the online redo log file. Why does Oracle write redo records for uncommitted changes to the online redo log file, and when will they be used?
Whenever any transaction happens in the database, redo is generated for that transaction. Is a SELECT statement treated as a transaction, given that it doesn't modify anything in the database? If a SELECT statement is not a transaction, there should not be any redo generation for it.
So does a SELECT statement generate redo? If yes, then why?
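A simple way to see this for yourself, as a sketch, is to read your own session's 'redo size' statistic before and after the query:
select n.name, s.value
  from v$mystat s join v$statname n on s.statistic# = n.statistic#
 where n.name = 'redo size';
For an ordinary SELECT the value normally does not move, because a query is not a transaction. The main exception is delayed block cleanout: a SELECT that touches blocks left dirty by a recently committed transaction may clean them out and generate a small amount of redo doing so.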
I am trying to understand the redo/undo concept. Refer to the following data:
create table t(n number);
insert into t values(10);
commit;
Now I update as follows:
update t set n=20;
As per my understanding, the before image, i.e. n=10, is stored in undo (to be used for rollback, transaction recovery and even instance recovery, but not media recovery), and the after image n=20 is stored in redo (to be used for various recovery purposes, including media recovery in the case of a consistent backup).
So redo logs are for rolling forward and undo is for rolling back, keeping the transaction and the database consistent. If my above understanding is true, then what is meant by the term 'redo required for undo'?
Also, if there are 2 databases, db1 and db2, connected using a database link, and we are populating table t1 in db1 from table t2 in db2 over the db link, where will redo and undo be generated: db1 or db2?
We have one primary Oracle 10.2 database and a standby database with no Data Guard. Initially we had 2 redo log groups in both the primary and the standby database.
We have recently added 2 more redo log groups and increased the size of the log members from 50M to 200M in the primary database. We don't have any problem in the primary database, but in the standby database we face a problem because we cannot open it; it is always in mount stage. How do we change the size of the current redo log there, given that we can't run ALTER SYSTEM SWITCH LOGFILE in mount stage?
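A hedged outline of the usual approach on a physical standby (verify it against your setup, since you mention there is no Data Guard managing it): stop recovery (managed or manual, as applicable), allow manual file management, then clear, drop and re-add each group at the new size; no log switch is needed because the standby's online logs are not current while it sits in mount/recovery.
alter database recover managed standby database cancel;
alter system set standby_file_management = manual;
alter database clear logfile group 1;
alter database drop logfile group 1;
alter database add logfile group 1 size 200m;
-- repeat for each group, then:
alter system set standby_file_management = auto;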
Regarding database administration: we are planning to use an Amazon cloud database, which does not allow us to log in to the server machine; unfortunately Amazon doesn't provide SSH access to it. In general, for doing any administration task on the database, will there be a need to log into the machine? We can always log in through TOAD or any other SQL client, but we cannot SSH to the server. Can this limitation affect administration?
I'm currently working on a project in which I do not have permission to access the server where the database is installed and configured. Because of company policies, I do not have admin rights over Oracle, but I do have an account that can select from DBA_USER_PRIVS, for instance.
I would like to know if there is any way to access the database logs to find out whether there was any kind of problem within the database, because one of my schemas mysteriously went clean (all tables, sequences, triggers, ... vanished).
I need to create a one-to-many user DB link in Oracle 10g. Meaning: I have a user A in database 1 and I want to access the objects of users B, C and D in database 2. How do I create a public database link so that I can have this one-to-many user access?
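A database link always connects as one fixed remote account, so a single link cannot map one local user onto three remote users. A common sketch is to create one public link per remote account, or a single link to an account in database 2 that has been granted SELECT on B's, C's and D's objects (the names, password and TNS alias below are placeholders):
create public database link db2_link
  connect to b identified by b_password
  using 'DB2';
select * from c.some_table@db2_link;  -- works only if remote user B has been granted access to C.SOME_TABLE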
I logged in as SYSDBA, created the DB link and made the necessary changes to tnsnames.ora. It then started working correctly and I can get the data by selecting over the link. But when I log in as a different user, it shows the following error:
ORA-12154: TNS:could not resolve the connect identifier specified
How do I resolve this? Is there any privilege or permission that needs to be given to that user?
I have to install Oracle 10g on Solaris 10 SPARC. I prepared everything successfully, but when I went to create the ASM instance, I found that the disk selection box in the Create New dialog did not show any disks. In brief:
I want to get clear on one thing. Yesterday I installed Oracle 9i and Developer 2000 for my client.
When they run one report they get stuck with a PL/SQL compilation error, REP-1247.
When I checked that report in Report Builder, the query uses a table that does not belong to that schema. I prefixed it with schema.tablename and compiled, but this is coming up for other reports also; only then did I realise that they are accessing other schemas too. How can I sort this out?
Can I fix this by giving a full access privilege, or what privilege can I give to get full access to the other schema's tables?
How can I check, in the old database, what roles and privileges were given to this user?
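In the old database, a user with access to the DBA views can list them with something along these lines (SCOTT is a placeholder grantee):
select granted_role from dba_role_privs where grantee = 'SCOTT';
select privilege from dba_sys_privs where grantee = 'SCOTT';
select owner, table_name, privilege from dba_tab_privs where grantee = 'SCOTT';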
I am facing a strange issue on 11gR2 (OEL 5.4) on a read-only-with-apply standby database. It is throwing ORA-16000: database open for read-only access during SELECTs.
Here is a snapshot of the errors:
ORA-00604: error occurred at recursive SQL level 1
ORA-16000: database open for read-only access
We have created a new DB link, but we are not able to access the remote database. When we try to access any table using the DB link, the system hangs.
We would like to know which parameters/permissions affect DB link access, either at the database level or at the server level.
One observation we made is that this particular DB link does not get dropped when we try to drop it.