I learned that Oracle uses a supplemental logging mechanism to add the changed rows to the redo log files and identify the changed rows on the target replication database. Is that mechanism mandatory for handling the replication of data between the primary and backup databases?
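For context, a minimal sketch of what enabling supplemental logging looks like (the PRIMARY KEY variant is just one option):

-- minimal database-level supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- or also log primary key columns so changed rows can be identified on the target
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
-- verify what is enabled
SELECT supplemental_log_data_min, supplemental_log_data_pk FROM v$database;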
I am creating a database instance from a template. I have specified the location of the redo log files. When I run the DBCA utility it does create the redo log files in the specified directory, but the installation fails. When I checked the trace file, it said it was unable to locate the specified file (redo.log). But when I check the directory, the files are there.
1) Can we fetch SELECT statements from the redo log files through the LogMiner utility or any other tool? (I think the redo log files contain only INSERT, UPDATE, DELETE and DDL/DCL commands.)
2) If "No" to the above answer then how can i fetch all select statements fired on the system for a day or particular time. (setting of sql_trace may be the one of them, but can it be possible for system level)
In a backup and recovery session I lost my online redo log files. How are the datafiles recovered in that case? Is it possible to recover the online redo logs themselves?
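For reference, a hedged sketch: if the lost group was INACTIVE, it can usually be recreated rather than recovered (group 1 is an example):

SELECT group#, status, archived FROM v$log;       -- check the state of each group first
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;  -- recreate a lost/corrupt group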
I've been using ASM for a few years now and have always installed a new system with 3 diskgroups:
+DATA - for datafiles, control files, redo logs
+FRA - for archive logs, flash recovery, RMAN backups
Those I guess are the standards, but I've always created an extra (very small) diskgroup called +ONLINE, where I keep multiplexed copies of the redo logs and control files.
My reasoning behind this is that if there are any issues with the +DATA diskgroup, the redo logs and control files can still be accessed.
In the olden days (all of 5 years ago!), on local storage, this was important, but is it still important now? With all the striping and mirroring going on (at both the ASM and RAID levels), am I just being overly paranoid? Does this additional +ONLINE diskgroup actually hamper performance (with dual-write overheads that are not necessary)?
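For what it's worth, a sketch of how the multiplexing itself is set up (group numbers and control file paths are illustrative, not my real ones):

ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '+ONLINE' TO GROUP 2;
-- control file copies are driven by the control_files parameter
ALTER SYSTEM SET control_files = '+DATA/orcl/control01.ctl', '+ONLINE/orcl/control02.ctl' SCOPE = SPFILE;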
I've just successfully duplicated a standby database.
From the alert log:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORA102\CTA\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
[code].....
When I tried to add the online and standby redo logs, it errored out:
SYS@CTA>select logdetail.member, loggroup.group#, loggroup.sequence#, loggroup.archived, loggroup.status lg_status, logdetail.status ld_detail, logdetail.type
  2  from v$log loggroup join v$logfile logdetail
  3  on loggroup.group# = logdetail.group#;

MEMBER
--------------------------------------------------------------------------------
    GROUP#  SEQUENCE# ARC LG_STATUS        LD_DETA TYPE
---------- ---------- --- ---------------- ------- -------
[code].....
Based on my understanding from [URL] ....
Quote:
As part of the duplicating operation, RMAN automates the following steps:
Creates a control file for the duplicate database
Restores the target datafiles to the duplicate database and performs incomplete recovery by using all available incremental backups and archived redo logs
Shuts down and starts the auxiliary instance (refer to "Task 4: Start the Auxiliary Instance" for issues relating to client-side versus server-side initialization parameter files)
Opens the duplicate database with the RESETLOGS option after incomplete recovery to create the online redo logs (except when running DUPLICATE ... FOR STANDBY, in which case RMAN does not open the database). In other words, when duplicating for a standby database, RMAN does not create the online redo logs.
How should I add the online and standby redo logs? If I transfer the redo logs from primary to standby, I always encounter the following error:
Dump file d:\ora102\cta\dump\cta_arc0_3624.trc
Tue Sep 13 19:21:53 2011
ORACLE V10.2.0.4.0 - Production vsnsta=0
vsnsql=14 vsnxtr=3
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the OLAP, Data Mining and Real Application Testing options
Windows XP Version V5.1 Service Pack 2
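For reference, a hedged sketch of adding standby redo logs directly on the standby instead of transferring them (group numbers, sizes and paths are assumptions):

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('D:\ORA102\CTA\SRL04.LOG') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('D:\ORA102\CTA\SRL05.LOG') SIZE 50M;
-- online log groups on a standby can be recreated in place rather than copied from the primary
ALTER DATABASE CLEAR LOGFILE GROUP 1;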
I've got a legacy SAP system with Oracle 8i on Tru64. No changes at all are made, but for legal reasons we have to keep it up and running.
We currently do a full backup monthly by shutting down Oracle and backing up all the files to tape, which takes around 12 hours.
If I stop doing the full backup and only back up the control file and the archived redo log files every month, and I had to restore the full database years from now, would I be able to restore the database using the last full monthly backup plus the latest control file and archived redo log files?
1) I have 5 exported dump files.
2) All of those 5 dump files were taken in different time periods.
3) Many of those dump files contain the same partition records.
e.g.:
Dump 1: 01-06-2010 to 31-11-2010
Dump 2: 01-09-2010 to 31-12-2010
4) Now I want to import all of that partitioned data into a single table, without any duplication.
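One possible approach, sketched with hypothetical table and column names: import each dump into a staging table, then MERGE into the final table so overlapping partition records land only once:

-- id stands in for the real partition/business key
MERGE INTO target_tab t
USING staging_tab s
ON (t.id = s.id)
WHEN NOT MATCHED THEN
  INSERT (id, txn_date, amount)
  VALUES (s.id, s.txn_date, s.amount);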
I have written makefiles that compile .pc files on Unix. This was for several projects that use an oralib source code directory. Just running proc on one target .pc file works fine on Unix. I am trying to use proc (Oracle 10.2.0) on Windows and I keep getting:
Quote: unable to open include file #include <stdio.h> (and the same for other C library headers).
I am doing all development under Cygwin; this way I can write a makefile just like under Unix instead of using nmake. All C library headers are in /usr/include. When I run proc on Solaris like this:
proc program.pc

No problems, and I do get program.c.
However, on Windows I get the previous error message. I have tried proc include=/user/include program.pc and proc include=/user/include parse=full program.pc, but I still get the same error message.
Let's say a parameter was changed in the database, e.g. alter system set retention_target = 1500; and I want to know what the old value was before it was changed.
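A hedged sketch: if AWR is licensed and enabled (Diagnostics Pack), historical parameter values are snapshotted in DBA_HIST_PARAMETER; the parameter name below is simply the one from the example above:

SELECT snap_id, parameter_name, value
FROM dba_hist_parameter
WHERE parameter_name = 'retention_target'
ORDER BY snap_id;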
Name                                      Null?    Type
----------------------------------------- -------- ------------------------------------
C1                                        NOT NULL VARCHAR2(15)
C2                                                 VARCHAR2(254)
C3                                        NOT NULL NUMBER(15)
C4                                                 VARCHAR2(254)
C5                                        NOT NULL VARCHAR2(254)
C6                                        NOT NULL NUMBER(15)
C7                                        NOT NULL NUMBER(15)
C8                                        NOT NULL VARCHAR2(254)
[Code]...
But until yesterday it was showing the original column names.
Create Or Replace Function Fin_Prd(V_Dte In Date) Return Number Is
  V_Fin_Str Number;
Begin
[Code].....
The above function was running well. Today I have made some changes, as under:
Create Or Replace Function Fin_Prd(V_Dte In Date, V_Rtn_Flg In Number Default 0) Return Number Is
  V_Fin_Str Number;
[Code]....
The above function compiles and works well when I use it in a query at the SQL prompt or in Toad. But the problem is that all functions which use it are now invalid, and when I run a report whose query uses FIN_PRD, the error is "ORA-04062: timestamp of FIN_PRD has been changed".
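For reference, ORA-04062 generally clears once the dependents are recompiled against the new signature; a minimal sketch (MYSCHEMA is a placeholder, and the report/form itself also needs regenerating):

-- find what was invalidated by the signature change
SELECT object_name, object_type
FROM dba_objects
WHERE owner = 'MYSCHEMA' AND status = 'INVALID';

-- recompile only the invalid objects in that schema
EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'MYSCHEMA', compile_all => FALSE);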
A table structure is modified every now and then, because of which a few packages become invalid. Is there any way to monitor which user has changed the table structure?
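A hedged sketch of one way to do it: a database-level DDL trigger that records who altered which table (the audit table and trigger names here are made up):

CREATE TABLE ddl_audit (
  ddl_time  DATE,
  username  VARCHAR2(30),
  obj_owner VARCHAR2(30),
  obj_name  VARCHAR2(30),
  ddl_event VARCHAR2(30)
);

CREATE OR REPLACE TRIGGER trg_audit_table_ddl
AFTER DDL ON DATABASE
BEGIN
  -- log only table-level DDL (CREATE/ALTER/DROP)
  IF ora_dict_obj_type = 'TABLE' THEN
    INSERT INTO ddl_audit
    VALUES (SYSDATE, ora_login_user, ora_dict_obj_owner, ora_dict_obj_name, ora_sysevent);
  END IF;
END;
/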
I have some confusion about dbname, SID and tnsname.
rman target sys/oracle@suman auxiliary /
In the above command, is "suman" the SID, the TNS name, or the dbname?
By mistake, in my test database, I tried to change the dbname from sumandb to suman in the pfile. After changing it, I tried to start the database in nomount with the pfile, but Oracle threw an error that it was unable to locate the control file. Then I removed the control files and tried to start the database in nomount again; Oracle still threw an error about the control file.
This database I had installed through DBCA. I want to know all the places where the dbname resides.
Can I start the database through a pfile even if it was created through DBCA?
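For reference, a quick sketch of how to tell the three names apart on a running instance (SQL*Plus):

SHOW PARAMETER db_name        -- the database name (in the pfile/spfile and control file)
SHOW PARAMETER instance_name  -- the instance name (the SID)
SELECT name FROM v$database;  -- the name recorded in the control file
-- "@suman" in the rman command above is neither: it is a TNS alias resolved via tnsnames.ora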
Now I compiled that form in the RMP schema and ran it in the same schema, and it ran fine. But I created the same function in another schema, copied the same FMX into that runtime, and tried to run it. It won't work; it throws the error: "Signature has been changed."
Later on I came to know that this error occurs just because of the 2nd parameter, p_pass varchar2 default 'Y'.
ck_amt is a checkbox. ck_amt_tot is the total of jtt_amt_1 (but totalling only those records whose checkbox is checked).
My task is like this: when I check the checkbox, whatever value is in jtt_amt is transferred to the jtt_amt_1 field, but I can then change the value in the jtt_amt_1 field. I want to take the sum of that changed field and show the summation in ck_amt_tot.
I write the when-checkbox-changed trigger like this:
-------------------------------------------------
IF :jou_tra1_tab.ck_amt = 'Y' THEN
  set_item_instance_property('jou_tra1_tab.jtt_amt_1', CURRENT_RECORD, UPDATE_ALLOWED, PROPERTY_TRUE);
  :jou_tra1_tab.jtt_amt_1 := :jou_tra1_tab.jtt_amt;
ELSE
  :jou_tra1_tab.jtt_amt_1 := 0;
END IF;
and the when-validate-item trigger for jtt_amt_1:
--------------------------------------------------
IF :jou_tra1_tab.ck_amt = 'Y' THEN
  :jou_tra1_tab.ck_amt_tot := :jou_tra1_tab.ck_amt_tot + :jtt_amt_1;
ELSE
  :jou_tra1_tab.ck_amt_tot := :jou_tra1_tab.ck_amt_tot - :jtt_amt_1;
END IF;
But when I change the value in the jtt_amt_1 field, I can't get the right summation.
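A hedged sketch of one fix: hold the item's previous value in a hidden item (jtt_amt_old is hypothetical) and swap the old contribution for the new one, rather than only adding:

-- WHEN-NEW-ITEM-INSTANCE on jtt_amt_1: remember the value before the user edits it
:jou_tra1_tab.jtt_amt_old := NVL(:jou_tra1_tab.jtt_amt_1, 0);

-- WHEN-VALIDATE-ITEM on jtt_amt_1: replace the old contribution with the new one
IF :jou_tra1_tab.ck_amt = 'Y' THEN
  :jou_tra1_tab.ck_amt_tot := NVL(:jou_tra1_tab.ck_amt_tot, 0)
                              - NVL(:jou_tra1_tab.jtt_amt_old, 0)
                              + NVL(:jou_tra1_tab.jtt_amt_1, 0);
  :jou_tra1_tab.jtt_amt_old := :jou_tra1_tab.jtt_amt_1;
END IF;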
I've got an Oracle install [non-production, but devel] that is a tad screwed up. We moved the box and as a result changed the hostname to match the new naming scheme. Ever since then Oracle EM has been somewhat confused. In any case, I don't want OEM anyway now. The plan is to learn SQL*Plus.
That being said, I've used emctl to shut down dbconsole, but it seems there is something somewhere that keeps restarting two processes that like to sit around and take up 100% CPU. I can kill them, and they stay dead for a few hours, then crop up again. I was able to find this out about them:
And then this, which caused me to conclude it's Oracle EM:
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text
FROM v$session sess, v$sql sql
WHERE sql.sql_id(+) = sess.sql_id
AND sess.process IN (20334, 20336)
[code]...
We have a requirement where we need to migrate data from a legacy system (Oracle 10g) to our database (Oracle 10g again).
The thing is that the table structures will be slightly different between the two. Our current structure has some fields added/removed as well as some tables added/removed. But by and large the databases would be similar (about 75%).
While migrating, I would need to map the fields between source and destination manually. I would also need to populate some data into the newly added fields.
Is there any way to do this? I was checking out Oracle SQL Developer but could not get very far.
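One low-tech option, sketched with hypothetical names: create a database link to the legacy 10g system and do the mapping in plain INSERT ... SELECT statements, hard-coding or defaulting the newly added fields:

-- link name, credentials and TNS alias are placeholders
CREATE DATABASE LINK legacy_link
  CONNECT TO legacy_user IDENTIFIED BY legacy_pwd
  USING 'LEGACYTNS';

-- map old columns to new ones; 'UNKNOWN' fills a field the legacy system lacks
INSERT INTO customers_new (cust_id, cust_name, region, created_on)
SELECT customer_id, customer_name, 'UNKNOWN', SYSDATE
FROM customers@legacy_link;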
By default the DBMS_STATS package runs once every 24 hours to collect statistics for database objects, and Oracle collects new statistics when enough of the data (about 10%) has changed.
My question here is: how can I check whether a table has changed by 10% in the database?
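A hedged sketch, assuming table monitoring is enabled (it is by default in 10g): the *_TAB_MODIFICATIONS views track DML counts since statistics were last gathered:

-- flush the in-memory counters so recent DML shows up
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT m.table_name,
       m.inserts + m.updates + m.deletes AS changed_rows,
       t.num_rows,
       ROUND(100 * (m.inserts + m.updates + m.deletes) / NULLIF(t.num_rows, 0), 1) AS pct_changed
FROM user_tab_modifications m
JOIN user_tables t ON t.table_name = m.table_name;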