Streams :: Applying Conflict Handlers On One Table?
Aug 15, 2012
I create the following update conflict handlers, one after the other. Each works correctly on its own; my problem is that when I create the 2nd one it replaces the first, and when I create the 3rd it replaces the 2nd. I want all three in action simultaneously. How can I do this?
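For what it's worth, DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER keys each handler on the object plus its column list, so calling it again with the same column list replaces the earlier handler; three handlers can coexist only if each covers a different column group. A minimal sketch, with schema, table, and column names hypothetical:

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  -- handler 1: column group B
  cols(1) := 'b';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'scott.test1',
    method_name       => 'MAXIMUM',
    resolution_column => 'b',
    column_list       => cols);

  -- handler 2: a DIFFERENT column list (group C), so it is added
  -- alongside handler 1 instead of replacing it
  cols.DELETE;
  cols(1) := 'c';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'scott.test1',
    method_name       => 'OVERWRITE',
    resolution_column => 'c',
    column_list       => cols);
END;
/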
I have a block based on a view. The view is a join on 2 tables, the first table always brings back 4 records for each parameter passed to it in the where clause. The second table is outer joined to the first table and may contain no matched records or some matched records. In some cases there will be a 1:1 match to the first table.
The problem is how to create the table handler procedure correctly. I need to update 2 tables in the table handler procedure.
The block is only enabled for update (to preserve the 4 rows); however, some values on the block correspond to values from the second table. When you update a row in the block, how do you know whether you are actually inserting a row into the second table or updating an already existing record in it?
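A common shape for the second table's part of the handler (a sketch only; table and column names are hypothetical): attempt the UPDATE first and let SQL%ROWCOUNT tell you whether the outer-joined row actually existed:

PROCEDURE upsert_second(p_id IN NUMBER, p_val IN VARCHAR2) IS
BEGIN
  UPDATE second_table
     SET val = p_val
   WHERE id = p_id;
  IF SQL%ROWCOUNT = 0 THEN
    -- no matching row existed, so this block row represents an insert
    INSERT INTO second_table (id, val) VALUES (p_id, p_val);
  END IF;
END upsert_second;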
I have a question about timestamp conflict resolution. Let me describe it with an example:
I have a table (test1) with 3 columns (a, b, c).
I also have 3 column groups, one each for a, b, and c. If I want to use timestamp conflict resolution for b and c, should I add 2 columns (timestamp1 and timestamp2) to test1, one for each of those columns, and define 1 trigger for each of them?
I hesitated because, if that is true, then for a table with lots of columns needing timestamp conflict resolution the table would become very big. Is that correct?
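For what it's worth, yes: timestamp conflict resolution needs one timestamp column per resolution group, kept current by a trigger, so the overhead grows with the number of groups (a TIMESTAMP column costs roughly 11 bytes per row). A sketch for column b, with trigger and column names hypothetical:

CREATE OR REPLACE TRIGGER test1_ts_b
  BEFORE INSERT OR UPDATE OF b ON test1
  FOR EACH ROW
BEGIN
  -- maintain the resolution column for column group B
  :NEW.timestamp_b := SYSTIMESTAMP;
END;
/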
A Streams apply process which applies to a SQL Server database increases its PGA use continually until I stop the process and restart it. I need to stop it once every week or it will use too much of the PGA and the database will hang, causing paging etc.
I encountered the following error while trying to set up Streams replication at the database level using dbms_streams_adm.maintain_global:

begin
*
ERROR at line 1:
ORA-23616: Failure in executing block 6 for script
E00C49DDDB27C899E040A8C04C0119DA with
ORA-06550: line 21, column 3:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 21, column 3:
PL/SQL: SQL Statement ignored
ORA-06550: line 23, column 3:
PL/SQL: ORA-00942: table or view
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 659
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 682
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 2427
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 3004
ORA-06512: at line 2

SQL> select forward_block from dba_recoverable_script_blocks where script_id = ' [code]....
We use shared servers for our application connections... when the database starts and randomly grabs a free port number for the dispatchers, we occasionally run into a port conflict - we also have tuxedo and several other processes that start after the database, that have pre-defined ports assigned. So if the dispatcher grabs port 52452 but say the workstation listener is defined to use that same port - once the WSL starts, we have a conflict.
I know we can pre-assign specific ports to the dispatchers; my concern with doing that is that we currently support roughly 150 customers remotely, all using our standard database configuration; many of these have multiple databases per server - in some cases, up to 10-15 databases. So manually managing specific ports on all of these would be tedious, to say the least... especially if some other third party app comes into play that happens to use one of the ports we selected, and we have to change everyone's ports again.
We would rather keep letting Oracle randomly pick a free port within the defined range.
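For reference, pre-assigning a dispatcher port is a one-line parameter change (host and port hypothetical); the concern above is about having to manage this across every customer database:

ALTER SYSTEM SET DISPATCHERS =
  '(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=52452))(DISPATCHERS=2)'
  SCOPE=BOTH;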
If we add the known ports used by applications that start after the databases (the ones we end up in conflict with) to the /etc/services file, will that prevent the dispatchers from using those ports? Does Oracle search the /etc/services file to find used ports before it assigns new ones?
All servers are HP-UX, a mix of PA-RISC based and Integrity. Oracle versions are all 10.2.0.3, with a few at 11.2.0.2. Before anyone suggests it, moving away from shared servers is not an option; our application makes new database connections at every query - so thousands of connects/disconnects every hour; the overhead of spawning a new dedicated connection for every one of these is too great, and significantly slows down the application.
We have scheduled some jobs, such as monthly and semi-annual reports, using dbms_scheduler. We use 10gR2 software and Windows 2003 Server. We have a report which is supposed to run at 4:00 AM, and as soon as the report finishes it sends itself as email to the authorized users. When the scheduler ran the report, inside the report it shows as run at 3:08 AM where it gets systimestamp. But when I query:
select last_start_time, next_run_date from dba_scheduler_jobs where job_name = 'TEST';
last_start_time                       next_run_date
03-DEC-12 04.00.00.223000 AM -04:00   07-JAN-13 04.00.00.200000 AM -04:00
When I query systimestamp from dual on that database from SQL*Plus, I get the following:
systimestamp: 04-DEC-12 07.40.16.152000 AM -05:00
I see the difference of -04:00 versus -05:00 between the two queries. I know the systimestamp from dual is correct. How do I fix the scheduler job to run at the correct time, with daylight saving taken into effect?
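One commonly suggested fix, sketched here with a hypothetical region name: give the scheduler a named region as its default timezone, since a region tracks daylight saving while a fixed offset like -04:00 does not. Jobs whose start_date carries a fixed offset may also need to be re-created with a region-based timestamp.

BEGIN
  DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE(
    attribute => 'default_timezone',
    value     => 'America/New_York');
END;
/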
I have the following query. The problem is that dense_rank gives the wrong result when a multi-column sort is involved. So let's say the following is the data I get from the inner query, before applying dense_rank:
Seq_uid  Status  Salary  Pre_sort_col
1        A       4       A
1        A       3       A
2        B       5       B
3        A       0       A
After dense_rank, ordering by pre_sort_col desc and seq_uid desc, the result set is:
Seq_uid  Status  Salary  ROW_NUM  Pre_sort_col
3        A       0       1        A
1        A       4       3        A
1        A       3       3        A
2        B       5       2        B
which is wrong, as seq_uid 3 should not come first. I can't put salary in the dense_rank because these are dynamic columns; if a multi-sort is selected on 3 columns, the first column comes in pre_sort_col. Seq_uid has to be used to distinguish records which get the same pre_sort_col.
SELECT c.ROW_NUM, c.RECORD_TOTAL, VW.*
  FROM (SELECT DISTINCT seq_uid, row_num, record_total
          FROM (SELECT dense_rank() OVER (ORDER BY pre_sort_col DESC, seq_uid DESC) row_num,
                       a.*
                  FROM (SELECT COUNT(DISTINCT v1.seq_uid) OVER () record_total, [code]....
I am able to assign a user to a user group using the User Admin in APEX. I don't know how to assign a role to a group (I do know how to do that for an individual user). The only thing I can see for a user group is a name and a description! My requirement is to put a group of people into one group/role, so that every change to that role is automatically applied to each user in that group.
I have got a form in tabular format. Each record may or may not contain some specific colour.
I want to change the background colour of each record to whatever the user wants. For example, the user may want the first record to have a red background, the second green, the third blue, etc.
The colours are available in RGB format.
When I try set_item_property (background_color) in POST-QUERY or WHEN-NEW-ITEM-INSTANCE, it makes all records the same colour.
Note: if I put another text/display item and set its Number of Records Displayed property to 1, then it works, because only one record's colour is seen at a time.
I cannot use multiple visual attributes because the number of colours depends on user feedback.
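This matches how Forms behaves, for what it's worth: SET_ITEM_PROPERTY with BACKGROUND_COLOR is item-wide (it affects every displayed row), while per-record colouring has to go through SET_ITEM_INSTANCE_PROPERTY, which accepts only a named visual attribute rather than a raw RGB value. A sketch assuming a palette of pre-created attributes; block, item, and attribute names (VA_RED etc.) are hypothetical:

-- POST-QUERY trigger sketch; the visual attributes must exist in the form
DECLARE
  rec NUMBER := TO_NUMBER(:SYSTEM.TRIGGER_RECORD);
BEGIN
  IF :my_block.colour_code = 'RED' THEN
    SET_ITEM_INSTANCE_PROPERTY('MY_BLOCK.MY_ITEM', rec,
                               VISUAL_ATTRIBUTE, 'VA_RED');
  END IF;
END;

Arbitrary user-chosen RGB values would need the palette approach (one named attribute per supported colour), since instance-level properties cannot take an RGB string directly.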
We are trying to eliminate/minimize the downtime for our application. As part of new code deployments we sometimes need to modify the DB structure as well. As it takes time to back up the current DB and apply the new DDL, the application is down during that window.
Is there a way to eliminate the downtime, perhaps by leveraging Data Guard, GoldenGate or RAC concepts?
I installed Oracle 10.2.0.1.0 on RH Linux 4.5. After that I applied the patch for 10.2.0.4.0. Everything went smoothly, but at the last steps I unknowingly clicked the Next button. I don't know whether some scripts were supposed to be run, as I did during the installation of 10.1.0.4.0. If so, I didn't do that.
How can I proceed now? If some scripts are to be run, where are they located? Will it cause any issue if I run them after completing the update?
I am trying to apply patch 16619892 on 11.2.0.3.0 to upgrade it to 11.2.0.3.7.
I believe this is straightforward stuff, but I am getting the following errors and log details.
$ cd /noracle/patch/16619892/
xxxx@xxxx$ /noracle/home/oracle/product/11.2.0.3/OPatch/opatch apply
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation. All rights reserved.
We have an 11gR2 primary and standby running on RHEL 5. The primary lost network connectivity for a time and VMware locked up. The VM admin restarted the primary VM (it was a hard boot). This all happened while I was gone, and when I came in to work the primary database appeared to be working. I decided to dig deeper and check out the standby. On the standby...
Just to confirm: can we apply the October 2011 Critical Patch Update (CPU) to address the vulnerabilities covered from CPU 2007 up to CPU 2011?
The PC server (staging) where the patching will be applied is running under Windows Vista and has not been patched since its database creation. It is maintained by our contractors.
I'm applying the patch to upgrade from 10.2.0.3 to 10.2.0.5. It's a 2-node RAC. Before applying, the installer said the space was sufficient, but while applying the patches it throws a message that there is not enough space in the Oracle home. I was able to apply the patches in the cluster home successfully. How much space is required in the Oracle home for this patch? There is no information mentioned in the document either.
I have a form with a master-detail relationship (an invoicing form); the detail block is tabular, displaying up to 7 records...
Now my client wants to show a serial number along with each record while feeding the data (this includes inserting, editing, and deleting/clearing records); he wants to see the serial number as in MS Access.
I tried to use the system variable :system.cursor_record, but this does not work (on insert/edit/delete/clear record).
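A workaround often used for this (a sketch only; block and item names are hypothetical, and SERIAL_NO is a non-database display item): renumber the display item from the triggers that insert, delete, or clear records. Note that GO_BLOCK and the record-navigation built-ins are restricted, so this cannot run from POST-QUERY:

PROCEDURE renumber_serials IS
BEGIN
  GO_BLOCK('DETAIL');
  FIRST_RECORD;
  LOOP
    -- :SYSTEM.CURSOR_RECORD is the current record number as a string
    :detail.serial_no := TO_NUMBER(:SYSTEM.CURSOR_RECORD);
    EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
    NEXT_RECORD;
  END LOOP;
  FIRST_RECORD;
END;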
We are applying the PSU Jan 2011 patch 10349197 on Windows Server 2007. While applying the patch we get the below error.
The following files are active:

E:\oracle\product\10.2.0\bin\oraclient10.dll
E:\oracle\product\10.2.0\bin\orapls10.dll
E:\oracle\product\10.2.0\bin\oracommon10.dll
E:\oracle\product\10.2.0\bin\orageneric10.dll
E:\oracle\product\10.2.0\bin\oraplp10.dll
[code]........

The following warnings have occurred during OPatch execution:
1) OUI-67620: Interim patch 10349197 is a superset of the patch(es) [ 8559466 ] in the Oracle Home

It's not a RAC. We stopped all the services running on this server.
My database version is as follows:

Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

We have a Data Guard setup as well. Huge archive logs are being generated in our primary database. The archive logs ship to the standby with no delay, but applying them is taking longer in our physical standby database. Why is it taking more time to apply the archive logs (sync) on the standby? What could be the possible reasons? Note: the standby redo logs are the same size as the redo log files of the primary, and there is one more standby redo log than the primary has online redo logs. I need to report to a higher level, stating the cause of the delay in applying archive logs.
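When chasing apply lag, a first diagnostic on the standby is often a look at the recovery and log-shipping processes (standard views; shown only as a sketch):

-- what MRP/RFS are doing right now on the standby
SELECT process, status, thread#, sequence#, block#
  FROM v$managed_standby;

-- how far the applied logs trail the received ones
SELECT max(sequence#), applied
  FROM v$archived_log
 GROUP BY applied;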
I am trying to implement Oracle Streams replication (using Metalink Note 733691.1). I have configured the steps, but in my alert log I am getting the below error:
Check that the primary and standby are using a password file and remote_login_passwordfile is set to SHARED or EXCLUSIVE, and that the SYS password is same in the password files. returning error ORA-16191
I am successfully able to connect to both DBs from both servers:
From the 1st server: sqlplus sys@2nddb as sysdba
From the 2nd server: sqlplus sys@1stdb as sysdba
While connecting, it asks for the password.
Both DBs were created with the same Oracle SYS password. After this I disabled case sensitivity, but the error still persists.
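Since the quoted message points at the password files, the usual check/rebuild sequence looks like this (SID and password placeholders hypothetical; the ignorecase option exists from 11g onward):

SQL> show parameter remote_login_passwordfile
-- expect EXCLUSIVE or SHARED on both databases

$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=SameSysPwd force=y ignorecase=y

Recreate (or copy) the file with the identical SYS password on the other server, then retry the connection.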
Platform: Windows 2003. Streams setup: one-way streaming at table level.
The error: ORA-26786: A row with key ("REPT_NUM", "STATE_CODE", "SURVEY_ID") = (067305669, 49, J) exists but has conflicting column(s) "DATE_TIME", "PREV_PARENT_ID" in table TOPCATI_JOLTS.UNIT ORA-01403: no data found
We are consistently getting this error every other day, and some weeks more often, on different records of course. On the capture site the application does a process called "split cases". In this process the application takes an old PK case number and inserts a new PK case number with all of the data of the old case (parent level). At the unit (child) level the application changes all of the units to this new FK case number. This means the old case (parent) is left with no child units. This is all one transaction.
Is it possible that Streams may be applying the LCRs out of order, especially since that whole process is one transaction?
Unfortunately the ESB that we are using has a JMS component which seems to only be able to consume from a mono-consumer queue. So we have created our queue using the following:
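A single-consumer ANYDATA queue of the kind described is typically created along these lines (owner and queue names hypothetical, since the original commands are not shown here):

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'strmadmin.mono_qt',
    queue_payload_type => 'SYS.ANYDATA',
    multiple_consumers => FALSE);
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'strmadmin.mono_q',
    queue_table => 'strmadmin.mono_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'strmadmin.mono_q');
END;
/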
This therefore permits us to have a queue/queue table which pushes data to a single consumer. The problem comes when we try to add a table rule using the following command:
ORA-06512: at line 2
ORA-24039: Queue %s not created in queue table for multiple consumers
*Cause: Either an ADD_SUBSCRIBER, ALTER_SUBSCRIBER, or REMOVE_SUBSCRIBER procedure, or an ENQUEUE with a non-empty recipient list, was issued on a queue that was not created for multiple consumers.
*Action: Create the queue in a queue table that was created for multiple consumers and retry the call.
We are able to create the capture rule without any problem, but without the apply rule nothing seems to end up in the queue table. AQ is not a viable solution since it is troublesome when it comes to deletes and mass updates.
We have a core banking database; it includes our customer-related tables, and these tables are really huge.
We have another database for some applications, and it needs fresh customer data from the core banking database. Our current method is materialized views, but we have some performance problems with it.
What can give better performance for synchronizing the tables between databases?
I have a primary database and a standby database, both in ASM. Recently my archive logs got deleted, and I am trying to recover my standby database with an incremental backup based on SCN from the primary database. But I hit the below error when I recover the standby database with the incremental backup taken on the primary:
RMAN> recover database noredo;

Starting recover at 06-NOV-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +STDBY/11gdb/datafile/system.258.805921881
destination for restore of datafile 00002: +STDBY/11gdb/datafile/sysaux.259.805921967
destination for restore of datafile 00003: +STDBY/11gdb/datafile/undotbs1.260.805922023
destination for restore of datafile 00004: +STDBY/11gdb/datafile
I'm trying to write a PL/SQL block to find all missing archived logs that are needed for Streams replication.
There is already an Oracle Metalink note for this, but it only gives the archive log name that contains my dba_capture.start_scn, and we still need to check whether the files exist on disk or not!
The problem here is that, when using ASM, the dba_registered_archived_log view truncates the file name, and it is really difficult to pinpoint the logs. So is it fine to join this view with v$archived_log? Would the deleted and status columns do the trick? I modified the PL/SQL as below. Is this fine/accurate?
dbms_output.put_line('Capture will restart from SCN ' || lScn || ' in the following file:');
for cr in (select decode(a.name, NULL, 'NOT FOUND', a.name) name,
                  to_char(a.completion_time, 'hh24:mi:ss') completion_time
             from v$archived_log a, dba_registered_archived_log b
            where lScn between b.first_scn and b.next_scn
              and a.deleted = 'YES'
              and a.status != 'A')
loop
  f_rec := 1;
[code]......