Streams :: Streams Apply Process Keeps Growing PGA?
Nov 1, 2013
A Streams apply process which applies to a SQL Server database is increasing its PGA use continually until I stop the process and restart it. I need to stop it once every week or it will use too much of the PGA and the database will hang, causing paging etc.
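To see how much PGA the apply servers are actually holding between restarts, one option is to join the apply-server view to V$PROCESS. A minimal sketch, assuming a hypothetical apply process name of MY_APPLY:

-- Hypothetical apply name; reports PGA for each apply server's OS process
SELECT a.server_id, s.sid, s.serial#, p.spid,
       ROUND(p.pga_used_mem  / 1024 / 1024) AS pga_used_mb,
       ROUND(p.pga_alloc_mem / 1024 / 1024) AS pga_alloc_mb
  FROM v$streams_apply_server a
  JOIN v$session s ON s.sid  = a.sid AND s.serial# = a.serial#
  JOIN v$process p ON p.addr = s.paddr
 WHERE a.apply_name = 'MY_APPLY'
 ORDER BY a.server_id;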
View 1 Replies
Mar 27, 2013
My setup is following:
host22 (Oracle Linux 5.9)                  |  host30 (Windows XP) with SQL Server Express 2008
DB22                 | Gateway to MSSQL |     DB30
Local        Local   |                  |
Capture -->  Apply   ------> Gateway -----------------> DUMMYUSR.DUMMYTBL
Process      Process |                  |
The capture process on DB22 captures changes made to the local DUMMYUSR.DUMMYTBL table, enqueues them into the capture queue and propagates them to the apply queue for the local apply process, which, using database link DB30, is supposed to apply these changes to table DUMMYUSR.DUMMYTBL on the remote SQL Server database DB30.
The problem is that when the apply process executes a transaction, it aborts with the error:
ORA-26713: remote object does not exist or is inaccessible
ORA-26712: remote object is "DUMMYUSR"."DUMMYTBL"@"DB30"
In the trace log I can see the following message:
kniNoPrimaryKeyError: ORA-26713, remote obj doesn't exist,
DUMMYUSR.DUMMYTBL@DB30
[code]...
What can cause the ORA-26713 error when the apply process applies changes to a SQL Server database through Oracle Database Gateway, given that the object to be modified exists and is accessible from the Oracle database for the strmadmin user?
By the way, the database link and tables were created with the following commands:
DB22 (Oracle), connected as strmadmin:
create database link DB30 connect to "dummyusr" identified by "dummypwd" using 'DB30';
DB22 (Oracle), connected as dummyusr:
create table DUMMYTBL (no number, name char(6), ddate date, payload varchar2(2013));
DB30 (MSSQL), connected as dummyusr:
create schema DUMMYUSR authorization dummyusr;
create table DUMMYUSR.DUMMYTBL (no float(53), name char(6), ddate datetime2(0), payload varchar(2013));
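One thing worth checking, given the kniNoPrimaryKeyError in the trace: the SQL Server table has no primary key, and for heterogeneous apply Streams expects substitute key columns to be declared at the apply site. A hedged sketch, assuming (an assumption, not from the post) that the NO column uniquely identifies rows:

-- Run as strmadmin on DB22; using NO as the substitute key is an assumption
BEGIN
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name         => 'DUMMYUSR.DUMMYTBL',
    column_list         => 'NO',
    apply_database_link => 'DB30');
END;
/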
View 1 Replies
View Related
Oct 10, 2013
I am trying to set up Streams on 11.2.0.3 on a Windows 2008 R2 server. Due to an error in running propagation, I am trying to delete both the Capture and Apply queues. I have deleted the Apply queue but am unable to drop the Capture queue.
SQL> EXEC DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'LAO_NLPG73_BLPU_APPLY');
SQL> select * from dba_apply;
LAO_NLPG73_BLPU_APPLY NLPG73_BLPU_APPLY_Q STRMADMIN YES RULESET$_732 STRMADMIN STRMADMIN 301355 ABORTED 09/10/2013 17:34:21 1013 ORA-01013: user requested cancel of current operation CAPTURED STREAMS APPLY
SQL> select * from dba_queues;
STRMADMIN NLPG73_BLPU_APPLY_Q
[code]...
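A capture queue normally cannot be removed while the capture process that uses it still exists. A hedged sketch of the usual order of operations, with hypothetical capture and queue names (the real ones are not shown in the post):

-- Hypothetical names; stop and drop the capture process first, then remove its queue
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'NLPG73_BLPU_CAPTURE');
  DBMS_CAPTURE_ADM.DROP_CAPTURE(
    capture_name          => 'NLPG73_BLPU_CAPTURE',
    drop_unused_rule_sets => TRUE);
  DBMS_STREAMS_ADM.REMOVE_QUEUE(
    queue_name              => 'STRMADMIN.NLPG73_BLPU_CAPTURE_Q',
    cascade                 => FALSE,
    drop_unused_queue_table => TRUE);
END;
/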
View 5 Replies
View Related
Jun 26, 2013
I encountered the following error while trying to set up Streams replication at the database level using dbms_streams_adm.maintain_global. - Desmond
begin
*
ERROR at line 1:
ORA-23616: Failure in executing block 6 for script
E00C49DDDB27C899E040A8C04C0119DA with
ORA-06550: line 21, column 3:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 21, column 3:
PL/SQL: SQL Statement ignored
ORA-06550: line 23, column 3:
PL/SQL: ORA-00942: table or view does not exist
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 659
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 682
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 2427
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 3004
ORA-06512: at line 2
SQL> select forward_block from dba_recoverable_script_blocks where script_id = '
[code]....
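When MAINTAIN_GLOBAL fails mid-script, the failing block and its error are recorded in the recoverable-script views, and the operation can usually be resumed or rolled back with DBMS_STREAMS_ADM.RECOVER_OPERATION. A hedged sketch, using the script id from the error above:

-- Inspect the error recorded for the failed script block
SELECT script_id, block_num, error_message
  FROM dba_recoverable_script_errors
 WHERE script_id = 'E00C49DDDB27C899E040A8C04C0119DA';

-- Then either retry from the failed block or roll the operation back
BEGIN
  DBMS_STREAMS_ADM.RECOVER_OPERATION(
    script_id      => 'E00C49DDDB27C899E040A8C04C0119DA',
    operation_mode => 'FORWARD');   -- or 'ROLLBACK' / 'PURGE'
END;
/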
View 1 Replies
View Related
Feb 9, 2013
Database Version - 10.2.0.4.0
OS - Red Hat Linux
We have configured Oracle one-way Streams between two databases. The source database captures the changes (no downstream capture configured). The configuration was working fine, but the destination database was lagging far behind, about 15 days behind the source. We were OK with that, but the problem now is that, at the client's request, we restored a previous backup and opened the source database with the resetlogs option. After resetlogs the archive log sequence has changed and Streams is no longer working.
Can I still apply the previous archived logs (those generated before the resetlogs) on the destination database somehow? The source database is a production database.
View 1 Replies
View Related
Feb 3, 2012
Source DB is logical standby 10.2.0.5
Destination DB is 11.2.0.3
Is it possible to set up streams on the logical DB to replicate some schemas to the 11gR2 DB?
View 10 Replies
View Related
Apr 18, 2013
I am trying to implement Oracle Streams replication (using Metalink Note 733691.1). I have configured the steps, but in my alert log I am getting the error below:
Check that the primary and standby are using a password file and remote_login_passwordfile is set to SHARED or EXCLUSIVE, and that the SYS password is same in the password files. returning error ORA-16191
I am successfully able to connect to both databases from each server.
From 1st server
sqlplus sys@2nddb as sysdba
From 2nd server
sqlplus sys@1stdb as sysdba
While connecting it asks for the password.
Both databases were created with the same Oracle SYS password. After that I disabled password case sensitivity, but the error still persists.
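ORA-16191 during this kind of setup usually comes down to the password file entries not matching on the two sides. A hedged sketch of one thing to try on both databases, assuming remote_login_passwordfile is set to EXCLUSIVE so that changing the SYS password also refreshes the password file entry (an assumption):

-- Run on both databases; the password value is hypothetical
ALTER SYSTEM SET sec_case_sensitive_logon = FALSE;
ALTER USER sys IDENTIFIED BY "SamePasswordOnBothSides";

-- Verify the password-file users on each side
SELECT * FROM v$pwfile_users;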
View 1 Replies
View Related
Oct 4, 2010
Platform: Windows 2003
Streams Set up: One way Streaming at table level
The error: ORA-26786: A row with key ("REPT_NUM", "STATE_CODE", "SURVEY_ID") = (067305669, 49, J) exists but has conflicting column(s) "DATE_TIME", "PREV_PARENT_ID" in table TOPCATI_JOLTS.UNIT ORA-01403: no data found
We are consistently getting this error every other day, some weeks more often, for different records of course. On the capture site the application runs a process called "split cases". In this process the application takes an old PK case number and inserts a new PK case number with all of the data of the old case (parent level); at the unit (child level) it then changes all of the units to this new FK case number. This means the old case (parent) is left with no child units. This is all one transaction.
Is it possible that Streams is applying the LCRs out of order, especially since that whole process is one transaction?
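As far as I understand, the apply process preserves statement order within a single transaction, so genuine reordering is unlikely; apply parallelism and dependency handling, or destination rows that really do differ, are more common causes of ORA-26786. Two hedged things to look at, sketched below; the conflict handler assumes that overwriting the destination with the source values is acceptable (an assumption):

-- 1) Is the apply running with parallelism? (dependencies between parent and child then matter)
SELECT apply_name, parameter, value
  FROM dba_apply_parameters
 WHERE parameter IN ('PARALLELISM', 'COMMIT_SERIALIZATION');

-- 2) Resolve the reported columns with an OVERWRITE update conflict handler (assumption)
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'DATE_TIME';
  cols(2) := 'PREV_PARENT_ID';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'TOPCATI_JOLTS.UNIT',
    method_name       => 'OVERWRITE',
    resolution_column => 'DATE_TIME',
    column_list       => cols);
END;
/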
View 2 Replies
View Related
Mar 26, 2013
I am trying to set up a one-way queue that will be consumed by an in-house ESB. We are currently able to create a multi-consumer queue using:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
END;
Unfortunately the ESB we are using relies on a JMS component which seems to be able to consume only from a single-consumer queue, so we created our queue using the following:
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    Queue_table        => '"STRMADMIN"."STREAMS_QUEUE_TABLE"',
    Queue_payload_type => 'SYS.ANYDATA',
    storage_clause     => 'PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 TABLESPACE
[code]...
This therefore gives us a queue/queue table which pushes data to a single consumer. The problem comes when we try to add a table rule using the following command:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'source.t1',
    streams_type => 'apply',
    streams_name => 'streams_apply',
    queue_name   => 'strmadmin.streams_queue',
[code]...
We then get the following error:
ORA-24039: Queue %s not created in queue table for multiple consumers
ORA-06512: at line 2
*Cause: Either an ADD_SUBSCRIBER, ALTER_SUBSCRIBER, or REMOVE_SUBSCRIBER procedure, or an ENQUEUE with a non-empty recipient list, was issued on a queue that was not created for multiple consumers.
*Action: Create the queue in a queue table that was created for multiple consumers and retry the call.
We are able to create the capture rule without any problem, but without the apply rule nothing seems to end up in the queue table. Plain AQ is not a viable solution for us since it is troublesome when it comes to deletes and mass updates.
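For reference, the error's own Action text points at the constraint: a Streams apply process can only attach to an ANYDATA queue created for multiple consumers. A minimal sketch of such a queue follows (the JMS side would then have to connect as one of its consumers, e.g. a durable subscriber, rather than to a mono-consumer queue); the names reuse those above:

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'STRMADMIN.STREAMS_QUEUE_TABLE',
    queue_payload_type => 'SYS.ANYDATA',
    multiple_consumers => TRUE);   -- required for Streams capture/apply
  DBMS_AQADM.CREATE_QUEUE(
    queue_name  => 'STRMADMIN.STREAMS_QUEUE',
    queue_table => 'STRMADMIN.STREAMS_QUEUE_TABLE');
  DBMS_AQADM.START_QUEUE(queue_name => 'STRMADMIN.STREAMS_QUEUE');
END;
/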
View 1 Replies
View Related
Jan 28, 2013
We have a core banking database, and this database includes our customer-related tables, which are really huge.
We also have another database for some applications, and this database needs fresh customer data that lives in the core banking database. Our current method is materialized views, but we have some performance problems with it.
What would give better performance for synchronizing the tables between the databases?
-oracle streams?
-triggers?
-dbms_comparison?
View 3 Replies
View Related
Mar 2, 2012
I'm trying to write a PL/SQL block to find all missing archived logs that are needed for Streams replication.
There is already an Oracle Metalink note for this, but it only gives the archive log name that contains my dba_capture.start_scn, and we also need to check whether the files exist on disk or not.
The problem is that, when using ASM, the dba_registered_archived_log view truncates the file name and it is really difficult to pinpoint the logs. So is it fine to join this view with v$archived_log? Would the DELETED and STATUS columns do the trick? I modified the PL/SQL as below. Is this fine/accurate?
dbms_output.put_line('Capture will restart from SCN ' || lScn || ' in the following file:');
for cr in
  (select decode(a.name, NULL, 'NOT FOUND', a.name) name,
          to_char(a.completion_time, 'hh24:mi:ss') completion_time
     from v$archived_log a, dba_registered_archived_log b
    where lscn between b.first_scn and b.next_scn
      and a.deleted = 'YES' and a.status != 'A')
      -- note: a join between a and b (e.g. on thread# and sequence#) is presumably
      -- needed here as well, otherwise this is a cartesian product
loop
  f_rec := 1;
[code]......
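As a cross-check, the two views can be joined on thread# and sequence# rather than on the (possibly truncated) name. A hedged sketch, using dba_capture.start_scn as above and a hypothetical capture name:

-- Hypothetical capture name; lists registered logs the capture may still need and
-- whether a matching, non-deleted copy is known to v$archived_log
SELECT b.thread#, b.sequence#, b.name AS registered_name,
       NVL(a.name, 'NOT FOUND ON DISK') AS disk_name
  FROM dba_registered_archived_log b
  LEFT JOIN v$archived_log a
         ON a.thread#   = b.thread#
        AND a.sequence# = b.sequence#
        AND a.deleted   = 'NO'
        AND a.status    = 'A'
 WHERE b.next_scn > (SELECT start_scn
                       FROM dba_capture
                      WHERE capture_name = 'MY_CAPTURE')
 ORDER BY b.thread#, b.sequence#;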
View 1 Replies
View Related
Mar 21, 2011
Can we use materialized views in Streams the way we use tables?
View 11 Replies
View Related
Jun 2, 2010
This topic was already discussed in "Physical standby on oracle 11g - Reporting needs" but was never brought to a conclusion.
View 5 Replies
View Related
May 27, 2011
We have three Unix servers with four databases (10gR2) containing "HP Operation Management Unix" (OMU) server messages for monitoring purposes, and we now want to transfer these data to one new database on a new server for reporting purposes.
The message table in each OMU database keeps the message row until it is "acknowledged" or for a maximum of fourteen days; then it is moved to a historic table where it stays for another three days. Keeping data for only seventeen days is a performance issue. The new "reporting database" is intended to hold message data for the last 90 days.
I wonder which method to use to move/replicate the data from these databases. Materialized views using database links, with views on top of the MVs? How would we keep rows longer than in the master (source) table, avoiding deletion when the master row is deleted?
Oracle Streams, with local capture and remote apply? How would this influence the master database performance? There are about 10,000 new messages in each OMU database every day. Is it possible to have four Streams connections against the reporting database?
Or should I simply use database triggers which fire after insert and update and apply the changes to the reporting database using database links?
View 2 Replies
View Related
Aug 31, 2011
Until now my company has used SharePlex to replicate 6 databases from different locations to one server (used for BI operations).
1) Replication is in real time.
2) All databases are Oracle but of different versions, 9 and 10.
3) We have horizontal and vertical replication: we replicate just some tables, for some tables just some columns, and for some tables just the records matching certain conditions (a sketch of this in Streams follows below).
4) We have different charsets.
5) SharePlex has a compare/repair tool to check whether the databases are in sync.
I have started to look at Oracle Streams to use instead of SharePlex. Is it able to handle these types of replication? Are there any Oracle Streams limitations which would rule it out?
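On the horizontal/vertical point, Streams has declarative support for both row filtering (subset rules) and dropping columns from captured changes. A hedged sketch with hypothetical object, process, queue and rule names:

BEGIN
  -- Row (horizontal) filtering: capture only rows matching a condition (hypothetical names)
  DBMS_STREAMS_ADM.ADD_SUBSET_RULES(
    table_name    => 'scott.orders',
    dml_condition => 'region = ''EMEA''',
    streams_type  => 'capture',
    streams_name  => 'bi_capture',
    queue_name    => 'strmadmin.bi_queue');

  -- Column (vertical) filtering: remove a column from captured LCRs
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strmadmin.orders_subset_rule',  -- hypothetical rule name
    table_name  => 'scott.orders',
    column_name => 'credit_card_no',
    value_type  => '*',
    step_number => 0,
    operation   => 'ADD');
END;
/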
View 5 Replies
View Related
Jan 21, 2013
Due to a firewall restriction, I have to configure Streams with queue forwarding. The intermediate database will hold two different IPs (one facing the source and another facing the target), and the source and target will not talk to each other directly.
View 1 Replies
View Related
Aug 15, 2012
I create the following update conflict handlers, one after the other. Each works correctly on its own; my problem is that when I create the 2nd it replaces the first one, and when I create the 3rd it replaces the 2nd. I want to have all three in action simultaneously. How can I do this?
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'MODIFIED_DATE';
  cols(2) := 'MODIFIED_BY';
  cols(3) := 'RATE_CHANGE_DATE';
  cols(4) := 'PENSION_AMOUNT';
  cols(5) := 'EASYPAISA_WORK_ON';
  cols(6) := 'PPO_NO';
  cols(7) := 'NEW_NIC_NO';
  cols(8) := 'OLD_NIC_NO';
  cols(9) := 'CLAIM_NO';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'CORE_BUSINESS.CB_PENSIONER',
    method_name       => 'OVERWRITE',
    resolution_column => 'CLAIM_NO',
    column_list       => cols);
END;

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'MODIFIED_DATE';
  cols(2) := 'MODIFIED_BY';
  cols(3) := 'RATE_CHANGE_DATE';
  cols(4) := 'PENSION_AMOUNT';
  cols(5) := 'CLAIM_NO';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(object_name =>
[code]....
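One thing I would check (hedged, since the exact replacement rule depends on the documentation for your release): all of the blocks above use overlapping column lists and the same resolution column, and for several handlers to coexist on one table the column lists generally need to be disjoint, each with its own resolution column. The dictionary view below shows which handler actually survived:

-- Shows the update conflict handlers currently registered for the table
SELECT object_owner, object_name, method_name, resolution_column, column_name
  FROM dba_apply_conflict_columns
 WHERE object_owner = 'CORE_BUSINESS'
   AND object_name  = 'CB_PENSIONER'
 ORDER BY resolution_column, column_name;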
View 3 Replies
View Related
Nov 30, 2011
I have a test database with tables that occasionally has to be updated to the state of the production database tables. These databases both stream several tables to separate databases. (ie database A streams tables to database X and B to Y).
I export the needed schema daily from the production database "A". To update the test database "B", I import the needed tables from this export file. I then export the tables to be streamed from database B with the object_consistent=y parameter. I then import those tables to database Y using the streams_instantiation=y parameter.
So this is the process ...
1. Turn off capture on B
2. Turn off apply on Y
3. Drop the tables in B
4. Import tables from A to B
5. Prepare instantiation on tables in B (DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION)
6. Export from B with object_consistent=y
7. Drop tables in Y
8. Import from B to Y with streams_instantiation=y
9. Start apply on Y
10. Start capture on B
I ran this overnight through an OEM job and it appeared to work, just not how I expected. When I came in there was a lot of activity on both databases. All sessions were from the Streams admin account. This continued for some time (long after the job had finished). During this time the "captured SCN", "applied SCN" and "last enqueued SCN" values were quite different on the source side (I didn't jot those down). I compared these to the "last applied message number" on the target side and the values were quite a bit off.
After the activity subsided, the SCN values were all in the same ballpark. So I figure all the activity was changes getting applied from database B to database Y. I tested out the streaming and it's working fine.
I thought the parameters in the export and import statements were supposed to sync up the SCNs. That is, I thought they would start off in the same state and there would be no need for applying changes. No one changed anything on the source database overnight.
I must not be understanding how the streams export/import parameters work.
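As far as I understand, the export/import flags only record an instantiation SCN for each table at the apply site; any redo captured after that SCN is still propagated and applied, which would explain the post-job activity. Two hedged checks/sketches, with hypothetical table and global database names:

-- On the apply site: what instantiation SCN was recorded per table?
SELECT source_object_owner, source_object_name, instantiation_scn
  FROM dba_apply_instantiated_objects;

-- If a table's SCN needs to be set or corrected manually (hypothetical names/values)
BEGIN
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
    source_object_name   => 'SCOTT.EMP',
    source_database_name => 'B.WORLD',
    instantiation_scn    => 1234567);
END;
/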
View 5 Replies
View Related
Oct 8, 2013
I have a requirement where replication should be done between two 11gR2 RAC databases on Standard Edition. I have the following queries:
1. Does Standard Edition support DDL capture? I'm asking this because on
[URL]......
it says "SE1/SE: no capture from redo". What does that really mean?
2. Is it possible to configure capture at the schema level and skip only some of the tables/triggers?
View 2 Replies
View Related
Nov 29, 2012
I am trying to replicate an SDO_GEOMETRY column using Oracle 11gR2 Streams, but the capture process does not capture the changes made on the table.
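A quick way to confirm whether the table is even eligible for capture is the unsupported-tables view, which also gives the reason and whether capture silently filters the table. A small hedged check, with a hypothetical schema name:

-- Tables Streams capture cannot fully support, with the reason
SELECT owner, table_name, reason, auto_filtered
  FROM dba_streams_unsupported
 WHERE owner = 'MYSCHEMA';   -- hypothetical schema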
View 2 Replies
View Related
Jul 20, 2009
We are using Oracle Streams for replication.
The column datatypes in some of the tables differ between source and destination (NUMBER on the source and VARCHAR2 on the destination).
Do we have to create a rule or DML handler to handle this, or will Streams take care of it automatically? We are on Oracle 10g.
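If an explicit conversion turns out to be necessary, one option is a custom rule-based transformation attached to the apply (or propagation) rule, converting the NUMBER column in each row LCR to a string before it reaches the destination. A minimal sketch with hypothetical table, column and rule names, registered with DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION:

CREATE OR REPLACE FUNCTION num_to_char_transform(in_any IN ANYDATA)
RETURN ANYDATA IS
  lcr     SYS.LCR$_ROW_RECORD;
  rc      PLS_INTEGER;
  col_val ANYDATA;
  num_val NUMBER;
BEGIN
  -- Only touch row LCRs; pass everything else through unchanged
  IF in_any.GETTYPENAME = 'SYS.LCR$_ROW_RECORD' THEN
    rc := in_any.GETOBJECT(lcr);
    IF lcr.GET_OBJECT_NAME = 'MYTABLE' THEN            -- hypothetical table
      col_val := lcr.GET_VALUE('new', 'AMOUNT');       -- hypothetical NUMBER column
      IF col_val IS NOT NULL THEN
        rc := col_val.GETNUMBER(num_val);
        lcr.SET_VALUE('new', 'AMOUNT',
                      ANYDATA.CONVERTVARCHAR2(TO_CHAR(num_val)));
      END IF;
    END IF;
    RETURN ANYDATA.CONVERTOBJECT(lcr);
  END IF;
  RETURN in_any;
END;
/

-- Attach the transformation to the relevant rule (rule name is hypothetical)
BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'strmadmin.mytable_apply_rule',
    transform_function => 'strmadmin.num_to_char_transform');
END;
/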
View 1 Replies
View Related
Feb 22, 2012
Actually I am trying to replicate two database servers, one in Hong Kong and another in China. When I try to establish the replication, I get the error 'ORA-04052: error occurred when looking up remote object', like this...
The same setup works fine when I try it in my local network. I have tried schema replication through Enterprise Manager Grid Control.
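ORA-04052 is raised while looking up an object over a database link, so the first thing I would verify between the two remote sites is that the link itself resolves and can see the remote dictionary. A couple of hedged checks, with a hypothetical link name:

-- Hypothetical database link name pointing at the remote site
SELECT * FROM dual@HONGKONG_DB;

SELECT owner, object_name, object_type
  FROM all_objects@HONGKONG_DB
 WHERE ROWNUM <= 5;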
View 6 Replies
View Related
Oct 5, 2011
I have created a replication process for a single table using the script below.
connect sys/afccv@afccv as sysdba
show parameter open_cursor
create user STRMADMIN identified by STRMADMIN;
ALTER USER STRMADMIN DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP QUOTA UNLIMITED ON USERS;
GRANT CONNECT, RESOURCE, AQ_ADMINISTRATOR_ROLE,DBA to STRMADMIN;
execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
connect sys/vxmldb@vxmldb as sysdba
[code]....
According to the alert log file, the replication process started (refer to the alert log content below):
Wed Oct 05 12:45:53 2011
Streams CAPTURE CP01 for STRMADMIN_CAPTURE started with pid=256, OS id=8692
Starting persistent Logminer Session with sid = 41 for Streams Capture STRMADMIN_CAPTURE
Wed Oct 05 12:46:20 2011
LOGMINER: Parameters summary for session# = 41
[code]....
Now when I check the apply status, it shows the apply reader dequeuing messages:
SQL> l
1* select apply_name,state,DEQUEUE_TIME from V$STREAMS_APPLY_READER
SQL> /
APPLY_NAME STATE DEQUEUE_TIME
------------------------------ ----------------- --------------------
STRMADMIN_APPLY DEQUEUE MESSAGES
But when I check the number of rows in both tables (source and destination), nothing is being applied at the destination.
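When the reader sits in DEQUEUE MESSAGES but no rows appear, the usual suspects are an instantiation SCN that was never set for the table, an earlier apply error, or capture not actually enqueuing. A few hedged checks:

-- Is capture running and enqueuing?
SELECT capture_name, state, total_messages_enqueued
  FROM v$streams_capture;

-- Did the apply hit errors earlier?
SELECT apply_name, status, error_number, error_message FROM dba_apply;
SELECT apply_name, local_transaction_id, error_message FROM dba_apply_error;

-- Was the table instantiated (does it have an instantiation SCN)?
SELECT source_object_owner, source_object_name, instantiation_scn
  FROM dba_apply_instantiated_objects;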
View 5 Replies
View Related
Jul 25, 2012
We are using Oracle as the database in our application. For high availability we have a cluster of multiple nodes, and the data is replicated using Oracle Streams. All the nodes in the cluster are active; we do not have any concept of a standby.
Now we are planning to use Oracle RMAN for backup and recovery. The RMAN user's guide doesn't recommend any strategy for such deployments; it mainly focuses on primary/standby deployments.
View 1 Replies
View Related
Aug 19, 2010
On one of our systems, the SYSTEM tablespace is growing fast. The largest objects are:
SYS.C_OBJ#   3 GB     (cluster)
SYS.I_COL1   2.8 GB   (index)
SYS.I_COL2   1 GB     (index)
SYS.I_COL3   980 MB   (index)
Why are these objects getting so big? Is it possible to shrink them? The table underlying the three indexes is sys.col$, which is part of the cluster sys.c_obj#.
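Before shrinking anything in SYSTEM (which is generally something to raise with Oracle Support rather than do ad hoc), it is worth confirming the segment sizes from the dictionary. A hedged check:

-- Ten largest segments in the SYSTEM tablespace
SELECT *
  FROM (SELECT owner, segment_name, segment_type,
               ROUND(bytes / 1024 / 1024) AS mb
          FROM dba_segments
         WHERE tablespace_name = 'SYSTEM'
         ORDER BY bytes DESC)
 WHERE ROWNUM <= 10;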
View 3 Replies
View Related
Apr 26, 2012
In our production database server the index tablespace is not growing now. On average the index tablespace increases by 200 MB to 250 MB per day, but for the last two weeks it has not increased.
Here are statistics of our production database tablespace usage:
Date        Globusdataxml (MB)   Globusindexxml (MB)
23-4-2012   26321025             1581.94
24-4-2012   24633.31             1584.25
25-4-2012   23410.88             1576.25
26-4-2012   22186.19             1573.25
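A flat size does not necessarily mean a problem: the indexes may simply be reusing free space inside their existing extents. A hedged check, assuming the index tablespace is named GLOBUSINDEXXML:

-- Free space inside the index tablespace (space being reused rather than extended)
SELECT tablespace_name, ROUND(SUM(bytes) / 1024 / 1024) AS free_mb
  FROM dba_free_space
 WHERE tablespace_name = 'GLOBUSINDEXXML'
 GROUP BY tablespace_name;

-- Allocated size of the segments in that tablespace
SELECT ROUND(SUM(bytes) / 1024 / 1024) AS used_mb
  FROM dba_segments
 WHERE tablespace_name = 'GLOBUSINDEXXML';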
View 9 Replies
View Related
May 7, 2013
Below is a sample table used to determine the sales agents' performance.
Id AgenName Zone1 Zone2 Zone3 Zone4
1 ABC 90 85 65 55
2 SDD 45 67 78 90
3 ERF 85 30 52 45
If the sales figure in any zone is less than 40, the agent should be marked as failed. This can be determined using the query below:
select Id, AgenName, Case when Zone1 < 40 or Zone2 < 40 or Zone3 < 40 or Zone4 < 40 Then 'Failed'
Else 'Eligible'
End as Flg_Comm
from sales_table;
But the problem is that whenever a new zone is added (Zone5, Zone6, ...), the query has to be extended with another OR condition.
Is there a way to determine the result dynamically even when the number of zone columns grows (or new ones are added)?
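One hedged approach (assuming 11gR2 for LISTAGG, the table name SALES_TABLE, and that every zone column name starts with ZONE) is to build the predicate from the data dictionary, so new zone columns are picked up automatically:

-- Builds 'ZONE1 < 40 OR ZONE2 < 40 OR ...' from whatever ZONE% columns exist
DECLARE
  v_pred VARCHAR2(4000);
  v_sql  VARCHAR2(4000);
BEGIN
  SELECT LISTAGG(column_name || ' < 40', ' OR ')
           WITHIN GROUP (ORDER BY column_id)
    INTO v_pred
    FROM user_tab_columns
   WHERE table_name = 'SALES_TABLE'
     AND column_name LIKE 'ZONE%';

  v_sql := 'SELECT id, AgenName, CASE WHEN ' || v_pred ||
           ' THEN ''Failed'' ELSE ''Eligible'' END AS flg_comm FROM sales_table';
  DBMS_OUTPUT.PUT_LINE(v_sql);  -- inspect, then run via EXECUTE IMMEDIATE or a ref cursor
END;
/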
View 1 Replies
View Related
Jun 17, 2012
Do we need to apply the patches in sequence? I installed Oracle 11gR2 only today, so which patch should I install? If I want to install a CPU, do I need to install all the previous ones or only the latest one? If only the latest, how can I find the CPU patch number?
Of course, instead of a CPU I could install a PSU patch; then which one has to be installed, only the latest or all previous ones as well? If only the latest, how do I find the latest PSU patch that has to be applied?
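As far as I know, database CPUs and PSUs are cumulative, so normally only the latest one for your exact release needs to be applied; the patch number itself comes from the quarterly Critical Patch Update advisory on My Oracle Support. To see what has already been applied and registered in the database, a hedged check:

-- Patch/upgrade actions recorded in the database registry
SELECT action_time, action, version, id, comments
  FROM dba_registry_history
 ORDER BY action_time;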
View 4 Replies
View Related
Sep 20, 2012
How can I apply the 10.2.0.5 patch set with no downtime?
My database version is 10.2.0.4 and the OS is HP-UX.
View 10 Replies
View Related
Mar 3, 2012
I need to migrate a 500 GB 10.2 database from Windows to Linux. Of course I found Alejandro Vargas' paper on how to do that ("Database Migration From Windows to Linux Using RMAN"). My problem is that the source database will be up again for 7 days, and I need to apply all changes made to the source database to the migrated Linux database.
Can I just take all the redo log files generated on the source database and apply them to the destination database? I might have a problem with a) file conversion and b) the redo logs not being in sync with the target database (different checkpoints?). I just found this on an exotic website: "The contents of a redo log file depend on a combination of Oracle version, operating system and server architecture. In general redo logs written on one architecture cannot be read on another. There are a few exceptions to this rule. For example, in Oracle 10.2 a redo log written in Linux can be read by a Windows database."
So if this is true, does it also work the other way? And will the database migration from Windows to Linux create a perfect clone of the database to which those redo log files can actually be applied?
View 1 Replies
View Related