Queueing :: How To Configure Oracle AQ For Distributed Environment
Sep 23, 2013
I have a scenario in which I have, say, 4 AQs to which I will post messages. I also have, say, 2 databases. I am planning to create an MDB which will poll these AQs, so whenever I post a message the MDB will read it and perform a specific action. I believe I can create only one MDB per queue; if that is so, then I have to create 8 MDBs.
As there are 2 data sources and 4 MDBs, is there any other way to handle this, I mean without creating 8 MDBs? The data sources can grow to 10 or 20, so the number of MDBs would grow to 20 or 40, and I guess this will affect application performance. Can I make some changes in the application so that only a few MDBs are required?
I have read almost all the docs about distributed transactions on the tahiti.oracle.com website, but I cannot find a statement about this:
Can Oracle always guarantee data consistency in a distributed transaction?
For example, there is a distributed transaction across node A, node B, and node C. Node B and node C informed node A that they were prepared, so node A committed and told node B and node C to commit. Node B then committed and acknowledged, but the network on node C broke at that point, so node A cannot get an acknowledgement from node C even though node A and node B have already committed. What will Oracle do in this situation?
If node C rolls back the data on its local node, consistency in this distributed transaction is lost, yes?
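For what it's worth, in that situation node C's branch is normally left in-doubt rather than silently rolled back, and the RECO background process resolves it to match the coordinator's decision once the network is back. A hedged sketch of how the in-doubt branch can be inspected on the affected node:

-- list in-doubt distributed transactions on the node that lost connectivity
SELECT local_tran_id, global_tran_id, state, fail_time
  FROM dba_2pc_pending;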
I'm trying to move my backup sets from a Windows database environment to an OEL 5.7 environment on another server.
I've found a manual [URL] by which I am trying to do it. I took backup sets from last night's backup using RMAN, and the current parameter file (initSID.ora) from the running live database. Now I need to configure the control files in the pfile accordingly.
1. Can I take the current control files from the running system to restore and recover last night's backup sets to the state the database was in at backup time?
2. How can I find out whether the control files are backed up and known to RMAN? "list backup completed after '2012-JUN-19';" gives me archived redo logs and datafiles, but I don't see the control files (or don't recognize them).
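Whether the control files are in those backup sets depends on how the backups were taken; a quick, hedged way to check from the RMAN prompt (these are standard RMAN commands, not output from this system):

RMAN> LIST BACKUP OF CONTROLFILE;
RMAN> SHOW CONTROLFILE AUTOBACKUP;

If an autobackup exists, restoring that control file on the new server (rather than copying the current one from the live system) keeps the control file consistent with last night's backup sets.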
I migrated an AQ solution which sends messages between three instances from a development environment to a UAT environment, and my propagation from one instance to another is not working as it did in development.
When I enqueue a message I note that the enq_time is one hour behind the actual sysdate.
All the messages are being stored in the queue table and are only propagated through when the instance is restarted.
A trigger is enqueuing to a queue. This works fine, but the callback function is never called. The queue worked for a while already, but since I changed something in the procedure called by the callback it does not work anymore.
I have already tried the following: stopping and restarting; dropping and recreating (with the scheduler having no jobs anymore); dropping, restarting the database, and recreating.
None of these worked. Where am I going wrong, considering that the queue created with the same scripts already worked before? Here is the script for creating the queue and adding the subscriber:
CREATE OR REPLACE TYPE pat_history_queue_payload_type AS OBJECT ( TSTAMP VARCHAR2(22 CHAR), TYP VARCHAR2(10 CHAR), DELTA_MENGE NUMBER,
[code]...
The CALLBACK function which should be called by the queue is never called; I checked that with log messages. Also, the package that contains the function compiles fine.
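One thing worth ruling out (a hedged suggestion; the schema, queue, subscriber, and procedure names below are invented for illustration) is that the PL/SQL notification registration was lost or still points at the old procedure. Re-registering the callback for the subscriber looks roughly like this:

DECLARE
  l_reg sys.aq$_reg_info;
BEGIN
  -- register the callback procedure for subscriber PAT_HIST_SUB on the queue
  l_reg := sys.aq$_reg_info(
             'MYSCHEMA.PAT_HISTORY_QUEUE:PAT_HIST_SUB',  -- queue:subscriber (hypothetical names)
             DBMS_AQ.NAMESPACE_AQ,
             'plsql://MYSCHEMA.PAT_HISTORY_CALLBACK',    -- callback procedure (hypothetical name)
             HEXTORAW('FF'));
  DBMS_AQ.REGISTER(sys.aq$_reg_info_list(l_reg), 1);
  COMMIT;
END;
/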
I am trying to read a message from an Oracle queue using OCCI. I am getting this runtime error:
ORA-24550: signal received: [si_signo=11] [si_errno=0] [si_code=2] [si_int=-389971137] [si_ptr=0x34e8c1833f] [si_addr=0x615db0] Killed. I have checked for the line that is throwing the error and found the line below causing it:
*messageFromQueue = cons.receive(Message::RAW);
It seems like the receive function is throwing this error. Here is my code:
I have 3 instances and I want to work between them. The error occurs when I use a subquery. This is the code:
update erie.rie_cbtrega@l$e_tfcries rgr set rgr.c_descri = ( select rg.c_descri from dadm.cbtrega@l$e_tfccie rg where rg.c_idrega = rgr.c_idrega ) ;
When I execute the update without the subquery "( select rg.c_descri from dadm.cbtrega@l$e_tfccie rg where rg.c_idrega = rgr.c_idrega)" it succeeds, but when I add the subquery the result is
ORA-02019: connection description for remote database not found ORA-02063: preceding line from TFCCIE ORA-02063: preceding 2 lines from L$E_TFCRIES
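For what it's worth, this error pattern often means the statement is forwarded to the database behind l$e_tfcries, which has no database link of its own by which to reach the third database. A hedged sketch (the password and TNS alias are placeholders) of creating that missing link on the remote database:

-- run while connected to the database that l$e_tfcries points to
CREATE DATABASE LINK l$e_tfccie
  CONNECT TO dadm IDENTIFIED BY "dadm_password"
  USING 'TFCCIE_ALIAS';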
How can I configure Oracle EM with a newly created Oracle instance on an Oracle 10g DB, which is a single-instance DB and not RAC? When I start Oracle EM it starts against the default DB that was created during the Oracle server installation.
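A hedged sketch of the usual 10g Database Control approach, which creates a separate EM repository and console for the new instance (the SID shown is a placeholder; emca prompts for ports and passwords):

export ORACLE_SID=NEWDB        # the newly created instance (hypothetical SID)
emca -config dbcontrol db -repos create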
We wish to configure an Oracle 11g RAC with ASM setup for internal employee training purposes. The requirements are as follows:
1. Operating system: Linux; software: Oracle 11g Release 2.
2. Need to build a total of 5 individual clusters.
3. Each cluster with 2 nodes.
4. Need to configure ASM as shared storage with real hardware (no virtualization, Openfiler, or NFS).
5. Need to configure a RAC database.
Provide me the hardware details for configuring shared storage. Is there any specific storage we can use to configure ASM as storage?
I am trying to install a 2-node RAC on Oracle VMs. Before the installation, during the pre-install check, there were a few issues which were resolved (e.g. user equivalence). After that, the Grid installation failed at the step "Configure Oracle Grid Infrastructure for a cluster". After it failed at this step, subsequent steps also failed, which I asked OUI to ignore, and then I ran both post-installation scripts. Then I ran the post-crsinst check, which failed. Pasting below the output of the root.sh script, the post-crsinst check, and other checks.
*************************************
[root@bsfrac01 grid]# sh root.sh
Running Oracle 11g root.sh script...
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created.
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-02-13 00:11:55: Parsing the host name
2011-02-13 00:11:55: Checking for super user privileges
2011-02-13 00:11:55: User has super user privileges
We have to configure Data Guard for our 24x7 critical banking 2-node RAC database (10.2.0.4). Before proceeding with the configuration we have to make sure what steps to follow to have minimum or no downtime.
1) A document covering DG setup in a RAC environment. 2) We have to perform switchover as well, so we need its steps too: whether the normal switchover steps can be used, or whether we also have to stop/start the RAC services.
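For reference, a hedged sketch of the generic 10.2 switchover commands, not a site-specific runbook; it assumes all but one RAC instance on each side have already been shut down, per the usual RAC restriction:

-- On the primary (only one instance running):
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- On the standby (only one instance running):
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
ALTER DATABASE OPEN;

-- Back on the old primary (now the new standby), restart redo apply:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;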
We want to configure HSODBC on HP-UX 11.23 for Oracle Database 10g R2 in order to establish a DB link from Oracle to Netezza. We have installed the ODBC drivers too. Currently we have not done any configuration and we go via an AIX box to establish the link. We want to get rid of the AIX box and instead establish a direct link towards Netezza from the HP-UX box.
Does Oracle Support have any guidelines or steps to set up HSODBC on Oracle Database 10g R2 residing on HP-UX 11.23 towards Netezza?
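The general HSODBC pattern is sketched below; this is a hedged outline, and the HS SID name, DSN name, host, and driver path are placeholders, not values from this system. It involves an init file for the agent, a listener entry, a TNS alias, and a database link:

# $ORACLE_HOME/hs/admin/initNETEZZA.ora  (the HS SID "NETEZZA" is hypothetical)
HS_FDS_CONNECT_INFO = netezza_dsn
HS_FDS_TRACE_LEVEL = off
# HS_FDS_SHAREABLE_NAME = /usr/local/lib/libodbc.sl   (path to the ODBC driver manager, site-specific)

# listener.ora: add a SID_DESC so the listener can spawn the hsodbc agent
(SID_DESC =
  (SID_NAME = NETEZZA)
  (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)
  (PROGRAM = hsodbc)
)

# tnsnames.ora on the HP-UX box
NETEZZA =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hpux-host)(PORT = 1521))
    (CONNECT_DATA = (SID = NETEZZA))
    (HS = OK)
  )

-- finally, a database link in the 10g database
CREATE DATABASE LINK netezza_link CONNECT TO "nzuser" IDENTIFIED BY "nzpass" USING 'NETEZZA';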
Recently I tried to upgrade from Oracle 11.2.0.2 to 11.2.0.3 in a test environment (virtual machine).
There I set the SID as ORCL, but when we took the same snapshot to another machine (VM), I was unable to connect to it; I set the local net service configuration to ORCL, but it still could not connect. However, when I ran the query "select * from GLOBAL_NAME" the output was ORCL.DOMAIN.COM, and after I configured the local net service name as ORCL.DOMAIN.COM it is working.
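That behaviour is consistent with DB_DOMAIN being set, so the service registered with the listener is the fully qualified ORCL.DOMAIN.COM. A hedged sketch of a local net service entry that matches it (the host name is a placeholder):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vm-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL.DOMAIN.COM))
  )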
I have a requirement to upgrade Oracle Data Integrator (ODI) from 10.1.3.5 to 11.1.1.6.3. We have a clustered production environment where N1 will be up when N2 is down, and vice versa.
Here N1 and N2 are the ODI servers as well as the DB (11g Release 2) servers. They both access the same shared clustered database. From ODI we generally point to the Oracle clustered IP (virtual IP), which internally points to whichever of N1 or N2 is active. On the ODI application side we are clear about the procedure.
We have some issues with the DB-related activities:
1. Should I definitely break the cluster? Can't I do the activity without breaking the cluster?
2. Do I need to point to N1, N2, or the clustered IP (virtual IP) while doing the activities?
3. Since it is a clustered database, do I need to do the DB-related activities once or twice? (Twice meaning manually on both servers.)
4. As they are using the same file structures (RAC), if the virtual IP points to N1 by default, assume that I create two new users a
I need to configure a watchdog (or any other) utility which will constantly monitor database health, and if something goes wrong with the database, the utility should restart the DB instance.
I have the 9i RDBMS software and RHEL 4. I want to install an Oracle Names server. I installed the RDBMS, but when I run netmgr and try to configure and start the Oracle Names server it says:
$ORACLE_HOME/bin/names file not exists.
How can I configure Oracle Names Server using Oracle 9i on Linux?
How can we bring down the databases in an Oracle Fail Safe environment?
We have one database X on two Windows servers, A & B, in an Oracle Fail Safe environment. What procedure should we follow to bring down database X?
Today I was struggling to bring down the database, because it was automatically coming back up once I brought it down. What procedure should we follow to bring down the database in an OFS environment?
I am reinstalling Oracle 11g on my Windows 7 64-bit machine after I uninstalled it. However, I got an issue: the environment variable path check failed during the installation process. This didn't happen when I first successfully installed Oracle.
I am running a job that uses Pro*C code. I am running it on an Oracle 10g database with an Oracle 9 client on a UNIX platform. The code compiled fine. The job runs fine sometimes, but other times it fails with a Segmentation Fault error.
I have the same job running in an Oracle 8i environment with no problems.
How do I change a non-SYS Oracle user's password in a Data Guard environment? We all know how the SYS password change works in Data Guard: the DBA has to change it on the primary database either with "alter user SYS identified by xxxx" or by creating the password file with orapwd.
Then scp the password file to the standby database. However, if I want to change the SYSTEM or DBSNMP password, I change it on the primary with an "alter user ..." SQL statement, and the new password is then stored in the data dictionary. But will this new SYSTEM password be shipped with the redo to the standby, so that the SYSTEM password on the standby is updated as well? I need a technical answer to this question.
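For reference, a hedged sketch of the two cases described above (file names, SIDs, and hosts are placeholders): dictionary users such as SYSTEM or DBSNMP only need the ALTER USER on the primary, since the change reaches the standby through redo apply, while SYS still needs the password file recreated and copied.

-- on the primary; reaches the standby via redo apply
ALTER USER system IDENTIFIED BY new_password;

# only for SYS: rebuild the password file and copy it to the standby (placeholder names)
orapwd file=$ORACLE_HOME/dbs/orapwPRIM password=new_sys_password
scp $ORACLE_HOME/dbs/orapwPRIM standby-host:$ORACLE_HOME/dbs/orapwSTBY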
I want to configure my present Data Guard setup with Enterprise Manager, but on the Enterprise Manager Grid Control page I am not seeing any Data Guard option. I am using Oracle 10gR2 and Windows XP.
How do I use Enterprise Manager to configure Data Guard?
We are in the process of setting up a DR environment for our SAP and Oracle databases. NetApp and our architects came up with the following solution:
1. Standby databases are built for all production databases.
2. The SAP file systems are replicated to the secondary site.
3. The Oracle log files and control files are replicated by NetApp SnapMirror at a 10-minute interval.
4. The database is recovered through "recover standby database" every 15 minutes at the standby site.
5. Please note there is no Data Guard involved.
6. To test the failover, the mirror is broken. The standby control file is replaced with the production control file and redo log files.
7. The standby database is issued a startup command, and it worked.
I would like to know whether step 6 is a correct approach. I tried to convince the architects that this could result in a very disastrous situation for us, but no one listened.
How do I integrate Application Express with Oracle EBS? Are there any companies that offer a hosted EBS environment with APEX and allow developer-level access (i.e. database, application server, and front-end responsibilities)?
I am using a dblink to merge the data. I am using the following merge statement.
merge into APP_USER.USR_NEW_RIGHTS@NEW_RIGHTS t
using (select 'test' GRANTEE, 'TESTxxx' ROLE from dual) s
on (t.GRANTEE = s.GRANTEE and t.ROLE = s.ROLE)
when not matched then
  insert (ID, GRANTEE, ROLE, XRIGHT, COMPANY, OWNER, TABLENAME)
  values ('', 'test', 'TESTxxx', null, null, null, null);
I know that I have to issue a commit, and it works when I insert records with a normal insert statement via the database link, but it seems that merging doesn't work.
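As a point of comparison, a hedged, simplified sketch of the same merge over the link followed by an explicit commit from the initiating site (the column list is trimmed for brevity; table and link names are the ones from the statement above):

merge into APP_USER.USR_NEW_RIGHTS@NEW_RIGHTS t
using (select 'test' GRANTEE, 'TESTxxx' ROLE from dual) s
on (t.GRANTEE = s.GRANTEE and t.ROLE = s.ROLE)
when not matched then
  insert (GRANTEE, ROLE) values (s.GRANTEE, s.ROLE);

commit;  -- distributed DML only becomes visible on the remote site after this commit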
We found an error in the alert log of our Oracle 10.2.0.5 DB:
====================================
..
Wed Jan 30 16:45:01 EAT 2013
DISTRIB TRAN bea1.67AA54355C4A74ECDEE0
is local tran 6.42.332492 (hex=06.2a.512cc)
insert pending prepared tran, scn=8151148567799 (hex=769.d6509cf7)
Wed Jan 30 16:45:02 EAT 2013
Errors in file /oradata/sfapdb/bdump/sfapdb_reco_2739.trc:
ORA-24756: transaction does not exist
Wed Jan 30 16:45:02 EAT 2013
Errors in file /oradata/sfapdb/bdump/sfapdb_reco_2739.trc:
ORA-24756: transaction does not exist
..
====================================
There is no useful information in the trace log, as shown below:
====================================
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
ORACLE_HOME = /ap/oracle10
System name: HP-UX
Node name: scvap2
Release: B.11.23
Version: U
Machine: 9000/800
Instance name: sfapdb
Redo thread mounted by this instance: 1
Oracle process number: 18
Unix process pid: 2739, image: oracle@scvap2 (RECO)
*** SERVICE NAME:(SYS$BACKGROUND) 2013-01-30 16:45:01.941
*** SESSION ID:(1749.1) 2013-01-30 16:45:01.941
*** 2013-01-30 16:45:01.940
ERROR, tran=6.42.332492, ose=0: ORA-24756: transaction does not exist
*** 2013-01-30 16:45:02.059
ERROR, tran=6.42.332492, session#=1, ose=0: ORA-24756: transaction does not exist
====================================
I also found some records (trans_id = "6.42.332492") in SYS.PENDING_TRANS$ / SYS.PENDING_SESSION$ / dba_2pc_pending with "prepared" status.
This transaction was launched from a WebLogic Server via JDBC. Since it is abnormal, I have no choice but to force commit/purge this transaction. Is this a bug in Oracle DB, or a WebLogic coding problem?
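For reference, a hedged sketch of manually resolving the branch named above (run as SYSDBA; whether to force a commit or a rollback should match what the WebLogic transaction manager actually decided):

-- inspect the in-doubt branch
SELECT local_tran_id, global_tran_id, state
  FROM dba_2pc_pending
 WHERE local_tran_id = '6.42.332492';

-- if the branch still exists, force it one way or the other
COMMIT FORCE '6.42.332492';        -- or: ROLLBACK FORCE '6.42.332492';

-- if the local transaction is already gone (as ORA-24756 suggests),
-- purge the stale dictionary entry instead
EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('6.42.332492');
COMMIT;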