Real Application Clusters :: One Node Or Clusterware Single Instance Failover?
Aug 21, 2012
I am trying to design a database consolidation high-availability cluster for Oracle 11g R2 64-bit Enterprise Edition (X86-64) on Oracle Linux 6.x UEK, using Oracle 11.2.0.3 (the latest as of Aug 2012).
We don't need multi-node RAC now or in the foreseeable future, because none of the databases we run exceeds the capacity of a single node. Likewise, we don't need Oracle VM to virtualise the database instances.
We plan to use SGA and PGA memory management to run multiple instances on the same hardware, operating on a single Linux 64-bit O/S image. Does this sound OK so far?
Two or three 4-socket, 40-core Intel 64-bit servers with 512GB of RAM each (relatively cheap at today's commodity hardware prices) will be sufficient to run all the Oracle databases we have on Linux 64-bit. So the two HA options that I know of are:
(1) use Oracle Clusterware/Grid/ASM to provide for instance failover
(2) use Oracle RAC One Node on top of Clusterware/Grid/ASM
As I understand it, RAC One Node is significantly more expensive than the "free" Oracle Clusterware/ASM/Grid (we own Oracle 11.2.0.3 Enterprise Edition licences already). So why should my employer pay for a RAC One Node licence when they already get single-instance failover and restart protection from Clusterware/Grid/ASM?
I also read that Data Guard 11.2 may not be supported with RAC One Node on 11.2. Is that true? Will the same Data Guard 11.2 work with a single-instance failover setup running on Clusterware/Grid/ASM?
- Who is running RAC One Node? Why?
- Who is running Single Instance Failover with Clusterware? Why?
- Who is using Data Guard with either of the above?
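For context on option (1): outside of RAC One Node, a single-instance database is usually protected by registering it as a Clusterware application resource whose action script starts, stops, and checks the instance. A minimal sketch of what that registration might look like with 11.2 crsctl; the action script path, resource name, and node names below are all hypothetical:

[oracle@node1 ~]$ crsctl add resource app.proddb.db -type cluster_resource -attr "ACTION_SCRIPT=/u01/app/scripts/db_action.sh,PLACEMENT=favored,HOSTING_MEMBERS=node1 node2,CHECK_INTERVAL=30,RESTART_ATTEMPTS=2"
[oracle@node1 ~]$ crsctl start resource app.proddb.db

The action script has to implement start/stop/check itself (sqlplus startup/shutdown plus a process or connection check); Clusterware then restarts the resource locally or fails it over to the other node in HOSTING_MEMBERS.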
I have 11gR2 GI installed on two nodes. I am trying to convert a 10g single-instance database (which uses ASM) to RAC and am getting the error below. I have tried using 12c as well as doing it manually with rconfig.
[main] [21:19:15:145] [ASMInstance.initialize:135] First record =[Ljava.lang.String;@958bb8
java.lang.NullPointerException
        at oracle.sysman.assistants.rconfig.engine.ASMInstance.initialize(ASMInstance.java:153)
        at oracle.sysman.assistants.rconfig.engine.Step.execute(Step.java:283)
I have installed Oracle RAC 10g on Red Hat Linux 4.0. Until yesterday failover was happening: when I stopped one instance on node01, the VIP of node01 was transferred to node02. This was shown using ifconfig -a, but now that is not happening.
The following information is given:
[oracle@node01 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.hitesh.db  application    ONLINE    ONLINE    node02
ora....h1.inst application    ONLINE    ONLINE    node01
I want to apply a patch to a 3-node RAC database. I am going to use a rolling upgrade, one node at a time. For this I need to stop all the instances and any processes running out of $ORACLE_HOME, so I will use the srvctl stop home command. My concerns are:
1) With the srvctl stop home command, how do the instances shut down: shutdown immediate or shutdown abort?
2) If applications are connected to the node being shut down, will the sessions fail over to the surviving nodes?
3) If it is shutdown abort, how do sessions fail over? And what about long-running queries: will they also fail over?
Oracle version 11.2.0.2 on RHEL 5.6, planning to apply DB PSU 11.2.0.2.7.
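A sketch of the per-node sequence being described, assuming a database home of /u01/app/oracle/product/11.2.0/dbhome_1 and a made-up state file path; in 11.2 the -t option to srvctl stop home should let you pick the shutdown mode explicitly instead of guessing:

[oracle@node1 ~]$ srvctl stop home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s /tmp/node1_state.txt -n node1 -t IMMEDIATE
# apply the patch on node1, then restart exactly what was stopped:
[oracle@node1 ~]$ srvctl start home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s /tmp/node1_state.txt -n node1

The state file is what lets srvctl start home bring back only the resources it stopped.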
I am trying to build Oracle RAC 10gR2 on two nodes with Openfiler, using OCFS and ASM, but I didn't have the Clusterware 10.2 software to install because Oracle has removed it from downloads. That's why I installed Clusterware 11.2. After installing Clusterware 11.2, I checked the status; the output below is the same on both nodes.
[oracle@linux2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
But when I try to install the Oracle 10gR2 software and reach the ASM disk configuration step, it says "in order to use Automatic Storage Management the Oracle Cluster Synchronization Service (CSS) must be up and running" and tells me to run /u01/app/oracle/product/10.2.0/db_1/bin/localconfig reset as the root user. I ran it and it gave no error.
I have a two-node RAC on a test server. I installed the database software, but it failed with an error saying OEM could not be installed. Now I need to install Enterprise Manager.
Grid Version: 11.2.0.3 OS: Red Hat Enterprise Linux 5.6. Node2 of our two-node RAC got rebooted. Upon reboot, CRS and the ASM instance came up, but the DB didn't. How can I check whether the DB is linked to CRS startup? How can I enable DB startup upon CRS startup?
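A sketch of how one might check and fix this with srvctl, assuming a database named mydb (hypothetical); in 11.2 the management policy controls whether Clusterware auto-starts the database along with CRS:

[oracle@node2 ~]$ srvctl config database -d mydb                  # look for "Management policy: AUTOMATIC"
[oracle@node2 ~]$ srvctl enable database -d mydb                  # re-enable if the resource was disabled
[oracle@node2 ~]$ srvctl modify database -d mydb -y AUTOMATIC     # set the auto-start policy

Note also that if the database was last stopped via srvctl before the reboot, its CRS target stays OFFLINE and it will not auto-start regardless of policy.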
I am working on Oracle 10.2.0.4 RAC with 2 node instances on AIX. We have one table holding multiple rows defining jobs to be done by users. The functionality is that whenever a user picks a row (job), an update statement is issued that sets the status column to 1, meaning the job is allocated; after that, the job should not be allocated to any other user.
But we are facing an issue: once a user has picked a job, the application log files show that the row gets locked and the status is updated to 1, yet users connecting to the other (or sometimes the same) instance still get that job. In other words, multiple users can pick the same job, even after it has already been picked by another user.
Could this be an issue with the RAC configuration? For example: whenever one user updates a row, it sits in the cache of one instance, and when another user tries to update it, the second instance does not yet have that information and allows the user connected to it to pick the job, i.e. a delay in block transfer or Cache Fusion?
Is this a RAC issue, or is it purely an application issue?
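Cache Fusion keeps blocks consistent across instances; what it cannot do is serialize two sessions that both read status = 0 before either one commits. If the application reads the row without a lock and then updates it, that race exists on a single instance too. A sketch of a safe allocation pattern, assuming a hypothetical JOBS table with JOB_ID and STATUS columns:

SQL> -- lock the candidate row first; a second session blocks here, and after
SQL> -- the first commit Oracle re-evaluates the predicate and skips the row
SQL> SELECT job_id FROM jobs WHERE status = 0 AND ROWNUM = 1 FOR UPDATE;
SQL> UPDATE jobs SET status = 1 WHERE job_id = :picked_job_id;
SQL> COMMIT;

If the application log really shows a row lock being taken and held across the update, then the RAC angle is worth pursuing; otherwise this usually turns out to be an application race.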
In our 2-node RAC, Node1 was hung for a while, so I wanted to restart that instance. But I had some 50 services running on Instance 1 (the preferred instance) which had to be relocated before I bounced the instance.
Question 1: Instead of using srvctl relocate service for each service, is there any way to quickly relocate all services to Node2 (the available instance)?
Question 2: When a service is relocated, are DMLs running in sessions using that service 'moved' to the available instance? Or is it just SELECTs that can be failed over?
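On Question 1: I am not aware of a single srvctl verb in 11.2 that relocates every service at once, but a shell loop over the srvctl status output gets close. A sketch, assuming database mydb with instances mydb1 and mydb2 (names hypothetical; the parsing assumes the standard "Service X is running on instance(s) mydb1" output format):

for svc in $(srvctl status service -d mydb | grep "instance(s) mydb1" | awk '{print $2}'); do
  srvctl relocate service -d mydb -s "$svc" -i mydb1 -t mydb2
done

On Question 2: relocation alone does not move in-flight work. What survives depends on the service's failover settings (TAF/FAN), and even then only SELECTs can be replayed; uncommitted DML is rolled back.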
We have a 4-node RAC cluster. Node 4 crashed and all the services on node 4 moved to node1. How can I distribute the services evenly instead of all of them going to node1?
For example: I have 10 services on node 4. All went to node 1. I want 3 on node1, 3 on node2, and 4 on node3.
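One way to spread them back out, and to keep a future crash from dumping everything onto one node, is to relocate the stragglers now and give each service different preferred/available instance lists. A sketch with made-up service and instance names on a database mydb:

# svc01-svc10 all failed over to instance mydb1; move seven of them off:
for s in svc04 svc05 svc06; do srvctl relocate service -d mydb -s $s -i mydb1 -t mydb2; done
for s in svc07 svc08 svc09 svc10; do srvctl relocate service -d mydb -s $s -i mydb1 -t mydb3; done
# adjust the preferred (-i) / available (-a) lists so the next failover spreads out:
srvctl modify service -d mydb -s svc04 -n -i mydb2 -a mydb1,mydb3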
OS: oel6.3 - 2.6.39-300.17.2.el6uek.x86_64 Grid and DB: 11.2.0.3.4
This is a two node Standard Edition cluster.
The node crashes upon restart of clusterware after following the instructions from note:751343.1 (RAC Support for RDS Over Infiniband) to enable RDS. The cluster is running fine using ipoib for the cluster_interconnect.
1) As the ORACLE_HOME/GI_HOME owner, stop all resources (database, listener, ASM, etc.) that are running from the home. When stopping the database, use the NORMAL or IMMEDIATE option.
2) As root, if relinking 11gR2 Grid Infrastructure (GI) home, unlock GI home: GI_HOME/crs/install/rootcrs.pl -unlock
3) As the ORACLE_HOME/GI_HOME owner, go to ORACLE_HOME/GI_HOME and cd to rdbms/lib
4) As the ORACLE_HOME/GI_HOME owner, issue "make -f ins_rdbms.mk ipc_rds ioracle"
5) As root, if relinking 11gR2 Grid Infrastructure (GI) home, lock GI home: GI_HOME/crs/install/rootcrs.pl -patch
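Putting the five steps together as a single transcript; the Grid home path, owner account, and database/instance names are assumptions:

[oracle@node1 ~]$ srvctl stop instance -d mydb -i mydb1 -o immediate    # step 1: stop what runs from the home
[root@node1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -unlock     # step 2
[oracle@node1 ~]$ cd /u01/app/11.2.0/grid/rdbms/lib                     # step 3
[oracle@node1 lib]$ make -f ins_rdbms.mk ipc_rds ioracle                # step 4: relink with RDS IPC
[root@node1 ~]# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -patch      # step 5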
It looks to abend when ASM tries to start, with the message below on the console. I have a service request open for this issue but
kernel BUG at net/rds/ib_send.c:547!
invalid opcode: 0000 [#1] SMP
CPU 2
I have found the information below in the alert logs of both nodes.
Version 10.2.0.5.0, OS: AIX
Suddenly the instances were shut down and restarted.
Reconfiguration started (old inc 0, new inc 64)
List of nodes: 0 1
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
* domain 0 valid = 1 according to instance 0
Fri Dec 28 04:19:18 GMT 2012
....
I have a 2-node RAC environment (11.2.0.3) where each node has its own local Grid_home and RDBMS_home.
I am installing a Rolling Bundle Patch with OPatch in this environment. The installation document says that "The order of patching in RAC install is GRID_HOME, then RDBMS_HOME", so I did the following:
1. Stopped all Oracle-related services on node1
2. Set ORACLE_HOME=<Grid_home>
3. Applied the patch with OPatch
4. OPatch succeeded on node1 and then asked: "The node 'NODE2' will be patched next... Is the node ready for patching?
1. Should I shut down the Oracle services on Node2 and continue to patch its Grid_home? If yes, the DB will be completely down for user access, which defeats the purpose of rolling mode, which promises no downtime.
2. Or should I patch the RDBMS_home on node1, start all the Oracle services on node1, stop the services on node2, and then resume the OPatch session on node1 that is waiting to patch the Grid_home on node2?
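For what it's worth, as I understand it the rolling order is per node, not per home across nodes: patch GRID_HOME then RDBMS_HOME on node1, restart node1's stack, and only then answer "ready" for NODE2. For 11.2 GI bundle/PSU patches the documented rolling approach is usually the opatch auto wrapper run as root, which patches both homes on the local node while the other node keeps serving users. A sketch, with a made-up staging path:

[root@node1 ~]# /u01/app/11.2.0/grid/OPatch/opatch auto /u01/stage/bundle_patch
# wait until node1's instance and services are back, then repeat on node2:
[root@node2 ~]# /u01/app/11.2.0/grid/OPatch/opatch auto /u01/stage/bundle_patch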
We are using Oracle 11.2.0.3.0 with a 3-node RAC. Earlier, 3 SCAN VIPs and 3 SCAN listeners were running, one on each node. But we found recently that node1 was running 2 VIPs and 2 SCAN listeners, node2 was running 1 VIP and 1 SCAN listener, and no SCAN VIP or SCAN listener was running on node3 any longer. If I decide to relocate the SCAN VIP/SCAN listener from node1 to node3 using the command below, will it cause any impact on my transactions?
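The post's actual command isn't shown, but the relocation would typically look like this (SCAN ordinal 3 is assumed to be the one sitting on node1):

[oracle@node1 ~]$ srvctl relocate scan_listener -i 3 -n node3
[oracle@node1 ~]$ srvctl relocate scan -i 3 -n node3

Established sessions do not pass through the SCAN listener after connect time, so in principle only new connection attempts hitting that SCAN IP during the move are affected; still worth doing in a quiet window.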
I am configuring ASM on Clusterware. When I create the instance on node1 and node2, a message screen appears: "Can't start ASM instance on node 2". When I start the ASM instance manually on node 2, I find the error below in the alert log file.
ORA-27508: IPC error sending a message
ORA-27300: OS system dependent operation:send msg failed with status: 101
ORA-27301: OS failure message: Network is unreachable
ORA-27302: failure occurred at: sskgxpsnd1
I am installing Oracle 10g Clusterware on RHEL 5 Server. When I ran root.sh on the second node, the last step, Running vipca(silent) for configuring nodeapps, failed. I then ran VIPCA manually on the second node and completed the VIP configuration successfully, but when I ran the post-checks for cluster services, the checks for the existence of the VIP, ONS and GSD node applications on the second node failed. I am able to ping each server from the other using the VIP names, and CSS, CRS and EVM appear healthy on both nodes. 1. Is the VIP configuration proper? If not, what is causing this error and how do I rectify it? Find the output below.
I have a single-node Oracle E-Biz 12.1.3 installation. We plan to convert this to a dual-node RAC install, and I wanted to run by the steps to perform this AND clone over the current production system to the new RAC system (note this will not be using ASM, but will be based on NetApp NFS):
1. Install GRID 11.2.0.3 - installed across two nodes; binaries can be shared, voting disks / cluster registry MUST be shared
2. Install Database Software for RAC - binaries cannot be shared and must be installed on each node independently
3. Configure the GRID for the database (database / listeners)
4. Clone over the stand-alone system (clone database as RAC)
I will be using SCAN, and therefore would expect all the web services / concurrent manager nodes to point to the SCAN hostname as opposed to individual host names.
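For illustration, the app-tier TNS alias would then reference the SCAN name instead of any physical host; the SCAN name and service name below are made up:

EBSPROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ebs-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ebsprod)
    )
  )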
I am configuring Grid Infrastructure 11gR2 on node A, one of two cluster nodes, and am getting the following message: [INS-40916] Single-instance versions of Cluster Synchronization Services (CSS) are detected.
I executed runInstaller and installed software-only on both the A and B nodes. Apparently the other node already has a CSS instance running, maybe caused by that installation. Should I remove the software from A first?
How do I stop CSS on node A and let the configuration continue on node B?
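INS-40916 usually points at a CSS daemon left behind by an earlier standalone ASM/database configuration. A sketch of deconfiguring it, run as root on the node that reports it; the home paths are hypothetical, and which command applies depends on where that CSS came from:

# if the CSS belongs to a pre-11.2 database/ASM home:
[root@nodeA ~]# /u01/app/oracle/product/10.2.0/db_1/bin/localconfig delete
# if it belongs to an 11.2 standalone (Oracle Restart) grid home:
[root@nodeA ~]# /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force

After that, the GI cluster configuration wizard should be able to proceed on both nodes.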
I would like to use Oracle Grid Infrastructure to protect a single-instance database.
I found these docs:
[URL]
I was unable to find a similar document for GI 11.2. Do you know if such a document exists?
At the moment I only know that the GI is free if used for an Oracle product, but I am not sure whether it can be used to protect external applications or single-instance databases.
Is there any harm in moving the alert logs generated in the Grid Infrastructure? The alert log in the grid log directory has grown to around 2GB. Our CRS version is 11.2.0.2 on Linux. Also, please share a sample script for automating this.
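A minimal rotation sketch, assuming the usual 11.2 layout where the log lives at $GRID_HOME/log/<host>/alert<host>.log; the paths are assumptions to adjust. Copy-then-truncate is used so the daemon's open file handle keeps working:

#!/bin/bash
# rotate_gi_alert.sh - compress a copy of the GI alert log, then truncate it in place
GRID_HOME=/u01/app/11.2.0/grid            # assumption: set to your Grid home
HOST=$(hostname -s)
LOG=$GRID_HOME/log/$HOST/alert$HOST.log
STAMP=$(date +%Y%m%d%H%M)
cp -p "$LOG" "$LOG.$STAMP" && gzip "$LOG.$STAMP"
: > "$LOG"                                # truncate without removing the inode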
Oracle 11g ASM RAC under OEL 5.6. I just want to do a cloning, so first I created a clone of my Oracle home, then I created a pfile, and I get this error:
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit
I googled it and found that the issue is with relinking the RAC binaries, because the source database is RAC and the destination is not.
[oracle@backuptest ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@backuptest lib]$ make -f ins_rdbms.mk rac_off
bash: make: command not found
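That error just means the development tools were never installed on this box. A sketch of installing them and finishing the relink (package names per stock OEL 5; the ioracle target relinks the oracle binary after rac_off):

[root@backuptest ~]# yum install -y make gcc binutils glibc-devel
[oracle@backuptest ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@backuptest lib]$ make -f ins_rdbms.mk rac_off ioracle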
Trying to find a best-practices document from Oracle regarding the use of ASM to store the archivelogs in RAC. Most of the DBAs I know create a non-ASM location to store the archivelogs. Are there any considerations if you are also using Data Guard?
Why is it offline, what does this mean, and what is its usage? Does it have any effect on database availability? The below is from OEM:
[Critical] ABSNL Cluster Resource State State Change ora.gsd has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
[Warning] ABSNL Cluster Resource State State Change ora.abs.db has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
We have a Java application which uses the ONS configuration. Currently the app's ONS nodes parameter points to the VIPs, i.e. Node1-vip:6200,Node2-vip:6200. Will the ONS nodes parameter support the RAC SCAN? Can I change it to nodes=RAC-scan:6200?
Grid Version: 11.2.0.3 OS: RHEL 5.8. I have successfully installed the cluster (GI) and the RDBMS software on my 2-node RAC. But I don't want to use DBCA to create the database, due to some custom requirements. Instead, I want to run the CREATE DATABASE command manually to create the RAC DB. Once the DB is created on one node, what steps do I need to take to make this DB a RAC DB?
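A sketch of the usual additions after a manual CREATE DATABASE: cluster dictionary views, a second undo tablespace and redo thread, cluster parameters, and OCR registration. DB name mydb with instances mydb1/mydb2 on node1/node2 are all hypothetical, and OMF is assumed for the file clauses:

SQL> @?/rdbms/admin/catclust.sql                         -- RAC data dictionary views
SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE SIZE 500M;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 50M, GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET instance_number=1 SID='mydb1' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET instance_number=2 SID='mydb2' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SID='mydb2' SCOPE=SPFILE;

Then restart and register the database with Clusterware so CRS and srvctl manage it:

[oracle@node1 ~]$ srvctl add database -d mydb -o $ORACLE_HOME
[oracle@node1 ~]$ srvctl add instance -d mydb -i mydb1 -n node1
[oracle@node1 ~]$ srvctl add instance -d mydb -i mydb2 -n node2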