Clusterware :: 2 Oracle SE Instances In 1 Cluster?
Jun 21, 2013
I would like to build a 2-node cluster without RAC. I want to operate 2 Oracle database instances for use by 2 separate applications, running on Linux. I would like node1 to act as the primary server for the databases of application1 and node2 as the failover server for those databases. Conversely, I want node2 to act as the primary server for the databases of application2 and node1 as their failover server. Can Oracle SE and Oracle Clusterware be configured in this way?
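For what it's worth, a hedged sketch of how such an active/passive pair is often modeled with Clusterware application resources, using 11.2-style syntax (the resource name, script path, and node names are illustrative, not from any particular install):

# hedged sketch -- resource name, action-script path, and node names are examples
crsctl add resource app1.db -type cluster_resource -attr "ACTION_SCRIPT=/u01/crs/scripts/app1_db.sh, PLACEMENT=favored, HOSTING_MEMBERS=node1 node2, CHECK_INTERVAL=30, RESTART_ATTEMPTS=2"
crsctl start resource app1.db

A mirror-image resource for application2 would list HOSTING_MEMBERS=node2 node1, and the action script has to implement start, stop, check, and clean for the instance.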
I have a RAC installation working with SAN raw devices presented to ASM, and it is working fine. My situation has changed since the RAC was installed: I now need shared storage (with a filesystem) to upload some files so the DB can read them as external tables. For this I have used OCFS2.
As you can see, I have Oracle RAC on raw ASM plus shared storage on OCFS2, and for the time being all seems to be well and working perfectly. Can these coexist without affecting each other?
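If it helps, a quick hedged sanity check that both stacks are healthy side by side (assuming the standard ocfs2-tools and Grid utilities are in the path):

mounted.ocfs2 -f            # which nodes have the OCFS2 volume mounted
/etc/init.d/o2cb status     # O2CB cluster stack state
asmcmd lsdg                 # ASM diskgroups still MOUNTED?

One hedged caveat worth knowing: CRS and O2CB each run their own heartbeat, so their timeout settings should be kept sensible relative to each other.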
After installing a 4-node 11.2.0.3 cluster with 16 CPUs (4 on each node) on an IBM 795 with AIX 6.1, each server is using 0.5 CPU with no user load on the system. Running SIHA on one server typically uses 0.05 CPU with no user load.
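A hedged way to see which daemons account for that idle overhead on AIX, assuming your ps supports XPG-style output formatting (the process-name pattern lists the usual GI daemons; adjust it to your install):

ps -eo pcpu,comm | grep -E 'ocssd|crsd|evmd|gipcd|ologgerd|osysmond'
topas -P     # interactive per-process view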
My application consists of Oracle RAC and Oracle Enterprise Manager (OEM) on the same two nodes. I am exploring the idea of using Oracle Clusterware (which is already installed with Oracle RAC) to support OEM in active/passive mode.
Here is the scenario.
Node A is running Solaris 10.9 on SPARC and Node B is running Solaris 10.9 on SPARC. Node A and Node B are both Oracle RAC nodes (active/active) which use shared storage for Oracle via ASM.
Node A and Node B will run the Oracle Enterprise Manager (OEM) application in an active/passive scenario, using the Oracle RAC database. So both Oracle RAC and OEM share the same servers, A and B, for their clusters.
Now, the OEM application needs third-party clusterware to fail it over. It needs the clusterware to provide a floating VIP and a shared folder of about 20GB which will keep the software libraries. If node A goes down, node B has access to the same libraries and comes up as active.
Do you know if the Oracle Clusterware that comes with Oracle RAC can support failing OEM over, i.e. can it provide a floating VIP and a shared filesystem?
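In 11.2 Grid Infrastructure both pieces exist, at least on paper: appvipcfg creates an application VIP, an ACFS volume (where ACFS is supported on your platform) or an NFS mount can carry the 20GB library folder, and a custom resource ties OEM to them. A hedged sketch (the address, script path, and resource names are examples):

appvipcfg create -network=1 -ip=10.0.0.50 -vipname=oem-vip -user=root
crsctl add resource oem.app -type cluster_resource -attr "ACTION_SCRIPT=/u01/crs/scripts/oem.sh, PLACEMENT=favored, HOSTING_MEMBERS=nodeA nodeB, START_DEPENDENCIES=hard(oem-vip), STOP_DEPENDENCIES=hard(oem-vip)"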
We have 2 servers running Oracle 11gR2 RAC on Windows Server 2008 64-bit. Today I added a new shared disk visible from both servers. All our data is placed on ASM partitions, but we need a Windows volume for exchanging some data between servers and users. Actions: from the 1st node I prepared the disk on shared storage: ran diskmgmt.msc, rescanned disks, and when Windows recognized the new HDD, initialized it, created an extended partition and a logical drive, and assigned a drive letter. From the other node, I made sure no drive letter was assigned.
Sometimes Windows automatically assigns a drive letter; if one is assigned, right-click, choose Change Drive Letter, and remove it. From the 1st node I then formatted this logical drive using ocfsformat.exe, but after executing the command below I faced some issues:
E:\app\11.2.0\grid\cfs> ocfsformat /m P: /c 1024 /v DATA /f /d /a
Reg Get Cluster Name(): Reg Query ValueEx for CFS_CLUSTER_NAME failed with error 203
This indicates that the Cluster Name has not been configured on this node for OCFS. The volume formatted in this condition will be seen by all nodes running OCFS.
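A hedged way to confirm whether the value is present at all. The value name comes from the error itself; the exact registry path varies by OCFS release, so the hive below is an assumption:

REM search the Oracle hive for the OCFS cluster-name value (hive path is an assumption)
reg query "HKLM\SOFTWARE\Oracle" /s /f CFS_CLUSTER_NAME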
I'm a long-time Oracle DBA but have only recently started with RAC, and I have never done any patching or upgrades in a RAC environment.
My environment consists of a 2 node RAC cluster, and 2 physical standbys at separate remote locations.
The database, GI, ASM, and clusterware are all at version 11.2.0.1.
I want to upgrade all components to the latest patchset which is 11.2.0.3. Eventually I also want to apply the latest PSU.
Can I do this without downtime? It is a mission-critical database where downtime could conceivably endanger lives. If it can't be done with zero downtime, then downtime must be kept to an absolute minimum (minutes, not hours).
I suspect the clusterware needs to be patched first, and I do not know whether I can do it one node at a time.
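Broadly, yes: the 11.2.0.1 to 11.2.0.3 GI patchset is an out-of-place upgrade into a new Grid home and is designed to be applied in rolling fashion, one node at a time, so the cluster itself needn't go down. A hedged outline (paths are examples):

# on each node in turn, after installing the 11.2.0.3 software into a NEW Grid home:
/u01/app/11.2.0.3/grid/rootupgrade.sh     # node1 first, then node2
crsctl query crs activeversion            # verify once all nodes are done

The database homes are upgraded separately afterwards; with two physical standbys, techniques such as standby-first patching or a transient logical standby are the usual levers for keeping database downtime to minutes.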
I have to manage an active/passive cluster on SLES 10 SP4 with Oracle 10.2.0.4. All Oracle binaries are on a DRBD device mounted at /oradata; this mountpoint holds the binaries and all datafiles.
I didn't install these systems. At one stage a node died and was reinstalled by a system administrator. After a switch from the running node to the standby node, the cluster does not start.
Jan 3 14:36:01 a logger: Cluster Ready Services waiting on dependencies. Diagnostics in /tmp/crsctl.6114.
Jan 3 14:36:14 a logger: Waiting for Oracle CSS service to be available before starting
Jan 3 14:36:14 a logger: ASM instance +ASM. Wait 2.
After issuing $ORACLE_HOME/bin/localconfig add, the cluster starts, the ASM instance comes up, and the database instance starts as well.
Shortly after that the DB instance dies. The alert log shows something like a recursive SQL error; it looks like writes to the datafiles are failing. In asmcmd the diskgroups show DISMOUNTED (they showed MOUNTED before).
So what to do?
a) Why is it necessary to issue "localconfig add" to make the cluster start?
b) What could be the reason for not being able to write to the database?
c) How should an active/passive cluster be installed?
c1) Install the OS on both nodes.
c2) Install the Oracle binaries:
c2.1) on both nodes in the local filesystem and do the mapping to the DRBD device afterwards, or
c2.2) only on one node and just switch the mountpoint?
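On question a), a hedged explanation: on 10.2, a CSS-only (non-RAC) node keeps its local cluster configuration in files that a reinstall wipes out, and localconfig is the tool that recreates them, which would explain why the reinstalled node needs it. A sketch of a clean reset (run as root):

$ORACLE_HOME/bin/localconfig delete    # drop any stale local CSS configuration
$ORACLE_HOME/bin/localconfig add       # re-register CSS for this node
$ORACLE_HOME/bin/crsctl check css      # confirm CSS before starting ASM and the DB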
We have a 3-node RAC. On one of the nodes the clusterware is down, and the CSSD process is down as well. I was told that we have to clear the socket files on that node. What exactly needs to be done to clear the socket files?
Database:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
[code]...
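The usual procedure, hedged (the socket directory is /var/tmp/.oracle on most Linux ports and /tmp/.oracle on some others; run as root on the affected node only):

crsctl stop crs -f                         # make sure the stack is fully down
ps -ef | grep -E 'cssd|crsd|evmd|ohasd'    # confirm no leftover daemons
rm -f /var/tmp/.oracle/* /tmp/.oracle/*    # remove the stale socket files
crsctl start crs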
I'm having problems with CRS in a RAC installation, and because of this I can't start the database. I installed Oracle RAC 11.2.0.1 with ASM on a Red Hat 5.8 Linux server. Everything was OK until a server reboot. I have 2 nodes, nodo1 and nodo2. Both got rebooted, and after that I'm not able to start the cluster.
All the other resources seem to be just fine.
At nodo1:
[root@nodo1 oracle]# crsctl start cluster -all
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'nodo1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'nodo2'
CRS-2676: Start of 'ora.cssdmonitor' on 'nodo1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'nodo1'
CRS-2672: Attempting to start 'ora.diskmon' on 'nodo1'
CRS-2676: Start of 'ora.cssdmonitor' on 'nodo2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'nodo2'
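When the start stalls at ora.cssd like this, the CSS log and the visibility of the voting disks are the first things to check. A hedged look (the Grid home path is an example; the ASMLib path applies only if ASMLib is in use):

tail -100 /u01/app/11.2.0/grid/log/nodo1/cssd/ocssd.log   # why is CSS waiting?
ls -l /dev/oracleasm/disks/     # did ASM disk permissions survive the reboot?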
I have been asked to look at making an existing Oracle server resilient. The technologies I need to use are:
- Windows 2008 R2 Enterprise SP2
- Microsoft Cluster Services
- Oracle Enterprise 11g Release 2
- Oracle Fail Safe (latest version available)
Unfortunately APEX 4.1 and mod_plsql are installed on the existing stand-alone server. I'm afraid I know very little about Oracle and APEX, but I need to make a design decision and check feasibility.
I believe the HTTP service and mod_plsql can be moved to an independent server, which should work, but APEX itself must remain on the database server (I'm told). Will APEX work in an MSCS setup? I can only find references to RAC support, and RAC is out of scope.
I have installed Oracle Grid on a standalone server and set up Oracle DB 11.2.0.2 on an Oracle Linux 6.2 64-bit server. When I reboot the server and run crs_stat -t, several daemons haven't started, so the ASM and DB instances are also down, as below.
I am forced to manually start the daemons via the command crsctl start resource -all, and then I manually start the ASM and DB instances. Yet when I run the commands
crsctl config has
CRS-4622: Oracle High Availability Services autostart is enabled.
crsctl check has
CRS-4638: Oracle High Availability Services is online.
Thus I would assume the daemons would start automatically during boot.
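One hedged explanation: "crsctl config has" only covers OHASD autostart; each managed resource also carries its own AUTO_START attribute, which may be left at "restore" or "never". A sketch of checking and forcing it (resource names are the usual Oracle Restart ones; "orcl" is a placeholder, and on some versions modifying ora.* resources may be restricted):

crsctl stat res ora.asm -p | grep AUTO_START
crsctl modify resource ora.asm -attr "AUTO_START=always"
crsctl modify resource ora.orcl.db -attr "AUTO_START=always"   # 'orcl' is a placeholder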
Oracle VM 2-node RAC cluster
Oracle 11g 11.2.0.3.0
Oracle 11g Release 2 GI
On the current 2-node cluster we have GI and a RAC DB configured. Nodes: vmorarac1, vmorarac2.
We shut down vmorarac1 to clone it to vmorarac5.
On the new node I changed the hostname to vmorarac5. In /etc/sysconfig/network-scripts/ifcfg-eth0 I changed the IP to the new address, and the same for /etc/sysconfig/network-scripts/ifcfg-eth1 (152.144.199.210, 152.144.199.211).
Made changes to /etc/hosts on vmorarac1/2 for the 2 new IP addresses assigned to vmorarac5.
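Note that cloning the VM alone won't make the cluster registry aware of vmorarac5; the supported route is to extend the cluster with addNode.sh from an existing node. A hedged sketch (run as the grid owner; the VIP name is an example):

cluvfy stage -pre nodeadd -n vmorarac5
$GRID_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={vmorarac5}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vmorarac5-vip}"
# then run the root scripts on vmorarac5 when prompted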
I am using NewStart cluster software. Currently I have two blade servers in my system, and I have configured the cluster as well. Right now I have one issue: I am using an IP address from the 10.34.14.0/28 network for the cluster, but this cannot be accessed from all networks, so I want to use a different IP range, from the 10.68.1.0/28 network, on the same blade servers. Can I configure two different floating (cluster) IPs? The first IP range is for signalling, and the second IP range, 10.68.1.0/28, is for OMM. I need to use both IPs on the two blade servers.
According to the article "Installing NetBackup for Oracle agent on Unix" (URL....), one should shut down all Oracle instances on the client. The reason: sometimes Oracle will take a shared library (such as ours) and place it into its shared memory space. My question is: how can I check whether the shared library is actually present in the memory space, to determine whether shutting down all Oracle instances is really needed? I'm using 11.2.0.3.
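A hedged way to look: RMAN loads the media-management library (conventionally named libobk.so) into the Oracle server processes, so scanning their memory maps shows whether it is resident (the process-name pattern below is an assumption about your setup):

# is a media-management library mapped into any Oracle background process?
for pid in $(pgrep -f 'ora_'); do
  pmap $pid 2>/dev/null | grep -iE 'libobk|netbackup' && echo "library mapped in PID $pid"
done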
Yesterday I was installing the 11.2.0.2 patchset on our Oracle 11gR2 (11.2.0.1.0) two-node RAC cluster on OEL 5.4 x86 Linux. Before installing, I read the [URL] support note, which says: "Note: All Oracle Grid Infrastructure patch set upgrades must be out-of-place upgrades, in which you install the patch set into a new Grid home. In-place patch set upgrades are not supported."
During installation I found the following installation options:
1) Install and configure Oracle grid Infrastructure for a cluster
2) Configure Oracle Grid Infrastructure for a Standalone Server
3) Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management
4) Install Oracle Grid Infrastructure Software Only
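Given the note quoted above, option 3 pointed at a brand-new Grid home is the out-of-place path that matches it. For reference, a hedged sketch of the equivalent silent invocation (paths are examples):

./runInstaller -silent -responseFile /stage/grid_upgrade.rsp
# with the response file containing, among other entries:
#   oracle.install.option=UPGRADE
#   ORACLE_HOME=/u01/app/11.2.0.2/grid    # the NEW Grid home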
I have an 11.2 Grid Infrastructure with the July PSU applied. I ran the CVU and received the following error. I checked the file in error and found nothing wrong. Why does CVU fail the pre-check?
RECEIVED ERROR:
Starting check for zeroconf check ...
ERROR:
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to yes in file "/etc/sysconfig/network" on node "cusms2.us.com"
PRVE-10077 : NOZEROCONF parameter was not specified or was not set to yes in file "/etc/sysconfig/network" on node "cusms1.us.com"
Check for zeroconf check failed
Pre-check for cluster services setup was unsuccessful on all the nodes.
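One common cause, offered as a hedged guess: the check wants the literal line NOZEROCONF=yes in /etc/sysconfig/network, so a missing entry, a commented-out one, or a differently-cased value can all trip it even when the file "looks fine". On each node, as root:

grep -i zeroconf /etc/sysconfig/network       # see what is actually there
echo "NOZEROCONF=yes" >> /etc/sysconfig/network
# then re-run the same cluvfy invocation to confirm the check passes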
I am trying to create a RAC setup on Oracle 10.2 through an NFS cluster configuration. I succeeded with the installation of the cluster configuration and the installation of the Oracle software, but when I try to create the database ...
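One frequent stumbling block at exactly this stage, offered as a hedged guess: database creation over NFS fails unless the mounts use Oracle's required options. A Linux /etc/fstab example for a datafile mount (the server and export names are hypothetical):

nfsserver:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0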
I am installing Oracle RAC 11.2.0.1. After I installed the Grid Infrastructure, when I ran root.sh I got the errors below:
ASM failed to start. Check /d1/app/grid/cfgtoollogs/asmca/asmca-1109068AM4612.log for details.
Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
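Before rerunning, the failed configuration usually has to be backed out first. A hedged recovery sequence (run as root; the Grid home path is an example):

perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force
# fix the ASM problem first (check the asmca log cited above: disk discovery
# string, candidate disk permissions/ownership), then rerun:
/u01/app/11.2.0/grid/root.sh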
The installation of CRS is successful, and at the end it asks to execute 2 scripts on both nodes, i.e. orainstRoot.sh and root.sh.
When the scripts are executed, orainstRoot.sh succeeds, but root.sh gives the following error:
[root@rac2host crs]# ./root.sh
WARNING: directory '/home/oracle/product' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
[code]....
Is this related to permissions on the OCR and voting disk?
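The two WARNING lines themselves are typically benign (they only note that the home's parent directories aren't root-owned), so the real failure is likely in the truncated part. Hedged checks on the storage side (the raw device names are hypothetical):

ls -l /dev/raw/raw1 /dev/raw/raw2    # ownership/permissions of OCR and voting devices
ocrcheck                             # OCR integrity, once the tools respond
crsctl query css votedisk            # voting disk locations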