Real Application Clusters :: How Sessions Failover To Surviving Nodes
Sep 6, 2012
I want to apply a patch to a 3-node RAC database. I am going to use a rolling upgrade, one node at a time. For this I need to stop all the instances and any processes running out of $ORACLE_HOME. I will use the srvctl stop home command. My concerns are:
1) With the srvctl stop home command, how do the instances shut down: shutdown immediate or shutdown abort?
2) If applications are connected to the node being shut down, will the sessions fail over to the surviving nodes?
3) If it is shutdown abort, how do sessions fail over? How about long-running queries: will they also fail over?
Oracle version 11.2.0.2 on RHEL 5.6; planning to apply DB PSU 11.2.0.2.7.
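For reference, a minimal sketch of the commands involved, assuming a hypothetical database RACDB with instances RACDB1/RACDB2/RACDB3 and a state-file path of my own choosing (verify the exact options with srvctl stop home -h on your release):

# Stop everything running out of this home on node 1; srvctl records what was
# running in the state file so it can be restarted after patching. On 11.2 the
# instances are stopped with the resource's default stop option (normally
# shutdown immediate) unless a stop option is passed explicitly.
srvctl stop home -o $ORACLE_HOME -s /tmp/stop_home_node1.state -n node1

# After applying the PSU to this home, restart exactly what was stopped:
srvctl start home -o $ORACLE_HOME -s /tmp/stop_home_node1.state -n node1

# Session failover only happens for connections that come in through a
# TAF-enabled service; e.g. a hypothetical service with SELECT failover,
# which lets in-flight queries resume on a surviving instance:
srvctl add service -d RACDB -s OLTP_TAF -r "RACDB1,RACDB2,RACDB3" -P BASIC -e SELECT -m BASIC
srvctl start service -d RACDB -s OLTP_TAF

Note that even with TAF, uncommitted transactions are rolled back on failover; only SELECTs can be resumed.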
[oracle@server03 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 20127 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Then I executed:
To deinstall the Oracle home from the node you are deleting, run the following command from the Oracle_home/oui/bin directory:
Checking swap space: must be greater than 500 MB. Actual 20127 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-11-06_12-05-36PM. Please wait ...
[oracle@server03 bin]$ Oracle Universal Installer, Version 11.1.0.7.0 Production
Copyright (C) 1999, 2008, Oracle. All rights reserved.
[code]....
When I arrive at point 5, I see one resource, ora.DPCTTest.db, ONLINE on one server (server02 or server03, which are the nodes I want to remove). If I execute:
crs_relocate ora.db_name.db
I obtain:
[oracle@server03 bin]$ ./crs_relocate ora.DPCTTest.db
Attempting to stop `ora.DPCTTest.db` on member `server02`
Stop of `ora.DPCTTest.db` on member `server02` succeeded.
Attempting to start `ora.DPCTTest.db` on member `server03`
I have installed Oracle RAC 10g on Red Hat Linux 4.0. Until yesterday, failover was happening: when I stopped one instance on node01, the VIP of node01 was transferred to node02. This was shown using ifconfig -a, but now that is not happening.
The information below is given:
[oracle@node01 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.hitesh.db  application    ONLINE    ONLINE    node02
ora....h1.inst application    ONLINE    ONLINE    node01
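A minimal sketch of the checks I use to see where the VIPs currently live, assuming the default 10g nodeapps setup (node names as above):

# Where are the nodeapps (VIP, GSD, ONS, listener) for each node right now?
srvctl status nodeapps -n node01
srvctl status nodeapps -n node02

# On the OS side, confirm which interface/host is answering for the node01 VIP:
/sbin/ifconfig -a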
I am on RAC 11gR2 on RHEL 5. When I install RAC, can I run root.sh on all nodes in parallel? Or do I have to wait for the first node to finish root.sh completely, and then run root.sh on the second node, and so on for the third node?
To shut down the entire CRS stack, I ran crsctl stop cluster -all from one node. After the command executed, the below-mentioned processes were still running on all nodes.
So I had to manually run crsctl stop crs on all nodes.
[root@intsmdp01 ~]# /crs/product/11.2.0/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'intsmdp01'
CRS-2673: Attempting to stop 'ora.crf' on 'intsmdp01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'intsmdp01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'intsmdp01'
CRS-2677: Stop of 'ora.crf' on 'intsmdp01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'intsmdp01'
CRS-2677: Stop of 'ora.mdnsd' on 'intsmdp01' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'intsmdp01' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'intsmdp01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'intsmdp01'
CRS-2677: Stop of 'ora.gpnpd' on 'intsmdp01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'intsmdp01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@intsmdp01 ~]#
Is there a way to bring down the entire CRS stack on all nodes of the cluster from one node?
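For reference, the two-step shutdown I ended up with, as a sketch (paths as in my environment, run as root):

# Step 1, from any one node: stops all CRS-managed resources clusterwide,
# but leaves OHAS and its helper daemons (ora.crf, gipcd, mdnsd, gpnpd)
# running on every node.
/crs/product/11.2.0/bin/crsctl stop cluster -all

# Step 2, on each node individually: stops the Oracle High Availability
# Services stack itself. I have not found a clusterwide equivalent for this.
/crs/product/11.2.0/bin/crsctl stop crs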
I am trying to design a database consolidation high-availability cluster for Oracle 11g R2 64-bit Enterprise Edition (X86-64) on Oracle Linux 6.x UEK, using Oracle 11.2.0.3 (the latest as of Aug 2012).
We don't need multi-node RAC now or in the foreseeable future because none of the databases we run exceed the capacity of a single node. Likewise, we don't need to use Oracle VM to virtualise the database instances.
We plan to use SGA and PGA memory management to run multiple instances on the same hardware, operating on a single Linux 64-bit O/S image. Does it sound OK so far?
Two or three 4-socket, 40-core Intel 64-bit servers with 512 GB of RAM each (relatively cheap at today's HW commodity prices) will be sufficient to run all the Oracle databases we have on Linux 64-bit. So the two HA options that I know of are:
(1) use Oracle Clusterware/Grid/ASM to provide for instance failover
(2) use Oracle RAC One Node on top of Clusterware/Grid/ASM
As I understand it, RAC One Node is significantly more expensive than the "free" Oracle Clusterware/ASM/Grid (since we already own Oracle 11.2.0.3 Enterprise licences). So why should my employer pay for a RAC One Node licence, given they already own single-instance failover and restart protection from Clusterware/Grid/ASM?
I also read that Data Guard 11.2 may not be supported with RAC One Node on 11.2. True? Will the same Data Guard 11.2 work with a single-instance failover running on Clusterware/Grid/ASM?
- Who is running RAC One Node? Why?
- Who is running Single Instance Failover with Clusterware? Why?
- Who is using Data Guard with either of the above?
I am trying to build Oracle RAC 10gR2 on two nodes with Openfiler, with OCFS and ASM, but I didn't have the Clusterware 10.2 software to install because Oracle has removed that download; that's why I installed Clusterware 11.2. After installing Clusterware 11.2, I checked the status, and the output below is the same on both nodes.
[oracle@linux2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
But when I try to install the Oracle 10gR2 software and reach the point of configuring ASM disks, it says "in order to use AUTOMATIC STORAGE MANAGEMENT the oracle cluster synchronization service (CSS) must be up and running" and tells me to run /u01/app/oracle/product/10.2.0/db_1/bin/localconfig reset as the root user. I ran it and it gave no error.
Is there any harm in moving the alert logs generated by the Grid Infrastructure? The alert log in the grid log directory has grown to around 2 GB. Our CRS version is 11.2.0.2 on Linux. Also, please, a sample script for automating the same.
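A minimal rotation sketch to start from; the grid home path and retention are hypothetical, and the log is truncated in place so the daemons can keep writing to the same file:

#!/bin/sh
# rotate_crs_alert.sh - hypothetical paths; adjust GRID_HOME to your install.
GRID_HOME=/u01/app/11.2.0/grid
HOST=`hostname -s`
LOG=$GRID_HOME/log/$HOST/alert$HOST.log
STAMP=`date +%Y%m%d%H%M`

cp -p $LOG $LOG.$STAMP && cat /dev/null > $LOG   # copy aside, truncate in place
gzip $LOG.$STAMP

# keep 90 days of compressed copies
find $GRID_HOME/log/$HOST -name "alert$HOST.log.*.gz" -mtime +90 -exec rm {} \;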
Oracle 11g ASM RAC under OEL 5.6. I just want to do a cloning, so first I created a clone of my Oracle home, and then I created a pfile, and I get this error:
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit
I googled it and found that this issue is with relinking the RAC binaries, because the source database is RAC and the destination is not.
[oracle@backuptest ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@backuptest lib]$ make -f ins_rdbms.mk rac_off
bash: make: command not found
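The "make: command not found" simply means the OS build tools are missing on the clone host; a sketch of the fix, assuming a RHEL/OEL box with yum configured:

# As root: install the build tools the relink needs
yum install -y make gcc binutils

# Then, as oracle: relink with the RAC option turned off
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off ioracle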
I am trying to find a best-practices document from Oracle regarding the use of ASM to store the archived logs in RAC. Most of the DBAs I know create a non-ASM location to store the archived logs. Are there any considerations if you are also using Data Guard?
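For what it's worth, the ASM-based setup usually boils down to pointing the archive destination at a disk group; a sketch assuming a hypothetical +FRA disk group sized for your retention:

SQL> alter system set db_recovery_file_dest_size=200G scope=both sid='*';
SQL> alter system set db_recovery_file_dest='+FRA' scope=both sid='*';
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both sid='*';

With Data Guard, the local destination can stay in ASM; shipping to the standby is a separate destination (e.g. log_archive_dest_2='SERVICE=...').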
Why is it offline, what does this mean, and what is the usage of this? Does it have any effect on database availability? The below is from OEM:
[Critical] ABSNL Cluster Resource State State Change ora.gsd has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
[Warning] ABSNL Cluster Resource State State Change ora.abs.db has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
We have a Java application which uses the ONS configuration. Currently the app's ONS nodes parameter points to the VIPs, i.e. Node1-vip:6200,Node2-vip:6200. Will the ONS nodes parameter support the RAC SCAN? Can I change it to Nodes:RAC-scan:6200?
Grid Version: 11.2.0.3
OS: RHEL 5.8
I have successfully installed the cluster (GI) and installed the RDBMS software on my 2-node RAC. But I don't want to use DBCA to create the database, due to some custom requirements. Instead, I want to run the CREATE DATABASE command manually to create the RAC DB. Once the DB is created on one node, what are the steps I need to take to make this DB a RAC DB?
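Broadly, and only as a sketch (the full checklist is in the RAC administration guide), after CREATE DATABASE on node 1 you run catclust.sql, add a second redo thread and undo tablespace, flip cluster_database, and register everything with Clusterware. The names below (RACDB, RACDB1/RACDB2) are hypothetical, and the spfile and datafiles must be on shared storage:

SQL> @?/rdbms/admin/catclust.sql
SQL> alter database add logfile thread 2 group 4 size 512M, group 5 size 512M, group 6 size 512M;
SQL> alter database enable public thread 2;
SQL> -- assumes OMF (db_create_file_dest set); otherwise name the datafile
SQL> create undo tablespace UNDOTBS2 datafile size 2G;
SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='RACDB2';
SQL> alter system set instance_number=2 scope=spfile sid='RACDB2';
SQL> alter system set thread=2 scope=spfile sid='RACDB2';
SQL> alter system set cluster_database=true scope=spfile sid='*';

$ srvctl add database -d RACDB -o $ORACLE_HOME -p +DATA/RACDB/spfileRACDB.ora
$ srvctl add instance -d RACDB -i RACDB1 -n node1
$ srvctl add instance -d RACDB -i RACDB2 -n node2
$ srvctl start database -d RACDB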
2. In the srvctl add service command syntax there is a new option, [-l <primary, physical_standby, logical_standby>]; this option allows the service to start only when the service role matches the database role that is saved in the OCR.
We can now add a service to a database in the OCR using the following command:
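For example, with a hypothetical primary database ORCL and a service SALES that should run only in the primary role:

srvctl add service -d ORCL -s SALES -r "ORCL1,ORCL2" -l PRIMARY
srvctl start service -d ORCL -s SALES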
Would that be enough, or do I need to make other changes as well from an application perspective?
Another thing is that application deployment is done through Ant scripts, which at the moment copy some data to a /tmp/.. folder on the DB server and then run DB scripts. These scripts make some DDL changes and some record updates. Now, with the RAC implementation, these scripts will be run on one of the RAC nodes (instances); so do I need to run these DDL/record-update scripts on the other node in Oracle RAC, or will Oracle RAC take care of this automatically?
I have a two-node RAC on a test server. I installed the database software, but it failed with an error that OEM could not be installed. Now I need to install Enterprise Manager.
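If it is Database Control (not Grid Control) that failed, the usual route is emca; a sketch, assuming the repository does not yet exist (emca prompts for the DB name, port, and passwords):

# Configure DB Control for a RAC database and create its repository
emca -config dbcontrol db -repos create -cluster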
We are preparing to purchase equipment for a two-node Oracle 10g RAC (on Windows Server 2008), and I need to configure the disks on the servers. We have an existing EMC VNX5500 storage array and 8 Gb SAN switches.
My question is: with how many disks should the servers be configured? Originally we planned four SSD disks per node/server (two in RAID 1 for the OS and two in RAID 1 for the local Oracle home), with everything else on storage-array disks (shared disk storage).
Is it necessary to have more local disks, and for what? Where should the redo log files, archived log files, backup files… be placed?
Or is it better for each server to be configured with six SSD disks: two in RAID 1 for the OS, and four in RAID 10 (or RAID 1) for the local Oracle home, archived log files, or something else…?
Grid Version: 11.2.0.3
OS: Red Hat Enterprise Linux 5.6
Node 2 of our two-node RAC got rebooted. Upon reboot, CRS and the ASM instance came up, but the DB didn't come up. How can I check whether the DB is linked to CRS startup? How can I enable DB startup upon CRS startup?
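A sketch of the checks I would run, with a hypothetical database name MYDB:

# Is the database registered with CRS, and is it enabled for auto-start?
srvctl config database -d MYDB -a

# The auto-start policy on the underlying CRS resource:
crsctl stat res ora.mydb.db -p | grep -i -e AUTO_START -e ENABLED

# Enable it so CRS brings the DB up when the stack starts:
srvctl enable database -d MYDB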
My client asked me to "apply the newest patch" on his RAC 10.2.0.5 environment on Windows.
I've downloaded 10.2.0.5 Patch 20 (the newest version).
My question is: should I apply it using OPatch to both ORACLE_HOME and CRS_HOME, or only to ORACLE_HOME? What about the 10.2.0.5.2 CRS patch? Should I apply earlier CPU patches?
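In general terms (the patch README is authoritative on which homes it targets), a PSU driven by OPatch is applied per home, with that home's services stopped first; a sketch with hypothetical Windows paths:

REM check what each home already has
opatch lsinventory -oh C:\oracle\product\10.2.0\db_1
opatch lsinventory -oh C:\oracle\product\10.2.0\crs

REM apply from the unzipped patch directory against one home at a time
opatch apply -oh C:\oracle\product\10.2.0\db_1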
I have a 2 node 11.2.0.3 RAC database, OS version is RHEL 6.1
Our existing Grid Control server setup is 10gR2. Can I install the 10.2.0.5 agent on my RAC servers and use the 10g Grid to monitor my 11.2.0.3 database (RHEL 6.1)?
I am working on Oracle 10.2.0.4 RAC, two node instances, on AIX. We have one table with multiple rows defining jobs to be done by users. The functionality is that whenever a user picks a row (job), an update statement is issued that sets the status column to 1, which means the job is allocated; from that point the job should not be allocated to any other user.
But we are facing an issue: once a user picks a job, the application log files show that the row gets locked and the status is updated to 1, yet users connecting to the other (or sometimes the same) instance still get that job; that is, multiple users can pick the same job even after it has already been picked by another user.
Can this be an issue with the RAC configuration? For example: whenever one user updates a row, it is in the cache of one instance, and when another user tries to update it, the second instance does not have that information and allows the user connected to it to pick the job; i.e., a delay in block transfer or Cache Fusion.
Can this be an issue with RAC, or is it purely an application issue?
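Row locks are global in RAC (Cache Fusion ships the blocks, but a row locked on one instance stays locked clusterwide), so the usual culprit is the application reading the status and then updating unconditionally in two separate steps. A sketch of an atomic claim against a hypothetical JOBS table:

-- Claim the job only if it is still free; the status check and the update
-- are one atomic statement, regardless of which instance runs it.
UPDATE jobs
   SET status = 1, picked_by = :user_id
 WHERE job_id = :job_id
   AND status = 0;
-- If this updates 0 rows (SQL%ROWCOUNT = 0), another session already
-- claimed the job: pick a different one instead.
COMMIT;

If the logs show a second user still reading status 0 after the first user's commit, then RAC is worth investigating; otherwise it is an application race.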
I am an Oracle RACSIG (URL....) member, but since last week I have not been able to play any of the recorded web seminars. I am getting the error "The webpage cannot be found HTTP 404". I am able to look at the PDFs but not the recordings.
I got the following error while trying to restart a 3-node RAC database. OEM reported two out of three nodes to be down. We also noticed that the cluster_database parameter was set to FALSE and the cluster_database_instances integer was set to 1. We changed the parameters to TRUE and 3 respectively, and while trying to restart the database we got the error message shown below.
CRS-5017: The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode. For details refer to "(:CLSN00107:)" in "/oradba/app/grid/11.2.0.3_a/log/abcde104/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde104' failed
CRS-2632: There are no more servers to try to place resource 'ora.abc123.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode. For details refer to "(:CLSN00107:)" in "/oradba/app/grid/11.2.0.3_a/log/abcde103/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde103' failed
I recently installed a 2-node Oracle 11g RAC on RHEL 5. While creating the clustered database, database creation on the second node (racnode2) failed. So I connected / as sysdba on that node and executed startup, only to get this error message:
SQL> startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+RACDB_DATA/RACDB/spfileRACDB.ora'
ORA-17503: ksfdopn:2 Failed to open file +RACDB_DATA/RACDB/spfileRACDB.ora
ORA-01034: ORACLE not available
ORA-27123: unable to attach to shared memory segment
Linux Error: 13: Permission denied
Additional information: 196612
Additional information: 10
SQL>
But '+RACDB_DATA/RACDB/spfileRACDB.ora' is present.
I strongly believe it is a permission issue: ORACLE-owned processes are not able to access disks owned by the GRID user. I checked the permissions of the disks:
[oracle@racnode2 ~]$ cd /dev/oracleasm/disks
[oracle@racnode2 disks]$ ls -l
total 0
brw-rw---- 1 grid asmadmin 8, 65 Nov 19 19:15 CRSVOL1
brw-rw---- 1 grid asmadmin 8, 49 Nov 19 19:15 DATAVOL1
brw-rw---- 1 grid asmadmin 8, 81 Nov 19 19:15 FRAVOL1
And also, the ORACLE user has asmdba among other privileges:
[oracle@racnode2 disks]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper) context=user_u:system_r:unconfined_t
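One known cause on role-separated installs like this is the oracle binary in the RDBMS home losing its asmadmin setgid bit (for example after a relink); a sketch of the check and the fix, with a hypothetical grid home path:

# The binary should show -rwsr-s--x with group asmadmin:
ls -l $ORACLE_HOME/bin/oracle

# If the group is oinstall instead, restore it (run as the grid user):
/u01/app/11.2.0/grid/bin/setasmgidwrap o=$ORACLE_HOME/bin/oracle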
I am doing an upgrade from a 10gR2 standalone database (without ASM) to an 11gR2 two-node cluster (on ASM), and the database is somewhere around 1 TB in size. My problem is that if I do an export and import, the live system needs to be down for over a day. Is there a better way of doing this, and how?