[oracle@server03 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES="" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 20127 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Then I executed the next step. The documentation says:
To deinstall the Oracle home from the node you are deleting, run the following command from the Oracle_home/oui/bin directory:
Checking swap space: must be greater than 500 MB. Actual 20127 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-11-06_12-05-36PM. Please wait ...
[oracle@server03 bin]$ Oracle Universal Installer, Version 11.1.0.7.0 Production
Copyright (C) 1999, 2008, Oracle. All rights reserved.
[code]....
When I reach step 5, I see one resource, ora.DPCTTest.db, ONLINE on one server (server02 or server03, the nodes I want to remove).
If I execute:
crs_relocate ora.db_name.db
I obtain:
[oracle@server03 bin]$ ./crs_relocate ora.DPCTTest.db
Attempting to stop `ora.DPCTTest.db` on member `server02`
Stop of `ora.DPCTTest.db` on member `server02` succeeded.
Attempting to start `ora.DPCTTest.db` on member `server03`
I am on RAC 11gR2 on RHEL 5. When I install RAC, can I run root.sh on all nodes in parallel, or do I have to wait for the first node to finish root.sh completely, and only then run root.sh on the second node, and then on the third?
I want to apply a patch to a 3-node RAC database. I am going to use a rolling upgrade, one node at a time. For this I need to stop all the instances and any processes running out of $ORACLE_HOME. I will use the srvctl stop home command (see the sketch after the questions below). My concerns are:
1) With the srvctl stop home command, how do the instances shut down: shutdown immediate or shutdown abort?
2) If applications are connected to the node being shut down, will the sessions fail over to the surviving nodes?
3) If it is shutdown abort, how do sessions fail over? And what about long-running queries: will they also fail over?
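For reference, a minimal sketch of the rolling sequence on one node, assuming hypothetical home and state-file paths; srvctl records the resources that were running in the state file and restarts exactly those on start home:

[oracle@node1 ~]$ srvctl stop home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s /tmp/node1_state.txt -n node1
(apply the patch with opatch here)
[oracle@node1 ~]$ srvctl start home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s /tmp/node1_state.txt -n node1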
Oracle version 11.2.0.2 on RHEL 5.6; planning to apply DB PSU 11.2.0.2.7.
To shut down the entire CRS stack, I ran crsctl stop cluster -all from one node. After the command completed, the processes mentioned below were still running on all nodes.
So I had to manually run crsctl stop crs on each node.
[root@intsmdp01 ~]# /crs/product/11.2.0/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'intsmdp01'
CRS-2673: Attempting to stop 'ora.crf' on 'intsmdp01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'intsmdp01'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'intsmdp01'
CRS-2677: Stop of 'ora.crf' on 'intsmdp01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'intsmdp01'
CRS-2677: Stop of 'ora.mdnsd' on 'intsmdp01' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'intsmdp01' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'intsmdp01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'intsmdp01'
CRS-2677: Stop of 'ora.gpnpd' on 'intsmdp01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'intsmdp01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@intsmdp01 ~]#
Is there a way to bring down the entire CRS stack on all the nodes in the cluster from one node?
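Note that crsctl stop cluster -all stops the CRS-managed resources cluster-wide, but the Oracle High Availability Services daemons themselves (ohasd, gipcd, mdnsd, and the rest) are only stopped by crsctl stop crs, which acts on the local node. A hedged workaround sketch, assuming passwordless root ssh between the nodes (the node names are hypothetical):

# run as root from one node
for node in intsmdp01 intsmdp02 intsmdp03; do
    ssh $node /crs/product/11.2.0/bin/crsctl stop crs
done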
I have recently set up a 2-node RAC on Oracle Linux 5.4 with Oracle 11gR2. The installation went smoothly and all the cluster resources are up and running. However, the data is not syncing across the nodes: when I create a table it shows up on the other node, but when I insert rows into the table they don't show up on the other node, and when I restart the cluster the inserted rows are completely gone, even from the node where I inserted them.
Initial situation: an Oracle 10g single-instance database on Windows 2008 (datafiles in an NTFS partition). Final situation: the same database upgraded to Oracle 11gR2 on a two-node RAC on Windows 2008 (datafiles in ASM).
In your opinion, is this the best way to achieve the job?
1) On the two nodes, install Grid 11gR2, ASM, and RDBMS 11gR2 (SE).
2) Export full from 10g (a hedged sketch of the export/import commands follows this list).
3) Create a new, empty 11gR2 database in the RAC 11gR2 infrastructure with the same tablespace layout as the 10g database.
4) Import full from 10g into 11gR2.
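A hedged sketch of steps 2 and 4 using Data Pump, assuming a directory object dp_dir exists on both sides and the dump file is copied across (all names and paths are hypothetical):

# on the 10g source
expdp system/password FULL=y DIRECTORY=dp_dir DUMPFILE=full10g.dmp LOGFILE=full10g_exp.log

# on the new 11gR2 RAC database
impdp system/password FULL=y DIRECTORY=dp_dir DUMPFILE=full10g.dmp LOGFILE=full10g_imp.log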
I'm trying to install a 2-node Oracle RAC 11gR2 on SLES 11. I configured DNS for the public, virtual, private, and SCAN IPs. When I check with nslookup, everything seems OK, but when I run runcluvfy.sh it says:
" PRVF-5636 : The DNS response time for an unreachable node exceeded 15000 ms on following nodes: rac1,rac2"
We have two Sun Solaris (Solaris 10) nodes; each has an Oracle DB and OID of the above versions. There is an ASR running between both nodes, and it was set up as follows:
$ ./ldap/bin/remtool -asrsetup -v
------------------------------------------------------------------------------
ASR Setup for OID Replication
Make sure that the replication administrator that you enter below does not exist already in any of the nodes that will be part of the DRG to be created now. If the user exists, that user will be dropped and will be created newly.
------------------------------------------------------------------------------
Enter replication administrator's name : repadmin
Enter replication administrator's password :
Reenter replication administrator's password :
Enter Master Definition Site (MDS) details :
Enter global name of MDS : node1
[code].....
We ran a Perl script which uses ldapmodify to change the value of one attribute in all records (7 million records). That was done by creating many LDIF files, each of 10,000 records, in 26.04, and by mistake this script ran 3 times in parallel. Currently the replication between the two nodes is stuck, and when I query ods_chg_log on the master node (node1):
SQL> select count(*) from ods_chg_log;
COUNT(*)
----------
6690594

And the number is not decreasing. I also queried asr_chg_log on the master node (node1):

SQL> select count(*) from asr_chg_log;
[code]....
I'm trying to set up Oracle RAC 10gR2 on two nodes with Openfiler, with OCFS and ASM, but I didn't have the Clusterware 10.2 software to install because Oracle has removed it from downloads; that's why I installed Clusterware 11.2. After installing Clusterware 11.2, I checked the status, and the output below is from both nodes.
[oracle@linux2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
But when I try to install the Oracle 10gR2 software and reach the step to configure ASM disks, it says "in order to use Automatic Storage Management the Oracle Cluster Synchronization Service (CSS) must be up and running", and tells me to run /u01/app/oracle/product/10.2.0/db_1/bin/localconfig reset as the root user. I ran it and it gave no error.
I'm trying to duplicate my RAC database (ASM, 2 nodes) to a single-node, non-ASM standby, using RMAN duplicate database for standby from active database, on Windows. The problem is I don't know how to remove the RAC configuration when duplicating the database.
Here is the procedure:
On the standby I have:
- installed the database software,
- configured the listener and tnsnames,
- added the instance service (oradim -new -sid STANDBY -intpwd PASSWORD -startmode M),
- configured the init.ora (with only db_name = STANDBY),
- configured the orapwd file,
- run startup nomount.
On RAC node1:
rman target sys/PASSWORD@RAC
connected to target database

RMAN> connect auxiliary sys/password@STANDBY
connected to auxiliary database: STANDBY (not mounted)
now for the duplicate command:
RMAN> duplicate target database
2> for standby
3> from active database
4> spfile
5> parameter_value_convert = '+DATA/RAC','D:\oracle\11.2.0\DATAFILES\DATA\STANDBY'
6> set db_unique_name='STANDBY'
7> set log_file_name_convert = '+DATA/RAC','D:\oracle\11.2.0\DATAFILES\DATA\STANDBY'
8> set db_file_name_convert = '+DATA/RAC','D:\oracle\11.2.0\DATAFILES\DATA\STANDBY';
I have tried adding the parameter set cluster_database='false', but no luck.
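One hedged follow-up idea, assuming the duplicate itself completes: clear the RAC-specific parameters in the standby's spfile afterwards and bounce the instance. This is a sketch, not a verified fix:

SQL> alter system reset cluster_database scope=spfile sid='*';
SQL> alter system reset remote_listener scope=spfile sid='*';
SQL> shutdown immediate
SQL> startup mount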
Is there any harm in moving the alert logs generated by the Grid Infrastructure? The alert log in the Grid log directory has grown to around 2 GB. Our CRS version is 11.2.0.2 on Linux. Also, a sample script for automating this would help; a hedged sketch follows.
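A minimal copy-then-truncate rotation sketch. Copy/truncate matters here because the writing daemons keep the file handle open, so the log must be truncated in place rather than moved. The Grid home path and the 30-day retention are assumptions:

#!/bin/bash
# Hedged sketch: rotate and compress the GI alert log.
GRID_HOME=/u01/app/11.2.0/grid            # hypothetical Grid home
HOST=$(hostname -s)
LOG=$GRID_HOME/log/$HOST/alert$HOST.log
STAMP=$(date +%Y%m%d%H%M)

if [ -f "$LOG" ]; then
    cp "$LOG" "$LOG.$STAMP"    # copy first...
    : > "$LOG"                 # ...then truncate in place (writer keeps its handle)
    gzip "$LOG.$STAMP"
    # prune archived copies older than 30 days
    find "$(dirname "$LOG")" -name "alert*.log.*.gz" -mtime +30 -delete
fi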
Oracle 11g ASM RAC under OEL 5.6. I just want to do a cloning, so first I created a clone of my Oracle home, then I created a pfile, and I get this error:
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit
I googled it and found that this issue is with relinking the RAC binaries, because the source database is RAC and the destination is not.
[oracle@backuptest ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@backuptest lib]$ make -f ins_rdbms.mk rac_off
bash: make: command not found
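make simply isn't installed on that box. A hedged sketch of the usual fix (the package list is an assumption; on OEL the oracle-validated package normally pulls these in):

# as root: install the build tools the relink needs
yum install -y make gcc binutils

# then, as oracle: turn RAC off and relink the oracle binary
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off ioracle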
I'm trying to find a best-practices document from Oracle regarding the use of ASM to store the archive logs in RAC. Most of the DBAs I know create a non-ASM location to store the archive logs. Are there any considerations if you are also using Data Guard?
Why is it offline, what does this mean, and what is it used for? Does it have any effect on database availability? The below is from OEM:
[Critical] ABSNL Cluster Resource State State Change ora.gsd has instances in OFFLINE State  Nov 12, 2012 12:15:03 PM
[Warning] ABSNL Cluster Resource State State Change ora.abs.db has instances in OFFLINE State  Nov 12, 2012 12:15:03 PM
We have a Java application which uses the ONS configuration. Currently the app's ONS nodes parameter points to the VIPs, i.e. Node1-vip:6200,Node2-vip:6200. Will the ONS nodes parameter support the RAC SCAN? Can I change it to nodes=RAC-scan:6200?
Grid Version: 11.2.0.3. OS: RHEL 5.8. I have successfully installed the cluster (GI) and installed the RDBMS software on my 2-node RAC. But I don't want to use dbca to create the database, due to some custom requirements. Instead, I want to run the CREATE DATABASE command manually to create the RAC DB. Once the DB is created on one node, what are the steps I need to do to make this DB a RAC DB?
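A hedged outline of the usual RAC-enablement steps after a manual CREATE DATABASE; the database name ORCL, the instance names ORCL1/ORCL2, the node names, and the +DATA disk group are all assumptions:

-- a second undo tablespace and a second redo thread for instance 2
CREATE UNDO TABLESPACE undotbs2 DATAFILE '+DATA' SIZE 500M;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 4 ('+DATA') SIZE 50M, GROUP 5 ('+DATA') SIZE 50M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;

-- instance-specific and cluster parameters in the spfile
ALTER SYSTEM SET instance_number=1 SCOPE=SPFILE SID='ORCL1';
ALTER SYSTEM SET instance_number=2 SCOPE=SPFILE SID='ORCL2';
ALTER SYSTEM SET thread=2 SCOPE=SPFILE SID='ORCL2';
ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='ORCL2';
ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';

-- build the cluster data dictionary views
@?/rdbms/admin/catclust.sql

Then register the database and instances with the clusterware:

srvctl add database -d ORCL -o $ORACLE_HOME
srvctl add instance -d ORCL -i ORCL1 -n node1
srvctl add instance -d ORCL -i ORCL2 -n node2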
2. In the srvctl add service command syntax there is a new option, [-l <primary | physical_standby | logical_standby>], which allows the service to start only when the service role matches the database role saved in the OCR.
We can now add a service to a database in the OCR using the following command:
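A hedged example of such a command (the database, service, and instance names are hypothetical):

srvctl add service -d orcl -s reporting_srv -r orcl1,orcl2 -l PRIMARY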
Would that be enough, or do I need to make other changes as well from the application perspective?
Another thing is that application deployment is done through Ant scripts, which at the moment copy some data into a /tmp/.. folder on the DB server and then run DB scripts. These scripts make some DDL changes and some record updates. Now with the RAC implementation these scripts will be run on one of the RAC nodes (instances), so do I need to run these DDL/record-update scripts on the other node in Oracle RAC, or will Oracle RAC take care of this automatically?
I have a two-node RAC on a test server. I installed the database software, but it failed with an error that OEM could not be installed. Now I need to install Enterprise Manager.
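One hedged approach is to configure DB Control afterwards with emca; the exact flags vary by release, so this is a sketch (run from the database home, answering the prompts for SID, port, and passwords):

emca -config dbcontrol db -repos create -cluster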
I want to install Oracle Real Application Clusters using a shared file system, not ASM. We are not creating LUNs for our data; the SAN admin will give us an NFS share where we will place the database.
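For Linux, Oracle documents specific NFS mount options for database files; a hedged example /etc/fstab entry (the NAS hostname, export path, and mount point are hypothetical):

# RAC datafiles on NFS (Linux x86-64)
nas01:/export/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0  0 0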
We are trying to increase ASM disk space, and while trying to allocate more space this question came to my mind. This was configured by my previous SA.
[root@oracledbtest1 ~]# /etc/init.d/oracleasm querydisk -d `/etc/init.d/oracleasm listdisks` | cut -f2,10,11 -d" " | perl -pe 's/"(.*)".*\[(.*), *(.*)\]/$1 $2 $3/g;' | while read v_asmdisk v_minor v_major
do
v_device=`ls -la /dev | grep " $v_minor, *$v_major " | awk '{print $10}'`
echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_minor, $v_major]"
done
[code]....
Why are my LUNs showing different sizes on RAC, and what is the best way to allocate the space to the disks in the above scenario?
I'm using ASM on LUNs from an EMC SAN, fronted by PowerPath. Right now I have only one fiber path to the SAN, so /dev/emcpowera3 maps directly to /dev/sda3, for example. Oracle had a typo in what they told me to do in /etc/sysconfig/oracleasm*, so the scan picks up both devices.
#/etc/init.d/oracleasm querydisk -p ASMVOL_01
Disk "ASMVOL_01" is a valid ASM disk /dev/emcpowera3: LABEL="ASMVOL_01" TYPE="oracleasm" /dev/sda3: LABEL="ASMVOL_01" TYPE="oracleasm"
But I don't think it can be using both. How do I see which one it's actually using?
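One hedged way to check: ASMLib exposes the device it actually bound under /dev/oracleasm/disks, so comparing major/minor numbers shows which candidate won the scan (the preference is governed by ORACLEASM_SCANORDER / ORACLEASM_SCANEXCLUDE in /etc/sysconfig/oracleasm):

# major,minor of the device ASMLib bound for this disk
ls -l /dev/oracleasm/disks/ASMVOL_01

# compare against the two candidates
ls -l /dev/emcpowera3 /dev/sda3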
We are preparing to purchase equipment for a two-node Oracle 10g RAC (on Windows Server 2008), and I need to configure the disks on the servers. We have an existing EMC VNX5500 storage array and 8Gb SAN switches.
My question is: with how many disks should I configure the servers? Originally we planned four SSD disks per node/server (two in RAID 1 for the OS and two in RAID 1 for the local Oracle home), with everything else on storage disks (shared disk storage).
Is it necessary to have more local disks, and for what? Where should the redo log files, archive log files, and backup files go?
Or would it be better to configure each server with six SSD disks: two in RAID 1 for the OS, and four in RAID 10 (or RAID 1) for the local Oracle home, archive log files, or something else?
Grid Version: 11.2.0.3. OS: Red Hat Enterprise Linux 5.6. Node2 of our two-node RAC got rebooted. Upon reboot, CRS and the ASM instance came up, but the DB didn't come up. How can I check whether the DB is linked to CRS startup? How can I enable DB startup upon CRS startup?
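A hedged sketch of the usual checks with srvctl (the database name is hypothetical):

# show the resource configuration, including the management policy
srvctl config database -d ORCL

# have CRS start the database automatically, and make sure the resource is enabled
srvctl modify database -d ORCL -y AUTOMATIC
srvctl enable database -d ORCL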
My client asked me to "apply the newest patch" on his RAC 10.2.0.5 environment on Windows.
I've downloaded 10.2.0.5 Patch 20 (the newest version).
My question is: should I apply it using OPatch to both ORACLE_HOME and CRS_HOME, or only to ORACLE_HOME? What about the 10.2.0.5.2 CRS patch? Should I apply earlier CPU patches?
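Whatever the answer, a hedged way to see what each home currently contains before patching (the Windows paths are hypothetical):

rem check the patch inventory of each home
set ORACLE_HOME=C:\oracle\product\10.2.0\db_1
%ORACLE_HOME%\OPatch\opatch lsinventory

set ORACLE_HOME=C:\oracle\product\10.2.0\crs
%ORACLE_HOME%\OPatch\opatch lsinventory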