I am installing the 11gR2 (11.2.0.2) Grid Infrastructure software. While running root.sh on node 1, it failed:
Start of resource "ora.crsd" failed
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-5017: The resource action "ora.crsd start" encountered the following error:
Start action for daemon aborted
CRS-2674: Start of 'ora.crsd' on 'rac1' failed
CRS-2679: Attempting to clean 'ora.crsd' on 'rac1'
CRS-2681: Clean of 'ora.crsd' on 'rac1' succeeded
CRS-4000: Command Start failed, or completed with errors.
Clusterware exclusive mode start of Clusterware Ready Services failed at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 6475.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
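A hedged first step for this kind of ora.crsd start failure (an assumption about where to look, not a confirmed fix) is to read the clusterware alert log and the crsd log under the Grid home for that node, since root.sh only reports the symptom:

# Clusterware alert log and crsd log for node rac1 (paths assume the Grid home shown above)
less /u01/app/11.2.0/grid/log/rac1/alertrac1.log
less /u01/app/11.2.0/grid/log/rac1/crsd/crsd.log

The actual cause (often storage or permission problems on the OCR/voting files, but not always) should appear there, and the fix depends on what those logs show.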
I am trying to install Oracle 11gR2 RAC on CentOS 5.5.
When I run root.sh on the first node, I get two or three failed command lines, but the script ends as successful:
=======================================================================
ohasd is starting
ADVM/ACFS is not supported on centos-release-5-5.el5.centos
...
add nodeapps -n perflabhp03 -A perflabhp03-vip/255.255.254.0/eth0 on node=perflabhp03 ... failed
...
PRCR-1001 : Resource ora.net1.network does not exist
add scan=perflab-cluster-scan ... failed
Configure Oracle Grid Infrastructure for a Cluster ... failed
...
'UpdateNodeList' was successful.
=========================================================================
Because of this output, when I look at ./crsctl stat res -t, it shows the LISTENER offline for perflabhp03, and I cannot see any status line for "perflabhp03-vip" or the SCAN listener.
Do I need to reinstall the entire cluster setup because of this VIP issue?
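A full reinstall is usually not required for this. Assuming the clusterware stack itself is up and only the network/VIP/SCAN resources are missing (which is what the PRCR-1001 on ora.net1.network suggests), a sketch of the manual registration follows, using the names from the output above and a placeholder for the public subnet; the network/VIP/SCAN additions are normally run as root with srvctl from the Grid home, the SCAN listener as the Grid software owner:

srvctl add network -k 1 -S <public-subnet>/255.255.254.0/eth0   # placeholder: use the network portion of the VIP address
srvctl add nodeapps -n perflabhp03 -A perflabhp03-vip/255.255.254.0/eth0
srvctl add scan -n perflab-cluster-scan
srvctl add scan_listener
srvctl start nodeapps -n perflabhp03
srvctl start scan
srvctl start scan_listener

After that, crsctl stat res -t should show the VIP, listener and SCAN resources again.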
Enter the full pathname of the local bin directory: [/usr/local/bin]: /usr/local/bin
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2012-12-11 17:31:03: Parsing the host name
2012-12-11 17:31:03: Checking for super user privileges
2012-12-11 17:31:03: User has super user privileges
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Improper Oracle Clusterware configuration found on this host
Deconfigure the existing cluster configuration before starting

[code]....
CRS-4000: Command Start failed, or completed with errors.
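The "Improper Oracle Clusterware configuration found" message generally means an earlier root.sh attempt left a partial configuration behind. The usual sequence (a sketch assuming the Grid home shown above) is to deconfigure on that node and then rerun root.sh once the original cause has been addressed:

/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib \
    -I/u01/app/11.2.0/grid/crs/install \
    /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
/u01/app/11.2.0/grid/root.sh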
During installation of Oracle RAC 11gR1 on Linux using VMware Server, all the pre-cluster installation checks were successful, and root.sh on node 1 was successful too, but whenever I run the root.sh script on the second node I get the error message "Failure at final check of Oracle CRS stack. 10".
I am installing Oracle RAC 11.2.0.1. After I installed the Grid Infrastructure, when I ran root.sh, I got the errors below:
ASM failed to start. Check /d1/app/grid/cfgtoollogs/asmca/asmca-1109068AM4612.log for details.
Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
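A couple of hedged checks before rerunning the ASM configuration (just a sketch; $GRID_HOME stands for your Grid Infrastructure home, which is not shown in the post):

# ASM cannot start unless the lower clusterware stack (ohasd/cssd) is up
$GRID_HOME/bin/crsctl check crs

# State of the ASM resource, then the asmca log named in the error
$GRID_HOME/bin/crsctl stat res ora.asm -t
less /d1/app/grid/cfgtoollogs/asmca/asmca-1109068AM4612.log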
I'm trying to install Oracle 11gR2 RAC on AIX. Do we need to turn on multicasting, or can we install without it? Also, can I use ASM disks with external redundancy for the OCR and voting disk?
I have a 2-node 11gR2 RAC running on AIX 6.1. After I shut down the database and restart CRS using crsctl start crs, the ASM instance comes back up but the database does not, and I have to start it with srvctl. Shouldn't the database come up when I start CRS?
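In 11gR2 the database resource is only started automatically with the stack when its management policy is AUTOMATIC and its target is ONLINE. A minimal sketch for checking and changing that, assuming a database unique name of orcl (substitute your own):

srvctl config database -d orcl -a        # look for "Management policy: AUTOMATIC" vs "MANUAL"
srvctl modify database -d orcl -y AUTOMATIC

Also worth noting: if the database was stopped with srvctl stop database (or a manual SHUTDOWN) before the stack went down, its target is left OFFLINE and Clusterware will not restart it on crsctl start crs, whereas stopping the whole stack with crsctl stop crs while the database is running normally leaves the target ONLINE so it restarts with the stack.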
Yesterday I was installing the 11.2.0.2 patch set on our Oracle 11gR2 (11.2.0.1.0) two-node RAC cluster on OEL 5.4 x86 Linux. Before installing, I read the [URL] support note, which says: "Note: All Oracle Grid Infrastructure patch set upgrades must be out-of-place upgrades, in which you install the patch set into a new Grid home. In-place patch set upgrades are not supported."
During installation I found the following installation options:
1) Install and Configure Oracle Grid Infrastructure for a Cluster
2) Configure Oracle Grid Infrastructure for a Standalone Server
3) Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management
4) Install Oracle Grid Infrastructure Software Only
I have some confusion regarding the 11gR2 RAC installation:
- Is Grid Infrastructure the only way to install the 11gR2 Clusterware? My requirement is to install only the Clusterware software for 11gR2 (for 11gR1 we have "linux.x64_11gR1_clusterware.zip"), using the OCFS2 file system, with archive mode not enabled.
I am trying to install a 2-node RAC on Oracle VMs. Before the installation, the -preinst check reported a few issues, which were resolved (e.g., user equivalence). After that, the Grid installation failed at the step "Configure Oracle Grid Infrastructure for a cluster". Once it failed at this step, the subsequent steps failed too; I asked OUI to ignore them and then ran both post-installation scripts. I then ran the post crsinst check, which also failed. Pasting below the output of the root.sh script, the post crsinst check, and other checks.
*************************************
[root@bsfrac01 grid]# sh root.sh
Running Oracle 11g root.sh script...
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created.
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-02-13 00:11:55: Parsing the host name
2011-02-13 00:11:55: Checking for super user privileges
2011-02-13 00:11:55: User has super user privileges
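Once the configuration step is repaired, rerunning the post-install verification usually shows what is still missing. A sketch, assuming the Grid home is owned by the grid/oracle user and the second node is named bsfrac02 (a guess; only bsfrac01 appears above):

# Run as the Grid software owner, not root
$GRID_HOME/bin/cluvfy stage -post crsinst -n bsfrac01,bsfrac02 -verbose
$GRID_HOME/bin/crsctl check cluster -all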
Every time the interconnect IP ping fails between the RAC servers, CRS causes the server to reboot. As documented, the timeout is 3 seconds. Can we alter this setting to increase the private-IP ping-failure timeout beyond 3 seconds?
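The setting involved here is the CSS misscount (on Linux with Oracle Clusterware it typically defaults to 30 seconds rather than 3). It can be read, and with care, and normally only under Oracle Support guidance, changed, with crsctl. A sketch:

crsctl get css misscount
# Raising it is generally discouraged without Oracle Support direction:
crsctl set css misscount 60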
I tried to install 11gR2 Grid Infrastructure for a 2-node RAC using VMware, as per the instructions provided in the link below.
[URL]
The installation went fine without any issues until 65%, but after reaching the step "updating '/u01/app/11.2.0/grid/install/cluster.ini'" it hangs. In the installation log file, I found these details.
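When the installer appears hung at that step, watching the OUI session log often shows whether it is genuinely stuck. A sketch, assuming the central inventory is at /u01/app/oraInventory (check /etc/oraInst.loc for the real location):

cat /etc/oraInst.loc                                        # confirms the inventory location
ls -lt /u01/app/oraInventory/logs/installActions*.log | head
tail -f /u01/app/oraInventory/logs/installActions*.log      # watch while the installer sits at 65%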
The installation of CRS is successful, and at the end it asks to execute two scripts on both nodes, i.e. orainstRoot.sh and root.sh.
When the scripts are executed, orainstRoot.sh runs successfully, but root.sh gives the following error:
[root@rac2host crs]# ./root.sh
WARNING: directory '/home/oracle/product' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
[code]....
Is this related to permissions on the OCR and voting disk?
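A hedged way to check whether OCR/voting-disk placement or permissions are involved (paths and device names below are illustrative only; use the locations these commands report):

# OCR integrity and location
$ORA_CRS_HOME/bin/ocrcheck

# Voting disk location(s)
$ORA_CRS_HOME/bin/crsctl query css votedisk

# Then verify ownership/permissions on whatever devices/files were listed, for example:
ls -l /dev/raw/raw1 /dev/raw/raw2      # hypothetical device names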
RAC installation on VMware is failing with the following errors. I followed the link below for the installation: URL..... These are the errors I found:
[root@rac2 ~]# cd /crs/oracle/bin/
[root@rac2 bin]# ./vipca
PRKR-1062 : Failed to find configuration for node rac1
PRKR-1062 : Failed to find configuration for node rac1

[code]....
The "/crs/oracle/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.The "/crs/ oracle/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
[root@rac2 bin]# /crs/oracle/bin/racgons add_config rac1.localdomain:6200 rac2.localdomain:6200
[root@rac2 bin]# /crs/oracle/bin/oifcfg setif -global eth0/192.168.2.0:public eth1/192.168.0.0:cluster_interconnect
PRIF-10: failed to initialize the cluster registry
[root@rac2 bin]# /crs/oracle/bin/oifcfg setif -global eth0/192.168.2.0:public eth1/192.168.0.0:cluster_interconnect
PRIF-10: failed to initialize the cluster registry
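PRIF-10 usually means oifcfg cannot reach the OCR from that node (the CRS stack or the OCR registration is not usable there). A few hedged checks from the same CRS home before retrying setif:

# Is the CRS stack actually up on this node?
/crs/oracle/bin/crsctl check crs

# Can oifcfg see the interfaces and any existing global definitions?
/crs/oracle/bin/oifcfg iflist
/crs/oracle/bin/oifcfg getif

# Does the OCR location file exist and point at readable storage?
cat /etc/oracle/ocr.loc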
I am trying to correct the "NMO NOT SETUID-ROOT (UNIX-ONLY)" error on a production database (9..7). Googling this error, it says to run root.sh as root. Should it be run in /usr/local/bin, which is the default location?
I don't believe there would be any problems, but I would like some confirmation, to be safe.
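For reference, the root.sh in question is the one inside the ORACLE_HOME (or agent home) whose nmo binary raises the error, not anything under /usr/local/bin; what it effectively does for this error is make nmo and nmb setuid root. A hedged sketch of the usual fix:

# As root, from the home that owns the failing nmo binary
cd $ORACLE_HOME
./root.sh

# Afterwards nmo/nmb should be owned by root with the setuid bit, roughly:
ls -l $ORACLE_HOME/bin/nmo $ORACLE_HOME/bin/nmb
# expected to look like: -rwsr-x---  1 root  dba  ...  nmo

As far as I know it only adjusts a handful of files (the setuid binaries plus the dbhome/oraenv/coraenv copies), but if in doubt, test it on a non-production home first.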
I'm having issues with users logging into Oracle. I installed Oracle 11.2 XE on Ubuntu 12.04. If I am root, I can run sqlplus just fine and log in. But when I use a regular user account and run sqlplus, it just stays blank: no error messages or any feedback. I echo $ORACLE_HOME and $ORACLE_SID and they come back exactly as they do under the root account. My PATH is set up just like root's.
It almost seems like a permission issue, but even when I try sqlplus /nolog it stays blank.
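If it is a permissions problem, these checks usually narrow it down (a sketch, assuming the default 11.2 XE layout under /u01/app/oracle/product/11.2.0/xe and a dba group; adjust if your install differs):

id                                             # is the user in the dba group?
ls -ld $ORACLE_HOME $ORACLE_HOME/bin           # can the user traverse the Oracle home?
ls -l  $ORACLE_HOME/bin/sqlplus                # and execute sqlplus itself?

# Source the XE environment script for this shell, then retry
. /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh
sqlplus /nolog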
We have a 2-node RAC running on 11.2.0.2, and last night the database was totally unresponsive. When I checked the ADDM report, I noticed the following:
Waiting for event "cursor: pin S wait on X" in wait class "Concurrency" accounted for 98% of the database time spent processing the SQL statement with SQL_ID "4b2epo0eaqol9". I am wondering what options I have here; I am looking to do the following:
1) Find the root cause of why the database was unresponsive.
2) ADDM lists the query; what further options do we have?
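For the root-cause part, one hedged approach is to look at what ASH recorded as the blocker for that SQL_ID during the incident (this relies on AWR/ASH, i.e. the Diagnostics Pack, and the SQL_ID comes from the ADDM finding above):

sqlplus / as sysdba <<'SQL'
select blocking_session, blocking_session_serial#, count(*) samples
from   dba_hist_active_sess_history
where  sql_id = '4b2epo0eaqol9'
and    event  = 'cursor: pin S wait on X'
group  by blocking_session, blocking_session_serial#
order  by samples desc;
SQL

The session holding the mutex is typically one hard-parsing or recompiling that cursor; what to do next (fix the blocking session, reduce child-cursor/version counts, apply a patch) depends on what it turns out to be.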
I need to know the impact on my Oracle Database 10gR2 if I change the root/oracle OS passwords in my Oracle RAC environment. My database uses ASM and the nodes run Red Hat 4.7.
How do I find the root cause of a temporary tablespace growing unexpectedly, and how do I reclaim the grown space automatically after the transaction is over?
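A hedged starting point: space inside a temp tablespace is returned to its free pool when the sort/hash/global-temporary-table usage ends, but the tempfile itself does not shrink on its own, so the usual approach is to catch the sessions and SQL consuming it while it is growing, roughly like this:

sqlplus / as sysdba <<'SQL'
-- who is using temp space right now, and with which SQL
select s.sid, s.serial#, s.username, u.tablespace, u.segtype,
       round(u.blocks * t.block_size / 1024 / 1024) mb, u.sql_id
from   v$tempseg_usage u
join   v$session s       on s.saddr = u.session_addr
join   dba_tablespaces t on t.tablespace_name = u.tablespace
order  by mb desc;

-- from 11g onwards the tempfile space can be shrunk once it is free
alter tablespace temp shrink space;
SQL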