Real Application Clusters :: Stored Procedure Killed Root Blocker In 10g?
Jan 30, 2013
I need a procedure that kills the root blocker in a 10g RAC environment. I found a stored procedure for 11g, which is simple, but 10g requires some expertise, because I want to just exec [procedure] and have it kill the root blocker whether it is on the local node or a remote one.
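A minimal sketch of one 10g approach, assuming the owner has SELECT granted directly on gv_$session and that job_queue_processes > 0 on every instance. Because ALTER SYSTEM KILL SESSION '...,@inst_id' only arrived in 11g, the remote case here is handled by pinning a one-off DBMS_JOB to the blocker's instance; the procedure name and the "kill the first root blocker found" policy are placeholders to adapt:

CREATE OR REPLACE PROCEDURE kill_root_blocker IS
  v_inst   gv$session.inst_id%TYPE;
  v_sid    gv$session.sid%TYPE;
  v_serial gv$session.serial#%TYPE;
  v_kill   VARCHAR2(200);
  v_job    BINARY_INTEGER;
BEGIN
  -- A root blocker is a session that blocks someone but is not itself blocked.
  SELECT inst_id, sid, serial#
    INTO v_inst, v_sid, v_serial
    FROM gv$session s
   WHERE s.blocking_session IS NULL
     AND EXISTS (SELECT 1
                   FROM gv$session w
                  WHERE w.blocking_session  = s.sid
                    AND w.blocking_instance = s.inst_id)
     AND ROWNUM = 1;

  v_kill := 'ALTER SYSTEM KILL SESSION '''
            || v_sid || ',' || v_serial || ''' IMMEDIATE';

  IF v_inst = SYS_CONTEXT('USERENV', 'INSTANCE') THEN
    EXECUTE IMMEDIATE v_kill;   -- blocker is local: kill it directly
  ELSE
    -- Blocker is remote: run the kill there via a job pinned to that instance.
    DBMS_JOB.SUBMIT(
      job      => v_job,
      what     => 'BEGIN EXECUTE IMMEDIATE q''[' || v_kill || ']''; END;',
      instance => v_inst);
    COMMIT;                     -- the job fires on the remote instance
  END IF;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    NULL;                       -- nothing is blocked right now
END;
/

After that, a single "exec kill_root_blocker" either kills the blocker locally or queues the kill on the remote node.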
I am on RAC 11gR2 on RHEL 5. When I install RAC, can I run root.sh on all nodes in parallel, or do I have to wait for root.sh to finish completely on the first node and only then run it on the second node, and so on for the third?
I am trying to build Oracle RAC 10gR2 on two nodes with Openfiler, using OCFS and ASM, but I did not have the Clusterware 10.2 software to install because Oracle has removed it from download, which is why I installed Clusterware 11.2 instead. After installing Clusterware 11.2 I checked the status, and the output below is the same on both nodes.
[oracle@linux2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
But when I try to install the Oracle 10gR2 software and reach the ASM disk configuration step, it says "in order to use Automatic Storage Management, the Oracle Cluster Synchronization Service (CSS) must be up and running" and tells me to run /u01/app/oracle/product/10.2.0/db_1/bin/localconfig reset as the root user. I ran it and it gave no error.
Is there any harm in moving the alert logs generated by the Grid Infrastructure? The alert log in the grid log directory has grown to around 2 GB. Our CRS version is 11.2.0.2 on Linux. Also, please provide a sample script for automating this.
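There is generally no harm in trimming it, but the CRS daemons keep the file open, so copy-then-truncate is safer than a plain mv (a moved file keeps receiving writes until the daemons are restarted). A sketch, assuming the usual 11.2 path layout under the grid home; adjust GRID_HOME and the retention to taste:

#!/bin/bash
# Rotate the Grid Infrastructure alert log via copy-then-truncate.
GRID_HOME=/u01/app/11.2.0/grid            # assumed grid home, adjust
HOST=$(hostname -s)
LOG=$GRID_HOME/log/$HOST/alert$HOST.log
STAMP=$(date +%Y%m%d%H%M)

cp -p "$LOG" "$LOG.$STAMP" && cat /dev/null > "$LOG"   # truncate in place
gzip "$LOG.$STAMP"
# Keep 90 days of compressed copies, drop anything older.
find "$(dirname "$LOG")" -name "alert$HOST.log.*.gz" -mtime +90 -delete

Scheduled from root's crontab, e.g.: 0 2 * * 0 /usr/local/bin/rotate_grid_alert.sh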
Oracle 11g ASM RAC under OEL 5.6. I just want to do a clone, so first I cloned my Oracle home, then I created a pfile, and I got this error:
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit
I googled it and found that the issue comes from needing to relink the RAC binaries, because the source database is RAC and the destination is not:
[oracle@backuptest ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@backuptest lib]$ make -f ins_rdbms.mk rac_off
bash: make: command not found
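The "make: command not found" just means the development tools are missing on the clone host. Something along these lines should get past it (package names as on RHEL/OEL; the ioracle target relinks the oracle binary after rac_off flips the flag):

# As root: install the build tools that relinking requires
yum install -y make gcc binutils

# As oracle: turn RAC off in the binaries, then relink
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off
make -f ins_rdbms.mk ioracle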
I'm trying to find a best-practices document from Oracle on using ASM to store the archive logs in RAC. Most of the DBAs I know create a non-ASM location for the archive logs. Are there any considerations if you are also using Data Guard?
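For reference, pointing the archive destination at an ASM disk group (or at the flash recovery area) is a one-parameter change across all instances; the +FRA disk group name below is a placeholder. A shared ASM location has the advantage that either node can ship or back up the other node's logs:

SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=+FRA' SCOPE=BOTH SID='*';
-- or, to archive into the flash recovery area instead:
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH SID='*';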
Why is it offline, what does this mean, and what is it used for? Does it have any effect on database availability? The output below is from OEM:
[Critical] ABSNL Cluster Resource State State Change ora.gsd has instances in OFFLINE State  Nov 12, 2012 12:15:03 PM
[Warning] ABSNL Cluster Resource State State Change ora.abs.db has instances in OFFLINE State  Nov 12, 2012 12:15:03 PM
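For context: in 11.2, the ora.gsd resource exists only to support pre-10g (9i) databases and is left OFFLINE by design, so by itself it has no effect on 10g/11g database availability (ora.abs.db being OFFLINE is a separate matter). If a 9i database ever had to be supported, something like the following should bring GSD up; check srvctl -h on your version first:

$ srvctl enable nodeapps -g
$ srvctl start nodeapps
$ crsctl status resource ora.gsd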
We have a Java application that uses the ONS configuration. Currently the app's ONS nodes parameter points to the VIPs, i.e. Node1-vip:6200,Node2-vip:6200. Will the ONS nodes parameter support the RAC SCAN? Can I change it to nodes=RAC-scan:6200?
Grid version: 11.2.0.3, OS: RHEL 5.8. I have successfully installed the cluster (GI) and the RDBMS software on my 2-node RAC. But I don't want to use DBCA to create the database, due to some custom requirements. Instead, I want to run the CREATE DATABASE command manually to create the RAC DB. Once the DB is created on one node, what are the steps I need to take to make it a RAC DB?
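At a high level: add the second redo thread and undo tablespace, flip cluster_database, run catclust.sql, and register the database with Clusterware. A sketch with placeholder names (database mydb, instances mydb1/mydb2, nodes racnode1/racnode2) and token sizes:

SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE SIZE 500M;
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 50M, GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET instance_number=1 SCOPE=SPFILE SID='mydb1';
SQL> ALTER SYSTEM SET instance_number=2 SCOPE=SPFILE SID='mydb2';
SQL> ALTER SYSTEM SET thread=1 SCOPE=SPFILE SID='mydb1';
SQL> ALTER SYSTEM SET thread=2 SCOPE=SPFILE SID='mydb2';
SQL> ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='mydb2';
SQL> @?/rdbms/admin/catclust.sql        -- creates the RAC dictionary views

$ srvctl add database -d mydb -o $ORACLE_HOME -p +DATA/mydb/spfilemydb.ora
$ srvctl add instance -d mydb -i mydb1 -n racnode1
$ srvctl add instance -d mydb -i mydb2 -n racnode2
$ srvctl start database -d mydb

The spfile and all datafiles must live on shared storage (ASM here) so both instances can reach them.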
2. In the srvctl add service command syntax there is a new option, [-l <PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY>], which allows the service to start only when the service's role matches the database role stored in the OCR.
We can now add a service to a database in the OCR using the following command:
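A representative form, with placeholder database, service, and instance names (the role value comes from the -l option described above):

$ srvctl add service -d mydb -s myservice -r mydb1 -a mydb2 -l PRIMARY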
Would that be enough, or do I need to make other changes from the application perspective as well?
Another thing is that application deployment is done through Ant scripts, which at the moment copy some data into a /tmp/.. folder on the DB server and then run DB scripts. These scripts make some DDL changes and some record updates. With the RAC implementation these scripts will run on one of the RAC nodes (instances), so do I need to run these DDL/record-update scripts on the other node as well, or does Oracle RAC take care of this automatically?
I have a two-node RAC on a test server. I installed the database software, but it failed with an error saying OEM could not be installed. Now I need to install Enterprise Manager.
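If only the DB Control configuration failed, emca can usually be rerun after the fact. A sketch for a RAC database (it prompts for the database name, listener port, and passwords):

$ emca -config dbcontrol db -repos create -cluster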
We are preparing to purchase equipment for a two-node Oracle 10g RAC (on Windows Server 2008), and I need to configure the disks on the servers. We have an existing EMC VNX5500 storage array and 8Gb SAN switches.
My question is: with how many disks should the servers be configured? Originally we planned four SSDs per node/server (two in RAID 1 for the OS and two in RAID 1 for the local Oracle home), with everything else on the storage array (shared disk storage).
Is it necessary to have more local disks, and for what? Where should the redo log files, archive log files, and backup files go…
Or is it better to configure each server with six SSDs: two in RAID 1 for the OS and four in RAID 10 (or RAID 1) for the local Oracle home, archive log files, or something else…?
Grid version: 11.2.0.3, OS: Red Hat Enterprise Linux 5.6. Node2 of our two-node RAC got rebooted. Upon reboot, CRS and the ASM instance came up, but the DB didn't. How can I check whether the DB is tied to CRS startup, and how can I enable DB startup when CRS starts?
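In 11.2 this is governed by the database resource's management policy in Clusterware. A quick check and the change, assuming database name mydb:

$ srvctl config database -d mydb | grep -i policy   # AUTOMATIC or MANUAL
$ srvctl modify database -d mydb -y AUTOMATIC       # have CRS start the DB

Note that even with AUTOMATIC, Clusterware restores the state the database was in when the node went down, so a database that was cleanly stopped beforehand stays down.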
My client asked me to "apply the newest patch" on his RAC 10.2.0.5 environment on Windows.
I've downloaded 10.2.0.5 Patch 20 (the newest version).
My question is: should I apply it using OPatch to both ORACLE_HOME and CRS_HOME, or only to ORACLE_HOME? What about the 10.2.0.5.2 CRS patch? Should I apply earlier CPU patches?
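Whatever the bundle's readme says takes precedence, but it is worth confirming what each home already contains before and after patching; a sketch on Windows, with placeholder paths:

C:\> set ORACLE_HOME=D:\oracle\product\10.2.0\db_1
C:\> %ORACLE_HOME%\OPatch\opatch lsinventory

C:\> set ORACLE_HOME=D:\oracle\product\10.2.0\crs
C:\> %ORACLE_HOME%\OPatch\opatch lsinventory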
I have a 2-node 11.2.0.3 RAC database; the OS version is RHEL 6.1.
Our existing Grid Control server setup is 10gR2. Can I install the 10.2.0.5 agent on my RAC servers and use the 10g Grid Control to monitor my 11.2.0.3 database (RHEL 6.1)?
I have installed Oracle RAC 10g on Red Hat Linux 4.0. Until yesterday failover was working: when I stopped the instance on node01, the VIP of node01 was transferred to node02, as shown by ifconfig -a. But now that is not happening.
The information below is given:
[oracle@node01 ~]$ crs_stat -t
Name            Type         Target    State     Host
------------------------------------------------------------
ora.hitesh.db   application  ONLINE    ONLINE    node02
ora....h1.inst  application  ONLINE    ONLINE    node01
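To see where Clusterware itself thinks the VIPs are, independent of ifconfig, something like the following helps (node names taken from the listing above):

[oracle@node01 ~]$ srvctl status nodeapps -n node01
[oracle@node01 ~]$ srvctl status nodeapps -n node02
[oracle@node01 ~]$ crs_stat -t | grep -i vip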
I am working on Oracle 10.2.0.4 RAC with 2 node instances on AIX. We have one table with multiple rows defining jobs to be done by users. The functionality is that when a user picks one row (job), an update statement is issued that sets the status column to 1, meaning the job is allocated; after that, the job should not be allocated to any other user.
But we are facing an issue: once a user picks a job, the application log files show the row being locked and the status updated to 1, yet users connecting to the other (and sometimes the same) instance still get that job. In other words, multiple users can pick the same job even after it has already been picked by another user.
Could this be an issue with the RAC configuration? For example: when one user updates a row it sits in the cache of one instance, and when another user tries to update it, the second instance doesn't yet have that information and lets its user pick the job, i.e. a delay in block transfer or Cache Fusion?
Is this a RAC issue, or purely an application issue?
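For what it's worth, Cache Fusion keeps blocks consistent across instances and row locks are cluster-wide, so this pattern usually points at the application reading the status without locking and then updating unconditionally. A minimal sketch of a claim that cannot double-allocate (table and column names jobs, job_id, status are placeholders):

-- The UPDATE only succeeds if the row is still free, on any instance.
UPDATE jobs
   SET status = 1
 WHERE job_id = :job_id
   AND status = 0;
-- The application then checks the update count (SQL%ROWCOUNT in PL/SQL,
-- executeUpdate()'s return value in JDBC): 1 means this session won the
-- job, 0 means another session already took it.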
I am an Oracle RACSIG (URL....) member, but since last week I have not been able to play any of the recorded web seminars. I get the error "The webpage cannot be found HTTP 404". I can view the PDFs but not the recordings.
I got the following error while trying to restart a 3-node RAC database. OEM reported two of the three nodes as down. We also noticed that the cluster_database parameter was set to FALSE and the cluster_database_instances integer was set to 1. We changed the parameters to TRUE and 3 respectively, and when trying to restart the database we get the error message shown below.
CRS-5017: The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode.
For details refer to "(:CLSN00107:)" in "/oradba/app/grid/11.2.0.3_a/log/abcde104/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde104' failed
CRS-2632: There are no more servers to try to place resource 'ora.abc123.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode.
For details refer to "(:CLSN00107:)" in "/oradba/app/grid/11.2.0.3_a/log/abcde103/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde103' failed
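ORA-01102 here fits a cluster_database change that did not reach every instance: the parameter must be TRUE in the spfile for all instances before any of them can mount in shared mode (a change made with SCOPE=MEMORY or for a single SID would not be enough). A sketch of the usual fix, assuming a shared spfile and the database name abc123 from the errors above:

-- From one instance, started NOMOUNT if it will not go further:
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
SQL> ALTER SYSTEM SET cluster_database_instances=3 SCOPE=SPFILE SID='*';
SQL> SHUTDOWN IMMEDIATE
$ srvctl start database -d abc123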
I recently installed a 2-node Oracle 11g RAC on RHEL 5. While creating the clustered database, database creation on the second node (racnode2) failed. So I connected / as sysdba on that node and executed startup, only to get this error message:
SQL> startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+RACDB_DATA/RACDB/spfileRACDB.ora'
ORA-17503: ksfdopn:2 Failed to open file +RACDB_DATA/RACDB/spfileRACDB.ora
ORA-01034: ORACLE not available
ORA-27123: unable to attach to shared memory segment
Linux Error: 13: Permission denied
Additional information: 196612
Additional information: 10
SQL>
But '+RACDB_DATA/RACDB/spfileRACDB.ora' is present.
I strongly believe it is a permission issue: processes owned by the oracle user cannot access disks owned by the grid user. I checked the permissions on the disks:
[oracle@racnode2 ~]$ cd /dev/oracleasm/disks
[oracle@racnode2 disks]$ ls -l
total 0
brw-rw---- 1 grid asmadmin 8, 65 Nov 19 19:15 CRSVOL1
brw-rw---- 1 grid asmadmin 8, 49 Nov 19 19:15 DATAVOL1
brw-rw---- 1 grid asmadmin 8, 81 Nov 19 19:15 FRAVOL1
[oracle@racnode2 disks]$
And the oracle user also has asmdba among other privileges:
[oracle@racnode2 disks]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper) context=user_u:system_r:unconfined_t
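A common cause of exactly this ORA-27123/permission-denied pattern in a role-separated install is the group ownership of the oracle binary in the database home: it should be asmadmin with the setuid/setgid bits set, so that database processes can reach ASM. A sketch of the check and the usual repair (paths are placeholders):

$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle asmadmin ...        # expected group and s-bits
# If the group shows oinstall instead, fix it from the grid home:
$ $GRID_HOME/bin/setasmgidwrap o=$ORACLE_HOME/bin/oracle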
I am doing an upgrade from 10gR2 standalone (without ASM) to an 11gR2 2-node cluster (on ASM), and the database is somewhere around 1 TB in size. My problem is that if I do an export and import, the live system needs to be down for over a day. Is there a better way of doing this, and how?
In our 2-node RAC, Node1 was hung for a while, so I wanted to restart that instance. But I had some 50 services running on Instance 1 (the preferred instance) that had to be relocated before I bounced the instance.
Question 1: Instead of using srvctl relocate for each service, is there any way to quickly relocate all services to Node2 (the available instance)?
Question 2: When a service is relocated, are DMLs running in sessions using that service 'moved' to the available instance, or is it just the SELECTs that can be failed over?
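On Question 1, there is no single bulk-relocate verb, but a loop over srvctl output does it; a sketch assuming database mydb with instances mydb1/mydb2 (the awk parsing may need adjusting to your srvctl version's exact wording):

#!/bin/bash
# Relocate every service currently running on mydb1 over to mydb2.
DB=mydb; FROM=mydb1; TO=mydb2
for svc in $(srvctl status service -d "$DB" |
             awk -v from="$FROM" '/is running on/ && $0 ~ from {print $2}'); do
  srvctl relocate service -d "$DB" -s "$svc" -i "$FROM" -t "$TO"
done

On Question 2, with TAF only in-flight SELECTs can fail over; open transactions are rolled back, so running DMLs are not moved and must be resubmitted on the surviving instance.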
I am running a two-node RAC on Grid 11.2 with DB 11.2.0.3. My application does not like the SCAN listener, so I have configured the remote_listener parameter to point at the VIPs of the two hosts, which works fine.
But it turns out that the system needs the service registered with the SCAN listener. So how can I add a static service entry for my SCAN listener?
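One approach (hedged, since SCAN listeners float between nodes and static entries bypass the load-balancing data that dynamic registration supplies): add a SID_LIST for the SCAN listener in the Grid home's listener.ora on the node currently hosting it, then reload. The service name, home path, and SID below are placeholders:

# $GRID_HOME/network/admin/listener.ora on the node running LISTENER_SCAN1
SID_LIST_LISTENER_SCAN1 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = myservice.example.com)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = mydb1)
    )
  )

$ lsnrctl reload LISTENER_SCAN1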