I have a 2-node 11.2.0.3 RAC database; the OS version is RHEL 6.1.
Our existing Grid Control server setup is 10gR2. Can I install the 10.2.0.5 agent on my RAC servers and use the 10g Grid Control to monitor my 11.2.0.3 database (RHEL 6.1)?
I ordered the book (Oracle Grid & Real Application Clusters: Oracle Grid Computing with RAC) from Rampant TechPress three months ago and still have not received it, as it is not printed yet or, as I am told, is still in press. I would be glad to have a copy as soon as it is published.
We install Grid using the Grid Installer GUI. When doing a fresh installation of Grid Infrastructure 11.2.0.2 or 11.2.0.3 on a new server, I want to apply its latest PSU as well. When should I do this PSU patching?
A. Should I apply the GI PSU after the entire Grid installation is successfully completed?
or
B. Should I apply the GI PSU just before running the root.sh ?
i.e. when the installer prompts you to run root.sh (see the screenshot below)
[URL].........
At this stage (before running root.sh) the GI binaries are in place. So, I can apply the PSU on these binaries, right? Should I follow option A or B above?
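For what it's worth, here is a minimal sketch of option A (applying the PSU once the installation, including root.sh, has completed), assuming a hypothetical Grid home of /u01/app/11.2.0/grid and the PSU staged under /u01/stage/psu; opatch auto is run as root and patches the home it is pointed at:

# as root, with the PSU already unzipped into /u01/stage/psu
/u01/app/11.2.0/grid/OPatch/opatch auto /u01/stage/psu -oh /u01/app/11.2.0/grid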
Grid version: 11.2.0.3, OS: Red Hat Enterprise Linux 5.4
A few months back, in our RAC cluster, while taking an expdp backup to a local Linux-formatted filesystem, I got some errors. I don't quite remember the error code or the scenario now, as I had too much work that day. The issue was fixed only when we used an ACFS filesystem location as the directory object for expdp.
Today, in the same RAC cluster, to reproduce that issue, I tested taking an expdp backup to a local Linux-formatted filesystem (/home/oracle/pumpDir) and the expdp completed without any issues.
Are there any known issues with expdp/impdp in a RAC cluster environment because of using a local Linux filesystem?
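For what it's worth, one thing that can cause this in RAC is parallel Data Pump workers starting on another instance where a node-local path does not exist. A minimal sketch that keeps the job on the connected instance, assuming 11.2's CLUSTER parameter and a hypothetical directory object and schema:

SQL> CREATE OR REPLACE DIRECTORY dpump_local AS '/home/oracle/pumpDir';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_local TO scott;
$ expdp scott/tiger directory=dpump_local dumpfile=scott.dmp logfile=scott.log cluster=N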
I am new to RAC, though I have some systems to look after. I want to know: just as a single instance has an alert log and trace files to investigate when there is a problem, which logs should I look at in a RAC system, under the grid and oracle users, to monitor incidents or do a regular health check?
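As a rough sketch of where the main 11.2 logs usually live (paths assume a hypothetical Grid home of /u01/app/11.2.0/grid and hostname node1; adjust for your environment):

# Clusterware alert log and component logs, owned by the grid user
/u01/app/11.2.0/grid/log/node1/alertnode1.log
/u01/app/11.2.0/grid/log/node1/crsd/crsd.log
/u01/app/11.2.0/grid/log/node1/cssd/ocssd.log
# ASM and database alert logs live under the ADR; adrci lists all homes
$ adrci exec="show homes"
$ tail -f $ORACLE_BASE/diag/rdbms/<dbname>/<SID>/trace/alert_<SID>.log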
I have a server with Red Hat EL 5.5 running an Oracle Database 10gR2 and an Oracle Agent 10.2.0.5. The disk ran out of space and I realized that there are a lot of files starting with core.xxx, such as:
-rw------- 1 gridagent oinstall 17M Aug 2 13:10 core.10348
-rw------- 1 gridagent oinstall 17M Aug 2 13:15 core.10827
-rw------- 1 gridagent oinstall 17M Aug 2 12:30 core.4129
-rw------- 1 gridagent oinstall 17M Aug 2 12:35 core.4772
What are these? Why are those files generated at $AGENT_HOME/<HOST>_<SID>/sysman/log/ ?
There are more than 24 GB of those files
In nmc.log I can see lines such as:
NMC-00000 2010-08-02 13:25:39 [11760, nmcdbg.c,0440]TRC: Debug context enabled.
NMC-20020 2010-07-30 01:31:58 [23083, nmccole.c,0725]ERR: Could not collect using DSGA Collection method
NMC-20014 2010-07-29 01:31:56 [11258, nmccole.c,0822]ERR: Could not attach to the SGA
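Not an explanation of the root cause, but as a stop-gap for the space pressure, a minimal cleanup sketch using the same path as above (run it with -print first to verify what would be removed):

$ find $AGENT_HOME/<HOST>_<SID>/sysman/log -name 'core.*' -mtime +7 -print
$ find $AGENT_HOME/<HOST>_<SID>/sysman/log -name 'core.*' -mtime +7 -delete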
I am trying to build Oracle RAC 10gR2 on two nodes with Openfiler, using OCFS and ASM. But I didn't have the Clusterware 10.2 software to install, because Oracle has removed it from the download site, so I installed Clusterware 11.2 instead. After installing Clusterware 11.2 I checked the status, and the output below is the same on both nodes.
[oracle@linux2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
But when I try to install the Oracle 10gR2 software and reach the ASM disk configuration step, it says "in order to use AUTOMATIC STORAGE MANAGEMENT the Oracle Cluster Synchronization Services (CSS) must be up and running" and tells me to run /u01/app/oracle/product/10.2.0/db_1/bin/localconfig reset as the root user. I ran it and it gave no error.
Is there any harm in moving the alert logs generated by the Grid Infrastructure? The alert log in the Grid log directory has grown to around 2 GB. Our CRS version is 11.2.0.2 on Linux. Also, please provide a sample script for automating this.
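A minimal sketch of such a script, assuming a hypothetical Grid home of /u01/app/11.2.0/grid and hostname node1; it keeps a dated copy and truncates the live file in place, so the file handle held by the writing process stays valid:

#!/bin/bash
# rotate_crs_alert.sh - keep a dated copy of the Clusterware alert log and truncate it
GRID_HOME=/u01/app/11.2.0/grid            # adjust to your environment
HOST=node1                                # adjust to your hostname
LOG=$GRID_HOME/log/$HOST/alert$HOST.log
cp -p "$LOG" "$LOG.$(date +%Y%m%d)"       # dated copy
cat /dev/null > "$LOG"                    # truncate without removing the open file
find "$(dirname "$LOG")" -name "alert$HOST.log.*" -mtime +30 -delete   # purge copies older than 30 days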
Oracle 11g ASM RAC under OEL 5.6. I just want to do a clone, so first I cloned my Oracle home, then I created a pfile, and I get this error:
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit
I googled it and found that this issue is with relinking the RAC binaries, because the source database is RAC and the destination is not.
[oracle@backuptest ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@backuptest lib]$ make -f ins_rdbms.mk rac_off
bash: make: command not found
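For reference, the usual sequence on a clone where the source home was RAC and the target is single-instance looks roughly like the sketch below; it assumes make is simply missing from the server and can be installed from the OS repository, and that the oracle binary is relinked afterwards with the ioracle target:

# as root: install the build tools needed for relinking
yum install -y make gcc binutils
# as oracle: turn the RAC option off and relink the oracle binary
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off
make -f ins_rdbms.mk ioracle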
I am trying to find a best-practices document from Oracle regarding the use of ASM to store the archive logs in RAC. Most of the DBAs I know create a non-ASM location to store the archive logs. Are there any considerations if you are also using Data Guard?
Why is it offline? What does this mean, and what is it used for? Does it have any effect on database availability? The output below is from OEM:
[Critical] ABSNL Cluster Resource State State Change ora.gsd has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
[Warning] ABSNL Cluster Resource State State Change ora.abs.db has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
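For what it's worth, ora.gsd is OFFLINE by default in 11.2 (GSD is only needed when 9i databases run on the cluster), so on its own it does not affect database availability; its state can be checked with something like:

$ crsctl status resource ora.gsd -t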
We have a Java application which uses the ONS configuration. Currently the application's ONS nodes parameter is pointing to the VIPs, i.e. Node1-vip:6200,Node2-vip:6200. Will the ONS nodes parameter support the RAC SCAN? Can I change it to Nodes:RAC-scan:6200?
Grid version: 11.2.0.3, OS: RHEL 5.8. I have successfully installed the cluster (GI) and installed the RDBMS software on my 2-node RAC. But I don't want to use dbca to create the database due to some custom requirements. Instead, I want to run the CREATE DATABASE command manually to create the RAC DB. Once the DB is created on one node, what are the steps I need to take to make this DB a RAC DB?
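Not a complete procedure, but a rough sketch of the extra steps usually involved, with hypothetical database name orcl, instances orcl1/orcl2 on nodes node1/node2, and OMF (db_create_file_dest) assumed for the commands that omit file names:

-- run on the instance where the database was created
SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 100M, GROUP 4 SIZE 100M;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 2;
SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE SIZE 500M;
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE;
SQL> ALTER SYSTEM SET instance_number=1 SID='orcl1' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET instance_number=2 SID='orcl2' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET thread=2 SID='orcl2' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SID='orcl2' SCOPE=SPFILE;
SQL> @?/rdbms/admin/catclust.sql          -- RAC data dictionary views
-- register the database and its instances with Clusterware
$ srvctl add database -d orcl -o $ORACLE_HOME
$ srvctl add instance -d orcl -i orcl1 -n node1
$ srvctl add instance -d orcl -i orcl2 -n node2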
2. In the srvctl add service command syntax there is a new option, [-l <primary | physical_standby | logical_standby>], which allows the service to start up only when the service role matches the database role that is saved in the OCR.
We can now add a service to a database in the OCR using the following command:
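The command itself did not come through above; as a sketch with hypothetical database, service, and instance names, it would look something like:

$ srvctl add service -d orcl -s batch_svc -r orcl1,orcl2 -l PRIMARY
$ srvctl start service -d orcl -s batch_svc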
Would that be enough, or do I need to make other changes from the application perspective as well?
Another thing is that application deployment is done through Ant scripts, which at the moment copy some data to a /tmp/.. folder on the DB server and then run DB scripts. These scripts make some DDL changes and some record updates. Now, with the RAC implementation, these scripts will be run on one of the RAC nodes (instances), so do I need to run these DDL/record-update scripts on the other node in Oracle RAC, or will Oracle RAC take care of this automatically?
I have a two-node RAC on a test server. I have installed the database software, but it failed with an error that OEM could not be installed. Now I need to install Enterprise Manager.
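Assuming the goal is Database Control (dbconsole) rather than full Grid Control, one way to (re)create it for a RAC database is emca from the RDBMS home; a minimal sketch, answering the prompts interactively:

$ $ORACLE_HOME/bin/emca -config dbcontrol db -repos create -cluster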
We are preparing to purchase equipment for a two-node Oracle 10g RAC (on Windows Server 2008) and I need to configure the disks on the servers. We have an existing EMC VNX5500 storage array and 8Gb SAN switches.
My question is: with how many disks should the servers be configured? Originally we planned four SSD disks (two in RAID 1 for the OS and two in RAID 1 for the local Oracle home) for each node/server, and everything else on the storage disks (shared disk storage).
Is it necessary to have more local disks, and for what? Where should the redo log files, archive log files, backup files, etc. be placed?
Or is it better for each server to be configured with six SSD disks: two in RAID 1 for the OS, and four in RAID 10 (or RAID 1) for the local Oracle home, archive log files, or something else?
Grid version: 11.2.0.3, OS: Red Hat Enterprise Linux 5.6. Node2 of our two-node RAC got rebooted. Upon reboot, CRS and the ASM instance came up, but the DB didn't come up. How can I check whether the DB is linked to CRS startup? How can I enable DB startup upon CRS startup?
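One way to check and change this with srvctl, using the database name abc as a placeholder; in 11.2 the management policy is shown by srvctl config database and set with the -y option of srvctl modify database:

$ srvctl config database -d abc                 # look for "Management policy: AUTOMATIC" and "Enabled"
$ srvctl enable database -d abc                 # make sure the resource is not disabled
$ srvctl modify database -d abc -y AUTOMATIC    # start the database when CRS starts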
My client asked me to "apply the newest patch" on his RAC 10.2.0.5 environment on Windows.
I've downloaded 10.2.0.5 Patch 20 (the newest version).
My question is: should I apply it using OPatch to both ORACLE_HOME and CRS_HOME, or only to ORACLE_HOME? What about the 10.2.0.5.2 CRS patch? Should I apply earlier CPU patches?
I have installed Oracle RAC 10g on Red Hat Linux 4.0. Until yesterday failover was working, that is, when I stopped one instance on node01, the VIP of node01 was transferred to node02. This was shown using ifconfig -a, but now that is not happening.
The following information is given:
[oracle@node01 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.hitesh.db  application    ONLINE    ONLINE    node02
ora....h1.inst application    ONLINE    ONLINE    node01
I am working on Oracle 10.2.0.4 RAC with 2 node instances on AIX. We have one table with multiple rows defining jobs to be done by users. The functionality is that whenever a user picks a row (job), an update statement is issued that sets the status column to 1, which means the job is allocated and should not be allocated to any other user.
But we are facing an issue: once a user picks a job, we can see in the application log files that the row gets locked and the status is updated to 1, yet users connecting to the other (or sometimes the same) instance still get that job. This means multiple users can pick the same job, even after it has already been picked by another user.
Can this be an issue with the RAC configuration? For example, whenever one user updates a row it sits in the cache of one instance, and when another user tries to update it, the second instance does not have that information and allows the user connected to that instance to pick that job, i.e. a delay in block transfer or Cache Fusion?
Can this be an issue with RAC, or is it purely an application issue?
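For comparison, this symptom is usually an application-side race rather than Cache Fusion: if two sessions (on the same or different instances) can both read status 0 before either commits, both will pick the job. A common guard is to claim the row with a conditional update and check the row count, roughly as in this sketch (table and column names are hypothetical):

-- claim a candidate job only if it is still free; the row lock serializes this
-- across both RAC instances, so only one session's UPDATE sees status = 0
UPDATE jobs
   SET status = 1, picked_by = :user_id
 WHERE job_id = :candidate_job_id
   AND status = 0;
-- if no row was updated (SQL%ROWCOUNT = 0 in PL/SQL), the job was already taken
COMMIT;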
I am an Oracle RAC SIG (URL....) member, but since last week I have not been able to play any of the recorded web seminars. I am getting the error "The webpage cannot be found HTTP 404". I am able to view the PDF but not the recording.
I got the following error while trying to restart a 3-node RAC database. OEM reported two out of three nodes to be down. We also noticed that the cluster_database parameter was set to 'FALSE' and the cluster_database_instances integer was set to 1. We changed the parameters to 'TRUE' and '3' respectively, and while trying to restart the database we get the error message shown below.
The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode. For details refer to "(:CLSN00107:)" in "/oradba/app/grid/11.2.0.3_a/log/abcde104/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde104' failed
CRS-2632: There are no more servers to try to place resource 'ora.abc123.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode. For details refer to "(:CLSN00107:)" in "/oradba/app/grid/11.2.0.3_a/log/abcde103/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde103' failed
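One thing worth checking here: ORA-01102 in this situation usually means an instance is still mounted in exclusive mode, i.e. it was started while cluster_database was still FALSE, so the new value has not taken effect everywhere. A rough sketch of the recovery, using the database name abc123 from the messages above:

-- make sure the shared setting is in the spfile for all instances
SQL> ALTER SYSTEM SET cluster_database=TRUE SCOPE=SPFILE SID='*';
# bounce the whole database through Clusterware so every instance,
# including the one still mounted exclusive, restarts with the new value
$ srvctl stop database -d abc123
$ srvctl start database -d abc123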
I recently installed a 2-node Oracle 11g RAC on RHEL5. While creating the clustered database, database creation on the second node (racnode2) failed. So, I connected / as sysdba on that node and executed startup, only to get this error message:
SQL> startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+RACDB_DATA/RACDB/spfileRACDB.ora'
ORA-17503: ksfdopn:2 Failed to open file +RACDB_DATA/RACDB/spfileRACDB.ora
ORA-01034: ORACLE not available
ORA-27123: unable to attach to shared memory segment
Linux Error: 13: Permission denied
Additional information: 196612
Additional information: 10
SQL>
But '+RACDB_DATA/RACDB/spfileRACDB.ora' is present.
I strongly believe it is a permission issue: ORACLE-owned processes are not able to access disks owned by the GRID user. I checked the permissions of the disks:
[oracle@racnode2 ~]$ cd /dev/oracleasm/disks
[oracle@racnode2 disks]$ ls -l
total 0
brw-rw---- 1 grid asmadmin 8, 65 Nov 19 19:15 CRSVOL1
brw-rw---- 1 grid asmadmin 8, 49 Nov 19 19:15 DATAVOL1
brw-rw---- 1 grid asmadmin 8, 81 Nov 19 19:15 FRAVOL1
Also, the ORACLE user has asmdba among other privileges:
[oracle@racnode2 disks]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper) context=user_u:system_r:unconfined_t
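Besides the disk permissions, it may be worth checking the oracle executable in the RDBMS home: when the database and Grid homes are owned by different users, it normally carries the asmadmin group plus the setuid/setgid bits (roughly -rwsr-s--x 1 oracle asmadmin). A sketch of the check and one way this is usually corrected, run as the grid user (paths assumed):

$ ls -l $ORACLE_HOME/bin/oracle                              # group should be asmadmin, mode -rwsr-s--x
$ $GRID_HOME/bin/setasmgidwrap o=$ORACLE_HOME/bin/oracle     # re-stamp the binary with the ASM group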
I am doing an upgrade from 10gR2 standalone (without ASM) to an 11gR2 2-node cluster (on ASM), and the database is somewhere around 1 TB in size. My problem is that if I do an export and import, the live system needs to be down for over a day. Is there a better way of doing this, and how?
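For what it's worth, one commonly used alternative to dump files is a network-mode Data Pump import, where the new 11gR2 database pulls directly from the 10gR2 source over a database link (link name and credentials below are hypothetical). The time still scales with the data volume, so for near-zero downtime something like Data Guard, transportable tablespaces, or GoldenGate is usually considered instead:

-- on the new 11gR2 RAC database, pointing back at the 10gR2 source
SQL> CREATE DATABASE LINK src10g CONNECT TO system IDENTIFIED BY password USING 'SRC10G';
$ impdp system/password network_link=src10g full=y parallel=4 directory=DATA_PUMP_DIR logfile=mig.log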