Real Application Clusters :: Multiple Disk Group Pros / Cons
Jul 8, 2013
I am trying to identify the pros and cons of using multiple ASM disk groups. I understand Oracle's recommendation/best practice is to have two disk groups (one for data and one for the flash recovery area) and to place multiple copies of the control files and online redo logs across them, and that is the way I want to go. But would the same hold true if I used a different set of disks? For example, we have multiple RAID 10 devices and several SSD devices available for this ASM instance, and I was thinking of creating two more disk groups (call them DG_SYS1 and DG_SYS2) to hold my online redo logs, control files, and the TEMP and SYSTEM tablespaces. I understand that on a standalone system using a regular file system, the online redo logs and control files are usually on their own drives, but with ASM, when I am already using an external RAID 10 configuration plus ASM striping, I assume the I/O would be faster. Or am I better off using the SSDs I have available for my redo logs and control files? What would be the pros and cons (besides managing multiple disk groups)?
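For reference, creating the two extra disk groups described above would look something like the following from the ASM instance; the device paths and the choice of external redundancy are assumptions, not taken from the post.

-- Sketch only: device paths are hypothetical; EXTERNAL REDUNDANCY assumes the
-- underlying RAID 10 / SSD array already provides mirroring.
CREATE DISKGROUP DG_SYS1 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/SSD1';
CREATE DISKGROUP DG_SYS2 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/SSD2';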
I am moving an Oracle DB (10.2.0.1) to a new server by creating a blank DB and importing the data by user. I chose to create the new database with the Flash Recovery Area enabled (I haven't used this before), but I'm having a nightmare when importing the data.
I encountered issues with DB_RECOVERY_FILE_DEST_SIZE, which I resolved by increasing the value of this parameter.
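For reference, that parameter can be increased online; the 50G value below is only an illustration.

-- Sketch: pick a size appropriate for the recovery area
ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;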
Next, the import crawled along because of redo log errors ("Checkpoint cannot complete"). I'm assuming that I need to add redo log files to accommodate this?
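Adding more (or larger) online redo log groups is usually done along these lines; the group numbers and sizes below are assumptions.

-- Assumes OMF (db_create_file_dest / db_recovery_file_dest) is configured, so no
-- explicit file names are needed; group numbers and sizes are illustrative.
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 512M;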
I have 3 reporting tables, with 2.2 million records each, that are rebuilt nightly. The data is used online 24/7 by users, so snapshot tables are built from the refreshed reporting tables. The current method to do this is:
delete from snapshot_table;
insert into snapshot_table select * from report_table;
-- repeat for the other 2 tables
commit;
This seems to me to be resource-intensive on the system, even though the table is defined with the NOLOGGING option.
Is it better to create a materialized view (SELECT only, with REFRESH COMPLETE ON DEMAND)? The query is very simple, without joins, so at first it seems like overkill. However, I also see that DBMS_MVIEW.REFRESH allows an atomic option, so if 1 of the 3 MVs fails during refresh, all 3 roll back, which is a nice feature.
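A sketch of the refresh call being considered; the materialized view names are placeholders.

BEGIN
  -- 'C' = complete refresh; atomic_refresh => TRUE refreshes all three MVs in
  -- a single transaction, so a failure in one rolls back all of them.
  DBMS_MVIEW.REFRESH(list           => 'MV_REPORT1,MV_REPORT2,MV_REPORT3',
                     method         => 'C',
                     atomic_refresh => TRUE);
END;
/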
Are there better ways to replicate a snapshot table that I've missed? Is a delete-and-insert strategy a bad idea?
I am trying to set up Oracle RAC 10gR2 on two nodes with Openfiler, using OCFS and ASM, but I didn't have the Clusterware 10.2 software to install because Oracle has removed it from the download site, so I installed Clusterware 11.2 instead. After installing Clusterware 11.2, I checked the status; the output below is the same on both nodes.
[oracle@linux2 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
But when I try to install the Oracle 10gR2 software and reach the step to configure the ASM disks, it says: "In order to use Automatic Storage Management, the Oracle Cluster Synchronization Services (CSS) must be up and running", and tells me to run /u01/app/oracle/product/10.2.0/db_1/bin/localconfig reset as the root user. I ran it and it gave no errors.
Is there any harm in moving the alert logs generated by the Grid Infrastructure? The alert log in the grid log directory has grown to around 2 GB. Our CRS version is 11.2.0.2 on Linux. Also, please provide a sample script for automating this.
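A minimal rotation sketch of the kind being asked for; the Grid home path, file layout, and 30-day retention are assumptions and should be adapted to the environment.

#!/bin/sh
# Rotate the Grid Infrastructure alert log (path follows a typical 11.2 layout).
GRID_HOME=/u01/app/11.2.0/grid
HOST=$(hostname -s)
LOG=$GRID_HOME/log/$HOST/alert$HOST.log
STAMP=$(date +%Y%m%d)
cp "$LOG" "$LOG.$STAMP" && cat /dev/null > "$LOG"   # copy-truncate so the writer keeps its file handle
gzip "$LOG.$STAMP"
# Remove rotated copies older than 30 days
find "$GRID_HOME/log/$HOST" -name "alert$HOST.log.*.gz" -mtime +30 -delete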
Oracle 11g ASM RAC under OEL 5.6. I just want to do a clone, so first I created a copy of my Oracle home, then I created a pfile, and I get this error:
SQL> startup nomount;
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit
I googled it and found that this issue requires relinking the RAC binaries, because the source database is RAC and the destination is not:
[oracle@backuptest ~]$ cd $ORACLE_HOME/rdbms/lib
[oracle@backuptest lib]$ make -f ins_rdbms.mk rac_off
bash: make: command not found
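The "make: command not found" error simply means the build tools are not installed on the clone host; a sketch of the usual fix on OEL/RHEL follows (package names may vary by release).

# as root: install the build tools
yum install -y make gcc binutils glibc-devel
# then, as the oracle software owner, relink with RAC disabled
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off ioracle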
I am trying to find a best-practices document from Oracle on using ASM to store the archived logs in RAC. Most of the DBAs I know create a non-ASM location to store the archived logs. Are there any considerations if you are also using Data Guard?
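For context, sending archived logs into an ASM-based fast recovery area is just a parameter setting; the disk group name below is an assumption.

-- Sketch: +FRA is a placeholder disk group name
ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH SID='*';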
Why is it offline, what does this mean, and what is it used for? Does it have any effect on database availability? The output below is from OEM.
[Critical] ABSNL Cluster Resource State State Change ora.gsd has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
[Warning] ABSNL Cluster Resource State State Change ora.abs.db has instances in OFFLINE State Nov 12, 2012 12:15:03 PM
We have a Java application that uses the ONS configuration. Currently the application's ONS nodes parameter points to the VIPs, i.e. Node1-vip:6200,Node2-vip:6200. Will the ONS nodes parameter support the RAC SCAN? Can I change it to Nodes:RAC-scan:6200?
Grid version: 11.2.0.3, OS: RHEL 5.8. I have successfully installed the cluster (GI) and installed the RDBMS software on my 2-node RAC. But I don't want to use DBCA to create the database because of some custom requirements. Instead, I want to run the CREATE DATABASE command manually to create the RAC DB. Once the DB is created on one node, what are the steps I need to take to make it a RAC DB?
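One commonly described outline for turning a manually created database into a RAC database is sketched below; all names (MYDB, MYDB1, MYDB2, node1, node2, undotbs2) are placeholders, OMF is assumed for the log file specs, and the exact steps depend on the environment.

-- On the first instance, after CREATE DATABASE and the catalog scripts:
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 3 SIZE 512M, GROUP 4 SIZE 512M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;
CREATE UNDO TABLESPACE undotbs2 DATAFILE SIZE 1G;
ALTER SYSTEM SET cluster_database = TRUE SCOPE=SPFILE SID='*';
ALTER SYSTEM SET instance_number  = 1 SCOPE=SPFILE SID='MYDB1';
ALTER SYSTEM SET instance_number  = 2 SCOPE=SPFILE SID='MYDB2';
ALTER SYSTEM SET thread           = 2 SCOPE=SPFILE SID='MYDB2';
ALTER SYSTEM SET undo_tablespace  = 'UNDOTBS2' SCOPE=SPFILE SID='MYDB2';
-- Run ?/rdbms/admin/catclust.sql to create the RAC data dictionary views,
-- then register the database and instances with Clusterware:
--   srvctl add database -d MYDB -o $ORACLE_HOME
--   srvctl add instance -d MYDB -i MYDB1 -n node1
--   srvctl add instance -d MYDB -i MYDB2 -n node2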
2. In the srvctl add service command syntax there is a new option, [-l {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY}], which allows the service to start only when the service role matches the database role stored in the OCR.
We can now add a service to a database in the OCR with the srvctl add service command.
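A sketch of that syntax, with placeholder database, service, and instance names:

# -d names the database, -s the service, -r the preferred instances,
# -l the role(s) in which the service is allowed to start
srvctl add service -d MYDB -s app_svc -r MYDB1,MYDB2 -l PRIMARY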
Would that be enough, or do I need to make other changes from the application perspective as well?
Another thing is that application deployment is done through Ant scripts, which currently copy some data to a /tmp/.. folder on the DB server and then run DB scripts. These scripts make some DDL changes and some record updates. With the RAC implementation, these scripts will run on one of the RAC nodes (instances), so do I need to run these DDL/record-update scripts on the other node in Oracle RAC, or will Oracle RAC take care of this automatically?
I have a two-node RAC on a test server. I installed the database software, but it failed with an error that OEM could not be installed. Now I need to install Enterprise Manager.
We are preparing to purchase equipment for a two-node Oracle 10g RAC (on Windows Server 2008), and I need to configure the disks on the servers. We have an existing EMC VNX5500 storage array and 8 Gb SAN switches.
My question is: with how many disks should the servers be configured? Originally we planned four SSD disks per node/server (two in RAID 1 for the OS and two in RAID 1 for the local Oracle home), with everything else on the shared storage disks.
Is it necessary to have more local disks, and if so, for what? Where should the redo log files, archived log files, and backup files go?
Or is it better for each server to be configured with six SSD disks: two in RAID 1 for the OS and four in RAID 10 (or RAID 1) for the local Oracle home, archived log files, or something else?
Grid version: 11.2.0.3, OS: Red Hat Enterprise Linux 5.6. Node 2 of our two-node RAC got rebooted. Upon reboot, CRS and the ASM instance came up, but the DB didn't. How can I check whether the DB is linked to CRS startup? How can I enable DB startup upon CRS startup?
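A couple of commands commonly used for this kind of check; MYDB and ora.mydb.db are placeholders for the actual database resource.

srvctl config database -d MYDB                      # shows how the database is registered with Clusterware
crsctl stat res ora.mydb.db -p | grep AUTO_START    # shows the resource's automatic-start policy
srvctl enable database -d MYDB                      # re-enables automatic startup if the resource was disabled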
My client asked me to "apply the newest patch" on his RAC 10.2.0.5 environment on Windows.
I've downloaded 10.2.0.5 Patch 20 (the newest version).
My question is: should I apply it using OPatch to both ORACLE_HOME and CRS_HOME, or only to ORACLE_HOME? What about the 10.2.0.5.2 CRS patch? Should I apply earlier CPU patches?
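For reference, OPatch is run once per Oracle home being patched, with ORACLE_HOME pointed at that home; the sketch below is generic and the paths/patch layout are assumptions.

# Run from the unzipped patch directory, once per home to be patched
# (verify ORACLE_HOME first so OPatch targets the intended home)
opatch lsinventory    # shows what is currently installed in that home
opatch apply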
I have a 2-node 11.2.0.3 RAC database; the OS version is RHEL 6.1.
Our existing Grid Control server setup is 10gR2. Can I install the 10.2.0.5 agent on my RAC servers and use the 10g Grid Control to monitor my 11.2.0.3 database (RHEL 6.1)?
I have installed Oracle RAC 10g on Red Hat Linux 4.0. Until yesterday, failover was happening: when I stopped one instance on node01, the VIP of node01 was transferred to node02, as shown by ifconfig -a. But now that is not happening.
The following information is given:
[oracle@node01 ~]$ crs_stat -t
Name            Type           Target    State     Host
------------------------------------------------------------
ora.hitesh.db   application    ONLINE    ONLINE    node02
ora....h1.inst  application    ONLINE    ONLINE    node01
I am working on an Oracle 10.2.0.4 RAC with 2 node instances on AIX. We have one table with multiple rows defining jobs to be done by users. The functionality is that whenever a user picks one row (job), an update statement is issued that sets the status column to 1, which means the job is allocated and should not be allocated to any other user.
But we are facing an issue: once a user picks a job, the application log files show that the row gets locked and the status is updated to 1, yet users connecting to the other instance (or sometimes the same instance) still get that job. In other words, multiple users can pick the same job, even after it has already been picked by another user.
Could this be an issue with the RAC configuration? For example, when one user updates a row it sits in the cache of one instance, and when another user tries to update it the second instance does not have that information and lets the user connected to it pick the job, i.e. a delay in block transfer or Cache Fusion.
Is this a RAC issue, or is it purely an application issue?
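For illustration only (the real table and columns are not shown in the post), the usual way to make such an allocation safe regardless of which instance the session is on is to make the UPDATE itself conditional and check how many rows it changed:

-- Hypothetical table/columns: jobs(job_id, status, picked_by)
UPDATE jobs
   SET status = 1, picked_by = :user_id
 WHERE job_id = :job_id
   AND status = 0;          -- only succeeds if nobody has picked it yet
-- If SQL%ROWCOUNT = 0 the job was already taken; pick another one.
COMMIT;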
I am an Oracle RACSIG (URL....) member, but since last week I have not been able to play any of the recorded web seminars. I am getting the error "The webpage cannot be found, HTTP 404". I am able to view the PDF but not the recording.
I got the following error while trying to restart a 3-node RAC database. OEM reported two out of three nodes to be down. We also noticed that the cluster_database parameter was set to FALSE and the cluster_database_instances parameter was set to 1. We changed the parameters to TRUE and 3 respectively, and while trying to restart the database we get the error message shown below.
The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode. For details refer to "(:CLSN00107:)" in
"/oradba/app/grid/11.2.0.3_a/log/abcde104/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde104' failed
CRS-2632: There are no more servers to try to place resource 'ora.abc123.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.abc123.db start" encountered the following error:
ORA-01102: cannot mount database in EXCLUSIVE mode.
For details refer to "(:CLSN00107:)" in "/oradba/app/grid/11.2.0.3_a/log/abcde103/agent/crsd/oraagent_oradba/oraagent_oradba.log".
CRS-2674: Start of 'ora.abc123.db' on 'abcde103' failed
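For reference, the parameter change described above is normally made against the shared spfile and the database is then restarted through Clusterware rather than SQL*Plus; the values below are the ones quoted in the post, and the database name is taken from the resource ora.abc123.db.

ALTER SYSTEM SET cluster_database = TRUE SCOPE=SPFILE SID='*';
ALTER SYSTEM SET cluster_database_instances = 3 SCOPE=SPFILE SID='*';
-- then restart via Clusterware:
--   srvctl stop database -d abc123
--   srvctl start database -d abc123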
I recently installed a 2-node Oracle 11g RAC on RHEL 5. While creating the clustered database, database creation on the second node (racnode2) failed. So I connected / as sysdba on that node and executed startup, only to get this error message:
SQL> startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+RACDB_DATA/RACDB/spfileRACDB.ora'
ORA-17503: ksfdopn:2 Failed to open file +RACDB_DATA/RACDB/spfileRACDB.ora
ORA-01034: ORACLE not available
ORA-27123: unable to attach to shared memory segment
Linux Error: 13: Permission denied
Additional information: 196612
Additional information: 10
SQL>
But '+RACDB_DATA/RACDB/spfileRACDB.ora' is present.
I strongly believe it is a permission issue: processes owned by the oracle user are not able to access disks owned by the grid user. I checked the permissions on the disks:
[oracle@racnode2 ~]$ cd /dev/oracleasm/disks
[oracle@racnode2 disks]$ ls -l
total 0
brw-rw---- 1 grid asmadmin 8, 65 Nov 19 19:15 CRSVOL1
brw-rw---- 1 grid asmadmin 8, 49 Nov 19 19:15 DATAVOL1
brw-rw---- 1 grid asmadmin 8, 81 Nov 19 19:15 FRAVOL1
[oracle@racnode2 disks]$
The oracle user also has asmdba among its other groups:
[oracle@racnode2 disks]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper) context=user_u:system_r:unconfined_t
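One check that is often suggested for this symptom on role-separated 11.2 installs is the group and setuid bits on the oracle binary in the database home; this is a sketch, not a confirmed diagnosis for this case.

# The database home's oracle executable is normally oracle:asmadmin with the
# setuid/setgid bits set, e.g. -rwsr-s--x
ls -l $ORACLE_HOME/bin/oracle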
I am doing an upgrade from a 10gR2 standalone database (without ASM) to an 11gR2 two-node cluster (on ASM), and the database is somewhere around 1 TB in size. My problem is that with an export and import the live system would need to be down for over a day. Is there a better way of doing this, and how?