I am using 11.2.0.1.0 on RHEL 5.4. ASM is working properly on my machine, and the database is running from ASM. I want to configure additional oracleasm devices, but the command fails:
$ oracleasm createdisk ASMDISK08 /dev/xvd8
Writing disk header: done
Instantiating disk: failed
Clearing disk header: done
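A few checks that are often useful when createdisk fails at the instantiation step (a sketch only; the log path is the ASMLib default and may differ on your system):
# confirm the oracleasm driver is loaded and /dev/oracleasm is mounted
/etc/init.d/oracleasm status
# the ASMLib log usually records why the instantiation failed
tail /var/log/oracleasm
# list the labels that have been created successfully so far
/etc/init.d/oracleasm listdisks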
I am creating an ASM disk group on loopback devices. Here are the configured loopback devices:
[root@host1 ~]# ls -l /dev/loop[1-9]
brw-rw-rw- 1 oracle oinstall 7, 1 Oct 25 14:42 /dev/loop1
brw-rw-rw- 1 oracle oinstall 7, 2 Oct 25 14:42 /dev/loop2
brw-rw-rw- 1 oracle oinstall 7, 3 Oct 25 14:42 /dev/loop3
brw-rw-rw- 1 oracle oinstall 7, 4 Oct 25 14:42 /dev/loop4
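For reference, a minimal sketch of how loop devices like these can be backed and then handed to ASM, assuming file-backed images under /u01/asmdisks (a made-up path) and a disk group name LOOPDG that is likewise just an example:
# create a 1 GB backing file and attach it to an unused loop device
dd if=/dev/zero of=/u01/asmdisks/disk1.img bs=1M count=1024
losetup /dev/loop1 /u01/asmdisks/disk1.img
losetup -a    # verify the file/loop associations
Then, from the ASM instance:
SQL> alter system set asm_diskstring='/dev/loop*';
SQL> create diskgroup LOOPDG external redundancy disk '/dev/loop1', '/dev/loop2';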
My cluster has two nodes. The DATA disk group uses normal redundancy: the primary disks are located in one storage array and the failure group disks in another. When we shut down the storage holding the failure group disks, the second instance restarts; after it restarts automatically, everything works fine. Why does the second instance restart? Is this normal?
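To understand what the surviving instance sees at that moment, it can help to capture the disk and failure-group state from the ASM instance while the second storage is offline (a sketch using the standard views):
SQL> select failgroup, name, mount_status, mode_status, state from v$asm_disk order by failgroup, name;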
I am trying to install Oracle Grid Infrastructure 11gR2 on CentOS 4.8 under VMware Workstation 8.0.2. The ASM disk is not showing in the installer, even though I changed the disk discovery path to 'ORCL:*'. I had already configured the ASM disk successfully using /etc/init.d/oracleasm createdisk.
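A couple of checks that often narrow this down, run as root on the installer node (a sketch only):
# confirm the driver is loaded, rescan, and list the labels visible on this node
/etc/init.d/oracleasm status
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks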
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;
Now, if I want to add more disks to the disk group, I can use:
ALTER DISKGROUP disk_group_1 ADD DISK '/devices/disk*3';
My questions are:
(1) Into which failure group will the disk*3 disks be added?
(2) How can I maintain an equal number of same-sized disks in each FAILGROUP, so that ASM can mirror the data for redundancy?
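For what it is worth, one way to keep the failure groups balanced is to name them explicitly when adding disks, along the lines of this sketch (the device names simply extend the pattern above and are assumptions):
ALTER DISKGROUP disk_group_1 ADD
  FAILGROUP failure_group_1 DISK '/devices/diska3' NAME diska3
  FAILGROUP failure_group_2 DISK '/devices/diskb3' NAME diskb3;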
One of my colleagues created an ASM disk group with normal redundancy using two disks, but the disks are not the same size: one is 100GB and the other is 10GB. The usable space of the disk group is now showing as 55GB. When I checked the disk group properties, it shows two failure groups: DATA1_0000 with 10GB and DATA_0001 with 100GB. My question is: why is it showing 55GB as usable space? My assumption is that, since the two failure groups contain disks of different sizes, the 100GB failure group can only use 10GB of its space in order to maintain redundancy with the smaller 10GB failure group. So the effective size of the second failure group should also be 10GB, and the usable space should show as 10GB rather than 55GB (i.e. not (100+10)/2).
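To see where the 55GB figure comes from, the standard ASM views can be queried along these lines (a sketch; USABLE_FILE_MB is what ASM derives from FREE_MB and REQUIRED_MIRROR_FREE_MB for the redundancy type):
SQL> select name, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb from v$asm_diskgroup;
SQL> select failgroup, name, total_mb, free_mb from v$asm_disk order by failgroup;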
I have an 11.1.0.7 instance running and have another 300GB of storage available for the database. What are the steps to add these disks/storage to the existing DATA ASM disk group?
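At a high level this is usually two steps; a hedged sketch, assuming ASMLib is in use and the new LUN shows up as /dev/sdf1 with the label DATA05 (both names are invented for the example):
# as root: label the new device for ASMLib
/etc/init.d/oracleasm createdisk DATA05 /dev/sdf1
Then, from the ASM instance:
SQL> alter diskgroup DATA add disk 'ORCL:DATA05' rebalance power 4;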
Sun Solaris 10, Oracle 11gR2 x86. I am trying to build a test system and have never built ASM before. Do I need to install the database first and then migrate it to ASM? The other problem I have is that I only have one raw disk: one disk holds root and the Oracle software, and the other is a raw disk for ASM. Is it possible to build ASM on a single raw disk?
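On the single-disk question: an external redundancy disk group can be built from just one disk, roughly as in the sketch below (the Solaris raw slice name is only an example and must match a device owned by the oracle/grid user):
SQL> create diskgroup DATA external redundancy disk '/dev/rdsk/c0t1d0s4';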
I am using 10.2.0.1 on OEL5. I have installed CRS, the Oracle home, and ASM on both nodes, and everything is fine.
When I invoke DBCA to create a database, it says:
*'DBCA could not startup the ASM instance configured on this node. To proceed with database creation using ASM you need the ASM instance to be up and running. Do you want to recreate the ASM instance on this node?'*
Is this a bug? Some blogs say it is, and that patch 8288940 will solve it. They also say this patch fixes an incompatibility between 11g ASM and 10g, but here everything I am using is 10.2.0.1.
My organisation is currently discussing different storage options for database storage. Our production database is nearly 2TB and we do not want to continue with the existing NetApp storage (we use a 2-node RAC running 11.2.0.2 with an NFS filesystem from a NetApp filer).
We were looking at different options and came across Nimble Storage; they are a fast-growing company targeting mid-range storage customers. The initial talks and demonstration looked very promising in terms of I/O performance (they claim 40,000-60,000 IOPS for their CS400-series array) and the other features they offer, but we understand that the majority of their customers use it for VDI and other infrastructure.
They have demonstrated it to us for an Oracle database with ASM storage over iSCSI LUNs. We have yet to do the POCs and benchmarking.
Has anyone come across Nimble Storage for running Oracle databases?
I cannot start ASM on my Oracle Database Appliance.
crsctl status resource -t says:
ora.asm
      ONLINE  ONLINE       node1          Started
      ONLINE  ONLINE       node2          Started
However, if I try to access ASMCMD it says:
[grid@node1 ~]$ asmcmd
Connected to an idle instance.
ASMCMD> startup
ORA-00304: requested INSTANCE_NUMBER is busy
Connected to an idle instance.
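One thing worth checking when asmcmd reports an idle instance even though ora.asm shows ONLINE is the environment it runs under; a sketch assuming a typical grid home path and +ASM1 as the ASM SID on node1 (both values are assumptions, verify against your setup):
[grid@node1 ~]$ export ORACLE_HOME=/u01/app/11.2.0/grid
[grid@node1 ~]$ export ORACLE_SID=+ASM1
[grid@node1 ~]$ $ORACLE_HOME/bin/asmcmd lsdg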
I have a few doubts about the Normal, High, and External redundancy levels in ASM:
External - no mirroring, no failure groups.
Normal - two-way mirroring, one failure group.
High - three-way mirroring, two failure groups.
1. Are the three types described above correct?
2. My main question: what is the minimum number of disks needed to create a disk group with Normal and External redundancy?
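For reference, a sketch of the minimum configurations (the device names are placeholders in the same style as the earlier question): external redundancy needs at least one disk, normal redundancy at least two failure groups, and high redundancy at least three.
SQL> create diskgroup dg_ext external redundancy disk '/devices/diske1';
SQL> create diskgroup dg_norm normal redundancy
       failgroup fg1 disk '/devices/diskn1'
       failgroup fg2 disk '/devices/diskn2';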
We have a 2-node RAC cluster on the HP-UX Itanium platform running Oracle RDBMS 10.2.0.4, currently configured with raw devices for the OCR, voting disk, and all of the datafiles. We have a business requirement mandating TDE tablespace encryption, and in order to use it we must upgrade to 11g. We are in the planning stages for the upgrade, and I am trying to work out the best method to move the data that currently sits in tablespaces on raw devices into new TDE-encrypted tablespaces created within ASM. Our database is about 1.8TB, with a lot of fairly large, critical transactional tables supporting a 24x7 OLTP environment that cannot afford downtime.
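As one building block, the target tablespaces in ASM can be created encrypted up front, roughly as below (a sketch only: the tablespace name, size, and algorithm are examples, and the TDE wallet must already be configured and open; moving the 24x7 tables into them with minimal downtime is a separate exercise, for example via online redefinition):
SQL> create tablespace users_enc datafile '+DATA' size 10g
       encryption using 'AES256'
       default storage (encrypt);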
We are trying to increase ASM disk space, and while allocating more space this question came to mind. The current setup was configured by my previous SA.
[root@oracledbtest1 ~]# /etc/init.d/oracleasm querydisk -d `/etc/init.d/oracleasm listdisks` | \
cut -f2,10,11 -d" " | \
perl -pe 's/"(.*)".*\[(.*), *(.*)\]/$1 $2 $3/g;' | \
while read v_asmdisk v_minor v_major
do
  v_device=`ls -la /dev | grep " $v_minor, *$v_major " | awk '{print $10}'`
  echo "ASM disk $v_asmdisk based on /dev/$v_device [$v_minor, $v_major]"
done
Why are my LUNs showing different sizes on the RAC nodes, and what is the best way to allocate the space to the disks in the above scenario?
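To compare what the OS presents with what ASM is using on each node, a query along these lines can help (a sketch; the OS_MB column exists in 11g and can be dropped on older releases):
SQL> select name, path, os_mb, total_mb, free_mb from v$asm_disk order by name;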
I'm using ASM on LUNs from an EMC SAN, fronted by PowerPath. Right now I have only one fiber path to the SAN, so /dev/emcpowera3 maps directly to /dev/sda3, for example. Oracle had a typo in what they told me to do in /etc/sysconfig/oracleasm*, so the scan picks up both devices.
#/etc/init.d/oracleasm querydisk -p ASMVOL_01
Disk "ASMVOL_01" is a valid ASM disk /dev/emcpowera3: LABEL="ASMVOL_01" TYPE="oracleasm" /dev/sda3: LABEL="ASMVOL_01" TYPE="oracleasm"
But I don't think it can be using both. How do I see which one it's actually using?
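Two checks usually answer this, plus a configuration hint (the ORACLEASM_SCANORDER/ORACLEASM_SCANEXCLUDE values shown are examples for preferring PowerPath devices and need adapting to your naming):
# querydisk -d reports the [major, minor] of the device the label was actually instantiated on
/etc/init.d/oracleasm querydisk -d ASMVOL_01
ls -l /dev/emcpowera3 /dev/sda3    # compare the major/minor numbers with the output above
# in /etc/sysconfig/oracleasm the scan can be steered away from the /dev/sd* paths:
#   ORACLEASM_SCANORDER="emcpower"
#   ORACLEASM_SCANEXCLUDE="sd"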
We have Oracle ASM and are installing 11.1.0.6 on a new machine (the 11.1.0.7 patchset will be applied eventually), so we are on 11gR1 in production environments.
When installing Oracle ASM 11.1.0.6, root.sh fails.
I checked various Metalink notes and all settings are OK. We use ASMLib, and I configured ASMLib with the new disks prior to the ASM installation.
Just to rule out ASMLib as the root cause, I also disabled it; the problem is the same. I also ran the usual localconfig delete and localconfig add.
Startup will be queued to init within 30 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
Giving up: Oracle CSS stack appears NOT to be running.
Oracle CSS service would not start as installed
Automatic Storage Management(ASM) cannot be used until Oracle CSS service is started
Finished product-specific root actions.
I am not using RAC (to my knowledge), but I installed grid so I can use ASM with my RDBMS.
Recently I noticed that the "CRS Listener" service was not started; when I try to start it, it immediately stops. It does not seem to affect the databases, which are still up and running. What is this service used for, and how can I get it to start? I am using Oracle Enterprise Server 11.2.0.3 and Grid 11.2.0.3 on a Windows 2008 R2 64-bit server.
We have a standalone database running on ASM; it is an 11gR2, Linux 5 server. After a database bounce, the DB isn't coming up and shows the error below.
SQL> startup nomount
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/test/spfiletest.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/test/spfiletest.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA/test/spfiletest.ora
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-00450: background process 'ASMB' did not start
ORA-00443: background process "ASMB" did not start
ORA-06512: at line 4
I also checked the ASM disk groups; all of them are MOUNTED properly, and I can see the spfile physically present in the ASM disk group. It looks like the instance cannot identify the spfile at startup even though it is there. Snapshot below:
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576    358400   329103                0          329103              0  N             DATA/
MOUNTED  EXTERN  N         512   4096  4194304    358368   358288                0          358288              0  N             FRA/
MOUNTED  EXTERN  N         512   4096  4194304     20480    18780                0           18780              0  N             REDO/
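A few checks from the grid environment that may help isolate whether the problem is the spfile itself or the database instance's access to ASM (a sketch, assuming Oracle Restart manages the stack; the spfile path is taken from the error above):
$ srvctl status asm
$ crsctl check has
$ asmcmd ls +DATA/test/spfiletest.ora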
I added a new disk to a disk group and it started to rebalance. However, I plan to drop the disk group and recreate it anyway, since I want to restore a database into it (I shouldn't have added the new disk beforehand, my mistake). So now it is rebalancing and may take some hours. Can I drop the disk group now? Is there really no way to stop the rebalance?
Background: we are migrating a lot of databases from one SAN appliance to another. We are doing this by adding new disks from the new SAN appliance to the existing disk groups, rebalancing, removing the old disks from the disk groups, and then rebalancing again.
Question: If I execute two ALTER commands with the same power on 2 or more separate disk groups, do both operations start executing right away? Or do they queue up and execute one after another?
I ask because we would like to queue up several rebalances so we don't have DBAs watching status bars all day.
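For monitoring and control, a short sketch from the ASM instance (the disk group name here is a placeholder). v$asm_operation shows one row per active rebalance per disk group, and setting the power to 0 halts a running rebalance:
SQL> select group_number, operation, state, power, est_minutes from v$asm_operation;
SQL> alter diskgroup data01 rebalance power 0;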
Normally ASMLib/oracleasm is used to prepare the disks for ASM. I just wonder: besides this tool, is there any other GUI-based tool for preparing ASM disks?