How To Create OCR Disk Or Voting Disk On Windows Server 2008
Feb 17, 2011
I have two servers, both running Windows Server 2008 64-bit. I need to install Oracle Clusterware 11g R1 on both servers, with clustering on external storage. I have configured the network (private, public and virtual) for both servers and have started the installation.
In the Oracle installation I add both servers, but then I reach the point where the cluster configuration storage screen asks for a voting disk or OCR disk, and no disk is available. How can I create an OCR disk or voting disk on Windows Server 2008? And for the external storage, should I buy a special type of storage that supports clustering to continue my work?
In Oracle 11g, in which file is the information regarding the voting disk location stored?
I know we can use "crsctl query css votedisk" to see the location of the voting disk, but from which file or place does this command pick up the information?
I am installing Oracle Grid Infrastructure 11.2.0.3 on OEL 6.3. I reached the final step, running root.sh on node 1. It successfully started the services, but then I got this error:
Failed to create voting files on disk group OCR_VOTING. Change to configuration failed, but was successfully rolled back. CRS-4000: Command Replace failed, or completed with errors.
Voting file add failed. Failed to add voting disks at /opt/oracle/11.2.0/grid/crs/install/crsconfig_lib.pm line 6780. /opt/oracle/11.2.0/grid/perl/bin/perl -I/opt/oracle/11.2.0/grid/perl/lib -I/opt/oracle/11.2.0/grid/crs/install /opt/oracle/11.2.0/grid/crs/install/rootcrs.pl execution failed
The OCR_VOTING disk group is configured using oracleasm:
oracleasm createdisk OCR_VOTING /dev/sdb1
/dev/sdb1 is a partition and was created on the LUN /dev/sdb using fdisk.
Here is part of the logfile of the root.sh tool. Meanwhile, I tried deinstalling the clusterware and cleaning up, then verified the server configuration with cluvfy, and everything was OK.
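One check worth making before re-running root.sh (a minimal sketch using the same ASMLib tooling the post already relies on) is that every cluster node can actually see the stamped disk:

# run on each node: rescan for ASMLib disks and list what is visible
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks     # OCR_VOTING should be listed on every node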
I configured an ASM instance and a disk group with two disks for normal redundancy. Each disk is 2 GB. The disk group has two disks:
SQL> select group_number, name, type, total_mb, free_mb
  2  from v$asm_diskgroup;

GROUP_NUMBER NAME                           TYPE     TOTAL_MB    FREE_MB
------------ ------------------------------ ------ ---------- ----------
           1 DATA                           NORMAL       4000       3898
As the group has two-way mirroring (normal redundancy), how much data (2 GB or 4 GB) can I keep in the disk group? My understanding is that I can keep 2 GB of data in the disk group (as the disk group keeps a mirror copy of every extent on the other disk).
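For reference, a minimal sketch of the query that shows what ASM itself considers usable: with normal redundancy every extent is mirrored, so a group of two 2 GB disks can hold roughly 2 GB of data.

SQL> select name, type, total_mb, free_mb, usable_file_mb
  2  from v$asm_diskgroup;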
This is the first time I've got a job on the Windows platform; until now I've worked as a DBA on Unix/Linux.
I received one task: install Oracle 11gR2 on Windows 7 Enterprise. So I planned to create one partition (not mounted, i.e. an unallocated partition created with the third-party Partition Magic software) which will contain the datafiles.
But when I install the 11gR2 Grid, the ASM creation step does not recognize the unallocated partition I created.
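A hedged note: on Windows, the ASM discovery screen normally lists only raw partitions that have been stamped for ASM; asmtool (which a later post also uses) is the command-line way to stamp one. The device and label names here are purely illustrative.

asmtool -add \Device\Harddisk1\Partition1 ORCLDISKDATA0
asmtool -list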
We have a production Oracle 10g R2 RAC on HP-UX v2 IA64 servers. We have two disk groups, one for archive (ARC_DISK - 100 GB) and the other for the database (DATA_DISK - 1 TB). We wanted to add more space to the DATA_DISK disk group. The Unix admin configured 200 GB from the SAN and changed the ownership of the disk to oracle and the permissions to 775 on the 1st node. I opened DBCA from the 1st node and was able to see the disk under 'Show Candidate'.
I added this disk to the DATA_DISK disk group and clicked OK, but got an ORA- error with a message along the lines of 'some operations could not be performed'. I exited DBCA. We realized we had forgotten to change the ownership and permissions on the 2nd node, so the Unix admin changed the ownership of the disk to oracle and the permissions to 775 on the 2nd node.
I opened DBCA again from the 1st node and selected the DATA_DISK disk group, but could not find the disk under the 'Show Candidate' option. I clicked 'Show All' and the disk was shown with Header_Status = MEMBER but not allocated to the DATA_DISK disk group. When I clicked the 'Show Member' option, the disk was not shown for the DATA_DISK disk group. As this is a critical production database, I didn't proceed any further and exited DBCA at this point.
Now I need to add this disk to the DATA_DISK disk group but am not sure which option to select. I got one reply on another forum saying to run DBCA, select the DATA_DISK disk group, click 'Show All', select this disk (which already has MEMBER as its header status), select the Force option and click OK to continue.
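For completeness, a sketch (not a recommendation) of what that advice corresponds to in SQL*Plus on the ASM instance; the device path is a placeholder, and FORCE should only be used once it is certain the disk never belonged to another live disk group:

SQL> select path, name, header_status, group_number from v$asm_disk;
SQL> alter diskgroup DATA_DISK add disk '<device_path>' force;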
I'm trying to create an ASM disk using oracleasm. I created the disk and can list it (using oracleasm), but the V$ASM_DISK view and ASMCMD (lsdsk command) can't see it. Look:
In a Windows environment, I am trying to delete an ASM disk recently created with asmtool, but it is not allowing me to do that. It seems asmtool gives it some system-assigned name. How do I find that name so that I can drop the disk?
1. asmtool -create e:\asm\asm2.dsk 1024M
2. asmtool -delete e:\asm\asm2.dsk
ASM:00204: Ignoring ORACLDISKE: E:\ASM\ASM2.DSK not a valid ASM partition
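A small sketch, assuming the system-assigned label is what blocks the delete: asmtool -list prints the labels ASM has stamped, so the exact name can then be passed to -delete.

asmtool -list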
We have 2 servers running Oracle 11gR2 RAC on Windows Server 2008 64-bit. Today I added a new shared disk visible from both servers. All our data is placed on ASM partitions, but we need a Windows volume for exchanging some data between the servers and users. Actions: from the 1st node I prepared the disk on shared storage: ran diskmgmt.msc and rescanned disks; when Windows recognized the new HDD, initialized it; created an extended partition and logical drive and assigned a drive letter. From the other nodes, I made sure no drive letter is assigned.
Sometimes Windows automatically assigns a drive letter; if one is assigned, right-click, choose Change Drive Letter, and remove it. From the 1st node I formatted this logical drive using ocfsformat.exe, but after executing the command below I faced some issues:
E:\app\11.2.0\grid\cfs>ocfsformat /m P: /c 1024 /v DATA /f /d /a
Reg Get Cluster Name(): RegQueryValueEx for CFS_CLUSTER_NAME failed with error 203
This indicates that the Cluster Name has not been configured on this node for OCFS. The volume formatted in this condition will be seen by all nodes running OCFS.
In my environment Oracle Database 11gR1 is running and Data Guard is configured, i.e. 1 primary and 1 standby. In the near future, space issues will arise for the standby. I want to create one more standby with maximum disk space, but how? Active Data Guard is configured and reports are generated from the standby; what changes should be made in the primary pfile and in the new standby pfile?
On Oracle 10g, I create, delete and drop a lot of tables; therefore, the disk is highly fragmented. The execution of a very simple CREATE statement takes more than a minute. If instead I truncate the existing table and insert the data, it takes less than a second!
I think this has to do with the high fragmentation of the disk. Obviously I can defragment the disk, but it will always end up highly fragmented again, since I use a lot of creates, deletes and drops.
How can I improve the performance of CREATE statements on highly fragmented disks?
I am using 11.2.0.1.0 on RHEL 5.4. ASM is working properly on my machine and the database is running from ASM. I want to configure additional oracleasm devices, but the command fails saying:
$ oracleasm createdisk ASMDISK08 /dev/xvd8
Writing disk header: done
Instantiating disk: failed
Clearing disk header: done
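Two hedged checks that are usually worth making when createdisk reports "Instantiating disk: failed" (the device name is taken from the post; /var/log/oracleasm is the standard ASMLib log location):

ls -l /dev/xvd8              # confirm the partition really exists on this node
cat /var/log/oracleasm       # the ASMLib scripts log the reason for the failure here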
I am creating an ASM disk group on loopback devices. Here are the configured loopback devices:
[root@host1 ~]# ls -l /dev/loop[1-9]
brw-rw-rw- 1 oracle oinstall 7, 1 Oct 25 14:42 /dev/loop1
brw-rw-rw- 1 oracle oinstall 7, 2 Oct 25 14:42 /dev/loop2
brw-rw-rw- 1 oracle oinstall 7, 3 Oct 25 14:42 /dev/loop3
brw-rw-rw- 1 oracle oinstall 7, 4 Oct 25 14:42 /dev/loop4
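A minimal sketch of the next step, assuming the goal is a test disk group on those loop devices (the disk group name and redundancy are assumptions):

SQL> alter system set asm_diskstring = '/dev/loop*' scope=both;
SQL> create diskgroup LOOPDG external redundancy
  2  disk '/dev/loop1', '/dev/loop2', '/dev/loop3', '/dev/loop4';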
I checked and found we have a disk that is assigned GROUP_NUMBER 0. What does that mean? How can I check whether the disk T1_ASM05 is part of any disk group or not?
SQL> select GROUP_NUMBER,NAME from v$asm_diskgroup;
GROUP_NUMBER NAME
------------ ------------------------------
           1 DATA
           2 FRA

SQL> select GROUP_NUMBER, name, PATH from v$asm_disk;
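A GROUP_NUMBER of 0 in V$ASM_DISK means the disk does not belong to any mounted disk group. A sketch of a more targeted check for the disk in question:

SQL> select name, path, group_number, header_status, mount_status
  2  from v$asm_disk
  3  where name = 'T1_ASM05' or path like '%T1_ASM05%';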
What is the best practice for replacing a small D: disk? I am a beginner with Oracle: 10g on Windows 2008. Five datafiles (all indexes, a second data file, 2 undo tablespaces), *.dbf (34, 30, 1, 34 and 12 GB), are on D:. Some tablespaces (1 data, 1 undo) have files on C:.
I.
1. Shut down the 2008 server.
2. Copy a D: image with Ghost to USB or the network.
3. Connect the new D:, create the RAID.
4. Restore the image to D:.
5. Start the 2008 server.
II.
1. Stop the application.
2. CONNECT AS SYSDBA.
3. SHUTDOWN NORMAL or (IMMEDIATE)?
4. Copy the *.dbf files at OS level from D: to ... USB disk or network.
5. Shut down the 2008 server.
6. Change the disks, create the RAID in the BIOS.
7. Start W2008. Is Oracle at this moment in SHUTDOWN mode?
8. Copy the *.dbf files back to the new D: (with the directory structure).
9. STARTUP Oracle.
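A sketch of the database-side commands behind option II (the OS-level copy and RAID steps stay as listed); once SHUTDOWN completes, the instance stays down until STARTUP, so the server can be powered off safely:

SQL> connect / as sysdba
SQL> shutdown immediate
-- copy the *.dbf files, swap the disks, restore the same directory structure
SQL> startup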
What should our approach be when we see that the disk response time is bad for a particular tablespace in the database? I have heard that a good disk response time should average around 10 ms.
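One hedged way to put a number on it, using V$FILESTAT (READTIM is in hundredths of a second, so the expression below converts it to milliseconds per read):

SQL> select d.name,
  2         round(f.readtim * 10 / greatest(f.phyrds, 1), 2) as avg_read_ms
  3  from   v$filestat f
  4  join   v$datafile d on d.file# = f.file#
  5  order  by avg_read_ms desc;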
According to my understanding, if Disk1 fails, Disk4 facilitates normal operations; when there is a space crunch, it operates with reduced redundancy. Am I right?
2. I have got 4 disks in one group (i.e. Disk1 to Disk4). I have not defined any failure group, and as per my understanding each disk will be added to its own failure group, without mirroring and striping.
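A quick sketch that shows how the failure groups actually ended up (when no FAILGROUP clause is given, each disk is placed in its own failure group, and for NORMAL or HIGH redundancy mirroring still happens across those failure groups):

SQL> select group_number, name, failgroup, total_mb
  2  from v$asm_disk
  3  order by group_number, failgroup;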
I was setting up disk groups and accidentally created one group (DATA) with NORMAL redundancy when I wanted it to be EXTERNAL. I tried using asmca to remove disks from the group, drop the group, and change the redundancy; all of this failed because there was an spfile on the disk group.
I finally got it to work using this procedure:
sqlplus '/ as sysasm'

SQL*Plus: Release 11.2.0.3.0 Production on Thu Apr 5 08:58:19 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.

SQL> drop diskgroup DATA;
drop diskgroup DATA
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "DATA" contains existing files
[code]....
In summary, I am not sure why changing the redundancy would be so difficult if there is data on the disk group.
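For anyone hitting the same ORA-15053, a sketch of the usual way around it under stated assumptions (the pfile path and target disk group are placeholders): move the ASM spfile out of the group first, restart ASM, and then the group can be dropped along with its contents.

SQL> create pfile='/tmp/init_asm.ora' from spfile;
SQL> create spfile='+OTHER_DG' from pfile='/tmp/init_asm.ora';
SQL> -- restart the ASM instance so it picks up the relocated spfile, then:
SQL> drop diskgroup DATA including contents;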
Wouldn't deleting rows essentially create data fragmentation within the datafile, resulting in the DB having lots more space to write into but not actually freeing space? Even if you shrink the file, it doesn't free space or do a reorg?
As an example, we have a DB with 2 billion rows of data in one table: no partitioning, just one large table.
We have worked out that we can probably delete 1 billion rows, or better still, keep only a rolling 3-month window of data.
What would be the suggested way of deleting this data and reclaiming the space, so that additional disk space is actually made available at the OS level?
From what I have read, it looks like it might be something like: delete, then create new tablespace partitions from this data. In theory this would create a new tablespace in newly created datafiles, which would result in the data being reorganised and taking up less physical space; when completed, you point to the newly created partitions and drop the old tables.
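A rough sketch of that approach under stated assumptions (table name, column name and date boundaries are all made up; indexes, constraints and grants on the real table would need handling too):

-- build a new, partitioned copy holding only the data to keep
create table big_table_new
  partition by range (created_date)
  ( partition p_2013_01 values less than (date '2013-02-01'),
    partition p_2013_02 values less than (date '2013-03-01'),
    partition p_max     values less than (maxvalue) )
as
  select * from big_table
  where  created_date >= add_months(trunc(sysdate, 'MM'), -3);

-- once verified, drop the old table and rename the new one into place
drop table big_table purge;
rename big_table_new to big_table;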
My understanding of the DB_FILE_MULTIBLOCK_READ_COUNT parameter is that it affects only full table scans and fast full index scans; all other disk retrieval is single-block. If so, then maybe I'm reading this trace incorrectly:
select /*+ first_rows */ pk from test_join_tgt where pk >= 0 and rownum > 1
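A hedged aside on reading such a trace: multiblock reads normally appear as 'db file scattered read' waits and single-block reads as 'db file sequential read', and the parameter itself can be checked with:

SQL> show parameter db_file_multiblock_read_count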
I managed to upload images to the database server, resize them, and copy them to the application server, and everything worked just fine: the Apex page successfully displayed the images. Since last week, things have been broken. This is the setup: there's a directory object which points to the application server's directory:
SQL> select * from all_directories;
OWNER   DIRECTORY_NAME      DIRECTORY_PATH
------- ------------------- -----------------------------------
SYS     SLIKE_4005_UPLOAD   d:\gis\slike_4005_upload         --> on the database server
SYS     SLIKE_4005          \\my-ias\d$\home\gis\slike_4005  --> on the application server
SQL>
I can use a directory located on a database server:
D:\GIS\Slike_4005_upload>dir photo_resize.*
 Volume in drive D is RAID
 Volume Serial Number is 88F2-69D2

 Directory of D:\GIS\Slike_4005_upload
[code]....
How come it doesn't work? I was absent last week; the database server was restarted for some reason (there were Windows updates which required restarting). After that, all applications (lucky us, just two of them, but in multiple procedures/functions) return FALSE from UTL_FILE.FGETATTR.
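For context, a minimal sketch of the kind of FGETATTR check that now returns FALSE (the directory name is from the listing above; the file name is only an example):

declare
  l_exists     boolean;
  l_length     number;
  l_block_size binary_integer;
begin
  utl_file.fgetattr('SLIKE_4005', 'photo_resize.jpg', l_exists, l_length, l_block_size);
  dbms_output.put_line(case when l_exists then 'TRUE' else 'FALSE' end);
end;
/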
We recreated the directory objects, but that didn't help (UNC path or not, no difference). I Googled quite a lot and read Metalink notes; nothing I did solved the problem.
I don't know what these OS updates were about; maybe they are not to be blamed at all. Both servers (database and application) run MS Windows Server 2003 Standard Edition Service Pack 2. In the meantime, a colleague developed a workaround (it uses UTL_HTTP) which works, but it is MUCH slower than the previous UTL_FILE.FGETATTR option.
Why don't we keep these images on the database server (instead of the application server)? I was told that Apache is incapable of accessing mapped network directories, so we used what we could.
We are using Oracle 10g and have 10 tablespaces defined for our database, which hold 108 tables. The size of the 108 tables is around 251 MB, as seen while importing the dump. While creating these 10 tablespaces I used the parameters below for space allocation:
SIZE 1M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 1M;
which set the initial space for the 10 tablespaces to around 1032 KB each. Now my question is: after importing the dump, how does the disk space for the 10 tablespaces increase to 398 MB in total?
Is there any relation between the tablespace disk space and the actual data present in the tables?
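A sketch of one way to see that relation directly: compare the space allocated to each tablespace's datafiles with the space actually used by segments (the difference is allocated but not yet occupied space):

SQL> select f.tablespace_name, f.allocated_mb, nvl(s.used_mb, 0) as used_mb
  2  from  (select tablespace_name, round(sum(bytes)/1024/1024) allocated_mb
  3           from dba_data_files group by tablespace_name) f
  4  left join
  5        (select tablespace_name, round(sum(bytes)/1024/1024) used_mb
  6           from dba_segments group by tablespace_name) s
  7  on    s.tablespace_name = f.tablespace_name;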
I was trying to delete the database on the test server. When I was deleting it, the listener was already stopped; I continued deleting using DBCA, and it showed me an alert that the datafiles can't be deleted because the system couldn't find the database. Since the listener was stopped, only the service was deleted (the one showing under Windows Administrative Tools > Services as OracleServiceTEST).
All the datafiles and parameter files are still there. How can I delete the datafiles and parameter files belonging to that database, or how can I recreate the deleted service, so that I can start the listener and do the complete deletion of the database?
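A hedged sketch of one way to finish the cleanup, assuming the SID is TEST (taken from the service name OracleServiceTEST): recreate the service with oradim, start the instance in restricted mount mode, and let DROP DATABASE remove the datafiles and control files.

C:\> oradim -NEW -SID TEST -STARTMODE manual
C:\> set ORACLE_SID=TEST
C:\> sqlplus / as sysdba
SQL> startup mount exclusive restrict
SQL> drop database;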