According to my understanding, if Disk1 fails, Disk4 keeps normal operations going. When there is a space crunch, the disk group operates with reduced redundancy. Am I right?
2. I have got 4 disks in one group (i.e. Disk1 to Disk4). I have not defined any failure groups, and as per my understanding each disk will be placed in its own failure group, without mirroring and striping.
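If it helps, that part can be verified straight from the ASM instance; FAILGROUP is a standard v$asm_disk column, so a query like the sketch below (nothing assumed beyond the standard view) shows which failure group each disk landed in:

select group_number, disk_number, name, failgroup
from   v$asm_disk
order  by group_number, disk_number;

With no failure groups specified, each disk should appear in its own failure group, named after the disk itself.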
I configured an ASM instance and a disk group with two disks for normal redundancy.
Each disk is 2 GB.
The disk group has two disks...
SQL> select group_number, name, type, total_mb, free_mb
  2  from v$asm_diskgroup;

GROUP_NUMBER NAME                           TYPE     TOTAL_MB    FREE_MB
------------ ------------------------------ ------ ---------- ----------
           1 DATA                           NORMAL       4000       3898
As the group has two-way mirroring (normal redundancy), how much data can I keep in the disk group: 2 GB or 4 GB? My understanding is that I can keep 2 GB of data in the disk group (as the disk group keeps a mirror copy of every extent on the other disk).
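Rather than reasoning it out from TOTAL_MB alone, v$asm_diskgroup can answer this directly; a quick check (standard columns, nothing assumed beyond the group shown above):

SQL> select name, type, total_mb, free_mb,
  2         required_mirror_free_mb, usable_file_mb
  3  from v$asm_diskgroup;

For a normal-redundancy group, USABLE_FILE_MB is the amount of new file data that still fits with full mirroring, roughly (FREE_MB - REQUIRED_MIRROR_FREE_MB) / 2, which agrees with the 2 GB intuition for a 4 GB group.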
I want to turn OFF tablespace AUTOEXTEND on a production system; we have many RAC databases, and this will be done on all of them. I found a document on the net, written 29-Jun-2007, which says that to turn AUTOEXTEND off for a tablespace you first have to turn it off on the underlying datafiles of that tablespace, but that document is for Oracle 8.1.7.2.0.
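That part has not changed: AUTOEXTEND is a datafile attribute, not a tablespace attribute, so it is still switched off per datafile. A minimal sketch (the tablespace name and file path below are placeholders):

-- generate one ALTER statement per datafile of the tablespace
select 'alter database datafile ''' || file_name || ''' autoextend off;'
from   dba_data_files
where  tablespace_name = 'USERS';

-- then run the generated statements, e.g.:
alter database datafile '/u01/oradata/PROD/users01.dbf' autoextend off;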
We have quite a number of sessions in database MES (production) coming from another machine.
In v$session, the program is oracle@WID27 (TNS V1-V3). WID27 (a hostname) hosts quite a number of development databases. We have to trace which jobs are actually triggering this, as WID27 is not supposed to connect to production databases.
How can we tell whether the sessions coming in are from a dblink or from the machine itself?
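A starting point (a sketch, not a definitive test): sessions arriving over a database link from another Oracle instance typically show the remote server process as their program, exactly the oracle@WID27 pattern above, so listing those sessions with their OS users and logon times can help narrow down the triggering jobs:

select sid, serial#, username, osuser, machine, program, module, logon_time
from   v$session
where  program like 'oracle@WID27%';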
I checked and found we have a disk that shows 0 as its GROUP_NUMBER. What does that mean? And how can I check whether disk T1_ASM05 is part of any disk group or not?
SQL> select GROUP_NUMBER,NAME from v$asm_diskgroup;
GROUP_NUMBER NAME
------------ ------------------------------
           1 DATA
           2 FRA

SQL> select GROUP_NUMBER,name,PATH from v$asm_disk;
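For reference, GROUP_NUMBER 0 in v$asm_disk means the disk was discovered but does not belong to any mounted disk group. A hedged check for the one disk in question (T1_ASM05 from the post):

select name, path, group_number, header_status, mount_status
from   v$asm_disk
where  name = 'T1_ASM05' or path like '%T1_ASM05%';

-- GROUP_NUMBER = 0 together with HEADER_STATUS = CANDIDATE or FORMER
-- indicates the disk is not part of any disk group.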
What is the best practice for replacing a small D: disk? I am a beginner with Oracle: 10g on Windows 2008. Five datafiles (all indexes, a second data file, two undo tablespaces) *.dbf (34, 30, 1, 34, 12 GB) are on D:. Some tablespaces (1 data, 1 undo) have files on C:.
I.
1. Shut down the 2008 server.
2. Copy a D: image with GHOST to USB or network.
3. Connect the new D:, create the RAID.
4. Restore the image to D:.
5. Start the 2008 server.
II.
1. Stop the application.
2. CONNECT AS SYSDBA.
3. SHUTDOWN NORMAL or (IMMEDIATE)?
4. Copy the *.dbf files at OS level from D: to a USB disk or the network.
5. Shut down the 2008 server.
6. Change the disks, create the RAID in the BIOS.
7. Start W2008. Is Oracle in SHUTDOWN mode at this point? (see the sketch after this list)
8. Copy the *.dbf files back to the new D: (keeping the directory structure).
9. STARTUP Oracle.
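For option II, a minimal sketch of the database side, assuming the cold-backup approach above: if the instance is shut down cleanly in step 3, it stays down across the disk swap, so the answer to step 7 is yes, unless the Windows service is configured to auto-start the database on boot, which is worth checking first.

sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit

REM ... copy *.dbf files, swap disks, boot W2008, copy files back ...

sqlplus / as sysdba
SQL> startup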
What should our approach be when we see bad disk response time for a particular tablespace in the database? I have heard that a good disk response time should average about 10 ms.
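One hedged way to measure that per file from inside the database (v$filestat reports READTIM in hundredths of a second, hence the * 10 to get milliseconds):

select df.tablespace_name,
       df.file_name,
       fs.phyrds,
       round(fs.readtim * 10 / nullif(fs.phyrds, 0), 2) as avg_read_ms
from   v$filestat     fs,
       dba_data_files df
where  fs.file# = df.file_id
order  by avg_read_ms desc;

The ~10 ms figure is the usual rule of thumb for single-block reads; sustained values far above that for one tablespace's files point at the underlying LUNs rather than the database.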
I was setting up disk groups and I accidentally created one group (DATA) with "NORMAL" redundancy when I wanted it to be "EXTERNAL". I tried using asmca to remove disks from the group, drop the group, and change the redundancy. All of this failed because there was an spfile on the disk group.
I finally got it to work using this procedure:
sqlplus '/ as sysasm'

SQL*Plus: Release 11.2.0.3.0 Production on Thu Apr 5 08:58:19 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

SQL> drop diskgroup DATA;
drop diskgroup DATA
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15053: diskgroup "DATA" contains existing files
[code]....
In summary, I am not sure why changing the redundancy should be so difficult when there is data on the disk group.
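For what it's worth, the second error message points at the documented way out: DROP DISKGROUP refuses to drop a group that still contains files unless told explicitly. A sketch (destructive, so only when the contents really are disposable, as with a group holding nothing but a misplaced spfile; the disk path in the CREATE is a hypothetical placeholder):

SQL> drop diskgroup DATA including contents;

-- redundancy cannot be changed on an existing group; it has to be
-- recreated with the redundancy you wanted:
SQL> create diskgroup DATA external redundancy disk '/dev/asmdisk1';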
Wouldn't that essentially create data fragmentation within the datafile, resulting in the DB having lots more space to write into but not actually freeing space? Even if you shrink the file, it doesn't free space or do a reorg?
As an example, we have a DB with 2 billion rows of data in one table; no partitioning, just one large table.
We have worked out that we can probably delete 1 billion rows, or even better, keep only a rolling 3-month window of data.
What would be the suggested way of deleting this data and reclaiming the disk space, so that additional disk space actually becomes available at the OS level?
From reading around, it looks like the approach might be: delete, then create new table partitions from this data. In theory this creates a new tablespace in newly created datafiles, so the data is reorganised and takes up less physical space; when completed, you point to the newly created partitions and drop the old tables.
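A minimal sketch of that rebuild idea, under stated assumptions: BIG_TABLE, its date column EVENT_DATE, and both tablespace names are placeholders, and indexes, constraints and grants would have to be recreated before the switch:

-- keep only the rolling 3-month window in a fresh segment
create table big_table_new
  tablespace new_data_ts
  as select *
     from   big_table
     where  event_date >= add_months(trunc(sysdate, 'MM'), -3);

-- recreate indexes, constraints and grants on big_table_new here

drop table big_table purge;
rename big_table_new to big_table;

-- once nothing else lives in the old tablespace, release it to the OS:
drop tablespace old_data_ts including contents and datafiles;

The point of the new tablespace is that dropping it actually returns the datafiles to the OS, which a plain delete or even a segment shrink will not do.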
I have 2 servers, both running Windows Server 2008 64-bit as the operating system. I need to install Oracle Clusterware 11g R1 on both servers, with clustering on external storage. I have configured the network (private, public and virtual) for both servers and have started the installation.
In the Oracle installation I add both servers, but then I reach a point where I am asked for a voting disk or OCR disk in the cluster configuration storage, and no disk is present. How can I create the OCR disk or voting disk on Windows Server 2008? And for the external storage, should I buy a special type of storage that supports clustering in order to continue my work?
My understanding of the DB_FILE_MULTIBLOCK_READ_COUNT parameter is that it affects only full table scans and fast full index scans; all other disk retrieval is single-block. If so, then maybe I'm reading this trace incorrectly:
select /*+ first_rows */ pk from test_join_tgt where pk >= 0 and rownum > 1
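For anyone reproducing the test, a hedged way to see exactly which read mechanism Oracle used is an extended SQL trace: single-block reads appear as 'db file sequential read' waits and multiblock reads as 'db file scattered read' (table name taken from the query above):

alter session set tracefile_identifier = 'mbrc_test';
alter session set events '10046 trace name context forever, level 8';

select /*+ first_rows */ pk from test_join_tgt where pk >= 0 and rownum > 1;

alter session set events '10046 trace name context off';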
We are using Oracle 10g and have 10 tablespaces defined for our database, which hold 108 tables. The size of the 108 tables is around 251 MB, as seen while importing the dump. While creating these 10 tablespaces I used the parameters below for space allocation:
SIZE 1M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 1M;
which set the initial space for the 10 tablespaces to around 1032 KB each. Now my question is: after importing the dump, how does the disk space for the 10 tablespaces increase to 398 MB in total?
Is there any relation between tablespace disk space and the actual data present in the tables?
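There is, but it is indirect: tablespaces grow in extent-sized chunks, and segments reserve space they may not fill, so datafile size normally exceeds the raw data volume. A hedged pair of queries to compare the two views of "size":

-- space actually allocated to segments, per tablespace
select tablespace_name, round(sum(bytes)/1024/1024, 1) as segment_mb
from   dba_segments
group  by tablespace_name;

-- space the datafiles occupy on disk, per tablespace
select tablespace_name, round(sum(bytes)/1024/1024, 1) as datafile_mb
from   dba_data_files
group  by tablespace_name;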
- one ASM instance
- X DB instances
- each DB instance uses 2 or more dedicated disk groups from the ASM instance
- there is one disk group named FREEDISK that contains spare disks
On each DB instance you can see:
- the list and global parameters of all disk groups, using the v$asm_diskgroup view
- the list and parameters of all disks the instance is using, with the v$asm_disk view
So my question is: how (if this is possible at all) can I get the list of (spare) disks in the FREEDISK disk group?
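From the ASM instance itself this is straightforward, since its v$asm_disk covers every discovered disk; a DB instance only sees the disks it has open, which is presumably why FREEDISK's members are invisible there. A sketch to run on the ASM instance:

select g.name  as diskgroup,
       d.name  as disk_name,
       d.path,
       d.header_status
from   v$asm_disk      d,
       v$asm_diskgroup g
where  d.group_number = g.group_number
and    g.name = 'FREEDISK';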
OS: Windows 7 64-bit. DB: Oracle 11.2.0.3 EE 64-bit.
On my notebook I have 2 instances (orateste and ora11), both for testing purposes. I don't know what happened, but a problem in one of my OS partitions made the Oracle software unavailable. I cannot start instance orateste; even the Windows service for the instance has disappeared from services.msc. What I want to do is restore only one tablespace from instance orateste into ora11. I have a full RMAN database backup from D-1.
1) Restore the entire database (orateste - 11.2.0.1) to ora11 (11.2.0.3)?
2) Is there a way to restore only one tablespace, between instances and versions?
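For question 2, one avenue to investigate (a sketch, not a recipe: it assumes a source instance can first be brought back from the RMAN backup with matching 11.2.0.1 binaries, and the 11.2.0.1 / 11.2.0.3 mismatch still has to be resolved) is RMAN tablespace point-in-time recovery, which restores a single tablespace via an auxiliary instance; USERS_TS and the path are placeholders:

RMAN> connect target /
RMAN> recover tablespace users_ts
        until time 'sysdate - 1'
        auxiliary destination 'D:\oracle\aux';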
I am looking for something outlining the 'key' differences between databases / configuration / hardware that can potentially make a difference to the results you get when running SQL / PL/SQL. I am trying to find a checklist to use against PROD / DEV / TEST environments.
I will have to perform an Oracle 9 database refresh from the production server to the integration server. The 5 biggest schemas must be exported and imported; they constitute 97% of the space used in the database. This is a very big database, so I would like to be sure that everything will go smoothly. That is why I want to ask you some questions.
Have you got any clues for me before I start with exp/imp? From my side, I can tell you that I will have to exp/imp schema by schema, because there is little space for a dump on both the production and integration disks. The first thing I thought of is the dependencies between the schemas that are exported and those that are not, and also between the schemas that are exported/imported one by one.
This is the procedure that I plan:
For every schema that is to be refreshed {
    1. Export the schema with ROWS=N CONSTRAINTS=Y
    2. Export the schema with ROWS=Y CONSTRAINTS=N
    3. Import the dump from step 1
    4. Disable all the foreign key constraints using ALTER TABLE ... DISABLE CONSTRAINT
    5. Import the dump with rows
}
ALTER TABLE ... ENABLE CONSTRAINT for everything disabled in step 4
With the above procedure I think I will avoid problems with dependencies between the schemas exported/imported one by one. But my concern is whether there are any dependencies between those schemas and the schemas that are not exported. Is there a way to check this before the refresh?
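A hedged pre-check using the dictionary rather than trial and error; SCHEMA1 .. SCHEMA5 stand in for the five exported schemas:

-- object-level dependencies that leave the exported set
select owner, name, type, referenced_owner, referenced_name
from   dba_dependencies
where  owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5')
and    referenced_owner not in
       ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5',
        'SYS','SYSTEM','PUBLIC');

-- foreign keys that cross schema boundaries
select c.owner, c.table_name, c.constraint_name,
       r.owner as referenced_owner, r.table_name as referenced_table
from   dba_constraints c,
       dba_constraints r
where  c.constraint_type = 'R'
and    c.r_owner = r.owner
and    c.r_constraint_name = r.constraint_name
and    c.owner <> r.owner;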
We have a production Oracle 10g R2 RAC on HP-UX v2 IA64 servers. We have two disk groups, one for archive (ARC_DISK - 100 GB) and the other for the database (DATA_DISK - 1 TB). We wanted to add more space to the DATA_DISK disk group. The Unix admin configured 200 GB from the SAN and changed the ownership of the disk to oracle and the permissions to 775 on the 1st node. I opened DBCA from the 1st node and was able to see the disk under 'Show Candidates'.
I added this disk to the DATA_DISK disk group and clicked OK, but got an ORA- error with a message along the lines of "some operations could not be performed", and I exited DBCA. We then realized we had forgotten to change the ownership and permissions on the 2nd node, so the Unix admin changed the ownership of the disk to oracle and the permissions to 775 on the 2nd node.
I opened DBCA again from the 1st node and selected the DATA_DISK disk group, but could not find the disk under the 'Show Candidates' option. When I clicked 'Show All', the disk was shown with header status MEMBER but not allocated to DATA_DISK; under the 'Show Member' option, the disk is not shown for the DATA_DISK disk group. As this is a critical production database, I didn't proceed any further and exited DBCA.
Now I need to add this disk to the DATA_DISK disk group but am not sure which option to select. One reply I got on another forum was to run DBCA, select the DATA_DISK disk group, click 'Show All', select this disk (which already has MEMBER as its header status), tick the Force option, and click OK to continue.
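Before forcing anything on a production system, a hedged sanity check from the ASM instance: if the disk shows GROUP_NUMBER = 0 together with HEADER_STATUS = MEMBER, its header was stamped by the failed add but it never actually joined the group, which is the situation the FORCE option exists for. The SQL equivalent of DBCA's force add (the device path is a placeholder for the new 200 GB LUN):

select path, name, group_number, header_status, mount_status
from   v$asm_disk
where  path = '/dev/rdsk/c0t5d0';

alter diskgroup DATA_DISK
  add disk '/dev/rdsk/c0t5d0' force;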
We just got a new Dell R720 server that will host our Oracle DB. The server hasn't even been turned on yet but we know that the load on the server will be very low for a long time.
One of our problems is that we need to run a VERY important application. Since it is not very resource-consuming compared to its importance, we chose to run it on a not-so-new Xeon 5110 1.60 GHz / 4 GB RAM server. He said that's not a good idea and that we should buy a new server (money is very tight).
The software vendor suggested virtualizing our R720 server: host a VM running our database, and alongside it other smaller machines like the one I described above. I suggested using Oracle VM and Oracle Linux for the database host, and converting the physical servers to VMs with P2V.
Our IT manager didn't like that; he said that running a database on a virtual machine is not recommended. But our software vendor said that many of their clients run their solution this way.
I have a JSP that works with an Oracle 9i database.
On my local Windows workstation, where I developed the JSP, the application runs very quickly against the Oracle database. The same JSP application on the production intranet web server, connecting to the same schema, is sometimes very slow during the day; during off hours it runs quicker.
There is only one user for the application, and I don't understand why the same application using the same database runs so much slower at times on the production server compared to my local workstation.
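Given the "slow only at certain hours" pattern, one hedged first step is to catch the session while it is slow and see what it is waiting on; v$session_wait exists in 9i, and :app_sid stands for the SID of the JSP's session (findable in v$session by machine or program):

select sid, event, p1, p2, wait_time, seconds_in_wait
from   v$session_wait
where  sid = :app_sid;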
I would like to know how I can automate the export from production to the test server. I need direction on creating a process to import data from production (server A) into the test server (server B); in short, I want to automate the import from production to test:
1) export the production schema
2) import it into the test server?
How can I automate this? Currently I am doing it manually, as follows:
1) expdp the production schema
2) kill all connections on the test server to the test schema
3) drop the test user cascade
4) recreate the user
5) impdp the production schema into test
But I want it automated or scheduled, so I don't have to log in every night!
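A hedged sketch of how the export half can be done entirely inside the database, so it can be scheduled without logging in; SCOTT and DPUMP_DIR are placeholders for the schema and an existing directory object, and a leftover dump file from the previous night must be removed first or the job will fail to create it:

declare
  h  number;
  js varchar2(30);
begin
  -- define a schema-mode Data Pump export job
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(h, 'scott_%U.dmp', 'DPUMP_DIR');
  dbms_datapump.add_file(h, 'scott_exp.log', 'DPUMP_DIR',
                         filetype => dbms_datapump.ku$_file_type_log_file);
  dbms_datapump.metadata_filter(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_completion(h, js);
end;
/

Wrapped in a stored procedure, this can be run nightly by DBMS_SCHEDULER, and the kill/drop/recreate/import steps on the test side can be automated the same way (the import variant uses operation => 'IMPORT').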
I ran "exp" command to take a back of Oracle Db based on user and later imported(using "imp" command) the dump into another db. Its seen that some the tables are not exported during exp command run. Can I use exp command on Oracle 11.2 version?
We are planning to consolidate our production Oracle DBs onto one server. We are basically a Windows shop. Is it feasible to run 8 production Oracle DBs on one Windows server? None of the DBs are really transaction-intensive: 2 DBs are around 300 GB and the others average about 40 GB.
I can take care of slicing the storage so Oracle does not hit an I/O bottleneck. We are planning to go for external NAS or SAN storage.
My main concern is processor usage. We are thinking about two Intel Xeon quad-core processors. Will there be a processor bottleneck, or is there a way in Oracle to assign processor usage? (I believe there are not many tweaking options here.)
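There is at least one knob worth knowing about, hedged on version: from 11.2 onward, instance caging limits each instance to a slice of the CPUs, which is the usual answer for consolidation like this. A sketch, run in each instance:

-- cap this instance at 2 of the 8 cores; an active Resource
-- Manager plan is required for the cap to be enforced
alter system set cpu_count = 2 scope = both;
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope = both;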
I need to use Data Pump for the first time on my production database. Currently, on the testing database, when I take a schema-level export there are no errors or warnings in the log file, but the import writes the following ORA- warning to its log. I searched on Google; the only fix I found is to recompile the invalid objects. How can I avoid these warnings in the log file?
"ORA-39082: Object type ALTER_PROCEDURE:"QUANTISV4"."P_CTM_ABN_INVST_EQUITY" created with compilation warnings"