RAC & Failsafe :: Storage Options For Building Test Setup For DB
Jul 22, 2013
I am looking for cheap storage options to build a test RAC setup for a database on Linux on a Dell PowerEdge T100.
I want to set up a two-node 11g RAC with ASM at home for learning purposes. I have read many documents on configuring shared storage using NFS, Openfiler, a VM setup, etc. Is there any hardware available (something like a Maxtor disk) that is not too expensive for setting up shared storage?
We have a RAC in a test environment. It is two nodes connected to an IBM DS3512. The DS3512 has one pool of 8 hard disks (RAID 10), which has then been sliced into logical drives:
2 x 1GB
2 x 400GB
I have been tasked with changing the current configuration as follows:
2 pools of 4 HDs each in a RAID 10 configuration
Then from the first pool:
2 x 1GB
1 x 398GB
and from the second pool:
1 x 400GB
What I intend to do:
1) shutdown
SQL> shutdown immediate;
crsctl stop crs
2) backup
dd if=votingdisk of=voting_disk_backup
I already have backups of the OCR in $CRS_HOME/cdata/crs/.
3) Perform changes on SAN
4) recreate ocfs2 partitions
5) mount the disks to their folder.
6) recovery
recover voting from backup
recover ocr from backup
7) Turn service on again
crsctl start crs
SQL> startup;
Would this scenario work? There could be some glitches, such as the disk names changing after the reconfiguration, but I would only need to modify fstab for that.
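For reference, a rough sketch of what steps 1, 2, 6 and 7 might look like as commands (the file paths below are only placeholders, not the actual ones in this setup, and the exact OCR backup file name would need to be confirmed first with 'ocrconfig -showbackup'):

# stop the full stack on each node (run as root)
crsctl stop crs
# back up the voting disk file on the ocfs2 volume with dd
dd if=/ocfs2/crs/votedisk1 of=/backup/voting_disk_backup bs=4k
# ... SAN reconfiguration and ocfs2 recreation happen here ...
# restore the voting disk and the OCR from backup (run as root)
dd if=/backup/voting_disk_backup of=/ocfs2/crs/votedisk1 bs=4k
ocrconfig -restore $CRS_HOME/cdata/crs/backup00.ocr
# restart the stack on each node
crsctl start crs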
I have an 8TB SAN storage device. I am new to RAC setup and I don't know how to configure the SAN device for a RAC setup on ASM.
How many disks do we need, and which of them should be on ASM? How do we implement RAID 10 for the database file disks and RAID 5 for the archive log disks?
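To illustrate the ASM side (a sketch only; the disk names below are invented, and external redundancy is assumed because the SAN itself provides the RAID protection), once the LUNs are presented to both nodes and discovered by ASM, the disk groups are created from the ASM instance:

-- hypothetical ASMLib disk names; external redundancy because the SAN does the mirroring
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK 'ORCL:DATA1', 'ORCL:DATA2', 'ORCL:DATA3', 'ORCL:DATA4';

CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  DISK 'ORCL:FRA1', 'ORCL:FRA2';

With this layout the RAID 10 / RAID 5 split is done on the SAN when carving the LUNs, and each set of LUNs goes into its own disk group (DATA for datafiles, FRA for archive logs and backups).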
I am trying to create a RAC setup on Oracle 10.2 using an NFS cluster configuration. I succeeded with the installation of the cluster configuration and the Oracle software, but when I try to create the database ...
I wanted to downgrade our 2-node RAC from 11.2.0.3 to the previous version, 11.2.0.2, in our test environment, so I could prepare to upgrade production later.
I followed the Oracle documentation and issued
'/u01/app/11.2.0.3/grid/crs/install/rootcrs.pl -downgrade -oldcrshome /u01/app/11.2.0/grid -version 11.2.0.2.0'
on both nodes, and then manually removed the files in the upgraded Oracle home (rm -r /u01/app/11.2.0.3).
This seems to have caused havoc, and ASM and srvctl are not working. lsinventory still points to the removed Oracle 11.2.0.3 home.
I tried to set the environment:
export ORACLE_HOME=$ORACLE_HOME
. oraenv
+ASM1
srvctl config
PRCR-1119 : Failed to look up CRS resources of database type
PRCR-1115 : Failed to find entities of type resource that match filters (TYPE == ora.database.type) and contain attributes DB_UNIQUE_NAME,ORACLE_HOME,VERSION
Cannot communicate with crsd
I wish to install 11g R1 RAC on my laptop with Linux 4 as the platform (on VMware). For that I prepared 4 partitions (on node 1):
/dev/sdb1 - for ocr
/dev/sdb2 - for voting disk
/dev/sdb3 - for asmdisk group
/dev/sda5 - for asmdisk group
Assuming external redundancy for the OCR and voting disk, I kept only one disk for each,
and I configured the following in /etc/sysconfig/rawdevices:
/dev/raw/raw1 /dev/sdb1 -- ocr
/dev/raw/raw2 /dev/sdb2 -- voting disk
/dev/raw/raw3 /dev/sdb3 -- asmdisk group
/dev/raw/raw4 /dev/sdb5 -- asmdisk group
My question is: how can node 2 see these raw devices as shared storage?
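For what it's worth, a rough sketch of how this is usually handled on VMware (the .vmx parameters below are the commonly quoted ones for shared virtual disks and the file name is a placeholder, so verify them against your VMware version): node 2 gets the same raw device bindings, and the underlying virtual disks are attached to both VMs with locking disabled.

# node 2: /etc/sysconfig/rawdevices mirrors node 1, pointing at the same shared disks
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
/dev/raw/raw3 /dev/sdb3
/dev/raw/raw4 /dev/sdb5

# in each VM's .vmx file, so both nodes can open the disks concurrently
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmdisks/shared_disk1.vmdk"
scsi1:0.mode = "independent-persistent"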
I have a database running version 10.2.0.5. It is a RAC database with ASM storage.
I want to migrate this database to another server and upgrade it to version 11.2.0.2.
We don't have a system or storage admin available, so I have to create the development environment myself. I have 2 Dell PowerEdge 2950 servers and a 500 GB Buffalo NAS, running Windows 2K3 x64 EE. Can I install Oracle RAC with this kit?
If yes, how can I configure shared storage and present it to the OS so that both nodes can see it?
We have the following scenario that we need to replicate:
Our current systems are on Oracle RAC + ASM with 2 nodes, on physical hardware. We would like to replicate from physical to VM; the storage is a Dell Compellent.
Can this be done online, without RMAN, using SAN-level replication? How should we replicate the systems?
My organisation is currently discussing different storage options for database storage. Our production database is nearly 2TB, and we do not want to continue with the existing NetApp storage (we use a 2-node RAC running 11.2.0.2 with an NFS filesystem from a NetApp filer).
We were looking at different options and came across Nimble Storage; they are a fast-growing company aiming at mid-range storage customers. The initial talks and demonstration looked very promising in terms of IO performance (they claim 40,000 - 60,000 IOPS for their CS400 series Nimble Storage array) and the other options they provide, but we understand that the majority of their customers use it for VDI and other infrastructure.
They have demonstrated it to us for an Oracle database with ASM storage over iSCSI LUNs. We are yet to do the POCs and benchmarking.
Has anyone come across Nimble Storage for running Oracle databases?
I'm about to build a machine whose primary focus will be running a 'hobby' app that uses Oracle Express as its DB. Some info on the DB: the main table has 30 fields and 500K records; all other tables add up to about 20% of that size.
I run huge queries that return most of the DB in about 200K rows with 50 fields. These queries contain many analytic ('partition by') functions using RANK, PERCENT_RANK, etc. My old PC really struggles with this stuff, and I'm going to invest about 600 euro (excluding monitor etc.) in a new machine.
In general terms, what should I get? I'm thinking about the OS, hard drives (one for data, one for indexes?), a multi-core processor, and the type and size of RAM. Any other considerations I should have?
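For context, the queries in question have roughly this shape (a sketch only; the table and column names are invented for illustration, not the real schema):

-- a typical wide analytic query over the main table
SELECT t.*,
       RANK()         OVER (PARTITION BY category ORDER BY amount DESC) AS amount_rank,
       PERCENT_RANK() OVER (PARTITION BY category ORDER BY amount)      AS amount_pct_rank
FROM   main_table t;

Queries like this sort large row sets, so memory for sorts (PGA) and disk throughput usually matter more than raw CPU clock speed, which is worth keeping in mind when choosing RAM and disks.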
I have NO experience with APEX, but I have been browsing the documentation while considering its use as a web application/dashboard reporting utility.
However, I'm wondering if it allows for the ability to connect to DB2 as well as Oracle,
i.e. building a form that allows a user to 'pull' data for analysis from a DB2 instance and generate reporting results that would then be stored in an Oracle instance.
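APEX itself runs inside an Oracle database, so the usual route to DB2 data (sketched below with made-up names; this assumes a Database Gateway such as DG4ODBC has been configured with a DSN for the DB2 instance) is a database link, which APEX pages and processes can then query like any other table:

-- hypothetical link name, credentials and TNS alias (the alias must point at a gateway listener with HS=OK)
CREATE DATABASE LINK db2_link
  CONNECT TO "db2user" IDENTIFIED BY "db2password"
  USING 'DB2DSN';

-- an APEX report region or PL/SQL process can then pull DB2 data and store results locally
INSERT INTO local_results
SELECT * FROM some_db2_table@db2_link;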
I have an Across Group Report, but I can't create the design/layout I want (explained in more detail in the images below).
Current report when it's running:
[URL]
The final layout that I want:
[URL]
Environments:
- Oracle Developer Reports 10g R2
- Oracle Database 10gR2
Currently we are using the exp and imp utilities to unload from production and load into the dev server. While importing, we follow the steps below:
(1) Load only data [by specifying INDEXES=N in the par file]
(2) Unlock statistics
(3) Load indexes, other objects [by specifying ROWS=N]
After doing these steps, the data, indexes, and other objects are all loaded. To verify the indexes, we check DBA_INDEXES.
DBA_INDEXES :
-------------
OWNER INDEX_NAME TABLE_NAME STATUS LAST_ANALYZED
----- ---------- ---------- ------ -------------
MYSCH CP_INDEX_1 CP_TABLE_1 VALID 14/JAN/12
Questions:
(1) Does the imp utility rebuild the indexes while loading data, or does it simply take the rows from the dump and load them into the test system without rebuilding the indexes from scratch?
(2) I am trying to replace exp and imp with the Data Pump utilities, but I am confused about the parameters to use.
(a) Can I load both data and metadata at the same time (using the CONTENT=ALL option)?
(b) I am planning to implement this in two steps:
first load only the metadata using CONTENT=METADATA_ONLY TABLE_EXISTS_ACTION=REPLACE,
then load the data using CONTENT=DATA_ONLY.
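As a rough sketch of that two-step Data Pump approach (the directory, dump file, schema and password values below are placeholders):

# step 1: metadata only, replacing any existing tables
impdp system/password DIRECTORY=dp_dir DUMPFILE=prod_export.dmp SCHEMAS=mysch CONTENT=METADATA_ONLY TABLE_EXISTS_ACTION=REPLACE
# step 2: data only, appending into the freshly created tables
impdp system/password DIRECTORY=dp_dir DUMPFILE=prod_export.dmp SCHEMAS=mysch CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND
# or, as a single pass instead of two steps
impdp system/password DIRECTORY=dp_dir DUMPFILE=prod_export.dmp SCHEMAS=mysch CONTENT=ALL TABLE_EXISTS_ACTION=REPLACE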
I'm using APEX 4.1 on a hosted platform. I'm trying to build a business application and the client wants a dashboard.
Here is the best way I can explain it:
The dashboard displays a series of rows from a table. When you click the "edit" button, it runs a query and displays that row's data on a report. How would I build this?
Let me explain it a different way:
The home page shows a tabular report with Column 1, Column 2, Column 3.
I want to make it so you can click any row and it goes to a different page that shows all of the data. So on the home page you can just see 3 columns, but the next page will have all the columns.
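One common way to do this in APEX (a sketch only; the page numbers, item name and table are invented) is a classic report on the home page whose query carries the row's primary key into a link to a detail page:

-- report region on page 1; P2_ID would be a hidden item on detail page 2
SELECT id,
       col1,
       col2,
       col3,
       '<a href="f?p=&APP_ID.:2:&APP_SESSION.::NO::P2_ID:' || id || '">View</a>' AS detail_link
FROM   my_table;

The detail_link column is displayed as a standard report column (with escaping of special characters turned off), and page 2 contains a report or form whose query filters on :P2_ID to show every column of the selected row. The same effect can also be achieved declaratively by setting the column's link attributes instead of building the anchor in SQL.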
I am working on an assignment where the client is using Oracle 10g but is stuck on the RBO. The application team builds dynamic queries from the GUI available to them, and some of these queries run very slowly.
The code cannot be changed to tune the queries, and we do not get the exact steps in the plan either, which is an issue (being RBO). For some long-running queries the Tuning Advisor produces no recommendations.
Another hurdle is that all the application users share the same application user ID, so we cannot write a logon trigger to switch to the CBO for some particular queries to see what is happening in the background.
I am trying to build a data warehouse for the Consumer Price Index, so I have downloaded data from the Bureau of Statistics. It is in Excel format, and since I am working with Oracle Warehouse Builder I have converted it to a .csv file so that I can use it as a data source.
Question 1: Is it practical to use a single .csv file as the source of data for a data warehouse?
Question 2: I have 3 dimension tables and a fact table. The dimensions are: one for the region (as the data is organized by region, state, etc.), a second for consumer goods and services (as the data is organized in groups of goods and services, and service/goods types), and finally time (year and month).
Now how am I going to do the mapping here? Is it possible to do a one-to-one mapping, as all the data required by the dimensions is located in the .csv file?
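For reference, one lightweight way to expose a single .csv file to the database (and therefore to OWB mappings or plain INSERT ... SELECT loads) is an external table; the directory, file name and column list below are invented for illustration:

-- directory object pointing at the folder holding the CSV
CREATE DIRECTORY cpi_src AS '/data/cpi';

CREATE TABLE cpi_ext (
  region       VARCHAR2(50),
  item_group   VARCHAR2(100),
  item_type    VARCHAR2(100),
  price_year   NUMBER(4),
  price_month  NUMBER(2),
  index_value  NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY cpi_src
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ','
  )
  LOCATION ('cpi.csv')
);

Each dimension and the fact table can then be populated from this one external table with its own mapping, so a single .csv source is workable as long as it carries all the attributes the dimensions need.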
I have been trying to figure out how to write a query that shows each building code, building name, and number of rooms, from a database with four tables: emp, build, room, roombook.
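A rough shape for that query (a sketch only; the join and column names are guesses, since the actual table definitions were not posted):

SELECT b.build_code,
       b.build_name,
       COUNT(r.room_id) AS num_rooms
FROM   build b
       LEFT JOIN room r ON r.build_code = b.build_code
GROUP  BY b.build_code, b.build_name
ORDER  BY b.build_code;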
I'm working in an Oracle 10g database on an IBM AIX server.
I have 3 tables (tables A, B and C).
Table A has columns -- product, rate and expiration date.
Table B has columns -- product, rate and deductible.
Table C has columns -- product, rider, gender, age and rate.
I also have a Master table which is used to store the data from Tables A, B and C via the insert statement.
I'm trying to create a dynamic SQL insert statement using a shell script to insert data from the columns in Tables A, B and C into my Master table. The Master table does contain all columns from Tables A, B and C, although a column name could be spelled differently. For example, the Master table contains a column named "deduct", while Table B has the same column spelled as "deductible".
I build the dynamic query using a for loop in my shell script (see below).
The problem is that I can't get the correct columns of the Master table into the dynamic SQL for the insert, because the columns differ depending on the table I'm selecting from. So how do I get the correct columns into the SQL for the Master table? (One possible approach is sketched after the example script below.)
Example Shell Script
# Archive_Rates.txt contains: Table A, Table B, Table C (but the next time my process runs, Archive_Rates might contain Table D, Table E and Table F, each of which has different columns, though all the columns are still present in the Master table)
for tbl in `cat Archive_Rates.txt`
do
echo 'BEGIN WORK; ' > rc1.sql
echo ' ' >> rc1.sql
echo 'insert into Master' >> rc1.sql
echo '(prod, rate, rate_exp) ' >> rc1.sql
[code].....
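One possible approach (a sketch only; the table_map table, its columns and the connect string are invented) is to keep a small mapping table that stores, per source table, the Master column list and the matching source column list, and let the loop read it instead of hard-coding the columns:

# hypothetical mapping table: table_map(source_table, master_col_list, source_col_list)
master_cols=$(sqlplus -s user/pass <<EOF
set heading off feedback off pagesize 0
SELECT master_col_list FROM table_map WHERE source_table = UPPER('$tbl');
EOF
)
source_cols=$(sqlplus -s user/pass <<EOF
set heading off feedback off pagesize 0
SELECT source_col_list FROM table_map WHERE source_table = UPPER('$tbl');
EOF
)

echo "insert into Master ($master_cols)" >> rc1.sql
echo "select $source_cols from $tbl;"    >> rc1.sql

For columns that are spelled the same in both places, the mapping rows could even be generated from USER_TAB_COLUMNS, leaving only the exceptions (such as deductible/deduct) to be maintained by hand.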
I had one of my RAC nodes go down due to a disk failure. I have a 3-node cluster running 10.2.0.4 on Dell 610s running Windows Server 2008. I have been running AWR reports this afternoon and am seeing CPU time as my top timed event. Here is the excerpt from the report I am looking at:
Cache Sizes
~~~~~~~~~~~                   Begin        End
                         ----------  ----------
Buffer Cache:               20,032M     20,032M   Std Block Size:      8K
Shared Pool Size:           12,688M     12,688M   Log Buffer:      6,336K
[code]...
I wanted to ask if there is anything that I could do to alleviate the workload on the 2 remaining nodes right now. As far as I understand it, there is no way to stop users from hitting the database, and without my 3rd node to load balance, the CPU will continue to be pegged until the end of the day as the users log off.
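If services are in use, one small thing worth checking (a sketch with made-up database, service and instance names; verify the syntax against your own 10.2 configuration) is whether any service that preferred the failed instance can be relocated or temporarily stopped, so that connection load is spread deliberately across the two surviving nodes:

# show where each service is currently running
srvctl status service -d proddb
# relocate a hypothetical service from one surviving instance to the other
srvctl relocate service -d proddb -s batch_svc -i proddb2 -t proddb3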
I have a production DB that has the Oracle Tuning Pack option turned on. I occasionally need to clone the DB to a test server where I do not use the Tuning Pack. If I select from DBA_FEATURE_USAGE_STATISTICS, it tells me that the pack is turned on for the prod DB's DBID but not for the cloned DB's DBID. Do I need to worry about this old data being left around in my database? If so, is there any way to remove it? I am running 10.2.0.4.
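For reference, a quick way to see which DBID each usage row belongs to on the clone (a sketch; the LIKE filter is just a guess at the relevant feature names):

SELECT dbid, name, version, currently_used, detected_usages, last_usage_date
FROM   dba_feature_usage_statistics
WHERE  UPPER(name) LIKE '%TUNING%'
ORDER  BY dbid, name;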
What are the steps to export/import a database with options using the exp/imp commands? I want to access the exported database's tables from my current user schema. I exported a database earlier, but I'm not able to access the tables directly; I have to use the exported schema name, like abc.tablename.
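For what it's worth, a rough sketch of the usual way to land exported objects in a different schema with imp, so they can be queried without the abc. prefix (file, user and password names below are placeholders):

# export from the source schema, then import it into your own schema
exp abc/password FILE=abc_export.dmp OWNER=abc
imp myuser/password FILE=abc_export.dmp FROMUSER=abc TOUSER=myuser

# alternatively, keep the abc schema and just create synonyms/grants so the prefix is not needed
# (run as abc or a DBA): CREATE SYNONYM myuser.tablename FOR abc.tablename;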
View 1 Replies View Relatedmaterialized view link with all options?
View 3 Replies View RelatedI have 3 tables, user_login_event, person and resource_viewed_event. What I want to do have a report for each month, users logged in our application and then show for each month, how many records were created in table person and how many resource views events were logged in resource_viewed_event.
Lets only worry about the timestamp fields in these tables now as I want to use them to join the tables together or at least build correlated subqueries along the months. I have tried several options, all not leading to a desired result:
Left outer join. Works but its incredibly slow:
SELECT
distinct to_char(ule.TIMESTAMP,'YYYY-MM') as "YYYY-MM",
count(distinct ule.id) as "User Logins",
count(distinct ule.user_id) as "Users logged on",
count(distinct p2.id) as "Existing Users",
count(distinct p1.id) as "New Users",
count(distinct r1.id) as "Resources created"
[code]....
Tried the same with left outer joins of temporary tables created through select statements:
select
distinct ule.month as "Month",
count(distinct p1.user_id) as "Users created",
count (ule.id) as "Logins",
count (distinct ule.user_id) as "Users logged in",
count(rv.id) as "Resource Views",
count(distinct rv.resource_id) as "Resources Viewed"
[code]....
Another approach is to create my own temporary tables using select statements and create fixed month values which I can use to directly link the sets together.
select
distinct ule.loginday as "Month",
count(distinct ule.id) as "Logins",
count(distinct ule.user_id) as "Users logged in",
count(distinct p1.user_id) as "Users created",
count(distinct p2.user_id) as "Existing users1"
[code]....
Performance is OK with 2 tables, but the example above takes forever to execute.
I also tried an approach with UNION, but this creates new rows for each table:
SELECT DISTINCT p1.MONTH AS "Month",
COUNT(DISTINCT p1.user_id) AS "Users created",
NULL AS "Logins",
NULL AS "Users Logged in",
NULL AS "Resource views",
NULL AS "Resources viewed"
FROM (SELECT To_char(person.created_on_date, 'YYYY-MM') AS MONTH,
[code]....
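A pattern that sometimes works better for this kind of report (only a sketch, reusing the column names visible in the snippets above and assuming resource_viewed_event also has a timestamp column; untested against the real schema) is to aggregate each table down to one row per month first, and only then outer join the small monthly result sets together:

WITH months AS (
  SELECT DISTINCT TO_CHAR(timestamp, 'YYYY-MM') AS month
  FROM   user_login_event
),
logins AS (
  SELECT TO_CHAR(timestamp, 'YYYY-MM') AS month,
         COUNT(*)                      AS logins,
         COUNT(DISTINCT user_id)       AS users_logged_in
  FROM   user_login_event
  GROUP  BY TO_CHAR(timestamp, 'YYYY-MM')
),
new_users AS (
  SELECT TO_CHAR(created_on_date, 'YYYY-MM') AS month,
         COUNT(DISTINCT user_id)             AS users_created
  FROM   person
  GROUP  BY TO_CHAR(created_on_date, 'YYYY-MM')
),
views AS (
  SELECT TO_CHAR(timestamp, 'YYYY-MM') AS month,
         COUNT(*)                      AS resource_views,
         COUNT(DISTINCT resource_id)   AS resources_viewed
  FROM   resource_viewed_event
  GROUP  BY TO_CHAR(timestamp, 'YYYY-MM')
)
SELECT m.month, l.logins, l.users_logged_in, n.users_created, v.resource_views, v.resources_viewed
FROM   months m
LEFT JOIN logins    l ON l.month = m.month
LEFT JOIN new_users n ON n.month = m.month
LEFT JOIN views     v ON v.month = m.month
ORDER  BY m.month;

Because each branch is aggregated before the join, the joins only touch a handful of rows per month instead of multiplying the detail rows.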
I want to remove some db features and options from our production database:
- Virtual Private Database
- XML DB
- XStreams
For removing XML DB I have found the XML DB FAQ. Is it still current? But I found nothing about uninstalling VPD and XStreams.
We have a table with a huge amount of data which is skewed on a 'status' column. The 'status' column has 6 distinct values, with 1 particular value occupying 80-85% of the records.
In the batch process we query the data on the status and process the retrieved records. My senior is insisting on partitioning, which I see as not very feasible considering the cost implications for just one part of the functionality.
There are 6 statuses, 'A','B','C','D','E','F',
with 'A' occupying 80% of the records and
'B' to 'F' occupying roughly 2% to 14% of the records each.
These are the options under consideration:
1) Create a conditional index on status (using CASE) so that it only contains the records with statuses other than 'A', then use an IF-ELSE structure (a sketch of such an index follows after these options):
IF input parameter is 'A'
select /*+ FULL Parallel(t) */ * from t where status='A';
ELSE
Select /*+ INDEX (t conditional_index) */ * from t where status in ('B','C');
END IF;
I want to create a conditional index here for 2 reasons:
1] Since it will only hold values for statuses other than 'A', this removes the chance that the index will be picked when status = 'A' is queried,
which would make performance worse (status = 'A' accounts for 80% of the records). The IF-ELSE is additional protection.
2] Less impact on DML, as the index will not cover the status = 'A' rows, which contribute the large chunk of records.
2) Populate a dummy table containing the rowid and status. Since the business closes at 21:00 and the batch process starts at 21:30,
refresh the dummy table every day between those times using MERGE (to catch the business transactions made during the day).
Then, during the batch process, retrieve records from the main table using the rowids in the dummy table, depending on the input status value.
3) Create an index on status,
make sure hard-coded status values are used in the database procedures,
gather stats with histograms,
and leave it to the optimizer to choose the best possible path.
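For what it's worth, a sketch of the conditional (function-based) index mentioned in option 1, using the table name t from the example above. Note that a query must use the same CASE expression for this index to be usable, so the ELSE branch would filter on the expression rather than on status directly:

-- rows with status 'A' map to NULL and are therefore not stored in the index
CREATE INDEX conditional_index
  ON t (CASE WHEN status <> 'A' THEN status END);

-- the non-'A' branch then has to filter on the same expression
SELECT /*+ INDEX(t conditional_index) */ *
FROM   t
WHERE  CASE WHEN status <> 'A' THEN status END IN ('B', 'C');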
I am looking for masking options/techniques to mask a few columns. I am aware of the Oracle Data Masking Pack option, but it is costly.
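As one low-tech alternative (a sketch only; the table and column are hypothetical, and this simple scramble is neither reversible nor format-preserving the way the Data Masking Pack is):

-- overwrite a hypothetical phone column in the non-production copy with random digits
UPDATE customer
SET    phone = LPAD(TRUNC(DBMS_RANDOM.VALUE(0, 1e10)), 10, '0');
COMMIT;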
The scale of the tests that generate the following scenario is not huge right now, only 50 simulated users (or you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) all run against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of IO (5GB per request), making requests queue for disk availability (the disks are set up as RAID 0/1); it is noticeable even for as few as 3 threads.
I have managed to get the SQL to execute in 13 seconds for a single user on one occasion, but this appears short-lived, as when stats were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, that increasing the PGA/SGA (currently 4GB/6GB respectively) will only delay the queuing a bit, and that compression won't help either. In short, it looks as though we've already hit hardware restrictions for this particular scenario.
As I can't really explain why my rendered query no longer takes 13 seconds, it's niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising these types of queries against such wide tables; in other words, possibilities that we haven't considered.
Attached is the query and plan.
I'm creating this thread so we can share experience of using the "Code Review options" that come with the Toad formatter.
Personally I have used it only for formatting code, not to enforce SQL standards. Any inputs on this, and is it advisable to use it as a corporate standard?