Performance Tuning :: Oracle10g Standard Edition On Linux Machine
Jan 16, 2012
In my production environment a two-node RAC database is running; the RAM is 35 GB on node 1 and 41 GB on node 2, and the rest of the configuration is as below.
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, on a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc...) with good storage?
Are 300 million rows in a single table, with 500K transactions/day, too much?
When migrating a Standard Edition database from 32-bit Linux to the 64-bit Windows version, is server media needed? If yes, can you give me more details on what it consists of?
The only supported technique for converting an EE database to SE is export-import, as documented in note 139642.1. Our client is reluctant to do this because of the downtime involved. It is, however, possible to open the EE database from an SE home without any problem.
The note says only: "When you just install the Standard Edition software, you will end up with Data Dictionary objects which are of no use (or perhaps even invalid) and possibly create problems when maintaining the database."
I was trying to generate an AWR report, but the report that was generated had most of its sections empty. Later I heard that AWR reports are not fully supported in 11g. Is that true?
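If the empty sections turn out to be a licensing rather than a version issue (AWR reporting requires the Diagnostics Pack, an Enterprise Edition option), Statspack is the traditional alternative. A minimal sketch, assuming the standard scripts under ?/rdbms/admin are present:

[code]
-- One-time install (prompts for the PERFSTAT password and tablespaces)
@?/rdbms/admin/spcreate.sql

-- Take a snapshot before and after the workload of interest
EXEC statspack.snap;

-- Generate a report between two snapshot IDs
@?/rdbms/admin/spreport.sql
[/code]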
I wish to run a SQL query and measure its elapsed time, then compare the values with Oracle DBs at other companies. That will give me a feeling for whether our DB performs well. For example, in the UNIX world you can create a random 4 GB file to measure I/O throughput and compare the values (for example 4 MB/sec).
What's the simplest way to compare DB response time between forum members' databases and our own? I don't need 100% accurate numbers.
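A minimal sketch of such a probe (the row count is arbitrary; since the statement is CPU-bound and touches no tables, storage differences don't skew the comparison):

[code]
-- Run the same statement on each database and compare the
-- elapsed times that SET TIMING reports.
SET TIMING ON
SELECT COUNT(*)
  FROM (SELECT LEVEL FROM dual CONNECT BY LEVEL <= 10000000);
SET TIMING OFF
[/code]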
I am querying v$sga and getting a variable size of 211337216 bytes. When querying v$sgastat I get:
java pool: 16777216, large pool: 41943040, shared pool: 398560392
As far as I know the following condition should hold, but it does not:
[code]
Variable sga = java pool + large pool + shared pool

select pool, sum(bytes)
  from v$sgastat
 where pool in ('shared pool', 'java pool', 'large pool')
 group by pool;
[/code]
Here the variable size from v$sga is 211337216 bytes, while java pool + large pool + shared pool comes to 211302536 bytes, a difference of about 34 KB.
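One way to reconcile the two views, as a sketch: the "variable size" in v$sga covers everything allocated from a pool, which can include more than the three pools above (a streams pool, for instance), so summing all pool-allocated components should track it more closely:

[code]
-- Sum every component that belongs to a pool; rows with a NULL pool
-- (fixed_sga, buffer_cache, log_buffer) are reported separately by v$sga.
SELECT SUM(bytes) AS variable_size
  FROM v$sgastat
 WHERE pool IS NOT NULL;
[/code]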
I am using Oracle Release 11.2.0.3.0 - 64bit Production. We have a 3-tier architecture (firewall/web/app/DB). I saw some SQL queries running for ~10 hours in my database; they come from the application (module JDBC THIN CLIENT). After talking to the Java team, they asked me to kill the sessions running those queries. The queries belong to a search screen in which a user enters a large date range and then moves to another tab, but the queries keep running in the database even though the user is no longer interested in the result set.
How can I avoid this? In the past our database has suffered resource contention leading to application slowness, so I was planning to set different timeouts using Database Resource Manager consumer groups for online requests and batch requests, depending on the app server (that is, by machine name) issuing the request.
I have done the setup below in my local environment to test one scenario, in which I make a database call from a different machine and expect it to time out after the specified duration. But it is not working as expected: the calls from the specified machine are not being assigned to the consumer group I created.
BEGIN
  -- create the pending area
  dbms_resource_manager.create_pending_area();
END;
/
BEGIN
  -- create the consumer group
[code]....
After this, when I check calls from the machine 'LR9XY7T8', they belong to the consumer group 'OTHER_GROUPS', and the SQL query does not time out within 60 seconds as specified.
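For reference, here is a minimal sketch of the kind of setup being attempted (plan, group, and user names such as APP_PLAN, ONLINE_USERS, and APPUSER are hypothetical). One frequent reason sessions land in OTHER_GROUPS despite a matching mapping rule is that the user was never granted the privilege to switch into the mapped group, so the grant at the end may be the missing piece:

[code]
BEGIN
  dbms_resource_manager.create_pending_area();

  -- Consumer group for online users
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'ONLINE_USERS',
    comment        => 'online requests');

  -- Map sessions from a given client machine to the group
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.client_machine,
    value          => 'LR9XY7T8',
    consumer_group => 'ONLINE_USERS');

  -- Plan with a 60-second per-call limit: long calls get cancelled
  dbms_resource_manager.create_plan(
    plan    => 'APP_PLAN',
    comment => 'timeout plan');
  dbms_resource_manager.create_plan_directive(
    plan             => 'APP_PLAN',
    group_or_subplan => 'ONLINE_USERS',
    comment          => 'cancel calls running longer than 60s',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 60,
    switch_for_call  => TRUE);
  dbms_resource_manager.create_plan_directive(
    plan             => 'APP_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'mandatory catch-all directive');

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
END;
/

-- Without this grant the mapping is ignored and sessions stay in OTHER_GROUPS
BEGIN
  dbms_resource_manager_privs.grant_switch_consumer_group(
    grantee_name   => 'APPUSER',
    consumer_group => 'ONLINE_USERS',
    grant_option   => FALSE);
END;
/

-- Activate the plan
ALTER SYSTEM SET resource_manager_plan = 'APP_PLAN';
[/code]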
Recently we downgraded our database from Enterprise to Standard Edition. Our SGA size before the downgrade was 11 GB, it is still 11 GB, and there is no problem as such in the database. Yet I have read somewhere that Standard Edition doesn't support an SGA size of more than 2 GB.
I am checking out licenses. We all know that EE is much more expensive than SE, yet many customers have EE installed without being sure they need all the features. After several years of production, a downgrade is considered 'risky' and we continue to pay for full EE.
How can we check and be sure that a downgrade to SE would not cause any problem?
Some checks include:
* Partitioning used in user schemas? --> no downgrade to SE
* Bitmap indexes in user schemas? --> no downgrade to SE
How can we complete this list, or is there some script to make this easy?
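As a partial aid (a sketch, not a complete check), the feature-usage view shows which features have actually been exercised, so the EE-only ones can be cross-checked against the list:

[code]
-- Features with recorded usage; review the EE-only entries
-- (Partitioning, Bitmap Indexes, etc.) before attempting a downgrade.
SELECT name, version, detected_usages, last_usage_date
  FROM dba_feature_usage_statistics
 WHERE detected_usages > 0
 ORDER BY name;
[/code]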
I have a requirement to replicate between two 11gR2 RACs on Standard Edition. I have the following questions:
1. Does Standard Edition support DDL capture? I'm asking this because
[URL]......
says "SE1/SE: no capture from redo". What does that really mean?
2. Is it possible to configure capture at the schema level and skip only some of the tables/triggers?
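"No capture from redo" presumably means the log-based (asynchronous) capture process is not available on SE; what remains is Streams synchronous capture, which captures DML only and is defined per table rather than per schema. A minimal sketch, with hypothetical queue, schema, and table names:

[code]
BEGIN
  -- Create a queue for the captured changes
  dbms_streams_adm.set_up_queue(
    queue_table => 'strmadmin.capture_qt',
    queue_name  => 'strmadmin.capture_q');

  -- Synchronous capture rules are table-level only (no DDL, no schema rules)
  dbms_streams_adm.add_table_rules(
    table_name   => 'scott.emp',
    streams_type => 'sync_capture',
    streams_name => 'sync_cap',
    queue_name   => 'strmadmin.capture_q');
END;
/
[/code]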
We are upgrading from Oracle 9.2.0.8 to 11gR2; both are Standard Edition databases. The database is part of a product that runs at customer sites and won't grow beyond 50 GB. It runs in archivelog mode, and our backup script does a hot backup every night, plus copying the archived logs, redo logs, controlfiles, etc. We keep two complete backups - last night's plus the night before's. A tape backup then saves the backed-up files to an off-server location.
This architecture has allowed us to recover our customers' data from many odd occurrences at customer sites (power loss during a hot backup, corrupt controlfiles/datafiles/archivelogs). My question is: given that we are running Standard Edition, which doesn't have most of the useful RMAN features, is it worth it to switch to RMAN?
I took an Oracle Backup and Recovery class and posed this question to the instructor; the response was that it would be better to use RMAN than a manual user backup script. Our backup script is pretty battle-hardened - is that a good enough reason to stay with it?
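For comparison, the equivalent nightly job under RMAN can be quite short. A sketch (keeping two full backups mirrors the current retention; controlfile autobackup and block checking during backup are things a hand-rolled script doesn't get for free):

[code]
# RMAN sketch of a nightly hot backup, run from the RMAN client
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
[/code]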
I read that Oracle RAC, which is bundled with Standard Edition, supports at most 4 sockets. One of my clients has a proposal to use RAC with 3 nodes, each with a single quad-core Intel processor. As far as I understand, an Intel quad core is a multi-chip module, effectively a combination of two dual-core modules, so each Intel quad core may be counted as 2 sockets. That would sink my client's proposal, since the total number of sockets would be 2*3 = 6, exceeding the maximum of 4.
My customer wants to create a standby database for his production database (Oracle Standard Edition 11gR2 on Windows 2008 R2 64-bit). Is there any proof of concept that briefly explains the idea and how to achieve it?
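Since Standard Edition has no Data Guard, the usual approach is a manually maintained standby: restore a copy of the database on the standby host, ship archived logs across, and apply them on a schedule. A rough sketch (paths are hypothetical):

[code]
-- On the primary: create a controlfile for the standby
ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'C:\stage\standby.ctl';

-- On the standby, after restoring the datafiles and standby controlfile:
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;

-- Run by a scheduled job after each batch of archived logs is copied
-- over; apply the shipped logs, then CANCEL and wait for the next batch.
RECOVER STANDBY DATABASE;
[/code]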
In our current setup we have RAC on Standard Edition, and the client is now planning to go to Enterprise Edition, though nothing is decided yet because of the cost. Is there any difference between Grid Infrastructure 11gR3 for Enterprise Edition and for Standard Edition?
They told me to first install Enterprise Edition and then move to Standard Edition if they can't get the EE license. In that case, do I have to re-install Grid Infrastructure for Standard Edition?
I decided to use Oracle 11g XE on Linux Mint. I read the tutorial "Oracle 11gR2 Express Edition on Linux Ubuntu 11.10 howto", including the seventh part. However, I made a minor change to the script it gives: since Ubuntu 12.04 a bug has appeared, so I use the following script:
cat > /etc/init.d/oracle-shm <<-EOF
#!/bin/sh
# /etc/init.d/oracle-shm
#
case "$1" in
  start)
[code]...
I have already uninstalled and reinstalled Oracle 3-4 times, but nothing changed. When I try to configure oracle-xe, I get this message:
Starting Oracle Net Listener...Done
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details

And this is the content of the file postDBCreation.log:

begin
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
It isn't possible to use one EC2 machine for each RAC node because EC2 can't handle the virtual IPs, but I had thought that I could use one EC2 machine to host several Linux VMs under VirtualBox (the same way I use VirtualBox on a Windows host). However, I can't get VirtualBox working on EC2. The only relevant advice I can find consists of comments to the effect that running any virtualization product with a Xen machine as host is not a good idea.
I do realize that this isn't a 100% Oracle question, but has anyone managed to set up RAC on an Amazon Cloud machine?
I would like to install Oracle Express Edition on Linux. I have some trouble on x64 Linux with win32 programs that run via Wine, and I need Oracle for development. So I would like to install 32-bit Oracle Express Edition, but I can't find it anywhere. Does it exist?
Why does a 32-bit Oracle Express Edition exist for Windows but not for Linux? We use Oracle 10g widely, and it differs from 11g in many particulars, but the 10g version is nowhere to be found. Where can I download it (for Linux)?
I am looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is the memory-based (logical) structures, whereas the database consists of the physical structures.
However, how does one tune the database, i.e. the physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
We are busy migrating one database from a Windows 2003 / Oracle 10g platform to Linux and Oracle 11gR2.
We exported/imported the data and it looks OK, and the explain plans look the same, but our heavy batches are twice as slow as on the Windows box. The two top events are disk-related, sequential and scattered reads; they account for 90% of the batch job's time. I read some white papers suggesting that ASM can be bad in some cases, and likewise Linux for this particular kind of scattered reads. I was wondering whether simply raising the SGA from 4 GB to 10 GB, to get more cache, would speed things up.
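Before committing to the larger SGA, the buffer cache advisory can estimate how physical reads would change at various cache sizes (a sketch; it assumes db_cache_advice is at its default of ON so that v$db_cache_advice is populated):

[code]
-- Estimated physical reads at candidate cache sizes for the default pool
SELECT size_for_estimate AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
 ORDER BY size_for_estimate;
[/code]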
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which takes 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
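The statement above is cut off; as a generic follow-up to the EXPLAIN PLAN, the plan can be displayed as below. Note too that on Standard Edition the parallel hint is ignored, since parallel execution is an Enterprise Edition feature:

[code]
-- Show the plan produced by the EXPLAIN PLAN statement above
SELECT * FROM TABLE(dbms_xplan.display);
[/code]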
How does the length of a column affect index performance?
For example, suppose I had an IOT table emp_iot with columns (id number, job varchar2(20), time date, plan number).
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).
What performance increase could I expect if in column "job" I stored not names but numbers identifying the job names, e.g. storing "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'?
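A rough illustration of the trade-off (hypothetical DDL; the point is simply that a shorter key packs more entries into each index block, so the IOT's B-tree stays smaller and fewer blocks need to be read):

[code]
-- Original shape: the VARCHAR2(20) job name is part of every key entry
CREATE TABLE emp_iot (
  id   NUMBER,
  job  VARCHAR2(20),
  time DATE,
  plan NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job, time)
) ORGANIZATION INDEX;

-- Alternative: a small numeric code plus a lookup table for the names
CREATE TABLE job_codes (
  job_id   NUMBER PRIMARY KEY,
  job_name VARCHAR2(20) NOT NULL UNIQUE
);

CREATE TABLE emp_iot2 (
  id     NUMBER,
  job_id NUMBER REFERENCES job_codes (job_id),
  time   DATE,
  plan   NUMBER,
  CONSTRAINT emp_iot2_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;
[/code]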