I am querying v$sga and getting a Variable Size of 211337216 bytes. When querying v$sgastat I get:
java pool   : 16777216
large pool  : 41943040
shared pool : 398560392
But as far as I know the following condition should hold, yet it does not:
[code]
-- expected: Variable Size = java pool + large pool + shared pool
select pool, sum(bytes)
from   v$sgastat
where  pool in ('shared pool', 'java pool', 'large pool')
group  by pool;
[/code]
Here the Variable Size from v$sga is 211337216 bytes, while java pool + large pool + shared pool comes to 211302536 bytes.
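The small discrepancy is normal: the variable area can also hold other pools (e.g. the streams pool) plus granule-rounding overhead, so the three named pools rarely add up exactly. A hedged way to compare the two views:
[code]
-- Variable Size as v$sga reports it
select value from v$sga where name = 'Variable Size';

-- every pool v$sgastat places in the variable area (the rows with a NULL
-- pool are fixed_sga, buffer_cache and log_buffer, which sit outside it);
-- whatever is still left over is typically granule rounding
select pool, sum(bytes)
from   v$sgastat
where  pool is not null
group  by pool;
[/code]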
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, on a cutting-edge server (8 or 12 cores, 72 GB RAM, FC 4 Gbit, etc.) with good storage?
Is 300 million rows in a single table with 500K transactions/day too much?
My problem is that I installed Oracle 9i Enterprise Edition 9.0.1.1.1 on Windows XP Professional, but the Oracle Database Configuration Assistant shows me the following errors:
In my production environment a two-node RAC database is running; the RAM size is 35 GB on node1 and 41 GB on node2, and the rest of the configuration is as below.
The only supported technique for converting an EE database to SE is export-import, as documented in note 139642.1. Our client is reluctant to do this because of the downtime involved. It is, however, perfectly possible to open the EE database from an SE home.
The note only says: "When you just install the Standard Edition software, you will end up with Data Dictionary objects which are of no use (or perhaps even invalid) and possibly create problems when maintaining the database."
In our current setup we have RAC on Standard Edition, and the client is now planning to go for Enterprise Edition but has not yet decided because of the cost. Is there any difference between Grid Infrastructure 11gR3 Enterprise Edition and Standard Edition?
They told me to install Enterprise Edition first and then move to Standard Edition if they can't get the EE license, so in that case do I have to re-install Grid Infrastructure for Standard Edition?
I have a question regarding memory parameters in Oracle Database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32 GB of RAM, and the kernel parameter shmmax on the server is set to 16 GB. Is it reasonable to set sga_max_size to that same value, and db_cache_size to 80% of it?
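For what it's worth, a minimal sketch of setting those two parameters, assuming an spfile is in use and the 16 GB shmmax stays the ceiling (the values are illustrative, not a recommendation):
[code]
-- sga_max_size is static in 9i, so a restart is required for it to apply;
-- keeping it at or below shmmax avoids a multi-segment SGA
alter system set sga_max_size = 16384M scope=spfile;
-- db_cache_size at roughly 80% of that ceiling (illustrative)
alter system set db_cache_size = 13107M scope=spfile;
shutdown immediate
startup
[/code]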
I have a procedure which mainly runs queries on a table that has nearly 9.5 million records. This procedure takes nearly 15 minutes to complete on our main database. I exported and imported the schema to our backup database, and there the same procedure took just 3 seconds to complete.
I tried analyzing the table in our main database and executed the procedure again, but it showed no improvement: ANALYZE TABLE DN_ACTIONS COMPUTE STATISTICS;
I am not sure whether computing statistics for all the tables in the schema will help. I also checked that there is enough disk space where the Oracle data files are stored, and I am turning on SQL trace to see which SQL statements in the procedure take the longest.
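Note that ANALYZE is deprecated for gathering optimizer statistics; DBMS_STATS is the supported route and also covers the indexes. A hedged sketch, run as the schema owner (hence ownname => user):
[code]
-- gather optimizer statistics on the table and its indexes with DBMS_STATS
begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'DN_ACTIONS',
    cascade          => true,                         -- include indexes
    estimate_percent => dbms_stats.auto_sample_size);
end;
/
[/code]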
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is the memory-based (logical) structures, whereas the database consists of the physical structures.
However, how does one tune the database, i.e. the physical structure? Does it come down to file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
The customer is sending data from a legacy system (source) via a web service, which in turn calls a package on the Oracle server (target). This package simply inserts the data passed by the legacy system into a master staging table in the Oracle database. When they started this process in Sept 2011, 4 lakh (400,000) records were inserted into the staging table. In Oct 11 it was 0 records, Nov 11 2 lakh, Dec 11 1 lakh, Jan 12 1 lakh, Feb 12 73k, Mar 12 0, and Apr 12 52k.
As we can see, the number of records inserted into the table has fallen over time. What should the starting point be here? Since the web service calls the package on the fly, how can I enable trace for that package? I cannot replicate this in Dev, as the process only runs in PROD.
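One low-impact starting point is to trace by service rather than by session, since the web-service connections come and go. A hedged sketch using DBMS_MONITOR, where 'APP_SVC' is a placeholder for the real service name visible in v$session:
[code]
-- trace every session that connects through the given service,
-- including bind values and wait events ('APP_SVC' is a placeholder)
begin
  dbms_monitor.serv_mod_act_trace_enable(
    service_name => 'APP_SVC',
    waits        => true,
    binds        => true);
end;
/

-- ...let the web service run for a while, then switch tracing off:
begin
  dbms_monitor.serv_mod_act_trace_disable(service_name => 'APP_SVC');
end;
/
[/code]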
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have simply forgotten the name and can't recall it. What is this type of row-reduction optimization called?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure. It's only a couple.
Is there any possible reason why we'd have to re-install a procedure to correct a performance problem?
Recently we downgraded our database from Enterprise to Standard Edition. Our SGA size before the downgrade was 11 GB, it is still 11 GB now, and there is no problem as such in the database. Yet I have read somewhere that Standard Edition doesn't support an SGA size of more than 2 GB.
I am checking out licenses. We all know that EE is much more expensive than SE, but many customers do have EE installed, unsure whether they need all its features at all. After several years of production, a downgrade is considered 'risky', and we continue to pay for full EE.
How can we check and be sure that a downgrade to SE would not be any problem?
Some checks include:
* partitioning used in user schemas? --> no downgrade to SE
* bitmap indexes in user schemas? --> no downgrade to SE
How can we complete this list, or is there a script to make this easy?
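A reasonable starting point is the usage history the database keeps about itself, plus direct dictionary checks for the usual blockers. A hedged sketch:
[code]
-- EE features the database has actually detected in use
select name, version, detected_usages, last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;

-- direct checks for two common blockers
select owner, table_name
from   dba_part_tables
where  owner not in ('SYS', 'SYSTEM');

select owner, index_name
from   dba_indexes
where  index_type = 'BITMAP'
and    owner not in ('SYS', 'SYSTEM');
[/code]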
Ours is an Oracle 11.2.0.2.0 DB, a 4-instance RAC on AIX (Unix). For a long time we have faced a problem where CPU utilization reaches 100% and a reboot is required at least once or twice a month. Looking at the event logs, we find that the event "cursor: pin S wait for X" is consuming a lot of waits.
On analyzing, I found that we are firing the same query from the application 15 to 20 lakh (1.5 to 2 million) times, for which a lot of mutexes keep spinning to get shared mode and consume a large amount of CPU.
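To confirm which statement the contention is on: for "cursor: pin S wait for X", P1 of the waiting session is the hash value of the contended cursor. A hedged sketch for a RAC database:
[code]
-- which cursor are sessions waiting on? P1 = hash value of the SQL
select p1 as sql_hash_value, count(*) as waiters
from   gv$session
where  event = 'cursor: pin S wait for X'
group  by p1
order  by waiters desc;

-- map the hash value back to the statement and its child-cursor count
select sql_id, executions, version_count, sql_text
from   gv$sqlarea
where  hash_value = :sql_hash_value;
[/code]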
I am moving data from 10g Enterprise Edition to 11g Standard Edition using a normal import command. After it completed I went through the log and found something that confuses me:
in some tables a single row is not inserted, raising ORA-12899, yet checking the database shows the column is VARCHAR2(100), while the log shows the error occurred trying to insert 101 characters. How can this happen to a single row?
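A common cause (an assumption here, since the character sets aren't stated) is character-set expansion on import, e.g. a single-byte source database imported into an AL32UTF8 target: one accented character becomes two or three bytes, and a column sized in bytes overflows by exactly that little. A hedged check and workaround, with placeholder names:
[code]
-- compare this on both the source and the target database
select value
from   nls_database_parameters
where  parameter = 'NLS_CHARACTERSET';

-- if expansion is the cause, CHAR length semantics lets the column hold
-- 100 characters regardless of their byte length (placeholder names)
alter table some_table modify (some_col varchar2(100 char));
[/code]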
I've installed Grid Control (aka OMS) 10.2.0.5 and am trying to look at Database Performance. But instead of information I get the headers and a blank picture in the place where all the charts are usually shown. It looks as if the page tries to access the data source but fails, showing just the broken-image symbol and all the menus of that page.
I am having problems starting Oracle Database 10g Express Edition on Fedora. Here is what I did in order to start it:
[code]
[root@x1-6-00-c0-9f-bb-ba-57 ~]# /etc/init.d/oracle-xe start
[root@x1-6-00-c0-9f-bb-ba-57 ~]# xhost +
access control disabled, clients can connect from any host
[/code]
My customer wants to create a standby database for his production database (Oracle Standard Edition 11gR2 on Windows 2008 R2 64-bit). Now I need a proof of concept which briefly explains the approach and how to achieve it.
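Standard Edition has no managed Data Guard, so the usual pattern is a manual standby: restore a backup with a standby controlfile on the second server, ship archived logs yourself (a scheduled copy job), and apply them periodically. A minimal sketch of the apply step only, assuming the logs have already been copied across:
[code]
-- on the standby instance (standby controlfile in place)
startup mount;
recover standby database;
-- SQL*Plus prompts for each archived log; apply what has been shipped,
-- then answer CANCEL to stop and leave the standby mounted
[/code]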
Does anyone have insight into the overheads of mutually authenticated SSL for database connections? This is over a fast local network, to a RAC cluster, with a DB firewall in front. There's always a large element of "it depends".
The information I'm interested in includes the latency for initial session setup and for subsequent data transfer, the increase in network packet size, and the increase in CPU cost on the database server. I guess there are some implications for session memory usage as well.
I have one big database which I need to migrate to Oracle, because Oracle handles big databases better than the alternatives. I wrote transfer software and everything works great except for one thing: during the transfer I found that Oracle keeps filling the redo log and undo, and my question is how to migrate (or whether I even can migrate) the database to Oracle without filling undo (deactivating that process), and afterwards put the database back to normal operation. I just need to transfer the data as-is, and from that point Oracle takes over.
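Undo cannot be switched off outright, but a direct-path load generates almost no undo, and combined with NOLOGGING very little redo either. A sketch, assuming the target database is not in FORCE LOGGING mode and using placeholder table names (take a backup afterwards, since NOLOGGING changes are unrecoverable):
[code]
alter table target_tab nologging;

-- direct-path insert: loads above the high-water mark, minimal undo
insert /*+ APPEND */ into target_tab
select * from staging_tab;
commit;  -- mandatory before the table can be queried again after APPEND

alter table target_tab logging;
[/code]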
I am using Oracle version 11.2.0.3. We recently migrated to 11g and, after a month of smooth and comparatively better performance, we are suddenly facing performance issues with our database; it has crashed twice within 5 days. We didn't push any new code to the database in the recent past, at least not after the 11g migration. After getting feedback from the Oracle Corporation guys, they pointed to the default database stats-gathering job, which was eating most of the CPU because of the default degree: it was running in 160 parallel threads and causing resource starvation. So we reduced the degree of the stats-gathering job to 8.
But the database crashed again two days back, and rebooted within 3 minutes back to normal, even after the default degree was changed to 8. Is this happening due to a specific application SQL or something else?
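For reference, a hedged sketch of how the automatic stats job's parallelism is capped in 11g (presumably what was changed here):
[code]
-- cap the degree used by automatic statistics gathering
exec dbms_stats.set_global_prefs('DEGREE', '8')

-- verify the current setting
select dbms_stats.get_prefs('DEGREE') as stats_degree from dual;
[/code]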
I am doing export and import of a database. Before loading the data I drop all the tables and then import. Is there any issue if we drop tables and import data frequently?
I understand that when data is read from disk, I/O is done, and when computations are done, the CPU is used. So where does the following equation fit in?
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, when doing "Payments by the customer" it now takes a lot of time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
[code]
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 1152M
sga_target                           big integer 0
[/code]
....
In the scenario above the database is not using ASMM, and it uses an spfile. If I want to increase the db_cache_size parameter, do I need to bounce the instance?
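With sga_target at 0, ASMM is off and db_cache_size is managed manually, but the parameter itself is dynamic: no bounce is needed as long as the new size plus the other SGA components still fits under sga_max_size (1152M here). A hedged sketch with an illustrative value:
[code]
-- dynamic change, persisted to the spfile as well
alter system set db_cache_size = 512M scope=both;
[/code]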