Performance Tuning :: SQL Library Miss Rate?
Mar 18, 2011: In Spotlight (the Toad tool), an alarm appears about an SQL library miss rate of 61.2%.

Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the physical structure of a database? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
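A minimal sketch of one classic physical-structure check, assuming a non-ASM setup (the views are standard; the interpretation is illustrative): I/O distribution across datafiles is what traditionally drove file-placement decisions.

SELECT df.name,
       fs.phyrds  AS physical_reads,
       fs.phywrts AS physical_writes,
       fs.readtim AS read_time_cs
FROM   v$filestat fs
JOIN   v$datafile df ON df.file# = fs.file#
ORDER  BY fs.phyrds DESC;   -- hot files float to the top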
My production database version is 11.2.0.1.0 and the
OS version is Microsoft Windows Server 2003
Standard Edition
Service Pack 2.
What are the ideal values of the following for my database?
Redo Latch Statistics (Per-Second)
Library Cache Latch Statistics (Per-Second)
Total Latch Misses (Per-Second)
Total Latch Sleeps (Per-Second)
[code]...
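As a rough way to compute these per-second figures yourself, a sketch (not authoritative: latch names vary by version, and in 11g much of the library cache is protected by mutexes rather than latches), averaging since instance startup:

SELECT l.name, l.gets, l.misses, l.sleeps,
       ROUND(l.misses / GREATEST((SYSDATE - i.startup_time) * 86400, 1), 4) AS misses_per_sec,
       ROUND(l.sleeps / GREATEST((SYSDATE - i.startup_time) * 86400, 1), 4) AS sleeps_per_sec
FROM   v$latch l CROSS JOIN v$instance i
WHERE  l.name LIKE 'redo%' OR l.name LIKE 'library cache%';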
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which runs for 13 hours to update records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
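One thing worth checking (an assumption about the setup, not a diagnosis): the PARALLEL hint alone does not parallelise the DML part of a MERGE unless parallel DML is enabled in the session first. A sketch:

ALTER SESSION ENABLE PARALLEL DML;
-- re-run the MERGE, then confirm the DML was actually parallelised:
SELECT * FROM v$pq_sesstat WHERE statistic = 'DML Parallelized';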
How does column width affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if in column JOB I stored not names but numbers identifying the job names?
For example, I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
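A minimal sketch of that idea (job_lkp and the constraint names are hypothetical, not from the original post): keep the names in a lookup table and store only a small NUMBER code in the IOT key.

CREATE TABLE job_lkp (
  job_id   NUMBER        PRIMARY KEY,
  job_name VARCHAR2(20)  NOT NULL UNIQUE
);

CREATE TABLE emp_iot (
  id     NUMBER,
  job_id NUMBER REFERENCES job_lkp,   -- 1 = 'ANALYST', 2 = 'NIGHT_WORKED', ...
  time   DATE,
  plan   NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;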
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed across many extents, the scan process takes a long time, and the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If needed, I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Any useful script or query to do this, or any interesting Oracle system view?
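A starting sketch (the schema filter is an assumed placeholder; note that in locally managed tablespaces a high extent count is usually harmless in itself):

SELECT owner, segment_name, segment_type, COUNT(*) AS extent_count
FROM   dba_extents
WHERE  owner = 'SCOTT'   -- replace with your schema
GROUP  BY owner, segment_name, segment_type
ORDER  BY extent_count DESC;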
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I just forgot the name and can't recall it. What is this type of row-reduction optimization called?
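What is described sounds like what is often called vertical partitioning (splitting a table by columns). A hypothetical sketch, with all table and column names invented for illustration:

-- wide source table: orders(order_id, status, amount, long_note, raw_payload)
CREATE TABLE orders_hot AS
  SELECT order_id, status, amount FROM orders;          -- small rows, fast scans
CREATE TABLE orders_cold AS
  SELECT order_id, long_note, raw_payload FROM orders;  -- wide, rarely read

-- queries needing only the hot columns now read far fewer blocks per row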
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Are 300 million rows in a single table with 500K transactions/day too much?
Simple database with a simple schema.
How many records begin to be too many?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure; it's only a couple.
Is there any possible reason we'd have to reinstall a procedure to correct a performance problem?
I need to check package performance and improve it.
1. How can I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I've observed that the delete takes a long time (the table has 7,000,000 records). This is a staging-type table: the data has to be cleaned daily before new data is inserted into it. What can I use instead of DELETE?
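For a staging table that is fully emptied every day, one commonly used alternative is TRUNCATE, which skips the per-row undo and redo of DELETE. Note it is DDL: it commits and cannot be rolled back (the table name here is assumed):

TRUNCATE TABLE staging_tab;                  -- fast, resets the high-water mark
-- or, to keep the allocated extents for the next load:
TRUNCATE TABLE staging_tab REUSE STORAGE;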
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the statistics so that plan is followed without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the join taking place is a nested loop.
Questions: With respect to the above query,
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I could use the ORDERED and USE_HASH hints to effect this, but again, I have heard that altering statistics is a more robust way to control an execution plan than hints.
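For reference, a hinted version that forces both choices at once might look like this (LEADING and USE_HASH are standard optimizer hints; whether to prefer them over adjusted statistics is exactly the question above):

SELECT /*+ LEADING(d) USE_HASH(e) */
       e.empid, e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;   -- dept drives, emp is hash-joined to it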
I have an issue with export (expdp).
When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
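Rather than guessing at memory parameters, it may help to watch the job itself while it runs; a sketch against the standard Data Pump view:

SELECT owner_name, job_name, operation, state, degree
FROM   dba_datapump_jobs;    -- one row per active Data Pump job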
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes (LOB, FAMILY, BRAND, etc.), and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure, which gets the values selected by the user as a comma-separated string; V_SELLOB is one of these, and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
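A common rewrite of that DECODE filter is a plain OR, which makes the "all rows" case explicit to the optimizer. A sketch of just the WHERE clause, shown for a single value (v_sellob is the bind from the procedure and lob_desc the column, as in the original; a comma-separated list would need to be split or handled with a collection):

WHERE a.sku_status = 'A'
AND   (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob)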
What are the principal things to look at when we get different performance results for the same query? I have two different databases: the plan and the data are the same, but the performance results are very different.

What are the most important performance keys we have to calculate or take into account to preserve or increase DB performance in terms of response times?

This is my sample code:
create table myReport(
report_id integer,
product_class_id integer,
place_id integer,
[code]...
I would like to get the rate for my report table. The search for the rate will first be based on product_class_id, year and place_id; second, if the report row matches product_class_id and year but not place_id, it should get the default rate (in my rate table the default row has a NULL place_id).
Last, if the report row doesn't match any key, the result should be zero. This is my desired result; how can I write this query?
report_id  product_class_id  place_id  fy    rate
1          1                 1         2012  15
2          1                 2         2011  6
3          1                 3         2011  7
4          2                 2         2012  18
5          2                 5         2011  2
6          3                 1         2012  0
7          4                 1         2012  0
I tried to do this with a function but don't know how to handle the default place_id (NULL).
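A hedged sketch of one way to express the fallback, assuming the rate table is rate(product_class_id, place_id, fy, rate) with NULL place_id marking the default row, and that myReport also carries the fy column shown in the desired output:

SELECT r.report_id, r.product_class_id, r.place_id, r.fy,
       COALESCE(
         (SELECT rt.rate FROM rate rt
          WHERE  rt.product_class_id = r.product_class_id
          AND    rt.fy = r.fy AND rt.place_id = r.place_id),   -- exact match
         (SELECT rt.rate FROM rate rt
          WHERE  rt.product_class_id = r.product_class_id
          AND    rt.fy = r.fy AND rt.place_id IS NULL),        -- default rate
         0) AS rate                                            -- no match at all
FROM   myReport r;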
How would I be able to calculate the redo rate for use in the required-bandwidth formula seen below?
Required bandwidth formula: ((redo rate in bytes per second / 0.7) * 8) / 1,000,000 = bandwidth in Mbps
Example: a 385 KB/sec peak rate would require an available network bandwidth of at least
((394253 / 0.7) * 8) / 1,000,000 = 4.5 Mbps.
Source of the formula: Network Bandwidth Implications of Oracle Data Guard. I'm using Oracle 10g.
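To measure the redo rate itself, a sketch against v$sysmetric (available in 10g; group_id = 2 selects the 60-second metric group) gives a current bytes-per-second figure to plug into the formula:

SELECT value AS redo_bytes_per_sec
FROM   v$sysmetric
WHERE  metric_name = 'Redo Generated Per Sec'
AND    group_id = 2;
-- for a historical peak, DBA_HIST_SYSMETRIC_SUMMARY (AWR) keeps MAXVAL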
My MVIEW has a refresh interval of (sysdate + 1) + 24/24. So how often does it get refreshed?
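A quick worked check of that NEXT expression: 24/24 is exactly one day, so the interval works out to two days ahead of each refresh.

SELECT (SYSDATE + 1) + 24/24 - SYSDATE AS days_ahead FROM dual;   -- returns 2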
I need to update rate_per_hour to 500 for all projects having fewer than 5 employees.
Here is the code I wrote, but it updates all records:
set serveroutput on
declare
cursor rate_cur is
select * from project
[Code].....
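A hedged single-statement rewrite that avoids the row-by-row cursor (the employee table and its projno join column are assumed names):

UPDATE project p
SET    p.rate_per_hour = 500
WHERE  (SELECT COUNT(*) FROM employee e
        WHERE  e.projno = p.projno) < 5;   -- only projects with fewer than 5 employees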
1) First Table
Id   Name   Eff_date
10   I1     15-APR-2013
20   I2     30-APR-2013
30   I3     26-MAY-2013
20   I4     10-SEP-2013
40   I5     10-SEP-2013
40   I6     10-OCT-2013
2) Second Table
Eff_date     Rate
15-APR-2013  900
30-APR-2013  500
16-SEP-2013  400
05-OCT-2013  200
Q. How do I get the department-wise effective rate?
RDBMS: Oracle 10gR2
OS: Windows
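A sketch of one interpretation: for each row of the first table, take the rate whose effective date is the latest one on or before that row's date (the table names first_tab and rate_tab are placeholders):

SELECT t.id, t.name, t.eff_date,
       (SELECT r.rate
        FROM   rate_tab r
        WHERE  r.eff_date = (SELECT MAX(r2.eff_date)
                             FROM   rate_tab r2
                             WHERE  r2.eff_date <= t.eff_date)) AS effective_rate
FROM   first_tab t;   -- e.g. I3 (26-MAY-2013) picks the 30-APR-2013 rate of 500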
With reference to
[URL]......
Quote:
A.2.2 Writing Backup Scripts for Disk and Tape Scenarios
As in the disk-only scenarios, the backup scripts in this section are categorized based on database workload.
As stated, it very clearly depends on the workload, more precisely on the rate of block change. The size of the database can be found based on the formula from
[URL]....
So how would I know the rate of block change in order to know which script is suitable for me? I tried to find the rate of block change for the database based on the change tracking file, but based on
[URL].....
Quote:
The size of the change tracking file is proportional to the size of the database and the number of enabled threads of redo. The size is not related to the frequency of updates to the database.
So how do I determine the rate of change? Can the rate of block change be derived from the size of the archive logs?
I have the following information with me:
starting from 5/10/2011 0101
ending 5/18/2011 1114
This constitutes 9.5 days.
F:\recover_area> dir /w
1644 File(s) 27,942,770,176 bytes
2 Dir(s) 10,019,270,656 bytes free
Average size of each file: 27,942,770,176 / 1644 = about 16,996,819 bytes.
Average size of each day's logs: 27,942,770,176 / 9.5 = about 2,941,344,229 bytes, roughly 3 GB.
If I have a database size of 92 GB, based on the archive log size of about 3 GB per day, can I conclude that a change of 3 GB / 92 GB counts as a small amount of block change?
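Instead of sizing from a directory listing, the archived-redo volume per day can be read straight from the catalog; a sketch:

SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size) / POWER(1024, 3), 2) AS redo_gb
FROM   v$archived_log
GROUP  BY TRUNC(completion_time)
ORDER  BY day;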
How can I check the refresh rate of an existing MView?
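For job-based refreshes, the schedule is stored with the refresh job; a sketch (in later versions the refresh may be a DBMS_SCHEDULER job instead):

SELECT job, what, interval, next_date
FROM   dba_jobs
WHERE  what LIKE '%dbms_refresh%' OR what LIKE '%dbms_mview%';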
I am working on an assignment where the client is using Oracle 10g but is stuck on the RBO. The application team builds dynamic queries from the GUI available to them, and some of these run very slowly.
The code cannot be changed to tune the queries, nor do we get the exact step in the plan which is the issue (being RBO). For some long-running queries the Tuning Advisor is not producing any recommendations.
Another hurdle is that all the application users use the same application user ID, so we cannot write a logon trigger to use the CBO for some particular queries to see what is happening in the background!
I want to tune the following SQL statement. In this SQL I want to get the hash_value and sql_text of the statements that are causing TX locks. Is it possible? This statement works fine, but sometimes it's slow.
SELECT DISTINCT hash_value,
sql_text
FROM gv$sql sq
WHERE hash_value IN (SELECT DISTINCT prev_hash_value
FROM gv$session se
WHERE sid IN (SELECT sid
FROM gv$lock l
WHERE type = 'TX'
AND ctime >= 2000
AND l.inst_id = se.inst_id
AND l.sid = se.sid)
AND sq.inst_id = se.inst_id);
[code]....
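A possibly cheaper alternative (10g and later): v$session exposes the blocker directly, so the join through gv$lock can be skipped. A sketch, with the 2000-second threshold carried over from the original:

SELECT DISTINCT b.sql_id, b.prev_sql_id
FROM   gv$session w
JOIN   gv$session b ON  b.sid = w.blocking_session
                    AND b.inst_id = w.blocking_instance
WHERE  w.blocking_session IS NOT NULL
AND    w.seconds_in_wait >= 2000;   -- join gv$sql on sql_id for the text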
I see one of my SQLs, run by a user on a 10.2.0.3 database, changing its SQL_ID after some runs even though the query has not changed a bit! However, the HASH_VALUE for this query remains the same.
How can the same query have different SQL_IDs but the same HASH_VALUE?
Note: Statistics are not modified on the base tables of this query.
I am running Oracle 10.2.0.1.0 on MS Windows 2003 server 64-bit with 16G RAM.
Here are the findings for my Oracle database.
SQL> select * from v$sgainfo;
NAME BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size 1293560 No
Redo Buffers 7094272 No
Buffer Cache Size 830472192 Yes
[code]...
I find that the SGA component Buffer Cache is decreasing, from 1.8G at the start down to 0.8G now. On the other hand, the Shared Pool component is increasing, from 0.3G at the start to 1.2G now. I noticed that there are 100 operations of shrinking the Buffer Cache and growing the Shared Pool in Oracle every day. Is this an indicator that I should raise SGA_MAX_SIZE?
I tried to increase SGA_MAX_SIZE to 4G, but I could not start Oracle afterward. Is this a limitation of MS Windows (the OS) or of Oracle? I set SGA_MAX_SIZE to 3G; this time I could start Oracle. What is the optimum/maximum I can set SGA_MAX_SIZE to? Is there any adverse effect or concern when setting SGA_MAX_SIZE to more than 2G?
Here I have a three-tier application. I want to find the host name from the SID or SQL_ID; I want to know which query runs on which host, because I have one user from the application to the database. So I want to know which query consumes more time on which host.
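A sketch of the usual starting point: gv$session carries the client machine and program alongside the current SQL_ID, which can then be joined to gv$sql for the statement text:

SELECT s.sid, s.machine, s.program, s.sql_id
FROM   gv$session s
WHERE  s.type = 'USER';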
How can I disable redo generation for DML statements?
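Strictly speaking, conventional DML always generates redo; what can be reduced is the redo of direct-path operations against NOLOGGING segments. A hedged sketch (table names assumed; note this makes the table unrecoverable from backups taken before the load):

ALTER TABLE big_stage NOLOGGING;
INSERT /*+ APPEND */ INTO big_stage    -- direct-path insert
SELECT * FROM src_tab;
COMMIT;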
I am getting enq: TS - contention as the top event in the AWR report. Where in the database can I look to fix this issue?

I want to run some OLTP benchmarks on my system. I have looked up the TPC-E benchmarking suite, but the documentation on the site makes no sense to me.