Performance Tuning :: Tools For Monitoring Load Of Database?
Dec 3, 2010
Which tools are available for monitoring the load of the database?
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
I want to get all the unused indexes in the system. If I put this query in a batch job and execute it every night for up to one month, storing its output in a table, then after one month I will have all the used indexes, and whatever is left would be our unused indexes.
select
distinct p.object_name c1
from
dba_hist_sql_plan p,
dba_hist_sqlstat s
[Code]....
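For reference, a minimal sketch of how the complete nightly job might look (index_usage_log is a hypothetical tracking table, and querying the DBA_HIST views requires the Diagnostics Pack license; adjust the owner filter to your schemas):

[code]
-- Assumes: create table index_usage_log
--   (run_date date, object_owner varchar2(30), object_name varchar2(30));
-- Logs every index that appears in an AWR-captured plan; after a month,
-- indexes never logged are candidates for "unused".
insert into index_usage_log (run_date, object_owner, object_name)
select distinct sysdate, p.object_owner, p.object_name
from   dba_hist_sql_plan p
       join dba_hist_sqlstat s on s.sql_id = p.sql_id
where  p.operation like 'INDEX%'
and    p.object_owner not in ('SYS', 'SYSTEM');
commit;
[/code]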
I have an OWB mapping which takes input from a staging table and adds those rows to the cube. The underlying table behind the cube is a relational fact table joined with the dimensions using foreign keys. The explain plan behind the query has a rather high cost, and the mapping runs for 30 minutes. If you see below, in step 17 the cost goes up to 1,396,573, which is also where nested loops start to appear. The query plan is also attached in image format.
Plan
SELECT STATEMENT ALL_ROWS Cost: 1,746,526,275 Bytes: 386,835,904 Cardinality: 464,947
46 NESTED LOOPS OUTER Cost: 1,746,526,275 Bytes: 386,835,904 Cardinality: 464,947
41 NESTED LOOPS OUTER Cost: 1,744,200,663 Bytes: 380,791,593 Cardinality: 464,947
37 NESTED LOOPS OUTER Cost: 1,743,270,415 Bytes: 374,747,282 Cardinality: 464,947
34 NESTED LOOPS OUTER Cost: 1,740,476,128 Bytes: 368,702,971 Cardinality: 464,947
29 NESTED LOOPS OUTER Cost: 1,739,545,862 Bytes: 362,658,660 Cardinality: 464,947
25 NESTED LOOPS OUTER Cost: 1,710,193,475 Bytes: 356,614,349 Cardinality: 464,947
[code].........
I need to create a script which can fetch error-related details from trace files regularly (maybe twice a day) into a custom table in the DB.
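One possible approach is a PL/SQL procedure, run by DBMS_SCHEDULER, that scans the alert log for ORA- lines. A minimal sketch, assuming a directory object TRACE_DIR pointing at the trace directory, a target table TRACE_ERRORS(captured_at, message), and your instance's alert log file name (all of these are placeholders):

[code]
create or replace procedure load_trace_errors is
  f   utl_file.file_type;
  buf varchar2(4000);
begin
  f := utl_file.fopen('TRACE_DIR', 'alert_ORCL.log', 'r', 4000);
  loop
    begin
      utl_file.get_line(f, buf);
    exception
      when no_data_found then exit;  -- end of file reached
    end;
    if buf like '%ORA-%' then
      insert into trace_errors (captured_at, message) values (sysdate, buf);
    end if;
  end loop;
  utl_file.fclose(f);
  commit;
end;
/
[/code]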
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have just forgotten the name and can't recall it. What is this type of row-reduction optimization called?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different from the source, but it's not like we have this problem with every procedure. It's only a couple.
Is there any possible reason why we'd have to reinstall a procedure to correct a performance problem?
11.2.0.1 on AIX 6.1 (quad-core, 16G RAM). I am still confused about how to take full advantage of these monitoring tools. Our database performance is actually satisfactory at the moment, except for occasional few-minute spikes where CPU goes above 80%. I just want to catch the culprit process/program responsible for these spikes. Is it wise to run ASH, AWR, and ADDM with an input window from 1AM to 1AM the next day? What I mean is that I will analyze a one-day period, so that I can catch the program/process that has the highest CPU/memory usage for the day.
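Rather than a whole-day AWR report (which tends to average short spikes away), one option is to query the ASH history directly for the samples around the spikes. A sketch, assuming the Diagnostics Pack license:

[code]
-- Top CPU consumers over the last day, from the AWR copy of ASH data.
select   sql_id, session_id, count(*) as cpu_samples
from     dba_hist_active_sess_history
where    sample_time > sysdate - 1
and      session_state = 'ON CPU'
group by sql_id, session_id
order by cpu_samples desc;
[/code]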
I am trying to use SQL Tracker, but when I want to start monitoring the w3wp.exe process, it says:
"Failed to create remote thread; error=8(Not enough storage is available to proccess this command.)"
When I try to monitor some other process (for example toad.exe), it works. I run SQL Tracker in administrator mode.
Which tools are recommended for load testing (for performance) of Oracle/J2EE 3-tier applications?
Is 'Oracle Application Testing Suite' the best for such tests, where we can simulate numbers of users and their various actions?
Does it come with the Oracle Database license, or do we have to buy it separately?
An SQL query is taking much longer than usual and not completing even when left running for hours! The query joins a table with a quite complex view.
The same query in a test database completes in less than 2 mins.
I would like to export the SQL plan from the test database to the prod database.
How can I export/import a particular SQL statement's execution plan in version 10.2.0.4?
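On 10.2 there is no direct plan export, but one commonly used approach is to transport the statement (with its captured plan) in a SQL Tuning Set; to actually pin the plan on prod you would still need a stored outline or SQL profile on top of it. A sketch with placeholder names (MY_STS, STS_STAGE, and the sql_id), requiring the Tuning Pack license:

[code]
-- On the test database: capture the statement into a SQL Tuning Set.
DECLARE
  cur dbms_sqltune.sqlset_cursor;
BEGIN
  dbms_sqltune.create_sqlset(sqlset_name => 'MY_STS');
  OPEN cur FOR
    SELECT VALUE(p)
    FROM   TABLE(dbms_sqltune.select_cursor_cache(
                   'sql_id = ''0abc123xyz9f1''')) p;  -- placeholder sql_id
  dbms_sqltune.load_sqlset(sqlset_name => 'MY_STS', populate_cursor => cur);

  -- Pack into a staging table, then move it with exp/imp or Data Pump.
  dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STAGE');
  dbms_sqltune.pack_stgtab_sqlset(sqlset_name        => 'MY_STS',
                                  staging_table_name => 'STS_STAGE');
END;
/
-- On production, after importing STS_STAGE:
BEGIN
  dbms_sqltune.unpack_stgtab_sqlset(sqlset_name        => 'MY_STS',
                                    replace            => TRUE,
                                    staging_table_name => 'STS_STAGE');
END;
/
[/code]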
We can't use the Exadata plug-in for Cloud Control, but we need some monitoring of the cell servers. Is OS Watcher the right tool, or do we need ADRCI for incidents and so on?
What do we have to install, and what information do we get?
Any insight into the overheads of mutually authenticated SSL for database connections? This is over a fast local network, to a RAC cluster, with a DB firewall in front. There's always a large element of "it depends".
The information I'm interested in includes things like latency for initial session setup and subsequent data transfer, as well as the increase in network packet size and the increase in CPU cost for the database server. I guess there are some implications for session memory usage as well.
I have one big database which I need to migrate to Oracle (because it rocks with big databases) from another database product. I have written transfer software and everything works great, except for one thing. During the transfer I found that Oracle keeps filling the redo log and undo, and my question is how to migrate the database to Oracle without filling undo (deactivating this process), and after that put the database back to normal operation. I just need to transfer the data as-is, and from that point Oracle takes over.
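Undo generation cannot be switched off, but it can be made negligible for a one-time load by using direct-path inserts, optionally combined with NOLOGGING to minimize redo as well. A sketch with placeholder table names (take a fresh backup afterwards, since NOLOGGING changes are unrecoverable):

[code]
alter table my_table nologging;

-- APPEND forces a direct-path insert: blocks are written above the high
-- water mark, generating minimal undo (and minimal redo while NOLOGGING).
insert /*+ APPEND */ into my_table
select * from my_staging_table;
commit;

alter table my_table logging;
[/code]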
I am using Oracle version 11.2.0.3. We recently migrated to 11g; after one month of smooth and comparatively better performance, we are suddenly facing performance issues with our database, and it crashed twice within 5 days, even though we didn't push any new code to the database in the recent past, at least not after the 11g migration. The Oracle Corporation guys gave us feedback pointing to the default database stats gathering job, which was eating most of the CPU because of the default degree specified: it was running 160 parallel threads, causing resource starvation. So we reduced the degree of the stats gathering job to 8.
But the database crashed again two days back, and rebooted within 3 minutes back to normal, even after the default degree was changed to 8. Is this happening due to some specific application-related SQL, or something else?
I am doing import and export of a database. Before loading data, I drop all the tables and then import. Is there any issue if we drop tables and import data frequently?
I understand that when data is read from disk, I/O is done, and when computations are done, CPU is used. So where does the following equation fit?
DB Time = sum of database CPU time + waits
Is I/O considered part of CPU time? Does this equation change with SAN or OS caching?
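For context, both figures come from the time model: "DB time" includes CPU plus all non-idle wait time (I/O waits included), while "DB CPU" is the CPU component only, so I/O counts toward the waits, not toward CPU time. They can be checked like this:

[code]
-- Instance-level time model; values are stored in microseconds.
select stat_name, round(value / 1e6, 1) as seconds
from   v$sys_time_model
where  stat_name in ('DB time', 'DB CPU');
[/code]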
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, while doing "Payments by the customer", it takes a lot of time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1152M
sga_target big integer 0
[code]....
In the scenario above, the database is not using ASMM, and it runs on an spfile. If I want to increase the db_cache_size parameter, do I need to bounce the instance?
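A sketch of what this would look like (the 512M value is a placeholder): db_cache_size is a dynamic parameter, so with an spfile and free headroom under sga_max_size (1152M here) no bounce should be needed; if the SGA has no free granules left, another component has to be shrunk first.

[code]
alter system set db_cache_size = 512M scope = both;
[/code]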
I am facing a weird situation wherein the explain plan of the same SQL in SIT and PROD is different. In fact, the explain plan is very costly in Prod, even though the DB version of both SIT and PROD is the same.
Below are the SQL and the corresponding explain plans in Prod and SIT respectively.
Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl) Nxt_comp_item_type, lvl, bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'
[code]...
The tables referred to in the above query are small tables containing around 10k records. They are partitioned on Region and not indexed.
Explain Plan in Prod: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240
[code]...
Explain Plan in SIT: COST CARDINALITY BYTES
SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120
[code]...
I am not able to figure out why there is such a huge change in cost between SIT and Prod. The job now runs for 3-5 hours, whereas it used to complete within 20 minutes in SIT.
We have a database with multiple fields containing NULL values, and in many queries we apply the NVL function to them, which in turn suppresses index usage exactly where it is really essential (selecting very few rows from massive data). Instead of creating lots of function-based indexes (on NVL) or composite indexes with (nullable_column, constant), I am thinking of setting a default value for most of the fields. In that regard I have some queries:
Which approach is better: setting a default value for the fields, or updating the fields with a default value and modifying inserts to take care of future data? Though altering the table and modifying the column to set a default value looks better, since it will take care of data inserted in the future, it will invalidate the subroutines. I understand that in 10g both statements will generate a lot of undo (though in 11g, I heard things changed for setting the default value of a column). Also, how do we take care of all the queries which use the criteria 'where column1 IS NULL' or 'where column1 IS NOT NULL'? It will be a really difficult task to change each and every occurrence of such a condition manually, even using user_source.
Finally, for numeric values, say an ID field which starts from 1 onwards (2, 3, 4, etc.), we can set 0 as a sensible default so that performance is not affected.
Is there a similar precaution for VARCHAR2 fields, purely from a performance point of view?
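Before committing to a mass data change, it may be worth noting that a single function-based index usually addresses the NVL suppression directly. A sketch with hypothetical names (t, col1):

[code]
-- The index expression must match the query predicate exactly.
create index t_col1_nvl_ix on t (nvl(col1, 0));
select * from t where nvl(col1, 0) = 42;

-- Alternative for plain "col1 IS NULL" predicates: adding a constant to
-- the index key means rows whose col1 is NULL are still indexed.
create index t_col1_ix on t (col1, 0);
select * from t where col1 is null;
[/code]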
I have a question regarding memory parameters in Oracle Database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32G of RAM. The OS kernel parameter shmmax on the server is set to 16G. Is it reasonable to set sga_max_size to the same value, and db_cache_size to 80% of that size?
By default, the DBMS_STATS package runs once every 24 hours to collect statistics for database objects, and Oracle collects new statistics when enough of the data (about 10%) has changed.
My question here is: how do I check whether a table has changed 10% in the database?
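One way is to compare the change counters that table monitoring records against the row count from the last statistics gathering. A sketch (the schema filter is a placeholder):

[code]
-- Flush the in-memory monitoring counters to the dictionary first.
exec dbms_stats.flush_database_monitoring_info;

select m.table_owner, m.table_name,
       m.inserts, m.updates, m.deletes,
       round(100 * (m.inserts + m.updates + m.deletes)
             / nullif(t.num_rows, 0), 1) as pct_changed
from   dba_tab_modifications m
       join dba_tables t
         on  t.owner = m.table_owner
         and t.table_name = m.table_name
where  m.table_owner = 'SCOTT';
[/code]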
As there was a database performance issue, we started to analyse it by running ADDM using the script below.
@?/rdbms/admin/addmrpt.sql
The result of running ADDM is below.
DETAILED ADDM REPORT FOR TASK 'TASK_64718' WITH ID 64718
Analysis Period: 10-APR-2012 from 06:00:15 to 07:00:22
Database ID/Instance: 324353546/1
Database/Instance Names: JACK/jack1
Host Name: fdsfggwe001
[code].....
There was no significant database activity to run the ADDM. The database's maintenance windows were active during 100% of the analysis period. The analysis of I/O performance is based on the default assumption that the average read time for one database block is 10000 micro-seconds. An explanation of the terminology used in this report is available when you run the report with the 'ALL' level of detail.
The current parameters set in the database are below:
statistics_level ---- string
timed_statistics --- boolean
1. Do I need to set the initialization parameter to some other value?
2. Should I run the command below before executing the ADDM report? exec dbms_workload_repository.create_snapshot();
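For reference, a sketch of the prerequisites: AWR/ADDM needs statistics_level at TYPICAL (the default) or ALL, and ADDM analyzes the interval between two snapshots, so a manual snapshot taken before and after the problem period gives a tighter analysis window:

[code]
alter system set statistics_level = typical;

-- Take one snapshot before and one after the workload of interest, then
-- run @?/rdbms/admin/addmrpt.sql against that snapshot pair.
exec dbms_workload_repository.create_snapshot();
[/code]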
I am trying to help a team get a handle on statistics collection in their data warehouses. It seems developers have created several ANALYZE TABLE jobs, but the code for these is not stored as PL/SQL in the database, which makes statistics collection problematic. Even if we collect stats the way we want, these jobs kick in and overlay the statistics we collect every day.
Is there a way to AUDIT ANALYZE TABLE? I can't find it anywhere.
Is there a way to globally turn off ANALYZE TABLE in a 9i database?
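I am not aware of a standard audit option specifically for ANALYZE, but a system DDL trigger can record, or even block, ANALYZE statements database-wide. A sketch, assuming a log table analyze_audit_log exists (a placeholder you would create first):

[code]
create or replace trigger trg_audit_analyze
before ddl on database
begin
  if ora_sysevent = 'ANALYZE' then
    insert into analyze_audit_log (event_time, username, object_owner, object_name)
    values (sysdate, ora_login_user, ora_dict_obj_owner, ora_dict_obj_name);
    -- To globally turn ANALYZE off instead, raise an error here:
    -- raise_application_error(-20001, 'ANALYZE disabled; use DBMS_STATS');
  end if;
end;
/
[/code]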
A website needs to display consolidated data from databases located in different geographical regions (India, London, and New York). The application server for the website is hosted in only one location, India. What techniques can be used for faster retrieval of data from all 3 databases?
Note: there is no need for real-time data retrieval from the different regions; however, the user should be able to view the updated data at predefined intervals.
Is it possible to have multiple LGWR processes for a single database? If it is possible, how do the multiple processes write from the redo log buffer to the online redo log file?
I am querying v$sga and getting a variable size of 211337216 bytes. When querying v$sgastat, I get:
java Pool : 16777216
Large Pool : 41943040
Shared pool : 398560392
But to my knowledge the following condition should hold, yet it does not:
[code]
Variable sga = java pool + large pool + shared pool
select pool,name,sum(bytes)
from v$sgastat
where pool in ('shared pool','java pool','large pool')
group by pool,name;
Here variable size using v$sga : 211337216 bytes
and java pool + large pool + shared pool : 211302536 bytes.
[/code]
Shouldn't they match?
All the analysis done so far on our system proves that it is clearly I/O bound, and db file sequential read is the biggest culprit.
We have even identified the index which is being affected by the sequential reads. I am thinking of creating a new tablespace with a 32K block size (currently all tablespaces are 8K) and migrating this index to the new tablespace. That way, Oracle will have to do fewer reads to get the required data.
But is there anything wrong with having just one tablespace with a different block size? Or is there anything that I have to be watchful about while doing it?
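The main mechanical requirement is that a non-default block size needs its own buffer cache, which then has to be sized and monitored by hand. A sketch with placeholder sizes, paths, and names:

[code]
-- A 32K buffer cache must exist before a 32K tablespace can be created.
alter system set db_32k_cache_size = 256M scope = both;

create tablespace idx_32k
  datafile '/u01/oradata/db1/idx_32k_01.dbf' size 2g
  blocksize 32k;

alter index my_big_index rebuild tablespace idx_32k;
[/code]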
I am using Release 11.2.0.3.0 - 64bit Production of Oracle. We have a 3-tier architecture (firewall/web/app/DB). I saw some SQL queries running for around 10 hours in my database; they are part of the application (module: JDBC THIN CLIENT). After talking to the Java guys, they asked me to kill the sessions specific to those queries. The queries come from a search screen in which a user enters a very large date range and then moves to another tab, but the queries keep running indefinitely in the database even though the user is no longer interested in the result set.
How do we avoid this? In the past, our database has suffered resource contention leading to application slowness. So I was planning to set different timeouts using Database Resource Manager consumer groups for online user requests and batch requests, depending on the app server (that is, by machine name) making the request.
I have done the setup below in my local environment to test one scenario, in which I make a database call from a different machine and it should time out after the specified duration. But it is not working as expected: the calls from the specified machine are not getting assigned to the created consumer group.
Begin
-- create the pending area
dbms_resource_manager.create_pending_area();
END;
/
BEGIN
-- Create the consumer group
[code]....
After this, when I verify the calls from machine 'LR9XY7T8', they belong to the consumer group 'OTHER_GROUPS', and the SQL query is not getting timed out within 60 seconds as intended.
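A common cause of sessions landing in OTHER_GROUPS is a missing mapping rule or a missing switch privilege for the user. A sketch of the pieces involved, assuming the plan APP_PLAN and the group ONLINE_USERS_GRP were created in the earlier (truncated) steps; all names other than the machine are placeholders:

[code]
BEGIN
  dbms_resource_manager.create_pending_area();

  -- Map sessions connecting from that client machine to the group.
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.client_machine,
    value          => 'LR9XY7T8',
    consumer_group => 'ONLINE_USERS_GRP');

  -- Cancel any call in this group that runs longer than 60 seconds.
  dbms_resource_manager.create_plan_directive(
    plan             => 'APP_PLAN',
    group_or_subplan => 'ONLINE_USERS_GRP',
    comment          => 'online users: cancel long calls',
    switch_group     => 'CANCEL_SQL',
    switch_time      => 60,
    switch_for_call  => TRUE);

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
END;
/

-- Without this grant, the session cannot be placed in the group and
-- silently falls back to OTHER_GROUPS.
BEGIN
  dbms_resource_manager_privs.grant_switch_consumer_group(
    grantee_name   => 'APP_USER',
    consumer_group => 'ONLINE_USERS_GRP',
    grant_option   => FALSE);
END;
/

-- The plan must also be the active one.
alter system set resource_manager_plan = 'APP_PLAN';
[/code]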