Performance Tuning :: Get AWR Report Data?
Aug 8, 2012. I have only select_catalog_role in the database. Can I pull complete AWR report data from the AWR views without using the DBMS_WORKLOAD_REPOSITORY package?
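For reference, a minimal sketch of pulling raw AWR data straight from the DBA_HIST_% views, which select_catalog_role lets you read; the snapshot range 100-101 is hypothetical, so substitute your own:

-- Sketch only: raw snapshot and system-statistic data from the workload repository views
SELECT s.snap_id,
       s.begin_interval_time,
       st.stat_name,
       st.value
FROM   dba_hist_snapshot s
       JOIN dba_hist_sysstat st
         ON  st.snap_id         = s.snap_id
         AND st.dbid            = s.dbid
         AND st.instance_number = s.instance_number
WHERE  s.snap_id BETWEEN 100 AND 101   -- hypothetical snapshot range
ORDER  BY s.snap_id, st.stat_name;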
My DB is Oracle 10g.
I have an AWR comparison report for two different days and want to determine the following:
1. Which day has better performance?
2. What are the top two findings in the report?
I attached the report.
Here are my answers. Please correct me if I am wrong.
1. Which day has better performance? The second day has the higher load, since the redo size is much higher.
2. What are the top two findings in the report?
a) Comparing the two days, the first day shows slightly more I/O wait on single-block reads.
b) Comparing the two days, the second day consumes more CPU.
So which of the two days is actually the better one?
I am trying to generate an AWR report for database observation, but no snapshots are listed. Below is the output of my awrrpt.sql run.
SQL> @?/rdbms/admin/awrrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance
----------- ------------ -------- ------------
1140984076 AFCCV 1 afccv
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text Defaults to 'html'
Enter value for report_type: html
Type Specified: html
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
------------ -------- ------------ ------------ ------------
* 1140984076 1 AFCCV afccv SERVICEDB1
Using 1140984076 for database Id
Using 1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent (n) days of snapshots being listed. Pressing <return> without specifying a number lists all completed snapshots.
Enter value for num_days: 3
Listing the last 3 days of Completed Snapshots
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap:
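If no snapshots are listed at this point, a quick sketch of where to look: query the repository directly for completed snapshots, and check the snapshot settings and STATISTICS_LEVEL (automatic snapshots are only taken when it is TYPICAL or ALL):

-- Any completed snapshots for this database?
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
WHERE  dbid = 1140984076
ORDER  BY snap_id;

-- Snapshot interval and retention, plus the statistics level
SELECT snap_interval, retention FROM dba_hist_wr_control;
SHOW PARAMETER statistics_level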
With Statspack we had:
elapsed time = time spent on waits + time the CPU was used
total time during the snapshot window = elapsed time + (possibly) time spent waiting for CPU
Is it possible to draw a similar equation from AWR? I can see that the AWR report has the following elements:
1) End Snap time - Begin Snap time
2) DB time, as shown at the top of the AWR report
3) DB CPU, in "Top 5 Timed Foreground Events" (I assume this is 'CPU used by this session' in Statspack)
4) The total of all statistics in "Time Model Statistics"
5) BUSY_TIME + IDLE_TIME in "Operating System Statistics"
Is the relevant interval simply the time between the two snapshots, or something else? Also, by which number of seconds should 'DB Time(s)' per second and 'DB CPU(s)' per second in the Load Profile be multiplied to get the total DB time and CPU time?
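As a sketch of where those Load Profile figures come from: 'DB Time(s)' and 'DB CPU(s)' per second are the deltas of the 'DB time' and 'DB CPU' time-model statistics between the two snapshots, divided by the elapsed seconds of the snapshot interval, so multiplying them back by that interval (in seconds) recovers the totals. The snapshot IDs 100 and 101 below are hypothetical:

-- Cumulative DB time and DB CPU (microseconds) at each snapshot;
-- the delta between the two snapshots, divided by the elapsed seconds
-- of the interval, gives the per-second figures in the Load Profile.
SELECT snap_id, stat_name, value / 1e6 AS seconds
FROM   dba_hist_sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU')
AND    snap_id IN (100, 101)            -- hypothetical begin/end snap IDs
ORDER  BY stat_name, snap_id;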
I would like to generate an HTML AWR report and save it to my local machine.
After running $ORACLE_HOME/rdbms/admin/awrrpt.sql in SQL*Plus and specifying HTML at the prompt (Enter value for report_type: HTML), I get a stream of HTML code at my SQL*Plus prompt. I want to save the HTML report on my local machine so that double-clicking it opens it in a browser.
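awrrpt.sql should already spool the report to the file name given at the 'Enter value for report_name' prompt, in the directory SQL*Plus was started from. If only raw HTML is scrolling past, a sketch of spooling the report yourself via DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML (the snapshot IDs 100/101 and the spool file name are hypothetical):

SET PAGESIZE 0 LINESIZE 8000 TRIMSPOOL ON LONG 1000000 FEEDBACK OFF HEADING OFF
SPOOL awr_report.html
SELECT * FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(1140984076, 1, 100, 101));
SPOOL OFF

The spooled awr_report.html can then be copied to the local machine and opened in a browser by double-clicking it.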
I was trying to generate an AWR report, but most sections of the generated report contained no data. Later I heard that the AWR report is not fully supported in 11g. Is that true?
I want to take an AWR report for a specific date, e.g. 25-05-2010.
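A sketch for locating the snapshot IDs that bracket that date, whose begin/end values can then be fed to awrrpt.sql; note this only works if snapshots from that date are still inside the AWR retention window:

SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
WHERE  begin_interval_time >= TO_DATE('25-05-2010', 'DD-MM-YYYY')
AND    begin_interval_time <  TO_DATE('26-05-2010', 'DD-MM-YYYY')
ORDER  BY snap_id;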
I understand that when data is read from disk, I/O is done, and when computations are done, CPU is used. So where does the following equation fit in?
DB Time = sum of database CPU time + waits
Is I/O considered part of CPU time?
Does this equation change with SAN or OS caching?
One of my procedures has recently been taking a very long time to execute. I'm attaching the AWR report, but I don't know how to read and interpret it.
View 4 Replies View RelatedIn AWR report, in order to find the disk i/o, Should I see the avg read(ms) under Tablespace I/O and Filesystem I/O columns?
View 1 Replies View RelatedA coworker of mine asked if there was any documentation from Oracle that listed all of the parts of the AWR report and what each meant. I was taken back because I don't think there is. There are third party books that talk about AWR reports and their predecessor Statspack reports.
Oracle has some notes on their support site about reading an AWR or Statspack report. All I found in the official documentation was some basic information about how to run an AWR report and an overview of what it was. It would be nice to have some sort of documentation that lists out each section and explains the units and purpose.
Is it possible to generate an AWR report for a duration of 5 minutes? As we know, snapshots are generated every hour, as specified by the snapshot interval parameter.
By changing the interval to 2 minutes, what would the impact on the database be?
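The usual way to get a 5-minute window is to take manual snapshots around it rather than shrinking the automatic interval (which has a documented lower bound, around 10 minutes, and a very short interval would in any case grow the repository and add overhead). A sketch:

-- Take a snapshot, run the 5-minute workload, take another snapshot,
-- then run awrrpt.sql between the two resulting snap IDs.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
-- ... ~5 minutes of workload ...
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;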
For now I'm tired of capturing the report piece by piece as images, and I realize I've not seen any application or utility that makes this work easier.
In an ASH report there is a section that goes like this:
SQL ID | Plan Hash | Sampled # of Executions | % Activity | Event | % Event | Top Row Source
fdy93qpr1227 | 1567 | 7.58 | direct path read | 3.65 | TABLE ACCESS - FULL
Does this suggest that this SQL was executed 1,567 times? Is that correct?
I ran an AWR report. The database looks fine, but a data load that used to load 1 million rows an hour is now doing 500K per hour.
Wait Class       Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  %DB time
DB CPU                                              224                    80.70
Other            2,668           0                   28             10      9.99
System I/O       4,753           0                    9              2      3.23
Administrative       1           0                    6           5543      2.00
Commit             357           0                    4             11      1.46
[code]....
The network wait value is 630,601. What does this mean? Is there anything I should look at? When the load ran at 1 million rows per hour, the value was 4,563,000.
Top 5 Timed Foreground Events
Event                         Waits  Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                                   224                     80.70
unspecified wait event        2,666       28             10       9.99  Other
control file sequential read 4,753        9              2       3.23  System I/O
switch logfile command            1        6           5543       2.00  Administrative
log file sync                   357        4             11       1.46  Commit
How can I differentiate between system-issued SQL and user-issued SQL in a TKPROF report?
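One common approach, sketched below with hypothetical session identifiers and trace file name, is to format the trace with sys=no so that recursive SQL executed as user SYS is excluded and what remains is essentially user-issued SQL:

-- Trace the session of interest (sid/serial# are hypothetical)
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 135, serial_num => 4211);
-- ... let the workload run ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 135, serial_num => 4211);

-- At the OS prompt, format the trace excluding SQL run as user SYS:
-- $ tkprof mydb_ora_12345.trc user_sql.txt sys=no sort=exeela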
We are using 11.2.0.3.0 on Solaris 10 and are facing slow performance. The following are the wait events in the AWR report. Also, is there a specific document on analyzing an AWR report and pinpointing the performance bottleneck?
Foreground Wait Events
**********************
Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
-------------------------- ------------ ----- ---------- ------- -------- ------
direct path read 308,729 0 21,191 69 58.0 39.5
db file sequential read 208,754 0 3,742 18 39.2 7.0
cursor: pin S 19,541,899 0 2,561 0 3,668.5 4.8
[code]....
What are the principal things to look at when the same query gives different performance results? I have two different databases: the plan and the data are the same, but the performance results are very different.
DB used: Oracle 10g.
A table X with columns NUM and INST:
NUM ----- INST
1234 ----- 23,22,21,78
2235 ----- 20,7,2,1
1298 ----- 23,22,21,65,98
9087 ----- 20,7,2,1
Based on the requirement:
1) Split the values in the "INST" column: suppose 23.
2) Find all values from the "NUM" column for that split value, i.e. 23.
E.g.:
For INST = 23,
its corresponding "NUM" values are 1234, 1298.
3) Save these values into a table Y with columns INST and NUM:
INST    NUM
23      1234,1298
1) I have a thousand records in table X, and for all of those records I need to split and save the data into table Y. Hence, I need to do this task with the best possible performance.
2) After this, whenever new data comes into table X, the above 'split & save' operation should be invoked automatically and append the corresponding data wherever applicable.
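A minimal sketch of the 'split & save' step, assuming 11gR2 or later for REGEXP_COUNT and LISTAGG (on 10g a LENGTH/REPLACE counting trick and a custom string aggregate would be needed); the row generator also assumes no INST list holds more than 100 entries:

INSERT INTO y (inst, num)
SELECT inst,
       LISTAGG(num, ',') WITHIN GROUP (ORDER BY num) AS num
FROM  (SELECT x.num,
              TO_NUMBER(REGEXP_SUBSTR(x.inst, '[^,]+', 1, g.pos)) AS inst
       FROM   x
       JOIN  (SELECT LEVEL AS pos FROM dual CONNECT BY LEVEL <= 100) g
         ON   g.pos <= REGEXP_COUNT(x.inst, ',') + 1)
GROUP BY inst;

For the automatic part, an AFTER INSERT trigger on table X, or a scheduled job that processes only rows added since its last run, could apply the same logic to the new rows.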
Sometimes when I re-run a query a few times, it becomes much faster after the first run. This is a problem when I'm trying to optimize a query. Is there some sort of cache? Can it be disabled?
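Yes: repeat runs benefit from the buffer cache (cached data blocks) and the shared pool (cached parsed cursors). On a test system only, a sketch of flushing both between timing runs; doing this on production would hurt everyone else:

-- Test systems only: clear cached data blocks and parsed SQL respectively
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;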
CREATE OR REPLACE procedure fast_proc (p_rows out number)
is
TYPE object_id_tab IS TABLE OF all_objects.object_name%TYPE INDEX BY BINARY_INTEGER;
lt_object_id object_id_tab;
CURSOR c IS
[Code]....
Warning: Procedure created with compilation errors.
Errors for PROCEDURE FAST_PROC:
LINE/COL ERROR
-------- ---------------------------------------------------------
13/7 PL/SQL: SQL Statement ignored
13/22 PL/SQL: ORA-03001: unimplemented feature
Why am I not able to do an INSERT, while I am able to do an UPDATE/DELETE? What is this built-in restriction?
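The truncated code makes it hard to say exactly which statement raises ORA-03001; FORALL restrictions on older releases (for example, referencing record fields inside FORALL before 11g) are a frequent culprit. For comparison, a sketch of a fast_proc that bulk collects into a scalar collection and bulk inserts whole elements, which avoids those restrictions; the target table fast_objects is hypothetical:

CREATE OR REPLACE PROCEDURE fast_proc (p_rows OUT NUMBER)
IS
  TYPE object_name_tab IS TABLE OF all_objects.object_name%TYPE
       INDEX BY BINARY_INTEGER;
  lt_object_name object_name_tab;
BEGIN
  -- fetch all names in one round trip
  SELECT object_name
  BULK COLLECT INTO lt_object_name
  FROM   all_objects;

  -- insert them back in one bulk operation; inserting a whole element,
  -- not a record field, avoids the pre-11g FORALL restriction
  FORALL i IN 1 .. lt_object_name.COUNT
    INSERT INTO fast_objects (object_name)   -- hypothetical target table
    VALUES (lt_object_name(i));

  p_rows := lt_object_name.COUNT;
END fast_proc;
/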
We have a few tables in our production database that have grown huge and will keep growing, so as part of the corrective measures we have jotted down the following 3 methods to manage the size of those tables:
1> Partition the tables, export the identified partitions, and then truncate those partitions.
2> Create history tables and move the not-so-current data from the original tables into them.
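A sketch of method 1 in DDL form, assuming a (hypothetical) range partition P_2011 on a table BIG_TXN:

-- Export just that partition with Data Pump first
-- $ expdp system/<pwd> directory=DATA_PUMP_DIR dumpfile=big_txn_p_2011.dmp tables=BIG_TXN:P_2011

-- Then reclaim the space; UPDATE GLOBAL INDEXES keeps any global indexes usable
ALTER TABLE big_txn TRUNCATE PARTITION p_2011 UPDATE GLOBAL INDEXES;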
I'm extracting/retrieving data from the Oracle database using a Java application and it's a bit slow. However, when I retrieve from SQL Server it's faster than Oracle.
We have data migration scripts written for Oracle. The data is not huge, but we observe that the migration is fast in the development labs and 5x slower at the production site.
The development Oracle setup is on Windows and the production setup on Solaris. I have attached the AWR generated for a period in which the migration ran for 3 hours and was stopped because of the slow performance.
Here is my initial analysis.
1) The first timed event is DB CPU. Hence I feel the migration scripts could be modified to run in parallel so that they finish faster. But then the question arises: if this is the issue, why does it run faster in the development environment?
2) I tried increasing the following:
a. large_pool_size set to 512M
b. sga_max_size set to 8G
c. sga_target set to 8G
from 0, 4G and 4G respectively.
I have attached the AWR and below are the etc/system contents for solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout
* set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The awr report is renamed to .txt from .html to be able to upload the file.
We are on Oracle 10.2.0.4 on Solaris 10. There is a table in my production DB that has 872,944 rows. Most of its data is now unnecessary; based on a date column we need to retain just the last month's data and delete the rest. After that the table will have just 3,000 rows.
However, as the table was huge before the delete (872K rows), does deleting the data release its Oracle blocks and reduce the size of the table? If not, should I rebuild the table online (online redefinition) so that the query that does a full scan on this table runs faster?
I checked with an example table that simply deleting data does not release the Oracle blocks: the block count in USER_TABLES stays the same and the cost of a full table scan remains unchanged. We have a query that does a full table scan on this table, so I am thinking that after the delete I should do an online table redefinition. Is that the right decision?
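Deleting rows frees space inside the blocks but does not lower the segment's high-water mark, so a full scan still reads the same number of blocks. A sketch of two lighter alternatives to full online redefinition (table and index names are hypothetical); SHRINK SPACE needs the table to be in an ASSM tablespace, while MOVE briefly locks the table and leaves its indexes unusable until rebuilt:

-- Option 1: shrink in place (ASSM tablespace required)
ALTER TABLE big_hist ENABLE ROW MOVEMENT;
ALTER TABLE big_hist SHRINK SPACE CASCADE;

-- Option 2: rebuild the segment below a new high-water mark
ALTER TABLE big_hist MOVE;
ALTER INDEX big_hist_dt_idx REBUILD;   -- repeat for each index on the table

-- Refresh optimizer statistics either way
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIG_HIST');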
I created a view on the production server which takes almost 10 to 12 minutes to return its data. This view joins 3 or 4 tables, all of which have indexes on their primary key and unique columns. Which kind of index would be better for fast retrieval of the data?
I have a table which contains 821,177 rows in total. Now I am trying to delete around 484,000 rows from this table using just one filter, i.e. my query is something like below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 484,000 rows of data, but it is taking a lot of time: to be precise, almost 25 hours. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532

--------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | DELETE STATEMENT  |          |   499 | 20459 |    39   (0)| 00:00:01 |
|   1 | DELETE            | RESOURCE |       |       |            |          |
[code]....
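When more than half the table is being removed, a common alternative is to keep the wanted rows instead of deleting the unwanted ones. A sketch using the names from the post (the intermediate table name is hypothetical, and indexes, constraints and grants must be recreated on it before the swap):

-- Keep only the surviving rows; NULL created_by rows are excluded by <>,
-- so add OR created_by IS NULL if they must be kept
CREATE TABLE resource_keep NOLOGGING PARALLEL 4 AS
  SELECT * FROM resource WHERE created_by <> 'MIGN';

DROP TABLE resource;
RENAME resource_keep TO resource;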
We are copying our transaction tables' data into another database for our reporting applications (a refresh happens every day at midnight).
The transaction database has some 30 tables. The existing process follows the steps below and takes 2 hours to complete:
1) Truncate the data in the reporting database (or schema).
2) Direct-path insert into the reporting database (or schema) as SELECT * FROM the transaction tables.
3) Rebuild the indexes and enable the constraints.
Note: each table's data varies from 3 million to 5 million rows. Dump/export/import is not advised by the client.
I want to cut the time down below 2 hours. Instead of the above method, could we add a field to each table recording the time of each record's insert/update, then pick up only the modified records and copy those into the reporting DB?
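A sketch of that incremental approach for one table; the table, column and index names are hypothetical, and every source table would need the same treatment:

-- One-time changes on the transaction table
ALTER TABLE orders ADD (last_modified DATE DEFAULT SYSDATE);
CREATE INDEX orders_lastmod_idx ON orders (last_modified);
-- keep last_modified current from the application or a BEFORE INSERT/UPDATE trigger

-- Nightly refresh: apply only rows changed since the previous run
MERGE INTO rpt.orders r
USING (SELECT * FROM orders
       WHERE  last_modified >= TRUNC(SYSDATE) - 1) s
ON    (r.order_id = s.order_id)
WHEN MATCHED THEN UPDATE
  SET r.status = s.status, r.last_modified = s.last_modified
WHEN NOT MATCHED THEN INSERT (order_id, status, last_modified)
  VALUES (s.order_id, s.status, s.last_modified);

Deletes on the source are not captured this way; if they matter, materialized views with materialized view logs (fast refresh) are another option.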
I am inserting data for the years 2012 and 2013 using a procedure into partitioned tables; each partition holds crores (tens of millions) of rows, and it is taking a very long time, even months. Is there any other way to insert the data from our query faster?
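A sketch of a direct-path, parallel insert-as-select, which bypasses the buffer cache and, with a NOLOGGING target, can also cut redo; the table names and degree of parallelism are hypothetical:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 4) */ INTO sales_part t
SELECT /*+ PARALLEL(s, 4) */ *
FROM   sales_staging s
WHERE  sale_date >= DATE '2012-01-01'
AND    sale_date <  DATE '2014-01-01';

COMMIT;   -- required before the session can query the table again after APPEND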
I have created a materialized view and also a normal view; both are built on the same 3 tables. When new records are inserted they show up in the normal view, but when I select from the materialized view I cannot see the updated data.
Here is the materialized view I created:
CREATE MATERIALIZED VIEW pct_sales_materialized
BUILD IMMEDIATE REFRESH ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT A.DEP_NAME,B.EMP_ID,C.EMP_NAME
FROM department_head A,department_child B,emp_detail C
WHERE A.DEP_ID = B.DEP_ID
AND B.EMP_ID = C.EMP_ID
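That is expected with REFRESH ON DEMAND: the materialized view holds data as of its last refresh and only changes when a refresh is requested. A sketch of the two usual options (the materialized view log statements are assumptions based on the tables in the post):

-- Option 1: refresh on demand whenever current data is needed ('C' = complete refresh)
EXEC DBMS_MVIEW.REFRESH('PCT_SALES_MATERIALIZED', 'C');

-- Option 2: make it self-maintaining with fast refresh on commit, which needs
-- a materialized view log on each base table and the base-table ROWIDs selected in the MV
CREATE MATERIALIZED VIEW LOG ON department_head  WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON department_child WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON emp_detail       WITH ROWID;
-- ...then recreate the MV with REFRESH FAST ON COMMIT and A.ROWID, B.ROWID, C.ROWID in its SELECT.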