CPU Usage Is OK On The Server - But Database Is Slow With High CPU Wait?
Jan 26, 2013
I have a RAC database with 48 cores on Solaris. I checked dbconsole while application performance was very slow and everyone was complaining, and I saw that the main wait is CPU - also in the AWR report. However, when I check the server CPU I see about 80% idle! So how can I make Oracle use more CPU power instead of waiting for it? I don't think parallelism is an option here because I can't change the application code.
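In dbconsole, "CPU wait" typically means sessions queued for CPU (run queue or Resource Manager throttling) rather than actually burning it, and throttling can coexist with an idle host. A minimal first check, assuming nothing beyond the standard v$ views, is to look for Resource Manager quantum waits and to compare Oracle's view of host CPU with what the OS reports:

select event, total_waits, time_waited
from v$system_event
where event like 'resmgr%'
order by time_waited desc;

select stat_name, value
from v$osstat
where stat_name in ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME');

If the resmgr waits are large, the active resource plan (parameter resource_manager_plan) would be the next thing to inspect.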
I am running Oracle Linux 5.7 on VMware Workstation 9 and am trying to install Grid Infrastructure 11gR2. During installation my system freezes and I see out-of-memory errors; the Java installer is actually eating up too much memory, over 1 GB. Is there a way I can limit the memory usage of the Java process to a certain extent?
We have an Oracle 10g database with RAC and Data Guard. When we look at the AWR report, the wait time shown by Oracle for this database is very high.
Service Time: 15.36%, Wait Time: 84.64%
This would imply Oracle is waiting for resources 85% of the time and only processing SQL queries during 15% of its non-idle time. However, when we check the OS (RHEL), iowait is only about 10% and the CPU is 80% idle. This means that processing horsepower is available.
As such, the results from the OS and from the Oracle database (AWR report) seem contradictory: the OS says we have CPU/IO capacity, while Oracle says we don't.
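One thing worth remembering is that the AWR ratio lumps every non-idle wait class together, and most wait classes (locks, cluster traffic, network) consume neither CPU nor iowait, so both views can be right at once. A hedged way to see where the 85% actually goes is to break it down by wait class:

select wait_class, total_waits, round(time_waited / 100) as seconds_waited
from v$system_wait_class
where wait_class <> 'Idle'
order by time_waited desc;

(time_waited in v$system_wait_class is in centiseconds, hence the division.)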
Is there any way to tune the following query, which is using a lot of CPU?

select description, time_stamp, user_id
from bhi_tracking
where description like 'Multilateral:%'

The explain plan for this query is:
Bhi_tracking is used for reporting purposes and contains millions of records. Generally we keep one year of data in this table and delete the rest. Can I drop the table after taking an export and then import it back, or can I truncate the table and then re-insert the rows, to enhance performance?
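Since the LIKE pattern has a literal prefix, a plain b-tree index on description can support it, assuming the predicate is selective enough for the optimizer to use it. A minimal sketch (the index name is made up):

create index bhi_tracking_desc_ix on bhi_tracking (description);

If the real problem is a bloated segment after the yearly deletes, rebuilding the segment resets the high water mark without the export/import round trip, e.g.:

alter table bhi_tracking move;
alter index bhi_tracking_desc_ix rebuild;

(A move invalidates the table's indexes, so the rebuild step is required.)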
We are using the 11g AMM feature with MEMORY_TARGET set to 96 GB, and the total RAM on the server is 128 GB. Now top and free show only 200 MB of memory free on the system.
There are two processes, dbw0 and dbw1, which consume the most memory: about 30 GB per dbw.
Why is the dbw process taking up so much memory when there is not much load on the database?
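Note that top generally charges the shared SGA pages a process has touched to that process's resident size, and the database writers touch most of the buffer cache, so the 30 GB per dbw is almost certainly shared memory counted twice, not private allocations. A hedged way to see what AMM has actually distributed:

select component,
       round(current_size / 1024 / 1024) as current_mb,
       round(min_size / 1024 / 1024) as min_mb,
       round(max_size / 1024 / 1024) as max_mb
from v$memory_dynamic_components
where current_size > 0
order by current_size desc;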
I am trying to look at wait events for a long-running query in TOAD. I start the query in one instance of TOAD and open the Session Browser in another instance. But I am surprised to find that under "Total Waits" on the RHS, SQL*Net message from client takes the longest time and is already at 178577 units, whereas I have just started the query.
In the Current Waits, however, it shows db file scattered read correctly at some seconds.
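SQL*Net message from client is an idle event: it accumulates for the whole life of the session while it sits waiting for the next call, so the cumulative total has little to do with the query just started. A hedged sketch of the same cumulative view with idle events filtered out (:sid is the session being watched):

select event, total_waits, time_waited
from v$session_event
where sid = :sid
and wait_class <> 'Idle'
order by time_waited desc;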
I have a setup of a two-node (prod-db1, prod-db2) clustered database, 11gR2 on Windows 2008 R2 Server. Everything is working fine in this setup.
My question is: is there a way to make Enterprise Manager Database Control run and be available on both nodes independently? What I see is that even on node 2 (prod-db2) the EM DB Control URL is https://prod-db1:1158/em - which means the agent is running on node 1 (prod-db1) only.
So my question is how to make EM DB Control also run separately on prod-db2. My idea is to make EM DB Control highly available (in case the prod-db1 machine goes down).
We are experiencing TX row lock waits lasting for hours. There is no blocking session, and it seems that the application hangs. What is funny is that when we gather_stats on the tables, those TX row lock waits are released.
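"No blocking session" is worth re-verifying from the waiter's side, and on a cluster the blocker can sit on another instance, which single-instance v$ views will miss. A hedged check using the global views:

select inst_id, sid, serial#, event,
       blocking_instance, blocking_session, seconds_in_wait
from gv$session
where event like 'enq: TX%'
order by seconds_in_wait desc;

If blocking_session really is empty for long-held TX waits, an uncommitted transaction from a dead or idle session is a common suspect (v$transaction joined to v$session can show transactions with old start times).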
We are using a piece of software, a test tool, to verify database posting speed from the server to client systems. On Windows 2008 R2 the database posting speed is very slow compared to Windows 2003 Server.
The server configuration is the same for both servers (RAID 5, 4 GB RAM). How can we improve write performance in Oracle?
When I was analyzing a high CPU utilization issue, I saw that most of the top PIDs were INACTIVE in the database, yet each was using more than 4% CPU. How can a session utilize CPU without doing any work in the database?
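INACTIVE in v$session only means no SQL call is in progress at that instant; the OS process can still spin between calls (for example on network polling or a busy client loop). A hedged way to map an OS PID back to its database session and see what it last did ('12345' is a placeholder PID):

select s.sid, s.serial#, s.username, s.status,
       s.last_call_et, s.program, s.prev_sql_id
from v$session s
join v$process p on p.addr = s.paddr
where p.spid = '12345';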
Redo is being generated at a very high rate. How do I find out the reason? The database is kept in a 2-node cluster. I checked the alert log and log writer trace files, and have pasted the content below:
--alert log trace from node 1 (node 2 also has the same type of message). The archive destination disk group, TXCOM_BACKUP_01, has enough space (80 GB):
Mon Jan 7 00:49:10 2013
Thread 1 advanced to log sequence 448546 (LGWR switch)
Current log# 1 seq# 448546 mem# 0: +TXCOM_DATA_01/txcom/onlinelog/group_1.274.785770579
Current log# 1 seq# 448546 mem# 1: +TXCOM_DATA_01/txcom/onlinelog/group_1.302.802265189
Mon Jan 7 00:49:10 2013
[code]...
In the alert log, I can see that the archive destination disk group (TXCOM_BACKUP_01) is getting DISMOUNTED and then MOUNTED again during every archive file generation:
Mon Jan 7 00:49:20 2013
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
The archive destination parameter is not configured on either node. It should read the disk group name (+TXCOM_BACKUP_01) and a corresponding size limit. Should I configure this?
SQL> show parameter db_recovery
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0
[code]...
Should I bring the database to the MOUNT stage and set log_archive_max_processes to a higher count? The current value is 2 (the default).
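To locate where the redo volume comes from, one hedged approach is to rank current sessions by the 'redo size' statistic (global views, since this is a two-node cluster):

select s.inst_id, s.sid, s.username, s.program, st.value as redo_bytes
from gv$sesstat st
join gv$statname sn
  on sn.statistic# = st.statistic#
 and sn.inst_id = st.inst_id
join gv$session s
  on s.sid = st.sid
 and s.inst_id = st.inst_id
where sn.name = 'redo size'
order by st.value desc;

As a side note, log_archive_max_processes is normally a dynamic parameter (ALTER SYSTEM), so raising it should not require taking the database down to MOUNT.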
I have deleted a lot of records in a table. Will Oracle be able to insert into the empty blocks generated by the deletion of records, without bringing the high water mark down?
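Space freed by deletes below the high water mark generally does become reusable by subsequent inserts (via freelists, or the ASSM bitmaps). A hedged way to watch it, assuming an ASSM tablespace and made-up owner/table names, is DBMS_SPACE.SPACE_USAGE:

set serveroutput on
declare
  l_unf number;  l_unf_b number;
  l_fs1 number;  l_fs1_b number;
  l_fs2 number;  l_fs2_b number;
  l_fs3 number;  l_fs3_b number;
  l_fs4 number;  l_fs4_b number;
  l_full number; l_full_b number;
begin
  dbms_space.space_usage('SCOTT', 'MY_TABLE', 'TABLE',
    l_unf, l_unf_b, l_fs1, l_fs1_b, l_fs2, l_fs2_b,
    l_fs3, l_fs3_b, l_fs4, l_fs4_b, l_full, l_full_b);
  dbms_output.put_line('mostly empty (75-100% free) blocks: ' || l_fs4);
  dbms_output.put_line('full blocks: ' || l_full);
end;
/

Rising fs4/fs3 counts after the delete, then falling again as inserts arrive, would confirm the space is being reused without any high-water-mark maintenance.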
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, when doing "Payments by the customer" it now takes a long time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
In OEM 10g I can click on the database size on the database page. When I do this, the OEM redirects me to the Database Space Usage report.
I have 3 questions about reports in OEM 10g:
1. Can I create drill-down reports in OEM 10g? I want to create a report that shows the space usage per host and drills down to the databases (targets) that are on that host (a repository query sketch for this follows below).
2. Can I create a link to the Database Space Usage report? I would like to redirect from the report in question 1 to that report for the selected target, so that I can see the space usage of the database that was selected.
3. Can I create a copy of the Database Space Usage summary report that the OEM shows? I can't find it on the Reports tab of the OEM.
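Custom OEM reports can be built on the repository's MGMT$ views. A rough sketch of the per-host rollup from question 1; I am quoting the view and column names from memory, so they should be checked against the repository views documentation for your release:

select t.host_name,
       ts.target_name,
       round(sum(ts.tablespace_size) / 1024 / 1024 / 1024, 1) as allocated_gb,
       round(sum(ts.tablespace_used_size) / 1024 / 1024 / 1024, 1) as used_gb
from mgmt$db_tablespaces ts
join mgmt$target t
  on t.target_name = ts.target_name
 and t.target_type = ts.target_type
group by t.host_name, ts.target_name
order by t.host_name, used_gb desc;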
I have 15 million records as CSV and want to load them through SQL*Loader. Is SQL*Loader the right option to load a high volume of data? I have loaded 2.5 lakh (250,000) records, which took 4 minutes.
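SQL*Loader handles this volume comfortably, especially with direct-path loads; an alternative that stays entirely in SQL is an external table plus a direct-path insert. A minimal sketch with made-up directory, file, table, and column names:

create directory load_dir as '/u01/load';

create table stage_csv (
  id          number,
  description varchar2(200)
)
organization external (
  type oracle_loader
  default directory load_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('data.csv')
);

insert /*+ append */ into target_table
select id, description from stage_csv;
commit;

The append hint requests a direct-path insert, which bypasses the buffer cache and is usually the biggest single lever for bulk load speed.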
I have one tablespace called U01. This tablespace contains 31 data files. Due to the high water mark I was unable to resize most of the datafiles. Since my database runs an on-air application, they will not give me downtime to move the tables. Is there any way to fix the high water mark without getting a downtime window? Almost 700+ GB of space is unused, and I need to reuse it ASAP because we are running out of space in the ASM diskgroup.
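A datafile can only be shrunk down to its last allocated extent, so the first step is to see which segments sit at the tail of each file. A hedged sketch (file_id 42 is a placeholder):

select *
from (
  select owner, segment_name, segment_type,
         block_id + blocks - 1 as last_block
  from dba_extents
  where file_id = 42
  order by block_id desc
)
where rownum <= 10;

Relocating just those tail segments (for example with DBMS_REDEFINITION if they must stay online) frees the end of the file so that ALTER DATABASE DATAFILE ... RESIZE can reclaim the space.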
I have a nightly import (about 20 tables) and it takes up to 5 hours. We have one table of about 800,000 rows and the rest are between 1,000 and 200,000 rows. This is very slow, and when I monitor the import I see a very long wait on SQL*Net message from client,
even though I run the import on the database server itself. If I check the current statement, I see it moving from one to the next; for instance I have:
SELECT /*+ all_rows ordered */ "A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001329497'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (bonus_nat) <= 31)

then

SELECT /*+ all_rows ordered */ "A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001684584'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (outcome_cd) <= 1)
and so on, and it takes hours. The DB is on Windows 2003 running Oracle RDBMS 9.2.0.7, while the import screen shows 185,000 rows imported. I also see a lot of consistent gets for this session rising at that time. Would it be better to export/import without statistics?
I should also mention that the dump file comes from a Linux-hosted database, though I don't think that makes a difference for an exp/imp. It's a PeopleSoft database: there are a lot of tables, more than 15,000, and if I take the table mentioned above and want to check its constraints, it takes ages before TOAD can display them. I have seen that we have an incredible number of constraints on those tables, which might be the reason.
I just wonder if the system catalog needs to be tuned? /* Update */ Not sure why, but now the huge number of waits shows up as "library cache lock".
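Slow dictionary queries (such as TOAD taking ages to list constraints) can be a symptom of stale or missing optimizer statistics on the data dictionary. On 9.2 one hedged option is gathering statistics on the SYS schema (10g and later have the dedicated DBMS_STATS.GATHER_DICTIONARY_STATS); this changes plans for recursive SQL, so it should be tried on a test instance first:

begin
  dbms_stats.gather_schema_stats(
    ownname          => 'SYS',
    estimate_percent => dbms_stats.auto_sample_size,
    cascade          => true);
end;
/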
We have Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production on the AIX platform. We've noticed that CPU usage on the server grows slowly but steadily after an instance restart: just after the restart the maximum CPU usage is about 25-30%, and a month later, under a comparable transactional load, it's about 60-65%. In top I see a lot of Oracle sessions, each using about 1% of CPU, so I cannot find a single "CPU eater".
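When no single session stands out, aggregating CPU by statement rather than by session can expose a creeping pattern (for example a plan that degrades as data grows). A hedged sketch over the shared pool:

select *
from (
  select sql_id,
         executions,
         round(cpu_time / 1e6) as cpu_seconds,
         round(elapsed_time / 1e6) as elapsed_seconds,
         substr(sql_text, 1, 60) as sql_text
  from v$sqlstats
  order by cpu_time desc
)
where rownum <= 15;

Comparing the same list shortly after a restart and a few weeks later should show whether the extra CPU belongs to particular statements or is spread evenly.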
Server: Windows 2008 R2 x64
Oracle Database 11g Release 11.2.0.1.0 Standard Edition
I recently applied Oracle Database Server Version 11.2.0.1 Patch 16. I have several databases running on this box with the same application. Since then I've had lots of customers complaining about slow systems, even though the server has plenty of RAM available. I also have several other servers, all on the same version and same patch, running the same application with no issues.
Looking at v$session, there are often lots of active sessions that have a sql_address of "00". I've never seen this before, and I regularly look at v$session; as you can see below, these have been running for a while. The sessions do drop off, but I am at a loss as to what is happening.
select username, last_call_et, sid, serial#, user#, status, sql_address, sql_hash_value
from v$session
where username is not null
and status = 'ACTIVE'
order by last_call_et desc, sid
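A zero sql_address just means the session has no current cursor at the sampling instant; what it last ran and what it is waiting on are usually more telling. A hedged variant of the same query:

select username, sid, serial#, status, last_call_et,
       event, sql_id, prev_sql_id
from v$session
where username is not null
and status = 'ACTIVE'
order by last_call_et desc, sid;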
My problem is that only on some Saturdays my Oracle server's load average goes high (more than 300).
-- Oracle version 11.2.0.1.0
-- runs on Sun Solaris 10
-- Sun Fire V440
-- Sun StorEdge 3315 connected to the server
The same setup works fine with higher volumes without any problem, but on Saturdays - and not all Saturdays, only some specific ones - the load average goes high.
At that time nothing is being processed from the application side, yet CPU utilisation goes up to 95%.
I am sending the necessary information as follows.
vmstat 6 10
 kthr     memory             page                disk        faults        cpu
 r b w   swap    free    re  mf  pi  po fr de sr s0 s1 s3 s4  in   sy    cs  us sy id
 3   0 0 28660024 6513408 285 212 2044 2  2  0  0  0 17  0 42 830  3469  1380 22  5 74
 125 0 0 29027480 6156256 2   6   34   0  0  0  0  0  5  0 10 791 53770 30278 83 17  0
 125 0 0 29027680 6157496 29  140 21   1  1  0  0  0  5  0  9 786 52756 30309 83 17  0
 116 0 0 29031600 6159896 24  125 0    0  0  0  0  7  3  0  5 819 54081 31069 83 17  0
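The vmstat rows show the run queue jumping from 3 to well over 100 with user CPU pegged at 83% and idle at 0, which points at a burst of CPU-bound sessions rather than I/O. If the Diagnostics Pack is licensed, a hedged ASH sketch for one bad Saturday window (the timestamps are placeholders) can show what those sessions were doing:

select nvl(event, 'ON CPU') as activity, count(*) as samples
from dba_hist_active_sess_history
where sample_time between timestamp '2013-01-05 09:00:00'
                      and timestamp '2013-01-05 10:00:00'
group by nvl(event, 'ON CPU')
order by samples desc;

Given the day-of-week pattern, scheduled weekend work (dba_scheduler_jobs, OS cron) is also worth ruling out.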
I just did a 112 GB file migration of production data using oracle_datapump, so I know this works in principle. When I tried it on my test instance, I am seeing behaviour like this:
Why could it be taking 1800 seconds to select one record from a not very big table? File corruption? Disk fragmentation? Oracle instance configuration?
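A hedged way to find where those 1800 seconds actually go is to trace the session with wait events enabled and then read the raw trace (or format it with tkprof); the dominant wait line usually separates the corruption, fragmentation, and configuration hypotheses immediately:

alter session set events '10046 trace name context forever, level 8';

-- run the slow select here

alter session set events '10046 trace name context off';

Level 8 includes wait events in the trace file, which ends up in the directory pointed to by user_dump_dest (or the 11g diag trace directory).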