Performance Tuning :: Can Have Multiple LGWR Processes For Single Database

Jul 2, 2010

Is it possible to have multiple LGWR processes for a single database? If it is possible, how do the multiple processes write from the redo log buffer to the online redo log files?
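For context, a hedged way to see which log-writer background processes actually exist on a given instance; on releases that support multiple log writers the LGnn worker processes would show up alongside LGWR, otherwise only LGWR appears:

[code]
-- List the log writer and any log writer worker processes that are running.
SELECT name, description
FROM   v$bgprocess
WHERE  name LIKE 'LG%'
AND    paddr <> HEXTORAW('00');   -- non-zero PADDR = process is running
[/code]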




Performance Tuning :: How To Determine System Memory Usage By Oracle Processes

Jun 20, 2012

We have 96 GB of memory on the UNIX server, and 85% of its usage is attributed to Oracle processes. I want to determine which Oracle processes are taking most of the memory.

SGA is around 36G
SGA_TARGET is 40G
PGA is around 4G

A total of around 40-45 GB of usage is understandable, but it is not known what other Oracle processes are chewing up the remaining 30-40 GB on the server.

load averages: 7.35, 6.46, 6.15; up 248+11:33:21 12:25:03
2202 processes: 2196 sleeping, 1 zombie, 5 on cpu
CPU states: 83.8% idle, 10.5% user, 5.8% kernel, 0.0% iowait, 0.0% swap
Memory: 96G phys mem, 15G free mem, 128G total swap, 128G free swap

PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
21720 oracle 258 0 0 40G 40G cpu/48 215:28 2.04% oracle
10709 oracle 1 0 2 1816K 1448K cpu/9 0:02 0.90% res_conf_email_
[code]......
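A hedged first check from inside the database, to see how much memory the instance itself accounts for (SGA plus the PGA actually allocated) and which individual server processes hold the most PGA:

[code]
-- Total SGA reported by the instance.
SELECT ROUND(SUM(value)/1024/1024/1024, 2) AS sga_gb FROM v$sga;

-- PGA actually allocated across all processes.
SELECT name, ROUND(value/1024/1024/1024, 2) AS gb
FROM   v$pgastat
WHERE  name IN ('total PGA allocated', 'maximum PGA allocated');

-- Top 10 Oracle processes by allocated PGA.
SELECT *
FROM  (SELECT spid, program, ROUND(pga_alloc_mem/1024/1024) AS pga_mb
       FROM   v$process
       ORDER  BY pga_alloc_mem DESC)
WHERE  rownum <= 10;
[/code]

Anything the OS attributes to Oracle beyond these figures would need to be investigated at the OS level, since tools like top often count the shared SGA segment against every attached process.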


Performance Tuning :: How To Tune 100s Of Transactions Over Single Table

Sep 24, 2012

I am trying to find a way to tune and optimize server performance in the following situation. There are hundreds of sessions inserting records into one table. The sessions are communication threads in a Java application; each thread receives messages that are to be stored in the table. Each message must be committed before an ACK is sent to the remote client. Two problems arise, of course: heavy ITL contention on the table and lots of very small transactions. I can adjust the Java application, but I can't do much about the design.

I was thinking about some "caching": if the messages were stored in memory and bulk-inserted into the database by a single thread, performance would be much higher. However, there would be a possible loss of data - a message could be lost from the memory cache after the client had already received its ACK.
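For what it's worth, a minimal sketch of the batching idea on the database side, assuming messages are first parked in a hypothetical MESSAGES_STAGING table with the same columns as the target MESSAGES table and drained by a single writer:

[code]
DECLARE
  TYPE t_msg_tab IS TABLE OF messages%ROWTYPE;  -- MESSAGES is the target table (name is hypothetical)
  l_batch t_msg_tab;
BEGIN
  -- Drain the hypothetical staging table in one pass.
  SELECT * BULK COLLECT INTO l_batch FROM messages_staging;

  IF l_batch.COUNT > 0 THEN
    FORALL i IN 1 .. l_batch.COUNT
      INSERT INTO messages VALUES l_batch(i);
    DELETE FROM messages_staging;
  END IF;

  COMMIT;  -- one commit per batch instead of one per message
END;
/
[/code]

This only illustrates the batching mechanics; it does not address the durability concern above, because the ACK would still be sent before the batch is committed.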


Performance Tuning :: Tools For Database Tuning And Instance Tuning

Jul 12, 2010

Looking to understand the difference between instance tuning and database tuning.

What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas the database consists of physical structures.

However, how does one tune the database's physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is now taken care of by ASM in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?


Performance Tuning :: Multiple SELECT Statement

Apr 8, 2011

I'm working on a query that will show how many different SKUs we have on hand, how many of those SKUs have been cycle-counted, and how many we have yet to cycle-count. I've prepared a sample table and data:

CREATE TABLE SKU
(
ABC VARCHAR2(1 CHAR),
SKU VARCHAR2(32 CHAR) NOT NULL,
Lastcyclecount DATE,
[code]....

What I also want to do is select another column that will group by sku.abc and count the total number of A, B, and C SKUs where the lot.qty is > 0:

SELECT sk.abc AS "STRATA",
COUNT (DISTINCT sk.sku) AS "Total"
FROM sku sk,
(SELECT sku
FROM lot
WHERE qty > 0) item
WHERE item.sku = sk.sku(+)
GROUP BY sk.abc

Finally, I need the last column to display the DIFFERENCE between the two totals from the queries above (the difference between the "counted" and the "total"):

COUNT (DISTINCT sk.sku) - COUNT (DISTINCT s.sku)
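A hedged sketch of getting all three figures in a single pass, assuming "on-hand" means a LOT row with QTY > 0 exists and "counted" means LASTCYCLECOUNT is populated (aliases follow the sample objects above):

[code]
SELECT sk.abc AS "STRATA",
       COUNT(DISTINCT CASE WHEN item.sku IS NOT NULL
                           THEN sk.sku END) AS "Total",
       COUNT(DISTINCT CASE WHEN item.sku IS NOT NULL
                            AND sk.lastcyclecount IS NOT NULL
                           THEN sk.sku END) AS "Counted",
       COUNT(DISTINCT CASE WHEN item.sku IS NOT NULL
                           THEN sk.sku END)
         - COUNT(DISTINCT CASE WHEN item.sku IS NOT NULL
                                AND sk.lastcyclecount IS NOT NULL
                               THEN sk.sku END) AS "Remaining"
FROM   sku sk,
       (SELECT sku FROM lot WHERE qty > 0) item
WHERE  sk.sku = item.sku(+)
GROUP  BY sk.abc;
[/code]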


Performance Tuning :: Method Of Tuning Database - Row Reduction?

Oct 20, 2010

There is a simple way to increase the performance of a query by reducing the number of rows in the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.

What is this method called? I have just forgotten the name and can't recall it. What is this type of row-reduction optimization called?


Performance Tuning :: Procedure Performance On New Database Import?

Nov 15, 2010

Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT, we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.

The new machine where the 11g database exists is slightly different than the source, but it's not like we have this problem with every procedure. It's only a couple.

Is there any possible reason that we'd have to reinstall a procedure to correct a performance problem?


Multiple Processes To Write File?

Sep 10, 2012

In a PL/SQL program, I am writing information from one table to files. In my current architecture, I am writing the information to approximately 1000 files.

If I put the database write operation in another package and another procedure, and call this procedure from my PL/SQL program asynchronously, can that increase performance?
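A minimal sketch of one way to make such a call asynchronous from PL/SQL, by submitting it as a scheduler job; WRITE_FILE_PKG and its parameter are hypothetical names:

[code]
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name   => 'WRITE_FILE_JOB_1',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN write_file_pkg.write_one_file(p_file_id => 1); END;',
    enabled    => TRUE,    -- runs as soon as a job slave is free
    auto_drop  => TRUE);   -- drop the job definition once it completes
END;
/
[/code]

Whether this actually improves throughput depends on where the time is spent; if the bottleneck is the file system itself rather than the calling session, running the writes in background jobs may not gain much.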


Precompilers / OCI & OCCI :: Connection Pooling Over Multiple Processes

Dec 9, 2009

We are currently using OCI to connect to the Oracle DB from C. Each process has its own dedicated connection, environment handle, session handle, etc.

We have now decided to use connection pooling. In the documentation, the connection pooling examples all use a multi-threaded environment, but in our case it is a multi-process system and we can't change the product's architecture to be multi-threaded.

I would like to know how we can use connection pooling across multiple processes.


Windows / .NET :: Multiple Oracle ODBC Drivers In Single Database

Jan 9, 2013

I work in a large bank, in a department that produces reports for different areas of the bank. By and large, we use Microsoft Office products to interface with our Oracle databases. Recently, we had two new databases come online that use Oracle 11g - we were not using any 11g databases before this point. We have two other databases that run on Oracle 10g.

Up until the two new databases were brought in, our reporting was done from systems that used 10g and 9i. We all ran the Oracle 9i driver to connect to them, and it worked very well without issue. With the addition of the 11g databases to our reporting pool, we have been forced to upgrade our ODBC connections to the 11g driver, and it has not gone well at all. I had one query that typically runs in 30 minutes take 13+ hours to run yesterday. Speed is not the only issue, either; we have sporadic ODBC call failures, crashes, and other general failures to deal with.

Our Oracle DBAs have been trying to solve the issue, but have not yet found a solution, and each day that this goes on we fall further and further behind, as I have daily time-sensitive reports to send out that depend on this data.

One of our DBAs said she read somewhere that Oracle had not included MS Access support in the 11g driver, and that the errors were due to the imperfect connection the driver created. I don't know if there's any truth to that, but it would provide an explanation for our troubles. We have to use Access for our reporting, as over 90% of our existing reports and processes use Access, and having to change everything over at once is just not feasible.

Is there any way to force the 9i and 11g ODBC drivers to coexist, so that we don't have to use the 11g driver for our 10g databases? Or is there a better 11g driver available?


Performance Tuning :: Export SQL Plan From Test Database To Prod Database?

Jul 16, 2013

An SQL query is taking much longer than usual and not completing even when left running for hours! The query joins a table with a fairly complex view.

The same query in a test database completes in less than 2 mins.

I would like to export the SQL plan from the test database to the prod database.

How can I export/import the execution plan of a particular SQL statement in version 10.2.0.4?


LGWR Switch In Database Alert Log Every 3-4 Minutes

Jul 6, 2012

I am seeing log switches (LGWR switch) in my database alert log every 3-4 minutes. Is that appropriate? If not, what measures can I take to reduce this?
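A hedged way to quantify the switching from the data dictionary: count the switches per hour and check the current redo log sizes (as a rough rule, larger redo logs mean fewer switches for the same redo volume):

[code]
-- Log switches per hour.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;

-- Current redo log groups and their sizes.
SELECT group#, bytes/1024/1024 AS size_mb, status
FROM   v$log;
[/code]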


Performance Tuning :: SSL On Database Connections?

Oct 10, 2012

Does anyone have insight into the overheads of mutually authenticated SSL for database connections? This is over a fast local network, to a RAC cluster, with a DB firewall in front. There's always a large element of "it depends".

The information I'm interested in is things like the latency for initial session setup and for subsequent data transfer, as well as the increase in network packet size and in CPU cost for the database server. I guess there are some implications for session memory usage as well.


Performance Tuning :: Big Database Migration?

Jul 4, 2012

I have one big database that I need to migrate to Oracle, because Oracle handles big databases better than the alternatives. I have written the transfer software and everything works well except for one thing: during the migration I found that Oracle keeps filling the redo logs and undo segments. My question is how to migrate (or whether I can migrate) the database to Oracle without generating all that undo (deactivating it for the load) and afterwards put the database back to normal operation, because I just need to transfer the data as-is and from that point on Oracle takes over...
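Undo generation cannot be switched off outright, but a hedged sketch of the usual workaround for a one-off bulk load is a direct-path insert, which generates minimal undo for the table data (and minimal redo too while the target is in NOLOGGING mode); the object names below are hypothetical:

[code]
ALTER TABLE target_table NOLOGGING;

-- Direct-path load: minimal undo, and minimal redo while NOLOGGING is set.
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM source_table@source_db_link;
COMMIT;

ALTER TABLE target_table LOGGING;   -- restore normal logging afterwards
[/code]

A backup should be taken after such a load, since NOLOGGING changes cannot be recovered from the redo stream.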


Performance Tuning :: Database Crash Due To CPU Starvation?

Sep 1, 2013

I am using Oracle version 11.2.0.3. We recently migrated to 11g, and after a month of smooth and comparatively better performance we are suddenly facing performance issues with our database; it has crashed twice within 5 days, even though we didn't push any new code to the database recently, at least not since the 11g migration. After getting feedback from the Oracle Corporation guys, they pointed to the default database stats gathering job, which was eating most of the CPU because of its default degree: it was running 160 parallel threads and causing resource starvation. So we reduced the degree of the stats gathering job to 8.
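A hedged sketch of the kind of change described above, checking and capping the DEGREE preference used by the automatic stats job (this mirrors what was done rather than recommending a value):

[code]
-- Current global DEGREE preference for DBMS_STATS.
SELECT DBMS_STATS.get_prefs('DEGREE') AS stats_degree FROM dual;

-- Cap it at 8, as described above.
BEGIN
  DBMS_STATS.set_global_prefs('DEGREE', '8');
END;
/
[/code]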

But the database crashed again two days ago, and rebooted back to normal within 3 minutes, even after the default degree was changed to 8. Could this be happening because of some specific application-related SQL, or something else?


Performance Tuning :: Import And Export Of Database

Oct 14, 2010

I am doing export and import of a database. Before loading the data, I drop all the tables and then import. Is there any issue if we drop the tables and import data frequently?


Performance Tuning :: Database Time In AWR Report

Jul 9, 2012

I understand that when data is read from disk, I/O is done, and when computations are done, CPU is used. So where does the following equation fit in?

DB Time = sum of database CPU time + waits

Is I/O considered part of CPU time?

Does this equation change with a SAN or with OS caching?
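For reference, a hedged query against the system time model that shows the two headline components side by side; in this model I/O shows up inside DB time as wait time, not as CPU time:

[code]
-- DB time and DB CPU from the system time model, converted to seconds.
SELECT stat_name, ROUND(value/1000000) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
[/code]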


Performance Tuning :: Database Is Slow On Insert

Mar 23, 2012

We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.

However, when processing "Payments by the customer" it now takes a long time to insert even a single payment record into the database. The database is live and our customers are very frustrated.


Performance Tuning :: Restart Database After Increase Db_cache_size?

Aug 23, 2012

SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1152M
sga_target big integer 0

[code]....

In the scenario above, the database is not using ASMM (sga_target is 0) and uses an spfile. If I want to increase the db_cache_size parameter, do I need to bounce the instance?
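For what it's worth, db_cache_size is documented as a dynamic parameter, so a hedged sketch of the change would look like the following, provided the new total of all SGA components still fits within sga_max_size (the value shown is only an example):

[code]
-- Raise the buffer cache online and persist the change in the spfile.
ALTER SYSTEM SET db_cache_size = 512M SCOPE = BOTH;
[/code]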


Performance Tuning :: Same Query With Different Explain Plan In Two Database?

Aug 17, 2012

I am facing a weird situation wherein the explain plan of the same SQL is different in SIT and PROD. In fact, the explain plan is very costly in PROD, even though the DB version is the same in both SIT and PROD.

Below is the sql and corresponding explain plan in Prod and SIT respectively.

Query:
SELECT seq,CCN,ProcessorPart,root_item,comp_path,Item,comp_item,comp_item_type,
lag(comp_item_type,1,'PART') over(PARTITION BY seq ORDER BY lvl)Nxt_comp_item_type,lvl,bom_qty,
ROUND(CASE min(abs(bom_qty)) OVER (PARTITION BY seq ORDER BY lvl)
WHEN 0 THEN 0 ELSE 1 END * EXP (SUM (LN (nullif(abs(bom_qty),0))) OVER (PARTITION BY seq ORDER BY lvl))) Ulti_qty,
'AMER'

[code]...

The tables referred to in the above query are small, containing around 10k records. They are partitioned on Region and not indexed.

Explain Plan in Prod: COST CARDINALITY BYTES

SELECT STATEMENT, GOAL = ALL_ROWS165173613539322883634804
SORT UNIQUE236360
UNION-ALL
PARTITION LIST SINGLE117240

[code]...

Explain Plan in SIT: COST CARDINALITY BYTES

SELECT STATEMENT, GOAL = ALL_ROWS3211689
SORT UNIQUE347240
UNION-ALL
PARTITION LIST SINGLE172120

[code]...

I am not able to determine why there is such a huge change in cost between SIT and PROD. The job now runs for 3-5 hours in PROD, whereas it used to complete within 20 minutes in SIT.


Performance Tuning :: Handling NULL Values In The Database?

Feb 6, 2012

We have a database with multiple columns containing NULL values, and in many queries we use the NVL function, which in turn suppresses index usage when it is in fact really essential (selecting very few rows from massive data). Instead of creating lots of function-based indexes on NVL, or composite indexes on (nullable_column, constant), I am thinking of setting a default value for most of these columns. In that regard I have some questions:

Which approach is better: setting a default value for the columns, or updating the columns with a default value and modifying inserts to take care of future data? Although altering the table and modifying the column to set a default value looks better, since it will take care of data inserted in the future, it will invalidate the dependent subroutines. I understand that in 10g both statements will generate a lot of undo (though in 11g I heard things changed for setting the default value of a column). Also, how do we take care of all the queries that use the criteria 'where column1 IS NULL' or 'where column1 IS NOT NULL'? It will be a really difficult task to manually change each and every occurrence of such a condition, even using user_source.

Finally, for numeric values, say an ID column that starts from 1 onwards (2, 3, 4, etc.), we can set 0 as a sensible default so that performance is not affected.

Is there a similar precaution for VARCHAR2 columns, purely from a performance point of view?
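For reference, a hedged sketch of the two alternatives discussed above, with hypothetical object names: a function-based index that matches the NVL predicate, versus a default value so that new rows are never NULL:

[code]
-- Option 1: function-based index matching predicates such as
--           WHERE NVL(column1, 0) = :val
CREATE INDEX ix_t1_column1_nvl ON t1 (NVL(column1, 0));

-- Option 2: default value for future inserts (existing NULLs would still
--           need a one-off UPDATE, which generates the undo noted above).
ALTER TABLE t1 MODIFY (column1 DEFAULT 0);
[/code]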


Performance Tuning :: Memory Parameters In Oracle Database 9.2.0.8

Nov 18, 2012

I have a question regarding memory parameters in Oracle Database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32G of RAM, and the OS kernel parameter shmmax is set to 16G. Is it reasonable to set sga_max_size to the same value, and db_cache_size to 80% of that size?


Performance Tuning :: How To Check Table Has Changed 10% In Database

Dec 13, 2011

By default, the DBMS_STATS-based automatic statistics job runs once every 24 hours to collect statistics for database objects, and Oracle collects new statistics for an object when enough of its data (about 10%) has changed.

My question here is: how can I check whether a table has changed by 10% in the database?
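A hedged way to check is to compare the DML tracked in DBA_TAB_MODIFICATIONS (populated when table monitoring is enabled) against the row count recorded in DBA_TABLES; the schema name below is hypothetical:

[code]
-- DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO can be run first to flush
-- the in-memory DML counters into the view.
SELECT m.table_owner,
       m.table_name,
       m.inserts + m.updates + m.deletes AS changes,
       t.num_rows,
       ROUND(100 * (m.inserts + m.updates + m.deletes)
                 / NULLIF(t.num_rows, 0), 1) AS pct_changed
FROM   dba_tab_modifications m,
       dba_tables t
WHERE  t.owner       = m.table_owner
AND    t.table_name  = m.table_name
AND    m.table_owner = 'SCOTT';   -- hypothetical schema
[/code]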


Performance Tuning :: Tools For Monitoring Load Of Database?

Dec 3, 2010

Which tools are available for monitoring the load of the database?


Performance Tuning :: No Significant Database Activity To Run ADDM

Apr 11, 2012

As there was a database performance issue, we started to analyse it by running ADDM using the script below.

@?/rdbms/admin/addmrpt.sql.

The result of running ADDM is below.

DETAILED ADDM REPORT FOR TASK 'TASK_64718' WITH ID 64718
Analysis Period: 10-APR-2012 from 06:00:15 to 07:00:22
Database ID/Instance: 324353546/1
Database/Instance Names: JACK/jack1
Host Name: fdsfggwe001
[code].....

There was no significant database activity to run the ADDM. The database's maintenance windows were active during 100% of the analysis period. The analysis of I/O performance is based on the default assumption that the average read time for one database block is 10000 micro-seconds. An explanation of the terminology used in this report is available when you run the report with the 'ALL' level of detail.

Current parameters set in database are below

statistics_level ---- string
timed_statistics --- boolean

1. Do I need to set these initialization parameters to some other values?

2. Should I run the command below before executing the ADDM report? exec dbms_workload_repository.create_snapshot();


Performance Tuning :: Audit Analyze Table On 9i Database?

Apr 19, 2011

I am trying to get a handle on statistics collection in the data warehouses I support. It seems developers have created several ANALYZE TABLE jobs, but the code for these is not stored as PL/SQL in the database, and thus it is problematic for statistics collection. Even if we collect stats the way we want, these jobs kick in and overwrite the statistics we collect every day.

Is there a way to AUDIT ANALYZE TABLE? I can't find it anywhere.

Is there a way to globally turn off ANALYZE TABLE in a 9i database?


Performance Tuning :: Database Links - Display Consolidated Data?

Oct 14, 2013

A website needs to display consolidated data from databases located in different geographical regions (India, London and New York). The application server for the website is hosted in only one location, India. What techniques can be used for faster retrieval of data from all 3 databases?

Note: There is no need for real-time data retrieval from the different regions; however, the user should be able to view updated data at predefined intervals.


Performance Tuning :: Oracle Database 10g Enterprise Edition 10.2.0.4.0 - 64bi

Sep 2, 2011

I am querying v$sga and getting a variable size of 211337216 bytes. When querying v$sgastat I get:

java Pool : 16777216
Large Pool : 41943040
Shared pool : 398560392

But as far as I know, the following condition should hold, yet it does not:

[code]

Variable sga = java pool + large pool + shared pool
select pool,name,sum(bytes)
from v$sgastat
where pool in ('shared pool','java pool','large pool')
group by pool,name;

Here variable size using v$sga : 211337216 bytes

and java pool + large pool + shared pool : 211302536 bytes.

[/code]
But shouldn't these match?


Performance Tuning :: Tablespace With Different Block Size Inside Same Database?

Nov 25, 2011

All the analysis so far proves that our system is clearly I/O bound, and "db file sequential read" is the biggest culprit.

We have even identified the index that is most affected by sequential reads. I am thinking of creating a new tablespace with a 32K block size (currently all tablespaces are 8K) and migrating this index to the new tablespace. That way, Oracle will have to do fewer reads to get the required data.

But is there anything wrong with having just one tablespace with a different block size? Or is there anything I have to be watchful about while doing it?
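For context, a hedged sketch of what the change would involve (sizes and names are only examples): a separate buffer cache for the non-default block size has to exist before such a tablespace can be created.

[code]
-- A buffer cache for 32K blocks must be configured first.
ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE = BOTH;

-- Tablespace using the non-default block size.
CREATE TABLESPACE ts_idx_32k
  DATAFILE '/u01/oradata/db/ts_idx_32k_01.dbf' SIZE 1G
  BLOCKSIZE 32K;

-- Move the affected index (hypothetical name) into it.
ALTER INDEX big_index_i REBUILD TABLESPACE ts_idx_32k;
[/code]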


PL/SQL :: Merge Multiple Rows Into Single Row (but Multiple Columns)

Oct 17, 2012

How can I merge multiple rows into a single row (but multiple columns) efficiently?

For example

IDVal IDDesc IdNum Id_Information_Type Attribute_1 Attribute_2 Attribute_3 Attribute_4 Attribute_5
23 asdc 1 Location USA NM ABQ Four Seasons 87106
23 asdc 1 Stats 2300 91.7 8.2 85432
23 asdc 1 Audit 1996 June 17 1200
65 affc 2 Location USA TX AUS Hilton 92305
65 affc 2 Stats 5510 42.7 46 9999
65 affc 2 Audit 1996 July 172 1100

where the attributes mean different things for each Information_Type. For example, for Information_Type = Location:

Attribute_1 means Country
Attribute_2 means State and so on.

And for Information_Type = Stats:

Attribute_1 means Population
Attribute_2 means American Ethnicity percentage and so on.

I want to create a view that shows the data like below:

IDVal IDDesc IDNum Country State City Hotel ZipCode Population American% Other% Area Audit Year AuditMonth Audit Type AuditTime
23 asdc 1 USA NM ABQ FourSeasons 87106 2300 91.7 46 85432 1996 June 17 1200
65 affc 2 USA TX AUS Hilton 92305 5510 42.7 46 9999 1996 July 172 1100
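A hedged sketch of such a view using conditional aggregation; the source table name SRC is hypothetical, only a few of the output columns are shown, and the remaining attribute-to-column mappings follow the same pattern:

[code]
CREATE OR REPLACE VIEW v_id_information AS
SELECT idval, iddesc, idnum,
       MAX(CASE WHEN id_information_type = 'Location' THEN attribute_1 END) AS country,
       MAX(CASE WHEN id_information_type = 'Location' THEN attribute_2 END) AS state,
       MAX(CASE WHEN id_information_type = 'Location' THEN attribute_3 END) AS city,
       MAX(CASE WHEN id_information_type = 'Location' THEN attribute_4 END) AS hotel,
       MAX(CASE WHEN id_information_type = 'Location' THEN attribute_5 END) AS zipcode,
       MAX(CASE WHEN id_information_type = 'Stats'    THEN attribute_1 END) AS population,
       MAX(CASE WHEN id_information_type = 'Stats'    THEN attribute_2 END) AS american_pct,
       MAX(CASE WHEN id_information_type = 'Audit'    THEN attribute_1 END) AS audit_year,
       MAX(CASE WHEN id_information_type = 'Audit'    THEN attribute_2 END) AS audit_month
FROM   src
GROUP  BY idval, iddesc, idnum;
[/code]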







