Performance Tuning :: NOLOGGING In Oracle 11gR2?
Oct 16, 2012
I am building a database to store call quality statistics for VOIP networks. It is a very insert-heavy application, and data reliability is of relatively minimal importance (in the sense that a few corrupt call records here and there don't matter the way corruption does in, for example, a bank's database). Long-term storage is also unimportant; most customers only wish to keep 3 months of data readily available in the database. Most do not even archive the older data.
To that end I am searching for every possible way to improve my insert performance, and the internet has turned me on to the idea of NOLOGGING. These are the steps I have taken to reduce the IO consumed by the redo and undo logs.
1. I am inserting with the APPEND_VALUES hint.
2. I have disabled force logging at the database level
3. I have disabled force logging at the tablespace level
4. I have disabled logging on the relevant table and each of its indices (these steps are sketched below)
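For reference, a minimal sketch of those four steps, assuming 11gR2; object names are placeholders:
-- 1. Direct-path insert of literal/bound values (11gR2 hint)
INSERT /*+ APPEND_VALUES */ INTO call_records (call_id, mos_score) VALUES (:1, :2);
COMMIT;
-- 2./3. Force logging off at the database and tablespace level
ALTER DATABASE NO FORCE LOGGING;
ALTER TABLESPACE call_stats NO FORCE LOGGING;
-- 4. NOLOGGING on the table and its indexes
ALTER TABLE call_records NOLOGGING;
ALTER INDEX call_records_ix1 NOLOGGING;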
As best I can tell this is all I can do to minimize redo/undo, but based on my observations of the disk portion of the Windows Server 2008 Performance Monitor, this has made little to no change in the amount of IO to my REDO and UNDO files. IO to the .dbf containing my table makes up less than 20% of the total disk IO for oracle.exe; the rest is the REDO and UNDO logs.
asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:897564200346274711
The above article is a little over my head, but I am able to extract from it that I will never entirely eliminate REDO/UNDO, which is fine, but I would think I could get it lower than it currently is.
View 26 Replies
Dec 29, 2012
I am aware that from 11g, memory_target is sufficient for memory management between the SGA and PGA.
What happens if MEMORY_TARGET is set to a non-zero value and SGA_TARGET is set to zero in an 11g database? Does this enable automatic memory management within the SGA?
We are regularly hit by ORA-04031 errors. Also, the memory target advisory (V$MEMORY_TARGET_ADVICE) does not show any advisory information.
For example:
memory_max_target = 500m
memory_target = 500m
and
sga_max_size=500m
sga_target=0
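A quick way to confirm what the instance is actually doing with those settings (standard parameters and views; output will vary by instance):
SHOW PARAMETER memory_target
SHOW PARAMETER sga_target
SELECT * FROM v$memory_target_advice ORDER BY memory_size;
SELECT component, ROUND(current_size/1024/1024) AS mb
  FROM v$memory_dynamic_components;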
View 6 Replies
Dec 24, 2012
The query below is consuming almost 98% CPU and taking a long time to execute.
SELECT ancestor,
Max(D.alarmstate) ALARMSTATE,
Max(D.sialarmstate) SIALARMSTATE,
Max(D.uncralarmstate) UNCRALARMSTATE,
Max(M.commstate) COMMSTATE,
Max(M.nncommstate) NNCOMMSTATE,
Max(M.servicestate) SERVICESTATE,
Max(M.abnormal) ABNORMAL,
CASE
[code]....
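A generic first step, not from the original post, is to pull the actual runtime plan and row-source statistics after running the statement with the gather_plan_statistics hint:
SELECT /*+ gather_plan_statistics */ ...   -- the statement above, otherwise unchanged
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));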
View 15 Replies
May 25, 2013
I am using 11gR2 on Windows Server. This query runs many times a day and badly affects database performance. I don't have much idea about this query.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS failed_count
FROM sys.dba_audit_session
WHERE returncode != 0 AND TO_CHAR(timestamp, 'YYYY-MM-DD HH24:MI:SS') >= TO_CHAR(current_timestamp - TO_DSINTERVAL('0 0:30:00'), 'YYYY-MM-DD HH24:MI:SS')
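This appears to be the standard failed-logon count query that Enterprise Manager runs against the audit trail. A possible rewrite (an assumption, not verified against this system) is to compare the dates directly instead of converting both sides to strings:
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp,
       COUNT(username) AS failed_count
  FROM sys.dba_audit_session
 WHERE returncode != 0
   AND timestamp >= CAST(current_timestamp - TO_DSINTERVAL('0 0:30:00') AS DATE);
Beyond the rewrite, the runtime is usually dominated by the size of the underlying audit trail, so purging old audit records also matters.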
View 1 Replies
Feb 9, 2012
We are having major differences in performance when accessing an Oracle 11gR2 database from two different environments.
I won't sadden you with the gory details of the full application but I will use an indicative example of our problem.
We have SQL Developer in both environments. In each of these SQL Developer instances there is a connection defined to the same database on another machine. So there is the PC looking at a database, and a virtual instance looking at the same database.
Selecting the '+' sign next to 'Tables' for that connection, on the PC the list of four tables comes back in 4 seconds, but on the virtual instance the same list of four tables comes back in 1 minute and 50 seconds.
This sort of timing issue is repeatable for all accesses to the Oracle database on the third machine.
View 4 Replies
Jul 12, 2013
I have installed a database on one server and would like to enable AWR on it. STATISTICS_LEVEL is set to TYPICAL. While running the script below to enable AWR, it gives an error:
SQL> exec dbms_scheduler.enable('GATHER_STATS_JOBS');
BEGIN dbms_scheduler.enable('GATHER_STATS_JOBS'); END;
*
ERROR at line 1:
ORA-27476: "SYS.GATHER_STATS_JOBS" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4343
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2802
ORA-06512: at line 1
How can I make AWR snapshots generate automatically?
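A sketch, assuming an 11g database: GATHER_STATS_JOB no longer exists there (optimizer statistics collection became an autotask), and AWR snapshots are managed separately through DBMS_WORKLOAD_REPOSITORY rather than by that job:
-- Check the stats autotask instead of the old scheduler job (11g assumption)
SELECT client_name, status FROM dba_autotask_client;
-- AWR snapshot interval/retention, in minutes (values are placeholders)
EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60, retention => 10080);
-- Take a manual snapshot to confirm AWR is collecting
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();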
View 3 Replies
Aug 28, 2012
I am running Oracle 10.2.0.3 on Solaris 5.9. The front-end application is PeopleSoft v8.8. From my AWR report I have found the SQL below, which needs to be tuned:
SELECT TO_CHAR (TO_DATE (TO_CHAR (B.ASOFDATE, 'YYYY-MM-DD'), 'yyyy-mm-dd'),
'dd/mm/yyyy'),
B.EMPLID,
B.PWCUK_LEGACY_ID,
B.NAME,
[code]...
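One small observation, assuming ASOFDATE is a DATE column: the nested conversions in the first select item are redundant and collapse to a single call:
-- Equivalent to TO_CHAR(TO_DATE(TO_CHAR(B.ASOFDATE,'YYYY-MM-DD'),'yyyy-mm-dd'),'dd/mm/yyyy')
TO_CHAR(B.ASOFDATE, 'dd/mm/yyyy')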
View 6 Replies
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenario?
View 1 Replies
Nov 13, 2010
Recently I have been analyzing Oracle row chaining and migration in the database. Is there any way to track chaining and migration of rows as part of a database health check, or do we have to analyze each table to detect chained and migrated rows?
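For reference, a couple of standard ways (not from the original post) to check for chained or migrated rows:
-- System-wide symptom: fetches that had to follow a chained/migrated row piece
SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';
-- Per-table counts (CHAIN_CNT is populated by ANALYZE, not by DBMS_STATS)
SELECT owner, table_name, num_rows, chain_cnt
  FROM dba_tables
 WHERE chain_cnt > 0;
-- Or list the affected rows, after creating CHAINED_ROWS via $ORACLE_HOME/rdbms/admin/utlchain.sql
ANALYZE TABLE some_schema.some_table LIST CHAINED ROWS INTO chained_rows;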
View 4 Replies
Oct 31, 2013
We are re-designing our App and we have a critical question: what's the best way (in terms of performance) of using TIFF images (about 20K in size) with Oracle?
Currently we have a Windows shared file server, and we create the TIFF images there under a huge directory structure (like /images/ddddmmyy/aa/bb/001, then /images/ddddmmyy/aa/bb/002, etc.). Our database runs on Linux, usually Oracle version 10, 11, or 12. We create about 200,000 images per day, keep them for 60 days, and then remove that structure.
Our Web app (developed with .NET) reads those images just to display them in a Web session (IE). As you can see, what we are doing now works fine, but the network is sometimes an issue, and it is also hard to keep things synchronized with our DR server, backups, etc.
Are we taking the correct approach? Would it be better to store the images in CLOBs or BLOBs for performance? As I mentioned, performance is the KEY FACTOR and the most important item to consider in this design.
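If the in-database route gets tested, a minimal sketch of the usual 11g+ approach (SecureFile BLOBs; table, column, and storage options here are placeholders, not a recommendation from the post):
CREATE TABLE images (
  image_id   NUMBER PRIMARY KEY,
  img_date   DATE   NOT NULL,
  tiff_data  BLOB
)
LOB (tiff_data) STORE AS SECUREFILE (CACHE READS NOLOGGING);
Range or interval partitioning on IMG_DATE would then turn the 60-day purge into dropping partitions rather than deleting rows.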
View 6 Replies
Oct 25, 2010
I am querying a table with 6 million records. It takes around 35 seconds from a command-line terminal, using PuTTY on a Windows client.
The server is Linux (CentOS) with Oracle 11g installed. I came across the Oracle result_cache feature, which I suspect would be fast.
My query is:
"select /*+ result_cache */ * from relationshipobject;"
SQL> set autotrace traceonly stat
SQL> set timi on
[Code]....
How can I reduce my query execution time and consistent gets?
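As a side note, a full 6-million-row result set is normally far larger than the server result cache will hold, so the hint may simply not apply here. The relevant limits can be checked with standard parameters and the built-in report:
SHOW PARAMETER result_cache_max_size
SHOW PARAMETER result_cache_max_result
SET SERVEROUTPUT ON
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT;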
View 10 Replies
Mar 24, 2013
How do you limit Oracle redo?
View 2 Replies
Dec 2, 2010
I have a table "NEWS_COMMENT" like this:
Name Type
------- --------------
ID NUMBER(8)
USERID NUMBER(8)
SORT_TEXT VARCHAR2(100)
TEXT VARCHAR2(1000)
DATE DATE
VALID VARCHAR2(1)
CODNEW NUMBER(10)
The table has a normal index for the userid column.
There is a query that looks for the different CODNEW values for a USERID, but CODNEW always has to be greater than 2248833:
select codnew from news_comment where userid=2914655 and valid='N' and codnew>2248833
I have created a new index for this kind of query:
create index coment_new_IDX on news_comment
(CASE WHEN codnew >2248833 and valid='N' THEN userid ELSE NULL END )
but Oracle doesn't use it. I have used a hint to force it, but it still isn't used.
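For a function-based index like this to be considered, the query generally has to reference the same expression the index was built on; a sketch of that form, assuming the intent above:
select codnew
  from news_comment
 where (case when codnew > 2248833 and valid = 'N' then userid else null end) = 2914655;
Alternatively, a plain composite index on (USERID, VALID, CODNEW) would cover the original predicate without requiring the query to match an expression.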
View 9 Replies
Jun 7, 2011
We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration is fast in the development lab and 5x slower at the production site.
The development Oracle setup is on Windows and the production setup is on Solaris. I have attached the AWR report generated for a period in which the migration ran for 3 hours and was stopped due to slow performance.
Here is my initial analysis.
1) The top timed event is DB CPU, so I feel the migration scripts could be modified to run in parallel so that they finish faster (see the sketch after this list). But then the question arises: why would it run faster in the development environment if this were the issue?
2) I tried increasing:
a. large_pool_size to 512M (from 0)
b. sga_max_size to 8G (from 4G)
c. sga_target to 8G (from 4G)
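A minimal sketch of the parallel idea from point 1, assuming the migration is mainly INSERT ... SELECT work (table names and degree are placeholders):
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(t, 4) */ INTO target_table t
SELECT /*+ PARALLEL(s, 4) */ * FROM source_table s;
COMMIT;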
I have attached the AWR report, and below are the /etc/system contents for the Solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout
* set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The awr report is renamed to .txt from .html to be able to upload the file.
View 6 Replies
Apr 7, 2011
Here, let me explain:
I have created a table with 8 million records and 2 different indexes on 2 different columns (NUM1 and NUM2). The first indexed column (NUM1) has many distinct values (1, 2, 3, etc.).
The second indexed column (NUM2) has only 2 distinct values:
7,999,999 records have the same value ('A') and the remaining one record has a different value ('B').
Query1:
select * from tbl where num1=val
Query2:
select * from tbl where num2='B'
I have compared the explain plans of both queries, but Query2 doesn't use the index. Why doesn't Oracle use my index on column NUM2?
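One thing worth checking (an assumption, since the post doesn't show the statistics) is whether NUM2 has a histogram; without one the optimizer assumes the two values are evenly distributed, so a full scan looks cheaper than the index. A sketch:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'TBL',
    method_opt => 'FOR COLUMNS NUM2 SIZE 254',
    cascade    => TRUE);
END;
/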
View 5 Replies
Sep 9, 2010
Oracle UNION ALL performance issue: when I run the SQL below as two separate queries (part 1 and part 2), each takes only a few seconds, but when I run them together with UNION ALL, with or without the GROUP BY, it takes much longer.
SELECT AVG(date_completed-login_date),to_char(to_date(login_date), 'YYYY') as wYear FROM
(
SELECT test.date_completed AS date_completed, sample.login_date AS login_date
FROM sample test
where (some conditions) ) ---SQL part 1
UNION ALL
[code]...
View 33 Replies
Feb 21, 2013
I have a query that is optimized with respect to its indexes and runs immediately, in both SQL Server and Oracle, when the answer is only a few records. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of type VARCHAR of 250 and some of 2000, and I understand this may be part of the problem; but for similar results (20,000 records) SQL Server runs in 2 seconds while Oracle takes around 30 seconds to return the full data. The problem really is in bringing back all of those fields, since if the same query returns only a single numeric field it finishes in 2 seconds. I have tested through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or the flow of data through the network. I would like to think it is some setting, since I don't believe there should be that much difference between Oracle and SQL Server. The databases are Oracle 10 and SQL Server 2008.
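One setting that often explains exactly this pattern (fast for one narrow column, slow for 20 wide ones) is the client fetch/array size; this is an assumption, not something confirmed in the post. In SQL*Plus, for example:
SET ARRAYSIZE 500                 -- rows fetched per round trip (default 15)
SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM some_wide_table;    -- placeholder for the 20-column query
Most client stacks (ODBC, ODP.NET, JDBC) expose an equivalent fetch-size or prefetch setting.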
View 1 Replies
Jun 4, 2012
A coworker of mine asked if there was any documentation from Oracle that lists all of the parts of the AWR report and what each means. I was taken aback, because I don't think there is. There are third-party books that talk about AWR reports and their predecessor, Statspack reports.
Oracle has some notes on their support site about reading an AWR or Statspack report. All I found in the official documentation was some basic information about how to run an AWR report and an overview of what it was. It would be nice to have some sort of documentation that lists out each section and explains the units and purpose.
View 3 Replies
Jun 4, 2013
How can I avoid duplication of the **WHERE** clause in my query?
In my query below, the **JOIN** condition is the same for both queries and the **WHERE** condition is also the same except for the clause "and code.code_name='transaction_1'". In the **IF** condition only credit and debit are swapped between the two queries. Because of this **credit/debit** swap and that extra where clause, I am duplicating the whole query. How can I avoid this duplication? I am using Oracle 11g.
SELECT day AS business_date,
SUM(amount) AS AMOUNT,
type_amnt AS amount_type,
[Code]....
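A generic illustration of the usual pattern for this (the table and column names here are made up, since the full query is not shown): fold the credit/debit choice into a CASE expression so the query is written only once:
SELECT TRUNC(txn_date) AS business_date,
       SUM(CASE WHEN code_name = 'transaction_1' THEN credit ELSE debit END) AS amount
  FROM transactions
 GROUP BY TRUNC(txn_date);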
View 2 Replies
Apr 3, 2012
We have a query which makes Oracle behave very strangely. It is a straightforward join between four tables of about 30,000 rows each, with some simple comparisons and some NOT LIKEs.
When we run this query, it either takes about 1 second or more than 1,000 seconds to run and return the approximately 5,000 rows of the result. If we run the same query over and over again, it fluctuates back and forth between two different execution plans, apparently at random, 3 times out of 4 selecting the 1,000-second version and 1 time out of 4 the 1-second version.
There are no other connections to the database, the schema is not modified, the data is identical, the query is identical, and the response is identical, but the execution time alternates between 1 second and 1,000 seconds. On the same database instance we have another schema which is identical, but with slightly less data, which is used for development. The 1,000-second run times did not happen in that schema, but only in the test system's database.
Therefore we would REALLY like to understand what happens and why, so that we can avoid triggering this in the future. We could try locking the 1 second execution plan, but then we're afraid of doing the same thing wrong again in the future.
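If locking the good plan does become necessary, and the database is 11g or later, SQL plan baselines are the supported mechanism for it; a sketch, with sql_id and plan_hash_value as placeholders:
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => 'abcd1234efgh5',
         plan_hash_value => 123456789);
END;
/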
Here are the two execution plans that Oracle switches between, more or less at random:
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15663 pr=10536 pw=0 time=855673 us cost=82273 size=2707430769293 card=14028138701)
79272 79272 79272 TABLE ACCESS FULL GROUPS (cr=1008 pr=0 pw=0 time=22154 us cost=277 size=10693 card=289)
[code]...
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15664 pr=0 pw=0 time=778178696 us cost=30838477 size=741611997206725 card=3842549208325)
375411 375411 375411 TABLE ACCESS FULL GROUP_GROUPS_FLAT (cr=3782 pr=0 pw=0 time=51533 us cost=1029 size=25152738 card=375414)
[code]...
The query:
select g.ucid, a.ucid
from account a, groups g, group_members gm, group_groups_flat ggf
where a.ucid = gm.ucid_member
and gm.ucid_group = ggf.ucid_member
[code]...
And excerpts from the schema:
CREATE TABLE "PDB"."GROUPS"
(
"UCID" VARCHAR2(256 BYTE),
"UNIX_GID" NUMBER(*,0),
[...]
[code]...
View 4 Replies
Nov 18, 2012
I have a question regarding memory parameters in Oracle database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32G of RAM. The kernel parameter shmmax on the server is set to 16G. Is it reasonable to set sga_max_size to the same value, and db_cache_size to 80% of that size?
View 2 Replies
Dec 30, 2010
We have large tables, 60-70 GB with 120 million records. We have to rebuild indexes frequently, which takes significant time to complete and affects database performance too. How can we use index coalesce instead? What are its benefits, and does coalesce result in a performance gain?
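For reference, the two operations side by side (the index name is a placeholder). COALESCE merges adjacent, sparsely filled leaf blocks within the existing segment, so it avoids the time, space, and locking overhead of building a new segment, whereas REBUILD creates the index afresh:
ALTER INDEX big_table_ix1 COALESCE;
ALTER INDEX big_table_ix1 REBUILD ONLINE;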
View 10 Replies
Mar 11, 2013
What are ways of improving performance for an Oracle table which holds millions of records? Currently we have partitioning and indexing, but they don't seem to work.
View 2 Replies
Jul 17, 2012
I have Oracle Database 11gR2 installed on a Windows 2008 server, but a kind of hang sometimes arises during work hours. When I open the Control Panel I see the oracle process is around 15G, even though we configured SGA_MAX_TARGET=6g.
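A quick way (standard views, not from the post) to see how much of that 15G the instance itself accounts for; note that the process size includes PGA and other private memory on top of the SGA:
SELECT ROUND(SUM(value)/1024/1024/1024, 2) AS sga_gb FROM v$sga;
SELECT ROUND(value/1024/1024/1024, 2) AS pga_allocated_gb
  FROM v$pgastat
 WHERE name = 'total PGA allocated';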
View 2 Replies
Jul 18, 2010
I need SQL scripts which can identify fragmented tables in Oracle 10g.
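A rough starting point (only a sketch, and only an estimate since it relies on optimizer statistics): compare the space a segment has allocated with the space its rows actually need:
SELECT t.table_name,
       ROUND(s.bytes/1024/1024)                      AS allocated_mb,
       ROUND((t.num_rows * t.avg_row_len)/1024/1024) AS estimated_used_mb
  FROM user_tables t
  JOIN user_segments s ON s.segment_name = t.table_name
 WHERE s.segment_type = 'TABLE'
 ORDER BY s.bytes - (t.num_rows * t.avg_row_len) DESC NULLS LAST;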
View 23 Replies
Mar 9, 2012
I have a question regarding how Oracle organizes records in the case of partitioning.
We have a table of nearly 6 crore (60 million) records. The table has several columns, among which the DBA has defined range partitioning on COL1 and hash subpartitioning on COL2:
CREATE TABLE ORDER_BOOK
(CUST_ID NUMBER(10),
ORDER_DATE DATE,
...........,
...........,
...........)
PARTITION BY RANGE (CUST_ID)
SUBPARTITION BY HASH (ORDER_DATE)
[code]........
I want to know how the rows will get organised into tablespaces. The DBA didn't mention any tablespace name in the main partition clause and has specified tablespace names only in the subpartition template, so how will the records be organised?
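To see exactly where the subpartitions ended up, the data dictionary can be queried directly (a standard check, not from the post):
SELECT partition_name, subpartition_name, tablespace_name
  FROM user_tab_subpartitions
 WHERE table_name = 'ORDER_BOOK'
 ORDER BY partition_name, subpartition_position;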
View 5 Replies
Mar 30, 2006
We have Oracle 8i on Windows NT (production server running 24x7).
On doing an explain plan on long-running queries, I realized that a couple of columns used in the WHERE condition did not have any indexes.
More importantly, here we do not have a Test environment.
I would like to know if i can go ahead and create indexes for those 2 columns by issuing
CREATE INDEX index_name on table_name(Column_name);
on our PDB.
Would this have any impact on our PDB? Should I check anything else before building these indexes, or do I have to create them during PDB downtime?
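If a downtime window is hard to get, 8i Enterprise Edition can also build an index online, at the cost of some extra overhead while the build runs; a sketch with placeholder names:
CREATE INDEX orders_status_idx ON orders (status) ONLINE;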
View 4 Replies
Jul 21, 2011
I am trying to export a SQL tuning set. Here are my steps:
select * from dba_sqlset order by name;
ID  NAME         OWNER  DESCRIPTION               CREATED    LAST MOD   STATEMENT_COUNT
 8  LOAD_TEST_1  SYS    11g upgrade load test 1.  15-JUL-11  15-JUL-11  180
 7  LOAD_TEST_2  SYS    11g upgrade load test 2.  15-JUL-11  15-JUL-11  210
[code]...
ERROR at line 1:
ORA-19374: invalid staging table
ORA-06512: at "SYS.DBMS_SQLTUNE", line 4328
ORA-06512: at "SYS.DBMS_SQLTUNE", line 6829
ORA-06512: at line 2
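ORA-19374 is commonly raised when the staging table was not first created through the SQL Tuning Set API (and staging tables are normally created in a non-SYS schema). A sketch, with the schema and table names as placeholders:
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name  => 'STS_STAGE',
                                    schema_name => 'TUNE_USER');
  DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name          => 'LOAD_TEST_1',
                                  sqlset_owner         => 'SYS',
                                  staging_table_name   => 'STS_STAGE',
                                  staging_schema_owner => 'TUNE_USER');
END;
/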
View 2 Replies
Dec 14, 2010
I need to warn readers that I am not a DBA but am heavily involved in application development. Whatever I know about database tuning is whatever I've managed to pick up via self-learning, and I must admit that the sum total of my knowledge isn't a lot.
Anyway, our "DBAs" recently did an upgrade to our 10g database, going from version 10.2.0.2.0 to 10.2.0.4.0. Immediately after the upgrade, a particular query has started to under-perform. The query itself was not altered in any way during the upgrade.
We have two explain plans for the query, a before and an after plan. The two plans are similar but not identical. The plans are too massive to post here, so I hope the following synopsis of the differences will do.
The 10.2.0.2.0 plan:
shows a HASH GROUP BY
has a TempSpc column in the explain plan
shows a particular table (EMP_HISTORY) as having ~1700 rows
The 10.2.0.4.0 plan:
shows SORT GROUP BY instead of HASH GROUP BY
does not show a TempSpc column in the explain plan
shows the EMP_HISTORY table as having only 25 rows
Other than these points, no other discernible differences can be noted. I'm wondering what would cause HASH to change to SORT. I'm told that stats are up-to-date.
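Given the jump from roughly 1,700 rows to 25 for EMP_HISTORY between the two plans, one simple check (a sketch, not a diagnosis) is when, and on what sample, that table's statistics were last gathered:
SELECT table_name, num_rows, sample_size, last_analyzed
  FROM dba_tables
 WHERE table_name = 'EMP_HISTORY';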
View 5 Replies
May 7, 2011
I am currently in the favorable situation in which I have excess amounts of memory available on the database server - a single node setup. The server only serves the single instance and no other processing. Database size is around 2.3tb and memory is 50gb. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data, meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages. If the memory were allocated to the SGA, it's pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both of the caches active seem to be a waste of memory, effectively storing the data twice - once in AIX persistent pages and a second time in Oracle database buffer cache.
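If the double buffering itself is the concern, one option to evaluate (an assumption on my part, not advice from the post) is bypassing the AIX file system cache with direct/concurrent I/O and giving that memory to the SGA instead:
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;  -- direct + async I/O
-- after a restart, the buffer cache / SGA can then be grown, e.g.
ALTER SYSTEM SET sga_target = 30G SCOPE = SPFILE;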
View 4 Replies