Performance Tuning :: Oracle Row Chaining And Migration
Nov 13, 2010
Recently I have been working on analyzing Oracle row chaining and migration in the database. Is there any way to track the chaining and migration of rows as part of a database health checkup? Or do we have to analyze the table to detect row chaining and migration?
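For reference, the classic way to do this (a minimal sketch; MY_TABLE is a placeholder, and the CHAINED_ROWS table is created by Oracle's utlchain.sql script):

-- Create the CHAINED_ROWS table once
@?/rdbms/admin/utlchain.sql

-- List chained and migrated rows for a suspect table
ANALYZE TABLE my_table LIST CHAINED ROWS INTO chained_rows;

SELECT table_name, COUNT(*) AS chained_or_migrated
FROM chained_rows
GROUP BY table_name;

-- Or, after statistics are gathered with ANALYZE, check the dictionary
SELECT table_name, chain_cnt
FROM user_tables
WHERE chain_cnt > 0;

Note that CHAIN_CNT counts chained and migrated rows together; telling them apart usually means checking whether the rows are simply wider than the block.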
We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration is fast in the development lab and 5x slower at the production site. The development Oracle setup is on Windows and the production setup is on Solaris. I have attached the AWR report generated for a period where the migration ran for 3 hours and was stopped due to slow performance.
Here is my initial analysis.
1) The first timed event is DB CPU. Hence I feel the migration scripts can be modified to run in parallel so that they finish faster. However, the question then arises: why does it run fast in the development environment if this is the issue?
2) I tried increasing:
a. large_pool_size to 512M (from 0)
b. sga_max_size to 8G (from 4G)
c. sga_target to 8G (from 4G)
I have attached the AWR report, and below are the /etc/system contents for the Solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout
* set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The AWR report was renamed from .html to .txt to be able to upload the file.
I have one big database which I need to migrate to Oracle, because Oracle handles big databases well. I wrote transfer software and everything works great except for one thing: during this process I found that Oracle fills the redo log and undo as it normally would. My question is how to migrate the database to Oracle (or whether I can) without filling the undo, i.e. deactivate this process, and after that put the database back to working normally. I just need to transfer the data as-is, and from that point Oracle takes over...
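Undo and redo cannot be switched off completely, but direct-path loads into NOLOGGING tables generate almost none for the table data itself. A minimal sketch of the usual pattern (TARGET_TAB and SOURCE_TAB are placeholder names):

ALTER TABLE target_tab NOLOGGING;

-- Direct-path insert: data goes above the high-water mark, so undo for
-- the table blocks is minimal and redo is skipped while NOLOGGING is set
INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM source_tab;

COMMIT;

ALTER TABLE target_tab LOGGING;

Since NOLOGGING operations are not recoverable from the redo stream, the usual advice is to take a fresh backup as soon as the transfer finishes.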
I have installed a database on one server and would like to enable AWR on it. STATISTICS_LEVEL is set to TYPICAL. While running the below script to enable AWR, it gives an error:
SQL> exec dbms_scheduler.enable('GATHER_STATS_JOBS');
BEGIN dbms_scheduler.enable('GATHER_STATS_JOBS'); END;

*
ERROR at line 1:
ORA-27476: "SYS.GATHER_STATS_JOBS" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4343
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2802
ORA-06512: at line 1
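For what it's worth, the standard 10g automatic statistics job is named GATHER_STATS_JOB (singular), so the ORA-27476 most likely comes from the extra S in the name rather than from AWR itself:

SQL> exec dbms_scheduler.enable('SYS.GATHER_STATS_JOB');

Note that enabling GATHER_STATS_JOB controls automatic statistics gathering; AWR snapshots are managed separately through DBMS_WORKLOAD_REPOSITORY.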
I am running Oracle 10.2.0.3 on Solaris 5.9. The front-end application is PeopleSoft v8.8. From my AWR report I have found the below SQL which needs to be tuned:
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is made up of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
We are re-designing our app and we have a critical question: what's the best way (in terms of performance) of handling TIFF images (about 20K each) with Oracle?
Currently we have a Windows shared file server and we create the TIFF images there under a huge directory structure (like /images/ddddmmyy/aa/bb/001, then /images/ddddmmyy/aa/bb/002, etc.). Our database is usually Oracle 10, 11 or 12 running on Linux. We create about 200,000 images per day, keep them for 60 days and then remove that structure.
Our web app (developed with .NET) reads those images just to display them in a web session (IE). As you can see, what we are doing now works fine, but the network is sometimes an issue, and it's also hard to keep synchronization with our DR server, backups, etc.
Are we taking the correct approach? Would it be better to store the images as CLOBs or BLOBs for performance? As I mentioned, performance is the KEY FACTOR and the most important item to consider in this design.
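If the images were to move into the database, the 11g-era answer is usually a SecureFiles BLOB; a minimal sketch (table and tablespace names are placeholders):

CREATE TABLE images (
  image_id   NUMBER PRIMARY KEY,
  created_on DATE DEFAULT SYSDATE,
  tiff       BLOB
)
LOB (tiff) STORE AS SECUREFILE (
  TABLESPACE images_ts
  CACHE        -- 20K LOBs are small enough to benefit from the buffer cache
  NOCOMPRESS   -- TIFFs are typically already compressed
);

Range-partitioning such a table by created_on would also turn the 60-day purge into a cheap DROP PARTITION instead of a mass delete, and the images would ride along with normal backups and DR replication.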
Name       Type
---------  --------------
ID         NUMBER(8)
USERID     NUMBER(8)
SORT_TEXT  VARCHAR2(100)
TEXT       VARCHAR2(1000)
DATE       DATE
VALID      VARCHAR2(1)
CODNEW     NUMBER(10)
The table has a normal index on the userid column.
There is a query that looks for the different CODNEW values for a USERID, where CODNEW always has to be greater than 2248833:
select codnew from news_comment where userid=2914655 and valid='N' and codnew>2248833
I have created a new index for this kind of query:
create index coment_new_IDX on news_comment (CASE WHEN codnew >2248833 and valid='N' THEN userid ELSE NULL END )
but Oracle doesn't use it. I have used a hint to force it, but that doesn't work either.
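One thing worth checking (a sketch, not a guaranteed fix): Oracle only considers a function-based index when the query predicate textually matches the indexed expression, so with the index above the query would have to be phrased like this:

select codnew
from news_comment
where CASE WHEN codnew > 2248833 and valid = 'N' THEN userid ELSE NULL END = 2914655;

-- A plain composite index matches the original predicates without rewriting:
create index coment_new_idx2 on news_comment (userid, valid, codnew);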
I am building a database to store call quality statistics for VOIP networks. It is a very insert-heavy application, and data reliability is of relatively minimal importance (in the sense that a few corrupt call records here and there don't matter the way corruption does in, for example, a bank's database). Long-term storage is also unimportant; most customers only wish to keep 3 months of data readily available in the database, and most do not even archive the older data.
To that end I am searching for every possible way to improve my insert performance, and the internet has turned me on to the idea of NOLOGGING. These are the steps I have taken to reduce the IO consumed by the redo and undo logs.
1. I am inserting with the APPEND_VALUES hint.
2. I have disabled force logging at the database level
3. I have disabled force logging at the tablespace level
4. I have disabled logging on the relevant table and each of its indices
As best I can tell this is all I can do to minimize redo/undo, but based on my observations of the disk portion of the Windows Server 2008 Performance Monitor, this has made little to no change in the amount of IO to my REDO and UNDO files. IO to the .dbf containing my table makes up less than 20% of the total disk IO for oracle.exe; the rest is the REDO and UNDO logs.
The above article is a little over my head, but I am able to extract from it that I will never entirely eliminate redo/undo, which is fine; I would still think I could get it lower than it currently is.
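One detail that often explains this observation: NOLOGGING plus direct path suppresses redo only for the table data; undo and redo for index maintenance are still generated in full, and so is undo for the data dictionary. A sketch of keeping the indexes out of the hot path during the load (CALL_STATS and CALL_STATS_IX are hypothetical names; the non-unique index is made unusable and rebuilt afterwards):

ALTER INDEX call_stats_ix UNUSABLE;
-- requires skip_unusable_indexes = TRUE (the default) and a non-unique index

DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  l_ids t_ids := t_ids(101, 102, 103);  -- stand-in for the real call records
BEGIN
  FORALL i IN 1 .. l_ids.COUNT
    INSERT /*+ APPEND_VALUES */ INTO call_stats (call_id) VALUES (l_ids(i));
  COMMIT;  -- a direct-path loaded segment must be committed before it is re-read
END;
/

ALTER INDEX call_stats_ix REBUILD NOLOGGING;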
I have created a table with 8 million records and 2 different indexes using 2 different columns (NUM1 and NUM2) on that table. The first indexed column (NUM1) has many distinct values (1, 2, 3, ... etc.).
The second indexed column (NUM2) has only 2 distinct values: 7,999,999 records share the same value ('A') and the remaining single record has a different value ('B').
Query1: select * from tbl where num1=val
Query2: select * from tbl where num2='B'
I have compared the explain plans of both queries, but Query2 doesn't use the predefined index. Why doesn't Oracle use my index on column NUM2?
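A likely explanation, assuming statistics exist but no histogram on NUM2: with only 2 distinct values the optimizer assumes a 50/50 split, so the index looks unattractive even for the single 'B' row. A sketch of gathering a histogram so the skew becomes visible (TBL as in the question):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'TBL',
    method_opt => 'FOR COLUMNS NUM2 SIZE 254');  -- frequency histogram captures the skew
END;
/

With the histogram in place, the optimizer can see that num2='B' returns one row and should prefer the index for Query2.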
Oracle UNION ALL performance issue: when I run the parts of the below SQL query separately (SQL part 1 and SQL part 2), each takes only a few seconds, but if I run them together, it takes much longer, with or without the GROUP BY.
SELECT AVG(date_completed - login_date),
       TO_CHAR(TO_DATE(login_date), 'YYYY') AS wYear
FROM ( SELECT test.date_completed "date_completed",
              test.login_date "login_date"
       FROM sample test
       WHERE (some conditions) ) ---SQL part 1
UNION ALL
I have a query that is well optimized with respect to its indexes; it returns immediately when the result is only a few records, in both SQL Server and Oracle. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of type VARCHAR(250) and some VARCHAR(2000); I understand the problem may lie here, but for similar results (20,000 records) SQL Server runs in 2 seconds while Oracle takes around 30 seconds to return the full data. The problem really is in fetching all these wide fields: if the same query returns only a single numeric field, it finishes in 2 seconds. I have tested through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or network throughput. I would like to think there is some setting that accounts for this much difference between Oracle and SQL Server. The databases are Oracle 10 and SQL Server 2008.
A coworker of mine asked if there is any documentation from Oracle that lists all of the parts of the AWR report and what each means. I was taken aback, because I don't think there is. There are third-party books that discuss AWR reports and their predecessor, Statspack reports.
Oracle has some notes on its support site about reading an AWR or Statspack report. All I found in the official documentation was some basic information about how to run an AWR report and an overview of what it is. It would be nice to have documentation that lists out each section and explains its units and purpose.
How can I avoid duplication of the **where** clause in my query?
In my query below, the **JOIN** condition is the same for both queries, and the **WHERE** condition is also the same except for the clause "and code.code_name='transaction_1'". In the **IF** condition, only credit and debit are swapped between the two queries. Because of this **Credit and Debit** swap and that extra where clause, I am duplicating the whole query (see the sketch after the fragment below). How can I avoid this duplication? I am using Oracle 11g.
SELECT day AS business_date, SUM(amount) AS AMOUNT, type_amnt AS amount_type,
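The full query is truncated here, but the usual cure for this shape of duplication is conditional aggregation: keep one copy of the joins and move both the differing filter and the credit/debit swap into CASE expressions. A purely hypothetical sketch of the technique (none of these table or column names are from the original query):

SELECT day AS business_date,
       SUM(CASE WHEN code.code_name = 'transaction_1' THEN t.debit
                ELSE t.credit END) AS amount,
       type_amnt AS amount_type
FROM transactions t
JOIN code ON code.code_id = t.code_id
GROUP BY day, type_amnt;

Whether a single CASE or two (one per amount column) is needed depends on the IF logic being replaced.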
I am aware that from 11g, MEMORY_TARGET is sufficient for memory management between the SGA and PGA.
What happens if MEMORY_TARGET is set to a non-zero value and SGA_TARGET is set to zero in an 11g database? Does it enable automatic memory management within the SGA?
We regularly get hit by ORA-4031 errors. Also, the memory target advisory (v$memory_target_advice) does not show any advisory information.
For example:
memory_max_target = 500m
memory_target = 500m
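For reference, with MEMORY_TARGET non-zero and SGA_TARGET zero, 11g automatic memory management tunes the SGA and PGA together out of the single MEMORY_TARGET pool, so the SGA components are resized automatically. The standard views show what AMM is actually doing, which may also shed light on the ORA-4031s:

SELECT component,
       ROUND(current_size / 1024 / 1024) AS current_mb,
       ROUND(min_size / 1024 / 1024) AS min_mb,
       ROUND(max_size / 1024 / 1024) AS max_mb
FROM v$memory_dynamic_components
WHERE current_size > 0
ORDER BY current_size DESC;

With only 500m to divide between SGA and PGA, a shared pool squeezed by AMM is a plausible source of ORA-4031; v$memory_target_advice can also remain empty until enough workload statistics have been collected.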
We have a query which makes Oracle behave very strangely. It is a straightforward join between four tables of about 30,000 rows each, with some simple comparisons and some NOT LIKEs.
When we run this query, it takes either about 1 second or more than 1,000 seconds to run and return the approximately 5,000 rows of the result. If we run the same query over and over again, it fluctuates back and forth between two different execution plans, apparently at random: 3 times out of 4 it picks the 1,000-second version and 1 time out of 4 the 1-second version.
There are no other connections to the database, the schema is not modified, the data is identical, the query is identical, and the response is identical, but the execution time alternates between 1 second and 1,000 seconds. On the same database instance we have another schema which is identical but with slightly less data, used for development. The 1,000-second run times did not happen in that schema, only in the test system's database.
Therefore we would REALLY like to understand what happens and why, so that we can avoid triggering this in the future. We could try locking in the 1-second execution plan, but then we're afraid of making the same mistake again in the future.
Here are the two execution plans that Oracle switches between, more or less at random:
select g.ucid, a.ucid
from account a, groups g, group_members gm, group_groups_flat ggf
where a.ucid = gm.ucid_member
and gm.ucid_group = ggf.ucid_member
[code]...
And excerpts from the schema:

CREATE TABLE "PDB"."GROUPS" (
  "UCID"     VARCHAR2(256 BYTE),
  "UNIX_GID" NUMBER(*,0),
  [...]
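If locking in the fast plan becomes the pragmatic fix, SQL plan baselines (11g onward) are the supported way to do it; the sql_id and plan_hash_value below are placeholders to be read from V$SQL while the 1-second plan is in the cursor cache:

DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                sql_id          => '&fast_sql_id',    -- placeholder
                plan_hash_value => &fast_plan_hash);  -- placeholder: the 1-second plan
  DBMS_OUTPUT.PUT_LINE('plans loaded: ' || l_loaded);
END;
/

That pins the known-good plan without explaining the flip-flopping itself; bind peeking and stale or skewed statistics are the usual suspects worth ruling out first.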
I have a question regarding memory parameters in an Oracle 9.2.0.8 database, especially sga_max_size and db_cache_size. The database server has 32G of RAM, and the shmmax kernel parameter on the server is set to 16G. Is it reasonable to set sga_max_size to that same value, and db_cache_size to 80% of it?
We have large tables of 60-70 GB holding 120 million records. We have to rebuild their indexes frequently, which takes significant time to complete and affects database performance too. How can we use index coalesce instead? What are its benefits, and does coalescing result in a performance gain?
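For context on the trade-off: COALESCE merges adjacent, sparsely filled leaf blocks in place, so it needs no extra space and is always an online operation; unlike a rebuild, it will not lower the index height or return space to the tablespace. A minimal sketch (index name is a placeholder):

-- Quick, online, in-place merge of underfilled leaf blocks
ALTER INDEX big_table_ix COALESCE;

-- The rebuild alternative creates a whole new segment (roughly 2x space):
-- ALTER INDEX big_table_ix REBUILD ONLINE;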
What are ways of improving the performance of a table that holds millions of records in Oracle? Currently we have partitioning and indexing, but they don't seem to help.
I have Oracle Database 11gR2 installed on a Windows 2008 server, but a kind of hang sometimes arises during working hours. When I open Control Panel I see the Oracle process is around 15G, even though we configured SGA_MAX_TARGET=6g.
I have a question regarding how Oracle fetches records in the case of partitioning...
We have a table of nearly 6 crores (60 million) records. The table has several columns, among which the DBA has defined a range partition on COL1 and a hash subpartition on COL2...
CREATE TABLE ORDER_BOOK (
  CUST_ID    NUMBER(10),
  ORDER_DATE DATE,
  ...........,
  ...........,
  ...........)
PARTITION BY RANGE (CUST_ID)
SUBPARTITION BY HASH (ORDER_DATE)
[code]........
I want to know how the rows will be organised in the tablespaces. The DBA didn't mention any tablespace name in the main partition clauses and mentioned tablespace names only in the subpartition template, so how will the records be organised?
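For what it's worth, in a composite-partitioned table the partitions themselves own no segments at all; every row lives in a subpartition segment, so the tablespaces named in the subpartition template are exactly where the data lands. A sketch of the pattern (tablespace names are placeholders):

CREATE TABLE order_book (
  cust_id    NUMBER(10),
  order_date DATE
)
PARTITION BY RANGE (cust_id)
SUBPARTITION BY HASH (order_date)
SUBPARTITION TEMPLATE (
  SUBPARTITION sp1 TABLESPACE ts_a,  -- each subpartition segment is created
  SUBPARTITION sp2 TABLESPACE ts_b   -- in the tablespace its template entry names
)
( PARTITION p1 VALUES LESS THAN (1000000),
  PARTITION p2 VALUES LESS THAN (MAXVALUE)
);

DBA_TAB_SUBPARTITIONS shows the resulting segment-to-tablespace mapping.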
I need to warn readers that I am not a DBA but am heavily involved in application development. Whatever I know about database tuning is whatever I've managed to pick up via self-learning, and I must admit that the sum total of my knowledge isn't a lot.
Anyway, our "DBAs" recently did an upgrade to our 10g database, going from version 10.2.0.2.0 to 10.2.0.4.0. Immediately after the upgrade, a particular query has started to under-perform. The query itself was not altered in any way during the upgrade.
We have two explain plans for the query, a before and an after plan. The two plans are similar but not identical. The plans are too massive to post here, so I hope the following synopsis of the differences will do.
The 10.2.0.2.0 plan:
- shows a HASH GROUP BY
- has a TempSpc column in the explain plan
- shows a particular table (EMP_HISTORY) as having ~1700 rows

The 10.2.0.4.0 plan:
- shows SORT GROUP BY instead of HASH GROUP BY
- does not show a TempSpc column in the explain plan
- shows the EMP_HISTORY table as having only 25 rows
Other than these points, no other discernible differences can be noted. I'm wondering what would cause HASH to change to SORT. I'm told that stats are up-to-date.
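The drop from ~1700 to 25 rows for EMP_HISTORY is the kind of cardinality change that can by itself flip a HASH GROUP BY to a SORT GROUP BY, so the statistics are worth double-checking despite being reported as up to date. A quick look at what the optimizer actually sees (standard dictionary views; the owner is a placeholder):

SELECT table_name, num_rows, last_analyzed
FROM dba_tables
WHERE table_name = 'EMP_HISTORY';

-- If num_rows is badly off, regathering is the usual first step:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'EMP_HISTORY')

Also possibly relevant: hash aggregation is governed by the hidden parameter _gby_hash_aggregation_enabled, which some sites toggled around the 10.2.0.4 patch set, though underscore parameters should only be touched with Oracle Support's blessing.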
I am currently in the favorable situation in which I have excess amounts of memory available on the database server - a single node setup. The server only serves the single instance and no other processing. Database size is around 2.3tb and memory is 50gb. For the majority of processing, AIX is allocating a significant amount (anywhere from 30-40%) of the memory to the AIX file system cache (persistent pages).
I've been trying to find documentation about this, but have not had any luck yet. My guess is that it would be better to allow Oracle to cache this data - meaning increase the SGA target and max size to allow for a larger buffer cache. However, the nice thing about the AIX cache is that if process memory is needed, the file system cache gives up pages. If the memory were allocated to the SGA, it's pretty much locked in.
I have read several articles stating that a larger buffer cache is not always better, as a larger cache takes more management. But having both of the caches active seem to be a waste of memory, effectively storing the data twice - once in AIX persistent pages and a second time in Oracle database buffer cache.
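One commonly discussed option here is to stop the double caching by letting Oracle bypass the file system cache with direct I/O, then giving the reclaimed memory to the buffer cache. The parameter is standard, though whether it wins depends on the workload (an instance restart is required):

-- Direct + asynchronous I/O where the file system supports it,
-- so blocks are cached once in the SGA instead of twice
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;

On AIX specifically, mount options (e.g. CIO for JFS2) achieve a similar effect and are often recommended alongside or instead of this parameter; that part is file-system dependent and worth verifying against IBM/Oracle guidance.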
I am querying v$sga and getting a variable size of 211337216 bytes. When querying v$sgastat I get:
java pool   : 16777216
large pool  : 41943040
shared pool : 398560392
But as per my knowledge the following condition should be satisfied, yet it is not:
variable sga = java pool + large pool + shared pool

select pool, sum(bytes)
from v$sgastat
where pool in ('shared pool','java pool','large pool')
group by pool;
Here the variable size from v$sga is 211337216 bytes,
while java pool + large pool + shared pool comes to 211302536 bytes.
I have an Oracle database (9.2.0.7) installed on an HP-UX server. When accessing this database from another HP-UX or Linux server, the connection is fine. But when connecting from a Windows-based client, the connection is very slow (almost 1 minute to return the result of a 'select count(*)'-style query, which is immediate from the Linux client).
Here are some facts I can add:
- Clients and servers are on the same network segment (it is not a network matter)
- No matter which client version I use, there is no difference
- I tried to see what happens on the Oracle server when running my sample query, using the tusc command: the server performs exactly the same actions whether the query comes from a Linux client or a Windows client
- The only relevant difference seems to be the client OS
I was confused by partitioned tables: when I select from one partition of a table, how does Oracle scan the blocks? Does it scan all blocks of the table, or only the blocks of that single partition?
SQL> Explain Plan For
  2  Select Count(1) From Tb_Hxl_List Partition(p_L3);
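The plan output answers this: with an explicit PARTITION clause (or a predicate on the partition key), only that partition's blocks are scanned, which shows up in the plan's Pstart/Pstop columns. To display it:

SQL> Select * From Table(DBMS_XPLAN.DISPLAY);

Look for an operation like PARTITION RANGE SINGLE (or PARTITION LIST SINGLE, depending on the partitioning scheme) with Pstart and Pstop pointing at the same partition: that confirms Oracle reads only the blocks of that single partition, not the whole table.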