Performance Tuning :: Oracle UNION ALL Not Working?
Sep 9, 2010
Oracle UNION ALL performance issue: when I run the two parts of the SQL query below separately (SQL part 1 and SQL part 2), each takes only a few seconds, but when I run them together, with or without the GROUP BY, the combined query takes much longer.
SELECT AVG(date_completed - login_date),
       TO_CHAR(login_date, 'YYYY') AS wyear
FROM  (
       SELECT test.date_completed AS date_completed,
              test.login_date     AS login_date
       FROM   sample test
       WHERE  (some conditions)                 -- SQL part 1
       UNION ALL
[code]...
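In case it is useful, a common rewrite for this pattern is to aggregate each branch before combining them, so the outer GROUP BY only works on a handful of pre-aggregated rows per year rather than the full union. This is only a sketch, since SQL part 2 is not shown above and the (some conditions) placeholder stands for the original predicates:
SELECT wyear,
       SUM(sum_days) / SUM(cnt) AS avg_days            -- same result as AVG(date_completed - login_date)
FROM  (
       SELECT TO_CHAR(login_date, 'YYYY')        AS wyear,
              SUM(date_completed - login_date)   AS sum_days,
              COUNT(date_completed - login_date) AS cnt     -- COUNT skips NULLs, just as AVG does
       FROM   sample test
       WHERE  (some conditions)                               -- SQL part 1
       GROUP  BY TO_CHAR(login_date, 'YYYY')
       UNION ALL
       SELECT TO_CHAR(login_date, 'YYYY'),
              SUM(date_completed - login_date),
              COUNT(date_completed - login_date)
       FROM   (the part 2 source)                             -- SQL part 2, elided above
       GROUP  BY TO_CHAR(login_date, 'YYYY')
      )
GROUP  BY wyear;
Comparing the plan of this version with the original should also show whether the time goes into the final aggregation or into one of the branches behaving differently once they are combined.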
View 33 Replies
Mar 31, 2011
I have a SQL query where I am making a UNION of two SELECT statements. The tables joined in each SELECT statement have indexes defined on them.
The UNION of the two SELECT statements is then enclosed in an inline view, from which I fetch my final field values.
The SELECT statements inside the inline view return a huge number of rows (around 50 million).
The whole query fails with a timeout.
Is there a way to pass Oracle hints so that the optimizer uses the indexes?
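For reference, an INDEX hint has to name the table alias (and ideally the index) inside the query block where the table appears, so for a UNION inside an inline view the hints go into each branch. A sketch with hypothetical table and index names:
SELECT *
FROM  (
       SELECT /*+ INDEX(a idx_a_cust) */ a.cust_id, a.amount   -- idx_a_cust is a hypothetical index
       FROM   tab_a a
       WHERE  a.cust_id = :b1
       UNION
       SELECT /*+ INDEX(b idx_b_cust) */ b.cust_id, b.amount
       FROM   tab_b b
       WHERE  b.cust_id = :b1
      ) v;
That said, if each branch really returns on the order of 50 million rows, index access is unlikely to help; and if duplicates do not matter, replacing UNION with UNION ALL removes the sort/de-duplication step, which is often the most expensive part of a UNION over that many rows.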
View 1 Replies
View Related
Oct 25, 2010
I am querying a table with 6 million records. It takes around 35 seconds from a command-line terminal, connecting through PuTTY from a Windows client.
The server is Linux CentOS with Oracle 11g installed. I came across the Oracle result_cache feature, which I suspect could speed this up.
My query is:
"select /*+ result_cache */ * from relationshipobject;"
SQL> set autotrace traceonly stat
SQL> set timi on
[Code]....
How can I reduce my query execution time and consistent gets?
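One thing worth checking: a result larger than result_cache_max_result (a percentage of result_cache_max_size) is not cached at all, and a full SELECT * of 6 million rows will normally exceed that limit, so the hint may simply be ignored. These checks (assuming access to the V$ views) show whether the result actually made it into the cache:
SELECT name, value
FROM   v$result_cache_statistics;

SELECT type, status, name, row_count
FROM   v$result_cache_objects
WHERE  name LIKE '%relationshipobject%';

SET SERVEROUTPUT ON
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT
If the query has to return all 6 million rows to the client, most of the 35 seconds is usually row fetching and network transfer rather than consistent gets, so raising the client fetch/array size is often the bigger win.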
View 10 Replies
View Related
Oct 11, 2009
I am looking for another way to write this without using UNION ALL. My sample data looks like this:
table1
--------
col1 col2 Segment1 Segment2 Segment3
-------------------------------------------------------------
A11 B11 John Jhonny Johnathan
A12 B12 Melisa Amy Abagial
I need to create view of above record as below:
table2
------
col1 col2 col3 col4
--------------------------------------------------------------
A11 B11 Segment1 John
A11 B11 Segment2 Jhonny
A11 B11 Segment3 Johnathan
A12 B12 Segment1 Melisa
A12 B12 Segment2 Amy
A12 B12 Segment3 Abagial
My current code uses UNION ALL to get the output shown in table2:
select col1, col2, 'Segment1' col3, Segment1 col4 from table1
union all
select col1, col2, 'Segment2' col3, Segment2 from table1
union all
select col1, col2, 'Segment3' col3, Segment3 from table1
But the problem is that the performance is really bad. Is there any way I can do this without UNION ALL? The time it takes to execute is not acceptable.
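If you are on Oracle 11g or later, the UNPIVOT clause produces exactly these rows in a single scan of table1 instead of three, which usually cuts the cost to roughly a third; a sketch:
SELECT col1, col2, col3, col4
FROM   table1
UNPIVOT (col4 FOR col3 IN (Segment1 AS 'Segment1',
                           Segment2 AS 'Segment2',
                           Segment3 AS 'Segment3'));
Note that UNPIVOT drops rows where the segment column is NULL unless you write UNPIVOT INCLUDE NULLS. On releases before 11g, a similar single-scan effect can be had by cross-joining table1 to a three-row dimension table and picking the segment value with DECODE or CASE.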
View 4 Replies
View Related
Jan 27, 2012
We have a large customer table, so our first thought was to partition it. We also see two UNION ALLs in the plan - can we introduce parallelism? The query and plan are below; we have attached them as a text file in case they are difficult to read here.
SELECT V_IDENTIFIER_LOOKUP.UID_V_IDENTIFIER_LOOKUP AS "UID",
V_IDENTIFIER_LOOKUP.ABA, V_IDENTIFIER_LOOKUP.ADDRESS1,
V_IDENTIFIER_LOOKUP.ADDRESS2, V_IDENTIFIER_LOOKUP.ADDRESS3,
V_IDENTIFIER_LOOKUP.ADDRESS4, V_IDENTIFIER_LOOKUP.ALIAS,
V_IDENTIFIER_LOOKUP.CITY, V_IDENTIFIER_LOOKUP.COUNTRYCODE,
V_IDENTIFIER_LOOKUP.CUST_CODE, V_IDENTIFIER_LOOKUP.CUST_NAME,
V_IDENTIFIER_LOOKUP.HEAD_OFFICE_IN,
V_IDENTIFIER_LOOKUP.IDENTIFIER,
V_IDENTIFIER_LOOKUP.IDENTIFIER_TYPE,
[code]...
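On the parallelism question: if the big steps under the UNION ALLs are full scans, a PARALLEL hint is the usual starting point, provided the server has spare CPU and I/O for the slave processes. A sketch (the statement-level form needs 11gR2; on earlier releases the hint names each table or the view, e.g. /*+ FULL(t) PARALLEL(t 4) */):
SELECT /*+ PARALLEL(4) */
       V_IDENTIFIER_LOOKUP.UID_V_IDENTIFIER_LOOKUP AS "UID",
       V_IDENTIFIER_LOOKUP.IDENTIFIER,
       V_IDENTIFIER_LOOKUP.IDENTIFIER_TYPE
FROM   V_IDENTIFIER_LOOKUP;
Partitioning only helps this query if the predicates allow partition pruning, so it is worth checking what the WHERE clause filters on before choosing a partition key.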
View 1 Replies
View Related
Aug 5, 2010
This statement is taking 1 hour; can we reduce the run time?
CREATE TABLE DGT_ITEMEFFORTDATA (ENTERPRISEID, OWNERTYPE, OWNERID, SUPEROWNERTYPE, SUPEROWNERID,
ITEMTYPE, ITEMID, STAGEID, USERID, DATEIDENTIFIED,
DATECLOSED, ACTIVITYCODEID, PHASEID, RELEASEID, MONTHID,
QUARTERID, INITIALEFFORT, BASELINEDEFFORT,
ACTUALEFFORT, ITEMSTATUS, ALLOCATIONSTATUS, STAGESTATUS,
OCCURANCETYPE, DSLPROJECTTYPE, METRICCALCRUNID,
[code].....
This is the explain plan of the above query
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%C
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 11M| 4137M| 46149 (
| 1 | UNION-ALL | | | |
| 2 | TABLE ACCESS FULL| DGT_ITEMEFFORTDATA_DAILY | 3455K| 428M| 14575
[code].....
These are the index details:
1  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  ITEMID     4
2  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  ITEMTYPE   3
3  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  OWNERID    2
4  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  OWNERTYPE  1
There is no index on DGT_ITEMEFFORTDATA_TEMP table
[code].....
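Since the plan is a plain UNION-ALL of full scans feeding a CREATE TABLE, the usual way to shorten it is a NOLOGGING, parallel CTAS. This is only a sketch - the real column list and source queries are elided above, the second branch is assumed to read DGT_ITEMEFFORTDATA_TEMP, and NOLOGGING is acceptable only if redo/standby requirements allow it:
CREATE TABLE dgt_itemeffortdata
  PARALLEL 4
  NOLOGGING
AS
SELECT /*+ PARALLEL(d 4) */ *
FROM   dgt_itemeffortdata_daily d
UNION ALL
SELECT /*+ PARALLEL(t 4) */ *
FROM   dgt_itemeffortdata_temp t;
With 11 million rows read by full scans, indexes on the source tables are unlikely to change much here; the time goes into reading and writing the data, which is exactly what PARALLEL and NOLOGGING address.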
View 27 Replies
View Related
Jul 14, 2010
I have a query with a FULL hint that is behaving strangely. The query fetches around 700,000 rows. Sometimes it fetches the data with the hint, and sometimes it does not fetch any data with the hint, and then I have to remove the hint to fetch the data. Below is the query:
select /*+ FULL(COMP_TM) FULL(TRANS_TM) FULL(INVC_TM) */
CUST_BE_ID ,
DISTR_BE_ID ,
FG_BE_ID ,
KIT_BE_ID ,
BG_ID_NO_BE_ID ,
[code]....
Gathering statistics on the FACT_DLY_ALGND_SLS table takes around 5 hours to complete. It is a range-partitioned table with subpartitions.
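On the statistics-gathering time: if this is 11g, incremental statistics usually cut long gathers on large partitioned tables dramatically, because only changed partitions are re-scanned and the global statistics are derived from stored partition synopses. A sketch:
BEGIN
  -- tell Oracle to maintain partition synopses for this table (11g feature)
  DBMS_STATS.SET_TABLE_PREFS(USER, 'FACT_DLY_ALGND_SLS', 'INCREMENTAL', 'TRUE');

  -- subsequent gathers only touch partitions whose data has changed
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'FACT_DLY_ALGND_SLS',
    granularity => 'AUTO',
    degree      => 4);
END;
/
The first gather after switching INCREMENTAL on still scans everything to build the synopses; the savings show up from the second gather onwards.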
View 14 Replies
View Related
Jan 2, 2013
I'm trying to demonstrate how the OCI client result cache works. I've set some parameters:
orcl> sho parameter result_cache
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
client_result_cache_lag big integer 3000
client_result_cache_size big integer 1000000
result_cache_max_result integer 5
result_cache_max_size big integer 1984K
result_cache_mode string FORCE
result_cache_remote_expiration integer 0
If I understand the docs correctly, that should be all that is needed. I've compiled and run the cdemoqc and cdemoqc2 OCI demos, but I never get anything in the V$CLIENT_RESULT_CACHE_STATS or CLIENT_RESULT_CACHE_STATS$ views. This is probably because the sessions exit after running the queries.
So I've also tried repeating arbitrary queries through cdemo2, which gives a persistent session, but there is still nothing in those views, and furthermore v$result_cache_objects.scan_count for my queries keeps increasing. So I don't think the client-side cache is working.
View 4 Replies
View Related
Aug 10, 2010
Which one is better:
1. unloading 5 tables of the same structure using an ETL tool and then merging the data, or
2. using the UNION operator to unload the 5 tables and then doing the transformations in the ETL tool?
View 4 Replies
View Related
May 24, 2010
I have a view which contains a UNION (the UNION is required because of the nature of the data fetched). This view is later joined with a global temporary table which holds, say, the employee IDs the user selects.
So at runtime there is a join between the global temporary table and the view, but the performance is really bad. I have tried various hints, like MATERIALIZE, /*+ CARDINALITY(gtmp 1) */, etc.
When I query the view alone, the performance is good. When I remove the UNION, the performance is also good. Somehow, with the UNION, there is a full table scan on one of the joined tables.
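A full scan inside a UNION view joined to a tiny driving table is often a join-predicate-pushdown problem: the join column from the temporary table is not pushed into the UNION branches, so each branch is evaluated in full. The PUSH_PRED hint asks the optimizer to push the join predicate into the view; a sketch with hypothetical names for the temporary table and the view:
SELECT /*+ PUSH_PRED(v) */        -- request join-predicate pushdown into the UNION view
       v.*
FROM   emp_gtt g                  -- global temporary table holding the selected employee IDs
JOIN   emp_union_view v
  ON   v.emp_id = g.emp_id;
It can also be worth adding /*+ DYNAMIC_SAMPLING(g 2) */ (or gathering/setting statistics on the temporary table) so the optimizer knows at parse time that the temporary table holds only a handful of rows.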
View 7 Replies
View Related
Jul 12, 2013
I have installed a database on a server and would like to enable AWR on it. STATISTICS_LEVEL is set to TYPICAL. When I run the script below to enable AWR, it gives an error:
SQL> exec dbms_scheduler.enable('GATHER_STATS_JOBS');
BEGIN dbms_scheduler.enable('GATHER_STATS_JOBS'); END;
*
ERROR at line 1:
ORA-27476: "SYS.GATHER_STATS_JOBS" does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4343
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2802
ORA-06512: at line 1
How can I make AWR snapshots generate automatically?
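The job name is the likely problem: in 10g the automatic statistics job is called GATHER_STATS_JOB (no trailing S), and in 11g it was replaced by the 'auto optimizer stats collection' automated maintenance task, so there is no scheduler job of that name to enable. Also, AWR snapshots are not produced by that job at all; they are taken by the MMON background process and controlled through DBMS_WORKLOAD_REPOSITORY. A sketch for 11g:
BEGIN
  -- 11g: enable the automatic optimizer statistics task (the GATHER_STATS_JOB replacement)
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);

  -- AWR snapshot frequency/retention are set separately (values are in minutes):
  -- hourly snapshots kept for 30 days
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 60,
    retention => 43200);
END;
/
With STATISTICS_LEVEL at TYPICAL, snapshots are already taken automatically once the interval is non-zero; DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT can be used to take one on demand.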
View 3 Replies
View Related
Aug 28, 2012
I am running Oracle 10.2.0.3 on Solaris 5.9. The front-end application is PeopleSoft v8.8. From my AWR report I have found the SQL below, which needs to be tuned:
SELECT TO_CHAR (TO_DATE (TO_CHAR (B.ASOFDATE, 'YYYY-MM-DD'), 'yyyy-mm-dd'),
'dd/mm/yyyy'),
B.EMPLID,
B.PWCUK_LEGACY_ID,
B.NAME,
[code]...
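One small but safe improvement, assuming ASOFDATE is a DATE column: the nested conversion in the first select-list item turns a date into a string, back into a date, and into a string again, which just burns CPU on every row. It can be collapsed to a single conversion:
-- instead of
TO_CHAR (TO_DATE (TO_CHAR (B.ASOFDATE, 'YYYY-MM-DD'), 'yyyy-mm-dd'), 'dd/mm/yyyy')
-- the same value comes from
TO_CHAR (B.ASOFDATE, 'dd/mm/yyyy')
The bigger wins for a statement like this usually come from its access paths, which would need the full query text and its plan.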
View 6 Replies
View Related
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is the (logical) memory-based structures, whereas the database consists of the physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
View Related
Nov 13, 2010
Recently I have been analyzing Oracle row chaining and migration in the database. Is there any way to track the chaining and migration of rows as part of a database health check?
Or do we have to analyze each table to detect row chaining and migration?
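Two commonly used checks, in case they are useful: the instance-wide statistic that counts how often Oracle had to follow a chained or migrated row piece, and a per-table listing (which needs the CHAINED_ROWS table created by $ORACLE_HOME/rdbms/admin/utlchain.sql):
-- instance-wide indicator of chained/migrated row fetches
SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';

-- per-table detail (my_table is a placeholder)
ANALYZE TABLE my_table LIST CHAINED ROWS INTO chained_rows;

SELECT table_name, COUNT(*) AS chained_or_migrated_rows
FROM   chained_rows
GROUP  BY table_name;
ANALYZE TABLE ... COMPUTE STATISTICS also populates the CHAIN_CNT column of DBA_TABLES, which gives a quick per-table number without listing individual rowids.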
View 4 Replies
View Related
Oct 31, 2013
We are redesigning our application and have a critical question: what is the best way (in terms of performance) to handle TIFF images (about 20 KB each) with Oracle?
Currently we have a Windows shared file server and we create the TIFF images there under a huge directory structure (like /images/ddddmmyy/aa/bb/001, then /images/ddddmmyy/aa/bb/002, etc.). Our database is usually on Linux, version 10, 11 or 12. We create about 200,000 images per day, keep them for 60 days and then remove that structure.
Our web app (developed with .NET) reads those images just to display them in a web session (IE). As you can see, what we are doing now works fine, but the network is sometimes an issue, and it is also hard to keep things synchronized with our DR server, backups, etc.
Are we taking the correct approach? Would it be better to store the images in CLOBs or BLOBs for performance? As I mentioned, performance is the key factor and the most important consideration in this design.
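At about 20 KB per image and 200,000 images per day, that is roughly 4 GB of new image data a day and around 240 GB for the 60-day window, which is well within what the database can handle. If the images do move into the database, BLOBs (not CLOBs, since TIFF is binary) stored as SecureFiles are the usual choice on 11g, and interval partitioning by capture date turns the 60-day purge into a cheap DROP PARTITION. A sketch (table and column names are hypothetical, and NOLOGGING is only acceptable if losing images that have not yet been backed up is tolerable):
CREATE TABLE tiff_images (
  image_id     NUMBER        NOT NULL,
  capture_date DATE          DEFAULT SYSDATE NOT NULL,
  image        BLOB,
  CONSTRAINT tiff_images_pk PRIMARY KEY (image_id)
)
LOB (image) STORE AS SECUREFILE (
  CACHE READS            -- keep recently read images in the buffer cache for the web app
  NOLOGGING
)
PARTITION BY RANGE (capture_date)
  INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
  (PARTITION p_start VALUES LESS THAN (DATE '2013-11-01'));
SecureFiles require 11g and an ASSM tablespace. The trade-off versus the file server is larger backups and more database I/O, in exchange for transactional consistency, a single thing to replicate to DR, and no filesystem-to-database synchronization problem.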
View 6 Replies
View Related
Mar 24, 2013
how do you limit oracle redo?
View 2 Replies
View Related
Dec 2, 2010
I have a table "NEWS_COMMENT" like this:
Name Type
------- --------------
ID NUMBER(8)
USERID NUMBER(8)
SORT_TEXT VARCHAR2(100)
TEXT VARCHAR2(1000)
DATE DATE
VALID VARCHAR2(1)
CODNEW NUMBER(10)
The table has a normal index for the userid column.
There is a query that looks for the different CODNEW values for a USERID, but the CODNEW always has to be greater than 2248833:
select codnew from news_comment where userid=2914655 and valid='N' and codnew>2248833
I have created a new index for this kind of query:
create index coment_new_IDX on news_comment
(CASE WHEN codnew > 2248833 AND valid = 'N' THEN userid ELSE NULL END);
But Oracle doesn't use it. I have even used a hint to force it, but it still doesn't use the index.
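A function-based index can only be used when the query's predicate contains the exact indexed expression, and here the query filters on the plain userid, valid and codnew columns, so the CASE index never qualifies and no hint can change that. Two options, sketched:
-- option 1: write the predicate with the indexed expression verbatim
SELECT codnew
FROM   news_comment
WHERE  CASE WHEN codnew > 2248833 AND valid = 'N' THEN userid ELSE NULL END = 2914655;

-- option 2: a plain composite index that covers the whole query
CREATE INDEX coment_new_idx2 ON news_comment (userid, valid, codnew);
Option 2 is usually the simpler one: the query as originally written can then be answered from the index alone, with no rewrite and no reliance on the CASE trick.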
View 9 Replies
View Related
Jun 7, 2011
We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration runs fast in the development lab and about 5x slower at the production site.
The development Oracle setup is on Windows and the production setup is on Solaris. I have attached the AWR report generated for a period where the migration ran for 3 hours and was stopped due to the slow performance.
Here is my initial analysis.
1) The top timed event is DB CPU. Hence I feel the migration scripts could be modified to run in parallel so that they finish faster. However, that raises the question of why the same scripts run faster in the development environment if this is the issue.
2) I tried increasing the following parameters:
a.large_pool_size set to 512M
b.sga_max_size set to 8G
c.sga_target set to 8G
from 0, 4G and 4G respectively.
I have attached the AWR and below are the etc/system contents for solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout * set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The awr report is renamed to .txt from .html to be able to upload the file.
View 6 Replies
View Related
Oct 16, 2012
I am building a database to store call quality statistics for VOIP networks. It is a very insert-heavy application, and data reliability is of relatively minor importance (in the sense that a few corrupt call records here and there do not matter the way corruption does in, for example, a bank's database). Long-term storage is also unimportant; most customers only wish to keep 3 months of data readily available in the database, and most do not even archive the older data.
To that end I am searching for every possible way to improve my insert performance, and the internet has turned me on to the idea of NOLOGGING. These are the steps I have taken to reduce the IO consumed by the redo and undo logs:
1. I am inserting with the APPEND_VALUES hint.
2. I have disabled force logging at the database level
3. I have disabled force logging at the tablespace level
4. I have disabled logging on the relevant table and each of its indices
As best I can tell this is all I can do to minimize Redo/Undo, but based on my observations of the Disk portion of the WinServer2008 Performance Monitor, this has made little to no change in the amount of IO to my REDO and UNDO files. IO to the .dbf containing my table makes up less than 20% of the total disk IO for oracle.exe, the rest is the REDO and UNDO logs.
asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:897564200346274711
The above article is a little over my head, but I am able to take from it that I will never entirely eliminate redo/undo, which is fine - I would just think I could get it lower than it currently is.
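A few points that may explain what you are seeing, plus a sketch of how APPEND_VALUES is normally used. NOLOGGING only suppresses redo for the table data of direct-path operations; index maintenance always generates redo and undo regardless of the table's logging attribute, and conventional-path inserts generate full redo even on a NOLOGGING table. APPEND_VALUES is usually combined with FORALL bulk inserts, because each direct-path insert locks the table, loads above the high-water mark, and cannot be re-read by the same session until commit, so committing after every single row defeats the purpose. A sketch against a hypothetical call_quality_stats table:
DECLARE
  TYPE t_call_tab IS TABLE OF call_quality_stats%ROWTYPE;   -- call_quality_stats is hypothetical
  l_calls t_call_tab := t_call_tab();
BEGIN
  -- l_calls is assumed to be bulk-collected from the collector feed
  FORALL i IN 1 .. l_calls.COUNT
    INSERT /*+ APPEND_VALUES */ INTO call_quality_stats
    VALUES l_calls(i);
  COMMIT;   -- the direct-path loaded blocks cannot be re-read by this session until commit
END;
/
If the indexes turn out to be the main redo producers, it can be cheaper to load into a bare staging table and maintain the indexes in bulk, or to mark them unusable during the heaviest load windows and rebuild them afterwards.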
View 26 Replies
View Related
Apr 7, 2011
Here, let me explain:
I have created a table with 8 million records and 2 different indexes on 2 different columns (NUM1 and NUM2) of that table. The first indexed column (NUM1) has many distinct values (1, 2, 3, etc.).
The second indexed column (NUM2) has only 2 distinct values:
7,999,999 records have the same value ('A') and the one remaining record has a different value ('B').
Query1:
select * from tbl where num1=val
Query2:
select * from tbl where num2='B'
I have compared the explain plans of both queries, but Query2 doesn't use its index. Why doesn't Oracle use the index I defined on column NUM2?
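The usual cause: without a histogram on NUM2, the optimizer only knows the column has 2 distinct values and assumes an even split, so it estimates about 4 million rows for num2='B' and prices a full table scan as cheaper than the index. A histogram lets it see the skew (a sketch; TBL is the table name used in the queries above):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'TBL',
    method_opt => 'FOR COLUMNS NUM2 SIZE 254',   -- build a histogram on the skewed column
    cascade    => TRUE);
END;
/
With the histogram in place and the literal 'B' in the predicate (rather than a bind variable), the optimizer can recognise that only one row matches and will pick the index; for num2='A' a full scan remains the right plan.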
View 5 Replies
View Related
Feb 21, 2013
I have a query that is well optimized with respect to indexes; it returns immediately when the result is only a few records, in both SQL Server and Oracle. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of them VARCHAR of 250 and some of 2000, and I suspect this is where the problem lies, but I am not sure why: for similar results (20,000 records) SQL Server runs in 2 seconds, while Oracle starts responding quickly but takes around 30 seconds to return the full data set. The problem really is in bringing back all of those fields, because if I run the same query returning only a single numeric field it finishes in 2 seconds. I have tested through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or a problem with data flow across the network. I would like to think there is some setting that accounts for this much difference between Oracle and SQL Server. The databases are Oracle 10 and SQL Server 2008.
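One setting that very often accounts for exactly this symptom is the client fetch (array) size: with about 20 columns including VARCHAR2(2000) fields, a small fetch size means thousands of client/server round trips for 20,000 rows. A quick SQL*Plus test (the default ARRAYSIZE is 15; my_wide_table stands in for the real query):
SET ARRAYSIZE 500
SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM my_wide_table;
Compare "SQL*Net roundtrips to/from client" and the elapsed time before and after. The same knob exists in the other client stacks - the ODBC fetch buffer, ODP.NET's FetchSize, Toad's OCI array size - and raising it is usually how the Oracle timings are brought in line with the SQL Server ones.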
View 1 Replies
View Related
Jun 4, 2012
A coworker of mine asked if there was any documentation from Oracle that lists all of the parts of the AWR report and what each means. I was taken aback because I don't think there is. There are third-party books that talk about AWR reports and their predecessor, Statspack reports.
Oracle has some notes on their support site about reading an AWR or Statspack report. All I found in the official documentation was some basic information about how to run an AWR report and an overview of what it was. It would be nice to have some sort of documentation that lists out each section and explains the units and purpose.
View 3 Replies
View Related
Jun 4, 2013
How can I avoid duplicating the **where** clause in my query?
In the query below, the **JOIN** condition is the same for both queries and the **WHERE** condition is also the same except for the clause "and code.code_name='transaction_1'". In the **IF** condition, only credit and debit are swapped between the two queries. Because of this **Credit and Debit** swap and the "and code.code_name='transaction_1'" clause, I am duplicating the whole query. How can I avoid this duplication? I am using Oracle 11g.
SELECT day AS business_date,
SUM(amount) AS AMOUNT,
type_amnt AS amount_type,
[Code]....
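Since the full statement is elided above, this is only a sketch of the usual pattern: run the shared join once and move the differing condition into CASE expressions inside the aggregates, so the credit/debit split and the code_name test no longer force two copies of the query (the table, join and helper column names below are hypothetical):
SELECT day                                              AS business_date,
       SUM(CASE WHEN code.code_name = 'transaction_1'
                THEN amount ELSE 0 END)                 AS transaction_1_amount,
       SUM(CASE WHEN code.code_name <> 'transaction_1'
                THEN amount ELSE 0 END)                 AS other_amount,
       type_amnt                                        AS amount_type
FROM   transactions t
JOIN   code
  ON   code.code_id = t.code_id        -- the shared JOIN condition, written once
WHERE  (shared conditions)             -- the shared WHERE clause, written once
GROUP  BY day, type_amnt;
The exact CASE tests would come from your real credit/debit rule, but the shape is the same: one pass over the data, with the variations expressed per column instead of per query.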
View 2 Replies
View Related
Dec 29, 2012
I am aware that from 11g, MEMORY_TARGET is sufficient for memory management between the SGA and PGA.
What happens if MEMORY_TARGET is set to a non-zero value and SGA_TARGET is set to zero in an 11g database? Does it enable automatic memory management within the SGA?
We regularly get hit by ORA-4031 errors. Also, the memory_target advisory (V$MEMORY_TARGET_ADVICE) does not show any advisory information.
for eg:
memory_max_target = 500m
memory_target = 500m
and
sga_max_size=500m
sga_target=0
View 6 Replies
View Related
Apr 3, 2012
We have a query which makes Oracle behave very strangely. It is a straightforward join between four tables of about 30,000 rows each, with some simple comparisons and some NOT LIKEs.
When we run this query, it takes either about 1 second or more than 1,000 seconds to run and return the approximately 5,000 rows of the result. If we run the same query over and over again, it fluctuates back and forth between two different execution plans, apparently at random, selecting the 1,000-second version 3 times out of 4 and the 1-second version 1 time out of 4.
There are no other connections to the database, the schema is not modified, the data is identical, the query is identical, and the response is identical, but the execution time alternates between 1 second and 1,000 seconds. On the same database instance we have another schema which is identical but with slightly less data, which is used for development. The 1,000-second run times did not happen in that schema, but only in the test system's database.
Therefore we would REALLY like to understand what happens and why, so that we can avoid triggering this in the future. We could try locking in the 1-second execution plan, but then we're afraid of doing the same thing wrong again in the future.
Here are the two execution plans that Oracle switches between, more or less at random:
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15663 pr=10536 pw=0 time=855673 us cost=82273 size=2707430769293 card=14028138701)
79272 79272 79272 TABLE ACCESS FULL GROUPS (cr=1008 pr=0 pw=0 time=22154 us cost=277 size=10693 card=289)
[code]...
Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
5455 5455 5455 HASH JOIN (cr=15664 pr=0 pw=0 time=778178696 us cost=30838477 size=741611997206725 card=3842549208325)
375411 375411 375411 TABLE ACCESS FULL GROUP_GROUPS_FLAT (cr=3782 pr=0 pw=0 time=51533 us cost=1029 size=25152738 card=375414)
[code]...
The query:
select g.ucid, a.ucid
from account a, groups g, group_members gm, group_groups_flat ggf
where a.ucid = gm.ucid_member
and gm.ucid_group = ggf.ucid_member
[code]...
And excerpts from the schema:
CREATE TABLE "PDB"."GROUPS"
(
"UCID" VARCHAR2(256 BYTE),
"UNIX_GID" NUMBER(*,0),
[...]
[code]...
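The cardinality figures in the traces (card=14028138701 and card=3842549208325 on joins that actually return 5,455 rows) say the optimizer's estimates are off by many orders of magnitude, so checking and refreshing statistics on the four tables is the first thing to look at; with estimates that far off, small differences such as bind peeking or which child cursor gets reused are enough to flip the plan. If the immediate goal is simply to stop the flip-flopping, an 11g SQL plan baseline can capture the good plan from the cursor cache and fix it (a sketch, assuming you have the sql_id and plan_hash_value of a 1-second execution from V$SQL):
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '&good_sql_id',          -- sql_id of the fast execution
               plan_hash_value => &good_plan_hash_value,   -- its plan hash value
               fixed           => 'YES');
  DBMS_OUTPUT.PUT_LINE(l_plans || ' plan(s) loaded');
END;
/
A baseline constrains only this one statement, so it avoids the broader risk you describe of locking in the wrong behaviour elsewhere.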
View 4 Replies
View Related
Nov 18, 2012
I have a question regarding memory parameters in Oracle Database 9.2.0.8, especially sga_max_size and db_cache_size. The database server has 32G of RAM, and the OS kernel parameter shmmax on the server is set to 16G. Is it reasonable to set sga_max_size to the same value, and db_cache_size to 80% of that size?
View 2 Replies
View Related
Dec 30, 2010
We have large tables of 60-70 GB holding 120 million records. We have to rebuild indexes frequently, which takes significant time to complete and affects database performance too. How can we use index COALESCE instead? What are its benefits, and does coalescing result in a performance gain?
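For comparison, the two operations look like this (big_table_idx is a placeholder). COALESCE merges adjacent, sparsely filled leaf blocks in place: it never needs space for a second copy of the index, does not block DML the way an offline rebuild does, and is usually much cheaper, but it does not lower the index height or hand space back to the tablespace. REBUILD creates a fresh copy and needs room for both copies while it runs (ONLINE rebuild requires Enterprise Edition):
ALTER INDEX big_table_idx COALESCE;

ALTER INDEX big_table_idx REBUILD ONLINE PARALLEL 4 NOLOGGING;
After a parallel rebuild, the index keeps the parallel degree, so an ALTER INDEX big_table_idx NOPARALLEL afterwards is usually wanted. Where rebuilds are being done purely on a schedule, a periodic COALESCE - or doing nothing unless a concrete problem is measured - is often sufficient.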
View 10 Replies
View Related
Mar 11, 2013
What are some ways of improving the performance of an Oracle table that holds millions of records? Currently we have partitioning and indexing, but it doesn't seem to help.
View 2 Replies
View Related
Jul 17, 2012
I have Oracle Database 11gR2 installed on a Windows 2008 server, but a kind of hang sometimes arises during work hours. When I look at the processes in Control Panel, I see the Oracle process is around 15G, even though we configured SGA_MAX_TARGET=6g.
View 2 Replies
View Related
Jul 18, 2010
I need SQL scripts that can identify fragmented tables in Oracle 10g.
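A rough first pass, assuming an 8K block size and recently gathered statistics, is to compare each table's allocated blocks with the space its rows should need; tables whose allocation is several times the estimate are candidates for a closer look (avg_row_len ignores block overhead and PCTFREE, so treat the ratio as indicative only):
SELECT table_name,
       blocks                                                            AS allocated_blocks,
       CEIL(num_rows * avg_row_len / 8192)                               AS estimated_used_blocks,
       ROUND(blocks / NULLIF(CEIL(num_rows * avg_row_len / 8192), 0), 1) AS bloat_ratio
FROM   user_tables
WHERE  num_rows > 0
ORDER  BY bloat_ratio DESC;
For a more exact picture, DBMS_SPACE.SPACE_USAGE reports how full the blocks below the high-water mark actually are for tables in ASSM tablespaces.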
View 23 Replies
View Related