Performance Tuning :: How To Examine Impact Of Too Long Varchar2 Field

Nov 6, 2012

I have always worked on the assumption that we should declare a VARCHAR2 column with the minimum length needed for the data we manipulate. But recently I was told that assigning a much longer size has little impact on performance. How can I examine the actual impact of an oversized VARCHAR2 field?
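A hedged way to test this yourself: the most commonly cited cost of oversized columns is client-side array-fetch buffers, which are allocated from the declared width rather than the actual data, but server-side effects can be checked directly. A minimal sketch (table names are made up; run in SQL*Plus):

CREATE TABLE t_narrow (val VARCHAR2(30));
CREATE TABLE t_wide   (val VARCHAR2(4000));

INSERT INTO t_narrow
  SELECT dbms_random.string('a', 20) FROM dual CONNECT BY level <= 100000;
INSERT INTO t_wide SELECT val FROM t_narrow;
COMMIT;

-- Identical data, different declared widths: compare the run statistics.
SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM t_narrow ORDER BY val;
SELECT * FROM t_wide   ORDER BY val;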

View 13 Replies



Performance Tuning :: Data Type Conversion Impact?

Nov 28, 2011

My SQL query has three tables in the FROM clause, so it has two join conditions and one WHERE condition.

account_no is of NUMBER data type and v_account_no is of VARCHAR2 data type.

The WHERE clause is:

where account_no = to_number(v_account_no)

With this condition the query has a cost of 392.

We then modified the WHERE clause to:

where v_account_no = to_char(account_no)

With this condition the query has a cost of 11.

What is the impact of this data type conversion, and what is the difference between TO_NUMBER() and TO_CHAR() performance-wise that reduces the cost of the query?
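The usual explanation is not TO_NUMBER() versus TO_CHAR() as such, but which index the conversion disables: a function wrapped around a column makes a plain index on that column unusable. A hypothetical reproduction (all names invented):

CREATE TABLE acct (account_no NUMBER, acct_name VARCHAR2(40));
CREATE TABLE txn  (v_account_no VARCHAR2(20), amount NUMBER);
CREATE INDEX acct_ix ON acct (account_no);
CREATE INDEX txn_ix  ON txn (v_account_no);

-- Form 1: TO_NUMBER() wraps v_account_no, so txn_ix cannot be used.
EXPLAIN PLAN FOR
SELECT * FROM acct a, txn t WHERE a.account_no = TO_NUMBER(t.v_account_no);
SELECT * FROM TABLE(dbms_xplan.display);

-- Form 2: TO_CHAR() wraps account_no, so acct_ix cannot be used,
-- but txn_ix now can be.
EXPLAIN PLAN FOR
SELECT * FROM acct a, txn t WHERE t.v_account_no = TO_CHAR(a.account_no);
SELECT * FROM TABLE(dbms_xplan.display);

Which form is cheaper depends on which table and index the optimizer can then drive the join from.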

View 8 Replies View Related

Server Administration :: Functions To Convert The Long Type Field Data To Varchar2 Type

Oct 17, 2011

Are there functions to convert LONG field data to the VARCHAR2 type?
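There is no built-in SQL function that accepts a LONG, but one common workaround (a sketch; table and column names are hypothetical) is a PL/SQL function, since a LONG column can be fetched into a VARCHAR2 buffer:

CREATE OR REPLACE FUNCTION long_to_vc (p_id IN NUMBER)
  RETURN VARCHAR2
IS
  v_buf VARCHAR2(32767);
BEGIN
  -- A LONG value fetches into a PL/SQL VARCHAR2 buffer; values longer
  -- than the buffer raise ORA-06502.
  SELECT long_col INTO v_buf FROM src_tab WHERE id = p_id;
  RETURN SUBSTR(v_buf, 1, 4000);  -- trim to fit a SQL VARCHAR2
END long_to_vc;
/

-- Usage: SELECT id, long_to_vc(id) FROM src_tab;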

View 2 Replies View Related

Performance Tuning :: Analyze Stats Running Long

Jul 31, 2013

I have a data mart DB whose volume is continuously increasing. I run a daily optimize job to have the data analysed, using the following:

EXECUTE dbms_stats.gather_schema_stats(ownname=> 'xxx', estimate_percent=> 25, DEGREE => 4, cascade=> TRUE);
analyze table xxx.abc compute statistics;

But this optimization itself takes nearly 4 hours to complete, and I can't afford that delay.

Is there a better way of running this optimization?
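One hedged suggestion: the ANALYZE pass duplicates work that DBMS_STATS already does and can usually be dropped, and DBMS_STATS can be told to regather only objects whose statistics are stale (schema name is a placeholder; GATHER STALE relies on table monitoring, which is on by default from 10g):

BEGIN
  dbms_stats.gather_schema_stats(
    ownname          => 'XXX',
    estimate_percent => dbms_stats.auto_sample_size,
    options          => 'GATHER STALE',
    degree           => 4,
    cascade          => TRUE);
END;
/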

View 9 Replies View Related

Performance Tuning :: Tracing Long Running Program?

Oct 25, 2011

We have a program that takes about 13-14 hours to run, and we need to generate traces to see where it is spending so long. I usually use event 10046 for tracing; I'm wondering if the traces can be built incrementally so that we don't end up with one huge trace file.
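A sketch of tracing in windows instead of end to end: switch the trace on and off around the intervals of interest from a second session (the SID and serial# are placeholders):

-- waits => TRUE, binds => FALSE is the usual level-8 style trace
EXEC dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
-- ...let the program run through the interval of interest...
EXEC dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);

The session keeps writing to the same trace file across enable/disable cycles, so this bounds how much is captured rather than splitting files; MAX_DUMP_FILE_SIZE can cap the file size as a backstop.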

View 14 Replies View Related

Performance Tuning :: Query Running For A Long Time In Second Schema

Apr 27, 2012

I have a query (report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scans more than 2,000 million rows in the second schema, but only 440 million in the first schema, which is why the first is fast. I would expect the same behaviour in the second schema.

I have verified the following:
All records in the tables in the two schemas are the same.
All indexes are the same.
The tables have been analyzed.
Histograms have been gathered on all the columns, matching the first schema.

But I still have the same problem and don't know what the cause could be.

Table_name              Num_Rows     Blocks
PRPSL_LST_T                586,610      7,159
PRPSL_WKFLW_ACTVTY_T       582,990      4,030
ITEM_CHR_VAL_T         513,434,010  4,049,020
ITEM_RGN_ASSN_T          8,571,220    137,215

Also attached 2 screen shots of OEM Plans..
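A sketch for comparing what the optimizer actually sees in each schema, since row counts, histograms, and analysis dates can differ even when the data matches (schema names are placeholders):

SELECT owner, table_name, num_rows, blocks, last_analyzed
FROM   all_tables
WHERE  table_name IN ('PRPSL_LST_T', 'PRPSL_WKFLW_ACTVTY_T',
                      'ITEM_CHR_VAL_T', 'ITEM_RGN_ASSN_T')
AND    owner IN ('SCHEMA1', 'SCHEMA2')
ORDER  BY table_name, owner;

-- Per-column statistics and histograms drive the cardinality estimates:
SELECT owner, table_name, column_name, num_distinct, num_nulls, histogram
FROM   all_tab_col_statistics
WHERE  table_name = 'ITEM_CHR_VAL_T'
AND    owner IN ('SCHEMA1', 'SCHEMA2')
ORDER  BY column_name, owner;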

View 2 Replies View Related

Performance Tuning :: LONG To CLOB Conversion In Huge Tables

Jan 25, 2013

We have a huge table in production, with LONG column. We are trying to change its datatype to CLOB. The table has 120 Million records and is of 270 GB in size.

We tried the Oracle expdp/impdp option for the conversion in our performance environment. With 32 parallel workers, the export completed in 1.5 hours; however, the import took 13 hours.

I also tried the TO_LOB option using inserts; it ran for 20 hours and I killed the process. Are there any ways to improve the performance of LONG-to-CLOB conversion on huge tables?
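One option often suggested at this size is online redefinition, which drives the TO_LOB copy itself and keeps the table available during the copy. A heavily abbreviated sketch (all names are placeholders; the full flow also includes CAN_REDEF_TABLE, COPY_TABLE_DEPENDENTS, and SYNC_INTERIM_TABLE):

-- interim table pre-created with the CLOB column
BEGIN
  dbms_redefinition.start_redef_table(
    uname       => 'APP',
    orig_table  => 'BIG_TAB',
    int_table   => 'BIG_TAB_INT',
    col_mapping => 'id id, TO_LOB(long_col) long_col');
  dbms_redefinition.finish_redef_table('APP', 'BIG_TAB', 'BIG_TAB_INT');
END;
/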

View 6 Replies View Related

Performance Tuning :: Create Partitioned Table With Column Of LONG Or LONGRAW?

Nov 3, 2010

What is the reason behind the statements below?

1) We can't create a partitioned table on a cluster, or a partitioned index on a clustered table.

2) We can't create a partitioned table with a column of LONG or LONG RAW. (But how is it possible with BLOB and CLOB?)
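A quick demonstration of point 2 (hypothetical names). The LONG version is rejected while the CLOB version works: LOB columns get their own per-partition segments, whereas LONG is a legacy format stored inline with the row and was never retrofitted for partitioning:

CREATE TABLE part_long (id NUMBER, txt LONG)
  PARTITION BY RANGE (id) (PARTITION p1 VALUES LESS THAN (100));
-- fails: LONG columns are not supported in partitioned tables

CREATE TABLE part_clob (id NUMBER, txt CLOB)
  PARTITION BY RANGE (id) (PARTITION p1 VALUES LESS THAN (100));
-- succeeds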

View 3 Replies View Related

Performance Tuning :: Full Tablescan Even Though Join Is On Indexed Field

Oct 18, 2010

I am posting the below query:

SELECT PEA.INCURRED_BY_PERSON_ID AS PERSON_ID,
PEA.EXPENDITURE_ENDING_DATE AS WEEK_END_DATE,
CASE

[Code].....

The explain is below:

SELECT STATEMENT ALL_ROWS  Cost: 48,287  Bytes: 18,428,818  Cardinality: 297,239
  3 HASH JOIN  Cost: 48,287  Bytes: 18,428,818  Cardinality: 297,239
    1 TABLE ACCESS FULL TABLE PA.PA_EXPENDITURES_ALL  Cost: 2,964  Bytes: 3,506,094  Cardinality: 194,783
    2 TABLE ACCESS FULL TABLE PA.PA_EXPENDITURE_ITEMS_ALL  Cost: 43,425  Bytes: 26,637,468  Cardinality: 605,397
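With roughly 605K rows expected from the items table, full scans feeding a hash join are often what the optimizer correctly prefers: index-driven nested loops would pay one random read per row. A hedged way to check (this assumes the join is on expenditure_id, which the truncated query does not show):

EXPLAIN PLAN FOR
SELECT /*+ ORDERED USE_NL(peia) */
       pea.incurred_by_person_id, pea.expenditure_ending_date
FROM   pa.pa_expenditures_all pea, pa.pa_expenditure_items_all peia
WHERE  peia.expenditure_id = pea.expenditure_id;

SELECT * FROM TABLE(dbms_xplan.display);
-- If the nested-loops plan costs more than 48,287, the full scans win.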

View 9 Replies View Related

Trigger Performance Impact When Not Firing?

Apr 30, 2013

I'm trying to find information on the performance impact of a trigger on a heavily updated table when the condition to fire the trigger is NOT met. In other words, what is the cost of the system evaluating the trigger's condition to decide whether it should fire?

For example, I have a batch job that inserts into and updates a table heavily, but it almost never updates the column in question to the value that would cause the trigger to fire, although it does update that column to other values often.
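For the case where the check itself is the concern, a WHEN clause keeps the test inside the engine and avoids calling the PL/SQL body at all when it is false; combined with UPDATE OF, the trigger is not even considered unless the column appears in the SET list. A sketch with hypothetical names:

CREATE OR REPLACE TRIGGER trg_status_change
  AFTER UPDATE OF status_col ON batch_tab
  FOR EACH ROW
  WHEN (NEW.status_col = 'FIRE_VALUE')  -- no colon before NEW in a WHEN clause
BEGIN
  INSERT INTO audit_tab (id, changed_at) VALUES (:NEW.id, SYSDATE);
END;
/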

I know about the many downsides of using triggers in general, but I'm working with a third party application, so more optimal solutions aren't an option.

View 1 Replies View Related

Minimize Impact As Users Are Complaining For Performance

Sep 9, 2011

In my Oracle Enterprise Manager, under "User I/O", I basically have four categories.

If I rank them out of ten it would be:
read by other session 2/10
db file scattered read 1/10
direct path read 0.5/10
db file sequential read 6.5/10

All of these come from the same 2 tables almost all of the time. I need some way to handle "read by other session" and "db file sequential read".

I rebuild the indexes of the tables involved once every 10 days, and statistics for these tables are collected every day using analyze table "xxx" compute statistics. What in-depth approach should I take to minimize the impact, as users are complaining about performance?
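A sketch for narrowing down exactly which segments sit behind those two waits (v$active_session_history requires the Diagnostics Pack license):

SELECT event, current_obj#, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  event IN ('read by other session', 'db file sequential read')
GROUP  BY event, current_obj#
ORDER  BY samples DESC;

-- Map current_obj# back to a segment:
SELECT object_name, object_type FROM dba_objects WHERE object_id = 12345;  -- value from current_obj#

Both waits typically point at hot blocks or row-by-row lookups, so knowing the exact segment is more useful than routine index rebuilds.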

View 2 Replies View Related

SQL & PL/SQL :: Sorting VARCHAR2 Field

Mar 23, 2010

How can I sort the data set A, B, 1, 2, A1, A2, B1, B2, 2B, 1000 so that it comes out as A, B, 1, 2, 1000, A1, A2, B1, B2, 2B?

I tried with LPAD and the REGEXP_REPLACE function, but it did not work.
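A sketch that produces exactly that order by ranking the strings into classes (pure alphabetic, pure numeric sorted as numbers, letter-led alphanumerics, then the rest):

WITH mixed_vals AS (
  SELECT 'A' AS val FROM dual UNION ALL SELECT 'B'    FROM dual UNION ALL
  SELECT '1'        FROM dual UNION ALL SELECT '2'    FROM dual UNION ALL
  SELECT 'A1'       FROM dual UNION ALL SELECT 'A2'   FROM dual UNION ALL
  SELECT 'B1'       FROM dual UNION ALL SELECT 'B2'   FROM dual UNION ALL
  SELECT '2B'       FROM dual UNION ALL SELECT '1000' FROM dual
)
SELECT val
FROM   mixed_vals
ORDER  BY CASE WHEN REGEXP_LIKE(val, '^[A-Za-z]+$') THEN 1   -- A, B
               WHEN REGEXP_LIKE(val, '^[0-9]+$')    THEN 2   -- 1, 2, 1000
               WHEN REGEXP_LIKE(val, '^[A-Za-z]')   THEN 3   -- A1, A2, B1, B2
               ELSE 4 END,                                   -- 2B
          CASE WHEN REGEXP_LIKE(val, '^[0-9]+$') THEN TO_NUMBER(val) END,
          val;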

View 13 Replies View Related

SQL & PL/SQL :: Partition An Existing Table Based On Varchar2 Field?

Oct 13, 2011

I need to partition an existing table based on a VARCHAR2 field (which is actually a date value, but stored as character data in the table). I am using the statement below to create the table, but I am getting an error.

create table TST_CUST_ARC
(
interact_id NUMBER(10),
extrn_id VARCHAR2(38),
src_cd VARCHAR2(25),
full_nm VARCHAR2(45),
run_control_date VARCHAR2(13)

[code]....

Getting error : ORA-00907: missing right parenthesis
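ORA-00907 is a syntax error, so the first suspect is the partition clause itself rather than the VARCHAR2 type: range partitioning on a VARCHAR2 column is allowed. A sketch of a working form, assuming the strings are in a format that collates chronologically, such as 'YYYY-MM-DD' (formats like 'DD-MON-YYYY' do not, which is the usual trap with dates stored as characters):

CREATE TABLE tst_cust_arc
(
  interact_id      NUMBER(10),
  extrn_id         VARCHAR2(38),
  src_cd           VARCHAR2(25),
  full_nm          VARCHAR2(45),
  run_control_date VARCHAR2(13)
)
PARTITION BY RANGE (run_control_date)
(
  PARTITION p2011q3 VALUES LESS THAN ('2011-10-01'),
  PARTITION p2011q4 VALUES LESS THAN ('2012-01-01'),
  PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);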

View 5 Replies View Related

Windows :: Arabic Text Saved Using Odp In Varchar2 Field Becomes ????

Jan 6, 2011

We have 2 databases with the AR8MSWIN1256 character set (Arabic). Our client machine has English Windows installed. Our .NET application, which runs on the client machine using ODAC, reads from one DB and writes to the other. We have no problems with NVARCHAR2 fields containing Arabic. With VARCHAR2 fields it seems that it reads the data just fine (we can see it in the GUI), but when it writes the data to the second DB, the Arabic becomes question marks. We tried setting the client NLS_LANG in the registry to AR8MSWIN1256 and to WE8MSWIN1252; neither works.
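A sketch for pinning down where the corruption happens: question marks usually mean a conversion step decided the characters were unrepresentable, and DUMP shows which bytes were actually stored (table and column names are hypothetical):

-- Run against both databases; 1016 prints the charset name plus hex bytes.
SELECT val, DUMP(val, 1016) AS stored_bytes
FROM   arabic_tab
WHERE  ROWNUM <= 5;

-- Confirm each database's VARCHAR2 character set:
SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';

If the bytes are already '3F' (question marks) in the second DB, the damage happened on the write path before storage.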

View 4 Replies View Related

Precautions For Converting A Table Field From Long To Clob

Jul 31, 2012

I am more of a C/C++ guy and a relative amateur in Oracle. I have to change a table field from LONG to CLOB. I plan to do a simple ALTER TABLE, and as far as I know there won't be any issues.

Queries:
1. Although I have triple-checked, is there any scenario under which there can be any data loss during the data type change? The data is very critical and no data loss can be tolerated.
2. Is there any easy way to update all the related views without having to do so manually?
3. Are there any particular precautions I should take before introducing the change?
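A sketch of the change and the view question (schema and object names are placeholders). Note that the MODIFY rewrites every row, so it can generate heavy redo and holds an exclusive lock on a big table; a backup beforehand is the standard precaution:

ALTER TABLE app.big_tab MODIFY (long_col CLOB);

-- Dependent views are only marked INVALID and recompile on first use;
-- they can also be revalidated up front in bulk:
EXEC utl_recomp.recomp_serial('APP');

Views that relied on LONG-specific behaviour may stay invalid and need manual attention, which is worth checking in USER_ERRORS afterwards.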

View 2 Replies View Related

Performance Of CHAR Versus VARCHAR2 In VLDB DW

Jul 20, 2010

With a very large database (VLDB) for a data warehouse (DW) using a primarily STAR-based schema, in an environment in which time (both human and CPU) is orders of magnitude more valuable than storage capacity, is there any significant difference in query performance when tables have all fixed-length (CHAR) columns compared to tables with variable-length (VARCHAR2) columns?

I realize this is one of those "in general" questions, so considering "a given VLDB DW environment" with all other things being equal: what, if any, is the time-based performance difference between a database of tables with all fixed-size columns versus one of tables with variable-length columns?

View 2 Replies View Related

SQL & PL/SQL :: Performance Of Varchar2 Fields Based On Width?

Nov 10, 2010

I did a search on this topic and did see the Ask Tom response that storing all VARCHAR2 fields as (2000) or the like is a bad idea, based on the array fetches that developers may use, etc. However I'm not sure that applies to my specific question, and the other examples he gave certainly didn't apply. So I'll pose the question a different way:

Question #1: Is there, for example, a performance difference between setting a field as VARCHAR2(2000) and VARCHAR2(25) if I am just running a native SQL query using a front-end tool like TOAD?

Question #2: If I also need to index that field, will it take longer to index a VARCHAR2(2000) field than a VARCHAR2(25) field, assuming the same data is in both fields?
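For question #2 specifically, a sketch you can run to compare for yourself (names invented): index entries store the actual data, not the declared width, so identical data should produce near-identical index sizes and build times:

CREATE TABLE t25   (val VARCHAR2(25));
CREATE TABLE t2000 (val VARCHAR2(2000));
INSERT INTO t25 SELECT dbms_random.string('x', 20) FROM dual CONNECT BY level <= 50000;
INSERT INTO t2000 SELECT val FROM t25;
COMMIT;
CREATE INDEX ix25   ON t25 (val);
CREATE INDEX ix2000 ON t2000 (val);
EXEC dbms_stats.gather_table_stats(USER, 'T25',   cascade => TRUE);
EXEC dbms_stats.gather_table_stats(USER, 'T2000', cascade => TRUE);
SELECT index_name, leaf_blocks, blevel FROM user_indexes
WHERE  index_name IN ('IX25', 'IX2000');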

View 2 Replies View Related

Performance Tuning :: Tools For Database Tuning And Instance Tuning

Jul 12, 2010

Looking to understand the difference between instance tuning and database tuning.

What is the difference between these two tuning exercises? I understand that an instance is the memory-based (logical) structures, whereas the database consists of the physical structures.

However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?

View 1 Replies View Related

Performance Tuning :: Merge Statement Tuning For 100M Records In Table?

Oct 31, 2011

I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I'm running the following MERGE query, which runs for 13 hours to update the records, which is quite a long time.

SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,

[code].....
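One frequently missed detail worth checking (a hedged suggestion, not a diagnosis): a parallel hint on the MERGE target only parallelizes the DML itself if the session has parallel DML enabled first; otherwise only the query side runs in parallel and the update remains serial:

ALTER SESSION ENABLE PARALLEL DML;

-- After running the MERGE (before COMMIT), confirm the DML went parallel:
SELECT statistic, last_query FROM v$pq_sesstat WHERE statistic LIKE 'DML%';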

View 39 Replies View Related

Performance Tuning :: How Length Of Column Width Affects Index Performance

Sep 30, 2010

How does the length of column width affect index performance?

For example, suppose I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)

The table key consists of (id, job, time).

Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).

What performance increase could I expect if in column "job" I stored not names but compact numbers identifying the job names,
e.g. storing "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'?
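A sketch of the numeric-code variant for comparison (column names adjusted and hypothetical). In an IOT every leaf entry carries the full key, so a shorter key means more entries per leaf block and a potentially shallower tree; VSIZE shows the per-value saving:

CREATE TABLE emp_iot_num (
  id       NUMBER,
  job_id   NUMBER,   -- e.g. 1 = 'ANALYST', 2 = 'NIGHT_WORKED'
  job_time DATE,
  plan_no  NUMBER,
  CONSTRAINT emp_iot_num_pk PRIMARY KEY (id, job_id, job_time)
) ORGANIZATION INDEX;

SELECT VSIZE('ANALYST') AS text_bytes, VSIZE(1) AS number_bytes FROM dual;
-- 7 bytes versus 2 bytes per key entry in this example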

View 24 Replies View Related

Performance Tuning :: Fragmentation Can Reduce Performance In Query Times

Jun 16, 2010

I have a question about database fragmentation. I know that fragmentation can reduce performance in query times: the blocks are distributed across many extents, and the scan process takes a long time because the Oracle engine has to locate the address of the next extent.

I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed I will re-create, move, or rebuild the table or index, but before that I want to know whether the degree of fragmentation is high.

Is there any useful script or query to do this, or any interesting Oracle system view?
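A sketch of one common check, comparing the space a segment occupies with what its rows actually need, using fresh statistics (owner and table names are placeholders; the 8192 assumes an 8K block size):

SELECT segment_name, extents, blocks
FROM   dba_segments
WHERE  owner = 'APP' AND segment_name = 'BIG_TAB';

SELECT table_name, blocks, num_rows, avg_row_len,
       ROUND(num_rows * avg_row_len / 8192) AS est_blocks_needed
FROM   dba_tables
WHERE  owner = 'APP' AND table_name = 'BIG_TAB';

A large gap between BLOCKS and the estimate suggests sparse blocks (often from heavy deletes), which is the case where ALTER TABLE ... MOVE or an index rebuild pays off.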

View 2 Replies View Related

Performance Tuning :: Method Of Tuning Database - Row Reduction?

Oct 20, 2010

There is a simple way to increase the performance of a query by reducing the number of rows of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.

What is this method called? I've forgotten the name and can't recall it. What is this type of row-reduction optimization called?

View 6 Replies View Related

Performance Tuning :: Performance Standard Edition Without Partitioning?

Jun 16, 2011

How many records could I have in a single table without performance degradation on Standard Edition (without partitioning) with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?

Is 300 million rows in one table with 500K transactions/day too much?

It is a simple database with a simple schema.

How many records begin to be too many?

View 2 Replies View Related

Performance Tuning :: Procedure Performance On New Database Import?

Nov 15, 2010

Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.

The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure; it's only a couple.

Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?

View 13 Replies View Related

Performance Tuning :: Checking Delete Performance In Package

Apr 12, 2013

I need to check the package's performance and improve it.

1. How do I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I have observed that the delete takes a long time to remove all the records in the table (7,000,000 rows). This is like a staging table: the data needs to be cleaned out daily before new data is inserted. What can I use instead of DELETE?
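Two hedged sketches, one per question (directory object, package, and table names are placeholders). For per-statement timing, the 11g hierarchical profiler records time per call; for the nightly purge, TRUNCATE avoids generating undo and redo per row, though it is DDL, commits implicitly, and cannot be rolled back:

-- 1. Profile a run of the package:
EXEC dbms_hprof.start_profiling(location => 'PROF_DIR', filename => 'pkg.trc');
EXEC my_pkg.my_proc;
EXEC dbms_hprof.stop_profiling;
-- then analyze pkg.trc with the plshprof utility or dbms_hprof.analyze

-- 2. Replace the DELETE-everything step:
TRUNCATE TABLE staging_tab REUSE STORAGE;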

View 13 Replies View Related

Performance Tuning :: Query Performance Gain Using Statistics?

Aug 9, 2010

Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the 'statistics' so that the plan is followed without hints.

Q1. If it is true what statistics do we adjust for influencing the execution plan and how?

For example, I have the following simple query:

select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;

emp.empid, emp.deptno and dep.deptno columns have indexes and the tables have the standard structure as found in the basic oracle examples.

If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.

Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table then how can I adjust the statistics to achieve that?
Q 3. If I want to use hash join instead of a nested loop join then then how can I adjust the statistics to achieve that?

I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints are.
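The mechanism usually meant by "adjusting statistics" is DBMS_STATS.SET_*_STATS, which plants the numbers the optimizer believes. A sketch for questions 2 and 3 (the figures are purely illustrative): making dept look tiny and emp look huge pushes the optimizer toward dept as the driving/build side, and a large estimated row count on the driven side is what tips a nested loop over into a hash join:

BEGIN
  dbms_stats.set_table_stats(USER, 'DEPT', numrows => 10,      numblks => 1);
  dbms_stats.set_table_stats(USER, 'EMP',  numrows => 1000000, numblks => 20000);
END;
/

This is fragile in its own way: the planted numbers no longer describe the data, so they affect every query against those tables, not just this one.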

View 6 Replies View Related

Performance Tuning :: How To Improve The Performance Of Export Job (expdp)

Dec 6, 2011

I have an issue with export(expdp).

When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.

expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk

Do I need to look into any memory parameters for this?

View 1 Replies View Related

Performance Tuning :: DECODE In WHERE CLAUSE Performance?

Oct 17, 2011

The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes, like LOB, FAMILY, BRAND, etc., and the user may or may not select values from them.

If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches all records.

For this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. All the V_ variables below are defined in the procedure, which receives the values selected by the user as a comma-separated string; V_SELLOB is one of them, and LOB_DESC is a column in the DB.

OPEN v_refcursor FOR
  SELECT /*+ FULL(a) PARALLEL(a, 5) */
         *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN

[code]...
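A hedged alternative often suggested for this pattern: express the 'DEFAULT' case as an OR so the optimizer can short-circuit the filter instead of evaluating DECODE per row. The sketch assumes a single selected value for simplicity; the real comma-separated string would first need splitting into rows or a collection for an IN test:

OPEN v_refcursor FOR
  SELECT *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob);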

View 9 Replies View Related

Performance Tuning :: Same Data But Different Performance Results

Sep 3, 2010

What are the principal things to look at when the same query gives different performance results? I have 2 different databases: the plan and the data are the same, but the performance results are very different.
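A sketch for quantifying the difference: with identical plans, compare the actual work done on each database for the statement (the sql_id is a placeholder; find it in v$sql by sql_text):

SELECT sql_id, executions, buffer_gets, disk_reads,
       elapsed_time / 1e6 AS elapsed_s, cpu_time / 1e6 AS cpu_s
FROM   v$sql
WHERE  sql_id = 'abcd1234efgh5';

Same buffer_gets but different disk_reads or elapsed time points away from the optimizer and toward caching, I/O, or host differences.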

View 10 Replies View Related

VARCHAR2 (100) Changes Column To VARCHAR2 (100 Byte)

Oct 24, 2007

I am fairly new to the Oracle arena, but what would cause a statement such as

ALTER TABLE TEST_TABLE MODIFY text_field1 varchar2(100) DEFAULT 'testval' NULL

to change a column's type from VARCHAR2(100) to VARCHAR2(100 BYTE)? I found a few mentions of the 100 BYTE concept online, but nothing that jumped out at me.
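The setting behind this is NLS_LENGTH_SEMANTICS: with BYTE semantics (the default), VARCHAR2(100) means VARCHAR2(100 BYTE), and some tools display the suffix explicitly once the column is touched. A sketch for checking what a column actually uses:

SHOW PARAMETER nls_length_semantics

SELECT column_name, data_length, char_length, char_used  -- B = byte, C = char
FROM   user_tab_columns
WHERE  table_name = 'TEST_TABLE';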

View 2 Replies View Related






