Performance Tuning :: LONG To CLOB Conversion In Huge Tables

Jan 25, 2013

We have a huge table in production with a LONG column, and we are trying to change its datatype to CLOB. The table has 120 million records and is about 270 GB in size.

We tried using the Oracle expdp/impdp route to test the conversion in our performance environment. With a parallel degree of 32, the export completed in 1.5 hours, but the import took 13 hours.

I also tried the TO_LOB option using inserts; it ran for 20 hours before I killed the process. Are there any ways to improve the performance of a LONG to CLOB conversion on huge tables?
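Two database-side options may be worth testing before moving 270 GB out and back in with expdp/impdp (a sketch only, with hypothetical table and column names; note that LONG columns restrict parallel execution, so the parallel hints may not be fully honoured for the source read):

-- option 1: direct-path CTAS with TO_LOB, then swap the tables
alter session enable parallel ddl;

create table big_table_new nologging parallel 16 as
select /*+ parallel(16) */ id, other_col, to_lob(long_col) as clob_col
from   big_table;
-- recreate indexes, constraints and grants on big_table_new, then rename.

-- option 2: in-place conversion (rewrites every row, so it needs comparable time and space)
alter table big_table modify (long_col clob);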

View 6 Replies



Performance Tuning :: MVIEW Huge Temp Space?

Nov 23, 2011

I run a query that takes 20 minutes or so. I traced it and can see no more than 20-30 MB of temp space required in the plan.

I developed it for use in a materialized view; however, when I create the mview with that SQL, the temp space required grows until it maxes out. I increased the existing 10 GB to 50 GB, but it still maxed out. I took the SQL out and reran it: it finished in 20 minutes, barely scratching the temp. I also ran a "create table as <select>" and saw the same behaviour as the plain SQL, barely touching the temp, as per the plan. So the temp space blowing up is unique to the mview create.

I have been working with mviews for years on several sites and have never seen this.

View 6 Replies View Related

Performance Tuning :: Data Type Conversion Impact?

Nov 28, 2011

My SQL query has three tables in the from clause, so it has two join conditions and one where condition.

account_no is a NUMBER column and v_account_no is a VARCHAR2 column.

The where clause is:

"where account_no=to_number(v_account_no)" with this condition in my sql query has the cost 392

We then modified the where clause to "where v_account_no = to_char(account_no)", and with this condition the query has a cost of 11.

What is the impact of this datatype conversion, and what is the difference between to_number() and to_char(), performance-wise, that reduces the cost of the query?
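The cost difference usually comes from which column the conversion is applied to: wrapping a column in to_number() or to_char() prevents a plain index on that column from being used, so moving the function to the other side of the comparison lets the optimizer use the index on the other table. A sketch with hypothetical table names:

-- to_number(t2.v_account_no) hides any index on t2.v_account_no,
-- but leaves an index on t1.account_no usable:
select t1.account_no
from   t1, t2
where  t1.account_no = to_number(t2.v_account_no);

-- to_char(t1.account_no) hides any index on t1.account_no,
-- but leaves an index on t2.v_account_no usable:
select t1.account_no
from   t1, t2
where  t2.v_account_no = to_char(t1.account_no);

There is also a correctness angle: to_number(v_account_no) raises ORA-01722 if the VARCHAR2 column ever holds non-numeric data, whereas the to_char() form cannot fail, which is another reason the second version is usually preferred.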

View 8 Replies View Related

Performance Tuning :: Analyze Stats Running Long

Jul 31, 2013

I have a datamart DB whose volume is continuously increasing. I run a daily optimize job to have the data analyzed, using the below:

EXECUTE dbms_stats.gather_schema_stats(ownname => 'xxx', estimate_percent => 25, degree => 4, cascade => TRUE);

analyze table xxx.abc compute statistics;

But this optimization itself is taking nearly 4 hours to complete, and I can't afford that delay.

Is there a better way of running this optimization?
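A couple of things are commonly worth testing here (hedged, since the right choice depends on how much of the data actually changes each day). The ANALYZE ... COMPUTE STATISTICS step is largely redundant once DBMS_STATS is used and is itself expensive on a large table, and DBMS_STATS can be asked to gather only stale or missing statistics with an automatic sample size instead of a fixed 25%:

BEGIN
  dbms_stats.gather_schema_stats(
    ownname          => 'XXX',
    estimate_percent => dbms_stats.auto_sample_size,
    degree           => 4,
    cascade          => TRUE,
    options          => 'GATHER STALE');  -- only objects whose stats are stale
END;
/

For 'GATHER STALE' to know what is stale, table monitoring must be enabled; it is on by default from 10g onwards.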

View 9 Replies View Related

Performance Tuning :: Tracing Long Running Program?

Oct 25, 2011

We have a program that is taking about 13-14 hours to run, and we need to generate traces to see where it is spending so long. I usually use event 10046 for tracing. I'm wondering whether the traces can be built incrementally so that the output doesn't become one huge trace file.
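One approach (a sketch, assuming the program runs in a single session you can instrument) is to switch the 10046 event on and off around individual phases and change TRACEFILE_IDENTIFIER each time, so every phase lands in its own, smaller trace file:

alter session set tracefile_identifier = 'phase1';
alter session set events '10046 trace name context forever, level 12';
-- ... run phase 1 of the program ...
alter session set events '10046 trace name context off';

alter session set tracefile_identifier = 'phase2';
alter session set events '10046 trace name context forever, level 12';
-- ... run phase 2 ...
alter session set events '10046 trace name context off';

If the session cannot be changed from inside the program, DBMS_MONITOR.SESSION_TRACE_ENABLE(sid, serial#, waits => TRUE, binds => TRUE) can be run from another session, although the output then accumulates in one trace file per traced session.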

View 14 Replies View Related

Performance Tuning :: How To Examine Impact Of Too Long Varchar2 Field

Nov 6, 2012

I have always worked on the assumption that we should use the minimum length for a VARCHAR2 field that can store the data we need to manipulate. But recently I was told that it has little impact on performance if we assign a much longer size.

View 13 Replies View Related

Performance Tuning :: Query Running For A Long Time In Second Schema

Apr 27, 2012

I have a query (report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index is scanning more than 2,000 million records in the second schema, but it scans only 440 million in the first schema, which is why that one is fast. I am expecting the same behaviour in the second schema.

I have verified the following:
All records in the tables are the same in both schemas.
All indexes are the same.
The tables have been analyzed.
Histograms have been gathered on all columns, matching the first schema.

But I still have the same problem, and I don't know what the cause could be.

Table_name              Num_Rows     Blocks
PRPSL_LST_T               586610       7159
PRPSL_WKFLW_ACTVTY_T      582990       4030
ITEM_CHR_VAL_T         513434010    4049020
ITEM_RGN_ASSN_T          8571220     137215

Also attached are 2 screenshots of the OEM plans.
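As a first check (a sketch; the dictionary views are real, the schema names are placeholders), it is worth comparing the optimizer statistics the two schemas actually hold, since tables that have both been analyzed can still carry different num_rows, densities or histograms:

select owner, table_name, num_rows, blocks, last_analyzed
from   dba_tables
where  owner in ('SCHEMA1', 'SCHEMA2')
and    table_name in ('PRPSL_LST_T', 'PRPSL_WKFLW_ACTVTY_T', 'ITEM_CHR_VAL_T', 'ITEM_RGN_ASSN_T');

select owner, table_name, column_name, num_distinct, density, num_buckets, histogram
from   dba_tab_col_statistics
where  owner in ('SCHEMA1', 'SCHEMA2')
and    table_name = 'ITEM_CHR_VAL_T'
order  by column_name, owner;

Any column where num_distinct, density or histogram differs between the two owners is a candidate cause for the different index scan.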

View 2 Replies View Related

Performance Tuning :: Create Partitioned Table With Column Of LONG Or LONGRAW?

Nov 3, 2010

What is the reason behind the statements below?

1) We can't create a partitioned table on a cluster, or an index on a clustered table.

2) We can't create a partitioned table with a column of LONG or LONG RAW. (But how is it possible with BLOB and CLOB?)
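The short answer is that LONG and LONG RAW are legacy types with a long list of restrictions (partitioned tables among them), while LOB columns were designed to work with partitioning: each partition gets its own LOB segment. A quick illustration with hypothetical names (the exact error text may vary by version):

-- fails: a partitioned table cannot have a LONG column
create table t_long (id number, txt long)
partition by range (id) (partition p1 values less than (maxvalue));

-- works: the same structure with a CLOB column
create table t_clob (id number, txt clob)
partition by range (id) (partition p1 values less than (maxvalue));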

View 3 Replies View Related

SQL & PL/SQL :: Blob To Long Raw Conversion

Dec 19, 2010

I have a problem: I need to copy a BLOB column containing picture files into a LONG RAW column.

I have made many attempts, but none succeeded.

-----------------------------------
Source table | destination table
id number | id number
img blob | img long raw
------------------------------------

1 - INSERT INTO destination_table SELECT id, img FROM source_table WHERE ROWNUM < 2

I get this error: ORA-22835 Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 146092, maximum: 2000)

2 - INSERT INTO destination_table SELECT id, dbms_lob.SUBSTR(img, 0, 2000) FROM source_table WHERE ROWNUM < 2

No errors this time, but the length is 0 (I get no file).

What can I do?
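Two things stand out (a sketch; the table names are the placeholders used above). First, the argument order of DBMS_LOB.SUBSTR is (lob, amount, offset), so SUBSTR(img, 0, 2000) asks for 0 bytes starting at offset 2000, which is why the result had length 0. Second, the implicit BLOB-to-RAW conversion in SQL is limited to 2000 bytes (exactly what ORA-22835 reports), so a 146,092-byte image can never be moved by a single converted SELECT:

-- copies only the first 2000 bytes of the image (argument order corrected)
INSERT INTO destination_table (id, img)
SELECT id, dbms_lob.substr(img, 2000, 1)
FROM   source_table
WHERE  ROWNUM < 2;

To move whole pictures the target column really needs to stay a LOB; writing more than about 32 KB into a LONG RAW from PL/SQL is not possible, which is one of the reasons LONG RAW is deprecated in favour of BLOB.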

View 7 Replies View Related

Performance Tuning :: Join With 30 Tables

Jan 16, 2012

I have to optimize a query with the following characteristics:

- Takes 3 hours to process
- Performs the inner join with 30 tables
- Produces an output of 280 million records with 450 fields

First of all, it is not feasible to run 30 updates (one for each table) against 280 million records.

The best solution I have found so far is to create 3 temporary tables, each of which joins with 1/3 of the 30 tables, and at the end I join the main table with these three temporary tables.

I know that you will ask (or maybe not) for the query and sample data, but it is not practical to post 30 examples.

How can I optimize this type of query, which joins multiple tables and produces a large output with (too) many columns?
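A sketch of the staged approach described above, with hypothetical names; the idea is to build each intermediate result with a direct-path, parallel CTAS rather than with updates, carrying the join key forward so the final assembly is three hash joins:

alter session enable parallel ddl;

create table tmp_join_1 nologging parallel 8 as
select /*+ parallel(8) */ m.pk_id, t01.col_a, t02.col_b  /* ... columns from tables 1-10 ... */
from   main_table m
join   lookup_01 t01 on t01.fk_id = m.pk_id
join   lookup_02 t02 on t02.fk_id = m.pk_id;
-- repeat for tmp_join_2 (tables 11-20) and tmp_join_3 (tables 21-30)

create table final_output nologging parallel 8 as
select /*+ parallel(8) */ j1.*, j2.cols_from_11_20, j3.cols_from_21_30
from   tmp_join_1 j1
join   tmp_join_2 j2 on j2.pk_id = j1.pk_id
join   tmp_join_3 j3 on j3.pk_id = j1.pk_id;

Whether this beats a single 30-table join depends mainly on whether the intermediate tables fit comfortably on disk and whether the optimizer was already choosing hash joins; comparing the execution plans and temp usage of both versions is the only reliable test.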

View 15 Replies View Related

Performance Tuning :: Truncate To Some Of Tables

Jan 13, 2012

We have a procedure which truncates some of the tables. Most of the time it finishes in a short span of time, but for the last few days it has been taking much longer.

Where should I start the investigation?
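A reasonable starting point (assuming the Diagnostics Pack is licensed, since this reads ASH data) is to see what the session running the procedure is actually waiting on during a slow run; truncates that suddenly slow down are often stuck on waits such as 'local write wait', 'enq: RO - fast object reuse' or library cache locks rather than doing heavy I/O of their own:

select sample_time, session_id, event, wait_class, sql_id
from   v$active_session_history
where  session_id = :sid_of_procedure_session
and    sample_time > systimestamp - interval '1' hour
order  by sample_time;

A 10046 trace of one slow run would show the same wait events without the licensing requirement.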

View 4 Replies View Related

Performance Tuning :: How To Partition Tables And Indexes

Jan 12, 2011

So our situation is pretty simple. We have 3 tables.

A, B and C

the model is A->>B->>C

Currently A, B and C are range partitioned on a created_date key; however it is typical that only C is ever qualified with created_date. There is a foreign key from B to A and from C to B. We have many queries where the data is identified by a state column on A, which is currently indexed with a non-partitioned index. There are also indexes on the foreign keys used to navigate from C to B to A; again, these are non-partitioned indexes at this time.

It is typical that we qualify A on either account or user or both, and there are (non-partitioned) indexes on these columns. We have a problem now because many of the queries use leading wildcards, i.e. account LIKE '%ACCOUNT' etc. This often results in large full table scans. Our solution so far has been to remove the leading wildcard.

We are wondering how we can benefit from partitioning and/or sub-partitioning table A, since it is partitioned on created_date but rarely qualified by it. We are also wondering where and how we can benefit from either global partitioned indexes or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
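For illustration only (hypothetical column and index names): a local index simply inherits the table's created_date partitioning and is most useful when queries also prune on created_date, while a global hash-partitioned index can spread a hot, non-partition-aligned key such as account across several index segments:

-- local index on the FK from C to B, equipartitioned with C's range partitions
create index c_b_fk_li on c_table (b_id) local;

-- global hash-partitioned index on A's account column
create index a_account_gi on a_table (account)
  global partition by hash (account) partitions 16;

Note that neither form changes anything for predicates like account LIKE '%ACCOUNT'; a leading wildcard defeats a normal B-tree index regardless of how it is partitioned, so that problem usually needs a different approach (e.g. Oracle Text or storing a reversed copy of the value).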

View 3 Replies View Related

Performance Tuning :: Moving Tables To Encrypted Tablespace?

Aug 25, 2013

I am using Oracle version 11.2.0.3.0. We are planning to move ~40 tables/indexes to a new encrypted tablespace as part of TDE (transparent data encryption). Currently three of the tables are ~30 GB in size, one is ~800 GB, and the others are <2 GB each; the tables/indexes are currently spread across different tablespaces.

Should I create as many encrypted tablespaces as there were unencrypted ones, or should I create one encrypted tablespace and move all the tables/indexes into it?
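Either layout works functionally: TDE tablespace encryption is a property of each tablespace, so the choice is mainly about manageability and I/O distribution rather than security. A sketch of the mechanics with hypothetical names (11.2 syntax):

create tablespace enc_data
  datafile '+DATA' size 10g autoextend on
  encryption using 'AES256'
  default storage (encrypt);

alter table big_tab move tablespace enc_data;
alter index big_tab_pk rebuild tablespace enc_data;

For the ~800 GB table, ALTER TABLE ... MOVE needs roughly the same amount of free space again and leaves its indexes unusable until they are rebuilt, so that one is often handled partition by partition (if it is partitioned) or online via DBMS_REDEFINITION.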

View 2 Replies View Related

Performance Tuning :: Couldn't Have Indexes On Tables In Production

Sep 27, 2013

I have tried a lot of alternate solutions, like rearranging the order of the tables in the join and moving where conditions earlier, but with no success... It is a bottleneck, and I cannot have indexes on these tables in production... I want to change the approach in the subquery.

SELECT
g.COLUMN1,
g.COLUMN2,
e.COLUMN3,
g.COLUMN4,
MIN(e.dat1) KEEP ( DENSE_RANK FIRST ORDER BY date2 Desc) * -1,
min(to_char(date3,'dd-mm-yyyy'))
[code]....

View 5 Replies View Related

SQL & PL/SQL :: Performance Tuning For Oracle Tables With Million Of Records

Mar 11, 2013

What are some ways of improving performance for an Oracle table that holds millions of records? Currently we have partitioning and indexing, but they don't seem to be working.

View 2 Replies View Related

Performance Tuning :: Index Has Been Created On Both Depart_id For Two Tables

Jul 30, 2010

SELECT department_id
FROM (SELECT department_id
FROM employees
UNION
SELECT department_id
FROM employees_old )
WHERE department_id=100;
[code]....

An index has been created on depart_id in both tables. The only differences I observed between the two were 1 recursive call for the 1st SQL, one additional view in its plan, and a small difference in bytes sent over the network.

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
[code]....

Do you find any performance difference between the above two SQL statements when you compare them?

View 14 Replies View Related

SQL & PL/SQL :: Conversion Of CLOB To NCLOB

Sep 19, 2011

I want to convert a column's datatype from CLOB to NCLOB. I tried to modify the column using the method below, but got the following error:

SQL> alter table table_name modify column_name nclob;
*
ERROR at line 1:
ORA-22859: invalid modification of columns

One other method I am familiar with is:

Quote:

First create a new column of the desired type and copy the current column's data to the new column using the appropriate type constructor.

How can I change the CLOB column to NCLOB without using a temporary column?
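One documented route that avoids adding a temporary column to the table itself is online redefinition, where the type change is expressed in the column mapping (a sketch with hypothetical names; it still requires space for an interim copy of the table and EXECUTE on DBMS_REDEFINITION):

-- interim table with the target datatype
create table tab1_int (id number primary key, column_name nclob);

begin
  dbms_redefinition.can_redef_table(user, 'TAB1');
  dbms_redefinition.start_redef_table(
    uname       => user,
    orig_table  => 'TAB1',
    int_table   => 'TAB1_INT',
    col_mapping => 'id id, to_nclob(column_name) column_name');
  dbms_redefinition.finish_redef_table(user, 'TAB1', 'TAB1_INT');
end;
/
drop table tab1_int;

Indexes, constraints, triggers and grants can be carried across with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS between the start and finish calls.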

View 1 Replies View Related

Performance Tuning :: Options For Optimizing SQL Running Against Wide Tables

Nov 13, 2012

The scale of the tests that generate the following scenario is not huge right now: only 50 users are simulated (or you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) all run against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of I/O (5 GB per request), making requests queue for disk availability (the disks are set up as RAID 0/1); it's even noticeable for as few as 3 threads.

On one occasion I managed to get the SQL to execute in 13 seconds for a single user, but this appears short-lived, as once stats were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the attached file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, that increasing the PGA/SGA (currently 4/6 GB respectively) will only delay the queuing for a bit, and that compression won't really work either. In short, it looks as though we've already hit hardware restrictions for this particular scenario.

As I can't really explain why my rendered query no longer takes 13 seconds, it's niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising these types of queries against such wide tables, in other words possibilities that we haven't considered...

Attached is the query and plan.

View 9 Replies View Related

Performance Tuning :: Create Hash Partition On Fact Tables?

Aug 5, 2010

I have to create hash partitions on fact tables. Can we use a temporary tablespace for this, or does it have to be a permanent tablespace?
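Partitions of a permanent table must live in permanent tablespaces; temporary tablespaces only hold sort/hash work areas and global temporary table data, not table segments. A sketch of a hash-partitioned fact table spread over several permanent tablespaces (hypothetical names):

create table sales_fact (
  sale_id    number,
  product_id number,
  amount     number
)
partition by hash (sale_id)
partitions 8
store in (fact_ts1, fact_ts2, fact_ts3, fact_ts4);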

View 10 Replies View Related

Performance Tuning :: Creating Index On Base Tables Of A View?

Dec 9, 2010

I have a view over base tables holding historical data for the previous 60 months (one table per month), combined with UNION ALL operators. Will creating indexes on those base tables improve performance, or will creating a primary key with DISABLE NOVALIDATE improve data retrieval?

The view has around 8 million rows and is used as a fact table with 4 dimension tables. A DTS package on the MSSQL side refreshes an OLAP cube by retrieving data from these tables in Oracle.

View 1 Replies View Related

SQL & PL/SQL :: Convert LONG To CLOB

Apr 20, 2009

I would like to know if it is possible to transform a CLOB variable into a LONG variable. I know that LONG is obsolete in Oracle, but I need it because a CLOB is not allowed in a PL/SQL EXECUTE IMMEDIATE statement.
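There is no LONG variable type to convert to here, but there are two usual ways around the EXECUTE IMMEDIATE limit. From 11g onwards EXECUTE IMMEDIATE accepts a CLOB directly; on earlier releases DBMS_SQL.PARSE has an overload that takes a DBMS_SQL.VARCHAR2A collection, so the CLOB can be split into pieces instead (a sketch; the statement itself is just a placeholder):

declare
  l_sql    clob := 'begin null; end;';          -- placeholder dynamic statement
  l_pieces dbms_sql.varchar2a;
  l_cur    integer;
  l_rc     integer;
  l_len    pls_integer;
  l_pos    pls_integer := 1;
begin
  l_len := dbms_lob.getlength(l_sql);
  while l_pos <= l_len loop
    l_pieces(l_pieces.count + 1) := dbms_lob.substr(l_sql, 32000, l_pos);
    l_pos := l_pos + 32000;
  end loop;

  l_cur := dbms_sql.open_cursor;
  dbms_sql.parse(l_cur, l_pieces, 1, l_pieces.count, false, dbms_sql.native);
  l_rc := dbms_sql.execute(l_cur);
  dbms_sql.close_cursor(l_cur);
end;
/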

View 5 Replies View Related

Performance Tuning :: Only LOCAL Bitmap Indexes Are Permitted On Partitioned Tables

Feb 4, 2005

16:28:32 SQL> create bitmap index bp_idx_ag_id on transactions(type);

create bitmap index bp_idx_ag_id on transactions(type)
*
ERROR at line 1:
ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables

How do I create a bitmap index on a partitioned table?
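Bitmap indexes on a partitioned table must be local, i.e. equipartitioned with the table, so adding the LOCAL keyword to the same statement is the fix:

create bitmap index bp_idx_ag_id on transactions(type) local;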

View 3 Replies View Related

Performance Tuning :: How To Find Tables In Database On Which High DMLs Firing

Feb 4, 2011

How can I find the tables in the database against which the heaviest DML is being run?
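One low-overhead option is the table monitoring data the database already keeps for stale-statistics detection: DBA_TAB_MODIFICATIONS records approximate insert/update/delete counts per table since statistics were last gathered. A sketch (flushing first so recent activity is visible):

exec dbms_stats.flush_database_monitoring_info;

select table_owner, table_name, inserts, updates, deletes,
       inserts + updates + deletes as total_dml
from   dba_tab_modifications
where  table_owner not in ('SYS', 'SYSTEM')
order  by total_dml desc;

For finer-grained or time-windowed views, V$SEGMENT_STATISTICS (statistic 'db block changes') and the AWR view DBA_HIST_SEG_STAT are alternatives where licensing allows.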

View 5 Replies View Related

Performance Tuning :: Update Million Rows In One Table With Values From Another Tables?

Feb 15, 2011

I am trying to update a million rows in one table with values from another table.

Table being updated: CI_ADJ_CHAR, column CHAR_VAL_FK1
Table from which values will be used: CK_ADJ, columns (CX_ID, CI_ID)

The CI_ADJ_CHAR.CHAR_VAL_FK1 values match CK_ADJ.CX_ID and should be updated with the value CK_ADJ.CI_ID.

The CK_ADJ table has 1.3 million rows and both columns have indexes defined. The table definitions are mentioned below.

The CI_ADJ_CHAR table has 14 million rows, of which 1 million will be updated. It has an index on the ADJ_ID column but not on the CHAR_VAL_FK1 column.
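For a one-off update of this size, a single correlated UPDATE is usually enough; note that MERGE is not an option here, because the join column (CHAR_VAL_FK1) is also the column being updated, which Oracle rejects with ORA-38104. A sketch:

update ci_adj_char t
set    t.char_val_fk1 = (select s.ci_id
                         from   ck_adj s
                         where  s.cx_id = t.char_val_fk1)
where  exists (select 1
               from   ck_adj s
               where  s.cx_id = t.char_val_fk1);

The statement still makes one pass over the 14-million-row CI_ADJ_CHAR (there is no index on CHAR_VAL_FK1), with the lookups into CK_ADJ done either through its CX_ID index or via a hash semi-join; either way this is normally far cheaper than a row-by-row loop that commits as it goes.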

View 1 Replies View Related

How To Convert Long Data Type To CLOB

Nov 15, 2007

I am trying to insert data from a table in one DB into a table in another DB. One field's datatype is LONG in the first DB's table; the same field's datatype in the other DB is CLOB.

I used the TO_LOB function to convert from the LONG to the CLOB datatype.

My problem is that when I used TO_LOB, I got an "illegal use of LONG datatype" error.
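TO_LOB only works when the LONG column is read directly from a local table inside an INSERT ... SELECT or CREATE TABLE ... AS SELECT; selecting a LONG over a database link (or nesting it in other expressions) raises the "illegal use of LONG datatype" error. One workaround is to do the conversion on the source database first and then move the resulting CLOB (a sketch with hypothetical names; pulling CLOBs across a database link in an INSERT ... SELECT is supported on reasonably recent releases):

-- on the source database: convert LONG to CLOB locally
create table stage_tab as
select id, to_lob(long_col) as clob_col
from   source_tab;

-- on the target database: the CLOB can now be copied over the link
insert into target_tab (id, clob_col)
select id, clob_col
from   stage_tab@source_link;

The SQL*Plus COPY command is another option, since it handles LONG columns natively, but it only supports older datatypes and is deprecated.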

View 3 Replies View Related

Performance Tuning :: Create Small Functional Indexes For Special Cases In Very Large Tables

Apr 5, 2012

Create small functional indexes for special cases in very large tables.

When a column has one value in 99% of the records and another value that has to be searched for, it is possible to create a function-based index that maps the common value to NULL, so only the rows of interest are stored. The index will be small and rebuilds will be fast.

Example

create index vh_tst_decode_ind_if1 on vh_tst_decode_ind
(decode(S,'I','I',null),style)

The index can also be made more selective when the key is updated and there are many records, which would otherwise create more levels in the B-tree:

create index vh_tst_decode_ind_if3 on vh_tst_decode_ind
(decode(S,'I','I',null),
decode(S,'I',style,null)
)

The records can then be accessed like this:

SQL> select --+ index(vh_tst_decode_ind_if3)
2 style ,count(*)
3 from vh_tst_decode_ind
4 where
5 decode(S,'I','I',null)='I'
6 group by style
7 ;

[code]....

View 2 Replies View Related

SQL & PL/SQL :: ORA-22835 Buffer Too Small For CLOB To CHAR Or BLOB To RAW Conversion?

Sep 24, 2008

Recently I came across this issue, which gives me the following error.

ERROR:
ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 4907, maximum: 4000)

Now I am running a very simple query against one of the views in our database. The query is:

Select Prj_Num,
       PM_Comments
From   ProjectDetails
Where  Rel = '2008 10'

What I have discovered so far is that one of the fields in this view, named "PM_Comments", has more than 4000 bytes of information in it, which is not supported by the tools I have available on my computer. I have Oracle SQL*Plus, SQL*Plus Worksheet, Access and Excel installed on my machine. The DBAs for this database state that the field is working fine and the query executes without error, since they are using TOAD for SQL, which can read more than 4000 bytes.

What I have figured out so far is that "PM_Comments" is a LOB, and SQL*Plus has trouble reading more than 4000 bytes in one field. Because of this diagnosis, I tried the following queries, but they were not useful either.

Select Prj_Num,
       Substr(PM_Comments, 1, 4000)
From   ProjectDetails
Where  Rel = '2008 10'

Select Prj_Num,
       DBMS_LOB.Substr(PM_Comments, 4000, 1)
From   ProjectDetails
Where  Rel = '2008 10'

Both of the above queries fail with the same ORA-22835 error. I do not need all of the information in "PM_Comments"; I only need about the first 1000 characters of it.
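Since the error still reports a 4000-byte limit even with SUBSTR, the conversion is most likely happening inside the view itself (for example a TO_CHAR or SUBSTR around the LOB in the view definition) before the outer SUBSTR is applied. A sketch of what may be worth trying, assuming access to the underlying table (hypothetical name project_details_tab):

Select Prj_Num,
       DBMS_LOB.Substr(PM_Comments, 1000, 1) as PM_Comments_1k
From   project_details_tab
Where  Rel = '2008 10';

Querying the base table directly, or asking the DBAs for the view's definition (DBA_VIEWS.TEXT) to see how PM_Comments is derived, should show where the 4000-byte conversion is being introduced.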

View 10 Replies View Related

Precautions For Converting A Table Field From Long To Clob

Jul 31, 2012

I am more of a C/C++ guy and a relative amateur in Oracle. I have to change a table field from LONG to CLOB. I plan to do a simple ALTER TABLE, and as far as I know there won't be any issues.

Queries:
1. Although I have triple checked, is there any scenario under which there can be data loss during the datatype change? The data is very critical and no data loss can be tolerated.
2. Is there any easy way to update all the related views without having to do so manually? (See the sketch after this list.)
3. Are there any particular precautions I should take before introducing the change?
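A sketch of the mechanics, with hypothetical table, column and view names (the ALTER TABLE ... MODIFY route for LONG to CLOB is documented and preserves the data, but it rewrites every row, so back the table up and allow for the extra space and downtime):

alter table my_table modify (my_long_col clob);

-- views referencing the table are invalidated by the change;
-- they can be listed from the dictionary and recompiled rather than edited:
select owner, name, type
from   dba_dependencies
where  referenced_owner = 'MY_OWNER'
and    referenced_name  = 'MY_TABLE'
and    type             = 'VIEW';

alter view my_owner.my_view compile;

Views whose text does something LONG-specific would still need manual review, but a plain SELECT of the column keeps working once it is a CLOB.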

View 2 Replies View Related

How To Compare Data Of Clob And Long Datatypes Using DBLink

Aug 13, 2012

I would like to run the query below against all tables; however, it doesn't work on CLOB and LONG datatypes.

select * from owner.table_name
minus
select * from owner.table_name@remote_db;

(the statement is generated for each table from dba_tables where owner in ('....'))

ORA-00932: inconsistent datatypes: expected - got CLOB

How can I compare the data in CLOB and LONG columns over a database link?
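LONG and LOB columns cannot take part in a set operation such as MINUS, nor be compared across a database link directly, so the usual workaround is to compare a hash of the value instead, computed on each side. A sketch, assuming a helper function can be created on both databases and that EXECUTE on DBMS_CRYPTO is granted:

create or replace function clob_md5 (p_lob in clob) return varchar2 is
begin
  return rawtohex(dbms_crypto.hash(p_lob, dbms_crypto.hash_md5));
end;
/

-- on the remote database, expose the hashed value through a view:
create or replace view table_name_hashed as
select id, clob_md5(clob_col) as clob_hash from table_name;

-- locally, only scalar values then cross the link, so MINUS works:
select id, clob_md5(clob_col) from owner.table_name
minus
select id, clob_hash from owner.table_name_hashed@remote_db;

LONG columns are harder still, since they cannot be passed to PL/SQL functions from SQL; they generally have to be converted first (e.g. with TO_LOB into a staging table) before they can be compared.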

View 2 Replies View Related

Archiving And Purging Data From Huge Tables?

Apr 22, 2013

I'm currently working on a project to archive old data and then purge that same data from the main table.

Here is a detailed description:

There are around 50-odd tables from which I need to archive the old data (matching certain filter conditions, not date-based). That means I have to store the data in a temp table; once it is stored there, I have to delete those rows from the main table. The temp table will later be exported and stored in an archive database (a separate database).

These tables are very large. One of them is actually 250 GB in size, and all of these tables have many indexes built, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of a total of 540 million rows. On this table alone there are 50 bitmap indexes and 2 normal indexes. The table is partitioned on a date column, but this date column is not used/useful in identifying the old data. There are around 20 tables quite similar in size to the one described above; the rest are somewhat smaller.

We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle this activity? Most importantly, we need to be able to complete it within the specified 48-hour window.

The solution we are now thinking of is:

1. Create the temp table: create tmp_tbl as select * from main_table where <<conditions identifying old data>>

2. Once the temp table is created, make a copy of the index definitions that exist on the main table and then drop those indexes.

3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows (see the sketch below).

4. Once the bulk delete is finished, recreate the indexes on the main table using the definitions saved in the earlier step.

Our main worry is step #4. Considering the size of these tables and the number of indexes to be rebuilt, we are not sure how long the index re-creation will run for each table.
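A sketch of step 3, with hypothetical names and assuming the archived rows can be identified by the primary key captured in tmp_tbl; deleting in bounded batches keeps each transaction and its undo small:

declare
  l_rows pls_integer;
begin
  loop
    delete from main_table m
    where  exists (select 1 from tmp_tbl t where t.pk_id = m.pk_id)
    and    rownum <= 100000;
    l_rows := sql%rowcount;
    commit;
    exit when l_rows = 0;
  end loop;
end;
/

For step 4, rebuilding the bitmap indexes with NOLOGGING and PARALLEL (and altering them back afterwards) is the usual way to shorten the window; timing one representative index rebuild on the 250 GB table beforehand is really the only way to know whether everything fits in 48 hours.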

Depending on how it goes, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then, we are not sure whether we will be able to pull off this activity.

The database we are using is Oracle 10g.

View 1 Replies View Related






