SQL & PL/SQL :: Performance Tuning For Oracle Tables With Millions Of Records

Mar 11, 2013

Looking for ways to improve the performance of an Oracle table that holds millions of records. We currently have partitioning and indexing in place, but they don't seem to help.




Performance Tuning :: Update Million Rows In One Table With Values From Another Table?

Feb 15, 2011

I am trying to update a million rows in one table with values from another table.

Table being updated: CI_ADJ_CHAR (column CHAR_VAL_FK1)
Table supplying the values: CK_ADJ (columns CX_ID, CI_ID)

The CI_ADJ_CHAR.CHAR_VAL_FK1 values match CK_ADJ.CX_ID and should be updated with the value CK_ADJ.CI_ID.

The CK_ADJ table has 1.3 million rows and both columns have indexes defined. Table definition mentioned below.

The CI_ADJ_CHAR table has 14 million rows, of which about 1 million will be updated; it has an index on the ADJ_ID column but not on the CHAR_VAL_FK1 column.
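A set-based correlated UPDATE is the usual pattern here. Note that a MERGE cannot be used directly, because the join column (CHAR_VAL_FK1) is also the column being updated, which raises ORA-38104. A minimal sketch, assuming CX_ID is unique in CK_ADJ:

UPDATE ci_adj_char t
   SET t.char_val_fk1 = (SELECT s.ci_id
                           FROM ck_adj s
                          WHERE s.cx_id = t.char_val_fk1)
 WHERE EXISTS (SELECT 1
                 FROM ck_adj s
                WHERE s.cx_id = t.char_val_fk1);

The existing index on CK_ADJ(CX_ID) keeps each lookup cheap, and the EXISTS clause restricts the update to the ~1 million matching rows.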


Querying A Table With 129 Million Records - Performance Is Slow?

Oct 9, 2013

I have a table with 129 million records.

Even a simple SELECT COUNT(*) on the table takes more than a minute in SQL Developer.

The table structure is as below. The primary key is a sequence-generated NUMBER; there are 3 foreign keys and one non-unique index on the date column.

<Table_Name>
column1 NOT NULL NUMBER ( Primary Key)
column2 NOT NULL NUMBER ( FK1)

[Code].....
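For what it's worth, a COUNT(*) on a table like this can normally be satisfied by an index fast full scan of the NOT NULL primary key rather than a full table scan, so it is worth confirming what the optimizer actually chose, and whether parallelism changes the picture. A sketch (the table name is a placeholder):

EXPLAIN PLAN FOR SELECT COUNT(*) FROM big_table;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

SELECT /*+ PARALLEL(t 8) */ COUNT(*) FROM big_table t;

If the plan already shows INDEX FAST FULL SCAN, the minute is mostly physical I/O over 129 million index entries, and parallelism or a smaller index are the realistic levers.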


Performance Tuning :: Converted Non-partitioned Table (1 Million Rows) Into Range-partition

Mar 28, 2013

As per an article on Oracle Base, I have converted a non-partitioned table (1 million rows) into a range-partitioned table, but I don't see any performance improvement in the explain plan.
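Partitioning alone rarely changes a plan: the optimizer prunes partitions only when the partition key appears in the predicate. A quick check is to look at the Pstart/Pstop columns of the plan (a sketch, with a hypothetical table and partition key):

EXPLAIN PLAN FOR
SELECT * FROM range_tab WHERE created_date >= DATE '2013-01-01';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

If Pstart/Pstop show specific partition numbers rather than the full 1..N range, pruning is working; if your queries never filter on the partition key, no improvement should be expected.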


Performance Tuning :: How Oracle Organizes Records In Partitions

Mar 9, 2012

I have a question regarding how Oracle fetches records in the case of partitioning.

We have a table of nearly 6 crore (60 million) records. The table has several columns, among which the DBA has defined range partitioning on COL1 and hash subpartitioning on COL2:

CREATE TABLE ORDER_BOOK
(CUST_ID NUMBER(10),
ORDER_DATE DATE,
...........,
...........,
...........)
PARTITION BY RANGE (CUST_ID)
SUBPARTITION BY HASH (ORDER_DATE)
[code]........

I want to know how the rows will be organised across the tablespaces. The DBA didn't mention any tablespace name in the main partition definition and specified tablespace names only in the subpartition template, so how will the records be organised?
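In a composite range-hash table the data segments exist only at the subpartition level, so every row is physically stored in the tablespace of the subpartition it hashes into; the partition-level clause needs no tablespace of its own. A minimal sketch of how a subpartition template drives placement (tablespace names hypothetical):

CREATE TABLE order_book_demo
(cust_id    NUMBER(10),
 order_date DATE)
PARTITION BY RANGE (cust_id)
SUBPARTITION BY HASH (order_date)
SUBPARTITION TEMPLATE
 (SUBPARTITION sp1 TABLESPACE ts_data1,
  SUBPARTITION sp2 TABLESPACE ts_data2)
(PARTITION p1 VALUES LESS THAN (30000000),
 PARTITION p2 VALUES LESS THAN (MAXVALUE));

Each partition gets subpartitions sp1 and sp2, stored in ts_data1 and ts_data2. A row is first routed to the range partition matching its CUST_ID, then to the hash subpartition (and hence the tablespace) chosen by its ORDER_DATE.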


SQL & PL/SQL :: Inserting 300 Million Records Into Oracle Table Using Database Links?

Dec 19, 2011

I would like to know if we can insert 300 million records into an Oracle table using a database link. The target table is in production and the source table is in development, on different servers. The target table will be empty and will have its indexes disabled before the insert. Can this be accomplished in less than 1 hour?
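With an empty target and disabled indexes, a direct-path insert over the link is the standard approach; whether it fits inside an hour depends mostly on network throughput and redo volume. A sketch (link and table names hypothetical):

ALTER TABLE target_tab NOLOGGING;

INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM source_tab@dev_db_link;

COMMIT;

APPEND gives direct-path loading, and combined with NOLOGGING it minimises redo; take a backup afterwards, since a NOLOGGING load is not recoverable from the archived logs.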


Performance Tuning :: Merge Statement Tuning For 100M Records In Table?

Oct 31, 2011

I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE to update records. It runs for 13 hours, which is far too long.

SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,

[code].....
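One thing worth verifying before anything else: a PARALLEL hint on a MERGE parallelizes only the query side unless parallel DML has been enabled for the session, so the update/insert step may still be running serially:

ALTER SESSION ENABLE PARALLEL DML;

SELECT pdml_status FROM v$session
 WHERE sid = SYS_CONTEXT('USERENV', 'SID');   -- should show ENABLED

The ALTER SESSION must be issued before the MERGE starts.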


Performance Tuning :: Join With 30 Tables

Jan 16, 2012

I have to optimize a query that has the following characteristics:

- Takes 3 hours to process
- Performs the inner join with 30 tables
- Produces an output of 280 million records with 450 fields

First of all it is not feasible to make 30 updates (one for each table) to 280 million records.

The best solution I have found so far is to create 3 intermediate tables, each joining one third of the 30 tables, and at the end join the main table with these three intermediate tables.

I know you will ask (or maybe not) for the query and sample data, but it is impossible to create 30 examples.

How do you optimize this type of query, which joins many tables and produces a large output with (too) many columns? A sketch of the staging approach follows.
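A sketch of that staging idea, using direct-path CTAS so each intermediate result is written once with no undo (all names hypothetical):

CREATE TABLE stage_1 NOLOGGING PARALLEL 8 AS
SELECT /*+ parallel(m 8) */ m.pk_id, t1.col_a, t2.col_b   -- first third of the tables
  FROM main_tab m
  JOIN tab01 t1 ON t1.fk_id = m.pk_id
  JOIN tab02 t2 ON t2.fk_id = m.pk_id;

Repeat for stage_2 and stage_3, then join main_tab to the three stage tables for the final output. Keeping only the join key plus the columns each stage actually contributes holds the intermediate row size down, which matters most with 450 output fields.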


Performance Tuning :: Truncate On Some Tables

Jan 13, 2012

We have a procedure which truncates some of the tables. Most of the time it finishes in a short span of time, but for the last few days it has been taking much longer.

Where should I start the investigation?
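Truncate times usually balloon for one of a few reasons: the segments have accumulated many extents to deallocate, the instance has to write or invalidate a large number of dirty buffers for those tables, or the session is stuck waiting on a lock or I/O. Catching the procedure's session mid-run shows which (a sketch, assuming you can identify the SID):

SELECT sid, event, seconds_in_wait, blocking_session
  FROM v$session
 WHERE sid = :truncating_sid;

The EVENT column (e.g. local write wait, or an enqueue contention event) points directly at the bottleneck.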


Performance Tuning :: Delete A Number Of Records From A Table

Aug 3, 2010

I am using the script below to delete records from a table; it takes 1 hour to delete.

declare
  cursor c1 is
    select ownerid, ownertype from nightly_metric_projects;
  v1 c1%rowtype;
begin
  open c1;
  loop
    fetch c1 into v1;
    exit when c1%notfound;
    -- fixed: the fetched values live in v1, not in the cursor c1
    delete from dgt_itemeffortdata
     where ownertype = v1.ownertype
       and ownerid   = v1.ownerid;
  end loop;
  close c1;
  commit;
end;

nightly_metric_projects: 1,200 records
DGT_ITEMEFFORTDATA: 13,200,000 records
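With only 1,200 driving rows the loop itself is not the problem; each DELETE scans the 13.2-million-row table unless (OWNERTYPE, OWNERID) is indexed. A single set-based DELETE avoids the 1,200 separate scans altogether; a sketch:

DELETE FROM dgt_itemeffortdata d
 WHERE (d.ownertype, d.ownerid) IN
       (SELECT ownertype, ownerid FROM nightly_metric_projects);

One statement, one pass, and a hash semi-join against the small driving table instead of repeated scans.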


Performance Tuning :: Display All Records From TAB_ONE

Jul 8, 2010

My query is structured as below.

Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit

SELECT T1.COL1,T1.COL2,
T1.COL3,T2.COL2,
T3.COL1,T3.COL2,
T4.COL1,T4.COL3,
<CASE statements and calculations results - Some Amount1>,
<CASE statements and calculations results - Some Amount2>
[code]..........

First I need to display all the records from TAB_ONE, which contains more than 10 million records. As you can see, there are columns like AMOUNT_ONE and AMOUNT_TWO that involve complex calculations, themselves based on other calculations, and so on; there are about ten such amount columns in all. Finally these records have to be inserted into a new table.

To achieve this I have written nested inline queries to calculate the AMOUNT columns, but given the huge number of records it takes more than 8 hours to insert the results into the new table.
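If the destination is a brand-new table, a parallel direct-path CTAS avoids conventional-path inserts and undo entirely, and factoring the shared intermediate calculations into a WITH clause stops them being recomputed for every amount column. A sketch of the shape (names and arithmetic hypothetical):

CREATE TABLE new_table NOLOGGING PARALLEL 8 AS
WITH base_calc AS
 (SELECT /*+ parallel(t1 8) */ t1.col1, t1.col2,
         /* shared intermediate calculation here */ 0 AS calc1
    FROM tab_one t1)
SELECT col1, col2,
       calc1 * 2 AS amount_one,   -- each amount derives from calc1
       calc1 * 3 AS amount_two
  FROM base_calc;

The placeholder arithmetic is illustrative only; the point is to compute each shared expression once and derive the ten amounts from it.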


Performance Tuning :: How To Partition Tables And Indexes

Jan 12, 2011

So our situation is pretty simple. We have 3 tables.

A, B and C

The model is A ->> B ->> C.

Currently A, B and C are range partitioned on a created_date key; however, it's typical that only C is ever qualified with created_date. There are foreign keys from B -> A and from C -> B. We have many queries where the data is identified by state, which is indexed (currently non-partitioned) on columns in A; there are also indexes on the foreign keys that lead from C -> B -> A. Again, these are non-partitioned indexes at this time.

It is typical that we qualify A on either account or user or both, and there are (non-partitioned) indexes on these. We have a problem now because many of the queries use leading wildcards, e.g. account LIKE '%ACCOUNT', which often results in large full table scans. Our workaround has been to remove the leading wildcard.

We are wondering how we can benefit from partitioning and/or subpartitioning table A, since it's partitioned on created_date but rarely qualified by it. We are also wondering where and how we can benefit from either global or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
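For the foreign-key indexes the local variant is straightforward; since C is already range-partitioned on created_date, a local index simply inherits that partitioning (a sketch, column name hypothetical):

CREATE INDEX c_b_fk_ix ON c (b_id) LOCAL;

One caveat: a local index that does not lead with the partition key is non-prefixed, so a probe that cannot prune on created_date has to visit every index partition. That is fine for FK maintenance, but worth weighing against a global index for hot lookup paths.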


Performance Tuning :: Query Using Row Num In Where Clause With Millions Of Records

Dec 8, 2010

There is a table in the database with millions of records, and a query, Select rowid, ANI, DNIS, message from tbl_sms_talkies where rownum<=:"SYS_B_0", that is using high CPU and also has a very high number of executions.


Performance Tuning :: Moving Tables To Encrypted Tablespace?

Aug 25, 2013

I am using Oracle version 11.2.0.3.0. We are planning to move ~40 tables/indexes to new encrypted tablespaces as part of TDE (Transparent Data Encryption). Currently three tables are ~30GB in size, one is ~800GB, and the others are <2GB each; the tables/indexes are currently spread across different tablespaces.

Should I create as many encrypted tablespaces as there were unencrypted ones before, or should I create one encrypted tablespace and move all the tables/indexes into it?
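Encryption is a property of each tablespace, so either layout works functionally; the usual deciding factors are administrative, e.g. keeping the ~800GB table in its own tablespace for backup and restore granularity. A sketch of creating one encrypted tablespace and moving a table plus its index into it (names and sizes hypothetical):

CREATE TABLESPACE enc_data
  DATAFILE '/u01/oradata/db/enc_data01.dbf' SIZE 32G
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

ALTER TABLE big_tab MOVE TABLESPACE enc_data;
ALTER INDEX big_tab_pk REBUILD TABLESPACE enc_data;

Remember that a table MOVE leaves its indexes UNUSABLE, so every dependent index must be rebuilt afterwards.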


Performance Tuning :: Couldn't Have Indexes On Tables In Production

Sep 27, 2013

I have tried several alternatives, like rearranging the order of tables in the join and moving WHERE conditions earlier, but with no success. It's a bottleneck, and I cannot have indexes on these tables in production. I want to change the approach in the subquery.

SELECT
g.COLUMN1,
g.COLUMN2,
e.COLUMN3,
g.COLUMN4,
MIN(e.dat1) KEEP ( DENSE_RANK FIRST ORDER BY date2 Desc) * -1,
min(to_char(date3,'dd-mm-yyyy'))
[code]....


Performance Tuning :: Index Has Been Created On Both Depart_id For Two Tables

Jul 30, 2010

SELECT department_id
FROM (SELECT department_id
FROM employees
UNION
SELECT department_id
FROM employees_old )
WHERE department_id=100;
[code]....

An index has been created on department_id in both tables. The only differences I observed between the two were 1 recursive call for the 1st SQL, one additional VIEW step in its plan, and a small difference in bytes sent over the network.

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
[code]....

Is there any performance difference between the two SQL statements above?
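The extra VIEW step and the sort most likely come from UNION's duplicate elimination (a SORT UNIQUE over the combined row sets). If the two tables cannot contain overlapping department_ids, or duplicates are acceptable, UNION ALL removes that work:

SELECT department_id
FROM (SELECT department_id
      FROM employees
      UNION ALL
      SELECT department_id
      FROM employees_old)
WHERE department_id=100;

At 6 consistent gets both versions are trivially fast, but on large row sets the SORT UNIQUE behind UNION is a real cost.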


Updating 15 Million Records

Jul 12, 2012

I'm using the query below to update a VOTER table with over 15 million records, but it's taking ages to finish. I am using 11gR2 on Linux.

the query:

MERGE INTO voter dst
USING (
SELECT voterid,
pollingstation || CASE
WHEN ROW_NUMBER () OVER ( PARTITION BY pollingstation
[code]........


Best Way To Update 8 Out Of 10 Million Records?

Jun 21, 2013

I want to update 8 million records of a table that has 10 million records. What would be the best strategy if the table has a BLOB column and holds 600GB worth of data, of which the BLOB itself is 550GB? I am not updating the BLOB column. With non-BLOB data I have usually used the "CREATE TABLE new_table AS SELECT <do the update "here"> FROM old_table;" method.
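One point in your favour: assuming the BLOBs are stored out of line (at an average of ~55KB each they will be), an UPDATE that never touches the BLOB column rewrites only the row pieces, not the 550GB LOB segment, whereas CTAS would have to copy the LOBs. So a plain parallel update may be the better fit here; a sketch (column names hypothetical):

ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(t 8) */ big_tab t
   SET t.status = 'PROCESSED'
 WHERE t.status = 'PENDING';

COMMIT;

Updating 8 million of 10 million rows still generates substantial undo and redo, so batch windows and undo tablespace sizing matter.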


Performance Tuning :: Options For Optimizing SQL Running Against Wide Tables

Nov 13, 2012

The scale of the tests that generate the following scenario is not huge right now, only 50 users simulated (or you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) are all running against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of IO (5GB per request), making requests queue for disk availability (the disks are set up as RAID 0/1); it's even noticeable for as few as 3 threads.

I have on one occasion got the SQL to execute in 13 seconds for a single user, but this appears short-lived, as when stats were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, increasing the PGA/SGA (currently 4/6GB respectively) will only delay the queuing a bit, and compression won't work either. In short it looks as though we've already hit hardware restrictions for this particular scenario.

As I can't really explain why my rendered query no longer takes 13 seconds, it's niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising these types of queries against such wide tables, in other words possibilities that we haven't considered...

Attached is the query and plan.


Performance Tuning :: Create Hash Partition On Fact Tables?

Aug 5, 2010

I have to create hash partitions on fact tables. Can we use a temporary tablespace, or must it be a permanent tablespace?
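Table partitions must live in permanent tablespaces; a temporary tablespace only holds sort and hash work areas, never table segments. A sketch spreading hash partitions across several permanent tablespaces (names hypothetical):

CREATE TABLE fact_sales
(sale_id NUMBER,
 cust_id NUMBER,
 amount  NUMBER)
PARTITION BY HASH (cust_id)
PARTITIONS 16
STORE IN (ts_fact1, ts_fact2, ts_fact3, ts_fact4);

The 16 partitions are assigned round-robin across the four tablespaces; a power-of-two partition count keeps the hash distribution even.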


Performance Tuning :: Creating Index On Base Tables Of A View?

Dec 9, 2010

I have a view over base tables holding historical data for the previous 60 months (one table per month), combined with UNION ALL operators. Will creating indexes on those base tables improve performance, or will creating a primary key with DISABLE NOVALIDATE improve data retrieval?

The view has around 8 million rows and is used as a fact table with 4 dimension tables. A DTS package on the MSSQL side refreshes an OLAP cube by retrieving data from these tables in Oracle.


Performance Tuning :: LONG To CLOB Conversion In Huge Tables

Jan 25, 2013

We have a huge table in production with a LONG column, and we are trying to change its datatype to CLOB. The table has 120 million records and is 270 GB in size.

We tried using the Oracle expdp/impdp option for the conversion in our perf environment. With a parallelism of 32, the export completed in 1.5 hours; however, the import took 13 hours.

I also tried the TO_LOB option using inserts; it ran for 20 hours and I killed the process. Are there any ways to improve the performance of a LONG to CLOB conversion on huge tables?
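One option not mentioned above is the in-place conversion Oracle has supported since 8i; it still rewrites every row, so it needs a maintenance window and plenty of undo for a 270 GB table, but it avoids the export/import round trip entirely (a sketch, names hypothetical):

ALTER TABLE big_tab MODIFY (long_col CLOB);

ALTER TABLE big_tab MOVE;   -- optional: reorganise the table afterwards

Whether this beats the 13-hour impdp depends on your I/O, but it is worth timing in the perf environment alongside the other two approaches.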


Performance Tuning :: Function - Insert Around 900000 Records In Name_compress Table?

Apr 12, 2013

I have created the function below to remove specific words/special characters from a string, and it produces the expected result. Using this function I need to insert around 900,000 records into the NAME_COMPRESS table. The insert takes around 7 minutes; how can we tune this function so that the insert executes within 1-2 minutes?

Function -

CREATE OR REPLACE FUNCTION NAME_FN(IN_STRING1 VARCHAR2)
RETURN VARCHAR2 IS
V_OUTPUT VARCHAR2(300);
V_OUTPUT1 VARCHAR2(300);
V_OUTPUT2 VARCHAR2(300);
V_OUTPUT3 VARCHAR2(300);

[code]...
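Before tuning the function body, note that much of the 7 minutes is likely SQL-to-PL/SQL context switching: one call per row, 900,000 times. Two cheap mitigations are declaring the function DETERMINISTIC (if the same input always yields the same output) so repeated values can be cached within a fetch, or pushing the cleanup into plain SQL so no function call happens at all. A sketch of the second idea, with hypothetical cleanup rules:

INSERT /*+ APPEND */ INTO name_compress (clean_name)
SELECT TRIM(TRANSLATE(UPPER(raw_name), 'A.,-/&', 'A'))
  FROM source_names;

TRANSLATE keeps the characters listed in the third argument and drops the remaining characters of the second argument, so a chain of TRANSLATE/REPLACE calls can usually reproduce the function's logic without leaving the SQL engine.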


SQL & PL/SQL :: Utl_match Comparing Million Records?

Nov 22, 2010

The problem: 2 tables, one with 2 million records and the other with 8,000 records.

For each record in one table I need to check whether there is a similar string in the other table.

I've created a procedure that does the following:

begin
  for r1 in (select col1, col2, col3, col4 /* ... */ from table1) loop
    for r2 in (select col1 from table2) loop
      -- edit_distance_similarity is one of UTL_MATCH's 0-100 scores
      if utl_match.edit_distance_similarity(r1.col1, r2.col1) > 80 then
        insert into tablex (col1, col2, col3, col4 /* ... */)
        values (r1.col1, r1.col2, r1.col3, r1.col4 /* ... */);
      end if;
    end loop;
  end loop;
end;

The thing is that this procedure takes forever to finish: about 8 days.

Is it because I'm using the UTL_MATCH function? Is there a way to speed this up?
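The row-by-row loop performs 2,000,000 x 8,000 = 16 billion comparisons, each wrapped in a context switch into PL/SQL. The comparison count cannot shrink without a filtering strategy, but letting SQL drive it as a join removes the per-call overhead and allows parallelism; UTL_MATCH works fine in plain SQL. A sketch using EDIT_DISTANCE_SIMILARITY (which returns 0-100):

INSERT /*+ APPEND */ INTO tablex (col1, col2, col3, col4)
SELECT /*+ parallel(t1 8) */ t1.col1, t1.col2, t1.col3, t1.col4
  FROM table1 t1
  JOIN table2 t2
    ON utl_match.edit_distance_similarity(t1.col1, t2.col1) > 80;

Adding a cheap pre-filter to the join (e.g. matching first letters or comparable string lengths) cuts the candidate pairs dramatically before the expensive similarity call runs.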


PL/SQL :: How To Insert 3 Million Records From One Table To Another

Aug 13, 2013

I want to know how we can insert more than 3 million records from one table into another. Can we use BULK COLLECT and FORALL to insert all the data? Or can we use CREATE TABLE tableB AS SELECT * FROM tableA;? Which of these performs better?
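For a straight copy, the direct-path options generally win, because the rows never pass through PL/SQL at all; BULK COLLECT/FORALL is the right tool only when per-row procedural logic is unavoidable. A sketch of both direct-path forms:

-- target does not exist yet:
CREATE TABLE tableB NOLOGGING AS SELECT * FROM tableA;

-- target already exists (ideally empty, indexes disabled):
INSERT /*+ APPEND */ INTO tableB SELECT * FROM tableA;
COMMIT;

Three million rows should move in seconds to minutes this way, depending on row width and I/O.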


PL/SQL :: How To Update A Table That Has Around 1 Million Records

Sep 21, 2012

Let's take the basic EMP table for our reference, and let's assume it contains around 60,000 records, all with DEPTNO initially 10. Please provide an update statement that updates the DEPTNO column of the EMP table (ordered by EMPNO), incrementing DEPTNO by 1 for every 120 records (10, 11, 12, etc.):

First 120 records: deptno 10.
Next 120 records: deptno 11, and so on.
The last 120 records should be updated with deptno 500.
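A sketch using ROW_NUMBER to derive each employee's group. Note that 60,000 rows in groups of 120 gives 500 groups, so starting from 10 the last group lands on 10 + 499 = 509; adjust the constants if the final value must be exactly 500:

MERGE INTO emp e
USING (SELECT empno,
              10 + TRUNC((ROW_NUMBER() OVER (ORDER BY empno) - 1) / 120)
                AS new_deptno
         FROM emp) s
   ON (e.empno = s.empno)
 WHEN MATCHED THEN
   UPDATE SET e.deptno = s.new_deptno;

Rows 1-120 by EMPNO get (rn - 1)/120 = 0, hence DEPTNO 10; rows 121-240 get 11; and so on.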


Performance Tuning :: Only LOCAL Bitmap Indexes Are Permitted On Partitioned Tables

Feb 4, 2005

16:28:32 SQL> create bitmap index bp_idx_ag_id on transactions(type);

create bitmap index bp_idx_ag_id on transactions(type)
*
ERROR at line 1:
ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables

How do I create a bitmap index on partitioned tables?
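ORA-25122 means exactly what it says: on a partitioned table a bitmap index must be equipartitioned with the table, so the fix is the LOCAL keyword:

CREATE BITMAP INDEX bp_idx_ag_id ON transactions(type) LOCAL;

This creates one bitmap index segment per partition of TRANSACTIONS; global (or non-partitioned) bitmap indexes on partitioned tables are simply not supported.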


Performance Tuning :: How To Find Tables In Database On Which High DMLs Firing

Feb 4, 2011

How do I find the tables in the database against which a high volume of DML is being executed?
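Provided table monitoring is on (the default since 10g), the *_TAB_MODIFICATIONS views track approximate DML counts since statistics were last gathered; flushing the in-memory counters first makes the figures current. A sketch:

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_owner, table_name, inserts, updates, deletes
  FROM dba_tab_modifications
 ORDER BY inserts + updates + deletes DESC;

The counts reset each time statistics are gathered on a table, so they measure activity since the last gather rather than all-time totals.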


SQL & PL/SQL :: Joins On Two Tables With 10 Million Rows Each

Jun 11, 2010

We have two tables, TableA and TableB, that contain lists of accounts and balances. The requirement is to compare the balances of the accounts in both tables and, where there is a difference, record that difference along with the account number in another table.

Both TableA and TableB contain more than 10 million rows. What is the best way to do this task in PL/SQL? A join between TableA and TableB to find the differences has become very slow due to the large volume.
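At this volume a single set-based pass with a hash join, written out with a direct-path insert, is usually far faster than any PL/SQL loop; the key is enough PGA for the hash join to avoid spilling. A sketch (column names hypothetical, and assuming every account appears in both tables):

INSERT /*+ APPEND */ INTO balance_diff (account_no, balance_a, balance_b)
SELECT a.account_no, a.balance, b.balance
  FROM tablea a
  JOIN tableb b ON b.account_no = a.account_no
 WHERE a.balance <> b.balance;

COMMIT;

If accounts can exist in only one of the tables, a FULL OUTER JOIN with NVL handling captures those differences too.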


SQL & PL/SQL :: Update 10 Million Records Table With Certain Condition?

Sep 16, 2013

What are the ways to update a 10-million-record table subject to a certain condition?







