Performance Tuning :: How To Increase Data Retrieval / Insertion Speed

Oct 24, 2013

My database has more than 0.5 million records, and my Forms sometimes respond very slowly. How can I increase data retrieval / insertion speed?

Performance Tuning :: Which Index Will Be Better For Fast Retrieval Of Data

Nov 12, 2010

I created a view on the production server which takes almost 10 to 12 minutes to show data. The view joins 3 or 4 tables, and all primary and unique key columns on those tables are indexed. Which index will be better for fast retrieval of data?

Performance Tuning :: Speed Up Gathering Of Stats

Jun 2, 2011

I am gathering stats using the block below on a table with some 3 million records and 6 indexes. What is the relevance of the value 4 here (i.e., method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4')? If I increase 4 to 250, will the speed of gathering stats change? My intention is to speed up stats gathering.

begin
  dbms_stats.gather_table_stats(
    ownname    => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'),
    tabname    => 'LEGAL_VIEW_TARGET',
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4',
    cascade    => TRUE
  );
end;
/
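
For context (a hedged note, not from the original thread): SIZE n in method_opt caps the number of histogram buckets per indexed column, so raising 4 to 250 collects more detailed histograms and, if anything, slows gathering down. The usual speed levers are sampling and parallelism, roughly as in this sketch:

begin
  dbms_stats.gather_table_stats(
    ownname          => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'),
    tabname          => 'LEGAL_VIEW_TARGET',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- sample instead of reading every row
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    degree           => 4,                            -- parallel gathering, assuming CPU headroom
    cascade          => TRUE
  );
end;
/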

Performance Tuning :: How Expensive (speed) Is Unique Versus Primary Key In Large Table

Aug 15, 2011

I have two design alternatives and need to understand how expensive (in speed) one is against the other for a medium-size table (100K-200K records):

create table xyz
(
f1 number not null,
f2 varchar2(20) not null,
f3 number not null,
f4 varchar2(50),

[code]....

The idea is to optimize the design by using a PK instead of the 3 key columns, and there is a debate whether searching a unique-indexed field (2nd scenario) is the same speed as searching a PK field (1st scenario).
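
A sketch of the two designs as I read them (the surrogate-key column id is my assumption, not from the post):

-- Scenario 1: composite natural primary key
create table xyz_pk
(
  f1 number not null,
  f2 varchar2(20) not null,
  f3 number not null,
  f4 varchar2(50),
  constraint xyz_pk_pk primary key (f1, f2, f3)
);

-- Scenario 2: surrogate primary key plus a unique constraint on the natural key
create table xyz_uk
(
  id number primary key,           -- hypothetical surrogate
  f1 number not null,
  f2 varchar2(20) not null,
  f3 number not null,
  f4 varchar2(50),
  constraint xyz_uk_uk unique (f1, f2, f3)
);

Both constraints are enforced by B*Tree indexes, so an equality probe on either should be essentially the same work; at 100K-200K rows the difference is likely to be noise.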

Performance Tuning :: Real Time Speed Improvements And Auto-trace Results

Feb 4, 2011

I've got a query running a select count(*) over a table. The default plan takes on the order of 15 minutes to return; a plan hinted to use a different index takes 3 minutes.

Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running autotrace against the two queries I see fairly different values, as one would expect.

Query

select count(*)
from   fulfilmentitem bfi
where  created >= sysdate - 30
and    bfi.status = 'FA'
and    bfi.fulfilmentmethod = 'D';
Slow run
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 33119 (1)|
| 1 | SORT AGGREGATE | | 1 | 15 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM | 12525 | 183K| 33119 (1)|
|* 3 | INDEX RANGE SCAN | IDX_FULFIL_METHODSTATUS | 250K| | 1786 (1)|
----------------------------------------------------------------------------------------------
[code]....

IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD & STATUS in that order.
IDX_BFI_CREATED is on CREATED and is approx 70% of the size of the other index

The row counts estimated in the explain plan are out; the count(*) comes in at 32.8K rows. As you will have seen, the fast run shows a pretty significant increase in consistent gets compared to the slow run, and a decent though not dramatic drop in physical reads.

My uncertainty is around whether these changes in consistent get / physical read values would typically be enough to explain the real-time improvements I'm observing, or whether other (albeit perhaps temporary) factors are involved. It is a production OLTP environment, so the data is rapidly changing, and that may be a factor.

I know it can never be an exact science without intimately knowing the hardware, current loads, etc., but there's enough experience on these boards to get a loose handle on whether the time shifts between the queries are likely to reflect the stat changes, or whether those differences alone typically wouldn't account for it.

I'm thinking about instructing the query to ignore its original plan, but I'm hesitant to do so without being a little more confident that it's not just a timing thing, or that something other than the change of index is causing the improvement. From the autotrace stat changes observed I couldn't put my hand on my heart and say "yup - that change is good, ignore the default index all the time for this job".
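
For reference, the hinted variant presumably looks something like this (the hint form is standard; the alias and index name come from the post):

select /*+ index(bfi IDX_BFI_CREATED) */ count(*)
from   fulfilmentitem bfi
where  created >= sysdate - 30
and    bfi.status = 'FA'
and    bfi.fulfilmentmethod = 'D';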

Performance Tuning :: One Job Is Hanging After SGA Increase?

Mar 11, 2013

Additional Information

Step 1: Increased physical memory on one node from 32 GB to 48 GB.
Step 1 Impact: DB ran the same as before.
Step 2: Increased SGA from 12 GB to 15 GB.
Step 2 Impact: DB ran the same as before for 1 day; the next day one reporting job was hanging.
Step 3: Increased DB_CACHE_SIZE from 5 GB to 7 GB.
Step 3 Impact: Overall CPU utilization was high, with no effect on the reporting job.
Step 4: Decreased DB_CACHE_SIZE from 7 GB back to 5 GB.
Step 4 Impact: CPU utilization came down a little, but still no effect on the reporting job.

Our main concern now is why CPU utilization is going high, because we did the same thing last time and got positive results.

Performance Tuning :: Restart Database After Increase Db_cache_size?

Aug 23, 2012

SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 1152M
sga_target big integer 0

[code]....

In the scenario above, the database is not using ASMM and runs on an spfile. If I want to increase the db_cache_size parameter, do I need to bounce the instance?
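
A hedged sketch of the two cases (the sizes are illustrative, not from the post):

-- with manual SGA management (sga_target = 0), db_cache_size is dynamic, so no
-- bounce is needed as long as the new total SGA still fits under sga_max_size:
alter system set db_cache_size = 800M scope = both;
-- sga_max_size itself is static; raising it only takes effect after a restart:
alter system set sga_max_size = 2G scope = spfile;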

Performance Tuning :: Increase Few Of Columns Size In The Table

Feb 14, 2013

We have a table emp_details with 23,772,889 records. Our requirement is to increase the size of a few columns in emp_details. We are using the ALTER statement below, which takes around 2 hours.

ALTER TABLE emp_details
MODIFY
(
address char(90)
,department char(30)
)
/

Is there any way to improve the performance of the above statement?
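
One likely reason the ALTER is slow: widening a CHAR column must rewrite (re-blank-pad) every one of the ~23.7 million rows; if the columns were VARCHAR2, widening would be a near-instant dictionary-only change. A commonly suggested alternative is a parallel, nologging CTAS followed by a rename; a hedged sketch with hypothetical column names (emp_id stands in for the real ones):

create table emp_details_new parallel 8 nologging as
select emp_id,
       cast(address    as char(90)) as address,
       cast(department as char(30)) as department
from   emp_details;
-- then drop/rename, and re-create indexes, constraints, grants and triggers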

Performance Tuning :: ORA-01555 - Increase Undo Size Or Retention?

Apr 22, 2013

I got the following error yesterday:

ORA-01555 caused by SQL statement below (SQL ID: fdxcyoin67ty8t, Query Duration=380128 sec, SCN: 0x0229.ff00afd0):

Following are the existing settings:

SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 96000
undo_tablespace string undo

[code]....

Following are the details from v$undostat:

select begin_time, end_time, undotsn, undoblks, maxquerylen, maxqueryid, activeblks, unexpiredblks, expiredblks, tuned_undoretention from v$undostat
where trunc(begin_time)=trunc(sysdate)-1 order by begin_time;
BEGIN_TIME END_TIME UNDOTSN UNDOBLKS MAXQUERYLEN MAXQUERYID ACTIVEBLKS UNEXPIREDBLKS EXPIREDBLKS TUNED_UNDORETENTION
-------------- -------------- ---------- ---------- ----------- ------------- ---------- ------------- ----------- -------------------
21-04-13 00:08 21-04-13 00:18 1 12733 378446 duqnawh32hp4u 91152 7068448 225440 345600
21-04-13 00:18 21-04-13 00:28 1 8951 379047 duqnawh32hp4u 99344 7072800 225440 345600
21-04-13 00:28 21-04-13 00:38 1 14073 379650 duqnawh32hp4u 90128 7075872 234656 345600

[code]....

Following are the details from the AWR report (00:00 till 01:00 of 21-Apr-2013); note that the error was produced at 00:42.

Undo Segment Summary DB/Inst: DBCPY/dbcpy01 Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed

[code]....

Undo Advisor information taken 'now' is as follows:

SQL> select dbms_undo_adv.longest_query(sysdate-2,sysdate) from dual;
DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
379650
SQL> select dbms_undo_adv.required_retention from dual;

[code]....

In the above situation, what should be my first choice (assuming increasing space is not an issue) - increase the undo tablespace, or increase undo retention?

If the latter is the choice, then what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case shall I set it to something > max(maxquerylen), i.e. 379,650 + X? Or shall I increase the undo tablespace size?

From the Undo Advisor output it looks to me that even if I increase the undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DMLs). Is that right?
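
If the retention route is chosen, a hedged sketch (the value just needs to clear the longest observed query of 379,650 seconds; the tablespace name comes from the post):

alter system set undo_retention = 380000 scope = both;
-- optionally, at the cost of possibly failing DML when undo space runs short:
-- alter tablespace undo retention guarantee;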

Forms :: Any Way To Speed Up Performance Of Go_record Built-in

Mar 4, 2010

Is there any way to speed up the performance of the GO_RECORD built-in, or is there an alternative way to do it?

I have a table with nearly 30,000 rows, and I would like to implement a text field that lets the user jump to a specified record. The only problem is that if they jump too far away it takes a long time to load (going from the beginning to the end of the 30,000 takes over a minute).

This problem doesn't arise if all the records, or up to the one they are jumping to, have been fetched already, but even if I fetch all records at the beginning it will still take a long time to initially load them.
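
One commonly suggested alternative, as a hedged Forms PL/SQL sketch (the block, item and column names are hypothetical): instead of walking to record N with GO_RECORD, re-query the block directly at the target row so only that row needs fetching.

declare
  v_target varchar2(20) := :control.jump_to;  -- value the user typed
begin
  set_block_property('emp_block', default_where, 'emp_id = ' || v_target);
  go_block('emp_block');
  execute_query;
  set_block_property('emp_block', default_where, '');  -- reset for normal queries
end;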

Performance Tuning :: Same Data But Different Performance Results

Sep 3, 2010

What are the principal things to look at when the same query gives different performance results? I have 2 different databases: the plan and the data are the same, but the performance results are very different.

BLOB Based Schema - Performance Of Loading / Insertion

May 17, 2012

Would using a blob-based schema load noticeably faster than a binary_double-based schema?

Blob Scenario:

Load 1 row: 5 columns (1 integer column, 4 blob columns) of size X
VERSUS

Double Scenario:

Load 10,000 rows: 5 columns (1 integer column, 4 binary_double columns) of size X

While the benefit of the rows approach is obviously the ability to query the values, I'd like a quick answer concerning loading/insertion performance. Associative array binding is used for loading from a .NET client. Also, would the answer hold true for 200 columns instead of just 5?
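
For concreteness, the two layouts being compared as I read the post (table and column names are hypothetical):

create table load_blob
(
  id number,
  b1 blob, b2 blob, b3 blob, b4 blob     -- 1 row carries all X values packed
);

create table load_dbl
(
  id number,
  d1 binary_double, d2 binary_double, d3 binary_double, d4 binary_double
);                                       -- 10,000 array-bound rows per logical load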

SQL & PL/SQL :: Data Retrieval

Jan 17, 2012

I have one table, breco_data, which has two columns: one is partycode and the second is cheque number. I want to filter the partycode and cheque number values which are not available in the breco_data table, and a party cannot have the same cheque number twice.
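
A hedged reading with illustrative SQL (new_cheques is a hypothetical staging table for the incoming pairs; column names follow the post):

-- pairs not already present in breco_data:
select n.partycode, n.cheque_no
from   new_cheques n
where  not exists (select 1 from breco_data b
                   where b.partycode = n.partycode
                     and b.cheque_no = n.cheque_no);

-- cheque numbers reused across more than one party:
select cheque_no
from   breco_data
group by cheque_no
having count(distinct partycode) > 1;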

Increase In Performance Through Re-Indexing

Feb 28, 2011

I'm using Oracle DB and have a performance issue in one of my portlets. Can re-indexing work to increase the performance?

How To Increase DML Statement Performance

Jun 27, 2012

How do we increase DML statement performance?

Forms :: Data Retrieval

Jan 16, 2013

What could be the cause of the following scenario?

A form is running, and data is entered and saved. After saving on the form, the data is present when checked in the backend. But when the same form is used again to retrieve data, no data is shown; and if the same data (previously saved) is entered again, an error comes up that the data is already in the database.

Could this be related to any restrictions that might be placed on the tables, or something similar?

Performance Tuning :: Split Data Separated By Comma / Then Create Collection With Data Mapped From Other Columns

Sep 25, 2013

DB Used : Oracle 10g.

A table X : NUM, INST are column names

NUM ----- INST

1234 ----- 23,22,21,78
2235 ----- 20,7,2,1
1298 ----- 23,22,21,65,98
9087 ----- 20,7,2,1

Based upon the requirement:

1) Split the values from the "INST" column: suppose 23.
2) Find all values from the "NUM" column for the above split value, i.e. 23.

Eg:

For INST = 23, the corresponding "NUM" values are 1234 and 1298.

3) Save these values into

A table Y : INST, NUM are column names.

INST NUM
23 1234,1298

1) I have a thousand records in table X, and for all of those records I need to split and save the data into table Y. Hence, I need to do this task with the best possible performance.

2) After this, whenever new data comes into table X, the above 'split & save' operation should be called automatically and append the corresponding data wherever possible (see the sketch below).
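
One way to do the split-and-regroup in a single statement on 10g (where LISTAGG does not yet exist) is REGEXP_SUBSTR to split plus the SYS_CONNECT_BY_PATH aggregation trick; a hedged sketch, assuming at most 50 comma-separated items per INST string:

insert into y (inst, num)
select inst,
       ltrim(max(sys_connect_by_path(num, ','))
               keep (dense_rank last order by rn), ',') as num
from (
  select inst, num,
         row_number() over (partition by inst order by num) as rn
  from (
    -- one row per (num, single inst value) after splitting the csv string
    select x.num,
           to_number(regexp_substr(x.inst, '[^,]+', 1, t.n)) as inst
    from   x,
           (select level n from dual connect by level <= 50) t
    where  t.n <= length(x.inst) - length(replace(x.inst, ',')) + 1
  )
)
group by inst
connect by inst = prior inst and rn = prior rn + 1
start with rn = 1;

For requirement 2, a statement-level AFTER INSERT trigger on X or a scheduled job could re-run the same logic restricted to the new rows.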

SQL & PL/SQL :: Replace The Like Operator To Increase The Performance

Jul 16, 2012

How can I replace the LIKE operator to increase performance? It is taking a long time and not using the index.

SELECT *
FROM emp
WHERE ename like '%AL';
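
A hedged workaround sketch: a leading wildcard ('%AL') cannot use a normal index on ename, but a function-based index on the reversed string can, because the reversed pattern 'LA%' has a fixed prefix. rev_str is a hypothetical deterministic helper (the built-in REVERSE() is undocumented); Oracle Text is the heavier alternative.

CREATE OR REPLACE FUNCTION rev_str(p VARCHAR2) RETURN VARCHAR2 DETERMINISTIC IS
  v VARCHAR2(4000);
BEGIN
  FOR i IN REVERSE 1 .. LENGTH(p) LOOP
    v := v || SUBSTR(p, i, 1);  -- build the string back to front
  END LOOP;
  RETURN v;
END;
/

CREATE INDEX emp_ename_rev_ix ON emp (rev_str(ename));

SELECT * FROM emp WHERE rev_str(ename) LIKE 'LA%';  -- 'LA%' is '%AL' reversed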

Table Data Storage And Retrieval?

Apr 10, 2008

Let me explain in detail. I have three tables

1. Emp Table: Columns-> EMPID and DeptID
2. Dept Table: Columns-> DeptName and DeptID
3. Team table : Columns -> Date, EmpID1, EmpID2, DeptNo.

There is a stored procedure which runs every day and, for "EVERY" deptID that exists in the dept table, selects two employees from the emp table and puts them in the team table. Assuming there are several thousand departments in the dept table, the amount of data entered into the team table every day is tremendous.

If I continue to run the stored proc for 1 month, the team table will have lots of rows in it.

The real problem is when I want to retrieve data for an employee (empid1 or empid2) from the team table and view the related details like date, deptno, and empid1/empid2 from the emp table.

How do we optimize the data retrieval and storage for the team table, and reduce the query and data retrieval time from it?
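
A hedged sketch of one common approach: index each employee column separately and probe them with UNION ALL, since an OR across two different columns tends to defeat a single index. The date column is renamed team_date here because DATE is a reserved word; all names are otherwise as in the post.

create index team_emp1_ix on team (empid1, team_date);
create index team_emp2_ix on team (empid2, team_date);

select t.team_date, t.deptno, t.empid1, t.empid2
from   team t
where  t.empid1 = :emp
union all
select t.team_date, t.deptno, t.empid1, t.empid2
from   team t
where  t.empid2 = :emp;

For storage, range-partitioning the team table by date would also keep each day's load, and any later purge, cheap.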

Networking And Gateways :: How To Increase Oracle SQL*Net Performance

Nov 25, 2011

One of my MOD_PLSQL-based Oracle APEX applications is running on the web. I have almost 1000 web users accessing the application, and at least 250-300 users are always online. When I run ADDM in the Oracle EM web interface, I see suggestions like "Investigate the cause of SQL*Net more data to client", and I am getting complaints about poor server response. Here is my system configuration:

Database Server Host: DBServer (Oracle Database 10.2.0.3)
HTTP Server Host: OraHTTP (Oracle Companion CD 10g)

DADS.conf HTTP Server configuration
MOD_PLSQL database access for web clients.

How can I increase database connection performance?

Performance Tuning :: Get AWR Report Data?

Aug 8, 2012

I have only SELECT_CATALOG_ROLE in the database. Can I get complete AWR report data from the AWR views without using the DBMS_WORKLOAD_REPOSITORY package?
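
The DBA_HIST_* views that AWR reports are built from are readable with SELECT_CATALOG_ROLE; a hedged sketch of pulling snapshots and top SQL directly from them:

select snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
order by snap_id;

select sql_id, executions_delta, elapsed_time_delta
from   dba_hist_sqlstat
where  snap_id = :snap
order by elapsed_time_delta desc;

(Generating the formatted report itself, e.g. via DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML, normally needs execute privilege on that package.)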

Performance Tuning :: Data Caching

Feb 7, 2011

Sometimes when I re-run a query a few times, it becomes much faster after the first run. This is a problem for me when I'm trying to optimize a query. Is there some sort of cache? Can it be disabled?
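
That speed-up is almost certainly the buffer cache (and the parsed plan in the shared pool) warming up. On a test system the caches can be flushed between runs to level the playing field; a sketch, never to be used on production:

alter system flush buffer_cache;  -- 10g and later
alter system flush shared_pool;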

Performance Tuning :: Inserting Data Using FOR ALL

Mar 6, 2011

CREATE OR REPLACE PROCEDURE fast_proc (p_rows OUT NUMBER)
IS
  TYPE object_id_tab IS TABLE OF all_objects.object_name%TYPE INDEX BY BINARY_INTEGER;
  lt_object_id object_id_tab;
  CURSOR c IS

[Code]....

Warning: Procedure created with compilation errors.

Errors for PROCEDURE FAST_PROC:

LINE/COL ERROR
-------- ---------------------------------------------------------
13/7 PL/SQL: SQL Statement ignored
13/22 PL/SQL: ORA-03001: unimplemented feature

Why am I not able to do an INSERT when I am able to do UPDATE/DELETE? What built-in restriction is this?
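
For comparison, a minimal FORALL insert that does compile, as a hedged sketch (t_names is a hypothetical target table). Within a FORALL, the subscripted collection may only appear in plain bind positions; using it in unsupported contexts is a typical cause of ORA-03001 and related errors on older versions.

create table t_names (object_name varchar2(128));

create or replace procedure fast_proc (p_rows out number) is
  type name_tab is table of all_objects.object_name%type index by binary_integer;
  lt_names name_tab;
begin
  select object_name bulk collect into lt_names from all_objects;
  forall i in 1 .. lt_names.count
    insert into t_names (object_name) values (lt_names(i));  -- scalar bind only
  p_rows := sql%rowcount;  -- total rows inserted by the FORALL
end;
/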

Performance Tuning :: Big Table Data Maintenance

Jun 17, 2011

We have a few tables in our production database which have grown to a huge size and will keep growing, so as part of corrective measures we have jotted down the methods below to manage the size of those tables:

1> Partition the table, take an export of the identified partitions, and after that truncate those partitions (see the sketch below).
2> Create history tables and move not-so-current data from the original table to the history table.
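
A hedged sketch of method 1 on a hypothetical range-partitioned table big_txn:

-- 1. export just the old partition, e.g.
--    expdp scott dumpfile=p_2010q1.dmp tables=big_txn:p_2010q1
-- 2. then release its space, keeping any global indexes usable:
alter table big_txn truncate partition p_2010q1 update global indexes;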

Performance Tuning :: Retrieving From Data Is Slow

Jan 12, 2011

I'm extracting/retrieving data from an Oracle database using a Java application, and it's a bit slow. However, when I retrieve from SQL Server, it's faster than from Oracle.

Performance Tuning :: Oracle Data Migration

Jun 7, 2011

We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration runs fast in the development lab and is 5x slower on the production site.

The development Oracle setup is on Windows and Production setup on Solaris. I have attached the AWR generated for a period where migration was run for 3 hours and stopped due to slow performance.

Here is my initial analysis.

1) The first timed event is DB CPU. Hence I feel the migration scripts can be modified to run in parallel so that they finish faster. But then the question arises: why would it run faster in the development environment if this were the issue?
2) I tried increasing:
a. large_pool_size to 512M
b. sga_max_size to 8G
c. sga_target to 8G
(from 0, 4G and 4G respectively).

I have attached the AWR and below are the etc/system contents for solaris settings.

* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout * set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)

set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10

P.S. The awr report is renamed to .txt from .html to be able to upload the file.

Enterprise Manager :: Increase Performance Of Form And Report Server?

Aug 27, 2009

We are new to the Oracle 10g (10.1.2.0.2) Database and Application Server (Forms and Reports server). We have installed the application server on Windows Server 2003.

It is running, but it uses lots of memory, so performance decreases as the number of users increases.

Performance Tuning :: Table Rebuild Should Be Done After Removing Most Of Its Data?

Oct 23, 2012

We are on Oracle 10.2.0.4 on Solaris 10. There is a table in my production DB that has 872,944 rows. Most of its data is now unnecessary; based on a date column in the table, we need to retain just the last month's data and delete the rest. After that the table will have just 3,000 rows.

However, as the table was huge earlier (872K rows prior to the delete), does deleting the data release its Oracle blocks and reduce the size of the table? If not, should we rebuild the table online (online redefinition) so that the query that does a full scan on this table goes faster?

I checked with an example table that a delete alone does not remove the Oracle blocks - the block count in user_tables stays the same and the cost of a full table scan remains the same. We have a query that does a full table scan, so I am thinking that after this delete I should do an online table redefinition; is that the right decision?
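
A lighter-weight alternative to full online redefinition on 10.2, as a hedged sketch (the table name is hypothetical; shrink requires an ASSM tablespace):

alter table big_table enable row movement;
alter table big_table shrink space cascade;  -- cascade shrinks its indexes too
-- re-gather stats so full-scan costing reflects the smaller segment:
exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', cascade => true);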

Performance Tuning :: Lot Of Time Being Taken To Delete Data From A Table

Apr 27, 2012

I have a table which contains 821,177 rows in total. I am trying to delete around 484,000 rows from this table using just one filter; my query is something like the one below.

DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'

This is going to delete 484,000 rows of data, but my current issue is that it is taking a very long time - to be precise, almost 25 hours. The created_by column is indexed.

Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532

--------------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|  0 | DELETE STATEMENT |          |  499 | 20459 |     39 (0)| 00:00:01 |
|  1 | DELETE           | RESOURCE |      |       |            |          |
[code]....
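
When well over half the rows are going away, copying the keepers is often far faster than a parallel DELETE, which must maintain the index and generate undo for every row. A hedged sketch using the poster's names; the assumption that created_by is never NULL is mine:

create table resource_keep nologging parallel 4 as
  select * from resource where created_by <> 'MIGN';
-- then drop the original, rename resource_keep, and re-create the index,
-- constraints and grants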

Performance Tuning :: Copy Data From Transaction (DB1) To Reporting (DB2)

Nov 26, 2010

We copy our transaction tables' data into another database for our reporting applications (a refresh happens every day at midnight).

The transaction database has some 30 tables. The existing system follows the steps below and takes 2 hours to complete.

1) Truncate data from reporting database (or schema)

2) Direct path Insert into reporting database (or schema) as select * from transaction tables.

3) Rebuild index and Enable constraints.

Note: Each table's data varies from 30 to 50 lakhs (3-5 million) rows. Dump/export/import is ruled out by the client.

I want to cut the time down below 2 hours. Instead of the above method, can I add a field to each table recording the time of each record's insert/update, and then pick only the modified records and copy them into the reporting DB? (A sketch follows.)
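
A hedged sketch of that incremental idea (table and column names are hypothetical): stamp each row on change, then MERGE only the rows touched since the last refresh. The DEFAULT covers inserts; an update trigger would be needed to keep the stamp current on updates.

alter table txn_orders add (last_modified date default sysdate);

merge into rpt_orders r
using (select * from txn_orders
       where last_modified > :last_refresh) s
on (r.order_id = s.order_id)
when matched then
  update set r.amount = s.amount, r.status = s.status
when not matched then
  insert (order_id, amount, status)
  values (s.order_id, s.amount, s.status);

A fast-refresh materialized view over a database link is the built-in version of the same idea.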
