Performance Tuning :: Delete Is Very Slow
Apr 26, 2013
I am running one simple delete statement on one table with rownum < 10000, but it is taking nearly 10 to 15 minutes. The table doesn't have any child table rows or triggers.
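A minimal diagnostic sketch for a case like this (table name hypothetical) is to trace one execution and see where the time actually goes:

ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';  -- SQL trace incl. wait events
DELETE FROM my_table WHERE rownum < 10000;
ROLLBACK;                                                              -- keep the test repeatable
ALTER SESSION SET EVENTS '10046 trace name context off';

Formatting the resulting trace file with tkprof shows whether the time is spent on full scans, index maintenance, or waits such as 'db file sequential read'.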
We have an MV which fetches data from around 27 tables with 26 joins, 25 of which are outer joins. Some tables in the query are referenced multiple times through different alias names, so the actual number of physical tables used is 18. This MV takes about 50 minutes to refresh through the complete refresh mechanism. We decided to make it fast refresh and made these configuration changes:
- Created MV logs based on rowid for each of the base tables.
- Recreated the MV using FAST refresh, with the primary key option enabled.
- Pulled the rowid for all these tables into the select column list.
Even after following all the recommendations suggested by Oracle for fast refresh MVs, we are still getting a refresh time of around 65 minutes (the refresh time increased!). We already have indexes built on all the join columns of the base tables. What else do we need to do to make this a "fast" refresh MV?
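For reference, a minimal sketch of the rowid-based fast refresh setup described above (table and MV names are hypothetical); every base table's rowid has to appear in the MV's select list:

CREATE MATERIALIZED VIEW LOG ON orders    WITH ROWID;   -- one log per base table
CREATE MATERIALIZED VIEW LOG ON customers WITH ROWID;

CREATE MATERIALIZED VIEW mv_order_summary
  REFRESH FAST ON DEMAND
AS
SELECT o.rowid AS o_rowid,        -- rowid of each base table
       c.rowid AS c_rowid,
       o.order_id,
       c.customer_name
FROM   orders o,
       customers c
WHERE  o.customer_id = c.customer_id(+);

With this many outer joins, the refresh cost is often dominated by the per-table log scans and the joins back to the base tables, so indexes on the MV's rowid columns (one per base table) are also commonly recommended.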
I'm retrieving data from the Oracle database using a Java application, and it's a bit slow. However, when I retrieve the same data from SQL Server it's faster than Oracle.
My ERP application responds quickly when running reports or saving entries if Oracle 10g Express Edition (XE) is installed, but with Oracle 10g Enterprise Edition or Standard Edition the same application runs very slowly.
View 1 Replies View RelatedIf a table(have a primary key) is empty(after truncate),the sql of dml(insert,update) is very quickly,but if the table have many rows about 10,000,000 rows, the dml is very slowly,why?
We are using Oracle 9i on an AIX server. While customers were accessing the database, power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, while doing "Payments by the customer" it takes a lot of time to insert even a single payment record into the database. The database is live and our customers are very frustrated.
The product I work on requires a query to tell us what tables are dependent on certain types.
SELECT dba_tab_cols.owner,
dba_tab_cols.table_name,
dba_tab_cols.data_type_owner,
dba_tab_cols.data_type
FROM dba_tab_cols
JOIN dba_types
ON dba_types.owner = dba_tab_cols.data_type_owner
AND dba_types.type_name = dba_tab_cols.data_type
WHERE (dba_types.owner IN ('SCHEMA1', 'SCHEMA2'......))
I find this query to be pretty slow. I think it is because data_type_owner in dba_tab_cols is not indexed. Adding an index is not an option because users expect our product to be read-only.
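One hedged workaround, given that adding indexes is off the table, is to give dba_tab_cols a selective predicate of its own by repeating the schema list on data_type_owner (the values must match a type's owning schema anyway because of the join condition); whether it helps depends on how the dictionary views get optimized:

SELECT c.owner,
       c.table_name,
       c.data_type_owner,
       c.data_type
FROM   dba_tab_cols c
JOIN   dba_types t
  ON   t.owner     = c.data_type_owner
 AND   t.type_name = c.data_type
WHERE  c.data_type_owner IN ('SCHEMA1', 'SCHEMA2')   -- same (elided) schema list as above
  AND  t.owner           IN ('SCHEMA1', 'SCHEMA2');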
A few days ago my database server lost access to its storage box, so I rebooted it, and afterwards it worked fine. But now the DB import process is too slow. Before, a 100GB DB import completed within 10 hours when the server was running normally. Now it has been working for 2 days and is still not complete.
How do I investigate this issue? Maybe I missed increasing some parameters on the server or in Oracle?
Here is some brief info about my server:
RAM is 16GB,
SWAP size is 16GB,
CPU 12 cores
SQL> show sga;
Total System Global Area 4294967296 bytes
Fixed Size 1984144 bytes
Variable Size 369105264 bytes
Database Buffers 3909091328 bytes
Redo Buffers 14786560 bytes
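As a first diagnostic sketch (standard dynamic performance views; no changes to the system), it can help to see what the long-running import session is actually waiting on:

SELECT s.sid, s.username, w.event, w.seconds_in_wait
FROM   v$session s
JOIN   v$session_wait w ON w.sid = s.sid
WHERE  s.username IS NOT NULL
ORDER  BY w.seconds_in_wait DESC;

If the dominant waits are I/O related (for example 'db file sequential read' against the rebooted storage box), the slowdown is more likely at the storage layer than in the SGA or other instance parameters.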
I have an Oracle database (9.2.0.7) installed on an HP-UX server. When trying to access this database from another HP-UX or Linux server, the connection is fine. But when trying to connect from a Windows-based client, the connection is very slow (almost 1 minute to return the result of a 'select count(*)'-like query, which is immediate from the Linux client).
Here are some facts I can add:
- Clients and servers are on the same network segment (it is not a network matter)
- No matter which client version I use, there is no difference
- I tried to see what happens on the Oracle server when performing my sample query, using the tusc command: the result is that the server performs exactly the same actions whether the query is sent from a Linux client or a Windows client
- The only relevant difference seems to be the client OS
I have a query which takes 5 minutes when run through the Java app, which uses Hibernate. I've cut and pasted the SQL directly from the Hibernate trace file and run it in sqlplus/sqldeveloper, and it runs instantly (0.01 seconds); it uses the index correctly and the explain plan looks good - see below. I don't know how to get the explain plan when it's running through the app, or why it should be any different anyway, as the query is identical.
My query is as follows:
SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */ SPD.*
FROM SEQ_ADDR_DATA SPD, SEQ_ADDR_LEVELS SPL
WHERE SPD.SPVR_ID = '10'
AND SPL.SPLE_ID = SPD.SPLE_ID
AND SPL.SPLE_LEVEL <= '2' AND SPDA_ID NOT IN
[code]....
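One common cause, offered here only as an assumption about the Hibernate setup, is that the application sends bind variables while the cut-and-pasted SQL uses literals, and the optimizer may choose a different plan for the bound version. A sketch for reproducing the bound form in SQL*Plus (bind names are hypothetical; the trailing NOT IN clause from the original is elided):

VARIABLE spvr_id    VARCHAR2(10)
VARIABLE sple_level NUMBER
EXEC :spvr_id    := '10';
EXEC :sple_level := 2;

SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */ SPD.*
FROM   SEQ_ADDR_DATA SPD, SEQ_ADDR_LEVELS SPL
WHERE  SPD.SPVR_ID    = :spvr_id
AND    SPL.SPLE_ID    = SPD.SPLE_ID
AND    SPL.SPLE_LEVEL <= :sple_level
AND    SPDA_ID NOT IN (...);                       -- elided, as in the original

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR());  -- actual plan of the last cursor (10g+)

Comparing that plan with the literal version's plan usually shows where the 5 minutes go.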
I need to check the package performance and need to improve it.
1. How do I check the package performance (each and every statement in the package)?
2. The package uses a delete statement to remove all records, and I have observed that the delete takes a long time to remove all the records in the table (about 7,000,000 rows). This is like a staging table: the data needs to be cleaned daily before new data is inserted into it. What can I use instead of DELETE?
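For point 1, DBMS_PROFILER (or the PL/SQL hierarchical profiler DBMS_HPROF in 11g) can report where the time goes line by line inside the package. For point 2, assuming the staging table really is emptied completely each day and the purge never needs to be rolled back, TRUNCATE is the usual replacement for a full-table DELETE, as in this sketch (table name hypothetical):

TRUNCATE TABLE staging_items;                      -- DDL: fast, resets the high-water mark, cannot be rolled back
-- or, from inside PL/SQL:
EXECUTE IMMEDIATE 'TRUNCATE TABLE staging_items';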
I am using a script to delete records from a table, and it takes 1 hour to complete.
declare
  cursor c1 is
    select ownerid, ownertype from nightly_metric_projects;
  v1 c1%rowtype;
begin
  open c1;
  loop
    fetch c1 into v1;
    exit when c1%notfound;
    delete from dgt_itemeffortdata
    where ownertype = v1.ownertype   -- was c1.ownertype, which is not valid PL/SQL
      and ownerid   = v1.ownerid;
  end loop;
  close c1;
  commit;
end;
/
nightly_metric_projects -- 1,200 records
DGT_ITEMEFFORTDATA -- 13,200,000 records
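A hedged alternative, assuming the intent is simply to remove every DGT_ITEMEFFORTDATA row whose (ownertype, ownerid) pair appears in nightly_metric_projects, is to replace the row-by-row loop with one set-based statement:

DELETE FROM dgt_itemeffortdata d
WHERE  EXISTS (SELECT 1
               FROM   nightly_metric_projects p
               WHERE  p.ownertype = d.ownertype
               AND    p.ownerid   = d.ownerid);
COMMIT;

An index on dgt_itemeffortdata(ownertype, ownerid), if one does not already exist, is what usually decides whether either form runs in minutes rather than an hour.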
I have a table which contains 821,177 rows in total. Now I am trying to delete around 484,000 rows from this table using just one filter, i.e. my query is something like the one below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 484,000 rows of data, but my current issue is that it is taking a long time. To be precise, it takes almost 25 hours to delete this data. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532
--------------------------------------------------------------------------------
| Id | Operation        | Name     | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|  0 | DELETE STATEMENT |          |  499 | 20459 |     39 (0) | 00:00:01 |
|  1 | DELETE           | RESOURCE |      |       |            |          |
[code]....
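When roughly half the rows or more are being removed, a commonly suggested alternative (a sketch only, assuming a maintenance window and that indexes, constraints and grants are recreated afterwards) is to keep the surviving rows instead of deleting the doomed ones:

CREATE TABLE resource_keep AS
  SELECT * FROM resource WHERE created_by <> 'MIGN' OR created_by IS NULL;

-- after verifying row counts:
DROP TABLE resource;
RENAME resource_keep TO resource;
-- then recreate indexes, constraints and grants

Separately, the parallel hint on the DELETE only takes effect for the DML part if parallel DML has been enabled in the session first (ALTER SESSION ENABLE PARALLEL DML), and 25 hours for ~484,000 rows suggests something else, such as unindexed foreign keys referencing the table or per-row trigger work, is worth ruling out.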
The Test1 table has around 385,772,300 rows. The delete and select statements below take a lot of time.
The select statement takes more than 1 hour:
SELECT TO_NUMBER(MAX(f.T3))
--INTO v_FISCAL_MONTH_ID
FROM Test1 f;
The delete statement takes more than 2 hours:
DELETE FROM TEST1 WHERE TRUNC(T10) < TRUNC(ADD_MONTHS(SYSDATE,-36));
CREATE TABLE Test1
(
[Code].....
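Two hedged observations, assuming an index on T10 exists or can be created: wrapping the column in TRUNC prevents a plain index on T10 from being used, and because the right-hand side is already truncated to midnight the predicate can be rewritten without the function; likewise, an index on T3 would let MAX(T3) be read from the index instead of scanning all 385 million rows.

-- equivalent predicate that allows an index on T10 to be used
DELETE FROM test1
WHERE  t10 < TRUNC(ADD_MONTHS(SYSDATE, -36));

-- supporting indexes (names hypothetical)
CREATE INDEX test1_t10_ix ON test1 (t10);
CREATE INDEX test1_t3_ix  ON test1 (t3);

If the delete still removes a very large fraction of the table, partitioning by month on T10 and dropping old partitions is the longer-term fix, but that changes the table definition.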
In my code I am using a delete statement which is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION T
WHERE (ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT)
IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID,NM_ARTEMIS_SOURCE_SYSTEM,CD_BOOK_KEY,ACTIVITY_DT
FROM LOAD_TRADE_ORDER
WHERE IND_IS_BAD_RECORD='N');
Tables used:
- TRADE_ORDER_EMP_ALLOCATION: row count 329,525,880
- LOAD_TRADE_ORDER: row count 29,281
Every column in the "IN" clause and the select clause has an index on it.
The number of rows to be deleted varies each time (it may be in the hundreds, thousands or hundreds of thousands), so I am unable to use a "BITMAP" index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, though the column contains distinct values.
The table TRADE_ORDER_EMP_ALLOCATION also has a "RANGE" partition on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions on it.
Is there a way to make the above delete statement execute faster?
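One hedged variation, assuming ARTEMIS_SOURCE_SYSTEM_ID (the range partition key) is selective enough in LOAD_TRADE_ORDER for partition pruning to matter, is the EXISTS form, which Oracle will often unnest into a hash join driven by the small table:

DELETE FROM trade_order_emp_allocation t
WHERE  EXISTS (SELECT 1
               FROM   load_trade_order l
               WHERE  l.ind_is_bad_record        = 'N'
               AND    l.artemis_source_system_id = t.artemis_source_system_id
               AND    l.nm_artemis_source_system = t.nm_artemis_source_system
               AND    l.cd_book_key              = t.cd_book_key
               AND    l.activity_dt              = t.activity_dt);

Whether this beats the IN form depends on the plan actually chosen; comparing the two with EXPLAIN PLAN before running either against 329 million rows is the safer first step.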
We are busy upgrading one database from Oracle 10g on a Windows 2003 platform to Oracle 11gR2 on Linux.
We exported/imported the data and it looks OK, and the explain plans look the same, but our heavy batches are twice as slow as on the Windows box. The two top wait events are disk related (sequential and scattered reads) and account for 90% of the batch job time. I read some white papers and found that using ASM can be bad in some cases, and the same goes for Linux with this particular kind of scattered reads. I was just wondering whether simply changing the SGA to 10GB instead of 4GB, to get more cache, would speed things up.
I am trying to insert huge data into another huge table, which is taking around 2-3 hours. See my query below:
INSERT /*+ APPEND *//*+ NOLOGGING */ INTO DB1.Table1 SELECT * FROM DB2.Table2 ;
COMMIT;
Both Table1 and Table2 have the same structure; Table1 is the master table with 100 billion records and Table2 has 30 million records. This is a direct-path insert, and the operation is carried out each day.
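Two hedged notes on the statement above: only the first comment after the INSERT keyword is read as a hint, so the second /*+ NOLOGGING */ comment is ignored (NOLOGGING is a table attribute, not a hint), and the insert runs serially unless parallel DML is enabled in the session. A sketch of a parallel direct-path version (the degree of 8 is arbitrary):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t1, 8) */ INTO DB1.Table1 t1
SELECT /*+ PARALLEL(t2, 8) */ *
FROM   DB2.Table2 t2;

COMMIT;

Direct-path inserts only skip redo for the table data when the table (or tablespace/database) is set to NOLOGGING and the database is not in FORCE LOGGING mode.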
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas the database consists of the physical structures.
However, how does one tune the database, i.e. the physical structure? Does it have to do with file placement/block sizes etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
I am using 11gR2 on Windows Server. The query below runs many times a day and badly affects database performance. I don't have much idea about this query.
SELECT TO_CHAR(current_timestamp AT TIME ZONE 'GMT', 'YYYY-MM-DD HH24:MI:SS TZD') AS curr_timestamp, COUNT(username) AS failed_count
FROM sys.dba_audit_session
WHERE returncode != 0 AND TO_CHAR(timestamp, 'YYYY-MM-DD HH24:MI:SS') >= TO_CHAR(current_timestamp - TO_DSINTERVAL('0 0:30:00'), 'YYYY-MM-DD HH24:MI:SS')
I have a table with 129 million records.
If I just run select count(*) on the table, it takes more than a minute in SQL Developer.
The table structure is as below: the primary key is a sequence, then there are 3 foreign keys and one non-unique index on the date column.
<Table_Name>
column1 NOT NULL NUMBER ( Primary Key)
column2 NOT NULL NUMBER ( FK1)
[Code].....
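A hedged check worth making first: COUNT(*) does not have to touch the table at all, because the optimizer can answer it with an INDEX FAST FULL SCAN of the primary-key index (column1 is NOT NULL), provided statistics are reasonably current. The plan shows which path is really used (schema and table names are placeholders):

EXPLAIN PLAN FOR SELECT COUNT(*) FROM <Table_Name>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MY_SCHEMA', tabname => '<TABLE_NAME>');

If a full table scan still appears and the segment is much larger than the live data (for example after heavy deletes), the minute is mostly spent reading empty blocks below the high-water mark.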
I am just trying to import some information from Excel into Oracle using OLE2 in Oracle Forms 6i, but it's very slow when importing on the order of 10k lines. Is there anything to optimize? The code used follows...
application OLE2.Obj_Type;
workbooks OLE2.Obj_Type;
workbook OLE2.Obj_Type;
[Code]....
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following merge query, which runs for 13 hours to update records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
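One hedged point, assuming most of the 13 hours is spent inside the MERGE itself: the /*+ parallel (rq, 16) */ hint can parallelize the query side, but the update/insert side of a MERGE stays serial unless parallel DML is enabled in the session before the statement runs:

ALTER SESSION ENABLE PARALLEL DML;
-- then execute the original MERGE /*+ parallel (rq, 16) */ statement unchanged
COMMIT;

V$PX_SESSION (or simply the PX operations in the plan) confirms whether the DML part actually ran in parallel.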
How does the width of a column affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).
What performance increase could I expect if, in the "job" column, I stored not names but numbers identifying the job names?
For example, I would store "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'.
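As a hedged illustration of that idea (object names are hypothetical), the job names can be moved into a small lookup table so the IOT key stores a short NUMBER instead of a VARCHAR2(20), shrinking every key entry:

CREATE TABLE job_codes (
  job_id   NUMBER       PRIMARY KEY,
  job_name VARCHAR2(20) NOT NULL UNIQUE
);

CREATE TABLE emp_iot (
  id     NUMBER,
  job_id NUMBER REFERENCES job_codes,
  time   DATE,
  plan   NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;

The gain per row is roughly the difference between the average job-name length and the couple of bytes a small NUMBER needs; it matters mainly because more key entries then fit into each index block, at the cost of a join whenever the readable job name is required.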
I have a question about database fragmentation. I know that fragmentation can reduce query performance: the blocks are distributed across many extents and the scan process takes a long time, since the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If needed I will re-create, move or rebuild the table or index, but first I want to know if the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
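One hedged starting point (it relies on up-to-date optimizer statistics, so gather stats first, and it assumes an 8K block size) is to compare the blocks a table actually occupies with a rough estimate of the blocks its rows need:

SELECT t.owner,
       t.table_name,
       t.blocks                                  AS blocks_below_hwm,
       CEIL(t.num_rows * t.avg_row_len / 8192)   AS est_blocks_needed,
       s.extents
FROM   dba_tables   t
JOIN   dba_segments s
  ON   s.owner = t.owner
 AND   s.segment_name = t.table_name
WHERE  t.owner = 'MY_SCHEMA'                      -- hypothetical schema
ORDER  BY t.blocks - CEIL(t.num_rows * t.avg_row_len / 8192) DESC;

A large gap between the two block counts points to wasted space below the high-water mark (a candidate for ALTER TABLE ... MOVE or SHRINK); a high extent count on its own is rarely a problem with locally managed tablespaces.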
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have just forgotten the name and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation, on Standard Edition without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc...) and good storage?
Are 300 million rows in one table, with 500K transactions per day, too much?
Simple database with simple schema.
How many records begin to be too many?
Testing our 9i to 11g upgrade, we've imported the entire DB into the new machine. We've found that certain procedures are really suffering performance problems. BUT, we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the 'statistics' so the optimizer follows that plan without hints.
Q1. If that is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
The emp.empid, emp.deptno and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions: with respect to the above query,
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
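As a hedged sketch of the statistics-adjustment idea (the numbers are arbitrary and only for experimentation on a test system), DBMS_STATS.SET_TABLE_STATS can overwrite a table's row and block counts; making emp look very large relative to dept, for example, nudges the optimizer toward driving from dept and toward a hash join, though the final plan still depends on the overall costing:

BEGIN
  -- pretend EMP is huge and DEPT is tiny (fabricated figures, test use only)
  DBMS_STATS.SET_TABLE_STATS(ownname => USER, tabname => 'EMP',
                             numrows => 1000000, numblks => 20000);
  DBMS_STATS.SET_TABLE_STATS(ownname => USER, tabname => 'DEPT',
                             numrows => 4, numblks => 1);
END;
/

Re-gathering real statistics with DBMS_STATS.GATHER_TABLE_STATS undoes the experiment.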
I have an issue with export (expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
The following query gets input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND etc., and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching values from the DB. If the user doesn't select any values, it has to fetch all records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB fetches all records.
To achieve this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query, all the V_ variables are defined in the procedure, which receives the values selected by the user as comma-separated strings; here V_SELLOB is such a variable and LOB_DESC is a column in the DB.
The condition pattern used is:
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN ...
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
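A hedged alternative for this optional-filter pattern (assuming the literal 'DEFAULT' can never occur as a real LOB_DESC value) is a plain OR condition, which leaves the column unwrapped and is generally easier for the optimizer to work with than DECODE around the column:

AND ( V_SELLOB = 'DEFAULT' OR a.lob_desc IN ( /* values parsed out of V_SELLOB */ ... ) )

Because V_SELLOB arrives as a comma-separated string, it still has to be split into individual values (for example into a collection queried with TABLE()) before it can feed the IN list; that parsing step is elided here, as it is in the original post.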