Performance Tuning :: DML Slow When Table Has Many Rows
Sep 4, 2011
If a table (with a primary key) is empty (after a truncate), DML statements (INSERT, UPDATE) run very quickly, but once the table has many rows, around 10,000,000, the same DML is very slow. Why?
I am running a simple DELETE statement on one table with ROWNUM < 10000, but it is taking nearly 10 to 15 minutes. The table has no child tables or triggers.
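For reference, a minimal sketch of the kind of statement described (the table name is hypothetical, not from the original post):

    DELETE FROM big_table
    WHERE  ROWNUM < 10000;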
We have an MV which fetches data from around 27 tables via 26 joins, of which 25 are outer joins. Some tables in the query are referred to multiple times through different alias names, so the actual number of physical tables used is 18. This MV takes about 50 minutes to refresh through the complete refresh mechanism. We decided to make it fast refresh and made these configuration changes:
- Created MV logs based on rowid for each of the base tables.
- Recreated the MV using FAST refresh, with the primary key option enabled.
- Pulled the rowid for all these tables into the select list (sketched below).
Even after implementing all the recommendations Oracle suggests for fast refresh MVs, we are still getting a refresh time of around 65 minutes (the refresh time increased!). We already have indexes built on all the join columns of the base tables. What else do we need to do to make this a "fast" refresh MV?
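For reference, a minimal sketch of the configuration described above, with hypothetical table and column names (not the actual 27-table query):

    CREATE MATERIALIZED VIEW LOG ON base_t1 WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON base_t2 WITH ROWID;

    CREATE MATERIALIZED VIEW mv_example
      REFRESH FAST ON DEMAND
    AS
    SELECT t1.ROWID AS t1_rid,   -- rowid of every base table in the select list
           t2.ROWID AS t2_rid,
           t1.col_a, t2.col_b
    FROM   base_t1 t1, base_t2 t2
    WHERE  t1.id = t2.t1_id(+);  -- outer join, as in the original query

DBMS_MVIEW.EXPLAIN_MVIEW can report which refresh capabilities such an MV actually has, which is usually the first thing to check when a "fast" refresh turns out slower than expected.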
I'm extracting/retrieving data from an Oracle database using a Java application, and it's a bit slow. When I retrieve the same data from SQL Server, it's faster than Oracle.
My ERP application responds quickly while running reports or saving entries when Oracle 10g Express Edition (XE) is installed. But on Oracle 10g Enterprise Edition or Standard Edition, the same application runs very slowly.
We are using Oracle 9i on an AIX server. While customers were accessing the database, the power was accidentally shut down. We restarted the server and the Oracle database, and everything resumed successfully.
However, when doing "Payments by the customer", it now takes a long time to insert even a single payment record into the database. The database is live, and our customers are very frustrated.
The product I work on requires a query to tell us what tables are dependent on certain types.
SELECT dba_tab_cols.owner, dba_tab_cols.table_name,
       dba_tab_cols.data_type_owner, dba_tab_cols.data_type
FROM   dba_tab_cols
JOIN   dba_types
  ON   dba_types.owner = dba_tab_cols.data_type_owner
 AND   dba_types.type_name = dba_tab_cols.data_type
WHERE  dba_types.owner IN ('SCHEMA1', 'SCHEMA2'......)
I find this query to be pretty slow. I think it is because data_type_owner in dba_tab_cols is not indexed. Adding an index is not an option, because users expect our product to be read-only.
A few days ago, my database server lost access to its storage box; I rebooted it, and afterwards it worked fine. But now the DB import process is too slow. Previously, a 100GB DB import completed within 10 hours when the server was running normally. It has now been running for 2 days and is still not complete.
How should I investigate this issue? Maybe I missed increasing some parameters on the server or in Oracle?
Here is a brief summary of my server:
RAM is 16GB, SWAP size is 16GB, CPU 12 cores
SQL> show sga;

Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             369105264 bytes
Database Buffers         3909091328 bytes
Redo Buffers               14786560 bytes
I have an Oracle database (9.2.0.7) installed on an HP-UX server. When accessing this database from another HP-UX or Linux server, the connection is fine. But when connecting from a Windows-based client, the connection is very slow (almost 1 minute to return the result of a 'select count(*)'-style query, which is immediate from the Linux client).
Here are some facts I can add:
- Clients and servers are on the same network segment (so it is not a network issue)
- No matter which client version I use, there is no difference
- I traced what happens on the Oracle server when performing my sample query using the tusc command: the server performs exactly the same actions whether the query is sent from a Linux client or a Windows client
- The only relevant difference seems to be the client OS
I have a query which takes 5 minutes when run through the Java app, which uses Hibernate. I cut and pasted the SQL directly from the Hibernate trace file and ran it in SQL*Plus/SQL Developer, where it runs instantly (0.01 seconds), uses the index correctly, and the explain plan looks good (see below). I don't know how to get the explain plan when it's running through the app, or why it should be any different anyway, as the query is identical.
My query is as follows:
SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */ SPD.*
FROM   SEQ_ADDR_DATA SPD, SEQ_ADDR_LEVELS SPL
WHERE  SPD.SPVR_ID = '10'
AND    SPL.SPLE_ID = SPD.SPLE_ID
AND    SPL.SPLE_LEVEL <= '2'
AND    SPDA_ID NOT IN [code]....
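On 10g and later (the poster's version is not stated, so this is an assumption), the plan actually used by the app's cursor can be pulled from the shared pool with DBMS_XPLAN.DISPLAY_CURSOR, rather than re-explaining the pasted text:

    -- find the statement as executed by the app
    SELECT sql_id, child_number
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT /*+ INDEX (SPD SPD_SEQ_CODE) */%';

    -- display the plan that was really used for that child cursor
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<sql_id from above>', 0));

A common cause of this symptom is that Hibernate runs the statement with bind variables (possibly of different datatypes), while the cut-and-pasted version uses literals, so the two cursors can get different plans.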
I am trying to insert huge data into another huge table, which is taking around 2-3 hours. See my query below:
INSERT /*+ APPEND */ /*+ NOLOGGING */ INTO DB1.Table1
SELECT * FROM DB2.Table2;
COMMIT;
Both Table1 and Table2 have the same structure; Table1 is the master table with 100 billion records, and Table2 has 30 million records. This is a direct-path insert, and the operation is carried out each day.
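Two details worth noting in the statement above: Oracle reads hints only from the first comment after the INSERT keyword, so the /*+ NOLOGGING */ comment is ignored, and NOLOGGING is not a valid hint in any case (it is a table attribute). A hedged sketch of a commonly used variant that combines direct-path insert with parallel DML (the degree of 8 is an arbitrary example, not a recommendation):

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(t1, 8) */ INTO DB1.Table1 t1
    SELECT /*+ PARALLEL(t2, 8) */ *
    FROM   DB2.Table2 t2;

    COMMIT;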
I'm trying to write a query that counts how many sessions are active during each 1-second time interval, then returns the maximum number of sessions active during any interval, along with all the intervals that hit that max.
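A minimal sketch of one way to do this, assuming a hypothetical SESSION_LOG table with DATE columns START_TIME and END_TIME (none of these names are from the original post):

    WITH bounds AS (
      SELECT MIN(start_time) AS min_t, MAX(end_time) AS max_t
      FROM   session_log
    ),
    secs AS (
      -- one row per second between the earliest start and the latest end
      SELECT min_t + (LEVEL - 1) / 86400 AS sec
      FROM   bounds
      CONNECT BY min_t + (LEVEL - 1) / 86400 <= max_t
    ),
    counts AS (
      -- sessions whose lifetime covers each second
      SELECT s.sec, COUNT(*) AS active_sessions
      FROM   secs s
      JOIN   session_log l
        ON   l.start_time <= s.sec
       AND   l.end_time   >= s.sec
      GROUP  BY s.sec
    )
    SELECT sec, active_sessions
    FROM   counts
    WHERE  active_sessions = (SELECT MAX(active_sessions) FROM counts);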
There is a column called DATA in a table with the LONG RAW datatype. We are facing more than 60% chained rows in this table because of this LONG RAW column.
It is very difficult to clean up these chained rows periodically, since an application using this table is business-critical in terms of high availability. Is there any other way in Oracle to avoid chained rows permanently in the future?
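One approach Oracle documents for moving away from LONG RAW (offered here as a general technique, not as the poster's answer) is converting the column to a BLOB: LOB values beyond roughly 4000 bytes are stored out of line, so wide values no longer chain the row. The table and index names below are hypothetical:

    ALTER TABLE my_table MODIFY (data BLOB);

    -- reorganize the table afterwards to reclaim space left by the old column
    ALTER TABLE my_table MOVE;
    -- rebuild any indexes invalidated by the move
    ALTER INDEX my_table_pk REBUILD;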
1) I have 5 exported dump files.
2) All 5 dump files were taken at different time periods.
3) Many of the dump files contain the same partition records.
e.g.: Dump 1: 01-06-2010 to 30-11-2010
      Dump 2: 01-09-2010 to 31-12-2010
4) Now I want to import all this partitioned data into a single table, without any duplication.
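A minimal sketch of one way to handle the deduplication, assuming each dump is first imported into a staging table and the target table has a usable key (all names here are hypothetical):

    -- after importing one dump file into STG_PART:
    INSERT INTO target_table
    SELECT s.*
    FROM   stg_part s
    WHERE  NOT EXISTS (
             SELECT 1
             FROM   target_table t
             WHERE  t.pk_col = s.pk_col   -- hypothetical key column
           );
    COMMIT;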
I have two tables, with 113M records in DWH_BILL_DET and 103M in PRD_RERATE_CHG_QUE, and I'm running the following MERGE query, which runs for 13 hours to update records, which is quite a long time.
SQL> explain plan for
     MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq
     USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key, [code]....
Our application servers run a SELECT which returns zero rows all the time. This SELECT is put into a package, and the package is called by the application servers very frequently, which is causing unnecessary CPU usage.
Original query and plan
SQL> SELECT SEGMENT_JOB_ID, SEGMENT_SET_JOB_ID, SEGMENT_ID, TARGET_VERSION
     FROM   AIMUSER.SEGMENT_JOBS
     WHERE  SEGMENT_JOB_ID NOT IN
            (SELECT SEGMENT_JOB_ID FROM AIMUSER.SEGMENT_JOBS);
[code]....
Which option would be better, or do we have other options? They need to pass the columns, with zero rows, to a ref cursor.
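One other option (an assumption, not from the original post): if the requirement is only an always-empty result set with the correct column shape for the ref cursor, a predicate that is always false achieves that without reading any table blocks, unlike the NOT IN self-join above:

    SELECT SEGMENT_JOB_ID, SEGMENT_SET_JOB_ID, SEGMENT_ID, TARGET_VERSION
    FROM   AIMUSER.SEGMENT_JOBS
    WHERE  1 = 0;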
I am trying to update columns of Table A with the columns of Table B. Both tables have 60,000 rows each. I tried this operation using the following 2 queries:
Query 1
UPDATE TableA A
SET (A.col1, A.col2, A.col3) =
    (SELECT B.col1, B.col2, B.col3 FROM TableB B WHERE A.CODE = B.CODE)
Query 2

UPDATE TableA A
SET (A.col1, A.col2, A.col3) =
    (SELECT B.col1, B.col2, B.col3 FROM TableB B WHERE A.CODE = B.CODE)
WHERE EXISTS
    (SELECT 1 FROM TableB B WHERE A.CODE = B.CODE)
When I execute these two queries, they keep executing indefinitely.
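For this pattern, a set-based MERGE is a commonly used alternative (a sketch, not the poster's code; note that on 9i, MERGE also requires a WHEN NOT MATCHED clause, while 10g and later do not):

    MERGE INTO TableA a
    USING TableB b
    ON (a.code = b.code)
    WHEN MATCHED THEN
      UPDATE SET a.col1 = b.col1,
                 a.col2 = b.col2,
                 a.col3 = b.col3;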
select gam.SOL_ID, COUNT(gam.FORACID)
from   gam, smt
where  gam.ACID = smt.ACID
and    gam.ACID NOT IN (select ACID from imt)
and    gam.SCHM_TYPE in ('SBA', 'CCA', 'CAA', 'ODA')
and    GAM.ACCT_CLS_FLG = 'N'
and    gam.SOL_ID IN (select SOL_ID from IMT)
group by gam.SOL_ID
/
Attached is the explain plan, in which the index on the IMT table is not used; the query is doing a full table scan (FTS) on the IMT table. What needs to be done to avoid the FTS on the IMT table?
I am using a script to delete records from a table, and it's taking 1 hour to delete.
declare
  cursor c1 is
    select ownerid, ownertype from nightly_metric_projects;
  v1 c1%rowtype;
begin
  open c1;
  loop
    fetch c1 into v1;
    exit when c1%notfound;
    -- row-by-row delete driven by the cursor record
    delete from DGT_ITEMEFFORTDATA
    where  OWNERTYPE = v1.OWNERTYPE
    and    OWNERID   = v1.OWNERID;
  end loop;
  close c1;
  commit;
end;
/
nightly_metric_projects: 1,200 records; DGT_ITEMEFFORTDATA: 13,200,000 records.
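The usual set-based rewrite of this loop, which replaces 1,200 single-row-driven deletes with one statement the optimizer can plan as a whole (a sketch of the same logic, not new functionality):

    DELETE FROM DGT_ITEMEFFORTDATA d
    WHERE  (d.OWNERTYPE, d.OWNERID) IN
           (SELECT p.OWNERTYPE, p.OWNERID
            FROM   nightly_metric_projects p);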
I have normal tables with huge data and would like to increase performance by the following means:
1) Add a new column in each table, say IS_LIVE. This new column has only two values: 1 (LIVE) or 0 (NOT LIVE).
2) Change the normal tables to partitioned tables. There would be only two partitions in each table; the partition key column would be IS_LIVE, and the two partitions' records would be in two different tablespaces.
3) Add a POLICY function to these partitioned tables to always add a query predicate of '1' to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not include this new column.
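A minimal sketch of the design described above, with hypothetical names. Note that because IS_LIVE is not part of the primary key, the PK's unique index cannot be a LOCAL partitioned index (a unique local index must include the partitioning key), so Oracle creates it as a global index:

    CREATE TABLE my_table (
      id       NUMBER        PRIMARY KEY,   -- PK does not include IS_LIVE
      payload  VARCHAR2(100),
      is_live  NUMBER(1)     NOT NULL
    )
    PARTITION BY LIST (is_live) (
      PARTITION p_live     VALUES (1) TABLESPACE ts_live,
      PARTITION p_not_live VALUES (0) TABLESPACE ts_hist
    );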
Objective: to find a solution to archive data from 2 big tables which occupy the maximum size in the database. With current data (from Jan 2005 to Sept 2011), the record counts are as mentioned below:
We need to load data and run monthly batches from October 2011 to the current month, which will increase this space further.
1. The issue is that we will not have that much space.
2. Maintenance of such a table is difficult now, and there is a huge impact on performance. Can we think of partitioning the table based on date, as we query the 1st table based on certain date ranges?
3. Most reports use this table, and it is creating performance issues.
We have a few tables in our production database which have grown huge in size and will keep growing, so as part of the corrective measures, we have jotted down the following methods to manage their size:
1> Partition the table, take an export of the identified partitions, and after that truncate those partitions (see the sketch below).
2> Create history tables and move not-so-current data from the original table to the history table.
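A minimal sketch of method 1, assuming the table is already partitioned by date and using hypothetical names; Data Pump (10g and later) can export a single partition with the TABLES=table:partition syntax:

    -- export one partition (run from the OS shell)
    expdp system DIRECTORY=dump_dir DUMPFILE=big_t_p2005.dmp TABLES=app.big_t:p_2005

    -- then remove that partition's data (run in SQL*Plus)
    ALTER TABLE app.big_t TRUNCATE PARTITION p_2005;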
I have tried the steps below for removing table fragmentation, but for some tables I am not getting good results.
1. Collect the tables that have more than 100MB of fragmentation:
select owner, table_name, blocks, num_rows, avg_row_len,
       round(((blocks * 8 / 1024)), 2) || 'MB' "TOTAL_SIZE",
       round((num_rows * avg_row_len / 1024 / 1024), 2) || 'MB' "ACTUAL_SIZE",
       round(((blocks * 8 / 1024) - (num_rows * avg_row_len / 1024 / 1024)), 2) || 'MB' "FRAGMENTED_SPACE"
from   dba_tables
where  owner in ('a', 'b', 'c', 'd')
and    round(((blocks * 8 / 1024) - (num_rows * avg_row_len / 1024 / 1024)), 2) > 100
order  by 8 desc;
2. Then move the object (table) within the same tablespace:
alter table abc move;
alter table bcd move;
alter table efg move;
3. Also rebuild the dependent indexes:
alter index abc_PK rebuild online;
4. Then analyze the tables that have more than 100MB of fragmentation (sketched below).
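The post does not show the command used for step 4; a hedged sketch of the usual way to do it with DBMS_STATS (the owner and table names are placeholders):

    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'A', tabname => 'ABC', cascade => TRUE);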
From the query below, I found that there are stale stats for 3 tables.
select table_name, stale_stats, last_analyzed
from   dba_tab_statistics
where  owner = 'SYSADM'
and    stale_stats = 'YES'
order  by last_analyzed desc;
I collected stats for those 3 tables with dbms_stats.gather_table_stats(), but no luck. Immediately after collecting the stats, I ran the above query again, and it still shows stale stats for the 3 tables.
How can I change the STALE_STATS status, so that the optimizer can use the updated stats efficiently?
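One thing worth checking (an assumption, not from the original post): STALE_STATS in DBA_TAB_STATISTICS is derived from the DML monitoring counters, which are kept in memory and flushed to disk only periodically, so the view can lag right after gathering. Flushing the monitoring info and re-querying shows the current state:

    EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

    select table_name, stale_stats, last_analyzed
    from   dba_tab_statistics
    where  owner = 'SYSADM'
    and    stale_stats = 'YES';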