Partitioning Table To Improve Load And Query Performance?
Nov 20, 2012
I have a big table into which we load about 37M records. We have an Informatica ETL job that loads the data in bulk mode and creates the indexes after completion. The data load takes about 1 hour and index creation about half an hour, so in total it takes 90 to 95 minutes.
I then thought that if we partitioned the table and loaded in parallel, it would improve performance. We created 4 partitions, each holding about 9M records. The bulk-mode data load now completes in 25 minutes, but creating the indexes over it takes about 40 minutes, so the total load time is 65 minutes.
Is there a way I can improve performance further and complete the load in half an hour?
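One thing worth testing, assuming the indexes can be defined as LOCAL on the new partitioning scheme: build them in parallel with redo suppressed, so each partition's index segment is created concurrently. A minimal sketch with hypothetical table, column, and index names:

CREATE INDEX big_tab_ix1 ON big_tab (load_date)
  LOCAL        -- one index segment per partition
  NOLOGGING    -- skip redo during the build
  PARALLEL 4;  -- build the partition segments concurrently

-- Restore the defaults once the build is done.
ALTER INDEX big_tab_ix1 NOPARALLEL;
ALTER INDEX big_tab_ix1 LOGGING;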
View 2 Replies
Dec 6, 2011
I have an issue with export(expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
View 1 Replies
Jun 30, 2013
It seems certain queries search by the number of days to ship (the number of days between the order date and the shipping date). What kind of index would improve the performance of these queries?
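One candidate, assuming hypothetical ORDERS columns ORDER_DATE and SHIP_DATE, is a function-based index on the exact expression; the queries have to use the same expression for the index to be eligible:

CREATE INDEX orders_ship_days_ix ON orders (ship_date - order_date);

-- A query written against the identical expression can then use the index:
SELECT * FROM orders WHERE ship_date - order_date > 7;

On 11g, the same effect can be had more cleanly with a virtual column on the expression, indexed in the ordinary way.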
View 2 Replies
Feb 10, 2012
Why do stored procedures and functions improve performance?
A. They reduce network round trips.
B. They reduce the number of calls to the database and decrease network traffic by bundling commands.
C. They reduce the number of calls to the database and decrease network traffic by using the local PL/SQL engine.
D. They allow the application to perform high-speed processing locally.
E. They postpone PL/SQL parsing until run time.
I think the answer should be A and B, but I came across the answers B and E. Can you explain the difference between options A and B, and whether stored procedures really postpone parsing until run time?
View 1 Replies
Jan 31, 2012
In search queries we generally select 10-25 columns (more can't be displayed on the screen) from 5-10 tables.
Say, in an insurance application, the search might be on policy number, policy holder's first name, policy holder's last name, region, policy type, etc.
And we don't display too many columns on the screen: say 4 tables collectively have 4 * 20 = 80 columns, and we display around 12-15 columns, with 2-3 of them carrying aggregates.
Since the search criteria (e.g. first name, last name, policy number) are not known until the last moment, this will be a generic dynamic query.
Is it possible to instead create a materialized view whose query contains only the joining conditions (no filter conditions) and selects only the columns to be displayed on the screen? We would then refresh the materialized view (to take care of recent business transactions) and fire the refined query, with the filter criteria, against the materialized view:
Select col1,col2,col3,col4,col5
From tab1,tab2,tab3,tab4
Where tab1.col1=tab2.col1
And tab2.col2=tab3.col2
And tab2.col2=tab4.col2;
Will it improve the performance of the search functionality?
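A minimal sketch of the idea, with tab1..tab4 and col1..col5 standing in for the real names:

CREATE MATERIALIZED VIEW search_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT t1.col1, t2.col2, t3.col3, t4.col4, t1.col5
FROM tab1 t1, tab2 t2, tab3 t3, tab4 t4
WHERE t1.col1 = t2.col1
AND t2.col2 = t3.col2
AND t2.col2 = t4.col2;

-- Refresh before searching, then filter against the MV:
EXEC DBMS_MVIEW.REFRESH('SEARCH_MV')
SELECT col1, col2 FROM search_mv WHERE col3 = :search_value;

Note that a COMPLETE refresh re-runs the whole join, so whether this wins depends on how often the MV must be refreshed; a FAST (incremental) refresh backed by materialized view logs on the base tables is worth investigating.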
View 2 Replies
Mar 6, 2012
select
a.empno,a.ename,a.job,
B.DNAME
from scott.emp a,scott.dept b
where ( a.ename like 'S%' and a.deptno=b.deptno)
union
select
a.empno,a.ename,a.job,
'aaa' AS DNAME
from scott.emp a,scott.dept b
where ( a.ename like 'S%' and a.job not like 'SALES%');
Output:
7369 SMITH CLERK   RESEARCH
7369 SMITH CLERK   aaa
7788 SCOTT ANALYST RESEARCH
7788 SCOTT ANALYST aaa
Quote: Is there another way to write the query so that the UNION can be removed?
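One heavily hedged sketch: if the intent is simply to emit each matching employee twice, once with the real DNAME and once with the literal, a two-row generator can replace the UNION. This assumes every qualifying employee has a matching DEPT row (the original second branch did not require that):

SELECT a.empno, a.ename, a.job,
       CASE x.copy WHEN 1 THEN b.dname ELSE 'aaa' END AS dname
FROM scott.emp a
JOIN scott.dept b ON a.deptno = b.deptno
CROSS JOIN (SELECT LEVEL AS copy FROM dual CONNECT BY LEVEL <= 2) x
WHERE a.ename LIKE 'S%'
AND (x.copy = 1 OR a.job NOT LIKE 'SALES%');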
View 3 Replies
Jun 16, 2011
How many records could I have in a single table without performance degradation on Standard Edition (so no partitioning), with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Is 300 million rows in one table, with 500K transactions per day, too much?
It is a simple database with a simple schema.
How many records begin to be too many?
View 2 Replies
Apr 4, 2013
I have a problem transferring data from a non-partitioned table to a partitioned table.
I have a non-partitioned table, and I have created a new partitioned table with the same columns and types as the non-partitioned one. How can I transfer the data from the non-partitioned table to the partitioned one?
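A minimal sketch, assuming the tables are named EMP_NONPART and EMP_PART and have identical column lists:

-- Direct-path insert moves the rows in one pass, routing each to its partition.
INSERT /*+ APPEND */ INTO emp_part SELECT * FROM emp_nonpart;
COMMIT;

If all existing rows belong in a single partition, ALTER TABLE ... EXCHANGE PARTITION can swap the old table's segment in without moving any rows, and DBMS_REDEFINITION (walked through in a later thread) handles the online case.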
View 10 Replies
Oct 10, 2013
I am trying to improve a procedure that loops through a query to perform inserts.
FOR P IN (
SELECT O.TYPEID
,o.KEY
,O.ID
,O.NAME
,O.LGNUM
,O.LGNAME
[code]....
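Row-by-row inserts inside a cursor loop are usually the bottleneck here; a single INSERT ... SELECT does the same work in one statement, and if procedural logic is unavoidable, BULK COLLECT with FORALL batches the round trips. A hypothetical sketch, with TARGET_TBL and SOURCE_TBL standing in for the real names:

INSERT INTO target_tbl (typeid, key, id, name, lgnum, lgname)
SELECT o.typeid, o.key, o.id, o.name, o.lgnum, o.lgname
FROM source_tbl o;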
View 12 Replies
Jul 13, 2013
I need to create a script that can regularly (maybe twice a day) fetch the error-related details from trace files into a custom table in the DB.
View 3 Replies
Mar 28, 2012
What could be the strategy for deciding which columns to create partitions on? I understand that to decide this we first need to know the columns used in the WHERE clause.
Consider the following scenarios, assuming the emp table is very large.
(1)
Query - select * from emp where empno=<pk_value>
What could the partitioning column be here?
This is confusing: we access with quite selective criteria here, yet across the workload we access a lot of data, and there is no particular date range or flag value to partition by. Would a hash partition on the PK column be useful here?
(2)
select * from emp where empno=<pk_value> and deptno=<some value> - what could the partitioning column be here? I assume deptno. Right?
In general, what are the considerations in deciding the partitioning columns? Whether the column is not a unique key column, or the column is preferably one used in joins, or the column is not updated?
Finally, will pruning take place if the query spans multiple partitions, though not all partitions?
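For case (1), hash partitioning on the primary key is the textbook option; a sketch with assumed column types:

CREATE TABLE emp_hash (
  empno  NUMBER PRIMARY KEY,
  ename  VARCHAR2(30),
  deptno NUMBER
)
PARTITION BY HASH (empno) PARTITIONS 8;  -- a power of 2 keeps hash partitions evenly sized

With an empno = :value predicate the optimizer prunes to exactly one hash partition.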
View 21 Replies
May 4, 2013
I am partitioning the table ap_invoices_all. I have gone through the whole process and have put together a script following these steps:
1. Create the new partitioned table with the same column structure as the original, and with the partitions.
2. Insert data from the original table into the partitioned one, using parallel DML.
3. Rename the indexes of the original table.
4. Create indexes on the partitioned table with the same columns as the original indexes.
5. Save the source code of the original table's triggers.
6. Rename the triggers of the original table to OLD.
7. Do the table renaming: rename the original table to OLD and the partitioned table to the original name.
8. Drop the synonyms for the OLD table and recreate them to point to the new partitioned table.
9. Grant the appropriate privileges on the new partitioned table.
10. Create the triggers on the partitioned table.
I want to know: do I need to copy the constraints of the original table to the partitioned table?
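For reference, the swap in step 7 is just two renames; AP_INVOICES_ALL_PART here is an assumed name for the new partitioned table:

ALTER TABLE ap_invoices_all RENAME TO ap_invoices_all_old;
ALTER TABLE ap_invoices_all_part RENAME TO ap_invoices_all;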
View 10 Replies
Aug 30, 2011
1) I have 5 exported dump files.
2) All 5 dump files were taken at different time periods.
3) Many of those dump files contain the same partition records.
eg:-
Dump 1:- 01-06-2010 to 31-11-2010
Dump 2:- 01-09-2010 to 31-12-2010
4) Now I want to import all that partition data into a single table, without having any duplication.
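One approach, sketched with hypothetical names: import each dump into a staging table (impdp with TABLE_EXISTS_ACTION=APPEND), then MERGE into the final table so the key decides what counts as a duplicate:

MERGE INTO final_tbl f
USING staging_tbl s
ON (f.pk_col = s.pk_col)
WHEN NOT MATCHED THEN
  INSERT (pk_col, col1, col2)
  VALUES (s.pk_col, s.col1, s.col2);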
View 2 Replies
Jun 24, 2011
The query below is taking a long time...
select gam.SOL_ID,COUNT(gam.FORACID) from gam,smt where
gam.ACID=smt.ACID and gam.ACID NOT IN(select ACID from imt) and
gam.SCHM_TYPE in('SBA','CCA','CAA','ODA') and GAM.ACCT_CLS_FLG='N' and
gam.SOL_ID IN(select SOL_ID from IMT) group by gam.SOL_ID
/
Attached is the explain plan, in which the index on the IMT table is not used and the query is doing a full table scan (FTS) on IMT. What needs to be done to avoid the FTS on the IMT table?
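NOT IN often blocks an efficient anti-join because of its NULL semantics. A rewrite with NOT EXISTS (equivalent only if imt.acid is never NULL, which is an assumption here) lets the optimizer drive an anti-join off an index on imt(acid):

SELECT gam.sol_id, COUNT(gam.foracid)
FROM gam, smt
WHERE gam.acid = smt.acid
AND NOT EXISTS (SELECT 1 FROM imt WHERE imt.acid = gam.acid)
AND gam.schm_type IN ('SBA','CCA','CAA','ODA')
AND gam.acct_cls_flg = 'N'
AND EXISTS (SELECT 1 FROM imt WHERE imt.sol_id = gam.sol_id)
GROUP BY gam.sol_id;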
View 10 Replies
Sep 23, 2010
When I run a script that does a select from a single table (the table has 33,521,868 records), the query executes in about 0.094 seconds. When I use the exact same query to insert into a temporary table, it takes 10 minutes or more.
What should I be doing to speed up this process? I also tried using hints, and they did not speed up the insert.
View 3 Replies
Nov 11, 2012
The Item data for individual cycles is as below.
Item_tbl
Item  Rundate    Stddate    Status
P1    03-Nov-12  03-Nov-12  A
P1    04-Nov-12  04-Nov-12  D
P2    04-Nov-12  03-Nov-12  A
The requirement is that I have to get the details of the previous active cycle (status A) as of the point when the item became disabled (status D) on the input date.
In the above case, since for item P1 the status on cycle date 04-Nov-12 is D, I have to consider the previous active cycle, which is 03-Nov-12. Based on that std date, the data is queried from another table to get all the items. Item P2 should not be considered in this case.
Below is the code which I have written, which takes the run date as an input parameter.
-- To get the Items disabled for Input date
with Itemdisabled as
(
select item,stddate maxcycledate
from Item_tbl
where rundate = stddate
[code]....
In the above case I'm querying Item_tbl twice: once to get the disabled items and once to get the previous cycle which is active.
Is there any way to query the above only once and get the required results, using LAG/LEAD functions etc.?
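A sketch of the single-pass version using LAG, assuming rundate orders the cycles within each item and :input_date is the input parameter:

SELECT item, prev_stddate AS active_cycle_date
FROM (
  SELECT item, rundate, stddate, status,
         LAG(stddate) OVER (PARTITION BY item ORDER BY rundate) AS prev_stddate,
         LAG(status)  OVER (PARTITION BY item ORDER BY rundate) AS prev_status
  FROM item_tbl
)
WHERE status = 'D'
AND rundate = :input_date
AND prev_status = 'A';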
View 5 Replies
Feb 19, 2012
How do I find a particular SQL, or a set of SQLs, executed against a given (user-identified) table, that is either very frequently executed against that table or high-impact against that table? I am currently looking through the AWR reports to go through all the queries, but I was wondering if there are any dictionary views where I can find this information.
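For statements still in the shared pool, V$SQL_PLAN records which objects each cursor touches and can be joined back to V$SQLAREA; the owner and table names below are placeholders. DBA_HIST_SQLSTAT and DBA_HIST_SQL_PLAN give the same picture from AWR history:

SELECT s.sql_id, s.executions, s.buffer_gets, s.elapsed_time, s.sql_text
FROM v$sqlarea s
WHERE s.sql_id IN (
  SELECT p.sql_id
  FROM v$sql_plan p
  WHERE p.object_owner = 'APP_OWNER'
  AND p.object_name = 'MY_TABLE'
)
ORDER BY s.executions DESC;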
View 2 Replies
Jul 27, 2012
How do I partition a table which already has data? I have a STUDENT table, with the following fields, that has millions of rows:
studentid, name, class, gender
Now I want to partition this table based on gender, MALE and FEMALE.
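The simplest offline route is a list-partitioned copy plus a direct-path insert; a sketch with assumed column types and assumed gender values 'MALE'/'FEMALE':

CREATE TABLE student_part (
  studentid NUMBER,
  name      VARCHAR2(100),
  class     VARCHAR2(20),
  gender    VARCHAR2(6)
)
PARTITION BY LIST (gender) (
  PARTITION p_male   VALUES ('MALE'),
  PARTITION p_female VALUES ('FEMALE')
);

INSERT /*+ APPEND */ INTO student_part SELECT * FROM student;

For an online conversion without an outage, DBMS_REDEFINITION (walked through in a later thread) is the usual tool.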
View 8 Replies
Sep 23, 2013
I have a partitioned table with 100,000 (1 lakh) records.
If I disable the partitioning feature in my database, will it affect my table data?
View 1 Replies
Jul 11, 2013
The query below is degrading the performance of the database. As we know, without a WHERE clause a query does a full table scan. Here it is written to generate the next sequence number.
SQL> explain plan for
2 SELECT NVL(MAX(P.NUM_SERIAL_NO), 0) + 1 FROM CNFGTR_IRDA_ENVELOPE_DTLS P
3 /
Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3345343365
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
[code].....
No index is created on the column.
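Two common fixes, sketched here. An index on NUM_SERIAL_NO lets the MAX be answered with an INDEX FULL SCAN (MIN/MAX) instead of a full table scan; better still, a real sequence removes both the scan and the race condition that MAX + 1 has under concurrent inserts. START WITH 1 below is a placeholder for the current MAX + 1:

CREATE INDEX cnfgtr_irda_env_ix ON cnfgtr_irda_envelope_dtls (num_serial_no);

CREATE SEQUENCE cnfgtr_irda_env_seq START WITH 1 CACHE 100;
-- then use cnfgtr_irda_env_seq.NEXTVAL in place of SELECT NVL(MAX(...), 0) + 1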
View 6 Replies
Jan 24, 2013
Can we apply the partitioning concept to a table which doesn't have any primary key?
I just want to add one more field as a primary key, with sequence-generated values, while partitioning. Is that possible?
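Partitioning itself does not require a primary key, and a surrogate key can be added independently of the partitioning step; a hypothetical sketch:

ALTER TABLE my_table ADD (row_id NUMBER);
CREATE SEQUENCE my_table_seq CACHE 1000;
UPDATE my_table SET row_id = my_table_seq.NEXTVAL;
ALTER TABLE my_table ADD CONSTRAINT my_table_pk PRIMARY KEY (row_id);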
View 7 Replies
Jun 2, 2010
Can an existing normal table be converted to a partitioned table without recreating the table or truncating/reloading data?
View 4 Replies
Nov 2, 2012
Oracle 10.2.0.4
I partitioned a source table of around 100 million rows (62GB) on the DEV server. The target database was newly created. The table was range-partitioned on a date column as follows:
PARTITION BY RANGE (ENTRY_DATE_TIME)
(
PARTITION ppre2012 values less than (TO_DATE('01/01/2012','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
PARTITION p2012 values less than (TO_DATE('01/01/2013','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
PARTITION p2013 values less than (TO_DATE('01/01/2014','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
PARTITION p2014 values less than (MAXVALUE) TABLESPACE WST_LRG_D
)
That is, on a yearly basis: anything before 2012 went to ppre2012, then p2012, p2013 and so forth. There are 20 million rows in p2012 and around 75 million rows in ppre2012. We needed both the source (un-partitioned) and target (partitioned) tables in DEV for comparison. The queries are normally on the current-year partition. I should state that I am a developer and don't have full visibility of the production instance.
Now that our tests are complete, we would like to promote this to production. Obviously in production we would not need both source and target tables. In all probability this will be performed over a weekend window. Therefore I would like to suggest the following:
1) use expdp to export the source table
2) drop the source table
3) create a new source table, partitioned, with no indexes
4) use impdp to get the data back into the table
5) create the global index (a unique index to enforce uniqueness) and the rest of the indexes as local
6) perform dbms_stats.gather_table_stats(user, 'SOURCE', cascade=>true); this takes around 2 hours in DEV
My question is whether importing 100 million rows will cause issues with the undo segments. Can we import the data in stages, say first the rows for the current partition p2012 (20 million rows)?
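Data Pump can be driven subset by subset with the QUERY parameter, which keeps each import transactionally smaller; a sketch of pulling in only the 2012-and-later rows first (quoting is easiest via a parfile; the parfile name and directory are assumptions, the rest follows the post):

-- import_2012.par (hypothetical parfile)
directory=dp_dir
dumpfile=source_exp.dmp
tables=SOURCE
table_exists_action=append
content=data_only
query=SOURCE:"WHERE entry_date_time >= TO_DATE('01/01/2012','DD/MM/YYYY')"

impdp user/password parfile=import_2012.par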
View 18 Replies
Mar 28, 2013
Can interval partitioning be created on a table for every fortnight? The DB version is 11g.
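Interval partitioning accepts a day-to-second interval, so a 14-day interval works; a minimal sketch with hypothetical names:

CREATE TABLE txn_fortnight (
  txn_id   NUMBER,
  txn_date DATE
)
PARTITION BY RANGE (txn_date)
INTERVAL (NUMTODSINTERVAL(14, 'DAY'))  -- a new partition every 14 days
(
  PARTITION p0 VALUES LESS THAN (DATE '2013-01-01')
);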
View 3 Replies
Apr 27, 2012
I have an OWB mapping which takes input from a staging table and adds those rows to the cube. The underlying table behind the cube is a relational fact table joined to the dimensions using foreign keys. The explain plan behind the query has a rather high cost, and the mapping runs for 30 minutes. As you can see below, at step 17 the cost goes up to 1,396,573, which is also where nested loops start to appear. The query plan is also attached in image format.
Plan
SELECT STATEMENT ALL_ROWS Cost: 1,746,526,275 Bytes: 386,835,904 Cardinality: 464,947
46 NESTED LOOPS OUTER Cost: 1,746,526,275 Bytes: 386,835,904 Cardinality: 464,947
41 NESTED LOOPS OUTER Cost: 1,744,200,663 Bytes: 380,791,593 Cardinality: 464,947
37 NESTED LOOPS OUTER Cost: 1,743,270,415 Bytes: 374,747,282 Cardinality: 464,947
34 NESTED LOOPS OUTER Cost: 1,740,476,128 Bytes: 368,702,971 Cardinality: 464,947
29 NESTED LOOPS OUTER Cost: 1,739,545,862 Bytes: 362,658,660 Cardinality: 464,947
25 NESTED LOOPS OUTER Cost: 1,710,193,475 Bytes: 356,614,349 Cardinality: 464,947
[code].........
View 1 Replies
Jan 23, 2011
I recently started working with legacy code and noticed that some huge tables (5 years' worth of data; I don't have more details on me right now but can post them later if needed) are partitioned on a time sequence-number column, while the majority of queries filter on time (a different column). Query performance is degrading, and I'd like to try modifying the partitioning and run some tests to evaluate the performance improvement.
My only concern is that, with so much live data, I have to come up with a solution for switching the partitioning with the least impact on applications running 24x7. Is there something you have done in the same situation that worked?
View 1 Replies
Jan 28, 2013
I am trying to partition an existing table through DBMS_REDEFINITION. Following are the steps that I have taken and the error I got.
1. Creating a table to be partitioned.
CREATE TABLE SO33070_ORIGINAL
(
SERIAL_ID NUMBER(15,0),
INSERTED_TIME DATE DEFAULT SYSDATE,
PRIMARY KEY (SERIAL_ID)
);
Success
2. Checking if the table can be partitioned
DECLARE
v_name VARCHAR2(256);
BEGIN
SELECT sys_context('userenv', 'current_user') INTO v_name FROM dual;
DBMS_REDEFINITION.CAN_REDEF_TABLE(v_name, 'SO33070_ORIGINAL', dbms_redefinition.CONS_USE_ROWID);
END;
Success
3. Creating a duplicate table
CREATE TABLE SO33070_NEW
(
SERIAL_ID NUMBER(15,0),
INSERTED_TIME DATE DEFAULT SYSDATE
)
PARTITION BY RANGE ("INSERTED_TIME") INTERVAL (NUMTODSINTERVAL(1,'DAY'))
(
PARTITION "p1_1" VALUES LESS THAN (TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
)
Success
4. Starting the redefnition process
EXEC DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'CDS_USER', orig_table => 'SO33070_ORIGINAL', int_table => 'SO33070_NEW', col_mapping => '', options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
Success
5. Copying the dependents
DECLARE
num_errors NUMBER;
BEGIN
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
uname => 'CDS_USER',
orig_table=>'SO33070_ORIGINAL',
[code]....
View 3 Replies
Aug 18, 2010
After running some TPC-C performance benchmarks on Oracle 11gR2, the following results for average transaction time vs. user load were found:
Userload 1 : 60ms
Userload 2 : 55ms
Userload 3 : 25ms
Userload 4 : 25ms
Surely these should be the other way around, with transaction time increasing as the number of users does?
What may have caused this? Or are the user loads too small to be representative?
View 2 Replies
Dec 3, 2010
Which tools are available for monitoring the load on the database?
View 4 Replies
Apr 12, 2013
At the moment we use range-hash partitioning of a large dimension table (dimensional-model warehouse) with 2 levels, range-partitioned on columns only available at the bottom level of the hierarchy: date and issue_id.
The result is a partition with a null value; I assume we would also get a null partition in the large fact table if it were partitioned by reference to the large dimension. The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.
It was suggested that using reference partitioning would get us automatic partition-wise joins. I would have thought we would get that with range-hash partitioning on both tables.
View 3 Replies
View Related