SQL & PL/SQL :: Load 10 Million Rows In Table From Another Table Based On Multiple Joins
Sep 24, 2010
We have to load 10 million rows into a table from another table based on multiple joins. How much tablespace should we allocate to the table, and from a performance point of view, how large should the SGA be?
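The tablespace estimate mostly comes from the average row length times 10 million rows (plus index space), and a large hash join leans more on PGA than on SGA. For the load itself, a minimal sketch of a direct-path, parallel insert, with all table and column names hypothetical:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(tgt 4) */ INTO target_table tgt (id, col1, col2)
SELECT /*+ PARALLEL(a 4) PARALLEL(b 4) */ a.id, a.col1, b.col2
FROM   source_a a
JOIN   source_b b ON b.id = a.id;

COMMIT;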
View 11 Replies
Jun 11, 2010
We have two tables, TableA and TableB, that contain a list of accounts and balances. The requirement is to compare the balances of the accounts in both tables and, if there is a difference, record that difference along with the account number in another table.
Both TableA and TableB contain more than 10 million rows. What is the best way to do this task in PL/SQL? A join on TableA and TableB to find the differences has become very slow due to the large volume.
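For volumes like this, a single set-based statement with a hash join usually beats row-by-row PL/SQL. A minimal sketch, assuming hypothetical column names account_no and balance and a difference table balance_diff:

INSERT /*+ APPEND */ INTO balance_diff (account_no, balance_a, balance_b, diff)
SELECT NVL(a.account_no, b.account_no),
       a.balance,
       b.balance,
       NVL(a.balance, 0) - NVL(b.balance, 0)
FROM   tablea a
FULL OUTER JOIN tableb b
       ON b.account_no = a.account_no
WHERE  NVL(a.balance, 0) <> NVL(b.balance, 0);

COMMIT;

The full outer join also catches accounts that exist in only one of the two tables.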
View 20 Replies
View Related
Apr 25, 2012
I am trying to update a column in a table that has 3 columns and 16 million rows from a column in another table that has 1 million rows; there is no relationship between the 2 tables.
Table A has 3 columns and 16 million rows; the first two columns hold 16 million ID numbers, and the 3rd column is currently NULL.
Table B has 1 million numbers. I need to somehow update column 3 in Table A using the numbers in Table B. It doesn't matter how many times each of the 1 million numbers is used, but I don't want the update to set every row to the same value.
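Since there is no join key, one hedged option is to spread Table B's numbers across Table A round-robin by row number, so every B value is reused roughly 16 times; a sketch with hypothetical names table_a, table_b, col3 and val:

MERGE INTO table_a t
USING (
  SELECT ta.rid, tb.val
  FROM  (SELECT ROWID AS rid,
                ROW_NUMBER() OVER (ORDER BY ROWID) AS rn
         FROM   table_a) ta
  JOIN  (SELECT val,
                ROW_NUMBER() OVER (ORDER BY val) AS rn,
                COUNT(*) OVER () AS cnt
         FROM   table_b) tb
    ON  MOD(ta.rn - 1, tb.cnt) + 1 = tb.rn
) m
ON (t.ROWID = m.rid)
WHEN MATCHED THEN
  UPDATE SET t.col3 = m.val;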
View 13 Replies
View Related
Feb 15, 2011
I am trying to update a million rows in one table with the values from another table.
Table being updated CI_ADJ_CHAR column CHAR_VAL_FK1
Table from which values will be used CK_ADJ columns (cx_id, ci_id)
The CI_ADJ_CHAR.CHAR_VAL_FK1 values match CK_ADJ.CX_ID and should be updated with the value CK_ADJ.CI_ID.
The CK_ADJ table has 1.3 million rows and both columns have indexes defined. The table definition is mentioned below.
The CI_ADJ_CHAR table has 14 million rows, of which about 1 million will be updated; it has an index on the ADJ_ID column but not on the CHAR_VAL_FK1 column.
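Assuming CX_ID is unique in CK_ADJ, a single correlated update is the simplest set-based approach (a MERGE cannot be used here because the column being set also appears in the join condition, which raises ORA-38104); a sketch:

UPDATE ci_adj_char c
SET    c.char_val_fk1 = (SELECT k.ci_id
                         FROM   ck_adj k
                         WHERE  k.cx_id = c.char_val_fk1)
WHERE  EXISTS (SELECT 1
               FROM   ck_adj k
               WHERE  k.cx_id = c.char_val_fk1);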
View 1 Replies
View Related
Dec 15, 2010
The following query calls the function get_meter_desc 8 times.
SELECT iu.instd_unit_id, iu.fk_cust_id, bill_cd,
iu.mfg_prod_cd, iu.mfg_prod_seq_no, ccequip.fk_mkt_cd,
eff_tmstmp, cust_ord_id, ord_dt, iu.xnac_co_cd,
iu.xnac_div_cd, smmr_wvr_cd, warr_expn_dt,
iu.auto_replen_ind, iu.ms_stat_cd,
inst_dist_id, instn_dt, iu.last_auto_dt,
iu.replen_freq_vlu, cpc_pln_cd, std_sply_ind,
emcv_total_qty, tot_actl_cpy_qty, init_emcv_tot_qty, ms_tran_cd,
[code]...
How can I replace the function call with a join in the main query?
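The body of get_meter_desc is not shown, but if it only looks up a description by key, the general pattern is to outer join the lookup table once and select the description column directly; a hedged sketch with hypothetical table names instd_unit and meter_desc_tab:

SELECT iu.instd_unit_id,
       iu.fk_cust_id,
       md.meter_desc          -- instead of get_meter_desc(iu.mfg_prod_cd)
FROM   instd_unit iu
LEFT JOIN meter_desc_tab md
       ON md.mfg_prod_cd = iu.mfg_prod_cd;

The outer join preserves rows for which the function would have returned NULL.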
View 19 Replies
View Related
May 17, 2012
I need to load data into multiple tables using SQL*Loader. I have an IN predicate in the WHEN clause, and I don't know whether it is allowed in SQL*Loader or not.
My control file is as below:
LOAD DATA
INFILE 'c: empdemo05.dat'
BADFILE 'c: empad05.bad'
DISCARDFILE 'c: empdisc05.dsc'
REPLACE
[code]....
I am getting the below error when executing the above control file:
SQL*Loader-350: Syntax error at line 5.
Expecting "(", found keyword when.
WHEN DEPTNO IN ('
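A SQL*Loader WHEN clause only supports simple comparisons (=, !=/<>) combined with AND; IN and OR are not allowed, which is what the error is reporting. The hedged workaround is one INTO TABLE clause per value, resetting the scan with POSITION(1) in each clause after the first; a sketch with placeholder file, table and column names:

LOAD DATA
INFILE 'demo05.dat'
BADFILE 'demo05.bad'
DISCARDFILE 'demo05.dsc'
REPLACE
INTO TABLE emp_dept10
WHEN deptno = '10'
FIELDS TERMINATED BY ','
(empno, ename, deptno)
INTO TABLE emp_dept20
WHEN deptno = '20'
FIELDS TERMINATED BY ','
(empno POSITION(1), ename, deptno)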
View 4 Replies
View Related
Jul 17, 2013
Oracle 11g. I have a large table of 125 million records, t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company. I need to select records from this large table that fit certain demographic criteria and insert them into a smaller table, T3_Leads, which will then be updated with when the lead is mailed and other relevant information.
I have tried a variety of things: views, materialized views, direct insert into the smaller table... I think I am probably missing other approaches. My current attempt has been to create a view using the query that selects the records, as shown below, and then a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key. (A sketch follows the view definition below.)
CREATE VIEW V_Market AS
WITH got_pairs AS (
  SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */
         l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state,
         l.household_key, l.hh_type AS l_hh_type, l.address_key, l.narrowband_income,
         l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data,
         l.p1_seq_no, l.p2_seq_no,
         ROW_NUMBER() OVER (PARTITION BY l.address_key
                            ORDER BY l.hh_verification_date DESC) AS r_num
  FROM   t3_universe e
  JOIN   t3_universe l
         ON  l.address_key = e.address_key
         AND l.zip_code    = e.zip_code
         AND l.p1_gender  != e.p1_gender
[code]....
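Oracle does accept a WITH clause inside the subquery of an INSERT ... SELECT, so the intermediate view is not required; a hedged sketch reusing the shape of the posted query (column list shortened, and the WHERE r_num = 1 filter is an assumption about how the ROW_NUMBER is meant to be used):

INSERT /*+ APPEND */ INTO t3_leads (zip_code, address_key, household_key)
WITH got_pairs AS (
  SELECT l.zip_code, l.address_key, l.household_key,
         ROW_NUMBER() OVER (PARTITION BY l.address_key
                            ORDER BY l.hh_verification_date DESC) AS r_num
  FROM   t3_universe e
  JOIN   t3_universe l
         ON  l.address_key = e.address_key
         AND l.zip_code    = e.zip_code
         AND l.p1_gender  != e.p1_gender
)
SELECT zip_code, address_key, household_key
FROM   got_pairs
WHERE  r_num = 1;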
View 2 Replies
View Related
Nov 26, 2010
I am attempting to select back multiple values for a specific key on one row. I have been able to use sys_connect_by_path to combine the fields into one field, but I am unable to assign them to fields of their own. See the example below.
TABLE DETAILS:
Policy id plan name
111 A Plan
111 B Plan
111 Z Plan
112 A Plan
112 Z Plan
My desired result is to be able to show the output as follows:
Policy ID   Plan_1   Plan_2   Plan_3
111         A Plan   B Plan   Z Plan
112         A Plan   Z Plan
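A hedged sketch using ROW_NUMBER plus conditional aggregation (the 11g PIVOT clause is an alternative); the table name policy_plans is assumed:

SELECT policy_id,
       MAX(CASE WHEN rn = 1 THEN plan_name END) AS plan_1,
       MAX(CASE WHEN rn = 2 THEN plan_name END) AS plan_2,
       MAX(CASE WHEN rn = 3 THEN plan_name END) AS plan_3
FROM  (SELECT policy_id, plan_name,
              ROW_NUMBER() OVER (PARTITION BY policy_id
                                 ORDER BY plan_name) AS rn
       FROM   policy_plans)
GROUP BY policy_id
ORDER BY policy_id;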
View 6 Replies
View Related
Mar 29, 2013
I need to generate a number of rows based on a table field (QTY below).
Sample code is given below; a sketch of a row-generator query follows it.
CREATE TABLE T
(
docno VARCHAR2(10),
CODE VARCHAR2(8),
QTY NUMBER(3)
)
LOGGING
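A minimal sketch that expands each row into QTY copies using a CONNECT BY LEVEL row generator:

SELECT t.docno, t.code, t.qty, x.n AS copy_no
FROM   t
JOIN  (SELECT LEVEL AS n
       FROM   dual
       CONNECT BY LEVEL <= (SELECT MAX(qty) FROM t)) x
  ON   x.n <= t.qty
ORDER BY t.docno, x.n;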
View 4 Replies
View Related
Feb 8, 2011
I have two tables. The first contains all segment information within Oracle, i.e.:
Table 1
Segment Description Flex_Value
1 North 1234
1 South 1235
1 East 1236
2 Car C001
2 Boat B001
and the second table contains the financial data, but only the segment codes:
Table 2
Date Segment1 Segment2 Value
01/01/11 1234 C001 10,000
02/01/11 1235 C001 10,000
What I want to return is some of the columns within Table 2 as well as additional columns for the segment descriptions.
Below is an extract of what I have put together. It does return what I need, but I'm sure there is a much more efficient way of writing the query:
SELECT
b.DATE,
b.SEGMENT1,
b.SEGMENT2,
b.SEGMENT3,
b.SEGMENT4,
b.SEGMENT5,
[code].....
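The usual shape is to join the segment-description table once per segment column rather than using a correlated subquery per column; a hedged sketch for the first two segments, with hypothetical table names segment_values (Table 1) and gl_balances (Table 2):

SELECT b.gl_date,
       b.segment1,
       s1.description AS segment1_desc,
       b.segment2,
       s2.description AS segment2_desc,
       b.value
FROM   gl_balances b
LEFT JOIN segment_values s1
       ON  s1.segment    = 1
       AND s1.flex_value = b.segment1
LEFT JOIN segment_values s2
       ON  s2.segment    = 2
       AND s2.flex_value = b.segment2;

The remaining segments follow the same pattern, one outer join each.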
View 6 Replies
View Related
Mar 24, 2013
I have one table with one column holding 2, 3 or 4 machine codes. I need to display them as one row per machine code. Is it possible to do this in SQL? I have thousands of records similar to the test case, and so far I have had to do it manually in Excel and then upload it back.
create table ow_oper_setup (wo_no varchar2(12), mrk_no varchar2(20), pos_no varchar2(30), mc_code varchar2(60));
insert into ow_oper_setup values ('1270','1270001','W165','IR HO BV ');
insert into ow_oper_setup values ('1270','1270001','W1332','IR BV ');
insert into ow_oper_setup values ('1270','1270001','W1367','RE HO SC BV ');
insert into ow_oper_setup values ('1270','1270001','W389','RE HO SC BV');
commit;
SELECT * FROM ow_oper_Setup;
WO_NO  MRK_NO   POS_NO  MC_CODE
1270   1270001  W165    IR HO BV
1270   1270001  W1332   IR BV
1270   1270001  W1367   RE HO SC BV
1270   1270001  W389    RE HO SC BV
--i want the output in the following way or the same table data to be replaced as below
WO_NO  MRK_NO   POS_NO  MC_CODE
1270   1270001  W165    IR
1270   1270001  W165    HO
1270   1270001  W165    BV
1270   1270001  W1332   IR
1270   1270001  W1332   BV
1270   1270001  W1367   RE
1270   1270001  W1367   HO
1270   1270001  W1367   SC
1270   1270001  W1367   BV
1270   1270001  W389    RE
1270   1270001  W389    HO
1270   1270001  W389    SC
1270   1270001  W389    BV
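A hedged sketch that splits the space-delimited MC_CODE into one row per code with REGEXP_SUBSTR and a CONNECT BY row generator (REGEXP_COUNT assumes 11g):

SELECT o.wo_no, o.mrk_no, o.pos_no,
       REGEXP_SUBSTR(o.mc_code, '[^ ]+', 1, x.n) AS mc_code
FROM   ow_oper_setup o
JOIN  (SELECT LEVEL AS n
       FROM   dual
       CONNECT BY LEVEL <= (SELECT MAX(REGEXP_COUNT(mc_code, '[^ ]+'))
                            FROM   ow_oper_setup)) x
  ON   x.n <= REGEXP_COUNT(o.mc_code, '[^ ]+')
ORDER BY o.wo_no, o.mrk_no, o.pos_no, x.n;

The result of this query can then be inserted into a new table or used to replace the existing data.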
View 12 Replies
View Related
Aug 25, 2011
I am new to Oracle and have a request to build a query.
We have a table that generates data from 7 AM to 8 PM; for every hour it generates 4 rows, and each row has 43 session values as 43 columns.
Now I want to find, for every hour, which is the highest session value and at what time it occurred. In one hour the process runs four times, e.g. at 7:00, 7:15, 7:30 and 7:45, and each row has the date, the time and the 43 session columns.
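A hedged sketch with a hypothetical table session_readings, a reading_time column and columns sess01..sess03 standing in for the 43 session columns: take GREATEST across the columns per row, then keep the row with the highest value within each hour.

SELECT *
FROM  (SELECT t.reading_time,
              GREATEST(t.sess01, t.sess02, t.sess03) AS max_sess,   -- extend the list to all 43 columns
              ROW_NUMBER() OVER (PARTITION BY TRUNC(t.reading_time, 'HH')
                                 ORDER BY GREATEST(t.sess01, t.sess02, t.sess03) DESC) AS rn
       FROM   session_readings t)
WHERE  rn = 1;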
View 12 Replies
View Related
Sep 7, 2012
How can I select multiple rows in an OAF table and access them in the code?
View 0 Replies
View Related
Sep 14, 2012
I need to join the ISSUED_REMOVED table with the ITL table, producing one row per unit of quantity.
E.g. if a unit with serial no '354879019900009' has a part (1015268) issued 8 times and then unissued 4 times, the part was finally issued 4 times, so I need 4 rows, each with quantity 1, for that part and unit serial number.
-- ITL Table
Create table ITL_TEST (
ITEM_SERIAL_NO, ITEM_BCN, ITEM_ID, ITEM_PART_NO, OPER_ID,
ISSUED_REMOVED_PARTNO, ISSUED_REMOVED_QUANTITY, QUANTITY, SHIPMENT_ID)
[code]....
-- Issued Removed table
create table ISSUED_REMOVED_ITEM
(REPAIRED_ITEM_ID, ISSUED_REMOVED_ITEM_ID, ISSUED_PART_ID, OPER_ID, ISSUED_REMOVED_QUANTITY)
as select
122013187, 1323938, 1015268, 308, 2 from dual union all select
122013187, 1323939, 1015269, 308, 2 from dual union all select
122013187, 1323940, 1015268, 308, 2 from dual union all select
[code]....
-- The way I need to join the Issued_Removed Table
select * from ITL_TEST ITL
left join
issued_removed_item iri
on iri.REPAIRED_ITEM_ID = ITL.ITEM_ID --ITL.ITEM_ID --rlsn2.item_id --126357561
and iri.oper_id = 308 --in ( 308, 309)
[code]....
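A hedged sketch: compute the net issued quantity per unit and part, then expand it into that many single-quantity rows with a CONNECT BY row generator. The column names follow the posted tables, but the net-quantity arithmetic is an assumption about how issues and removals are stored:

WITH net_qty AS (
  SELECT itl.item_serial_no,
         iri.issued_part_id,
         SUM(iri.issued_removed_quantity) AS qty   -- assumes removals are recorded with negative quantities
  FROM   itl_test itl
  JOIN   issued_removed_item iri
         ON  iri.repaired_item_id = itl.item_id
         AND iri.oper_id = 308
  GROUP BY itl.item_serial_no, iri.issued_part_id
)
SELECT n.item_serial_no, n.issued_part_id, 1 AS qty
FROM   net_qty n
JOIN  (SELECT LEVEL AS l
       FROM   dual
       CONNECT BY LEVEL <= (SELECT MAX(qty) FROM net_qty)) x
  ON   x.l <= n.qty;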
View 1 Replies
View Related
Jul 27, 2012
I have a flat file as source wherein I am getting values like
Comp_id, Comp_name, ISIN, column_name, column_value
The file may contain multiple records for the same entity: Comp_id, Comp_name and ISIN will be the same, while column_name holds the name of the target column into which the corresponding column_value needs to be populated.
E.g. of Feed File -
Comp_id, Comp_name, column_name, column_value
1,HSBC,branch_name,HSBC-DELHI
1,HSBC,branch_add,24-Lajpat Nagar
1,HSBC,branch_phone,2322322
2,HSBC,branch_name,HSBC-MUMBAI
2,HSBC,branch_add,24Andheri
2,HSBC,branch_phone,4445221
2,HSBC,branch_postalcode,400023
Target table structure
Comp_id, Comp_name, branch_name, branch_add, branch_phone, branch_postalcode
I need to insert the above data into the target table by pivoting the feed rows; a sketch follows.
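Assuming the feed is first loaded into a staging table (called feed_staging here, with a hypothetical target branch_target), a conditional-aggregation insert can pivot the column_name/column_value pairs into the target columns; a minimal sketch:

INSERT INTO branch_target (comp_id, comp_name, branch_name, branch_add,
                           branch_phone, branch_postalcode)
SELECT comp_id,
       comp_name,
       MAX(CASE WHEN column_name = 'branch_name'       THEN column_value END),
       MAX(CASE WHEN column_name = 'branch_add'        THEN column_value END),
       MAX(CASE WHEN column_name = 'branch_phone'      THEN column_value END),
       MAX(CASE WHEN column_name = 'branch_postalcode' THEN column_value END)
FROM   feed_staging
GROUP BY comp_id, comp_name;

Rows with no branch_postalcode (like comp_id 1) simply end up with a NULL in that column.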
View 10 Replies
View Related
May 11, 2010
I am working on Pro*C and I have a requirement where I need to select all the rows from a table into a C structure variable. Since I only know the number of rows in the table at run time, I need to create a pointer to the structure and allocate its size based on the count of rows using malloc or calloc. I tried allocating memory using calloc and it does not show any error, but when the EXEC SQL SELECT statement runs it shows an error.
Statements i have used:
struct common *comp;
struct common_ind *comp_i;
comp = (struct common*) calloc(rowcount, sizeof(struct common));
comp_i = (struct common_ind*) calloc(rowcount, sizeof(struct common_ind));
exec sql at db1 select * into :comp indicator :comp_i from tab1;
Errors I get:
Stop Error: -2112
Stop Error: -1012
Stop Error: -1012
View 2 Replies
View Related
Dec 22, 2010
The following control file updates multiple rows in database table.
LOAD DATA
INFILE *
replace
INTO TABLE temp_tab
FIELDS TERMINATED BY ","
(Data LOBFILE(CONSTANT cadd_pass.xml) terminated by eof
There are 21 lines in the XML, so 21 rows are loaded into the table. How can I load it as only one row?
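SQL*Loader creates one table row per record in the data file, not per LOBFILE, so the hedged fix is to give it exactly one record and attach the whole XML file to that record as a CLOB; a minimal sketch, assuming temp_tab has a single CLOB column named data:

LOAD DATA
INFILE *
REPLACE
INTO TABLE temp_tab
FIELDS TERMINATED BY ','
(
  dummy FILLER,
  data  LOBFILE(CONSTANT 'cadd_pass.xml') TERMINATED BY EOF
)
BEGINDATA
x

The single line after BEGINDATA drives one insert, and the entire XML file becomes the value of the data column.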
View 9 Replies
View Related
Aug 13, 2013
I want to know how we can insert more than 3 million records from one table into another table. Can we use BULK COLLECT and FORALL to insert all the data? Or can we use CREATE TABLE tableB AS SELECT * FROM tableA? Of the two, which one is better performance-wise?
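For a straight copy, a set-based direct-path insert (or a CTAS) normally outperforms BULK COLLECT/FORALL because the rows never pass through PL/SQL; both sketches below assume tableA and tableB have the same structure:

-- option 1: the target table already exists
INSERT /*+ APPEND */ INTO tableB
SELECT * FROM tableA;
COMMIT;

-- option 2: create the target in one step
CREATE TABLE tableB NOLOGGING AS
SELECT * FROM tableA;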
View 16 Replies
View Related
Sep 21, 2012
Let's take the basic EMP table for our reference and assume that it contains around 60000 records and that all the DEPTNO values in the table are initially 10. Provide an update statement that updates the DEPTNO column of the EMP table, ordered by EMPNO, incrementing the department by 1 for every 120 records (DEPTNO 10, 11, 12, etc.).
First 120 Records deptno should be 10,
Next 120 Records deptno should be 11, and so on.
...
For Last 120 records deptno should be updated with 500.
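A hedged single-statement approach: number the employees by EMPNO and derive the new department as 9 + CEIL(rn/120), so rows 1-120 get 10, rows 121-240 get 11, and so on:

MERGE INTO emp e
USING (SELECT empno,
              9 + CEIL(ROW_NUMBER() OVER (ORDER BY empno) / 120) AS new_deptno
       FROM   emp) s
ON (e.empno = s.empno)
WHEN MATCHED THEN
  UPDATE SET e.deptno = s.new_deptno;

With 60000 rows this produces 500 distinct departments, numbered 10 through 509.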
View 3 Replies
View Related
Sep 16, 2013
What are the possible ways to update a 10-million-row table based on a certain condition?
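The usual options are a single set-based UPDATE or MERGE, chunking the work with DBMS_PARALLEL_EXECUTE, or rebuilding the table with a CTAS and swapping it in; a hedged sketch of the CTAS route with hypothetical names:

CREATE TABLE big_table_new NOLOGGING AS
SELECT t.id,
       CASE WHEN t.status = 'OLD' THEN 'NEW' ELSE t.status END AS status,
       t.other_col
FROM   big_table t;
-- then recreate indexes and constraints and swap the tables with RENAME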
View 2 Replies
View Related
Oct 9, 2013
I have a table with 129 million records.
If I just select count(*) on the table, it takes more than a minute in SQL Developer.
The table structure is as below: the primary key is a sequence, then 3 foreign keys, and one non-unique index on the date column.
<Table_Name>
column1 NOT NULL NUMBER ( Primary Key)
column2 NOT NULL NUMBER ( FK1)
[Code].....
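COUNT(*) still has to read every entry of the smallest suitable index (usually the primary key) or the whole table, so a minute on 129 million rows is not unusual. Two hedged ways to get the number faster, assuming the table is called big_table:

-- parallelise the ad-hoc count
SELECT /*+ PARALLEL(t 8) */ COUNT(*)
FROM   big_table t;

-- approximate count, as of the last statistics gathering
SELECT num_rows
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';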
View 1 Replies
View Related
Mar 21, 2011
Can I partition a table based on a column that is in another table?
For example, Table X needs to be partitioned based on a column in Table Y, and Table X and Table Y are related.
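You cannot partition directly on another table's column, but if Table X has a foreign key to Table Y, 11g reference partitioning lets X inherit Y's partitioning; a hedged sketch with made-up columns:

CREATE TABLE y (
  y_id   NUMBER PRIMARY KEY,
  region VARCHAR2(10)
)
PARTITION BY LIST (region) (
  PARTITION p_north VALUES ('NORTH'),
  PARTITION p_south VALUES ('SOUTH')
);

CREATE TABLE x (
  x_id NUMBER PRIMARY KEY,
  y_id NUMBER NOT NULL,
  CONSTRAINT fk_x_y FOREIGN KEY (y_id) REFERENCES y (y_id)
)
PARTITION BY REFERENCE (fk_x_y);

Rows of X are then stored in the partition of the Y row they reference.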
View 2 Replies
View Related
Dec 19, 2011
I would like to know if we can insert 300 million records into an Oracle table using a database link. The target table is in production and the source table is in development, on different servers. The target table will be empty and will have its indexes disabled before the insert. Can this be accomplished in less than 1 hour?
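Whether one hour is achievable depends mostly on the network and storage, but the usual fastest pattern is a direct-path insert pulled across the link with parallel DML enabled on the target side (the remote side of a link typically runs serially); a hedged sketch with a hypothetical link name:

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t 8) */ INTO target_table t
SELECT * FROM source_table@dev_link;

COMMIT;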
View 26 Replies
View Related
Sep 2, 2011
I have two large tables (rptbody and rpthead) which have millions of records or more. Below is the table schema.
describe rpthead
Name Null Type
--------------------------- -------- -------------
RPTNO NOT NULL NUMBER
RPTDATE NOT NULL DATE
RPTD_BY NOT NULL VARCHAR2(25)
PRODUCT_ID NOT NULL NUMBER
[code]...
What I want is to get all data from the rptbody table whose referenced RPTNO belongs to a particular product_id. Here's the SQL:
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
t0.RPTNO IN
(
SELECT t1.RPTNO FROM RPTHEAD t1 where t1.PRODUCT_ID IN ('4647')
)
)
ORDER BY t0.LINENO
Since the result set is pretty large, my application (think of it as a couple of jobs, each of which must finish within a time window) can only process a subset of the data at a time, so I need pagination so that the next job can continue processing until all data is processed. Below is the SQL with pagination:
select * from (
select a.*, ROWNUM rnum from
(
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
[code]....
As you can see, each query takes 100 rows from the database. The problem for now is that the query takes too much time (10+ minutes). I know the slowness is due to "ORDER BY t0.LINENO", but it's required for pagination.
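Since each job only needs the next batch, keyset (seek) pagination avoids numbering the whole result set: remember the last LINENO processed and ask for the next 100 rows after it. A hedged sketch, assuming LINENO is unique across the result (otherwise combine it with RPTNO):

SELECT *
FROM  (SELECT t0.lineno, t0.comments, t0.rptno, t0.upd_date
       FROM   rptbody t0
       WHERE  t0.lineno > :last_lineno
       AND    t0.rptno IN (SELECT t1.rptno
                           FROM   rpthead t1
                           WHERE  t1.product_id IN ('4647'))
       ORDER BY t0.lineno)
WHERE ROWNUM <= 100;

With an index on RPTBODY(LINENO), each call can walk the index from the saved position instead of sorting the full set.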
View 4 Replies
View Related
Mar 28, 2011
I have a huge table with millions of rows and more than 300 columns. Is it possible to keep this table in some memory area where Oracle can access it faster than from disk?
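You cannot pin a whole table in memory outright in this release, but you can assign it to the KEEP buffer pool (sized with DB_KEEP_CACHE_SIZE) so its blocks tend to stay cached after the first reads; a hedged sketch with a hypothetical size and table name:

ALTER SYSTEM SET db_keep_cache_size = 2G SCOPE = BOTH;   -- size it to hold the table
ALTER TABLE big_table STORAGE (BUFFER_POOL KEEP);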
View 8 Replies
View Related
Mar 28, 2013
As per an article on ORACLE-BASE, I have converted a non-partitioned table (1 million rows) into a range-partitioned table, but I don't see any performance improvement in the explain plan.
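Partitioning only speeds a query up when the optimizer can prune partitions, i.e. when the WHERE clause filters on the partition key; a hedged illustration with made-up names:

-- can prune: only the partitions covering March 2013 are scanned
SELECT COUNT(*)
FROM   sales_part
WHERE  sale_date >= DATE '2013-03-01'
AND    sale_date <  DATE '2013-04-01';

-- cannot prune: no predicate on the partition key, so all partitions are scanned
SELECT COUNT(*)
FROM   sales_part
WHERE  customer_id = 42;

If the query does not filter on the partition key, a 1-million-row table will not get faster just by being partitioned.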
View 9 Replies
View Related
Jun 27, 2013
We need to load data from an index-by table into a table. The code below is working fine.
declare
query varchar2(200);
Type l_emp is TABLE OF emp%rowtype INDEX BY Binary_Integer;
rec_1 l_emp;
begin
[Code]....
But the source table and target table are dynamic. In the above code, the source (emp) and the target (emp_b) are static. In our scenario the target depends on the source table: if the source is emp then the target is emp_b; if the source is emp1 then the target is emp_b1, and so on.
create or replace procedure p(source in varchar2, target in varchar2)
as
query varchar2(200);
source varchar2(200);
Type l_emp is TABLE OF emp%rowtype INDEX BY Binary_Integer;
rec_1 l_emp;
[Code]....
It throws an error. How can I implement this scenario?
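%ROWTYPE and the INDEX BY collection are resolved at compile time, so they cannot follow a table name that is only known at run time. If the source and target always have the same structure, the whole copy can be a single piece of dynamic SQL; a hedged sketch (DBMS_ASSERT guards the concatenated names):

CREATE OR REPLACE PROCEDURE p (source IN VARCHAR2, target IN VARCHAR2) AS
BEGIN
  EXECUTE IMMEDIATE
    'INSERT INTO ' || DBMS_ASSERT.simple_sql_name(target) ||
    ' SELECT * FROM ' || DBMS_ASSERT.simple_sql_name(source);
  COMMIT;
END;
/

If intermediate PL/SQL processing of the rows really is required, the collection type would have to be declared against a structure common to all the source tables.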
View 2 Replies
View Related
Apr 29, 2011
I have a performance problem with a simple query, for example:
SELECT *
FROM CA_ADDRESS
WHERE address_k1 = 'L163'
I have an index created on column address_k1. CA_ADDRESS is a 3-4 million record table.
I need to improve the performance of the query. How can I improve it?
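The first hedged steps are to confirm that the index is actually used and that the statistics are current:

EXPLAIN PLAN FOR
SELECT * FROM ca_address WHERE address_k1 = 'L163';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

BEGIN
  DBMS_STATS.gather_table_stats(ownname => USER,
                                tabname => 'CA_ADDRESS',
                                cascade => TRUE);
END;
/

If the plan shows a full table scan despite the index, check the column's datatype and selectivity; an implicit datatype conversion on address_k1 would also disable the index.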
View 3 Replies
View Related
Oct 5, 2010
Due to some business requirements, a table field needs to change from DATE to TIMESTAMP in order to handle milliseconds.
1> When I alter the column on a table with 150 million records, will there be a conversion? Is there a recommended way to convert the field? Mind you, this field is used as part of a composite PK.
2> There is an interfacing application which connects and copies the data to its system and uses the DATE type; will that application be able to continue to work without any changes if it does not care about the milliseconds?
3> Will there be a performance impact on an existing application that uses the field to sort?
4> Will the DB need more space due to the change?
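A hedged sketch of the two usual routes (table and column names assumed; with the column in a composite PK and 150 million rows, either route should be rehearsed on a copy first):

-- in-place widening; existing values get .000000 fractional seconds
ALTER TABLE big_table MODIFY (event_dt TIMESTAMP);

-- fallback if the in-place change is not accepted: add, copy, then swap the columns
ALTER TABLE big_table ADD (event_ts TIMESTAMP);
UPDATE big_table SET event_ts = CAST(event_dt AS TIMESTAMP);

TIMESTAMP values are also larger than DATE values (up to 11 bytes versus 7), so some extra space should be expected.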
View 3 Replies
View Related
Jul 30, 2012
I stumbled upon some weird 11gR2 behavior (running on AIX). When I performed a join between a table with user-based content (parts belonging to a sourcing scope) and a base table (parts available), where the parts have to fulfill a special regular expression, it turned out that the same query is faster when using an outer join than an inner join (about 0.7 sec vs. 20 sec), which makes me suspect that REGEXP_LIKE behaves badly when involved in an inner join.
I tried the same statement with a standard LIKE (not fulfilling the same condition). This time performance was as expected (the inner join outperforming the outer join).
Oracle version information
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
[code]...
As far as I can see, the execution plan for the inner join doesn't show much more cost than the one for the outer join (but why does an inner join cost more at all?). The execution plan for both "not like" variants is the same and (surprisingly) similar to the outer-join regexp case.
I hope sample data are not needed, as a lot would be required. This is the second time I have come across the "plan worse but execution time better" phenomenon.
View 10 Replies
View Related