SQL & PL/SQL :: Joins On Two Tables With 10 Million Rows Each
Jun 11, 2010
We have two tables, TableA and TableB, that contain lists of accounts and balances. The requirement is to compare the balances of accounts in both tables and, if there is a difference, record that difference with the account number in another table.
Both TableA and TableB contain more than 10 million rows. What is the best way to do this task in PL/SQL? A join on TableA and TableB to find the differences has become very slow due to the large volume.
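One set-based statement usually beats row-by-row PL/SQL at this volume, since a single hash join can scan both tables once. A hedged sketch, assuming hypothetical column names ACCOUNT_NO and BALANCE and a hypothetical target table ACCOUNT_DIFF:

-- Direct-path insert of the mismatches; one pass over each table.
INSERT /*+ APPEND */ INTO account_diff (account_no, balance_a, balance_b)
SELECT a.account_no, a.balance, b.balance
  FROM tablea a
  JOIN tableb b ON b.account_no = a.account_no
 WHERE a.balance <> b.balance;

Adequate PGA/temp space for the hash join, and parallel query if licensed, are the usual levers when even this is slow.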
We have to load 10 million rows into a table from another table based on multiple joins. How much tablespace should we allocate to the table, and from a performance point of view, how large should the SGA be?
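For the tablespace half of the question, the source table's optimizer statistics give a rough estimate of the space the new rows will need. A hedged sketch, with a hypothetical owner and table name and an assumed ~30% block-overhead factor:

-- Rough size estimate; requires current statistics on the source table.
SELECT ROUND(num_rows * avg_row_len * 1.3 / 1024 / 1024) AS approx_mb
  FROM dba_tables
 WHERE owner = 'SCOTT'            -- hypothetical owner
   AND table_name = 'SOURCE_TAB'; -- hypothetical source table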
I have a huge table with millions of rows and more than 300 columns. Is it possible to keep this table in some memory area where Oracle can access it faster than from disk?
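Oracle can be asked to favor keeping a table's blocks cached. A hedged sketch (the table name is hypothetical); note the KEEP pool only helps if DB_KEEP_CACHE_SIZE is sized to hold the table, which may not be practical for a 300-column multi-million-row table:

-- Assign the table to the KEEP buffer pool:
ALTER TABLE big_table STORAGE (BUFFER_POOL KEEP);

-- Or simply mark it as a good caching candidate in the default pool:
ALTER TABLE big_table CACHE;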
Due to some business requirements, a table field needs to change from DATE to TIMESTAMP in order to handle milliseconds.
1> When I alter the column on a table with 150 million records, will there be a conversion? Is there a recommended way to convert the field? Mind you, this field is used as part of a composite PK.
2> There is an interfacing application which connects and copies the data to its own system and uses the DATE type. Will that application be able to continue working without any changes if it does not care about the milliseconds?
3> Will there be a performance impact on an existing application that uses the date field to sort?
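On the first question: Oracle generally allows widening a populated DATE column to TIMESTAMP in place, so a sketch of the conversion (table and column names hypothetical) could be as simple as the statement below. Existing values get a fractional-seconds part of zero, but on 150 million rows, with the column inside a composite PK, the change should be timed and tested on a copy first.

-- Hypothetical table/column; widening DATE to TIMESTAMP in place.
ALTER TABLE tx_log MODIFY (event_dt TIMESTAMP);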
I am trying to update a column in a table which has 3 columns and 16 million rows from a column in another table which has 1 million rows; there is no relationship between the 2 tables.
Table A has 3 columns and 16 million rows; the first two columns hold 16 million ID numbers, and the 3rd column is currently NULL.
Table B has 1 million numbers. I need to somehow update column 3 in Table A using the numbers in Table B. It doesn't matter how many times each of the 1 million numbers is used, but I don't want it to just update every row to the same value.
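With no join key available, one hedged approach is to number the rows on both sides and map Table B's values onto Table A cyclically, so every one of the 1 million values gets used. A sketch, assuming hypothetical names TABLE_A(ID1, ID2, COL3) and TABLE_B(NUM), and taking the post's figure of exactly 1,000,000 rows in Table B:

MERGE INTO table_a a
USING (
  SELECT ar.rid, b.num
    FROM (SELECT ROWID AS rid,
                 -- cycle row numbers 1..1,000,000 across all 16M rows
                 MOD(ROW_NUMBER() OVER (ORDER BY ROWID) - 1, 1000000) + 1 AS rn
            FROM table_a) ar
    JOIN (SELECT num, ROW_NUMBER() OVER (ORDER BY num) AS rn
            FROM table_b) b
      ON b.rn = ar.rn
) s
ON (a.ROWID = s.rid)
WHEN MATCHED THEN
  UPDATE SET a.col3 = s.num;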
What are ways of improving the performance of a table which holds millions of records in Oracle? Currently we have partitioning and indexing, but they don't seem to help.
However I try, I can't get the joins on the tables right... well, I know I have to work hard. If the join is not right, the data I extract is not right either. Now I have 3 tables...
ps_operations
Name                        Null?    Type
--------------------------- -------- -------------
ASSB_PT_NBR_SEQ_ID          NOT NULL NUMBER(8)
OPERATION_NBR               NOT NULL VARCHAR2(10)
EFFECTIVE_FROM_DT           NOT NULL DATE
DML_TS                      NOT NULL DATE
DML_USER_ID                 NOT NULL VARCHAR2(30)
OPERATION_DESC              NOT NULL VARCHAR2(70)
HOURS_PER_PIECE_QTY         NOT NULL NUMBER(9,6)
PIECES_PER_HOUR_RATE_QTY    NOT NULL NUMBER(15,7)
EFFECTIVE_TO_DT                      DATE
EXTRACT_IND                          VARCHAR2(1)
[code]...
I have never worked with a composite primary key (CPK) and unique key (UK), so I don't know how to use them to join the tables.
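Joining on a composite key just means equating every column of the key. A hedged sketch against PS_OPERATIONS, assuming (purely for illustration) that the composite PK is the first three NOT NULL columns shown above and that a hypothetical second table PS_OPERATION_DETAIL carries the same key columns:

SELECT o.operation_nbr, o.operation_desc, d.some_col
  FROM ps_operations o
  JOIN ps_operation_detail d          -- hypothetical detail table
    ON d.assb_pt_nbr_seq_id = o.assb_pt_nbr_seq_id
   AND d.operation_nbr      = o.operation_nbr
   AND d.effective_from_dt  = o.effective_from_dt;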
I have two tables: A with columns a.key, a.location_code, a.status and a.first_name, and table B with columns b.key, b.location_code, b.status and b.first_name.
I want to find the missing records between the two tables and also check whether each column is populated correctly. That is, if you take the record with id 1, check whether loc_code is the same in both tables, and if they differ, insert the key and the first record's column and the second record's column into a new table. Similarly, if there is no record with that particular id in the second table, insert the record.
For missing records, in the sense of records which are present in A but not in B, I am using:
Select a.key_no, a.loc_code, b.loc_code
  from A, B
 where a.key_no = b.key_no(+)
   and b.key_no IS NULL
But the problem is I need to put some constraints on the B table, like b.status='Married' and b.loc_code='CA'. When I use those conditions in the above query, it throws an error saying the outer join operator cannot be used in an AND/OR condition like that. And I could not figure out how to check that the columns are populated correctly between the two tables while at the same time checking for missing ones.
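In ANSI join syntax the filters on the outer-joined table can go into the ON clause, where they restrict B before the outer join instead of cancelling it. A hedged sketch of the anti-join with those constraints:

SELECT a.key_no, a.loc_code, b.loc_code
  FROM a
  LEFT JOIN b
    ON b.key_no   = a.key_no
   AND b.status   = 'Married'   -- filter applied as part of the outer join
   AND b.loc_code = 'CA'
 WHERE b.key_no IS NULL;        -- rows in A with no qualifying match in B

(In the old (+) syntax the equivalent is to mark the filter predicates too: b.status(+) = 'Married' AND b.loc_code(+) = 'CA'.)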
I am using SQL Developer to access and manipulate a database. I am not very sure about the format of the database (I'm new to databases), but I had to set up the TNS folder.
Anyway, I guess the problem is the same for any database.
I have a table with the BOM (bill of material) positions of certain articles, and I want to change the BOM quantities of some of the articles. What happens is that I can only change some of the rows. For other rows I get a message like this (it is in German, so this is my translation):
"Data was committed in another/the same session already. Row cannot be updated."
This error message sounds as if somebody else has a lock on the database and is manipulating it, correct? Is it possible to see somewhere which processes/people are currently accessing the database?
I saw that there is one process/another database which has the authorization to access the database. But where can I check whether this process is accessing the database?
BTW: I have done this process before, and it worked. I was able to manipulate arbitrary entries in the database. I guess that the process or person mentioned above wasn't accessing the database at that time.
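Sessions currently holding locks on a table can be listed from the V$ views (this requires SELECT privileges on them). A hedged sketch, with a hypothetical table name:

SELECT s.sid, s.serial#, s.username, s.osuser, s.program
  FROM v$session s
  JOIN v$locked_object lo ON lo.session_id = s.sid
  JOIN dba_objects o      ON o.object_id   = lo.object_id
 WHERE o.object_name = 'BOM_POSITIONS';   -- hypothetical table name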
I have the task of determining the number of parts that need to be produced based on the number of products sold for the day (each product consists of many parts).
I am using Oracle Database 11g Express Edition.
The report would look something like this:
OrderDate    PartID  PartDesc               NumOfParts (Total for that day)
10-24-2011   2001    12" X 12" Solid Shelf  108
10-24-2011   2003    12" X 24" Solid Shelf   32
10-24-2011   3001    96" Side Panel          50
[code].......
My issue is, I can't get the equation right to produce the total number of parts. I think I need to multiply ProductPart.NumOfParts by SUM(CustOrder.Qty), grouped by CustOrder.SKU.
Below is what the calculations should look like.
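The original calculation was truncated, but a hedged sketch of the query, using the table and column names mentioned above and assuming SKU is the column linking order lines to part lists, would multiply inside the SUM rather than after it:

SELECT co.orderdate,
       pp.partid,
       pp.partdesc,
       SUM(co.qty * pp.numofparts) AS total_parts  -- per-order qty times parts per product
  FROM custorder co
  JOIN productpart pp ON pp.sku = co.sku           -- assumed join column
 GROUP BY co.orderdate, pp.partid, pp.partdesc
 ORDER BY co.orderdate, pp.partid;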
I work on Oracle 8i with around 950 tables in my database. When I export or import, it reports the number of rows exported/imported for each table. Is it possible to print the number of rows in each table through a single query?
Running SELECT COUNT(*) against each table one at a time takes a long time, since there are 950 tables.
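One anonymous block can do all 950 counts with dynamic SQL (native dynamic SQL is available from 8i). A hedged sketch:

SET SERVEROUTPUT ON SIZE 1000000

DECLARE
  l_cnt NUMBER;
BEGIN
  FOR t IN (SELECT table_name FROM user_tables ORDER BY table_name) LOOP
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM "' || t.table_name || '"'
      INTO l_cnt;
    DBMS_OUTPUT.PUT_LINE(RPAD(t.table_name, 31) || l_cnt);
  END LOOP;
END;
/

(If approximate figures are acceptable, NUM_ROWS in USER_TABLES avoids the full scans entirely, provided statistics are current.)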
Here SUB1 and SUB2 are "tables" and are similar in structure. The "Main_Table" is updated dynamically, and the number of rows in it will vary.
Now my question: I need to create a view which will have all the rows from these tables. In the current case it is something like:
create or replace view sample as
select * from SUB1
union all
select * from SUB2
How can this be achieved? I tried as shown below:
spool file_to_exe.sql
select 'select * from ' || AA || ' union all ' from Main_table;
spool off
But I will end up with an "extra" trailing union all if I do it like this.
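Building the statement in PL/SQL avoids the trailing keyword, because UNION ALL can be prepended from the second table onwards. A hedged sketch (subject to the 32K VARCHAR2 limit if Main_Table grows large):

DECLARE
  l_sql VARCHAR2(32767);
BEGIN
  FOR r IN (SELECT aa FROM main_table ORDER BY aa) LOOP
    IF l_sql IS NOT NULL THEN
      l_sql := l_sql || ' UNION ALL ';   -- only between members, never trailing
    END IF;
    l_sql := l_sql || 'SELECT * FROM ' || r.aa;
  END LOOP;
  EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW sample AS ' || l_sql;
END;
/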
DECLARE
  L_DE_TGUP_ID BBP_ALLOC.DATA_ELEMENT.DATA_ELEMENT_ID%TYPE;
  L_ZERO_EXP   EXCEPTION;
  L_DATA       NUMBER;
BEGIN
  SELECT DATA_ELEMENT_ID
    INTO L_DE_TGUP_ID
    FROM BBP_ALLOC.DATA_ELEMENT
   WHERE DATA_ELEMENT_CD = 'TGUP';

  SELECT DECODE(UPPER('0.0028'), '', 0, '0', 0, 'NULL', 0)
    INTO L_DATA
    FROM DUAL;
[code]......
Here the relationship between the source and target tables is one-to-one, but I am still getting the "unable to get a stable set of rows in the source tables" error. The source query retrieves only 2 rows, which are distinct, but I still get this error.
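That error (ORA-30926) usually means the USING query can deliver the same join-key value more than once, even if the rows as a whole are distinct. A hedged sketch of the usual fix, collapsing the source to one row per key (all names hypothetical):

MERGE INTO target t
USING (SELECT src_key, MAX(src_val) AS src_val   -- one row per key
         FROM source_tab
        GROUP BY src_key) s
   ON (t.tgt_key = s.src_key)
 WHEN MATCHED THEN
   UPDATE SET t.tgt_val = s.src_val;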
A block shouldn't have rows from multiple tables... is that true? I read in an OTN thread (I don't remember the thread name exactly) that a block can have data from multiple tables. If it can't, what does the table directory in the block signify?
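Ordinary heap tables never share a block, but clustered tables do, and that is what the block's table directory records: which tables own rows in that block. A short illustration of the one structure where it comes into play:

CREATE CLUSTER emp_dept_cl (deptno NUMBER(2));

CREATE TABLE dept_c (
  deptno NUMBER(2),
  dname  VARCHAR2(14)
) CLUSTER emp_dept_cl (deptno);

CREATE TABLE emp_c (
  empno  NUMBER(4),
  deptno NUMBER(2)
) CLUSTER emp_dept_cl (deptno);   -- rows of both tables with the same deptno share blocks

-- The cluster index must exist before rows can be inserted.
CREATE INDEX emp_dept_cl_idx ON CLUSTER emp_dept_cl;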
I want to create a procedure or cursor to allocate extents to all tables with zero rows for all users in the database. I have used the query below to check for tables with zero rows and no extents allocated:

select owner, table_name, initial_extent
  from dba_tables
 where initial_extent is null
 order by owner;

I generated the allocation statements by using concatenation in the above query:

select 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT;'
  from dba_tables
 where initial_extent is null
 order by owner;

Now I want the extent allocation to happen automatically for all the tables with zero rows.
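A hedged sketch of that automation, looping over the same DBA_TABLES query and executing the generated statements directly (run as a suitably privileged user; tables created with deferred segment creation may behave differently and are worth testing first):

BEGIN
  FOR t IN (SELECT owner, table_name
              FROM dba_tables
             WHERE initial_extent IS NULL
             ORDER BY owner) LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE "' || t.owner || '"."' ||
                      t.table_name || '" ALLOCATE EXTENT';
  END LOOP;
END;
/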
I am trying to insert rows into two tables using SQL*Loader.
I have two tables in the database:
SQL> desc name
 Name       Null?    Type
 ---------- -------- ------------
 ID                  NUMBER
 NAME                VARCHAR2(20)
 BD                  DATE

SQL> desc name3
 Name       Null?    Type
 ---------- -------- ------------
 ID                  NUMBER
 NAME                VARCHAR2(20)
 BD                  DATE
I created the control file as:
[oracle@DBTEST sqldri]$ cat datafile.ctl
options (direct=true)
load data
INFILE *
into table name
truncate
when id='1'
[code]....
When I run SQL*Loader as:
[oracle@DBTEST sqldri]$ sqlldr hr/hr control=/u01/sqldri/datafile.ctl

SQL*Loader: Release 10.2.0.1.0 - Production on Tue Aug 7 23:30:07 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Load completed - logical record count 2.
no rows are inserted. The log file contains these entries:
[oracle@DBTEST sqldri]$ cat datafile.log

SQL*Loader: Release 10.2.0.1.0 - Production on Tue Aug 7 23:30:07 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.

Control File: /u01/sqldri/datafile.ctl
Data File:    /u01/sqldri/datafile.ctl
Bad File:     /u01/sqldri/datafile.bad
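A likely cause with multiple INTO TABLE clauses: SQL*Loader does not rewind the record between them, so the field list for the second table must restart the scan with POSITION(1), or nothing matches and zero rows load. A hedged sketch of how the control file could be laid out (the field lists and sample data are invented for illustration):

options (direct=true)
load data
infile *
into table name truncate
when id = '1'
fields terminated by ','
(id, name, bd date "DD-MON-YYYY")
into table name3 truncate
when id = '2'
fields terminated by ','
(id position(1), name, bd date "DD-MON-YYYY")   -- position(1) rewinds the record scan
begindata
1,SMITH,01-JAN-1990
2,JONES,02-FEB-1991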
I want to update 8 million records of a table which has 10 million records. What could be the best strategy if the table has a BLOB column with 600GB worth of data? The BLOB itself is 550GB, and I am not updating the BLOB column. With non-BLOB data I have usually used the "CREATE TABLE new_table as select <do the update "here"> from old_table;" method.
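A hedged sketch of that CTAS pattern with the "update" applied in flight (names and predicate hypothetical). Note that the 550GB of BLOB data gets copied too; since the BLOB column itself is not changing, a plain UPDATE of only the scalar columns, which does not rewrite out-of-line LOB data, may be the cheaper route here.

CREATE TABLE new_table NOLOGGING AS
SELECT id,
       CASE WHEN flag = 'Y' THEN 0 ELSE amount END AS amount,  -- hypothetical "update"
       blob_col
  FROM old_table;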
I need to insert almost a million rows into my database. I have already split the rows into separate files to make the task easier. Now I am planning to commit after every 1000 lines so that undo generation is lower and no locking takes place if I am inserting those lines from multiple sessions.
But how can I commit after every 1000 lines?
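A hedged sketch with a simple counter (staging and target table names hypothetical). Worth noting: frequent commits add overhead and increase the risk of ORA-01555, so a single INSERT ... SELECT per file is usually faster if locking allows it.

DECLARE
  l_cnt PLS_INTEGER := 0;
BEGIN
  FOR r IN (SELECT * FROM stage_rows) LOOP
    INSERT INTO target_rows VALUES r;
    l_cnt := l_cnt + 1;
    IF MOD(l_cnt, 1000) = 0 THEN
      COMMIT;                 -- commit every 1,000 inserts
    END IF;
  END LOOP;
  COMMIT;                     -- final partial batch
END;
/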
The problem is: 2 tables, one with 2 million records and the other with 8000 records.
I need to check, for each record in one table, whether there is a similar string in the other table.
I've created a procedure that does the following:
open first cursor (select col1, col2, col3, col4 ... from table 1)
loop
  open second cursor (select col1 from table 2)
  loop
    if utl_match(col1, table2.col1) > 80 then
      insert col1, col2, col3, col4 ... into tableX
    end if
  end loop
  close second cursor
end loop
close first cursor
The thing is that this procedure takes forever to finish... about 8 days.
Is it because I'm using the UTL_MATCH function? Is there a way to speed this up?
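The nested loops mean 2,000,000 x 8,000 = 16 billion comparisons executed one at a time in PL/SQL, plus the cursor overhead. Pushing the whole thing into one SQL statement lets the database run it as a single set operation. A hedged sketch using UTL_MATCH.JARO_WINKLER_SIMILARITY, which returns 0-100 and so fits the "> 80" test (table names hypothetical):

INSERT /*+ APPEND */ INTO tablex (col1, col2, col3, col4)
SELECT t1.col1, t1.col2, t1.col3, t1.col4
  FROM table1 t1, table2 t2
 WHERE UTL_MATCH.JARO_WINKLER_SIMILARITY(t1.col1, t2.col1) > 80;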
I want to know how we can insert more than 3 million records from one table to another table. Can we use BULK COLLECT and FORALL to insert all the data? Or can we use CREATE TABLE tableB AS SELECT * FROM tableA? Which one is better performance-wise?
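For a straight copy, the set-based options are generally faster than a BULK COLLECT/FORALL loop, since direct-path writes bypass the buffer cache and most undo. A hedged sketch of both:

-- Target table already exists:
INSERT /*+ APPEND */ INTO tableb
SELECT * FROM tablea;
COMMIT;

-- Target table does not exist yet:
CREATE TABLE tableb NOLOGGING AS
SELECT * FROM tablea;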
Let's take the basic EMP table for our reference and assume that it contains around 60,000 records and that all the deptno values in the table are initially 10. Please provide an UPDATE statement which would update the deptno column of the EMP table (based on order by EMPNO), incrementing by 1 for every 120 records (deptno to be incremented by 1, like 10, 11, 12, etc.).
The first 120 records' deptno should be 10, the next 120 records' deptno should be 11, and so on. For the last 120 records, deptno should be updated to 500.
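A hedged sketch implementing the increment-by-1 rule with ROW_NUMBER over EMPNO (rows 1-120 get 10, rows 121-240 get 11, and so on; note that 60,000 rows make 500 groups, so the last group actually lands on deptno 509 rather than 500):

MERGE INTO emp e
USING (SELECT empno,
              10 + TRUNC((ROW_NUMBER() OVER (ORDER BY empno) - 1) / 120)
                AS new_deptno   -- group number, starting at 10
         FROM emp) s
   ON (e.empno = s.empno)
 WHEN MATCHED THEN
   UPDATE SET e.deptno = s.new_deptno;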