SQL & PL/SQL :: Finding Identical Rows And Updating Records?
May 30, 2013
I need to find the identical rows in the table below based on the ID column, and update each previous identical record's end_date with the latest record's start_date - 1.
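One way to express this, as a hedged sketch (the post's table is not reproduced, so the names my_table, id, start_date and end_date are assumptions): LEAD finds the next record's start_date within each id, and the MERGE writes it back, minus one day, as the current record's end_date.

MERGE INTO my_table t
USING (SELECT ROWID AS rid,
              LEAD(start_date) OVER (PARTITION BY id ORDER BY start_date) - 1 AS new_end
       FROM   my_table) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.end_date = s.new_end
  WHERE s.new_end IS NOT NULL;  -- the latest record of each id keeps its end_date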
Name
_____
Smith Street
Smith Street
John Street
Ed Street
Ed Street
Ed Street
and I need to assign sequence numbers that change only when the record (Name) changes, e.g.:
Name          Seq
_____         ____
Smith Street  1
Smith Street  1
John Street   2
Ed Street     3
Ed Street     3
Ed Street     3
I have experimented with ROW_NUMBER and PARTITION BY, but then the sequence just returns to 1 when the name value changes.

Grouping the records by Name, I would like unique, sequential numbers 1, 2, 3, but where consecutive rows have the same name I would like the sequence to pause and the number to repeat.
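That behaviour is exactly what a running total of change flags gives: mark each row whose name differs from the previous row, then sum the marks. A minimal sketch, assuming a sort_key column that defines the row order (heap tables have no inherent order):

SELECT name,
       SUM(chg) OVER (ORDER BY sort_key) AS seq
FROM  (SELECT name, sort_key,
              CASE WHEN name = LAG(name) OVER (ORDER BY sort_key)
                   THEN 0 ELSE 1
              END AS chg
       FROM   t);

The inner LAG comparison yields 1 at every name change and 0 elsewhere, so the outer running SUM stays flat across repeats and steps up only on a change.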
I have a table with name, count, and flag columns, containing duplicate records.
Example:
with name SWAYAM, the counts are 3, 4
with name RAMANA, the counts are 5, 5
with name REDDY, the counts are 1, 2, 3
I want to update the flag as follows:
If the counts are the same, then mark one of the records flag='A' and the others flag='R'. If the counts are different, then mark the record with the max count flag='A' and reject the remaining ones with flag='R'. Use the queries below:
CREATE TABLE TEST_DUB (
  NAME    VARCHAR2(99),
  V_COUNT NUMBER,
  FLAG    VARCHAR2(1)
);
Insert into TEST_DUB (NAME, V_COUNT) Values ('SWAYAM', 3);
Insert into TEST_DUB (NAME, V_COUNT) Values ('SWAYAM', 4);
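One hedged sketch that covers both rules at once: ROW_NUMBER ordered by v_count descending picks the maximum count per name, and on ties it picks one of the tied rows arbitrarily, which matches "update one of the records".

MERGE INTO test_dub t
USING (SELECT ROWID AS rid,
              ROW_NUMBER() OVER (PARTITION BY name ORDER BY v_count DESC) AS rn
       FROM   test_dub) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.flag = CASE WHEN s.rn = 1 THEN 'A' ELSE 'R' END;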
I have a table emp that contains empno, empname, and mgr. What I want is a general procedure that takes an empno as input and returns all the child rows and the parents for the entered empno.

For example:
E
A-->B-->C-->D
F-->G
H
When I pass D as the node, it should return C, B, A, E, F, G, H.
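A sketch of the usual CONNECT BY approach for the two directions (ancestors upwards, descendants downwards), which a procedure could wrap with :p_empno as its parameter; pulling in the unrelated trees E, F, G and H as well, as the example output shows, would just be a UNION with the remaining rows:

SELECT empno, empname, 'ancestor' AS relation
FROM   emp
START WITH empno = (SELECT mgr FROM emp WHERE empno = :p_empno)
CONNECT BY PRIOR mgr = empno
UNION ALL
SELECT empno, empname, 'descendant' AS relation
FROM   emp
START WITH mgr = :p_empno
CONNECT BY PRIOR empno = mgr;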
I would like to insert the same gross and net column values of ids 7 to 16 into the columns of ids 40 to 49, in the same order, and therefore obtain the result I describe below:
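The expected result itself is not reproduced here, but since ids 40 to 49 line up one-for-one with ids 7 to 16, the copy is a fixed offset of 33. A hedged sketch, assuming a single table my_table(id, gross, net):

UPDATE my_table t
SET    (gross, net) = (SELECT s.gross, s.net
                       FROM   my_table s
                       WHERE  s.id = t.id - 33)
WHERE  t.id BETWEEN 40 AND 49;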
I have made a correlated update statement using ROWID (see my attachment). It updates all the columns I wanted, but the issue is that it does not update everything in the first commit.

Suppose 6 rows are to be updated: the 1st commit updates 1 record, the 2nd commit updates the 2nd record, and so on. Toad shows 6 rows updated at the 1st commit, then 5 rows updated at the 2nd, down to 1 row updated at the last. I want all the records to be updated in the first commit only.
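Without the attachment the exact statement is unknown, but one commonly suggested rewrite is to fold the whole correlated update into a single MERGE, so every matched row changes in one statement and one commit (all names below are placeholders):

MERGE INTO target t
USING source s
ON (t.key_col = s.key_col)
WHEN MATCHED THEN
  UPDATE SET t.col1 = s.col1,
             t.col2 = s.col2;
COMMIT;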
Now, if there is more than one row with the same email, the one with the latest edit date should have its missing fields filled in using the field values of the other rows (if a field is present in more than one row, the row with the next-latest edit date is considered first), and the archived status of all rows with that email except this master row must be set to 1.

The create_date must be set to the minimum of all the create_date values of the rows with the same email. The create table would be as follows:
CREATE TABLE student (
  Id          NUMBER PRIMARY KEY,
  first_name  VARCHAR2(30) NOT NULL,
  last_name   VARCHAR2(30) NOT NULL,
  email       VARCHAR2(30) NOT NULL,
  contact     NUMBER,
  adress1     VARCHAR(30),
  adress2     VARCHAR(30),
  city        VARCHAR(30),
  edit_date   DATE,
  create_date DATE,
  archived    CHAR(1)
)
Sample insert statements would be: insert into student values
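The sample inserts are cut off, but a sketch of the archiving pass is possible from the description alone. This version handles one backfillable field (contact); the same CASE pattern would be repeated per field. FIRST_VALUE ... IGNORE NULLS walks each email's rows from the latest edit_date downwards, which matches "the one with the next latest edit date is to be considered":

MERGE INTO student t
USING (SELECT id,
              ROW_NUMBER() OVER (PARTITION BY email ORDER BY edit_date DESC) AS rn,
              FIRST_VALUE(contact IGNORE NULLS)
                  OVER (PARTITION BY email ORDER BY edit_date DESC) AS best_contact,
              MIN(create_date) OVER (PARTITION BY email) AS min_create
       FROM   student) s
ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.contact     = CASE WHEN s.rn = 1 THEN s.best_contact ELSE t.contact     END,
             t.create_date = CASE WHEN s.rn = 1 THEN s.min_create   ELSE t.create_date END,
             t.archived    = CASE WHEN s.rn = 1 THEN t.archived     ELSE '1'           END;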
I have a form with two data blocks, one parent, one child block.
The parent block holds the mineral lease info while the child holds the mineral owner info, such as addresses and phone numbers. One owner can be in the owner block multiple times (as different owner types). The form only displays one owner at a time.
We have a separate master owner table which holds owner address. (We set it up this way because we get electronic info from mineral companies that we have to load each year).
As you tab through the owner block, it checks the FEIN against the master table and pulls updated address info from the master table.
The problem: if an owner is on the lease multiple times, tabbing through the first instance pulls in the new address info, but on the next instance it won't update. If you requery, it turns out the first update actually updated all of that owner's records on the lease. How can I turn this off?
I have a situation where there are multiple records for a join criteria. I am trying to find a way to update a particular column for all the records returned by the join. Example:
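The example itself is missing from the post, but the generic pattern is a MERGE whose USING clause deduplicates the driving rows, so every target row matched by the join gets the new value exactly once (all names are placeholders):

MERGE INTO t1
USING (SELECT DISTINCT join_key, new_value FROM t2) s
ON (t1.join_key = s.join_key)
WHEN MATCHED THEN
  UPDATE SET t1.some_col = s.new_value;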
I have a table which has plenty of rows - in production I would estimate from 30 million to 300 million. I need to update one column (flag) in all the rows created before a certain date. Now, just saying:

UPDATE MyTable SET flag = 3 WHERE created < to_date('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
COMMIT;

does not seem like a good idea - the single transaction would become too big. I will write a PL/SQL script for this. The question is whether I should:

a) update each row separately and commit after every 10000 rows ( WHERE RowId = [rowId] ), or
b) update 10000 rows per statement using ranges ( WHERE rowId > [some_row_id] AND RowId < [some_row_id_2] ).

In the latter case the some_row_ids would naturally be fetched first; the rowIds come from a sequence. So which one would be more effective? I am not too familiar with PL/SQL, or Oracle for that matter.
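A sketch of option (a) done with bulk binds rather than row-at-a-time work: fetch the rowids in chunks of 10000, update by rowid, commit per chunk. (Fetching across commits risks ORA-01555 if undo is tight; if undo can be sized for the job, the single plain UPDATE is usually simpler and faster.)

DECLARE
  CURSOR c IS
    SELECT ROWID AS rid
    FROM   mytable
    WHERE  created < DATE '2010-10-09';  -- i.e. created before 2010-10-09
  TYPE t_rids IS TABLE OF ROWID;
  l_rids t_rids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rids LIMIT 10000;
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE mytable SET flag = 3 WHERE ROWID = l_rids(i);
    COMMIT;  -- one commit per 10000-row chunk
  END LOOP;
  CLOSE c;
END;
/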
I am trying to bulk update records in Oracle using XML; the front end is VB.NET. When I update 1000-5000 records on my development server, they get updated.

But when we update 100000-200000 records on the production server, we receive the error
"ORA-01460: unimplemented or unreasonable conversion requested "
When I run the code below, it runs very long. It updates SUSR5 in TEMPTABLE3, which has 112000 records. If I change the condition c > m to c > 2 to test, it runs very fast. The value of m is always between 10000 and 12000 - that is how many times it must loop to update the correct records.
I've created a system for managing football within APEX, and it is at a stage now whereby the user can view any number of the tables through Reports and insert data into these tables through Tabular Forms. It's using triggers and sequences to generate new primary keys each time within these Tabular Forms, so I'm at a stage now where I'm really quite pleased with it.

The last thing I need to do now is have it update certain fields when certain records are entered. The results table has the following columns:
club1       (clubId, foreign key from clubs table)
club2       (clubId, foreign key from clubs table)
club1goals  (the amount of goals, type Number)
club2goals  (the amount of goals, type Number)
club1points (points earned, type Number)
club2points (points earned, type Number)
When filling out a result, the user will enter the following (as an example):
club1 - 1        (club with ID 1)
club2 - 12       (club with ID 12)
club1goals - 2   (the first club scored 2 goals)
club2goals - 0   (the second club scored 0 goals)
club1points - 3  (the first club picked up 3 points)
club2points - 0  (the second club picked up 0 points)
The result is then entered into the results table and what I am hoping to achieve at this stage is the following:
1) in the clubs table, the gamesplayed is incremented by 1 for both clubs as a result of playing this fixture
2) club1 has however many goals club1 scored added to its current clubtotalgoals field (in this case, it is of course 2)
3) club1 has however many points club1 earned added to its current clubPoints field (in this case it would be 3) - see the trigger sketch below.
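A hedged sketch of points 1 to 3 as a row-level trigger on the results table, covering both clubs; the table and column names follow the description above but are assumptions about the actual schema:

CREATE OR REPLACE TRIGGER trg_results_update_clubs
AFTER INSERT ON results
FOR EACH ROW
BEGIN
  UPDATE clubs
  SET    gamesplayed    = gamesplayed + 1,
         clubtotalgoals = clubtotalgoals + :NEW.club1goals,
         clubpoints     = clubpoints     + :NEW.club1points
  WHERE  clubid = :NEW.club1;

  UPDATE clubs
  SET    gamesplayed    = gamesplayed + 1,
         clubtotalgoals = clubtotalgoals + :NEW.club2goals,
         clubpoints     = clubpoints     + :NEW.club2points
  WHERE  clubid = :NEW.club2;
END;
/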
I have a table A, whose structure is in the format below.
Table A
ID   DESC  VALUE
123  A     454
123  B     1111
123  C     111
123  D     222
124  A     123
124  B     1
124  C     111
124  D     44
Now I need to insert the data from this table into another table B, the structure of which is as below:
Table B
ID   A    B     C    D
123  454  1111  111  222
124  123  1     111  44
How do I frame a query to fetch the data from table A and insert it into table B? I don't want to use the MAX and DECODE combination, as it would return only a single row for an ID; I need all the IDs to be displayed.
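On 11g the PIVOT clause avoids spelling out the MAX/DECODE pairs by hand; note that the implicit GROUP BY still produces one row per ID, which is exactly the shape Table B shows. A sketch (table names assumed; DESC is a reserved word, hence the quoted identifier):

INSERT INTO table_b (id, a, b, c, d)
SELECT id, a, b, c, d
FROM  (SELECT id, "DESC" AS dsc, value AS val FROM table_a)
PIVOT (MAX(val) FOR dsc IN ('A' AS a, 'B' AS b, 'C' AS c, 'D' AS d));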
I have a set of rows based on a complex view over multiple tables.

I will be updating some of its columns from the front end. Is there any way to lock those rows of data while updating, so that no other users can update them?
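The usual approach is pessimistic locking with SELECT ... FOR UPDATE before the edit. Note that on a join view Oracle only locks rows of the key-preserved base tables, so for a complex view it is safer to lock the base-table rows directly. A minimal sketch:

SELECT *
FROM   some_base_table
WHERE  key_col = :key
FOR UPDATE NOWAIT;  -- fails at once with ORA-00054 if another session holds the lock

Other sessions attempting the same FOR UPDATE (or an UPDATE of those rows) then block, or with NOWAIT fail immediately, until the holder commits or rolls back.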
I have a multi-record block based on a view. All records in the view are displayed in the block by means of a Post-Query trigger when entering the form.
The block has 5 items as follows:
1) RECORD_STATUS = a non-base-table column which is a checkbox.
2) ITEM_TYPE = a text item which has an LOV attached.
3) ITEM_TEXT = a text item which is free-format text.
4) LAST_UPDATE_DATE = a date column.
5) STATUS = a text item, either 'Open' or 'Closed'.
The LOV is based on a table of Item Types with values, say, 'Type1' up to 'Type9'.

I have a When-New-Record-Instance trigger which 'posts' changes to the database. This has been included as I want to limit the values of the ITEM_TYPE column to values which have not been previously used.
Consider this scenario...
The block has 3 records.
Record 1 has 'Closed' status, so no updates are allowed.
Record 2 has 'Open' status, so updating of Item_Text is allowed.
Record 3 has 'Open' status, so updating of Item_Text is allowed.
I check the RECORD_STATUS checkbox on record 2.

(This sets the RECORD_STATUS checkbox to a checked value and changes the STATUS column to 'Closed' via the When-Checkbox-Changed trigger.) At this point the record has not been saved, so unchecking the checkbox would set the STATUS column back to 'Open'. However, at this point I leave it checked (Closed).
I then insert a new record; only the values Item4 to Item9 are correctly shown in the LOV. I select Item4.

I then go back to the previous record and uncheck the checkbox to say that I wish to leave it 'Open' after all (in effect, no changes have occurred), and the STATUS column correctly reverts to 'Open' via my When-Checkbox-Changed trigger. If I then save the changes, the new record is inserted into the database correctly; however, the LAST_UPDATED_DATE of the record which was checked and then unchecked has also been updated, incorrectly, even though no net changes have actually occurred.

(Because I am using the When-New-Record-Instance trigger to POST in order to limit the list of values in the LOV column, it has incorrectly set the previous record's LAST_UPDATED_DATE column to SYSDATE.)
Table 1
Name  Item     Date
Jon   Apples   06/11/2013 00:30:00 hrs
Sam   Oranges
Nish  Apples

Table 2 - Net count
Name  Item     Count
Nish  Apples   10
Nish  Oranges  17
Nish  Banana
Sam   Apples   10
Sam   Oranges  1
Sam   Bananas  1
Jon   Apples   8
I need to create a job that checks Table 1 for new records added after the last run and then adds to the counts in Table 2 accordingly. How can I achieve this using PL/SQL or something similar?
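A hedged sketch of the two pieces: a MERGE that folds rows added since the last run into the net counts, and a DBMS_SCHEDULER job around it. The column names item_date and item_count are assumptions (Date and Count are reserved words), and :last_run stands for a stored watermark from the previous run.

MERGE INTO table2 t
USING (SELECT name, item, COUNT(*) AS cnt
       FROM   table1
       WHERE  item_date > :last_run
       GROUP  BY name, item) s
ON (t.name = s.name AND t.item = s.item)
WHEN MATCHED THEN
  UPDATE SET t.item_count = NVL(t.item_count, 0) + s.cnt
WHEN NOT MATCHED THEN
  INSERT (name, item, item_count) VALUES (s.name, s.item, s.cnt);

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'SYNC_NET_COUNT',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SYNC_NET_COUNT_PRC',  -- a procedure wrapping the MERGE above
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;
/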
I have two tables, eim_asset and eim_asset1. I want to update the table eim_asset1 using the following update SQL (or logic):

update eim_asset1
set emp_emp_login =
    (select login from s_user
     where row_id in
        (select row_id from s_emp_per
         where row_id in
            (select pr_emp_id from s_postn
             where row_id in
                (select position_id from s_accnt_postn
                 where ou_ext_id in
                    (select row_id from s_org_ext
                     where row_id in
                        (select owner_accnt_id from s_asset
                         where owner_accnt_id is not null))))));

It gives me the ORA error ORA-01427: single-row subquery returns more than one row. I know why I am getting it: because of the one-to-many relationship between owner accounts and their assets.
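The chain of uncorrelated IN subqueries collapses every owner's login into one candidate set, hence the error. One hedged rewrite is to correlate the lookup back to the row being updated; the join column asset_row_id is an assumption, since the statement never says how eim_asset1 relates to s_asset, and ROWNUM = 1 guards against any residual duplicates:

UPDATE eim_asset1 e
SET e.emp_emp_login =
    (SELECT u.login
     FROM   s_user u
            JOIN s_emp_per     ep ON ep.row_id        = u.row_id
            JOIN s_postn       p  ON p.pr_emp_id      = ep.row_id
            JOIN s_accnt_postn ap ON ap.position_id   = p.row_id
            JOIN s_org_ext     oe ON oe.row_id        = ap.ou_ext_id
            JOIN s_asset       a  ON a.owner_accnt_id = oe.row_id
     WHERE  a.row_id = e.asset_row_id  -- assumed link back to the asset row
     AND    ROWNUM = 1);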
We have 2 identical databases (say DB1 and DB2). One of them (DB2) got corrupted, and we can't recover it due to a hard disk problem, so we did a new installation of the database and patched it to the current level.

Now I want to make DB2 up and running, so I thought of generating a script from DB1 and running it in DB2 to restore it.
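If a logical copy of DB1 is acceptable (RMAN duplication would be the usual route for a physical restore), Data Pump can generate it. A sketch of the export side via the DBMS_DATAPUMP API, assuming the DATA_PUMP_DIR directory object exists; the resulting dump file would then be imported into DB2:

DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'FULL');
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'db1_full.dmp',
                         directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/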
I have two schemas with 149 tables in each schema. What I need to do is to prove that the content (data) of the two schemas is identical. I know that all the table names between the two schemas are the same; I just need to prove that there is no difference in data.

So the query needs to prove that Schema A content = Schema B content.

I could do a simple SELECT FROM schema_a.tab1 MINUS SELECT FROM schema_b.tab1, but since there are 149 tables I am not sure that writing this out by hand is an efficient way of doing it.
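Rather than hand-writing 149 statements, a PL/SQL loop can generate a symmetric MINUS per table; any table reporting rows differs. A sketch (MINUS ignores duplicate-row multiplicity and errors on LOB columns, so treat this as a first pass):

SET SERVEROUTPUT ON
BEGIN
  FOR t IN (SELECT table_name FROM all_tables WHERE owner = 'SCHEMA_A') LOOP
    DECLARE
      l_diff NUMBER;
    BEGIN
      EXECUTE IMMEDIATE
           'SELECT COUNT(*) FROM ('
        || '(SELECT * FROM SCHEMA_A.' || t.table_name
        || ' MINUS SELECT * FROM SCHEMA_B.' || t.table_name || ')'
        || ' UNION ALL '
        || '(SELECT * FROM SCHEMA_B.' || t.table_name
        || ' MINUS SELECT * FROM SCHEMA_A.' || t.table_name || '))'
        INTO l_diff;
      IF l_diff > 0 THEN
        DBMS_OUTPUT.PUT_LINE(t.table_name || ': ' || l_diff || ' differing rows');
      END IF;
    END;
  END LOOP;
END;
/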
I have a set of data in table ow_ship_det, from which I want to group all the records having the same sl_desc, with the condition that sl_qty is not more than 1000 and sl_wt not more than 50000 per group. I managed to do it, but the problem is that I want the weight (sl_wt) and quantity (sl_qty) to be evenly distributed among the groups, or boxes. For example, take the first four records, which share the sl_desc 'H170' and have weights 15000, 15000, 10000, 10000: with the condition and the loop written in the program, it produces 2 boxes (serial numbers) with the first 3 weights in one box as 40000 and the other box as 10000, which I don't want; instead I want them as 25000 each.
CREATE TABLE OW_SHIP_DET (
  SL_PM_CODE VARCHAR2(12),
  SL_DESC    VARCHAR2(20),
  SL_WT      NUMBER,
  SL_QTY     NUMBER
);
insert into ow_ship_det(sl_pm_code,sl_desc,sl_wt,sl_qty) values ('A','H170',15000,300);
insert into ow_ship_det(sl_pm_code,sl_desc,sl_wt,sl_qty) values ('B','H170',15000,300);
insert into ow_ship_det(sl_pm_code,sl_desc,sl_wt,sl_qty) values ('C','H170',10000,300);
[code]...
--if you see above, the weight is not balanced properly in batch 0001 for the H170 desc; it should be divided equally, as below:

ob_batch  OB_PM_CODE  OB_DESC  OB_QTY  OB_WT
0001      A           H170     300     15000
0001      C           H170     300     10000
0002      B           H170     300     15000
0002      D           H170     300     10000
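A sketch of the even-split idea for this data: work out how many boxes the qty and weight caps force per sl_desc, then deal the rows out round-robin by descending weight. For the four H170 rows this yields exactly the 25000/25000 split shown above (two boxes are forced because the 1200 total qty exceeds 1000); it balances well-behaved data like this sample but is not a general bin-packing solution.

SELECT sl_pm_code, sl_desc, sl_qty, sl_wt,
       MOD(ROW_NUMBER() OVER (PARTITION BY sl_desc ORDER BY sl_wt DESC) - 1,
           GREATEST(CEIL(SUM(sl_qty) OVER (PARTITION BY sl_desc) / 1000),
                    CEIL(SUM(sl_wt)  OVER (PARTITION BY sl_desc) / 50000))) + 1
           AS ob_batch
FROM   ow_ship_det;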
insert into table1(field1,field2) values ('A','1');
insert into table1(field1,field2) values ('A','1');
insert into table1(field1,field2) values ('A','1');
insert into table1(field1,field2) values ('B','2');
insert into table1(field1,field2) values ('B','2');
insert into table1(field1,field2) values ('B','1');
insert into table1(field1,field2) values ('B','1');

SELECT field1 FROM table1 WHERE field2 = all (select '1' from dual);

FIELD1
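Since the subquery returns a single row, = ALL here is just field2 = '1' and returns A, A, A, B, B. If the intent is the question this pattern usually hides - which field1 values have '1' in every one of their rows - a GROUP BY/HAVING form expresses it directly:

SELECT field1
FROM   table1
GROUP  BY field1
HAVING COUNT(*) = COUNT(CASE WHEN field2 = '1' THEN 1 END);

With the sample data this returns only A, since two of B's rows have field2 = '2'.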