Jul 30, 2011
I am working on Forms 6i with a 9i database.
I have a table (emp) with columns: empno, ename, job, created_by, creation_date.
My form :
windows: 1. main window 2. find window
Main Window: It contains a data block (emp block) based on table emp, showing 'empno', 'ename', 'job', an 'INSERT' button, and an 'UPDATE' button.
Find Window: It contains 'empno', 'ename', 'job', 'created_by', 'creation_date', and a 'FIND' button.
If I search through the FIND window, it fetches the data from the EMP table and shows it in the MAIN window, in the emp block.
FIND BUTTON: SELECT empno, ename, job
FROM emp
WHERE empno=:blockemp.empno
AND ename=:blockemp.ename
AND job=:blockemp.job
AND created_by=:blockemp.created_by
AND creation_date=:blockemp.creation_date
[code]....
If I query data (F11) on block emp, I can update the data any number of times and it works fine. But if I search through the FIND window and then update a record, it updates the first time; the second time I try to update the record, it gives the error below.
ERROR: FRM-40654: Record has been updated by another user. Re-query to see change.
I understand that the row is being locked when I update it manually (when I search for data through the FIND window).
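FRM-40654 generally means that the values Forms is holding in the block no longer match what is in the database at the moment it tries to lock the row for the update. A common way to avoid that with a find window is to let the base-table block run its own query instead of selecting values into its items from the FIND button. A minimal sketch of the WHEN-BUTTON-PRESSED trigger, assuming the find items are in a block named BLOCKEMP and the base-table block is named EMP_BLOCK (both names are assumptions):

DECLARE
  v_where VARCHAR2(2000) := '1 = 1';
BEGIN
  -- Build a WHERE clause only from the find items the user filled in
  IF :blockemp.empno IS NOT NULL THEN
    v_where := v_where || ' AND empno = ' || :blockemp.empno;
  END IF;
  IF :blockemp.ename IS NOT NULL THEN
    v_where := v_where || ' AND ename = ''' || :blockemp.ename || '''';
  END IF;
  IF :blockemp.job IS NOT NULL THEN
    v_where := v_where || ' AND job = ''' || :blockemp.job || '''';
  END IF;
  -- ONETIME_WHERE applies only to the next query, so a later
  -- plain F11 query on the block is not filtered by it
  SET_BLOCK_PROPERTY('EMP_BLOCK', ONETIME_WHERE, v_where);
  GO_BLOCK('EMP_BLOCK');
  EXECUTE_QUERY;
END;

Because the block fetches the rows itself, the values it compares against the database at update time are the ones it actually queried, so the second update should no longer raise FRM-40654.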
Apr 21, 2010
I have the following case to solve:
Example Table:
Nr_ord  Pos1  Pos2  Item  qty
O4018510000107 170,00
O4018520000107 30,00
O40651010000107 500,00
O40651020000107 50,00
O4114510000107 300,00
O31141010000107 50,00
O3114520000107 50,00
I need to create a query that returns, record by record, a field qty_progr with the cumulative qty, taking the previous records into account. The result should be the following:
Nr_ord  Pos1  Pos2  Item  qty  qty_progr
O4018510000107 170,00 170,00
O4018520000107 30,00 200,00
O40651010000107 500,00 700,00
O40651020000107 50,00 750,00
O4114510000107 300,00 1050,00
O31141010000107 50,00 1100,00
O3114520000107 50,00 1150,00
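For what it's worth, this kind of running total is what the analytic SUM is made for. A sketch, assuming the rows live in a table called ord_positions (the table and column names are assumptions) and that Nr_ord, Pos1, Pos2 define the order in which the quantities accumulate:

SELECT nr_ord, pos1, pos2, item, qty,
       SUM(qty) OVER (ORDER BY nr_ord, pos1, pos2
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS qty_progr
  FROM ord_positions;

If the rows should accumulate in some other order (the sample output above does not follow Nr_ord alphabetically), put that ordering expression in the ORDER BY of the OVER clause instead.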
Oct 21, 2011
I have a CSV file in this format:
qwerty SCHEMATIC FILE
; Version 2007.7.1
Project,qwerty Project,,1,,7,1.5,1.5,1,1,0,,0,1,0,0,0,0,3,1,1,0,1,0,0,0,0,2,3,1,0,1,0,0,0,0,3,3,1,1,1,0,0,0,0,0,0,0,0,
abc,25150-28407.dat,,102,192,42,12632256,1,1,102,192,42,1,12632256,0,6,1,0,32896,1,0,0,0,0,,,-1,-1,-1,-1,0,0,0,-1,-1,-1
xyz, , , 0, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, , , , , , , , , , , 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Product,00094416505678,19133,"24-36""X80"" FOLD DR VIA MEH",,4.5,81,3, 8421504,,0,,L.T.L. WHOLESALE.,abc,,0,0,0,
Product,00094416502345,37154,"24-36""X80"" xyz WHITE",,5,81,3, 8421504,,0,,L.T.L. WHOLESALE.,abc,,0,0,
Product,00094416501111,83120,"24-36""X80"" abc WH",,5,81,3, 8421504,,0,,L.T.L. WHOLESALE.,abc,,0
Can I create an external table to load this file, with the selection of fields driven by the record identifier (the first field of each record), except for the first two lines?
For example, in the above file, for records that have 'abc' in the first field, I want to load fields 2, 3, 6 and 8 (comma-delimited; the values may well be null in those fields), and for records that have 'Product' in their first field, I want to load fields 5, 8, 9 and 10 from the same file. Basically, I want to know whether we can use the INSTR function while choosing the fields line by line based on a search criterion in the file. There will be fewer columns in my table than fields in the CSV file, so I suppose I have to specify the 'MISSING FIELD VALUES ARE NULL' option. There is another challenge too: I have to skip loading the first two lines of the file into the table.
I have written a big PL/SQL procedure that does the same thing using the utl_file.get_line approach, but it is still untested, and I would be extremely happy to learn that this can be achieved by creating an external table too.
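One approach that is sometimes used for mixed-record files like this: expose every comma-separated field as a generic column in the external table, skip the two header lines with SKIP, and then pick the relevant fields per record type at query time. A sketch only; the directory name (data_dir), table name, column names and field widths below are all assumptions:

CREATE TABLE csv_ext (
  rec_type VARCHAR2(30),
  f2  VARCHAR2(200),
  f3  VARCHAR2(200),
  f4  VARCHAR2(200),
  f5  VARCHAR2(200),
  f6  VARCHAR2(200),
  f7  VARCHAR2(200),
  f8  VARCHAR2(200),
  f9  VARCHAR2(200),
  f10 VARCHAR2(200)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    SKIP 2                                  -- drop the first two header lines
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    MISSING FIELD VALUES ARE NULL           -- short records load as NULLs
  )
  LOCATION ('schematic.csv')
)
REJECT LIMIT UNLIMITED;

-- 'abc' records: fields 2, 3, 6 and 8
SELECT f2, f3, f6, f8 FROM csv_ext WHERE rec_type = 'abc';

-- 'Product' records: fields 5, 8, 9 and 10
SELECT f5, f8, f9, f10 FROM csv_ext WHERE rec_type = 'Product';

Only as many generic columns are defined here as the highest field position the question needs (10).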
Nov 11, 2012
Just explaining what I am trying to achieve:
1) I have an hr.departments table that was loaded into the hr schema on 1st Oct 2012 with 4 columns (department_id, department_name, manager_id, location_id).
2) Now I have a new schema under my name, 'rahul', into which I have loaded a departments table, but now an additional column has come into the picture, i.e. created_date; this table was loaded on 1st-Nov-2012.
3) Going forward, columns could also be dropped from the departments table; for example, one day the departments table in my schema 'rahul' might comprise only 3 columns (department_id, department_name, manager_id).
4) In the next step, I have managed to extract the common column names (as a single line in which the columns are delimited by commas) from both tables (hr.departments and rahul.departments), which are department_id, department_name, manager_id, location_id, using the all_tab_cols view; I have written a function for this, which I will paste below.
5) Going forward, using that comma-delimited line of column names, I have used a ref cursor and assigned it a query built from the columns extracted in the point above.
6) Now I want to create a record variable that refers to my ref cursor, something like the record variable we create by referring to an explicit cursor definition given in the declaration block.
PS:
1) I have been out of touch with PL/SQL for a long time, so I have forgotten a lot of it.
2) Basically, I need to compare the data in the hr.departments table with the rahul.departments table for only the columns that are common to both tables; information about new or discarded columns will go into one of the log tables that I have created (this part is done already).
Code :
===================================================================================================
create or replace procedure p_compare_data(fp_old_table_name in varchar2, fp_new_table_name in varchar2)
is
[Code].....
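Not the actual p_compare_data above, but one way to sidestep the record-variable question entirely: since only the common columns matter, the column list can be built from ALL_TAB_COLUMNS and the comparison done set-wise with MINUS in dynamic SQL, so no per-row record type is needed at all. A rough sketch (the procedure name, parameters and the LISTAGG call, which needs 11gR2, are all assumptions):

CREATE OR REPLACE PROCEDURE p_compare_common (
  fp_old_owner IN VARCHAR2,   -- e.g. 'HR'
  fp_new_owner IN VARCHAR2,   -- e.g. 'RAHUL'
  fp_table     IN VARCHAR2    -- e.g. 'DEPARTMENTS'
) IS
  v_cols  VARCHAR2(4000);
  v_diffs NUMBER;
BEGIN
  -- Column names present in both copies of the table
  SELECT LISTAGG(o.column_name, ', ') WITHIN GROUP (ORDER BY o.column_name)
    INTO v_cols
    FROM all_tab_columns o
    JOIN all_tab_columns n
      ON n.column_name = o.column_name
   WHERE o.owner = UPPER(fp_old_owner) AND o.table_name = UPPER(fp_table)
     AND n.owner = UPPER(fp_new_owner) AND n.table_name = UPPER(fp_table);

  -- Rows that differ in either direction, restricted to the common columns
  EXECUTE IMMEDIATE
       'SELECT COUNT(*) FROM ('
    || ' (SELECT ' || v_cols || ' FROM ' || fp_old_owner || '.' || fp_table
    || '  MINUS'
    || '  SELECT ' || v_cols || ' FROM ' || fp_new_owner || '.' || fp_table || ')'
    || ' UNION ALL'
    || ' (SELECT ' || v_cols || ' FROM ' || fp_new_owner || '.' || fp_table
    || '  MINUS'
    || '  SELECT ' || v_cols || ' FROM ' || fp_old_owner || '.' || fp_table || '))'
    INTO v_diffs;

  DBMS_OUTPUT.PUT_LINE('Rows that differ on the common columns: ' || v_diffs);
END p_compare_common;
/

The differing rows themselves (rather than just a count) could be written to the existing log tables with the same dynamic SQL pattern, e.g. an INSERT ... SELECT built around the two MINUS branches.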