PL/SQL :: Update Table Based On The Value Of Another Table
Apr 21, 2013
Using Oracle 11g SOE R2...
I used to have two tables to store details of PROPERTIES, e.g. UNIT_COMMERCIAL and UNIT_RESIDENTIAL. I need to combine both of them into one table called UNIT. I moved the data to the new table, but now I am stuck on how to update the values of the child table, which is called MARKETING.
---
Now, I have two tables:
MARKETING (ID NUMBER PK , OLD_UNIT_ID NUMBER FK ...)
UNIT (NEW_ID NUMBER PK , OLD_UNIT_ID NUMBER ... )
I need to update the value of OLD_UNIT_ID in the MARKETING table to be equal to NEW_ID in the UNIT table. As you can see, the values of OLD_UNIT_ID in both tables are the same.
I used this statement
update (SELECT DISTINCT M.UNIT_ID, U.OLD_ID, U.ID
          FROM MARKETING M
         INNER JOIN UNIT U ON (M.UNIT_ID = U.OLD_ID))
   set unit_id = id

But I got this error:
ORA-01732: data manipulation operation not legal on this view??
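ORA-01732 is raised because the inline view contains DISTINCT, so it is not key-preserved and therefore not updatable. One way around it is a correlated UPDATE that avoids the inline view altogether; this is only a sketch using the column names from the posted statement (UNIT_ID, OLD_ID, ID), and it assumes OLD_ID identifies exactly one UNIT row, so it may need adjusting to the OLD_UNIT_ID/NEW_ID names described above:

-- correlated UPDATE: each MARKETING row picks up the new UNIT id
UPDATE marketing m
   SET m.unit_id = (SELECT u.id
                      FROM unit u
                     WHERE u.old_id = m.unit_id)
 WHERE EXISTS (SELECT 1
                 FROM unit u
                WHERE u.old_id = m.unit_id);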
Now I want to UPDATE the COURSESEATS table, reducing the AVAILABLE column by 1, based on the common columns collegecode and coursecode, for each row inserted into the SEATALLOTMENT table. I am confused about which approach to follow: a procedure or a trigger.
CASE:
Here, as I insert a row with 'krcl' and 'cse' as the college code and course code respectively into the SEATALLOTMENT table, the AVAILABLE column in the COURSESEATS row with the same college code and course code must go from 60 to 59.
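A trigger is the simpler fit here, because SEATALLOTMENT and COURSESEATS are different tables, so there is no mutating-table problem. A minimal sketch, assuming the column names COLLEGECODE, COURSECODE and AVAILABLE from the description:

CREATE OR REPLACE TRIGGER trg_seatallotment_ai
  AFTER INSERT ON seatallotment
  FOR EACH ROW
BEGIN
  -- one seat less for the matching college/course row
  UPDATE courseseats c
     SET c.available = c.available - 1
   WHERE c.collegecode = :NEW.collegecode
     AND c.coursecode  = :NEW.coursecode;
END;
/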
ITEMNUM   STORELOC   lastyear             currentyear
AM1324    AM1        need sum(quantity)   need sum(quantity)
AM1324    AM2        need sum(quantity)   need sum(quantity)
We have to update the lastyear and currentyear columns in the Inventory table with the sum of quantities for each item from the matusetrans table, based on date, at the different locations.
We have nearly 13,000 records (itemnums with different locations) in the Inventory table, and we have to update all of them.
How do I write the SQL queries to update the lastyear and currentyear columns with the sum of quantities based on itemnum and location in the Inventory table?
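A sketch of a correlated UPDATE that fills both columns in one pass over the Inventory table; the MATUSETRANS column names (ITEMNUM, STORELOC, QUANTITY, TRANSDATE) and the calendar-year boundaries are assumptions based on the description:

UPDATE inventory i
   SET i.lastyear = (SELECT NVL(SUM(m.quantity), 0)
                       FROM matusetrans m
                      WHERE m.itemnum   = i.itemnum
                        AND m.storeloc  = i.storeloc
                        AND m.transdate >= ADD_MONTHS(TRUNC(SYSDATE, 'YYYY'), -12)
                        AND m.transdate <  TRUNC(SYSDATE, 'YYYY')),
       i.currentyear = (SELECT NVL(SUM(m.quantity), 0)
                          FROM matusetrans m
                         WHERE m.itemnum   = i.itemnum
                           AND m.storeloc  = i.storeloc
                           AND m.transdate >= TRUNC(SYSDATE, 'YYYY'));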
I was trying to update the AGENTS table with data based on queries, but all the records get updated with the same data, and I can't understand why.
Write a PL/SQL script to populate these columns as follows:
TRAVEL_STATUS is one of three values:
GLOBETROTTER: the agent has visited five or more different locations in the course of his missions.
ROVER: the agent has visited between one and four locations.
SLOB: the agent has been on no missions.
CONTACTS is the number of targets that share the agent's home location.
Some of the tables and columns are:
agents table: agent_id, location_id
targets table: target_id, location_id
missions_agents table: agent_id, mission_id
missions_targets table: target_id, mission_id
missions table: mission_id, location_id
My code is
DECLARE
  status  AGENTS.TRAVEL_STATUS%TYPE;
  contact AGENTS.CONTACTS%TYPE;
  CURSOR loc_cur IS
    SELECT a.AGENT_ID id,
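Although the assignment asks for a PL/SQL script, the symptom (every row getting the same data) usually means the UPDATE inside the cursor loop is missing a WHERE clause tying it to the current agent. A sketch that avoids the loop entirely, using the table and column names listed above; the exact business rules are assumptions:

UPDATE agents a
   SET a.travel_status =
         (SELECT CASE
                   WHEN COUNT(DISTINCT m.location_id) >= 5 THEN 'GLOBETROTTER'
                   WHEN COUNT(DISTINCT m.location_id) >= 1 THEN 'ROVER'
                   ELSE 'SLOB'
                 END
            FROM missions_agents ma
            JOIN missions m ON m.mission_id = ma.mission_id
           WHERE ma.agent_id = a.agent_id),
       a.contacts =
         (SELECT COUNT(*)
            FROM targets t
           WHERE t.location_id = a.location_id);

If a cursor loop is required, the same two subqueries can be run per agent inside the loop, as long as the final UPDATE includes WHERE agent_id = the current agent's id.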
I have two tables, let's say TAB_A and TAB_B. I altered table B to include a new column from table A, and I wrote the following MERGE statement to merge the data:
MERGE INTO TAB_A
USING TAB_B
   ON (TAB_A.SECURITYPERSONKEY = TAB_B.SECURITYPERSONKEY)
 WHEN MATCHED THEN
   UPDATE SET TAB_A.PPYACCOUNT = TAB_B.PPYACCOUNT;
I know INSERT is for inserting new records, and UPDATE, to my knowledge, is for modifying currently existing records (loosely speaking). MERGE is one I rarely used until this particular scenario. The code works perfectly fine, but I was wondering how I could write it as an UPDATE statement. Or, in this scenario, should I even be using an UPDATE statement?
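For a pure "update existing matches" like this, the MERGE can be rewritten as a correlated UPDATE restricted to matching rows; a sketch, assuming SECURITYPERSONKEY is unique in TAB_B:

UPDATE tab_a a
   SET a.ppyaccount = (SELECT b.ppyaccount
                         FROM tab_b b
                        WHERE b.securitypersonkey = a.securitypersonkey)
 WHERE EXISTS (SELECT 1
                 FROM tab_b b
                WHERE b.securitypersonkey = a.securitypersonkey);

Without the WHERE EXISTS, rows in TAB_A that have no match in TAB_B would get PPYACCOUNT set to NULL, which the MERGE does not do.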
I am trying to update records in the target table based on records coming in from the source. For instance, if an incoming record is present in the target table I update it there; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on the calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log, I find that the Informatica code is perfectly fine, but it is the update part that takes a long time (more than 5 days to update one million records). Find the target table DDL and the UPDATE query below.
TARGET TABLE:
CREATE TABLE OPERATIONS.DENIAL_REGRET_FACT (
  CALENDAR_KEY          INTEGER      NOT NULL,
  DAY_TIME_KEY          INTEGER      NOT NULL,
  SITE_KEY              NUMBER       NOT NULL,
  RESERVATION_AGENT_KEY INTEGER      NOT NULL,
  LOSS_CODE             VARCHAR2(30) NOT NULL,
  PROP_ID               VARCHAR2(5)  NOT NULL,
[code].....
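One thing worth checking, as a hedged suggestion only: since the table is partitioned on CALENDAR_KEY, an UPDATE whose WHERE clause does not include the partition key forces Oracle to probe every partition for every row. A sketch of the shape the generated statement would need (column names beyond those in the CREATE TABLE are assumptions):

UPDATE operations.denial_regret_fact f
   SET f.loss_code = :loss_code
 WHERE f.calendar_key          = :calendar_key   -- partition key, enables pruning
   AND f.day_time_key          = :day_time_key
   AND f.site_key              = :site_key
   AND f.reservation_agent_key = :reservation_agent_key;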
We have to load 10 million rows into a table from another table based on multiple joins. How much tablespace should we allocate to the table, and from a performance point of view, how large should the SGA be?
I stumbled upon some weird 11gR2 behavior (running on AIX). When I performed a join between a table with user-based content (parts belonging to a sourcing scope) and a base table (parts available), where the parts have to fulfil a special regular expression, the same query turned out to be faster with an outer join than with an inner join (about 0.7 sec vs. 20 sec), which makes me believe that regexp_like works wrongly when involved in an inner join.
I tried the same statement with a standard LIKE (though not fulfilling the same condition). This time performance was as expected (the inner join outperforming the outer join).
Oracle version information:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
[code]...
As I can see it, the execution plan for the inner join does not show much higher cost than the one for the outer join (but why does an inner join cost more at all?). The execution plan for both "not like" queries is the same and (surprisingly) similar to the "outer-regexp" one.
I hope sample data are not needed, as a lot would be required... this is the second time I have come across the "plan worse but execution time better" phenomenon.
How do I create an SQL script that can update info from one table in dbase1 to another table in dbase2 that has the same columns, and if possible insert the date and time into one column when the synchronization is done?
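A sketch of one way to do this with a MERGE over a database link; the link name (dbase2_link), the key column (ID) and the SYNC_DATE column for the timestamp are all hypothetical and would need to match the real schema:

MERGE INTO target_table@dbase2_link t
USING source_table s
   ON (t.id = s.id)
 WHEN MATCHED THEN
   UPDATE SET t.col1      = s.col1,
              t.col2      = s.col2,
              t.sync_date = SYSDATE
 WHEN NOT MATCHED THEN
   INSERT (id, col1, col2, sync_date)
   VALUES (s.id, s.col1, s.col2, SYSDATE);

If merging into a remote target is a problem in your version, the same statement can be run from dbase2 instead, pulling from the source table over a link pointing back at dbase1.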
My scenario is that I need to insert into a History table when a record is updated in a tabular form (insert the updated record along with the additional columns Action_By, Action_Type (like Update or Delete) and Action_Date into the History table), i.e. the History table contains all the columns of the main table visible in the tabular form, along with these additional columns: Action_By, Action_Type and Action_Date.
So now I don't want to create a before/after update trigger on the base table; rather, I would like to create a generic procedure that inserts the updated record into the History table, taking the page alias and page ID as parameters (a generic procedure being one that applies to all the tabular forms (tables) contained in the application).
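A rough sketch of such a generic procedure using dynamic SQL; the parameter names, the USER-based Action_By value and the convention that each base table has a HIST_<table> counterpart with the three audit columns at the end are assumptions, not the actual application design:

CREATE OR REPLACE PROCEDURE log_history (
  p_table_name IN VARCHAR2,   -- base table behind the tabular form
  p_pk_column  IN VARCHAR2,   -- its primary key column
  p_pk_value   IN VARCHAR2,   -- the row being updated/deleted
  p_action     IN VARCHAR2    -- 'UPDATE' or 'DELETE'
) AS
BEGIN
  EXECUTE IMMEDIATE
    'INSERT INTO hist_' || p_table_name ||
    ' SELECT t.*, :action_by, :action_type, SYSDATE' ||
    '   FROM ' || p_table_name || ' t' ||
    '  WHERE t.' || p_pk_column || ' = :pk'
    USING USER, p_action, p_pk_value;
END log_history;
/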
I am trying to update columns of Table A with the columns of Table B. Both of these tables have 60,000 rows each. I tried this operation using the following two queries:
Query 1
UPDATE TableA A
   SET (A.col1, A.col2, A.col3) = (SELECT B.col1, B.col2, B.col3
                                     FROM TableB B
                                    WHERE A.CODE = B.CODE)
Query 2
UPDATE TableA A
   SET (A.col1, A.col2, A.col3) = (SELECT B.col1, B.col2, B.col3
                                     FROM TableB B
                                    WHERE A.CODE = B.CODE)
 WHERE EXISTS (SELECT 1
                 FROM TableB B
                WHERE A.code = B.code)
When I execute these two queries, they keep running indefinitely.
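If CODE is not indexed in TableB, the correlated subquery is executed once per row of TableA (60,000 probes), each one a full scan of TableB, which easily looks like it runs forever. Two hedged suggestions: create an index on TableB(CODE), or replace the correlated UPDATE with a single MERGE, assuming CODE is unique in TableB:

MERGE INTO TableA a
USING TableB b
   ON (a.code = b.code)
 WHEN MATCHED THEN
   UPDATE SET a.col1 = b.col1,
              a.col2 = b.col2,
              a.col3 = b.col3;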
I need to take a snapshot of a table before an insert or update happens to that table, in Oracle 10g. I am reading the MV docs from Oracle and the link below.
[URL].......
How should the MV be written for this, and how do I schedule it in dbms_jobs for auto refresh?
Assuming that t1 is the table where the DML operations are going to happen (so before any insert or update a snapshot has to be taken), I am assuming that to do this it would look something like this:
create materialized view my_view refresh fast as select * from t1;
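That is close, but a fast-refresh MV needs a materialized view log on the base table, and the refresh schedule can be declared on the MV itself instead of a separate dbms_jobs call. A sketch (the 5-minute interval is only an example):

CREATE MATERIALIZED VIEW LOG ON t1 WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW my_view
  REFRESH FAST
  START WITH SYSDATE
  NEXT SYSDATE + 5/1440
  AS SELECT * FROM t1;

The START WITH / NEXT clause makes Oracle create the refresh job automatically. Note, though, that the MV only reflects t1 as of its last refresh; it does not by itself capture the state immediately before each individual insert or update.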
I have two databases and created a link between them. I can easily query the data, but when I try to update my local records from the remote database, it shows an error.
SQL> update laptop set name =
  2  (select name from laptop@ora_link1 where id between 2 and 4)
  3  where id between 2 and 4;
(select name from laptop@ora_link1 where id between 2 and 4)
*
ERROR at line 2:
ORA-01427: single-row subquery returns more than one row
I need to select multiple rows from the remote DB and update the matching rows in the local DB.
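The subquery returns three remote rows for every local row, hence ORA-01427. Correlating it on the key makes it single-row per local row; a sketch, assuming ID identifies one row on each side:

UPDATE laptop l
   SET l.name = (SELECT r.name
                   FROM laptop@ora_link1 r
                  WHERE r.id = l.id)
 WHERE l.id BETWEEN 2 AND 4;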
I have a base-table (Table A) block with multiple records displayed. I need to track audits to this underlying table in the following way:
If the user updates a field in the block, I want the pre-change record's audit fields to be set, and I need to create a copy of the record with the changed values. Basically, any change will result in the record being logically deleted and a copy record created with the newly changed values.
I tried to implement this in the block's pre-update trigger, which calls a package to directly update Table A and then insert into Table A, and then requery the block. Is there a clean and efficient way to do this?
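One possible shape for the package procedure, as a sketch only; the audit columns (DELETED_FLAG, AUDIT_USER, AUDIT_DATE), the single payload column COL1 and the sequence TABLE_A_SEQ are placeholders for the real Table A layout:

CREATE OR REPLACE PROCEDURE audit_table_a (
  p_id       IN table_a.id%TYPE,
  p_new_col1 IN table_a.col1%TYPE
) AS
BEGIN
  -- logically delete the pre-change row and stamp its audit fields
  UPDATE table_a
     SET deleted_flag = 'Y',
         audit_user   = USER,
         audit_date   = SYSDATE
   WHERE id = p_id;

  -- insert a copy carrying the newly changed values
  INSERT INTO table_a (id, col1, deleted_flag)
  VALUES (table_a_seq.NEXTVAL, p_new_col1, 'N');
END audit_table_a;
/

If this is called from the block's PRE-UPDATE trigger, Forms will still issue its own UPDATE afterwards unless the default update is suppressed, so the old row would otherwise be physically changed as well.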
I want to create a trigger that will update a table when there is an insert or update. I keep getting this error, and I don't even know what it means: "table %s.%s is mutating, trigger/function may not see it".
*Cause: A trigger (or a user defined plsql function that is referenced in this statement) attempted to look at (or modify) a table that was in the middle of being modified by the statement which fired it.
*Action: Rewrite the trigger (or function) so it does not read that table.
CREATE OR REPLACE TRIGGER set_date_end
  BEFORE INSERT OR UPDATE OF issued ON shares_amount
  FOR EACH ROW
DECLARE
BEGIN
  INSERT INTO shares_amount(date_end) VALUES (SYSDATE);
END set_date_end;
/
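The mutating-table error comes from the trigger doing DML against the very table it is defined on (and on insert it would also fire itself again). If the intent is just to stamp DATE_END on the row being inserted or updated, assigning to :NEW needs no extra DML at all; a sketch:

CREATE OR REPLACE TRIGGER set_date_end
  BEFORE INSERT OR UPDATE OF issued ON shares_amount
  FOR EACH ROW
BEGIN
  :NEW.date_end := SYSDATE;
END set_date_end;
/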
How do I adjust a total (counter) after a record is inserted into a table?
The dilemma I am facing is that we are using third-party software for our fundraising operations, so I have no control over what gets done in the background as users process their daily batches into the system. Below is the scenario:
During batch posting, records are inserted into the paytable. On some pledge donations, donors will send overpayments when fulfilling a PLEDGE (as is the case with donor 16084), so the system splits the payment during processing and assigns a trantype of 'PP' to the exact pledge amount and 'PPO' (pledge payment overage) to the balance. Additionally, as records get inserted into the paytable, there is a counter of those paytable records going into the appealtable for that particular appealcode. So, in the case above, when batch number 20120808 is completed, appealtable.total# will show 103 and total$ will show $2532 ($10, $12, $10, ...; I did not include payment$ since that is not the focus of this issue and will not change).
Management wants the counter in the appealtable to be 2 instead of 3, since the two records that were split (same split_transnum) should be recorded as one response, not two.
I have tried writing an after-insert trigger (dreaded mutating table error) and can't seem to figure out how to update the counter in the appealtable after records are inserted into the paytable. Below is some code I've been working with, but it's not working.
CREATE OR REPLACE TRIGGER PPO_Payment
  AFTER INSERT ON paytable
  FOR EACH ROW
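The mutating-table error typically shows up when the row-level trigger tries to read paytable itself (for example, to find the matching 'PP' row with the same split_transnum). One hedged option on 11g and later is a compound trigger: remember the affected appeal codes at row level and correct the counter once, after the statement. The column names (trantype, appealcode, total#) come from the description, but the rule "back out one response per PPO row" is an assumption about how the split should be counted:

CREATE OR REPLACE TRIGGER ppo_payment
  FOR INSERT ON paytable
  COMPOUND TRIGGER

  TYPE t_codes IS TABLE OF paytable.appealcode%TYPE INDEX BY PLS_INTEGER;
  g_codes t_codes;

  AFTER EACH ROW IS
  BEGIN
    -- remember only the overage half of a split payment
    IF :NEW.trantype = 'PPO' THEN
      g_codes(g_codes.COUNT + 1) := :NEW.appealcode;
    END IF;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- each PPO row was already counted by the third-party software,
    -- so back the response counter out by one
    FOR i IN 1 .. g_codes.COUNT LOOP
      UPDATE appealtable a
         SET a.total# = a.total# - 1
       WHERE a.appealcode = g_codes(i);
    END LOOP;
    g_codes.DELETE;
  END AFTER STATEMENT;

END ppo_payment;
/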
I want to update the field DCR of the table TEST1 with the value of the field DCR2 of the table TEST2. In the end, after the update, the table TEST1 would look like this:
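A sketch of a correlated UPDATE for this, assuming the two tables are matched on a common key column ID (the actual join column is not shown in the post):

UPDATE test1 t1
   SET t1.dcr = (SELECT t2.dcr2
                   FROM test2 t2
                  WHERE t2.id = t1.id)
 WHERE EXISTS (SELECT 1
                 FROM test2 t2
                WHERE t2.id = t1.id);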
I want to update a row in a table, say Table A, and the updated row should be inserted into another table, say Table B. I need to do it in a single SQL query, and I don't want to do it in PL/SQL with triggers. I tried with a MERGE statement, but it is not working for this scenario.
(Note: I'm using Oracle Database 10g Enterprise Edition Release 10.2.0.1.0).
I have a column "empno" in the EMP table and "deptno" in the DEPT table. I want to update both columns with a single UPDATE statement, without creating a stored procedure or a view (and updating through the view).
I have partitioned the table based on a field, but when I select by partition or by that field, the explain plan shows a full table access. I am pasting the SQL and the explain plan here. The table has two partitions on BOOKING_DT_WID: one for values less than 20100801 and the other for values less than 99991231.
SELECT * FROM WC_BOOKING_SALESREP_F WHERE BOOKING_DT_WID >= 20100801;
SELECT * FROM WC_BOOKING_SALESREP_F PARTITION (SALESREP_LESS1_99991231);

Here is the explain plan for the same:
SELECT STATEMENT ALL_ROWS Cost: 1,501 Bytes: 293,923,641 Cardinality: 809,707
  4 PX COORDINATOR
[code]....
How do I know if the SQL is doing partition pruning?
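One way to check, as a sketch: run the statement through EXPLAIN PLAN and look at the Pstart/Pstop columns that DBMS_XPLAN prints for partitioned objects. Literal partition numbers (or KEY) there mean pruning is happening; a range covering all partitions means every partition is scanned. Note that "TABLE ACCESS FULL" by itself is not the tell-tale, since a full scan of a single pruned partition is still reported that way.

EXPLAIN PLAN FOR
  SELECT * FROM wc_booking_salesrep_f WHERE booking_dt_wid >= 20100801;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);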
In one of the databases, some of the column values are 'TE'. I wanted to search in which tables these 'TE' values are present, so I am running the function below.
CREATE OR REPLACE FUNCTION find_in_schema(val VARCHAR2)
  RETURN VARCHAR2
IS
  v_old_table user_tab_columns.table_name%TYPE;
  v_where     VARCHAR2(4000);
  v_first_col BOOLEAN := TRUE;
[code]....
But v_where := v_where || ' or ' || r.column_name || ' like ''%' || val || '%''' is giving me a numeric or value error when I run select find_in_schema('@TL') from dual; so how can I go ahead with the search?
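The "numeric or value error" here is usually ORA-06502 from the concatenated WHERE clause overflowing the VARCHAR2(4000) variable once enough columns have been appended. A hedged alternative is to test one column at a time with dynamic SQL, so no long predicate string is ever built; the restriction to CHAR/VARCHAR2 columns of the current schema is an assumption:

CREATE OR REPLACE FUNCTION find_in_schema (val IN VARCHAR2)
  RETURN VARCHAR2
IS
  v_hits  VARCHAR2(32767);
  v_count PLS_INTEGER;
BEGIN
  FOR r IN (SELECT table_name, column_name
              FROM user_tab_columns
             WHERE data_type IN ('VARCHAR2', 'CHAR')) LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM "' || r.table_name || '"' ||
      ' WHERE "' || r.column_name || '" LIKE :v'
      INTO v_count
      USING '%' || val || '%';

    IF v_count > 0 THEN
      v_hits := v_hits || r.table_name || '.' || r.column_name || '; ';
    END IF;
  END LOOP;
  RETURN v_hits;
END find_in_schema;
/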
I am trying to write an UPDATE statement that updates the user IDs in one table with the user IDs from another table. However, I need the UPDATE statement to ignore any duplicates that are in the tables.
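A sketch with hypothetical names (table_a, table_b, user_name, user_id): restricting the update to keys that appear exactly once in the source table keeps the subquery single-row and effectively skips the duplicates.

UPDATE table_a a
   SET a.user_id = (SELECT b.user_id
                      FROM table_b b
                     WHERE b.user_name = a.user_name)
 WHERE EXISTS (SELECT 1
                 FROM table_b b
                WHERE b.user_name = a.user_name
                GROUP BY b.user_name
               HAVING COUNT(*) = 1);

If picking an arbitrary one of the duplicates is acceptable instead, using MIN(b.user_id) in the SET subquery (with a plain EXISTS) does the same job.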