I need to write a query in PL/SQL that selects all rows for the first 3 distinct values of a single column (ID in the example below), then all rows for the next 3 distinct values of that column, and so on until all distinct values of the column are exhausted.
eg:

ID  name  age
1   abc   10
1   def   20
2   ghi   10
2   jkl   20
2   mno   60
3   pqr   10
4   rst   10
4   tuv   10
5   vwx   10
6   xyz   10
6   hij   10
7   lmn   10
... and so on (till some count)

Result should be:

Query 1 should return --->
ID  name  age
1   abc   10
1   def   20
2   ghi   10
2   jkl   20
2   mno   60
3   pqr   10

Query 2 should return -->
4   rst   10
4   tuv   10
5   vwx   10
6   xyz   10
6   hij   10

Query 3 should return -->
7   lmn   10
.
.
9   ..    ..
and so on...
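A minimal sketch of one way to do this, assuming the table is called MY_TABLE (adjust names to the real table): rank the distinct ID values with DENSE_RANK, bucket every 3 ranks into a group, and filter on the group number.

    -- Sketch: :bucket = 1 returns rows for the first 3 distinct IDs,
    -- :bucket = 2 the next 3, and so on. MY_TABLE is a placeholder name.
    SELECT id, name, age
    FROM (
        SELECT t.*,
               CEIL(DENSE_RANK() OVER (ORDER BY id) / 3) AS id_bucket
        FROM   my_table t
    )
    WHERE id_bucket = :bucket
    ORDER BY id;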
I am trying to update records in a target table based on records coming in from a source. If an incoming record is already present in the target table I update it; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on the calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log I find that the mapping itself is fine, but the update part takes a very long time (more than 5 days to update one million records). The TARGET TABLE DDL and the UPDATE query are below.
TARGET TABLE:

CREATE TABLE OPERATIONS.DENIAL_REGRET_FACT
(
    CALENDAR_KEY          INTEGER      NOT NULL,
    DAY_TIME_KEY          INTEGER      NOT NULL,
    SITE_KEY              NUMBER       NOT NULL,
    RESERVATION_AGENT_KEY INTEGER      NOT NULL,
    LOSS_CODE             VARCHAR2(30) NOT NULL,
    PROP_ID               VARCHAR2(5)  NOT NULL,
[code].....
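For reference, a minimal sketch of the same upsert logic expressed as a single MERGE. Since the actual UPDATE query is elided, the staging table SOURCE_STG, the join keys, and the updated columns below are all assumptions:

    -- Sketch only: key columns and the SOURCE_STG staging table are assumptions.
    MERGE INTO operations.denial_regret_fact t
    USING source_stg s
    ON (    t.calendar_key = s.calendar_key
        AND t.day_time_key = s.day_time_key
        AND t.site_key     = s.site_key )
    WHEN MATCHED THEN
        UPDATE SET t.loss_code = s.loss_code
    WHEN NOT MATCHED THEN
        INSERT (calendar_key, day_time_key, site_key,
                reservation_agent_key, loss_code, prop_id)
        VALUES (s.calendar_key, s.day_time_key, s.site_key,
                s.reservation_agent_key, s.loss_code, s.prop_id);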
Now I want to UPDATE the COURSESEATS table, reducing the AVAILABLE column by 1, based on the common columns COLLEGECODE and COURSECODE, whenever a row is inserted into the SEATALLOTMENT table. I am confused about which approach to follow: a procedure or a trigger.
CASE:
In this case, when I insert a row into SEATALLOTMENT with 'krcl' and 'cse' as the college code and course code respectively, the AVAILABLE column in the matching COURSESEATS row must drop from 60 to 59.
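A minimal sketch of how this could be done with a row-level AFTER INSERT trigger; the column names are taken from the description above and may need adjusting to the actual table definitions:

    -- Sketch: assumes SEATALLOTMENT and COURSESEATS both have COLLEGECODE and COURSECODE.
    CREATE OR REPLACE TRIGGER trg_seatallotment_ai
    AFTER INSERT ON seatallotment
    FOR EACH ROW
    BEGIN
        UPDATE courseseats c
        SET    c.available = c.available - 1
        WHERE  c.collegecode = :NEW.collegecode
        AND    c.coursecode  = :NEW.coursecode;
    END;
    /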
I am trying to create a procedure that creates partitions based on the date values in another table, one partition per day.
CREATE OR REPLACE PROCEDURE PARTITION_TEST (
    PART_DATE_TABLE IN VARCHAR2,
    TABLE_NAME      IN VARCHAR2,
    SCHEMA_NAME     IN VARCHAR2
) AS
    V_PART_NM     VARCHAR2(20);
    V_PART_CNT    NUMBER;
    V_DATE        DATE;
    V_SCHEMA_NAME VARCHAR(15);
[Code]..
It does not create the partitions, and it does not give any errors either.
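Since the body of the procedure is not shown, here is only a hedged sketch of the key points: DDL such as ALTER TABLE ... ADD PARTITION cannot be run as static SQL inside PL/SQL, so each partition has to be added with EXECUTE IMMEDIATE, and the driving table name is only known at run time, so the query over it has to be dynamic as well. The PART_DATE column and the partition-naming convention below are assumptions:

    -- Sketch: one range partition per distinct day found in the driving table.
    CREATE OR REPLACE PROCEDURE partition_test (
        part_date_table IN VARCHAR2,
        table_name      IN VARCHAR2,
        schema_name     IN VARCHAR2
    ) AS
        TYPE t_dates IS TABLE OF DATE;
        v_dates t_dates;
    BEGIN
        -- The driving table name is a parameter, so this query must be dynamic too.
        EXECUTE IMMEDIATE
            'SELECT DISTINCT TRUNC(part_date) FROM ' || part_date_table
            BULK COLLECT INTO v_dates;

        FOR i IN 1 .. v_dates.COUNT LOOP
            EXECUTE IMMEDIATE
                'ALTER TABLE ' || schema_name || '.' || table_name ||
                ' ADD PARTITION P_' || TO_CHAR(v_dates(i), 'YYYYMMDD') ||
                ' VALUES LESS THAN (DATE ''' ||
                TO_CHAR(v_dates(i) + 1, 'YYYY-MM-DD') || ''')';
        END LOOP;
    END;
    /

Letting any exception propagate (rather than catching and ignoring it) is what makes a silent failure like "no partitions and no errors" visible.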
Basically, this view should have one column for each value of the MEASURE_ID column of the previous table, and the value in each of those columns should be the corresponding value from the PERCENTAGE column.
Table 1
Name  Item     Date
Jon   Apples   06/11/2013 00:30:00 hrs
Sam   Oranges
Nish  Apples

Table 2 - Net count
Name  Item     Count
Nish  Apples   10
Nish  Oranges  17
Nish  Banana
Sam   Apples   10
Sam   Oranges  1
Sam   Bananas  1
Jon   Apples   8
I need to create a job that checks Table 1 for new records added after the last run and then updates the counts in Table 2 accordingly. How can I achieve this using PL/SQL or something similar?
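A rough sketch of the incremental step, assuming Table 1 is called T1, Table 2 is T2, and the time of the last run is available somewhere (shown here as the bind variable :last_run); a MERGE can increment existing counts or add new ones:

    -- Sketch: T1/T2 names, column names, and the :last_run tracking are assumptions.
    MERGE INTO t2
    USING (
        SELECT name, item, COUNT(*) AS new_cnt
        FROM   t1
        WHERE  item_date > :last_run          -- rows added since the previous run
        GROUP  BY name, item
    ) src
    ON (t2.name = src.name AND t2.item = src.item)
    WHEN MATCHED THEN
        UPDATE SET t2.item_count = NVL(t2.item_count, 0) + src.new_cnt
    WHEN NOT MATCHED THEN
        INSERT (name, item, item_count)
        VALUES (src.name, src.item, src.new_cnt);

The statement could then be wrapped in a procedure and scheduled with DBMS_SCHEDULER.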
I have two tables, let's say TAB_A and TAB_B. I altered table B to include a new column from table A, and I wrote the following MERGE statement to merge the data:
MERGE INTO TAB_A
USING TAB_B
ON (TAB_A.SECURITYPERSONKEY = TAB_B.SECURITYPERSONKEY)
WHEN MATCHED THEN
    UPDATE SET TAB_A.PPYACCOUNT = TAB_B.PPYACCOUNT;
I know INSERT is for inserting new records, and UPDATE, to my knowledge, modifies currently existing records. MERGE is one I had rarely used until this particular scenario. The code works perfectly fine, but I was wondering how I could write it as an UPDATE statement. Or, in this scenario, should I even be using an UPDATE statement?
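For comparison, a sketch of the same matched-rows-only logic written as a correlated UPDATE; the EXISTS clause is needed so rows in TAB_A without a match in TAB_B are left alone rather than set to NULL:

    -- Sketch: equivalent of the WHEN MATCHED branch of the MERGE above.
    UPDATE tab_a a
    SET    a.ppyaccount = (SELECT b.ppyaccount
                           FROM   tab_b b
                           WHERE  b.securitypersonkey = a.securitypersonkey)
    WHERE  EXISTS (SELECT 1
                   FROM   tab_b b
                   WHERE  b.securitypersonkey = a.securitypersonkey);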
I have more than 100 records in CSV format. I have to import these records into a table that already contains data, and I have to apply several updates at once based on a condition: for example, if FIELD1 is '1' then set FIELD2 to 'A0001', and if FIELD1 is '5' then set FIELD2 to 'A0007'. The values are not in any order. Is this possible?
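One possible way to apply several such mappings in a single statement is a CASE expression in the UPDATE; the table name and the exact mapping list below are assumptions based on the example values:

    -- Sketch: extend the CASE with one WHEN per mapping taken from the CSV.
    UPDATE my_table
    SET    field2 = CASE field1
                        WHEN '1' THEN 'A0001'
                        WHEN '5' THEN 'A0007'
                        ELSE field2            -- leave other rows unchanged
                    END
    WHERE  field1 IN ('1', '5');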
I have data in this format in my column: 1234~2345~3456~4567.
I need a query that splits the data in the column on the delimiter '~', so that I can pick out the value after the second occurrence of the delimiter.
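A sketch of one way to pull out the third piece (the value after the second '~') with REGEXP_SUBSTR; with the sample value above this returns 3456:

    -- Sketch: [^~]+ matches a run of characters that are not '~'; 3 asks for the third such run.
    SELECT REGEXP_SUBSTR('1234~2345~3456~4567', '[^~]+', 1, 3) AS third_piece
    FROM   dual;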
I have a NOTES column with a maximum of 2000 characters, and I want to output the string in pieces of about 35 characters (inserting a break tag roughly every 30 to 35 characters). For example, the first piece of output should be "hi hello how are you doing out there". Similarly, I need to calculate the string length and split it 35 + 35 + 35, and so on.
This is what I tried:
select substr(note, 1, instr(note, ' ', 35)) || ' ' ||
       substr(note, instr(note, ' ', 35), instr(note, ' ', 35)) notes
from   test
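As an alternative, a sketch that splits one NOTE value into fixed 35-character chunks with a row generator; note it cuts at exact 35-character boundaries rather than at word boundaries, and it takes the value as a bind variable rather than reading the TEST table directly:

    -- Sketch: one output row per 35-character chunk of :note.
    SELECT LEVEL AS chunk_no,
           SUBSTR(:note, (LEVEL - 1) * 35 + 1, 35) AS chunk
    FROM   dual
    CONNECT BY LEVEL <= CEIL(LENGTH(:note) / 35);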
We have records such as SQL, PL/SQL, Reports, Forms, OAF, etc. in a table, and we want to capture a rating for each of these criteria. So we want a form to be displayed dynamically.
Insert into temp_a values ('1','002.0 AND 002.9');
Insert into temp_a values ('2','729.90 AND 079.99 AND 002.9');
Output :
1  002.0
1  002.9
2  729.90
2  079.99
2  002.9
So, once we get this output, it needs to be joined to another table. I did a Google search, but most of the examples return collections/arrays as output, and I am not sure how to join the collection with the table.
create or replace function splits (
    p_list varchar2,
    p_del  varchar2
) return split_tbl pipelined
is
    l_idx pls_integer;
[code].......
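A sketch of how a pipelined function like this can be used directly in SQL, both to split each TEMP_A row (using ' AND ' as the delimiter) and to join the pieces to another table; the OTHER_TABLE name and its columns, and the name COL2 for the delimited column, are assumptions:

    -- Sketch: split each TEMP_A row, then join the pieces to OTHER_TABLE.
    SELECT a.id,
           TRIM(s.column_value) AS code,
           o.description                         -- hypothetical column on OTHER_TABLE
    FROM   temp_a a,
           TABLE(splits(a.col2, ' AND ')) s,     -- COL2 = the delimited column (assumed name)
           other_table o
    WHERE  o.code = TRIM(s.column_value);

COLUMN_VALUE is the pseudocolumn Oracle exposes when the collection element is a scalar type.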
I have a requirement in which an amount needs to be split across multiple periods.
For example, if the amount is 3000 and the start and end dates are 01-jan-2013 and 31-mar-2013, then the output of the query should be:
Name  Start Period  End Period  Jan-13  Feb-13  Mar-13
-------------------------------------------------------
XXX   01-JAN-13     31-MAR-13   1000    1000    1000
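As a starting point, a sketch that generates one row per month in the range and divides the amount evenly; the bind variables :start_date, :end_date and :amount stand in for the real inputs, and the result could then be pivoted into the one-row layout shown above:

    -- Sketch: one row per month between :start_date and :end_date, amount split evenly.
    WITH months AS (
        SELECT ADD_MONTHS(TRUNC(:start_date, 'MM'), LEVEL - 1) AS period_start
        FROM   dual
        CONNECT BY ADD_MONTHS(TRUNC(:start_date, 'MM'), LEVEL - 1) <= TRUNC(:end_date, 'MM')
    )
    SELECT TO_CHAR(period_start, 'Mon-YY')                    AS period,
           ROUND(:amount / (SELECT COUNT(*) FROM months), 2)  AS split_amount
    FROM   months;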
I used Region and Processed By to search the report, which appears as shown above. Then I use the Choose Auditors column to select my auditor and copy-paste it into the report under the To Be Audited By column. Is there a way to automate this process? I am using a tabular form in APEX. My main aim is to assign auditors based on Region, where the auditor is not equal to Processed By.
I have created a function that splits a comma-separated string and gives the output in tabular form. Here is the function.
Here I have used CLOB because my input string will be huge (greater than the maximum limit of VARCHAR2).
CREATE OR REPLACE TYPE SPLIT_TBL_CLOB AS TABLE OF CLOB;

CREATE OR REPLACE FUNCTION CSVTOSTRING_CLOB (
    P_LIST CLOB,
    P_DEL  VARCHAR2 := ','
) RETURN SPLIT_TBL_CLOB PIPELINED
[code]....
But here I am facing 2 problems.
1. The function is not accepting a large string, and I am getting an error.
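Since the body of CSVTOSTRING_CLOB is not shown, here is only a hedged sketch of a CLOB-based splitter, reusing the same name and type for illustration, that stays in DBMS_LOB calls throughout and so avoids implicit CLOB-to-VARCHAR2 conversions that fail on large inputs. One common cause of this kind of error is building the input as a VARCHAR2 literal or concatenation before it ever reaches the CLOB parameter, which is worth checking on the calling side:

    -- Sketch: pipelined splitter that works directly on the CLOB with DBMS_LOB.
    CREATE OR REPLACE FUNCTION csvtostring_clob (
        p_list CLOB,
        p_del  VARCHAR2 := ','
    ) RETURN split_tbl_clob PIPELINED
    IS
        l_start PLS_INTEGER := 1;
        l_pos   PLS_INTEGER;
        l_len   PLS_INTEGER := DBMS_LOB.GETLENGTH(p_list);
    BEGIN
        LOOP
            l_pos := DBMS_LOB.INSTR(p_list, p_del, l_start, 1);
            IF l_pos = 0 OR l_pos IS NULL THEN
                -- last (or only) element
                IF l_len >= l_start THEN
                    PIPE ROW (DBMS_LOB.SUBSTR(p_list, l_len - l_start + 1, l_start));
                END IF;
                EXIT;
            END IF;
            PIPE ROW (DBMS_LOB.SUBSTR(p_list, l_pos - l_start, l_start));
            l_start := l_pos + LENGTH(p_del);
        END LOOP;
        RETURN;
    END;
    /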
I have a table called TEST; the script for the table is given below.
create table TEST ( col1 Number );
[Code]...
For these values, the query is not returning the values 3 and 4. So I want a generic query that gets this result. I am working on it but am not able to come up with the proper query.
In the EMPLOYEES table, the EMPLOYEE_NAME column data is as follows:
Aaron, Mrs. Jamie (Jamie)
Aaron, Mrs. Jenette (Jenette)
Abbott, Ms. Rachel (Rachel)
Breton, Mr. Jean
Britz, Mrs. Sarie (Sarie)

Now, I want to display the employee name like "Mrs. Jamie" (without the surname and without the text in brackets).
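A sketch of one way to strip the surname and the bracketed part with a regular expression, assuming the format is always 'Surname, Title First (Nick)' with the bracketed part optional, as in the sample above:

    -- Sketch: remove the leading 'Surname,' and any '(...)', then trim the leftovers.
    SELECT employee_name,
           TRIM(REGEXP_REPLACE(employee_name, '^[^,]+,|\(.*\)', '')) AS display_name
    FROM   employees;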
We are trying to insert records from a SELECT query into a temporary table, and some of the records are missing from the temporary table. The SELECT statement has multiple joins and UNION ALLs, which makes it a somewhat complex query. In simple terms, the script contains two parts: the first part is the insert into the temporary table, and the second part is the SELECT query with multiple joins, inline subqueries, unions, GROUP BY clauses and conditions. For example, if we execute the SELECT statement alone it returns roughly 60000 rows, but after inserting into the temp table, the count in the temp table is around 42000. Why the difference?
It is a simple bulk insert: INSERT INTO temp_table SELECT * FROM xxx. Also, there is no commit in between. The problem is that not all of the records returned by the SELECT statement are inserted into the temp table; some records are missing.
We also made another observation: it only happens on the second execution, not on the first run. We suspected some kind of caching problem, but we do not really believe that either, so we are left wondering. We tested in TOAD as well, and at times it happens there too. In the application JAR, after the "insert into temp select * from xxx" we take (i) the record count of the temp table and (ii) the record count of "select * from xxx" separately, but the two do not match; they only match the first time.
When I execute the query, it returns the data (approximately 40,000 rows) in 1 minute. But when I try to insert this data into another table (or create a table from this data), it takes about 2 hours.
I tried using a materialized view, but it is the same again: the refresh takes 2 hours. Basically, what I am trying to do is use the data from the above query to update values in another table, and whatever approach I try, it takes 2 hours.
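For what it's worth, a sketch of the two load patterns usually tried in this situation, with the slow query treated as a view or inline view called SRC_QUERY and the target called TARGET_T (both names, and the join/update columns, are placeholders): the first is a direct-path insert, the second pushes the same rows into the existing table as an update via MERGE.

    -- Sketch 1: direct-path (append) insert of the query's rows.
    INSERT /*+ APPEND */ INTO target_t
    SELECT * FROM src_query;

    -- Sketch 2: use the query's rows to update matching rows in TARGET_T.
    MERGE INTO target_t t
    USING (SELECT * FROM src_query) s
    ON (t.pk_col = s.pk_col)                -- join key is an assumption
    WHEN MATCHED THEN
        UPDATE SET t.some_col = s.some_col; -- columns are assumptions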
I don't understand why the custid is the same for each customer, and why it's selecting every customer and not just those with more than 150 gallons ordered.
For this one use the oil tables that you set up and use a subquery. Select the minimum average fall use from the house table. Then show all customers whose number of gallons delivered times two is greater than the minimum.
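A sketch of the shape the assignment seems to be asking for; all table and column names here (CUSTOMER, HOUSE, AVG_FALL_USE, GALLONS_DELIVERED) are guesses based on the wording of the exercise and should be replaced with the real oil-table names:

    -- Sketch: customers whose gallons delivered * 2 exceeds the minimum average fall use.
    SELECT c.custid,
           c.cust_name,
           c.gallons_delivered
    FROM   customer c
    WHERE  c.gallons_delivered * 2 >
           (SELECT MIN(h.avg_fall_use) FROM house h);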
ID  NAME  CRT_DTE
1   AB    03/05/1992
2   EF    15/04/1995
3   CD    20/08/1995
4   GH    01/01/1999
5   UV    08/07/2001
[code]....
I want a query that splits the total time period (from the minimum CRT_DTE to the maximum CRT_DTE) into year ranges. For example, for a range of 5 years, I need to get results like the ones below.
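A sketch of generating the 5-year ranges from the min/max dates with a row generator; MY_TABLE is a placeholder for the table shown above:

    -- Sketch: one row per 5-year bucket between MIN(crt_dte) and MAX(crt_dte).
    WITH bounds AS (
        SELECT MIN(crt_dte) AS min_dt, MAX(crt_dte) AS max_dt
        FROM   my_table
    )
    SELECT ADD_MONTHS(min_dt, (LEVEL - 1) * 60)               AS range_start,
           LEAST(ADD_MONTHS(min_dt, LEVEL * 60) - 1, max_dt)  AS range_end
    FROM   bounds
    CONNECT BY ADD_MONTHS(min_dt, (LEVEL - 1) * 60) <= max_dt;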
I have a procedure that successfully creates an Oracle external table and populates it with the contents of a file. This works fine until one of the fields is, say, a VARCHAR2(2) and the file contains a 5-character value for it. When this happens the record in question does not appear in the external table (and rightly so), but I need a way to work out whether there is a discrepancy between the number of records in the file and the number of records that actually make it into the table, so I can inform the user that there is a problem.
I have attached the code that creates the external table and populates it.
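One possible way to detect this, sketched below: have the external table write rejected rows to a bad file via its access parameters, then count the lines in the bad file (for example through a second external table over it) and compare. The directory, file, and table names are placeholders:

    -- Sketch: capture rejects in a bad file instead of silently skipping them.
    CREATE TABLE ext_data (
        field1 VARCHAR2(2),
        field2 VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
        TYPE ORACLE_LOADER
        DEFAULT DIRECTORY data_dir
        ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            BADFILE data_dir:'ext_data.bad'     -- rejected records land here
            LOGFILE data_dir:'ext_data.log'
            FIELDS TERMINATED BY ','
            MISSING FIELD VALUES ARE NULL
        )
        LOCATION ('ext_data.csv')
    )
    REJECT LIMIT UNLIMITED;

The row count of the bad file plus SELECT COUNT(*) FROM ext_data should then account for every record in the source file, so any mismatch can be reported to the user.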
Now my problem is that we should fetch the data based on the rules below:
If an OID contains 2 IOIDs with a NEW and a DISCO status attached, then fetch both records. If an OID has only one of these statuses, then ignore it. If an OID has neither of the two statuses, then ignore it.
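A sketch of one way to express those rules, assuming a table (called ORDERS_T here, a placeholder) with OID, IOID and STATUS columns: keep only the OIDs that have both statuses, then return their NEW and DISCO rows.

    -- Sketch: OIDs that have both a NEW and a DISCO row; all other OIDs are ignored.
    SELECT o.*
    FROM   orders_t o
    WHERE  o.status IN ('NEW', 'DISCO')
    AND    o.oid IN (
               SELECT oid
               FROM   orders_t
               WHERE  status IN ('NEW', 'DISCO')
               GROUP  BY oid
               HAVING COUNT(DISTINCT status) = 2
           );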