Now, if there is more than one row with the same email, the row with the latest edit date should be treated as the master row and updated to fill any missing fields using the values from the other rows (if a field is present in more than one row, the value from the row with the next latest edit date is used), and the archived status of all rows with the same email except this master row must be set to 1.
The create_date must be set to the minimum of all the create_date values of the rows with the same email. The create table statement is as follows:
CREATE TABLE student
( Id          NUMBER PRIMARY KEY,
  first_name  VARCHAR2(30) NOT NULL,
  last_name   VARCHAR2(30) NOT NULL,
  email       VARCHAR2(30) NOT NULL,
  contact     NUMBER,
  adress1     VARCHAR(30),
  adress2     VARCHAR(30),
  city        VARCHAR(30),
  edit_date   DATE,
  create_date DATE,
  archived    CHAR(1)
)
Sample insert statements would be: insert into student values
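As a rough sketch (not the original poster's code) of one possible in-place update, assuming "missing" means NULL and that Id identifies each physical row; only two of the optional columns are shown, and the rest would follow the same pattern:

MERGE INTO student s
USING (
  SELECT id,
         -- latest non-NULL value per email; the explicit window frame is needed
         -- because the default frame stops at the current row
         FIRST_VALUE(contact IGNORE NULLS) OVER (PARTITION BY email ORDER BY edit_date DESC
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS best_contact,
         FIRST_VALUE(city IGNORE NULLS) OVER (PARTITION BY email ORDER BY edit_date DESC
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS best_city,
         MIN(create_date) OVER (PARTITION BY email) AS min_create,
         ROW_NUMBER() OVER (PARTITION BY email ORDER BY edit_date DESC) AS rn
  FROM   student
) d
ON (s.id = d.id)
WHEN MATCHED THEN UPDATE SET
  s.contact     = CASE WHEN d.rn = 1 THEN d.best_contact ELSE s.contact     END,
  s.city        = CASE WHEN d.rn = 1 THEN d.best_city    ELSE s.city        END,
  s.create_date = CASE WHEN d.rn = 1 THEN d.min_create   ELSE s.create_date END,
  s.archived    = CASE WHEN d.rn = 1 THEN s.archived     ELSE '1'           END;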
Requirement: merging two schemas into one. I have a client in two different locations. Initially we set up the application with two different database instances for them, for smooth operation, as there was no connectivity between the branches. Now they are bringing both branches together as one organisation. My application's database table structure is the same in both places.
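Without the actual tables, only a very rough sketch of the consolidation is possible. Assuming a database link (the name branch2_link and the table some_table below are placeholders) from the surviving instance to the other, with colliding primary keys handled separately:

-- placeholder names: some_table, branch2_link
INSERT INTO some_table
SELECT *
FROM   some_table@branch2_link t2
WHERE  NOT EXISTS (SELECT 1 FROM some_table t1 WHERE t1.id = t2.id);
-- rows whose primary keys collide would need remapping (e.g. from a sequence) before copying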
I understand you can do this using cursors, but I need it in plain SQL, so that I can make it a correlated sub-query against another table using the id column.
Here's the original table:
id subid text
-- ----- ------
 1     1 red
 1     2 blue
 1     3 green
 2     1 yellow
 2     2 black
 2     3 orange
result should be:
id text
-- -------------------
 1 red,blue,green
 2 yellow,black,orange
SQL to Create and populate the table:
CREATE TABLE testStringJoin(ID number, subid number, text varchar2(50));
INSERT INTO testStringJoin values(1,1,'red');
INSERT INTO testStringJoin values(1,2,'blue');
INSERT INTO testStringJoin values(1,3,'green');
INSERT INTO testStringJoin values(2,1,'yellow');
INSERT INTO testStringJoin values(2,2,'black');
INSERT INTO testStringJoin values(2,3,'orange');
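For what it's worth, on Oracle 11gR2 and later a plain-SQL way to get that result (no cursors) is LISTAGG; the same aggregate can also sit inside a correlated scalar sub-query keyed on id:

SELECT id,
       LISTAGG(text, ',') WITHIN GROUP (ORDER BY subid) AS text
FROM   testStringJoin
GROUP  BY id;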
Is there a way to drop my subpartitions while keeping my data, moving it into the partitions? In fact, I want to remove my subpartitions and keep the table's partitioning. I have already removed my subpartition template, but I don't want to do an insert-as-select into a new table that is partitioned (without subpartitions).
ALTER TABLE myTable SET SUBPARTITION TEMPLATE ();
drop table test
/
create table test ( lib varchar2(100) )
/
insert into test values ('111/aaa/bbb/ccc');
insert into test values ('222/aaa/bbb/ccc');
insert into test values ('333+444/aaa/bbb/ccc');
insert into test values ('333/aaa/bbb/ccc');
insert into test values ('222+333+444/aaa/bbb/ccc');
insert into test values ('222+333+444+555/aaa/bbb/ccc');
I need to transpose the following table, columns to rows and rows to columns. I'm not quite sure how to achieve this. I have the following table with a fixed number of columns and a dynamic number of rows, based on the date filter in the query.
MONTH_YEAR      RMS     RMS_OCC  TTL_RMS
--------------- ------- -------- --------
SEPTEMBER 2009  17790         0    17790
OCTOBER 2009    18383     12788    18347
NOVEMBER 2009   17790     14605    17762
and I need to display this as
COL1     SEPTEMBER 2009  OCTOBER 2009  NOVEMBER 2009
-------- --------------  ------------  -------------
RMS               17790         18383          17790
RMS_OCC               0         12788          14605
TTL_RMS           17790         18347          17762
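One hedged sketch of that reshaping, assuming Oracle 11g or later and a made-up name (occupancy_data) for the source table or query; a truly dynamic list of months would need dynamic SQL (or PIVOT XML) instead of the hard-coded IN list:

SELECT *
FROM  ( SELECT month_year, col1, val
        FROM   occupancy_data   -- placeholder for the real source
        UNPIVOT (val FOR col1 IN (rms AS 'RMS', rms_occ AS 'RMS_OCC', ttl_rms AS 'TTL_RMS')) )
PIVOT ( MAX(val) FOR month_year IN ('SEPTEMBER 2009', 'OCTOBER 2009', 'NOVEMBER 2009') );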
We are using PL/SQL Release 11.2.0.2. I would like to pull a query with one attendance record per student per day. Our database sets up an AM and a PM period for all elementary students. If a student is absent in both periods (AM, PM), I count that as one day absent. The hard part is that I need to put the AM absence code and the PM absence code - which are stored as two records per student - into one row.
Below is the query I use, but it violates the key of the database, as the PK is studentid + attendance date. My query result turns out so that some students have a different attendance code in AM vs PM, and two records are returned for them.
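A minimal sketch of the one-row-per-student-per-day idea, with made-up table and column names (attendance holding student_id, att_date, period = 'AM'/'PM' and att_code), keeping only students who have an absence record in both periods:

SELECT student_id,
       att_date,
       MAX(CASE WHEN period = 'AM' THEN att_code END) AS am_code,
       MAX(CASE WHEN period = 'PM' THEN att_code END) AS pm_code
FROM   attendance
GROUP  BY student_id, att_date
HAVING COUNT(CASE WHEN period = 'AM' THEN 1 END) > 0
   AND COUNT(CASE WHEN period = 'PM' THEN 1 END) > 0;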
We have a large (millions of records) slowly changing dimension (SCD) type 2 table (see "Creating another dimension record"). We need to get several rows from this SCD for each key (AGREEMENT_ID) in a SQL query - to join to the facts table and get several data points for each agreement (at several different points in time) stored in the SCD. Here is the SCD table structure:
CREATE TABLE AGREEMENT
( "AGREEMENT_ID"    NUMBER(*,0) NOT NULL ENABLE,
  "ACTUAL_DATE"     DATE NOT NULL ENABLE,
  "ACTUAL_END_DATE" DATE NOT NULL ENABLE,
  "OPEN_DATE"       DATE NOT NULL ENABLE,
  "LIMIT"           NUMBER(23,8),
  --++ a lot of other fields not needed for this task ....
  CONSTRAINT "PK_MD_AGREEMENT" PRIMARY KEY ("AGREEMENT_ID", "ACTUAL_DATE") USING INDEX
)
The first, simple approach would be to join the facts to the SCD as many (N) times as the number of different points in time you need - resulting in N full table scans of the SCD:
select ...
from   fact, AGREEMENT agr1, AGREEMENT agr2, AGREEMENT agr3
where  fact.AGREEMENT_ID = agr1.AGREEMENT_ID
  and  agr1.open_date between agr1.actual_date and agr1.actual_end_date
  and  fact.AGREEMENT_ID = agr2.AGREEMENT_ID
  and  :dateBOP between agr2.actual_date and agr2.actual_end_date
  and  fact.AGREEMENT_ID = agr3.AGREEMENT_ID
  and  :dateEOP between agr3.actual_date and agr3.actual_end_date
The second approach: one full table scan of the SCD plus a GROUP BY:
select ...
from   fact,
       ( select AGREEMENT_ID,
                max(case when open_date between actual_date and actual_end_date then LIMIT end) LIMIT_At_Open_DATE,
                max(case when :dateBOP  between actual_date and actual_end_date then LIMIT end) LIMIT_At_BeginOfPeriod_DATE,
                max(case when :dateEOP  between actual_date and actual_end_date then LIMIT end) LIMIT_At_EndOfPeriod_DATE
         from   agreement
         -- ++ optionally a WHERE on those 3 dates, but possibly with no effect on a non-partitioned table?
         -- Or does the WHERE help simply by feeding MAX() less data (3 rows per agreement instead of the 4...1000 it sees without it)?
         group  by AGREEMENT_ID
       ) agr
where  fact.AGREEMENT_ID = agr.AGREEMENT_ID
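For reference, the optional WHERE mentioned in the comment would be a sketch like the following, placed between "from agreement" and "group by AGREEMENT_ID", so that MAX() only sees the row versions that can match one of the three dates:

         where  open_date between actual_date and actual_end_date
            or  :dateBOP  between actual_date and actual_end_date
            or  :dateEOP  between actual_date and actual_end_date

Whether it helps depends on how many of each agreement's versions it filters out; on a non-partitioned table the full table scan still reads every block either way, the WHERE only reduces the rows passed on to the GROUP BY.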
Simple question: why does the comparison operator ANY return FALSE if no rows are returned, and why does the operator ALL return TRUE if no rows are returned? I don't know whether this is some kind of language or mathematical convention, or just an Oracle rule.
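For what it's worth, this is the standard SQL vacuous-truth convention rather than an Oracle peculiarity: with an empty subquery result there is no row that satisfies the ANY predicate (so it is FALSE) and no row that violates the ALL predicate (so it is TRUE). A quick demonstration:

-- both subqueries return no rows
SELECT 'ANY over an empty set' AS result FROM dual
WHERE  1 > ANY (SELECT 2 FROM dual WHERE 1 = 0);   -- predicate is FALSE: no row comes back

SELECT 'ALL over an empty set' AS result FROM dual
WHERE  1 > ALL (SELECT 2 FROM dual WHERE 1 = 0);   -- predicate is TRUE: one row comes back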
There are at most 2 entries for a given row of A in B. The value of the type column in B determines whether the entry is a male or a female name. I want a select statement that will retrieve those 2 rows as one row, essentially like below. How is this done?
id  male_name  female_name
--  ---------  -----------
 1  paul       paula
The column names should appear as shown; if type is 0 it is a male name, and if it is 1 it is a female name. There will generally be 2 entries in B for 1 row of A.
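A sketch with assumed names (B having columns a_id, type and name; type 0 = male, 1 = female), collapsing the up-to-two B rows per A row:

SELECT a.id,
       MAX(CASE WHEN b.type = 0 THEN b.name END) AS male_name,
       MAX(CASE WHEN b.type = 1 THEN b.name END) AS female_name
FROM   a
JOIN   b ON b.a_id = a.id
GROUP  BY a.id;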
I would like to know how we can predict how many rows are fetched per second for a particular query, and which factors are responsible for this. How does the whole process of fetching records from the source happen? When a query is fired, how does it access the table and fetch the records, which factors influence this, and how can we predict how many rows can be fetched per second?
Input data:

Sec_SSC_ID  Column_name       As of date  Old value      New Value
----------  ----------------  ----------  -------------  ---------
IBM         Mat_dt            10/10/2010  1/1/2001       1/1/2002
IBM         Bid Market        10/10/2010  75             85
IBM         asset_nm          1/1/2011    International  IBM
MSFT        asset_nm          1/2/2011    Microsoft      Intel
MSFT        Bid Market price  1/1/2011    89             90
I tried searching Google and this site too, and found postings on the WM_CONCAT, STRAGG, concat_all and LISTAGG functions by Michel. I have experimented with these, but either the syntax is giving me a hard time or I just have not got the concept down.
Trying to get 2 rows into one. I have provided the create statements and the inserts for the data. Below I also show what my current SELECT returns and what is ideally required.
CREATE TABLE Person_Lang
( Person_ID       NUMBER NOT NULL,
  Language_ID     NUMBER NOT NULL,
  Contact_Name    VARCHAR2(255 CHAR),
  Main_Phone      VARCHAR2(255 CHAR),
  Secondary_Phone VARCHAR2(255 CHAR),
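Since the rest of the DDL and the sample data are not shown, only a hedged sketch is possible; assuming two known Language_ID values (1 and 2 here are made up) and that "two rows into one" means one row per person with per-language columns:

SELECT person_id,
       MAX(CASE WHEN language_id = 1 THEN contact_name END) AS contact_name_lang1,
       MAX(CASE WHEN language_id = 2 THEN contact_name END) AS contact_name_lang2,
       MAX(CASE WHEN language_id = 1 THEN main_phone   END) AS main_phone_lang1,
       MAX(CASE WHEN language_id = 2 THEN main_phone   END) AS main_phone_lang2
FROM   person_lang
GROUP  BY person_id;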
I want to sum different rows and get the sum in the next column.
For example, I have a fee column, and there are several different student fees in a specific month, for example January. I want to sum these fees against the month of Jan, as in the sketch below.
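A small sketch with made-up names (student_fee holding student_id, fee_month and fee): the GROUP BY form gives one total row per month, while the analytic form keeps the detail rows and adds the monthly total as an extra column:

-- one total row per month
SELECT fee_month, SUM(fee) AS total_fee
FROM   student_fee
GROUP  BY fee_month;

-- detail rows plus the monthly total in the next column
SELECT student_id, fee_month, fee,
       SUM(fee) OVER (PARTITION BY fee_month) AS month_total
FROM   student_fee;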
How can I get the previous row's computed value, and call a function to add a value to it, within a SELECT statement?
Consider the example table A1 having a column a with the values 1, NULL, NULL, NULL.
SELECT CASE WHEN a IS NULL THEN (prev_row_value+function_return_Value) ELSE a END as A from A1
And my result-set should be like
a
----------------------
1
1 + (return value of function)
prev_row_value + (return value of function)
prev_row_value + (return value of function)

The sample code above is what I tried, but it doesn't fulfill my criteria.
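One hedged sketch of a query that could produce that shape, assuming Oracle 11gR2+ (recursive subquery factoring), a deterministic placeholder function my_func, and that the rows of A1 have a usable ordering (ROWNUM is used below purely for illustration):

WITH ordered AS (
  SELECT a, ROWNUM AS rn FROM a1
),
calc (rn, result) AS (
  SELECT rn, a FROM ordered WHERE rn = 1            -- first row keeps its own value
  UNION ALL
  SELECT o.rn,
         CASE WHEN o.a IS NULL
              THEN c.result + my_func(o.rn)         -- my_func is a placeholder
              ELSE o.a END
  FROM   ordered o
  JOIN   calc    c ON o.rn = c.rn + 1               -- carry the previous computed value forward
)
SELECT result AS a
FROM   calc
ORDER  BY rn;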
I want to update the salary column of the emp table so that every value of the salary column is increased by 1000. Is it possible to do this with one statement only?
(Just FYI-
SQL> desc emp;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPNO                                     NOT NULL NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 MGR                                                NUMBER(4)
 HIREDATE                                           DATE
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)
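Yes - a single UPDATE with no WHERE clause touches every row:

UPDATE emp
SET    sal = sal + 1000;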
drop table test;
create table test
( my_env varchar2(8)
, id     int
)
/
alter table test add constraint pk_test primary key (my_env, id);
insert into test values ('A', 1);
insert into test values ('A', 2);
[Code]...
These rows are valued :
drop table valued_test;
create table valued_test
( my_env varchar2(8)
, id     int
, val    int
)
/
[Code].....
Now the rows of the table test can move from one situation to another. To model the movement from one situation to another, there is another table test_history
drop table test_history;
create table test_history
( my_env     varchar2(8)
, id         int
, my_env_new varchar2(8)
, id_new     int
)
/
[Code]...
So the row (A,1) moved to (A1,1). What I want to do is to replicate the valued row for (A,1) onto (A1,1).
If we issue select * from <some_query>, we will get this:
MY_ENV           ID        VAL
-------- ---------- ----------
A                 1        100
A                 3        200
B                 1        100
A1                1        100
We can issue this query to get it :
select my_env, id, val
from   valued_test
union
select my_env_new my_env, id_new, val
from   valued_test, test_history
[Code]...
MY_ENV           ID        VAL
-------- ---------- ----------
A                 1        100
A                 3        200
A1                1        100
B                 1        100
It works fine, but when we have a lot of data in test and valued_test, these queries become very slow. I think it is because of the union. So here are my questions:
1) Could we remove the union?
2) Which columns in test, valued_test and test_history should we index to make the result be returned faster?
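As a rough sketch of both points: if the two branches can never produce the same row, UNION ALL avoids the sort/de-duplication step that UNION forces, and the lookup columns used by the join are the natural index candidates. The join condition below is only assumed, since the original WHERE clause is in the elided code:

-- assumed: history rows are looked up by their old (my_env, id)
CREATE INDEX ix_test_history_old ON test_history (my_env, id);
CREATE INDEX ix_valued_test_pk   ON valued_test  (my_env, id);   -- unless this is already the primary key

SELECT my_env, id, val
FROM   valued_test
UNION ALL
SELECT h.my_env_new, h.id_new, v.val
FROM   valued_test  v
JOIN   test_history h ON h.my_env = v.my_env AND h.id = v.id;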