SQL & PL/SQL :: Financial Year Wise Grouping?
Feb 20, 2012. I need to query fiscal-year wise.
For example
1-Apr-2010 to 31-Mar-2011
I have the following records and I need to display them financial-year wise (i.e. April of year-1 to March of year). There are no records for months 9 and 10 (September and October); what I want is 0 to be displayed for those months in the table below.
s.no key id yyyymm ce1 ce2 ce3 t ihu pf mi pwd F
920110600000156431372010043424541500065457709539XXXXXXXXXXU
10201106000001565313720100534245415000009539XXXXXXXXXXU
11201106000001566313720100634245415000009539XXXXXXXXXXU
12201106000001567313720100738555415000009539XXXXXXXXXXU
13201106000001568313720100836445415000009539XXXXXXXXXXU
1420110600000157131372010110000009539XXXXXXXXXXU
15201106000001572313720101276881082200001888009539XXXXXXXXXXU
1620110600000157331372011010000009539XXXXXXXXXXU
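One common way to show the missing months as 0 is to generate the twelve fiscal months first and outer-join the data onto them. A minimal sketch, with the table name t and the measure columns ce1/ce2/ce3 as placeholders for the real names:

WITH fy_months AS (
  SELECT TO_NUMBER(TO_CHAR(ADD_MONTHS(DATE '2010-04-01', LEVEL - 1), 'YYYYMM')) AS yyyymm
  FROM   dual
  CONNECT BY LEVEL <= 12                    -- Apr-2010 .. Mar-2011
)
SELECT m.yyyymm,
       NVL(t.ce1, 0) AS ce1,                -- months with no row show 0
       NVL(t.ce2, 0) AS ce2,
       NVL(t.ce3, 0) AS ce3
FROM   fy_months m
LEFT   JOIN t ON t.yyyymm = m.yyyymm
ORDER  BY m.yyyymm;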
I have a form (table GL_PV); in this form I want to generate the voucher number financial-year wise.
I have Four columns
Year varchar2(4)
Month number(2)
Pv_no number(6)
Pv_date date
--- I use a pre-insert trigger ---
When inserting a record, Year is saved from pv_date as 'YYYY' (e.g. 2010),
Month is saved as 'MM' (e.g. 7), and pv_date I enter manually.
Now I want Pv_no to be generated automatically according to the financial year.
I have 2 financial years:
2009-2010 June-July
2010-2011 June-July
I want the PV_no for every financial year to be separate; in both financial years PV_no should start from 1. When I enter data in any financial year, it should first check MAX(PV_no) for that year and then generate the next PV_no as MAX(pv_no)+1.
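A minimal sketch of that pre-insert logic, assuming a July-to-June financial year (adjust the boundary month to whatever yours really is). It is written as a database before-insert trigger body using :new; in a Forms PRE-INSERT trigger the :new references become the GL_PV block items. Note that MAX()+1 is not safe under concurrent inserts, so a per-year sequence or an explicit lock is more robust in practice:

DECLARE
  v_fy_start DATE;
  v_fy_end   DATE;
BEGIN
  IF EXTRACT(MONTH FROM :new.pv_date) >= 7 THEN
    v_fy_start := ADD_MONTHS(TRUNC(:new.pv_date, 'YYYY'), 6);    -- 01-Jul of the same year
  ELSE
    v_fy_start := ADD_MONTHS(TRUNC(:new.pv_date, 'YYYY'), -6);   -- 01-Jul of the previous year
  END IF;
  v_fy_end := ADD_MONTHS(v_fy_start, 12);

  SELECT NVL(MAX(pv_no), 0) + 1
  INTO   :new.pv_no
  FROM   gl_pv
  WHERE  pv_date >= v_fy_start
  AND    pv_date <  v_fy_end;
END;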
How do I sort the data based on financial year? For example, the Indian fiscal year 01-Apr-2010 to 31-Mar-2011:
Apr2010 -1
May2010 -2
.
.
.
.
Feb2011-11
Mar2011-12
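A sketch of one way to sort (or number) dates fiscal-year wise, mapping Apr to 1 and Mar to 12 (the_date and your_table are placeholders):

SELECT the_date,
       MOD(EXTRACT(MONTH FROM the_date) + 8, 12) + 1 AS fiscal_month
FROM   your_table
ORDER  BY CASE WHEN EXTRACT(MONTH FROM the_date) >= 4
               THEN EXTRACT(YEAR FROM the_date)
               ELSE EXTRACT(YEAR FROM the_date) - 1
          END,                                          -- fiscal year
          MOD(EXTRACT(MONTH FROM the_date) + 8, 12) + 1; -- month within it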
I had given a query for quarter identification:
SELECT distinct
'Qtr'
|| CASE
WHEN PERIOD BETWEEN TO_DATE ('01-04-2010', 'DD-MM-YYYY')
AND ADD_MONTHS (TO_DATE ('01-04-2010', 'DD-MM-YYYY')-1, 3)
[code].....
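A more compact alternative to a long CASE chain, assuming PERIOD is a DATE column: the fiscal quarter can be derived arithmetically instead of being compared range by range.

SELECT period,
       'Qtr' || TO_CHAR(TRUNC(MOD(EXTRACT(MONTH FROM period) + 8, 12) / 3) + 1) AS fiscal_qtr
FROM   your_table;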
I have a requirement to get the week number from a particular date as per the Indian financial year (i.e. Apr-01 to Mar-31). I have tried the to_char(the_date_field,'w') and to_char(the_date_field,'iw') formats, but as far as I know those only follow the calendar/ISO standard.
I want week number 1 to start from April 01, not from Jan 01. For more explanation see below:
Date week_no
apr 01 to 07 1
apr 08 to 14 2
apr 15 to 21 3
.
.
.
.
.
Next year mar - 31 n
How do I do this? I am waiting for your reply.
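A sketch of one way to get it: week 1 starts on 01-Apr of whichever fiscal year the date falls in (your_table is a placeholder):

SELECT the_date_field,
       TRUNC((the_date_field
              - ADD_MONTHS(TRUNC(ADD_MONTHS(the_date_field, -3), 'YYYY'), 3)) / 7) + 1
         AS fy_week_no
FROM   your_table;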
My problem is that I need to get the data for every quarter of the financial year and also for every week of the financial year. For example, for financial year 2012-13, Apr 2012 to Jun 2012 would be Q1, Jul 2012 to Sep 2012 would be Q2 and so on; in total 8 quarters should come up to Apr 2013. In the same way, 1st Apr 2012 to 7th Apr 2012 would be week 1, 8th Apr to 15th Apr would be week 2 and so on. How do I write a query for this scenario in Oracle?
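A sketch of one shape such a query could take, with trans_date and amount as assumed column names. It derives the fiscal-year start once and both the quarter and the week from it; here it aggregates at week level with the quarter carried alongside, and grouping by the quarter expression alone (or using GROUPING SETS) gives the quarter totals:

WITH base AS (
  SELECT t.*,
         ADD_MONTHS(TRUNC(ADD_MONTHS(trans_date, -3), 'YYYY'), 3) AS fy_start
  FROM   your_table t
)
SELECT fy_start,
       TRUNC(MONTHS_BETWEEN(TRUNC(trans_date, 'MM'), fy_start) / 3) + 1 AS fy_quarter,
       TRUNC((trans_date - fy_start) / 7) + 1                           AS fy_week,
       SUM(amount)                                                      AS total_amount
FROM   base
GROUP  BY fy_start,
          TRUNC(MONTHS_BETWEEN(TRUNC(trans_date, 'MM'), fy_start) / 3) + 1,
          TRUNC((trans_date - fy_start) / 7) + 1;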
In an Oracle query, how can I find the day-wise counts for a year (for example, how many Sundays, Mondays, Tuesdays, Wednesdays, Thursdays, Fridays and Saturdays there are) in a given year (we can give the start day and the end day of the year)?
example
----------
jan sun-5 mon-4 tue-5 wed-5 thu-5 fri-4 sat-5
feb ------------do---------------------------------
like this for all 12 months in a single query.
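A sketch: generate every day between the two dates with a row generator, then count each weekday per month. The 2012 dates and the English day abbreviations are assumptions to adjust (or replace with bind variables):

WITH days AS (
  SELECT DATE '2012-01-01' + LEVEL - 1 AS d
  FROM   dual
  CONNECT BY DATE '2012-01-01' + LEVEL - 1 <= DATE '2012-12-31'
)
SELECT TO_CHAR(d, 'MON') AS mon,
       COUNT(CASE WHEN TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'SUN' THEN 1 END) AS sun,
       COUNT(CASE WHEN TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'MON' THEN 1 END) AS mon_cnt,
       COUNT(CASE WHEN TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'TUE' THEN 1 END) AS tue,
       COUNT(CASE WHEN TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'WED' THEN 1 END) AS wed,
       COUNT(CASE WHEN TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'THU' THEN 1 END) AS thu,
       COUNT(CASE WHEN TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'FRI' THEN 1 END) AS fri,
       COUNT(CASE WHEN TO_CHAR(d, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') = 'SAT' THEN 1 END) AS sat
FROM   days
GROUP  BY TO_CHAR(d, 'MON'), TO_CHAR(d, 'MM')
ORDER  BY TO_CHAR(d, 'MM');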
I need a PL/SQL function to calculate the financial month from a date. For example, if I enter the input date '21-sep-2012' the function should return 6 as the month value, i.e. it should count from April to September of that year.
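A minimal sketch of such a function, mapping April to 1 and March to 12:

CREATE OR REPLACE FUNCTION fiscal_month (p_date IN DATE)
  RETURN NUMBER
IS
BEGIN
  RETURN MOD(EXTRACT(MONTH FROM p_date) + 8, 12) + 1;
END fiscal_month;
/
-- SELECT fiscal_month(DATE '2012-09-21') FROM dual;   -- returns 6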
I have a requirement to get the records group-wise. E.g. for each department, I need to get the employee details as a comma-separated list. That means the output must have the department name in the first column and the second column must contain all the employees in that particular department (comma separated).
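On 11gR2 and later, LISTAGG does this directly. A sketch against HR-style departments/employees tables (adjust names to your schema); on older versions a collect-based or hierarchical-query workaround is needed instead:

SELECT d.department_name,
       LISTAGG(e.first_name || ' ' || e.last_name, ',')
         WITHIN GROUP (ORDER BY e.last_name) AS employees
FROM   departments d
JOIN   employees   e ON e.department_id = d.department_id
GROUP  BY d.department_name;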
In the data below, a container is moving from one city to another. 1, 2, 3 can be any number which I want to generate and use as keys to group the cities. E.g. AUH, JEB, CIW belong to the same key = 2; SIN, IKT belong to a new group 4. Where the difference between the Seq# values is greater than 1 (e.g. between S8W and AUH), a new group starts.
Container#    City    Seq    I want this
-------------------------------------------
Container1    S8W     525    1
Container1    S8W     526    1
Container1    AUH     536    2
Container1    AUH     537    2
Container1    JEB     538    2
Container1    JEB     539    2
[code]....
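A sketch of the "start a new group when the gap in Seq exceeds 1" logic, using a running sum of start-of-group flags (the table name containers and the exact column names are assumed from the sample):

SELECT container, city, seq,
       SUM(new_grp) OVER (PARTITION BY container ORDER BY seq) AS grp_key
FROM (
  SELECT t.container, t.city, t.seq,
         CASE
           WHEN t.seq - LAG(t.seq) OVER (PARTITION BY t.container ORDER BY t.seq) > 1
             OR LAG(t.seq) OVER (PARTITION BY t.container ORDER BY t.seq) IS NULL
           THEN 1 ELSE 0
         END AS new_grp
  FROM   containers t
)
ORDER BY container, seq;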
I have a table like this
Name    Hours    Date
a       8        10/11/2011
a       5        10/12/2011
a       6        10/13/2011
a       7        10/14/2011
a       7        10/15/2011
a       8        10/16/2011
a       7        10/17/2011
a       8        10/18/2011
a       8        10/19/2011
a       7        10/20/2011
a       7        10/21/2011
If I want the sum of hours for each 3-day range, how should I do it?
E.g. say
name hrs startdate enddate
a 19 10/11/2011 10/13/2011
a 22 10/14/2011 10/16/2011
a 23 10/17/2011 10/19/2011
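A sketch: bucket each row into 3-day windows counted from each name's first date, then sum the hours per bucket (your_table and the_date are placeholder names):

SELECT name,
       SUM(hours)    AS hrs,
       MIN(the_date) AS startdate,
       MAX(the_date) AS enddate
FROM (
  SELECT t.*,
         TRUNC((the_date - MIN(the_date) OVER (PARTITION BY name)) / 3) AS bucket
  FROM   your_table t
)
GROUP  BY name, bucket
ORDER  BY name, MIN(the_date);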
How can I get the grp_id for each unique combination of manager and department? grp_id should be created in ascending order of manager_id.
In this example manager_id 100 is minimum, so it should be grp 1 and all the employees with that manager_id should be in grp_id 1, for manager_id 114 grp_id should be 2.
If, there is manager_id 117, it should create grp_id 3.
To get grp_num ,I can use row_number() over (partition by department_id,manager_id order by employee_id) grp_num
I am looking for an update statement for this issue.
Oracle version : Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
CREATE TABLE HR.EMPLOYEES_2
(
EMPLOYEE_ID NUMBER(6),
FIRST_NAME VARCHAR2(20 BYTE),
LAST_NAME VARCHAR2(25 BYTE),
EMAIL VARCHAR2(25 BYTE),
PHONE_NUMBER VARCHAR2(20 BYTE),
[Code]....
Expected result
----------------
EMPLOYEE_ID SALARY MANAGER_ID DEPARTMENT_ID GRP_NUM GRP_ID
114 11000 100 30 1 1
115 3100 100 30 2 1
116 2900 114 30 1 2
117 2800 114 30 2 2
118 2600 114 30 3 2
119 2500 114 30 4 2
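A sketch of one way to do it: GRP_ID is just a dense rank over MANAGER_ID, so a new manager_id such as 117 automatically gets the next group number; the update is done with MERGE. This assumes GRP_ID and GRP_NUM columns exist on the table (they appear in the expected result but not in the truncated DDL):

MERGE INTO hr.employees_2 e
USING (
  SELECT employee_id,
         DENSE_RANK() OVER (ORDER BY manager_id)                                         AS grp_id,
         ROW_NUMBER() OVER (PARTITION BY department_id, manager_id ORDER BY employee_id) AS grp_num
  FROM   hr.employees_2
) s
ON (e.employee_id = s.employee_id)
WHEN MATCHED THEN UPDATE SET e.grp_id = s.grp_id, e.grp_num = s.grp_num;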
I am trying to break down the balance_date to display the following groupings:
7:00-17:30 CDT
18:00-4:30 CDT
I currently have the query setup to display by day instead of these time ranges. I would like the output to read
19 May Day
19 May Night
20 May Day
20 May Night
I am fairly new to this, but how would I go about making this change?
SELECT
TO_CHAR(TRUNC(balance_date,'D') + 4,'YYYY') || '-' ||
TO_CHAR(TRUNC(balance_date,'D') + 4,'IW') as year_wk,
TO_CHAR(TRUNC(balance_date,'D') + 4,'IW')as wk,
[Code] ........
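A sketch of the day/night split, with the cut-over times hard-coded from the post; rows stamped up to 04:30 are pushed back onto the previous day's "Night" bucket (adjust if that is not the intent), and your_table is a placeholder:

SELECT TO_CHAR(shift_day, 'DD Mon') || ' ' || shift_name AS grouping_label,
       COUNT(*)                                          AS rows_in_group
FROM (
  SELECT balance_date,
         CASE
           WHEN TO_CHAR(balance_date, 'HH24:MI') BETWEEN '07:00' AND '17:30' THEN 'Day'
           ELSE 'Night'
         END AS shift_name,
         CASE
           WHEN TO_CHAR(balance_date, 'HH24:MI') <= '04:30'
           THEN TRUNC(balance_date) - 1       -- early-morning rows belong to the previous night
           ELSE TRUNC(balance_date)
         END AS shift_day
  FROM   your_table
)
GROUP  BY shift_day, shift_name
ORDER  BY shift_day, shift_name;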
I am getting output like this in Oracle 9i:
S.No Column1 Column2 Column3 DateCol
1 A B C 10/2001
2 A B C 03/2001
3 B B C 02/2001
4 B B C 01/2001
5 A B C 03/2000
But my real scenario is that I need to populate the output in the structure below:
S.No Column1 Column2 Column3 DateCol
1 A B C 10/2001
A B C 03/2001
2 B B C 02/2001
B B C 01/2001
3 A B C 03/2000
I don't know how to form the query to retrieve that structure.
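A sketch, assuming the groups are consecutive runs of the same Column1 when the rows are ordered by DateCol descending (and that DateCol is a real DATE), with S.No printed only on the first row of each run:

SELECT CASE WHEN ROW_NUMBER() OVER (PARTITION BY grp ORDER BY datecol DESC) = 1
            THEN TO_CHAR(grp) END AS s_no,
       column1, column2, column3, datecol
FROM (
  SELECT x.*,
         SUM(chg) OVER (ORDER BY datecol DESC) AS grp
  FROM (
    SELECT t.*,
           CASE WHEN column1 = LAG(column1) OVER (ORDER BY datecol DESC)
                THEN 0 ELSE 1 END AS chg
    FROM   your_table t
  ) x
)
ORDER BY datecol DESC;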
SELECT
pas_code,
pas_profile,
count(sutp_id),
sum(sutp_price),
[code]...
And the problem is that when I use sutp_price_proc and pbk_price in the grouping, it splits my results across those rows. If I delete them from the grouping, SQL gives me an error about not a single-group group function in line 1.
pas_code   pas_profile   sutp_id   sutp_price   x
2664       good stuff    3         100          69    < because pbk_price is like 67 from that period
2664       good stuff    3         100          71    < because pbk_price is like 50 from the other period
How can I get all results in a single line like:
pas_code   pas_profile   sutp_id   sutp_price   x
2664       good stuff    6         200          140
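Without the full query it is hard to be exact, but the usual fix is to resolve the period-dependent pbk_price per detail row in an inline view (or scalar subquery) and then SUM it in the outer query rather than grouping by it. A sketch of the shape only, with the lookup table and join columns entirely assumed:

SELECT pas_code,
       pas_profile,
       COUNT(sutp_id)  AS sutp_cnt,
       SUM(sutp_price) AS sutp_price,
       SUM(pbk_price)  AS x              -- aggregated, so it no longer splits the rows
FROM (
       SELECT s.pas_code, s.pas_profile, s.sutp_id, s.sutp_price,
              (SELECT p.pbk_price
               FROM   price_book p
               WHERE  p.valid_from <= s.sutp_date
               AND    p.valid_to   >= s.sutp_date) AS pbk_price   -- placeholder lookup
       FROM   sutp s
     )
GROUP  BY pas_code, pas_profile;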
TASK_ID, TASK_STATUS, TASK_OWNER
================================
00001 , OPEN , ABC
00002 , OPEN , XYZ
00003 , WIP , ABC
00004 , CLOSED , XYZ
00005 , WIP , XYZ
00006 , CLOSED , XYZ
00007 , OPEN , XYZ
Output Required
Owner , Open , WIP, Closed
ABC 1 1 0
XYZ 2 1 2
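A sketch using conditional aggregation (the 11g PIVOT clause is an alternative); the table name tasks is assumed:

SELECT task_owner                                         AS owner,
       COUNT(CASE WHEN task_status = 'OPEN'   THEN 1 END) AS open_cnt,
       COUNT(CASE WHEN task_status = 'WIP'    THEN 1 END) AS wip_cnt,
       COUNT(CASE WHEN task_status = 'CLOSED' THEN 1 END) AS closed_cnt
FROM   tasks
GROUP  BY task_owner;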
I have this table,
Create table TBL_OK_HIST
(
DATE_KEY NUMBER,
A_N VARCHAR2(22 BYTE),
R_DUR VARCHAR2(8 BYTE),
CH_DUR VARCHAR2(8 BYTE),
REV VARCHAR2(20 BYTE)
)
insert into TBL_OK_HIST
values (20101010,123456768,5,20,2);
insert into TBL_OK_HIST
values (20101010,123496568,15,20,2);
insert into TBL_OK_HIST
values (20101012,122235768,25,25,3);
[Code] ......
Thus, applying the following would yield:
Select * from TBL_OK_HIST
DATE_KEY    A_N          R_DUR   CH_DUR   REV
20101010    123456768    5       20       2
20101010    123496568    15      20       2
20101012    122235768    25      25       3
20101011    234567681    9       20       2
20101012    345676812    52      52       7
20101013    234567681    36      36       5
20101010    567681234    11      20       2
20101011    346812567    17      20       2
20101010    681234567    55      55       9
I would like to generate the following results:
range_start_rdur range_end_rdur no_of_an sum_of_rdur sum_of_chdur sum_of_rev
1 5 1 5 20 2
6 10 1 9 20 2
11 15 2 26 40 4
16 20 1 17 20 2
21 25 1 25 25 3
26 30 0 0 0 0
31 35 0 0 0 0
36 40 1 36 36 5
41 45 0 0 0 0
46 50 0 0 0 0
51 55 2 107 107 16
I thought I would make use of the following query, but I am not getting the proper results when applying it to a real table with more than 20 million records:
SELECT trunc(R_DUR/6)*5+1 as range_start_rdur,
trunc(R_DUR/6)*5+5 range_end_rdur,
sum(noofan) as no_of_an,
sum(sumofrdur) as sum_of_rdur,
sum(sumofchdur) as sum_of_chdur,
[Code] ...........
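A sketch that also returns the empty ranges: generate the 5-wide ranges first and outer-join the aggregated data onto them (R_DUR, CH_DUR and REV are VARCHAR2 in the DDL, hence the TO_NUMBER calls):

WITH ranges AS (
  SELECT (LEVEL - 1) * 5 + 1 AS range_start_rdur,
         LEVEL * 5           AS range_end_rdur
  FROM   dual
  CONNECT BY LEVEL <= 11                                 -- 1-5 .. 51-55
),
agg AS (
  SELECT TRUNC((TO_NUMBER(r_dur) - 1) / 5) * 5 + 1 AS range_start_rdur,
         COUNT(DISTINCT a_n)    AS no_of_an,
         SUM(TO_NUMBER(r_dur))  AS sum_of_rdur,
         SUM(TO_NUMBER(ch_dur)) AS sum_of_chdur,
         SUM(TO_NUMBER(rev))    AS sum_of_rev
  FROM   tbl_ok_hist
  GROUP  BY TRUNC((TO_NUMBER(r_dur) - 1) / 5) * 5 + 1
)
SELECT r.range_start_rdur, r.range_end_rdur,
       NVL(a.no_of_an, 0)     AS no_of_an,
       NVL(a.sum_of_rdur, 0)  AS sum_of_rdur,
       NVL(a.sum_of_chdur, 0) AS sum_of_chdur,
       NVL(a.sum_of_rev, 0)   AS sum_of_rev
FROM   ranges r
LEFT   JOIN agg a ON a.range_start_rdur = r.range_start_rdur
ORDER  BY r.range_start_rdur;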
I have a table where the codes are sometimes 6 characters long and sometimes 12. I need to add up the amounts based on the two different types of codes: one sum is over the distinct short codes, and the other sum will be based on substr(7) to the end of the long codes.
create table strings ( strings_var varchar2(12),strings_amt number);
insert into strings (strings_var,strings_amt) values ('02.01',10 );
insert into strings (strings_var,strings_amt) values ('02.01_A11111',15);
insert into strings (strings_var,strings_amt) values ('02.02_A11111',15);
insert into strings (strings_var,strings_amt) values ('03.01_B11111',15);
insert into strings (strings_var,strings_amt) values ('03.02_B11111',15);
the output which i want is as below.
string value
'02.01' 10
'A11111' 30
'B11111' 30
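A sketch that produces exactly that output: derive the grouping key with a CASE on the code length, then sum per key:

SELECT CASE
         WHEN LENGTH(strings_var) > 6 THEN SUBSTR(strings_var, 7)
         ELSE strings_var
       END              AS string,
       SUM(strings_amt) AS total_amt
FROM   strings
GROUP  BY CASE
            WHEN LENGTH(strings_var) > 6 THEN SUBSTR(strings_var, 7)
            ELSE strings_var
          END;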
In my schema the employees table has a number of 55 rows in department_id 30.
How can I split the employees table into views grouped by department_id, as:
- one view with no more than 55 rows (this view will contain only one department)
- another view with more departments whose total row count is not > 55 but which can contain 2 department_ids (e.g. 9 and 10, where the sum of their rows is 43, but if I brought in another department the row count would be > 55)
Allow me to preface this with the notice that I am not familiar with XML outside of its hierarchical structure, and am not familiar with what you can do with it using formatting.
As an example, let us say you have the following table:
Object_Type | Object_Name | Descriptor |
------------------------------------------------------------
Fruit | Apple | Crunchy |
Fruit | Orange | Sour |
Utensil | Pencil | Wooden |
Now let's say you want to query this table to return an XML format, which will be used in a web site to display the information, and you want to group the display by Object_Type, so that you want an XML format like this:
<Object Group>
<Object Type>Fruit</Object Type>
<Object>
<Object Name>Apple</Object Name>
<Descriptor>Crunchy</Descriptor>
[code]........
However, from what I can tell, using the XMLELEMENT function, it appears the closest I can get is the following:
SELECT XMLELEMENT("Object Group",
XMLELEMENT("Object Type", object_type),
XMLELEMENT("Object",
XMLELEMENT("Object Name", object_name),
XMLELEMENT("Descriptor", descriptor)
)
)
FROM object_tbl;
<Object Group>
<Object Type>Fruit</Object Type>
<Object>
<Object Name>Apple</Object Name>
[code].........
Is it possible to group it in a way so that Apple and Orange end up in the the same <Object Group>? Or is that meaningless and such grouping can be done on the web site itself by formatting the XML?
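It can be grouped in the SQL itself: aggregate the per-row <Object> elements with XMLAGG and group by Object_Type. A sketch (note that XML element names cannot contain spaces, so they are written here without them):

SELECT XMLELEMENT("ObjectGroup",
         XMLELEMENT("ObjectType", object_type),
         XMLAGG(
           XMLELEMENT("Object",
             XMLELEMENT("ObjectName", object_name),
             XMLELEMENT("Descriptor", descriptor)
           )
         )
       ) AS grouped_xml
FROM   object_tbl
GROUP  BY object_type;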
When a GROUP BY clause contains multiple columns, which grouping is the most major grouping? What puzzled me was that I never knew there was such a thing as a "most major grouping" in a GROUP BY clause. Anyway, the answer:
the first column listed in the GROUP BY clause. What does this mean in practice? It must mean something different from your bog-standard "select sum(order_value) from sales group by city, country, region", because in that case I can't see how city has any more or less relevance to the query than region.
I am trying to write a single SELECT statement that groups at 2 levels of aggregation (using grouping sets) and assigns row numbers (to rank each item) that are partitioned at the correct level for each grouping set. I have the grouping sets figured out but I can't find a way to make Partition By match each level of aggregation.
What I am looking for (in a single SELECT statement) is logically equivalent to:
SELECT week
,region
,NULL as country
,item
,SUM(qty)
,ROW_NUMBER() OVER (PARTITION BY week, region ORDER BY SUM(qty) DESC) as rownum
FROM base
GROUP BY week
,region
,item
UNION ALL
SELECT week
,NULL as region
,country
,item
,SUM(qty)
,ROW_NUMBER() OVER (PARTITION BY week, country ORDER BY SUM(qty) DESC) as rownum
FROM base
GROUP BY week
,country
,item
I hoped that I could do something like this:
SELECT week
,region
,country
,item
,SUM(qty)
,ROW_NUMBER() OVER (PARTITION BY week, GROUPING SETS (region, country) ORDER BY SUM(qty) DESC) as rownum
FROM base
GROUP BY week
,GROUPING SETS (region, country)
,item
But it looks like I am not allowed to partition by grouping sets -- I get the error ORA-00907: missing right parenthesis. I didn't expect it to work but I am not sure how else to partition by multiple levels.
let me know if I could have tagged my code or met other forum standards better.
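One workaround sketch: do the grouping-sets aggregation in an inline view, expose GROUPING_ID there, and apply ROW_NUMBER in an outer query. Within one grouping set the "other" column is NULL, so partitioning by (week, gid, region, country) behaves like (week, region) for one set and (week, country) for the other:

SELECT week, region, country, item, qty,
       ROW_NUMBER() OVER (PARTITION BY week, gid, region, country
                          ORDER BY qty DESC) AS rn
FROM (
  SELECT week,
         region,
         country,
         item,
         SUM(qty)                     AS qty,
         GROUPING_ID(region, country) AS gid
  FROM   base
  GROUP  BY week, item, GROUPING SETS (region, country)
);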
I am in the very early planning stages of a project the goal of which is to identify separate organizations which may in fact be the same organization.
Our first implementation of this task was a process designed to look for a few thousand organizations in a pool of a few hundred thousand organizations. To accomplish this we made heavy use of Oracle's Text index as well as a custom index type we created which utilized n-grams. This approach worked quite well for on-demand editing of the organizations, in which a user might log in and say in addition to what we already know about organization A we also know x, y and z does that change anything and worked acceptably well for the bulk processing we did on our "known" information once a week running for a couple of hours on the weekend.
We have now been tasked with reworking this initial implementation only now we want to look at a set consisting of several million organizations for potential matches which exist within the set. As in our initial implementation we will be breaking what we know about organizations into groupings so we aren't comparing a phone number to an email address and normalizing the data as much as we can so we ignore things like case and punctuation. Even after all this we are still talking about looking for similar values in a group which might be in the tens of millions (some types of data will have more than one value per organization).
My initial thought on the problem is to use n-grams though not in the way we did in the past. The basic idea here is that we break the search values up into all the substrings it is made of and look for other values which have a high number of those substrings in common.
I am not sure SQL & PL/SQL is the best place for this question, but I could not think of a better one.
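A rough sketch of the n-gram idea in plain SQL, with all names (org_values, org_id, val) assumed: explode each normalized value into 3-grams and count how many 3-grams two values share. At the volumes described the n-grams would be persisted in an indexed table rather than generated on the fly:

WITH grams AS (
  SELECT v.org_id,
         v.val,
         SUBSTR(v.val, n.pos, 3) AS gram
  FROM   org_values v
  JOIN  (SELECT LEVEL AS pos FROM dual CONNECT BY LEVEL <= 100) n
         ON n.pos <= LENGTH(v.val) - 2
)
SELECT a.org_id               AS org_a,
       b.org_id               AS org_b,
       COUNT(DISTINCT a.gram) AS shared_grams
FROM   grams a
JOIN   grams b ON b.gram = a.gram AND b.org_id > a.org_id
GROUP  BY a.org_id, b.org_id
HAVING COUNT(DISTINCT a.gram) >= 5        -- similarity threshold, tune as needed
ORDER  BY shared_grams DESC;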
I have a query on displaying data as per my requirement. I have created a table called sales; it has four columns:
create table sales (country varchar2(20), state varchar2(20), district varchar2(20), sales number);  -- column datatypes assumed; the original statement omitted them
and I am inserting some sample data:
insert into sales values ('india','TN','Chennai',100);
insert into sales values ('india','TN','KPURAM',120);
insert into sales values ('india','TN','Bangalore',35);
insert into sales values ('india','ANDR','Guinder',100);
insert into sales values ('india','ANDR','Nellai',76);
insert into sales values ('london','city-a','xstreet',89);
insert into sales values ('london','city-a','binroad',100);
select * from sales;
country state district sales
india TN chennai 100
india TN KPURAM 120
india TN Bangalore 35
india ANDR Guinder 100
india ANDR Nellai 76
london city-a xstreet 89
london city-a binroad 100
The data is displayed in this format; now I am trying to work out how to display the data the way I want.
I have to get totals from a table using different criteria, which I do like this:
<QUERY>
SELECT DISTINCT
SUM(CASE WHEN MYCONDITION1 THEN 1 ELSE 0 END) AS TOTAL1,
SUM(CASE WHEN MYCONDITION2 THEN 1 ELSE 0 END) AS TOTAL2
FROM TABLE1, TABLE2
WHERE COMMON_CONDITION1 AND COMMON_CONDITION2
AND datevalue1 >= DATE1 AND datevalue1 <= DATE2;
<QUERY>
This works fine and I get the intended result. Now, I have to repeat this for every week for the last 12 months, excluding the holiday period. So, I generate a set of date ranges which will be used in the queries, and then repeat the above SQL statement for all the date ranges, which is a lengthy process. How can I do that in a single shot and get all the totals for each date range?
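A sketch of the single-shot version: put the generated week ranges into their own table (or inline view) and join it in, so one grouped query returns the pair of totals per range. The MYCONDITION and join placeholders below mirror the ones in the post and must be filled in with the real predicates:

SELECT r.range_start,
       r.range_end,
       SUM(CASE WHEN MYCONDITION1 THEN 1 ELSE 0 END) AS total1,
       SUM(CASE WHEN MYCONDITION2 THEN 1 ELSE 0 END) AS total2
FROM   date_ranges r                    -- one row per week: range_start, range_end
JOIN   table1 t1 ON ...                 -- the original join/common conditions go here
JOIN   table2 t2 ON ...
WHERE  t1.datevalue1 >= r.range_start
AND    t1.datevalue1 <= r.range_end
GROUP  BY r.range_start, r.range_end
ORDER  BY r.range_start;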
Ok assume there is a table (TableA) in this format
col1   col2     col3                  col4   col5   col6
--------------------------------------------------------------
R1     route1   route1Description1    AA     BB     CC
R1     route1   route1Description1    AA     CC
R1     route1   route1Description1    CC     BB
R2     route2   route1Description2    GG     KK     LL
R2     route2   route1Description2    GG     LL
R2     route2   route1Description2    LL     KK
[Code]..
The data in the table was imported from a csv file and there is a relationship between the rows. Each combination of col1, col2 and col3 describes a full route of a journey. The row with an entry in col6 describes the full route and the other rows describe each leg of the route.
For example, for R1, the route is AA to BB via CC.
Another example for R4 the route is FF to SS via XX, PP, and OO.
What I would like to do is find routes that are missing a leg. For example, the route for R3 is DD to EE via FF. There is an entry for DD to FF but the entry for FF to EE is missing.
The results should return the following rows which are incomplete
R3     route3   route1Description3    DD     EE     FF
R3     route3   route1Description3    DD     FF
R5     route5   route5Description5    RR     TT     UU|VV
What is the best way to do this?
Here is what I have come up with, but it doesn't quite return the correct result:
select * from tableA a
where not exists (
      select 1 from tableA b
      where instr(col6, col4, 1) > 0
        and instr(col6, col1, 1) > 0
        and a.col1 = b.col1
        and a.col2 = b.col2
        and a.col3 = b.col3
)
Is there an easier way to achieve this?
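A sketch of one alternative: for each full-route row (col6 populated), compare the number of leg rows for the same col1/col2/col3 with the number of legs the full route implies (via-points + 1). REGEXP_COUNT is 11g; on 10g the via count can be derived with LENGTH/REPLACE instead:

SELECT f.*
FROM   tablea f
WHERE  f.col6 IS NOT NULL
AND   (SELECT COUNT(*)
       FROM   tablea l
       WHERE  l.col1 = f.col1
       AND    l.col2 = f.col2
       AND    l.col3 = f.col3
       AND    l.col6 IS NULL)
      < REGEXP_COUNT(f.col6, '[^|]+') + 1;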
I have a one-column table that stores lists of elements:
create table test_table (c1 VARCHAR2(4000));
insert into test_table values ('1,23');
insert into test_table values ('1,2');
insert into test_table values ('3,4,5');
[code]...
The output column would be something like that:
output_column
1,2,7,23
6,9,0
3,4,5
I'm grouping columns that have at least one element in common.
(1,23) and (1,2) merge into : (1,2,23)
(1,2,23) and (7,2) merge into : (1,2,7,23) --> Output
(6,9) and (9,0) merge into : (6,9,0) --> Output
(3,4,5) and (5,5) merge into : (3,5,5) --> Output
I have made this logic using only PL/SQL, with loops and nested tables using function memberof, but I suppose that there is a way to improve the performance using only SQL.
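A sketch of the set-based building block: explode each row into its elements, then self-join on a shared element to find rows that must be merged. Fully collapsing the groups (the transitive closure) still needs iteration or a recursive query on top of this:

WITH elems AS (
  SELECT t.c1,
         REGEXP_SUBSTR(t.c1, '[^,]+', 1, n.pos) AS elem
  FROM   test_table t
  JOIN  (SELECT LEVEL AS pos FROM dual CONNECT BY LEVEL <= 50) n
         ON n.pos <= LENGTH(t.c1) - LENGTH(REPLACE(t.c1, ',')) + 1
)
SELECT DISTINCT a.c1 AS row_a, b.c1 AS row_b
FROM   elems a
JOIN   elems b ON b.elem = a.elem AND b.c1 <> a.c1;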
Actually I have a table with the following data:
--------------------------------------------------
DATE        ITEM NO    FEE TYPE    AMOUNT
--------------------------------------------------
1/1/2012    123456     1           $0.50
1/1/2012    123456     2           $0.40
1/1/2012    123456     3           $0.30
[code]...
I would like to have a data set like this: grouping by ITEM NO & DATE
----------------------------------------------------------------------
DATE        ITEM NO     1       2       3       4
----------------------------------------------------------------------
1/1/2012    123456      $0.50   $0.40   $0.30   $0.20
2/1/2012    1234567     $0.50   $0.40   $0.30   $0.20
3/1/2012    12345678    $0.00   $0.40   $0.30   $0.20
As you can see, from the third column onwards in the result set, each fee type becomes a separate column.
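A sketch using conditional aggregation (the 11g PIVOT clause is an alternative); the table name fees and the column names trans_date, item_no, fee_type and amount are assumed from the headings:

SELECT trans_date,
       item_no,
       MAX(CASE WHEN fee_type = 1 THEN amount END) AS fee_1,
       MAX(CASE WHEN fee_type = 2 THEN amount END) AS fee_2,
       MAX(CASE WHEN fee_type = 3 THEN amount END) AS fee_3,
       MAX(CASE WHEN fee_type = 4 THEN amount END) AS fee_4
FROM   fees
GROUP  BY trans_date, item_no;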
I'm having a problem with grouping sets over dictionary views.
10g output:
SQL> select
2 -- 10g results
3 segment_name,
4 round(sum(bytes/1024/1024),2) mb
5 from dba_segments
[Code]...
ERROR at line 10: ORA-03001: unimplemented feature
Elapsed: 00:00:00.12
The query is fine over a non-dictionary table, however (my actual code isn't against dual, but this makes it generic; the error is consistent).
11g output
SQL> select
2 -- 11g results
3 segment_name,
4 round(sum(bytes/1024/1024),2) mb
5 from dba_segments
[Code]....
ERROR at line 8: ORA-00904: : invalid identifier
Different error and that syntax, as far as I can tell, is sound - error message be damned.
An example of a query working on both versions is
select
employee_id,
sum(salary)
from
employees
group by grouping sets ((employee_id),null)
;
Am I missing something? Does grouping sets not work over the dictionary views?
Edit: Added version tags over the code to make it easier to read
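One workaround worth trying (a sketch): copy the dictionary data into a plain table or global temporary table first, since grouping sets over an ordinary table works as the employees example shows, and run the grouping-sets query against that:

CREATE GLOBAL TEMPORARY TABLE seg_scratch ON COMMIT PRESERVE ROWS AS
  SELECT segment_name, bytes FROM dba_segments WHERE 1 = 0;

INSERT INTO seg_scratch SELECT segment_name, bytes FROM dba_segments;

SELECT segment_name,
       ROUND(SUM(bytes / 1024 / 1024), 2) AS mb
FROM   seg_scratch
GROUP  BY GROUPING SETS ((segment_name), ())
ORDER  BY segment_name;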
I'm using Oracle 10g. I have a table with 4 columns:
main_group-----id--------start_date------------end_date
M1-------------1---------07FEB11---------------10FEB11
M1-------------2---------09FEB11---------------11FEB11
M1-------------3---------10FEB11---------------12FEB11
M1-------------4---------13FEB11---------------16FEB11
M2-------------5---------18FEB11---------------21FEB11
M2-------------6---------19FEB11---------------24FEB11
M2-------------7---------26FEB11---------------27FEB11
I need to group the ids which have overlapping dates, and the output should be:
main_group-----id--------start_date------------end_date
M1-------------1---------07FEB11---------------10FEB11------G1
M1-------------2---------09FEB11---------------11FEB11------G1
M1-------------3---------10FEB11---------------12FEB11------G1
M1-------------4---------13FEB11---------------16FEB11------G2
M2-------------5---------18FEB11---------------21FEB11------G3
M2-------------6---------19FEB11---------------24FEB11------G3
M2-------------7---------26FEB11---------------27FEB11------G4
I can give you the logic: first I'll sort by start_date (already sorted in the given example), then I'll compare the 2nd id's start date with the 1st id's end date; if it is less than the 1st id's end date, that means they overlap and I'll put those 2 ids into the same group, otherwise into 2 different groups.
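A sketch of exactly that logic: flag a row as starting a new group when its start_date is after every end_date seen so far within the same main_group, then number the groups with a running sum of the flags across the whole result (your_table is a placeholder):

SELECT main_group, id, start_date, end_date,
       'G' || SUM(new_grp) OVER (ORDER BY main_group, start_date, id) AS grp
FROM (
  SELECT t.*,
         CASE
           WHEN start_date <= MAX(end_date)
                              OVER (PARTITION BY main_group
                                    ORDER BY start_date, id
                                    ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
           THEN 0
           ELSE 1
         END AS new_grp
  FROM   your_table t
)
ORDER  BY main_group, start_date, id;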