CREATE TABLE TEST1 (AGG_DATE DATE, COL1 NUMBER(9), COL2 NUMBER(9), COL3 NUMBER(9));
Here is the test-data population script:
insert into TEST1 (AGG_DATE, COL1, COL2, COL3)
values (to_date('01-01-2012', 'dd-mm-yyyy'), 1, 1, 1);
[code]....
The problem is that when I wrote an analytic query, it computed BEGIN_DATE and END_DATE across all the partition values together, so instead of the values above it produced an answer as follows:
I'm posting a test case below in which I don't understand the output of the LAST_VALUE function. I'm expecting the maximum salary in a department: because I'm partitioning by department and ordering each partition in ascending order, the last value should be the maximum value within a partition, i.e. the department in this case.
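For reference, this behaviour usually comes from the default window: with an ORDER BY and no explicit frame, LAST_VALUE uses RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so it returns the current row's value rather than the partition maximum unless the frame is widened. A minimal sketch against SCOTT.EMP (the standard demo table, not the poster's test case):

select deptno, ename, sal,
       last_value(sal) over (partition by deptno
                             order by sal
                             rows between unbounded preceding
                                      and unbounded following) as max_sal
  from scott.emp;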
I need to calculate the sum of values over a period of exactly one month (including the current row). If I use a windowing clause of "range between interval '1' month preceding and current row", the total period length is one month plus one day (the extra day being the one in the current record).
Basically, I want to sum over a period starting at "add_months(startdate, -1) + 1" up until startdate of each row.
drop table window_tst;

create table window_tst
( id number primary key
[Code]....
So for 01-Feb, instead of going back to 01-Jan, the window should only include 02-Jan through 01-Feb.
I could of course recalculate the period length back to a number of days for each row, but that is not really what I would prefer, as it would make the code rather unreadable.
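One workaround that avoids recalculating day counts is to drop the windowing clause and compute the sum with a correlated scalar subquery whose lower bound is strictly greater than add_months(dt, -1). This is only a sketch: the column names dt and amount are assumed, since the full window_tst definition is not shown.

select t.id, t.dt, t.amount,
       (select sum(t2.amount)
          from window_tst t2
         where t2.dt >  add_months(t.dt, -1)   -- excludes the day exactly one month back
           and t2.dt <= t.dt) as month_sum
  from window_tst t
 order by t.dt;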
I want to use an analytic function instead of the GROUP BY clause for the query below.
select CASE
         WHEN ADMT.SOURCESYSTEM = 'CLU' THEN COUNT(ADMT.TOTAL_COUNT) * 5
         ELSE COUNT(ADMT.TOTAL_COUNT)
       END TOTAL_COUNT
  from ESMARTABC.ABC_DRVR_MFAILS_TMP ADMT
 group by ADMT.SOURCESYSTEM
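A possible analytic rewrite is sketched below: COUNT(...) OVER (PARTITION BY SOURCESYSTEM) replaces the aggregate, and DISTINCT collapses the result back to one row per source system. Note that DISTINCT would also collapse two source systems that happen to produce the same total, so add ADMT.SOURCESYSTEM to the select list if that matters.

select distinct
       CASE
         WHEN ADMT.SOURCESYSTEM = 'CLU'
           THEN COUNT(ADMT.TOTAL_COUNT) OVER (PARTITION BY ADMT.SOURCESYSTEM) * 5
         ELSE COUNT(ADMT.TOTAL_COUNT) OVER (PARTITION BY ADMT.SOURCESYSTEM)
       END TOTAL_COUNT
  from ESMARTABC.ABC_DRVR_MFAILS_TMP ADMT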
How do I delete duplicate records from a table without using ROWID? I found the duplicate rows using an analytic function, but I could not use the analytic function in the WHERE condition.
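An analytic function cannot sit directly in a WHERE clause, but it can be pushed into an inline view and filtered there. The sketch below assumes a hypothetical table DUP_TAB whose duplicates share COL1/COL2 but differ in CREATED_DT; if the duplicate rows are completely identical in every column, they cannot be told apart without ROWID.

delete from dup_tab d
 where (d.col1, d.col2, d.created_dt) in
       (select col1, col2, created_dt
          from (select col1, col2, created_dt,
                       row_number() over (partition by col1, col2
                                          order by created_dt) rn
                  from dup_tab)
         where rn > 1);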
I was reading a tutorial on analytic functions and I found something like this:
sum(principal) keep (dense_rank first order by d_date) over (partition by userid, alias, sec_id, flow, p_date)
How can this be translated into simple queries / a subquery? I am aware that analytic functions are faster, but I would like to know how this can be written without an analytic function.
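As a sketch (the table name T is assumed, and this returns one row per group rather than repeating the value on every row as the analytic form does): KEEP (DENSE_RANK FIRST ORDER BY d_date) sums PRINCIPAL over only the rows that carry the earliest D_DATE within each partition, which can be expressed with a join to a grouped subquery.

select t.userid, t.alias, t.sec_id, t.flow, t.p_date,
       sum(t.principal) as principal_on_first_date
  from t
  join (select userid, alias, sec_id, flow, p_date,
               min(d_date) as first_d_date
          from t
         group by userid, alias, sec_id, flow, p_date) f
    on  f.userid       = t.userid
    and f.alias        = t.alias
    and f.sec_id       = t.sec_id
    and f.flow         = t.flow
    and f.p_date       = t.p_date
    and f.first_d_date = t.d_date
 group by t.userid, t.alias, t.sec_id, t.flow, t.p_date;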
Is there any way to apply a restriction on analytic functions, just like WHERE and HAVING? As we know, we can apply restrictions on a table using WHERE and on grouping functions using HAVING.
For example: a department-wise count alongside every employee record:
SQL> select count(*) over (partition by deptno) dept_Count, ce.*
  2    from scott.emp ce
  3   order by deptno, job;

DEPT_COUNT EMPNO ENAME      JOB        MGR  HIREDATE    SAL      COMM DEPTNO
---------- ----- ---------- --------- ----- ----------- -------- ---- ------
         3  7934 MILLER     CLERK      7782 1/23/1982    1300.00        10
         3  7782 CLARK      MANAGER    7839 6/9/1981     2450.00        10
         3  7839 KING       PRESIDENT        11/17/1981  5000.00        10
         5  7788 SCOTT      ANALYST    7566 4/19/1987    3000.00        20
[code]....
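Analytic results cannot be referenced in the WHERE or HAVING of the same query block, because they are evaluated after those clauses. The usual workaround, sketched below on the same SCOTT.EMP example, is to wrap the query in an inline view and filter on the analytic column there (here keeping only departments with more than three employees):

select *
  from (select count(*) over (partition by deptno) dept_Count, ce.*
          from scott.emp ce)
 where dept_Count > 3
 order by deptno, job;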
Now I am trying to get max(Value_2011) keep (dense_rank last order by Month_ID), but I get a NULL. I can understand it's because Month_ID covers all years, but I only need it to look at the Month_ID values for 2011 and return the value at the last dense rank. How can I achieve this?
I tried a couple of different methods like LAST_VALUE(), but I have a GROUP BY in my original statement and I think analytic functions don't mix with GROUP BY unless their arguments are part of it. How can I achieve this?
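One simple approach is to restrict the rows being ranked to 2011 before the KEEP aggregate is applied, either with a WHERE clause or an inline view. A sketch with assumed names (MY_TABLE, a numeric YYYYMM MONTH_ID, and a grouping key PROD_ID):

select prod_id,
       max(value_2011) keep (dense_rank last order by month_id) as last_2011_value
  from my_table
 where month_id between 201101 and 201112   -- only rank the 2011 months
 group by prod_id;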
I need to calculate the average salary over the three days prior to each row, excluding the current row. That should happen for every row, moving backwards. I have given all the details.
create table Employee
( ID         VARCHAR2(4 BYTE) NOT NULL,
  name       varchar(20),
  Start_Date DATE,
  Salary     Number(8,2),
  mv_avg     number(8,2)
[code]....
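A sketch of the moving average using a RANGE window on the date column: with an ORDER BY on a DATE, numeric bounds are interpreted as days, and ending the frame at 1 PRECEDING leaves the current row out. Add PARTITION BY name (or ID) if the average should be kept per employee; that detail is assumed here.

select id, name, start_date, salary,
       avg(salary) over (order by start_date
                         range between 3 preceding
                                   and 1 preceding) as mv_avg
  from Employee
 order by start_date;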
My table has two date columns: EFF_DT, the start date, and TERM_DT, the end date. The EFF_DT of the next record should be the day after the TERM_DT of the previous record.
In the fourth record, the effective date should be 1-Oct-13, which is the day after the last TERM_DT of 30-Sep-13. As there is a break in the dates, the output should show 15-Oct-13 as the second start date.
Note: Refer to the PI_ID column; there is a break in the dates for the same PI_ID 'ABC'.
Here I am trying to generate a pseudo column, so that the table with the pseudo column looks as shown below, and I can use FIRST_VALUE and LAST_VALUE partitioned on the pseudo column to get the desired output.
1) CNT_VAL is the pseudo column:
--------------------------------
CK_ID  PI_ID  EFF_DT    TERM_DT    CNT_VAL
Mem1   ABC    1-Jan-13  31-Mar-13  1
Mem1   ABC    1-Apr-13  31-May-13  1
Mem1   ABC    1-Jun-13  30-Sep-13  1
[code].....
My Query:
---------
I am not getting the desired output here, as the value in the pseudo column is 3.
select CK_ID, PI_ID, EFF_DT, TERM_DT,
       (case
          when case_CONT - LAG(case_CONT, 1) over (ORDER BY EFF_DT) = 0
            then to_char(case_CONT)
          when case_CONT - LAG(case_CONT, 1) over (ORDER BY EFF_DT) <> 0
            then to_char(LAG(case_CONT, 1) over (ORDER BY EFF_DT) + 1)
          else to_char(nvl(case_CONT, 0))
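A common way to build this kind of pseudo column is a running sum of a "new group" flag: a row starts a new group whenever its EFF_DT is not the day after the previous TERM_DT for the same CK_ID/PI_ID. A sketch, with the table name assumed:

select CK_ID, PI_ID, EFF_DT, TERM_DT,
       sum(case
             when EFF_DT = lag(TERM_DT) over (partition by CK_ID, PI_ID
                                              order by EFF_DT) + 1
               then 0            -- contiguous with the previous row
             else 1              -- gap (or first row): start a new group
           end)
         over (partition by CK_ID, PI_ID order by EFF_DT) as CNT_VAL
  from my_table;                 -- table name assumed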
We've got a query which returns one row, but uses an IN clause. The IN clause links to more than one row in the subquery. When we use a combination of DISTINCT and an analytic SUM, the sum total is multiplied by the number of rows in the subquery. Remove the DISTINCT and we get a single value.
A simplified example of the problem is below.
I can't see how a query which returns a single row can then return multiple values with the addition of a DISTINCT. Removing the analytic SUM also gives a single row, but we need it in the actual query we're running. So it seems some combination of DISTINCT, analytic SUM and the IN subquery is causing multiple values to be returned.
CREATE TABLE go_test_distinct1 (gtd_value NUMBER);

-- Three identical values
-- To replicate the three identical values returned by
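Continuing that test table, a minimal sketch of the evaluation-order effect at the heart of this: analytic functions are computed before DISTINCT is applied, so the window sees every duplicate row even though DISTINCT later collapses them (the insert values below are assumed, not taken from the original test case):

insert into go_test_distinct1 values (10);
insert into go_test_distinct1 values (10);
insert into go_test_distinct1 values (10);

-- The window sees all three rows, so TOTAL is 30 even though DISTINCT
-- reduces the output to a single row.
select distinct gtd_value,
       sum(gtd_value) over () as total
  from go_test_distinct1;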
I have a table with 5 lakh (500,000) transactions. I want to fetch the last balance for a particular date, so I have written a query like the one below.
SELECT curr_balance FROM transaction_details WHERE acct_num = '10'
[Code]...
This has to be executed for each of 12 incrementing months to find the last balance of each month. But the query has a high CPU cost, and run 12 times it takes a huge amount of time. How can the query be rewritten to get the results faster using an analytic query? Or can it be broken into two parts in PL/SQL to achieve the performance?
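One single-pass alternative is to rank the transactions within each month and keep the latest row per month, so the table is scanned once instead of twelve times. A sketch, assuming a transaction date column named TXN_DATE (the real column name is not shown in the post):

select acct_num, mth, curr_balance as last_balance
  from (select acct_num,
               trunc(txn_date, 'MM') as mth,
               curr_balance,
               row_number() over (partition by acct_num, trunc(txn_date, 'MM')
                                  order by txn_date desc) rn
          from transaction_details
         where acct_num = '10')
 where rn = 1;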
I am using the LAG function to display values like below:
order details   date      starttime
-------------   --------  ---------
main order 1    07/10/12  06:00am
line 1          07/10/12  06:21am
line 2          07/10/12  06:31am
main order 2    07/11/12  07:00am
line 1          07/11/12  07:01am
line 2          07/11/12  07:02am
The data displays correctly when I use the LAG function, except that the line 1 details never get displayed, i.e. the first line under every order does not get displayed. Is using the LAG function correct in this case?
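LAG returns NULL for the first row of each partition, and if that NULL is later used in a filter or a comparison the first line silently drops out. LAG's optional third argument supplies a default for exactly that case. A sketch with assumed names (ORDER_LINES with ORDER_ID, LINE_NO, START_TIME):

select order_id, line_no, start_time,
       lag(start_time, 1, start_time)   -- default = own start_time for the first line
         over (partition by order_id order by start_time) as prev_start
  from order_lines;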
I have written a query which basically retrieves an id and a created date. If I use the MAX function it returns the id which has the max created date. But if I use the MIN function the query does not return the id with the min created date; it does not return any rows at all.
SELECT To_char(OSH.osh_id),
       OSH.osh_created
  FROM tn_order_status_history osh,
       tn_order_status_type ost,
       tn_orderline_product op
[code]..........
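Without the full statement it is hard to say why MIN returns nothing (a join or date filter eliminating the earliest row is a common cause), but if the goal is simply "the id belonging to the earliest created date", a KEEP aggregate avoids the subquery comparison altogether. A sketch on the history table alone:

select min(osh.osh_id) keep (dense_rank first order by osh.osh_created) as first_id,
       min(osh.osh_created)                                             as first_created
  from tn_order_status_history osh;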
The company I work for has just recently upgraded from Oracle 9i to 10g. We are having various problems with the migration, and certain code is breaking. My question regards this piece of code:
[code]
to_number(to_char('a_date','YYYY'))
[code]
There are various statements like the one above scattered throughout a query I am trying to fix. When run, the query returns an "ORA-01722: invalid number" error, which I know is caused by the above code.
Was this method of converting from date to character discontinued between 9i and 10g? It just seems strange, as there are a lot of statements like the one above and they must have worked at some point, but now I can't even get one to work on its own.
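The quotes are the likely culprit rather than the 9i-to-10g upgrade: 'a_date' is a string literal, and TO_CHAR(<string>, 'YYYY') makes Oracle try to convert the string to a number first, which raises ORA-01722. Referencing the column directly works; the table and column names below are placeholders:

select to_number(to_char(a_date, 'YYYY')) as yr   -- a_date as a column, not a string
  from some_table;

-- or skip the double conversion entirely:
select extract(year from a_date) as yr
  from some_table;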
CREATE OR REPLACE FUNCTION get_project_id(
    schema_p IN VARCHAR2,
    table_p  IN VARCHAR2)
    RETURN VARCHAR2
IS
    projects_pred VARCHAR2 (400);
[code].......
I am trying to get the projects a user has from the WORKS_ON table (user_id, project_id). The user_id is retrieved from the context projects_ctx. I am getting the error "Function created with compilation errors".
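For comparison, here is a sketch of a VPD-style policy function with that signature which builds the predicate from the context; the context attribute name user_id and the exact predicate are assumptions based on the description, not the poster's code (run SHOW ERRORS after compiling to see the real compilation errors):

CREATE OR REPLACE FUNCTION get_project_id (
    schema_p IN VARCHAR2,
    table_p  IN VARCHAR2)
    RETURN VARCHAR2
IS
    projects_pred VARCHAR2 (400);
BEGIN
    projects_pred :=
           'project_id IN (SELECT project_id FROM works_on '
        || 'WHERE user_id = SYS_CONTEXT(''projects_ctx'', ''user_id''))';
    RETURN projects_pred;
END;
/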
I have an Oracle package that I am using to search for a string in a BLOB entry. I compiled the package and the package body in one environment with no errors, and when I execute it I get my results. I went ahead and created the same package and function in another environment, and it fails with the error below.
ORA-06503: PL/SQL: Function returned without value
ORA-06512: at "SYSTEM.IMPACTUS_PCODE", line 158 for sysadm
I have used this on other environments often and have never had an issue.
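ORA-06503 means execution reached the end of the function body without hitting a RETURN, typically because the other environment's data sends it down a branch that has no RETURN. A generic sketch of that pattern (names are illustrative, not the actual IMPACTUS_PCODE source):

CREATE OR REPLACE FUNCTION blob_contains (p_match IN NUMBER)
    RETURN NUMBER
IS
BEGIN
    IF p_match > 0 THEN
        RETURN 1;
    END IF;
    RETURN 0;   -- without this fallback, p_match = 0 or NULL raises ORA-06503
END;
/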
When I press the save button in my application the record is saved, but the value in my display item is not refreshed. When I minimize the form in the Forms designer and maximize it again, the display item is refreshed.
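One thing worth trying in the save trigger is to reassign the display item after the commit and force a repaint with the SYNCHRONIZE built-in. The block and item names below are placeholders, and the recalculation is just an assumed example:

-- e.g. in the WHEN-BUTTON-PRESSED trigger of the save button
COMMIT_FORM;
:blk.display_item := :blk.qty * :blk.price;   -- hypothetical recalculation
SYNCHRONIZE;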
I have a table with a date column whose values include a time component, e.g. 16/06/1996 15:03:59. With the query format below I am not able to retrieve the records.

select * from <table> where DATE_INSERT = SYSDATE;
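A DATE column carries a time component and so does SYSDATE, so an equality test almost never matches. Comparing the truncated values, or better a half-open range (which can still use an index on DATE_INSERT), works; <table> is kept as a placeholder as in the post:

select * from <table> where trunc(DATE_INSERT) = trunc(SYSDATE);

-- index-friendly form:
select *
  from <table>
 where DATE_INSERT >= trunc(SYSDATE)
   and DATE_INSERT <  trunc(SYSDATE) + 1;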
We have a function that is called in various other PL/SQL packages, and performance has always been very good. On 29th September we upgraded our database to 10.2.0.5.0, and since then a package that calls the function has gone from ~4 mins to ~2.5 hrs to run.
In PL/SQL Developer, a simple select that calls the function has gone from ~0.5 secs to retrieve the first 100 rows to ~12 secs. I ran a profile of the main package, which highlighted where the bottleneck was (a fetch from an explicit cursor). Running an explain plan on the cursor SQL doesn't really show up anything untoward.
However, I found that if I subtly changed the cursor SQL, (so that it did the same thing, but was written differently), it fixed the performance problems.
where ade_start_date between cpDate - cpDays and cpDate - 1
/* and ade_start_date < cpDate and ade_start_date >= (cpDate - cpDays) */
From this, we thought there might have been a bad cached execution plan which the change of code forced to be recalculated. However, about 2 hours later the changed code ran slowly again. So a further subtle change was made, which fixed the issue again, until this morning, when it was running slowly again.
This feels like it is potentially CBO/statistics related, but that is out of my area of knowledge unfortunately. We have our DBA investigating this, but there may be things I can test to narrow down the possibilities in the meantime.
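A couple of checks that can help distinguish plan instability (e.g. bind peeking after the upgrade) from a statistics problem are sketched below; the LIKE filter is just an assumed way to find the statement, and the &sql_id substitution variable is a placeholder:

-- has the same statement been running with different plans?
select sql_id, child_number, plan_hash_value, executions,
       round(elapsed_time / 1e6) as elapsed_secs
  from v$sql
 where sql_text like '%ade_start_date%';

-- inspect the plan of a given child cursor
select * from table(dbms_xplan.display_cursor('&sql_id', null));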
I am using the REGEXP_LIKE function, but it is giving me an error like this:
SQL>select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
5 rows selected.
SQL> select regexp_like(testcol,'^ab[cd]ef$') from test;
select regexp_like(testcol,'^ab[cd]ef$') from test
       *
ERROR at line 1:
ORA-00904: "REGEXP_LIKE": invalid identifier
But in the Oracle documentation it is listed under the 10g heading. Does Oracle 10g support this function?
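10g does support it, but REGEXP_LIKE is a condition rather than a function, so it can only appear where a condition is allowed (WHERE, CASE, CHECK constraints), not directly in the select list; using it in the select list is what triggers ORA-00904. Against the same TEST table:

select * from test where regexp_like(testcol, '^ab[cd]ef$');

-- to show the match result as a column, wrap it in CASE
select testcol,
       case when regexp_like(testcol, '^ab[cd]ef$') then 'Y' else 'N' end as matches
  from test;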