SQL & PL/SQL :: Restriction On Analytical Functions
Dec 6, 2012
Is there any way to apply a restriction on analytic functions, just like WHERE and HAVING? As we know, we can restrict table rows with the WHERE clause and grouped results with the HAVING clause.
For example: a department-wise count alongside every employee record:
SQL> select count(*) over (partition by deptno) dept_count, ce.*
  2  from scott.emp ce
  3  order by deptno, job;
DEPT_COUNT EMPNO ENAME      JOB        MGR  HIREDATE     SAL     COMM DEPTNO
---------- ----- ---------- --------- ----- ----------- -------- ---- ------
         3  7934 MILLER     CLERK      7782 1/23/1982    1300.00         10
         3  7782 CLARK      MANAGER    7839 6/9/1981     2450.00         10
         3  7839 KING       PRESIDENT        11/17/1981  5000.00         10
         5  7788 SCOTT      ANALYST    7566 4/19/1987    3000.00         20
[code]....
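There is no WHERE-style clause that applies directly to an analytic function in this Oracle release; the usual workaround is to compute the analytic value in an inline view and filter it in the outer query. A minimal sketch against the query above (the filter value 3 is just an illustration):

select *
from  (select count(*) over (partition by deptno) dept_count, ce.*
       from   scott.emp ce)
where  dept_count > 3          -- the "WHERE on the analytic" happens here
order  by deptno, job;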
Now I am trying to get max(Value_2011) keep (dense_rank last order by Month_ID), but I get NULL. I understand this is because Month_ID spans all years, but I only need it to look at Month_ID for 2011 and return the last dense_rank value. How can I achieve this?
I tried a couple of different methods, such as LAST_VALUE(), but my original statement has a GROUP BY, and analytic functions don't mix well with a GROUP BY when they reference columns that are not part of it. How can I achieve this?
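One way to make KEEP look at 2011 only, without dropping the GROUP BY, is to push the restriction into CASE expressions inside the aggregate. A sketch, assuming hypothetical names MONTHLY_DATA, YR, MONTH_ID, and VALUE_2011 based on the post:

select max(case when yr = 2011 then value_2011 end)
         keep (dense_rank last
               order by case when yr = 2011 then month_id else -1 end) as last_2011
from   monthly_data;
-- non-2011 rows sort with key -1, so DENSE_RANK LAST lands on the latest
-- 2011 month; add your existing GROUP BY columns as needed.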
Tell me about the restriction on COMMIT: where is this keyword not allowed? For example, I read somewhere that we can't use COMMIT in a trigger, and instead we use PRAGMA AUTONOMOUS_TRANSACTION.
But my confusion arose when I saw COMMIT used in a trigger on a table in our database.
Is COMMIT allowed in a trigger? If not, what is it doing there?
Another question: can COMMIT be used while creating a procedure or function?
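For context, a plain COMMIT in a trigger raises ORA-04092; the commits you see inside triggers almost always sit in an autonomous transaction, which has its own commit scope. A minimal sketch of that pattern (the audit table name is hypothetical):

create or replace trigger emp_audit_trg
  after update on scott.emp
  for each row
declare
  pragma autonomous_transaction;
begin
  insert into emp_audit_log (empno, changed_on)  -- hypothetical audit table
  values (:new.empno, sysdate);
  commit;  -- commits only the autonomous insert, not the triggering DML
end;
/

COMMIT is allowed in ordinary procedures and functions, though a function called from SQL must not commit unless it is itself autonomous.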
I would like to pass my 1Z0-047 certification, but I don't understand the limitation on scalar subqueries, especially in the HAVING clause.
Here is my scalar subquery (scalar because it returns only one value):
select avg(list_price) from product_information
I use it in a HAVING clause as a scalar subquery, and it works:
select status, avg(list_price) from product_information group by status having (select avg(list_price) from product_information) >= avg(list_price);
but it is documented that it shouldn't work:
QUOTE: There are also important restrictions on scalar subqueries. Scalar subqueries can't be used for:
- Default values for columns
- RETURNING clauses
- Hash expressions for clusters
- Functional index expressions
- CHECK constraints on columns
- WHEN condition of triggers
- GROUP BY and HAVING clauses
- START WITH and CONNECT BY clauses
I want to prevent my users from entering a time range that overlaps an already saved range.
create table asd(dt_frm date,dt_to date);
insert into asd values (to_date('01-04-2012 08:00','dd-mm-yyyy hh24:mi'), to_date('01-04-2012 10:00','dd-mm-yyyy hh24:mi'));
insert into asd values (to_date('01-04-2012 09:00','dd-mm-yyyy hh24:mi'), to_date('01-04-2012 11:00','dd-mm-yyyy hh24:mi'));
Now, on the second insert, I want to alert the user that 9 AM already falls within the saved record (8 AM to 10 AM), so this record cannot be saved.
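A sketch of the overlap check: two intervals overlap exactly when each starts before the other ends. In a multi-user system this check also needs locking/serialization, which is omitted here:

declare
  l_frm   date := to_date('01-04-2012 09:00', 'dd-mm-yyyy hh24:mi');
  l_to    date := to_date('01-04-2012 11:00', 'dd-mm-yyyy hh24:mi');
  l_clash pls_integer;
begin
  select count(*) into l_clash
  from   asd
  where  dt_frm < l_to
  and    dt_to  > l_frm;
  if l_clash > 0 then
    raise_application_error(-20001, 'Time range overlaps an existing record');
  end if;
  insert into asd values (l_frm, l_to);
end;
/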
I have a question regarding database access. I have one database server on which 3 databases are running. I want to restrict access to each database to a particular group only, so that anyone outside the group cannot access it.
The catch is that end users never log in to the database server itself; they always connect to the database using tools like PL/SQL Developer, SQL Developer, etc.
Is there any option through which I can limit database access to the group only?
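One commonly suggested approach is an AFTER LOGON trigger that rejects sessions based on session context. A sketch (the allowed names are hypothetical). Note that OS_USER is supplied by the client and can be spoofed, and users with the ADMINISTER DATABASE TRIGGER privilege are not blocked by a failing logon trigger, so this is a convenience fence rather than a hard security control:

create or replace trigger restrict_db_access
  after logon on database
declare
  l_osuser varchar2(100) := sys_context('userenv', 'os_user');
begin
  if l_osuser not in ('appsrv1', 'batchuser') then  -- hypothetical allowed group
    raise_application_error(-20002, 'Access is restricted to approved accounts');
  end if;
end;
/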
Currently I am using Oracle 10g and getting the following error:
"PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records"
This happens when trying to access an attribute of a record collection inside FORALL. I know it is prohibited in 10g but allowed in 11g. My question is: how can we achieve this in 10g? I will rephrase my problem:
I have three record collections, say rc1, rc2, and rc3, and I need to put all of them into a final record collection, say FC, using FORALL.
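Two points worth separating here, sketched below with hypothetical record/table names: merging collections is PL/SQL memory work, so it needs an ordinary FOR loop (FORALL only drives DML statements); and FORALL itself is fine in 10g as long as you bind the whole record rather than its individual fields, which is what raises PLS-00436.

declare
  type t_rec is record (id number, name varchar2(30));
  type t_tab is table of t_rec index by pls_integer;
  rc1 t_tab;
  rc2 t_tab;
  fc  t_tab;
begin
  rc1(1).id := 1; rc1(1).name := 'A';
  rc2(1).id := 2; rc2(1).name := 'B';
  -- merge with plain FOR loops (not FORALL):
  for i in 1 .. rc1.count loop fc(fc.count + 1) := rc1(i); end loop;
  for i in 1 .. rc2.count loop fc(fc.count + 1) := rc2(i); end loop;
  -- whole-record binds are legal in 10g FORALL; fc(i).id would raise PLS-00436
  forall i in 1 .. fc.count
    insert into target_tab values fc(i);  -- "target_tab" is hypothetical
end;
/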
In my organization, I have a table with a column named "code". I want to restrict certain insertions into that column. Suppose the existing code values are 12 and 1245; then I can't insert the values 12, 1245, 1, 124, and so on, but I can insert 2, 123, 15, 12456, and so on.
That means a new value should not be a left-prefix of any existing value. I made the column a primary key and had logic to compare existing values that are longer than the new value, but I don't know how to make it work correctly.
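The check itself reduces to: reject the new value if any existing code starts with it. A sketch, assuming a hypothetical table CODES holding the existing values (as with the time-range example above, concurrent sessions would need extra serialization):

declare
  l_new   varchar2(20) := '124';  -- candidate value, for illustration
  l_clash pls_integer;
begin
  select count(*) into l_clash
  from   codes
  where  code like l_new || '%';  -- some existing code has l_new as left-prefix
  if l_clash > 0 then
    raise_application_error(-20003,
      'Value ' || l_new || ' is a left-prefix of an existing code');
  end if;
  insert into codes (code) values (l_new);
end;
/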
I'm posting a test case below in which I cannot understand the output of the LAST_VALUE function. I'm expecting the maximum salary in each department: since I'm partitioning by department and ordering each partition ascending, the last value should be the maximum within the partition, i.e. the department in this case.
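The usual explanation: with an ORDER BY and no explicit frame, the default window is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so LAST_VALUE only sees rows up to the current one. Widening the frame gives the expected department maximum; a sketch against SCOTT.EMP:

select deptno, ename, sal,
       last_value(sal) over (partition by deptno
                             order by sal
                             rows between unbounded preceding
                                      and unbounded following) as max_sal
from   scott.emp;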
I was reading a tutorial on analytic functions and found something like this:
sum(principal) keep (dense_rank first order by d_date) over (partition by userid, alias, sec_id, flow, p_date)
How can this be translated into a simple query/subquery? I am aware that analytic functions are faster, but I would like to know how this translates to a query without analytic functions.
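Semantically, that expression sums PRINCIPAL over the rows holding the earliest D_DATE within each partition and repeats the result on every row. A non-analytic sketch, assuming a hypothetical table TRADES:

select t.*,
       (select sum(t2.principal)
        from   trades t2
        where  t2.userid = t.userid
        and    t2.alias  = t.alias
        and    t2.sec_id = t.sec_id
        and    t2.flow   = t.flow
        and    t2.p_date = t.p_date
        and    t2.d_date = (select min(t3.d_date)
                            from   trades t3
                            where  t3.userid = t.userid
                            and    t3.alias  = t.alias
                            and    t3.sec_id = t.sec_id
                            and    t3.flow   = t.flow
                            and    t3.p_date = t.p_date)) as first_sum
from   trades t;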
I have an XMLType column that is validated against an XML schema. Now I want to remove this restriction and make it just a generic XMLType column. I tried a bunch of ALTER TABLE commands, but I couldn't figure out the magic combination. Is this possible?
Here is the actual problem. Unfortunately, we ended up with both local and global schemas using the same URL. We have customers out there with both schemas (early customers) or only the global schema (new customers). Most of the tables were created before the global schema was added, so they reference the local schema. Now we want to evolve our schema, and I am trying to write a SQL script to clean up this mess:
- If only the global schema exists (late customers): do nothing.
- If both global and local schemas exist (early customers):
  - If any table or table.column depends on the local schema (I can find this from USER_DEPENDENCIES), mark those tables/columns as not schema-validated, i.e. remove all references to the local schema.
  - Delete and purge the local schema.
  - Modify those tables/columns to point to the global schema.
- Evolve the schema (only the global one is left).
Is there a way to modify a column definition and move its reference from the local schema to the global schema, where both schemas have the same URL? Of course, the main constraint is that we do not want to lose customer data. By the way, the local and global schemas are identical in terms of XSD.
Is it possible to change the XMLSchema/element settings of an XMLType column from X to Y, from X to null, or from null to X?
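I'm not aware of a single ALTER TABLE that flips a column between schema-based and non-schema-based storage, so one pragmatic route (a sketch only, untested against any particular release; "docs"/"xmlcol" are hypothetical names) is to add a plain XMLType column, re-create the values without the schema association, and swap the columns:

alter table docs add (xml_new xmltype);
update docs d
   set d.xml_new = xmltype(d.xmlcol.getclobval());  -- serializing drops the schema association
alter table docs drop column xmlcol;
alter table docs rename column xml_new to xmlcol;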
CREATE TABLE TEST1 (AGG_DATE DATE, COL1 NUMBER(9), COL2 NUMBER(9), COL3 NUMBER(9));
Here is the test-data population script:
insert into TEST1 (AGG_DATE, COL1, COL2, COL3) values (to_date('01-01-2012', 'dd-mm-yyyy'), 1, 1, 1);
[code]....
The problem is that when I wrote an analytic query, it produced the BEGIN_DATE and END_DATE by taking all the partition values together, so instead of the values above, it produces the following answer:
I need to calculate the sum of values over a period of exactly one month (including the current row). Now if I use a windowing clause of "range between interval '1' month preceding and current row", the total period length is 1 month plus one day (being the day in the current record).
Basically, I want to sum over a period starting at "add_months(startdate, -1) + 1" up until startdate of each row.
drop table window_tst;
create table window_tst (
  id number primary key
[Code]....
So instead of 01-Feb going back to 01-Jan, the window should only include 02-Jan through 01-Feb.
I could of course recalculate the period length back to a number of days for each row, but that is not really what I would prefer, as it would make the code rather unreadable.
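Since a frame bound of "one month minus one day" can't be written as a single INTERVAL literal, one alternative is a correlated scalar subquery instead of a window frame. A sketch, with STARTDATE/AMOUNT as assumed column names (the CREATE TABLE above is truncated):

select w.id, w.startdate, w.amount,
       (select sum(w2.amount)
        from   window_tst w2
        where  w2.startdate >= add_months(w.startdate, -1) + 1
        and    w2.startdate <= w.startdate) as month_sum
from   window_tst w;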
I need to calculate the average salary for the three days prior, excluding the current row. That should happen for every row, moving backwards. I have given all the details.
create table Employee(
  ID VARCHAR2(4 BYTE) NOT NULL,
  name varchar(20),
  Start_Date DATE,
  Salary Number(8,2),
  mv_avg number(8,2)
[code]....
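A sketch of the requested moving average, using the columns from the CREATE TABLE above: the frame ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING averages the three prior rows and excludes the current one.

select id, name, start_date, salary,
       avg(salary) over (partition by id
                         order by start_date
                         rows between 3 preceding and 1 preceding) as mv_avg
from   employee;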
I want to use an analytic function instead of the GROUP BY clause for the query below:
select CASE WHEN ADMT.SOURCESYSTEM = 'CLU'
            THEN COUNT(ADMT.TOTAL_COUNT) * 5
            ELSE COUNT(ADMT.TOTAL_COUNT)
       END TOTAL_COUNT
from   ESMARTABC.ABC_DRVR_MFAILS_TMP ADMT
group by ADMT.SOURCESYSTEM
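A sketch of the analytic rewrite: COUNT ... OVER keeps one row per input row, so a DISTINCT is needed to collapse back to one row per source system (note this version also exposes SOURCESYSTEM in the output):

select distinct admt.sourcesystem,
       case when admt.sourcesystem = 'CLU'
            then count(admt.total_count)
                   over (partition by admt.sourcesystem) * 5
            else count(admt.total_count)
                   over (partition by admt.sourcesystem)
       end total_count
from   esmartabc.abc_drvr_mfails_tmp admt;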
My table has two date columns: EFF_DT, the start date, and TERM_DT, the end date. The EFF_DT of the next record should be the day after the previous record's TERM_DT.
For the fourth record, the effective date should be 1-Oct-13, the day after the last TERM_DT of 30-Sep-13. As there is a break in the dates, the output should show 15-Oct-13 as the second start date.
Note: referring to the PI_ID column, there is a break in the dates for the same PI_ID 'ABC'.
Here I am trying to generate a pseudo column so that the table looks as shown below; I can then use FIRST_VALUE and LAST_VALUE, partitioning on the pseudo column, to get the desired output.
1) CNT_VAL is the pseudo column:
CK_ID  PI_ID  EFF_DT     TERM_DT    CNT_VAL
Mem1   ABC    1-Jan-13   31-Mar-13  1
Mem1   ABC    1-Apr-13   31-May-13  1
Mem1   ABC    1-Jun-13   30-Sep-13  1
[code].....
My query:
I am not getting the desired output here, as the value in the pseudo column is 3.
select CK_ID, PI_ID, EFF_DT, TERM_DT,
       (case when case_CONT - LAG(case_CONT,1) over (order by EFF_DT) = 0
             then to_char(case_CONT)
             when case_CONT - LAG(case_CONT,1) over (order by EFF_DT) <> 0
             then to_char(LAG(case_CONT,1) over (order by EFF_DT) + 1)
             else to_char(nvl(case_CONT,0))
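A sketch of the usual "start of group" pattern for this: flag a row as a new island when its EFF_DT is not the day after the previous TERM_DT, then turn the flags into a group id with a running sum (the table name MEMBER_PERIODS is hypothetical):

select ck_id, pi_id, eff_dt, term_dt,
       sum(brk) over (partition by ck_id, pi_id order by eff_dt) as cnt_val
from  (select t.*,
              case when eff_dt = lag(term_dt) over (partition by ck_id, pi_id
                                                    order by eff_dt) + 1
                   then 0
                   else 1
              end as brk
       from   member_periods t);

FIRST_VALUE and LAST_VALUE can then partition by (CK_ID, PI_ID, CNT_VAL) to get the begin and end dates of each island.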
We've got a query which returns one row but uses an IN clause. The IN subquery matches more than one row. When we use a combination of DISTINCT and an analytic SUM, the sum total is multiplied by the number of rows in the subquery. Remove the DISTINCT and we get a single value.
A simplified example of the problem is below.
I can't see how a query which returns a single row then returns multiple values with the addition of a DISTINCT. Removing the analytical sum also provides a single row, but we need this in the actual query we're running. So it seems to be some combination of DISTINCT, ANALYTICAL SUM and IN query is causing multiple values to be returned.
CREATE TABLE go_test_distinct1 (gtd_value NUMBER);
-- Three identical values
-- To replicate the three identical values returned by
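The usual explanation is evaluation order: the analytic SUM is computed over all rows returned by the query block, and only afterwards does DISTINCT collapse the duplicates, so each duplicate row contributes to the total. Taking the DISTINCT in an inline view first avoids the multiplication; a sketch against the table above:

select gtd_value,
       sum(gtd_value) over () as total_sum
from  (select distinct gtd_value
       from   go_test_distinct1);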
How can I delete duplicate records from a table without using ROWID? I found the duplicate rows using an analytic function, but I could not use the analytic function in the WHERE condition.
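Analytic functions indeed can't appear directly in WHERE; they have to be ranked in a subquery first. Without ROWID, the usual route is to rebuild the table from the ranked rows. A sketch with hypothetical table/column names:

create table emp_dedup as
select id, name, salary
from  (select t.*,
              row_number() over (partition by id, name, salary
                                 order by id) as rn
       from   emp_dup t)
where  rn = 1;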
I am populating 3 records in my data block and do not want more than 3 rows, but when I click the plus icon in the Forms menu, one more row gets created, which I do not want. How can I restrict new-record addition via the plus icon in the menu bar for a particular block?
In 11g, is there a way I could limit the number of open application actions by a given OS user? We have an application where users execute the same thing again while the last run is not yet finished, so we have several instances of the same thing running at the same time, started by the same user.
Can we restrict that somehow through the database, or does that need to be done in the application?
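If the limit can be expressed per database user, a profile is the built-in route; a per-OS-user limit would instead need something like a logon trigger counting matching rows in V$SESSION. A sketch of the profile approach (the user name is hypothetical):

create profile one_session limit sessions_per_user 1;
alter user app_user profile one_session;   -- hypothetical application user
alter system set resource_limit = true;    -- profile resource limits are only enforced when this is TRUE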
I have a table with 5 lakh (500,000) transactions. I want to fetch the last balance for a particular date, so I have written a query like the one below.
SELECT curr_balance FROM transaction_details WHERE acct_num = '10'
[Code]...
This has to be executed 12 times, incrementing through the months, to find the last balance for each month. But this query has a high CPU cost, and running it 12 times takes a huge amount of time. How can I rewrite the query to get the results faster using an analytic approach? Or can it be broken into two parts in PL/SQL to achieve the performance?
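A single-pass sketch: group by month and use KEEP (DENSE_RANK LAST) to pick the balance of the latest transaction in each month. TXN_DATE is an assumed name for the transaction-date column:

select trunc(txn_date, 'mm') as txn_month,
       max(curr_balance) keep (dense_rank last order by txn_date) as last_balance
from   transaction_details
where  acct_num = '10'
group by trunc(txn_date, 'mm');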
I would like to use character functions (LPAD, INSTR, SUBSTR, etc.) to accomplish what I feel should be rather easy. I would like to take the following character strings: