SQL> ed
Wrote file afiedt.buf

  1  select *
  2  from (select deptno, ename, sal
  3        ,dense_rank() over (partition by deptno order by sal desc) rank
  4        from emp)
[code]...
Why is it that I just added ename to the ORDER BY part of the DENSE_RANK, and then:

SQL> ed
Wrote file afiedt.buf

  1  select *
  2  from (select deptno, ename, sal
  3        ,dense_rank() over (partition by deptno order by sal desc, ename) rank
  4        from emp)
[code]...
ADAMS and WARD were removed from the result. Why is that? Did it rank them as unique per sal and ename?
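For reference, a side-by-side sketch of the two rankings on the standard scott.emp table shows the tie-break effect (the outer filter from the original post is elided above, so none is applied here):

select deptno, ename, sal,
       dense_rank() over (partition by deptno order by sal desc)        rnk_sal,
       dense_rank() over (partition by deptno order by sal desc, ename) rnk_sal_ename
from   emp
order  by deptno, sal desc, ename;

With ORDER BY sal DESC alone, rows tied on sal share one dense rank. Adding ename breaks every tie, so each row gets its own rank and rows after a former tie are pushed to higher rank numbers; whatever outer filter of the form "rank <= n" sits in the elided code will then keep a different set of rows, which is why ADAMS and WARD can drop out.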
How can I rewrite this without the analytic functions?
SELECT employee_id, first_name, salary,
       RANK() OVER (ORDER BY salary DESC) toprank_desc,
       RANK() OVER (ORDER BY salary ASC)  toprank_asc
FROM   employees
ORDER  BY first_name
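One way to express the same ranks without analytic functions is a correlated count: a row's rank is one more than the number of rows with a strictly better salary. A sketch against the same employees table:

SELECT e.employee_id, e.first_name, e.salary,
       (SELECT COUNT(*) + 1 FROM employees x WHERE x.salary > e.salary) toprank_desc,
       (SELECT COUNT(*) + 1 FROM employees x WHERE x.salary < e.salary) toprank_asc
FROM   employees e
ORDER  BY e.first_name;

This reproduces RANK() semantics (ties share a rank and leave gaps), but it scans employees once per row, so expect it to be considerably slower than the analytic version on anything but a small table.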
I have a table (events) with this structure: customer_id, event_id, ... For each customer_id there can be several rows in the table. I need to run a query of the format: select customer_id, expensive_function(customer_id),... from events.
The expensive_function to be applied to customer_id in the query is really expensive (a Java class calculating a checksum) and the events table has billions of rows.
Rows in the events table share the same customer_id for a few rows, then continue with a different customer_id, then come back to the first again, etc.
I was thinking that there should be a way to trigger calculation of expensive_function only when customer_id changes, in order to reduce the number of calls. But my knowledge of SQL does not go that far, and I cannot use PL/SQL or any other procedural language; I need to stick to standard SQL (or the Oracle version of it).
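One trick that fits this case is scalar subquery caching: wrapping the call in a scalar subquery lets Oracle cache the result per distinct input value, so the function fires roughly once per customer_id rather than once per row. A sketch (the cust_checksum alias is mine):

SELECT customer_id,
       (SELECT expensive_function(customer_id) FROM dual) AS cust_checksum,
       event_id   -- plus the other columns elided in the post
FROM   events;

The cache is hash-based and finite, so with many interleaved customer_ids some recomputation still happens, but on data like this (runs of equal customer_id) the reduction in calls is usually dramatic. If the function is deterministic, declaring it DETERMINISTIC helps as well.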
I am building a reporting table using the COUNT analytic function in order to count up several different attributes in one statement. What I find is that this method quickly eats up my TEMP space. This is 10gR2. I have attempted to use the MANUAL workarea policy with as large a sort_area_size as possible (2G), but that does not seem to have any effect on performance or TEMP usage. The RAW table is about 12G with 75 million rows. I am not that concerned about execution time, but rather TEMP usage.
--INSERT into <object>...
select distinct file_sid, filename, control_numb, processing_date, file_class,
       vendor_id, vendor_desc, c_status_id, c_status_desc,
[code]...
I am not seeing any increase in onepass or multipass executions on the PGA during execution of this statement using...
SELECT CASE
         WHEN low_optimal_size < 1024*1024
         THEN to_char(low_optimal_size/1024,'999999') || 'kb <= PGA < ' || (high_optimal_size+1)/1024 || 'kb'
         ELSE to_char(low_optimal_size/1024/1024,'999999') ||
[code]...
I'd like to get a better explanation of how analytics use the instance resources and TEMP space. For example, if I add a count with a different window (such as the last two columns commented out in the above query) I blow out my TEMP space (70G). Is the critical factor the use of DISTINCT? Or multiple windows? Or something else?
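As a general rule, each distinct window specification (each different PARTITION BY / ORDER BY combination) typically adds its own WINDOW SORT step to the plan, and each of those sorts can spill to TEMP independently; a DISTINCT then adds one more sort on top. A minimal illustration, reusing column names from the insert above (the table name is an assumption):

-- Two different windows: usually two separate WINDOW SORT operations,
-- each able to spill to TEMP on a large row source.
SELECT file_sid,
       COUNT(*) OVER (PARTITION BY vendor_id)               cnt_by_vendor,
       COUNT(*) OVER (PARTITION BY c_status_id, file_class) cnt_by_status
FROM   raw_table;

So adding a count with a new window is not incremental; it is another full sort of the 12G row source, which is consistent with the 70G TEMP blowout described above.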
I need to return an ordered list of documents. The documents may belong to a set id (optional) and, if so, are either a "master" or a "duplicate" type. For each set there can be only one master but many duplicates. My goal is to group all the sets together such that each master is followed by its duplicates.
There's also a documents table containing the documentid and, among other things, a page_count. In the following example I want to sort the documents first by page count while preserving the master/dupe grouping. Any documents which don't belong to a set, or are just a duplicate without a master, I want at the end of my set, but also ordered by page count.
Here's an example set that I would want to order by:
As you can see I have a little bit of everything here. Docs 2001 and 2002 are the typical set of one master and its duplicate. Docs 2010, 2011, and 2012 are the same, just a set of 3. Doc 2004 is a master but without any duplicates. Docs 2003 and 2014 are duplicates without a master (these docs have a master in the table, but that doc isn't in the set I need to order by). Docs 2008 and 2009 do not belong to a set and as such do not have a master/dupe type.
The result i'm looking to achieve will be ordered as follows:
As I said above, I first want to get the groupings of master/dupes and order ascending on the master's page count. For each master I then want to order its duplicates by page count. After I have finished ordering all the master/dupe groups, I then want to move on to the rest of the documents, which will contain documents that don't belong to a set along with documents which are duplicates but have no master in my set. However, documents which are masters but without duplicates should be ordered along with the other master/dupe groupings.
With all this in mind I have just been completely overwhelmed as to where to even start. Am I using analytic functions? Hierarchical stuff?
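Analytic functions can do this without hierarchical queries: give every row a set-level sort key taken from its master's page count, order whole groups by that key, and push master-less rows to the end. A sketch under assumed names (a doc_sets table with set_id and a doc_type of 'M'/'D'; documents, documentid and page_count are from the post):

SELECT documentid, page_count
FROM  (SELECT d.documentid, d.page_count, s.set_id, s.doc_type,
              MAX(CASE WHEN s.doc_type = 'M' THEN 1 ELSE 0 END)
                  OVER (PARTITION BY s.set_id)                  AS has_master,
              MAX(CASE WHEN s.doc_type = 'M' THEN d.page_count END)
                  OVER (PARTITION BY s.set_id)                  AS master_pages
       FROM   documents d
              LEFT JOIN doc_sets s ON s.documentid = d.documentid)
ORDER  BY
       CASE WHEN set_id IS NOT NULL AND has_master = 1
            THEN 0 ELSE 1 END,                    -- sets with a master first
       master_pages,                              -- groups by the master's page count
       CASE WHEN has_master = 1 THEN set_id END,  -- keep each group's rows together
       CASE WHEN doc_type = 'M' THEN 0 ELSE 1 END,-- master before its duplicates
       page_count;                                -- dupes (and leftovers) by page count

Masters without duplicates fall out naturally: they form a one-row group that still sorts by its own page count among the other master groups, while orphans and master-less duplicates land at the end ordered purely by page count.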
I have a question regarding analytic functions. I've been working with some functions, but I can't achieve the one which gives me the intended result. I know how to solve this without an analytic function, using an inner select, but I think the analytic function is faster and more appropriate.
Can I rewrite the following query without using the ROW_NUMBER() OVER part? The query is supposed to pull out the records whose CODE is not NULL and have the most recent date for UPDATE_DATE. The reason I want to do this is that when I embed this query among many other queries with JOINs, my Oracle server is unable to execute it. So I thought it would be better to replace the ROW_NUMBER() OVER logic with something else and try it.
CURRENT QUERY:

SELECT a.*
FROM  (SELECT b.*,
              ROW_NUMBER() OVER (PARTITION BY b.PIDM ORDER BY b.UPDATE_DATE DESC) AS Rno
       FROM  (SELECT *
              FROM   SHYNCRO
              WHERE  CODE IS NOT NULL) b) a
WHERE a.Rno = 1
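A common ROW_NUMBER-free rewrite keeps only the rows whose UPDATE_DATE equals the per-PIDM maximum, via a correlated subquery. A sketch:

SELECT b.*
FROM   SHYNCRO b
WHERE  b.CODE IS NOT NULL
AND    b.UPDATE_DATE = (SELECT MAX(b2.UPDATE_DATE)
                        FROM   SHYNCRO b2
                        WHERE  b2.PIDM = b.PIDM
                        AND    b2.CODE IS NOT NULL);

One behavioral difference to note: if two rows share a PIDM and the same maximum UPDATE_DATE, the ROW_NUMBER version returns exactly one of them arbitrarily, while this version returns both.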
SQL Error: ORA-30353: expression not supported for query rewrite
30353. 00000 - "expression not supported for query rewrite"
*Cause: The select clause referenced UID, USER, ROWNUM, SYSDATE, CURRENT_TIMESTAMP, MAXVALUE, a sequence number, a bind variable, a correlation variable, a set result, a trigger return variable, a parallel table queue column, a collection iterator, etc.
I want to create a materialized view for the last 10 days with the enable query rewrite option.
E.g. I want to create a view with the list of employees who joined the company in the last 10 days.
create materialized view M_Employee
  refresh fast on commit
  enable query rewrite
as
select joining_date, name
from   employee
where  joining_date >= TRUNC(sysdate) - 10
I seem to get the error:

SQL Error: ORA-30353: expression not supported for query rewrite
30353. 00000 - "expression not supported for query rewrite"
*Cause: The select clause referenced UID, USER, ROWNUM, SYSDATE.
This error is self-explanatory, but is there any workaround to have a query like this that lists the employees based on SYSDATE?
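The usual workaround is to keep SYSDATE out of the materialized view entirely and apply the moving-window filter at query time; query rewrite can still use the MV. A sketch:

-- MV without SYSDATE (fast refresh also needs a materialized view log on employee)
CREATE MATERIALIZED VIEW m_employee
  REFRESH FAST ON COMMIT
  ENABLE QUERY REWRITE
AS
SELECT joining_date, name
FROM   employee;

-- The query carries the moving window itself and remains rewrite-eligible:
SELECT joining_date, name
FROM   employee
WHERE  joining_date >= TRUNC(SYSDATE) - 10;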
update t_emp
set    TTL_FLG = CASE
                   WHEN EXISTS (SELECT 1
                                FROM   Schema1.T_STG_LW_EMP E
                                WHERE  E.Employee = Schema2.T_emp.EMPLOYEE_NUMBER
                                -- parentheses added: AND binds tighter than OR, so
                                -- without them the employee match applied only to
                                -- the first LIKE
                                AND   (   E.JB_CODE like '%TP%'
                                       OR E.JB_CODE like '%DGD%'
                                       OR E.JB_CODE like '%PDD%'
                                       OR E.JB_CODE like '%YND%'))
                   THEN 'Y'
                   ELSE 'N'
                 END;
When one enters the form, he can look up records either on Companyname or on Projectname. Therefore I have provided two buttons which pop up a LOV. After either one is selected, the query has to be executed. There is a master-detail relationship between Company and Project.
My PL/SQL for the company button:
declare
  v_show_lov boolean;
begin
  enter_query;
  v_show_lov := show_lov('LOVFIRMA');
  if not v_show_lov then
[code]....
PL/SQL for the project button:
declare
  v_show_lov  boolean;
  v_get_value number;
begin
  v_show_lov := show_lov('LOVPROJECT');
  if not v_show_lov then
[code]....
The first button only works when I manually go into query mode first (by pressing F11). So I reckon my enter_query doesn't work? The property 'Fire in Enter-Query Mode' is set to Yes.
When I press it in non-query mode, it just fills in the LOV values and the CCODE from Company. It doesn't execute the query (probably because there is no enter_query).
When I enter query mode, the focus changes automatically to Company, and the LOV doesn't appear.
I have tried placing the enter_query in different places, as well as the go_block and clear_block calls, but there is always something wrong.
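For what it's worth, one common pattern avoids ENTER_QUERY altogether: take the LOV value in normal mode, push it into the block's WHERE clause, and call EXECUTE_QUERY. A sketch; the block, item and return-item names here are assumptions:

-- WHEN-BUTTON-PRESSED (sketch)
DECLARE
  v_show_lov BOOLEAN;
BEGIN
  GO_BLOCK('COMPANY');
  CLEAR_BLOCK(NO_VALIDATE);
  v_show_lov := SHOW_LOV('LOVFIRMA');   -- assumes the LOV returns into :COMPANY.CCODE
  IF v_show_lov THEN
    SET_BLOCK_PROPERTY('COMPANY', DEFAULT_WHERE,
                       'CCODE = ' || :COMPANY.CCODE);  -- numeric CCODE assumed; quote it if character
    EXECUTE_QUERY;
  END IF;
END;

This sidesteps the underlying problem that ENTER_QUERY suspends trigger execution until query mode ends, so any code placed after it never runs when you expect it to.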
I need to rewrite this SQL, which is in the ANSI-92 standard, to the ANSI-89 standard.
SELECT "PROJECT"."X_SAMPLED_DATE", SAMPLE"."SAMPLE_NUMBER" FROM "SHIRE_PRD"."LimsUser"."SAMPLE" "SAMPLE" INNER JOIN "SHIRE_PRD"."LimsUser"."PROJECT" "PROJECT" ON"SAMPLE"."PROJECT"="PROJECT"."NAME" WHERE ("SAMPLE"."SAMPLE_TYPE"='EM' OR "SAMPLE"."SAMPLE_TYPE"='WATER') AND "SAMPLE"."STATUS"<>'X' AND("PROJECT"."X_SAMPLED_DATE">={ts '2011-05-01 00:00:00'} AND "PROJECT"."X_SAMPLED_DATE"<{ts '2011-06-01 00:00:00'}) ORDER BY "SAMPLE"."PRODUCT"
How do I write this MSSQL statement so it works in Oracle?
update b1
set    b1.b1_app_status = r3.application_status
from   conv_app_status_update a, statyp r3, b1perm b1
where  a.spc          = r3.serv_code
and    a.task_des     = r3.r3_act_type_des
and    a.task_status  = r3.r3_act_stat_des
and    a.process_code = r3.r3_process_code
and    r3.application_status is not null
and    a.spc = b1.serv_code
and    a.id1 = b1.id1
and    a.id2 = b1.id2
and    a.id3 = b1.id3
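Oracle has no UPDATE ... FROM; the usual translations are a correlated UPDATE or, more readably here, a MERGE. A sketch (untested; it assumes the join identifies each b1perm row at most once, which MERGE requires):

MERGE INTO b1perm b1
USING (SELECT a.spc, a.id1, a.id2, a.id3, r3.application_status
       FROM   conv_app_status_update a
              JOIN statyp r3
                ON  a.spc          = r3.serv_code
                AND a.task_des     = r3.r3_act_type_des
                AND a.task_status  = r3.r3_act_stat_des
                AND a.process_code = r3.r3_process_code
       WHERE  r3.application_status IS NOT NULL) src
ON (    b1.serv_code = src.spc
    AND b1.id1 = src.id1
    AND b1.id2 = src.id2
    AND b1.id3 = src.id3)
WHEN MATCHED THEN
  UPDATE SET b1.b1_app_status = src.application_status;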
I can't use a sequence in the GROUP BY function, and even if I find an equivalent analytic for the above GROUP BY, I still can't write ROW_NUMBER, as the ORDER BY gives detail records.
I don't want to wrap this select inside another select.
"representant" acct_id per group (about 300 rows total)acct_repres as( select distinct acct_id, origin_id, acct_parm_id from ( select a.* , source_id , dense_rank() over (partition by source_id origin_id order by acct_nbr nulls first, acct_id) as odr from account a join account_parm ap on (a.parm_id = ap.acct_parm_id) ) where odr = 1)select col1 , col2 , ( select accct_id from acct_repres ar where ar.acct_parm_id = t2.acct_parm_id) col3 , ( select count(1) from acct_repres) col4from some_table t1join other_table t2 on (....)
And here it comes.
The "acct_repres" subquery returns more than 300 rows when executed separately. But when used in CTE sometimes (depending on execution plan) it seems to have only one row - the value in the column col4 is "1",while value for col3 is NULL for most of the cases. It looks like the the dense_rank function and the condition "where odr =1" are evaluated at the very end.
When I used the MATERIALIZE hint the result was the same. But when I put the result of acct_repres into a dedicated table and used that table instead of the CTE, the output was correct.
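A known way to force the subquery to be evaluated as written, when the MATERIALIZE hint doesn't help, is to reference ROWNUM inside the inline view; ROWNUM blocks view merging and predicate pushdown. A sketch on the query above:

with acct_repres as (
  select distinct acct_id, origin_id, acct_parm_id
  from (select a.*, source_id,
               rownum rn,   -- referencing rownum prevents view merging/pushdown
               dense_rank() over (partition by source_id, origin_id
                                  order by acct_nbr nulls first, acct_id) as odr
        from account a
        join account_parm ap on (a.parm_id = ap.acct_parm_id))
  where odr = 1
)
-- ... main query unchanged ...

If that doesn't help either, materializing into a dedicated (for example, global temporary) table, as described above, remains the reliable fallback.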
I am creating a "time aware" (DAY, WEEK, MONTH, QUARTER, and YEAR) dimension using Analytic Workspace Manager.
Let me give you some background. I'm coming from a traditional "Oracle Express" OLAP background where all our data is stored in cubes, and these are defined, populated and operated on using OLAP DML; there are no SQL or traditional relational tables involved.
I now want to pull data from relational tables into some OLAP cubes and am using Analytic Workspace Manager to do this (maybe this is not the best way?)
Let me explain what I'm trying to achieve. In an OLAP Worksheet I can type the following DML commands:
DEFINE MY_DAY DIMENSION DAY
MAINTAIN MY_DAY ADD TODAY '01JAN2011'
What this will do is create a "day dimension" and populate it with values for each and every day between 1st Jan 2011 and today. It will be fully "time aware", and thus you can use date functions such as DAYOF to limit the MY_DAY dimension to all the Fridays, etc. Similarly, if I define a "month dimension" there will be an automatic implicit relationship between these two dimensions; this relationship and time-aware cleverness is built into Oracle.
However, a dimension defined using DML commands (and indeed any object created using DML) is not visible in Analytic Workspace Manager (as there is no metadata for it?), and for the life of me I cannot work out how to create such a dimension using AWM. If I create a "Time Dimension" then, as far as I can tell, this is not a proper time dimension but merely a text dimension, and I presume I have to teach it time awareness.
I have no issues creating and populating cubes from relational tables using Analytic Workspace Manager; the only issue I have is creating a "proper" time-aware dimension.
I have attempted to use an analytic function to keep a running total of the count of active calls, based on the connect and disconnect times given in each record row.
I have the following query with an analytic function, but wrong results in the last column, COUNT.
1) I am getting the output ordered by the b.sequence_no column. This is a must.
2) COUNT column:
I don't want the total count based on the THOR column, hence there is no point in grouping by that column. The actual requirement to achieve COUNT is:
2a) If the THOR and LOC combination changes to a new value in the next row, then COUNT = 1 (in other words, if the row is different from the following row).
2b) If the values of THOR and LOC repeat in the following rows, then the count should be the total of all those same-value rows, until the rows become different. (In this 2b case, where the rows are the same, I also want to show these same rows only once. This is shown in the required output below.)
My present query:

select r.name REGION, p.name PT, do.name DELOFF, ro.name ROUTE,
[code]...
My incorrect output [part of the data]:

REGION  PT    DELOFF    ROUTE         THOR       LOC  SEQ  COUNT
NAAS    NAAS  MAYNOOTH  MAYNOOTHR010  DUBLINRD   CEL  1    1
NAAS    NAAS  MAYNOOTH  MAYNOOTHR010  NEWTOWNRD  CEL  2    1
[code]...
My required output [part of the data]:

REGION  PT    DELOFF    ROUTE         THOR       LOC  COUNT
NAAS    NAAS  MAYNOOTH  MAYNOOTHR010  DUBLINRD   CEL  1
NAAS    NAAS  MAYNOOTH  MAYNOOTHR010  NEWTOWNRD  CEL  1
NAAS    NAAS  MAYNOOTH  MAYNOOTHR010  PRIMHILL   CEL  1
[code]...
NOTE: Where the count is 1, it is coming out correctly. But where there are identical rows and I want to take the total count of them, I am not getting it.
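The usual tool for "count runs of consecutive identical values" is the Tabibitosan method: the difference between two ROW_NUMBERs is constant within each run, so it can serve as a group key. A sketch against the post's columns (the base row source is elided above, so the inline view and its name are assumptions; sequence_no is assumed to order the rows):

SELECT region, pt, deloff, route, thor, loc,
       COUNT(*) AS cnt                     -- one row per run with its total
FROM  (SELECT t.*,
              ROW_NUMBER() OVER (ORDER BY sequence_no)
            - ROW_NUMBER() OVER (PARTITION BY thor, loc ORDER BY sequence_no) AS grp
       FROM   route_points t)
GROUP  BY region, pt, deloff, route, thor, loc, grp
ORDER  BY MIN(sequence_no);

Collapsing on the run key grp keeps each repeated (THOR, LOC) stretch as a single output row carrying its total count, while ORDER BY MIN(sequence_no) preserves the b.sequence_no ordering the post requires.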
Oracle 11g database.

id  val1  val2
1   0     a
1   1     b
1   2     c
2   0     a
2   2     b

WITH input AS (SELECT 1 id
[Code].....
input;

Output:

id  val1  val2  assigned_number
1   0     a     0
1   1     b     0
1   2     c     2
2   0     a     0
2   2     b     1

The dense numbering sequence should be assigned to each row based on the id and val1 columns. For a given id, the numbering only starts once val1 > 1; till then the assigned_number will be zero.
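Assuming the sample reads as reconstructed above, one query that reproduces the expected output ranks per id and clamps the result to zero while val1 <= 1. A sketch (the -1 offset matches this particular sample; the garbled original leaves the exact rule open to interpretation):

WITH input AS (
  SELECT 1 id, 0 val1, 'a' val2 FROM dual UNION ALL
  SELECT 1, 1, 'b' FROM dual UNION ALL
  SELECT 1, 2, 'c' FROM dual UNION ALL
  SELECT 2, 0, 'a' FROM dual UNION ALL
  SELECT 2, 2, 'b' FROM dual
)
SELECT id, val1, val2,
       -- zero until val1 exceeds 1, then the per-id dense rank shifted down by 1
       CASE WHEN val1 <= 1 THEN 0
            ELSE DENSE_RANK() OVER (PARTITION BY id ORDER BY val1) - 1
       END AS assigned_number
FROM   input
ORDER  BY id, val1;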