I want to find consecutive non-increasing sequences of values (2nd column), ordered by sr. no (first column) in ascending order.
For example, in the 2nd column, 17 is followed by 0 and 0 and then 38, so the 3 consecutive values starting from 17 (i.e. 17, 0, 0) are non-increasing and they are ranked '1' in my desired third column, as shown below. Similarly, the 2nd non-increasing sequence starts with 38, 32, 24 and 12 and is ranked '2' in the third column. The same goes for rank '3' for the third non-increasing sequence. So basically I want the third column with "ranks" starting and ending as per the above logic. I tried using the LEAD function but didn't get what I want. I need the shortest possible query to get that third column, since I have other columns in the original table in a multiple GROUP BY query.
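One possible approach (hedged, and assuming a table t(sr_no, val) just for illustration): flag each row whose value rises above the previous one with LAG, then a running sum of those flags numbers the non-increasing runs.

SELECT sr_no,
       val,
       1 + SUM(CASE WHEN val > prev_val THEN 1 ELSE 0 END)
             OVER (ORDER BY sr_no) AS rnk        -- the desired third column
FROM  (SELECT sr_no,
              val,
              LAG(val) OVER (ORDER BY sr_no) AS prev_val
       FROM   t)
ORDER  BY sr_no;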
I want to do group ranking in the desired col3 in such a way that it checks for different values across consecutive rows in col2 and assigns a number to each group. When two consecutive rows in col2 have the same value, the group ends and the next group starts.
So my desired output is:
Col1 Col2 Col3
1    A    1
2    B    1
3    C    1
4    D    1
5    D    2
6    A    2
[code]...
Here you can see that the first four rows under col2 are unique, i.e. A, B, C, D, so col3 assigns them group number 1. The group ends at row 4 because row 5 also has value D under col2. So in other words, each group must have all unique values and there should not be any repetition. For example, see group 3 (under col3) in the desired output above: it starts at row 9 and ends at row 11 because row 12 also has value 'C', and the value 'C' has already occurred in group 3 at row 9.
I want to achieve this in SQL. I tried using DENSE_RANK but couldn't get it to work. I want the shortest possible query to achieve this.
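Because a group is only closed when a value it already contains comes around again, a plain analytic pass is not enough. A hedged sketch (assuming the table is t(col1, col2) and 11.2+ for recursive WITH): walk the rows in order and carry along the list of values seen in the current group.

WITH ordered AS (
  SELECT col1, col2, ROW_NUMBER() OVER (ORDER BY col1) AS rn
  FROM   t
),
grouped (rn, col2, grp, seen) AS (
  SELECT rn, col2, 1, ',' || col2 || ','
  FROM   ordered
  WHERE  rn = 1
  UNION ALL
  SELECT o.rn,
         o.col2,
         CASE WHEN INSTR(g.seen, ',' || o.col2 || ',') > 0
              THEN g.grp + 1 ELSE g.grp END,          -- value already seen: start a new group
         CASE WHEN INSTR(g.seen, ',' || o.col2 || ',') > 0
              THEN ',' || o.col2 || ','               -- reset the list of seen values
              ELSE g.seen || o.col2 || ',' END
  FROM   grouped g
         JOIN ordered o ON o.rn = g.rn + 1
)
SELECT rn AS col1, col2, grp AS col3
FROM   grouped
ORDER  BY rn;

It is not the shortest possible query, but it reproduces the desired output above.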
I want to update col1 where it is null to max(col1) plus a running increment, ordered by cr_date, like this:

1, 1,    20110102
2, 2,    20110101
3, null, 20110105  =>  3, 5, 20110105  (because this row comes after 20110103)
4, 3,    20110104
5, null, 20110103  =>  5, 4, 20110103  (because this row comes before 20110105)
update test_table set col1 = (select max(col1) from test_table) + rownum where col1 is null;
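The UPDATE above increments in whatever order the rows happen to be visited, not by cr_date. A hedged alternative, keeping the names from the post (test_table with columns col1 and cr_date): number the NULL rows by cr_date in an inline view and merge the result back.

MERGE INTO test_table t
USING (SELECT rowid AS rid,
              (SELECT MAX(col1) FROM test_table)
                + ROW_NUMBER() OVER (ORDER BY cr_date) AS new_col1
       FROM   test_table
       WHERE  col1 IS NULL) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN UPDATE SET t.col1 = s.new_col1;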
My scenario is to insert values into the 'out' column by comparing the 's' and 'ip' columns of the temp table. The exact situation is that I first need to go to the ip column and take a value, then go to the source column and check for that same ip value taken previously. The corresponding ip of that source row should then be inserted back for the previous source row.
The situation is marked clearly in the file I am attaching, with '--' comments at the respective places. I am also pasting the code I tried; unfortunately it gives the error "exact fetch returns more than requested number of rows", since there are duplicates in the table. I tried it using nested FOR loops, and also implemented it using ROWID, but it didn't work.
I am looking for a fix for these errors, or for any new logic that could be implemented instead.
DECLARE
  i_e NUMBER(10);
BEGIN
  FOR cur_1 IN (SELECT IP FROM temp WHERE IP IS NOT NULL) LOOP
    FOR cur_2 IN (SELECT IP FROM temp WHERE s = cur_1.IP)
From two given tables, how do you fetch values driven by one column: get the value from col A if col A is not null, and get the value from col B if col A is null?
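Once the two columns are in one row (for example via a join), NVL or COALESCE does exactly this. A minimal hedged sketch with placeholder table and column names:

SELECT a.id,
       NVL(a.col_a, b.col_b) AS picked_value   -- col_a when not null, otherwise col_b
FROM   table_a a
       JOIN table_b b ON b.id = a.id;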
I'm trying to rank USERNAME based on the total sum of the amount waived, but I want to avoid ranking the overall TOTAL row at the bottom. Also, I don't want the rows in ranking order; I want the order to stay the same as it currently is.
SELECT DECODE(GROUPING(USERNAME), 1, 'TOTAL', 0, UPPER(USERNAME)) AS "USERNAME",
       SUM(CASE WHEN TO_CHAR(DATE_PROCESSED, 'MON') = 'JAN' THEN AMOUNT_WAIVED ELSE 0 END) AS JAN,
       SUM(CASE WHEN TO_CHAR(DATE_PROCESSED, 'MON') = 'FEB' THEN AMOUNT_WAIVED ELSE 0 END) AS FEB,
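A hedged sketch of one way to add such a rank without disturbing the row order (assuming the query groups by ROLLUP(username) over something like waivers(username, amount_waived, date_processed)): partition the analytic by GROUPING(username) so the TOTAL row stays out of the detail ranking, and blank the rank out on that row.

SELECT DECODE(GROUPING(username), 1, 'TOTAL', UPPER(username)) AS username,
       SUM(amount_waived) AS total_waived,
       CASE WHEN GROUPING(username) = 0
            THEN RANK() OVER (PARTITION BY GROUPING(username)
                              ORDER BY SUM(amount_waived) DESC)
       END AS waived_rank                       -- NULL on the TOTAL row
FROM   waivers
GROUP  BY ROLLUP (username);

Because the rank is just an extra column, the rows keep whatever order the rest of the query produces.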
On my APEX page I have a region which has a SQL query as its source, and it displays the query result to the user as an HTML table.
I want to display an additional column with a hyperlink inside, and that hyperlink would carry CGI/URL parameters which contain the other values of the HTML row.
So, let's say my APEX region queries columns as "select c1, c2, c3, c4 ..." and displays the values "V1, V2, V3, V4"; then I want to have an additional output column with a hyperlink such as:
<a href="f?p=100:7:13467554876288::NO::c1,c2,c3,c4:v1,v2,v3,v4">My link column with CGI-parameters</a>

How can I create such a hyperlink?
The overall idea is that the link would forward to a page which loads those values v1, v2, v3, v4 into form fields, and the user can proceed from there.
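One common way (hedged; the page items P7_C1..P7_C4 and the table name are placeholders) is to build the anchor tag in the region query itself and have the column rendered as a standard report column so the HTML is not escaped:

SELECT c1, c2, c3, c4,
       '<a href="f?p=' || :APP_ID || ':7:' || :APP_SESSION
          || '::NO::P7_C1,P7_C2,P7_C3,P7_C4:'
          || c1 || ',' || c2 || ',' || c3 || ',' || c4
          || '">My link column with CGI-parameters</a>' AS link_col
FROM   my_table;

Values containing ':' or ',' would need escaping before being put into the URL.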
I want to pass multiple column values of a row in an interactive report page to hidden items on another page through a column link, and I did that successfully. However, I found I need to pass more than 3 columns of a row from this report, while a column link only permits me to pass 3 column values at most. Is there any way I can pass more column values to hidden items on another page?
I have a query find window that allows you to search on various attributes. I also have a radio button within the query find that allows you to show the results either as a single record or as multiple records. For example, consider the data below...
I have a View that joins the 2 tables together so for Record_Id = 1 the view returns 3 rows
I would like to have a query find window that allows you to search using Record_No and Line_Desc,
and which has a radio button to allow you to show the records either as a single line or as all detail lines. Therefore I would like the following selections:
1) Enter no search criteria and select the Single radio option: returns 1 record with the default line description of Line1
2) Enter no search criteria and select the Multiple radio option: returns all 3 records
3) Enter Line_Desc = Line1 with the Single radio option: brings back one record with Line_Desc = Line1
4) Enter Line_Desc = Line2 with the Single radio option: brings back one record with Line_Desc = Line2
5) Enter Line_Desc = Line3 with the Single radio option: brings back one record with Line_Desc = Line3
6) Enter Line_Desc = Line1 with the Multiple radio option: brings back one record with Line_Desc = Line1
7) Enter Line_Desc = Line2 with the Multiple radio option: brings back one record with Line_Desc = Line2
8) Enter Line_Desc = Line3 with the Multiple radio option: brings back one record with Line_Desc = Line3
I need the form to select from the view but then perform a ranking after it has selected the relevant data. The radio button would then use the ranking to select either one record or multiple records.
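A hedged sketch of what that ranking could look like in the view (column names are assumptions): the Single radio option would filter on line_rank = 1, while the Multiple option would leave it unfiltered.

SELECT record_id,
       record_no,
       line_desc,
       ROW_NUMBER() OVER (PARTITION BY record_id
                          ORDER BY line_desc) AS line_rank   -- Line1 ranks first
FROM   record_lines_view;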
INSERT INTO T VALUES (1,'JAMES');
INSERT INTO T VALUES (1,'DOLLY');
INSERT INTO T VALUES (2,'MICHEAL');
INSERT INTO T VALUES (2,'FLASH');
INSERT INTO T VALUES (3,'JAMES');
INSERT INTO T VALUES (3,'MARY');
INSERT INTO T VALUES (4,'JAMES');
INSERT INTO T VALUES (4,'DOLLY');
INSERT INTO T VALUES (5,'JAMES');
INSERT INTO T VALUES (5,'DOLLY');
INSERT INTO T VALUES (6,'JAMES');
INSERT INTO T VALUES (6,'MARY');
SELECT * FROM T ORDER BY 1
ID NAME
1  JAMES
1  DOLLY
2  MICHEAL
2  FLASH
3  JAMES
3  MARY
4  JAMES
4  DOLLY
5  JAMES
5  DOLLY
6  JAMES
6  MARY
Each 'ID' always has two values. I want to rank the data based on the same pair of names occurring within an 'ID'.
for example, my desired output is:
ID NAME    RANK
1  JAMES   1
1  DOLLY   1
2  MICHEAL 1
2  FLASH   1
3  JAMES   1
3  MARY    1
4  JAMES   2   --> rank 2 because this is the 2nd time JAMES and DOLLY are in the same 'ID'
4  DOLLY   2   --> same as above
5  JAMES   3   --> rank 3 because this is the 3rd time JAMES and DOLLY are in the same 'ID'
5  DOLLY   3   --> same as above
6  JAMES   2   --> rank 2 because this is the 2nd time JAMES and MARY are in the same 'ID'
6  MARY    2   --> same as above
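One hedged way to get this: build the name pair per ID with LISTAGG used as an analytic function (11.2+), then DENSE_RANK the IDs within each pair.

SELECT id,
       name,
       DENSE_RANK() OVER (PARTITION BY pair ORDER BY id) AS rnk
FROM  (SELECT id,
              name,
              LISTAGG(name, ',') WITHIN GROUP (ORDER BY name)
                OVER (PARTITION BY id) AS pair   -- e.g. 'DOLLY,JAMES'
       FROM   t)
ORDER  BY id, name;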
I'm looking for a script to partition the data into sections where the VALUE is the same over a constant period of time with no breaks. I'd like to give each partition a value to identify it by.
So the outcome of the script would be the following -
I was trying to do something with TRUNC(date_time), but that didn't work out right, as the blocks of data can carry over several days, as seen in the rows with IDENTIFIER = 8.
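A hedged sketch of the usual start-of-block trick (table and column names are assumptions): compare each row's VALUE with the previous row's and take a running sum of the change flags, which yields an identifier that only increments when the value changes, no matter how many days a block spans.

SELECT date_time,
       value,
       SUM(chg) OVER (ORDER BY date_time) AS identifier
FROM  (SELECT date_time,
              value,
              CASE WHEN value = LAG(value) OVER (ORDER BY date_time)
                   THEN 0 ELSE 1 END AS chg     -- 1 marks the start of a new block
       FROM   readings);

If a gap in date_time should also end a block, the same CASE can additionally compare date_time against LAG(date_time) plus the expected interval.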
I need to return an ordered list of documents. The documents may belong to a set id (optional) and, if so, are either a "master" or a "duplicate" type. For each set there can be only one master but many duplicates. My goal is to group the sets together such that each master is followed by its duplicates.
There's also a documents table containing the documentid and, among other things, a page_count. In the following example I want to sort the documents first by page count while preserving the master/duplicate grouping. Any documents which don't belong to a set, or are just a duplicate without a master, I want at the end of my set, but also ordered by page count.
Here's an example set that I would want to order by:
As you can see, I have a little bit of everything here. Docs 2001 and 2002 are the typical set of 1 master and its duplicate. Docs 2010, 2011 and 2012 are the same, just a set of 3. Doc 2004 is a master but without any duplicates. Docs 2003 and 2014 are duplicates without a master (these docs have a master in the table, but that doc isn't in the set I need to order by). Docs 2008 and 2009 do not belong to a set and as such do not have a master/duplicate type.
The result I'm looking to achieve would be ordered as follows:
As I said above, I first want to get the groupings of masters/duplicates and order ascending on the master's page count. Within each group I then want to order the duplicates by page count. After ordering all the master/duplicate groups, I want to move on to the rest of the documents, which will contain documents that don't belong to a set along with documents which are duplicates but have no master in my set. However, documents which are masters without duplicates should have been ordered along with the other master/duplicate groupings.
With all this in mind I have been completely overwhelmed as to where to even start. Should I be using analytic functions? Hierarchical queries?
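Analytic functions should be enough. A hedged sketch, assuming everything has been flattened into one source such as docs(documentid, set_id, doc_type, page_count) (the real data is split across tables, so this only shows the shape of the ORDER BY):

SELECT documentid, set_id, doc_type, page_count
FROM   docs
ORDER  BY
       /* sets that actually contain a master come first */
       CASE WHEN set_id IS NOT NULL
             AND MAX(CASE WHEN doc_type = 'MASTER' THEN 1 ELSE 0 END)
                   OVER (PARTITION BY set_id) = 1
            THEN 0 ELSE 1 END,
       /* order those groups by their master's page count */
       MAX(CASE WHEN doc_type = 'MASTER' THEN page_count END)
         OVER (PARTITION BY set_id),
       /* the master before its duplicates */
       CASE doc_type WHEN 'MASTER' THEN 0 ELSE 1 END,
       /* duplicates (and the leftover documents) by their own page count */
       page_count;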
In the example, client 123 appears from 2010/10/04 to 2010/10/08 (5 consecutive days), so this client must appear in the output. Customer 456 does not appear on at least 4 consecutive days, so it should not appear in the output.
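A hedged sketch (visits(client_id, visit_date) is a placeholder): collapse to one row per client per day, use the date-minus-ROW_NUMBER trick so consecutive days share the same group key, and keep only clients with a long enough run.

SELECT DISTINCT client_id
FROM  (SELECT client_id,
              visit_date,
              visit_date - ROW_NUMBER() OVER (PARTITION BY client_id
                                              ORDER BY visit_date) AS grp
       FROM  (SELECT DISTINCT client_id, TRUNC(visit_date) AS visit_date
              FROM   visits))
GROUP  BY client_id, grp
HAVING COUNT(*) >= 4;     -- "at least 4 consecutive days"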
Due to an ORA-08177 ("can't serialize access for this transaction") in our Java application, which uses serializable transactions, and after running some tests, we decided to increase INITRANS from 3 to 5 for our tables.
What would be the disadvantages of this measure? Space? I have done some tests with large tables with INITRANS 3 and 5, populated with large amounts of data, and the space occupied is the same in bytes and blocks. Performance? Something else?
AID UCD U_TXT         UDATE       PID
116 1   Req Documents 01-OCT-2011 100
116 2   AGG APPR      01-OCT-2011 101
116 3   Docs received 02-OCT-2011 102
116 1   Tmp received  02-OCT-2011 103
117 2   Notice sent   03-OCT-2011 104
UCD - we have 19 codes in total (1 to 19), and each can have multiple rows for one AID, like code 1 repeated twice for AID 116. PID - primary id (primary key column).
Output I am looking for:
AID COL1 COL1_TXT       COL1_DATE   COL2 COL2_TXT COL2_DATE ..ETC
116 1    'Tmp received' 02-OCT-2011 2    AGG APPR 01-OCT-2011
117 2    Notice sent    03-OCT-2011
If the same UCD is repeated multiple times, then we should get the max(PID) record for that UCD and that AID.
I tried GROUP BY AID, PID but couldn't bring the rows to columns. I have attached the script with the post.
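A hedged sketch of the pivot (only the first two codes are shown; the pattern repeats for UCD 3 to 19, and the table name is an assumption): keep the max(PID) row per AID and UCD with ROW_NUMBER, then pivot with conditional aggregates.

SELECT aid,
       MAX(CASE WHEN ucd = 1 THEN ucd   END) AS col1,
       MAX(CASE WHEN ucd = 1 THEN u_txt END) AS col1_txt,
       MAX(CASE WHEN ucd = 1 THEN udate END) AS col1_date,
       MAX(CASE WHEN ucd = 2 THEN ucd   END) AS col2,
       MAX(CASE WHEN ucd = 2 THEN u_txt END) AS col2_txt,
       MAX(CASE WHEN ucd = 2 THEN udate END) AS col2_date
       -- ...repeat for ucd = 3 .. 19
FROM  (SELECT aid, ucd, u_txt, udate,
              ROW_NUMBER() OVER (PARTITION BY aid, ucd
                                 ORDER BY pid DESC) AS rn
       FROM   my_table)
WHERE  rn = 1            -- keep only the max(PID) row per AID/UCD
GROUP  BY aid
ORDER  BY aid;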
I want to restrict the user if he/she enters any 3 consecutive sequence of numbers, characters, alphanumerics or special characters; for example aaa, aAa, @@@, ----, 111 and 123 are not valid.
Valid sequences are a1w, ?1A, aa1, WW2, 78a, -#a.
I want to show the invalid sequence in a single query using a regular expression function. For example, if the user enters aaa, $$$, 123 then the query output is aaa, $$$, 123.
I have written separate queries for the different cases, but I want a single query.
SELECT REGEXP_SUBSTR('EEE', '([a-z])\1\1', 1, 1, 'i') FROM DUAL;
SELECT REGEXP_SUBSTR('111', '([0-9])\1\1', 1, 1, 'i') FROM DUAL;
SELECT REGEXP_SUBSTR('@@@', '([^-$])\1\1', 1, 1, 'i') FROM DUAL;   -- this one does not check for the - (hyphen) character
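A hedged single pattern covering the repeated-character cases (any character, including the hyphen, repeated three times, case-insensitive so aAa also matches); note that a backreference alone cannot catch ascending runs such as 123, which would need separate handling.

SELECT REGEXP_SUBSTR(:input_string, '(.)\1\1', 1, 1, 'i') AS invalid_run
FROM   dual;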
$x being a range of non-consecutive values like so: 1,3,5-9,13,18,21 and so on...
I realize I can query using an array of operands and such, but these ranges will be upwards of 100 or more items. I want to minimize the number of queries I have to run and their length. Is there any resource you can point me to that can help optimize something like this?
I had created a primary key and wanted to compress it, as per my senior's instructions. Below are my results; the size increased after compression.
select compression from dba_indexes where index_name = 'TEST_IDX';

Compression
-----------
DISABLED

select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024) size_MB
[code]....
We ran a compression on the primary key index TEST_IDX
ALTER INDEX SCOTT.TEST_IDX REBUILD INITRANS 15 TABLESPACE DATA_01 COMPRESS;
ANALYZE INDEX SCOTT.TEST_IDX VALIDATE STRUCTURE;
Now when I ran the select statement below:
select compression from dba_indexes where index_name = 'TEST_IDX';

Compression
-----------
ENABLED

select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024) size_MB
[code]....
As you can see, after compression the number of blocks and the size increased. I ran the same steps for many other tables and indexes and observed that their blocks and size were reduced by 50-70%, so I am not sure why this happened with this index compression.
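One thing worth checking (a hedged pointer, not a definitive diagnosis): a single-column primary key index has no repeating key prefix to compress, so COMPRESS can add overhead instead of saving space. After the ANALYZE ... VALIDATE STRUCTURE above, INDEX_STATS reports what prefix compression would actually buy for that index:

SELECT name,
       height,
       blocks,
       opt_cmpr_count,     -- optimal number of prefix columns to compress
       opt_cmpr_pctsave    -- expected % space saving at that setting
FROM   index_stats;

If opt_cmpr_pctsave comes back as 0, rebuilding with COMPRESS is expected to grow the index rather than shrink it.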
SELECT DISTINCT a.emp_id,
                a.cal_id,
                TO_CHAR(a.ts_date, 'DD/MM/YYYY') tsdate,
                a.ts_date,
                1 AS days
FROM   tmsh_timesheet a
       INNER JOIN project b ON TO_CHAR(b.proj_id) = a.proj_id
       INNER JOIN tmsh_ts_calendar c ON c.cal_id = a.cal_id
       INNER JOIN (SELECT a.cal_id,
                          a.emp_id,
                          MAX(a.status) AS status,
                          a.create_dt,
                          a.create_by
                   FROM   tmsh_stat_hist a
11.2.0.2 on RHEL. 3 log groups with 1 member each; db_recovery_file_dest is /oracle/oraarch. For the purpose of increasing the log file size, if I use ALTER DATABASE ADD LOGFILE GROUP 1 SIZE 300M; it creates the log group with 2 members: one at the /oracle/oraarch location and the other at /oracle/oradata (db_create_file_dest).
We are using Oracle Managed Files. I want only 1 member, at /oracle/oraarch (to keep the previous setting intact, just increasing the size from 100M to 300M). If I manually give the path where the logfile member should be created, I get this error:

ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M
*
ERROR at line 1:
ORA-00301: error in adding log file '/oracle/oraarch/DB/onlinelog/' - file cannot be created
ORA-27038: created file already exists
Additional information: 1
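The ORA-27038 comes from naming a directory rather than a file. A hedged sketch of two alternatives (the group number and file name are placeholders): either point the OMF online-log destination at the single directory, or name an actual file explicitly.

-- Option 1: OMF creates one member per DB_CREATE_ONLINE_LOG_DEST_n that is set
ALTER SYSTEM SET db_create_online_log_dest_1 = '/oracle/oraarch' SCOPE = BOTH;
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 300M;

-- Option 2: give a full file name, not just the directory
ALTER DATABASE ADD LOGFILE GROUP 4 ('/oracle/oraarch/DB/onlinelog/redo04.log') SIZE 300M;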
version : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
I want to remove consecutive occurrences of a word from a string.
Example input: 'POWELL POWELL BRIAN K AND BONNIE POWELL JARRELL JARRELL'
Desired output: 'POWELL BRIAN K AND BONNIE POWELL JARRELL'
I tried the code below, which is working fine, but I wanted to do this using REGEXP or some other better method. WITH T
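A hedged REGEXP_REPLACE sketch: a backreference collapses an immediately repeated word, and the trailing space-or-end group keeps a word from being merged into a longer word that merely starts with it.

SELECT REGEXP_REPLACE(
         'POWELL POWELL BRIAN K AND BONNIE POWELL JARRELL JARRELL',
         '(\w+)( \1)+( |$)',
         '\1\3') AS collapsed
FROM   dual;
-- returns: POWELL BRIAN K AND BONNIE POWELL JARRELL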