I have a table of N records with the columns: Name, SeqNo, ID, Col4 ... ColX,
where Name and ID are non-unique, and SeqNo is a monotonic, non-consecutive sequence 0..N that is unique within ID. I'd like to generate the following 'groups': for each record where SeqNo = 0, sorted by Name, create the 'group' of records where ID is the same, ordered by SeqNo, irrespective of the
values of any of the other columns. For instance, if the table contained:
I got my desired results by brute-forcing via four sub-queries:
Sub-query 1 - Generate the sorted Names with SeqNo = 0
Sub-query 2 - Expand the above with the additional columns, maintaining the original order
Sub-query 3 - For each of the records from sub-query 2, generate the 'dependents' having the same ID and SeqNo != 0
Sub-query 4 - Expand the above with the additional columns, maintaining the original order of sub-query 1
Main query - Create a UNION of 2 and 4, sorting by the original order and SeqNo
I would like to know if there is a simpler approach - after all, this must be a fairly common issue when generating BOMs.
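For what it's worth, a single-query sketch along these lines may do the same thing: use an analytic function to attach the SeqNo = 0 Name to every row of its ID, then sort by that derived column. The table name bom_items is only a placeholder for illustration.

SELECT *
FROM  (SELECT t.*,
              MAX(CASE WHEN t.seqno = 0 THEN t.name END)
                OVER (PARTITION BY t.id) AS group_name   -- Name of the SeqNo = 0 'header' row for this ID
       FROM   bom_items t)                               -- hypothetical table name
ORDER  BY group_name, id, seqno;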
I am using Oracle 10.2.4.0. I have a requirement in which I have to divide a set of records into certain groups, so that they can be processed in parts rather than in one run.
So in the SELECT clause itself I want to assign a particular value (say 1) to the first 50,000 records, then another value (say 2) to the next 10,000, and so on. The total count of records will also vary from time to time; if the total count of the record set is less than 10,000, then it should assign '1' to all the records. I will store the group values (1, 2, 3, ...) as another column.
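As a sketch (the table name, ordering column, and 50,000-row chunk size are placeholders to be replaced with the real ones), ROW_NUMBER combined with CEIL assigns the chunk number directly in the SELECT; a result set smaller than one chunk automatically gets 1 for every row:

SELECT t.*,
       CEIL(ROW_NUMBER() OVER (ORDER BY t.some_key) / 50000) AS grp   -- 1 for rows 1-50000, 2 for the next chunk, ...
FROM   my_table t;                                                    -- hypothetical table and ordering column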
We have certain records like SQL, PL/SQL, Reports, Forms, OAF, etc. in a table. We want to capture a rating for each of these criteria, so we want a form to be displayed dynamically.
We have a new implementation which will be using ASM with RAID. The data area needs to be 3TB, and the recovery area, to be used for archive logs and RMAN backups, needs to be 1TB.
The configuration I'm thinking about now is:
+DATA diskgroup: 5*600G disks, using RAID 1+0 (this will include the control files)
+FRA diskgroup: 2*500G disks, using RAID 1
+LOG diskgroup: 2*1G disks, using RAID 1 (this is only for redo logs)
So here are my questions:
1. Am I right in supposing that we would get the best performance on the +FRA and +LOG diskgroups by not using RAID 1+0?
2. It has already been agreed to use RAID 1+0 for the +DATA diskgroup, but I can't see the added benefit of this. ASM will already stripe the data, so surely RAID 0 will just stripe it again (i.e. double-striping the data). Would it not be better just to mirror the data at the hardware level?
I am able to assign a user to a user group using the User Admin in Apex. I don't know how I would be able to assign a role to a user group (I know how to do that for an individual user). The only thing I can see is a name for the User Group and a Description! My requirement is to define a group of people assigned to one group/role, so that every change to that role is automatically applied to each user in that group.
I have 5 MViews that I want to refresh on two occasions: every Sunday and on the 1st of the month. I created a Refresh Group for the weekly refresh and that works fine. But when I tried to create the second Refresh Group for the monthly refresh, I got a "materialized view is already in a refresh group" error.
Can a materialized view only be in one refresh group? What options do I have to refresh it at different intervals?
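Yes - a materialized view can belong to only one refresh group. One workaround (a sketch; the MV names and job name are placeholders) is to keep the weekly refresh group and add a separate DBMS_SCHEDULER job that calls DBMS_MVIEW.REFRESH on the 1st of each month:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'MONTHLY_MV_REFRESH',                               -- placeholder job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV1,MV2,MV3,MV4,MV5''); END;',
    repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',                        -- run on the 1st of every month
    enabled         => TRUE);
END;
/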
We recently migrated a database from 9i to 10g (overdue, we know!!) and discovered that the default behavior of dbms_mview.refresh was turned upside down - meaning that 10g didn't first truncate the MV to refresh it. We're trying to unwind a lot of legacy issues, but it also turns out that we have 100 REFRESH GROUPs and 100 MATERIALIZED VIEWs. That means a 1 to 1 relationship between RGs and MVs: there is one MV defined in each RG.
These are my questions:
1) Does a 1 to 1 relationship between RGs and MVs make sense to anybody? The original implementors are gone and we can't fathom the reason for this.
2) Is there any reason why I shouldn't convert these 100 groups to 100 plain and simple MVs? I don't want the delete/insert refresh behavior of dbms_refresh.refresh, and I do want the truncate behavior of dbms_mview.refresh with atomic_refresh=FALSE for refreshing a standard MVIEW.
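If the goal is simply the truncate-style refresh, a direct call along these lines (the MV name is a placeholder) avoids dbms_refresh altogether:

BEGIN
  -- 'C' = complete refresh; atomic_refresh => FALSE truncates instead of doing delete/insert
  DBMS_MVIEW.REFRESH(list => 'MY_MV', method => 'C', atomic_refresh => FALSE);
END;
/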
I want to create a group xyz, add some users to the xyz group, and grant/revoke permissions to xyz, so that all the users in that group have the same permissions as the xyz group. That way, instead of granting the permissions to users individually, I can grant them all at once.
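In Oracle this is exactly what a role does. A minimal sketch (the object and user names are placeholders):

CREATE ROLE xyz;
GRANT SELECT, INSERT ON some_schema.some_table TO xyz;    -- grant privileges to the role
GRANT xyz TO user1, user2;                                -- members inherit the role's privileges
REVOKE INSERT ON some_schema.some_table FROM xyz;         -- a revoke on the role affects all members at once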
The thread title was a bit confusing; I couldn't come up with anything short to describe the question. What I am looking for is a query which will put records into groups based on matching values in one of two columns. So if two records have a matching value in column 1 or column 2, they are in the same group. See the example below and the expected output for a "better" explanation:
--setup
CREATE TABLE foo (
  foo_id        NUMBER NOT NULL PRIMARY KEY,
  record_number NUMBER,
  record_value  VARCHAR2(1)
);
[Code]...
--expected output
group#  foo_id  record_number  record_value
1       1       1              A
1       2       1              B
1       3       2              B
1       4       2              C
2       5       3              D
3       6       4              E
3       7       5              E
My initial thought is that it feels a little bit like the sequential-seats problem, but not quite close enough. I know it could be done iteratively with PL/SQL, but I am thinking there must be a way to do it in SQL that I am not seeing yet.
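One purely SQL idea (a sketch, not tuned for large tables): treat matching rows as edges, walk them with CONNECT BY NOCYCLE, and label each row with the smallest foo_id reachable from it, which identifies its connected group:

SELECT DENSE_RANK() OVER (ORDER BY grp_root) AS group#,
       foo_id, record_number, record_value
FROM  (SELECT foo_id, record_number, record_value,
              MIN(CONNECT_BY_ROOT foo_id) AS grp_root            -- smallest foo_id in the connected set
       FROM   foo
       CONNECT BY NOCYCLE
              (record_number = PRIOR record_number OR record_value = PRIOR record_value)
              AND foo_id <> PRIOR foo_id
       GROUP  BY foo_id, record_number, record_value)
ORDER  BY group#, foo_id;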
customers
custid  credit  amt   month
001     C       2000  Jan-2012
001     D       5000  Feb-2012
001     C       3000  Mar-2012
001     C       3000  Apr-2012
001     D       7000  May-2012
I have to write a single query to calculate the sum of the credit values and the sum of the debit values separately.
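Conditional aggregation handles this in one pass; a sketch assuming the flag column is named credit and holds 'C' and 'D':

SELECT custid,
       SUM(CASE WHEN credit = 'C' THEN amt ELSE 0 END) AS total_credit,
       SUM(CASE WHEN credit = 'D' THEN amt ELSE 0 END) AS total_debit
FROM   customers
GROUP  BY custid;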
I am new to PL/SQL and want to know how to create a trigger to compute the population of the school from the groups of students and store it back in the population column. It also needs to check that there is a minimum of 10 students in a school.
CREATE OR REPLACE TYPE group_type AS OBJECT (
  group_name VARCHAR2(20),
  tutor_id   NUMBER(5),
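Since only a fragment of the schema is shown, the following is just a rough sketch under assumed relational tables - school(school_id, population) and student_group(group_id, school_id, num_students), both hypothetical - of a statement-level trigger that recomputes the population and enforces the 10-student minimum. It would need to be adapted to the actual object types above:

CREATE OR REPLACE TRIGGER trg_school_population
AFTER INSERT OR UPDATE OR DELETE ON student_group       -- hypothetical table of student groups
BEGIN
  -- recompute each school's population from its groups of students
  UPDATE school s
  SET    s.population = NVL((SELECT SUM(g.num_students)
                             FROM   student_group g
                             WHERE  g.school_id = s.school_id), 0);

  -- enforce a minimum of 10 students per school
  FOR r IN (SELECT school_id, population FROM school WHERE population < 10) LOOP
    RAISE_APPLICATION_ERROR(-20001,
      'School ' || r.school_id || ' has fewer than 10 students.');
  END LOOP;
END;
/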
I have a situation where very little redo is generated - let us say 10 MB. Which solution would be better?
1. Create one redo log group of about 12 MB in size.
2. Create two redo log groups of about 5 MB each in size, as recommended by Oracle.
Solution 1 also seems appropriate for me because I generate less redo than the redo log group size: my whole redo will fit in it, and I can force a checkpoint after a certain period of time, let us say every 3 seconds.
In one of our databases I found that scenario one is implemented, so I want to know the pros and cons of both of these practices.
We have a big problem with the underlying devices that our ASM disk groups depend on. We have SAN disks (EMC DMX) presented to us as /dev/sda, /dev/sdb, etc. These disks actually have a multipath setup. For example, /dev/emcpowera has two different paths:
1. /dev/sda
2. /dev/sdp
We were using the direct path /dev/sda to format the disk (as fdisk /dev/sda), and then used oracleasm to create disks (oracleasm createdisk ASMDISKDATA0 /dev/sda1). Then, ASM disk groups were created with those labels (volumes), and then the database was created using the ASM disk groups.
Now our platform folks are telling us that we should use the multipath device /dev/emcpowera instead of the direct path /dev/sda, as the direct path is not guaranteed across reboots.
So the questions are:
1. Is there a way to re-link the ASM disks in the disk group to the multipath devices (/dev/emcpowera1 instead of /dev/sda1)?
2. Is this even an issue for ASM? If /dev/sda fails after a reboot, can Oracle ASM automatically discover the other path, /dev/sdp, to the physical EMC disk?
We want to truncate an Oracle table in the Oracle DB. After the truncate, the fact table will be loaded again. After the new load into the fact table, we want to tell the TimesTen DB to refresh the cache table. The cache table is a user-owned, read-only cache group with no autorefresh. We want to tell TimesTen, from a PL/SQL block in the Oracle DB, to start the refresh of the cache group in TimesTen. The refresh should not be an autorefresh, because the refresh should only start once the fact table has been reloaded after the truncate.
I have implemented Grid Control for monitoring databases. We have different database environments like PROD, STAGE, TEST, and BETA.
Now my requirement is that I need to configure PROD and STAGE alerts for one group (group1@oracle.com), meaning whatever alerts are generated by Grid for those databases should send a notification to group1@oracle.com, while TEST and BETA database alerts should go to group2@oracle.com.
I am following the Oracle manual to configure notifications; unfortunately I could not find how to complete my requirement.
[URL]
Has anyone implemented alerts that send to different groups?
During ASM disk group creation after the ASM instance creation, I receive the following error: "Disk Group ORAASMGROUP2 already exists. Cannot be created again."
The Grid Infrastructure was deinstalled once, and we still have the same issue.
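It may simply be that the old disk group is still registered or its header is still present on the disks. A sketch of what to check from the ASM instance (connected as SYSASM); if the group is still mounted it has to be dropped before it can be created again:

-- connected to the ASM instance
SELECT name, state FROM v$asm_diskgroup;
SELECT group_number, name, path, header_status FROM v$asm_disk;

-- if ORAASMGROUP2 is still mounted, drop it before re-creating it
DROP DISKGROUP ORAASMGROUP2 INCLUDING CONTENTS;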
I am trying to update records in the target table based on the records coming in from the source. For instance, if an incoming record is present in the target table I update it in the target; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log, I find that the Informatica code is perfectly fine, but it is the update part that takes a long time (more than 5 days to update one million records). Find the TARGET TABLE definition and the UPDATE query below.
TARGET TABLE:
CREATE TABLE OPERATIONS.DENIAL_REGRET_FACT
(
  CALENDAR_KEY          INTEGER NOT NULL,
  DAY_TIME_KEY          INTEGER NOT NULL,
  SITE_KEY              NUMBER NOT NULL,
  RESERVATION_AGENT_KEY INTEGER NOT NULL,
  LOSS_CODE             VARCHAR2(30) NOT NULL,
  PROP_ID               VARCHAR2(5) NOT NULL,
[code].....
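Row-by-row updates over a 46-million-row table will always be slow; a set-based MERGE from a staging table is one alternative sketch. The staging table name, the natural key used in the ON clause, and the updated columns are assumptions that would need to match the real mapping:

MERGE INTO operations.denial_regret_fact t
USING stg_denial_regret s                              -- hypothetical staging table holding the source rows
ON (    t.calendar_key          = s.calendar_key
    AND t.day_time_key          = s.day_time_key
    AND t.site_key              = s.site_key
    AND t.reservation_agent_key = s.reservation_agent_key)
WHEN MATCHED THEN UPDATE
  SET t.loss_code = s.loss_code,
      t.prop_id   = s.prop_id
WHEN NOT MATCHED THEN INSERT
  (calendar_key, day_time_key, site_key, reservation_agent_key, loss_code, prop_id)
  VALUES
  (s.calendar_key, s.day_time_key, s.site_key, s.reservation_agent_key, s.loss_code, s.prop_id);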
I have written the following PL/SQL procedure to delete the records and count the number of records that have been deleted.
CREATE OR REPLACE PROCEDURE Del_emp IS
  del_records NUMBER := 0;
BEGIN
  DELETE FROM candidate c
  WHERE empid in (select c.empid
                  from employee e, candidate c
                  where e.empid = c.empid
                  and e.emp_stat = 'TERMINATED');
[code]....
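SQL%ROWCOUNT, read immediately after the DELETE, gives the number of rows removed. A simplified sketch of the procedure (the subquery is reduced to a plain IN against employee, which is what the join appears to intend):

CREATE OR REPLACE PROCEDURE del_emp IS
  del_records NUMBER := 0;
BEGIN
  DELETE FROM candidate c
  WHERE  c.empid IN (SELECT e.empid
                     FROM   employee e
                     WHERE  e.emp_stat = 'TERMINATED');

  del_records := SQL%ROWCOUNT;                         -- rows deleted by the statement above
  DBMS_OUTPUT.PUT_LINE('Deleted ' || del_records || ' record(s).');
END del_emp;
/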
I am running a query in our Clarity PPM database to return a list of all Support projects. This returns a simple list of project code and project name:
The query has the project resource tables associated with it, so I am able to list all resources allocated to the project. But for now I am only selecting a DISTINCT list of projects.
I have a separate query which returns a list of support resources.
select res.full_name, res.unique_name, dep.description
from niku.srm_resources res, niku.pac_mnt_resources pac, niku.departments dep
where res.unique_name = pac.resource_code
and pac.departcode = dep.departcode
and res.is_active = 1
and description like 'IMS%'
and UPPER(dep.description) like '%SUP%'
What I need to be able to do in the first query, is return only projects that do NOT have a resource that appears in the resource list in the second query.
(the res.unique_name field in the second query can be linked to the same in the first query)
Logically, the process would be:
1. Identify Support Projects.
2. Identify Resources allocated to the project team.
3. Compare with the List of Support Resources.
4. If any Resources in that list do NOT appear on the project, then return the project.
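Following the "projects that have no resource from the support list" reading, the support-resource query can be pushed into a NOT EXISTS correlated on the project. The allocation table linking projects to resources (called prj_team_alloc here) and its column names are placeholders for whatever the first query actually joins on:

SELECT p.project_code, p.project_name
FROM   ( /* existing DISTINCT support-project query */ ) p
WHERE  NOT EXISTS (
         SELECT 1
         FROM   prj_team_alloc alloc                    -- hypothetical project/resource allocation table
         JOIN   niku.srm_resources     res ON res.unique_name = alloc.resource_unique_name
         JOIN   niku.pac_mnt_resources pac ON pac.resource_code = res.unique_name
         JOIN   niku.departments       dep ON dep.departcode = pac.departcode
         WHERE  alloc.project_code = p.project_code
         AND    res.is_active = 1
         AND    dep.description LIKE 'IMS%'
         AND    UPPER(dep.description) LIKE '%SUP%');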
Type Specification:
CREATE OR REPLACE TYPE ArrayCounterSum AS OBJECT
(
  -- AUTHOR   : CLIVE.GREGORY
  -- CREATED  : 03-04-2010 14:44:02
  -- Modified : S. Glass - Removed read function to increase performance
[code]...
Type Body:
TYPE BODY ArrayCounterSum AS
[code]...
So the output will be: 6_10_15_13_14 (the sum of all rows, returned as an array). My goal is to AVG the rows and return that as an array, so what change should I make in the above code in order to get the AVG of all records?
I have a table that contains history for vehicle positions. In order to find the latest positions quickly, I've included a LATEST column that is 1 if the record is the latest position and 0 otherwise. The table is maintained via a stored procedure. The procedure first sets the latest record for the vehicle to history...
UPDATE vehicle_positions SET latest = 0 WHERE vehicle_id = <vehicle ID> AND latest = 1
Is it possible for me to end up with 2 latest records? Consider this scenario...
Session #1: UPDATE vehicle_positions SET latest = 0 WHERE vehicle_id = 123 AND latest = 1
Session #2: UPDATE vehicle_positions SET latest = 0 WHERE vehicle_id = 123 AND latest = 1
Session #1: INSERT INTO vehicle_positions (vehicle_id, longitude, latitude, insert_time, latest) VALUES (123, 32.8533, -117.1180, SYSDATE, 1)
Session #2: INSERT INTO vehicle_positions (vehicle_id, longitude, latitude, insert_time, latest) VALUES (123, 32.8534, -117.1178, SYSDATE, 1)
I'd end up with 2 latest records. How can I protect against this? I considered using SELECT FOR UPDATE, but it seems like there are too many negatives going that route.
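Yes, two sessions interleaving like that can both insert a latest = 1 row. One way to make the database itself reject the second insert (a sketch; adjust names as needed) is a unique function-based index that only "sees" the latest rows:

-- at most one row per vehicle can have latest = 1;
-- rows with latest = 0 map to NULL and are ignored by the index
CREATE UNIQUE INDEX vehicle_positions_latest_uk
  ON vehicle_positions (CASE WHEN latest = 1 THEN vehicle_id END);

With this in place, the second session's INSERT fails with ORA-00001 and can be retried, instead of silently leaving two latest rows.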
I just loaded the 10g client EM on my Windows workstation to connect to our 10g database. Everything works great except I don't see any place where I can edit records. I have to use SQL*Plus to edit records, but I was hoping to use a GUI like I had with the 9i client OEM. Before we had 10g, I used the 9i OEM, and it let me edit records: I could add, delete, and update records with a user-friendly interface.
How do I retrieve alternate records from a table so that, in the result, the first row is in upper case, the second row is in lower case, and so on? What is the query?
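A sketch, assuming a table emp with a name column and ordering by name - ROW_NUMBER supplies the alternation and MOD picks upper or lower case:

SELECT CASE WHEN MOD(rn, 2) = 1 THEN UPPER(name)
            ELSE LOWER(name)
       END AS name
FROM  (SELECT e.name,
              ROW_NUMBER() OVER (ORDER BY e.name) AS rn   -- 1, 2, 3, ... in display order
       FROM   emp e)
ORDER  BY rn;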
WITH DATA AS
(
  SELECT '100' GRP, '01-JAN-2012' EFFECTIVE_DATE, '30-JUN-2012' TERMINATION_DATE FROM DUAL UNION ALL
  SELECT '100' GRP, '01-JUL-2012' EFFECTIVE_DATE, '31-JUL-2012' FROM DUAL UNION ALL
[code]......
The above mentioned output is produced by using the following business rules:
-- row no 1 = valid record
-- row no 2 = effective date is 1 day after the termination date of row no 1, meaning a valid record
-- row no 3 = effective date is equal to the row no 2 termination date, meaning an invalid record
-- row no 4 = effective date is 1 day after the termination date of row no 2, meaning a valid record
-- row no 5 = the gap between the row no 4 termination date and the row no 5 effective date is more than 1 day, meaning the record is invalid
The query should always compare the effective date with the termination date of the previous valid record by using the above-mentioned conditions and return the result accordingly.
In summary, there should be a 1-day gap between a termination date and the next effective date, and the effective date should always be less than the termination date in the same row. In addition, if a record is invalid, then the next record is checked against the termination date of the previous valid record.
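Because each row's validity depends on the previous valid row, a recursive walk in effective-date order is one way to express it in plain SQL (11.2 or later). The sketch below assumes the dates are real DATE values rather than the string literals in the sample, and "data" stands for the sample row source above:

WITH ordered AS (
  SELECT grp, effective_date, termination_date,
         ROW_NUMBER() OVER (PARTITION BY grp ORDER BY effective_date) AS rn
  FROM   data
),
walk (grp, rn, effective_date, termination_date, status, last_valid_term) AS (
  SELECT grp, rn, effective_date, termination_date,
         CAST('Valid' AS VARCHAR2(10)), termination_date   -- first row of each group is taken as valid
  FROM   ordered
  WHERE  rn = 1
  UNION ALL
  SELECT o.grp, o.rn, o.effective_date, o.termination_date,
         CASE WHEN o.effective_date = w.last_valid_term + 1
                   AND o.effective_date < o.termination_date
              THEN 'Valid' ELSE 'Invalid' END,
         CASE WHEN o.effective_date = w.last_valid_term + 1
                   AND o.effective_date < o.termination_date
              THEN o.termination_date                       -- becomes the new "previous valid" termination
              ELSE w.last_valid_term END                    -- invalid rows do not move the marker
  FROM   ordered o
  JOIN   walk    w ON w.grp = o.grp AND o.rn = w.rn + 1
)
SELECT grp, effective_date, termination_date, status
FROM   walk
ORDER  BY grp, rn;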