SQL & PL/SQL :: Nesting Subqueries To Sum Data In Different Ways
Feb 18, 2013
I'm trying to create a query based upon an IAL (I'm using IFS). The IAL contains sales, grouped by month, for each customer. The output of the query should be as follows:
Cust_No
Spend in Month
Spend in Quarter
Spend in Year
My first thought was to have three subqueries, summing data from the IAL where the month of sale was last month, in the last 3 months, and in the last 12 months. Is this the right way to go? And what is the syntax?
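A sketch of that three-subquery shape, assuming the IAL is exposed as a view cust_sales_per_month with columns cust_no, sale_month (a DATE truncated to the first of the month), and spend (all assumed names):

SELECT c.cust_no,
       (SELECT NVL(SUM(s.spend), 0)
          FROM cust_sales_per_month s
         WHERE s.cust_no = c.cust_no
           AND s.sale_month = ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -1)) AS spend_in_month,
       (SELECT NVL(SUM(s.spend), 0)
          FROM cust_sales_per_month s
         WHERE s.cust_no = c.cust_no
           AND s.sale_month >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -3)) AS spend_in_quarter,
       (SELECT NVL(SUM(s.spend), 0)
          FROM cust_sales_per_month s
         WHERE s.cust_no = c.cust_no
           AND s.sale_month >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -12)) AS spend_in_year
  FROM (SELECT DISTINCT cust_no FROM cust_sales_per_month) c;

A single pass with CASE expressions inside SUM would avoid scanning the view three times per customer, but the three-subquery form above matches the idea as asked.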
In my stored procedure I am using a subquery with an INNER JOIN. This subquery calls a user-defined function which takes main-query fields as parameter values, but I am facing an issue accessing the main query's fields. My query is like the one below:
SELECT ID, Name, sub.Desc, sub.Date
FROM MainTable main
INNER JOIN ( SELECT * FROM RMF_GetData(main.ID) ) sub
  ON main.ContactID = sub.ContactID
I need data from a function in table format based on some case conditions, hence I need to join it with the main table. But here Oracle raises an "invalid identifier" error for the main.ID parameter.
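One workaround worth trying: Oracle allows left correlation into a TABLE() collection expression, so if RMF_GetData is a pipelined or collection-returning function, the comma-join form below can reference main.ID where an inline view cannot (a sketch; the column names Descr and Dt are assumed stand-ins). On 12c and later, CROSS APPLY or LATERAL would do the same job.

select main.ID, main.Name, sub.Descr, sub.Dt
from MainTable main,
     TABLE(RMF_GetData(main.ID)) sub   -- left correlation to main is allowed here
where main.ContactID = sub.ContactID;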
I was just working on one of my SQL assignments from my database management course, and this is the first one that I just can't figure out. The question is:
Quote: Increase the credit limit of any customer who has any order that exceeds their credit limit. The new credit limit should be set to their maximum order amount plus $1,000. This must be done in 1 SQL statement.
The bolded part is what I'm having trouble with.
What I have thus far:
UPDATE Customers
SET CreditLimit = 1000 + (SELECT MAX(Amount) FROM Orders, Customers WHERE Cust = CustNum)
WHERE CustNum IN (SELECT Cust FROM Orders WHERE Cust = CustNum AND CreditLimit < Amount);
So there are two tables I'll be working with: Customers (the table I'm updating) and Orders (the table where the order amount is found). With the code I have so far, it does seem to be updating the correct rows at the very least, but not with the correct values. It's essentially updating the CreditLimit column with 1000 + the maximum amount in the whole Orders table, which is very close to what I want, but I want 1000 + the maximum amount for that specific customer.
CustNum is the primary key for the Customers table, and Cust is the foreign key that links the two together.
(about the formatting, it looked much prettier in SQL Worksheet Plus)
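For what it's worth, a sketch of how both subqueries could be correlated to each customer (using the column names above; not necessarily the assignment's intended answer):

UPDATE Customers c
SET CreditLimit = 1000 + (SELECT MAX(o.Amount)
                            FROM Orders o
                           WHERE o.Cust = c.CustNum)
WHERE EXISTS (SELECT 1
                FROM Orders o
               WHERE o.Cust = c.CustNum
                 AND o.Amount > c.CreditLimit);

The key difference from the attempt above is that both subqueries reference the outer row's c.CustNum, so MAX(Amount) is computed per customer instead of over the whole Orders table.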
I created a table that has a column empid. I referenced this to another table by creating a constraint (empid int constraint fk_emp references empl_details(empid)). Now how can I insert that table's empid values into the second table's empid column?
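If I read this right, an INSERT ... SELECT copies the parent keys across; a sketch, where the child table name emp_child is assumed:

-- copy every empid from the parent into the child table
insert into emp_child (empid)
select empid from empl_details;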
I would like to pass my 1Z0-047 certification exam, but I don't understand the limitation on scalar subqueries, especially for the HAVING clause.
Here is my scalar subquery; it qualifies as scalar because it returns only one value.
select avg(list_price) from product_information
I use it in a HAVING clause as a scalar subquery and it works:
select status, avg(list_price)
from product_information
group by status
having (select avg(list_price) from product_information) >= avg(list_price);
but it is documented that it can't work:
QUOTE: There are also important restrictions on scalar subqueries. Scalar subqueries can't be used for:
Default values for columns
RETURNING clauses
Hash expressions for clusters
Functional index expressions
CHECK constraints on columns
WHEN condition of triggers
GROUP BY and HAVING clauses
START WITH and CONNECT BY clauses
I have a scenario where I need to get the UPC_ID values that do not exist in another subquery.
q-1)
WITH INITIALKEYCAT AS (
  SELECT UPC_ID, SALE_LOCATION_ID, MIN(APPROVAL_DATE) AS INITIALDATE
  FROM ITEM_KEYCAT
  WHERE APPROVAL_DATE IS NOT NULL
  GROUP BY UPC_ID, SALE_LOCATION_ID
  HAVING MIN(APPROVAL_DATE) >= to_date('2011/01/01', 'yyyy/mm/dd')
     AND MIN(APPROVAL_DATE) <  to_date('2011/04/01', 'yyyy/mm/dd')
)
[Code]....
Now I want to get the UPC_IDs from Q-2 that do not exist among the UPC_IDs of Q-1. How can I write this?
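A sketch of the usual pattern, with simple stand-in definitions for Q-1 and Q-2 since the real Q-2 is not shown above:

WITH q1 AS (SELECT upc_id FROM item_keycat WHERE approval_date IS NOT NULL), -- stand-in for Q-1
     q2 AS (SELECT upc_id FROM item_keycat)                                  -- stand-in for Q-2
SELECT DISTINCT q2.upc_id
  FROM q2
 WHERE NOT EXISTS (SELECT 1 FROM q1 WHERE q1.upc_id = q2.upc_id);

MINUS does the same job: SELECT upc_id FROM q2 MINUS SELECT upc_id FROM q1.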
We have multiple environments and our dev and UAT ones are now different from staging and live (I know, but I am not in a position to get this fixed). I have a set of updates that need to go through to live and in some cases they reference rows that do not exist in the UAT environment, and yet they have to (rigid, dumb process) go through that environment.
Basically, the insert I need to do takes info from two tables and does an insert into a third. That target table has a not null constraint on the affected fields, so the insert fails, quite rightly.
There's lots of info available on how to do conditional inserts with single sub-queries, using DUAL and EXISTS (or rather NOT EXISTS, but that's easy to swap), but those don't seem to easily translate for this one.
The sql that works when everything exists is:
insert into wmcontent.wm_manda_corpserv_companies
  (wm_manda_company_code, wm_corp_company_code)
values (
  (select wm_company_code from wmcontent.wm_m_and_a_company where wm_company = 'SW'),
  (select min(oid) from wmcontent.wx_category where content_type = 2 and name = 'SW')
);
In desperation I even tried using "log errors reject limit unlimited" but, no doubt due to my misunderstanding of how that works, I ended up getting the error "ORA-06550: line 38, column 1: PL/SQL: ORA-00972: identifier is too long" as a result.
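One shape that should degrade gracefully (a sketch, reusing the tables above): turn the VALUES with scalar subqueries into an INSERT ... SELECT over a cross join, so that if either lookup finds nothing, the SELECT produces zero rows and nothing is inserted:

insert into wmcontent.wm_manda_corpserv_companies
       (wm_manda_company_code, wm_corp_company_code)
select a.wm_company_code, b.min_oid
from  (select wm_company_code
         from wmcontent.wm_m_and_a_company
        where wm_company = 'SW') a,
      (select min(oid) as min_oid
         from wmcontent.wx_category
        where content_type = 2
          and name = 'SW') b
where b.min_oid is not null;

The IS NOT NULL filter covers the aggregate side, since MIN() still returns one row (a NULL) even when no rows match.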
I performed a level 0 RMAN backup on 15th Feb 2013, and after that I have incremental level 1, archivelog, and controlfile backups. Before this there was no backup of the database.
Now I want to restore it to 12th Feb 2013. Is there any way to restore the database to that point in time? Can we restore the controlfile to a point in time?
I have an APEX page, page no. 1, for a person's details, with fields like Name and Date of Birth. When the user enters the details, they get saved. When we want to edit a person's details, we click on that person and it goes to the next page, where we can change the person's details. But when I tried to do that, I got a "Not a valid month" error. The log message shows that it is coming from: Automatic Row Processing (DML), an after-SUBMIT process. Is there a way to avoid this error? I have changed date formats in several different ways.
I don't care about the order of the values in the row. In other words, I want to get disjoint sets of data connected by either of the two values. Every pair in the input table is unique.
I have seen on the web that it is possible to do this using CONNECT BY and hierarchical retrieval, but I've been trying a lot of combinations and I can't reproduce the output.
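A sketch that works on small data sets, assuming the input table is pairs(a, b); each output row labels a value with the smallest value reachable in its set:

WITH pairs AS (
  SELECT 1 AS a, 2 AS b FROM dual UNION ALL   -- stand-in data
  SELECT 2, 3 FROM dual UNION ALL
  SELECT 7, 8 FROM dual
),
edges AS (       -- make the relation symmetric so the order inside a row is ignored
  SELECT a, b FROM pairs
  UNION
  SELECT b, a FROM pairs
)
SELECT node, MIN(grp) AS grp
FROM (
  SELECT CONNECT_BY_ROOT a AS grp, b AS node
    FROM edges
  CONNECT BY NOCYCLE PRIOR b = a   -- walk out from every row as a root
  UNION
  SELECT a, a FROM edges           -- each node also belongs to its own set
)
GROUP BY node
ORDER BY grp, node;

Because NOCYCLE still explores every simple path, this gets expensive quickly; for large tables an iterative approach is a better fit.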
I'd like to know whether Oracle has any capability of automatically choosing which lossless compression algorithm it should apply by analyzing a data stream on data load. Does Oracle have any compression advisors/wizards that would make recommendations as to the type and level of compression?
I have configured a physical standby on my local system. To check log shipping I created a table on the primary DB, but when I tried to check it on the standby, it says the table does not exist. Below are the primary and standby alert entries.
Primary alert log
Fatal NI connect error 12514, connecting to:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.98)(PORT=1522))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=STAND)(SERVER=dedicated)(CID=(PROGRAM=d:\oracle11g\app\administrator\product\11.1.0\db_1\bin\ORACLE.EXE)(HOST=A960M)(USER=SYSTEM))(SERVER=dedicated)))
VERSION INFORMATION: TNS for 64-bit Windows: Version 11.1.0.6.0 - Production
I have a few tables in Oracle 9i/10g, and they already have data in them. I am trying to migrate data coming from various source systems into these Oracle tables. There is a chance that after loading I might get some unwanted data in these tables.
How do I remove just the data that I have loaded recently, without disturbing the original data the tables already had?
I could back those tables up and reload the data if there is any problem, but I am looking for a different approach. I just don't want to change the existing system, as a lot of users use it.
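One approach to consider, heavily hedged since it depends on undo retention still covering the load window: Flashback Query can compare the table with its own state from before the load and delete only the new rows. A sketch, where target_table, pk_col, and the one-hour window are all assumed:

-- delete rows that did not exist an hour ago (i.e. before the load)
delete from target_table t
 where not exists (
   select 1
     from target_table as of timestamp (systimestamp - interval '1' hour) o
    where o.pk_col = t.pk_col
 );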
I have database A (working in a live environment) and database B, a copy of database A (not live). Last week I restored the whole database A RMAN backup file onto database B. Now I don't want to change anything in any schema; I want to import only the updated and new records into the tables in database B.
There are around 20 schemas. I already have everything in the new database B: all the required database objects (procedures, functions, packages), indexes on all tables, and data in the tables. I just want to add the new and updated data.
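If a database link from B back to A is an option, a per-table MERGE is one sketch of the refresh (table, column, and link names below are placeholders):

merge into some_schema.some_table tgt
using (select id, col1, col2 from some_schema.some_table@link_to_a) src
   on (tgt.id = src.id)
 when matched then update set tgt.col1 = src.col1, tgt.col2 = src.col2
 when not matched then insert (id, col1, col2) values (src.id, src.col1, src.col2);

Note that rows deleted on A will not disappear from B this way; deletes would need a separate pass.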
I'm trying to add an edmx file to my project for the first time. I want to choose the Oracle provider ODP.NET but cannot find Oracle in the data source list. I have Oracle 11g, ODP, and ODT installed, and I can access the database from the solution as well. I saw Oracle listed under data sources when I tried to connect the solution to the database through Server Explorer. The solution is connected to the Oracle database through ODP.
Statspack has been configured for Active Data Guard on the primary database. We got a spike of buffer busy waits for about 5 minutes in Active Data Guard, and this was causing worse application SQL response times during that 5-minute window. Below is what I got from the statspack report for one hour:
Snapshot     Snap Id   Snap Time           Sessions  Curs/Sess  Comment
~~~~~~~~   ---------- ------------------   --------  ---------  -------------------
Begin Snap:     18611  21-Feb-13 22:00:02       236        2.2
End Snap:       18613  21-Feb-13 23:00:02       237        2.1
Elapsed:        60.00 (mins)
[code]...
Why could there be a sudden spike of demand on UNDO data in Active Data Guard?
When I press the button, in the WHEN-BUTTON-PRESSED trigger I want the form to first delete all the previous data and then populate the data from the table; that's why I used CLEAR_BLOCK first, but CLEAR_BLOCK is not working here. My code is given below:
go_block('show');
clear_block(NO_VALIDATE);
declare
  cursor c1 is select * from qtr_demand order by 1;
begin
[code]..........
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether exporting to tape is possible. If so, would the data be accessible if needed later?
1) Split values from the "INST" column: suppose 23.
2) Find all values from the "NUM" column for the above split value, i.e. 23.
Eg:
For INST 23, its corresponding "NUM" values are 1234 and 1298.
3) Save these values into a table Y, where INST and NUM are the column names:

INST  NUM
23    1234,1298
1) I have a thousand records in table X, and for all of those records I need to split and save the data into table Y. Hence, I need to do this task with the best possible performance.
2) After this, whenever new data comes into table X, the above 'split & save' operation should automatically be called and append the corresponding data wherever possible; see the sketch below.
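For step 1, a sketch assuming Oracle 11.2 or later (for LISTAGG) and that table X holds one NUM per row:

insert into y (inst, num)
select inst,
       listagg(num, ',') within group (order by num)   -- build the comma list per INST
  from x
 group by inst;

For step 2, rather than a row-level trigger on X (which cannot easily re-query X mid-statement), a statement-level trigger that rebuilds the affected INST values, or an ON COMMIT materialized view over the same query, would be the usual routes.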
We are migrating a Pro*C application as described below.
Old Env: UNIX Old DB: Oracle 8i
New Env: Linux New DB: Oracle 11g
New modules are successfully compiled in the Linux environment, but we are facing issues writing the output of a VARCHAR datatype to a file.
Find below an extract of the code:

EXEC SQL BEGIN DECLARE SECTION;
  varchar mcolmnvarchar[4];
EXEC SQL END DECLARE SECTION;
EXEC SQL DECLARE crs CURSOR FOR SELECT NVL(colmn,' ') FROM table1
memset(mcolmnvarchar.arr, '\0', 4); // Was added only for the Linux migration; not present in the UNIX env.
EXEC SQL FETCH c1 INTO :mcolmnvarchar;
cout << "Data at Stage one"<< mcolmnvarchar << endl; mcolmnvarchar.arr[mcolmnvarchar.len]='�'; cout << "Data at Stage two"<< mcolmnvarchar << endl; fprintf(fptr,"%-4s",mcolmnvarchar.arr);
The above code works absolutely fine in the UNIX env with Oracle 8i, but with the Linux env and Oracle 11g it does not. There are no compilation or runtime errors. The "Data at Stage one" line prints the database output properly, but after the null-terminator line, the "Data at Stage two" statement prints without any value. The value is lost after the null-terminator line.
I am trying to transfer data from an Oracle database table into another table that resides in SQL Server 2008. A database link is already set up, and data can be selected/queried using:
select * from a_table@db_link
The thing is, when trying to insert Arabic data from Oracle into a SQL Server table:
insert into a_table@db_link
values ( 'N''some data in Arabic'' , '11-oct-00', 'some data in English' );
commit;
It gets inserted all right, but when we try to display the data, it shows garbage instead of displaying Arabic. The English data is all right, but the date also shows as garbage?!
When trying to insert Arabic data directly through SQL Server it's fine, though. I suggested that we transfer the data through ODBC to flat files like Access and then to SQL Server, but the team rejected that since they're going to do it daily and the data is huge (we are talking more than 28,000 records).
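Two things worth checking, sketched below: the N must sit outside the quotes to form a national-character literal (inside the quotes it is just the letter N plus the escaped quotes), and an ANSI date literal sidesteps NLS date-format mismatches. The gateway/ODBC character-set configuration on the SQL Server side may also need to match; the year 2000 below is an assumed reading of '11-oct-00'.

insert into a_table@db_link
values (N'some data in Arabic', DATE '2000-10-11', 'some data in English');
commit;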
I need to export only the data from schemas or tables; how do I do that with Oracle Data Pump? When we use the SCHEMAS parameter, it exports the whole schema, not only the data, right?
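Data Pump has a switch for this: CONTENT=DATA_ONLY exports just the rows and skips the DDL. A sketch (user, directory, and file names are assumed):

expdp system/password SCHEMAS=scott CONTENT=DATA_ONLY DIRECTORY=dump_dir DUMPFILE=scott_data.dmp LOGFILE=scott_data.log

Using TABLES=owner.table_name instead of SCHEMAS narrows it to specific tables, and the same CONTENT=DATA_ONLY works on impdp.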
I am working at a bank as a systems consultant, and I have a SAN storage area and Oracle set up as below.
SAN 1
This interface includes the DATA FILES of the oracle tablespace
SAN 2
SAN 1 mirrors the data files of the Oracle tablespace to SAN 2.
1. Can I rely on real-time data recovery from SAN 2?
2. If SAN 1's data files are corrupted, will the SAN 2 data files be corrupted as well?
3. If SAN 2 is corrupted, then what Oracle features can be used to have uncorrupted data?
I did everything as written, but when I do *SQL> alter database recover managed standby database disconnect from session;*
I go and look in the standby database alert log file, and this is what's written:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\SYSTEM01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
[code]....
The strange thing is that it is looking for the primary database's path on drive F and goes to it, but I don't understand what the reason for this could be, although I'm running this command while the primary database is shut down!
I am configuring a logical standby online. When I first executed DBMS_LOGSTDBY.BUILD,
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
it was blocked by another session, so I killed the holding session, but that didn't work. Then I cancelled this step and executed it again. The error is:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
BEGIN DBMS_LOGSTDBY.BUILD; END;
*
ERROR at line 1:
ORA-01354: Supplemental log data must be added to run this command
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3669
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3755
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 12
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 370
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 157
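ORA-01354 points at missing supplemental logging on the primary; the usual fix, as documented for logical standby setup, is sketched below (run on the primary):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

-- verify before re-running DBMS_LOGSTDBY.BUILD
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;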