I need to use the alias name of a column within the same SELECT statement (because I can't have another SELECT statement that uses the first SELECT as a table; it's a BO tool limitation).
Ex:
Select dept_id, agency, sum(quantity) as "sum_qty" where sum_qty > 500;
Currently Oracle won't allow using the alias name sum_qty in the same SELECT statement. Is there a way to use an alias within the same SELECT statement?
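The standard way around this is to repeat the aggregate in a HAVING clause, since an alias defined in the SELECT list is not visible to WHERE (or to the rest of the same SELECT). A minimal sketch, assuming the query groups by dept_id and agency and using a hypothetical table name:

SELECT dept_id, agency, SUM(quantity) AS sum_qty
FROM   orders                      -- hypothetical table name
GROUP  BY dept_id, agency
HAVING SUM(quantity) > 500;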
Table Two:
TableTwoId  Name
1           Jones
2           Smith
3           Edwards
4           Camden
My SQL to fetch all records with Smith works great:
select Name from TableTwo Inner Join TableOne on TableTwo.TableTwoId in (TableOne.pocOne, TableOne.pocTwo) where Name = 'Smith'
Now I need to create an alias for the Name field. Here is my attempt:
select myAliasName from TableTwo Inner Join TableOne on TableTwo.TableTwoId in (TableOne.pocOne, TableOne.pocTwo), (select Name as myAliasName from TableTwo) where myAliasName = 'Smith'
This attempt pulls up all the records instead of just the Smith records. How can I create an alias for the Name field in my above query?
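A minimal sketch that keeps the original join and simply aliases the column in the SELECT list; the WHERE clause still references the underlying column, since the alias itself cannot be used there:

SELECT TableTwo.Name AS myAliasName
FROM   TableTwo
INNER JOIN TableOne
       ON TableTwo.TableTwoId IN (TableOne.pocOne, TableOne.pocTwo)
WHERE  TableTwo.Name = 'Smith';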
We have a query which uses the START WITH / CONNECT BY method. This query works fine in our earlier version, 10g. But since we migrated to 11g, we are facing a problem with the query.
Scenario 1: There is a table in the first schema (A), and a view was created for it.
Table:
create table alias_test1 (
  A varchar2(10),
  B varchar(10),
  C varchar2(20),
  D varchar2(40),
  E varchar2(10),
  F number(1)
);

View:
create or replace view alias_vw1 as select A, B, C, D, E, F from alias_test1;
Values:
Insert into A.ALIAS_TEST1 (A, B, C, D, E, F) Values ('Block1', '136', 'TOTBANK', 'Total Bank', ' ', 1);
Insert into A.ALIAS_TEST1 (A, B, C, D, E, F) Values ('Block2', '136', 'PPRSEGHKB', 'HKB', 'TOTBANK', 2);
Now, in schema B, a synonym was created to access the view: create synonym alias_vw1 for alias_vw1@link.world
In a query, can we have the same alias for more than one table, as in the following example?
Select C.ContractNum, B.Billnum, B.Billamt, A.
From Contractmaster C, Billdetails B, Address A, Currencymaster c,
Where B.billtype = 1
and C.Contractnum = B.Contractnum
and C.customerref = A.Customerref
and c.currencycode = 'EUR'
In the above query, table Contractmaster has the alias C (uppercase) while table Currencymaster has the alias c (lowercase).
Is this possible in Oracle 11g? Also, I found that the table Currencymaster has no join conditions, yet I executed the query without any errors!
I am getting the error [Error] PLS-00402 (182:1): PLS-00402: alias required in SELECT list of cursor to avoid duplicate column names in my stored procedure. I have created an alias for each column and I am still getting the error.
for my_rec_lot in (SELECT LLP.BOOK_VALUE LLP_BOOK_VALUE,
                          LLP.COMMISSION LLP_COMMISSION,
                          LLP.CURRENCY LLP_CURRENCY,
                          LLP.EXCHANGE_RATE LLP_EXCHANGE_RATE,
                          LLP.EXPENSES LLP_EXPENSES,
I have a stored procedure that does a "select name into v_name" SQL statement, which works fine. The only problem is when the query finds no data (the procedure errors because there is no value to put into the variable). I have a workaround: I first run the query with a COUNT (which always returns a row), and only if the count is not zero do I run the SELECT INTO.
My question is, is there a better way to handle this kind of issue?
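One common alternative, sketched with illustrative table and filter names: wrap the SELECT INTO in its own block and handle NO_DATA_FOUND there, which avoids the extra COUNT query entirely.

BEGIN
  SELECT name INTO v_name
  FROM   some_table            -- hypothetical table and filter
  WHERE  id = p_id;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    v_name := NULL;            -- or whatever default/branching the procedure needs
END;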
The following code is working fine. But if the column already exists in the table, the other statements should still be executed instead of the procedure exiting. How can I handle that exception?
SQL> CREATE OR REPLACE PROCEDURE sp_execparameters(tname IN VARCHAR2, colname IN VARCHAR2, datatype IN VARCHAR2)
AS
  v_sqlstr1 VARCHAR2(1000);
BEGIN
  v_sqlstr1 := 'alter table '||tname||' add '||colname||' '||datatype;
[code].........
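A minimal sketch of one way to keep going when the column already exists, assuming the ALTER TABLE string is run with EXECUTE IMMEDIATE and using the procedure's own parameters: map ORA-01430 (column being added already exists in table) to a named exception and ignore just that error, so the rest of the procedure still runs.

DECLARE
  e_column_exists EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_column_exists, -1430);   -- ORA-01430: column being added already exists in table
BEGIN
  EXECUTE IMMEDIATE 'alter table '||tname||' add '||colname||' '||datatype;
EXCEPTION
  WHEN e_column_exists THEN
    NULL;   -- the column is already there; fall through and execute the remaining statements
END;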
We are using an Oracle database to hold the company's records. The records must be available at any time, but over the years the database keeps growing. How should we handle the old data? All of the data is important, but if this goes on for a few more years we will need more and more disk space. Are there any efficient methodologies for handling old data? For us, "old" means data that is 10 years old.
During a duplicate process to a new database name, RMAN crashed after the restore but before the SWITCH DATAFILE ALL. So now we have the data files under ASM in the correct (new) diskgroup, but v$datafile contains the previous names (and so the previous diskgroup) and v$datafile_header is empty. RMAN is completely lost, so our solution is to manually rename each file in SQL*Plus using ALTER DATABASE RENAME FILE. Unfortunately, we are using (or migrating to) OMF, so the file names are meaningless and we are unable to associate the ASM files with the database files.
Is there any way (a query or anything else) to associate the ASM files with the database files? Here's an abstract of what we have for one (small) tablespace:
ASMCMD [+ORAXQG1_L136_DG1/ORAXPG1/DATAFILE] > ls -l N47CAW*
Type      Redund  Striped  Time              Sys  Name
DATAFILE  UNPROT  COARSE   NOV. 12 10:00:00  Y    N47CAW1.276.799152039
DATAFILE  UNPROT  COARSE   NOV. 12 10:00:00  Y    N47CAW1.318.799151641

SQL> select file#, name from v$datafile where ts#=17
  2  /
[code]...
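One possible approach, sketched with the diskgroup and database names from the listing above and not tested against this exact scenario: let RMAN read the restored file headers by cataloging everything under the new diskgroup, then switch the datafiles to those cataloged copies so the controlfile picks up the new OMF names.

RMAN> CATALOG START WITH '+ORAXQG1_L136_DG1/ORAXPG1/DATAFILE/';
RMAN> SWITCH DATABASE TO COPY;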
1. Header (contains the File Name, Branch Name, MIS date)
2. Body (Customer Details)
3. Footer (File Name, contains Total Number of Records and Number of Customers)
From the above code, I want to execute both the inner block's exception handler and the outer block's exception handler. Is there any way to tell the PL/SQL engine to execute the outer exception handler first and the inner one next?
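A minimal sketch with hypothetical handlers: PL/SQL always runs the innermost applicable handler first, and the only way to also run the outer handler is to re-raise from the inner one; the order cannot be reversed.

BEGIN
  BEGIN
    RAISE NO_DATA_FOUND;                      -- something fails in the inner block
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('inner handler');  -- the inner handler always fires first
      RAISE;                                  -- re-raise so the outer handler runs as well
  END;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('outer handler');
END;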
I am trying to assign XML content to a CLOB variable inside a PL/SQL block, but I am getting the below error:
declare
  t clob;
begin
  t := 'xml content exceeds 32000 characters';
  update test set clob_column = t where id = 2;
exception
  when others then null;
end;
ORA-06550: line 5, column 4: PLS-00172: string literal too long
I need to handle this exception. I know the length exceeds 32000 characters, but I still need to handle the exception and perform the other operations after handling it.
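PLS-00172 is a compile-time error (a single string literal cannot exceed 32767 bytes in PL/SQL), so it never reaches a runtime exception handler; the usual fix is to build the CLOB from smaller pieces. A minimal sketch, assuming the XML can be split into literals under the limit and reusing the column name from the UPDATE above:

declare
  t clob;
begin
  dbms_lob.createtemporary(t, true);
  dbms_lob.append(t, to_clob('first chunk of the xml, well under 32767 characters ...'));
  dbms_lob.append(t, to_clob('second chunk of the xml ...'));
  -- keep appending chunks until the whole document is assembled
  update test set clob_column = t where id = 2;
end;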
I'm working on a PL/SQL program and I'm using collections. I loop over the collection and delete rows from it depending on the edits in my program. Here is the question.
if my collection holds rows [1]value [2]value [3]value
I can simply do something like FOR indx IN invoice.FIRST..invoice.LAST. However, if I delete row 2 of my collection, I get a "no data found" error. I've been researching this site
[URL].......
rows [1]value [3]value [4]value
Is there a way to tell PL/SQL that I just want it to loop over the collection from top to bottom regardless of the index values?
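A minimal sketch of the standard pattern for a sparse collection, using the invoice name from the question with an illustrative element type: walk the indexes that actually exist with FIRST and NEXT instead of a fixed numeric range.

declare
  type t_invoice is table of varchar2(20) index by pls_integer;   -- illustrative element type
  invoice t_invoice;
  indx    pls_integer;
begin
  invoice(1) := 'value';
  invoice(3) := 'value';
  invoice(4) := 'value';                   -- index 2 was deleted / never assigned
  indx := invoice.first;
  while indx is not null loop
    dbms_output.put_line(invoice(indx));
    indx := invoice.next(indx);            -- skips over missing index values
  end loop;
end;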
Physically, DB1 .... DBN are connected sequentially, so I want to prevent segmentation if some DB is inaccessible, but at the same time avoid unneeded redundancy, which uses too much link bandwidth to send N-1 LCRs to all members of a single N-way group (so I want to split one big N-way zone into smaller ones and sequentially connect them into a chain; this significantly reduces the load on the link if N is big enough (>10)). Also, I want to have 2 DBs in the intersection zone to prevent a single point of failure.
This scheme has one drawback: if a change originated on DB3 or DB4, then it will be propagated (more correctly, applied and captured again) to DB5 and DB6 by both DB1 and DB2 (and, as far as I know, I have no means in the capture rules to detect the state of DB2 from DB1 and vice versa), so on DB5 and DB6 I get:
but it seems that it does not handle uniqueness conflicts. What is the best way to handle a uniqueness conflict (is there a better way than writing a custom error handler), and how serious is the impact on insert performance of having a unique constraint plus the corresponding error handler? (In the real world I will have to deal with tables with meta-information and without any keys.)
Also, how do I proceed with no error, or raise an exception from the apply error handler with the error that caused the handler to run? In the Oracle docs I can only find an example that modifies the LCR and runs lcr.EXECUTE(TRUE), but what do I do if I don't want to re-execute the LCR, but merely check the error code and propagate the error if it is not ORA-00001?
I created a custom type that has a CLOB member variable:
CREATE TYPE custom_type AS OBJECT(
  c_type     INTEGER,
  c_number   NUMBER(38, 8),
  c_varchar2 VARCHAR2(4000 CHAR),
  c_clob     CLOB,
[code]........
Inserting and updating work with the constructor: ... custom_type(to_clob('foo')) ... But if the data is longer than 4000 characters, then PHP can't access it.
So, the normal case:
$sql = ("INSERT INTO table ( clob_field ) VALUES ( EMPTY_CLOB() ) RETURNING clob_field INTO :clob");
$stid = oci_parse($conn, $sql);
$clobdescr = oci_new_descriptor($conn, OCI_DTYPE_LOB);
oci_bind_by_name($stid, ':clob', $clobdescr, -1, OCI_B_CLOB);
oci_execute($stid);
$clobdescr->save('more than 4000 chars');
...
This case, which I tried:
$sql = ("INSERT INTO table ( ctype ) VALUES ( custom_type(EMPTY_CLOB()) ) RETURNING ctype.c_clob INTO :clob");
$stid = oci_parse($conn, $sql);
$clobdescr = oci_new_descriptor($conn, OCI_DTYPE_LOB);
oci_bind_by_name($stid, ':clob', $clobdescr, -1, OCI_B_CLOB);
oci_execute($stid);
$clobdescr->save('more than 4000 chars');
ORA says: "ORA-00904: CTYPE.C_CLOB: invalid identifier";
I am trying to execute the below package. While executing, I face a problem: when NO_DATA_FOUND is raised, the exception is handled and control comes out of the loop, but I want to continue the loop after handling the exception.
Is there any way I can modify the code?
CREATE OR REPLACE PACKAGE BODY pkg_purge_archive_check AS
  PROCEDURE Purge_archive_tables_check (purgerows IN NUMBER)
  IS
    v_num_1      NUMBER(10);
    v_num_2      NUMBER(10);
    v_multiplier NUMBER(10);
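A minimal sketch of the usual pattern, with hypothetical cursor, table, and column names: put the SELECT INTO in its own BEGIN/EXCEPTION/END block inside the loop body, so a NO_DATA_FOUND on one iteration is handled locally and the loop carries on with the next iteration.

for rec in (select table_name from user_tables) loop      -- hypothetical driving cursor
  begin
    select purge_limit
    into   v_num_1
    from   purge_config                                     -- hypothetical lookup table
    where  table_name = rec.table_name;
  exception
    when no_data_found then
      v_num_1 := 0;        -- handled here, so the loop simply continues
  end;
  -- the rest of the loop body runs whether or not data was found
end loop;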
I am trying to handle a PK violation error on a certain table on INSERT; my best guess is that I should use a trigger. The basic idea is this:
The table consists of 7 columns, and 6 of them form the PK; the seventh one is "amount". I want to handle the PK violation in such a way that, if it occurs during INSERT, then instead of inserting a new row it should just update the "amount".
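A minimal sketch of two common alternatives to a trigger, with hypothetical table and column names: either catch DUP_VAL_ON_INDEX around the INSERT and fall back to an UPDATE, or do both in a single MERGE statement.

-- Option 1: catch the PK violation in PL/SQL
begin
  insert into amounts_tab (k1, k2, k3, k4, k5, k6, amount)
  values (:k1, :k2, :k3, :k4, :k5, :k6, :amt);
exception
  when dup_val_on_index then
    update amounts_tab
    set    amount = amount + :amt      -- or amount = :amt, depending on the requirement
    where  k1 = :k1 and k2 = :k2 and k3 = :k3
    and    k4 = :k4 and k5 = :k5 and k6 = :k6;
end;

-- Option 2: a single MERGE ("upsert")
merge into amounts_tab t
using (select :k1 k1, :k2 k2, :k3 k3, :k4 k4, :k5 k5, :k6 k6, :amt amount from dual) s
on (t.k1 = s.k1 and t.k2 = s.k2 and t.k3 = s.k3
    and t.k4 = s.k4 and t.k5 = s.k5 and t.k6 = s.k6)
when matched then
  update set t.amount = s.amount
when not matched then
  insert (k1, k2, k3, k4, k5, k6, amount)
  values (s.k1, s.k2, s.k3, s.k4, s.k5, s.k6, s.amount);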