SQL & PL/SQL :: Trigger To Handle Primary Key Violation?
Sep 5, 2012
I am trying to handle a PK violation error on a certain table on INSERT; my best guess is that I should use a trigger. The basic idea is this:
The table consists of 7 columns; 6 of them form the PK, and the seventh one is "amount". I want to handle the PK violation in such a way that, if it occurs during an INSERT, then instead of inserting a new row, it just updates the "amount".
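A trigger on the same table cannot easily turn the failing INSERT into an UPDATE. Here is a minimal sketch of the two approaches usually used instead (the table name my_tab and the column names c1..c6 are assumptions): MERGE, which does the insert-or-update in one statement, or catching the predefined DUP_VAL_ON_INDEX exception around the existing INSERT.

-- Sketch 1: MERGE handles "insert, or update amount if the key already exists"
MERGE INTO my_tab t
USING (SELECT :c1 c1, :c2 c2, :c3 c3, :c4 c4, :c5 c5, :c6 c6, :amount amount FROM dual) s
   ON (t.c1 = s.c1 AND t.c2 = s.c2 AND t.c3 = s.c3
   AND t.c4 = s.c4 AND t.c5 = s.c5 AND t.c6 = s.c6)
 WHEN MATCHED THEN UPDATE SET t.amount = s.amount
 WHEN NOT MATCHED THEN INSERT (c1, c2, c3, c4, c5, c6, amount)
      VALUES (s.c1, s.c2, s.c3, s.c4, s.c5, s.c6, s.amount);

-- Sketch 2: keep the plain INSERT and react to the duplicate key
BEGIN
  INSERT INTO my_tab (c1, c2, c3, c4, c5, c6, amount)
  VALUES (:c1, :c2, :c3, :c4, :c5, :c6, :amount);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    UPDATE my_tab t
       SET t.amount = :amount
     WHERE t.c1 = :c1 AND t.c2 = :c2 AND t.c3 = :c3
       AND t.c4 = :c4 AND t.c5 = :c5 AND t.c6 = :c6;
END;
/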
Physically, DB1 ... DBN are connected sequentially, so I want to prevent segmentation if some DB is inaccessible, but at the same time avoid the unneeded redundancy of sending N-1 LCRs to all members of a single N-way group, which uses too much link bandwidth. So I want to split one big N-way zone into smaller ones and connect them sequentially into a chain; this significantly reduces the load on the link if N is big enough (>10). I also want to have 2 DBs in each intersection zone to prevent a single point of failure.
This scheme has one drawback: if a change originated on DB3 or DB4, then it will be propagated (more correctly, applied and captured again) to DB5 and DB6 by both DB1 and DB2 (and, as far as I know, I have no means in the capture rules to detect the state of DB2 from DB1 and vice versa), so on DB5 and DB6 I get:
but it seems that it does not handle uniqueness conflicts. What is the best way to handle a uniqueness conflict (is there a better way than writing a custom error handler), and how serious is the impact on insert performance of having a unique constraint and a corresponding error handler? (In the real world I will have to deal with tables with metainformation and without any keys.)
Also, how do I proceed with no error, or raise an exception from the apply error handler with the error that caused the handler to run? In the Oracle docs I can only find an example that modifies the LCR and runs lcr.EXECUTE(TRUE), but what should I do if I don't want to re-execute the LCR, but merely check the error code and propagate the error if it is not ORA-00001?
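A hedged sketch of a Streams error handler along those lines (registered with DBMS_APPLY_ADM.SET_DML_HANDLER with error_handler => TRUE). The collection type for error_messages and the policy shown (discard on ORA-00001, re-raise anything else) are assumptions to illustrate the idea: returning from the handler without raising is what "proceed with no error" amounts to, while raising again leaves the transaction to go to the apply error queue.

CREATE OR REPLACE PACKAGE errors_pkg AS
  TYPE emsg_array IS TABLE OF VARCHAR2(2000) INDEX BY BINARY_INTEGER;
  PROCEDURE dup_key_handler(message           IN ANYDATA,
                            error_stack_depth IN NUMBER,
                            error_numbers     IN DBMS_UTILITY.NUMBER_ARRAY,
                            error_messages    IN emsg_array);
END errors_pkg;
/
CREATE OR REPLACE PACKAGE BODY errors_pkg AS
  PROCEDURE dup_key_handler(message           IN ANYDATA,
                            error_stack_depth IN NUMBER,
                            error_numbers     IN DBMS_UTILITY.NUMBER_ARRAY,
                            error_messages    IN emsg_array) IS
  BEGIN
    IF error_numbers(1) = 1 THEN
      NULL;  -- ORA-00001: the row already arrived via the other path, so silently discard the LCR
    ELSE
      -- any other error: raise again so the transaction still ends up in the error queue
      RAISE_APPLICATION_ERROR(-20000,
        'apply error ' || error_numbers(1) || ': ' || error_messages(1));
    END IF;
  END dup_key_handler;
END errors_pkg;
/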
In my application I am facing a peculiar error. For easy understanding I am considering the emp table. In my package P1 I have two procedures, Proc1 and Proc2.
Proc1: It receives the employee details and, to check whether the employee already exists, it calls Proc2. Proc2: Here it checks whether the employee exists. If it exists, it deletes the old record, commits, and then inserts the new record.
However, in a few instances Proc2 returns "unique constraint (CONSTRAINT_NAME) violated". I am very confused about why this error occurs.
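One hedged guess at the cause, with a sketch (the column names are assumptions): if two sessions run Proc2 for the same employee at roughly the same time, both can pass the existence check, and the second INSERT then hits the unique constraint. Trapping DUP_VAL_ON_INDEX around the INSERT closes that window.

BEGIN
  DELETE FROM emp WHERE empno = :empno;          -- remove any old record
  INSERT INTO emp (empno, ename, sal)
  VALUES (:empno, :ename, :sal);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- another session inserted the same empno between our DELETE and INSERT
    UPDATE emp SET ename = :ename, sal = :sal WHERE empno = :empno;
END;
/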
I want to create a trigger that achieves primary key functionality.
I have an "EMP" table. The table already contains duplicate data in the "EMPNO" column. I want to restrict any further duplicate data from being entered into the table, and for that I want to create a trigger.
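A hedged sketch of such a trigger: it can reject new duplicates, but note that querying the same table from a row trigger only works for single-row INSERT ... VALUES statements (multi-row inserts hit the mutating-table restriction), and it cannot see uncommitted rows from other sessions. The NOVALIDATE unique constraint shown afterwards is generally the more robust option when the existing duplicates must stay.

CREATE OR REPLACE TRIGGER trg_emp_no_dup
BEFORE INSERT ON emp
FOR EACH ROW
DECLARE
  v_count PLS_INTEGER;
BEGIN
  SELECT COUNT(*) INTO v_count FROM emp WHERE empno = :NEW.empno;
  IF v_count > 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'EMPNO ' || :NEW.empno || ' already exists');
  END IF;
END;
/

-- Alternative: enforce uniqueness for new rows only, without validating the existing duplicates
CREATE INDEX emp_empno_ix ON emp (empno);
ALTER TABLE emp ADD CONSTRAINT emp_empno_uk UNIQUE (empno)
  USING INDEX emp_empno_ix ENABLE NOVALIDATE;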
Where can I find the different triggering events for DML (like UPDATE, DELETE, etc.) and DDL (database and schema level)?
starting full resync of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 08/22/2011 22:19:07
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of full resync command on default channel at 08/22/2011 22:19:07
ORA-00001: unique constraint (ODBA.DF_U2) violated
Searching Metalink, it's related to bug 6014994, and the proposed workaround is to drop the constraint:
Cause: Dropping a datafile from a tablespace followed immediately by adding another datafile to the same tablespace will cause this unique key violation. Taking an RMAN debug trace will show the file# related to the error. This is reported as bug <<6014994>>, unpublished on Metalink and fixed in 11g: "RMAN RESYNC CATALOG signals DF_U2 violated constraint when a file# is reused in the same tablespace."
Solution/workaround: drop the DF_U2 constraint.
Where can I drop the constraint? Is it possible to do it in RMAN, or in the target instance?
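The constraint named in the error (ODBA.DF_U2) lives in the recovery catalog schema, so it is dropped with SQL in the catalog database, not from RMAN and not in the target instance. A hedged sketch, assuming (as the name suggests) that the constraint sits on the catalog's DF table:

-- Connect to the recovery catalog database as the catalog owner (ODBA here):
ALTER TABLE odba.df DROP CONSTRAINT df_u2;

-- Then re-run the resync from RMAN:
-- RMAN> RESYNC CATALOG;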
drop table t1
/
create table t1 ( id int PRIMARY KEY )
/
insert into t1
select 1 from dual union all
select 2 from dual union all
select 2 from dual union all
select 3 from dual
/
ERROR at line 1: ORA-00001: unique constraint (SYSTEM.SYS_C0011990) violated
How can I get the rows that produced the constraint violation?
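A minimal sketch using DML error logging, which keeps the failing rows (together with the ORA-00001 message) in an error table instead of aborting the whole INSERT:

-- One-time setup: create the default error log table ERR$_T1 for T1
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'T1');
END;
/

insert into t1
select 1 from dual union all
select 2 from dual union all
select 2 from dual union all
select 3 from dual
log errors into err$_t1 ('dup check') reject limit unlimited
/

-- The offending rows and why they failed:
select ora_err_number$, ora_err_mesg$, id from err$_t1;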
I am using Toad 10. After I enter the user and password, it displays the Toad error "Access violation at address 6761CB21 in module 'ORA805.dll'. Read of address 00000010".
When I try to use the F4 functionality, it again displays the above message and "Object not found". After I execute a query, all the character data displays in Chinese.
What do I have to do to overcome the following error? Whenever I'm using Toad I get this error, ever since I started writing and executing procedures and functions.
access violation at address 006b5b1b in module 'toad.exe' read of address 0000011d
Using APEX 4.2, I have created a form item on a page and, in the item's SETTINGS attributes, set the DISABLED value to YES and the SESSION STATE value to YES.
SETTINGS->DISABLES-> SESSION STATE
By doing this I am getting an error. The error is:
Session state protection violation: This may be caused by manual alteration of protected page item P98_CHECK_AMOUNT. If you are unsure what caused this error, please contact the application administrator for assistance.
I have a search screen in my form. After searching, if I select a row using the button, it navigates to the first tab page, which is the "gas" screen. Here, if I try to change a value, i.e. update and save the form, it does not allow me to update the value and raises the error message "oracle unable to insert the record". If I look at "display error" in the menu, it shows the SELECT statement with the error "unique key violation error ora-00001".
We are trying to import data into existing tables in a schema using Data Pump.
However, the foreign key (child) tables are being imported first and then the master table data, thus violating the constraints.
Apparently, larger tables are being imported first regardless of referential integrity constraints, thus causing constraint violations (contrary to my understanding).
Is this normal behaviour during a Data Pump import?
Is it possible that the sequence-generated keys are causing this?
As I understand it, import commits after each table. In that case, can we defer the commit at all at the expense of large undo, set the constraints to deferrable, and try the import?
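A hedged sketch of the usual workaround (the constraint and table names are hypothetical). Data Pump loads pre-existing tables largest-first rather than in dependency order, and it commits per table, so making the constraints deferrable does not really help; disabling the child foreign keys around the import and re-validating them afterwards usually does.

-- Before the import:
ALTER TABLE child_tab DISABLE CONSTRAINT child_fk;

-- impdp ... TABLES=parent_tab,child_tab TABLE_EXISTS_ACTION=APPEND ...

-- After the import, re-enable and check the loaded data in one step:
ALTER TABLE child_tab ENABLE VALIDATE CONSTRAINT child_fk;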
I have a stored procedure that does a "select name into v_name" SQL statement, which works fine. The only problem is when the query finds no data (the procedure errors because there is no value to put into the variable). Now I have a workaround: I run the query first with a COUNT statement (which always returns a result), and then, if the count is not equal to 0, I run the SELECT INTO.
My question is, is there a better way to handle this kind of issue?
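A minimal sketch of the usual alternative (the table and column names are assumptions): trap NO_DATA_FOUND around the SELECT INTO itself, which avoids the extra COUNT query entirely.

DECLARE
  v_name employees.name%TYPE;
BEGIN
  BEGIN
    SELECT name INTO v_name FROM employees WHERE employee_id = :id;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      v_name := NULL;               -- or whatever "not found" should mean here
  END;
  DBMS_OUTPUT.PUT_LINE(NVL(v_name, 'no such employee'));
END;
/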
The following code is working fine, but the thing is, if the column already exists in the table, the other statements should still be executed instead of the procedure exiting. So how can I handle that exception?
CREATE OR REPLACE PROCEDURE sp_execparameters(tname IN VARCHAR2, colname IN VARCHAR2, datatype IN VARCHAR2)
AS
  v_sqlstr1 VARCHAR2(1000);
BEGIN
  v_sqlstr1 := 'alter table '||tname||' add '||colname||' '||datatype;
[code].........
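A minimal sketch of one way to do it: map ORA-01430 ("column being added already exists in table") to a named exception and swallow just that error, so the rest of the procedure keeps running. The "other statements" comment stands for the remainder of the original procedure, which is cut off above.

CREATE OR REPLACE PROCEDURE sp_execparameters(tname IN VARCHAR2, colname IN VARCHAR2, datatype IN VARCHAR2)
AS
  v_sqlstr1 VARCHAR2(1000);
  e_col_exists EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_col_exists, -1430);
BEGIN
  v_sqlstr1 := 'alter table '||tname||' add '||colname||' '||datatype;
  BEGIN
    EXECUTE IMMEDIATE v_sqlstr1;
  EXCEPTION
    WHEN e_col_exists THEN
      NULL;  -- column is already there; carry on with the remaining statements
  END;
  -- ... other statements continue here ...
END sp_execparameters;
/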
We are using an Oracle database to hold the company's records. The records should be available at any time, but over the years the database keeps growing. Now, how do we handle the old data? All the data is important, but if this goes on for a few more years then we will need more and more disk space. Are there any efficient methodologies for handling the old data? For us, "old" means data that is 10 years old.
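One common approach, sketched here with hypothetical table, column, and tablespace names: range-partition the large history tables by date, so partitions older than 10 years can be compressed, moved to cheaper storage, or exported and dropped without touching the current data.

CREATE TABLE company_records (
  record_id    NUMBER,
  record_date  DATE NOT NULL,
  payload      VARCHAR2(4000)
)
PARTITION BY RANGE (record_date) (
  PARTITION p_2002 VALUES LESS THAN (DATE '2003-01-01'),
  PARTITION p_2003 VALUES LESS THAN (DATE '2004-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- Compress and relocate a partition that has aged past the 10-year mark:
ALTER TABLE company_records MOVE PARTITION p_2002 TABLESPACE archive_ts COMPRESS;

-- Or, once it has been exported/backed up, remove it entirely:
ALTER TABLE company_records DROP PARTITION p_2002;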
I am using a RAC-to-RAC Data Guard environment, OS HP-UX. During the primary (old standby) to standby (old primary) switchover I am using:
SQL>ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
I have an 11.2.0.1 environment with a standby, and when performing the switchover it hung for more than 2 hours and I had to cancel it. This is all I see in the database alert.log:
Errors in file /u02/app/oracle/diag/rdbms/drpdb/drpdb1/trace/drpdb1_m000_15160.trc:
ORA-01155: the database is being opened, closed, mounted or dismounted
Thu Apr 04 17:44:03 2013
Errors in file /u02/app/oracle/diag/rdbms/drpdb/drpdb1/trace/drpdb1_m000_15780.trc:
ORA-01155: the database is being opened, closed, mounted or dismounted
Thu Apr 04 17:59:03 2013
Errors in file /u02/app/oracle/diag/rdbms/drpdb/drpdb1/trace/drpdb1_m001_16373.trc:
[code]....
ORA-01155: the database is being opened, closed, mounted or dismounted.
1. Header (contains the File Name, Branch Name, MIS date)
2. Body (Customer Details)
3. Footer (File Name, contains the Total Number of Records and Number of Customers)
From the above code, I want both the inner block exception handler and the outer block exception handler to execute. Is there any way to tell the PL/SQL engine to execute the outer exception handler first and the inner one next?
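A minimal sketch of the closest PL/SQL gets to this (the block structure here is assumed, not taken from the code above): the inner handler always runs first, and re-raising from it is what makes the outer handler run as well; the order cannot be reversed.

BEGIN
  BEGIN
    RAISE NO_DATA_FOUND;                    -- whatever fails in the inner block
  EXCEPTION
    WHEN OTHERS THEN
      DBMS_OUTPUT.PUT_LINE('inner handler: ' || SQLERRM);
      RAISE;                                -- propagate so the outer handler fires too
  END;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('outer handler: ' || SQLERRM);
END;
/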
I am trying to assign XML content to a CLOB variable inside a PL/SQL block, but I am getting the below error:
declare
  t clob;
begin
  t := 'xml content exceeds 32000 characters';
  update test set clob_cloumn = t where id = 2;
exception
  when others then null;
end;
ORA-06550: line 5, column 4: PLS-00172: string literal too long
I need to handle this exception. I know the length exceeds 32,000 characters, but even so I need to handle the exception and perform the other operations after handling it.
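A minimal sketch of the usual way around this (the table and column come from the block above; the chunk contents are placeholders): PLS-00172 is raised at compile time, so it cannot be trapped in an EXCEPTION block at all. Keeping every individual string literal under the 32,767-byte limit and appending the pieces into the CLOB avoids the error and lets the rest of the block run.

DECLARE
  t CLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(t, TRUE);
  DBMS_LOB.APPEND(t, TO_CLOB('first chunk of the XML, well under 32K ...'));
  DBMS_LOB.APPEND(t, TO_CLOB('second chunk of the XML ...'));
  -- ... more chunks, or bind the value instead of hard-coding a literal

  UPDATE test SET clob_cloumn = t WHERE id = 2;

  DBMS_LOB.FREETEMPORARY(t);
END;
/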
I'm working on a PL/SQL program and I'm using collections. I loop over the collection and delete rows from it depending on the edits my program makes. Here is the question.
If my collection holds rows [1]value [2]value [3]value
I can simply do something like FOR indx IN invoice.FIRST..invoice.LAST. However, if I delete row 2 of my collection, I get an error: no data found. I've been researching this site
[URL].......
rows [1]value [3]value [4]value
Is there a way to tell PL/SQL that I just want it to loop over the collection from top to bottom regardless of the index values?
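A minimal sketch (the collection name invoice comes from the question, but its element type is an assumption): navigate with FIRST/NEXT instead of a numeric FOR loop, so index values that have been deleted are simply skipped.

DECLARE
  TYPE t_invoice IS TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;
  invoice t_invoice;
  indx    PLS_INTEGER;
BEGIN
  invoice(1) := 'value 1';
  invoice(2) := 'value 2';
  invoice(3) := 'value 3';
  invoice.DELETE(2);                    -- make the collection sparse

  indx := invoice.FIRST;
  WHILE indx IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(indx || ': ' || invoice(indx));
    indx := invoice.NEXT(indx);         -- jumps straight from 1 to 3
  END LOOP;
END;
/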
I created a custom type that has a CLOB member variable:
CREATE TYPE custom_type AS OBJECT(
  c_type     INTEGER,
  c_number   NUMBER(38, 8),
  c_varchar2 VARCHAR2(4000 CHAR),
  c_clob     CLOB,
[code].........
Inserting and updating works with the constructor: ... custom_type(to_clob('foo')). But if the data is longer than 4000 characters, then PHP cannot get it in.
So, the normal case:

$sql = ("INSERT INTO table ( clob_field ) VALUES ( EMPTY_CLOB() ) RETURNING clob_field INTO :clob");
$stid = oci_parse($conn, $sql);
$clobdescr = oci_new_descriptor($conn, OCI_DTYPE_LOB);
oci_bind_by_name($stid, ':clob', $clobdescr, -1, OCI_B_CLOB);
oci_execute($stid);
$clobdescr->save('more than 4000 chars');
...
For this case I tried:

$sql = ("INSERT INTO table ( ctype ) VALUES ( custom_type(EMPTY_CLOB()) ) RETURNING ctype.c_clob INTO :clob");
$stid = oci_parse($conn, $sql);
$clobdescr = oci_new_descriptor($conn, OCI_DTYPE_LOB);
oci_bind_by_name($stid, ':clob', $clobdescr, -1, OCI_B_CLOB);
oci_execute($stid);
$clobdescr->save('more than 4000 chars');
ORA says: "ORA-00904: CTYPE.C_CLOB: invalid identifier";
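The RETURNING clause cannot return an attribute of an object column, which is why ORA-00904 is raised. A hedged PL/SQL sketch of one workaround (the table name t, its id column, and the constructor arguments are assumptions, since the full type definition is cut off above): insert the object with an empty CLOB, then update the attribute through a table alias with dot notation; a PL/SQL CLOB variable, or a :clob bind passed into such a block from OCI, is not limited to 4000 characters.

DECLARE
  v_clob CLOB := TO_CLOB('placeholder for data longer than 4000 characters');
BEGIN
  INSERT INTO t (id, ctype)
  VALUES (1, custom_type(1, 0, 'short text', EMPTY_CLOB()));

  -- Object attributes are addressed with a table alias and dot notation:
  UPDATE t x
     SET x.ctype.c_clob = v_clob
   WHERE x.id = 1;
END;
/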
I am trying to execute the below package. While executing it, I face a problem: when NO_DATA_FOUND is raised, the exception is handled and the loop exits, but I want to continue the loop after handling the exception.
Is there any way I can modify the code?
CREATE OR REPLACE PACKAGE BODY pkg_purge_archive_check AS
  PROCEDURE Purge_archive_tables_check (purgerows IN NUMBER) IS
    v_num_1      NUMBER(10);
    v_num_2      NUMBER(10);
    v_multiplier NUMBER(10);
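A minimal sketch of the usual fix (a standalone loop with a hypothetical table, not the package above): put the statement that can raise NO_DATA_FOUND in its own nested block inside the loop, so the handler runs and control simply passes to the next iteration instead of leaving the loop.

DECLARE
  v_value VARCHAR2(100);
BEGIN
  FOR i IN 1 .. 10 LOOP
    BEGIN
      SELECT some_col
        INTO v_value
        FROM archive_tab        -- hypothetical table
       WHERE id = i;
      DBMS_OUTPUT.PUT_LINE('row ' || i || ': ' || v_value);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('row ' || i || ' not found, continuing');
    END;                        -- control returns to the loop, not out of it
  END LOOP;
END;
/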
declare
  type osd_refone is ref cursor;
  osd_ref  osd_refone;
  l_status number;
[code]......
abc_reports is the package; in this package, "ab_report" is the function that has the ref cursor as an OUT parameter. When I execute the above anonymous block I get the error below, so how can I print the OUT ref cursor data in my block?
ERROR at line 8:
ORA-06550: line 8, column 12:
PLS-00221: 'OSD_REF' is not a procedure or is undefined
ORA-06550: line 8, column 3:
PL/SQL: Statement ignored
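A hedged sketch of how the block could call the function and print the cursor (the function's IN parameter and the columns the cursor returns are assumptions): PLS-00221 appears when osd_ref is written as if it were a procedure call; instead, pass it as the OUT argument and then fetch from it row by row.

DECLARE
  TYPE osd_refone IS REF CURSOR;
  osd_ref  osd_refone;
  l_status NUMBER;
  l_col1   VARCHAR2(100);       -- assumed shape of the cursor's rows
  l_col2   NUMBER;
BEGIN
  l_status := abc_reports.ab_report(p_input  => 'SOME_VALUE',   -- hypothetical IN parameter
                                    p_refcur => osd_ref);       -- the ref cursor OUT parameter
  LOOP
    FETCH osd_ref INTO l_col1, l_col2;
    EXIT WHEN osd_ref%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(l_col1 || ' ' || l_col2);
  END LOOP;
  CLOSE osd_ref;
END;
/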