SQL & PL/SQL :: Unique Constraint Versus Distinct?
Apr 30, 2013
I have a question about the functionality of a unique constraint versus the DISTINCT clause. Below is the example that is confusing me a lot.
--Below statement will create table and unique constraint
Create Table A (A Varchar2 (10) Unique);
Insert Into A Values (Null);
Insert Into A Values (1);
Insert Into A Values (2);
[code]...
If each NULL value is treated as unique (which is why the constraint accepts several of them), then why does Oracle's DISTINCT show the NULL rows as a single record?
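For reference, a hedged sketch completing the example (the second NULL insert and the two queries are additions, since the rest of the code was elided): the unique constraint never compares NULL keys, so several NULL rows are accepted, while DISTINCT treats all NULLs as duplicates of each other and returns one NULL row.
Create Table A (A Varchar2(10) Unique);

Insert Into A Values (Null);
Insert Into A Values (Null);   -- accepted: NULL keys are not entered in the unique index
Insert Into A Values (1);
Insert Into A Values (2);

Select Count(*) From A;        -- 4 rows stored
Select Distinct A From A;      -- 3 rows returned: NULL, 1, 2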
View 3 Replies
Mar 26, 2007
A step-by-step set of instructions asks me to enter the following into SQL*Plus:
CREATE TABLE Lab2Lecturer
(staffNO VarCHAR2(10) NOT NULL,
title VARCHAR2(3),
fName VARCHAR2(30),
[code]...
Then the following:
INSERT INTO Lab2Lecturer
(staffNO, title, fName, lName, streetAddress, suburb, city, postCode, country, lecturerLevel, bankNO, bankName, salary, workLoad, researchArea)
VALUES
('1000', 'Dr', 'Johanna','Santoso',
'3 Robinson Av', 'Kew', 'Melbourne', '3080', 'Australia', 'C', '1000567237', 'CommBank', 65000.00,1.0, 'O-R DB');
and finally,
INSERT INTO Lab2Lecturer
(staffNO, title, fName, lName, streetAddress, suburb, city, postCode,country, lecturerLevel, BankNO,bankName, salary, workLoad, researchArea)
VALUES
('1000', 'Dr', 'Justine', 'Martin', '6 Algorithm AV', 'Montmorency', 'Melbourne', '3089', 'Australia', 'D', '1000123456', 'CommBank', 89000.00, 1.0, 'CBR');
When I try entering the second one I get the error 'unique constraint violated'. So what exactly is wrong?
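A hedged sketch of the likely fix, assuming the elided part of the CREATE TABLE declares staffNO as the primary key (or unique): both rows use staffNO = '1000', which violates that key, so the second lecturer needs a different staff number.
INSERT INTO Lab2Lecturer
  (staffNO, title, fName, lName, streetAddress, suburb, city, postCode,
   country, lecturerLevel, bankNO, bankName, salary, workLoad, researchArea)
VALUES
  ('1001', 'Dr', 'Justine', 'Martin', '6 Algorithm AV', 'Montmorency',
   'Melbourne', '3089', 'Australia', 'D', '1000123456', 'CommBank',
   89000.00, 1.0, 'CBR');    -- '1001' is an assumed, unused staff number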
View 1 Replies
View Related
Oct 4, 2013
My DB version is
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I'm getting these error codes in production. First of all, there are no duplicates in the table for which this error is raised. I've checked the index and the related columns as well; there is no data there, so there should be no chance of a unique constraint violation.
SELECT * FROM ORDER_OCC_REQUISITION_X_REF WHERE LAB_ORDER_OCC_TEST_ID IN(SELECT LAB_ORDER_OCC_TEST_ID FROM LAB_ORDER_OCC_TEST WHERE LAB_ORDER_OCC_ID = 7944858);
no rows selected
Now, when I try to insert one row into this table, I get this error, even though, as you can see, there are no records for this occurrence ID.
SELECT * FROM USER_INDEXES WHERE INDEX_NAME = 'ORD_OCC_REQ_UQ_TEST_IX_04';
--ORDER_OCC_REQUISITION_X_REF (Table name)
--MERGE_DT, LAB_ORDER_OCC_TEST_ID, TEST_ID, ACTIVE_YN (columns for the index 'ORD_OCC_REQ_UQ_TEST_IX_04')
As you can see there is no data, so this error should not be raised. Here is the update procedure:
/*******************************************************************************************************************
* Name : UPDATE_REQUISITION_X_REF
* Description : This Procedure update ORDER_OCC_REQUISITION_X_REF table with requisition_id
* that was generated due to merge process.
* In parameters : IN_merge_id NUMBER The order_ref_no of the orders to be merged (comma separated)
***********************************************************************************************************************/
PROCEDURE UPDATE_REQUISITION_X_REF ( IN_merge_id IN TT_ORD_REQUISITION_WORK_AREA.merge_id%TYPE)
IS
[Code]....
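A hedged diagnostic sketch (object names taken from the post) to see exactly which columns the unique index covers and which constraint, if any, is backed by it. Note that ORA-00001 can also come from a row inserted by another session that has not yet committed, so concurrent runs of the merge procedure are worth ruling out.
SELECT index_name, column_name, column_position
FROM   user_ind_columns
WHERE  index_name = 'ORD_OCC_REQ_UQ_TEST_IX_04'
ORDER  BY column_position;

SELECT constraint_name, constraint_type, index_name, status
FROM   user_constraints
WHERE  table_name = 'ORDER_OCC_REQUISITION_X_REF';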
View 8 Replies
View Related
Aug 23, 2011
Our RMAN backup failed because of the error:
starting full resync of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 08/22/2011 22:19:07
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of full resync command on default channel at 08/22/2011 22:19:07
ORA-00001: unique constraint (ODBA.DF_U2) violated
Searching Metalink, it's related to bug 6014994, and the proposed workaround is to drop the constraint:
Cause:
Dropping a datafile from a tablespace followed immediately by adding
another datafile to the same tablespace will cause this Unique Key violation.
Taking a RMAN debug trace will show the file# related to the error
This is reported as bug <<6014994>> Unpublished on Metalink and fixed in 11g
RMAN RESYNC catalog signals DF_U2 violated constraint when a file# is reused in
the same tablespace
Solution
WORKAROUND: Drop df_u2 constraint
Where can I drop the constraint? Is it possible to do it in RMAN, or in the target instance?
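The constraint lives in the recovery catalog schema (ODBA in the error above), so the drop is done with SQL*Plus against the recovery catalog database as the catalog owner, not through RMAN and not on the target. A hedged sketch (the catalog table is normally DF, but the lookup confirms it):
-- connect to the recovery catalog database as the catalog owner (ODBA)
SELECT table_name
FROM   user_constraints
WHERE  constraint_name = 'DF_U2';

ALTER TABLE df DROP CONSTRAINT df_u2;   -- table name as returned above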
View 1 Replies
View Related
Feb 5, 2013
I've set up Oracle Streams replication between multiple N-way zones like this:
------------------------------------------------------------------
  Nway1   | Intersect |   Nway2
  DB1 ----|---- DB3 --|---- DB5        DB1, DB2, DB3, DB4 - N-way1
   |      |     |     |     |          DB3, DB4, DB5, DB6 - N-way2
  DB2 ----|---- DB4 --|---- DB6        DB3, DB4 - intersection zone
          |           |
------------------------------------------------------------------
Physically DB1 ... DBN are connected sequentially, so I want to prevent segmentation if some DB becomes inaccessible, but at the same time avoid unneeded redundancy, which uses too much link bandwidth to send N-1 LCRs to all members of a single N-way group. So I want to split one big N-way zone into smaller ones and connect them sequentially into a chain; this significantly reduces the load on the link if N is big enough (>10). I also want to have two DBs in the intersection zone to prevent a single point of failure.
This scheme has one drawback: if a change originated on DB3 or DB4, then it will be propagated (more correctly, applied and captured again) to DB5 and DB6 by both DB1 and DB2 (and, as far as I know, I have no means in the capture rules to detect the state of DB2 from DB1 and vice versa), so on DB5 and DB6 I get:
ORA-00001: unique constraint (DUMMYUSR.UNIQUE_RECORDS) violated
I've set up a standard conflict handler for the apply process:
declare
cols DBMS_UTILITY.NAME_ARRAY;
begin
cols(1) := 'no';
cols(2) := 'name';
cols(3) := 'ddate';
dbms_apply_adm.set_update_conflict_handler(
object_name => 'DUMMYUSR.DUMMYTBL',
method_name => 'DISCARD',
resolution_column => 'no',
column_list => cols
);
end;
but it seems that it does not handle uniqueness conflicts. What is the best way to handle a uniqueness conflict (is there a better way than writing a custom error handler), and how serious is the impact on insert performance of having a unique constraint and a corresponding error handler? (In the real world I will have to deal with tables with meta-information and without any keys.)
Also, how do I either proceed with no error, or re-raise the exception from the apply error handler with the error that caused the handler to run? In the Oracle docs I can find only an example that modifies the LCR and runs lcr.EXECUTE(TRUE), but what should I do if I don't want to re-execute the LCR, and merely want to check the error code and propagate the error if it is not ORA-00001?
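For the uniqueness case, my understanding (a hedged, untested sketch) is that an error handler registered with DBMS_APPLY_ADM.SET_DML_HANDLER and error_handler => TRUE is called instead of moving the transaction to the error queue: returning without raising treats the LCR as handled (i.e. discards it), while raising propagates the error. Package and procedure names below are placeholders.
CREATE OR REPLACE PACKAGE dummy_err_pkg AS
  TYPE emsg_array IS TABLE OF VARCHAR2(2000) INDEX BY BINARY_INTEGER;
  PROCEDURE skip_dup_rows(
    message           IN ANYDATA,
    error_stack_depth IN NUMBER,
    error_numbers     IN DBMS_UTILITY.NUMBER_ARRAY,
    error_messages    IN emsg_array);
END dummy_err_pkg;
/
CREATE OR REPLACE PACKAGE BODY dummy_err_pkg AS
  PROCEDURE skip_dup_rows(
    message           IN ANYDATA,
    error_stack_depth IN NUMBER,
    error_numbers     IN DBMS_UTILITY.NUMBER_ARRAY,
    error_messages    IN emsg_array)
  IS
  BEGIN
    IF error_numbers(1) = 1 THEN
      NULL;   -- ORA-00001: return without executing the LCR, i.e. discard it
    ELSE
      raise_application_error(-20001,
        'Unhandled apply error: ' || substr(error_messages(1), 1, 500));
    END IF;
  END skip_dup_rows;
END dummy_err_pkg;
/
BEGIN
  dbms_apply_adm.set_dml_handler(
    object_name    => 'DUMMYUSR.DUMMYTBL',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => TRUE,
    user_procedure => 'DUMMYUSR.DUMMY_ERR_PKG.SKIP_DUP_ROWS');
END;
/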
View 1 Replies
View Related
May 14, 2010
I have encountered the error below:
1 : 23000 : java.sql.BatchUpdateException: ORA-00001: unique constraint (SDS1.PK_EXP_TXT) violated
1 : 23000 : java.sql.SQLException: ORA-00001: unique constraint (SDS1.PK_EXP_TXT) violated
java.sql.BatchUpdateException: ORA-00001: unique constraint (SDS1.PK_EXP_TXT) violated
at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:367)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9119)
[Code] ......
After checking the ALL_INDEXES view, I found some old entries that belong to 2008 and 2009. A sample is below:
OWNER   INDEX_NAME   LAST_ANALYZED
------  -----------  -------------------
SDS1    PK_ACTION    31-OCT-09 07:04:49
SDS1    AK_ACTION    31-OCT-09 07:04:49
SDS1    PK_COND      31-OCT-09 07:04:50
SDS1    AK_COND      31-OCT-09 07:04:50
SDS1    COND_FK1     31-OCT-09 07:04:50
SDS1    COND_FK2     31-OCT-09 07:04:50
Is the problem due to this old data not being removed from ALL_INDEXES? If yes, do I have to delete the old data from ALL_INDEXES manually?
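For what it is worth, LAST_ANALYZED in ALL_INDEXES is only the date optimizer statistics were last gathered, not data held by the index, so old dates there are harmless and nothing is deleted from that view manually. A hedged diagnostic sketch to see which columns SDS1.PK_EXP_TXT covers, so the values the batch inserts can be checked against the existing rows:
SELECT c.table_name, cc.column_name, cc.position
FROM   all_constraints  c
JOIN   all_cons_columns cc
       ON cc.owner = c.owner AND cc.constraint_name = c.constraint_name
WHERE  c.owner = 'SDS1'
AND    c.constraint_name = 'PK_EXP_TXT'
ORDER  BY cc.position;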
View 7 Replies
View Related
Jun 15, 2012
I have 3 columns in a table: colX, colY, colZ.
I am trying to find a way to prevent duplicates across these columns, but only when colX is not null.
For example, if there are already values for: colX = 1, colY = 1, colZ = 1
then:
Allowed: colX = null, colY = 1, colZ = 1
Not allowed: colX = 1, colY = 1, colZ = 1
I can't create a unique constraint on these columns because there are many null values for column colX, and as mentioned, when colX is null, colY and colZ can be any values.
I also tried using a before insert trigger to find duplicates before posting and raise an error if found, but this causes an ORA-04091 mutating error since the trigger in the table is referencing itself to check for duplicates.
Also, I know there is something called a function based index, but I cannot use those with my code, so I need another solution if possible.
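One hedged alternative that avoids both the function-based index and the mutating-table error is a statement-level AFTER trigger, which may query the table freely (table and trigger names below are placeholders; note it does not protect two sessions inserting the same key before either commits):
CREATE OR REPLACE TRIGGER trg_no_dup_xyz
AFTER INSERT OR UPDATE OF colX, colY, colZ ON my_table   -- my_table is assumed
DECLARE
  l_dups NUMBER;
BEGIN
  -- reject the statement if it leaves behind a duplicate (colX, colY, colZ)
  -- combination where colX is not null
  SELECT COUNT(*)
  INTO   l_dups
  FROM  (SELECT colX, colY, colZ
         FROM   my_table
         WHERE  colX IS NOT NULL
         GROUP  BY colX, colY, colZ
         HAVING COUNT(*) > 1);
  IF l_dups > 0 THEN
    raise_application_error(-20001,
      'Duplicate (colX, colY, colZ) combination where colX is not null');
  END IF;
END;
/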
View 4 Replies
View Related
Nov 4, 2012
I also put in the relevant code in case it's needed.
SQL> @lab_05_01.sql
SQL> -- Oracle Database 10g: Administration Workshop II
SQL> -- Oracle Server Technologies - Curriculum Development
SQL> --
SQL> -- ***Training purposes only***
SQL> -- ***Not appropriate for production use***
SQL> --
SQL> -- This script performs a batch promotion update.
SQL> -- The logic of the updates is not important -
[code]....
View 2 Replies
View Related
Mar 15, 2011
I have disabled the primary key on a table using:
ALTER TABLE C_LOV_BOROUGH DISABLE CONSTRAINT CLOVBOROUGH_PK CASCADE;
but now when I try to insert a duplicate value, it gives this error:
ORA-00001: unique constraint (LOT_DBA.CLOVBOROUGH_PK) violated
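One hedged explanation: DISABLE only drops the index when Oracle created it for the constraint; if the constraint was created using an existing unique index (or was disabled with KEEP INDEX), the index survives, and it is the index that raises ORA-00001 (the error always reports the index name). A sketch to check, and to drop the index if duplicates really must be allowed temporarily:
SELECT index_name, uniqueness, status
FROM   user_indexes
WHERE  index_name = 'CLOVBOROUGH_PK';

SELECT constraint_name, status, index_name
FROM   user_constraints
WHERE  constraint_name = 'CLOVBOROUGH_PK';

-- if the unique index is still present:
-- DROP INDEX CLOVBOROUGH_PK;
-- (re-create it, or re-enable the constraint, once the data is cleaned up)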
View 5 Replies
View Related
Aug 15, 2011
I have two design alternatives and need to understand how expensive (in terms of speed) one is compared to the other for a medium-sized table (100K-200K records):
create table xyz
(
f1 number not null,
f2 varchar2(20) not null,
f3 number not null,
f4 varchar2(50),
[code]....
The idea is to optimize the design by using a PK instead of the three-column key, and there is a debate over whether searching on a unique index column (2nd scenario) is the same speed as searching on a PK column (1st scenario).
View 5 Replies
View Related
May 22, 2013
I have a table where I want the user to fill in unique values for a field, which is easy to do.
The problem is that sometimes the values can be null, so an ordinary unique constraint does not work because of the multiple null records. Is there a way of validating only non-null values, to ensure that all non-null data entered is unique?
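For reference, a plain single-column unique constraint in Oracle already behaves this way, because entirely-NULL keys are not stored in the unique index; a minimal hedged sketch is below. If duplicate NULLs are being rejected anyway, the check is probably happening in another layer (for example form-level validation), which would need the same NULL-aware rule.
CREATE TABLE t_demo (val VARCHAR2(10) UNIQUE);

INSERT INTO t_demo VALUES (NULL);   -- ok
INSERT INTO t_demo VALUES (NULL);   -- also ok: NULL keys are not entered in the index
INSERT INTO t_demo VALUES ('A');    -- ok
INSERT INTO t_demo VALUES ('A');    -- ORA-00001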
View 1 Replies
View Related
Aug 2, 2011
I have created the table below, and my TL asked me to create a local unique constraint for it.
I went through many sites and could not find the correct solution for how to create a LOCAL UNIQUE CONSTRAINT on a sub-partitioned table and a LOCAL UNIQUE INDEX on a partitioned table. Creating the local unique constraint should take care of creating the local unique index.
Unique key columns are DET,GDS,ARRIVE_DT
CREATE TABLE SUB_PAR_TAB
(
ID VARCHAR2(100) NOT NULL,
REGION VARCHAR2(40) NOT NULL,
SOURCE VARCHAR2(80) NOT NULL,
DET VARCHAR2(80) NOT NULL,
GDS VARCHAR2(40) NOT NULL,
ARRIVE_DT DATE,
SYS_SOURCE VARCHAR2(25) ,
[code]........
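A hedged sketch, assuming ARRIVE_DT is (or contains) the partitioning key of the elided PARTITION/SUBPARTITION clause; Oracle only allows a UNIQUE LOCAL index when the index columns include the full partitioning key, which is why the key DET, GDS, ARRIVE_DT has to carry it.
-- create the local unique index explicitly and let the constraint use it
CREATE UNIQUE INDEX sub_par_tab_uk_ix
  ON sub_par_tab (det, gds, arrive_dt)
  LOCAL;

ALTER TABLE sub_par_tab
  ADD CONSTRAINT sub_par_tab_uk UNIQUE (det, gds, arrive_dt)
  USING INDEX sub_par_tab_uk_ix;

-- or in one statement:
-- ALTER TABLE sub_par_tab ADD CONSTRAINT sub_par_tab_uk UNIQUE (det, gds, arrive_dt)
--   USING INDEX (CREATE UNIQUE INDEX sub_par_tab_uk_ix
--                ON sub_par_tab (det, gds, arrive_dt) LOCAL);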
View 5 Replies
View Related
Aug 3, 2008
I got this error message in my replication environment.
ORA-12012: error on auto execute of job 2182370
ORA-12008: error in materialized view refresh path
ORA-00001: unique constraint (TE.S_TE_MTH_DBASE_SALES_INFO_A_U1) violated
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 189
The materialized view and the base table each have only one unique index. There are no referential constraints on my base table, because the source table does not refer to any other tables.
Also, the materialized view log was created using ROWID, because there are no primary key constraints on my base table. When I manually refresh the materialized view I get the error message below:
ORA-00001: unique constraint (TE.S_TE_MTH_DBASE_SALES_INFO_A_U1) violated
How is this possible? There is a parent/child relationship in my source table, and there is no way duplicate records could have been entered against its unique index.
View 8 Replies
View Related
Jun 27, 2012
Somewhere in a PL/SQL block I have a statement like:
INSERT INTO TABLE1( id , col)
SELECT id, col
FROM TABLE2;
This statement returns an ORA-00001 unique constraint error because id is the primary key on TABLE1. I would like to know the value of id that raised the exception.
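One hedged way to find out is DML error logging, which writes each rejected row (including its id) to an error table instead of failing the whole statement:
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'TABLE1');  -- creates ERR$_TABLE1
END;
/

INSERT INTO table1 (id, col)
SELECT id, col
FROM   table2
LOG ERRORS INTO err$_table1 ('run_1') REJECT LIMIT UNLIMITED;

-- the rejected key values and the ORA-00001 text are now queryable:
SELECT ora_err_number$, ora_err_mesg$, id
FROM   err$_table1
WHERE  ora_err_tag$ = 'run_1';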
View 15 Replies
View Related
Apr 18, 2010
I have two different Java processes trying to insert into the same table at the same time for the same trade. The structure of the Report table is:
create table report
( TRADE_ID NUMBER,
VERSION NUMBER,
MESSAGE_TIME TIMESTAMP)
There is a unique key on (TRADE_ID, VERSION). When a new trade_id is inserted, the version is set to 1, the second becomes 2, and so on; the version is calculated as the last version of that trade_id plus 1. It was working fine until a new Java process was built that fires inserts through ten different instances at the same time, resulting in unique key errors. In detail: if three records for a trade_id come in at the same time, versions should be allocated on a first-come, first-served basis, giving versions 1, 2 and 3 for that trade id. Now, because of the multiple instances, they all seem to fire at once, all end up computing version 1, and the result is a unique constraint error when inserting into the table.
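A hedged sketch of one common fix: keep the MAX(version) + 1 calculation but retry when two sessions collide on the unique key. An alternative is to serialize per trade_id with SELECT ... FOR UPDATE on a parent row, or with DBMS_LOCK.
CREATE OR REPLACE PROCEDURE insert_report (p_trade_id IN NUMBER) IS
BEGIN
  LOOP
    BEGIN
      INSERT INTO report (trade_id, version, message_time)
      SELECT p_trade_id, NVL(MAX(version), 0) + 1, SYSTIMESTAMP
      FROM   report
      WHERE  trade_id = p_trade_id;
      EXIT;                      -- insert succeeded
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        NULL;                    -- another session took this version; recompute and retry
    END;
  END LOOP;
END;
/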
View 3 Replies
View Related
Oct 17, 2011
I am trying to import a dump using the impdp utility and followed the steps below: I disabled all the constraints and executed this command:
impdp DIRECTORY=EXPDP_DAILY_BACKUP_DIR DUMPFILE=expdp_QA_full_11102011_214501.dmp
logfile=imp_EXPDP_QA_full_12102011_203254.log remap_schema=prod:prod remap_tablespace=prod:prod
schemas=prod TABLE_EXISTS_ACTION=truncate content=data_only.
but I am getting the error below for one or two tables, and if I import those tables separately they import successfully. I do not always get this error.
ORA-31693: Table data object "PROD"."DAS_ID_GENERATOR" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (PROD.DAS_ID_GENERATOR_P) violated
ORA-31693: Table data object "PROD"."TKT_DIST_SRV_STAT" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (PROD.SERVER_STATS_P) violated
View 8 Replies
View Related
May 27, 2013
I have a table with a CLOB column and 150 records. I want to retrieve the distinct values from the CLOB; using the DISTINCT operator on a CLOB will not work.
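A hedged workaround, assuming the values can be distinguished by their first 4000 characters: compare on DBMS_LOB.SUBSTR and keep one row per value (for longer CLOBs, grouping on a hash such as DBMS_CRYPTO.HASH would be needed). Table and column names are placeholders.
SELECT t.*
FROM   my_clob_table t
WHERE  t.rowid IN (SELECT MIN(s.rowid)
                   FROM   my_clob_table s
                   GROUP  BY DBMS_LOB.SUBSTR(s.clob_col, 4000, 1));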
View 1 Replies
View Related
Aug 14, 2013
Using Oracle 11g, below is the table, partitions, unique and non-unique local index:
CREATE TABLE DOCA
( DOCA_ID          NUMBER NOT NULL,
  DOCA_BKG_PAX_ID  NUMBER NULL,
  ROW_PURGE_DATE   DATE   NULL
)
PARTITION BY RANGE (ROW_PURGE_DATE)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION P2007   VALUES LESS THAN (TO_DATE('01/01/2008', 'dd/mm/yyyy')),
  PARTITION P200801 VALUES LESS THAN (TO_DATE('01/02/2008', 'dd/mm/yyyy'))
)
TABLESPACE T0;

ALTER TABLE DOCA ENABLE ROW MOVEMENT;

CREATE UNIQUE INDEX XPKDOCA ON DOCA (DOCA_ID ASC, ROW_PURGE_DATE ASC)
  LOCAL REVERSE TABLESPACE I0;

ALTER TABLE DOCA ADD CONSTRAINT XPKDOCA PRIMARY KEY (DOCA_ID);

CREATE INDEX XFKDOCA_DOCA_BKG_PAX_ID ON DOCA (DOCA_BKG_PAX_ID ASC)
  LOCAL REVERSE TABLESPACE I0;
I would like to know the difference in performance between the unique and non-unique local indexes.
View 10 Replies
View Related
Dec 25, 2007
I want to get the table name, constraint name, constraint type and column name, joined together into a string. This is what I want: alter table tablename add constraint constraintname constrainttype (columnname)
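A hedged sketch against the data dictionary; LISTAGG assumes 11.2 or later, and only primary key and unique constraints are decoded here. DBMS_METADATA.GET_DDL is the supported alternative if the exact original DDL is wanted.
SELECT 'alter table ' || c.table_name
       || ' add constraint ' || c.constraint_name || ' '
       || DECODE(c.constraint_type, 'P', 'primary key', 'U', 'unique')
       || ' (' || LISTAGG(cc.column_name, ', ')
                    WITHIN GROUP (ORDER BY cc.position) || ')' AS ddl_text
FROM   user_constraints  c
JOIN   user_cons_columns cc
       ON cc.constraint_name = c.constraint_name
WHERE  c.constraint_type IN ('P', 'U')
GROUP  BY c.table_name, c.constraint_name, c.constraint_type;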
View 1 Replies
View Related
Dec 27, 2011
Procedure versus function: what exactly is the reason to go for a function rather than a procedure, or vice versa?
View 3 Replies
View Related
Aug 26, 2011
In the article regarding gathering CBO statistics, it states: "When an Oracle database is created, a job will be scheduled that will generate the database statistics for you. You will still need to collect system statistics however, as these are not collected by the automatic statistics gathering mechanism."
what is the difference between "database statistics" and "system statistics"? In other words, do I need to run this script for each schema owner in my 10g/11g instance?
variable whoami varchar2(20);
begin
  select user into :whoami from dual;
end;
/
exec dbms_stats.gather_schema_stats( -
  ownname          => :whoami, -
  options          => 'GATHER AUTO', -
  estimate_percent => 15, -
  cascade          => true);
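"Database statistics" in that quote are the object (schema) statistics that the automatic job already gathers for every schema, so the script above should not be needed per schema owner. "System statistics" are the CPU and I/O characteristics of the host, gathered once per instance with DBMS_STATS.GATHER_SYSTEM_STATS; a hedged sketch (the 60-minute interval is just an example):
exec dbms_stats.gather_system_stats(gathering_mode => 'START');
-- ...let a representative workload run...
exec dbms_stats.gather_system_stats(gathering_mode => 'STOP');

-- or over a fixed window in one call:
exec dbms_stats.gather_system_stats(gathering_mode => 'INTERVAL', interval => 60);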
View 2 Replies
View Related
Oct 6, 2011
If one of the columns is given as
ABC VARCHAR2(10)
what is the size in bytes of the data that this column is going to hold?
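A hedged sketch showing how to check it yourself: with the default byte-length semantics, VARCHAR2(10) holds at most 10 bytes of actual data (variable length, no padding), while VARCHAR2(10 CHAR) holds up to 10 characters, which can be more than 10 bytes in a multi-byte character set. VSIZE and LENGTHB report what a stored value really occupies.
CREATE TABLE demo_len (abc  VARCHAR2(10),          -- 10 bytes when NLS_LENGTH_SEMANTICS = BYTE
                       abc2 VARCHAR2(10 CHAR));    -- 10 characters, possibly more than 10 bytes

INSERT INTO demo_len VALUES ('abc', 'abc');

SELECT LENGTH(abc)  AS chars,
       LENGTHB(abc) AS bytes,
       VSIZE(abc)   AS stored_bytes
FROM   demo_len;     -- 3 / 3 / 3 for 'abc' in a single-byte character set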
View 5 Replies
View Related
Mar 3, 2010
What's the difference between a dirty buffer and a redo buffer?
My understanding is that a dirty buffer is a changed buffer: whenever data changes in the buffer cache, the buffer is marked as dirty. A redo buffer keeps track of the changes that were made to the data, so it also refers to changed data. DBWn writes dirty buffers to disk and LGWR writes redo data to the redo log files. How can we differentiate between the two?
View 2 Replies
View Related
Jan 18, 2013
what is the difference between incremental and differential backup?
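In RMAN terms the distinction is between differential and cumulative level 1 incremental backups: a differential backs up blocks changed since the most recent level 1 or level 0, while a cumulative backs up everything changed since the last level 0 (bigger backups, but faster recovery). A hedged sketch:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              # baseline
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;              # differential (the default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # cumulative since the level 0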
View 5 Replies
View Related
Oct 7, 2010
I have a function which returns preformatted SQL output, but with duplicates, as follows:
FSI
..FSIL
..FSIL
....IS123
....IS123
....IS345
....IS345
....IS547
....IS547
..FSIR
..FSIR
....IS98777
....IS98777
....IS34567
....IS34567
....IS67799
....IS67799
I have to eliminate the duplicates from the above result set without changing the order; a DISTINCT on the function's output returns a jumbled result set.
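A hedged sketch, assuming the function's output can be wrapped in an inline view: tag each row with its original position via ROWNUM, keep only the first occurrence of each value, and re-order by the saved position (the function reference is a placeholder).
SELECT val
FROM  (SELECT val,
              rn,
              ROW_NUMBER() OVER (PARTITION BY val ORDER BY rn) AS occurrence
       FROM  (SELECT ROWNUM AS rn, val
              FROM   TABLE(my_fn)))        -- placeholder for however the rows are produced
WHERE  occurrence = 1
ORDER  BY rn;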
View 1 Replies
View Related
Feb 21, 2012
I need to take the distinct values from a VARRAY. I have written the following simple example, but it does not work. How do I get the distinct values from a VARRAY?
declare
type t is varray(10) of varchar2(10);
t1 t;
type r is table of varchar2(10) index by binary_integer;
r1 r;
begin
t1 := t('A','B','A','B','A','B','C');
select distinct * into r1 from table(select * from t1);
END;
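Two things break in the block above: TABLE() needs a collection type known to SQL (not one declared inside the block, at least before 12c), and you cannot SELECT ... INTO an index-by table like that. A hedged sketch that stays inside PL/SQL and uses an associative array keyed by the value to weed out duplicates:
declare
  type t is varray(10) of varchar2(10);
  t1 t := t('A','B','A','B','A','B','C');
  type dedup_t is table of pls_integer index by varchar2(10);
  seen dedup_t;
  v varchar2(10);
begin
  for i in 1 .. t1.count loop
    seen(t1(i)) := 1;              -- duplicate keys simply overwrite
  end loop;
  v := seen.first;
  while v is not null loop
    dbms_output.put_line(v);       -- A, B, C (ordered by key)
    v := seen.next(v);
  end loop;
end;
/
Alternatively, if the collection type is created at SQL level with CREATE TYPE, then SELECT DISTINCT column_value BULK COLLECT INTO ... FROM TABLE(t1) also works.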
View 1 Replies
View Related
Jan 16, 2013
We had an issue with a PL/SQL package taking hours to run as a concurrent program. Database version is 10.2.0.4.0, running on Linux x86 64-bit. A tkprof'd trace file revealed the problem SQL statement to be a cursor. This one SQL statement would run for 3+ hours. I copied the SQL statement and ran it in TOAD and it completed in seconds, returning the exact same result set. To resolve the issue in the PL/SQL package I created a global temp table and ran the exact same SQL statement as an INSERT into the global temp table.
Again, instead of hours, the SQL statement completes in seconds. If I revert the change, it goes back to taking hours. I've attached the relevant sections of the tkprof showing the two SQL statements (identical other than the insert in front of one) and the resulting explain plans and performance data. I've always been under the impression that a cursor was a better option than a temp table and I've never run into a situation where the same SQL statement runs so much longer when executed as a cursor.
Attached File(s)
SQL_As_Cursor.jpg ( 274.02K )
Number of downloads: 7
Explain_for_SQL_As_Cursor.jpg ( 189.43K )
Number of downloads: 4
SQL_as_Insert.jpg ( 277.38K )
Number of downloads: 4
Explain_for_SQL_As_Insert.jpg ( 180.66K )
Number of downloads: 2
View 2 Replies
View Related
Dec 9, 2008
Environment Setup
Oracle Server 11g on HP-UX
Oracle Client on Windows
I am using the Swingbench tool to generate load on the DB, and using an OLTP-like benchmark I am comparing the performance of plain data and encrypted data.
I have created two different databases, one for TDE and the other for plain data, and populated the same number of rows in both. Then I start running the benchmark, and I use sar to collect disk I/O and VSAR for CPU usage.
From the sar report it seems that,
The plain database has faster transactions and uses less CPU. But when I look at the reads/writes, TDE shows lower figures than the plain database.
If TDE needs to encrypt the data before storing it on disk, it should occupy more space than the plain data, so the I/O should be higher with TDE.
Note: the DB parameters are the same, the number of rows in the tables is the same, and the file system and its block size are the same. I run swingbench separately for each of the two databases.
I am attaching the Excel sheet with the sar results. Let me know if you need more information.
View 7 Replies
View Related
Oct 19, 2010
I want to move data between two instances, and I recommended we create a local database link to PULL data from the remote database located here (a supplier on site), but they want to PUSH data to us. I thought you could only PULL data over a database link, but then I read the link [URL] where PUSH is considered. I was going to use a standard CTAS like CREATE TABLE A AS SELECT * FROM A@<remote_db_link>, which works well and fast (tried and tested), but some are saying they think PUSH is quicker/better.
We do have a data "PUSH" already, but this does not use a db link; effectively it calls a local procedure here and passes one row of data at a time, which is slow: for a 1000-row table to be pushed to us, our local procedure is called 1000 times.
I have always suggested a PULL over a db link as the fastest method. Is there any proof or information on a fast PUSH method (that is quicker than PULL)? Can you really push?
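Yes, you can push: a database link created at the supplier's (source) site lets them drive one set-based statement into your database, which avoids the row-by-row procedure calls. A hedged sketch run at the source side (link, credentials and table names are placeholders):
CREATE DATABASE LINK your_site_link
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'YOURDB';            -- TNS alias of your database, defined at the supplier site

INSERT INTO table_a@your_site_link
SELECT * FROM table_a;       -- one set-based push instead of 1000 procedure calls

COMMIT;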
View 2 Replies
View Related
Jul 20, 2010
With a very large database (VLDB) for a data warehouse (DW) using a primarily STAR-based schema, in an environment in which time (both human and CPU) is orders of magnitude more valuable than storage capacity, is there any significant difference in query performance when tables have all fixed-length (CHAR) columns compared to tables with variable-length (VARCHAR2) columns?
I realize this is one of those "in general" questions, so considering "a given VLDB DW environment" with all other things being equal: what, if any, is the time-based performance difference between a database whose tables have all fixed-size columns and one whose tables have variable-length columns?
View 2 Replies
View Related