I'm having trouble creating a view that has a NOT NULL column. Using this script you can see that the result doesn't carry a NOT NULL constraint on the first column, even though both source columns behind it are NOT NULL. Is there any way to force the view to mark that first column as NOT NULL? (I need it for ODP.NET; otherwise I get an error there.)
DROP TABLE MYTABLE;

CREATE TABLE MYTABLE
( COL1 NUMBER(2) NOT NULL,
  col2 NUMBER(2) );

DROP TABLE MYTABLE2;

CREATE TABLE MYTABLE2
SQL> DESCRIBE Stu_Table
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STU_ID                                             VARCHAR2(2)
 STU_NAME                                           VARCHAR2(10)
 STU_CLASS                                          VARCHAR2(10)
Now when I try to modify this Stu_Id column to NOT NULL, it gives me an error:
SQL> ALTER TABLE Stu_Table MODIFY Stu_Id int(3) not null;
ALTER TABLE Stu_Table MODIFY Stu_Id int(3) not null
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option
And when I try to add a new column as NOT NULL, it also gives me an error:
SQL> ALTER TABLE Stu_Table add C1_TEMP integer NOT NULL;
ALTER TABLE Stu_Table add C1_TEMP integer NOT NULL
*
ERROR at line 1:
ORA-01758: table must be empty to add mandatory (NOT NULL) column
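For what it's worth, a hedged sketch of the statements that normally get past both errors: ORA-01735 is usually just the INT(3) syntax (not valid in Oracle), and ORA-01758 goes away when the new mandatory column gets a DEFAULT so existing rows can be back-filled. This assumes no existing row has a NULL Stu_Id.

-- Sketch only: MODIFY ... NOT NULL succeeds when no existing row is NULL;
-- the DEFAULT lets Oracle populate existing rows before enforcing NOT NULL.
ALTER TABLE Stu_Table MODIFY Stu_Id NOT NULL;
ALTER TABLE Stu_Table ADD C1_TEMP INTEGER DEFAULT 0 NOT NULL;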
The attached query runs fine when inline view B (marked in the comments) returns a value. If inline view B returns NULL, it fails to return a result. I want it so that, if inline view B returns NULL, it passes ZERO to the main query instead.
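Since the actual query isn't posted, here is only a hedged illustration of the usual fix: wrap the value coming out of inline view B in NVL so that NULL becomes zero (all names below are made up).

SELECT a.emp_id,
       NVL((SELECT SUM(b.amount)              -- stands in for "inline view B"
            FROM   payments b
            WHERE  b.emp_id = a.emp_id), 0) AS total_paid
FROM   employees a;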
I have a table that mimics a hierarchy of items (parent-child relationship). The top level (level 1) never has a NULL value, but it is possible for any of the lower levels to have a NULL value. For example, levels 1, 2, 5, 7, and 8 may have values, while 3, 4, and 6 may all be NULL.
I need a view that can return the same data, but without NULL values. The view would move the NOT NULL values up the levels, so there is consecutive data at each level starting at level 1. Using the example, the view would move the value in 5 up to 3, 7 up to 4, and 8 up to 5. Then levels 6 through 8 would be NULL.
I have done this already by creating a second table and using a stored procedure to copy the data over to the new table the way I want it. But I was told a view or materialized view "may" work, and perform more quickly during queries. I looked at the Oracle functions LAG and LEAD, but by definition they work across rows; I need to work across columns within the same row.
LAG  (value_expression [, offset] [, default]) OVER ([query_partition_clause] order_by_clause)
LEAD (value_expression [, offset] [, default]) OVER ([query_partition_clause] order_by_clause)
Is what I am asking possible in a View or Materialized View?
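It should be possible in a plain view. A hedged pure-SQL sketch, assuming a hypothetical table ITEMS(item_id, lvl1 ... lvl8): unpivot the level columns to rows (UNPIVOT drops NULLs by default), renumber the surviving values per item, and pivot back, which shifts the values left within the same row.

CREATE OR REPLACE VIEW items_compacted AS
SELECT *
FROM (
  SELECT item_id,
         val,
         ROW_NUMBER() OVER (PARTITION BY item_id ORDER BY lvl) AS new_lvl
  FROM   items
  UNPIVOT (val FOR lvl IN (lvl1 AS 1, lvl2 AS 2, lvl3 AS 3, lvl4 AS 4,
                           lvl5 AS 5, lvl6 AS 6, lvl7 AS 7, lvl8 AS 8))
)
PIVOT (MAX(val) FOR new_lvl IN (1 AS lvl1, 2 AS lvl2, 3 AS lvl3, 4 AS lvl4,
                                5 AS lvl5, 6 AS lvl6, 7 AS lvl7, 8 AS lvl8));

The same SELECT could back a materialized view if query-time performance matters more than storage.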
I have a table with column A which contains very few null values. I need to select these rows. I am considering two options:
a) create a function-based index on NVL(A, 0) and use NVL(A, 0) = 0 in the WHERE clause (the column has no 0 values)
b) create a function-based index on NVL2(A, 0, NULL) and use NVL2(A, 0, NULL) = 0 in the WHERE clause
My first idea was option A. But I realized that with option B the index will be much smaller, because most values of column A are not NULL, so NVL2 will return NULL and the index will not have as many leaf blocks as with NVL. Is it a good idea to use NVL2? Is there anything against using option B instead of A?
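A hedged sketch of both variants (placeholder names), with one caveat: as written, NVL2(A, 0, NULL) evaluates to 0 for the NOT NULL rows, so it would index the wrong set; to keep only the NULL rows in the index, the arguments need to be the other way around, i.e. NVL2(A, NULL, 0). Because an index entry whose key is entirely NULL is not stored in a B-tree index, the swapped NVL2 index contains only the few rows where A IS NULL.

CREATE INDEX ix_t_a_nvl  ON t (NVL(a, 0));          -- option a: every row gets an index entry
CREATE INDEX ix_t_a_nvl2 ON t (NVL2(a, NULL, 0));   -- option b: only rows where a IS NULL are indexed

-- The WHERE clause must repeat the exact indexed expression:
SELECT * FROM t WHERE NVL2(a, NULL, 0) = 0;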
I have a table with columns emp_i, LOC_C and SUBSID_C. I want to find all emp_i values for which LOC_C or SUBSID_C is always NULL. Please note that the value must be NULL for all dates, without exception.
The query should return 102, because LOC_C or SUBSID_C is always NULL; should return 103, because LOC_C is always NULL; and should not return 101, because LOC_C is not always NULL. In other words, the query should give the list of emp_i who never had a non-NULL value for LOC_C or SUBSID_C. The purpose is to find the emp_i for which the columns LOC_C and SUBSID_C are never used.
I tried the query:

SELECT DISTINCT ORG_EMP_I
FROM   tab1
WHERE  ORG_GRP_I = 58
AND    ORG_EMP_I NOT IN (
         SELECT DISTINCT org_emp_i
         FROM   tab1 ap
         WHERE  ap.ORG_GRP_I = 58
         AND    trim(ap.LOC_C) IS NOT NULL
         OR     ap.ORG_SBSID_C IS NOT NULL
       )
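As an aside, the OR in that subquery is not parenthesised, so the ORG_GRP_I = 58 filter only applies to the LOC_C branch. A hedged single-pass alternative (assuming the second column really is ORG_SBSID_C, as in the subquery, and relying on COUNT of a column counting only non-NULL values):

SELECT org_emp_i
FROM   tab1
WHERE  org_grp_i = 58
GROUP  BY org_emp_i
HAVING COUNT(TRIM(loc_c)) = 0        -- LOC_C never has a non-blank value
    OR COUNT(org_sbsid_c) = 0;       -- or SUBSID_C never has a value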
I can't create a unique constraint on these columns because there are many null values for column colX, and as mentioned, when colX is null, colY and colZ can be any values.
I also tried using a BEFORE INSERT trigger to find duplicates before posting and raise an error if any were found, but this causes an ORA-04091 mutating table error, since the trigger on the table references the same table to check for duplicates.
Also, I know there is something called a function-based index, but I cannot use those with my code, so I need another solution if possible.
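If a function-based index really is off the table, one workaround that avoids ORA-04091 is a statement-level AFTER trigger, since statement triggers are allowed to read the table they fire on. A rough sketch, assuming a hypothetical table t(colX, colY, colZ) and assuming the rule is "when colX is not null, (colX, colY, colZ) must be unique":

CREATE OR REPLACE TRIGGER trg_t_no_dups
AFTER INSERT OR UPDATE ON t
DECLARE
  v_dups NUMBER;
BEGIN
  -- Look for duplicate combinations among rows where colX is filled in.
  SELECT COUNT(*)
  INTO   v_dups
  FROM  (SELECT colX, colY, colZ
         FROM   t
         WHERE  colX IS NOT NULL
         GROUP  BY colX, colY, colZ
         HAVING COUNT(*) > 1);

  IF v_dups > 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Duplicate (colX, colY, colZ) combination');
  END IF;
END;
/

It scans the whole table on every DML statement, so it is slower and not as airtight under concurrent sessions as a unique index would be.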
Our organization is attempting to learn more about the partitioning features of Oracle 11g. I've been reading the partitioning manuals, and I have not found a clear answer on this topic, but I suspect I know the answer.
If you create a range-partitioned table using interval partitioning, say something like this:
CREATE TABLE range_parti (
  CODE         NUMBER(5),
  DESCRIPTION  VARCHAR2(50),
  CREATED_DATE DATE)
PARTITION BY RANGE (created_date)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
( PARTITION my_parti VALUES LESS THAN (TO_DATE('01-NOV-2007','DD-MON-YYYY')) );
but then try to insert a NULL value as the partition key, you get the following error:
SQL> INSERT INTO range_parti VALUES (1,'one',NULL);
INSERT INTO range_parti VALUES (1,'one',NULL)
*
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

Elapsed: 00:00:00.07
Is there no way to tell it to use a default partition for NULL values? Or to specifically designate a partition for NULL values WITHOUT having to manually list out each partition? It seems to work if you don't use the INTERVAL keyword, list out your partitions, and use MAXVALUE, as sketched below. However, we hope to avoid that, as it creates monstrously huge DDL statements for tables with lots of date ranges, and we would be forced to manually add new partitions each month as data is added and time passes.
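For reference, a minimal sketch of that non-interval variant: with explicit ranges plus MAXVALUE, rows with a NULL CREATED_DATE fall into the MAXVALUE partition, but every range has to be spelled out and maintained by hand.

CREATE TABLE range_parti_manual (
  CODE         NUMBER(5),
  DESCRIPTION  VARCHAR2(50),
  CREATED_DATE DATE)
PARTITION BY RANGE (created_date)
( PARTITION p_2007_11  VALUES LESS THAN (TO_DATE('01-NOV-2007','DD-MON-YYYY')),
  PARTITION p_2007_12  VALUES LESS THAN (TO_DATE('01-DEC-2007','DD-MON-YYYY')),
  PARTITION p_maxvalue VALUES LESS THAN (MAXVALUE) );   -- NULL partition keys land here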
From my experience so far, it appears that if your column allows NULLs, you cannot use interval range partitioning on that column.
In V$ARCHIVED_LOG, why does the NAME column show no content?
select name,SEQUENCE#,ARCHIVED,APPLIED,DELETED,STATUS,COMPLETION_TIME from v$archived_log
NAME                            SEQUENCE# ARC APP DEL S COMPLETIO
------------------------------ ---------- --- --- --- - ---------
                                     2565 YES NO  YES D 20-JUL-10
                                     2566 YES NO  YES D 21-JUL-10
                                     2567 YES NO  YES D 21-JUL-10
                                     2568 YES NO  YES D 22-JUL-10
                                     2569 YES NO  YES D 22-JUL-10
                                     2570 YES NO  YES D 22-JUL-10
In the below code, do I need the 'NOT NULL' after the 'state char(2)'? I am guessing that I do not need it since I have the CHECK constraint on the column.
CREATE TABLE employee(
  id     PRIMARY KEY,
  first  varchar(20) NOT NULL,
  middle varchar(20),
[code]....
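For what it's worth, the usual answer is yes, you still need it: a CHECK constraint such as state IN ('CA','NY', ...) evaluates to UNKNOWN for a NULL state, and UNKNOWN is not treated as a violation, so NULLs slip through. A hedged illustration (the column list is a hypothetical completion of the truncated DDL above):

CREATE TABLE employee (
  id     NUMBER PRIMARY KEY,
  first  VARCHAR2(20) NOT NULL,
  middle VARCHAR2(20),
  state  CHAR(2) NOT NULL CHECK (state IN ('CA','NY','TX'))
);

-- Without the NOT NULL this insert would succeed despite the CHECK;
-- with it, Oracle raises ORA-01400.
INSERT INTO employee (id, first, state) VALUES (1, 'Ann', NULL);

If the CHECK itself were written as CHECK (state IS NOT NULL), the separate NOT NULL would indeed be redundant.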
I have a table with multiple rows per KEY attribute (it's not a primary key) and a Rank for each row.
I want a query that fetches one row per KEY attribute. The row with the lower Rank should be considered, but if any column's value is NULL in that row, the value from the next Rank should be used instead.
WITH TMP_TBL AS (
  SELECT * FROM (
    SELECT 'A' DUN, '1' RNK, 'A21' col1, NULL col2, 'A41' col3, NULL col4 FROM dual UNION ALL
    SELECT 'A', '2', 'A122', 'A23', NULL, NULL FROM dual UNION ALL
    SELECT 'A', '3', 'A32', 'A33', NULL, 'A35' FROM dual
[code].......
DUN is the KEY attribute. RNK is the Rank for each row. COL1 ... COL4 are data attributes.
I want this to be done with SQL only. I tried various ways, but none were successful. Finally I created a multi-row function, row_nvl, and it worked.
SELECT DUN,
       row_nvl(rownvl_param_type(RNK, col1)),
       row_nvl(rownvl_param_type(RNK, col2)),
       row_nvl(rownvl_param_type(RNK, col3)),
       row_nvl(rownvl_param_type(RNK, col4))
FROM   TMP_TBL
GROUP  BY DUN
But I don't think my manager will allow me to deploy a multi-row function.
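If it helps, the same result can usually be had in plain SQL with MAX ... KEEP (DENSE_RANK FIRST ...): for each column, keep the value from the lowest-RNK row in which that column is not NULL. A hedged sketch against the TMP_TBL data above:

SELECT dun,
       MAX(col1) KEEP (DENSE_RANK FIRST
           ORDER BY CASE WHEN col1 IS NULL THEN 1 ELSE 0 END, rnk) AS col1,
       MAX(col2) KEEP (DENSE_RANK FIRST
           ORDER BY CASE WHEN col2 IS NULL THEN 1 ELSE 0 END, rnk) AS col2,
       MAX(col3) KEEP (DENSE_RANK FIRST
           ORDER BY CASE WHEN col3 IS NULL THEN 1 ELSE 0 END, rnk) AS col3,
       MAX(col4) KEEP (DENSE_RANK FIRST
           ORDER BY CASE WHEN col4 IS NULL THEN 1 ELSE 0 END, rnk) AS col4
FROM   tmp_tbl
GROUP  BY dun;

The CASE pushes NULLs to the end of each column's ordering, so the first kept row is the lowest-ranked one that actually has a value.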
I don't want to print the repeated value (NAME) of C1 multiple times, as below.
C1    C2      C3  C4
NAME  JOHN    10  ABC
      SMITH   30  DEF
      ROBERT  60  XYZ
I could do it using the below query, using UNION with ROWNUM.
select *
from  ( select rownum rn, c1, c2, c3, c4 from table_new )
where rn = 1
union
select *
from  ( select rownum rn, decode(c1, null, null), c2, c3, c4 from table_new )
where rn between 2 and 3
Is there any other way of displaying this using a single SQL query?
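If the formatting is done in SQL*Plus, BREAK ON c1 does this client-side. In a single SQL query, a hedged sketch is to blank out C1 whenever it matches the previous row's value via LAG (this assumes there is some column, C2 here, that gives a deterministic display order):

SELECT CASE
         WHEN c1 = LAG(c1) OVER (ORDER BY c2) THEN NULL
         ELSE c1
       END AS c1,
       c2, c3, c4
FROM   table_new
ORDER  BY c2;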
I need a generic query that generates, for each table in a schema, the total number of records in the table and, for each column of that table, the number of records where the column is not NULL and the number where it is NULL.
ex:
the output should look like this.
OWNER   SCHEMA   TABLE_NAME   TOTAL # RECS IN TABLE   COLUMN_NAME   # RECS NOT NULL   # RECS NULL
------  -------  -----------  ----------------------  ------------  ----------------  ------------
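One hedged way to get close to this without dynamic SQL is to read the optimizer statistics: NUM_ROWS and NUM_NULLS in the dictionary views hold exactly these figures, as of the last DBMS_STATS gather (so they are approximate, not live counts). The schema name below is a placeholder.

SELECT t.owner,
       t.table_name,
       t.num_rows                    AS total_recs,
       c.column_name,
       t.num_rows - c.num_nulls      AS recs_not_null,
       c.num_nulls                   AS recs_null
FROM   all_tables      t
JOIN   all_tab_columns c
       ON  c.owner      = t.owner
       AND c.table_name = t.table_name
WHERE  t.owner = 'MYSCHEMA'
ORDER  BY t.table_name, c.column_id;

Exact, up-to-the-second counts would instead need dynamic SQL, e.g. a PL/SQL loop over USER_TAB_COLUMNS issuing one COUNT per column.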
Previously I left the column nullable and created some rows. Now I need newly entered values in that column to be NOT NULL, without disturbing the old records. How can I do that?
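A hedged sketch of one way to do this (hypothetical names my_tab.my_col): a check constraint added with ENABLE NOVALIDATE is enforced for newly inserted or updated rows, but the existing rows are never re-checked, so the old NULLs are left alone.

ALTER TABLE my_tab
  ADD CONSTRAINT my_col_not_null
  CHECK (my_col IS NOT NULL) ENABLE NOVALIDATE;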
I am having an issue with IMPDP on Oracle virtual columns. I have the following table with a virtual column defined as NOT NULL. EXPDP runs fine without any issue.
DDL:
------
CREATE TABLE alert_hist (
  alertky         INTEGER NOT NULL,
  alertcreatedttm TIMESTAMP(6) DEFAULT systimestamp NOT NULL,
  alertcreatedt   DATE GENERATED ALWAYS AS (To_date(Trunc("alertcreatedttm"))) VIRTUAL NOT NULL
When I do the import (IMPDP), it fails with the following error:
. . imported "TESTSCHEMA"."VALART"                      359.1 KB    4536 rows
ORA-31693: Table data object "TESTSCHEMA"."ALERT_HIST" failed to load/unload and is being skipped due to error:
ORA-39097: Data Pump job encountered unexpected error -1
After that I dropped the virtual NOT NULL column and recreated the column as nullable.
DDL:
-----
alter table alert_hist drop column alertcreatedt;
alter table alert_hist add alertcreatedt DATE GENERATED ALWAYS AS (To_date(Trunc("alertcreatedttm"))) VIRTUAL;
After that I took the expdp and impdp again, and it went through without any issue.
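If the NOT NULL rule still matters after the import, a hedged follow-up is to reinstate it as a check constraint on the virtual column rather than as a column-level NOT NULL (NOVALIDATE if the imported rows should not be re-checked):

ALTER TABLE alert_hist
  ADD CONSTRAINT alert_hist_createdt_nn
  CHECK (alertcreatedt IS NOT NULL) ENABLE NOVALIDATE;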
In my real view, the calculated diff column is very long (with CASE expressions, etc.), and I must reuse that calculated column many times to compute other columns. I do not want to use a subquery, because there are many joins to do in the same query. How can I reuse a calculated column in the same view?
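A hedged sketch of one option: a factored subquery (WITH clause) inside the view computes diff once, and the outer SELECT reuses it; the joins still appear only once, inside the factored block. All names below are placeholders.

CREATE OR REPLACE VIEW my_view AS
WITH base AS (
  SELECT t.id,
         CASE WHEN t.col_a > t.col_b THEN t.col_a - t.col_b
              ELSE 0
         END AS diff                      -- the long calculated column, written once
  FROM   t
  JOIN   u ON u.id = t.id
)
SELECT id,
       diff,
       diff * 2   AS double_diff,
       diff + 100 AS shifted_diff
FROM   base;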
My boss has a requirement against an existing database: some users should be able to view the salary column of the employment table via SQL, and some users should not.
The boss does not want to make changes to the front-end SQL. Is Oracle 11g Database Vault or Oracle Label Security the better fit for this requirement? My OS is 32-bit Windows 2008 and the DB is 11.2.0.1.
I am running a GROUP BY query on a few columns of enumerated data like:
select count(*), Condition, Size group by Condition, Size;
COUNT(*)  CONDITION  SIZE
--------  ---------  ----
       3  MINT       L
       2  FAIR       L
       4  FAIR       M
       1  MINT       S
Well, let's say I also have a timestamp field in the table. I cannot include it in the GROUP BY as-is, because the time is recorded down to the millisecond and is unique for every record. Instead, I want to include it in my GROUP BY based on whether or not it is NULL.
For example:
COUNT(*)  CONDITION  SIZE  SOLDDATE
--------  ---------  ----  --------
       3  MINT       L     ISNULL
       2  FAIR       L     NOTNULL
       2  FAIR       M     NOTNULL
       2  FAIR       M     ISNULL
       1  MINT       S     ISNULL
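A hedged sketch of one way to get that output: group on a derived ISNULL/NOTNULL flag instead of the raw timestamp. Table and column names below are stand-ins (SIZE itself is an Oracle reserved word, so it is renamed here).

SELECT COUNT(*) AS cnt,
       cond_cd,
       size_cd,
       CASE WHEN solddate IS NULL THEN 'ISNULL' ELSE 'NOTNULL' END AS solddate
FROM   stock
GROUP  BY cond_cd,
          size_cd,
          CASE WHEN solddate IS NULL THEN 'ISNULL' ELSE 'NOTNULL' END;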
Basically this view has columns based on the values of a column (MEASURE_ID) in the previous table, and each value will be the corresponding value from the Percentage column.
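Without the surrounding post it is hard to be precise, but a view like that is typically built with PIVOT. A heavily hedged sketch, assuming a hypothetical source table scores(entity_id, measure_id, percentage) and known MEASURE_ID values 1, 2 and 3:

CREATE OR REPLACE VIEW scores_pivot AS
SELECT *
FROM   scores
PIVOT (MAX(percentage) FOR measure_id IN (1 AS measure_1,
                                          2 AS measure_2,
                                          3 AS measure_3));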
I have an issue with a materialized view that has a nullable column; a query on this column takes approximately 2 minutes, whereas queries on other, indexed columns take less than 10 seconds.
Here is the summary
SQL> Select Count (1), Count (VAT_NO) From Mv_customer;
If an index is created on VAT_NO, will that improve performance? What kind of index should be created, considering that very few records have a VAT_NO?
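A hedged note: a plain B-tree index on VAT_NO does not store entries for rows whose key is entirely NULL, so with very few non-NULL values it stays tiny, and an equality predicate such as WHERE vat_no = :x can use it efficiently. If the slow query instead filters on VAT_NO IS NULL, that plain index cannot help and a composite or function-based index would be needed.

CREATE INDEX mv_customer_vat_no_ix ON mv_customer (vat_no);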
I need to create a materialized view with a CLOB column based on a VARCHAR2 column of a table. This is because in the MV the CLOB column data gets appended one after another.
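A hedged sketch of the simple reading of this (names are placeholders): cast the VARCHAR2 source column to CLOB inside the materialized view query with TO_CLOB. If the real requirement is to append several VARCHAR2 rows into one CLOB per key, a string-aggregation step would be needed instead.

CREATE MATERIALIZED VIEW mv_docs
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT doc_id,
       TO_CLOB(doc_text) AS doc_text_clob
FROM   docs;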
I want to implement a business rule such that, for each id, we have at most one row with a NULL dat_cess. So I've created this unique index on test:
create unique index x_only_one_dat_cess_null
  on test (id, case when dat_cess is null then 'NULL' else to_char(dat_cess, 'dd/mm/yyyy') end);
insert into test values (1, sysdate);
insert into test values (1, sysdate - 1);
insert into test values (1, null);
insert into test values (1, null);
-- -----
insert into test values (2, sysdate);
insert into test values (2, sysdate - 1);
insert into test values (2, null);
The 4th insert causes an error, and this is what I wanted to implement. OK. Now the problem is that, for non-NULL values of dat_cess, we can't have data like this
because of the unique index (the 2nd and the 3rd rows are equal). So, just for learning purposes, how could we allow at most one NULL value of dat_cess per id while still allowing duplicates for non-NULL values of dat_cess?
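One hedged way to get that behaviour is to index an expression that is non-NULL only for the rows where dat_cess is NULL; rows with a real date then produce an all-NULL key, which a B-tree unique index simply does not store, so duplicates among them are allowed.

DROP INDEX x_only_one_dat_cess_null;

CREATE UNIQUE INDEX x_only_one_dat_cess_null
  ON test (CASE WHEN dat_cess IS NULL THEN id END);

With this index the 4th insert above still fails, while repeating the same non-NULL date for an id no longer does.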