SQL & PL/SQL :: ORA-01400 / Cannot Insert NULL Into Columns
Sep 3, 2010
I am getting the following error while trying to insert values into the columns of a table.
ORA-01400: cannot insert NULL into ("demo"."col1"."col2")
Oracle version 11.2.0
OS Linux
I have a table with no primary key constraint and some rows containing NULL values/duplicates. I then decided to alter the table to add a composite primary key constraint on four columns (a, b, c, and d). I did this by using the same script that was used to create the original table, but this time adding the NOT NULL constraints.
I then took an export of the original table. I now want to import the data into the newly created table, but I am getting the error: ORA-01400: cannot insert NULL into (string).
I would like to perform the import without the NULL rows. Is there a parameter in impdp that I can use? I tried DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS but it didn't work.
Besides the impdp options, is there a way to do an insert statement like insert into table_a (select * from table) excluding NULL rows?
Basically, I need to load the data into the newly created table without the NULL rows.
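For what it's worth, a plain SQL alternative is to filter the offending rows during the insert; a minimal sketch, assuming the new table is new_table, the old one is old_table, and the key columns are a, b, c and d:
insert into new_table
select *
from old_table
where a is not null
  and b is not null
  and c is not null
  and d is not null;
commit;
Note that duplicate (a, b, c, d) combinations would still violate the composite primary key and need separate handling.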
I am getting the above error while trying to run my control file in SQL*Loader, as I need to load CSV data into Oracle. What sequence do I need to write so that I do not face the above error?
I have 8 columns, some of which might be NULL. I want to display all 8 columns in my result, with the not-null columns first and the NULL columns at the end. Here is some sample data:
Employee table :
Employee_id Emp_fname emp_lname emp_mname dept salary emp_height emp_weight
1 aaa ddd d1 100 6 180
2 bbb ccc 120 169
3 dfe d2 5.9 223
The expected result is :
result1 result2 result3 result4 result5 result6 result7 result8
1 aaa ddd d1 100 6 180
2 bbb ccc 120 169
3 dfe d2 5.9 223
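For reference, one hedged sketch of a first step (assuming the table and columns shown above): pack the non-null values into a delimited string with NVL2, which pushes them to the front in column order; splitting that string back into result1..result8 could then be done with REGEXP_SUBSTR.
select employee_id,
       rtrim(
            nvl2(emp_fname, emp_fname || '|', '')
         || nvl2(emp_lname, emp_lname || '|', '')
         || nvl2(emp_mname, emp_mname || '|', '')
         || nvl2(dept, dept || '|', '')
         || nvl2(to_char(salary), salary || '|', '')
         || nvl2(to_char(emp_height), emp_height || '|', '')
         || nvl2(to_char(emp_weight), emp_weight || '|', ''), '|') as packed_values
from employee;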
I have a table containing hundreds of columns and I would like to be able to qualify my select statements so that only those columns containing a value are returned. Something like:
Select (non null columns) from tablename where columnX = 'whatever'
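A SELECT list cannot vary per row, but one hedged way to find out which columns actually hold data for the rows you care about (table, column and filter names here are placeholders) is a small PL/SQL block that counts non-null values per column; you could then paste only those columns into your SELECT:
set serveroutput on
declare
  l_cnt number;
begin
  for c in (select column_name
              from user_tab_columns
             where table_name = 'TABLENAME') loop
    -- count(col) only counts non-null values
    execute immediate
      'select count(' || c.column_name || ') from tablename where columnX = :x'
      into l_cnt using 'whatever';
    if l_cnt > 0 then
      dbms_output.put_line(c.column_name);
    end if;
  end loop;
end;
/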
We are working on a migration project and we need to move 75 million rows from source system to target system.
Total number of columns in source system - 90 cols.
Out of the 90 columns 10 cols are system fields and rest 80 are properties for each record.
We are required to migrate all system cols and some required properties. In total we will migrate around 25 columns[10+15] for each record.
Before the actual migration, we need to do a data cleansing activity, and hence we move the data to a staging table.
To create the staging table, we considered the approaches below.
1. Create the staging table with around 30 columns so as to fit the data from the source system [map the columns based on datatype].
2. Create the staging table with the actual columns [90 columns] and import only the required properties. All the remaining columns will stay NULL.
Do the data cleansing and move to target system.
My question here is: if we go with approach 2, we will not mix the data, as there will be a one-to-one mapping, but many columns will have no data and remain NULL. Will that affect performance, since we are dealing with 75 million rows?
I want to print 'null' when the column value is NULL. Currently, I am doing something like this:
select empno||','||''''||ename||'''''||','||comm||','||sal from emp
It gives the following output for example
7369,'pointers',,200
If I use the above values to form an insert statement, it throws an error, as the 'comm' value is not there.
I wish to get something like
7369,'pointers','',200
or
7369,'pointers',null,200
from the above select query
Note: I didn't copy-paste the query exactly from my SQL*Plus session, as I am away from my Oracle machine.
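A hedged sketch that should produce the second form (the literal word null in place of the missing value), using NVL on the nullable column:
select empno || ',''' || ename || ''',' || nvl(to_char(comm), 'null') || ',' || sal
from emp;
Here nvl(to_char(comm), 'null') substitutes the text null whenever comm is empty, so the generated line reads 7369,'pointers',null,200.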
I am experiencing somewhat the same issue, but have been unable to resolve it (I am new to Oracle). I am getting the infile as a flat file (a data dump from SQL) and using sqlldr to upload the data to the Oracle table. Since the data is already in the flat file, I cannot do anything in SQL to pre-format it.
Sample of the error I am getting on column CREATE_DATE, which holds a date and time; it also happens to the other date/time columns if I remove CREATE_DATE from the control file (and it happens on every single line of the record):
==========================================
Record 2: Rejected - Error on table LGCY_CHS.METS_CHS_USER_PRIV, column CREATE_DATE.
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
[code]...
Flat file: (3 lines of data)
5|Annie|1|AR|84601D0A-6D9D-4D0F-86EB-2FDD9D7E680B|0|0|1|1|1|0|1|0|kgarbin|XPLTMCE01|2005-04-07 13:54:42.087|Annie|VAXP60|2008-10-03 16:54:59.583|2008-10-03 16:54:59.583
11|Beverly|1|BA|9A2D6304-E997-4B40-96E5-2221E521B077|1|0|0|1|0|0|0|0|kgarbin|XPLTMCE01|2005-04-07 13:54:42.087|BEVERLY|VAXP60|2008-10-03 09:39:33.973|2008-10-03 09:39:33.973
29|KGarbin|1|KG|B229FCF9-BED0-4E50-9804-83324B677C67|0|0|1|1|1|1|0|1|kgarbin|XPLTMCE01|2005-04-07 13:54:42.087|Gfoote|VAXP60|2008-09-08 10:05:01.690|2008-09-08 10:05:01.690
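The date strings in the file carry fractional seconds (13:54:42.087), which likely does not match the default date mask. A hedged control-file sketch (hypothetical file, table and column names; only three of the fields are shown) that trims the fraction and supplies an explicit mask:
LOAD DATA
INFILE 'users.dat'
APPEND INTO TABLE mets_chs_user_priv_stg
FIELDS TERMINATED BY '|' TRAILING NULLCOLS
(
  user_id,
  user_name,
  -- keep only 'YYYY-MM-DD HH24:MI:SS' and convert with an explicit mask
  create_date  "to_date(substr(:create_date, 1, 19), 'YYYY-MM-DD HH24:MI:SS')"
)
If the target column is a TIMESTAMP rather than a DATE, declaring the field as TIMESTAMP "YYYY-MM-DD HH24:MI:SS.FF3" in the control file is, as far as I recall, another option.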
I regularly upload various tab-delimited text data files with SQL Loader. The control files always specify TERMINATED BY X'09'. Certain columns in those data files may be null for some rows, i.e. there is no character between two subsequent tabs. It always works like a clock.
Now, I have run into a specific case where I have to strip a data column of double quotes that may or may not enclose the actual data (side effect of a text export from Excel).
I tried simply adding OPTIONALLY ENCLOSED BY '"' after the TERMINATED BY clause, and it did work for the file and column in question. Still, with this new option, files with null values are no longer decoded correctly. The loader seems to simply skip those values, which provokes column shifts and results in either a corrupt load or a load failure.
For now, I've got a workaround dropping the new option and executing an SQL script eliminating double quotes directly in the database, but that obviously cannot last.
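One workaround that might avoid the column shifts is to drop the global OPTIONALLY ENCLOSED BY and declare the enclosure only on the field that actually carries the quotes; a hedged fragment with hypothetical table and field names:
LOAD DATA
INFILE 'data.txt'
APPEND INTO TABLE my_table
(
  col_a      CHAR TERMINATED BY X'09',
  -- only this field strips optional double quotes
  quoted_col CHAR TERMINATED BY X'09' OPTIONALLY ENCLOSED BY '"',
  col_c      CHAR TERMINATED BY X'09'
)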
How can I insert records with NULL values (for some columns) into a table using a loop?
Sample data of x_tab:
order_id order_name
231 xxx
123
345
111 vvvv
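A minimal sketch, assuming x_tab has the two columns shown above and that every other generated row should get a NULL order_name:
begin
  for i in 1 .. 5 loop
    insert into x_tab (order_id, order_name)
    values (400 + i,
            case when mod(i, 2) = 0 then null     -- leave order_name empty on even rounds
                 else 'name_' || i
            end);
  end loop;
  commit;
end;
/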
When we have a primary key on 4 columns and, say, 20 million rows, and we want to add one extra row, how does Oracle check whether the primary key of the row being added is unique compared to the 20 million existing rows? Does it actually compare the record being added against all the rows present in the table?
I would like to UPDATE the columns p1 and p2 of my table student (studentid: pk, name, p1, p2, ...) for a given studentid, and I have a WHEN-BUTTON-PRESSED trigger with this:
UPDATE student
SET student.p1 = :validation.proj1,
    student.p2 = :validation.proj2
WHERE UPPER(student.studentid) = UPPER(:validation.studentid);
commit_form;
When I run my form with a correct studentid, I get this error: FRM-40508: ORACLE error: UNABLE to INSERT record.
But it works correctly in SQL*Plus, and I have all the privileges.
I have the following query:
select col_1, col_9
from book_temp b
where b.col_1 is not null
order by to_number(b.col_16);
What I want to add is the following:
COL_9
=====
NULL
A
B
NULL
C
D
E
F
NULL
G
I need to connect the NON-NULL rows to the preceding NULL row.
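One hedged way to attach each non-null row to the preceding NULL row is a running count of the NULLs seen so far; rows that share the same group number belong to the same NULL "header" row (column names as in the query above):
select col_1,
       col_9,
       count(case when col_9 is null then 1 end)
         over (order by to_number(col_16)
               rows between unbounded preceding and current row) as null_group
from book_temp b
where b.col_1 is not null
order by to_number(b.col_16);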
I am running a GROUP BY query on a few columns of enumerated data like:
select count(*), Condition, Size
group by Condition, Size;
COUNT(*) CONDITION SIZE
-------- ---------- --------
3 MINT L
2 FAIR L
4 FAIR M
1 MINT S
Well, let's say I also have a timestamp field in the table. I cannot run a GROUP BY with that column involved, because the time is recorded to the millisecond and is unique for every record. Instead, I want to include it in my GROUP BY based on whether or not it is NULL.
For example:
COUNT(*) CONDITION SIZE SOLDDATE
-------- ---------- -------- ----------
3 MINT L ISNULL
2 FAIR L NOTNULL
2 FAIR M NOTNULL
2 FAIR M ISNULL
1 MINT S ISNULL
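A hedged sketch of that grouping (the table name is hypothetical, and the size column is renamed size_cd because SIZE is a reserved word): wrap the timestamp in a CASE that collapses it to ISNULL/NOTNULL and group by that expression.
select count(*) as cnt,
       condition,
       size_cd,
       case when solddate is null then 'ISNULL' else 'NOTNULL' end as solddate
from stock_items
group by condition,
         size_cd,
         case when solddate is null then 'ISNULL' else 'NOTNULL' end;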
I have a table which has a NOT NULL column; the column is a DATE field. I am trying to change it to NULL, but it is giving an error. I am using the query below.
ALTER TABLE T_test
modify (paid_to_date null)
I have a task. I am using Oracle Forms 6i. I want to import Excel data into Oracle Forms (a common task using the OLE2 package). But this time I want to map the columns, i.e. my database table has 5 columns and the Excel file has 2 or 3 columns, so I am supposed to map those columns and insert the data into my table accordingly.
So far I have imported the column headings of the Excel file into Oracle Forms, and I have provided a list item for mapping each column so that the user can map an Excel column to a database column. Now I am confused about how to write the code so that the selected columns get inserted into the database.
More details:
I have a table with columns id, name, location, address, plan. I need to insert records into those columns from Excel. The user has an Excel file with 3 columns: col1, col2, col3. On the form I have fetched the column headers of the Excel file, and in front of each one I have provided a database column list, so the user can match an Excel column with a database column, e.g.
COL1 --> list value of database column
COL2 -->list value of database column
COL3 -->list value of database column
Once the user maps those columns, I want to insert the values into my database table (the table with columns id, name, location, address, plan), and I am confused about this code. A rough sketch follows below.
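A very rough sketch of the insert step in Forms 6i, assuming the mapped database column names sit in hypothetical list items :blk.map1..:blk.map3 and the Excel cell values for the current row sit in :blk.val1..:blk.val3; the statement is built as a string and executed with FORMS_DDL:
DECLARE
  l_cols VARCHAR2(2000);
  l_vals VARCHAR2(4000);
BEGIN
  -- column list comes from the user's mapping list items
  l_cols := :blk.map1 || ',' || :blk.map2 || ',' || :blk.map3;
  -- values come from the Excel cells read via OLE2 (quoted as string literals here)
  l_vals := '''' || :blk.val1 || ''',''' || :blk.val2 || ''',''' || :blk.val3 || '''';
  FORMS_DDL('insert into my_table (' || l_cols || ') values (' || l_vals || ')');
  IF NOT FORM_SUCCESS THEN
    MESSAGE('Insert failed for the current row');
  END IF;
END;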
My scenario is to insert values into the 'out' column by comparing the 's' and 'IP' columns of the temp table. The exact situation is: first go to the IP column and take a value, then go to the source column and look for that same IP value. The IP corresponding to that source row should then be inserted back into the previous row's column.
The situation is marked clearly in the file I am attaching, with '--' comments at the respective places. I am also pasting the code I tried; unfortunately it gives the error "exact fetch returns more than requested number of rows", since there are duplicates in the table. I tried it using nested FOR loops, and also implemented it using ROWID, but it didn't work.
I need help fixing the errors, or any new logic that can be implemented.
DECLARE
i_e NUMBER(10);
BEGIN
FOR cur_1 IN(SELECT IP from temp where IP IS NOT NULL)
LOOP
FOR cur_2 IN(SELECT IP from temp where s=cur_1.IP)
[Code]...
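Since the duplicates are what make the single-row fetch blow up, one hedged alternative is to collapse them with an aggregate and do the whole thing in one UPDATE instead of nested loops (this assumes any one of the matching IPs is acceptable and that the target is the 'out' column mentioned above):
update temp t
   set t.out = (select min(t2.ip)
                  from temp t2
                 where t2.s = t.ip)
 where t.ip is not null
   and exists (select 1 from temp t2 where t2.s = t.ip);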
create table test
(
id int ,
dat date
)
/
I want to implement a business rule such that, for each id, there is at most one row with a NULL dat. So I've created this unique index on test:
create unique index x_only_one_dat_null on test(id, case when dat is null then 'NULL' else to_char(dat, 'dd/mm/yyyy') end);
insert into test values (1, sysdate);
insert into test values (1, sysdate - 1);
insert into test values (1, null);
insert into test values (1, null);
-- -----
insert into test values (2, sysdate);
insert into test values (2, sysdate - 1);
insert into test values (2, null);
The 4th insert will cause an error and this is what I wanted to implement. OK. Now the problem is that for non-null values of dat, we can't have data like this
id  dat
--  ----------
1   24/10/2013
1   23/10/2013
1   23/10/2013
1
because of the unique index (the 2nd and the 3rd rows are equal). So, just for learning purposes, how could we allow at most one NULL value of dat while allowing duplicates for non-null values of dat?
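A hedged alternative for that learning exercise: index an expression that is non-null only when dat is NULL, so non-null dates are not indexed at all (duplicates allowed) while a second NULL dat for the same id collides.
-- drop the stricter index from above first
drop index x_only_one_dat_null;

create unique index x_only_one_null_dat
  on test (case when dat is null then id end);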
When I follow the steps mentioned on this website
[URL].........
to modify a column from NULL to NOT NULL, I get an error, while on that website it shows success.
My steps are:
First I create a table.
SQL> create table Stu_Table(Stu_Id varchar(2), Stu_Name varchar(10),
2 Stu_Class varchar(10));
Table created.
Then insert some rows into Stu_Table
SQL> insert into Stu_Table (Stu_Id, Stu_Name) values(1,'Komal');
1 row created.
SQL> insert into Stu_Table (Stu_Id, Stu_Name) values(2,'Ajay');
1 row created.
SQL> insert into Stu_Table (Stu_Id, Stu_Name) values(3,'Rakesh');
1 row created.
SQL> insert into Stu_Table (Stu_Id, Stu_Name) values(4,'Bhanu');
1 row created.
SQL> insert into Stu_Table (Stu_Id, Stu_Name) values(5,'Santosh');
1 row created.
SQL> select * from Stu_Table;
ST STU_NAME STU_CLASS
-- ---------- ----------
1 Komal
2 Ajay
3 Rakesh
4 Bhanu
5 Santosh
Table Structure is like this
SQL> Describe Stu_Table
Name Null? Type
----------------------------------------- -------- ----------------------------
STU_ID VARCHAR2(2)
STU_NAME VARCHAR2(10)
STU_CLASS VARCHAR2(10)
Now when I try to modify this Stu_Id column to NOT NULL, it gives me an error.
SQL>ALTER TABLE Stu_Table MODIFY Stu_Id int(3)not null;
ALTER TABLE Stu_Table MODIFY Stu_Id int(3)not null
*
ERROR at line 1:
ORA-01735: invalid ALTER TABLE option
And when I try to add a new column with NOT NULL, it also gives me an error.
SQL> ALTER TABLE Stu_Table add C1_TEMP integer NOT NULL;
ALTER TABLE Stu_Table add C1_TEMP integer NOT NULL
*
ERROR at line 1:
ORA-01758: table must be empty to add mandatory (NOT NULL) column
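For reference, hedged versions of the two statements that should get past these errors: ORA-01735 comes from the int(3) syntax (keep the MODIFY to just the NOT NULL change), and ORA-01758 can be avoided by giving the new mandatory column a DEFAULT so the existing rows get a value.
ALTER TABLE Stu_Table MODIFY (Stu_Id NOT NULL);

-- a DEFAULT lets the existing rows be populated, so the column can be mandatory
ALTER TABLE Stu_Table ADD (C1_TEMP NUMBER DEFAULT 0 NOT NULL);
Changing the datatype of Stu_Id itself is a separate issue, since a datatype change normally requires the column to be empty.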
Initially I have inserted the data into the table like this:
Date xxx yyyy
1/1/12 1 1
2/1/12 null null
3/1/12 null null
4/1/12 1 1
5/1/12 1 1
6/1/12 null null
In the above example, the data is NULL for some dates. My requirement is: how can I copy the preceding not-null data (from 1/1/12) to 2/1/12 and 3/1/12?
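A hedged sketch of that carry-forward using LAST_VALUE ... IGNORE NULLS (10g or later, if I remember right; the table name and the dt column name are placeholders, since DATE itself is a reserved word):
select dt,
       last_value(xxx ignore nulls)
         over (order by dt rows between unbounded preceding and current row) as xxx_filled,
       last_value(yyyy ignore nulls)
         over (order by dt rows between unbounded preceding and current row) as yyyy_filled
from my_table
order by dt;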
I want to create a report using one field and one text item as column names in the layout, but display all the columns. I mention the 5 column names in the query. How can I write a function in a summary column?
I am running a fairly busy Oracle 10gR2 database. One of its tables has about 120 columns and receives on average 1,500 insertions per second. The table is partitioned, and the partitioning is based on the more important of the two timestamp columns. There are two timestamps; they hold different times.
Out of these 120 columns, about 15 need to be indexed. Two of the 15 are timestamps, and at least one of these two timestamp columns is always in the WHERE clause of the queries.
Now the challenge is that the queries we run can have any combination of the 13 other columns plus one timestamp. In reality the queries never have more than 7 or 8 columns in the WHERE clause, but even if we had only 4 columns in the WHERE clause we would still have the same problem.
So if I create one concatenated index for all these columns, it will not be very efficient, because after the 4th or 5th column the sorting is no longer very useful, and I believe the optimiser would simply not use the rest of the index. So queries that use the leading columns of the index in sequence work well, but if I need to query the 10th column then I have performance issues.
Now, if I create multiple single column indexes oracle will have to work a lot harder to maintain all these indexes and it will create performance issues (I have tried that). Besides, if I have multiple single column indexes the optimiser will do nested loops twice or three times and will hit only the first few columns of the where clause so I think it will kind of be the same as the long concatenated index.
What I am trying to do is exactly what the Bitmap index would do, it would be very good if I could use the AND condition that a Bitmap index uses. This way I could have N number of single column indexes which the optimiser could pick from and serve the query with exactly the ones it needs. But unfortunately using the Bitmap index here is not an option given the large amount of inserts that I get on this table.
I have been looking for alternatives, I have considered creating multiple shorter concatenated indexes but this still would not address the issue since many queries would still not be served properly and therefore would take a very long time to complete.
What I had in mind is some sort of multidimensional index; I am not even sure such a thing exists. But essentially it would be an index that could serve a query efficiently regardless of whether the WHERE clause has the 1st, 3rd, and last columns of the index.
So considering how widely used Oracle is and how many super large databases there are out there, this problem must be common.
I have two questions.
Question 1: How do I select all columns from a table except those columns which I type in the query?
Question 2: How do I select all columns from a table where all the columns are not null, without typing each column name that has empty data?
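For question 1, a hedged sketch that generates the column list from the data dictionary (11gR2's LISTAGG; table and excluded column names are placeholders), and for question 2 a generated NOT NULL predicate you can paste into a WHERE clause:
-- columns to keep in the SELECT list
select listagg(column_name, ', ') within group (order by column_id) as select_list
from user_tab_columns
where table_name = 'MY_TABLE'
  and column_name not in ('COL_TO_SKIP1', 'COL_TO_SKIP2');

-- predicate requiring every column to be non-null
select listagg(column_name || ' is not null', ' and ')
         within group (order by column_id) as not_null_predicate
from user_tab_columns
where table_name = 'MY_TABLE';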
How can I insert data into an Oracle table without writing an insert statement, in Oracle 9i or above? I am not going to use INSERT ALL, MERGE, SQL*Loader, or an import.
I wish to make this simple statement with the Toad GUI:
INSERT INTO EXCLUDE_xxx
VALUES ('xxx',
'xxx',
'xxx',
'xxx',
SYSDATE);
The Insert Record option is greyed out. How do I insert new rows with Toad (click click)?
When I try to insert the details from Oracle Forms, the data is inserted twice into the DB.
My table structure:
create table app_sri
(a_id integer primary key,
p_first_name varchar2(30),
p_last_name varchar2(20),
p_age number(3)
);
Here a_id can be generated through a simple sequence (pid_seq).
Trigger on app_sri:
create or replace trigger pid_trg
[Code]....
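The posted trigger body isn't shown, but a typical version (a hedged sketch, not necessarily what the original trigger contains) only fills a_id from the sequence and does no INSERT of its own; a trigger that itself inserts into app_sri would explain rows appearing twice:
create or replace trigger pid_trg
before insert on app_sri
for each row
when (new.a_id is null)
begin
  select pid_seq.nextval into :new.a_id from dual;
end;
/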
Form insertion code:
BEGIN
  insert into app_sri values (null, 'robo', 'Big', 100);
  commit;
END;
The data is inserted, but twice.
What is the reason behind the double insertion?
Suppose that, I have two tables: emp, dept
emp records the empid, emp_name, deptid
dept records the deptid, dept_name
There is one record for a president, or some other special position in the company, so its deptid is set to NULL. Here comes the question: how can I print all the emp_names with their department names?
I know how to print all the emp_names with their department names when they have a deptid, but is it possible to also include the record whose deptid is NULL?
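A hedged sketch using an outer join so the NULL-deptid employee still comes back, with a placeholder label for the missing department:
select e.emp_name,
       nvl(d.dept_name, 'NO DEPARTMENT') as dept_name
from emp e
left outer join dept d
  on d.deptid = e.deptid;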
INSERT INTO LKP_ASSET_LOCATION (LOCATION) VALUES ('AMERICA'S CUP VILLAGE')
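Presumably the failure here is the unescaped apostrophe inside the literal ending the string early. Two hedged ways to write it: double the quote, or use the q-quote syntax (10g and later).
INSERT INTO LKP_ASSET_LOCATION (LOCATION) VALUES ('AMERICA''S CUP VILLAGE');

INSERT INTO LKP_ASSET_LOCATION (LOCATION) VALUES (q'[AMERICA'S CUP VILLAGE]');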
In 11g, when I try to insert records with an INSERT ... SELECT, it fails.
Below is my Query:
insert into table_1 select * from table_2
The above query does not insert any records into table_1, but when I run select * from table_2 on its own, it returns records.
The same INSERT ... SELECT query inserts records properly in 10g.