Server Utilities :: Mapped 100th Column From CSV File After 5th Column
Sep 10, 2012
I have written a SQL*Loader control file; the field terminator is |, and I have listed five column names in sequence... My requirement is to map the 100th column from the CSV file after the 5th column. How do I do it?
OPTIONS (ERRORS=100000000)
LOAD DATA
CHARACTERSET UTF8
INTO TABLE "TARGET_STG_AMC"
TRUNCATE
[code]....
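One way this is normally handled for delimited data is to declare the unwanted fields as FILLER, so SQL*Loader still reads them from the record but does not load them. A minimal, untested sketch (col1..col5, col100 and the skipNN names are placeholders for the real column names):

LOAD DATA
CHARACTERSET UTF8
INFILE 'data.csv'
TRUNCATE
INTO TABLE "TARGET_STG_AMC"
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
( col1,
  col2,
  col3,
  col4,
  col5,
  skip06 FILLER,   -- one FILLER entry per field to be skipped ...
  skip07 FILLER,
  -- ... continue the FILLER entries through field 99 ...
  skip99 FILLER,
  col100 )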
I have a standard schema named ABC and 600 more schemas in my database. They all have the same table names and column names as the standard schema, but in some tables the number of columns varies. So I need to compare every schema's column names with my standard schema's. I created the script below, but it generates output in an infinite loop.
SET SERVEROUTPUT ON
DECLARE
  V_COLS VARCHAR2(20);
BEGIN
  FOR CUR_CCD IN (SELECT DISTINCT TABLE_NAME, OWNER
                  FROM ALL_TABLES
                  WHERE OWNER LIKE 'CCD_MAIN'
[code]....
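For what it is worth, a set-based comparison avoids looping over output entirely; a minimal sketch comparing one schema against the standard ABC (replace OTHER_SCHEMA with each owner, or wrap the query in a loop over the schema list):

-- Columns that exist in the standard schema ABC but are missing in OTHER_SCHEMA
SELECT table_name, column_name
FROM   all_tab_columns
WHERE  owner = 'ABC'
MINUS
SELECT table_name, column_name
FROM   all_tab_columns
WHERE  owner = 'OTHER_SCHEMA';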
My requirement is to load the data from a feed file into two tables based on the value of a column in the feed file.
Say the column in the feed file is "Activity_Value". If Activity_Value is 10, the data from the feed file should be loaded into table A, and if Activity_Value is 20, it should be loaded into table B.
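For a conditional two-table load like this, SQL*Loader's WHEN clause on each INTO TABLE is the usual mechanism. The sketch below is illustrative only: table_a, table_b, activity_value, col1 and col2 are assumed names, and the POSITION(1) on the second INTO TABLE resets field scanning to the start of each record (otherwise SQL*Loader would continue scanning from where the first field list stopped).

LOAD DATA
INFILE 'feed.dat'
APPEND
INTO TABLE table_a
WHEN activity_value = '10'
FIELDS TERMINATED BY '|' TRAILING NULLCOLS
( activity_value,
  col1,
  col2 )
INTO TABLE table_b
WHEN activity_value = '20'
FIELDS TERMINATED BY '|' TRAILING NULLCOLS
( activity_value POSITION(1),   -- restart scanning at the beginning of the record
  col1,
  col2 )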
CONTROL FILE:
LOAD DATA
INFILE 'sample.txt'
INSERT INTO TABLE TEST_RECORDS
WHEN REC_TYPE_HDR='HDR'
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(
[code]....
The value 603 should have been loaded into the TRAN_TYPE column, but instead it was loaded into the next column, LINE_COMP.
I want to insert calculated data while loading with SQL*Loader.
For example, if the columns INTERNAL_TRANSACTION_ID, WAIT_TIME, MESSAGE_GUID, TRS_SIZE and RETURN_MESSAGE_GUID are all null in the input file, I want to populate the column DEV_FLAG with 'U', else with 'D'.
My control file is shown below, but it doesn't have DEV_FLAG.
load data
BADFILE '/backup/temp/rajesh/RIO/BadFiles/FILENAME'
append into table TEMP_rio_RESP_TIME_LND
TRAILING NULLCOLS
(
  INSTALLATION_ID CHAR TERMINATED BY "|" OPTIONALLY ENCLOSED BY '"',
  TRANSACTION_ID CHAR
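One way this is commonly done is with a SQL*Loader expression column: a column flagged with EXPRESSION is not read from the data file, and its SQL string can reference other fields in the same record via :field_name. A minimal, untested sketch of the tail of the field list (assuming the five fields tested below are declared as data-file fields earlier in the same list):

( INSTALLATION_ID CHAR TERMINATED BY "|" OPTIONALLY ENCLOSED BY '"',
  -- ... the remaining data-file fields, including the five referenced below ...
  DEV_FLAG EXPRESSION "CASE WHEN :INTERNAL_TRANSACTION_ID IS NULL AND :WAIT_TIME IS NULL AND :MESSAGE_GUID IS NULL AND :TRS_SIZE IS NULL AND :RETURN_MESSAGE_GUID IS NULL THEN 'U' ELSE 'D' END"
)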
I have the following table intra_trades with t_id as the primary key. There is a trigger on that table that gets the next sequence value and inserts it into the t_id column for every insert. I need to load data into that table using SQL*Loader in chunks of 3000 rows and return the t_ids back to the script that runs the load, so that it can use those t_ids for the next process in the script.
The problem is that the only unique key on that table is t_id, which is populated from a sequence and is the PK. There can be duplicate rows in the table to meet the business needs of the company, so it is hard to associate the rest of the data in a row with its t_id. The only thing I can think of is to return the t_ids in the order they were inserted, so that if the script keeps the order of rows in memory it can associate each t_id with the rest of the intra_trades info. How can I make SQL*Loader return an array of the t_ids it inserted? I need the t_ids in insertion order so that the script can associate each t_id with the rest of the data in its row.
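SQL*Loader itself has no mechanism to hand generated keys back to the caller. One possible alternative, sketched under assumptions (intra_trades_ext is an assumed external table over the same data file, and trade_date/qty stand in for the real columns): load each 3000-row chunk from PL/SQL with FORALL ... RETURNING BULK COLLECT, which returns the t_ids in the same order as the inserted rows.

DECLARE
  TYPE date_tab IS TABLE OF DATE;
  TYPE num_tab  IS TABLE OF NUMBER;
  TYPE id_tab   IS TABLE OF intra_trades.t_id%TYPE;
  l_dates date_tab;
  l_qtys  num_tab;
  l_ids   id_tab;
  CURSOR c IS SELECT trade_date, qty FROM intra_trades_ext;  -- external table over the data file
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_dates, l_qtys LIMIT 3000;
    EXIT WHEN l_dates.COUNT = 0;
    FORALL i IN 1 .. l_dates.COUNT
      INSERT INTO intra_trades (trade_date, qty)
      VALUES (l_dates(i), l_qtys(i))
      RETURNING t_id BULK COLLECT INTO l_ids;
    -- l_ids(i) corresponds to row i of this chunk, in insertion order;
    -- hand l_ids back to the calling script (e.g. via a pipelined function or a scratch table)
    COMMIT;
  END LOOP;
  CLOSE c;
END;
/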
I have a CSV file which contains parent and child related data. I need to load the parent data into the parent table, get the parent reference id, and store it in the child table along with the child data. I am not able to work out how to get the parent reference id using the control file. In my CSV file I have 2 parent-name rows. I need to create one 'A' record in the parent table, get that parent primary key (P_ID) for the 'A' record, and put it into the child table for the c_name rows test1 and test2.
My control file
---------------
LOAD DATA
INFILE 'mydata.data'
APPEND
INTO TABLE parent
WHEN (1:1) = 'p'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
[code].....
I am able to load the parent data and generate P_ID, but I am not able to get P_ID for the child records.
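One approach that is sometimes suggested (untested here, and every field name below is an assumption) is to carry the parent's natural key on each child record as a BOUNDFILLER and resolve P_ID with a scalar subquery in a SQL expression on the child INTO TABLE clause:

INTO TABLE child
WHEN (1:1) = 'c'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( rec_type    FILLER POSITION(1),     -- the 'c' record-type flag, restart scanning here
  parent_name BOUNDFILLER,            -- parent's natural key carried on the child row
  c_name,
  p_id EXPRESSION "(SELECT p.p_id FROM parent p WHERE p.p_name = :parent_name)"
)

This relies on the matching parent rows already being loaded and visible to the session; loading parent and child in two separate passes over the file is the safer variant.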
Sl#  Emp_no  Name  Address
001  01      Tom   1/B-XYZ street
002  02      Jon   1/C-XYZ Street

Employee Datafile
001, 01, Tom, 1/B-XYZ street
002, 02, Jon, 1/C-XYZ Street
Above is a sample data file. I would like to import the data into an Oracle table called employee using the Oracle 9i SQL*Loader utility. But the table has only 3 fields (Emp_no, Name & Address), so I would like to skip Sl# while loading the data. I do not want to modify the data file manually. How should I write the .ctl file?
Sample .ctl file.
load data
INFILE 'dataEmployee'
BADFILE 'Employee.bad'
DISCARDFILE 'Employee.dis'
into table Employee
fields terminated by ','
TRAILING NULLCOLS
( Emp_no NULLIF Emp_no = BLANKS,
  Name NULLIF Name = BLANKS,
  Address NULLIF Address = BLANKS )
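A minimal sketch of how the skip is usually handled: declare the first field as FILLER so SQL*Loader reads it from the file but never loads it (Sl_no is just an arbitrary name for that skipped field).

load data
INFILE 'dataEmployee'
BADFILE 'Employee.bad'
DISCARDFILE 'Employee.dis'
into table Employee
fields terminated by ','
TRAILING NULLCOLS
( Sl_no   FILLER,                 -- read from the file, not loaded into the table
  Emp_no  NULLIF Emp_no = BLANKS,
  Name    NULLIF Name = BLANKS,
  Address NULLIF Address = BLANKS )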
I need to copy one table which has an NCLOB column into our new DB (11g) from the old DB (10g). As this table has a huge number of records (1 crore, i.e. 10 million), it is difficult to use a direct insert statement (INSERT INTO ... SELECT * FROM @remotedb). I don't have the DBA privilege to use the loader.
I have an issue while loading values through SQL*Loader: the last column's data is SG1, but after the load the length of this column shows as 4 characters. I am unable to work out how to find this extra character. Though I used TRIM, it does not work.
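To see exactly what the extra byte is, DUMP can be used on the loaded column; your_table and last_col below are placeholder names. A trailing carriage return from a Windows-format file (hex byte 0d) is a common culprit, though that is only a guess here.

SELECT last_col,
       LENGTH(last_col)     AS len,
       DUMP(last_col, 1016) AS byte_dump   -- each byte in hex, plus the character set name
FROM   your_table
WHERE  ROWNUM <= 10;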
I successfully loaded a record into a table with a BFILE data type. However, I don't understand how to verify whether I did this correctly. I am also suspicious of the value FSSLPB//I (FSSLPB is the directory object name). When I delete the record from the table, the BFILE column value defaults to (null), so it seems the table is initialized for accepting BFILEs.
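One way to verify the locator is to read it back with DBMS_LOB; a minimal sketch, assuming the table and column are named my_table and bfile_col (placeholders):

DECLARE
  l_bfile my_table.bfile_col%TYPE;
  l_dir   VARCHAR2(128);
  l_fname VARCHAR2(256);
BEGIN
  SELECT bfile_col INTO l_bfile FROM my_table WHERE ROWNUM = 1;
  DBMS_LOB.FILEGETNAME(l_bfile, l_dir, l_fname);   -- directory alias and file name stored in the locator
  DBMS_OUTPUT.PUT_LINE('Locator points to ' || l_dir || ' / ' || l_fname);
  IF DBMS_LOB.FILEEXISTS(l_bfile) = 1 THEN
    DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
    DBMS_OUTPUT.PUT_LINE('File found, size = ' || DBMS_LOB.GETLENGTH(l_bfile) || ' bytes');
    DBMS_LOB.FILECLOSE(l_bfile);
  ELSE
    DBMS_OUTPUT.PUT_LINE('The locator does not resolve to an existing file');
  END IF;
END;
/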
I am trying to:
1. Load an external LOB into a staging table using SQL*Loader, then
2. Use PL/SQL to move the external LOB from the staging table into a BLOB column of an end-user (application) table.
I am overwhelmed with reading docs and am not getting the complete picture of the different approaches to loading LOBs into Oracle.
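For step 1, a minimal control-file sketch using the LOBFILE clause; filelist.dat, lob_staging, doc_id, fname and doc_blob are all assumed names, and each data-file line is assumed to hold a key plus the path of the LOB file:

LOAD DATA
INFILE 'filelist.dat'
APPEND
INTO TABLE lob_staging
FIELDS TERMINATED BY ','
( doc_id   INTEGER EXTERNAL,
  fname    FILLER CHAR(250),                 -- path of the LOB file, read but not stored
  doc_blob LOBFILE(fname) TERMINATED BY EOF  -- the whole file becomes the BLOB value
)

For step 2, a plain INSERT INTO app_documents (doc_id, doc_blob) SELECT doc_id, doc_blob FROM lob_staging (app_documents again being an assumed name) is usually enough; DBMS_LOB.COPY is only needed if the target LOB must be updated piecewise.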
I was cloning a schema user1 as user2 in the same database.
user1 had quota on 2 tablespaces user1_data and user1_index.
I created a user named user2.
I created only the tablespace user2_data and granted user2 unlimited quota on that tablespace alone (I did not grant him the 'resource' role or the unlimited tablespace privilege). I then exported the user1 schema as follows.
During import I encountered the following errors for a large number of constraints:
"ALTER TABLE "table2" ADD CONSTRAINT "constraint_name1" FOREIGN KEY ("CTR_ID") REFERENCES "table1" ("CTR_ID") ENABLE NOVALIDATE" IMP-00003: ORACLE error 2270 encountered ORA-02270: no matching unique or primary key for this column-list IMP-00017: following statement failed with ORACLE error 2270:
I found that this happened because the primary key of table1 was not created; the error for that was logged in the log file:
. . importing table                       "table1"      19441 rows imported
IMP-00015: following statement failed because the object already exists:
 "ALTER TABLE "table1" ADD CONSTRAINT "T1_PK79" PRIMARY KEY ("CTR_"
 "ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 F"
 "REELISTS 1 FREELIST GROUPS 1) TABLESPACE "USER1_INDEX" LOGGING ENABLE "
. . importing table                       "table5"          0 rows imported
However, I checked that T1_PK79 does not exist in the user2 schema, though it does exist in the user1 schema. Neither the index for the primary key (T1_PK79) nor the table table1 existed in the user2 schema before this import. So what could be the reason that I am getting the error "IMP-00015: following statement failed because the object already exists"?
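For reference, the kind of dictionary check described above can be run as follows (as a suitably privileged user; these are standard data dictionary views, nothing specific to this database):

SELECT owner, constraint_name, table_name, status
FROM   dba_constraints
WHERE  constraint_name = 'T1_PK79';

SELECT owner, index_name, tablespace_name
FROM   dba_indexes
WHERE  index_name = 'T1_PK79';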
I assume the tablespace for the index is not an issue here, as other indexes were created properly in the user2_index tablespace during this import.
I tried this twice, once with the user2 schema and then with a user3 schema as well (with a different tablespace), but the result is the same.
There were no users connected to the database during the export, and no background jobs were modifying any data in schema user1 while it ran.
I am having an issue with IMPDP on Oracle virtual columns. I have the following table with a virtual column defined as NOT NULL. Expdp runs fine without any issue.
DDL:
------
CREATE TABLE alert_hist (
  alertky         INTEGER NOT NULL,
  alertcreatedttm TIMESTAMP(6) DEFAULT systimestamp NOT NULL,
  alertcreatedt   DATE GENERATED ALWAYS AS (To_date(Trunc("alertcreatedttm"))) VIRTUAL NOT NULL
When I do the import (IMPDP), it fails with the following error.
. . imported "TESTSCHEMA"."VALART" 359.1 KB 4536 rows ORA-31693: Table data object "TESTSCHEMA"."ALERT_HIST" failed to load/unload and is being skipped due to error: ORA-39097: Data Pump job encountered unexpected error -1
After that I dropped the virtual NOT NULL column and recreated it as nullable.
DDL:
-----
alter table alert_hist drop column alertcreatedt;
alter table alert_hist add alertcreatedt DATE GENERATED ALWAYS AS (To_date(Trunc("alertcreatedttm"))) VIRTUAL;
After that I took the expdp and impdp again, and it went through without any issue.
I want to load data from a file using sqlldr. I have a table:
commissions (
  technician_id  char(5),
  tech_name      char(30),
  Comm_rcd_date  DATE,
  Comm_Paid_date DATE,
  comm_amt       number(10,2)
)
My file is:
00001,TIMOTHY TROENDLY,2011-03-04T01:45:12+0006,2011-03-04T01:45:12+0007,123.56
00002,KENNETH KLEMENZ,2011-03-04T01:45:12+0006,2011-03-04T01:45:12+0009,123.56
00003,SHUNDAR ARDERY,2011-03-04T01:45:12+0006,2011-03-04T01:45:12+0005,123.56
How do I write a ctl file to load this data?
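A minimal, untested sketch of such a control file. Because the target columns are plain DATEs, the sketch parses only the first 19 characters of each timestamp and ignores the trailing offsets; if the offsets matter, the columns would need to be TIMESTAMP WITH TIME ZONE instead.

LOAD DATA
INFILE 'commissions.dat'
APPEND
INTO TABLE commissions
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
( technician_id,
  tech_name,
  Comm_rcd_date  "TO_DATE(SUBSTR(:Comm_rcd_date, 1, 19),  'YYYY-MM-DD\"T\"HH24:MI:SS')",
  Comm_Paid_date "TO_DATE(SUBSTR(:Comm_Paid_date, 1, 19), 'YYYY-MM-DD\"T\"HH24:MI:SS')",
  comm_amt )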
I am trying to extract the BLOB column in Oracle APEX and show its contents as a popup message.
The files stored in the BLOB column are zipped using the Windows zip function in Perl and inserted into the database by a Perl program as follows:
my $zip = Archive::Zip->new();
my $zipfile = $zip->addFile( @$_->[1], basename(@$_->[0]) );
While extracting the BLOB column we use utl_compress.lz_uncompress_extract() to get the uncompressed version of the file, but we are getting the following error:
"ORA-29294: A data error occurred during compression or uncompression".
I even tried the solution mentioned on the Oracle forum.
I want to store a PDF file in a database column of BLOB type. The PDF file is on the client system, not on the database server. Is there any way I can achieve this?
I have a hierarchical query which returns parent-to-child hierarchy level data. It has 11 levels of child data. I want to create one column per level in the hierarchy; though I know it is 11 now, it can change. Is there any way I can create the columns dynamically?
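The depth itself can be measured at run time; a minimal sketch, assuming a table my_hier with columns id and parent_id (placeholder names). Producing one output column per level on top of that generally requires building the SELECT text dynamically in PL/SQL, or flattening each path into a single column with SYS_CONNECT_BY_PATH.

SELECT MAX(LEVEL) AS depth
FROM   my_hier
START  WITH parent_id IS NULL
CONNECT BY PRIOR id = parent_id;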
On my APEX page I have a region whose source is a SQL query; it displays the query result to the user as an HTML table.
I want to display an additional column with a hyperlink inside, and that hyperlink should carry CGI/URL parameters containing the other values of that HTML row.
So, let's say my APEX region queries columns as "select c1, c2, c3, c4 ..." and displays the values "v1, v2, v3, v4"; then I want an additional output column with a hyperlink such as:
<a href="f?p=100:7:13467554876288::NO::c1,c2,c3,c4:v1,v2,v3,v4">My link column with CGI-parameters</a>
How can I create such a hyperlink?
The overall idea is that the link forwards to a page which loads those values "v1,v2,v3,v4" into form fields, and the user can proceed from there.
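One common way (a sketch only; my_table and the page items P7_C1..P7_C4 are assumed names) is to build the anchor in the region query itself and set the new column's "Display As" to Standard Report Column so the HTML is rendered:

SELECT c1, c2, c3, c4,
       '<a href="f?p=100:7:' || :APP_SESSION
       || '::NO::P7_C1,P7_C2,P7_C3,P7_C4:'
       || c1 || ',' || c2 || ',' || c3 || ',' || c4
       || '">My link column</a>' AS link_col
FROM   my_table

Values that themselves contain commas or colons would need the usual f?p escaping.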
I'm trying to write a pivot query in Oracle to take the years from a column and make a separate column for each. I found example code on the internet and changed it for my own tables, but I'm getting errors, namely a "FROM keyword not found where expected" error at the start of the 'avg(...)' expressions.
I have copied the code used in
select stud_id, 2006, 2007, 2008, 2009
from ( select stud_id,
              avg(case when year=2006 then ((present/poss)*100) else null end) 2006,
              avg(case when year=2007 then ((present/poss)*100) else null end) 2007,
              avg(case when year=2008 then ((present/poss)*100) else null end) 2008,
              avg(case when year=2009 then ((present/poss)*100) else null end) 2009
       from attendance.vw_all_attendance_perc
       group by stud_id );
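For comparison, the error comes from the bare numbers 2006..2009 being used as column aliases, which are not valid identifiers (hence ORA-00923, "FROM keyword not found where expected"); the outer query also selects the literals 2006..2009 rather than the aliased columns. A corrected sketch with ordinary aliases:

SELECT stud_id, yr2006, yr2007, yr2008, yr2009
FROM  (SELECT stud_id,
              AVG(CASE WHEN year = 2006 THEN (present / poss) * 100 END) AS yr2006,
              AVG(CASE WHEN year = 2007 THEN (present / poss) * 100 END) AS yr2007,
              AVG(CASE WHEN year = 2008 THEN (present / poss) * 100 END) AS yr2008,
              AVG(CASE WHEN year = 2009 THEN (present / poss) * 100 END) AS yr2009
       FROM   attendance.vw_all_attendance_perc
       GROUP  BY stud_id);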
(1) How can I fill in a value in a table column automatically, based on an existing column's value, without user intervention? My actual problem is that I have an 'expiry date' column and a 'status' column. The 'status' column should be filled automatically based on the current system date. For example, if the expiry date is '25-Apr-2011' and the current date is '14-May-2011', then status should be filled with 'EXPIRED'.
(2) How can I build the 'select' query in a report (Reports 6i) so that it shows the list of items that are 'EXPIRED', 'NOT EXPIRED', or both, in a single report based on the user's choice? 'EXPIRED' & 'NOT EXPIRED' are as defined in question 1.
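Rather than physically storing STATUS, both parts can be handled by deriving it in the report query; a minimal sketch, assuming a table items with columns item_name and expiry_date and a user parameter :P_CHOICE holding EXPIRED, NOT EXPIRED or BOTH (all assumed names):

SELECT item_name,
       expiry_date,
       CASE WHEN expiry_date < TRUNC(SYSDATE) THEN 'EXPIRED' ELSE 'NOT EXPIRED' END AS status
FROM   items
WHERE  :P_CHOICE = 'BOTH'
   OR (:P_CHOICE = 'EXPIRED'     AND expiry_date <  TRUNC(SYSDATE))
   OR (:P_CHOICE = 'NOT EXPIRED' AND expiry_date >= TRUNC(SYSDATE));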
<Column name>
value 1
value 2
value 3
.
.
.
value n
Since the column is small, it can fit on a page more than once. I know how to make it print more than once: I switch the repeating frame to print down and across, and I modify the frame that contains it so that its horizontal elasticity is variable.
After these changes my report looks like this:
<Column name>
value 1-----------------------value n+1-----------------------value m+1
value 2-----------------------value n+2-----------------------value m+2
value 3-----------------------value n+3-----------------------...
.------------------------------- .
.------------------------------- .
.------------------------------- .
value n-----------------------value m
What I want is for my report to look like this:
<Column name>-------<Column name>------------<Column name>
value 1-----------------------value n+1-----------------------value m+1
value 2-----------------------value n+2-----------------------value m+2
value 3-----------------------value n+3-----------------------...
.------------------------------- .
.------------------------------- .
.------------------------------- .
value n-----------------------value m