I have a requirement to add all the objects of schema A of database X to schema B of database Y. Schemas A and B do not have any common objects.
Apart from tables and indexes, schema A also has BLOB objects, materialized views, and types. The size of database X is 50 GB. Through initial research I found Data Pump, transportable tablespaces, and RMAN duplication.
Is there any other method I should be looking at? Is RMAN duplication even an option, given that I just want to copy a schema and not the whole database? The database has other schemas too.
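For a schema-level copy, Data Pump with REMAP_SCHEMA is usually the simplest fit; RMAN duplication works at the database level rather than per schema. A minimal sketch, assuming a directory object DATA_PUMP_DIR exists on both servers and using placeholder credentials:

    # export schema A from database X
    expdp system/password@X schemas=A directory=DATA_PUMP_DIR dumpfile=schema_a.dmp logfile=exp_a.log

    # import into database Y, remapping A into B
    impdp system/password@Y directory=DATA_PUMP_DIR dumpfile=schema_a.dmp remap_schema=A:B logfile=imp_b.log

REMAP_SCHEMA carries the tables, indexes, LOBs, materialized views, and types along with the schema, and the other schemas in Y are left untouched.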
I'm having a problem with SQL*Loader and the control file. I want to load a delimited file. The script will eventually be automated, with the file name passed into the script; it's not a static name.
It's a simple SQL*Loader Unix script that I have created, as follows.
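A minimal sketch of such a script: if the control file omits the INFILE clause, the DATA parameter on the sqlldr command line supplies the data file, so the same control file works for any file name (credentials and the control file name are placeholders):

    #!/bin/sh
    # $1 is the delimited data file passed to the script
    sqlldr userid=scott/tiger control=load.ctl data="$1" log="$1.log" bad="$1.bad"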
I have a rather complicated process to import text files into my DB. I'm given thousands of files every day, separated by "," and with 80 fields each. With a bash script, I take the 45 fields I need and then split each file into x number of files, grouping the rows by three fields. Then I use SQL*Loader to insert them into the DB.
The problem is that now I must insert into two tables, and the WHEN clause doesn't allow the use of > and <.
To make things a little clearer, take this text file (already split, grouped, and ready to be inserted):

...
1,1,135,1900,0,12,114,2011/08/25 17:19:00,135,...
1,1,135,1900,0,13,119,2011/08/25 17:19:00,136,...
1,1,135,1900,0,14,117,2011/08/25 17:19:00,137,...
1,1,135,1900,0,15,113,2011/08/25 17:19:00,138,...
1,1,135,1900,0,16,119,2011/08/25 17:19:00,139,...
...

When field 6 is higher than or equal to 14, it must go to table a. When field 6 is lower than 14, it must go to table b. I can't use external tables, as I'm on a different server.
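Since the WHEN clause only supports = and <> comparisons, one workaround is to load everything into a single staging table and route the rows afterwards with a multi-table insert. A sketch; the staging table and column names are placeholders:

    -- load all rows into staging_table with a plain control file (no WHEN),
    -- then split them by field 6 in SQL
    INSERT ALL
      WHEN field6 >= 14 THEN
        INTO table_a (f1, f2, f3, f4, f5, f6, f7, f8, f9)
        VALUES (f1, f2, f3, f4, f5, f6, f7, f8, f9)
      WHEN field6 < 14 THEN
        INTO table_b (f1, f2, f3, f4, f5, f6, f7, f8, f9)
        VALUES (f1, f2, f3, f4, f5, f6, f7, f8, f9)
    SELECT f1, f2, f3, f4, f5, f6, f7, f8, f9 FROM staging_table;

The other option is to pre-split the files in the existing bash script, which already parses the fields anyway.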
I have a CSV file extracted from a mainframe which has to be loaded into Oracle using the sqlldr utility. The numbers are in the format +0000003333, -0000003232.44, etc.
I have to convert them to 3333 and -3232.44 and insert them into the table.
I have used syntax like
Load file.... append into table ( t_num expression "to_number(:tnum, '99999.999')" )
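For what it's worth, TO_NUMBER without a format mask already accepts an explicit leading + or - sign and leading zeros, so a plain SQL string applied to the field is usually enough. A minimal control file sketch; table and column names are placeholders:

    LOAD DATA
    INFILE 'mainframe.csv'
    APPEND INTO TABLE target_table
    FIELDS TERMINATED BY ','
    TRAILING NULLCOLS
    ( t_num "TO_NUMBER(:t_num)"  -- '+0000003333' loads as 3333, '-0000003232.44' as -3232.44
    )

Note the SQL string here operates on the field itself; the EXPRESSION keyword is only for columns that read no input from the data file at all.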
I have tried repeatedly to load data into a table from Excel (.csv) using SQL*Loader, but SQL*Loader doesn't pick up the control file I created with Notepad (.ctl). Though I gave the file name a .ctl extension, it seems to be a .txt file. Is there any alternate way to create it?
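Notepad's Save dialog silently appends .txt unless the name is typed in quotes, and Explorer hides known extensions, so the file on disk is often load.ctl.txt. A quick check and fix from the command prompt (the file name is a placeholder):

    dir load*
    ren load.ctl.txt load.ctl

Saving the file as "load.ctl" (with the quotes), or choosing "All Files" as the save type, avoids the problem in the first place.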
I am trying to load multiple XML files into an Oracle DB using SQL*Loader. The file names of the XML files start with a description and then numbers, where the numbers are different each time.
Here's my CTL file:
LOAD DATA
INFILE *
INTO TABLE XML_TABLE
TRUNCATE
xmltype(XML_TABLE)
FIELDS
(
...
I don't want to keep having to go into the ctl file and change the numbers of the xml file. Is there a way I could just load all .xml files that begin with 'description'? Like maybe a wildcard.
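One common pattern is to list the matching files into a small data file and let SQL*Loader read each XML document as a LOBFILE, so the control file never changes. A sketch, assuming XML_TABLE has an XMLType column XML_COL (the names are placeholders):

    # regenerate the file list before each load
    ls description*.xml > filelist.dat

    LOAD DATA
    INFILE 'filelist.dat'
    APPEND INTO TABLE XML_TABLE
    ( fname    FILLER CHAR(255),
      XML_COL  LOBFILE(fname) TERMINATED BY EOF
    )

Each line of filelist.dat names one XML file, and LOBFILE(fname) loads that file's entire contents into the column.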
I am exporting a big table (around 3,000,000 rows). Using the exp command, the error message returned is "EXP-00028: failed to open expdat.dmp for write". Is there a possibility to export this table into multiple files (as a splitter)?
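The original exp utility can split a dump across several files with the FILE and FILESIZE parameters. A sketch with placeholder credentials and table name:

    exp scott/tiger tables=BIG_TABLE file=exp1.dmp,exp2.dmp,exp3.dmp filesize=1GB log=exp.log

exp fills exp1.dmp up to FILESIZE and then moves on to exp2.dmp, and so on; imp accepts the same FILE list on the way back in.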
I want to load data into multiple tables from many files, based on the first column's value, which is a FILLER field. I am trying to test this scenario with two Oracle tables with similar definitions, loading one record into each table using the WHEN/POSITION keywords. For this, I added the first column as a reference column in the data, which I have in the ctl file itself.
The 1st table loaded with the 1st record, but the 2nd record is not loading. Have I missed anything with the WHEN/POSITION keywords?
This is the error in the log file for the 2nd table (WD1):
Record 2: Rejected - Error on table WD1, column TAB.
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
Table WD1:
  0 Rows successfully loaded.
  1 Row not loaded due to data errors.
  1 Row not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
...
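A frequent cause of exactly this symptom: with delimited data, a second INTO TABLE clause continues scanning from where the previous one stopped, so its fields read the wrong columns, and non-date text landing in a DATE column raises ORA-01841. Restarting the scan with POSITION(1) on the first field of each subsequent INTO TABLE fixes it. A sketch with placeholder fields; the WHEN values and date mask are assumptions:

    LOAD DATA
    INFILE *
    INTO TABLE WD
    WHEN (1:1) = '1'
    FIELDS TERMINATED BY ','
    ( rec_type FILLER POSITION(1) CHAR,
      tab      DATE "YYYYMMDD",
      col2     CHAR )
    INTO TABLE WD1
    WHEN (1:1) = '2'
    FIELDS TERMINATED BY ','
    ( rec_type FILLER POSITION(1) CHAR,  -- POSITION(1) restarts the scan for this table
      tab      DATE "YYYYMMDD",
      col2     CHAR )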
I am loading data using the sqlldr command in UNIX into an Oracle table and want to concatenate a timestamp to a file name in the "create_file_name" column in the code below.
I have the below code within the control file:
LOAD DATA
TRUNCATE INTO TABLE TABLEA
TRAILING NULLCOLS
( file_type        POSITION(1:5)   CHAR,
  business_date    POSITION(16:23) DATE "YYYYMMDD",
  create_file_name "FILE_NAME" EXPRESSION "SELECT TO_CHAR(CURRENT_TIMESTAMP(3), 'YYYYMMDDHH24MISS') FROM DUAL" )
The load fails with the SQL*Loader error: Expecting valid column specification, "," or ")", found keyword EXPRESSION instead of a column. How can the timestamp be appended to the file name?
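The EXPRESSION keyword must come directly after the column name, and it takes a SQL expression rather than a full SELECT. A sketch of the corrected clause; the 'FILE_NAME' literal is a placeholder for however the real file name is supplied:

    LOAD DATA
    TRUNCATE INTO TABLE TABLEA
    TRAILING NULLCOLS
    ( file_type        POSITION(1:5)   CHAR,
      business_date    POSITION(16:23) DATE "YYYYMMDD",
      -- an EXPRESSION column reads nothing from the data file; the SQL
      -- expression below appends a timestamp to the file name literal
      create_file_name EXPRESSION "'FILE_NAME' || TO_CHAR(SYSTIMESTAMP, 'YYYYMMDDHH24MISS')" )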
I am using a Perl script to dynamically generate the control file. If I have data in the control file as well as in the data file, how would I write the control file in that case? Is the one below correct?
load data
INFILE '*'
INFILE '/export/home/test/test.csv'
INSERT INTO TABLE EMP
fields terminated by "," optionally enclosed by '"'
trailing nullcols
( empno, empname, sal, deptno )
...
Is there any way, if my control file contains half of the data and my data file contains the other half, to club these into one logical record in the control file to populate the DB?
My exact 2nd requirement is: my DB contains 5 columns, and for 1 column the data is common (countryName), which I have to pass to the control file dynamically; the .csv file contains the data for the other four columns. How could I combine these in the ctl file and populate the DB?
So if the DB contains CountryName, empid, ename, sal, and dept, I will get the CountryName into the ctl file and the csv contains the data for empid, ename, sal, and dept. How would I combine these into a logical record and populate the DB?
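For a value that is common to every row, a CONSTANT in the generated control file is the usual mechanism: the Perl script substitutes the value when it writes the file, and each csv row then produces one logical record with all five columns populated. A sketch using the columns from the question ('GERMANY' and the table name are placeholders the generator would fill in):

    LOAD DATA
    INFILE '/export/home/test/test.csv'
    APPEND INTO TABLE EMP
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( countryname CONSTANT 'GERMANY',  -- written by the Perl generator
      empid, ename, sal, dept )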
I exported an Oracle database from one server into a dmp file, then imported it into another Oracle database server. When I looked at the imported data, the columns which were storing German data showed garbage characters.
Then I remembered that the database I exported from has German as its NLS language. I executed this statement to set the NLS on the new server:
alter system set nls_lang=german SCOPE=SPFILE
But now my database is not starting; it always gives the error "cannot access NLS data files or invalid environment specified".
I also set NLS_LANG=german in the Solaris environment.
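For context, NLS_LANG is a client environment variable, not a database initialization parameter, which is why the instance now refuses to start with that entry in the spfile. A recovery sketch, run from SQL*Plus as sysdba (paths are placeholders):

    CREATE PFILE='/tmp/init.ora' FROM SPFILE;
    -- edit /tmp/init.ora and delete the nls_lang line, then:
    STARTUP PFILE='/tmp/init.ora';
    CREATE SPFILE FROM PFILE='/tmp/init.ora';

On the Solaris side, the variable takes the form language_territory.characterset, e.g. NLS_LANG=GERMAN_GERMANY.WE8ISO8859P1 (the character set part is an assumption here; it must match what the client application actually uses).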
I have a bunch of data in 50 Excel files. I need to load these 50 files into 50 different tables, and I would like to do this in one script. I went through the forum for this information; people suggested creating a shell script, listing the sqlldr command multiple times, etc.
Please provide some clarity on what the best approach is. If it is through shell scripting, provide the shell script and instructions to execute it. I am new to shell scripting.
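A minimal sketch of the shell-loop approach, assuming one control file per table named load_<table>.ctl and a matching <table>.csv in the same directory (the naming convention and credentials are assumptions):

    #!/bin/sh
    # loops over the 50 control files and loads each table's csv
    for ctl in load_*.ctl; do
      tab=${ctl#load_}   # strip the load_ prefix...
      tab=${tab%.ctl}    # ...and the .ctl suffix, leaving the table name
      sqlldr userid=scott/tiger control="$ctl" data="$tab.csv" log="$tab.log"
    done

Save it as load_all.sh, make it executable with chmod +x load_all.sh, and run it as ./load_all.sh from the directory holding the files.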
I used the search option but I didn't find an answer to my question. My problem is with importing files into a table (2 columns: 1st NUMBER, 2nd CLOB). I can't use SQL*Loader because I need to load 30,000 tar.gz files from 1 folder, and the IMPORT option in SQL Developer didn't work either.
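One route that avoids SQL*Loader entirely is DBMS_LOB with a directory object. Since tar.gz files are binary, a BLOB column is a safer fit than CLOB (going through a CLOB applies character-set conversion). A sketch for one file; the directory path, table, and file names are placeholders:

    CREATE OR REPLACE DIRECTORY load_dir AS '/path/to/folder';

    DECLARE
      l_bfile BFILE := BFILENAME('LOAD_DIR', 'archive1.tar.gz');
      l_blob  BLOB;
      l_dest  INTEGER := 1;
      l_src   INTEGER := 1;
    BEGIN
      -- create the row with an empty LOB, then fill it from the file
      INSERT INTO files_tab (id, doc) VALUES (1, EMPTY_BLOB())
        RETURNING doc INTO l_blob;
      DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
      DBMS_LOB.LOADBLOBFROMFILE(l_blob, l_bfile, DBMS_LOB.LOBMAXSIZE, l_dest, l_src);
      DBMS_LOB.CLOSE(l_bfile);
      COMMIT;
    END;
    /

For 30,000 files, the file names would have to be fed in from outside (e.g. a generated list), since PL/SQL cannot enumerate a directory's contents on its own.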
I have more than 100 dump files to import into my Oracle 11g database. I know how to import (impdp) dumps with the same name, but here all the dump file names are totally different (e.g. aa.dmp, bb.dmp, ...).
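Assuming each .dmp is an independent export (rather than pieces of a single export set), a shell loop running one impdp per file covers it. A sketch with placeholder credentials; DATA_PUMP_DIR must point at the folder holding the dumps:

    #!/bin/sh
    for f in *.dmp; do
      impdp system/password directory=DATA_PUMP_DIR dumpfile="$f" logfile="${f%.dmp}_imp.log"
    done

If the files were instead produced by one expdp job, they all belong in a single DUMPFILE list on one impdp call.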
I am trying to generate a dynamic control file, as the files I want to upload come from different sources and their names are constantly changing, though they follow a fixed pattern and naming convention.
I am able to generate the dynamic control file through SQL, but while calling it from a batch file, I am unable to send the file name as a parameter.
All the examples I have found are for UNIX; how do I do it with a batch file on Windows?
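Batch files receive positional parameters as %1, %2, and so on, which can be passed straight to sqlldr. A sketch; credentials and the control file name are placeholders:

    @echo off
    rem usage: load.bat datafile.csv
    rem %~n1 expands to the file name without its extension
    sqlldr userid=scott/tiger control=load.ctl data=%1 log=%~n1.log

Calling load.bat somefile.csv then loads that file and writes somefile.log.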
How do I create a control file, and how do I load the data through the command window into our database using SQL*Loader? I have the table structure in my database and the .csv file on my desktop.
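A minimal end-to-end sketch: save a control file like the one below next to the csv (the table name, columns, and paths are placeholders for your actual structure):

    LOAD DATA
    INFILE 'C:\Users\me\Desktop\data.csv'
    APPEND INTO TABLE my_table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    ( col1, col2, col3 )

and then run it from the command window with:

    sqlldr userid=scott/tiger control=C:\Users\me\Desktop\load.ctl log=load.log

The log file reports how many rows were loaded and why any were rejected.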
I load data into this table from a file using sqlldr.
What is the proper data type I should use in the control file for both fields? I don't mention any datatype in the ctl file, and that is working fine with the given dataset.
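For reference, a field with no declared type defaults to CHAR(255) in SQL*Loader, and the real datatype conversion happens when the value is inserted into the table column, which is why omitting the type works. Declaring it explicitly mainly matters when values can exceed 255 bytes. A sketch with placeholder field names:

    ( num_field  CHAR,         -- the CHAR(255) default; converted to NUMBER on insert
      txt_field  CHAR(4000) )  -- widened past the 255-byte default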
I have 780 (12 x 65) csv files generated from 65 databases. Now I have to load these 780 csv files into 12 tables created in my database for monitoring and reporting purposes. To call SQL*Loader, I am planning to create 780 lines like the one below.
We know creating 780 control files is a difficult task, so I have created only 12 control files. Is there any mechanism to pass a variable (planning to declare it on the sqlldr line) to the INFILE clause in SQL*Loader, like below?
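Yes: the DATA parameter on the sqlldr command line supplies the data file (taking precedence over the control file), so the 12 control files can serve all 780 csv files. A sketch, assuming the csv names embed the table name (the naming convention and credentials are assumptions):

    #!/bin/sh
    for tab in table01 table02 table03; do  # the 12 table names in reality
      for csv in ${tab}_*.csv; do           # the 65 files for that table
        sqlldr userid=scott/tiger control=${tab}.ctl data="$csv" log="${csv%.csv}.log"
      done
    done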
create table revenue ( person varchar2(23), month varchar2(3), rev_amt number )
and I have data in a file like below
Person Jan Feb Mar Apr Mai Jun Jul Aug Sep Oct Nov Dez
--------------------------------------------------------
Schnyder,345,223,122,345,324,244,123,123,345,121,345,197
Weber,234,234,123,457,456,287,234,123,678,656,341,567
Keller,596,276,347,134,743,545,216,456,124,753,346,456
Meyer,987,345,645,567,834,567,789,234,678,973,456,125
Holzer,509,154,876,347,146,788,174,986,568,246,324,987
Müller,456,125,678,235,878,237,567,237,788,237,324,778
Binggeli,487,347,458,347,235,864,689,235,764,964,624,347
Stoller,596,237,976,876,346,567,126,879,125,568,124,753
Marty,094,234,235,763,054,567,237,457,325,753,577,346
Studer,784,567,235,753,124,575,864,235,753,864,634,678
I want to load it into the table in the following way:
Person    Month  Revenue
-------------------------
Schnyder  Jan    345
Schnyder  Feb    223
Schnyder  Mar    122
Schnyder  Apr    345
Schnyder  Mai    324
Schnyder  Jun    244
Schnyder  Jul    123
Schnyder  Aug    123
Schnyder  Sep    345
Schnyder  Oct    121
Schnyder  Nov    345
Schnyder  Dez    197
........  ...    ...

How do I write a control file to load this data into the revenue table above?
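One workable approach is to load the file as-is into a wide staging table and pivot it into revenue with SQL afterwards (UNPIVOT needs 11g or later; the staging table is an assumption, not part of the question's schema):

    CREATE TABLE revenue_stage
    ( person VARCHAR2(23),
      jan NUMBER, feb NUMBER, mar NUMBER, apr NUMBER, mai NUMBER, jun NUMBER,
      jul NUMBER, aug NUMBER, sep NUMBER, oct NUMBER, nov NUMBER, dez NUMBER );

    -- control file: one field per month column; SKIP=2 on the sqlldr
    -- command line skips the header and separator lines
    LOAD DATA
    INFILE 'revenue.dat'
    APPEND INTO TABLE revenue_stage
    FIELDS TERMINATED BY ','
    ( person, jan, feb, mar, apr, mai, jun, jul, aug, sep, oct, nov, dez )

    -- then turn the 12 month columns into 12 rows per person
    INSERT INTO revenue (person, month, rev_amt)
    SELECT person, month, rev_amt
    FROM revenue_stage
    UNPIVOT (rev_amt FOR month IN
      (jan AS 'Jan', feb AS 'Feb', mar AS 'Mar', apr AS 'Apr',
       mai AS 'Mai', jun AS 'Jun', jul AS 'Jul', aug AS 'Aug',
       sep AS 'Sep', oct AS 'Oct', nov AS 'Nov', dez AS 'Dez'));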