PL/SQL :: How To Automate Task Of Dumping Table Data Into CSV File
Apr 25, 2013
I have written the procedure below to dump table data to a .csv file. The problem is that I have 20 tables, each holding data for 75 studies, and I need to export the data from all 20 tables for each study. This procedure requires me to run it 75 (studies) * 20 (tables) times. Is there any technique by which, instead of my manually giving the table name and study name, it would take them from a text file in which the 75 studies are defined? Or is there a better way?
create or replace procedure dump_table_to_csv1(
    p_tname    in varchar2,
    p_dir      in varchar2,
    p_filename in varchar2)
is
    l_output    utl_file.file_type;
    l_theCursor integer default dbms_sql.open_cursor;
[code]........
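One way to avoid 1,500 manual runs is a small driver procedure. A sketch, assuming the 75 study names are kept in a lookup table STUDY_LIST(study_name) rather than a text file, and that dump_table_to_csv1 is extended with a p_study parameter to filter each table's rows (both are assumptions):

create or replace procedure dump_all_studies(p_dir in varchar2)
is
    -- the 20 table names; these could equally come from user_tables or a config table
    type t_tabs is table of varchar2(30);
    l_tables t_tabs := t_tabs('TABLE1', 'TABLE2' /* ... remaining 18 tables */);
begin
    for s in (select study_name from study_list) loop
        for i in 1 .. l_tables.count loop
            dump_table_to_csv1(
                p_tname    => l_tables(i),
                p_dir      => p_dir,
                p_filename => s.study_name || '_' || l_tables(i) || '.csv',
                p_study    => s.study_name);   -- assumed extra parameter
        end loop;
    end loop;
end;
/

A text file would also work (read it line by line with UTL_FILE), but a lookup table keeps the whole job inside the database.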
As the title of this topic illustrates, I'm having trouble retrieving the relevant data from columns of different tables. I am using iSQL*Plus. I have three tables, appropriately related: a 'course' table, and 'student' and 'next_of_kin' tables. I have many students enrolled on various courses, but only a handful of courses offer the module option 'Database Systems'. I have no 'module' table, but I know the three course names which provide the module option. I intend to produce a report which lists all students enrolled on the courses which provide the module option 'Database Systems'.
I have attempted the report, but I keep getting a Cartesian product displaying all next_of_kin names instead of only the appropriate ones. I am also struggling to come up with the right WHERE clause to pick out only the three courses which provide the module option 'Database Systems', as defined by 'courseNo' in both the 'course' and 'student' tables.
Here is the most recent attempt:
--set echo off
--set pagesize 24
--set feedback off
--set linesize 78
col A format 99999999 heading 'Student No'
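For what it's worth, a join sketch that avoids the Cartesian product: courseNo comes from the question, while studentNo and the name columns are assumptions, and the placeholder course names stand in for the real three:

select s.studentNo, s.sname, k.kname as next_of_kin, c.cname
from   student s,
       course c,
       next_of_kin k
where  c.courseNo  = s.courseNo         -- join each student to their course
and    k.studentNo = s.studentNo        -- join each next_of_kin to their student
and    c.cname in ('Course A', 'Course B', 'Course C');  -- the three courses offering 'Database Systems'

The Cartesian product usually means one of these join conditions is missing, so every next_of_kin row pairs with every student row.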
How to automate getting data from Oracle into Excel: I have a table "emp" in an Oracle database, and I need certain columns of emp (e.g. first name, last name, id) extracted from that table into Excel.
So I need a script which, when scheduled, creates an Excel-readable file in a particular location. I was told we have to create a directory from SQL and, using UTL_FILE, write a script and then schedule it, so that the output opens in Excel.
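A minimal sketch of that approach. It assumes a directory object EXPORT_DIR already exists (create directory export_dir as '/path/to/files') and that the columns are named first_name, last_name, and id; Excel opens .csv files natively:

create or replace procedure emp_to_csv
is
    l_file utl_file.file_type;
begin
    l_file := utl_file.fopen('EXPORT_DIR', 'emp.csv', 'w');
    utl_file.put_line(l_file, 'FIRST_NAME,LAST_NAME,ID');   -- header row
    for r in (select first_name, last_name, id from emp) loop
        utl_file.put_line(l_file, r.first_name || ',' || r.last_name || ',' || r.id);
    end loop;
    utl_file.fclose(l_file);
end;
/

-- schedule it, e.g. daily at 06:00
begin
    dbms_scheduler.create_job(
        job_name        => 'EMP_CSV_JOB',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'EMP_TO_CSV',
        repeat_interval => 'FREQ=DAILY;BYHOUR=6',
        enabled         => true);
end;
/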
We are designing a three-tiered system (client, application/web server, database server) that will allow clients, through a web interface, to select a text file from the operating system and load that file into an intermediate table (an import database table). Many users will do this concurrently, and the data will load into a single table. The text files come in monthly for about 100 firms. No user is able to insert or update another user's data (there is a check-out system). There are about 30 to 40 users using the system for various functions, but it is possible for 10 to 20 users to import data at one time. The files can have anywhere from 2,000 to 25,000 records at a record length of 398. I am concerned about having a good design strategy as well as decent performance.
Problems with each of the Oracle loaders:
1) External tables - they cannot read text files on the application server (which is where they want the text files to go); secondly, you cannot create an instance of an external table, yet multiple users will be pointing the external table at different text files and loading at the same time.
2) SQL*Loader - it is mainly an OS-level tool, and I am not sure how I could programmatically point it to a different text file each time a user wants to load. The client would have to have the ability, through code, to point SQL*Loader to the correct file name.
I had a creative approach and was wondering if this would work: I would like to use external tables just like a connection pool. I would first propose a scheduled OS job to move files to the database server. I would create about 20 external tables with 20 different directory objects, and a stored procedure for the user to call, passing in the file name and audit info as needed. I would use a "load lock pool" table (my invention) to record the name or a code for the external table in use. The procedure loads this code into the load lock pool table while an external table is in use and deletes it when the load is completed. The procedure would check, through a series of IF statements, whether a particular external table was in use; if it is in use (exists in the load lock pool table), it would check the next available external table until one not in use is encountered. So potentially, though not likely, 20 users at one time would be loading into the same table (a sketch of the pool idea follows the questions below). My questions:
1) Could Oracle handle this strategy? What do I need to consider, performance-wise, with the possibility of so many users loading into a single table at one time?
2) Do any of you have another strategy to do this?
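A sketch of the "load lock pool" acquisition, under assumptions: a pool table EXT_TABLE_POOL(ext_table_name, in_use) pre-seeded with the 20 external table names. FOR UPDATE SKIP LOCKED lets concurrent sessions each grab a free slot without blocking one another, which replaces the series of IF statements:

create or replace function acquire_ext_table return varchar2
is
    cursor c_free is
        select ext_table_name
        from   ext_table_pool
        where  in_use = 'N'
        for update skip locked;
    l_name ext_table_pool.ext_table_name%type;
begin
    open c_free;
    fetch c_free into l_name;          -- first slot nobody else has locked
    if c_free%notfound then
        l_name := null;                -- all 20 external tables are busy; caller retries or queues
    else
        update ext_table_pool
        set    in_use = 'Y'
        where  current of c_free;
    end if;
    close c_free;
    return l_name;                     -- caller resets in_use = 'N' after the load
end;
/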
I have written Java code which reads two million values of a particular column from a CSV file and stores them in a set. Now, there is a table in an Oracle database which contains ten million records for that column. I want to form a SQL query which selects those records of that column which are in the CSV file but not in the database table. For example:
If I consider a CSV file named employee.csv, it has a column called employee_name under which the records are as follows
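Assuming those CSV values can first be bulk-loaded into a staging table (via SQL*Loader or an external table), the set difference can then be done in the database rather than in Java. A sketch, with the real table name assumed to be employee_table:

-- staging table holding the two million names from employee.csv
create table employee_csv_stage (employee_name varchar2(100));

-- names present in the CSV but absent from the database table
select employee_name from employee_csv_stage
minus
select employee_name from employee_table;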
I have a table for which I need to take a backup of the data into a file. I have Oracle installed on a Unix box. I have tried spooling in the form of INSERT statements, but it is taking a long time as the table size is huge.
Suggest an approach to pump the data into a file which can then be inserted into a new table in the same or a different schema.
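For a single large table, Data Pump is usually far faster than spooling INSERT statements. A sketch, assuming a directory object DUMP_DIR exists, suitable privileges, and placeholder credentials and table names:

# export the table to a dump file on the database server
expdp scott/tiger tables=BIG_TABLE directory=DUMP_DIR dumpfile=big_table.dmp logfile=big_table_exp.log

# import it into another schema, remapping the owner
impdp scott/tiger tables=SCOTT.BIG_TABLE directory=DUMP_DIR dumpfile=big_table.dmp remap_schema=SCOTT:HR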
I have problems displaying icons in a task bar in an application that is being migrated from Oracle Forms 4.5 to 10g on Windows XP. The task bar shows all the icons in the original application, but when the application runs in 10g, only some of the icons are shown, not all of them. All the icons are in the same file location "C:fileiconos" and they are all .gif files.
I have added the file location to the configuration of registry.dat and forms, to the registry of the computer, and to the orion, orion-web, and default files, following advice given in other threads here, but the problem persists in the application. I have also added the UI_ICON and UI_ICON_EXTENSION variables, with their values, to the registry of the computer.
I have a requirement to extract the data from a table using the UTL_FILE utilities.
My problem is: say I have a table t1 with columns C1, C2, C3, C4, C5. This table t1 gets loaded every day, and I need to pick up only the data which has changed or been inserted since the last load. How can I achieve this? There is no timestamp in this table.
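Without a timestamp, one workable approach (a sketch, not the only option: triggers or a materialized view log are alternatives) is to keep a snapshot copy of the previous load, here assumed to be called t1_snapshot, and diff against it:

-- rows that are new or changed since the last extract (deletes are not detected)
select c1, c2, c3, c4, c5 from t1
minus
select c1, c2, c3, c4, c5 from t1_snapshot;

-- after extracting, refresh the snapshot for the next run
truncate table t1_snapshot;
insert into t1_snapshot select * from t1;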
We have a database application in which several tasks are performed frequently: we load data through SQL*Loader, we create DB instances, and we do several DML operations on the database.
Now, for these tasks in the application, we need to keep a logging track of each task performed in the PL/SQL procedures and packages.
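A common pattern for this (a minimal sketch; the log table and procedure names are assumptions) is an autonomous-transaction logging procedure, so log rows survive even when the calling transaction rolls back:

create table task_log (
    log_time  timestamp default systimestamp,
    task_name varchar2(100),
    message   varchar2(4000)
);

create or replace procedure log_task(p_task in varchar2, p_msg in varchar2)
is
    pragma autonomous_transaction;   -- commits independently of the caller
begin
    insert into task_log (task_name, message) values (p_task, p_msg);
    commit;
end;
/

-- usage inside any package: log_task('NIGHTLY_LOAD', 'sqlldr step finished');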
I am new to Oracle Designer and Forms. The requirement is to select a CSV file in a form, read the file, and load selected columns from the CSV file into a table.
I am using CLIENT_TEXT_IO. I want to know how to extract the data of selected columns from the CSV file and insert it into a table when the columns are of variable length.
Another condition is that if there are duplicate rows based on orderid, then take the maximum order sequence number. Do I need to use a temp table for this logic (see the sketch below)?
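For the duplicate-handling part, a temp table is not strictly needed. Assuming the parsed CSV rows land in a staging table ORDER_STAGE(orderid, order_seq_nbr, other_col) and the target is ORDERS (all names are assumptions), an analytic query can pick one row per orderid in a single pass:

-- keep, for each orderid, the row with the highest order_seq_nbr
insert into orders (orderid, order_seq_nbr, other_col)
select orderid, order_seq_nbr, other_col
from (
    select s.*,
           row_number() over (partition by orderid
                              order by order_seq_nbr desc) rn
    from   order_stage s
)
where rn = 1;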
I have to export data from an emp table which has an address column, and the address column contains commas. When I run the script below, the part of the address after a comma shifts to the next cell in the CSV file. Is there any way we can avoid shifting to the next cell and keep the complete address in one cell?
set echo off
set verify off
set termout on
set heading off
set pages 50000
[code]....
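The usual fix is to enclose the comma-bearing column in double quotes, which CSV readers such as Excel treat as one cell. A sketch (column names other than address are assumed):

select empno || ',' ||
       ename || ',' ||
       '"' || address || '"'   -- quotes keep the embedded commas in a single cell
from   emp;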
I need a way to FTP a file to a remote server by reading data from a table. I searched a couple of sites which asked me to use Chris's xutl_ftp package, but unfortunately the site is not accessible.
Here is the code
CREATE OR REPLACE PACKAGE UTL_FTP AUTHID CURRENT_USER AS
/**
 * LICENSE: GNU Lesser General Public License (LGPL)
 * Copyright (C) 2003-2006 Russ Johnson (john_2885@yahoo.com)
I have a CLOB as an IN parameter in my procedure; the file content is comma-separated. I need a procedure that parses this CLOB variable and populates an Oracle table.
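A minimal parsing sketch, assuming one record per line, two comma-separated fields, and a target table T(c1, c2) (all of these are assumptions):

create or replace procedure load_clob_csv(p_csv in clob)
is
    l_pos   pls_integer := 1;
    l_nl    pls_integer;
    l_len   pls_integer;
    l_line  varchar2(4000);
    l_comma pls_integer;
begin
    l_len := dbms_lob.getlength(p_csv);
    while l_pos <= l_len loop
        l_nl := dbms_lob.instr(p_csv, chr(10), l_pos);   -- next line break
        if l_nl = 0 then
            l_nl := l_len + 1;                           -- last line, no trailing newline
        end if;
        -- rtrim strips a carriage return in case the file uses CRLF line endings
        l_line  := rtrim(dbms_lob.substr(p_csv, l_nl - l_pos, l_pos), chr(13));
        l_comma := instr(l_line, ',');
        insert into t (c1, c2)
        values (substr(l_line, 1, l_comma - 1),
                substr(l_line, l_comma + 1));
        l_pos := l_nl + 1;
    end loop;
end;
/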
The database was recently upgraded from 10.2.0.4 to 11.2.0.3 and the EM DB Control repository was recreated. If I schedule a SQL Tuning Advisor task for any SQL query, I get this error. I have also tried to drop the SYSMAN user and recreate it, but no luck.
The task result page (the Type / Findings / Recommendations / Rationale / Benefit (%) / Other Statistics / New Explain Plan / Compare Explain Plans columns) shows nothing but: Error ORA-01727: numeric precision specifier is out of range (1 to 38)
I had a task to reschedule database jobs to a different date, as the statistics-gathering jobs were running during business hours.
I used the dbms_job.change procedure for this. On Thursday, i.e. 17th June, I changed a job as follows: exec dbms_job.change(11, NULL, TRUNC(SYSDATE+2), 'TRUNC(SYSDATE+7)');
When I checked the status today, querying user_jobs, I found that the job had started today at 00:57 hrs.
I checked the alert logs as well but couldn't find any errors. How do I proceed with troubleshooting this issue?
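For what it's worth, TRUNC(SYSDATE+2) evaluated on Thursday the 17th resolves to midnight on Saturday the 19th, so a Saturday start is expected; dbms_job only guarantees the job will not start before next_date, and a busy job queue may account for the delay to 00:57. To verify the schedule the job is actually on:

select job, next_date, interval, last_date, broken
from   user_jobs
where  job = 11;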
I want to get all the column values in a table and save them into a text file. Besides UTL_FILE, is there any other method which would give better performance when writing to a text file?
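One alternative (a sketch; the path, table, and column names are placeholders) is to spool from SQL*Plus, which writes the file on the client side without PL/SQL row-by-row calls:

set heading off feedback off pagesize 0 linesize 1000 trimspool on
spool /tmp/t1_dump.txt
select c1 || ',' || c2 || ',' || c3 from t1;
spool off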
I have an Oracle DB, version 11.2, running on Oracle Enterprise Linux 5.9. How do I transfer data from the Oracle DB to a flat file on a Windows server? What I have done so far is use UTL_FILE to create a CSV file on the Oracle server, and I am now attempting to transfer this file.
I was going to use scp or rcp but am unable to get this to work (I was also looking at FileZilla). Another option I can use is FTP, as I have a Unix script which I can run to do this. All this is done through an Oracle package which is run hourly through dbms_scheduler. I have been using sp_host_command to run Unix commands directly from PL/SQL, so I can use this to run a Unix script as a last resort if I can't find an easier way to automate this.
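If the transfer script works from the shell, dbms_scheduler can run it directly as an external job, which keeps the hourly automation inside the database. A sketch (the script path and job name are placeholders):

begin
    dbms_scheduler.create_job(
        job_name        => 'CSV_TRANSFER_JOB',
        job_type        => 'EXECUTABLE',
        job_action      => '/home/oracle/scripts/push_csv.sh',  -- assumed transfer script
        repeat_interval => 'FREQ=HOURLY',
        enabled         => true);
end;
/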
I was wondering if there is any way to know in which tablespace and datafile my table is located. I have exported the table and am about to delete it, as I am partitioning it.
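The data dictionary answers both questions. A sketch (the table name and owner are placeholders, and the second query needs access to the DBA views):

-- the tablespace is in the dictionary directly
select tablespace_name from user_tables where table_name = 'MY_TABLE';

-- the datafile(s) come from mapping the table's extents to files
select distinct e.tablespace_name, f.file_name
from   dba_extents    e,
       dba_data_files f
where  f.file_id      = e.file_id
and    e.segment_name = 'MY_TABLE'
and    e.owner        = 'SCOTT';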
My requirement is to truncate the table and load it with the data present in the file. In the control file, I used the TRUNCATE option as well. But if the file has some invalid data and sqlldr fails, my existing data will be lost. Is there any option with which sqlldr does not TRUNCATE the table in case of a failure?
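sqlldr truncates up front, before it knows whether the load will succeed, so one protective pattern (a sketch; table and column names are assumptions) is to point the control file at a staging table instead:

LOAD DATA
INFILE 'data.csv'
TRUNCATE
INTO TABLE my_table_stage
FIELDS TERMINATED BY ','
(c1, c2, c3)

After sqlldr reports success, a single transaction can truncate the real table and insert from the staging table, so a failed load never touches the live data.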
I am not able to understand what's wrong with the code. I am trying to import data to a table using a CSV file. I exported the data (CSV) from an interactive report, and I am just trying to insert the same data into the table through a process. When I try to do so, it throws an error message saying NO_DATA_FOUND, and the file is not getting inserted into the wwv_flow_files table.
But when I removed the data from the CSV file for the comments field and then tried importing the file, the process worked. I don't understand what the problem with the code is.
I have a sample app set up in my workspace demonstrating this weird problem.
[URL]
Workspace details:
CSV file with the comments field populated - trying to import throws the NO_DATA_FOUND error message
CSV file with the comments field empty - tried importing - this worked
I have a requirement as follows: I need to load data to the target table every Saturday. My source file consists of data for several states, and every week I have to load one particular state's data to the target table. If in the first week I loaded AP data, then in the second week, on Saturday, Karnataka, and so on.
Also, how can I schedule the data load for every Saturday, with the state column value changing automatically each week?
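A sketch of one way to do it, assuming a control table STATE_QUEUE(state_code, load_order, loaded_flag) seeded with the states in the desired order, plus a SOURCE_STAGE staging table and a TARGET_TABLE (all names are assumptions):

create or replace procedure load_next_state
is
    l_state state_queue.state_code%type;
begin
    -- pick the first state in queue order that has not been loaded yet
    select state_code
    into   l_state
    from  (select state_code
           from   state_queue
           where  loaded_flag = 'N'
           order  by load_order)
    where  rownum = 1;

    insert into target_table
    select *
    from   source_stage
    where  state = l_state;

    update state_queue
    set    loaded_flag = 'Y'
    where  state_code = l_state;

    commit;
end;
/

-- run it automatically every Saturday
begin
    dbms_scheduler.create_job(
        job_name        => 'WEEKLY_STATE_LOAD',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'LOAD_NEXT_STATE',
        repeat_interval => 'FREQ=WEEKLY;BYDAY=SAT',
        enabled         => true);
end;
/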