Is it possible to upload very large files into Oracle tables, for example a 1-2 gigabyte video file or even more? In other words, can Oracle be used as a file server to upload and store very large files?
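For reference, BLOB columns are typically used for this, and the documented per-LOB limit is (4 GB - 1) * the database block size, so 1-2 GB files fit. A minimal sketch of such a table, assuming 11g SecureFiles are available; the table and tablespace names are placeholders:

-- Sketch only: media_store and media_ts are placeholder names;
-- SECUREFILE needs 11g and an ASSM tablespace, otherwise drop the keyword.
CREATE TABLE media_store (
  id        NUMBER PRIMARY KEY,
  file_name VARCHAR2(255),
  content   BLOB
)
LOB (content) STORE AS SECUREFILE (
  TABLESPACE media_ts
  DISABLE STORAGE IN ROW
);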
I have this table structure: create table file (id number, media_file blob). How do I download the pdf or jpg files from this table to a computer, for example to C:\myfiles?
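A hedged sketch of one way to write the stored BLOBs back out to the file system; it assumes a directory object has been created to point at C:\myfiles on the database server, and the id, file name and chunk size are just example values:

-- Assumes: CREATE DIRECTORY export_dir AS 'C:\myfiles';  (path on the DB server)
DECLARE
  v_blob   BLOB;
  v_file   UTL_FILE.FILE_TYPE;
  v_buffer RAW(32767);
  v_amount BINARY_INTEGER := 32767;
  v_pos    INTEGER := 1;
  v_len    INTEGER;
BEGIN
  SELECT media_file INTO v_blob FROM file WHERE id = 1;
  v_len  := DBMS_LOB.GETLENGTH(v_blob);
  v_file := UTL_FILE.FOPEN('EXPORT_DIR', 'file_1.pdf', 'wb', 32767);
  WHILE v_pos <= v_len LOOP
    DBMS_LOB.READ(v_blob, v_amount, v_pos, v_buffer);   -- read next chunk
    UTL_FILE.PUT_RAW(v_file, v_buffer, TRUE);           -- write and flush
    v_pos := v_pos + v_amount;
  END LOOP;
  UTL_FILE.FCLOSE(v_file);
END;
/

Note that UTL_FILE writes on the database server, not the client; for a client-side copy, a tool such as SQL Developer's BLOB export is usually simpler.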
I have to clean up data from our tables (production environment) that contain millions of rows. The question is: apart from partitioned tables, what alternative solution does Oracle recommend?
Should we delete the rows using a PL/SQL block with a cursor, or export/import the whole database and, for the tables from which we want to remove old rows, use the QUERY option of the Data Pump utility?
I have used both approaches and have to admit that the Data Pump solution is much faster than the deletion, which suffers from disk I/O. The question, again, is which of these two methods is more reliable and less risky for the health of the database.
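For what it's worth, a hedged sketch of the two non-delete routes, with placeholder names (big_tab, dp_dir, and the date predicate are examples only): the Data Pump QUERY filter, which can be applied on either expdp or impdp, and a CTAS rebuild that keeps only the rows to retain.

# keep_rows.par, run as: expdp scott/tiger parfile=keep_rows.par  (names are placeholders)
DIRECTORY=dp_dir
DUMPFILE=big_tab.dmp
TABLES=big_tab
QUERY=big_tab:"WHERE created_date >= TO_DATE('2011-01-01','YYYY-MM-DD')"

-- CTAS alternative: rebuild with only the rows to keep, then swap the tables
CREATE TABLE big_tab_keep AS
  SELECT * FROM big_tab
  WHERE  created_date >= TO_DATE('2011-01-01','YYYY-MM-DD');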
I have used webutil_file_transfer.Client_To_AS_with_progress to upload files from the client to the application server using Forms 10g. However, now I want to save the file on the database server's file system rather than uploading it into the database as a BLOB. That is, I want to transfer the file from the client to a folder available on the database server. I was wondering about this because there is very little documentation available on WEBUTIL.
We have Oracle 10g (10.2.0.4) RAC on AIX 5.3, with 3 RAC instances on each node. Since Oct 9th, one of the instances on node 1 has been generating very large trace files in udump. The largest trace file takes up 3 GB, but the $ORACLE_BASE directory has only about 15 GB, so every so often we have to delete some trace files to release space.
Here is part of the alert log from when this issue happens:
Sun Oct 9 23:18:15 2011
Errors in file /oracle/app/oracle/admin/bzywk/udump/bzywk1_ora_3166258.trc:
ORA-00600: internal error code, arguments: [17087], [0x70000010DF9F580], [], [], [], [], [], []
Sun Oct 9 23:18:16 2011
Trace dumping is performing id=[cdmp_20111009231816]
[code]....
I checked some of the trace files and found that they all contain a huge amount of process information.
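While the root cause of the ORA-00600 [17087] is something for Oracle Support, one stop-gap that keeps udump from filling $ORACLE_BASE is capping the size of individual trace files; the 1 GB value below is only an example, and it assumes the RAC instances use an spfile:

-- Example value; applies to all instances via SID='*'
ALTER SYSTEM SET max_dump_file_size = '1024M' SCOPE=BOTH SID='*';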
I am pretty new to APEX and was hoping to accomplish this with native functionality.
In the latest APEX there is nice functionality that creates a wizard for you for a single table. I want a single wizard where I can pick a table and then have the ability to map my fields for insert/update.
I have a large table with 450 columns, of which we are only using about 170, and our DB block size is 8K. The DBA informed us that row chaining is happening in the database. My question is: if we only have data in 170 columns, why is row chaining happening?
The DBA told us to remove the unnecessary columns. Do those empty columns have any impact on the chaining? If we increase the DB block size to 32K, will that resolve the issue?
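One way to measure how much chaining is actually there before dropping columns or changing the block size, as a minimal sketch (large_tab is a placeholder table name; it assumes the standard utlchain.sql script is available to create the CHAINED_ROWS table):

@?/rdbms/admin/utlchain.sql
ANALYZE TABLE large_tab LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows WHERE table_name = 'LARGE_TAB';
-- instance-wide indicator of chained/migrated row fetches since startup
SELECT value FROM v$sysstat WHERE name = 'table fetch continued row';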
I am considering the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether writing the export dump files directly to tape is possible, and if so, whether the data would be accessible later if needed.
11.2.0.3. This is for a build; we are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects, which allows us to test the build all the way through. It's our process. This user has some tables with several thousand partitions. I ran a 10046 trace and Oracle is using PL/SQL loops to do DML against the data dictionary. Is there any way to speed this up? I am going to turn off the recycle bin during the build and turn it back on afterwards; is there anything else I can do? Right now I just issue 'drop user cascade'. Part of it is the weak hardware we have in the development environment: it takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this) and we do fairly frequent builds. I can't change the build process; my only option is to try to make this part run a little faster.
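As a sketch of one possible variation (BUILD_USER is a placeholder, and whether it is faster will depend on the hardware): drop the heavily partitioned tables individually with PURGE before the final drop user cascade, so the recycle bin is bypassed and the dictionary work is committed table by table.

ALTER SESSION SET recyclebin = OFF;
BEGIN
  FOR t IN (SELECT table_name
              FROM dba_part_tables
             WHERE owner = 'BUILD_USER') LOOP
    EXECUTE IMMEDIATE 'DROP TABLE BUILD_USER.' || t.table_name ||
                      ' CASCADE CONSTRAINTS PURGE';
  END LOOP;
END;
/
DROP USER build_user CASCADE;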
Create small function-based indexes for special cases in very large tables.
When a column has one value in 99% of the records and another, rarer value that has to be searched for, it is possible to create an index that maps the common value to NULL. The index will be small and rebuilds will be fast.
Example
create index vh_tst_decode_ind_if1 on vh_tst_decode_ind (decode(S,'I','I',null),style)
It is possible to make the index more selective when the key is updated and there are enough records to create more levels in the B-tree.
create index vh_tst_decode_ind_if3 on vh_tst_decode_ind (decode(S,'I','I',null), decode(S,'I',style,null) )
The records can then be accessed like this:
select --+ index(vh_tst_decode_ind vh_tst_decode_ind_if3)
       style, count(*)
from   vh_tst_decode_ind
where  decode(s,'I','I',null) = 'I'
group  by style;
Anyway, I've loaded 5 .csv files through an external table, and afterwards I tried to delete them.
But I get this error: "Cannot delete 'filename': It is being used by another person or program".
I closed Oracle SQL Developer and tried deleting them manually again, and the result was the same.
I tried restarting, deleted one .csv and it worked, but when I open SQL Developer and try to delete the other files I can't do it.
The question is: can files that were used by external tables not be deleted while SQL Developer is running?
The thing is that I've created a stored procedure that deletes the files, and obviously it can't work this way. So, as it stands, I would have to restart the computer after every csv load before I can delete the file.
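A possible workaround, sketched only (EXT_CSV, CSV_DIR and the file names are assumed names): repoint the external table at a small dummy file so the database releases its handle on the loaded csv, then remove the csv through the same directory object from inside the procedure.

BEGIN
  -- release the handle on the real csv by pointing the external table elsewhere
  EXECUTE IMMEDIATE 'ALTER TABLE ext_csv LOCATION (''dummy.csv'')';
  -- now the loaded file can (hopefully) be removed
  UTL_FILE.FREMOVE('CSV_DIR', 'load_file_1.csv');
END;
/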
I created a music database, and I'm having trouble inserting the audio, video, and lyrics (.doc) into their respective tables. I searched through the forums and found some example code, but I'm not sure how to modify it to fit my purposes.
What I need is a procedure that can insert a complete record into the track table (including an .mp3 file for each row), one that can insert a record into the lyrics table (including a .doc file for each row), and a procedure that can insert a single record into the Video table (including an .mv4 file).
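A minimal sketch of what one of these procedures could look like, under the assumption that the media files sit in a directory object (MEDIA_DIR is an assumed name) and that the track table has an id, a title and a BLOB column called audio; the lyrics and video procedures would follow the same pattern against their own tables:

CREATE OR REPLACE PROCEDURE insert_track (
  p_id        IN NUMBER,
  p_title     IN VARCHAR2,
  p_file_name IN VARCHAR2          -- e.g. 'song01.mp3', located in MEDIA_DIR
) AS
  v_bfile BFILE := BFILENAME('MEDIA_DIR', p_file_name);
  v_blob  BLOB;
BEGIN
  INSERT INTO track (id, title, audio)
  VALUES (p_id, p_title, EMPTY_BLOB())
  RETURNING audio INTO v_blob;

  DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(v_blob, v_bfile, DBMS_LOB.GETLENGTH(v_bfile));
  DBMS_LOB.CLOSE(v_bfile);
  COMMIT;
END insert_track;
/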
I am trying to spool data from tables into flat files. I am using the following scripts to accomplish this:
1. A cmd file (Windows) that calls a SQL file.
2. The SQL file, which generates another query file at run time depending on the table name passed to it.
3. The run-time query file, which executes the final query and spools the data into a pipe-delimited txt file.
For example:
Actual command passed: C:\Spool_utility\spool_utility TABLE_NAME
set echo off
set newpage 0
set feedback off
set linesize 32767
set pagesize 0
[code]........
The above script generates a table_name.sql file with the actual table name at run time, which then gets executed, and the output is written to the table_name.txt file.
This works perfectly fine. But the issue is that when someone passes a wrong table name, or there is an actual run-time error while executing the query, the error details themselves get written to the spool file.
For example, if I do the following just to generate an error and execute it from the command line, the query fails and the error is written to the spool file, but at the command prompt where I executed the command I do not see any error and the process seems to have run perfectly well:
set xxx on xxx off as above
spool &1.sql;
Prompt Select * from &1 where rownum><10  -- this will cause the issue
spool off
set termout on
@&1
EXIT
Example of the spool file generated:
from table_name WHERE rownum><=10
*
ERROR at line 62:
ORA-00936: missing expression
My question is: is there any way I can capture this runtime error, return it to my calling SQL script spool_utility.sql, and then propagate it to the calling command file so I can do some tasks, e.g. removing the spool file and writing the actual error to a log file? Basically, is there any way to know at the OS calling level that the entire spooling operation was unsuccessful?
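A sketch of the usual pattern for this, assuming the caller is a Windows cmd file and that the connect string and file names below stand in for the real ones: make SQL*Plus exit with a non-zero status on any error, and test ERRORLEVEL in the cmd file.

-- in spool_utility.sql, before the spool/@ calls:
whenever sqlerror exit failure
whenever oserror  exit failure

:: in the calling cmd file (names are placeholders):
sqlplus -s scott/tiger @spool_utility.sql %1
if errorlevel 1 (
    del %1.txt
    echo %date% %time% spool of %1 failed >> spool_errors.log
)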
I want to load data into multiple tables from many files, based on the value of the first column, which is a FILLER field. I am trying to test this scenario with two Oracle tables with similar definitions, loading one record into each table using the WHEN/POSITION keywords. For this I added the first column as a reference column in the data, which I have inline in the ctl file itself.
The 1st table loaded with the 1st record, but the 2nd record is not loading. Have I missed anything with the WHEN/POSITION keywords?
This is the error in the log file for the 2nd table (WD1):
Record 2: Rejected - Error on table WD1, column TAB.
ORA-01841: (full) year must be between -4713 and +9999, and not be 0

Table WD1:
  0 Rows successfully loaded.
  1 Row not loaded due to data errors.
  1 Row not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.
[code]....
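For reference, a minimal control file sketch of the pattern being tested; the table names WD1/WD2, the column layout, the 3-character tag and the date format are assumptions, not the actual file. The POSITION(1) on the first field of each INTO TABLE matters: with delimited data, scanning otherwise continues from where the previous INTO TABLE stopped, which is a common reason the second table's fields are misread.

LOAD DATA
INFILE *
APPEND
INTO TABLE wd1
  WHEN (1:3) = 'WD1'
  FIELDS TERMINATED BY ','
  TRAILING NULLCOLS
  (rec_type  FILLER POSITION(1),
   col1      CHAR,
   load_date DATE "YYYY-MM-DD")
INTO TABLE wd2
  WHEN (1:3) = 'WD2'
  FIELDS TERMINATED BY ','
  TRAILING NULLCOLS
  (rec_type  FILLER POSITION(1),
   col1      CHAR,
   load_date DATE "YYYY-MM-DD")
BEGINDATA
WD1,abc,2011-10-09
WD2,def,2011-10-10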
I have a bunch of data in 50 Excel files. I need to load these 50 files into 50 different tables, and I would like to do this in one script. I went through the forum for information; people suggested creating a shell script, listing the sqlldr command multiple times, and so on.
I need some clarity on what the best approach is. If it is shell scripting, please provide the shell script and instructions to execute it; I am new to shell scripting.
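A minimal shell sketch of that approach, under these assumptions: the Excel sheets have been saved as csv, each file name matches its target table (emp.csv loads EMP via emp.ctl), one control file exists per table, and the connect string below is a placeholder.

#!/bin/sh
# Run sqlldr once per csv file in the current directory.
CONNECT="scott/tiger@orcl"            # placeholder connect string

for f in *.csv
do
    tab=`basename $f .csv`
    sqlldr userid=$CONNECT control=${tab}.ctl data=$f log=${tab}.log bad=${tab}.bad
    if [ $? -ne 0 ]; then
        echo "load failed for $f, see ${tab}.log" >> load_errors.log
    fi
done

To run it: save it as load_all.sh in the directory holding the csv and ctl files, then chmod +x load_all.sh and ./load_all.sh.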
I have 780 (12*65) csv files generated from 65 databases. Now I have to load these 780 csv files into 12 tables created in my database for monitoring and reporting purposes. To call SQL*Loader I am planning to create 780 lines like the one below.
We know creating 780 control files would be a difficult task, so I have created only 12 control files. Is there any mechanism in SQL*Loader to pass a variable (planned to be declared on the sqlldr command line) to the INFILE clause, like below?
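As a hedged sketch of what seems to be wanted: the sqlldr DATA= command-line parameter overrides the INFILE named in the control file, so the 12 control files can be reused for all 780 files. The connect string and file names below are placeholders.

sqlldr userid=scott/tiger@orcl control=table1.ctl data=db01_table1.csv log=db01_table1.log
sqlldr userid=scott/tiger@orcl control=table1.ctl data=db02_table1.csv log=db02_table1.log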
The situation is the following: I have a directory named /dat/global/stock/ and inside it I get files with differing names, for example abcdef.112, dfgrt.2, ...
Here I want to load these files one by one into an external table and generate one more file based on some enrichment.
Step 1: take the first file and load it into the external table.
Step 2: enrichment.
Step 3: file generation.
Now the problem I am facing is that I usually get about 1000 files in that directory, so I need to pick up the files one by one and then put each one into another directory. How can I get the files one by one and generate the output file using the Oracle loader?
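One possible shape for this, a sketch only: loop over the files from the shell, repoint the external table at each file with ALTER TABLE ... LOCATION, run the enrichment, then move the file aside. The connect string, the external table STOCK_EXT, the enrich_and_generate procedure and the done-directory are all assumed names.

#!/bin/sh
# process each stock file one at a time (sketch only)
for f in /dat/global/stock/*
do
  fname=`basename $f`
  sqlplus -s scott/tiger@orcl <<EOF
alter table stock_ext location ('$fname');
exec enrich_and_generate
exit
EOF
  mv $f /dat/global/stock_done/
done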
I have written makefiles that compile .pc files on Unix. This was for several projects that use an oralib source code directory. Just running proc on one target .pc file works fine on Unix. I am trying to use proc (Oracle 10.2.0) on Windows and I keep getting:
Quote: unable to open include file #include <stdio.h>, and the same for the other C library headers.
I am doing all development under Cygwin, so that I can write a makefile just like on Unix instead of using nmake. All C library headers are in /usr/include. When I run proc on Solaris like this:
proc program.pc
there are no problems, and I do get program.c.
However, on Windows I get the error message above. I have tried proc include=/user/include program.pc and proc include=/user/include parse=full program.pc, but I still get the same error message.
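For comparison, a hedged sketch of the options that usually matter here (the header path is a placeholder for wherever the Cygwin or compiler headers really live): system headers are normally given to the precompiler through sys_include, either on the command line or in $ORACLE_HOME/precomp/admin/pcscfg.cfg, and parse=none stops proc from parsing the C headers at all so that only the EXEC SQL sections are processed.

rem placeholder paths; adjust to the real header location
proc sys_include=c:\cygwin\usr\include include=..\oralib program.pc

rem or skip header parsing entirely
proc parse=none include=..\oralib program.pc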
I need to upload data from an XML file into an Oracle database table. I have looked at some websites where the solutions load the entire XML file into a LOB column in a table, but I want only the data to be extracted from the XML file.
ERROR at line 1:
ORA-22288: file or LOB operation FILEOPEN failed
No such file or directory
ORA-06512: at "SYS.DBMS_LOB", line 523
ORA-06512: at line 10
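The ORA-22288 itself usually just means the directory object's path or the file name passed to BFILENAME does not match what is on the database server (the file name may be case-sensitive there). Setting that aside, a minimal sketch of extracting rows instead of storing the whole document, assuming a directory object XML_DIR, a file emp.xml shaped like <rows><row><id>..</id><name>..</name></row></rows>, and a staging table EMP_STG (all assumed names):

INSERT INTO emp_stg (id, name)
SELECT x.id, x.name
FROM   XMLTABLE('/rows/row'
         PASSING XMLTYPE(BFILENAME('XML_DIR','emp.xml'),
                         NLS_CHARSET_ID('AL32UTF8'))
         COLUMNS id   NUMBER        PATH 'id',
                 name VARCHAR2(50)  PATH 'name') x;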