1. I generate the file name through a function.
2. The file name looks like File_DD_MM_YY_FLXXX.
3. FLXXX is the file number: it is 1 for the first run of the day, increments by 1 for each subsequent run, and starts again from 1 the next day.
Right now I am using a sequence and resetting it to 0 at 12 AM. Is this a good approach for the scenario?
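Resetting a sequence at midnight works, but it ties correctness to a scheduler job and leaves a race window around the reset. A common alternative is to derive the number from a small control table keyed by date, so no reset is ever needed. A minimal sketch, assuming you are free to create a table FILE_COUNTER for this purpose:

-- one row per day; the MERGE locks that day's row, serializing concurrent callers
CREATE TABLE file_counter (file_date DATE PRIMARY KEY, last_no NUMBER NOT NULL);

CREATE OR REPLACE FUNCTION next_file_name RETURN VARCHAR2 IS
  v_no NUMBER;
BEGIN
  MERGE INTO file_counter c
  USING (SELECT TRUNC(SYSDATE) AS d FROM dual) s
  ON (c.file_date = s.d)
  WHEN MATCHED THEN UPDATE SET c.last_no = c.last_no + 1
  WHEN NOT MATCHED THEN INSERT (file_date, last_no) VALUES (s.d, 1);

  SELECT last_no INTO v_no FROM file_counter WHERE file_date = TRUNC(SYSDATE);
  RETURN 'File_' || TO_CHAR(SYSDATE, 'DD_MM_YY') || '_FL' || TO_CHAR(v_no);
END;
/

Because the row lock is held until commit, two sessions cannot claim the same number, and the numbering is gap-free per day — something a sequence reset at midnight cannot guarantee if a run straddles the reset.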
We are facing an issue on one of our databases. For the last two days it has been generating a huge number of trace files (around 14,000), consuming about 15 GB of disk space, and the trace files contain no meaningful message to debug:
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (many lines repeating the message above)
The alert log shows one error repeated yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file shows:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
[Code] .......
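The ORA-06508 points at the ORACLE_OCM (Oracle Configuration Manager) collector package being invalid, and the recurring job keeps retrying it and dumping traces. As a hedged first step, you could locate the job named in the alert log and mark it broken so it stops firing while the ORACLE_OCM objects are fixed or removed (job number 2647927 is taken from the alert log above; DBMS_JOB.BROKEN must run as the job's owner):

-- confirm what the failing job runs
SELECT job, log_user, what FROM dba_jobs WHERE job = 2647927;

-- stop it from auto-executing
BEGIN
  DBMS_JOB.BROKEN(2647927, TRUE);
END;
/
COMMIT;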
drop table p;
create table p (qty number(3), beg_no number(5));
insert into p values(5, 110);
insert into p values(8, 786);
drop table s;
create table s (used_no number(5));
insert into s values(111);
insert into s values(113);
insert into s values(791);
Table p: it holds the ticket quantity and the ticket beginning number. So according to the first record, ticket numbers begin at 110 and end at 110 + 5 (beg_no + qty). According to the second record, they begin at 786 and end at 786 + 8 (beg_no + qty). This table can have many records.
Table s: it holds the ticket numbers which have been sold. A sold ticket number will always fall within one of the records in table p, i.e. between beg_no and beg_no + qty.
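If the goal is to map each sold ticket back to the record in p whose range it falls in, a range join does it. A minimal sketch against the tables above (the description treats beg_no + qty as the last valid number, so the BETWEEN is inclusive; change the upper bound to beg_no + qty - 1 if the end is meant to be exclusive):

SELECT s.used_no, p.beg_no, p.qty
FROM   s
JOIN   p ON s.used_no BETWEEN p.beg_no AND p.beg_no + p.qty;

With the sample data this maps 111 and 113 to the (110, 5) record and 791 to the (786, 8) record.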
I'm generating a CSV file using the TEXT_IO utility. The generated file contains one blank line after the last record. Say there are 10 records: open the generated CSV in Notepad and move the cursor down, and after the 10 records the cursor moves down one more line.
How can I avoid that blank line, given that I'm using a cursor in my script? The script is attached below.
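The blank line usually comes from writing a newline (PUT_LINE or NEW_LINE) after the final record. One hedged fix is to emit the newline before every record except the first, so nothing follows the last one. A minimal sketch, assuming an EMP-like table and a file path of your own:

DECLARE
  out_file  TEXT_IO.FILE_TYPE;
  first_row BOOLEAN := TRUE;
BEGIN
  out_file := TEXT_IO.FOPEN('C:\out\data.csv', 'w');
  FOR rec IN (SELECT empno, ename FROM emp) LOOP
    IF NOT first_row THEN
      TEXT_IO.NEW_LINE(out_file);   -- newline only *between* records
    END IF;
    TEXT_IO.PUT(out_file, rec.empno || ',' || rec.ename);
    first_row := FALSE;
  END LOOP;
  TEXT_IO.FCLOSE(out_file);
END;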
I have written a Java class that generates an XML file and places it in a particular location. The class is loaded into Oracle using the loadjava command. When I call the SQL wrapper procedure, the Java class is invoked and the XML file is generated.
But my problem is with the XML header, i.e. the header comes out as:
<?xml version = '1.0' encoding = 'UTF-8'?>
Whereas when I generate the XML file from a plain Java call, instead of from inside Oracle, the header is as follows:
<?xml version="1.0" encoding="UTF-8" ?>
I am thinking this could be because a different XML jar is used on the Oracle side.
select sum(bytes)/(1024*1024*1024) "GB" from dba_segments where owner='JACK';
The above query reports the schema size as 15 GB. When I export the same schema, the generated dump file is only 2 GB. What is the difference between the two scenarios, and why is there such a variation in file size?
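DBA_SEGMENTS reports allocated space — whole extents, including free space below the high-water mark, plus index and LOB segments — while export writes out little more than the row data, so the dump is usually far smaller. A rough cross-check, assuming optimizer statistics on the JACK schema are reasonably fresh:

-- estimated raw row data, which is closer to what export actually writes
SELECT SUM(num_rows * avg_row_len) / (1024*1024*1024) AS est_data_gb
FROM   dba_tables
WHERE  owner = 'JACK';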
Redo generation: I found the following statement in another forum: "Undo segments generate redo data as well, because changes to undo segments are database changes, so they generate redo too."
How can an undo segment generate both redo and undo data?
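The key point is that an undo block is an ordinary database block, so writing undo into it is itself a change that must be protected by redo. You can see this indirectly by sampling your session's 'redo size' statistic around a small update; the delta covers redo for both the table block and the undo block. A hedged sketch (table name assumed):

SELECT n.name, s.value
FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';

UPDATE emp SET ename = ename WHERE ROWNUM = 1;   -- any small change

-- run the same query again; the increase in 'redo size' includes the
-- change vectors written to protect the undo block as well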
I am trying to generate a dynamic control file, because the files I want to load come from different sources and their names change constantly while following a fixed pattern and naming convention.
I can generate the dynamic control file through SQL, but when calling it from a batch file I am unable to pass the file name as a parameter.
All the examples I have found are for UNIX; how do I do it with a batch file on Windows?
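One way to avoid regenerating the control file at all is to keep it static and pass the data file name on the sqlldr command line: the DATA option overrides the INFILE clause in the control file. A hedged sketch of a Windows batch wrapper (user, control file, and paths are assumptions):

@echo off
rem %1 is the incoming data file, e.g.  run_load.bat D:\in\File_01_05_13_FL1.csv
sqlldr userid=scott/tiger control=D:\ctl\load_fixed.ctl data=%1 log=%1.log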
The error I'm getting is "incompatible version number 3.1 in dump file mydmpfile.dmp".
The dump file was exported using Oracle 11.2.0.2.0. I tried downloading and unzipping the 11.2.0.2 Instant Client, added it to the PATH variable on Windows, and re-ran the script, but it didn't work.
How should I go about importing this dump file without reinstalling the whole database?
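An "incompatible version number" error on a dump typically means either that a Data Pump (expdp) file is being read by the legacy imp client, or that the dump is being imported into a database older than the one that produced it (ORA-39142); in the latter case the fix is to re-export with a VERSION parameter matching the target. Note that Data Pump runs inside the server, so swapping client software won't help: impdp just connects to the instance, and the dump file must sit in a DIRECTORY object on the database server. A hedged sketch (connect string, directory, and file names are assumptions):

impdp scott/tiger@targetdb DIRECTORY=DATA_PUMP_DIR DUMPFILE=mydmpfile.dmp LOGFILE=imp.log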
I have a CSV file extracted from a mainframe which has to be loaded into Oracle using the SQL*Loader utility. The numbers are in formats like +0000003333 and -0000003232.44.
I have to convert them to 3333 and -3232.44 and insert them into the table.
I have used syntax like:
LOAD DATA ... APPEND INTO TABLE ... (t_num "to_number(:t_num, '99999.999')")
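The original attempt had the quoting tangled, and the format mask isn't actually needed: plain TO_NUMBER with no mask accepts a leading sign, leading zeros, and a decimal point, so '+0000003333' and '-0000003232.44' convert directly. A hedged, fuller control-file sketch (table, file name, and delimiter are assumptions):

LOAD DATA
INFILE 'mainframe.csv'
APPEND INTO TABLE t
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
  t_num "TO_NUMBER(:t_num)"   -- handles +0000003333 and -0000003232.44 as-is
)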
What's the difference between checkpoint_change# and controlfile_change#? What is checkpoint_change# used for — is it used for recovery? What is controlfile_change# used for, and when does it increase?
SQL> select controlfile_sequence# from v$database;
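All three values sit side by side in v$database, so you can watch which ones move: checkpoint_change# is the SCN stamped at the last full checkpoint (recovery compares it with the datafile header SCNs to decide whether redo must be applied, and from where), while controlfile_sequence# increments with control-file transactions; controlfile_change# is the SCN the control file records (per the documentation it is primarily meaningful for a backup control file). A quick hedged check:

select checkpoint_change#, controlfile_change#, controlfile_sequence# from v$database;
alter system checkpoint;
select checkpoint_change#, controlfile_change#, controlfile_sequence# from v$database;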
I need to export a large number of records from a SELECT into a text file — about 2 million records. I can do it with PL/SQL (see below), but the process takes far too long. How can I export to a text file faster?
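Row-by-row UTL_FILE or DBMS_OUTPUT writes are usually the bottleneck. Two common speedups are spooling from SQL*Plus with the output options tuned, or bulk-collecting in PL/SQL and writing in batches. A hedged SQL*Plus sketch (table and columns assumed):

SET TERMOUT OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 400 TRIMSPOOL ON
SPOOL /tmp/big_table.txt
SELECT col1 || ',' || col2 || ',' || col3 FROM big_table;
SPOOL OFF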
We are facing a different issue in our database. Since last night, the archived logs have been generated with a 5-digit sequence number, but it is supposed to be 6 digits. As a result we are not able to apply the logs at the DR location.
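One thing worth checking is LOG_ARCHIVE_FORMAT on both sites: lowercase %s embeds the log sequence number unpadded (so the width follows the number), while uppercase %S zero-pads it to a fixed width, and a mismatch between primary and DR naming can break scripted log application. A RESETLOGS, which restarts sequence numbers from 1, can also shorten names when the format uses unpadded %s. A hedged illustration (the exact format string is an assumption):

SHOW PARAMETER log_archive_format

-- %t/%s unpadded, %T/%S zero-padded; %r (resetlogs id) is mandatory in 10g+
ALTER SYSTEM SET log_archive_format = 'arch_%T_%S_%r.arc' SCOPE = SPFILE;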
I'm trying to load a CSV file into an external table, and when I select from the table the result is 0 rows.
The log file has the following errors:
KUP-04021: field formatting error for field DEPTNO
KUP-04023: field start is after end of record
KUP-04101: record 1 rejected in file /usr/tmpclie.csv
error processing column EMPNO in row 2 for datafile /usr/tmpclie.csv
ORA-01722: invalid number
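KUP-04023 ("field start is after end of record") usually means the access parameters describe fields that don't line up with the actual records — for example positional field definitions against a comma-delimited file, or a missing field delimiter. A hedged skeleton to compare against (directory object, columns, and delimiter are assumptions based on the file name above):

CREATE TABLE emp_ext (
  empno  NUMBER,
  ename  VARCHAR2(20),
  deptno NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY tmp_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('tmpclie.csv')
)
REJECT LIMIT UNLIMITED;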
I need to write to a Word document from PL/SQL; these are letters which need to be printed. Can I use UTL_FILE for this, and will I be able to control the font etc. from PL/SQL?
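UTL_FILE only writes bytes, so it cannot drive Word formatting by itself — but Word opens RTF, and RTF is plain text you can emit from PL/SQL with font control embedded. A minimal hedged sketch (the directory object LETTER_DIR is an assumption):

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('LETTER_DIR', 'letter.rtf', 'w');
  -- minimal RTF: declare a font table, then a paragraph in Arial 12pt (\fs24 = 24 half-points)
  UTL_FILE.PUT_LINE(f, '{\rtf1\ansi{\fonttbl{\f0 Arial;}}');
  UTL_FILE.PUT_LINE(f, '\f0\fs24 Dear Customer,\par');
  UTL_FILE.PUT_LINE(f, 'Please find your letter below.\par}');
  UTL_FILE.FCLOSE(f);
END;
/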
11gR2, HP-UX, 64-bit. Over the last week we have seen a tremendous increase in archive-log growth — almost 110 GB per day (db size = 600 GB). I don't think that's usual. The alert log is clean and we see no alerts. A Remedy application is built on this database, and it creates tables on the fly with LOB columns and indexes on them. As a first step, should I disable logging for the indexes on the LOB columns?
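Before disabling logging anywhere, it may be worth confirming which segments are actually generating the changes. A hedged query over v$segment_statistics (available in 11g):

SELECT *
FROM  (SELECT owner, object_name, object_type, value AS block_changes
       FROM   v$segment_statistics
       WHERE  statistic_name = 'db block changes'
       ORDER  BY value DESC)
WHERE rownum <= 10;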
Database: Oracle 8i; Application Server: Oracle 9i AS; Developer Suite: Oracle 6i (Forms & Reports).
I have created some character-mode reports in Oracle Reports 6i. When the reports are run from my ERP (built on Oracle 6i), they usually take a long time to generate on the server. Sometimes the ERP hangs because of busy report generation, and we then have to kill some processes to get the character reports created on an emergency basis. What is the likely reason for the slow generation of a report (character file)?
On weekends we have too many archive logs generated. I collected a week's worth of data and found that the average archive log generation from Monday to Friday is 7 files per day, but on Saturday and Sunday the average is 60 files and FG1 fills up. On weekends we run every type of backup — incremental, archival, and logical — and on Sunday we run a full physical backup.
What is the reason for so many archive log files being generated on weekends? Is it due to the hot and logical backups, and if so, how?
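RMAN incremental and full backups read data and generate little redo themselves; user-managed hot backups (BEGIN BACKUP mode), by contrast, force whole-block logging while active, and weekend batch or maintenance jobs are another common culprit. Counting log switches per day helps narrow the window; a hedged sketch:

SELECT TRUNC(first_time) AS day, COUNT(*) AS logs
FROM   v$log_history
GROUP  BY TRUNC(first_time)
ORDER  BY day;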
We need to generate barcodes through Oracle Forms and print them on a Zebra printer.
How do we generate a barcode using Oracle Forms, which software should we use to generate it, and how can we print a barcode label carrying customized information alongside the barcode?
I have Oracle 9i running on HP-UX. I would like to find out how much redo we generate in a given period of time. Is there a script I can use to get this information?
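On 9i you can sample the cumulative 'redo size' statistic from v$sysstat at the start and end of the interval and take the difference, or count log switches per hour from v$log_history. A hedged sketch:

-- cumulative redo since instance startup; sample twice and subtract
SELECT value AS redo_bytes FROM v$sysstat WHERE name = 'redo size';

-- or: log switches per hour (multiply by the online log size for a rough volume)
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;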
I'm on 11.2.0.3, and generation of the ASH report is very slow, while AWR and ADDM reports are generated quickly. To understand what is happening, I checked the wait events on the session executing the ASH report and found it waiting 99% of the time on "control file sequential read". Is there any way to make ASH report generation quick? Why does the generation need to access the control file?
I have a base table named EMP_MASTER. The CREATE statement goes something like this:
CREATE TABLE EMP_MASTER (ST_CODE NUMBER, EMP_CODE NUMBER);
We would like to insert records such that there are 10 st_codes, and for each st_code we insert 100 records.
For example: for st_code 1 the emp_code must run from 1 to 100, then for st_code 2 from 101 to 200, and so on, up to st_code 10, whose emp_code range will be 901 to 1000.
We need something like a procedure or a PL/SQL block (declare ... begin ... end); a sketch follows below.
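A hedged sketch of both options — an anonymous PL/SQL block with nested loops, and an equivalent single set-based INSERT (table definition as above):

BEGIN
  FOR st IN 1 .. 10 LOOP
    FOR e IN 1 .. 100 LOOP
      INSERT INTO emp_master (st_code, emp_code)
      VALUES (st, (st - 1) * 100 + e);   -- st_code 1 -> 1..100, st_code 2 -> 101..200, ...
    END LOOP;
  END LOOP;
  COMMIT;
END;
/

-- equivalent single statement (10g+)
INSERT INTO emp_master (st_code, emp_code)
SELECT CEIL(LEVEL / 100), LEVEL FROM dual CONNECT BY LEVEL <= 1000;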