I'm generating a CSV file using the TEXT_IO utility. The generated file contains one blank line after the last record. Say there are 10 records: open the generated CSV file in Notepad and move the cursor down; after the 10 records the cursor moves down one more time.
How can I avoid that blank line? I'm using a cursor in my script; the script is attached below.
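The trailing blank line comes from terminating the last record with a newline. A minimal sketch of one way to avoid it (the cursor and columns here are placeholders, since the original script is not shown): write the line terminator before each record except the first, so the file never ends with a newline.

PROCEDURE write_csv (p_file_name IN VARCHAR2) IS
  out_file  text_io.file_type;
  v_first   BOOLEAN := TRUE;
BEGIN
  out_file := text_io.fopen (p_file_name, 'w');
  FOR rec IN (SELECT empno, ename FROM emp) LOOP   -- placeholder cursor
    IF NOT v_first THEN
      text_io.new_line (out_file);                 -- terminate the PREVIOUS record
    END IF;
    text_io.put (out_file, rec.empno || ',' || rec.ename);
    v_first := FALSE;
  END LOOP;
  text_io.fclose (out_file);
END;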
1. I generate the file name through a function.
2. The file name looks like File_DD_MM_YY_FLXXX.
3. FLXXX is the file number: it is 1 for the first run of the day, incremented by 1 for each new run, and it starts again from 1 the next day.
Right now I am using a sequence and resetting it to 0 at 12 AM. Is this a good approach for the scenario?
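A sequence that has to be reset every night is fragile. One alternative, sketched here under assumed names (file_run_log, next_run_no), is to derive the run number from a small log table keyed by date, so it restarts automatically each day; concurrent runs would need a unique constraint on (run_date, run_no) or SELECT ... FOR UPDATE to stay safe.

CREATE TABLE file_run_log (run_date DATE, run_no NUMBER);

CREATE OR REPLACE FUNCTION next_run_no RETURN NUMBER IS
  v_run_no NUMBER;
BEGIN
  SELECT NVL (MAX (run_no), 0) + 1
    INTO v_run_no
    FROM file_run_log
   WHERE run_date = TRUNC (SYSDATE);          -- restarts at 1 each day

  INSERT INTO file_run_log (run_date, run_no)
  VALUES (TRUNC (SYSDATE), v_run_no);

  RETURN v_run_no;
END next_run_no;
/

-- Example file name built from it:
-- 'File_' || TO_CHAR (SYSDATE, 'DD_MM_YY') || '_FL' || LPAD (next_run_no, 3, '0')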
I have written a Java class which generates and places the XML file in a particular location. This Java class is loaded into Oracle using the loadjava command. When I call the SQL wrapper procedure, the Java class is invoked and the XML file is generated.
My problem is with the XML header: inside the database it comes out as
<?xml version = '1.0' encoding = 'UTF-8'?>
whereas when I generate the XML file with a plain Java call, outside Oracle, the header is as follows:
<?xml version="1.0" encoding="UTF-8" ?>
I suspect the reason is that a different XML parser JAR is being used on the Oracle side.
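One way to check that theory (this is only a diagnostic suggestion, not a confirmed cause) is to list the XML-related Java classes actually loaded in the database and compare them with the classpath used for the standalone run:

SELECT owner, name
  FROM dba_java_classes
 WHERE LOWER (name) LIKE '%xml%'
 ORDER BY owner, name;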
select sum(bytes)/(1024*1024*1024) "GB" from dba_segments where owner='JACK';
The above SELECT query reports the schema size as 15 GB, but when I export the same schema the generated dump file is only 2 GB. What is the difference between the two figures, and why is there such a variation in size?
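The 15 GB figure is allocated segment space (including indexes, LOB segments and free space below the high-water mark), while the dump file holds only the row data. A rough comparison, assuming statistics are up to date:

-- space allocated to the schema's segments
SELECT ROUND (SUM (bytes) / 1024 / 1024 / 1024, 2) AS allocated_gb
  FROM dba_segments
 WHERE owner = 'JACK';

-- approximate size of the actual row data (relies on fresh optimizer statistics)
SELECT ROUND (SUM (num_rows * avg_row_len) / 1024 / 1024 / 1024, 2) AS approx_data_gb
  FROM dba_tables
 WHERE owner = 'JACK';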
Regarding redo generation: I found the following statement in another forum: "An undo segment also generates redo data, because changes to the undo segment are themselves database changes, so they generate redo as well."
How can an undo segment generate both redo and undo data?
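Changes to undo blocks are block changes like any other, so they are protected by redo. A quick way to see both streams grow for your own session is to run a DML statement and compare these statistics before and after:

SELECT sn.name, st.value
  FROM v$mystat   st
  JOIN v$statname sn ON sn.statistic# = st.statistic#
 WHERE sn.name IN ('redo size', 'undo change vector size');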
I have created the procedure below and call it from a WHEN-BUTTON-PRESSED trigger. The problem is that when I cancel the file selection in the file-open dialog box, it raises an exception.
PROCEDURE TEST IS
  buffer_lines      client_text_io.file_type;
  v_outputstr       VARCHAR2 (32767);
  p_delimiter       VARCHAR2 (10) := '","';
  v_transaction_no  VARCHAR2 (10);
BEGIN
[code].......
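A minimal sketch of the usual fix, assuming the dialog comes from GET_FILE_NAME (or CLIENT_GET_FILE_NAME with WebUtil): both return NULL when the user presses Cancel, so test for NULL before calling CLIENT_TEXT_IO.FOPEN instead of letting the open fail.

PROCEDURE test IS
  buffer_lines  client_text_io.file_type;
  v_file_name   VARCHAR2 (260);
BEGIN
  v_file_name := get_file_name (file_filter => 'CSV Files (*.csv)|*.csv|');

  IF v_file_name IS NULL THEN
    message ('No file selected.');   -- user cancelled: exit quietly
    RETURN;
  END IF;

  buffer_lines := client_text_io.fopen (v_file_name, 'r');
  -- ... read and process the file as before ...
  client_text_io.fclose (buffer_lines);
END;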
We are facing an issue on one of our databases. It has been generating a large number of trace files (about 14,000) over the last two days, consuming around 15 GB of disk space, and the content of the trace files has no meaningful message to debug:
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (Many lines with above message)
The alert log had one error repeated yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file contains the error:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
[Code] .......
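The ORA-04063/ORA-06508 pair points at an invalid ORACLE_OCM package being run by the scheduled job. Listing its status and compilation errors (and, if OCM is not used, disabling that job) would be a reasonable first step:

SELECT object_name, object_type, status
  FROM dba_objects
 WHERE owner = 'ORACLE_OCM';

SELECT name, type, line, text
  FROM dba_errors
 WHERE owner = 'ORACLE_OCM'
 ORDER BY name, sequence;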
I am trying to generate a dynamic control file, as the files I want to upload come from different sources and their names change constantly, though they follow a fixed pattern and naming convention.
I am able to generate the dynamic control file through SQL, but when calling it from a batch file I am unable to send the file name as a parameter.
All the examples I have found are for UNIX; how can I do it with a batch file on Windows?
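A sketch of one approach, with hypothetical names (gen_ctl.sql, target_table, %DATAFILE%, scott/tiger): let a SQL*Plus script accept the data-file name as positional parameter &1 and spool the control file, and have the batch file pass the name on the command line, e.g. sqlplus -s scott/tiger @gen_ctl.sql %DATAFILE% followed by sqlldr scott/tiger control=load_%DATAFILE%.ctl.

-- gen_ctl.sql
SET VERIFY OFF FEEDBACK OFF PAGESIZE 0 HEADING OFF TRIMSPOOL ON
SPOOL load_&1..ctl
SELECT 'LOAD DATA'                                               FROM dual
UNION ALL SELECT 'INFILE ''' || '&1' || ''''                     FROM dual
UNION ALL SELECT 'APPEND INTO TABLE target_table'                FROM dual
UNION ALL SELECT 'FIELDS TERMINATED BY '','' (col1, col2, col3)' FROM dual;
SPOOL OFF
EXIT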
We need to generate a barcode through Oracle Forms and print it on a Zebra printer.
How can we generate the barcode from Oracle Forms, which software should we use to generate it, and how can we print a barcode label with customized information alongside the barcode?
I need to write to a Word document from PL/SQL; these are letters which need to be printed. Can I use UTL_FILE for this, and will I be able to control the font etc. from PL/SQL?
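UTL_FILE only writes plain text, so it cannot drive Word's formatting directly; a common workaround is to write RTF, which Word opens natively and which carries font directives as ordinary text. A minimal sketch, assuming a directory object called LETTER_DIR:

DECLARE
  v_file UTL_FILE.FILE_TYPE;
BEGIN
  v_file := UTL_FILE.FOPEN ('LETTER_DIR', 'letter1.rtf', 'w');
  -- font table: \f0 = Arial, \f1 = Times New Roman
  UTL_FILE.PUT_LINE (v_file, '{\rtf1\ansi{\fonttbl{\f0 Arial;}{\f1 Times New Roman;}}');
  UTL_FILE.PUT_LINE (v_file, '\f0\fs24 Dear Customer,\par');                           -- Arial, 12 pt
  UTL_FILE.PUT_LINE (v_file, '\f1\fs20 This letter was generated from PL/SQL.\par');   -- Times, 10 pt
  UTL_FILE.PUT_LINE (v_file, '}');
  UTL_FILE.FCLOSE (v_file);
END;
/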
11gR2, HP-UX, 64-bit. For the last week we have been seeing a tremendous increase in archive-log growth: it comes to almost 110 GB per day (database size = 600 GB), which I don't think is usual. The alert log is clean and we don't see any alerts. A Remedy application is built on this database, and it creates tables on the fly with LOB columns and indexes on them. As a first step, shall I disable logging for the indexes on the LOB columns?
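Before disabling anything, it is worth quantifying when the growth started. A simple query over the last week (assuming the archived-log records are still in the controlfile view):

SELECT TRUNC (completion_time)                                   AS day,
       ROUND (SUM (blocks * block_size) / 1024 / 1024 / 1024, 1) AS archived_gb
  FROM v$archived_log
 WHERE completion_time > SYSDATE - 7
 GROUP BY TRUNC (completion_time)
 ORDER BY 1;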
Database: Oracle 8i
Application Server: Oracle AS 9i
Developer Suite: Oracle 6i (Forms & Reports)
I have created some character-mode reports in Oracle Reports 6i. When the reports are run from my ERP (built on Oracle 6i), they usually take a long time to generate on the server. Sometimes the ERP hangs because of busy report generation, and we then have to kill some processes to finally produce the character reports on an emergency basis. What is the likely reason for the slow generation of the report (character file)?
On weekends we have too many archive logs generated. I have taken a week's worth of data and found that the average archive log generation from Monday to Friday is 7 files per day, but on Saturday and Sunday the average is 60 files and FG1 gets full. On weekends we have all types of backups running (incremental, archival and logical), and on Sunday we have a full physical backup.
What is the reason for so many archive log files being generated at weekends? Is it due to the hot and logical backups, and if so, how?
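One way to confirm the suspicion is to break the log switches down by day and hour; if the weekend spike lines up with the backup window, the backup jobs are the likely driver. A sketch:

SELECT TO_CHAR (first_time, 'YYYY-MM-DD DY') AS day,
       TO_CHAR (first_time, 'HH24')          AS hour,
       COUNT (*)                             AS log_switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 14
 GROUP BY TO_CHAR (first_time, 'YYYY-MM-DD DY'), TO_CHAR (first_time, 'HH24')
 ORDER BY 1, 2;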
I have Oracle 9i running on HP-UX. I would like to find out how much redo we are generating in a given period of time; is there any script I can use to get this information?
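A minimal sketch for 9i: sample the instance-wide 'redo size' statistic at the start and end of the interval; the difference is the redo generated in that window (SERVEROUTPUT must be on to see the result, and DBMS_LOCK.SLEEP needs EXECUTE on DBMS_LOCK).

DECLARE
  v_start NUMBER;
  v_end   NUMBER;
BEGIN
  SELECT value INTO v_start FROM v$sysstat WHERE name = 'redo size';
  DBMS_LOCK.SLEEP (600);                              -- measurement window: 10 minutes
  SELECT value INTO v_end   FROM v$sysstat WHERE name = 'redo size';
  DBMS_OUTPUT.PUT_LINE ('Redo generated (MB): ' || ROUND ((v_end - v_start) / 1024 / 1024, 1));
END;
/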
I'm on 11.2.0.3, and generation of the ASH report is very slow, while AWR and ADDM reports are generated quickly. To understand what is happening, I checked the wait events on the session executing the ASH report and found that it spends 99% of its time waiting on "control file sequential read". Is there any way to make ASH report generation quick? Why does the generation need to access the control file?
I have a base table named EMP_MASTER. The CREATE statement goes something like this:
CREATE TABLE EMP_MASTER (ST_CODE NUMBER, EMP_CODE NUMBER);

We would like to insert records in such a way that there are 10 st_codes, and for each st_code we need to insert 100 records.
For example: for st_code 1 we need emp_code values from 1 to 100, then for st_code 2 the emp_code must be from 101 to 200, and so on. It must continue this way up to st_code 10, whose emp_code values will be in the range 901 to 1000.
We need something like a procedure or an anonymous PL/SQL block (DECLARE ... BEGIN ... END).
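A minimal sketch of such a block; emp_code is simply (st_code - 1) * 100 + n, which gives 1-100 for st_code 1 up to 901-1000 for st_code 10.

BEGIN
  FOR st IN 1 .. 10 LOOP
    FOR n IN 1 .. 100 LOOP
      INSERT INTO emp_master (st_code, emp_code)
      VALUES (st, (st - 1) * 100 + n);
    END LOOP;
  END LOOP;
  COMMIT;
END;
/

-- Set-based alternative:
-- INSERT INTO emp_master (st_code, emp_code)
-- SELECT CEIL (LEVEL / 100), LEVEL FROM dual CONNECT BY LEVEL <= 1000;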
I am trying to move my archive logs from Linux to Windows as the log switches occur. For this I mounted the Windows directory in Linux, but nothing is being moved. Destination 1 is on Linux and destination 2 points to the Windows mount.
In the alert log a "permission denied" error appears. The Windows directory mounted in Linux is owned by root, and the ownership cannot be changed to oracle.
Redo generation is very high; how can I find out the reason? The database runs on a two-node cluster. I checked the alert log and the log writer trace files and have pasted the content below:
-- alert log trace from node 1 (node 2 has the same type of message). The archive destination disk group, TXCOM_BACKUP_01, has enough space (80 GB).
Mon Jan 7 00:49:10 2013
Thread 1 advanced to log sequence 448546 (LGWR switch)
  Current log# 1 seq# 448546 mem# 0: +TXCOM_DATA_01/txcom/onlinelog/group_1.274.785770579
  Current log# 1 seq# 448546 mem# 1: +TXCOM_DATA_01/txcom/onlinelog/group_1.302.802265189
Mon Jan 7 00:49:10 2013
[code]...
In the alert log I can see that the archive destination disk group (TXCOM_BACKUP_01) gets DISMOUNTED and then MOUNTED again during every archive file generation:
Mon Jan 7 00:49:20 2013
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was mounted
SUCCESS: diskgroup TXCOM_BACKUP_01 was dismounted
The archive destination parameter is not configured on either node; it should hold the disk group name (+TXCOM_BACKUP_01) and a corresponding size limit. Should I configure this?
SQL> show parameter db_recovery
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string
db_recovery_file_dest_size           big integer 0
[code]...
Should I bring the database to the MOUNT stage and set log_archive_max_processes to a higher count? The current value is 2 (the default).
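Both parameters are dynamic, so no restart to MOUNT should be needed; a sketch (the values are examples, not recommendations):

ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+TXCOM_BACKUP_01' SCOPE=BOTH SID='*';
ALTER SYSTEM SET log_archive_max_processes = 4 SCOPE=BOTH SID='*';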
I am getting the error below in the alert log file very frequently:
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 747
ORA-06512: at "BEE_CODE_05252013.SS_INDEX_JOB_PKG", line 496
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Errors in file /beprddb/diag/rdbms/beprddb/beprddb/trace/beprddb_ora_10581.trc:
Tue Jul 02 15:31:15 2013
[code]....
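The stack points at BEE_CODE_05252013.SS_INDEX_JOB_PKG calling Oracle Text. A reasonable first check (only a suggestion) is to look at that package's compilation errors and the status of the Text indexes it maintains:

SELECT line, position, text
  FROM dba_errors
 WHERE owner = 'BEE_CODE_05252013'
   AND name  = 'SS_INDEX_JOB_PKG'
 ORDER BY sequence;

SELECT idx_owner, idx_name, idx_status
  FROM ctxsys.ctx_indexes
 ORDER BY idx_owner, idx_name;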
I am using:
Database: Oracle 8i
Application Server: Oracle AS 9i
Developer Suite: Oracle 6i (Forms & Reports)
I have created some character-mode reports in Oracle Reports 6i. When the reports are run from my ERP (built on Oracle 6i), they usually take a long time to generate on the server. Sometimes the ERP hangs because of report locks, and we then have to kill some processes to finally produce the character reports on an emergency basis.
What is the likely reason for locks during report (character file) creation?