How To Check Whether Connection Is Lost In Middle Of Batch Process
Jul 6, 2012
I am using an Oracle 11g database. I need to run a batch process at fixed intervals that copies data into my tables from a remote database over a DB link. I have three tables: 1. the original data table, 2. the exported data table, and 3. a status table. All of the data from the remote database is loaded into the exported data table. As each copy finishes, the status table is updated with 'Y' if the export succeeded and 'N' if it failed. When the exporting is over, the status table is checked: if every status is 'Y', the original data table is truncated and the rows from the exported data table are copied into it. If any status is 'N', the original data table is left intact.
How do I check whether the export succeeded for all of the exported data? And what should I do if the connection through the DB link is lost during the export?
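One way to handle both questions is to wrap each copy over the DB link in its own PL/SQL block and let the exception handler record the failure; a connection lost mid-copy (ORA-03135, ORA-02068 and the like) then simply leaves 'N' in the status table. A minimal sketch follows; the table, column and DB link names are assumptions, not the actual ones.

DECLARE
  l_status CHAR(1) := 'N';
BEGIN
  -- copy one source table through the DB link (names are placeholders)
  INSERT INTO exported_data
    SELECT * FROM source_data@remote_db;
  l_status := 'Y';                       -- reached only if the copy completed
  UPDATE status_table
     SET export_status = l_status,
         export_time   = SYSDATE
   WHERE table_name = 'EXPORTED_DATA';
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    -- a dropped DB link connection lands here; keep the status at 'N'
    ROLLBACK;
    UPDATE status_table
       SET export_status = 'N',
           export_time   = SYSDATE
     WHERE table_name = 'EXPORTED_DATA';
    COMMIT;
END;
/

The final check then reduces to SELECT COUNT(*) FROM status_table WHERE export_status = 'N'; only when that count is zero is the original data table truncated and reloaded.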
Since our upgrade from 10.2.0.4 to 11.2.0.3, we have been experiencing occasional overnight job failures throwing ORA-03135 (connection lost contact) errors. It doesn't happen often, but every once in a while. Even so, it is bugging me because I can't find anything wrong!
We have a dual-server setup, with one IA64 HP-UX 11.31 server hosting the database and a second IA64 HP-UX 11.31 server running all of the application code. I'm seeing errors in the alert log that point me to a specific cjq0 trace file, and the contents of that trace file are pretty much as follows:
*** 2013-08-15 02:05:45.541
Waited for process J001 to initialize for 107 seconds
*** 2013-08-15 02:05:45.544
Process diagnostic dump for J001, OS id=19302
-------------------------------------------------------------------------------
os thread scheduling delay history: (sampling every 1.000000 secs)
  0.000000 secs at [ 02:05:45 ]
  NOTE: scheduling delay has not been sampled for 0.312200 secs
  0.000000 secs from [ 02:05:41 - 02:05:46 ], 5 sec avg
  0.000000 secs from [ 02:04:46 - 02:05:46 ], 1 min avg
  0.000031 secs from [ 02:00:45 - 02:05:46 ], 5 min avg
loadavg : 0.31 0.17 0.09
[code]....
There were jobs that completed successfully just prior to this failure, and subsequent jobs ran successfully as well. I'm stumped - not sure what's going on here. I have reviewed a lot of posts related to ORA-03135 and followed as many of the recommendations as I can, but I'm still seeing these errors periodically. Their frequency has not changed since I started applying these changes, so I know I haven't stumbled on the cause yet.
My application is using "Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production" and the Oracle 11g client v11.1.0.6.0. My application server runs continuously and checks for new requests. The problem occurs when I do not send any request to my server for 40-45 minutes, i.e. the server is idle, doing nothing except checking for new incoming requests for those 40-45 minutes. After that, if I try to contact the server by sending a request, it shows the following error message and my server crashes:
Message String: ORA-03135: connection lost contact
Process ID: 7586034
Session ID: 61
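A disconnect that only shows up after 40-45 minutes of idle time is very often a firewall or network device dropping idle sessions rather than a database fault. That is not confirmed as the cause here, but one common mitigation is to enable Dead Connection Detection on the database server so that probe packets keep the session from looking idle. A minimal server-side sqlnet.ora entry:

# $ORACLE_HOME/network/admin/sqlnet.ora on the database server
SQLNET.EXPIRE_TIME = 10   # send a keep-alive probe every 10 minutes

If a firewall sits between the servers, its idle-session timeout should be longer than this probe interval.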
I'm having an issue with stale optimizer statistics for some SQL that is run in a batch process. The problem is that the process runs many times during the day - sometimes 20 to 30 times - and each time the tables are updated, i.e. rows are inserted, deleted, etc.
So eventually the optimizer statistics on those tables become stale and the performance of the SQL starts to slow down (a lot). What is the best way to gather optimizer stats on the tables so they don't become stale as the batch process runs each time? The catch is that I can't add or modify the code in the batch process, because it is delivered by the vendor as is.
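Since the vendor code can't be changed, one option is to gather statistics from outside it, for example from a DBMS_SCHEDULER job that runs between (or after) the batch executions. A minimal sketch; the schema and table names are placeholders:

BEGIN
  -- refresh stats on the volatile table so the optimizer sees current row counts
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'BATCH_OWNER',               -- placeholder schema
    tabname          => 'BATCH_TABLE',               -- placeholder table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,                        -- include the indexes
    no_invalidate    => FALSE);                      -- force re-parse with the new stats
END;
/

An alternative, if the data volume swings but its shape stays similar, is to gather representative statistics once and then lock them with DBMS_STATS.LOCK_TABLE_STATS so they never go stale between runs.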
I am running Oracle Database 10g R2 on Windows 2003. I want to create a batch file that checks whether the database is idle or not, and if it is idle, shuts it down and starts it back up.
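The database itself doesn't report being "idle", but a script can decide based on active user sessions. A minimal sketch of the check a Windows batch file might run through sqlplus -s; what counts as "idle" here is an assumption:

-- returns 0 when no other user session is actively executing anything
SELECT COUNT(*)
FROM   v$session
WHERE  type     = 'USER'
AND    status   = 'ACTIVE'
AND    username IS NOT NULL
AND    sid      <> SYS_CONTEXT('USERENV', 'SID');  -- ignore this session itself

The batch file would capture that count and only issue shutdown immediate / startup when it comes back as zero.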
I need to develop an Oracle stored function or procedure that confirms the availability of a datafile in a specific local directory before it is read by an Oracle external table. The file name looks like filenameddmmyyyy.csv.
I have created an Oracle directory named data_dir that is mapped to the physical directory c: mep.
Once it is confirmed that the datafile is available, the ETL process is started.
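One way to do this check from PL/SQL is UTL_FILE.FGETATTR, which reports whether a file exists in a directory object without opening it. A minimal sketch, assuming the data_dir directory object described above (the function name is made up):

CREATE OR REPLACE FUNCTION file_is_ready (p_filename IN VARCHAR2)
  RETURN BOOLEAN
IS
  l_exists BOOLEAN;
  l_length NUMBER;
  l_blocks BINARY_INTEGER;
BEGIN
  -- DATA_DIR is the directory object mapped to the physical directory
  UTL_FILE.FGETATTR('DATA_DIR', p_filename, l_exists, l_length, l_blocks);
  RETURN l_exists;
END file_is_ready;
/

The ETL driver could then call file_is_ready('filename' || TO_CHAR(SYSDATE, 'DDMMYYYY') || '.csv') before touching the external table.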
I am working on transferring data from one database server to another. Before starting, I want to check whether the remote database is reachable or not.
For example:

a := get_remote_connection;   -- call a function that checks the remote connection and returns boolean
if a then
   <call my data_transfer_proc>
else
   exit;
end if;
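A minimal sketch of such a function, assuming a DB link called remote_db (the link name is a placeholder): it simply tries a trivial query over the link and returns FALSE on any error.

CREATE OR REPLACE FUNCTION get_remote_connection
  RETURN BOOLEAN
IS
  l_dummy NUMBER;
BEGIN
  -- dynamic SQL so the function still compiles even if the link is down right now
  EXECUTE IMMEDIATE 'SELECT 1 FROM dual@remote_db' INTO l_dummy;
  RETURN TRUE;
EXCEPTION
  WHEN OTHERS THEN            -- ORA-12541, ORA-02019, ORA-03135, ...
    RETURN FALSE;
END get_remote_connection;
/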
1) How do I add a new column at a particular position in an existing table, instead of at the end?
2) I created a table without specifying the datatype sizes, like this: create table dummy (name char, age number). What default sizes are allocated for those columns?
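For the second question, the allocated sizes can be read back from the data dictionary; a quick check using nothing beyond the table from the question:

CREATE TABLE dummy (name CHAR, age NUMBER);

-- data_length / data_precision show what was actually allocated
SELECT column_name, data_type, data_length, data_precision, data_scale
FROM   user_tab_columns
WHERE  table_name = 'DUMMY';

CHAR with no length defaults to CHAR(1), and NUMBER with no precision accepts values up to the maximum of 38 significant digits (data_precision shows up as NULL).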
We have a fact table t1 in the warehouse which has over 6 million rows. There is to be an update like the one below, where t2 has aid+bid as its composite primary key and the column aid repeats in t1. There is a performance problem, and we've been told to break this huge update into pieces with a few commits in the middle.
update t1 set t1.aid = (select t2.aid from t2 where t1.bid = t2.bid )
I've tried a cursor loop with three commits in the middle, based on an if condition that is evaluated on every iteration.
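One common way to break this kind of update into pieces is to drive it from the join with BULK COLLECT and commit after each batch. A minimal sketch, under the assumption that the join behind the original update returns one t2.aid per t1 row; note that committing inside the fetch loop risks ORA-01555 if the run is very long:

DECLARE
  CURSOR c IS
    SELECT t1.rowid AS rid, t2.aid
    FROM   t1 JOIN t2 ON t1.bid = t2.bid;
  TYPE rid_tab IS TABLE OF ROWID;
  TYPE aid_tab IS TABLE OF t2.aid%TYPE;
  l_rids rid_tab;
  l_aids aid_tab;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rids, l_aids LIMIT 50000;   -- batch size is arbitrary
    EXIT WHEN l_rids.COUNT = 0;
    FORALL i IN 1 .. l_rids.COUNT
      UPDATE t1 SET aid = l_aids(i) WHERE rowid = l_rids(i);
    COMMIT;                                                 -- one commit per batch
  END LOOP;
  CLOSE c;
END;
/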
I installed WebLogic Server and Forms & Reports on my Linux system and ran the configuration option later. So now the WebLogic admin server is running on one port and the WLS_REPORTS server is running on a different port, in a cluster. How is it running in a cluster? And when I run
localhost:9002/reports/rwservlet/getserverinfo?server="report_server_name" I get a REP-51002 error.
I know this question must have been asked several times. I just want to know the various ways in which we can get a database up and running when we have no backup of the control file, not even a trace of it.
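With no binary backup and no trace script, the usual last-resort option is to hand-write a CREATE CONTROLFILE statement from what you know about the database (its name, redo logs and every datafile). This is only a heavily hedged sketch: every name, path and size below is a placeholder, and the real statement must list all of your datafiles.

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
  MAXLOGFILES 16
  MAXDATAFILES 100
  LOGFILE
    GROUP 1 '/u01/oradata/orcl/redo01.log' SIZE 50M,
    GROUP 2 '/u01/oradata/orcl/redo02.log' SIZE 50M
  DATAFILE
    '/u01/oradata/orcl/system01.dbf',
    '/u01/oradata/orcl/sysaux01.dbf',
    '/u01/oradata/orcl/undotbs01.dbf',
    '/u01/oradata/orcl/users01.dbf'
  CHARACTER SET AL32UTF8;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;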
I was working with Oracle 11gR1, Oracle Data Miner 11.1.0.4, and SQL Developer 3.
Unfortunately, after I finished my tables and models, the hard disk was damaged and had to be replaced. The Oracle setup was not on the system partition; it was on D:/, and I had a Windows backup of the C: and D: partitions. I restored the Oracle partition D:, which contains the old Oracle files, and I installed Oracle again, but on partition F:.
Recreate the parameter file if you have lost init.ora or spfile.ora
STEPS
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'D:\ORACLE\ORA92\DATABASE\INITGPT2.ORA'

How do you resolve ORA-01078 together with LRM-00109?
To resolve these errors, restore init.ora or the spfile if you have a backup of those files. If you don't have a backup, go to alert.log, copy all the parameters into one file and save it as "init.ora". Then edit that init.ora for the parameters that need single quotation marks (') such as background_dump_dest, control_files, db_name and dispatchers. Finally, connect as SYSDBA in SQL*Plus and either run "create spfile from pfile='...'" and start up normally, or start up directly with "startup pfile='...'".
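Step by step, that looks roughly like this (the pfile path is only an example; the file name is taken from the error message above):

SQL> connect / as sysdba
SQL> create spfile from pfile='D:\oracle\initGPT2.ora';   -- pfile rebuilt from alert.log
SQL> startup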
While trying some failover and switchover scenarios on my primary and standby database configuration, I suddenly lost the standby database box completely due to a hardware failure. I'm not able to recover that box; the hardware failure is essentially unrecoverable. My primary database is still up and running well, but when I check the alert log of the primary it has errors related to the standby. Because of the configuration it keeps trying to sync the standby database, which it can no longer connect to. So my main question is: how do I get rid of, or stop, these errors on the primary? Which parameters do I have to turn off on the primary? Below are some of the initSID parameters of the primary database and a snippet of its alert log.
My primary database is in MAXIMUM AVAILABILITY mode. Snippet from the alert log:
Errors in file /ora9isoft/odb/OH1/admin/wbdata/bdump/wbdata_lgwr_1248.trc:
ORA-12541: TNS:no listener
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
Creating archive destination LOG_ARCHIVE_DEST_2: 'wbdata.wbhouse'
LGWR: Transmitting activation ID a043c21d
[code]....
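Assuming the lost standby is served by LOG_ARCHIVE_DEST_2 (as the log snippet suggests), shipping to it can be deferred and the protection mode lowered so LGWR stops depending on the standby. A sketch only; syntax details can vary by version, and with a pfile the change must also be made permanent in initSID.ora:

-- stop redo shipping to the unreachable standby destination
ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;

-- drop from MAXIMUM AVAILABILITY to MAXIMUM PERFORMANCE
-- (may require the database to be mounted rather than open on older releases)
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;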
Our system administrator upgraded from APEX 4.1 to 4.2 last night. This morning, we no longer have the developer toolbar at the bottom of the page when I run a page from the Application Builder. Not sure where to look to sort that out.
- First execution: all indexes on the 2 tables are used
- Second execution: only the indexes on the first table are used, with a full table scan on the other
- Third execution: full table scans on both tables.
Here are the objects and related information:
The Tables:
system@dbwap> select count(*) from my_wap.news_relation;
  COUNT(*)
----------
    272708
system@dbwap> select count(*) from my_wap.news_content;
  COUNT(*)
----------
     95092
system@dbwap> desc my_wap.news_content;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 ID                                        NOT NULL NUMBER(11)
 SUBJECT                                   NOT NULL VARCHAR2(500)
 TITLE                                              VARCHAR2(4000)
 STATE                                              NUMBER(1)
 IMGPATH                                            VARCHAR2(500)
 ALIGN                                              VARCHAR2(10)
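One way to pin down what changes between the three executions is to look at the cursors and plans the statement actually produced; if the plan_hash_value changes between the first and third run, the problem is plan instability (bind peeking, stale statistics) rather than the indexes themselves. A sketch of that check; the LIKE filter is just one example way to find the statement:

-- find the statement's cursors and see whether the plan hash changes
SELECT sql_id, child_number, plan_hash_value, executions,
       ROUND(elapsed_time / 1e6) AS elapsed_secs
FROM   v$sql
WHERE  sql_text LIKE '%news_relation%';

-- then display the plan of a specific child (substitute the sql_id found above)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));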
I installed Oracle 11.2.0.3 on a 64-bit Linux 5.8 server. The user that installed the Oracle software is "oracle" and belongs to the oinstall and dba groups.
$ id oracle
uid=503(oracle) gid=504(oinstall) groups=504(oinstall),505(dba),506(cvargas) context=user_u:system_r:unconfined_t
The software was installed correctly and I have been able to create two databases. As a requirement of the application, I must create an operating-system user that launches processes. This user belongs to dba. But when that user tries to access Oracle via sqlplus, it gives this error:
-bash-3.2$ sqlplus davinci/xxxxxxx

SQL*Plus: Release 11.2.0.3.0 Production on Fri Dec 7 23:22:40 2012
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

ERROR:
ORA-12547: TNS:lost contact

Enter user-name:
However, if I connect using a network descriptor, it works correctly:
SQLPLUS DAVINCI/xxxx@tns_alias
So there must be a problem with access to local resources. This user has the same variables in its profile file (.bash_profile).
SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 29 21:02:42 2011
Copyright (c) 1982, 2008, Oracle.  All rights reserved.

ERROR:
ORA-12547: TNS:lost contact

addbctl.sh: exiting with status 9
/d01/oracle/VIS/db/tech_st/11.1.0/appsutil/scripts/VIS_localhost/addbctl.sh start
[code]....
BTW I have not set $ORACLE_HOME, $ORACLE_BASE, $ORACLE_SID yet.
We are trying to restore a database from a cold backup, identified by its DBID. However, although the datafiles were backed up with a 6-month retention, we found that a subsequent DELETE OBSOLETE command wiped out the controlfile backup related to this backup piece. All that is left of this backup is the datafiles.
Can we still recover the database using a subsequent controlfile autobackup for the same DBID?
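Generally yes, provided the autobackup can be found and the old datafile backup pieces are still on disk so they can be re-catalogued. A rough RMAN sketch; the DBID and paths are placeholders and this is not a tested recipe for this exact situation:

RMAN> SET DBID 1234567890;                        -- placeholder: DBID of the database
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;        -- the later autobackup for the same DBID
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/backup/location/';     -- placeholder: re-register the old datafile pieces
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;

Depending on how much redo exists between the cold backup and that later controlfile, a point-in-time recovery (SET UNTIL) may be needed instead of a full RECOVER DATABASE.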
We are having major issues with our batch run. We are using an Oracle 11g database. We run scripts to populate the tables and then call scripts to run the extractions. The issue is that each time we run the SQL it takes a wildly inconsistent amount of time. We have created indexes and gathered the database stats, then run the extractions. The SQL sometimes takes 10 minutes and sometimes takes hours to run. This is a major show-stopper for the project.