ODP.NET :: ManagedDataAccess OracleConnection.close Is Extremely Slow
Jul 26, 2013
In a VB.NET app we use Oracle.ManagedDataAccess.dll (version 4.112.3.50) to read thousands of records from a result set. Once the records are read and processed, we call connection.Close(), which takes 15 minutes or more to complete. The more records are read, the longer the close takes. This is a minimized code example:
dbcon = New OracleConnection()
dbcon.Open()
cmd = New OracleCommand(sqlString, dbcon)
Dim reader As OracleDataReader = cmd.ExecuteReader(CommandBehavior.SingleResult)
While reader.Read()
    ' consume and process record
End While
reader.Dispose()
cmd.Dispose()
dbcon.Dispose()
I am running a custom script that creates about 100,000 rows of demo data.
The table I am loading into is fairly wide (100 columns), and only has about 10,000 rows at the moment.
The script goes really fast for the first 10,000 rows (100 inserts per second), then gets progressively slower. By 20,000 rows it is doing about 1 row per second. At this rate, it will never finish!
Each insert is a separate statement, using bind variables and wrapped in a single transaction. I've tried dropping the indexes first but it didn't make a difference.
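For concreteness, here is a minimal PL/SQL sketch of the load pattern described, assuming a hypothetical table DEMO_DATA (the real script and table are not shown above):

DECLARE
  v_id   NUMBER;
  v_name VARCHAR2(100);
BEGIN
  FOR i IN 1 .. 100000 LOOP
    v_id   := i;
    v_name := 'demo_' || i;
    -- DEMO_DATA is a stand-in name; PL/SQL variables are bound automatically
    INSERT INTO demo_data (id, name) VALUES (v_id, v_name);
  END LOOP;
  COMMIT;  -- single commit at the end, as described
END;
/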
OEM shows a 100% CPU bottleneck, with no other information I can glean.
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether writing an export to tape is possible. If so, would the data be accessible if needed later?
TABLE:

create table TEST_PREC (NO NUMBER(4,2));

BEGIN
  INSERT INTO TEST_PREC VALUES (12.34);
  DBMS_OUTPUT.PUT_LINE('the no of records before commit ' || SQL%ROWCOUNT);
  COMMIT;
  /* What's happening inside commit? */
END;
/
How do I close a window in query mode? My form has a button "Populate from Charge Master" which opens a window containing a list item (department names) and 2 buttons:
1. "Select the Dept" and 2. "Cancel"
The WHEN-BUTTON-PRESSED trigger code for "Populate from Charge Master" button is as follows:
begin
  SHOW_WINDOW('WINDOW_POP_FROM_CHARGE_MASTER');
  go_block('bl_med_dep');
  enter_query;
end;
The WHEN-BUTTON-PRESSED trigger code for the "Cancel" button is as follows:

BEGIN
  hide_window('WINDOW_POP_FROM_CHARGE_MASTER');
  show_window('WINDOW1');
  GO_BLOCK('BL_CHARGE_COMPANY');
END;
Here WINDOW1 is the main window and WINDOW_POP_FROM_CHARGE_MASTER is the window which must be closed when "Cancel" button is pressed.
When I remove the "enter_query" statement from the "Populate from Charge Master" trigger code, I am not able to select any of the departments from the list; and if I keep that statement, I am not able to hide/close the window.
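One pattern that may be worth trying (a sketch, not tested against this form): set the Cancel button's Mouse Navigate property to No, and have its trigger cancel enter-query mode before hiding the window. EXIT_FORM issued while the form is in enter-query mode cancels the query rather than exiting the form:

BEGIN
  IF :SYSTEM.MODE = 'ENTER-QUERY' THEN
    EXIT_FORM;  -- in enter-query mode this cancels the query, not the form
  END IF;
  hide_window('WINDOW_POP_FROM_CHARGE_MASTER');
  show_window('WINDOW1');
  GO_BLOCK('BL_CHARGE_COMPANY');
END;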
We have been getting the below error message while backing up the DB (full DB backup) via NetBackup. The DB version is 10.2.0.3 (64-bit). The archive log backup goes fine; the problem is with the full backup.
input datafile fno=00241 name=/u106/oradata/iwhdbqa/iwh_mvlog_01x.dbf
channel CH02: starting piece 1 at 29-MAY-12
RMAN-03009: failure of backup command on CH01 channel at 05/29/2012 21:41:28
ORA-19506: failed to create sequential file, name="iwhdbqa_20120529210533_db_ipnc7n6l_1_1", parms=""
[code]...
I came to know that this is a NetBackup configuration/client bug. Can we do something from the Oracle side?
The RMAN command strings we use are as follows:
SET SNAPSHOT CONTROLFILE NAME TO '/u101/app/oracle/product/10.2.0.5/db_1/dbs/iwhdbqa_snapshot_db.snap';
RUN {
  ALLOCATE CHANNEL CH01 TYPE 'SBT_TAPE' parms='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64.1';
  SEND 'NB_ORA_SERV=atlbackupmaster, NB_ORA_POLICY=iwh-dbqa_oracle, NB_ORA_CLIENT=iwh-dbqa-bu';
  ALLOCATE CHANNEL CH02 TYPE 'SBT_TAPE' parms='SBT_LIBRARY=/usr/openv/netbackup/bin/libobk.so64.1';
I need to retrieve some values from Excel, and then close the Excel file without saving changes and without messages.
I am using the DDE package, and when using
DDE.EXECUTE(convid, '[Save()]', 10000);
there is no problem, but the changes are saved. I have tried '[Exit()]' and '[Close()]' but always get error message ORA-106555. Is there any way of doing this without errors?
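One variant that may be worth a try (a sketch; it assumes Excel's XLM CLOSE macro accepts a save flag, which is not confirmed here): pass FALSE to CLOSE so the workbook is closed without saving and without prompting:

-- Hypothetical: CLOSE(FALSE) asks Excel to close the active workbook
-- without saving; verify the macro spelling against your Excel version
DDE.EXECUTE(convid, '[CLOSE(FALSE)]', 10000);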
I am running a report which is called for a set of employees picked up in a cursor. For each employee the report is run in a different window. The report runs fine and the output gets saved on the local machine, but the window doesn't get closed automatically.
How to close the reports window automatically without manual intervention?
In my environment we hit a "maximum open cursors exceeded" error. How can I find the list of open cursors, and how can I close them without restarting? Are there any SQL commands to close the open cursors?
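For the diagnostic half, a sketch of a usual starting point (V$OPEN_CURSOR and V$SESSION are standard views; a cursor can only be closed by the session that owns it, or by killing that session):

-- Which sessions hold the most open cursors
SELECT s.sid, s.username, COUNT(*) AS open_cursors
FROM   v$open_cursor oc
       JOIN v$session s ON s.sid = oc.sid
GROUP  BY s.sid, s.username
ORDER  BY open_cursors DESC;

-- The configured limit, for comparison
SELECT value FROM v$parameter WHERE name = 'open_cursors';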
I designed one master-detail block and one command button for saving the data. When I click the save button the data gets saved, but after that, if I want to close the form, it asks me again with the "save changes" window.
I am using SkillBuilders Modal Page 2.0 to edit the form. Sometimes the end user clicks the edit link to trigger the modal page and sees the form, but does not make any changes. Then, when clicking the Apply Changes button, we are redirected to page 102 (the empty page). How can I get around this and make page 102 close automatically in case I am redirected to it?
I put this in a JavaScript dynamic action to be fired as the page loads:
I am trying to create a physical standby database on the same server. Everything went well until the last two steps. In the final step, when I try to open the standby database, it throws the following error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01154: database busy. Open, close, mount, and dismount not allowed now
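In case it helps narrow this down: ORA-01154 typically appears while managed recovery is still running. A sketch of the usual sequence for opening a physical standby read-only, assuming managed recovery was started earlier:

-- Stop managed recovery first; ORA-01154 is typical while it is active
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- A 10g physical standby can then be opened read-only
ALTER DATABASE OPEN READ ONLY;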
I tried creating the Physical Standby with the following Steps.
Environment: Oracle Release 10.2.0.1.0 / Windows 2003 Server Enterprise Edition SP2
Primary DB = 'PrimDB'
Standby DB to be created = 'StBy1DB'

In the same server, the locations are:

PrimDB datafiles = 'F:\oracle\product\10.2.0\oradata\PrimDB\Data'
Standby datafiles = 'E:\StandBy_DB\Data'
PrimDB control files = 'F:\oracle\product\10.2.0\oradata\PrimDB\Control'
Standby control files = 'E:\StandBy_DB\Control'
[code]....
Step 1: Create the Oracle service for the standby DB 'StBy1DB' and create the standby DB password file.
The issue is slow insertion into one particular table (table A). Insertion into all other tables (B, C, D) in the same schema works properly; only inserting into table A takes a long time to complete. Daily insertion is 6,000 rows.
I have checked all the details, like tablespace size, analysis of the table, analysis of the indexes, and so on. There is no error in the alert log file.
We have two database instances on the same server. One was left at 9.2.0.7 and one was upgraded to 10.2.0.3. Connecting externally (sqlplus '/as sysdba') to the 9.2.0.7 database is lightning fast. Connecting externally to the 10.2.0.3 database is very slow, comparatively speaking. This is on an IBM AIX-5L (64-bit) machine. We are using "tnsnames".
I am migrating an Oracle 9i database to 11g r3. I can only use imp. As the database is huge, I have split the exp dump by schema. In my recent test, I split the schemas into 4 separate threads to be imported into the new Oracle 11g database. The 4 imp threads consist of almost equally sized sets of schemas (e.g. thread 1 - schemas 1, 2, 3; thread 2 - schemas 4, 5, 6; etc.).
All the dump files are in the same mount point. When I execute the import with all 4 threads together, the total import time for each thread is between 2.5 and 3.5 days.
Then I tried only 1 thread: it took only 2 hours. So could this be an I/O issue or an Oracle memory problem?
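One quick check that may help separate the two (a sketch using the standard V$SYSTEM_EVENT view): sample the cumulative I/O wait events while the 4 threads are running, sample again a few minutes later, and compare the deltas.

-- Cumulative waits since instance startup; sample twice and diff
-- to see whether I/O waits dominate during the concurrent import
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('db file sequential read',
                 'db file scattered read',
                 'log file parallel write')
ORDER  BY time_waited DESC;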
We have done a clone of our Oracle Applications (11i) environment; after that, the performance of the ERP has become slow (e.g. fetching of data). What can we do to increase the performance?
We are busy migrating a database from Oracle 10g on Windows 2003 to Oracle 11gR2 on Linux.
We exported/imported the data and it looks OK, and the explain plans look the same, but our heavy batches are twice as slow as on the Windows box. The two top events are disk-related (sequential and scattered reads) and account for 90% of the batch job's time. I read some white papers and found that using ASM can be bad in some cases, and likewise Linux for this particular kind of scattered reads. I was wondering whether just changing the SGA to 10GB instead of 4GB, to get more cache, would speed things up.
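Before resizing, the buffer cache advisory may answer that question directly (a sketch; it assumes the default 8K block size and that the advisory is enabled):

-- Predicted physical-read change for candidate buffer cache sizes
SELECT size_for_estimate,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
  AND  block_size = 8192
ORDER  BY size_for_estimate;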
We are using a piece of software, a test tool that verifies database posting speed from the server to client systems. On Windows 2008 R2, the database posting speed is very slow compared to Windows 2003 Server.
The server configuration is the same for both servers (RAID 5, 4 GB RAM). How can we improve write performance in Oracle?
Database: Oracle 8i
Application Server: Oracle AS 9i
Developer Suite: Oracle 6i (Forms & Reports)
I have created some character-mode reports in Oracle Reports 6i. When the reports are run from my ERP (Oracle 6i based), a report usually takes a long time to generate on the server. Sometimes my ERP hangs due to busy report generation, and then we have to kill some processes to finally create the character reports on an emergency basis. What is a valid reason for the slow generation of a report (character file)?
I have a nightly import (about 20 tables) and it takes up to 5 hours. We have one table of about 800,000 rows and the rest are between 1,000 and 200,000. This is very slow, and when I monitor the import I see a very long wait for SQL*Net from client.
I run the import on the database server itself. If I check the current statement, I see it moving from one to the next; for instance I have
SELECT /*+ all_rows ordered */ "A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001329497'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (bonus_nat) <= 31)

then

SELECT /*+ all_rows ordered */ "A".ROWID, 'REPORT', 'CONTRACT_LVL', 'SYS_C001684584'
FROM "REPORT"."CONTRACT_LVL" "A"
WHERE NOT (LENGTH (outcome_cd) <= 1)
etc., and it takes hours. The DB is on Windows 2003 running Oracle RDBMS 9.2.0.7, while the import screen shows 185,000 rows imported. I also see a lot of consistent gets for these sessions rising at that time. Would it be better to export/import without statistics?
I also need to mention that the dump file comes from a Linux-hosted database, though I don't think that makes a difference for an exp/imp. It's a PeopleSoft database: there are a lot of tables (more than 15,000), and if I take the table mentioned above and want to check its constraints, it takes ages before Toad can display them. I have seen that we have an incredible number of constraints on those tables, which might be the reason.
I just wonder if the system catalog needs to be tuned? /* Update */ Not sure why, but now the huge number of waits shows up as "library cache lock".
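Given the dictionary-heavy symptoms (slow constraint lookups, "library cache lock" waits), one experiment that may be worth testing (a sketch; 9.2 has no GATHER_DICTIONARY_STATS, so gathering on SYS is the usual substitute, and whether dictionary statistics help on 9i is version-dependent, so try this on a non-production copy first):

-- 9i-style gathering of optimizer statistics on the dictionary owner
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SYS', cascade => TRUE);
END;
/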
I am working on an SAP application migration project using Oracle 10.2.0.2 database. We are migrating the application from Windows to Solaris.
During the process we are facing a problem with a very slow insert operation on a particular table. The server's capacity is very good, so there is no resource bottleneck.
The table contains around 270,000 rows, and we are inserting around 100 rows per 10 seconds.
The table contains the following data types.
SQL> desc SAPDATDB.CAF_GP_VALDEF;
 Name                          Null?     Type
 ----------------------------- --------- ----------------------------
 VAL_UUID                      NOT NULL  NVARCHAR2(34)
 VAL_GUID                      NOT NULL  NUMBER(10)
 VAL_CLOB                                NCLOB
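Given the NCLOB column, LOB storage settings are a plausible suspect (a sketch; LOBs created NOCACHE are written with direct-path I/O, which can make row-by-row inserts slow):

-- Check whether the NCLOB is cached and stored in-row
SELECT table_name, column_name, cache, in_row, chunk
FROM   dba_lobs
WHERE  table_name = 'CAF_GP_VALDEF';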
We have a function that is called in various other PL/SQL packages, and performance has always been very good. On 29th Sept we upgraded our DB to 10.2.0.5.0, and since then a package that calls the function has gone from ~4 mins to ~2.5 hrs to run.
In PL/SQL Developer, a simple select that calls the function has gone from ~0.5 secs to retrieve the first 100 rows to ~12 secs. I ran a profile of the main package, which highlighted where the bottleneck was (a fetch from an explicit cursor). Running an explain plan on the cursor SQL doesn't really show up anything untoward.
However, I found that if I subtly changed the cursor SQL, (so that it did the same thing, but was written differently), it fixed the performance problems.
where ade_start_date between cpDate - cpDays and cpDate - 1
/* and ade_start_date < cpDate and ade_start_date >= (cpDate - cpDays) */
From this, we thought that there may have been a bad cached execution plan, which the change of code forced to be recalculated. However, about 2 hours later, the changed code ran slowly again. So a further subtle change was made, which fixed the issue again, until this morning, when it was running slowly again.
This feels like it is potentially CBO/statistics related, but it is out of my area of knowledge, unfortunately. We have our DBA investigating this, but there may be things I can test to narrow down the possibilities in the meantime.
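One such test (a sketch; the LIKE filter is a placeholder for however you identify the statement): check whether the statement's plan hash value differs between fast and slow runs, which would point toward bind peeking or statistics changes rather than the SQL text itself.

-- Current child cursors and their plans for the suspect statement
SELECT sql_id, child_number, plan_hash_value, executions,
       ROUND(elapsed_time / GREATEST(executions, 1) / 1e6, 2) AS avg_secs
FROM   v$sql
WHERE  sql_text LIKE '%ade_start_date%'   -- placeholder filter
ORDER  BY sql_id, child_number;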
The problem is that when the above query is used in Oracle Reports and a lexical parameter is used, it slows down by more than 100 times.
SELECT pd, vr_date, x.vr_no, vr_sn, cac,
[code]....
The above query is a copy of the top query; the difference is that the &P_Unt_Cls lexical parameter is used instead of "and Unit in (29,34,35,36,37,38,44,45,60,70,71)". I am unable to understand why it slows the query down.
I am running one simple delete statement on one table with rownum < 10000, but it is taking nearly 10 to 15 minutes. The table doesn't have any child table rows or triggers.
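A first step that may help (a sketch; my_table stands in for the real table): check the execution plan of the delete to see how the rows are being located.

-- Show the plan the delete would use; my_table is a placeholder
EXPLAIN PLAN FOR
  DELETE FROM my_table WHERE ROWNUM < 10000;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);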
I have big trouble with some UPDATE queries on Oracle 11g. I have a set of 5 tables of identical structure, and a view that consists of a UNION ALL of the 5 tables. None of these tables contains more than 20,000 rows. Let's call the view T_INTE_NE. Each of the base tables has a PRIMARY KEY defined on 3 NUMBER(10,0) columns -> INTE_REF / NE_REF / INSTANCE.
Now, I have 6 rows in another table and I want to update my view from the data of this small table (let's call it SMALL). This table has the 3 columns INTE_REF / NE_REF / INSTANCE.
When I try to join the two tables:
SELECT *
FROM T_INTE_NE T2
WHERE EXISTS (
  SELECT 1
  FROM SMALL T1
  WHERE T2.INTE_REF  = T1.INTEREF
    AND T2.NE_REF    = T1.NEREF
    AND T2.INTE_INST = T1.INSTANCE
)
I get the 6 lines in 0.037 seconds
When I try to update the view (I have an INSTEAD OF trigger that does nothing; for testing it just returns without modifying anything), I execute the following query:

UPDATE T_INTE_NE T2
SET INTE_STATE = -11
WHERE EXISTS (
  SELECT 1
  FROM SMALL T1
  WHERE T2.INTE_REF  = T1.INTEREF
    AND T2.NE_REF    = T1.NEREF
    AND T2.INTE_INST = T1.INSTANCE
)

The 6 rows are updated (at least the trigger is called) in 20 seconds. However, in the execution plan, I can't see where Oracle spends the time on this query:

Plan hash value: 907176690
[code]....
Predicate Information (identified by operation id):
2 - access("T2"."INTE_REF"="T1"."INTEREF" AND "T2"."NE_REF"="T1"."NEREF" AND "T2"."INTE_INST"="T1"."INSTANCE")
Note- dynamic sampling used for this statement (level=2)
Statistics
----------------------------------------------------------
          3  user calls
          0  physical read total bytes
          0  physical write total bytes
          0  spare statistic 3
          0  commit cleanout failures: cannot pin
[code]....
I get exactly the same execution plan (when autotrace is ON). Furthermore, if I try to do the same update on each of the base tables directly, the rows are updated instantaneously.
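For reference, a minimal do-nothing INSTEAD OF trigger like the one described might look like this (a sketch; the trigger name is invented):

-- Hypothetical reconstruction of the test trigger: it fires for each
-- updated row of the view and deliberately does nothing, so any time
-- measured belongs to the view-update path itself
CREATE OR REPLACE TRIGGER trg_t_inte_ne_upd
INSTEAD OF UPDATE ON T_INTE_NE
FOR EACH ROW
BEGIN
  NULL;
END;
/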