SQL & PL/SQL :: Bulk Inserts Get Extremely Slow After 10000 Queries?
Feb 20, 2010
I am running a custom script that creates about 100,000 rows of demo data.
The table I am loading in to is fairly wide (100 columns), and only has about 10,000 rows at the moment.
The script goes really fast for the first 10K rows (100 inserts per second), and then gets progressively slower. By 20,000 rows it is doing about 1 row per second. At this rate, it will never finish!
Each insert is a separate statement, using bind variables and wrapped in a single transaction. I've tried dropping the indexes first but it didn't make a difference.
OEM shows a 100% CPU bottleneck, with no other information I can glean.
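To pin down what the CPU is doing as the slowdown sets in, one quick check is the loading session's statistics, assuming you can find its SID in v$session (a sketch; :loading_sid is a placeholder for that SID):

SELECT sn.name, ss.value
  FROM v$sesstat  ss
  JOIN v$statname sn ON sn.statistic# = ss.statistic#
 WHERE ss.sid = :loading_sid   -- SID of the session running the script
   AND sn.name IN ('CPU used by this session',
                   'session logical reads',
                   'redo size')
 ORDER BY sn.name;

Running this a few times during the load shows whether logical reads per insert keep climbing, which would point at something growing as rows accumulate.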
In a vb.net app we use Oracle.ManagedDataAccess.dll (version 4.112.3.50) to read thousands of records from a result set. Once the records are read and processed, we call connection.close(), which takes 15 minutes or more to complete. The more records read, the longer the close takes. This is a minimized code example:
dbcon = New OracleConnection()
dbcon.Open()
cmd = New OracleCommand(sqlString, dbcon)
Dim reader As OracleDataReader = cmd.ExecuteReader(CommandBehavior.SingleResult)
While reader.Read()
    ' consume and process record
End While
reader.Dispose()
cmd.Dispose()
dbcon.Dispose()
Our system has always run on a MySQL database and we recently switched to Oracle. Because the current system is coded using MySQL query syntax, I get errors when I run the program against the Oracle database. The language I'm using is JSP.
These are the problem statements:
The following queries could not run on Oracle. How do I convert these MySQL queries to Oracle-compatible queries?
SELECT productID,productName FROM products order by productName;
select newsID,newsDate,newsHeadLine1 from news order by newsDate Desc limit 3
SELECT fuji_products.productID, productName_Display FROM products,products_availability where products_availability.productID=products.productID and (product_status='enabled' or product_status='all') AND category='12'
SELECT catID, catSub1 from category where catSub = '"+ prodCat +"' AND catSub1 is not null group by catSub1 order by catSub1
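The LIMIT clause in the second query is the clearest MySQL-only construct (the last query's GROUP BY would also fail in Oracle, because catID is not in the grouping). On Oracle before 12c, one common rewrite wraps the ordered query and filters on ROWNUM; a sketch against the news table above:

select newsID, newsDate, newsHeadLine1
  from (select newsID, newsDate, newsHeadLine1
          from news
         order by newsDate desc)   -- order first, inside the subquery
 where rownum <= 3;                -- then keep the top 3 rows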
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether writing the export to tape is possible. If so, would the data be accessible if needed later?
I want to replace every 4-digit number in a given string with the same number incremented by 10000.
That means that in the given string, 1201 should be replaced by 11201 (incremented by 10000).
Input String:
<query><matchAll>true</matchAll><row><columnId>1201</columnId><dataType>31</dataType><op>Like</op><val>North America - Houston</val></row><row><columnId>1212</columnId><dataType>31</dataType><op>!=</op><val>Agreement Date Mismatch</val></row><row><columnId>1212</columnId><dataType>31</dataType><op>!=</op><val>Facility Type Mismatch</val></row><row><columnId>1224</columnId><dataType>31</dataType><op>Like</op><val>y</val></row></query>
Required output:
<query><matchAll>true</matchAll><row><columnId>11201</columnId><dataType>31</dataType><op>Like</op><val>North America - Houston</val></row><row><columnId>11212</columnId><dataType>31</dataType><op>!=</op><val>Agreement Date Mismatch</val></row><row><columnId>11212</columnId><dataType>31</dataType><op>!=</op><val>Facility Type Mismatch</val></row><row><columnId>11224</columnId><dataType>31</dataType><op>Like</op><val>y</val></row></query>
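A minimal sketch, assuming the numbers are always exactly four digits and always sit inside a <columnId> element, so adding 10000 is the same as prefixing a 1 (:input_xml stands for the input string above):

SELECT REGEXP_REPLACE(
         :input_xml,
         '<columnId>([0-9]{4})</columnId>',   -- capture the 4-digit id
         '<columnId>1\1</columnId>'           -- re-emit it with a leading 1
       ) AS output_xml
  FROM dual;

The closing </columnId> in the pattern is what anchors the match to exactly four digits; a bare [0-9]{4} would also match the first four digits of longer numbers.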
I have multiple inserts to make in a table in an Oracle database. I have already tried several ways to do it but it always gives errors. How do I make multiple inserts at the same time?
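Oracle (before 23ai) has no MySQL-style comma-separated multi-row VALUES list; the classic single-statement form is INSERT ALL. A sketch with a hypothetical table my_table(col1, col2):

INSERT ALL
  INTO my_table (col1, col2) VALUES ('A', 1)
  INTO my_table (col1, col2) VALUES ('B', 2)
  INTO my_table (col1, col2) VALUES ('C', 3)
SELECT * FROM dual;   -- the mandatory driving subquery

Each INTO clause fires once per row returned by the subquery, so with SELECT * FROM dual every listed row is inserted exactly once.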
We have two databases: a local one (localdb) with user rakdb, and a remote one (remotedb) with the same user rakdb. We need to keep the data in sync in one table called om_item, where the users are inserting data on a daily basis, and a user sends us the insert script every day to run on the local database to insert the new records. I managed to create a file which records all the inserts into one text file in one directory. Can we have a scheduler pick this text file from the specified folder and send it by mail using utl_mail?
CREATE TABLE ITEM (IT_CODE VARCHAR2(12), IT_NAME VARCHAR2(20));
INSERT INTO ITEM VALUES ('A','AAA');
CREATE OR REPLACE DIRECTORY MY_DIR AS 'C:\TEMP';
CREATE OR REPLACE PROCEDURE it_status
[Code]..
Procedure created.
EXEC it_status
HOST TYPE c:\temp\aaaa.txt
INSERT INTO ITEM (IT_CODE, IT_NAME) VALUES ('A','AAA');
COMMIT;
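A minimal sketch of one way to schedule the mail-out, assuming UTL_MAIL is installed and smtp_out_server is configured, the file fits in 32K, and using the MY_DIR directory from above; the procedure name, file name and addresses are hypothetical:

CREATE OR REPLACE PROCEDURE send_item_file AS
  f    UTL_FILE.FILE_TYPE;
  line VARCHAR2(32767);
  body VARCHAR2(32767) := '';
BEGIN
  f := UTL_FILE.FOPEN('MY_DIR', 'aaaa.txt', 'r');
  BEGIN
    LOOP
      UTL_FILE.GET_LINE(f, line);        -- read the insert script line by line
      body := body || line || CHR(10);
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN NULL;        -- end of file reached
  END;
  UTL_FILE.FCLOSE(f);
  UTL_MAIL.SEND(
    sender     => 'local@example.com',
    recipients => 'remote@example.com',
    subject    => 'om_item daily insert script',
    message    => body);
END;
/

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'MAIL_OM_ITEM_INSERTS',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SEND_ITEM_FILE',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=18',  -- pick the file up once a day at 18:00
    enabled         => TRUE);
END;
/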
I'm on an 11.2 DB and need to create an audit table that will be populated by DB triggers on other tables (after insert, update and delete). The triggers will only ever insert data into the audit table. I have read that for insert-only tables you should define PCTFREE as 0. Is this correct? Do I need to set any other parameters (like PCTUSED) for tables that are only ever inserted into?
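PCTFREE 0 is the relevant clause; with Automatic Segment Space Management (the default for locally managed tablespaces in 11.2) PCTUSED is ignored, so there is normally nothing else to set. A sketch with hypothetical columns:

CREATE TABLE audit_log (
  audit_id   NUMBER,
  table_name VARCHAR2(30),
  action     VARCHAR2(6),
  changed_by VARCHAR2(30),
  changed_at DATE
) PCTFREE 0;   -- rows are never updated, so reserve no free space in each block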
I want to load 10 million records from a staging table to a master table. One piece of logic must apply during the load: if a row is already present in the master table, we need to update the corresponding master row; otherwise we insert the row into the target table.
I have been using the bulk collect and FORALL method to load data. It shows better performance compared to row-by-row cursor processing. But as per the Oracle documentation, we cannot use SELECT statements inside a FORALL, so we could not put this logic inside the FORALL.
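For exactly this update-else-insert load, a single set-based MERGE avoids the FORALL restriction entirely; a sketch with hypothetical table and column names:

MERGE INTO master_table m
USING staging_table s
   ON (m.id = s.id)                 -- the key that decides update vs insert
WHEN MATCHED THEN
  UPDATE SET m.col1 = s.col1,
             m.col2 = s.col2
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2)
  VALUES (s.id, s.col1, s.col2);

Being a single SQL statement, it also scales better to 10 million rows than any PL/SQL loop, bulk-bound or not.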
I have data in one table with 6 columns. A user may update values in all 6 columns, or may enter only 3 or 4 of them, and inserts should take place based on that. This is similar to my previous thread. I am using an IF condition to check each column for NULL, and if it is not NULL I make an insert; but is there an easier way to do this?
insert into ot_po values ('ss-po', 1, ph_sys.nextval);
insert into ot_inspect_head values (inh_sys.nextval, 'ss-ins', 1, 'ss-po', 1);
commit;
select * from ot_inspect_item;
-- Now if the inspection user issues the update statement, it will delete this row
-- from ot_inspect_item and reinsert values based on
-- ii_flex_01, ii_flex_02, ii_flex_03 [code]...
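If the intent is that each non-NULL column should drive its own insert, Oracle's conditional multi-table INSERT ALL can replace the IF ladder; a sketch with hypothetical target tables and bind variables:

INSERT ALL
  WHEN col1 IS NOT NULL THEN
    INTO detail_one   (po_no, val) VALUES (po_no, col1)
  WHEN col2 IS NOT NULL THEN
    INTO detail_two   (po_no, val) VALUES (po_no, col2)
  WHEN col3 IS NOT NULL THEN
    INTO detail_three (po_no, val) VALUES (po_no, col3)
SELECT :po_no AS po_no, :col1 AS col1, :col2 AS col2, :col3 AS col3
  FROM dual;

Each WHEN clause fires independently, so only the columns the user actually filled in produce rows.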
I have prepared shell scripts to do the parallel inserts on my DB table (LEGACY_SYSTEM).
There is a trigger (AFTER INSERT, FOR EACH ROW) associated with the above table. I am calling a package function inside the trigger to do the required operation, and finally it inserts records into my target table (PRICE_CHANGE).
Expectation:
If I insert 10 rows into the LEGACY_SYSTEM table, it should do a few updates and finally insert 10 rows into the PRICE_CHANGE table.
Result:
10 rows got inserted into LEGACY_SYSTEM. All the updates are successful, but I can see only 4 rows in the PRICE_CHANGE table. If I run it a second or third time, all the results are perfect.
Instead of these shell scripts, if I insert rows one by one manually into the LEGACY_SYSTEM table, I get all the expected results, and the results are consistent. If you look at my scripts below, you will understand the problem better.
I am calling test_global.sh through the UNIX session; all the records get inserted into the LEGACY_SYSTEM table, but a few rows are missing from the PRICE_CHANGE table.
If I remove the '&' symbol and execute, the results are perfect. But the requirement is not to remove the '&' symbol. I have been facing this problem for the past month.
I have a PL/SQL procedure which accepts a few parameters and loops through a cursor, running a bunch of insert statements, with quite a few IF conditions.
Each insert statement has a value which I want to increment by 1 every time an insert statement is executed in the same loop. This is for a student housing database and is for room preferences, so 1 is the first preference, 2 is the second preference, etc.
Please take a look at the code below: in the INSERT VALUES () I have put a ? where I want the number to increment.
There are a lot more inserts which I haven't put below. I hope I have made myself clear, as this has been quite difficult to explain. So, for example, if the second two inserts are run, I want the first one to insert with a 1 and the second with a 2.
BEGIN
  FOR rec IN c1 LOOP
    IF c1%FOUND THEN
      INSERT INTO table (PK_A, fk_rms_id, application_type, application_person_type)
      VALUES (NULL, rec.pk_rms_id, app_type, app_person_type)
      RETURNING PK_APPLICATION_NO INTO x;
[Code] ........
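A minimal sketch of the counter pattern (the IF c1%FOUND test inside a cursor FOR loop is redundant, incidentally); the room_preferences table, the stand-in cursor and the two condition flags are hypothetical:

DECLARE
  CURSOR c1 IS
    SELECT level AS pk_rms_id FROM dual CONNECT BY level <= 3;  -- stand-in cursor
  pref_no     PLS_INTEGER;
  condition_1 BOOLEAN := TRUE;  -- stands in for the first IF test
  condition_2 BOOLEAN := TRUE;  -- stands in for the second IF test
BEGIN
  FOR rec IN c1 LOOP
    pref_no := 0;  -- restart the preference numbering for each cursor row
    IF condition_1 THEN
      pref_no := pref_no + 1;
      INSERT INTO room_preferences (fk_rms_id, pref_order)
      VALUES (rec.pk_rms_id, pref_no);
    END IF;
    IF condition_2 THEN
      pref_no := pref_no + 1;  -- only advances when an insert actually runs
      INSERT INTO room_preferences (fk_rms_id, pref_order)
      VALUES (rec.pk_rms_id, pref_no);
    END IF;
  END LOOP;
END;
/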
I am trying to create a procedure that inserts parameters into a table and then returns the number of rows inserted to the calling block. The procedure compiles fine but does not return the number of rows inserted. My code is as follows:
STORED PROCEDURE
CREATE OR REPLACE PROCEDURE CarMasterInsert_sp (
  registration       IN  VARCHAR2,
  model_name         IN  VARCHAR2,
  car_group_name     IN  VARCHAR2,
  date_bought        IN  DATE,
  cost               IN  NUMBER,
  miles_to_date      IN  NUMBER,
  miles_last_service IN  NUMBER,
  status             IN  CHAR,
  rowsInserted       OUT NUMBER)
[code]....
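The usual fix is to capture SQL%ROWCOUNT immediately after the INSERT. A sketch trimmed to two parameters (car_master and its columns are guesses from the parameter list):

CREATE OR REPLACE PROCEDURE CarMasterInsert_sp (
  registration IN  VARCHAR2,
  model_name   IN  VARCHAR2,
  rowsInserted OUT NUMBER)
AS
BEGIN
  INSERT INTO car_master (registration, model_name)
  VALUES (registration, model_name);
  -- SQL%ROWCOUNT reflects the most recent SQL statement,
  -- so assign it before running anything else
  rowsInserted := SQL%ROWCOUNT;
END;
/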
I'm running 11.2.0. I am looking at tuning a SQL statement, and the question was brought up as to the maximum number of inserts per transaction in 11g, and whether it exceeds 1000. I haven't found a solid answer yet, but I thought that in 10g it was higher than 1000.
My first thought was to implement a commit loop every 1000 rows, as that is how things were handled in the past. But I found an article about redo logs and performance that says the commit loop is a horrible practice.
What I haven't found is: what is the better methodology for doing this? My scenario could encounter as many as 20,000 inserts at a time.
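A minimal sketch of the single-transaction bulk pattern, assuming the rows come from a source table (table and collection names are hypothetical); at 20,000 rows, one commit at the end is normally well within transaction limits:

DECLARE
  TYPE t_rows IS TABLE OF target_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  SELECT * BULK COLLECT INTO l_rows FROM source_table;

  FORALL i IN 1 .. l_rows.COUNT
    INSERT INTO target_table VALUES l_rows(i);

  COMMIT;  -- one commit for the whole batch, no per-1000-row commit loop
END;
/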
During a batch process, a record is entered in a detail table as well as a summary table.
The process first checks whether a record exists in the summary table for the same group_no; if yes, it updates the record with the newly added amount (sums it), else it inserts a new record. In the detail table it inserts the record directly.
Now if the batch process runs in parallel, two different sessions (out of many) can insert the same group_no. This is because while the second session inserts a record, the first session inserting the same group_no has not yet committed; so the second session, not knowing that the same group_no (101) has already been inserted, inserts another record with the same group_no rather than summing it.
Can it be solved without using a temp table or SELECT FOR UPDATE?
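One approach that needs neither a temp table nor SELECT FOR UPDATE is a unique constraint on group_no plus exception handling: the second session's INSERT blocks on the uncommitted duplicate, and once the first session commits it raises DUP_VAL_ON_INDEX, at which point it falls back to the UPDATE. A sketch with hypothetical names:

-- one-time DDL
ALTER TABLE summary_table ADD CONSTRAINT summary_group_uk UNIQUE (group_no);

-- in the batch code
BEGIN
  INSERT INTO summary_table (group_no, amount)
  VALUES (p_group_no, p_amount);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    UPDATE summary_table
       SET amount = amount + p_amount
     WHERE group_no = p_group_no;
END;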
I have a table named student_details with columns NAME, ADDRESS and COURSE, with several rows of data already inserted. I have to add one more column, ID, which increments automatically.
I tried to do this using a SEQUENCE, but no values got inserted into ID for the already existing rows. How do I write a script that automatically increments and inserts values for the already existing rows as well?
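A minimal sketch: add the column, backfill the existing rows from a sequence, then let a trigger number future inserts (the sequence and trigger names are hypothetical):

ALTER TABLE student_details ADD (id NUMBER);

CREATE SEQUENCE student_id_seq;

-- backfill: the sequence is evaluated once per existing row
UPDATE student_details
   SET id = student_id_seq.NEXTVAL;

CREATE OR REPLACE TRIGGER student_details_id_trg
BEFORE INSERT ON student_details
FOR EACH ROW
BEGIN
  SELECT student_id_seq.NEXTVAL INTO :NEW.id FROM dual;
  -- on 11g and later this can be written as :NEW.id := student_id_seq.NEXTVAL;
END;
/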
I was given a task by my manager to keep track of changes on a given table, including the OS user who made them. Should I create a trigger on it (on any update, insert, delete, etc.), or is there a better way of doing it? I think there could be some info already in the data dictionary views or something similar.
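For the trigger route, SYS_CONTEXT('USERENV', 'OS_USER') supplies the OS user. A minimal sketch against a hypothetical emp table and emp_audit log (Oracle's built-in AUDIT facility and DBA_AUDIT_TRAIL are the dictionary-based alternative worth checking first):

CREATE OR REPLACE TRIGGER emp_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  INSERT INTO emp_audit (changed_at, os_user, db_user, action)
  VALUES (SYSDATE,
          SYS_CONTEXT('USERENV', 'OS_USER'),      -- client OS account
          SYS_CONTEXT('USERENV', 'SESSION_USER'), -- database account
          CASE
            WHEN INSERTING THEN 'INSERT'
            WHEN UPDATING  THEN 'UPDATE'
            ELSE 'DELETE'
          END);
END;
/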
I have a Bash script that counts the rows of a CSV file, extracts the fields and writes inserts into a SQL file. Then it logs into SQL*Plus and calls the insert file. The SQL file looks like this:
I rely on "WHENEVER SQLERROR EXIT" for things to go the right path. However sometimes because of the contents of the CVS files (which I can't control) some rows don't get inserted but SqlPlus doesn't see that as an error, doesn't exit and I end up with the wrong number of rows being informed in the second insert.Is there some kind of "if-then-else" construct in Sql? After all the inserts are made, do a "select count (*)" and compare that number to the one informed by the script. If they match, make the final insert and commit; else exit.
The issue is slow insertion into one particular table (table A). Insertion into all the other tables (B, C, D) in the same schema works properly, but inserting into table A takes a long time to complete. Daily insertion is 6,000 rows.
I have checked all the details: tablespace size, analysis of the table, analysis of the indexes and so on. There is no error in the alert log file.
We have two database instances on the same server. One was left at 9.2.0.7 and one was upgraded to 10.2.0.3. Connecting externally (sqlplus '/ as sysdba') to the 9.2.0.7 database is lightning fast. Connecting externally to the 10.2.0.3 database is very slow, comparatively speaking. This is on an IBM AIX-5L (64-bit) machine. We are using tnsnames.
I am migrating an Oracle 9i database to 11g. I can only use imp. As the database is huge, I have split the exp dump by schemas. In my recent test, I split the schemas into 4 separate threads to be imported into the new Oracle 11g database. The 4 imp threads consist of almost similarly sized sets of schemas (e.g. thread 1: schemas 1, 2, 3; thread 2: schemas 4, 5, 6; etc.).
All the dump files are on the same mount point. When I execute the 4 import threads together, each thread takes between 2.5 and 3.5 days.
When I then tried only 1 thread, it took only 2 hours. So could this be an I/O issue or an Oracle memory problem?
We have done a clone of our Oracle Applications (11i) environment, and since then the performance of the ERP has been slow (e.g. fetching of data). What can we do to increase the performance?
What is the error in this procedure? I am getting an error at: where s.msg_id = Tbl_Repo_Salo(i.msg_id));
Create or replace Procedure Proc_Stg_Clear(
  P_Err_Code OUT NOCOPY number,
  P_err_msg  OUT NOCOPY varchar2) as
  v_count number;
  Type Repo_Salo is table of Tbl_Tml_msg_Salo_Stg%ROWTYPE;
  Type Repo_Smb  is table of Tbl_Tml_msg_SMB_Stg%ROWTYPE;
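Based only on the fragment shown, the likely problem is that a PL/SQL collection cannot be called like a function inside a SQL WHERE clause (Tbl_Repo_Salo(i.msg_id)). One common rework bulk-collects the key column into a scalar collection and drives the DML with FORALL; a sketch with a hypothetical target table:

DECLARE
  TYPE msg_id_t IS TABLE OF tbl_tml_msg_salo_stg.msg_id%TYPE;
  l_ids msg_id_t;
BEGIN
  SELECT msg_id BULK COLLECT INTO l_ids
    FROM tbl_tml_msg_salo_stg;

  -- reference the collection only through the FORALL index variable
  FORALL i IN 1 .. l_ids.COUNT
    DELETE FROM some_target_table s
     WHERE s.msg_id = l_ids(i);
END;
/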
We are busy migrating one database from a Windows 2003 / Oracle 10g platform to Linux and Oracle 11gR2.
We exported/imported the data and it looks OK, and the explain plans look the same, but our heavy batches are twice as slow as on the Windows box. The two top events are disk related, sequential and scattered reads; they account for 90% of the batch job's time. I read some white papers and found that using ASM can be bad in some cases, and the same goes for Linux with this particular kind of scattered reads. I was just wondering whether changing the SGA to 10GB instead of 4GB, to get more cache, would speed things up.