Deadlock In Database Because Of Concurrent Data Deletion
Feb 4, 2013
I am facing a deadlock issue in my transaction when records are deleted from the same table in parallel by a PL/SQL package. The package is called from Java code. The example and the table structure are given below.
Col1 of tab1 is a foreign key to col1 of tab2.
No commit is issued until all of the DML statements below have been executed in both sessions.
1st session:
Delete from tab1 where col1=1001;
Delete from tab2 where col1=2001;
Result: Deletion successful
2nd session:
Delete from tab1 where col1=1002;
Delete from tab2 where col1=2002;
Result: Deletion successful
1st session:
Delete from tab1 where col1=1003;
Delete from tab2 where col1=2003;
Result: Deletion successful
2nd session:
Delete from tab1 where col1=1004;
Delete from tab2 where col1=2004;
Result: Query keeps executing for a long time (the session appears to hang).
1st session:
Delete from tab1 where col1=1003;
Delete from tab2 where col1=2003;
Result: Query is not executed, and a deadlock error is thrown in the back end.
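For reference, a minimal sketch of the schema as described (the constraint and index names are illustrative, not from the original post):

CREATE TABLE tab2 (col1 NUMBER PRIMARY KEY);
CREATE TABLE tab1 (
    col1 NUMBER,
    CONSTRAINT fk_tab1_tab2 FOREIGN KEY (col1) REFERENCES tab2 (col1)
);
-- If tab1.col1 is left unindexed, a DELETE on the parent table (tab2)
-- takes a table-level lock on the child (tab1); two sessions interleaving
-- deletes as above can then deadlock on those table locks. Indexing the
-- FK column is the usual first thing to check:
CREATE INDEX idx_tab1_col1 ON tab1 (col1);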
I have a RAC system with a DR set-up. This is a test environment and it doesn't have any backups; why DR is required here I don't know, but it exists. Since this is a test system, a lot of archive logs get generated, and deleting the archives manually has become a daily job for this server.
I want a script to delete archive logs that sit on non-ASM storage (i.e. a filesystem) after ensuring that each archive log has been applied on the standby database. Can this be done only with RMAN?
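One common approach is to let RMAN do the applied-on-standby check itself (a sketch; behaviour varies slightly by version, so verify it on yours before relying on it):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL;

With the policy set, DELETE skips any log that has not yet been applied on the standby; wrapping the two commands in a script run from cron would replace the manual daily job.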
I was carrying out an experiment in order to crash the database and recover it.
The database was running. I moved the control file to another location and, to my surprise, the database was still running. I tried issuing a checkpoint and running transactions, but it didn't affect database operations. I tried doing a log switch; it completed successfully. According to my understanding and the Oracle certification books, the database was supposed to crash, but it didn't.
I tried this not only on RHEL and Windows XP but also on Solaris 5.10. The database version is Oracle 10.2.0.4 Standard Edition.
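For reference, the commands used in the experiment, plus a query to see which control files the instance believes it is using:

ALTER SYSTEM CHECKPOINT;
ALTER SYSTEM SWITCH LOGFILE;
SELECT name FROM v$controlfile;

(One likely explanation, at least for the Unix cases: moving a file does not invalidate file descriptors that are already open, so a running instance can keep writing to the original control file's inode. The crash would only come when the instance next needs to reopen the file by name.)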
We are designing a three-tiered system (client, application/web server, database server) that will allow clients, through a web interface, to select a text file from the operating system and load that file into an intermediate table (an import database table). Many users will do this concurrently, and the data will load into a single table. The text files come in monthly for about 100 firms. No user is able to insert or update another user's data (there is a check-out system). There are about 30 to 40 users that will be using the system for various functions, but it is possible for 10 to 20 users to import data at one time. The files can have anywhere from 2,000 to 25,000 records at a record length of 398. I am concerned about having a good design strategy as well as decent performance.
Problems with each of the Oracle loaders:
1) External tables - they cannot read text files on the application server (which is where they want the text files to go); secondly, you cannot create an instance of an external table, yet multiple users will be using the external table to point to different text files while loading at the same time.
2) SQL*Loader - it is mainly an OS-level tool, and I am not sure how I could programmatically point it at a different text file each time a user wants to load. The client would have to have the ability, through code, to point SQL*Loader at the correct file name.
I had a creative approach and was wondering if this would work. I would like to use external tables just like a connection pool. I would propose, first, a scheduled OS job to move files to the database server. I would create about 20 external tables with 20 different directory objects, with a stored procedure for the user to call, passing in the file name and audit info as needed. I would use a "load lock pool" table (my invention) to hold the name or a code for each external table in use. The procedure loads this code into my load lock pool table when an external table is in use and deletes the name when the load is completed. The procedure would check, through a series of IF statements, whether a particular external table was in use. If in use (i.e. it exists in the load lock pool table), I would check the next available external table until one not in use is encountered. So potentially, though not likely, 20 users at one time would be loading into the same table. (A sketch of this idea appears after my questions below.) My questions:
1) Could Oracle handle this strategy? What do I need to consider performance-wise, given the possibility of so many users loading into a single table at one time?
2) Do any of you maybe have another strategy to do this?
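A sketch of the "load lock pool" idea, using a unique key to make the claim atomic instead of a series of IF statements (all names here - load_lock_pool, load_user_file, EXT_TAB_n, import_table - are illustrative, not from the original post):

CREATE TABLE load_lock_pool (
    ext_table_name VARCHAR2(30) PRIMARY KEY
);

CREATE OR REPLACE PROCEDURE load_user_file (p_file_name IN VARCHAR2) IS
    l_ext_table VARCHAR2(30);
BEGIN
    -- claim the first external table not already in use; the primary key
    -- guarantees two sessions can never claim the same one
    FOR i IN 1 .. 20 LOOP
        BEGIN
            INSERT INTO load_lock_pool (ext_table_name)
            VALUES ('EXT_TAB_' || i);
            l_ext_table := 'EXT_TAB_' || i;
            COMMIT;   -- make the claim visible to other sessions
            EXIT;
        EXCEPTION
            WHEN DUP_VAL_ON_INDEX THEN
                NULL; -- in use, try the next one
        END;
    END LOOP;
    IF l_ext_table IS NULL THEN
        RAISE_APPLICATION_ERROR(-20001, 'All external tables are in use');
    END IF;
    -- repoint the claimed external table at the caller's file, then load
    EXECUTE IMMEDIATE 'ALTER TABLE ' || l_ext_table ||
                      ' LOCATION (''' || p_file_name || ''')';
    EXECUTE IMMEDIATE 'INSERT INTO import_table SELECT * FROM ' || l_ext_table;
    -- release the external table back to the pool
    DELETE FROM load_lock_pool WHERE ext_table_name = l_ext_table;
    COMMIT;
END;
/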
Maybe you can tell me if you think it's right or wrong for Oracle to behave like that. What basically happened is that I've set up a trigger to capture deleted rows from testtab into testtab_del.
I have inserted and immediately after - deleted a row from testtab. Then I inserted another row, committed, and checked my testtab_del table.
I've seen that val1 was inserted and committed into testtab_del. This happened in spite of the fact that this row never existed as a *committed row*.
What do you think about this behavior in this scenario? Works as designed or not?
SQL> show user
USER is "ANDREY"
SQL> col tcol for a10
SQL> drop table testtab;
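(The original session listing was truncated; here is a sketch of the scenario described above, with an illustrative trigger name:)

CREATE TABLE testtab (tcol VARCHAR2(10));
CREATE TABLE testtab_del (tcol VARCHAR2(10));

CREATE OR REPLACE TRIGGER trg_testtab_del
    BEFORE DELETE ON testtab
    FOR EACH ROW
BEGIN
    INSERT INTO testtab_del (tcol) VALUES (:OLD.tcol);
END;
/

INSERT INTO testtab VALUES ('val1');
DELETE FROM testtab;            -- fires the trigger within the same transaction
INSERT INTO testtab VALUES ('val2');
COMMIT;                         -- commits the trigger's insert as well

SELECT * FROM testtab_del;      -- shows 'val1', even though 'val1' was never
                                -- a committed row in testtab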
I have a DR setup with the following RMAN configuration:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY
I don't want to back up the standby DB, but I want the archived log files to be removed once applied, so my flash recovery area does not fill up. Is there some command (RMAN or otherwise) that can fire off this policy?
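Two things that may help (stated as general behaviour, not verified against this exact setup). First, the policy can be fired explicitly from RMAN, which then deletes only the logs the policy allows:

RMAN> DELETE NOPROMPT ARCHIVELOG ALL;

Second, when the archived logs live in the flash recovery area, Oracle automatically ages out files that are eligible under the configured deletion policy when the area comes under space pressure, so applied logs should not actually fill it up.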
I am working on 11g and AIX. We got a deadlock recently, and we need to investigate why it happened and find a resolution so it will not happen in the future. The deadlock trace file shows two SQL statements on different tables, and the lock is an exclusive lock.
----- Information for the OTHER waiting sessions -----
SELECT 1 FROM HOS_DTL
WHERE ( HOS_LCN_DTL.HOS_LCN_DTL_ID = :1 )
FOR UPDATE WAIT 180
and
----- Current SQL Statement for this session -----
DELETE FROM EI_INV
WHERE ( EI_INV.EI_INV_ID = :1 )
But these two tables, HOS_DTL and EI_INV, are not related at all. Can a deadlock happen if the SQL statements involved are fired against tables that are not related at all, and can the deadlock trace file show those two SQLs?
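For what it's worth, a deadlock requires no relationship between the tables at all, only two sessions acquiring row locks in opposite order. A minimal sketch with generic tables t1 and t2:

-- session A
UPDATE t1 SET c = 1 WHERE id = 1;   -- locks a row in t1
-- session B
UPDATE t2 SET c = 1 WHERE id = 2;   -- locks a row in t2
-- session A
UPDATE t2 SET c = 2 WHERE id = 2;   -- blocks, waiting on session B
-- session B
UPDATE t1 SET c = 2 WHERE id = 1;   -- blocks on session A: ORA-00060 is
                                    -- raised in one of the two sessions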
While trying to execute the query mentioned below, I am facing a "DEADLOCK FOUND WHEN TRYING TO GET LOCK" error.
UPDATE PLAN
   SET PLAN_PMS_BLOCK_ID = ''
 WHERE PLAN_PMS_BLOCK_ID <> ''
   AND PLAN_PMS_BLOCK_ID NOT BETWEEN '0' AND '9'
   AND PLAN_PMS_BLOCK_ID NOT BETWEEN 'A' AND 'Z'
   AND PLAN_PMS_BLOCK_ID NOT IN ('-')
   AND LENGTH(PLAN_PMS_BLOCK_ID) = 1;
I keep getting "ORA-04020: deadlock detected while trying to lock object XDB.SDNRB". The statement I'm trying to issue is:
REVOKE execute on abc."descript_T" FROM PUBLIC;
I have not been able to find a solution for this other than trying the operation again at a later time. I did, but I get the same error every single time.
I am using Oracle 10.2.0.3, and I am getting very slow response times for the query below, sometimes resulting in a deadlock throwing an ORA-00060 error.
DELETE FROM GBC_CORE.SPI_ELEMENT_ID TRGT
 WHERE (TRGT.URI) NOT IN
       ( SELECT DISTINCT FROMTOURI.URI
           FROM ( SELECT SERVICEACCESSNAME AS URI,
                         SUBSTR(SERVICEACCESSNAME, 1, INSTR(SERVICEACCESSNAME, '_') - 1) AS FROM_URI,
                         SUBSTR(SERVICEACCESSNAME, INSTR(SERVICEACCESSNAME, '_') + 1, LENGTH(SERVICEACCESSNAME)) AS TO_URI
                    FROM TRPT.V_TRPT_SPI_VIEW@DBLNK_FCE_TRPT ) FROMTOURI, [code]...
I am getting the error below whenever I execute a particular select query. Sometimes it shows "deadlock detected while waiting for resource" and terminates; sometimes it executes and returns results. But every time, it writes an alert to the alert log.
Env: Linux / Oracle 11.2.0.3.3. Error from alert log:
Errors in file /u01/oracle/oracle/diag/rdbms/bdrdb/bdrdb/trace/bdrdb_p017_6076.trc:
ORA-00060: deadlock detected while waiting for resource
ORA-10387: parallel query server interrupt (normal)
Trace file info... bdrdb_p017_6076.trc:
Trace file /u01/oracle/oracle/diag/rdbms/bdrdb/bdrdb/trace/bdrdb_p017_6076.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/oracle/oracle/product/11.2.0/dbhome_1
System. [code]....
You have a stock_amt value in one table, and there is a procedure that updates and subtracts from this stock_amt.
Let's say a store has a stock amount of 50 items, and this procedure subtracts from this value for each sale; the value is not allowed to go below zero. Besides the update on this column (SET stock_amt = stock_amt - x), the process does a lot of other updates on other tables, and in total it takes about 0.5 seconds. Everything is fine until I want 50 users to execute this procedure in parallel.
In the initial implementation, to avoid some deadlocks, we put a lock on that column (stock_amt), but there were too many waits; we cannot hold that lock for 0.5 seconds.
What would be the best approach for this? For this stock_amt problem, maybe the solution is a trade-off: do not update that column on every sale, but only once in a while, by another process or by materialized-view logic.
But what if my column is a critical value, like a prepay balance or a bank balance, and it needs to be updated in near real time? What would you do?
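One commonly suggested pattern for the near-real-time case (a sketch under the assumptions above; the names register_sale and store_stock are illustrative): do the slow work first, and touch the contended row as the very last step before COMMIT, so its row lock is held for milliseconds rather than the full 0.5 seconds:

CREATE OR REPLACE PROCEDURE register_sale (p_store NUMBER, p_qty NUMBER) IS
BEGIN
    -- ... all the other inserts/updates on other tables first (the slow part) ...

    -- touch the hot row last, immediately before commit
    UPDATE store_stock
       SET stock_amt = stock_amt - p_qty
     WHERE store_id  = p_store
       AND stock_amt >= p_qty;     -- enforces the not-below-zero rule atomically
    IF SQL%ROWCOUNT = 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'Insufficient stock');
    END IF;
    COMMIT;
END;
/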
I have a deadlock trace file to analyse, and I used to be able to see the rowid as a 16-digit hex value in the trace file, which I could then query on to get the actual real-world row. I see that in the 11gR2 deadlock trace the formatting is different, and for the life of me I am unable to see the rowid. Has Oracle stopped reporting this now, or is there another way to get this value? The deadlock graph shows me
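In case it helps: if the 11gR2 trace still prints the dictionary object number, file, block and slot under "Rows waited on", the rowid can be rebuilt with DBMS_ROWID (a sketch; substitute the numbers from your own trace for the placeholders):

SELECT DBMS_ROWID.ROWID_CREATE(1, <objn>, <file#>, <block#>, <slot#>) AS row_id
  FROM dual;

SELECT * FROM <owner>.<table_name> WHERE rowid = <row_id>;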
What could be the reason for an LMSn process not heartbeating after a global enqueue service deadlock was detected? This happened in a 3-instance RAC; after the LMS1 process stopped heartbeating, one of the instances crashed, and another instance crashed some minutes later. What could be the reason for the process crash after the deadlock was resolved?
As part of an interface I've implemented, I have a 'listener' concurrent request that registers for a dbms_alert and uses waitone to detect appropriate alerts. When an alert has been detected, a loader concurrent request is kicked off, before the listener loops back to the waitone call. I've set a nominal waitone timeout period of ten minutes. It is not envisaged that many alerts will be raised, maybe several a day at the moment, but I am always reluctant to use polling unless I can avoid it.
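For reference, the listener loop described, sketched with the documented DBMS_ALERT calls (the alert name is illustrative):

DECLARE
    l_message VARCHAR2(1800);
    l_status  INTEGER;
BEGIN
    DBMS_ALERT.REGISTER('LOADER_ALERT');
    LOOP
        -- blocks until the alert fires or the 600-second timeout elapses
        DBMS_ALERT.WAITONE('LOADER_ALERT', l_message, l_status, 600);
        IF l_status = 0 THEN
            NULL;  -- alert received: kick off the loader concurrent request here
        END IF;    -- l_status = 1 means timeout: simply loop and wait again
    END LOOP;
END;
/

WAITONE sleeps rather than spins while waiting, so a listener like this should not, by itself, account for high CPU usage.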
Our EBS system has gradually ground to a halt after the release of this program, and the DBAs suspect it as a possible cause. CPU usage was very high overall, with various Java-related usage figures also being high.
The overall idea is to have a 'poor man's' equivalent of OAI, so I'd hope to add more listeners, with each being the equivalent of an OAI adapter.
Surely a big powerful thing like Oracle EBS is not going to keel over just because of a few processes doing little more than waiting for alerts?
I am trying to execute two scripts at the same time (concurrently) in Oracle SQL Developer. I know we can schedule a job using the DBMS_JOB package and define the job, but is there any other way of doing it, using threads?
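For reference, a sketch of the DBMS_JOB route mentioned above (script_one and script_two are placeholders for procedures wrapping the two scripts): each submitted job runs in its own background session, so the two execute concurrently once the submitting session commits:

DECLARE
    l_job BINARY_INTEGER;
BEGIN
    DBMS_JOB.SUBMIT(l_job, 'script_one;');  -- runs in its own session
    DBMS_JOB.SUBMIT(l_job, 'script_two;');
    COMMIT;                                  -- the jobs only start after commit
END;
/

Within SQL Developer itself, the closest equivalent is simply opening two worksheets on two separate (unshared) connections and running one script in each.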
I would like to prevent the deletion of a record based on some column value.
Something like:
IF cola < 10 THEN
    show an alert; the record is not deleted
ELSE
    delete the record as normal
END IF;
I want this to happen even before the record is marked for deletion, but I don't know where to put it. I tried a pre-delete trigger, but it does not work the way I want.
When I open the form and query the records, pressing Ctrl+Down Arrow deletes the current record.
This is the normal runtime form behavior. I don't want the record deleted whenever Ctrl+Down Arrow is pressed; I tried KEY-DELREC, but it is not working.
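A sketch of a KEY-DELREC trigger for this (the block, item, and alert names are illustrative; KEY-DELREC intercepts the delete-record key before the record is marked for deletion):

-- KEY-DELREC trigger, defined at block level
DECLARE
    l_button NUMBER;
BEGIN
    IF :my_block.cola < 10 THEN
        l_button := SHOW_ALERT('cannot_delete_alert');  -- tell the user why
        RAISE FORM_TRIGGER_FAILURE;                     -- stop the deletion
    ELSE
        DELETE_RECORD;                                  -- normal behaviour
    END IF;
END;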
We had an escalation wherein one of our team members accidentally deleted an LDAP entry for a database. We use Oracle Net Manager to add/delete the connect descriptors.
Are there any logs we can use to find out who deleted the entry?
I have a stored procedure that is run from a command within our Clarity application.
The procedure involves some SQL Reads and SQL Inserts.
We have seen users running the SP at the same time (a slim chance of this, but it happens), and it creates duplicate entries.
Is there a clever way of preventing the same SP from being run concurrently?
Initially I was thinking of having the first step of the SP interrogate a flag in a custom table, which the SP sets to 1 while it is running and back to 0 at the end.
Are there better, more efficient/effective ways of doing this?
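One standard alternative to a flag table is DBMS_LOCK, because the lock disappears automatically on commit, rollback, or session death, so there is no stale flag to clean up after a crash. A sketch (the procedure and lock names are illustrative):

CREATE OR REPLACE PROCEDURE my_sp IS
    l_handle VARCHAR2(128);
    l_result INTEGER;
BEGIN
    DBMS_LOCK.ALLOCATE_UNIQUE('MY_SP_SERIALIZER', l_handle);
    l_result := DBMS_LOCK.REQUEST(l_handle,
                                  DBMS_LOCK.X_MODE,
                                  timeout           => 0,     -- fail fast if busy
                                  release_on_commit => TRUE);
    IF l_result <> 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'This procedure is already running');
    END IF;
    -- ... the existing reads and inserts go here ...
    COMMIT;  -- also releases the lock
END;
/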
I want to run multiple reports concurrently, called from my form.
Suppose I have 10 reports, say Report1, Report2, ... Report10.
I am using RUN_PRODUCT to call these reports from my form, but it takes too long to run all the reports one after another. Can I run all these reports concurrently, at the same time?
I want to add a new validation to restrict concurrent users and/or sessions from a client (we have almost 60 client firms using the software to enter daily transactions). All users from all clients connect to the database using a common functional ID.
What I did was:
1) Add a column 'user_logged_in' to the client master table and update it to 'Y' when a user from that client logs on to the system,
2) Insert the application logon details (we can figure out the client details from these) into a global temporary table,
3) Create a logoff trigger to update the 'user_logged_in' flag in the client master table, using values from the global temporary table, when the session logs off, and
4) Restrict users from the same client if the flag is 'Y'.
But the problem in this case is that the logoff trigger will not be executed if the session is killed or terminates abnormally.
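One hedged alternative that avoids the stale-flag problem: instead of persisting a flag, derive "is someone from this client already connected?" from v$session, whose rows disappear with the session no matter how it dies. This assumes the application stamps each session with the client code at logon (the code 'CLIENT_042' is hypothetical):

-- at application logon, stamp the session
BEGIN
    DBMS_APPLICATION_INFO.SET_CLIENT_INFO('CLIENT_042');
END;
/

-- the validation: is any live session already carrying this client code?
SELECT COUNT(*)
  FROM v$session
 WHERE client_info = 'CLIENT_042';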