Trigger Performance Impact When Not Firing?
Apr 30, 2013
I'm trying to find some information on the performance impact of a trigger on a heavily updated table when the condition to fire the trigger is NOT met. In other words, what I'm really trying to find out is the cost of the system evaluating the trigger's condition to decide whether it should fire or not.
For example, I have a batch job that inserts into and updates a table heavily, but the batch job almost never updates the column in question to the value that would cause the trigger to fire; it does, however, update that column to other values often.
I know about the many downsides of using triggers in general, but I'm working with a third-party application, so more optimal solutions aren't an option.
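Here is a minimal sketch of the pattern in question (table, column, and value are placeholders, not from the original post): with a row-level WHEN clause, Oracle only evaluates that predicate for rows that don't match, so the trigger body is never entered for them and the per-row overhead is roughly one extra predicate check.

CREATE OR REPLACE TRIGGER batch_tab_status_trg
AFTER UPDATE OF status ON batch_tab
FOR EACH ROW
WHEN (NEW.status = 'FIRED')   -- the rarely-met condition; non-matching rows stop here
BEGIN
  NULL;                       -- the real work goes here and only runs for matching rows
END;
/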
View 1 Replies
Jun 9, 2010
I have a database trigger that fires on delete or update. It works fine, but after a few days only the delete is firing; when I update, nothing happens. If I re-create the trigger it works fine again, and then the problem comes back after a few days.
The trigger is defined for both actions, so I wonder why this happens:
AFTER
DELETE OR UPDATE
ON AGENCY.CCS_TEMPLATE REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
I checked the trigger in the ALL_OBJECTS view and it is VALID. In the schema browser it shows as compiled.
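A hedged checklist sketch (the trigger name CCS_TEMPLATE_TRG is a placeholder, since the post does not give one): a VALID status in ALL_OBJECTS is not the same thing as the trigger being enabled, so both are worth checking, along with the events it is defined for.

SELECT object_name, status
FROM   all_objects
WHERE  object_type = 'TRIGGER'
AND    object_name = 'CCS_TEMPLATE_TRG';   -- VALID / INVALID

SELECT trigger_name, status, triggering_event
FROM   all_triggers
WHERE  table_owner = 'AGENCY'
AND    table_name  = 'CCS_TEMPLATE';       -- ENABLED / DISABLED and the covered events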
View 1 Replies
Nov 5, 2012
I am navigating from a master form to a child form. The WHEN-CREATE trigger in the child form sometimes executes and sometimes does not. When it does not fire, the child form opens without any of its initially assigned values. What is the root cause, and in which scenarios does the WHEN-CREATE trigger fire?
View 15 Replies
Feb 4, 2011
How can I find the tables in the database against which the heaviest DML is being fired?
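A hedged sketch (it assumes table monitoring is enabled, which is the default from 10g onwards): DBA_TAB_MODIFICATIONS accumulates approximate insert/update/delete counts since statistics were last gathered, which gives a quick ranking of DML-heavy tables.

BEGIN
  DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;   -- push the in-memory counters into the view
END;
/
SELECT table_owner, table_name, inserts, updates, deletes
FROM   dba_tab_modifications
ORDER  BY inserts + updates + deletes DESC;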
View 5 Replies
Sep 9, 2011
In Oracle Enterprise Manager, under "User I/O", I basically have four categories.
If we rank them out of ten it would be like:
read by other session 2/10
db file scattered read 1/10
direct path read 0.5/10
db file sequential read 6.5/10
All of these are coming from the same 2 tables almost all of the time. Is there some way to handle "read by other session" and "db file sequential read"?
I rebuild the indexes of the tables involved once every 10 days, and statistics for these tables are collected every day using "analyze table xxx compute statistics". What in-depth approach should I take to minimize the impact, as users are complaining about performance?
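One concrete step, shown as a hedged sketch (schema and table names are placeholders): ANALYZE ... COMPUTE STATISTICS is deprecated for optimizer statistics, and DBMS_STATS gathers the column and index statistics the optimizer actually relies on.

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_SCHEMA',                  -- placeholder schema
    tabname          => 'XXX',                         -- placeholder table
    cascade          => TRUE,                          -- also gather statistics on its indexes
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/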
View 2 Replies
Nov 28, 2011
My SQL query has three tables in the FROM clause, so it has two join conditions and one WHERE condition.
account_no is of NUMBER data type and v_account_no is of VARCHAR2 data type.
With the where clause written as
where account_no = to_number(v_account_no)
the query has a cost of 392. If we modify the where clause to
where v_account_no = to_char(account_no)
the query has a cost of 11.
What is the impact of this data type conversion, and what is the difference between TO_NUMBER() and TO_CHAR() performance-wise in reducing the cost of the query?
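A hedged, self-contained illustration (the tables, columns, and indexes here are made up, not the poster's): whichever column is wrapped in the conversion function cannot use a plain index on that column, so the plan and the cost change with the direction of the conversion. Comparing the two plans makes the difference visible.

CREATE TABLE acct_num (account_no   NUMBER,       filler VARCHAR2(100));
CREATE TABLE acct_chr (v_account_no VARCHAR2(20), filler VARCHAR2(100));
CREATE INDEX acct_num_ix ON acct_num(account_no);
CREATE INDEX acct_chr_ix ON acct_chr(v_account_no);

EXPLAIN PLAN FOR
SELECT * FROM acct_num a, acct_chr b
WHERE  a.account_no = TO_NUMBER(b.v_account_no);   -- index on account_no stays usable
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR
SELECT * FROM acct_num a, acct_chr b
WHERE  b.v_account_no = TO_CHAR(a.account_no);     -- index on v_account_no stays usable
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);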
View 8 Replies
Nov 6, 2012
I have always been under the impression that we should use the minimum length for a VARCHAR2 field that can store the data we need to manipulate. But recently I was told that it has little impact on performance if we assign a much longer size.
View 13 Replies
Jan 25, 2012
After many tests I can't make an update of the same table work from inside a trigger on that table.
Trying to avoid the mutating table error, I now get:
ORA-00036: maximum number of recursive SQL levels (50) exceeded
Sample Data :
create table test_compound (USERID VARCHAR2(10),APP VARCHAR2(15),LAST_UPDATED_ON TIMESTAMP);
insert into test_compound values ('user1','1',systimestamp);
insert into test_compound values ('user2','2',systimestamp-4);
insert into test_compound values ('user3','3',systimestamp-6);
CREATE OR REPLACE TRIGGER trigger_test
FOR UPDATE ON test_compound
COMPOUND TRIGGER
TYPE t_tab IS TABLE OF VARCHAR2(50);
l_tab t_tab := t_tab();
[code].......
When I execute :
update test_compound
set last_updated_on=systimestamp
where userid='user1' and app='1';
The trigger should update the first row and then all rows from the test_compound table where userid = 'user1'. Maybe the problem is that updating the same table inside the trigger fires the trigger again recursively.
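A hedged sketch of one way out (the guard package is an assumption, not part of the original code): the row-level section only collects the affected USERIDs, the AFTER STATEMENT section does the follow-up update once, and a package-level flag makes the recursive firing exit immediately, which is what breaks the ORA-00036 loop.

CREATE OR REPLACE PACKAGE trg_test_guard AS
  in_trigger BOOLEAN := FALSE;
END trg_test_guard;
/

CREATE OR REPLACE TRIGGER trigger_test
FOR UPDATE ON test_compound
COMPOUND TRIGGER
  TYPE t_tab IS TABLE OF test_compound.userid%TYPE;
  l_tab t_tab := t_tab();

  AFTER EACH ROW IS
  BEGIN
    l_tab.EXTEND;
    l_tab(l_tab.LAST) := :NEW.userid;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    IF NOT trg_test_guard.in_trigger THEN            -- the recursive firing skips this block
      trg_test_guard.in_trigger := TRUE;
      FORALL i IN 1 .. l_tab.COUNT
        UPDATE test_compound
           SET last_updated_on = SYSTIMESTAMP
         WHERE userid = l_tab(i);
      trg_test_guard.in_trigger := FALSE;            -- an exception handler should also reset this
    END IF;
  END AFTER STATEMENT;
END trigger_test;
/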
View 13 Replies
Nov 1, 2011
I have rather large compound triggers that I discovered were not firing this morning, so I created a simpler compound trigger to test:
CREATE OR REPLACE TRIGGER "test"
FOR INSERT OR DELETE OR UPDATE OF KI_NM ON CHEMAXON.CB1ASSAYS
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
[code]...
It's just not firing. The tables are all in the owner's schema (who has DBA rights). My Google-fu is failing me, and I'm not sure how to start troubleshooting a trigger that simply does not fire.
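A hedged starting point for the troubleshooting (names are taken from the quoted CREATE statement; note that the quoted lowercase name "test" is stored case-sensitively in the dictionary): check that the trigger exists, is ENABLED, covers the expected events, and compiled without errors.

SELECT trigger_name, status, trigger_type, triggering_event
FROM   dba_triggers
WHERE  table_owner = 'CHEMAXON'
AND    table_name  = 'CB1ASSAYS';

SELECT name, line, position, text
FROM   dba_errors
WHERE  name = 'test'
AND    type = 'TRIGGER';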
View 2 Replies
Oct 22, 2011
Our team is planning a new architecture for our new project, in which we have to fire a query against multiple databases and then collect all the responses from them (suppose there are 10 databases the query must be fired against).
I searched a lot; the only thing I found is that it is possible through a database link (DBLINK). Is there any other way to fire a query against distributed databases?
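A hedged sketch of the database-link approach mentioned above (link and table names are placeholders): a single statement can collect rows from several remote databases by UNION ALL-ing the same query over different links.

SELECT 'DB1' AS source_db, t.* FROM orders@db1_link t
UNION ALL
SELECT 'DB2', t.* FROM orders@db2_link t
UNION ALL
SELECT 'DB3', t.* FROM orders@db3_link t;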
View 20 Replies
Sep 8, 2009
SQL> select * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
5 rows selected.
I have a problem with views and nested selects which I cannot explain. Here is a trimmed-down version of the research I have done. Notice the following:
1) All code is executed as the same user, CDRNORMALCODE. This user owns all the views and procedural code.
2) All data is owned by a different user, CDRDATA. This user has no views and no code.
My problem is this:
If I reference the table directly with a delete statement that uses a nested select (i.e. an IN clause with a select), the index I expect and want is used. But if I execute the same delete referencing even the simplest of views (select * from <table>) instead of the table itself, then a full table scan is done on the table.
Here is an execution against the table directly (owned by cdrdata). Notice the reference to the table in the table's schema on line 3, and notice the INDEX RANGE SCAN BSNSS_CLSS_CASE_RULE_FK1 at the bottom of the plan.
SQL> show user
USER is "CDRNORMALCODE"
SQL>
SQL> explain plan for
2 delete
[code]...
OK, here is an update. The views I am using normally have INSTEAD OF triggers on them. If I remove the INSTEAD OF trigger the problem looks like it goes away; when I put the trigger back, the problem comes back. But why would an INSTEAD OF trigger change the query plan for a view?
SQL> DELETE FROM PLAN_TABLE;
5 rows deleted.
SQL> explain plan for
2 delete
3 from BSNSS_CLSS_MNR_CASE_RULE_SV
[code]...
View 10 Replies
Dec 10, 2012
One of my clients needs to remove three (of four) CPUs to comply with their licensing agreement with Oracle.
To avoid problems, and to list the possible problems that removing the CPUs can bring, I want to make a survey of the possible impacts, especially on performance, that the removal can cause.
How can I get this information?
View 8 Replies
Mar 8, 2013
I have created a custom table in Inventory, and after registering the table and its columns, I created an event alert on it. The event alert is not firing.
View 1 Replies
Jul 20, 2012
There has been a change in the IT department of our organisation, and we have been asked about the impact of the following changes (a configuration sketch follows the list):
1. Change in the listener port
2. Change in the values of the send and receive buffers in listener.ora
3. Change in the archive size from 40M to 20M
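A hedged listener.ora sketch (host, port, and buffer sizes are placeholders) showing where the first two items live; the main functional impact is that clients must switch to the new port, while the buffer sizes mostly affect throughput of large transfers.

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1522)
               (SEND_BUF_SIZE = 65536)
               (RECV_BUF_SIZE = 65536)))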
View 2 Replies
Mar 3, 2011
I am attempting to read from the maillog of our server, but I wish to make as few changes as possible for fear of blocking other systems access to the file.
I was initially going to run create directory maillogs as '/var/log/maillog' and then drop directory maillogs when I was done, but I found my user does not have the CREATE ANY DIRECTORY privilege.
Rather than compromise the security of the existing database configuration, I thought I would permanently add the maillogs directory to the list of available data directories. Are there any implications for the filesystem if I do this, or can I add it without worrying about side effects?
Understand that I will only be opening the file for READ TEXT access.
Primarily I am concerned that Oracle (in the background) will keep a file pointer open, or something of that nature, that would block other programs from writing to the file even after I close it. I want to make as little impact as possible on the file system.
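A hedged sketch of the read-only access pattern (the grantee is a placeholder, and the directory creation would have to be done by a user with CREATE ANY DIRECTORY, which the post says the poster lacks): the directory object points at the containing directory, the file is opened in 'r' mode, and nothing stays held once UTL_FILE.FCLOSE runs.

CREATE DIRECTORY maillogs AS '/var/log';
GRANT READ ON DIRECTORY maillogs TO maillog_reader;   -- placeholder grantee

DECLARE
  f    UTL_FILE.FILE_TYPE;
  line VARCHAR2(32767);
BEGIN
  f := UTL_FILE.FOPEN('MAILLOGS', 'maillog', 'r');    -- read text only
  LOOP
    UTL_FILE.GET_LINE(f, line);
    NULL;                                             -- process the line here
  END LOOP;
EXCEPTION
  WHEN NO_DATA_FOUND THEN                             -- end of file
    UTL_FILE.FCLOSE(f);
END;
/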
View 4 Replies
Dec 21, 2010
Which config file is used by the OS admin to change the OS version (RHEL 4.0 to RHEL 5.5), and what will be the impact on the Oracle databases?
View 3 Replies
Aug 28, 2013
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - 64bit
PL/SQL Release 11.1.0.6.0
CORE 11.1.0.6.0
TNS for Linux: Version 11.1.0.6.0
NLSRTL Version 11.1.0.6.0
When I checked the status with $ emctl status dbconsole, I got the error "OC4J Configuration issue".
I was told on the forums to drop and recreate the repository. But I noticed that the moment you drop the repository you lose the passwords of SYSMAN and DBSNMP.
emca -deconfig dbcontrol db -repos drop
emca -config dbcontrol db -repos create
I tried the default passwords, but they did not work. It looks like the only option I have is to reset the passwords, but my main concern is whether doing so impacts the DB in general, or whether it is a common thing. Or is there any other way to get OEM back (I have already dropped the repos)?
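A hedged sketch of the password-reset step (the new passwords are placeholders): resetting SYSMAN and DBSNMP only touches those monitoring accounts, after which the emca repository create can be re-run with the new values.

ALTER USER sysman IDENTIFIED BY "NewSysman#1" ACCOUNT UNLOCK;
ALTER USER dbsnmp IDENTIFIED BY "NewDbsnmp#1" ACCOUNT UNLOCK;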
View 2 Replies
Oct 21, 2012
Is there any impact if I do the following:
ALTER TABLE MY_TABLE TRUNCATE PARTITION P1 UPDATE GLOBAL INDEXES;
Will there be any lock on the table during this operation? Will DML operations work without any issue or not?
View 10 Replies
Feb 29, 2012
I am looking at a performance issue at the moment and trying to replicate it on a test system. I am initially looking at the impact of up-to-date statistics on the main schema's objects.
For this I wanted to:
first run the batch with whatever stats are present in the database, flash the DB back to before the batch, gather stats, then re-run the batch with updated stats and compare the results.
However, I inadvertently ran the stats job before running the load the first time! I have the SCN from when the environment was set up like production (i.e. before the stats were gathered), so am I correct in saying that if I flash back to this point then the stats will be "old" and I can just run the batch then? I know I can verify this once I flash the database back by looking at LAST_ANALYZED on the tables etc., but it would be good to know beforehand, as it is a 12-hour batch.
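A hedged verification sketch (the schema name is a placeholder): optimizer statistics live in the data dictionary, so FLASHBACK DATABASE takes them back along with everything else, and LAST_ANALYZED shows whether that happened.

SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'APP_SCHEMA'
ORDER  BY last_analyzed DESC NULLS LAST;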
View 1 Replies
May 19, 2007
I am using Oracle 9i and Unix on my system and am trying to execute a UNIX shell command through an external procedure in C. I created a shared library (libextproc.so) for the following function:
int sysrun(char *command)
{
return system(command);
}
This function runs fine when called through a driver function in C, meaning that the shared library is fine. In PL/SQL, I have used the following method to invoke a UNIX command:
create or replace library shell_lib as '/home/ECETRAonsite/oracle/OraHome1/lib/libextproc.so';
/
create or replace function sysrun (syscomm in varchar2)
return binary_integer
as language C
name "sysrun"
library shell_lib
parameters(syscomm string);
/
Now when I call this PL/SQL function to invoke the command, it runs successfully but does not create the file.
SQL>
1 declare
2 rc number;
3 begin
4 rc := sysrun('/bin/touch /home/ECETRAonsite/oracle/OraHome1/test/sach');
5 dbms_output.put_line('Return Code='||rc);
6* end;
SQL> /
Return Code=0
PL/SQL procedure successfully completed.
I have verified that the path for 'touch' is correct. Following are my configuration files.
listener.ora
-------------
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
[code]...
View 1 Replies
Oct 28, 2009
I am using prebuilt MVs to perform replication of about 300-400 master tables from one database to another. I am wondering about the impact on triggers in general replication.
Is there a general rule to enable/disable a trigger before a refresh?
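A hedged sketch of the usual pattern (MV and trigger names are placeholders): a trigger on the prebuilt table is disabled around the refresh when the refreshed rows should not be re-processed, and left enabled when they should.

ALTER TRIGGER app.orders_aiu_trg DISABLE;
BEGIN
  DBMS_MVIEW.REFRESH('APP.ORDERS_MV', method => 'F');   -- fast (incremental) refresh
END;
/
ALTER TRIGGER app.orders_aiu_trg ENABLE;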
View 1 Replies
Jul 1, 2011
How can "call one trigger of item in trigger of form"
View 5 Replies
Sep 30, 2010
How does the length of a column affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
Table key consist of(id, job, time)
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if, in the "job" column, I stored not names but numbers identifying the job names?
E.g. I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
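A hedged sketch of the narrower-key variant (the lookup table is an assumption, not in the original post): the job names move into a small code table and the IOT key stores only the numeric code, which shortens every index entry in the IOT.

CREATE TABLE job_codes (
  job_id   NUMBER       PRIMARY KEY,
  job_name VARCHAR2(20) NOT NULL UNIQUE
);

CREATE TABLE emp_iot (
  id     NUMBER,
  job_id NUMBER REFERENCES job_codes,
  time   DATE,
  plan   NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;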
View 24 Replies
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can reduce performance in query times: the blocks are distributed over many extents and the scan process takes a long time because the Oracle engine has to locate the address of the next extent.
I want to know if there is any system view in which you can check whether your table or index has high fragmentation. If needed I will re-create, move, or rebuild the table or index, but before that I want to know whether the degree of fragmentation is high.
Any useful script or query to do this, or any interesting Oracle system view?
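A hedged sketch (the schema name is a placeholder, and it assumes statistics are reasonably current): comparing the space allocated to a table (DBA_SEGMENTS) with the space its rows should need (DBA_TABLES statistics) gives a rough measure of how much slack the segment carries.

SELECT t.owner,
       t.table_name,
       ROUND(s.bytes / 1024 / 1024)                      AS allocated_mb,
       ROUND(t.num_rows * t.avg_row_len / 1024 / 1024)   AS expected_mb
FROM   dba_tables   t
JOIN   dba_segments s
       ON  s.owner        = t.owner
       AND s.segment_name = t.table_name
       AND s.segment_type = 'TABLE'
WHERE  t.owner = 'APP_SCHEMA'
ORDER  BY s.bytes DESC;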
View 2 Replies
Jun 16, 2011
How many records could I have in a single table without performance degradation on Standard Edition (so without partitioning), on a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Is 300 million rows in only one table, with 500K transactions per day, too much?
It is a simple database with a simple schema.
How many records begin to be too many?
View 2 Replies
Nov 15, 2010
Testing our 9i to 11g upgrade, we imported the entire DB into the new machine. We found that certain procedures are really suffering performance problems. BUT we also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database exists is slightly different from the source, but it is not as though we have this problem with every procedure; it is only a couple.
Is there any possible reason that we would have to re-install a procedure to correct a performance problem?
View 13 Replies
Apr 12, 2013
I need to check the package performance and improve it.
1. How do I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I have observed that the delete takes a long time to remove all the rows in the table (about 7,000,000 records). This table is like a staging table: the data needs to be cleaned out daily before new data is inserted into it. What can I use instead of DELETE? (A sketch follows below.)
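A hedged sketch for both points (package, procedure, and table names are placeholders, and the DBMS_PROFILER tables must have been created with proftab.sql): TRUNCATE replaces the whole-table DELETE on a staging table, and DBMS_PROFILER records per-line execution times inside the package.

TRUNCATE TABLE staging_table;                      -- instead of DELETE FROM staging_table

DECLARE
  rc BINARY_INTEGER;
BEGIN
  rc := DBMS_PROFILER.START_PROFILER('staging load run');
  my_pkg.load_staging;                             -- the package call being measured
  rc := DBMS_PROFILER.STOP_PROFILER;
END;
/
-- per-line timings then sit in PLSQL_PROFILER_UNITS and PLSQL_PROFILER_DATA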
View 13 Replies
Aug 9, 2010
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment and, on achieving the desired execution plan, adjust the 'statistics' so that plan is followed without hints.
Q1. If this is true, what statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
emp.empid, emp.deptno and dept.deptno have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions: With respect to the above query,
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I could use the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan than hints.
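A hedged sketch of the statistics-adjusting idea (the numbers are purely illustrative): DBMS_STATS.SET_TABLE_STATS changes the optimizer's row and block estimates, which is the lever that shifts join order and join method without hints; the hint-based equivalent is shown for comparison.

BEGIN
  -- make DEPT look cheap to drive from and EMP expensive to probe
  DBMS_STATS.SET_TABLE_STATS(ownname => USER, tabname => 'DEPT',
                             numrows => 4,      numblks => 1);
  DBMS_STATS.SET_TABLE_STATS(ownname => USER, tabname => 'EMP',
                             numrows => 100000, numblks => 5000);
END;
/
-- the hint-based equivalent, for comparison:
-- SELECT /*+ LEADING(d) USE_HASH(e) */ e.empid, e.ename, d.dname
-- FROM emp e, dept d WHERE e.deptno = d.deptno;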
View 6 Replies
Dec 6, 2011
I have an issue with export (expdp).
When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
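A hedged first step before touching any parameters (no names assumed beyond the standard views): while the export runs, these views show how many workers are active and what they are doing, which usually explains the load better than memory settings.

SELECT owner_name, job_name, state, degree
FROM   dba_datapump_jobs;

SELECT sid, opname, sofar, totalwork, units
FROM   v$session_longops
WHERE  opname LIKE '%EXPORT%'
AND    sofar <> totalwork;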
View 1 Replies
Oct 17, 2011
The following query gets its input parameters from the front-end application, which users query to get reports. There are many drop-down boxes like LOB, FAMILY, BRAND, etc., and the user may or may not select values from them.
If the user selects one or more values (against each drop-down box), the query has to fetch all matching rows from the DB. If the user does not select any values, it has to fetch all the records; in this case the application sends the value 'DEFAULT' (which is not a value in the DB) so that the DB will fetch all the records.
To get this I wrote a query like the one below using DECODE, which a colleague suggested will hamper performance. In the query below, all the V_ variables are defined in the procedure and hold the values selected by the user as comma-separated strings; here V_SELLOB is such a variable and LOB_DESC is a column in the DB.
DECODE (V_SELLOB, 'DEFAULT', V_SELLOB, LOB_DESC) IN
OPEN v_refcursor FOR
SELECT /*+ FULL(a) PARALLEL(a, 5) */
*
FROM items a
WHERE a.sku_status = 'A'
[code]...
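A hedged alternative sketch (names follow the post, and for simplicity it treats V_SELLOB as a single value; a real comma-separated list would first need splitting into rows): short-circuiting on the 'DEFAULT' marker keeps the DECODE off the LOB_DESC column, so an index on that column remains usable when the user did pick values.

OPEN v_refcursor FOR
  SELECT *
  FROM   items a
  WHERE  a.sku_status = 'A'
  AND    (v_sellob = 'DEFAULT' OR a.lob_desc = v_sellob);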
View 9 Replies