SQL & PL/SQL :: How To See A Table's Transactions For The Day Or Interval
Jun 12, 2012
I have a problem: we had some data in a table, for example 7500 rows in a table named table1, up to 11:00 am today, but after 11:25 am there are only 5500 rows in that table.
The table can be accessed by many users here. We don't know when the delete happened in that table. Is there any query to find the transaction log of this particular table?
The deletion should have happened between 11:00 am and 11:30 am. We have retrieved the data using a timestamp query, but we need to know when the delete was issued and by which user.
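A hedged sketch of how this is usually tracked down, assuming undo retention still covers the 11:00-11:30 window: Flashback Version Query exposes each row version along with the transaction that made it, and the XID can then be looked up to find the user (the timestamps below come from the post; the XID literal is hypothetical):

-- Find the delete operations and their transaction IDs
SELECT versions_xid, versions_operation, versions_starttime, versions_endtime
FROM   table1
       VERSIONS BETWEEN TIMESTAMP
         TO_TIMESTAMP('2012-06-12 11:00:00', 'YYYY-MM-DD HH24:MI:SS')
         AND TO_TIMESTAMP('2012-06-12 11:30:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE  versions_operation = 'D';

-- Look up who issued the transaction and when (substitute the versions_xid
-- returned above for the hypothetical value here)
SELECT logon_user, start_timestamp, operation, undo_sql
FROM   flashback_transaction_query
WHERE  xid = HEXTORAW('0600270085010000');

If the undo has already aged out, mining the archived logs with DBMS_LOGMNR is the usual fallback.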
declare
  v_amount       NUMBER;
  v_paymentno    INTEGER := &sv_paymentno;
  v_playerno     INTEGER;
  v_payment_date DATE := SYSDATE;
begin
  select 500 into v_amount from dual;
  select 44 into v_playerno from dual;
  insert into penalties values (v_paymentno, v_playerno, v_payment_date, v_amount);
end;
/
I forgot to add the COMMIT statement and now I have a hung transaction with dirty data (v_paymentno = 27). Is there a way to commit or roll back that transaction?
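A minimal sketch of the usual resolution: the uncommitted insert exists only inside the session that ran the block, so issuing COMMIT or ROLLBACK from that same session resolves it, and Oracle rolls it back automatically if the session ends. Whether a transaction is still open can be confirmed with the v$ views:

-- Sessions with open transactions (requires access to the v$ views)
SELECT s.sid, s.serial#, s.username, t.start_time
FROM   v$transaction t
       JOIN v$session s ON s.taddr = t.addr;

ROLLBACK;   -- discards the pending row with v_paymentno = 27
-- or COMMIT; to make it permanent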
We found the following error in the alert log of our Oracle 10.2.0.5 DB:

====================================
..
Wed Jan 30 16:45:01 EAT 2013
DISTRIB TRAN bea1.67AA54355C4A74ECDEE0
is local tran 6.42.332492 (hex=06.2a.512cc)
insert pending prepared tran, scn=8151148567799 (hex=769.d6509cf7)
Wed Jan 30 16:45:02 EAT 2013
Errors in file /oradata/sfapdb/bdump/sfapdb_reco_2739.trc:
ORA-24756: transaction does not exist
Wed Jan 30 16:45:02 EAT 2013
Errors in file /oradata/sfapdb/bdump/sfapdb_reco_2739.trc:
ORA-24756: transaction does not exist
..
====================================
There is no useful information in the trace file, as shown below:

====================================
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
ORACLE_HOME = /ap/oracle10
System name: HP-UX
Node name: scvap2
Release: B.11.23
Version: U
Machine: 9000/800
Instance name: sfapdb
Redo thread mounted by this instance: 1
Oracle process number: 18
Unix process pid: 2739, image: oracle@scvap2 (RECO)

*** SERVICE NAME:(SYS$BACKGROUND) 2013-01-30 16:45:01.941
*** SESSION ID:(1749.1) 2013-01-30 16:45:01.941
*** 2013-01-30 16:45:01.940
ERROR, tran=6.42.332492, ose=0:
ORA-24756: transaction does not exist
*** 2013-01-30 16:45:02.059
ERROR, tran=6.42.332492, session#=1, ose=0:
ORA-24756: transaction does not exist
====================================
I also found some records (trans_id = "6.42.332492") in SYS.PENDING_TRANS$ / SYS.PENDING_SESSION$ / DBA_2PC_PENDING with "prepared" status.
This transaction was launched from a WebLogic Server via JDBC. Since it is abnormal, I have no choice but to force commit/purge this transaction. Is this a bug in Oracle DB, or a WebLogic coding problem?
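A hedged sketch of the usual cleanup for an in-doubt distributed transaction, using the local tran ID reported in the alert log (run as a DBA; this is standard DBA_2PC_PENDING housekeeping, not a verdict on whether the root cause lies in Oracle or WebLogic):

-- Confirm the in-doubt entry and its state
SELECT local_tran_id, state FROM dba_2pc_pending;

-- Resolve it one way or the other
ROLLBACK FORCE '6.42.332492';
-- or: COMMIT FORCE '6.42.332492';

-- If RECO keeps raising ORA-24756 because the transaction no longer exists,
-- purge the orphaned dictionary entry:
EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('6.42.332492');
COMMIT;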
I need to create a report that shows all activity for the day the report is run. The report will be run or auto-refreshed throughout the day (at 10am, 11am, 2pm, 4pm, etc.). I just need to display quantities received, shipped, etc. for that same day. Since the report will not be run at a fixed time, I can't use sysdate - .5, for example.
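A minimal sketch of the usual approach: TRUNC(SYSDATE) is midnight of the current day, so a half-open range on it returns today's rows no matter when the report runs (the table and column names here are hypothetical):

SELECT SUM(qty_received) AS received_today,
       SUM(qty_shipped)  AS shipped_today
FROM   warehouse_activity
WHERE  activity_date >= TRUNC(SYSDATE)
AND    activity_date <  TRUNC(SYSDATE) + 1;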
I have a problem with my replication using Oracle 9i. It does not push transactions automatically when the refresh time comes, but it works fine when pushed manually. I have two sites:
1. Master
   a. Master group
2. Materialized view site
   a. Materialized view group (asynchronous)
   b. Materialized view with refresh set to occur automatically on a future date, every 5 minutes
   c. Refresh type FORCE
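Since automatic pushes and refreshes in 9i replication are driven by the job queue, a hedged first check at the materialized view site is that job-queue processes are enabled and the refresh job is not broken:

SHOW PARAMETER job_queue_processes    -- must be greater than 0

SELECT job, what, broken, next_date
FROM   dba_jobs;                      -- look for the push/refresh job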
I'm running 11.2.0. I am looking at tuning a SQL statement, and the question was brought up as to the maximum inserts per transaction in 11g, and whether it exceeds 1000. I haven't found a solid answer yet, but I thought that 10g was higher than 1000.
My first thought was to implement a commit loop on every 1000 rows, as that is how things were handled in the past. But I found an article that talks about redo logs and performance and how the commit loop is a horrible practice.
What I haven't found is the better methodology for doing this. My scenario could encounter as many as 20,000 inserts at a time.
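A minimal sketch of the bulk alternative, assuming hypothetical source_table and target_table names: collect the rows, insert them with FORALL, and commit once at the end rather than every 1000 rows:

DECLARE
  TYPE t_rows IS TABLE OF target_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  SELECT * BULK COLLECT INTO l_rows
  FROM   source_table;

  FORALL i IN 1 .. l_rows.COUNT
    INSERT INTO target_table VALUES l_rows(i);

  COMMIT;   -- a single commit for the whole batch, e.g. all 20,000 rows
END;
/

At 20,000 rows the collection comfortably fits in session memory; for much larger volumes, a BULK COLLECT ... LIMIT loop bounds the memory instead.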
I have some transactions in my table with date and time.
I want to pass from-date, to-date and from-time, to-time as parameters.
When I pass one date and two time parameters, it works fine. But when I try to pass a from-date and a to-date (two date parameters) and two time parameters, it does not work accurately.
E.g. if I pass 05-Aug-2010 and 06-Aug-2010 with times 08:00:00 and 14:00:00, it only retrieves data of both dates within that time range on each day. However, I need the transactions from 05-Aug-2010 08:00:00 through 06-Aug-2010 14:00:00.
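A minimal sketch of the usual fix (parameter and column names here are hypothetical): combine each date with its time into a single boundary value, then compare the full timestamps instead of testing dates and times separately:

SELECT *
FROM   transactions
WHERE  trans_date >= TO_DATE(:from_date || ' ' || :from_time, 'DD-MON-YYYY HH24:MI:SS')
AND    trans_date <= TO_DATE(:to_date   || ' ' || :to_time,   'DD-MON-YYYY HH24:MI:SS');

With 05-Aug-2010/08:00:00 and 06-Aug-2010/14:00:00 this yields one continuous range from 05-Aug-2010 08:00:00 to 06-Aug-2010 14:00:00.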
Two people on separate machines each executed a transaction through the same form, processed through a WHEN-BUTTON-PRESSED trigger. The first session processed correctly. For the second user, the session seems to have picked up the non-PACKAGE variables of the first session in what was passed through to the database. Values associated with the 2nd session's PACKAGE-based types appear to have passed through correctly.
Hence, the second user's transaction processed with a combination of values from the two sessions: the second user's PACKAGE-based variables merged with the first user's non-PACKAGE variables. There is no use of context variables. There are some global values, but none of them are used in this trigger. The values in question, which appear to have passed from the first session to the second, are based on contextual LOV selection: after selecting a transaction type, users are prompted to select from an LOV specific to that type, with the property "Validate from List" set to Yes.
The 2nd session's PACKAGE-based values do not correlate to the non-PACKAGE values, leading us to conclude that the latter somehow came through from the first session. We are running IAS 10g R2 on Oracle 10gR2 (10.2.0.3). Each user session is created as the user logs into the application, so logically there should not be any overlap between concurrent sessions of different users.
I did have a look at the code and suspect nothing is wrong with the source code, since the system has been in use for a few years now and the problem has only occurred a couple of times in the last few months. Also noticeable is that the issue is not reproducible. I believe something is going wrong in the middle tier or with session management. Are there any known issues with session management in the Forms server, or something similar?
I have a table that cannot be changed, with a field called transaction_reference in the transactions table. This field contains any number of values from a look-up table called codes.
The table codes contains 'AA', 'BB', 'CC'.
A typical transaction_reference field may look like 'CC BB' or 'AA' or 'AA CC' or 'AA CC BB' - any number, any order. My goal is to get a count of records grouped by another field from the transactions table.
Transactions table example:

transaction_id | transaction_reference | family
-----------------------------------------------
1              | AA BB                 | foo
2              | BB CC                 | bar
3              | BB                    | hello
4              | AA CC BB              | foo
5              | BB AA                 | bar
So the results should look like:
family | code | count
----------------------
foo    | AA   | 2
foo    | BB   | 2
foo    | CC   | 1
bar    | AA   | 1
bar    | BB   | 2
bar    | CC   | 1
hello  | AA   | 0
hello  | BB   | 1
hello  | CC   | 0
If the counts of 0 (like the third-to-last and last lines above) don't show up, I'm OK with that. I put together an explode function like this one here, but I'm really not sure where to go from here. I can split the transaction_reference, but I'm not sure what to compare it to or how.
I realize that a field in the transactions table for AA, BB, and CC would be ideal, but I can't do that... the powers that be won't let me change the table.
For each exploded segment from transaction_reference:
    look for it in the codes table
    if it exists, add 1 to the count
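A hedged sketch of that logic in plain SQL, avoiding the string split entirely: pad the reference with spaces and join the codes table with LIKE (zero counts are dropped, which the post says is acceptable):

SELECT t.family, c.code, COUNT(*) AS cnt
FROM   transactions t
       JOIN codes c
         ON ' ' || t.transaction_reference || ' ' LIKE '% ' || c.code || ' %'
GROUP BY t.family, c.code
ORDER BY t.family, c.code;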
I am trying to find a way to tune and optimize server performance in the following situation. There are hundreds of sessions inserting records into one table. The sessions are communication threads in a Java application; each thread receives messages that are to be stored in the table. Each message must be committed, and then an ACK is sent to the remote client. Two problems arise, of course: contention on the table's ITLs, and lots of very small transactions. I can adjust the Java application, but can't do much about the design.
I was thinking about some "caching": if the messages were stored in memory and bulk-inserted into the database by a single thread, the performance would be much higher. However, there would be possible loss of data - a message could be lost from the memory cache after the client has already received the ACK.
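A hedged middle ground, assuming a hypothetical messages table: 10g's asynchronous commit keeps one commit per message (so the ACK protocol is unchanged) while removing the log-file-sync wait, with the same small durability risk as the memory-cache idea; raising INITRANS addresses the ITL pressure:

-- Per-message insert with an asynchronous commit
INSERT INTO messages (client_id, payload) VALUES (:client_id, :payload);
COMMIT WRITE BATCH NOWAIT;   -- redo is written in the background

-- More ITL slots for highly concurrent blocks (affects new blocks only)
ALTER TABLE messages INITRANS 16;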
I have created a job using DBMS_SCHEDULER and I want it to run every 30 seconds:
begin
  dbms_scheduler.create_job(job_name => 'jobu',
                            job_type => 'PLSQL_BLOCK',
[Code]....
My question is how I can take the value 30 from a configuration table. Let's say I have a query like select value from config_table where property = 'job_interval' that returns the number 30. How can I set this value to be the repeat interval for my job?
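A minimal sketch: read the interval first, then build the calendar string with it (config_table and its query come from the post; the job action is a hypothetical placeholder):

DECLARE
  l_interval NUMBER;
BEGIN
  SELECT value
  INTO   l_interval
  FROM   config_table
  WHERE  property = 'job_interval';

  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'jobu',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin null; end;',   -- hypothetical body
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=SECONDLY;INTERVAL=' || l_interval,
    enabled         => TRUE);
END;
/

Note that repeat_interval is fixed at creation time; to follow later changes to config_table, the job would have to be altered with DBMS_SCHEDULER.SET_ATTRIBUTE, or the job body could reschedule itself.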
I have to divide the day into 3 groups and take a count: 7am-12pm, 12pm-7pm, and 7pm-7am.
It looks complicated to me, because the IN time and OUT time have to be considered together.
Suppose one person is IN at 6am and OUT at 8pm: he will be counted in 7am-noon, noon-7pm, and 7pm-7am - 1, 1, 1 across the 3 intervals. Another scenario is if a person comes in at 2am in the morning; that has to go to the previous day's count. Is it possible to do this in a query?
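A hedged sketch of the overlap test for a single day, assuming hypothetical in_time/out_time columns on an attendance table: a person falls in an interval when in_time is before the interval's end and out_time is after its start (the 2am previous-day case would shift the whole window back one day):

SELECT COUNT(CASE WHEN in_time  < TRUNC(SYSDATE) + 12/24
                   AND out_time > TRUNC(SYSDATE) +  7/24 THEN 1 END) AS grp_7am_12pm,
       COUNT(CASE WHEN in_time  < TRUNC(SYSDATE) + 19/24
                   AND out_time > TRUNC(SYSDATE) + 12/24 THEN 1 END) AS grp_12pm_7pm,
       COUNT(CASE WHEN in_time  < TRUNC(SYSDATE) + 31/24
                   AND out_time > TRUNC(SYSDATE) + 19/24 THEN 1 END) AS grp_7pm_7am
FROM   attendance;

The 6am-to-8pm example lands in all three groups (1, 1, 1), as described above.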
Interval partitioning: I keep getting the below error on a table. A more discerning eye is needed.
PARTITION DEC_2012 VALUES LESS THAN (TO_DATE('01-01-2013', 'DD-MM-YYYY')),
*
ERROR at line 26:
ORA-14037: partition bound of partition "DEC_2012" is too high
CREATE TABLE STATISTICS_PART
( ID_KEY NUMBER(10) NOT NULL,
  LUD    DATE DEFAULT sysdate,
[code]....
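For comparison, a hedged sketch of a bound list that does not raise ORA-14037: the error means a partition's bound does not collate higher than the previous partition's, so each named partition must be strictly above the one before it (column list abbreviated to what the post shows; the demo table name is hypothetical):

CREATE TABLE statistics_part_demo
( id_key NUMBER(10) NOT NULL,
  lud    DATE DEFAULT sysdate
)
PARTITION BY RANGE (lud)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION nov_2012 VALUES LESS THAN (TO_DATE('01-12-2012', 'DD-MM-YYYY')),
  PARTITION dec_2012 VALUES LESS THAN (TO_DATE('01-01-2013', 'DD-MM-YYYY'))
);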
I have one page with an interactive report (Page 10) and another with a form to update each entry (Page 20).
The table that these pages refer to has a column called MY_INTERVAL_COL of type INTERVAL DAY TO SECOND.
I can successfully display the contents of MY_INTERVAL_COL by extracting days, hours, and minutes on Page 10:
TO_CHAR(EXTRACT(DAY    FROM MY_INTERVAL_COL)) AS MY_DAYS,
TO_CHAR(EXTRACT(HOUR   FROM MY_INTERVAL_COL)) AS MY_HOURS,
TO_CHAR(EXTRACT(MINUTE FROM MY_INTERVAL_COL)) AS MY_MINUTES,
However, the Automated Row Fetch process appears to ignore columns of type INTERVAL DAY TO SECOND on Page 20.
:P20_INTERVAL (database column MY_INTERVAL_COL is used)
:P20_DAYS (trying to convert :P20_INTERVAL using TO_CHAR(EXTRACT(DAY...
:P20_HOURS (and so on...)
:P20_MINUTES
Does Automated Row Fetch ignore columns of type INTERVAL DAY TO SECOND?
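If it does, a hedged workaround sketch is to skip Automated Row Fetch for that one column and rebuild the interval from the page items in a custom update process (my_table, its id column, and :P20_ID are hypothetical; NUMTODSINTERVAL is standard SQL):

UPDATE my_table
SET    my_interval_col = NUMTODSINTERVAL(TO_NUMBER(:P20_DAYS),    'DAY')
                       + NUMTODSINTERVAL(TO_NUMBER(:P20_HOURS),   'HOUR')
                       + NUMTODSINTERVAL(TO_NUMBER(:P20_MINUTES), 'MINUTE')
WHERE  id = :P20_ID;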
Other than the obvious benefit to me - that interval partitioning creates partitions as needed - is there any performance benefit to using interval partitions vs. date range partitions?
One drawback for me is that developers access the partition name in some of their queries, so if I use date range partitioning, this will not break their code. I could not find a way to assign a name to a partition when using intervals; is the name always system-generated, or can it be overridden?
I am running Oracle 11.1.0.7, soon to be running on 11.2.0.0.
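On the naming question, a hedged sketch: automatically created interval partitions get system-generated SYS_P% names, but they can be renamed after they appear, which keeps name-based queries working (the table and partition names here are hypothetical):

SELECT partition_name, high_value
FROM   user_tab_partitions
WHERE  table_name = 'MY_RANGE_TABLE';

ALTER TABLE my_range_table RENAME PARTITION sys_p41 TO p_2011_01;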
I'm having trouble using interval data types in a procedure. I need to pass a number of minutes as a parameter, and then use them for arithmetic on a timestamp with time zone. This works no problem:
set serveroutput on

create or replace procedure tstz(mins varchar) as
begin
  dbms_output.put_line(systimestamp - interval '10' minute);
end;
[code]...
I've tried a few variations of data type and type casting for the parameter, but I can't make it work.
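A minimal sketch of one approach that does work: take the minutes as a NUMBER and convert with NUMTODSINTERVAL, since the INTERVAL '10' MINUTE form only accepts a literal, not a variable:

set serveroutput on

create or replace procedure tstz(mins in number) as
begin
  -- NUMTODSINTERVAL turns the parameter into an INTERVAL DAY TO SECOND
  dbms_output.put_line(systimestamp - numtodsinterval(mins, 'MINUTE'));
end;
/

exec tstz(10)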