I need to create a report that shows all activity for the day on which the report is run. The report will be run or auto-refreshed throughout the day (at 10am, 11am, 2pm, 4pm, etc.). I just need to display quantities received, shipped, and so on for that same day. Since the report will not be run at a fixed time, I can't use something like sysdate - .5.
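A common approach is to compare against TRUNC(SYSDATE), which strips the time portion and so always means "midnight today" no matter when the report runs. A minimal sketch, assuming a hypothetical activity table with activity_date, qty_received, and qty_shipped columns:

SELECT SUM(qty_received) AS received,
       SUM(qty_shipped)  AS shipped
FROM   activity                              -- hypothetical table and columns
WHERE  activity_date >= TRUNC(SYSDATE)       -- midnight today
AND    activity_date <  TRUNC(SYSDATE) + 1;  -- midnight tomorrow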
I am trying to get the last 7 days of records counting from today's date. This query runs every night and I always want the last 7 days. For example, today is 3/13/2013, so I want records from 3/7/2013 to 3/13/2013; tomorrow it would be 3/8/2013 to 3/14/2013.
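The same TRUNC(SYSDATE) technique works here; subtracting 6 gives a 7-day window inclusive of today. A sketch, with record_date as a hypothetical column name:

WHERE record_date >= TRUNC(SYSDATE) - 6    -- e.g. 3/7/2013 00:00
AND   record_date <  TRUNC(SYSDATE) + 1    -- up to the end of 3/13/2013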
I am using a calendar to display some data, but there seems to be a problem with the focus for the current date. For instance: 25 Sep 2012 is today's date, but the calendar focus somehow remains on 24 Sep 2012, and I am failing to understand where to set this right. Some users see the focus on the correct date while others see it on the previous date.
Also, in another tab of the application I am using an Interactive Report which displays data based on whatever calendar entries were made earlier. Here I have two regions, Today and Tomorrow, which pick their values from the report and display them in the respective region, but here too Today picks up yesterday's date and Tomorrow picks up today's.
I am confused as to where to set this right, because, as I said, some users (based in AU) are seeing these incorrect values while users located in another region see them correctly.
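The symptom (only AU users affected) suggests a time-zone mismatch between the database server and the user's session rather than a calendar bug. A quick check, run from an affected session, is to compare the server clock with the session-aware date; if the calendar and report are built on SYSDATE, they reflect the database server's clock, not the user's:

SELECT sysdate         AS db_server_date,
       current_date    AS session_date,
       sessiontimezone AS session_tz,
       dbtimezone      AS db_tz
FROM   dual;

If db_server_date and session_date differ, switching the calendar/report logic from SYSDATE to CURRENT_DATE (or setting the session time zone appropriately) is the usual fix. This is an assumption based on the symptom described, not a confirmed diagnosis.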
declare
  v_amount       NUMBER;
  v_paymentno    INTEGER := &sv_paymentno;
  v_playerno     INTEGER;
  v_payment_date DATE := SYSDATE;
begin
  select 500 into v_amount from dual;
  select 44 into v_playerno from dual;
  insert into penalties values (v_paymentno, v_playerno, v_payment_date, v_amount);
end;
I forgot to add the commit statement, and now I have a hung transaction with dirty data for v_paymentno 27. Is there a way to commit or roll back that transaction?
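The uncommitted insert is only visible to, and only controllable from, the session that made it; other sessions never see "dirty" data in Oracle. If that session is still open, simply issue COMMIT or ROLLBACK there. If it has been abandoned, a DBA can locate and kill the session, which forces a rollback. A sketch using the standard v$ views:

-- find sessions with an open transaction
SELECT s.sid, s.serial#, s.username, t.start_time
FROM   v$transaction t
JOIN   v$session     s ON s.taddr = t.addr;

-- killing the session rolls its open transaction back
ALTER SYSTEM KILL SESSION 'sid,serial#';   -- substitute the values found above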
We found an error in the alert log of our Oracle 10.2.0.5 DB:

====================================
..
Wed Jan 30 16:45:01 EAT 2013
DISTRIB TRAN bea1.67AA54355C4A74ECDEE0
is local tran 6.42.332492 (hex=06.2a.512cc)
insert pending prepared tran, scn=8151148567799 (hex=769.d6509cf7)
Wed Jan 30 16:45:02 EAT 2013
Errors in file /oradata/sfapdb/bdump/sfapdb_reco_2739.trc:
ORA-24756: transaction does not exist
Wed Jan 30 16:45:02 EAT 2013
Errors in file /oradata/sfapdb/bdump/sfapdb_reco_2739.trc:
ORA-24756: transaction does not exist
..
====================================
There is no useful information in the trace log, as shown below:

====================================
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Data Mining and Real Application Testing options
ORACLE_HOME = /ap/oracle10
System name: HP-UX
Node name:   scvap2
Release:     B.11.23
Version:     U
Machine:     9000/800
Instance name: sfapdb
Redo thread mounted by this instance: 1
Oracle process number: 18
Unix process pid: 2739, image: oracle@scvap2 (RECO)

*** SERVICE NAME:(SYS$BACKGROUND) 2013-01-30 16:45:01.941
*** SESSION ID:(1749.1) 2013-01-30 16:45:01.941
*** 2013-01-30 16:45:01.940
ERROR, tran=6.42.332492, ose=0:
ORA-24756: transaction does not exist
*** 2013-01-30 16:45:02.059
ERROR, tran=6.42.332492, session#=1, ose=0:
ORA-24756: transaction does not exist
====================================
I also found some records (trans_id = '6.42.332492') in SYS.PENDING_TRANS$, SYS.PENDING_SESSION$, and dba_2pc_pending with "prepared" status.
This transaction was launched from a WebLogic Server via JDBC. Since it is in an abnormal state, I have no choice but to force commit/purge this transaction. Is this a bug in the Oracle DB, or a WebLogic coding problem?
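This pattern (an in-doubt XA transaction that RECO can no longer resolve, typically because WebLogic's transaction manager lost track of it) usually ends with a manual force-and-purge rather than pointing to a database bug. A sketch of the standard cleanup sequence, using the local transaction id from the log; whether to commit or roll back depends on what the application side expects:

-- confirm the in-doubt transaction
SELECT local_tran_id, state FROM dba_2pc_pending;

-- force it one way or the other
ROLLBACK FORCE '6.42.332492';    -- or: COMMIT FORCE '6.42.332492';

-- if the entry remains in the pending tables afterwards, purge it
EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('6.42.332492');
COMMIT;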
I have a problem: we had some data in a table, for example 7500 rows in a table called table1, up to 11:00 am today, but after 11:25 am there are only 5500 rows in that table.
The table can be accessed by many users here, and we don't know when the delete happened. Is there any query to find the transaction log of this particular table?
The deletion should have happened between 11:00 am and 11:30 am. We have already retrieved the data using a timestamp (flashback) query, but we still need to know when the delete was issued and by which user.
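If undo retention still covers the window, a Flashback Versions Query can identify the transaction that performed the deletes, and FLASHBACK_TRANSACTION_QUERY can then show which user issued it; for anything older you would need LogMiner or auditing. A sketch (substitute the actual date):

-- find the transaction(s) that deleted rows in the window
SELECT DISTINCT versions_xid, versions_starttime
FROM   table1
       VERSIONS BETWEEN TIMESTAMP
         TO_TIMESTAMP('2013-03-13 11:00', 'YYYY-MM-DD HH24:MI')
         AND TO_TIMESTAMP('2013-03-13 11:30', 'YYYY-MM-DD HH24:MI')
WHERE  versions_operation = 'D';

-- look up who ran that transaction and when
SELECT logon_user, start_timestamp, commit_timestamp
FROM   flashback_transaction_query
WHERE  xid = HEXTORAW('...');   -- the versions_xid value found above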
I have a problem with my replication using Oracle 9i. It does not push transactions automatically when the refresh time comes, but it works fine when pushed manually. I have two sites:
1- Master
   a- Master group
2- Materialized view site
   a- Materialized view group (asynchronous)
   b- Materialized view with refresh set to occur automatically on a future date, every 5 minutes
   c- Refresh type FORCE
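When a scheduled refresh silently stops firing but a manual push works, the usual suspect is the job queue: the automatic refresh runs as a DBMS_JOB entry that may have been marked broken after repeated failures, or the job queue processes may be disabled. A sketch of what to check (the job number 42 below is hypothetical; use whatever the first query returns):

-- is the refresh job broken or failing?
SELECT job, what, broken, failures, next_date
FROM   dba_jobs;

-- make sure job queue processes are enabled (must be > 0)
SHOW PARAMETER job_queue_processes

-- un-break and re-run the job found above
EXEC DBMS_JOB.BROKEN(42, FALSE);
EXEC DBMS_JOB.RUN(42);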
I'm running 11.2.0. I am looking at tuning a SQL statement, and the question was brought up as to the maximum number of inserts per transaction in 11g, and whether it exceeds 1000. I haven't found a solid answer yet, but I thought the figure for 10g was higher than 1000.
My first thought was to implement a commit loop on every 1000 rows, as that is how things were handled in the past. But I found an article about redo logs and performance that says committing inside a loop is a horrible practice.
What I haven't found is the better methodology for doing this. My scenario could encounter inserts of as many as 20,000 rows at a time.
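There is no 1000-insert limit per transaction in 10g or 11g; a transaction is bounded by available undo space, not by a row count. For 20,000 rows, a single transaction with one commit at the end is normally fine, and bulk binds keep it fast. A minimal sketch, assuming hypothetical source and target tables:

DECLARE
  TYPE t_rows IS TABLE OF target_table%ROWTYPE;  -- hypothetical target
  l_rows t_rows;
BEGIN
  SELECT *
  BULK COLLECT INTO l_rows
  FROM   source_table;                           -- hypothetical source

  FORALL i IN 1 .. l_rows.COUNT
    INSERT INTO target_table VALUES l_rows(i);

  COMMIT;  -- one commit at the end, not inside a loop
END;
/

For much larger volumes you would add a LIMIT clause to the bulk fetch to cap memory use, but still commit once at the end.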
I have some transactions in my table with a date and a time.
I want to pass from-date, to-date and from-time, to-time as parameters.
When I pass one date and two time parameters, it works fine, but when I try to pass a from-date and a to-date (two date parameters) plus two time parameters, it does not work accurately.
For example, if I pass 05-Aug-2010 and 06-Aug-2010 with times 08:00:00 and 14:00:00, it only retrieves data for both dates within that time range on each day. However, I need the transactions from 05-Aug-2010 08:00:00 through 06-Aug-2010 14:00:00.
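The usual fix is to combine each date with its time into a single DATE value before comparing, so the predicate is one continuous range rather than a date range and a separate time range. A sketch, assuming trans_date is the transaction column and the parameters arrive as strings in the formats shown above:

WHERE trans_date >= TO_DATE(:from_date || ' ' || :from_time, 'DD-MON-YYYY HH24:MI:SS')
AND   trans_date <= TO_DATE(:to_date   || ' ' || :to_time,   'DD-MON-YYYY HH24:MI:SS')

With 05-Aug-2010 08:00:00 and 06-Aug-2010 14:00:00 this returns everything between those two instants, including all of the intervening night.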
There are two people on separate machines, each executing a transaction through the same form, processed through a WHEN-BUTTON-PRESSED trigger. The first session processed correctly. For the second user, the session seems to have picked up the non-package variables of the first session in what was passed through to the database. Values associated with the second session's package-based types appear to have passed through correctly.
Hence, the second user's transaction processed with a combination of values from the two sessions: the second user's package-based variables merged with the first user's non-package variables. There is no use of context variables. There are some global values, but none of them are used in this trigger. The values in question, which appear to have passed from the first session to the second, are based on contextual LOV selection: after selecting a transaction type, users are prompted to select from an LOV specific to that type, with the property "Validate from List" set to Yes.
The second session's package-based values do not correlate to the non-package values, leading us to conclude that the latter somehow came through from the first session. We are running iAS 10g R2 on Oracle 10gR2 (10.2.0.3). Each user session is created as the user logs into the application, so logically there should not be any overlap between the sessions of different concurrent users.
I have had a look at the code and suspect nothing is wrong with the source itself, since the system has been in use for a few years and this has only occurred a couple of times in the last few months. Also noticeable is that the issue is not reproducible. I believe something is going wrong in the middle tier or with session management. Are there any known issues with session management in the Forms server, or something similar?
I have a table that cannot be changed, with a field called transaction_reference in the transactions table. This field contains any number of values from a look-up table called codes.
The table codes contains 'AA', 'BB', 'CC'.
A typical transaction_reference field may look like 'CC BB' or 'AA' or 'AA CC' or 'AA CC BB' - any number of codes, in any order. My goal is to get a count of records per code, grouped by another field from the transactions table.
Transactions table example:

transaction_id | transaction_reference | family
-----------------------------------------------
1              | AA BB                 | foo
2              | BB CC                 | bar
3              | BB                    | hello
4              | AA CC BB              | foo
5              | BB AA                 | bar
So the results should look like:
family | code | count
---------------------
foo    | AA   | 2
foo    | BB   | 2
foo    | CC   | 1
bar    | AA   | 1
bar    | BB   | 2
bar    | CC   | 1
hello  | AA   | 0
hello  | BB   | 1
hello  | CC   | 0
If the counts of 0 (like the third-from-last and last lines above) don't show up, I'm OK with that. I put together an explode function like this one here, but I'm really not sure where to go from here. I can split the transaction_reference, but I'm not sure what to compare it to or how.
I realize that a field in the transactions table for AA, BB, and CC would be ideal, but I can't do that... the powers that be won't let me change the table.
for each exploded segment from transaction_reference
    look for it in the codes table
    if it exists, add 1 to the count
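You may not need the explode step at all: since codes is a proper look-up table, you can join it to transactions on a delimited-match condition and let GROUP BY do the counting. A sketch, assuming the look-up column is named code (zero counts simply won't appear, which you said is acceptable):

SELECT t.family, c.code, COUNT(*) AS cnt
FROM   transactions t
JOIN   codes c
  ON   ' ' || t.transaction_reference || ' ' LIKE '% ' || c.code || ' %'
GROUP BY t.family, c.code
ORDER BY t.family, c.code;

Padding both sides with spaces keeps a code such as 'AA' from matching inside a longer token, as long as the references are space-delimited as in your examples.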
I am trying to find some way to tune and optimize server performance in the following situation. There are hundreds of sessions inserting records into one table. The sessions are communication threads in a Java application; each thread receives messages that are to be stored in the table. Each message must be committed, and then an ACK is sent to the remote client. Two problems arise, of course: heavy ITL contention on the table and lots of very small transactions. I can adjust the Java application, but I can't do much about the design.
I was thinking about some "caching": if the messages were stored in memory and bulk-inserted into the database by a single thread, performance would be much higher. However, there would be a possible loss of data - a message could be lost from the memory cache after the client had already received its ACK.
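One server-side option aimed at exactly this trade-off is the asynchronous commit available from 10gR2 onward: the session does not wait for the redo to be flushed to disk, which removes the log file sync wait from each tiny transaction, at the cost of possibly losing the last few commits in an instance crash. That is the same durability risk as the memory cache, but bounded and handled inside the database. A sketch:

INSERT INTO messages (...) VALUES (...);   -- hypothetical table
COMMIT WRITE BATCH NOWAIT;                 -- do not wait for the redo flush

-- or set it once per session:
ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';

The ITL contention is a separate issue and can be eased by rebuilding the table (or its hot partitions) with a higher INITRANS.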