SQL & PL/SQL :: Calculating The Difference Of Timestamps In Seconds
Jan 2, 2013
I have to create the following table. The fields Trend_Date, Price and Trend are already given. I have to calculate the field Permanently and insert its value into this permanent table.
Fields:
The field Price represents the value of a product during the trade.
The field Trend_Date represents the moment of the trade.
The field Trend describes the future behavior of the price: the price at the present moment is compared to the following price (possible values: 'UP', 'DOWN', 'STABLE').
The field Permanently holds the time (in seconds) for which the value of the field Trend (depending on the price) remains true.
For example:
Row 1: The trend in row 1 is 'UP' and it has a price of 11. This remains true until row 3 (the price stays greater than or equal to 11). In this case, the difference between row 1 and row 3 is 9801 (rounded) seconds.
Row 2: The trend in row 2 is 'DOWN' and it has a price of 12. This remains true until the end (the price never rises above 12). In this case, the difference between row 2 and row 11 is 97346 (rounded) seconds. To calculate the 97346 seconds, the field has to take into account that two nights lie between row 2 and row 11. There is no trade between 18:00 and 07:00, which amounts to 13 hours per night, i.e. 46800 seconds each, so 2*46800 = 93600 seconds in total.
-> 190945 - 93600 ≈ 97346 s
Row 6: The trend in row 6 is 'UP' and it has a price of 5. This remains true until the end (the price never falls below 5). In this case, the difference between row 6 and row 11 is 65729 (rounded) seconds. To calculate the 65729 seconds, the field has to take into account that one night lies between row 6 and row 11. There is no trade between 18:00 and 07:00, which amounts to 13 hours, i.e. 1*46800 = 46800 seconds.
-> 112528 - 46800 ≈ 65729 s
Row 9: The trend in row 9 is 'STABLE' and it has a price of 8. This remains true until row 10 (the price stays equal to 8). In this case, the difference between row 9 and row 10 is 14418 (rounded) seconds.
Row 11: The field is empty because there are no later values to compare.
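A first attempt at the Permanently calculation could look like the sketch below. The table and column names (TRADES, TREND_DATE, PRICE, TREND) are assumptions, the 46800 seconds are the 13 no-trade hours (18:00-07:00) per night taken from the examples, and the trend is measured up to the last row on which it still holds, as in the examples:

WITH flagged AS (
  SELECT t.trend_date, t.price, t.trend,
         -- first later trade on which the trend no longer holds
         (SELECT MIN(x.trend_date)
            FROM trades x
           WHERE x.trend_date > t.trend_date
             AND (   (t.trend = 'UP'     AND x.price <  t.price)
                  OR (t.trend = 'DOWN'   AND x.price >  t.price)
                  OR (t.trend = 'STABLE' AND x.price <> t.price))) AS first_break
    FROM trades t
),
ended AS (
  SELECT f.trend_date, f.price, f.trend,
         -- last row on which the trend still holds: the latest trade before the
         -- first break, or the overall last trade when the trend never breaks
         (SELECT MAX(x.trend_date)
            FROM trades x
           WHERE x.trend_date > f.trend_date
             AND (f.first_break IS NULL OR x.trend_date < f.first_break)) AS end_date
    FROM flagged f
)
SELECT trend_date, price, trend,
       -- gross seconds minus 46800 s (13 h) per intervening night;
       -- NULL end_date (no later row, as in row 11) yields a NULL Permanently
       ROUND(  (end_date - trend_date) * 86400
             - (TRUNC(end_date) - TRUNC(trend_date)) * 46800) AS permanently
  FROM ended
 ORDER BY trend_date;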
I have a date column in a table, and I need to fetch records from 27/11/11 00:00:10 to 30/11/11 00:00:10 (DD/MM/YY HH:MI:SS), i.e. filtering down to the seconds/minutes/hours.
I am expecting something like this (using TO_DATE rather than TO_CHAR, with a format mask that matches the literals):
SELECT * FROM TABLENAME WHERE MODEDATE BETWEEN TO_DATE('27/11/11 00:00:10','DD/MM/YY HH24:MI:SS') AND TO_DATE('30/11/11 01:10:10','DD/MM/YY HH24:MI:SS')
I have a date field which is passed as an IN parameter defaulted to SYSDATE. With the passed date as input, I need to run an EOD every 15 minutes.
Using the above v_sdate and v_edate I calculate the start date and end date and compare these in my final select query. Now, in case of a rerun of an old date, I calculate the start date as: v_sdate := TRUNC (in_sdate, 'MI') - 15 / 1440;
But in the above calculation I am not considering seconds, and I need to consider those as well: say I ran the EOD at 23-OCT-2012 12:23:13, then my start date in case of a rerun should start from 12:23:14. How can I achieve that?
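One way to keep second precision on a rerun, as a sketch: skip the TRUNC to the minute and start one second after the previous run time. The literal date below is just the example from the question (SET SERVEROUTPUT ON to see the output):

DECLARE
  in_sdate DATE := TO_DATE('23-10-2012 12:23:13', 'DD-MM-YYYY HH24:MI:SS');
  v_sdate  DATE;
  v_edate  DATE;
BEGIN
  -- start one second after the previous run instead of truncating to the minute
  v_sdate := in_sdate + 1 / 86400;   -- 23-10-2012 12:23:14
  v_edate := v_sdate + 15 / 1440;    -- 15-minute window
  DBMS_OUTPUT.put_line(TO_CHAR(v_sdate, 'DD-MON-YYYY HH24:MI:SS'));
END;
/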
I am trying to create a function that, when called, will add the salary and commission a certain way to return an employee's annual salary. Here's my code:
create or replace function Get_Annual_Comp (Sal in number, Commission in number) return number as [code]...
When I run the query, I get the proper rows returned; however, my function does no calculation. If I input random numbers, I get the proper value returned. What I want is for my function to calculate the salary and commission of the employee specified in my SELECT's WHERE clause as an annual salary.
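Since the body is elided above, here is only a guess at the intended shape, assuming the classic EMP-style columns: annual salary = monthly salary * 12 plus commission, with NVL so a NULL commission does not wipe out the result.

CREATE OR REPLACE FUNCTION Get_Annual_Comp (Sal IN NUMBER, Commission IN NUMBER)
  RETURN NUMBER
AS
BEGIN
  -- NVL guards against NULL inputs so the arithmetic never returns NULL
  RETURN NVL(Sal, 0) * 12 + NVL(Commission, 0);
END Get_Annual_Comp;
/
-- example call against an assumed EMP-style table:
-- SELECT ename, sal, comm, Get_Annual_Comp(sal, comm) FROM emp WHERE empno = 7499;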
I am using the SQL query below to calculate working hours. The problem I am facing is that the query takes a lot of time to calculate the working hours. How can I reduce the execution time of this query, or is there another way to calculate working hours?
The following query takes 63.499 sec:
SELECT sql_calc_found_rows gstime, MAX(stoptime) AS mx, MIN(starttime) AS mn,
SQL> SELECT MAX (upd_time), MIN (upd_time), COUNT (serial)
       FROM (SELECT * FROM trans UNION ALL SELECT * FROM trans_archive);

MAX(UPD_T MIN(UPD_T COUNT(SERIAL)
--------- --------- -------------
23-OCT-13 01-JAN-11       5289261
I need to calculate the seconds between MAX(upd_time) and MIN(upd_time) and then calculate transactions per second; the number of transactions is COUNT(serial).
SQL> desc trans;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SERIAL                                    NOT NULL NUMBER(11)
 UPD_TIME                                  NOT NULL DATE
 MESSAGE                                   NOT NULL VARCHAR2(255 CHAR)
 ENTITY_TABLE                              NOT NULL VARCHAR2(32 CHAR)
 ACTION                                    NOT NULL VARCHAR2(12 CHAR)
[code]....
trans_archive has the same DDL.
My first try to get the interval between the max and min dates in seconds:
SQL> SELECT   EXTRACT (DAY FROM (MAX (UPD_TIME) - MIN (UPD_TIME))) * 24 * 60 * 60
            + EXTRACT (HOUR FROM (MAX (UPD_TIME) - MIN (UPD_TIME))) * 60 * 60
            + EXTRACT (MINUTE FROM (MAX (UPD_TIME) - MIN (UPD_TIME))) * 60
            + EXTRACT (SECOND FROM (MAX (UPD_TIME) - MIN (UPD_TIME))) DELTA
       FROM (SELECT * FROM TRANS UNION ALL SELECT * FROM TRANS_ARCHIVE);
SELECT EXTRACT (DAY FROM (MAX (UPD_TIME) - MIN (UPD_TIME))) * 24 * 60 * 60
       *
ERROR at line 1:
ORA-30076: invalid extract field for extract source
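The ORA-30076 happens because UPD_TIME is a DATE: subtracting two DATEs yields a NUMBER of days, not an INTERVAL, so EXTRACT has nothing to work on. Scaling the day difference by 86400 gives the seconds directly, for example:

-- DATE minus DATE is already a number of days; scale it to seconds
SELECT (MAX(upd_time) - MIN(upd_time)) * 86400 AS delta_seconds,
       COUNT(serial) AS trans,
       COUNT(serial) / ((MAX(upd_time) - MIN(upd_time)) * 86400) AS trans_per_sec
  FROM (SELECT * FROM trans UNION ALL SELECT * FROM trans_archive);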
Basically, what I want is to get the desired result in such a way that whenever the transaction type is 'Sales order issue', I get the last TRANSACTION_COSTED_DATE of an 'Intransit Shipment'.
INVENTORY_ITEM_ID  TRANSACTION_COSTED_DATE  TRANSACTION_TYPE    R
123                28-06-2012 21:36         Intransit Shipment
123                23-07-2012 01:25         Sales order issue   28-06-2012 21:36
123                30-07-2012 05:20         Sales order issue   28-06-2012 21:36
[Code]...
LAG with offset 1 doesn't work, as it will only go to the previous row. What I want is for it to go to the row above where the transaction type is 'Intransit Shipment'.
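One way to do this, as a sketch against the sample columns above, is to carry the last 'Intransit Shipment' date forward with LAST_VALUE ... IGNORE NULLS instead of a fixed-offset LAG (the table name is a placeholder):

SELECT inventory_item_id,
       transaction_costed_date,
       transaction_type,
       -- only populate the result for sales order rows, carrying forward the
       -- most recent shipment date seen so far within the item
       CASE
         WHEN transaction_type = 'Sales order issue' THEN
           LAST_VALUE(CASE WHEN transaction_type = 'Intransit Shipment'
                           THEN transaction_costed_date END IGNORE NULLS)
             OVER (PARTITION BY inventory_item_id
                   ORDER BY transaction_costed_date)
       END AS last_shipment_date
  FROM mtl_transactions   -- hypothetical table name
 ORDER BY inventory_item_id, transaction_costed_date;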
A function should accept two parameters, from_date and to_date, return the number of Saturdays and Sundays between these dates, and also show the dates of those weekend days.
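A minimal sketch of such a function; the name and the DBMS_OUTPUT reporting are assumptions about what "show the dates" should mean:

CREATE OR REPLACE FUNCTION weekend_days (from_date IN DATE, to_date IN DATE)
  RETURN NUMBER
IS
  v_count NUMBER := 0;
  v_day   DATE := TRUNC(from_date);
BEGIN
  WHILE v_day <= TRUNC(to_date) LOOP
    -- 'DY' with an explicit language is immune to NLS session settings
    IF TO_CHAR(v_day, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') IN ('SAT', 'SUN') THEN
      DBMS_OUTPUT.put_line(TO_CHAR(v_day, 'DD-MON-YYYY DY'));  -- show the date
      v_count := v_count + 1;
    END IF;
    v_day := v_day + 1;
  END LOOP;
  RETURN v_count;
END weekend_days;
/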
I am trying to find a sum for one record in each partition, but taking the right timestamp is giving me a bit of trouble. I have tried to reproduce the table with some sample data:
CREATE TABLE TEST_COUNT (
  END_TIME        DATE,
  SUCCESSFUL_ROWS NUMBER,
  FAILED_ROWS     NUMBER,
  TBL_NAME        VARCHAR (4),
  PARTITION_NAME  VARCHAR (240)
)
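If "one record for each partition" means the row with the latest END_TIME per TBL_NAME/PARTITION_NAME (an assumption, since the sample data is elided), a ROW_NUMBER sketch would be:

SELECT tbl_name, partition_name, end_time,
       successful_rows + failed_rows AS total_rows
  FROM (SELECT t.*,
               -- rank rows within each table/partition, newest first
               ROW_NUMBER() OVER (PARTITION BY tbl_name, partition_name
                                  ORDER BY end_time DESC) AS rn
          FROM test_count t)
 WHERE rn = 1;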
CREATE TABLE "ALLOCATEASSOCIATES" ( "PROJID" VARCHAR2(30) NOT NULL ENABLE, "ASSOCIATEID" NUMBER(*,0) NOT NULL ENABLE, "ALLOCATIONSTARTDATE" DATE, "ALLOCATIONPERCENT" NUMBER(*,0),
[code]...
Given that:
1. An associate must be allocated at least, and at most, 100% at any given point in time.
2. The user selects two dates between which any inconsistency of allocation needs to be displayed.
If the end user selects 1st Apr 2012 and 31st Jul 2012 as the dates between which reports need to be generated, I am looking for the following output.
The Allocation_Inconsistency denotes that the associate has a deficit of allocation between the two dates. The associate with ID 2 has a deficit of 75% of allocation from 1st Apr 2012 till 15th Apr 2012, similarly a 25% deficit between 16th Apr 2012 and 15th June 2012, and so on. However, there is no allocation deficit for the month of July, as he is allocated 100% for that month, and hence it does not appear in the expected output.
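A heavily hedged sketch of the per-day deficit: the full DDL is elided above, so an ALLOCATIONENDDATE column is assumed, and collapsing consecutive days into from/to ranges, as in the expected output, would be a further gaps-and-islands step on top of this. Associates with no allocation row at all on a day would also need an outer join to appear.

WITH cal AS (
  -- every day in the user-selected window
  SELECT DATE '2012-04-01' + LEVEL - 1 AS d
    FROM dual
  CONNECT BY DATE '2012-04-01' + LEVEL - 1 <= DATE '2012-07-31'
)
SELECT a.associateid, c.d,
       100 - SUM(a.allocationpercent) AS allocation_deficit
  FROM allocateassociates a
  JOIN cal c
    ON c.d BETWEEN a.allocationstartdate
               AND a.allocationenddate   -- assumed column, DDL is elided
 GROUP BY a.associateid, c.d
HAVING SUM(a.allocationpercent) < 100
 ORDER BY a.associateid, c.d;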
11.2.0.2 on Solaris. I have a rather large post on a very basic space calculation.
We have several tablespaces starting with WLMCS in our DB. I just wanted to calculate the total space consumed on disk by all these tablespaces combined. When I queried DBA_DATA_FILES.MAXBYTES and DBA_DATA_FILES.USER_BYTES, I noticed that:
When AUTOEXTEND is NO: MAXBYTES is 0 for these datafiles, but USER_BYTES won't be 0 for these files.
When AUTOEXTEND is YES: MAXBYTES will be a non-zero value for these datafiles, and USER_BYTES won't be 0 either.
-- Not including datafile names for better readability.
SYS > select tablespace_name, maxbytes/1024/1024, user_bytes/1024/1024, autoextensible from dba_data_files where tablespace_name like 'WLMCS%';
11 rows selected. To calculate the space consumed, I made two assumptions. Are the two assumptions below right?
Assumption 1: Whenever MAXBYTES = 0, USER_BYTES should be considered for the space calculation.
Assumption 2: Whenever there are non-zero values for both MAXBYTES and USER_BYTES, MAXBYTES should be considered for the space calculation.
I did the calculation (adding up) based on the above assumptions. Is this calculation correct?
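The two assumptions fold into a single query, since AUTOEXTENSIBLE = 'YES' is exactly the non-zero-MAXBYTES case. One caveat: MAXBYTES is the ceiling a file may grow to, while BYTES is its current size on disk, so whether this measures "consumed" space depends on what is being counted.

SELECT SUM(CASE WHEN autoextensible = 'YES' THEN maxbytes
                ELSE user_bytes            -- assumption 1: no autoextend
           END) / 1024 / 1024 AS total_mb
  FROM dba_data_files
 WHERE tablespace_name LIKE 'WLMCS%';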
Our database was generating a 50 MB archivelog every 30 seconds! I think this is not normal, because all I did was open our database; I was the only one connected and I was not running anything, but the database was still generating archivelogs!
Our redo logs: 6 groups, 3 members each.
These are the things I saw in our alert logs:
- advanced to log sequence
- cannot allocate new log, sequence
- checkpoint not complete
- private strand flush not complete
What I did was change the log mode of our database to NOARCHIVELOG, open the database, then return it to ARCHIVELOG mode, and that fixed the problem. But after 6 hours the abnormal behavior came back.
I'm using APEX 4.2.1 against Oracle 11gR2 and mod_plsql. I want to create a date picker item that allows users to select seconds as well as hours and minutes.
I have searched this forum and see that others have asked this question. The answers have all been "No". However, I've seen no such question since version 4.1 was released, and so I am hoping there now is a way to do this.
I have adjusted the Date Format field to be "DD-Mon-YYYY HH24:MI:SS", both under the date picker item itself and under the "Global" parameters section of my application, all to no avail. The date picker shows only hours and minutes for the time portion.
How do I select the transactions out of the database that occurred within 70 seconds of each other? The toll_date field is a TIMESTAMP field.
The problem is, I seem to only get transactions that occurred within 70 minutes of each other. On the timestamp field I break the math down into the seconds in a day and add 70; I then subtract that value from, and add it to, the timestamp, and I should get anything between those values, right?
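Getting 70 minutes instead of 70 seconds is the classic symptom of adding 70/1440 in DATE arithmetic (1440 is minutes per day; seconds would be 70/86400). With a TIMESTAMP column an explicit INTERVAL avoids the ambiguity. A self-join sketch, with hypothetical TOLLS(TOLL_ID, TOLL_DATE) names:

-- pair up transactions no more than 70 seconds apart
SELECT a.toll_id, a.toll_date, b.toll_id AS other_id, b.toll_date AS other_date
  FROM tolls a
  JOIN tolls b
    ON b.toll_date BETWEEN a.toll_date - INTERVAL '70' SECOND
                       AND a.toll_date + INTERVAL '70' SECOND
   AND a.toll_id < b.toll_id   -- avoid self-pairs and duplicates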
I ran the following query and somehow I feel the results are wrong.
SQL> select to_char(starttime,'dd-mm-YYYY hh24:mi:ss') from report where dateofmonth between to_timestamp_tz('22-Apr-2013 12:00:00','dd-mm-YYYY hh24:mi:ss') and to_timestamp_tz ('23-Apr-2013 14:00:00','dd-mm-YYYY hh24:mi:ss');
SQL> select to_timestamp_tz(starttime,'dd-mm-YYYY hh24:mi:ss') from report where dateofmonth between to_timestamp_tz('22-Apr-2013 12:00:00','dd-mm-YYYY hh24:mi:ss') and to_timestamp_tz ('23-Apr-2013 14:00:00','dd-mm-YYYY hh24:mi:ss');
View the SELECT statement below. Why is it adding extra zeros?
select to_timestamp('2001-05-22 12:00:18.600','YYYY-MM-DD HH:MI:SS.ff3AM') from dual
Output: 5/22/2001 12:00:18.600000000 PM
Why is it adding the extra zeros? My output should be "5/22/2001 12:00:18.600 PM".
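TO_TIMESTAMP returns a TIMESTAMP with the default fractional precision of 9 digits, and the client displays all of them; FF3 in the input mask only controls parsing. To display exactly three digits, format the value on the way out with TO_CHAR (or store it in a TIMESTAMP(3) column), for example:

SELECT TO_CHAR(TO_TIMESTAMP('2001-05-22 12:00:18.600','YYYY-MM-DD HH:MI:SS.ff3AM'),
               'MM/DD/YYYY HH:MI:SS.FF3 AM') AS ts_3_digits
  FROM dual;
-- 05/22/2001 12:00:18.600 PM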
I have a table which has 3 columns, id, name and time, where time is of datatype TIMESTAMP and stores the time when the row was inserted. I need a query which accepts 2 parameters as input, e.g. Start_Time and End_Time, such that all the rows between the above-mentioned timestamps are deleted.
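A sketch as a small procedure, since a parameterized DELETE needs bind variables or PL/SQL; the table name T is a placeholder, and the column is kept as TIME to match the question:

CREATE OR REPLACE PROCEDURE delete_between (p_start IN TIMESTAMP,
                                            p_end   IN TIMESTAMP)
IS
BEGIN
  DELETE FROM t
   WHERE time BETWEEN p_start AND p_end;
  DBMS_OUTPUT.put_line(SQL%ROWCOUNT || ' rows deleted');
  -- transaction control (COMMIT/ROLLBACK) is left to the caller
END delete_between;
/
-- example call:
-- EXEC delete_between(TO_TIMESTAMP('01-01-2013 00:00:00','DD-MM-YYYY HH24:MI:SS'),
--                     TO_TIMESTAMP('02-01-2013 00:00:00','DD-MM-YYYY HH24:MI:SS'))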
I have a large table and want to calculate just a few values. Therefore, I don't want to create a new table; I want to update the existing one. Here is an example:
I want to calculate the VALUE_LAG with ID = 4 only (-> two values).
create table zTEST ( PRODUCT number, ID number, VALUE number, VALUE_L1 number );
[Code]..
I tried this, but obviously window functions are not allowed in an update statement:
update zTEST set VALUE_L1 = lag(VALUE) over (partition by PRODUCT order by ID) where ID = 4
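Right, analytic functions are not allowed in an UPDATE's SET clause. A common workaround is a MERGE that computes the LAG in the USING subquery and joins back on ROWID, sketched here against zTEST:

MERGE INTO zTEST t
USING (SELECT ROWID AS rid,
              LAG(VALUE) OVER (PARTITION BY PRODUCT ORDER BY ID) AS value_lag,
              ID
         FROM zTEST) s
   ON (t.ROWID = s.rid)
 WHEN MATCHED THEN
   UPDATE SET t.VALUE_L1 = s.value_lag
        WHERE s.ID = 4;   -- only touch the rows with ID = 4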
I have written a standalone (Java SE 1.6) JMS client program to consume AQ messages via the Oracle JMS API (aqapi.jar). The queue is a multiple-consumer queue, and I just created one subscriber on it. My JMS client program receives messages asynchronously by setting the MessageListener using the setMessageListener method.
Watching the program work, I found significant delays in receiving messages, up to several seconds. When I turned on the diagnostic trace, I found that in the absence of messages the listener (AQjmsSimpleScheduler) gradually increases the delay time up to 15 seconds:
Thread-1 [Mon May 06 22:14:23 MSK 2013] AQjmsSimpleScheduler.feedData: Got a non null message, the sleep time is reset to 0
Thread-1 [Mon May 06 22:14:23 MSK 2013] AQjmsListenerWorker.run: sleep 0 millisecond.
Thread-1 [Mon May 06 22:14:23 MSK 2013] AQjmsSimpleScheduler.feedData: Got a null message, the sleep time is doubled to 1000
Thread-1 [Mon May 06 22:14:23 MSK 2013] AQjmsListenerWorker.run: sleep 1000 millisecond.
Thread-1 [Mon May 06 22:14:23 MSK 2013] AQjmsListenerWorker.doSleep: try to wait for 1000 milliseconds
[code]....
Thus, in the worst case, the delay between placing the message in the queue and receiving it by the JMS client is 15 seconds.
Can I control this latency? For example, I would like to explicitly set the bounds of the time delays.
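I have not verified this against every aqapi version, but the AQ JMS client's poll back-off is reportedly bounded by the JVM system properties oracle.jms.minSleepTime and oracle.jms.maxSleepTime (in milliseconds); treat the property names as an assumption to confirm in the documentation for your driver. For example:

# property names assumed, MyJmsClient is a placeholder for the client's main class;
# this would cap the listener's poll back-off at 1 second instead of 15
java -Doracle.jms.minSleepTime=50 -Doracle.jms.maxSleepTime=1000 MyJmsClient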