How do I select the transactions out of the database that occurred within 70 seconds of each other? The toll_date field is a TIMESTAMP field.
The problem is, I seem to only get transactions that occurred within 70 minutes of each other. On the timestamp field I break the math down into the seconds in a day and add 70. I then subtract that value from the timestamp and add it to the timestamp, and I should get anything between those two values, right?
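If the arithmetic divides by 1440 (minutes per day) instead of 86400 (seconds per day), a 70-second window silently becomes a 70-minute window, which would explain the symptom. A minimal sketch of the self-join using interval literals instead of hand-built day fractions (the table and id column names are assumed, since only toll_date is given):

SELECT a.txn_id, b.txn_id, a.toll_date, b.toll_date
FROM toll_txn a
JOIN toll_txn b
  ON b.txn_id <> a.txn_id
 AND b.toll_date BETWEEN a.toll_date - INTERVAL '70' SECOND
                     AND a.toll_date + INTERVAL '70' SECOND;

INTERVAL literals apply directly to a TIMESTAMP column, so no conversion to fractions of a day is needed.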
Is there a way to query an Oracle database in an automated fashion by a timestamp field based on the current timestamp, like: 04/29/08 00:00:00 - 72 hours?
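A sketch of a rolling 72-hour window anchored to the current timestamp (table and column names assumed):

SELECT *
FROM my_table
WHERE ts_col >= SYSTIMESTAMP - INTERVAL '72' HOUR;

Because SYSTIMESTAMP is evaluated at run time, a scheduled job that re-runs this query always sees the trailing 72 hours without any edits.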
I have a stored procedure for comparing 80 different column values in 8 table pairs, and it is taking a very long time to process, as I have to process around 10k records.
I ran the following query, and somehow I feel the results are wrong.
SQL> select to_char(starttime,'dd-mm-YYYY hh24:mi:ss')
     from report
     where dateofmonth between to_timestamp_tz('22-Apr-2013 12:00:00','dd-mm-YYYY hh24:mi:ss')
                           and to_timestamp_tz('23-Apr-2013 14:00:00','dd-mm-YYYY hh24:mi:ss');

SQL> select to_timestamp_tz(starttime,'dd-mm-YYYY hh24:mi:ss')
     from report
     where dateofmonth between to_timestamp_tz('22-Apr-2013 12:00:00','dd-mm-YYYY hh24:mi:ss')
                           and to_timestamp_tz('23-Apr-2013 14:00:00','dd-mm-YYYY hh24:mi:ss');
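Two things stand out here, taking the column names at face value: the query filters on dateofmonth but displays starttime, so the rows shown are not the rows being range-checked; and the mask 'dd-mm-YYYY' only accepts literals like '22-Apr-2013' because Oracle leniently lets a month name match an MM format element. A sketch that filters and displays the same column, with a mask that matches the literal exactly:

SQL> select to_char(starttime,'DD-MON-YYYY HH24:MI:SS')
     from report
     where starttime between to_timestamp('22-Apr-2013 12:00:00','DD-MON-YYYY HH24:MI:SS')
                         and to_timestamp('23-Apr-2013 14:00:00','DD-MON-YYYY HH24:MI:SS');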
View the select statement below. Why is it adding extra zeros?
select to_timestamp('2001-05-22 12:00:18.600','YYYY-MM-DD HH:MI:SS.ff3AM') from dual
Output: 5/22/2001 12:00:18.600000000 PM
Why is it adding extra zeros? My output should be "5/22/2001 12:00:18.600 PM".
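The extra zeros are a display artifact, not extra data: TO_TIMESTAMP returns a TIMESTAMP with up to 9 digits of fractional-second precision, and the client shows all of them by default. To control the output, convert back to a string with an FF3 mask, for example:

select to_char(to_timestamp('2001-05-22 12:00:18.600','YYYY-MM-DD HH:MI:SS.ff3AM'),
               'FMMM/DD/YYYY HH:MI:SS.FF3 AM') from dual;

This prints 5/22/2001 12:00:18.600 PM (the FM modifier suppresses the leading zeros).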
I have a table which has 3 columns, id, name and time, where time is of datatype TIMESTAMP and stores the time when the row was inserted. I need a query which accepts 2 parameters as input, e.g. Start_Time and End_Time, and deletes all the rows between the two timestamps.
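A minimal sketch, assuming the table is named my_tab and the parameters arrive as bind variables:

DELETE FROM my_tab
WHERE time BETWEEN :start_time AND :end_time;

BETWEEN is inclusive at both ends; use >= and < instead if the upper bound should be exclusive. Wrapped in a procedure it looks like this:

CREATE OR REPLACE PROCEDURE delete_between (p_start IN TIMESTAMP, p_end IN TIMESTAMP) AS
BEGIN
  DELETE FROM my_tab WHERE time BETWEEN p_start AND p_end;
END;
/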
I have to create the following table. The fields Trend_Date, Price and Trend are already given. I have to calculate the field Permanently and insert the value into this permanent table.
Fields:
The field Price holds the value of a product at the time of the trade. The field Trend_Date holds the moment of the trade. The field Trend describes the future behavior of the price: the price at the present moment is compared to the following price (possible values: 'UP', 'DOWN', 'STABLE'). The field Permanently holds the time (in seconds) for which the value of the field Trend (given the price) remains true.
For example:
Row 1: The trend in row 1 is 'UP' and it has a price of 11. This remains true until row 3 (the price stays greater than or equal to 11). In this case, the difference between row 1 and row 3 is 9801 seconds (rounded).
Row 2: The trend in row 2 is 'DOWN' and it has a price of 12. This remains true until the end (the price is never greater than 12). In this case, the difference between row 2 and row 11 is 97346 seconds (rounded). To calculate the 97346 seconds, the field has to take into account that two days lie between row 2 and row 11. There is no trading between 18:00 and 07:00, which is 13 hours per day, or 46800 seconds; for two days that is 2*46800 = 93600 seconds. -> 190945 - 93600 = 97346s
Row 6: The trend in row 6 is 'UP' and it has a price of 5. This remains true until the end (the price is never smaller than 5). In this case, the difference between row 6 and row 11 is 65729 seconds (rounded). To calculate the 65729 seconds, the field has to take into account that one day lies between row 6 and row 11. There is no trading between 18:00 and 07:00, which is 13 hours per day, or 1*46800 = 46800 seconds. -> 112528 - 46800 = 65729s
Row 9: The trend in row 9 is 'STABLE' and it has a price of 8. This remains true until row 10 (the price stays equal to 8). In this case, the difference between row 9 and row 10 is 14418 seconds (rounded).
Row 11: is empty, because there are no later values to compare.
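A sketch of the core of the calculation, assuming a table TRENDS(trend_date DATE, price NUMBER, trend VARCHAR2(6)): for each row it finds the first later row that breaks the trend. The remaining steps (measuring to the last row where the trend still holds, capping at the final row when the trend never breaks, and subtracting 46800 seconds per intervening non-trading night) follow from the rules above:

SELECT t.trend_date, t.price, t.trend,
       (SELECT MIN(x.trend_date)
          FROM trends x
         WHERE x.trend_date > t.trend_date
           AND (   (t.trend = 'UP'     AND x.price <  t.price)
                OR (t.trend = 'DOWN'   AND x.price >  t.price)
                OR (t.trend = 'STABLE' AND x.price <> t.price))) AS break_date
FROM trends t;

With a DATE column, the raw gap between two moments is simply (later_date - earlier_date) * 86400 seconds, which is the number the non-trading hours are then subtracted from.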
I'm creating a dynamic table every month to maintain each month's data separately. When records get inserted into the main table, a trigger automatically inserts the records into the dynamic table. Only the date alone (without the time portion) is getting inserted into the dynamic table from staging, so by default 00:00:00 is appended to the date instead of the actual time. I tried select to_date(to_char(:new.ACTN_DATE,'YYYY-MM-DD HH24:MI:SS'),'YYYY-MM-DD HH24:MI:SS') INTO v_temp_actn_date from dual; but I am still getting only the date. In both my table and the dynamic table the datatype for the date column is DATE.
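One thing worth checking, since an Oracle DATE column always physically stores hours, minutes and seconds: the time may be stored but hidden by the session's NLS_DATE_FORMAT, or it may already be gone (for example via a TRUNC in the staging load) before the trigger ever sees :new.ACTN_DATE. A quick check of what is really stored:

select to_char(actn_date, 'YYYY-MM-DD HH24:MI:SS') from my_dynamic_table;

If this shows 00:00:00, the value arriving in :new.ACTN_DATE has no time portion to begin with, and no to_char/to_date round trip inside the trigger can bring it back.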
I'm trying to generate a count of the number of entries in a table for each day. The problem is the date column is of datatype TIMESTAMP and looks like this: "2006-12-30 18:42:03.0".
How would I generate a report of the number of entries in the table for each date? (I'm not interested in the "time", only the "date", i.e. YYYY-MM-DD.)
SELECT COUNT(*) FROM my_table_name WHERE my_date_column LIKE '2006-12-30%' GO
It returns zero rows (and I know there are rows in the table). I'm using Oracle 10g.
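LIKE forces an implicit TIMESTAMP-to-string conversion using the session's NLS format, which almost never matches '2006-12-30%', hence zero rows; also, GO is a SQL Server batch terminator, not Oracle syntax. A sketch of the per-day report, plus a single-day count done with range predicates:

SELECT TRUNC(my_date_column) AS entry_date, COUNT(*) AS entries
FROM my_table_name
GROUP BY TRUNC(my_date_column)
ORDER BY entry_date;

SELECT COUNT(*)
FROM my_table_name
WHERE my_date_column >= TIMESTAMP '2006-12-30 00:00:00'
  AND my_date_column <  TIMESTAMP '2006-12-31 00:00:00';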
How to calculate the days between two dates of a single timestamp field, with this query:
Select * from m_activity_transaction
where actn_opp_id in (Select actn_opp_id
                      from m_activity_transaction
                      where ACTN_ACTV_ID = 218
                      Group by actn_opp_id
[code]...
and I need to calculate the number of days between two dates like 27-JAN-12 11.06.20.000000 AM and 08-FEB-12 05.32.54.000000 PM, where ACTN_ID is unique and ACTN_OPP_ID is not unique.
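Subtracting two timestamps yields an INTERVAL DAY TO SECOND, so the whole-day gap can be extracted directly; a sketch per ACTN_OPP_ID, with the timestamp column name assumed to be ACTN_DATE:

SELECT actn_opp_id,
       EXTRACT(DAY FROM MAX(actn_date) - MIN(actn_date)) AS whole_days
FROM m_activity_transaction
GROUP BY actn_opp_id;

For a fractional answer, cast to DATE and subtract: CAST(MAX(actn_date) AS DATE) - CAST(MIN(actn_date) AS DATE) returns a plain number of days, e.g. about 12.27 for the two example values.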
Both these fields, A_STD and A_TIME, are coming as VARCHAR from the parent table in a cursor (basically they are a time period and an actual arrival time, respectively). I was juggling with the attempt to convert the VARCHAR to a timestamp or date, but got caught up with the round up / round down.
Formula ->
A = round down [A_TIME - A_STD]
B = round up [A_TIME - 10 minutes + A_STD]
where
A_TIME VARCHAR2(8), time (format "HH:MM AM/PM"), e.g. "3:50 PM"
A_STD VARCHAR2(5), standard time (format "HH:MM"), e.g. "1:00"
Allowed values for A and B after round up/down: multiples of 10 minutes (11:00, 11:10, 11:20, etc.).
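A sketch of the whole conversion under those assumptions (the stated formats, no seconds component, and the formula exactly as written above):

WITH src AS (
  SELECT TO_DATE('3:50 PM', 'HH:MI AM') AS a_time,
         TO_NUMBER(SUBSTR('1:00', 1, INSTR('1:00', ':') - 1)) * 60
       + TO_NUMBER(SUBSTR('1:00', INSTR('1:00', ':') + 1))      AS std_min
  FROM dual
), calc AS (
  SELECT a_time - std_min/1440            AS raw_a,   -- A_TIME - A_STD
         a_time - 10/1440 + std_min/1440  AS raw_b    -- A_TIME - 10 min + A_STD
  FROM src
)
SELECT TO_CHAR(raw_a - MOD(TO_NUMBER(TO_CHAR(raw_a,'MI')), 10)/1440, 'HH:MI AM') AS a_rounded_down,
       TO_CHAR(raw_b + MOD(10 - MOD(TO_NUMBER(TO_CHAR(raw_b,'MI')), 10), 10)/1440, 'HH:MI AM') AS b_rounded_up
FROM calc;

For '3:50 PM' and '1:00' this returns A = 02:50 PM and B = 04:40 PM, both already multiples of 10 minutes.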
I have 3 tables: user_login_event, person and resource_viewed_event. What I want is a report that shows, for each month, the users who logged in to our application, how many records were created in the person table, and how many resource view events were logged in resource_viewed_event.
Let's only worry about the timestamp fields in these tables for now, as I want to use them to join the tables together, or at least to build correlated subqueries along the months. I have tried several options, none of which led to the desired result:
A left outer join. It works, but it's incredibly slow:
SELECT distinct to_char(ule.TIMESTAMP,'YYYY-MM') as "YYYY-MM",
       count(distinct ule.id) as "User Logins",
       count(distinct ule.user_id) as "Users logged on",
       count(distinct p2.id) as "Existing Users",
       count(distinct p1.id) as "New Users",
       count(distinct r1.id) as "Resources created"
[code]....
Tried the same with left outer joins of temporary tables created through select statements:
select distinct ule.month as "Month",
       count(distinct p1.user_id) as "Users created",
       count(ule.id) as "Logins",
       count(distinct ule.user_id) as "Users logged in",
       count(rv.id) as "Resource Views",
       count(distinct rv.resource_id) as "Resources Viewed"
[code]....
Another approach is to create my own temporary tables using select statements and create fixed month values which I can use to link the sets together directly.
select distinct ule.loginday as "Month",
       count(distinct ule.id) as "Logins",
       count(distinct ule.user_id) as "Users logged in",
       count(distinct p1.user_id) as "Users created",
       count(distinct p2.user_id) as "Existing users1"
[code]....
Performance is OK with 2 tables, but the example above takes forever to execute.
I also tried an approach with a UNION, but this creates new rows for each table:
SELECT DISTINCT p1.MONTH AS "Month",
       COUNT(DISTINCT p1.user_id) AS "Users created",
       NULL AS "Logins",
       NULL AS "Users Logged in",
       NULL AS "Resource views",
       NULL AS "Resources viewed"
FROM (SELECT To_char(person.created_on_date, 'YYYY-MM') AS MONTH,
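One pattern that usually avoids both the slow outer joins and the union's extra rows: aggregate each table down to one row per month first, then join the small monthly summaries. A sketch using the three tables above (column names assumed from the posted fragments):

WITH logins AS (
  SELECT TO_CHAR(ule.timestamp, 'YYYY-MM') AS month,
         COUNT(*)                    AS logins,
         COUNT(DISTINCT ule.user_id) AS users_logged_on
  FROM user_login_event ule
  GROUP BY TO_CHAR(ule.timestamp, 'YYYY-MM')
), persons AS (
  SELECT TO_CHAR(p.created_on_date, 'YYYY-MM') AS month, COUNT(*) AS new_users
  FROM person p
  GROUP BY TO_CHAR(p.created_on_date, 'YYYY-MM')
), views AS (
  SELECT TO_CHAR(rv.timestamp, 'YYYY-MM') AS month,
         COUNT(*)                        AS resource_views,
         COUNT(DISTINCT rv.resource_id)  AS resources_viewed
  FROM resource_viewed_event rv
  GROUP BY TO_CHAR(rv.timestamp, 'YYYY-MM')
)
SELECT l.month, l.logins, l.users_logged_on,
       p.new_users, v.resource_views, v.resources_viewed
FROM logins l
LEFT JOIN persons p ON p.month = l.month
LEFT JOIN views   v ON v.month = l.month
ORDER BY l.month;

Each table is scanned exactly once, and the join then only touches a handful of monthly summary rows.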
Since orgid 1 has changed its dept from org1 to org2, I do not want it to appear in the final count. The results should only include orgid 2, since it didn't change any dept.
There could be anything after the 2nd ~ in string 2. Is there an easy way of trimming string2 to the first 14 characters? Or do I have to find the 2nd instance of ~ and then remove everything after (and including) it?
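Both variants are one-liners, since INSTR's fourth argument finds the nth occurrence directly; a sketch with the table name assumed:

SELECT SUBSTR(string2, 1, 14)                            AS first_14_chars,
       SUBSTR(string2, 1, INSTR(string2, '~', 1, 2) - 1) AS before_2nd_tilde
FROM my_table;

INSTR(string2, '~', 1, 2) returns the position of the 2nd ~ searching from position 1, and returns 0 when there is no 2nd ~ (in which case the SUBSTR yields NULL), so guard with CASE or NULLIF if that can occur.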
One is "ora", an 8i version; the 2nd is "orcl", an 11g version.
"Oracle" is the my local database. i wrote following program for comparing the row by row data in both the tables. Q)Is it BEST practice? If not let me know the best practice to compare data in tables? Q) If am not using the order by clause its giving me wrong output even though both the data tables has same data. WHY?
Recently I have started working on PL/SQL coding. I have a requirement: if either the error count or the unprocessed record count is 90% or more of the records to be processed, then the script has to fail. Currently I have a situation where the error count is 1 and the total to be processed is also 1.
In the code below, V_ERR is the error count, V_UPS is the unprocessed count, and V_PROCESSED_COUNT is the total to be processed. I am expecting a PASS result, but it is giving FAIL.
DECLARE
  V_ERR NUMBER := 0;
  V_UPS NUMBER := 0;
  V_PROCESSED_COUNT NUMBER := 0;
  NIN NUMBER;
BEGIN
  V_PROCESSED_COUNT := 1;
[Code] .......
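A minimal sketch of the 90% rule exactly as stated (the truncated block may compute it differently): with V_ERR = 1 and V_PROCESSED_COUNT = 1 the error ratio is 100%, which is at or above 90%, so by the stated rule FAIL is actually the prescribed outcome; getting PASS here would require a different threshold or rule.

DECLARE
  V_ERR             NUMBER := 1;
  V_UPS             NUMBER := 0;
  V_PROCESSED_COUNT NUMBER := 1;
BEGIN
  IF V_ERR >= 0.9 * V_PROCESSED_COUNT
     OR V_UPS >= 0.9 * V_PROCESSED_COUNT THEN
    DBMS_OUTPUT.PUT_LINE('FAIL');  -- 1 >= 0.9, so this branch fires
  ELSE
    DBMS_OUTPUT.PUT_LINE('PASS');
  END IF;
END;
/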
I want to do a comparison for the missing rows between two different tables, TBL1 and TBL2, both with the same structure but with different data (some data is identical). My data is huge, though, so I wanted to make sure about the technique I am using.
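A sketch that reports which side each missing row is on, assuming both tables share a primary key column named ID; on huge tables a single hash full outer join like this generally beats repeated row-by-row lookups:

SELECT COALESCE(a.id, b.id) AS id,
       CASE WHEN b.id IS NULL THEN 'missing from TBL2'
            WHEN a.id IS NULL THEN 'missing from TBL1'
       END AS status
FROM tbl1 a
FULL OUTER JOIN tbl2 b ON a.id = b.id
WHERE a.id IS NULL OR b.id IS NULL;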
As part of our project, we need to perform table comparisons between two different databases, and I am currently looking at various options to accomplish this. One of them is doing a MINUS operation between the two tables. I have also looked at the data compare option in the TOAD utility.
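Since the tables live in two different databases, the DBMS_COMPARISON package (available from Oracle 11g) is another option worth evaluating alongside MINUS and TOAD; a sketch, with the schema, table and database link names assumed:

DECLARE
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
  consistent BOOLEAN;
BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name    => 'CMP_DEMO',
    schema_name        => 'APP',
    object_name        => 'TBL1',
    dblink_name        => 'REMOTE_DB',
    remote_schema_name => 'APP',
    remote_object_name => 'TBL1');
  consistent := DBMS_COMPARISON.COMPARE(
    comparison_name => 'CMP_DEMO',
    scan_info       => scan_info,
    perform_row_dif => TRUE);
  DBMS_OUTPUT.PUT_LINE(CASE WHEN consistent THEN 'in sync' ELSE 'differences found' END);
END;
/

Unlike a plain MINUS, it records the differing rows (see DBA_COMPARISON_ROW_DIF), though it does require an index on the compared columns.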
I am working in Forms 6i, with a 9i database. I have a data block on table t1.
table t1: name(varchar2), date(varchar2)
data block: name(varchar2), date(varchar2) [I insert the date with a time stamp]
For the date column, I am inserting the date with a time stamp. While querying data, the user enters only the date (no time stamp), and I should still be able to query the data. I tried this in the data block's WHERE condition.
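Since the column is VARCHAR2 but holds date-plus-time text, one sketch for the block's WHERE clause property compares only the date part (the stored format is assumed to be DD-MM-YYYY HH24:MI:SS, and :blk.date_item stands in for the queried item):

TRUNC(TO_DATE(date_col, 'DD-MM-YYYY HH24:MI:SS')) = TO_DATE(:blk.date_item, 'DD-MM-YYYY')

Longer term, storing the value in a real DATE column would make this simpler and safer, since TO_DATE over malformed strings fails at query time.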
I have a SQL query which joins several large tables (so indexes matter here) in an Oracle database. In the WHERE condition I use IS NULL with one of the date field values. The query takes 40 sec to run, and if I comment out this one line it takes 1 sec to run. This date field is indexed on the table, and I have learned that:
1. IS NOT NULL in the where clause uses an index.
2. IS NULL in the where clause does not use an index.
Is there any workaround to make the query faster, other than changing all the NULL date values in the table to some dummy value? In other words, can I force it to use the index?
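The underlying cause is that single-column B-tree indexes simply don't contain entirely-NULL keys, so IS NULL has nothing to scan. A common workaround is an index whose key is never entirely NULL; a sketch with assumed names:

-- a composite index with a constant second column makes NULL rows indexable
CREATE INDEX idx_big_t_datefld ON big_t (date_field, 0);

-- alternatively, a function-based index over a NULL-mapped value,
-- matched by rewriting the predicate:
CREATE INDEX idx_big_t_datefld_nvl ON big_t (NVL(date_field, DATE '9999-12-31'));
SELECT * FROM big_t WHERE NVL(date_field, DATE '9999-12-31') = DATE '9999-12-31';

With the first index in place, the original WHERE date_field IS NULL predicate can use an index range scan with no query change at all.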
SF at oracle.com/technetwork/issue-archive/o53plsql-083350.html states that you can compare two database tables of the same structure by defining a nested table type (using %ROWTYPE) and two NT variables of that type, loading the contents of each table into its respective NT variable, and then comparing them using the = operator. Having read the Oracle documentation, which states that you can only compare NTs for equality if they don't contain record types, I was surprised to read this, but figured I must be misunderstanding SF, so I tried it; it didn't work.
SCOTT@ORCL> create table empcopy3 as select * from emp;
Table created.
declare
  type emp_ntt is table of emp%rowtype;
  emp_nt1 emp_ntt;
[Code]....
But SF goes on to say he timed the execution of his NT equality method, comparing it with a SQL-only equivalent, and so I must be missing something. My understanding is that using %ROWTYPE declares a record type.
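The documented restriction matches what you saw: = works on nested tables only when the element type itself supports equality, and records (including %ROWTYPE rows) don't. A quick sketch of the case that does work, nested tables of scalars, where = is a multiset comparison that ignores element order:

DECLARE
  TYPE num_ntt IS TABLE OF NUMBER;
  a num_ntt := num_ntt(1, 2, 3);
  b num_ntt := num_ntt(3, 2, 1);
BEGIN
  IF a = b THEN
    DBMS_OUTPUT.PUT_LINE('equal');      -- prints: equal
  ELSE
    DBMS_OUTPUT.PUT_LINE('not equal');
  END IF;
END;
/

For %ROWTYPE collections, a SQL-only check (MINUS in both directions) remains the reliable way to compare two tables, which may be what SF's timed SQL-only comparison was really measuring against.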