Convert DATE To EPOCH Time Taking Care Of Daylight Savings?
Nov 8, 2012
How can I convert a DATE to epoch time while taking daylight saving time into account?
I tried the code below, but there is a difference of 36000 seconds. For example, Sysdate_To_Epoch('04-Sep-2012') gives 1346716800, whereas it should give 1346680800.
CREATE OR REPLACE FUNCTION Sysdate_To_Epoch(v_date IN DATE)
RETURN NUMBER
IS
BEGIN
  RETURN (v_date - DATE '1970-01-01') * 86400; -- treats v_date as UTC, hence the offset
END;
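One DST-aware alternative is to attach a named time zone region to the DATE, convert to UTC, and measure the interval from the epoch. A minimal sketch, assuming the input represents US/Eastern wall-clock time (the region default is an assumption; substitute your own):

CREATE OR REPLACE FUNCTION Date_To_Epoch_Tz (
  v_date IN DATE,
  v_tz   IN VARCHAR2 DEFAULT 'US/Eastern' -- assumed region; change as needed
) RETURN NUMBER
IS
  v_diff INTERVAL DAY(9) TO SECOND;
BEGIN
  -- interpret v_date in the named region (DST-aware), then diff against the UTC epoch
  v_diff := SYS_EXTRACT_UTC(FROM_TZ(CAST(v_date AS TIMESTAMP), v_tz))
            - TIMESTAMP '1970-01-01 00:00:00';
  RETURN EXTRACT(DAY FROM v_diff) * 86400
       + EXTRACT(HOUR FROM v_diff) * 3600
       + EXTRACT(MINUTE FROM v_diff) * 60
       + EXTRACT(SECOND FROM v_diff);
END;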
I am trying to get the number of seconds between March 3, 2010 and March 31, 2010 in Oracle. I am in Eastern time in the US. Everything I try keeps coming up with 30 days * 86400 seconds per day = 2592000. One hour was lost when we switched to daylight saving time, so the correct answer is 2588400.
How do I create a function in Oracle that gives the number of seconds between two dates or timestamps and is aware of the hour lost in March and the hour gained in November?
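One approach, sketched below: cast both endpoints to timestamps in a named region, convert each to UTC, and extract the interval components (the region and the literals are illustrative):

SELECT EXTRACT(DAY FROM diff) * 86400
     + EXTRACT(HOUR FROM diff) * 3600
     + EXTRACT(MINUTE FROM diff) * 60
     + EXTRACT(SECOND FROM diff) AS elapsed_seconds
FROM  (SELECT SYS_EXTRACT_UTC(FROM_TZ(TIMESTAMP '2010-03-31 00:00:00', 'US/Eastern'))
            - SYS_EXTRACT_UTC(FROM_TZ(TIMESTAMP '2010-03-03 00:00:00', 'US/Eastern')) AS diff
       FROM dual);

Because the subtraction happens in UTC, any DST transition between the two endpoints is reflected in the result.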
During daylight saving time our Scheduler jobs run either an hour earlier or an hour later, depending on the direction of the time switch.
I went through the Oracle documentation and found the suggestions below, which I have already tried in vain.
The documentation says the Scheduler first picks the time zone from the job's start date if one is provided, so I tried setting the start date using TO_TIMESTAMP_TZ('2012/01/22 18:50:00 US/Eastern', 'yyyy/mm/dd hh24:mi:ss tzr'), which did not fix the problem. I noticed that Oracle automatically converted it into TZH:TZM format.
Second solution: setting the Scheduler's default time zone to the TZR value (US/Eastern) instead of a TZH:TZM offset. I did that using the script below:

BEGIN
  DBMS_SCHEDULER.set_scheduler_attribute(
    attribute => 'default_timezone',
    value     => 'US/Eastern');
END;
Neither of the above two solutions worked for me. I have read in an article on the internet that, after setting the Scheduler's default time zone to a TZR value, the query below should return something like "4/23/2012 11:02:13 US/Eastern", but I am still getting "4/23/2012 11:02:13.715816000 AM -04:00".

select dbms_scheduler.stime from dual;
I have to add the "4*3600" to get the date to show up correctly, but even then the date sometimes comes out wrong. If the time falls in the morning, the date shows up as the previous day. I am sure this is due to the offset I am adding in the above formula. If I don't add the 4-hour offset, the date shows up 4 hours off.
I have a Java UI and an Oracle 10g database, and the database resides in India. Users access this application from across the world. Date values are stored in IST.
The requirement is that whenever a user opens the application in the USA, the date value should be converted into their local time zone. Is there any way in Oracle to convert and display the date value according to the user's local time zone?
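If the stored DATE values represent IST wall-clock time, one minimal sketch is to attach the Asia/Kolkata region and render the value in the client session's time zone (orders and order_date are illustrative names):

SELECT FROM_TZ(CAST(order_date AS TIMESTAMP), 'Asia/Kolkata')
         AT TIME ZONE SESSTIMEZONE AS local_order_date
FROM   orders;

This relies on each client session's time zone being set correctly (JDBC normally sets it from the JVM); alternatively, pass an explicit region name in place of SESSTIMEZONE.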
An external reporting application (SQL Server Reporting Services) sends in some comma-separated parameter values, which have to be queried against a table with 6-10 million records.
The comma-separated value can be 5000-6000 characters long, based on user input in the front-end application. The application sends the value comma-separated, i.e. like
'AA1-11101,AA2-34346,AA4-534399,.....'. At a time the application can send up to 500 values, each 10 characters long, i.e. the maximum length can be up to 5000. I used a CLOB to handle this, since the length can be 5000 and a VARCHAR2 literal can only hold 4000 characters in SQL.
But verifying the CLOB against the table using INSTR takes more time than with a VARCHAR2.
I found that VARCHAR2 doesn't take much time. Is it a good idea to use VARCHAR2 instead of CLOB as the PL/SQL procedure parameter, since a PL/SQL VARCHAR2 can hold values up to 32767 characters long?
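Beyond the VARCHAR2-vs-CLOB question, one common alternative to INSTR matching is to split the list into rows and join, which lets the optimizer use an index or hash join against the big table. A sketch (:p_list, big_table, and code_col are illustrative names):

SELECT t.*
FROM   big_table t
JOIN  (SELECT TRIM(REGEXP_SUBSTR(:p_list, '[^,]+', 1, LEVEL)) AS code
       FROM   dual
       CONNECT BY LEVEL <=
              LENGTH(:p_list) - LENGTH(REPLACE(:p_list, ',')) + 1) v
  ON   t.code_col = v.code;

The LENGTH-based item count avoids REGEXP_COUNT, so the sketch should also work on 10g.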
I want to know how I can find which query is taking more time. For example, some queries are run from Unix, Java, Toad, and SQL*Plus, and one query is taking much more time to execute. How can I identify that query and get all its details?
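One common starting point is the shared pool: V$SQL records cumulative elapsed time per statement, and its MODULE column often reveals which client (SQL*Plus, Toad, JDBC) issued it. A sketch of a top-10 listing:

SELECT *
FROM  (SELECT sql_id,
              module,
              executions,
              ROUND(elapsed_time / 1e6)                           AS total_elapsed_sec,
              ROUND(elapsed_time / GREATEST(executions, 1) / 1e6) AS sec_per_exec,
              sql_text
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE  ROWNUM <= 10;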
I have a query that executes fast in the dev environment but takes a very long time in the QA environment. Under what circumstances does this behaviour occur? QA does have more data than dev, but the query still takes a long time even for one row, i.e. with ROWNUM <= 1. What should I check?
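One way to see where the time actually goes is to capture the runtime plan in each environment and compare estimated versus actual rows per step. A sketch (the query and names are illustrative; run it in the same session before displaying the plan):

SELECT /*+ GATHER_PLAN_STATISTICS */ col1 FROM big_table WHERE ROWNUM <= 1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

Large gaps between E-Rows and A-Rows in the QA plan usually point at stale or missing statistics.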
My Oracle database version is 11.2.0.3.0. I have one schema, and in that schema I have three tables with the same structure and the same data, but with different names.
The problem is that a select query on the first table takes 5 seconds, on the second table 0 seconds, and on the third table 10 seconds.
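Since the data is identical, differences in segment size (high-water mark) or stale statistics are worth checking first. A sketch (TAB1/TAB2/TAB3 are illustrative names):

SELECT segment_name, blocks, ROUND(bytes / 1024 / 1024) AS mb
FROM   user_segments
WHERE  segment_name IN ('TAB1', 'TAB2', 'TAB3');

BEGIN
  DBMS_STATS.gather_table_stats(ownname => USER, tabname => 'TAB1', cascade => TRUE);
END;

A table that was once much larger and then deleted from can still carry a high-water mark that forces full scans to read empty blocks.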
We are firing a normal DROP command on our database, and the database version is 10.2.0.4. The database is running on AIX v5. The command is taking more time than usual.
When I monitor the session, I can see that a call is being made to the procedure "aw_drop_proc". Could this be what is taking more time than usual?
We do not have any partitions on the nested tables. We have a pack of tables, and we drop this pack through a procedure. The pack comprises nested tables and normal tables. Dropping a nested table takes around 6 seconds (for a table with no rows), while a normal table (with no rows) takes 17 milliseconds. We have a partition on the normal table.
The same operation on Windows takes much less time compared to AIX.
I have upgraded an Oracle database from 9i to 11g using the export and import utilities. After migration we are facing a performance issue in report generation: the first execution of a report takes a very long time, and when we generate the same report 2-3 times the execution time improves considerably over the first execution.
Two days back I restarted the database and found the same issue. There are around 300 reports, and it is not possible to generate all of them 2-3 times every time we restart the database.
My database version is as follows:

Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production

We have a Data Guard setup as well. Huge archive logs are being generated in our primary database. The archive logs ship to the standby with no delay, but applying them takes more time in our physical standby database. Why is it taking more time to apply the archive logs (sync) on the standby? What could the possible reasons be? Note: the standby redo logs are the same size as the primary's online redo logs, and there is one more standby redo log group than online redo log groups on the primary. I need to report to my higher level what is causing the delay in applying the archive logs.
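To quantify the lag before escalating, the standby itself exposes transport and apply lag; media recovery throughput also shows up in V$RECOVERY_PROGRESS. A sketch to run on the standby:

SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');

A small transport lag alongside a growing apply lag confirms the bottleneck is on the apply (MRP) side rather than shipping.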
UPDATE tab1
SET    col1 = (SELECT col2 FROM tab2 WHERE tab1.id = tab2.id);

tab1 has around 10,000 rows; tab2 has around 1,700,000 rows and has a primary key on column id. This query takes around 20 seconds to execute. I checked the execution plan, and most of the time goes to TABLE ACCESS BY INDEX ROWID. I checked the stats on tab2; they are just three days old.
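A commonly suggested rewrite is a MERGE, which lets the optimizer join the two tables once (for example with a hash join) instead of probing tab2's index once per tab1 row. A sketch, assuming each tab1.id has at most one match in tab2:

MERGE INTO tab1 t1
USING (SELECT id, col2 FROM tab2) t2
ON    (t1.id = t2.id)
WHEN MATCHED THEN UPDATE SET t1.col1 = t2.col2;

Note one semantic difference: the MERGE leaves non-matching tab1 rows untouched, whereas the original update sets their col1 to NULL.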
We are inserting data using JDBC (a Java program) into an Oracle 11gR2 database and into TimesTen-to-Oracle (using an AWT cache group). In practice, insertion into Oracle is faster than into the TimesTen cache database.
Timesten version - TimesTen Release 11.2.1.7.0 (64 bit Linux/x86_64)
1. It is a client/server model.
2. The CPU has 4 cores and we are using 3 cores to insert the data.
3. Perm and Temp sizes are big enough compared to the data size.
4. auto commit = 0
5. durable commit = 0
6. PassThrough = 1
7. LogBufParallelism = 3
8. LogPurge = 1
9. LockWait = 0.1
In my code I am using a DELETE statement that is taking too much time to execute.
The statement is as follows:
DELETE FROM TRADE_ORDER_EMP_ALLOCATION t
WHERE (ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT)
  IN (SELECT ARTEMIS_SOURCE_SYSTEM_ID, NM_ARTEMIS_SOURCE_SYSTEM, CD_BOOK_KEY, ACTIVITY_DT
      FROM   LOAD_TRADE_ORDER
      WHERE  IND_IS_BAD_RECORD = 'N');
Every column in the IN clause and the select list has an index on it.
The number of rows to be deleted varies each run (it may be in the hundreds, thousands, or hundreds of thousands), so I am unable to use a BITMAP index on the LOAD_TRADE_ORDER column IND_IS_BAD_RECORD, though it contains only a few distinct values.
The table TRADE_ORDER_EMP_ALLOCATION also has a RANGE partition on the column ARTEMIS_SOURCE_SYSTEM_ID. I am enclosing the table scripts with the indexes and partitions.
Is there a way to make the above delete statement execute faster?
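It might be worth testing an EXISTS form with a hash semi-join hint, so that LOAD_TRADE_ORDER is scanned once rather than probed through indexes row by row. A sketch (the HASH_SJ hint is a suggestion to benchmark, not a guaranteed win):

DELETE FROM TRADE_ORDER_EMP_ALLOCATION t
WHERE EXISTS
  (SELECT /*+ HASH_SJ */ 1
   FROM   LOAD_TRADE_ORDER l
   WHERE  l.IND_IS_BAD_RECORD = 'N'
   AND    l.ARTEMIS_SOURCE_SYSTEM_ID = t.ARTEMIS_SOURCE_SYSTEM_ID
   AND    l.NM_ARTEMIS_SOURCE_SYSTEM = t.NM_ARTEMIS_SOURCE_SYSTEM
   AND    l.CD_BOOK_KEY              = t.CD_BOOK_KEY
   AND    l.ACTIVITY_DT              = t.ACTIVITY_DT);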
I am working on Oracle 11g. I have one normal insert procedure:
CREATE OR REPLACE PROCEDURE test2 AS
BEGIN
  INSERT INTO first_table
    (citiversion, financialcollectionid, dataitemid, dataitemvalue,
[code]....
I am processing 1 lakh (100,000) rows. Tell me why bulk collect is taking more time. According to my knowledge it should take less time. Do I need to check any parameter?
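BULK COLLECT alone only speeds up the fetch side; if each row is still inserted with an individual INSERT inside the loop, the insert side stays row-by-row. A minimal sketch pairing LIMIT-based fetching with FORALL (source_table is an illustrative name, and the example assumes first_table matches its row structure):

DECLARE
  CURSOR c IS SELECT * FROM source_table;
  TYPE t_rows IS TABLE OF source_table%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    -- one array insert per batch instead of one insert per row
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO first_table VALUES l_rows(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;

The LIMIT clause also caps PGA memory, which matters at 100,000 rows.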
create or replace PROCEDURE TESTPERFORMANCE (
  o_statuscode             OUT NUMBER,
  o_statusdescription      OUT VARCHAR2,
  starttime                OUT timestamp,
  time_after_query_TESTJOB OUT timestamp,
[Code]...
This procedure takes around 35 minutes when the cursor loops over 35,000 records and the TESTJOBTRANSACTIONS table has 90,000 records. How can I reduce the execution time?
I want to use a function in a join clause, so I went for a pipelined function (using a FOR loop to get the records and one more loop to fetch them into a table-type variable). I achieved what I required, but the problem is that it takes too much time to fetch the data. Is there any other approach that returns table records without a pipelined function?
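One alternative, sketched below, is to fill the whole collection with a single BULK COLLECT and return it from a plain (non-pipelined) function; the TABLE() operator can then join it like a regular row source. Type, function, and table names here are illustrative:

CREATE OR REPLACE TYPE t_id_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION get_ids RETURN t_id_tab IS
  l_ids t_id_tab;
BEGIN
  -- one set-based fetch instead of row-by-row PIPE ROW calls
  SELECT id BULK COLLECT INTO l_ids FROM source_table;
  RETURN l_ids;
END;
/
SELECT d.*
FROM   TABLE(get_ids()) ids
JOIN   detail_table d ON d.id = ids.COLUMN_VALUE;

The trade-off is memory: the whole collection is materialized at once, which is fine for modest row counts but is exactly what pipelining was designed to avoid for very large sets.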
select serialnumber
from   product
where  productid in
      (select /*+ full parallel(producttask 16) */ productid
       from   producttask
       where  startedtimestamp > to_date('2013-07-04 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
       and    startedtimestamp < to_date('2013-07-05 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
       and    producttasktypeid in