Conversion Hours Incorrect When Reverting To Actual Date
May 18, 2011
I can't seem to understand why the hour is incorrect. In the query below, "dte_computation_on_data" reproduces the old function they use to convert a date to a number before inserting it into the table. The problem is that when I revert that number back to the actual date, the hour is incorrect.
CODE
SELECT -- my test to revert the number back to a date/time using their formula
       to_char(TO_DATE('19700101', 'YYYYMMDD') + (tb1.dte_computation_on_data / 86400), 'MM/DD/YYYY')
       || ' ' ||
       to_char(to_date(mod(tb1.dte_computation_on_data, 86400), 'sssss'), 'hh24:mi:ss') revert_test,
       systimestamp,
       tb1.dte_computation_on_data
FROM  ( SELECT -- the old function's formula that converts the date to a number stored in the row
               floor((CAST(SYS_EXTRACT_UTC(systimestamp) AS DATE) - TO_DATE('19700101', 'YYYYMMDD')) * 86400)
                 dte_computation_on_data
          FROM dual ) tb1;
I have to use the "4*3600" in order to get the date to show up correctly, but even then the date sometimes comes up wrong. If the date occurs in the morning, then the date shows up as the previous day. I am sure this is probably due to the offset I am adding in the above formula. If I don't add the 4 hour offset, then the date shows up 4 hours off.
I want to change a table column's datatype from DATE to NUMBER, with the already existing data converted in the process. Is there any possibility of doing this? I tried the statements below, but the data does not get converted. The Julian format works to a degree, but I want the data to come out in GMT format.
The scenario I tried is this:
ALTER TABLE Contacts ADD ALERT_DATE1 NUMBER(20,0)
/
UPDATE Contacts SET ALERT_DATE1 = TO_NUMBER(ALERT_DATE)
/
ALTER TABLE Contacts DROP COLUMN ALERT_DATE
/
ALTER TABLE Contacts RENAME COLUMN ALERT_DATE1 TO ALERT_DATE
/
but the second statement fails.
The Julian format attempt looks like this:
SELECT sysdate, TO_CHAR(sysdate, 'J'), TO_DATE(TO_CHAR(sysdate, 'J'),'J') FROM dual;
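For comparison, a minimal sketch of the conversion step I would have expected for a GMT/epoch-style number, assuming "GMT format" means seconds since 1970-01-01 and that ALERT_DATE already holds GMT times (the Contacts/ALERT_DATE names are taken from the statements above):
CODE
ALTER TABLE Contacts ADD (ALERT_DATE1 NUMBER(20,0));

-- a DATE minus a DATE gives days, so multiply by 86400 to get seconds
UPDATE Contacts
   SET ALERT_DATE1 = ROUND((ALERT_DATE - DATE '1970-01-01') * 86400);

ALTER TABLE Contacts DROP COLUMN ALERT_DATE;
ALTER TABLE Contacts RENAME COLUMN ALERT_DATE1 TO ALERT_DATE;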
I am converting data from an old Paradox table to a new Oracle table. One of the problems I'm having is incompatibility between date and time formats:
some columns contain times in the format "00:00:00", e.g. "15:00:00"; some columns have dates in the format "dd/mm/yyyy", e.g. "21/08/2000"; and some columns have both date and time, e.g. "05/09/2000 15:49:39".
Currently I have the data held in tables within an Access database, and in CSV format.
E.g., I have dates like 03/04/2010 which I need to become 03-APR-10.
How can I get the above into Oracle date formats? There are over 1,000 records, so manual conversion is out of the question.
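Not a full answer, but a sketch of the TO_DATE format masks that match the three string shapes listed above (the literal values are just the examples from the post):
CODE
SELECT TO_DATE('21/08/2000', 'DD/MM/YYYY')                     AS date_only,
       TO_DATE('15:00:00', 'HH24:MI:SS')                       AS time_only,   -- date part defaults to the 1st of the current month
       TO_DATE('05/09/2000 15:49:39', 'DD/MM/YYYY HH24:MI:SS') AS date_and_time
FROM dual;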
I recently became involved with databases, and I've come across a small obstacle. I have strings that represent dates; they are very oddly formatted and I need to store them as dates. The string format looks like this: 'Monday, May the 13th of 2001'
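A rough sketch of one way to parse that shape, assuming the strings always follow 'Dayname, Month the DDth of YYYY'. The day-name prefix is dropped rather than validated, and the ordinal suffix is stripped because the TH modifier is TO_CHAR-only:
CODE
WITH src AS (SELECT 'Monday, May the 13th of 2001' AS odd_date FROM dual)
SELECT TO_DATE(
         REGEXP_REPLACE(                                         -- 2) drop the st/nd/rd/th suffix
           REGEXP_REPLACE(odd_date, '^[[:alpha:]]+, ', ''),      -- 1) drop "Monday, "
           '([0-9]+)(st|nd|rd|th)', '\1'),
         'Month "the" DD "of" YYYY') AS converted_date
FROM   src;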
I have been given some data in an Excel sheet to be uploaded into an Oracle table. The dates are in Julian; a Julian date in the Excel sheet looks like '110048'.
In the Excel file I found that the cell was formatted as General, and when I changed the formatting to Date I got the result '19/04/2211'.
Tell me a way to convert this Julian value to mm/dd/yyyy format so it can be inserted into a table in Oracle.
I tried this:
SQL> SELECT to_char(to_date(to_char(110048), 'J'),'DD/MM/YYYY') FROM dual;
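In case '110048' is not an astronomical Julian day at all but a JDE-style CYYDDD value (century flag / two-digit year / day of year), which is only my guess for a six-digit "Julian" date, the conversion would look more like this sketch:
CODE
-- 110048 -> 1900 + 110 = year 2010, day 048 of that year
SELECT TO_CHAR(
         TO_DATE(TO_CHAR(1900 + TRUNC(110048 / 1000)) || LPAD(MOD(110048, 1000), 3, '0'),
                 'YYYYDDD'),
         'MM/DD/YYYY') AS converted
FROM dual;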
create table sand_program1(prog_code varchar2(20), filing_date number(15));
Result: both tables are created.
Now I create the procedure below to check whether filing_date is greater than prog_end_dt. If filing_date is greater than prog_end_dt, it should reach the first "dbms_output.put_line" message; otherwise it should reach the second "dbms_output.put_line" message.
Here's my procedure:
create or replace procedure test_sand(p_program_cd in number) is
  v_prog_end_dt date;
  v_filing_date number;
begin
[code]....
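A minimal sketch of what I understand the procedure to be doing. It assumes a second table called sand_program holding prog_end_dt, and that filing_date is stored as a YYYYMMDD number; both are guesses, since only sand_program1 is shown above, and I have made the parameter a VARCHAR2 to match prog_code:
CODE
CREATE OR REPLACE PROCEDURE test_sand(p_program_cd IN VARCHAR2) IS
  v_prog_end_dt DATE;
  v_filing_date NUMBER;
BEGIN
  SELECT prog_end_dt INTO v_prog_end_dt
    FROM sand_program WHERE prog_code = p_program_cd;       -- assumed table

  SELECT filing_date INTO v_filing_date
    FROM sand_program1 WHERE prog_code = p_program_cd;

  IF TO_DATE(TO_CHAR(v_filing_date), 'YYYYMMDD') > v_prog_end_dt THEN
    DBMS_OUTPUT.PUT_LINE('Filing date is after the program end date');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Filing date is on or before the program end date');
  END IF;
END test_sand;
/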
#runs the selected process
${ORACLE_HOME}/bin/sqlplus -s ${USER_PASS} >${TMP_FILE} <<EOF
set pause off
set verify off
set pagesize 0
set linesize 2000
set timing off
[code].........
I am passing a Date to the Oracle Procedure in `date +'%b_%d-%H:%M:%S'` format.
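If it helps, a sketch of the matching Oracle format mask for that shell date string (the sample value is hypothetical; note that %b_%d-%H:%M:%S carries no year, so TO_DATE defaults to the current year):
CODE
SELECT TO_DATE('May_18-14:30:05', 'Mon"_"DD-HH24:MI:SS') AS parsed_date FROM dual;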
Due to some business requirements, a table column needs to change from DATE to TIMESTAMP in order to handle milliseconds.
1> When I alter the column on a table with 150 million records, will there be a conversion of the existing data? Is there a recommended way to convert the column? Bear in mind this column is used as part of a composite PK.
2> There is an interfacing application which connects, copies the data to its own system, and uses the DATE type. Will that application be able to continue working without any changes if it does not care about the milliseconds?
3> Will there be a performance impact on an existing application that uses the date column for sorting?
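For point 1, a sketch of the in-place change (hypothetical table and column names). DATE to TIMESTAMP is a supported MODIFY even with existing data, although on 150 million rows, and on a PK column, the time and redo are worth testing on a copy first:
CODE
ALTER TABLE big_table MODIFY (event_dt TIMESTAMP(3));   -- 3 fractional digits keeps milliseconds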
I have a standard workflow process which was started but completed without performing all of its steps. The process should have generated two notifications (Comments & Approval) but generated only one, and as soon as the user responded to that notification the process completed without invoking the approval step. Now I need the workflow to be rolled back to the initial step so the whole process can be restarted.
When we create a NUMBER column whose scale is greater than its precision, the table definition is accepted, but we are then unable to insert any values into the table. How does Oracle store the value internally?
SQL> drop table precision_test;
Table dropped
SQL> create table precision_test(name number(2,5));
Table created
SQL> insert into precision_test values (1);
insert into precision_test values (1)
[code]....
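The way I read NUMBER(2,5), sketched below: the value is rounded to 5 decimal places and may then contain at most 2 significant digits, so the largest storable value is 0.00099 and anything from 0.001 upward overflows the precision:
CODE
CREATE TABLE precision_test (name NUMBER(2,5));
INSERT INTO precision_test VALUES (1);          -- ORA-01438: value larger than specified precision
INSERT INTO precision_test VALUES (0.00012);    -- works: 2 significant digits within scale 5
INSERT INTO precision_test VALUES (0.000123);   -- works: rounded to 0.00012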
I have a procedure which executes every Monday. It was not executed last Monday. Can I execute the procedure on some other day without changing the actual procedure?
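If the Monday run is driven by a DBMS_SCHEDULER job (an assumption; the job and procedure names below are hypothetical), it can be fired on demand without touching the procedure, or the procedure can simply be called directly:
CODE
BEGIN
  DBMS_SCHEDULER.RUN_JOB(job_name => 'WEEKLY_MONDAY_JOB', use_current_session => TRUE);
END;
/
-- or just call it yourself: EXEC my_weekly_procedure;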
Update employee e set e.dept_id = (select d.dept_id from dept d
[Code].....
The above is not the exact code which I am executing but an exact replica of the logic implied in my code.
Now, when I display the value of 'rows_updated', it returns a value greater than 0 (i.e. 3), but it should ideally return 0 since there are no records matching the condition: (select d.dept_id from dept d where e.dept_name = d.dept_name)
So I executed the statement: select count(*) from employee e where emp_id = 1234 and exists (select 1 from emp_his ee where e.emp_id = ee.emp_id) and the result was 3, which is the same value returned by %rowcount.
Why is this happening? I am getting incorrect values in %rowcount for the number of rows updated.
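For reference, a sketch of how I read the replica (column names partly guessed from the snippets above): because the WHERE clause does not itself filter on the correlated subquery, every employee row that passes the WHERE is updated, with dept_id set to NULL where the subquery finds nothing, and %ROWCOUNT counts all of those rows.
CODE
UPDATE employee e
   SET e.dept_id = (SELECT d.dept_id
                      FROM dept d
                     WHERE e.dept_name = d.dept_name)   -- NULL when no match
 WHERE e.emp_id = 1234
   AND EXISTS (SELECT 1 FROM emp_his ee WHERE e.emp_id = ee.emp_id);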
I currently have around 560 rows of bad data where
GL_ENCUMBERED_DATE and TO_CHAR(PRD.GL_ENCUMBERED_DATE,'YYYY') display correctly.
The problem lies in
PRD.GL_CANCELLED_DATE, which displays correctly (example: 30-APR-10), but TO_CHAR(PRD.GL_CANCELLED_DATE, 'DD-MON-YYYY') for the same row displays as 30-APR-0010.
I have separated out the incorrect year with:
to_char(prd.gl_cancelled_date,'YYYY'), where each of the 560 rows displays 0010 instead of 2010. I am unsure whether arithmetic with dates is allowed within SQL, but am curious whether the operation date + number will result in a date.
If so, how could one go about taking a 0010 date and adding 2000 years to make it 2010 for PRD.GL_CANCELLED_DATE?
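To the last question: yes, date + number yields a DATE offset by that many days, but adding 2000 years is easier with ADD_MONTHS. A sketch (the table behind the PRD alias is my guess; the column name and year filter are from the post):
CODE
UPDATE po_req_distributions_all prd     -- hypothetical table behind the PRD alias
   SET prd.gl_cancelled_date = ADD_MONTHS(prd.gl_cancelled_date, 2000 * 12)
 WHERE TO_CHAR(prd.gl_cancelled_date, 'YYYY') = '0010';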
I would like to ask the experts here: how can I insert data into an Oracle table where the value is 03? The case is this: the column is defined with the VARCHAR2(50) data type, and I have a CSV file where the value is 03, but when it is loaded into the Oracle table the value becomes 3 instead of 03.
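A small sanity check, with hypothetical table and column names: when the value is inserted or bound as a string the leading zero survives; it is only lost when something treats 03 as a number first (for example Excel when saving the CSV, or an unquoted numeric field in the load):
CODE
INSERT INTO my_table (my_col) VALUES ('03');          -- stored as '03'
INSERT INTO my_table (my_col) VALUES (TO_CHAR(03));   -- stored as '3': the zero is already gone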
I look after a team of DBAs and I have a request to free up space on our very expensive storage system. However, the answers on how to do this differ, and I'd like to ask for external input. Not being a technical person, I see the world in black and white: you delete data and you free space. But after doing a lot of reading I understand this is not the case; you essentially create fragmentation within the datafile, so the database has lots more space to write into, but no space is actually freed, and even shrinking the file does not free space at the OS level without a reorg. Is that right?
We have, as an example, a DB with 2 billion rows of data in one table; no partitioning, just one large table. We have worked out that we can probably delete 1 billion rows, or, even better, keep only a rolling 3-month window of data. What would be the suggestion for deleting this data and reclaiming the disk space so that additional space actually becomes available at the OS level?
How about deleting the data and then reclaiming the space? From my reading it looks like it might be something like: delete, then create new tablespace partitions from that data. In theory this would create a new tablespace in newly created datafiles, the data would be reorganised and take up less physical space, and when completed you would point to the newly created partitions and drop the old tables.
How have others done this? It must be a common problem, and people must have come up with different solutions. What commands and procedures have been used?
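One commonly described approach, as a sketch only (all names, the retention column, and the datafile path are assumptions): copy the rows you want to keep into a new table in a smaller tablespace, drop the old table, and then resize or drop the old datafiles, since that is the point at which space actually comes back at the OS level.
CODE
CREATE TABLE big_table_keep
  TABLESPACE new_small_ts
  AS SELECT *
       FROM big_table
      WHERE event_date >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -3);   -- rolling 3-month window

-- recreate indexes, constraints and grants on big_table_keep here

DROP TABLE big_table PURGE;
RENAME big_table_keep TO big_table;

-- OS-level space returns only once the old tablespace's datafiles are resized or dropped, e.g.
-- ALTER DATABASE DATAFILE '/u01/oradata/old_ts_01.dbf' RESIZE 10G;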
I installed 11.1.0.6 on Linux Red Hat 5.5, then upgraded to 11.1.0.7. However, when I issue sqlplus /nolog and do select banner from v$version, it shows 11.1.0.6, although the install and the upgrade were both successful. I wonder why it does not show 11.1.0.7. I upgraded using the installer and not DBUA; could this be the reason?
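One way to narrow this down, purely as a diagnostic sketch: compare what the connected binaries report with what the data dictionary says was upgraded.
CODE
SELECT banner FROM v$version;
SELECT comp_name, version, status FROM dba_registry;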
The module_name variable should be assigned the value 'covenants', because :GLOBAL.FORMS_PATH is '' (nothing). I stepped through the code, and in the debugger's system values window there is nothing in :GLOBAL.FORMS_PATH. But instead it gives me 'FORM_NAMEcovenants', where 'FORM_NAME' is the form the menu was called from.
We just upgraded to 11g and have run into incorrect results for some of our LEFT JOINs. If the table, view, subquery, or WITH clause being LEFT JOINed to contains any constants, the results are not correct.
For example, a test (nonsensical) view such as the following is created:
create or replace view fyvtst1 as
select spriden_pidm as fyvtst1_pidm,
       'Sch' as fyvtst1_test
from   spriden
where  spriden_last_name like 'Sch%';
When I run the following query, I get correct results; that is, only those with "Sch" starting their last name are listed.
select spriden_pidm, spriden_last_name, fyvtst1_pidm, fyvtst1_test from spriden join fyvtst1 on fyvtst1_pidm = spriden_pidm ;
However, when I change the JOIN to a LEFT JOIN, the last column contains "Sch" for all rows, instead of NULL:
select spriden_pidm, spriden_last_name, fyvtst1_pidm, fyvtst1_test from spriden left join fyvtst1 on fyvtst1_pidm = spriden_pidm ;
We've discovered other quirky things related to this. A WITH clause with similar logic to the above view, when LEFT JOINed to a table, will also cause the constant to appear in every row, instead of NULL everywhere except where there is an actual join. But when additional columns are added to the WITH, it behaves correctly.
This is easy enough to rewrite, but we have WITHs and views containing constants in numerous places, and we cannot hope to track down every single one before the incorrect results are used.
Finally, the NO_QUERY_TRANSFORMATION hint forces the query to work correctly. Unfortunately, it has a huge negative performance impact (one query ran for an hour, vs. 1 second in 10g).
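As a possibly narrower workaround than NO_QUERY_TRANSFORMATION (assuming the wrong results come from view merging, which I cannot confirm), the NO_MERGE hint limited to the one view might be worth testing:
CODE
SELECT /*+ NO_MERGE(fyvtst1) */
       spriden_pidm, spriden_last_name, fyvtst1_pidm, fyvtst1_test
FROM   spriden
LEFT JOIN fyvtst1 ON fyvtst1_pidm = spriden_pidm;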
CREATE TABLE tmp_guid (
  c1 raw(16) not null,
  c2 raw(16) not null
);
begin
[code]...
It seems that a combination of a unique index and extended stats is to blame. Removing either one of them causes the query to produce correct results. The extended stats basically capture the fact that, despite being unique, c1 depends on c2.
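To test that combination, one option (a sketch; the owner and the column-group order are guesses) is to drop the extension and re-run the query:
CODE
BEGIN
  DBMS_STATS.DROP_EXTENDED_STATS(ownname   => USER,
                                 tabname   => 'TMP_GUID',
                                 extension => '(C1,C2)');
END;
/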
I am writing a procedure for the front end. The end users need to insert multiple rows of data into history tables in the database (11g). My problem is that the number of actual parameters is not fixed: this time it could be 5, next time it could be 12. I currently pass the actual parameter (P_id, number) as one string, '2, 4, 5, 7, 8'; the procedure executes successfully but does not insert any data into the history table.
See my procedure below (the base table has CLOB data, so I have to use insert ... select *). I tried to_number(CONTACT_MSG_ID), but it doesn't work well:
PROCEDURE ARCHIVE_XREF_CONT_EMAIL(P_ID IN VARCHAR2) IS
BEGIN
  INSERT INTO TRC_XREF_CONT_EMAIL_MSGS_HIST
  SELECT * FROM TRC_XREF_CONT_EMAIL_MSGS
[code].......
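A sketch of how the string could be split inside the procedure instead of being compared as a whole, assuming CONTACT_MSG_ID is the numeric key being matched (the REGEXP_SUBSTR / CONNECT BY idiom below is one common way to turn '2, 4, 5, 7, 8' into rows):
CODE
CREATE OR REPLACE PROCEDURE archive_xref_cont_email(p_id IN VARCHAR2) IS
BEGIN
  INSERT INTO trc_xref_cont_email_msgs_hist
  SELECT *
    FROM trc_xref_cont_email_msgs m
   WHERE m.contact_msg_id IN
         (SELECT TO_NUMBER(TRIM(REGEXP_SUBSTR(p_id, '[^,]+', 1, LEVEL)))
            FROM dual
          CONNECT BY REGEXP_SUBSTR(p_id, '[^,]+', 1, LEVEL) IS NOT NULL);
END archive_xref_cont_email;
/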
My problem is that I have three tables (TEST_TBL1, TEST_TBL2, TEST_TBL3). TEST_TBL2 and TEST_TBL3 are in a remote database and I use a database link to join them. The following query returns an incorrect result (it seems that it ignores the WHERE clause):
SELECT * FROM TEST_TBL1 JOIN TEST_TBL2@db_remote USING (KEY1) JOIN TEST_TBL3@db_remote USING (KEY2) WHERE KEY1=XXX OR KEY2=YYY;
I am on 11gR1 (11.1.0.7).
FOR EXAMPLE:
Local database: CREATE TABLE TEST_TBL1 ( KEY1 NUMBER(5) NOT NULL,