SQL & PL/SQL :: Precision Greater Than Actual Value
Apr 6, 2012
When we try to create a NUMBER column whose scale is greater than its precision, the table definition is accepted, but we are unable to insert any values into the table. How does it store the value internally?
SQL> drop table precision_test;
Table dropped
SQL> create table precision_test(name number(2,5));
Table created
SQL> insert into precision_test values (1);
insert into precision_test values (1)
[code]....
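For what it's worth, a scale greater than the precision is legal: NUMBER(2,5) means two significant digits, both of which must fall in the last two of five decimal places, so only values that round into -0.00099 .. 0.00099 fit, and 1 is far too large. A quick sketch against the same table:

SQL> insert into precision_test values (0.00012);  -- two significant digits within 5 decimal places
1 row created.

SQL> insert into precision_test values (0.000004); -- rounds to the scale (5 places), stored as 0
1 row created.

Internally, Oracle first rounds the value to the scale, then checks that the significant digits fit the precision; that is why the definition is accepted while ordinary values are rejected.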
2114 rows selected.

I got the same result if I use NVL(o.scn,0) rather than TO_NUMBER(o.scn). I can't find out why this happens. Apparently, the ">" condition doesn't match when the difference is too small.
AFAIK, ora_rowscn is a NUMBER, while my "scn" column is a NUMBER(12), which should be sufficient to store my DB's or the remote DB's SCN.
I am trying to run the COPY command to copy data from one database to another. I'm getting this error while running the COPY command: "ORA-01727: numeric precision specifier is out of range (1 to 38)"
I have a table T1. In that table I have a column ID, to which I gave the datatype NUMBER(2,2). When I try to insert a value I'm getting an error.
SQL> desc t1;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER(2,2)
 NAME                                               VARCHAR2(10)
 NAME1                                              NUMBER

SQL> insert into t1(id) values(2);
insert into t1(id) values(2)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into t1(id) values(2.5);
insert into t1(id) values(2.5)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into t1(id) values(10.15);
insert into t1(id) values(10.15)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

SQL> insert into t1(id) values(10.5);
insert into t1(id) values(10.5)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
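For the record, NUMBER(2,2) has as many scale digits as precision digits, so every significant digit must sit to the right of the decimal point; the column can only hold values strictly between -1 and 1 (at most 0.99). A quick sketch:

SQL> insert into t1(id) values(0.25);
1 row created.

SQL> insert into t1(id) values(0.999);
insert into t1(id) values(0.999)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

The second insert fails because the value is first rounded to scale 2, giving 1.00, which needs a digit to the left of the decimal point.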
Procedure countyname(i_lat IN Number, i_lon IN NUMBER, o_countyname OUT VARCHAR2);
The procedure works fine when the input parameter values have low precision, like 30.653, -618.765, etc. It fails if the input parameter values are more precise, like 35.694872140886... I think an IN NUMBER parameter can only take up to a certain precision.
Is there any way I can specify the precision for a NUMBER input parameter in a procedure?
ex: Procedure countyname(i_lat IN Number(30,10), i_lon IN NUMBER(30,10), o_countyname OUT VARCHAR2);
When I tried the above declaration it doesn't compile; it gives a PLS-00103 error.
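The PLS-00103 is expected: PL/SQL formal parameters cannot carry a length, precision, or scale. A hedged workaround sketch, using a constrained SUBTYPE as the parameter type (the package and subtype names here are made up). Note that PL/SQL does not enforce the size constraint on parameters, so the subtype mainly documents intent; if the range matters, validate inside the body:

CREATE OR REPLACE PACKAGE geo_pkg AS
  SUBTYPE coord_t IS NUMBER(30,10);  -- hypothetical constrained subtype
  PROCEDURE countyname(i_lat IN coord_t, i_lon IN coord_t, o_countyname OUT VARCHAR2);
END geo_pkg;
/

Also note that a plain IN NUMBER does not truncate precise values like 35.694872140886; NUMBER keeps up to 38 significant digits, so a failure with precise inputs likely lies inside the procedure, not in the parameter type.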
We need to truncate a number based on the decimal value. I tried to truncate the number based on the precision using the CAST function, and I got the error "value larger than specified precision allowed for this column".
create table TEST_NUMBER (id number(4,1));
insert into TEST_NUMBER values(1234.789888888888);
[code]...
ORA-01438: value larger than specified precision allowed for this column
01438. 00000 - "value larger than specified precision allowed for this column"
*Cause:    When inserting or updating records, a numeric value was entered
           that exceeded the precision defined for the column.
*Action:   Enter a value that complies with the numeric column's precision,
           or use the MODIFY option with the ALTER TABLE command to expand
           the precision.
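A sketch of why CAST cannot rescue this particular value: TRUNC controls the scale (digits after the decimal point), but the column's precision still caps the total digits, and NUMBER(4,1) tops out at 999.9. Either the value must lose its integer digits or the column must be widened:

SELECT TRUNC(1234.789888888888, 1) FROM dual;  -- 1234.7: still five significant digits
-- insert into TEST_NUMBER values (TRUNC(1234.789888888888, 1));  -- still raises ORA-01438

ALTER TABLE TEST_NUMBER MODIFY (id NUMBER(5,1));                  -- widen the precision
insert into TEST_NUMBER values (TRUNC(1234.789888888888, 1));     -- now stores 1234.7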
POWER(47.3616, 27.1261) returns 2.80403309600359E45, which causes a "number precision too large" error for my variable, which is defined as NUMBER(20,20).
I've tried ROUNDing and TRUNCing the result, but I still get the same long output. The only thing that seems to work is SUBSTR.
Is that the ONLY way to deal with this? It seems like there would be a better way. Or is there another type I should use for these super long results?
SQL> SELECT POWER(47.3616, 27.1261) AS Exp_Val
  2  FROM dual;

   EXP_VAL
----------
2.8040E+45

SQL>
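For what it's worth: NUMBER(20,20) can only hold values below 1, since all 20 digits sit to the right of the decimal point, and even NUMBER(38) caps out near 1E+38, well short of 2.8E+45. An unconstrained NUMBER holds the value without SUBSTR tricks; a minimal sketch:

DECLARE
  v NUMBER;                      -- unconstrained: 38 significant digits, magnitude up to about 1E+126
BEGIN
  v := POWER(47.3616, 27.1261);  -- roughly 2.804E+45, fits comfortably
  DBMS_OUTPUT.PUT_LINE(TO_CHAR(v));
END;
/

The 2.8040E+45 shown by SQL*Plus is only a display format; COLUMN ... FORMAT or TO_CHAR widens it.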
I have a procedure which executes every Monday. It was not executed last Monday. Can I execute the procedure on some other day without changing the actual procedure?
I am working on an application that pulls data from an Oracle view into Microsoft Excel (Oracle 11g, MS Excel 2003). It is an automated pivot table created through VBA. The problem I'm having is that the decimal points from the number columns are being truncated: they don't make it to the pivot table.
When I use the pivot table wizard and set the external data source using a SQL string (select * from view), the precision is not lost. When I create the pivot table automatically in VBA, the precision is lost. The pivot table settings, regardless of how the data is brought in, show the format of the number column as General. This tells me that Excel is trying to figure out what the data type is, and can't (not smart enough).
When looking at the description of the view, the data type is NUMBER (no precision). The table that it is pulling from has the precision set (NUMBER(11,3)). I have tried using the following, but it doesn't work:
select to_number(field_name, '99999999.999') field_name from view
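A hedged alternative: TO_NUMBER expects character input, so it does nothing useful on a column that is already a NUMBER. Applying CAST in the view pins an explicit precision and scale on the column, giving the ODBC/OLE DB driver a concrete numeric type to report to Excel:

select cast(field_name as number(11,3)) as field_name from view;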
SQL> insert into t51 values (100000000000000000.00000);
insert into t51 values (100000000000000000.00000)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
I am creating a query where I take phone call lengths and put them into buckets of length ranges: 0:00 - 0:59, 1:00 - 1:59, etc. Even if there are no calls in the call table, I need to return the range with a zero (hence the left join and NVL). When I do this, the left join acts like an equal join. I suspect there is some reason left joins only work as expected if there is an equality condition in the join, instead of the >= and < that I use (or, similarly, BETWEEN). I also have a question about performance (below).
The script for the lookup table is like this:
INSERT INTO DURATION_RANGES (RANGE_TEXT, RANGE_LBOUND, RANGE_UBOUND) VALUES ('00:00 - 00:59', 0, 59);
INSERT INTO DURATION_RANGES (RANGE_TEXT, RANGE_LBOUND, RANGE_UBOUND) VALUES ('01:00 - 01:59', 60, 119);
etc.
The query is:

select r.range_text as duration_range,
       nvl(count(*),0) as calls,
       nvl(SUM(call_duration),0) as total_duration
from
[code]...
As I say, it is not returning all the ranges in the duration_ranges table, so it is acting like an inner join. I realize one solution would be to populate duration_ranges with every possible value (instead of ranges) so the join is an equality join, but that would make the duration_ranges table much larger.
My questions: 1. Is it possible to get the left join to work with the duration range values as they currently are? (See the sketch after the example inserts below.) 2. Even if 1 is possible, would performance be better with exact values (but a larger lookup table) and an equality join instead of >=, < or BETWEEN? Performance now is not bad.
What I mean is (with only one time value instead of lbound and ubound):
INSERT INTO DURATION_RANGES (RANGE_TEXT, RANGE_LBOUND) VALUES ('00:00 - 00:59', 0);
INSERT INTO DURATION_RANGES (RANGE_TEXT, RANGE_LBOUND) VALUES ('00:00 - 00:59', 1);
INSERT INTO DURATION_RANGES (RANGE_TEXT, RANGE_LBOUND) VALUES ('00:00 - 00:59', 2);
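On question 1, a sketch (assuming the calls sit in a table called CALLS with a CALL_DURATION column; both names are guesses): a range predicate works fine in a left join provided every condition on the right-hand table stays in the ON clause, since a filter on that table in the WHERE clause silently turns the outer join back into an inner join. Also count a right-hand column, because COUNT(*) returns 1 for an empty range (it counts the range row itself):

select r.range_text                 as duration_range,
       count(c.call_duration)       as calls,
       nvl(sum(c.call_duration), 0) as total_duration
from   duration_ranges r
left join calls c
       on  c.call_duration >= r.range_lbound
       and c.call_duration <= r.range_ubound
group by r.range_text
order by r.range_text;

On question 2: the one-row-per-second lookup trades a much bigger table for a hash-friendly equality join; if performance is "not bad" now, the smaller range table is usually the simpler choice.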
I have two SQL queries; they run one after another.
Query 1: select * from capital where member_status = 'MEMBER' AND rownum <= 25 order by price desc
Query 2: select * from capital where member_status = 'MEMBER' AND rownum > 26 order by price desc
The question is: in query 2 I want the records after row number 25, i.e. I don't want the records that were fetched in query 1. Is there any way to do this without using ROWNUM?
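A sketch of the standard fix: ROWNUM is assigned as rows are fetched, so the first candidate row gets ROWNUM 1, fails "rownum > 26", and no row ever qualifies. Numbering the rows after the sort with the ROW_NUMBER analytic avoids ROWNUM entirely:

select *
from  (select c.*,
              row_number() over (order by price desc) as rn
       from   capital c
       where  member_status = 'MEMBER')
where  rn > 25;

The same inner query with "where rn <= 25" also gives a correct first page; note that in query 1 as written, ROWNUM is applied before the ORDER BY, so it returns an arbitrary 25 rows that are only sorted afterwards.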
Select to_char(to_date('10-02-2006 10:30:00 AM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH:MI:SS AM') as a1,
       to_char(to_date('10-02-2006 01:30:00 PM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH:MI:SS AM') as a2,
       Case when to_char(to_date('10-02-2006 10:30:00 AM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH:MI:SS AM') >
[code]...
From the above query I was expecting the value '2', but it returns '1'. As I am using TO_CHAR, it is comparing characters. Is there a way to compare times alone, with less than and greater than?
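One sketch of a fix: with 'HH:MI:SS AM', '10:30:00 AM' compares greater than '01:30:00 PM' simply because the character '1' sorts above '0'. Formatting the time as 'HH24MISS' (24-hour, zero-padded, most significant part first) makes string order match clock order:

Select Case
         when to_char(to_date('10-02-2006 10:30:00 AM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH24MISS') <
              to_char(to_date('10-02-2006 01:30:00 PM', 'DD-MM-YYYY HH:MI:SS AM'), 'HH24MISS')
         then 2 else 1
       end as result
from dual;  -- returns 2, since 103000 sorts before 133000

Comparing the two DATE values directly also works, as long as both carry the same date portion.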
I look after a team of DBAs, and I have a request to free up space on our very expensive storage system. However, the answers on how to do this differ, and I'd like to ask for external input. Not being a technical person, I see the world as quite black and white, meaning that you delete data and you free space. But after doing much reading I understand this is not the case: deleting essentially creates fragmentation within the datafile, so the database has lots more space to write into, but no space is actually freed; even shrinking the file doesn't free space without a reorg?
As an example, we have a DB with 2 billion rows of data in one table, no partitioning, just one large table. We have worked out that we can probably delete 1 billion rows, or even better, keep only a rolling 3-month window of data. What would be the suggestion on deleting this data and reclaiming the disk space, so as to actually see additional disk space made available at the OS level?
How about deleting the data and then reclaiming the space? From my reading it looks like it might be something like: delete, then create new tablespace partitions from this data. In theory this would create a new tablespace in newly created datafiles, so the data would be reorganised and take up less physical space; when completed, you point to the newly created partitions and drop the old tables.
How have others done this? It must be a common problem, and people must have created different solutions. What commands and procedures have been used? One pattern is sketched below.
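One commonly cited pattern, sketched with placeholder names (BIG_TABLE and the DATE column EVENT_DATE are made up): copy the rows to keep into a new table in a new tablespace, swap the names, drop the old segment, then resize or drop the old datafiles, which is the step that actually hands space back to the OS. Indexes, constraints, triggers, and grants must be recreated on the new table, and the whole thing deserves a rehearsal on a test copy:

-- keep only the rolling 3-month window
CREATE TABLE big_table_keep TABLESPACE new_ts AS
  SELECT * FROM big_table
  WHERE  event_date >= ADD_MONTHS(TRUNC(SYSDATE), -3);

-- recreate indexes/constraints/grants on big_table_keep here, then:
DROP TABLE big_table PURGE;
RENAME big_table_keep TO big_table;

-- shrinking or dropping the emptied datafiles is what frees space at the OS level:
-- ALTER DATABASE DATAFILE '/path/to/old_ts01.dbf' RESIZE 1G;

Going forward, date-range partitioning makes the rolling window a cheap ALTER TABLE ... DROP PARTITION instead of a billion-row delete.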
I am writing a procedure for the front end. The end users need to insert multiple rows of data into history tables in the database (11g). My problem is that the number of actual parameters is not fixed: this time it could be 5, next time it could be 12. I currently pass the actual parameter (p_id, numbers) as one string, '2, 4, 5, 7, 8'; the procedure executes successfully but does not insert any data into the history table.
See my procedure below (the base table has CLOB data, so I have to consider INSERT ... SELECT *). I tried to use TO_NUMBER(CONTACT_MSG_ID), but it doesn't work well:
PROCEDURE ARCHIVE_XREF_CONT_EMAIL(P_ID IN VARCHAR2) IS
BEGIN
  INSERT INTO TRC_XREF_CONT_EMAIL_MSGS_HIST
  SELECT * FROM TRC_XREF_CONT_EMAIL_MSGS
[code].......
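A hedged sketch of the likely cause and one fix: '2, 4, 5, 7, 8' arrives as a single VARCHAR2, so a predicate such as CONTACT_MSG_ID = TO_NUMBER(P_ID), or IN (P_ID), compares against one unparseable value and inserts nothing. Splitting the string into rows (REGEXP_SUBSTR with CONNECT BY, available in 11g) produces a real IN-list; table and column names are taken from the fragment above:

PROCEDURE archive_xref_cont_email(p_id IN VARCHAR2) IS
BEGIN
  INSERT INTO trc_xref_cont_email_msgs_hist
  SELECT *
  FROM   trc_xref_cont_email_msgs
  WHERE  contact_msg_id IN
         (SELECT TO_NUMBER(TRIM(REGEXP_SUBSTR(p_id, '[^,]+', 1, LEVEL)))
          FROM   dual
          CONNECT BY LEVEL <= REGEXP_COUNT(p_id, ',') + 1);
END archive_xref_cont_email;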
The database was recently upgraded from 10.2.0.4 to 11.2.0.3 and the EM DB Control repository was recreated. If I schedule a SQL Tuning Advisor task for any SQL query, I get this error. I have also tried to drop the SYSMAN user and recreate it, but no luck.
The tuning task result page in EM (sections: Findings, Recommendations, Rationale, Benefit (%), Other Statistics, New Explain Plan, Compare Explain Plans) shows: Error ORA-01727: numeric precision specifier is out of range (1 to 38)
I can't seem to understand why the hour is incorrect. In the query below, "dte_computation_on_data" is computed with the formula the old function uses to convert the date before inserting it into the table. The problem is that when I revert it to the actual date, the hour is incorrect.
SELECT -- this here is my test to revert time and date from the value their function produces
       to_char(TO_DATE('19700101', 'YYYYMMDD') + (tb1.dte_computation_on_data/86400), 'MM/DD/YYYY')
       || ' ' ||
       to_char(to_date(mod(tb1.dte_computation_on_data, 86400), 'sssss'), 'hh24:mi:ss') revert_test,
       systimestamp,
       tb1.dte_computation_on_data
FROM  (SELECT -- this is the formula of the old function they use to convert the date to a number for insert
              floor((CAST(SYS_EXTRACT_UTC(systimestamp) AS DATE)
                     - TO_DATE('19700101', 'YYYYMMDD')) * 86400) dte_computation_on_data
       FROM dual) tb1;
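A possible explanation, offered as a sketch: the stored number counts seconds since 1970-01-01 UTC (note the SYS_EXTRACT_UTC in the old formula), so reverting it yields UTC wall-clock time, which differs from SYSTIMESTAMP by the server's time-zone offset. Compare the reverted value against the UTC clock instead:

SELECT TO_CHAR(CAST(SYS_EXTRACT_UTC(SYSTIMESTAMP) AS DATE),
               'MM/DD/YYYY HH24:MI:SS') AS utc_now
FROM   dual;
-- this should match the reverted value; SYSTIMESTAMP itself is off by the zone offset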
I'm using ASM on LUNs from an EMC SAN, fronted by PowerPath. Right now I have only one fiber path to the SAN, so /dev/emcpowera3 maps directly to /dev/sda3, for example. Oracle had a typo in what they told me to do in /etc/sysconfig/oracleasm*, so the scan picks up both devices.
#/etc/init.d/oracleasm querydisk -p ASMVOL_01
Disk "ASMVOL_01" is a valid ASM disk /dev/emcpowera3: LABEL="ASMVOL_01" TYPE="oracleasm" /dev/sda3: LABEL="ASMVOL_01" TYPE="oracleasm"
But I don't think it can be using both. How do I see which one it's actually using?
Where are the Actual Responder details stored in the case of the below scenario?
If a notification for one user was closed by another user through access to the first user's worklist, the name of the second user, who actually took the action, is displayed as the responder.
I am only able to find the original recipient's username in the WF_NOTIFICATIONS table. But where are the details of the Actual Responder name, as displayed in the notification, stored?
When I use the "wf_notification.Responder" function, I get the original recipient's username, not the user who acted on the notification. Is there ANY way I can get the username of the actual responder who acted on the notification?
select responder from wf_notifications where notification_id = <nid> also gives me the original recipient's username, not the one who acted on the notification.
I have an Oracle application that deals with a 29-hour clock, so the days begin at 5am instead of midnight. I am trying to get data from a table for times up to a certain number of hours ahead, but when it reaches the 00 hour my whole query returns no records, even though there are plenty of records before the midnight hour, because the data is saved as, let's say, 2530 instead of 0130.
I am using this
SELECT *
FROM   WORK.WORK_UNIT
WHERE  (to_char(current_garage) = :entry_blk.curr_garage OR 'all' = :entry_blk.curr_garage)
AND    route >= 0
AND    run >= 0
AND    ((package_id > 0 and work_code = 99) OR (package_id is null and work_code = 1))
AND    Nvl(Emp_No, 0) = 0
AND    work_date = :entry_blk.p_work_date
AND    on_time <= TO_CHAR(SYSDATE + :Entry_Blk.up_to_hour/24, 'HH24MI')
Since Oracle deals with a 24-hour clock, my code doesn't seem to work if there is data after midnight (the 00 hour). I am using :Entry_Blk.up_to_hour/24 to determine up to how many hours ahead I want to see data, i.e.
on_time <= TO_CHAR(SYSDATE + 2/24, 'HH24MI') will give me data up to 2 hours
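A sketch of one way to line the two clocks up: translate the comparison value into the same 29-hour convention the table uses, adding 2400 whenever the target time falls before the 05:00 day boundary (bind names are reused from above; the cutoff follows the description of the 29-hour day):

SELECT CASE
         WHEN TO_CHAR(SYSDATE + :Entry_Blk.up_to_hour/24, 'HH24MI') < '0500'
         THEN TO_CHAR(TO_NUMBER(TO_CHAR(SYSDATE + :Entry_Blk.up_to_hour/24, 'HH24MI')) + 2400)
         ELSE TO_CHAR(SYSDATE + :Entry_Blk.up_to_hour/24, 'HH24MI')
       END AS on_time_limit
FROM dual;
-- 01:30 becomes '2530', 00:30 becomes '2430', 23:30 stays '2330'

The work_date comparison needs the same treatment, since rows after midnight still belong to the previous service day's date.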
I have the below query, which is doing a full table scan (FTS) and is very expensive, causing the load to time out.
I did my analysis and found that the table has a large number of records, hence the FTS takes a long time and causes a timeout on the app side.
I proposed to have this table partitioned, but this is still pending with the business, and in the meantime they want some other solution to fix this issue.