I am exporting/importing from 10.2.0.4.0 to 11.2.0.3.0, but during the import some rows are rejected ...
IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column XXXX (actual: 51, maximum: 50)
Column 1 264
Column 2 123432
Column 3
Column 4 7
[code]....
Having looked at the data in the source system, I can't see the character "Â" in column 11. I think this is what is causing the issue!! Why is Oracle adding this character to this field? How can I fix this without having to modify the table in the new system to allow more characters?
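A minimal sketch for checking whether character-set expansion during the import is the cause (MY_TABLE and XXXX below are placeholder names taken from the error text): if the target database uses a multi-byte character set such as AL32UTF8, a character like "Â" takes two bytes, so a value that fit in 50 bytes on the 10g side can become 51 bytes on the 11g side.

-- compare the database character sets on source and target
select * from nls_database_parameters where parameter = 'NLS_CHARACTERSET';

-- on the source, find rows whose byte length would exceed the target column
-- (LENGTH counts characters, LENGTHB counts bytes)
select xxxx, length(xxxx) chars, lengthb(xxxx) bytes
  from my_table
 where lengthb(xxxx) > 50;

-- if touching the column is acceptable after all, character-length semantics keep the
-- limit at 50 characters without allowing longer strings
alter table my_table modify (xxxx varchar2(50 char));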
1. When querying the "alert_log" table I created from the alert log using the script below, two new files were created: ALERT_LOG_30499.bad and ALERT_LOG_30499.log.
The ALERT_LOG_30499.log contains this error message:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 82, maximum: 80)
The ALERT_LOG_30499.bad, so far, only contains datafile resize information. The datafiles have plenty of space, and there is plenty of space on the SAN slice where the datafiles reside.
2. Then each time I recreate the table and increase the VARCHAR2 size, the "actual" size in the log file also increases.
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 92, maximum: 90)
3. When I increased the varchar2 size to 120+ it gave me this error message:
[oracle@tds_dw bdump]$ cat ALERT_LOG_30715.log
LOG file opened at 03/09/11 14:46:20
Field Definitions for table ALERT_LOG
  Record format DELIMITED BY NEWLINE
  Data in file has same endianness as the platform
  Rows with all null fields are accepted
Fields in Data Source:
  MSG CHAR (255)
    Terminated by ","
    Trim whitespace same as SQL Loader
TABLE DDL:
create table alert_log (
  msg varchar2(80)
)
organization external (
  type oracle_loader
  default directory BDUMP
  access parameters (
    records delimited by newline
  )
  location ('alert_damistst.log')
)
reject limit 1000;
**** QUESTION: I can still query the alert_log table in SQL*Plus, but those log and bad files are generated. Is this an issue?
Example of a piece of the results from "select * from alert_log;":
MSG
--------------------------------------------------------------------------------
Thread 1 advanced to log sequence 5254 (LGWR switch)
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Thread 1 cannot allocate new log
Checkpoint not complete
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Wed Mar 9 14:33:09 2011
Thread 1 advanced to log sequence 5255 (LGWR switch)
Current log# 2 seq# 5255 mem# 0: /tds_oradata/redo02a.log
Current log# 2 seq# 5255 mem# 1: /u02/damistst/REDO_LOGS/redo02b.log
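For reference, an untested sketch of the same external table with the side files suppressed: the .log/.bad files only record alert-log lines longer than the 80-byte MSG column, so they are harmless, and matching the column width to the loader's default CHAR(255) field stops the length rejections altogether (directory and file name are taken from the post).

create table alert_log (
  msg varchar2(255)
)
organization external (
  type oracle_loader
  default directory BDUMP
  access parameters (
    records delimited by newline
    nobadfile nologfile nodiscardfile
  )
  location ('alert_damistst.log')
)
reject limit unlimited;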
I am using the TRIM function in my SELECT query, but I am still getting white space in my output. Because of this, I get the error "value too large for column ..." when I load the data into a table through SQL*Loader.
define APPName="&1"
set heading off;
set verify off;
set newpage 0
set feedback off;
set rtrimspool on;
set termout off;
set pagesize 40000;
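For comparison, a sketch of a spool setup that keeps trailing blanks out of the file handed to SQL*Loader (APP_DATA and its columns are placeholder names): SQL*Plus pads each column to its display width regardless of any TRIM in the query, so spooling a single delimited expression sidesteps the padding. Note also that the SQL*Plus setting that strips trailing blanks from spooled lines is TRIMSPOOL; RTRIMSPOOL, as in the script above, is not a SQL*Plus setting.

set heading off
set feedback off
set pagesize 0
set linesize 2000
set trimspool on
spool app_data.dat
select rtrim(col1) || '|' || rtrim(col2) || '|' || rtrim(col3)
  from app_data;
spool off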
I want to add a column to a table which has a huge amount of data and fill it with data from another table. What is the best way to do it? Is it faster to use CTAS than ALTER TABLE ... ADD COLUMN?
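For comparison, a sketch of both approaches (BIG_T, SRC_T, ID and NEW_COL are placeholder names): ALTER plus a correlated UPDATE touches every row of the big table in place, while the CTAS route rewrites the table in one pass but means recreating indexes, constraints, grants and synonyms afterwards.

-- Option 1: add the column, then backfill from the other table
alter table big_t add (new_col varchar2(30));

update big_t b
   set b.new_col = (select s.new_col from src_t s where s.id = b.id);
commit;

-- Option 2: rebuild the table with CTAS, joining in the new column
create table big_t_new as
select b.*, s.new_col
  from big_t b
  left join src_t s on s.id = b.id;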
I have encountered some problems in SQL. I want to create a table with a bunch of prepared data. For ease of use, I chose to generate a SQL file which contains all the SQL statements used to create the table and insert the data, so all the data can only be inserted into the table using SQL statements.
My questions: 1) If the data for a column is large (for example, 1 MB of text), how do I insert it using SQL? Is there a piecewise method? 2) How can I insert BLOB data using a SQL statement?
What I want is to enclose all the operations in a single SQL file, and when the table is needed, just execute this SQL file.
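A sketch of a piecewise approach that keeps everything inside one SQL script (T_DOC and the chunk text are placeholders): SQL string literals are capped at 4000 bytes, so large text or binary values are appended to an empty LOB in chunks from a PL/SQL block, with BLOB chunks supplied as hex via HEXTORAW.

create table t_doc (
  id  number primary key,
  txt clob,
  img blob
);

-- insert empty LOB locators, then append the data in pieces
insert into t_doc (id, txt, img) values (1, empty_clob(), empty_blob());

declare
  l_clob clob;
  l_blob blob;
begin
  select txt, img into l_clob, l_blob from t_doc where id = 1 for update;

  -- CLOB: append text chunk by chunk, each literal under the 4000-byte limit
  dbms_lob.writeappend(l_clob, length('first chunk of text ...'), 'first chunk of text ...');
  dbms_lob.writeappend(l_clob, length('second chunk of text ...'), 'second chunk of text ...');

  -- BLOB: append binary data given as a hex literal
  dbms_lob.writeappend(l_blob, 4, hextoraw('DEADBEEF'));
  commit;
end;
/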
I'm currently doing a migration from Oracle 10gR2 RDF to Oracle 11gR2 Semantic Technologies. I followed the steps in the documentation and successfully created the network using the following:
EXECUTE SEM_APIS.CREATE_SEM_NETWORK('rdf_tblspace');

CREATE TABLE rdf_network_trace (id NUMBER, triple SDO_RDF_TRIPLE_S);

-- Created SEQUENCE and TRIGGER FOR rdf_network_trace id
[code]....
When I looked at my node IDs, they were like 635762253807433724 and 6118969225776891730. The problem is, I am not the one assigning the node IDs; they were automatically generated when inserting TRIPLE data into the RDF table.
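For illustration, a sketch of where those IDs come from (this assumes a model named rdf_network_trace was registered over this table and column with SEM_APIS.CREATE_SEM_MODEL; the URIs are made up): inserting through the SDO_RDF_TRIPLE_S constructor lets the semantic network generate the value IDs itself, and the stored triple carries them as the type's ID attributes.

INSERT INTO rdf_network_trace VALUES (
  2,
  SDO_RDF_TRIPLE_S('rdf_network_trace',
                   '<http://example.org/people/John>',
                   '<http://example.org/rel/knows>',
                   '<http://example.org/people/Mary>'));
COMMIT;

-- the node ids the network assigned for this triple
SELECT t.triple.rdf_s_id  subject_id,
       t.triple.rdf_p_id  predicate_id,
       t.triple.rdf_o_id  object_id
  FROM rdf_network_trace t
 WHERE t.id = 2;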
I'm loading data from a text file separated by TABs and I got the error below for some lines. Even though the column is of CLOB data type, is there a limitation on the size of a CLOB field? The error is:
Record 74: Rejected - Error on table _TEMP, column DEST.
Field in data file exceeds maximum length
I'm using SQL*Loader and the database is Oracle 11g R2 on Red Hat Linux 5. Here is the line causing the error from my data file and my table description for the test:
create table TEMP (
  CODE     VARCHAR2(100),
  DESC     VARCHAR2(500),
  RATE     FLOAT,
  INCREASE VARCHAR2(20),
  COUNTRY  VARCHAR2(500),
  DEST     CLOB,
[code]........
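For reference, a sketch of a control file for this table (the data file name is a placeholder): SQL*Loader treats every character field as CHAR(255) unless told otherwise, so "Field in data file exceeds maximum length" usually means the CLOB column needs an explicit, larger CHAR(n) in the control file rather than hitting any CLOB size limit.

LOAD DATA
INFILE 'data.txt'
APPEND INTO TABLE TEMP
FIELDS TERMINATED BY X'09' TRAILING NULLCOLS
(
  CODE      CHAR(100),
  "DESC"    CHAR(500),
  RATE      FLOAT EXTERNAL,
  INCREASE  CHAR(20),
  COUNTRY   CHAR(500),
  DEST      CHAR(1000000)
)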
When we try to create a NUMBER column whose scale is greater than its precision (for example NUMBER(2,5)), the table definition is accepted, but we are unable to insert any values into the table. How does it store the value internally?
SQL> drop table precision_test;
Table dropped

SQL> create table precision_test(name number(2,5));
Table created

SQL> insert into precision_test values (1);
insert into precision_test values (1)
[code]....
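As a sketch of what NUMBER(2,5) actually accepts (same precision_test table as above): the scale of 5 fixes the value to five decimal places and the precision of 2 allows only two significant digits there, so only values smaller than 0.001 in absolute value fit; anything else raises ORA-01438.

insert into precision_test values (0.00012);    -- ok: stored as .00012
insert into precision_test values (0.000123);   -- ok: rounded to the scale, stored as .00012
insert into precision_test values (0.001);      -- fails with ORA-01438
insert into precision_test values (1);          -- fails with ORA-01438, as in the session above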
I have a procedure which executes every Monday. It was not executed last Monday. Can I execute the procedure on some other day without changing the actual procedure?
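If the weekly run is just a job wrapping a stored procedure, a sketch of running it out of schedule without touching the procedure itself (MY_WEEKLY_PROC and MY_WEEKLY_JOB are placeholder names):

-- call the procedure directly for the missed run
exec my_weekly_proc;

-- or, if it is scheduled through DBMS_SCHEDULER, run that job once on demand
exec dbms_scheduler.run_job('MY_WEEKLY_JOB');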
I look after a team of DBAs and I have a request to free up space on our very expensive storage system. However, the answers on how to do this differ, and I'd like to ask for external input. Not being a technical person, I see the world as quite black and white: you delete data and you free space. But after doing much reading I understand this is not the case, as you essentially create free space (fragmentation) within the datafile, giving the database lots more room to write into without actually returning space to the storage; even shrinking the file doesn't free space without a reorg?
As an example, we have a DB with 2 billion rows of data in one table: no partitioning, just one large table. We have worked out that we can probably delete 1 billion rows, or even better keep only a rolling 3-month window of data. What would be the suggestion for deleting this data and reclaiming the disk space, so that additional disk space is actually made available at the OS level?
How about deleting the data and then reclaiming the space? From my reading it looks like it might be something like: delete, then create new tablespace partitions from this data. In theory this would create a new tablespace in newly created data files, which would result in the data being reorganised and taking up less physical space; when completed, you point to the newly created partitions and drop the old tables.
How have others done this? It must be a common problem, and people must have come up with different solutions. What commands and procedures have been used?
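A sketch of one common approach (BIG_TAB, EVENT_DATE and the tablespace names are assumptions): copy only the rolling 3-month window into a new table placed in a smaller tablespace, swap the tables, then drop the old tablespace so its datafiles are actually returned to the OS.

create table big_tab_new
  tablespace new_small_ts
  as
  select *
    from big_tab
   where event_date >= add_months(trunc(sysdate, 'MM'), -3);

-- recreate indexes, constraints and grants on big_tab_new here, then:
drop table big_tab purge;
rename big_tab_new to big_tab;

-- once the old tablespace holds nothing else, dropping it (or resizing its files)
-- is what makes the space visible again at the OS level
drop tablespace old_big_ts including contents and datafiles;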
I am writing a procedure for the front end. The end users need to insert multiple rows of data into history tables in the database (11g). My problem is that the number of actual parameters is not fixed: this time it could be 5, next time it could be 12. I currently use one string and pass the actual parameter (P_ID, a list of numbers) as '2, 4, 5, 7, 8'. The procedure executes successfully, but it does not insert any data into the history table.
See my procedure below (the base table has CLOB data, so I have to consider INSERT ... SELECT *). I tried to use TO_NUMBER(CONTACT_MSG_ID), but it doesn't work well:
PROCEDURE ARCHIVE_XREF_CONT_EMAIL(P_ID IN VARCHAR2) IS
BEGIN
  INSERT INTO TRC_XREF_CONT_EMAIL_MSGS_HIST
  SELECT *
    FROM TRC_XREF_CONT_EMAIL_MSGS
[code].......
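A sketch of the same procedure taking a real list instead of a comma-separated string, which is likely why nothing is inserted: an IN ('2, 4, 5, 7, 8') condition compares against one literal value. SYS.ODCINUMBERLIST is a built-in collection type, and CONTACT_MSG_ID is the column mentioned in the post.

PROCEDURE ARCHIVE_XREF_CONT_EMAIL(P_IDS IN SYS.ODCINUMBERLIST) IS
BEGIN
  INSERT INTO TRC_XREF_CONT_EMAIL_MSGS_HIST
  SELECT *
    FROM TRC_XREF_CONT_EMAIL_MSGS
   WHERE CONTACT_MSG_ID IN (SELECT COLUMN_VALUE FROM TABLE(P_IDS));
END ARCHIVE_XREF_CONT_EMAIL;

-- caller side, with however many ids are needed this time:
-- EXEC ARCHIVE_XREF_CONT_EMAIL(SYS.ODCINUMBERLIST(2, 4, 5, 7, 8));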
I can't seem to understand why the hour is incorrect. In the query below, "dte_computation_on_data" is the old function they used to convert the date before inserting it into the table. The problem is that when I revert it to the actual date, the hour is incorrect.
SELECT -- THIS HERE IS MY TEST TO REVERT TIME AND DATE WITH RESPECT TO THEIR FUNCTION
       to_char(TO_DATE('19700101', 'YYYYMMDD') + (tb1.dte_computation_on_data / 86400), 'MM/DD/YYYY')
       || ' ' ||
       to_char(to_date(mod(tb1.dte_computation_on_data, 86400), 'sssss'), 'hh24:mi:ss') revert_test,
       systimestamp,
       tb1.dte_computation_on_data
  FROM (SELECT -- THIS IS THE FORMULA OF THE OLD FUNCTION THEY USE TO CONVERT DATE TO NUMBER
               -- AND INSERT IT INTO THE ROW
               floor((CAST(SYS_EXTRACT_UTC(systimestamp) AS DATE) - TO_DATE('19700101', 'YYYYMMDD')) * 86400)
                 dte_computation_on_data
          FROM dual) tb1;
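A sketch of why the hour can come out shifted: the stored number is seconds since 1970-01-01 computed from SYS_EXTRACT_UTC, so the reverted DATE is in UTC; converting it back to the session time zone should line it up with SYSTIMESTAMP.

SELECT to_char(
         CAST(FROM_TZ(CAST(TO_DATE('19700101', 'YYYYMMDD')
                             + tb1.dte_computation_on_data / 86400 AS TIMESTAMP),
                      'UTC') AT LOCAL AS DATE),
         'MM/DD/YYYY HH24:MI:SS') revert_local,
       systimestamp
  FROM (SELECT floor((CAST(SYS_EXTRACT_UTC(systimestamp) AS DATE)
                      - TO_DATE('19700101', 'YYYYMMDD')) * 86400) dte_computation_on_data
          FROM dual) tb1;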
I'm using ASM on LUNs from an EMC SAN, fronted by PowerPath. Right now I have only one fiber path to the SAN, so /dev/emcpowera3 maps directly to /dev/sda3, for example. Oracle had a typo in what they told me to do in /etc/sysconfig/oracleasm*, so the scan picks up both devices.
#/etc/init.d/oracleasm querydisk -p ASMVOL_01
Disk "ASMVOL_01" is a valid ASM disk /dev/emcpowera3: LABEL="ASMVOL_01" TYPE="oracleasm" /dev/sda3: LABEL="ASMVOL_01" TYPE="oracleasm"
But I don't think it can be using both. How do I see which one it's actually using?
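One place to look is the ASM instance's own view of the disk it opened; a sketch (whether PATH shows the raw device or the ASMLib label depends on how the disk string and the ASMLib driver are configured):

-- connected to the +ASM instance
select name, path, header_status, mount_status
  from v$asm_disk
 where name = 'ASMVOL_01';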
Where are the details of the actual responder stored in the case of the scenario below?
If a notification for one user was closed by another user through access to the first user's worklist, the name of the second user, who actually took the action, is displayed as the responder.
I am only able to find the original recipient username in the wf_notifications table. But where are the details of the actual responder name, as displayed in the notification, stored?
When I use the "wf_notification.Responder" function, I get the original recipient username, not the user who acted on the notification. Is there ANY way I can get the username of the actual responder who acted on the notification?
select responder from wf_notifications where notification_id = <nid> also gives me the original recipient username, not the user who acted on the notification.
Is there any way to find out the split between the time taken for query parsing, creating the execution plan, and the actual data retrieval? If I enable 'set timing on' I see the elapsed time, which is the total of all three. Some of my queries take a long time the first time I run them, so I want to know where the time goes: is it the parsing or the creation of the execution plan, and if so, what can I optimize?
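One way to see the split is a SQL trace formatted with tkprof, which reports parse, execute and fetch times separately and so shows whether the first slow run is dominated by hard parsing. A sketch (the trace-file identifier and report name are arbitrary):

alter session set tracefile_identifier = 'parse_check';
alter session set events '10046 trace name context forever, level 8';

-- run the slow query here

alter session set events '10046 trace name context off';

-- then, on the database server:
--   tkprof <user_dump_dest>/<instance>_ora_<pid>_parse_check.trc report.txt sys=no
-- report.txt lists parse, execute and fetch counts and elapsed times per statement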
We are using Oracle 10g and have 10 tablespaces defined for our database, which hold 108 tables. The size of the 108 tables is around 251 MB, as seen while importing the dump. While creating these 10 tablespaces I used the parameters below for allocation of space:
SIZE 1M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE 1M;
which set the initial space for the 10 tablespaces to around 1032 KB each. Now my question is: after importing the dump, how does the disk space for the 10 tablespaces increase to 398 MB in total?
Is there any relation between tablespace disk space and the actual data present in the tables?
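A sketch of how to compare the two directly: the dump's 251 MB is the raw table data, while the 398 MB is what the tablespaces have allocated, which also covers indexes, LOB segments, extent rounding and free space left inside the datafiles.

-- space actually allocated to segments (tables, indexes, LOBs) per tablespace
select tablespace_name, round(sum(bytes)/1024/1024, 1) allocated_mb
  from dba_segments
 group by tablespace_name;

-- size of the datafiles per tablespace
select tablespace_name, round(sum(bytes)/1024/1024, 1) datafile_mb
  from dba_data_files
 group by tablespace_name;

-- free space still available inside those datafiles
select tablespace_name, round(sum(bytes)/1024/1024, 1) free_mb
  from dba_free_space
 group by tablespace_name;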
We have a table with interval partitioning. This table is accessed very frequently. We are supposed to exchange partitions between this actual table and its corresponding staging table.
In order to keep the newly created partitions empty, is there a way to restrict other applications from using them before we push data from the staging table into the actual table?
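For reference, a sketch of the exchange itself (ACTUAL_TAB, STAGING_TAB and the partition key value are placeholders); since the swap is a data-dictionary operation, the window in which other sessions could write into the new, still-empty partition is only the time between the partition's creation and this statement.

alter table actual_tab
  exchange partition for (to_date('01-APR-2013', 'DD-MON-YYYY'))
  with table staging_tab
  including indexes
  without validation;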
We are using Oracle 10g on a Linux machine which runs as the server, and there are around 25 clients which run at the same time in parallel. I am new to Oracle 10g and also to the client-server setup of the database. I faced some problems when I first put the server into actual use.
-> Users have been created; the default tablespace is USERS and the temporary tablespace is TEMP. The RESOURCE privilege has been granted.
-> Sometimes login fails for some clients.
-> Sometimes all client logins succeed.
-> Query execution is sometimes fast, but sometimes it takes too much time.
-> Sometimes the connection is lost and I have to bring the server up again.
How should I maintain the Oracle server?
-> Should I make any configuration settings on the server?
-> Should I change any values on the server?
-> Is there any way to properly maintain the server?
I keep getting "ORA-01401: inserted value too large for column". No biggie - I've dealt with this multiple times before (but obviously not enough in this instance).
The data being entered is a SINGLE digit number - a number like 1, 2 or 3 - nothing fancy, just a plain straight everyday single digit number. The field in question is / was set as field type "Integer". Now, there is no set field size for integers! - not in Oracle anyway. Since it wasn't happy, I decided I'll try field types of 'Number' and also "Varchar2" set to 10 bytes. I have deleted the column from the table and re-created it as well.
Here's the even more puzzling bit: I can INSERT data into this field, BUT I cannot UPDATE the field with the exact same data. The data is being inserted from a CSV file. The exact same CSV file works for the insert, but the same data in the same file will not update that particular column.
If I delete the specific column data from the csv file, all goes through fine. If I hard code the update for the field (eg SET field2 = '1' or even SET field2 = ' ') it still doesn't work. So I know it is not the csv file that is causing problems. I deleted all data from the csv file except the field in question - still no luck.
So after eliminating:
1. The field type
2. The field length
3. The data being inserted
4. The external source of the data
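One more thing worth checking in a case like this, as a sketch (MY_TABLE is a placeholder; FIELD2 is the column named above): DUMP() and LENGTH() expose invisible characters such as a trailing carriage return from a Windows-format CSV, which is a common cause of "value too large" on data that looks like a single digit.

select field2,
       length(field2) len,
       dump(field2)   raw_bytes
  from my_table
 where rownum <= 10;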
I need to add a new column to a very large table and update it with 'N' (this is similar to specifying DEFAULT 'N'). I'm using Oracle 9i. Which is the best method, in terms of speed, to update this column on the entire table? The table contains ~30 million records. I've read that parallel DML (here, UPDATE) does not work on non-partitioned tables, and my table is not partitioned. If I specify:
update /*+ full(p) parallel(p,10) */ my_table p set p.my_column = 'N';
This, I think, will not speed up the operation on 9i. Our business does not accept using CREATE TABLE AS SELECT, then renaming the table and recreating all the indexes, and so on.
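Under those constraints, a sketch of a batched backfill that is often acceptable on 9i when one large UPDATE is too heavy on undo (MY_TABLE and MY_COLUMN as in the hinted statement above; the 50,000-row batch size is arbitrary):

alter table my_table add (my_column varchar2(1));

declare
  cursor c is
    select rowid rid from my_table where my_column is null;
  type t_rids is table of rowid index by binary_integer;
  l_rids t_rids;
begin
  open c;
  loop
    fetch c bulk collect into l_rids limit 50000;
    exit when l_rids.count = 0;
    forall i in 1 .. l_rids.count
      update my_table set my_column = 'N' where rowid = l_rids(i);
    commit;
  end loop;
  close c;
end;
/

A single UPDATE in one transaction is usually faster overall if undo space allows; the batching just bounds undo/redo per commit and lets the job be restarted.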