Need Clob Variable To Write Large Volume Of Data?
Oct 6, 2010
I am facing a problem with utl_http.write_text in my PL/SQL application. I need to write data larger than 32K, so I pass a CLOB variable to write_text, but it still raises a "numeric or value error" when the data size goes above 8K.
I have read that chunked transfer encoding will work, but I couldn't find out how it is done.
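For reference, a minimal sketch of the chunked approach (the URL, headers and payload are placeholders; the request must be opened as HTTP/1.1 for chunked transfer encoding to apply):

DECLARE
  req    UTL_HTTP.REQ;
  resp   UTL_HTTP.RESP;
  l_clob CLOB;                 -- assumed already populated with >32K of data
  l_off  PLS_INTEGER := 1;
  l_amt  PLS_INTEGER := 8000;  -- stay well under the 32K VARCHAR2 limit
  l_len  PLS_INTEGER;
BEGIN
  req := UTL_HTTP.BEGIN_REQUEST('http://example.com/service', 'POST', 'HTTP/1.1');
  -- chunked transfer encoding removes the need for a Content-Length header
  UTL_HTTP.SET_HEADER(req, 'Transfer-Encoding', 'chunked');
  UTL_HTTP.SET_HEADER(req, 'Content-Type', 'text/xml');
  l_len := DBMS_LOB.GETLENGTH(l_clob);
  WHILE l_off <= l_len LOOP
    UTL_HTTP.WRITE_TEXT(req, DBMS_LOB.SUBSTR(l_clob, l_amt, l_off));
    l_off := l_off + l_amt;
  END LOOP;
  resp := UTL_HTTP.GET_RESPONSE(req);
  UTL_HTTP.END_RESPONSE(resp);
END;
/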
View 5 Replies
Oct 6, 2009
I am having a problem inserting a long file (80000 B) into a CLOB column in an Oracle Database 10g.
ERROR at line 23:
ORA-06550: line 23, column 1:
PLS-00172: string literal too long
I am calling the Oracle stored procedure through a Unix shell script as follows:
#!/bin/ksh
cd /usr/home/dfusr/backup
i=`cat pe_proxy_master_2160.Thu.log | sed "s/'/''/g"`
sqlplus username/password@db 1> temp.log <<!
DECLARE
BEGIN
Write_Text_To_CLOB(1,'$i');
COMMIT;
END;
/
!
The file pe_proxy_master_2160.Thu.log is about 50000 B.
My stored procedure is as follows:
CREATE OR REPLACE PROCEDURE Write_Text_To_CLOB (
p_id IN NUMBER
, p_clob IN VARCHAR2
)
IS
[code]....
I also tried with ojdbc5.jar and ojdbc14.jar files in class path.
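One way around the 32K string-literal limit is not to pass the file content as a literal at all, but to load it server-side from a directory object; that also avoids the sed quoting dance. A minimal sketch, assuming a directory LOG_DIR pointing at /usr/home/dfusr/backup and a hypothetical table my_clob_table:

DECLARE
  l_bfile BFILE := BFILENAME('LOG_DIR', 'pe_proxy_master_2160.Thu.log');
  l_clob  CLOB;
  l_dest  INTEGER := 1;
  l_src   INTEGER := 1;
  l_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn  INTEGER;
BEGIN
  SELECT clob_col INTO l_clob FROM my_clob_table WHERE id = 1 FOR UPDATE;
  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(l_clob, l_bfile, DBMS_LOB.LOBMAXSIZE,
                            l_dest, l_src, DBMS_LOB.DEFAULT_CSID,
                            l_lang, l_warn);
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/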
View 1 Replies
Jun 7, 2012
I'm looking for a way to insert strings larger than 40,000 characters into a CLOB field without getting "ORA-01461: can bind a LONG value only for insert into a LONG column".
Something like this:
insert into MyClobTable(ID,Data) values ('101','A string containing more than 40000 characters...')
The problem is that a Java application concatenates the string from a MSSQL DB, so I don't first store the string in my Oracle DB. As far as I'm aware this means I can't chop my string into pieces and use DECLARE to put the pieces in variables, right?
Below is an example I found, but I don't think I can apply it to my case, correct?
SQL> CREATE TABLE myClob
2 (id NUMBER PRIMARY KEY,
3 clob_data CLOB);
Table created.
SQL>
SQL> INSERT INTO myClob VALUES (101,null);
1 row created.
SQL>
SQL> declare
2 clob_pointer CLOB;
3 v_buf VARCHAR2(1000);
4 Amount BINARY_INTEGER :=1000;
[Code]...
PL/SQL procedure successfully completed.
SQL>
SQL> drop table myClob;
Table dropped.
SQL>
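If the Java side can be changed, the usual fix is to bind the string with PreparedStatement.setCharacterStream (or setClob) instead of building a literal. On the database side, the elided block above boils down to something like this sketch (names taken from the example; the long payload is simulated):

DECLARE
  l_clob  CLOB;
  l_piece VARCHAR2(32767);
BEGIN
  UPDATE myClob SET clob_data = EMPTY_CLOB()
   WHERE id = 101
  RETURNING clob_data INTO l_clob;
  FOR i IN 1 .. 5 LOOP
    l_piece := RPAD('x', 8000, 'x');   -- stand-in for each <32K piece
    DBMS_LOB.WRITEAPPEND(l_clob, LENGTH(l_piece), l_piece);
  END LOOP;
  COMMIT;
END;
/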
View 3 Replies
Aug 2, 2011
Is it possible to execute large (CLOB) dynamic SQL with DBMS_SQL? Is there any restriction, like a length limit?
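Since 11g, DBMS_SQL.PARSE has an overload that accepts a CLOB directly (and EXECUTE IMMEDIATE accepts a CLOB as well), so the old 32K limit no longer applies. A minimal sketch:

DECLARE
  l_sql CLOB := 'BEGIN NULL; END;';  -- grow to any length with DBMS_LOB.APPEND
  c     INTEGER;
  n     INTEGER;
BEGIN
  c := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c, l_sql, DBMS_SQL.NATIVE);  -- CLOB overload
  n := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/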
View 4 Replies
Jul 9, 2013
I have read the following article: [URL]
I want to know whether it is possible to write a CLOB, or XML, into a file on disk if we do not have the CREATE ANY DIRECTORY privilege. Many functions, like UTL_FILE.FOPEN, dbms_xslprocessor.clob2file, or dbms_xmldom.writetofile, need an Oracle directory to have been created (with CREATE OR REPLACE DIRECTORY ...). But without this privilege, is there a way to export the CLOB into a file as XML (the content is 100% XML, but the column data type is CLOB)? The CLOB data contains 48,200 characters.
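Note that CREATE ANY DIRECTORY is only needed to create the directory object; writing through one only needs READ/WRITE on it. So one option is to have a DBA create it once and grant it, roughly like this (directory name, path, user, table and column are hypothetical):

-- as a DBA, one-off:
CREATE DIRECTORY xml_out AS '/u01/app/xml_out';
GRANT READ, WRITE ON DIRECTORY xml_out TO myuser;

-- as the ordinary user, no CREATE ANY DIRECTORY required:
DECLARE
  l_clob CLOB;
BEGIN
  SELECT xml_col INTO l_clob FROM my_xml_table WHERE id = 1;
  DBMS_XSLPROCESSOR.CLOB2FILE(l_clob, 'XML_OUT', 'export.xml');
END;
/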
View 8 Replies
Jan 21, 2011
I have declared a variable:
VAR1 VARCHAR2(20000);
But when I assign some strings to that variable I still get a "value too large" message. What should I do?
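In PL/SQL a VARCHAR2 can be declared up to 32767 bytes, but any assignment longer than the declared size (here 20000) raises ORA-06502. For unbounded text a CLOB avoids the ceiling entirely; a minimal sketch:

DECLARE
  var1 CLOB := EMPTY_CLOB();  -- no 32K limit
BEGIN
  FOR i IN 1 .. 10 LOOP
    var1 := var1 || RPAD('x', 10000, 'x');  -- now ~100K characters
  END LOOP;
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.GETLENGTH(var1));
END;
/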
View 2 Replies
May 7, 2013
I am trying to build an XML document in a CLOB PL/SQL variable. We are using an Oracle 11gR2 database.
But when I reach more than 32,767 bytes my code fails.
Is there any way to store more than 32,767 bytes of data in a PL/SQL variable of type CLOB?
I am capturing the below error message:
ORA-06512: at "SCMSA_HIST.SCMSA_POC_HANDSET_MOBILITY_PKG", line 1480
(length of xmlfile -> 33078)
I am adding my code here for further clarification:
PROCEDURE GET_HANDSET_DATA_PRC (p_ntlogin_id IN VARCHAR2,
p_handset_data OUT NOCOPY CLOB)
IS
/******************************************************************************
NAME: GET_HANDSET_DATA_PRC
PURPOSE:
Date Ver By Description
---------- --- --- -----------
******************************************************************************/
CURSOR c_region_data
IS
SELECT NVL2 (T.ntlogin, T.ntlogin, pos.ntlogin) AS ntlogin,
NVL2 (T.first_name, T.first_name, pos.first_name)
AS first_name,
[Code]....
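The ORA-06502 right above 32K usually means the XML is being built in a VARCHAR2 intermediate (string concatenation) before it ever reaches the CLOB. Appending each fragment straight into the CLOB avoids that; a minimal sketch of the pattern (the query stands in for c_region_data above):

DECLARE
  l_xml   CLOB;
  l_chunk VARCHAR2(32767);
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_xml, TRUE, DBMS_LOB.CALL);
  FOR r IN (SELECT ntlogin FROM region_data) LOOP
    l_chunk := '<poc><ntlogin>' || r.ntlogin || '</ntlogin></poc>';
    DBMS_LOB.WRITEAPPEND(l_xml, LENGTH(l_chunk), l_chunk);
  END LOOP;
  -- p_handset_data := l_xml;
END;
/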
View 8 Replies
Jun 4, 2010
The prod stats have been implemented in development. The stats were gathered two months back on dev, while in production the stats were gathered two weeks back.
My question: shouldn't the high volume of data cause plan changes in both environments? My thinking is that the plans can differ; as the high volume of data changes in prod, it may lead to a different plan.
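The optimizer works from the stored statistics, not from the live row counts, so two environments diverge as their stats age differently. A quick way to compare what each optimizer actually sees (owner and table names are placeholders):

SELECT num_rows, last_analyzed
  FROM dba_tab_statistics
 WHERE owner = 'APP' AND table_name = 'MY_TABLE';

EXPLAIN PLAN FOR SELECT * FROM app.my_table WHERE status = 'OPEN';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);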
View 6 Replies
Sep 24, 2010
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know if writing the export to tape is possible. If so, would the data be accessible if needed later?
View 4 Replies
Aug 6, 2013
I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not enough memory in the buffer cache, so how does this work: will the transaction succeed or fail? And I have another doubt: can Oracle read data from memory only, or can it also read directly from disk?
View 11 Replies
Jan 23, 2012
How do I write large data to a table column using FOR UPDATE?
claimClob CLOB := claim;          -- claim holds the large data
v_buf     VARCHAR2(1000);
amount    BINARY_INTEGER := 1000;
position  BINARY_INTEGER := 1;
[Code]....
Why does the FOR UPDATE not do anything?
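For reference, a minimal sketch of the working pattern, with a hypothetical my_table: lock the row with FOR UPDATE inside the transaction, write through the locator, then commit:

DECLARE
  l_loc CLOB;
  l_buf VARCHAR2(1000);
  l_amt BINARY_INTEGER := 1000;
  l_pos INTEGER := 1;
BEGIN
  SELECT clob_col INTO l_loc
    FROM my_table
   WHERE id = 1
     FOR UPDATE;               -- locks the row; required before writing the LOB
  l_buf := RPAD('y', 1000, 'y');
  DBMS_LOB.WRITE(l_loc, l_amt, l_pos, l_buf);
  COMMIT;                      -- releases the lock and makes the write visible
END;
/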
View 4 Replies
May 8, 2010
Since XML files contain only character data, we could/should store them in a CLOB rather than a BLOB.
But one of my friends has a table where a column is defined as BLOB, and I came to know that XML data is being stored there. I searched for an article with the keyword 'How to insert large XML data in BLOB' but that did not work. How do I store large XML content in a BLOB, and how do I extract it?
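A minimal sketch of both directions using DBMS_LOB.CONVERTTOBLOB / CONVERTTOCLOB (table and column names are hypothetical):

DECLARE
  l_clob CLOB := '<root><item>large xml content...</item></root>';
  l_blob BLOB;
  l_doff INTEGER := 1;
  l_soff INTEGER := 1;
  l_lang INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn INTEGER;
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_blob, TRUE);
  DBMS_LOB.CONVERTTOBLOB(l_blob, l_clob, DBMS_LOB.LOBMAXSIZE,
                         l_doff, l_soff, DBMS_LOB.DEFAULT_CSID,
                         l_lang, l_warn);
  INSERT INTO xml_store (id, xml_blob) VALUES (1, l_blob);
  COMMIT;
  -- extraction is the mirror image, via DBMS_LOB.CONVERTTOCLOB
END;
/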
View 2 Replies
Oct 9, 2013
I have encountered some problems in SQL. I want to create a table with a bunch of prepared data. For ease of use, I chose to generate a SQL file which contains all the SQL clauses used to create the table and insert the data, so all the data can only be inserted into the table using SQL statements.
My questions:
1) If the data for a column is large (for example, 1 MB of text), how do I insert it using SQL? Is there a piecewise method?
2) And how can I insert BLOB data using a SQL clause?
What I want is to enclose all the operations in a single SQL file, and when the table is needed, just execute this SQL file.
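For question 1, the script can create the row with an empty LOB and then append pieces, each under the 32K literal limit; for question 2, BLOB pieces go in as hex via HEXTORAW. A sketch (table and column names are hypothetical):

INSERT INTO my_table (id, big_text, big_bin)
VALUES (1, EMPTY_CLOB(), EMPTY_BLOB());

DECLARE
  l_clob CLOB;
  l_blob BLOB;
BEGIN
  SELECT big_text, big_bin INTO l_clob, l_blob
    FROM my_table WHERE id = 1 FOR UPDATE;
  DBMS_LOB.WRITEAPPEND(l_clob, 12, 'first piece ');   -- repeat per piece
  DBMS_LOB.WRITEAPPEND(l_clob, 12, 'second piece');
  DBMS_LOB.WRITEAPPEND(l_blob, 4, HEXTORAW('DEADBEEF'));
  COMMIT;
END;
/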
View 2 Replies
Aug 15, 2012
What could be an effective data type to store large integer values like 50,000 or 10,000,000?
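NUMBER handles both comfortably; an integer constraint can be expressed as NUMBER(38,0), good for up to 38 significant digits. For example:

CREATE TABLE t (big_val NUMBER(38,0));
INSERT INTO t VALUES (50000);
INSERT INTO t VALUES (10000000);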
View 3 Replies
Apr 26, 2010
I have a query on how to view sample data from a very large table (more than 10 million rows).
I just need to see some sample data from the large table (to see what kind of application-related data it holds).
My question is :
Select *
from Sample_table
where rownum < 10
is this a good way to view the sample data?
My understanding was that ROWNUM is assigned to the rows only once all the rows are retrieved.
So what is the best way to view it? I am not sure of any condition to put in at the initial time of querying.
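ROWNUM is actually assigned as rows are fetched, so the predicate above stops the scan after nine rows; it just always returns the "first" rows it happens to hit. For a more random cross-section, the SAMPLE clause reads a small percentage of the table:

SELECT * FROM sample_table SAMPLE (0.01);   -- roughly 0.01% of the rows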
View 5 Replies
Oct 18, 2010
Scenario:
Our application uses two instances, one for the live active data and the other for the reports data. We have a process which moves the data from the live instance to the reports instance every night. In a single-DB environment the process works without any issues. However, when we move to the RAC environment, inserts into large tables in the reports DB get locked and we are unable to insert data into the reports DB.
What we are performing is:
Insert into my_table_rpt select * from my_table_live@db_link_to_livedb;
Issues:
my_table_rpt gets locked
We have found a workaround: disable locking on the destination table, and re-enable it after the insert.
ALTER TABLE my_table_rpt DISABLE TABLE LOCK;
Insert the data into the reports database table
Then
ALTER TABLE my_table_rpt ENABLE TABLE LOCK
Question:
Why does the large destination table (my_table_rpt) get locked in the RAC environment?
View 2 Replies
Aug 8, 2013
We have data archive scripts that move data for a date range to a different table. The script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. More info below.
CREATE TABLE "APP"."MON_TXNS" ( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE, "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE, "ID_PAYER" NUMBER(12,0), "ID_PAYER_PI" NUMBER(12,0), "ID_PAYEE" NUMBER(12,0), "ID_PAYEE_PI" NUMBER(12,0), "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE, "STR_TEXT" VARCHAR2(60 CHAR), "DAT_MERCHANT_TIMESTAMP" DATE, "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE), "DAT_EXPIRATION" DATE, "DAT_CREATION" DATE, "STR_USER_CREATION" VARCHAR2(30 CHAR), "DAT_LAST_UPDATE"
[Code]...
Data is first moved to a table in schema3.OTW, and then we delete all the rows in OTW from the original table. Below is the explain plan for the delete:

SQL> explain plan for
  2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);

Explained.

SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2798378986

-------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
View 6 Replies
Oct 8, 2013
I have a report with a single row having a large number of columns, and I have to use a scroll bar to see them all. Is it possible to design the report in the format below (half the columns on one side of the page, half on the other side)?
Column1  Data    Column11 Data
Column2  Data    Column12 Data
Column3  Data    Column13 Data
Column4  Data    Column14 Data
Column5  Data    Column15 Data
Column6  Data    Column16 Data
Column7  Data    Column17 Data
Column8  Data    Column18 Data
Column9  Data    Column19 Data
Column10 Data    Column20 Data

I am using Apex 4.2.3 on Oracle 11g XE.
View 2 Replies
Apr 20, 2009
I am trying to write a simple SQL statement which would delete data from the last n months, but keep the data for the first day of each of those months, relative to the current sysdate.
e.g.
Jan 1-30: deletes days 2-30, keeps data for the 1st
Feb 1-28: deletes days 2-28, keeps data for the 1st
Mar: (same pattern)
Apr: current sysdate
May
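A minimal sketch, assuming a DATE column txn_date and a 3-month window:

DELETE FROM my_table
 WHERE txn_date >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -3)
   AND txn_date <  TRUNC(SYSDATE, 'MM')
   AND TRUNC(txn_date) <> TRUNC(txn_date, 'MM');  -- spare the 1st of each month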
View 3 Replies
Jan 31, 2011
In my production DB, I have 3 mount points, /vol1/orcl, /vol2/orcl and /vol3/orcl, which contain the datafiles.
I have to create a standby with the same locations. How do I write the db_file_name_convert parameter?
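If the standby really uses the same paths, the parameter can be omitted altogether; it only matters when the names differ. For illustration, with a hypothetical standby root /stdby, the values are listed as primary-path, standby-path pairs:

DB_FILE_NAME_CONVERT='/vol1/orcl/','/stdby/vol1/orcl/','/vol2/orcl/','/stdby/vol2/orcl/','/vol3/orcl/','/stdby/vol3/orcl/'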
View 2 Replies
Apr 3, 2011
I am writing the following query:
SELECT DISTINCT a.list_type_code, a.list_type_name
FROM jls_list_type a, jls_list_control b
WHERE b.jalsa_srl = :jalsa_srl
AND b.list_no != a.list_type_code
ORDER BY list_type_code
I just want to display only those records from JLS_LIST_TYPE which are not present in the other table, JLS_LIST_CONTROL. For this I wrote the above query, but it is not working.
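An anti-join expresses "in A but not in B" directly; a sketch with NOT EXISTS, keeping the bind on the control table:

SELECT a.list_type_code, a.list_type_name
  FROM jls_list_type a
 WHERE NOT EXISTS (SELECT 1
                     FROM jls_list_control b
                    WHERE b.jalsa_srl = :jalsa_srl
                      AND b.list_no   = a.list_type_code)
 ORDER BY a.list_type_code;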
View 9 Replies
Sep 17, 2012
So we need a mechanism to read the data from the LONG RAW column and convert it into an actual file.
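One possible route: convert the LONG RAW to a BLOB (TO_LOB is only legal inside CREATE TABLE ... AS SELECT or INSERT ... SELECT), then stream the BLOB to disk with UTL_FILE. A sketch with hypothetical names, assuming a directory OUT_DIR exists:

CREATE TABLE t_blob AS
  SELECT id, TO_LOB(long_raw_col) AS blob_col FROM t_longraw;

DECLARE
  l_blob BLOB;
  l_fh   UTL_FILE.FILE_TYPE;
  l_pos  INTEGER := 1;
  l_amt  BINARY_INTEGER;
  l_buf  RAW(32000);
  l_len  INTEGER;
BEGIN
  SELECT blob_col INTO l_blob FROM t_blob WHERE id = 1;
  l_len := DBMS_LOB.GETLENGTH(l_blob);
  l_fh  := UTL_FILE.FOPEN('OUT_DIR', 'extract.bin', 'wb', 32767);
  WHILE l_pos <= l_len LOOP
    l_amt := 32000;
    DBMS_LOB.READ(l_blob, l_amt, l_pos, l_buf);  -- l_amt returns bytes read
    UTL_FILE.PUT_RAW(l_fh, l_buf, TRUE);
    l_pos := l_pos + l_amt;
  END LOOP;
  UTL_FILE.FCLOSE(l_fh);
END;
/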
View 1 Replies
Jun 3, 2013
I have a query in an Oracle report in which I am getting this output: manager Arnav has two employees, Inder and Kaushal, whose salaries are 10000 and 20000 respectively.
Another manager is Anjali, whose employees are Kavya and Inder, with salaries of 40000 and 10000 respectively. As Inder is repeated, I want the salary to become 0 in place of 10000 the second time. I am in a dilemma: what should I do if I want to change the 10000 to 0?
Manager       Employee  Salary
Arnav Tiwari  Inder     10000
              Kaushal   20000
Anjali        Kavya     40000
              Inder     10000
What should I do in the salary formula, based on the employee name? That is, if the name has already appeared, the salary value should be 0, and if it comes up for the first time its actual value, i.e. 10000, should be printed.
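One common approach is a package variable that remembers which employees have already printed, consulted from the formula column. A rough sketch (all names are hypothetical, and the bind references depend on the report's data model):

CREATE OR REPLACE PACKAGE emp_seen_pkg IS
  TYPE t_seen IS TABLE OF BOOLEAN INDEX BY VARCHAR2(100);
  g_seen t_seen;
END emp_seen_pkg;
/

-- formula column in the report:
FUNCTION cf_salary RETURN NUMBER IS
BEGIN
  IF emp_seen_pkg.g_seen.EXISTS(:employee) THEN
    RETURN 0;                          -- repeated name: print 0
  END IF;
  emp_seen_pkg.g_seen(:employee) := TRUE;
  RETURN :salary;                      -- first occurrence: real value
END;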
View 1 Replies
Sep 28, 2011
I'm writing a PHP page to display some data from an Oracle database. Unfortunately, I can't copy the code because it's proprietary. One of the columns in the db is of type CLOB. I'm having trouble getting the data from the CLOB column.
The way the code is now, there is a query string read into a variable, and the query string variable is passed into a function that then retrieves the data from the DB. Once the result set is returned, it is parsed to get the data from all of the columns. The issue is that I've never worked with CLOB data before, so I'm having some difficulty extracting the data for that column. I know I can use the DBMS_LOB.READ function, but I'm not sure how to apply it in this case. The following is a generic form of the query string I'm using, with b.clob_col being the column I'm having issues with.
$queryString = <<<EOD
SELECT a.col1,
a.col2,
a.col3,
b.col4,
b.col5,
b.clob_col
from table1 b
inner join b.col4
on blah1
inner join a.col2
on blah2
where blah
and blah
EOD;
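If the values fit in 4000 characters, the quickest fix is to have the database hand back plain text by wrapping the column in DBMS_LOB.SUBSTR (arguments are amount, then offset); longer values need the locator fetched and read in pieces with DBMS_LOB.READ. A sketch of the wrapped select:

SELECT DBMS_LOB.SUBSTR(b.clob_col, 4000, 1) AS clob_text
  FROM table1 b;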
View 1 Replies
May 27, 2013
I have a question about the ADD VOLUME command; I can't understand the difference between ADD DISK and ADD VOLUME. What is the difference between them? When should I use each one? How can I control the striping and mirroring (NORMAL and HIGH redundancy) when adding VOLUMEs to a DISKGROUP? Can I add a volume to a failgroup?
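For comparison, a sketch of both commands (disk path, names and sizes are placeholders). ADD DISK grows the disk group's raw capacity, while ADD VOLUME (11.2 ADVM) carves a volume out of existing disk group space, typically to hold an ACFS file system, inheriting the disk group's NORMAL/HIGH redundancy unless overridden:

ALTER DISKGROUP data ADD DISK '/dev/rdsk/c3t1d0s4';

ALTER DISKGROUP data ADD VOLUME vol1 SIZE 10G;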
View 1 Replies
Apr 12, 2011
I'm trying to write a package which will allow my users to send emails, based on templates, from our system. The email body templates are stored as text in a CLOB column in a EMAIL_TEMPLATES table, along with an email_id. Here's an example:
Quote:Dear [dearname],
I write with regard to your child, [childname] who is attending [schoolname].
Now, I'm trying to write a procedure that will take an email_id and the fields such as dearname, childname, etc., merge the data into the template and return the final body of the email.
The email body will then finally be inserted into a standard HTML email template and be sent using UTL_MAIL.
I was planning on using a series of nested REPLACE functions, but I'm sure there must be a tidier way than that, particularly as I would like the procedure to be as flexible as possible so I can easily adapt it to receive additional fields for different email templates in future.
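One tidier possibility is to drive a single REPLACE loop from an associative array keyed on the placeholder names, so new fields only mean new array entries. A sketch, assuming a body column on EMAIL_TEMPLATES:

DECLARE
  TYPE t_fields IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(50);
  l_fields t_fields;
  l_body   CLOB;
  l_key    VARCHAR2(50);
BEGIN
  SELECT body INTO l_body FROM email_templates WHERE email_id = 1;
  l_fields('dearname')   := 'Mr Smith';
  l_fields('childname')  := 'Alex';
  l_fields('schoolname') := 'Hilltop Primary';
  l_key := l_fields.FIRST;
  WHILE l_key IS NOT NULL LOOP
    l_body := REPLACE(l_body, '[' || l_key || ']', l_fields(l_key));
    l_key  := l_fields.NEXT(l_key);
  END LOOP;
  -- l_body now holds the merged email body
END;
/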
View 3 Replies
Aug 29, 2013
Environment:Oracle 11.2.0.3 EE on Solaris.
I am trying to extract XML data stored in a CLOB and produce a flat file for the users' consumption. They have sent me what they think is the XSD for the XML data, but when I look at the data in the CLOB I don't see the tags documented in the XSD. I'd like to produce an XSD based on the actual data, to compare with what they've sent. Is this possible? I am able to query the XML data with XMLTable using the tags I see in the data, and it's working fine.
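There is no built-in "generate an XSD from an instance document", but an inventory of the element names actually present can be pulled with XMLTable, which at least shows what to compare against the supplied XSD. A sketch with hypothetical table and column names:

SELECT DISTINCT x.COLUMN_VALUE.getStringVal() AS element_name
  FROM my_table t,
       XMLTABLE('for $e in //* return name($e)'
                PASSING XMLTYPE(t.xml_clob)) x;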
View 2 Replies
Sep 30, 2011
Is it possible to create triggers on the various tables and views that exist in the data dictionary (e.g. the dynamic performance views), to fire whenever Oracle itself performs any DML on them?
View 6 Replies
Nov 15, 2010
I am in the very early planning stages of a project the goal of which is to identify separate organizations which may in fact be the same organization.
Our first implementation of this task was a process designed to look for a few thousand organizations in a pool of a few hundred thousand organizations. To accomplish this we made heavy use of Oracle's Text index as well as a custom index type we created which utilized n-grams. This approach worked quite well for on-demand editing of the organizations, in which a user might log in and say: in addition to what we already know about organization A, we also know x, y and z; does that change anything? It also worked acceptably well for the weekly bulk processing of our "known" information, running for a couple of hours on the weekend.
We have now been tasked with reworking this initial implementation only now we want to look at a set consisting of several million organizations for potential matches which exist within the set. As in our initial implementation we will be breaking what we know about organizations into groupings so we aren't comparing a phone number to an email address and normalizing the data as much as we can so we ignore things like case and punctuation. Even after all this we are still talking about looking for similar values in a group which might be in the tens of millions (some types of data will have more than one value per organization).
My initial thought on the problem is to use n-grams though not in the way we did in the past. The basic idea here is that we break the search values up into all the substrings it is made of and look for other values which have a high number of those substrings in common.
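To make the n-gram idea concrete, a sketch of a trigram splitter as a pipelined function; candidate pairs are then values sharing many trigrams:

CREATE OR REPLACE FUNCTION trigrams (p_str IN VARCHAR2)
  RETURN SYS.ODCIVARCHAR2LIST PIPELINED
IS
BEGIN
  FOR i IN 1 .. GREATEST(NVL(LENGTH(p_str), 0) - 2, 0) LOOP
    PIPE ROW (SUBSTR(p_str, i, 3));
  END LOOP;
  RETURN;
END trigrams;
/

-- e.g. trigrams('oracle') -> ora, rac, acl, cle
SELECT COLUMN_VALUE FROM TABLE(trigrams('oracle'));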
SQL & PL/SQL was the best place for the question, but I could not think of a better one.
View 10 Replies
May 22, 2011
I was asked by my systems administrator if I could tell him how much redo log volume, on average, we generate in a day.
Just wondering how I might calculate this?
We have several production databases. If I wanted to calculate the above for one of them, would I take all the redo logs for a day and total up the size in bytes? Maybe take a 5-day work week and average over the 5 days?
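If the database is in archivelog mode, v$archived_log already tracks this; a sketch of the per-day totals, which can then be averaged over whatever window you like:

SELECT TRUNC(first_time) AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS redo_mb
  FROM v$archived_log
 WHERE dest_id = 1          -- one row per destination; avoid double counting
 GROUP BY TRUNC(first_time)
 ORDER BY day;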
View 4 Replies