I am facing a problem with UTL_HTTP.WRITE_TEXT in my PL/SQL application. My requirement is to write data larger than 32 KB, so I used a CLOB variable with WRITE_TEXT, but it still raises a "numeric or value error" when the data size is above 8 KB.
I have read that chunked transfer encoding will work, but I couldn't find out how this is done.
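For reference, here is a minimal sketch of the chunked approach, assuming a hypothetical endpoint URL and stand-in payload: request HTTP/1.1, send Transfer-Encoding: chunked instead of Content-Length, and feed the CLOB to WRITE_TEXT in pieces small enough for a VARCHAR2.

DECLARE
  l_req    UTL_HTTP.req;
  l_resp   UTL_HTTP.resp;
  l_clob   CLOB;
  l_offset PLS_INTEGER := 1;
  l_chunk  PLS_INTEGER := 8000;
  l_len    PLS_INTEGER;
BEGIN
  -- build a payload well past 32 KB (stand-in content)
  DBMS_LOB.createtemporary(l_clob, TRUE);
  FOR i IN 1 .. 20 LOOP
    DBMS_LOB.writeappend(l_clob, 4000, RPAD('x', 4000, 'x'));
  END LOOP;

  -- chunked transfer encoding requires HTTP/1.1
  l_req := UTL_HTTP.begin_request('http://example.com/upload', 'POST', 'HTTP/1.1');
  UTL_HTTP.set_header(l_req, 'Content-Type', 'text/xml');
  UTL_HTTP.set_header(l_req, 'Transfer-Encoding', 'chunked');

  -- write the CLOB in VARCHAR2-sized pieces
  l_len := DBMS_LOB.getlength(l_clob);
  WHILE l_offset <= l_len LOOP
    UTL_HTTP.write_text(l_req, DBMS_LOB.substr(l_clob, l_chunk, l_offset));
    l_offset := l_offset + l_chunk;
  END LOOP;

  l_resp := UTL_HTTP.get_response(l_req);
  UTL_HTTP.end_response(l_resp);
  DBMS_LOB.freetemporary(l_clob);
END;
/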
I'm looking for a way to insert strings larger than 40,000 characters into a CLOB column without getting "ORA-01461: can bind a LONG value only for insert into a LONG column".
Something like this:
insert into MyClobTable(ID,Data) values ('101','A string containing more than 40000 characters...')
The problem is that a Java application concatenates the string from an MSSQL DB, so the string is never stored in my Oracle DB first. As far as I'm aware this means I can't chop the string into pieces and use DECLARE to put the pieces in variables, right?
Below is an example I found, but I don't think I can apply it to my case, correct?
SQL> CREATE TABLE myClob
  2  (id NUMBER PRIMARY KEY,
  3   clob_data CLOB);
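For what it's worth, from JDBC the usual cure for ORA-01461 is to bind the value as a LOB (PreparedStatement.setClob or setCharacterStream) rather than as a plain string. The same piecewise idea can also be done server-side; a sketch, reusing MyClobTable from the question:

DECLARE
  l_clob  CLOB;
  l_piece VARCHAR2(32767) := RPAD('x', 32767, 'x');  -- stand-in for one piece of the long string
BEGIN
  -- insert an empty LOB locator first, then append to it piecewise
  INSERT INTO MyClobTable (ID, Data)
  VALUES ('101', EMPTY_CLOB())
  RETURNING Data INTO l_clob;

  -- each call may carry up to 32,767 characters; repeat once per piece
  DBMS_LOB.writeappend(l_clob, LENGTH(l_piece), l_piece);
  DBMS_LOB.writeappend(l_clob, LENGTH(l_piece), l_piece);
  COMMIT;
END;
/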
I have read the following article: [URL]. I want to know whether there is a way to write a CLOB or an XML document to a file on disk if we do not have the CREATE ANY DIRECTORY privilege. Many routines, like UTL_FILE.FOPEN, DBMS_XSLPROCESSOR.CLOB2FILE, or DBMS_XMLDOM.WRITETOFILE, need an Oracle directory object to exist (created with CREATE OR REPLACE DIRECTORY ...). But if we don't have that privilege, is there a way to export a CLOB to a file as XML? (The CLOB contains 100% XML; CLOB is just the column data type.) The CLOB data contains 48,200 characters.
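Note that the privilege is only needed to create the directory object, not to use it. A sketch of the usual division of labour, with the path, directory name and schema all hypothetical:

-- run once by a DBA (or anyone with CREATE ANY DIRECTORY):
CREATE OR REPLACE DIRECTORY export_dir AS '/u01/app/exports';
GRANT READ, WRITE ON DIRECTORY export_dir TO app_user;

-- then app_user can write the CLOB without holding the privilege itself:
DECLARE
  l_xml CLOB := '<root>...</root>';  -- stand-in for the 48,200-character document
BEGIN
  DBMS_XSLPROCESSOR.clob2file(l_xml, 'EXPORT_DIR', 'data.xml');
END;
/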
I am trying to build an XML document in a CLOB PL/SQL variable. We are using an Oracle 11gR2 database.
But when I reach more than 32,767 bytes, my code fails.
Is there any way to store more than 32,767 bytes of data in a PL/SQL variable of type CLOB?
I am capturing the error message below:
ORA-06512: at "SCMSA_HIST.SCMSA_POC_HANDSET_MOBILITY_PKG", line 1480 (length of xmlfile at that point: 33078)
I am adding my code here for further clarification:
PROCEDURE GET_HANDSET_DATA_PRC (p_ntlogin_id    IN         VARCHAR2,
                                p_handset_data  OUT NOCOPY CLOB)
IS
  /******************************************************************************
     NAME:    GET_HANDSET_DATA_PRC
     PURPOSE:

     Date        Ver  By   Description
     ----------  ---  ---  -----------
  ******************************************************************************/
  CURSOR c_region_data IS
    SELECT NVL2 (T.ntlogin, T.ntlogin, pos.ntlogin)          AS ntlogin,
           NVL2 (T.first_name, T.first_name, pos.first_name) AS first_name,
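A CLOB variable itself can hold gigabytes; the 32,767-byte ceiling usually comes from building the document through a VARCHAR2 intermediate (for example plain || concatenation into a local string). A minimal sketch of growing a CLOB past the limit with DBMS_LOB, with the fragment content as a stand-in:

DECLARE
  l_xml   CLOB;
  l_chunk VARCHAR2(32767);
BEGIN
  DBMS_LOB.createtemporary(l_xml, TRUE);
  FOR i IN 1 .. 10 LOOP
    l_chunk := RPAD('<row>data</row>', 4000, 'x');  -- stand-in for one generated fragment
    DBMS_LOB.writeappend(l_xml, LENGTH(l_chunk), l_chunk);
  END LOOP;
  DBMS_OUTPUT.put_line(DBMS_LOB.getlength(l_xml));  -- 40000, comfortably past 32767
  DBMS_LOB.freetemporary(l_xml);
END;
/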
The production stats have been implemented in development. The stats on dev were gathered two months back, while in production the stats were gathered two weeks back.
My question: shouldn't the high volume of data cause plan changes in both environments? My thinking is that the plans can differ because the high volume of data keeps changing in prod, which may lead to a different plan there.
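For context, a hedged sketch of what carrying prod stats over to development typically looks like with DBMS_STATS (schema and staging-table names hypothetical):

BEGIN
  -- on production: stage the schema stats in a regular table
  DBMS_STATS.create_stat_table(ownname => 'APP', stattab => 'STATS_COPY');
  DBMS_STATS.export_schema_stats(ownname => 'APP', stattab => 'STATS_COPY');
  -- move STATS_COPY to dev (e.g. with Data Pump), then on dev:
  DBMS_STATS.import_schema_stats(ownname => 'APP', stattab => 'STATS_COPY');
END;
/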
I am considering all of the capabilities and benefits of using Data Pump for exporting and importing extremely large data files. I would like to know whether importing to tape is possible, and if so, whether the data would be accessible if needed later.
I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not enough memory in the buffer cache, so how does that work? Will the transaction succeed or fail? And I have another doubt: can Oracle read data from memory only, or can it also read directly from disk?
Since XML files only contain character data, we could/should store them in a CLOB rather than a BLOB.
But a friend of mine has a table where a column is defined as BLOB, and it turns out XML data is being stored in it. I searched for articles with keywords like 'How to insert large XML data in BLOB' but found nothing that worked. How do I store large XML content in a BLOB, and how do I extract it again?
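One hedged sketch of the round trip, converting between CLOB and BLOB with DBMS_LOB in the database character set (the XML literal is a stand-in):

DECLARE
  l_clob        CLOB := '<root><msg>hello</msg></root>';
  l_blob        BLOB;
  l_back        CLOB;
  l_dest_offset INTEGER := 1;
  l_src_offset  INTEGER := 1;
  l_lang_ctx    INTEGER := DBMS_LOB.default_lang_ctx;
  l_warning     INTEGER;
BEGIN
  -- CLOB -> BLOB (store this BLOB in the table)
  DBMS_LOB.createtemporary(l_blob, TRUE);
  DBMS_LOB.converttoblob(l_blob, l_clob, DBMS_LOB.lobmaxsize,
                         l_dest_offset, l_src_offset,
                         DBMS_LOB.default_csid, l_lang_ctx, l_warning);

  -- BLOB -> CLOB (extract the XML again)
  l_dest_offset := 1;
  l_src_offset  := 1;
  DBMS_LOB.createtemporary(l_back, TRUE);
  DBMS_LOB.converttoclob(l_back, l_blob, DBMS_LOB.lobmaxsize,
                         l_dest_offset, l_src_offset,
                         DBMS_LOB.default_csid, l_lang_ctx, l_warning);
END;
/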
I have encountered some problems in SQL. I want to create a table with a bunch of prepared data. For ease of use, I chose to generate a SQL file which contains all the SQL statements used to create the table and insert the data, so all the data can only be inserted into the table via SQL statements.
My questions: 1) If the data for a column is large (for example, 1 MB of text), how do I insert it using SQL? Is there a piecewise method? 2) How can I insert BLOB data using a SQL statement?
What I want is to enclose all the operations in a single SQL file, so that when the table is needed, I just execute that SQL file.
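A string literal in a plain INSERT is capped at 4,000 bytes, so scripts usually wrap the large pieces in an anonymous PL/SQL block; small binary values can be written with HEXTORAW. A sketch, reusing the myClob table from earlier (the BLOB table is hypothetical):

-- piecewise CLOB insert inside the script
DECLARE
  l_clob CLOB;
BEGIN
  INSERT INTO myClob (id, clob_data) VALUES (1, EMPTY_CLOB())
  RETURNING clob_data INTO l_clob;
  DBMS_LOB.writeappend(l_clob, 5, 'part1');  -- repeat once per piece, each under 32 KB
  DBMS_LOB.writeappend(l_clob, 5, 'part2');
  COMMIT;
END;
/

-- small BLOB literal via HEXTORAW
INSERT INTO myBlobTable (id, blob_data) VALUES (1, HEXTORAW('DEADBEEF'));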
Our application uses two instances: one for the live, active data and the other for the reports data. We have a process which moves the data from the live instance to the reports instance every night. In a single-DB environment the process works without any issues. However, since we moved to the RAC environment, the large table in the reports DB gets locked during the insert and we are unable to insert data into the reports DB.
What we are performing is:
Insert into my_table_rpt select * from my_table_live@db_link_to_livedb;
Issues:
my_table_rpt gets locked.
We found a workaround: disable table locking on the destination, run the insert, then re-enable locking:
ALTER TABLE my_table_rpt DISABLE TABLE LOCK;
Insert the data into the reports database table, then:
ALTER TABLE my_table_rpt ENABLE TABLE LOCK;
Question:
Why does the large destination table (my_table_rpt) get locked in the RAC environment?
We have data archive scripts which move data for a date range to a different table. Each script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion takes too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. More info below:
Plan hash value: 2798378986

-------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 | 1239K |    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
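The HASH JOIN RIGHT SEMI suggests the delete is driven by an IN/EXISTS subquery rather than a simple primary-key equality, which would explain the full scan at this row estimate. To reproduce and inspect the plan, something like the following may help (column names guessed from the plan above):

EXPLAIN PLAN FOR
  DELETE FROM mon_txns t
   WHERE t.txn_id IN (SELECT txn_id FROM otw_id_txn);

SELECT * FROM TABLE(DBMS_XPLAN.display);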
I have a report with a single row having a large number of columns, so I have to use a scroll bar to see all of them. Is it possible to design the report in the format below (half the columns on one side of the page, half on the other)?
Column1  Data    Column11  Data
Column2  Data    Column12  Data
Column3  Data    Column13  Data
Column4  Data    Column14  Data
Column5  Data    Column15  Data
Column6  Data    Column16  Data
Column7  Data    Column17  Data
Column8  Data    Column18  Data
Column9  Data    Column19  Data
Column10 Data    Column20  Data

I am using APEX 4.2.3 on Oracle 11g XE.
I am trying to write a simple SQL statement which deletes the data from the last n months but keeps the data for the first day of each of those months, relative to the current SYSDATE.
e.g.
Jan: delete days 2 - 30, keep the data for the 1st
Feb: delete days 2 - 28, keep the data for the 1st
Mar, Apr: likewise
May: current SYSDATE
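A hedged sketch of one way to express that, with the table and date column hypothetical and a four-month window assumed:

DELETE FROM my_data
 WHERE data_date >= ADD_MONTHS(TRUNC(SYSDATE, 'MM'), -4)   -- start of the window
   AND data_date <  TRUNC(SYSDATE, 'MM')                   -- up to, not including, this month
   AND TRUNC(data_date) <> TRUNC(data_date, 'MM');         -- spare every 1st of the month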
I am writing the following query:

SELECT DISTINCT a.list_type_code, a.list_type_name
  FROM jls_list_type a, jls_list_control b
 WHERE b.jalsa_srl = :jalsa_srl
   AND b.list_no != a.list_type_code
 ORDER BY list_type_code;
I just want to display only those records from JLS_LIST_TYPE which are not present in the other table, JLS_LIST_CONTROL. For this I wrote the above query, but it is not working.
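The != join returns a type as long as any control row differs from it, so it cannot express "not present in the other table". An anti-join can; a sketch:

SELECT a.list_type_code, a.list_type_name
  FROM jls_list_type a
 WHERE NOT EXISTS (SELECT 1
                     FROM jls_list_control b
                    WHERE b.jalsa_srl = :jalsa_srl
                      AND b.list_no   = a.list_type_code)
 ORDER BY a.list_type_code;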
I have a query in an Oracle report which gives this output: manager Arnav has two employees, Inder and Kaushal, whose salaries are 10000 and 20000 respectively.
Another manager is Anjali, whose employees are Kavya and Inder, with salaries 40000 and 10000 respectively. As Inder is repeated, I want the salary to become 0 in place of 10000 the second time. I am in a dilemma about what to do if I want to change that 10000 to 0:

Manager        Employee  Salary
Arnav Tiwari   Inder     10000
               Kaushal   20000
Anjali         Kavya     40000
               Inder     10000

What should I put in the formula for salary? That is, based on the employee name: if the name has already appeared, the salary value should be 0, and if it appears for the first time, its actual value (here 10000) should be printed.
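One hedged way to do this in the report's source query rather than in a formula column, using an analytic row number (emp_report stands in for the report's base query):

SELECT manager,
       employee,
       CASE
         WHEN ROW_NUMBER() OVER (PARTITION BY employee ORDER BY manager) = 1
           THEN salary
         ELSE 0
       END AS salary
  FROM emp_report;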
I'm writing a PHP page to display some data from an Oracle database. Unfortunately, I can't copy the code because it's proprietary. One of the columns in the db is of type CLOB. I'm having trouble getting the data from the CLOB column.
The way the code works now, a query string is read into a variable, and that variable is passed to a function that retrieves the data from the DB. Once the result set is returned, it is parsed to get the data from all of the columns. The issue is that I've never worked with CLOB data before, so I'm having some difficulty extracting the data for that column. I know I can use the DBMS_LOB.READ function, but I'm not sure how to apply it in this case. The following is a generic form of the query string I'm using, with b.clob_col being the column I'm having issues with.
$queryString = <<<EOD
SELECT a.col1, a.col2, a.col3,
       b.col4, b.col5, b.clob_col
  FROM table1 b
 INNER JOIN b.col4 ON blah1
 INNER JOIN a.col2 ON blah2
 WHERE blah AND blah
EOD;
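Two common options here: in PHP's oci8 driver the column comes back as an OCI-Lob object whose load() method returns the contents as a string, or the LOB handling can be avoided entirely on the SQL side when the values fit in a VARCHAR2, e.g. (length assumed):

SELECT DBMS_LOB.substr(b.clob_col, 4000, 1) AS clob_text   -- amount 4000, offset 1
  FROM table1 b;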
I have a question about the ADD VOLUME command; I can't understand the difference between ADD DISK and ADD VOLUME. What is the difference between them? When should I use each one? How can I control the striping and mirroring (NORMAL and HIGH redundancy) when adding VOLUMEs to a DISKGROUP? Can I add a volume to a failgroup?
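For reference, a sketch of the two commands side by side (disk group name, device path and sizes hypothetical):

-- add a physical disk to an existing disk group:
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DISK5';

-- carve an ADVM volume out of the disk group's existing space,
-- optionally overriding the group's redundancy per volume:
ALTER DISKGROUP data ADD VOLUME vol1 SIZE 10G;
ALTER DISKGROUP data ADD VOLUME vol2 SIZE 10G MIRROR;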
I'm trying to write a package which will allow my users to send emails, based on templates, from our system. The email body templates are stored as text in a CLOB column in an EMAIL_TEMPLATES table, along with an email_id. Here's an example:
Dear [dearname],
I write with regard to your child, [childname] who is attending [schoolname].
Now, I'm trying to write a procedure that will take an email_id and the fields such as dearname, childname, etc., merge the data into the template and return the final body of the email.
The email body will then finally be inserted into a standard HTML email template and be sent using UTL_MAIL.
I was planning on using a series of nested REPLACE functions, but I'm sure there must be a tidier way than that, particularly as I would like the procedure to be as flexible as possible so I can easily adapt it to receive additional fields for different email templates in future.
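One hedged alternative is to drive a single REPLACE loop from a token/value map, so adding a field is just one more entry (the column name body_template and the sample values are assumptions):

DECLARE
  TYPE t_tokens IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(64);
  l_tokens t_tokens;
  l_body   CLOB;
  l_key    VARCHAR2(64);
BEGIN
  SELECT body_template
    INTO l_body
    FROM email_templates
   WHERE email_id = 1;

  l_tokens('[dearname]')   := 'Mr Smith';
  l_tokens('[childname]')  := 'Alex';
  l_tokens('[schoolname]') := 'Hilltop Primary';

  -- walk the map and apply each substitution once
  l_key := l_tokens.FIRST;
  WHILE l_key IS NOT NULL LOOP
    l_body := REPLACE(l_body, l_key, l_tokens(l_key));
    l_key  := l_tokens.NEXT(l_key);
  END LOOP;
END;
/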
I am trying to extract XML data stored in a CLOB and produce a flat file for the user's consumption. They have sent me what they think is the XSD for the XML data, but when I look at the data in the CLOB I don't see the tags documented in the XSD. I'd like to produce an XSD based on the actual data to compare with what they've sent. Is this possible? I am able to query the XML data with XMLTable, using the tags I see in the data, and that works fine.
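For context, the kind of XMLTable query described, with the table, column, element names and paths all standing in for the real ones:

SELECT x.id, x.name
  FROM my_table t,
       XMLTABLE('/root/record'
                PASSING XMLTYPE(t.clob_col)
                COLUMNS id   VARCHAR2(20)  PATH '@id',
                        name VARCHAR2(100) PATH 'name') x;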
Is it possible to create triggers on the various tables and views in the data dictionary (i.e. the dynamic performance views) that fire whenever any DML operation is performed on them by Oracle itself?
I am in the very early planning stages of a project the goal of which is to identify separate organizations which may in fact be the same organization.
Our first implementation of this task was a process designed to look for a few thousand organizations in a pool of a few hundred thousand organizations. To accomplish this we made heavy use of Oracle Text indexes, as well as a custom index type we created which utilized n-grams. This approach worked quite well for on-demand editing of the organizations, in which a user might log in and say, "in addition to what we already know about organization A, we also know x, y and z; does that change anything?" It also worked acceptably well for the bulk processing we did on our "known" information once a week, running for a couple of hours on the weekend.
We have now been tasked with reworking this initial implementation, only now we want to look at a set consisting of several million organizations for potential matches which exist within the set. As in our initial implementation, we will break what we know about organizations into groupings, so we aren't comparing a phone number to an email address, and we will normalize the data as much as we can, so that we ignore things like case and punctuation. Even after all this, we are still talking about looking for similar values in a group which might number in the tens of millions (some types of data will have more than one value per organization).
My initial thought on the problem is to use n-grams, though not in the way we did in the past. The basic idea is to break each search value up into all the substrings it is made of and look for other values which have a high number of those substrings in common.
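To make the idea concrete, a hedged sketch of generating the trigrams of one value in SQL; matching would then amount to counting shared trigrams between values:

SELECT SUBSTR(val, LEVEL, 3) AS trigram
  FROM (SELECT 'ACME CORPORATION' AS val FROM dual)
CONNECT BY LEVEL <= LENGTH(val) - 2;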
I'm not sure SQL & PL/SQL is the best place for this question, but I could not think of a better one.
I was asked by my systems administrator if I could tell him how much redo log volume, on average, we generate in a day.
I'm just wondering how I might calculate this.
We have several production databases. If I wanted to calculate the above for one of them, would it be a matter of taking all the redo logs for a day and totalling up their size in bytes? Maybe take a five-day work week and average over the five days?
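If the database runs in ARCHIVELOG mode, one hedged way to approximate this is to total the archived log history per day:

SELECT TRUNC(completion_time)                         AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024)  AS redo_mb
  FROM v$archived_log
 GROUP BY TRUNC(completion_time)
 ORDER BY day;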