SQL & PL/SQL :: Execute Large (clob) Dynamic Sql By DBMS_SQL?
Aug 2, 2011

Is it possible to execute large (CLOB) dynamic SQL with DBMS_SQL? Is there any restriction like length ...
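As a point of reference, here is a minimal sketch of the two usual approaches, assuming Oracle 11g (which added a CLOB overload of DBMS_SQL.PARSE and CLOB support in EXECUTE IMMEDIATE; on earlier releases, DBMS_SQL.PARSE instead accepts a DBMS_SQL.VARCHAR2A table holding the statement in pieces):

DECLARE
  l_sql CLOB;   -- imagine this built up well past 32,767 characters
  c     INTEGER;
  rc    INTEGER;
BEGIN
  l_sql := 'BEGIN NULL; END;';
  c := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c, l_sql, DBMS_SQL.NATIVE);   -- 11g: PARSE takes a CLOB directly
  rc := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.CLOSE_CURSOR(c);
  -- 11g alternative: EXECUTE IMMEDIATE l_sql;
END;
/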
I'm working with old code that uses dbms_sql.execute to build and execute dynamic SQL. In our case, the user can select varying columns (I think up to 20) with different WHERE conditions as needed.
After building the SQL, here's an example:
WITH ph AS
(SELECT ph.* FROM po_header ph WHERE 1 = 2),
pf AS
(SELECT DISTINCT pf.order_id, pf.fund
FROM po_fau pf, ph
WHERE 1 = 1
AND ph.order_id = pf.order_id
[code]....
The table row counts are roughly:
po_header = ~567,746
po_fau = ~2,153,570
The PK "order_id" is a NUMBER(10) NOT NULL, and a snippet of the code looks like:
nDDL_Cursor := dbms_sql.open_cursor;
dbms_sql.parse(nDDL_Cursor, sSQLStr, 2);  -- language flag 2 = dbms_sql.v7
FOR x IN 1 .. nCols LOOP
    sCols(x) := '';
    dbms_sql.define_column(nDDL_Cursor, x, sCols(x), 100);  -- fetch every column as VARCHAR2(100)
END LOOP;
nError := dbms_sql.execute(nDDL_Cursor);
Why does the execute call take ~4.5 seconds elapsed, but if I change "1 = 1" above to "1 = 2" it takes ~0.2 seconds? If I run the above query interactively it takes ~0.2 seconds. Shouldn't the above query, when joining
ph.order_id = pf.order_id
return zero rows back instantly, or does dbms_sql.execute do some other type of parsing internally that takes CPU time?
I am using SQL Developer and I am teaching myself PL/SQL. What I am trying to accomplish is to run a SELECT query across many database links at runtime and to insert the results into a local table. So far, I have only gotten as far as querying the database link names. I am stuck where my variable supplies the database link name: the link name is not recognized, and I can't quite grasp why. Below is the first part of my script, which retrieves the column values with no problem:
001 SET SERVEROUTPUT ON
002 DECLARE
003 db_link_var        VARCHAR2(30);
004 source_cursor      INTEGER;
005 destination_cursor INTEGER;
006 src_csr            INTEGER;
007 dst_csr            INTEGER;
008 rundate_var        DATE;
009 instance_name_var  VARCHAR2(30);
[code]....
If I break the script down to the bare SELECT query, it queries absolutely fine. If I run the script in its entirety, it does not pick up the variable dbl as a dynamic database link and fails with ORA-02019: "connection description for remote database not found".
If I comment out line 031 and prevent the loop from querying the database links (so it only fetches the first value of db_link from dba_db_links), the script completes and a row is inserted into my local table.
I suspect the looping is incorrect, but I understand it is not going to do anything beyond line 059.
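One detail worth knowing here: a database link name cannot be supplied through a bind variable - it has to be concatenated into the statement text before the statement is parsed, which is likely why the variable is not being picked up. A minimal sketch of the pattern, with the local table name local_results being a hypothetical stand-in:

DECLARE
  v_sql VARCHAR2(4000);
BEGIN
  FOR r IN (SELECT db_link FROM dba_db_links) LOOP
    -- the link name becomes part of the SQL text, not a bind value
    v_sql := 'INSERT INTO local_results (instance_name) ' ||
             'SELECT instance_name FROM v$instance@' || r.db_link;
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
  COMMIT;
END;
/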
Let me know the difference between DBMS_SQL and EXECUTE IMMEDIATE.
*) Which one is faster?
*) Where should DBMS_SQL be used and where EXECUTE IMMEDIATE (the correct usage of both)?
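A rough rule of thumb: EXECUTE IMMEDIATE (native dynamic SQL) is simpler and generally faster when the statement's shape - its binds and select list - is known at compile time, while DBMS_SQL is needed when the number or types of columns or binds are only known at run time (so-called method 4 dynamic SQL), or when one parsed statement must be re-executed with many bind sets. A minimal sketch of the same update done both ways, with table and column names hypothetical:

DECLARE
  c      INTEGER;
  n_rows INTEGER;
BEGIN
  -- native dynamic SQL: one statement, binds by position
  EXECUTE IMMEDIATE 'UPDATE emp_copy SET sal = sal * :pct WHERE deptno = :d'
    USING 1.1, 10;

  -- DBMS_SQL: verbose, but parse/bind/execute are separate, inspectable steps
  c := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c, 'UPDATE emp_copy SET sal = sal * :pct WHERE deptno = :d',
                 DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(c, ':pct', 1.1);
  DBMS_SQL.BIND_VARIABLE(c, ':d', 10);
  n_rows := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/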
I am using DBMS_SQL to create and execute a dynamic query, but after execution the results returned are not correct. Is there some way I can find the final text of the dynamic query created by DBMS_SQL?
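Since DBMS_SQL only ever sees the text handed to PARSE, the most direct check is to capture that string just before parsing; failing that, the statement can usually be found in the shared pool afterwards. Both are sketched below - the LIKE pattern is obviously an assumption:

-- print (or log) the statement immediately before parsing it
DBMS_OUTPUT.PUT_LINE(v_stmt);
DBMS_SQL.PARSE(c, v_stmt, DBMS_SQL.NATIVE);

-- or search the shared pool after execution
SELECT sql_text FROM v$sql WHERE sql_text LIKE '%my distinctive fragment%';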
I'm looking for a way to insert strings larger than 40,000 characters into a CLOB field without getting "ORA-01461: can bind a LONG value only for insert into a LONG column".
Something like this:
insert into MyClobTable(ID,Data) values ('101','A string containing more than 40000 characters...')
The problem is that a Java application concatenates the string from an MSSQL DB, so I don't store the string in my Oracle DB first. As far as I'm aware, this means I can't chop my string into pieces and use DECLARE to put the pieces in variables, right?
Below is an example I found, but I don't think I can apply it to my case, correct?
SQL> CREATE TABLE myClob
2 (id NUMBER PRIMARY KEY,
3 clob_data CLOB);
Table created.
SQL>
SQL> INSERT INTO myClob VALUES (101,null);
1 row created.
SQL>
SQL> declare
2 clob_pointer CLOB;
3 v_buf VARCHAR2(1000);
4 Amount BINARY_INTEGER :=1000;
[Code]...
PL/SQL procedure successfully completed.
SQL>
SQL> drop table myClob;
Table dropped.
SQL>
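ORA-01461 typically means a character value longer than 4000 bytes went into the statement as an inline literal or a plain VARCHAR2 bind; binding the value as a CLOB avoids the limit entirely, so the string never needs to be chopped up. A minimal PL/SQL sketch against the myClob table above (from the Java side the equivalent is PreparedStatement.setClob or setCharacterStream rather than concatenating the text into the INSERT):

DECLARE
  l_data CLOB;
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_data, TRUE);
  FOR i IN 1 .. 50 LOOP   -- build a value well past 40,000 characters
    DBMS_LOB.WRITEAPPEND(l_data, 1000, RPAD('x', 1000, 'x'));
  END LOOP;
  INSERT INTO myClob (id, clob_data) VALUES (102, l_data);  -- CLOB bind: no literal limit
  COMMIT;
  DBMS_LOB.FREETEMPORARY(l_data);
END;
/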
We can execute dynamic SQL using both EXECUTE IMMEDIATE and a ref cursor. What is the difference between the two, and performance-wise, which is better?
I am facing a problem with utl_http.write_text in my PL/SQL application. My requirement is to write data of size greater than 32K, so I used a CLOB variable in write_text. But it still raises a numeric or value error when the data size is above 8K.
I have read that chunked transfer encoding will work, but I couldn't find out how this is done.
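For what it's worth, a sketch of the chunked approach (the URL and payload are hypothetical): declaring Transfer-Encoding: chunked before writing the body means no Content-Length is needed, and the payload can then be streamed in pieces so each write_text call stays well under the VARCHAR2 limit:

DECLARE
  l_req  UTL_HTTP.REQ;
  l_resp UTL_HTTP.RESP;
  l_doc  CLOB;                 -- assume this >32K payload is already built
  l_off  PLS_INTEGER := 1;
  l_amt  PLS_INTEGER := 8000;  -- chunk size
BEGIN
  l_req := UTL_HTTP.BEGIN_REQUEST('http://example.com/service', 'POST', 'HTTP/1.1');
  UTL_HTTP.SET_HEADER(l_req, 'Content-Type', 'text/xml');
  UTL_HTTP.SET_HEADER(l_req, 'Transfer-Encoding', 'chunked');
  WHILE l_off <= DBMS_LOB.GETLENGTH(l_doc) LOOP
    UTL_HTTP.WRITE_TEXT(l_req, DBMS_LOB.SUBSTR(l_doc, l_amt, l_off));
    l_off := l_off + l_amt;
  END LOOP;
  l_resp := UTL_HTTP.GET_RESPONSE(l_req);
  UTL_HTTP.END_RESPONSE(l_resp);
END;
/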
I am having a problem inserting a long file (80,000 B) into a CLOB column in an Oracle Database 10g.
ERROR at line 23:
ORA-06550: line 23, column 1:
PLS-00172: string literal too long
I am calling the Oracle stored procedure through a Unix shell script as follows:
#!/bin/ksh
cd /usr/home/dfusr/backup
i=`cat pe_proxy_master_2160.Thu.log | sed "s/'/''/g"`
sqlplus username/password@db 1> temp.log <<!
DECLARE
BEGIN
Write_Text_To_CLOB(1,'$i');
COMMIT;
END;
/
The file pe_proxy_master_2160.Thu.log is about 50000 B.
My stored procedure is as follows:
CREATE OR REPLACE PROCEDURE Write_Text_To_CLOB (
p_id IN NUMBER
, p_clob IN VARCHAR2
)
IS
[code]....
I also tried with ojdbc5.jar and ojdbc14.jar files in class path.
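Stepping back: pushing the whole file through a shell variable means it ends up as a PL/SQL string literal, and string literals are capped at 32,767 bytes (hence PLS-00172 on a 50,000 B file). Loading the file server-side through a directory object avoids literals altogether. A sketch, where LOG_DIR is a hypothetical directory object pointing at /usr/home/dfusr/backup and my_clob_table a hypothetical target:

DECLARE
  l_bfile BFILE := BFILENAME('LOG_DIR', 'pe_proxy_master_2160.Thu.log');
  l_clob  CLOB;
  l_dest  INTEGER := 1;
  l_src   INTEGER := 1;
  l_lang  INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warn  INTEGER;
BEGIN
  INSERT INTO my_clob_table (id, data) VALUES (1, EMPTY_CLOB())
  RETURNING data INTO l_clob;
  DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(l_clob, l_bfile, DBMS_LOB.LOBMAXSIZE,
                            l_dest, l_src, DBMS_LOB.DEFAULT_CSID,
                            l_lang, l_warn);
  DBMS_LOB.FILECLOSE(l_bfile);
  COMMIT;
END;
/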
I'm getting the following error when using a CLOB in EXECUTE IMMEDIATE: 'ORA-22275: invalid LOB locator specified'.
declare
v_final_output1 clob := empty_clob;
v_stmt varchar2(32000);
begin
select source into v_stmt from table t where t.id =123 ;
[code]....
Note that the error comes after the EXECUTE IMMEDIATE, when I try to display the content of v_final_output1.
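A common cause of ORA-22275 in this situation: a locator assigned from empty_clob() is not writable in memory; a purely in-memory CLOB needs its locator created explicitly with DBMS_LOB.CREATETEMPORARY before anything writes into it. A sketch under that assumption (the dynamic block text is a placeholder):

DECLARE
  v_final_output1 CLOB;
  v_stmt          VARCHAR2(32000);
BEGIN
  DBMS_LOB.CREATETEMPORARY(v_final_output1, TRUE);   -- valid, writable locator
  v_stmt := 'BEGIN :out_clob := TO_CLOB(''some generated content''); END;';
  EXECUTE IMMEDIATE v_stmt USING IN OUT v_final_output1;
  DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(v_final_output1, 200, 1));
  DBMS_LOB.FREETEMPORARY(v_final_output1);
END;
/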
ORA-06502. I have a database on Oracle 9i on Solaris 9. I created a generator procedure that creates dynamic procedures through DBMS_SQL. On this database I get the ORA-06502 error. When I tried to run the same procedure on the same database on Oracle 8i on NT, it worked fine.
View 3 Replies View RelatedI am encountering an error message while updating a xmltype column using dynamic sql statement. I am using dynamic sql here as the table is not placed in the same schema from where the plsql procedure is invoked. The schema name is passed to the procedure as an argument. I am using below pseudo code for this purpose.
Create procedure myproc(p_schemaname varchar2, p_id number)
is
p_clob clob;
p_str varchar2(2000);
begin
[code]...
This throws a 'missing expression' error at the EXECUTE IMMEDIATE line.
But it works if I run a static SQL update with the schema name hard-coded in the statement.
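Without the elided code it's hard to say exactly where the expression goes missing, but a shape that works for this case is sketched below: the schema name is concatenated (validated via DBMS_ASSERT), the values stay as binds, and - a frequent cause of exactly this error - no trailing semicolon goes inside the dynamic statement. Table and column names are hypothetical:

CREATE OR REPLACE PROCEDURE myproc(p_schemaname VARCHAR2, p_id NUMBER)
IS
  p_clob CLOB := '<doc>updated</doc>';   -- placeholder payload
  p_str  VARCHAR2(2000);
BEGIN
  p_str := 'UPDATE ' || DBMS_ASSERT.SCHEMA_NAME(p_schemaname)
        || '.my_xml_table SET xml_col = XMLTYPE(:1) WHERE id = :2';
  EXECUTE IMMEDIATE p_str USING p_clob, p_id;
END;
/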
I have an Oracle table with a column of datatype CLOB. The CLOB column holds manipulation statements (UPDATE, INSERT) as records.
How can I execute the statements stored in the CLOB column?
My table structure and one record:
CREATE TABLE ACTN_CFG_T
(
OBJ_ID NUMBER(32) NOT NULL,
ACTN_ID NUMBER(38) NOT NULL,
ACTN_SEQ NUMBER(2) NOT NULL,
RU_DEF CLOB NOT NULL
[code]....
I want to execute the UPDATE statement when I pass the obj_id. I used the following procedure but got an error.
1 DECLARE
2 V_ABC CLOB;
3 BEGIN
4 SELECT RU_DEF INTO V_ABC FROM ACTN_CFG_T WHERE OBJ_ID = 1625;
5 EXECUTE IMMEDIATE V_ABC;
6 END;
The error is:
ORA-06550: line 5, column 20:
PLS-00382: expression is of wrong type
ORA-06550: line 5, column 2:
PL/SQL: Statement ignored
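PLS-00382 is the giveaway: before 11g, EXECUTE IMMEDIATE accepts only a VARCHAR2, not a CLOB (11g and later take the CLOB directly). For statements under 32K, converting the CLOB works; a sketch:

DECLARE
  v_abc CLOB;
BEGIN
  SELECT ru_def INTO v_abc FROM actn_cfg_t WHERE obj_id = 1625;
  EXECUTE IMMEDIATE DBMS_LOB.SUBSTR(v_abc, 32767, 1);   -- CLOB -> VARCHAR2
END;
/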
Why is this procedure with EXECUTE IMMEDIATE not working for me when I pass a dynamic string to it?
This is the code:
CREATE OR REPLACE FUNCTION MIGRATION.execute_sql (pSQL varchar2) RETURN varchar2 IS
vResult Varchar(200);
BEGIN
vResult := null;
[code]......
And when I invoke it like this, it's fine:
select execute_sql ('select sysdate from dual') test from dual;
However, when I try something like this, it won't work:
select execute_sql ('select max(al_code) from alert_log where al_user= '||user) test_sql from dual;
The error message I get is:
ORA-00904: "MIGRATION": invalid identifier
ORA-06512: at "MIGRATION.EXECUTE_SQL", line 26
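The concatenation is the culprit: USER evaluates to the literal MIGRATION, which lands in the statement unquoted, so the inner query becomes where al_user = MIGRATION and the parser treats MIGRATION as a column name. Quoting the value into the text fixes the call (binding it inside the function would be cleaner still):

select execute_sql ('select max(al_code) from alert_log where al_user = '''
                    || user || '''') test_sql
from dual;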
I want to execute a dynamic query which is stored in a table, and the output of that query should be stored in the database.
Is there any way I can create a dynamic procedure? I have created sample code, but the issue is that I cannot make the data type below dynamic to match the query:
en com_fund_info_m%ROWTYPE;
CREATE TABLE TEMP (SQLSTATEMENT VARCHAR2(100))
DECLARE
TYPE r_cursor IS REF CURSOR;
c_emp r_cursor;
en com_fund_info_m%ROWTYPE;
[code].....
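A %ROWTYPE can never be dynamic, but DBMS_SQL can discover the select list at run time, which is exactly what this use case needs. A minimal sketch that parses the stored statement from the TEMP table above and prints its column names and type codes (generic row processing would continue with DEFINE_COLUMN and COLUMN_VALUE in the same loop):

DECLARE
  c       INTEGER := DBMS_SQL.OPEN_CURSOR;
  v_sql   VARCHAR2(4000);
  col_cnt INTEGER;
  cols    DBMS_SQL.DESC_TAB;
BEGIN
  SELECT sqlstatement INTO v_sql FROM temp WHERE ROWNUM = 1;
  DBMS_SQL.PARSE(c, v_sql, DBMS_SQL.NATIVE);
  DBMS_SQL.DESCRIBE_COLUMNS(c, col_cnt, cols);
  FOR i IN 1 .. col_cnt LOOP
    DBMS_OUTPUT.PUT_LINE(cols(i).col_name || ' (type code ' || cols(i).col_type || ')');
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/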
I have the following procedure body in a package.
PROCEDURE getrecordsForinspection(i_table_name in varchar2, i_thread_id in varchar2, i_max_count in number default null, o_results out sys_refcursor)
AS
v_sql varchar2(1000):= null;
begin
v_sql := 'update '||'i_table_name||' set status = '||'''IN_PROCESS-'||i_thread_id||''''||' Where final_status = '||''''STATUS_ACCEPTED'''||' and ('||i_max_count||' is null or rownum <= '||i_max_count||');';
EXECUTE IMMEDIATE(v_sql);
commit;
end;
When I execute the above procedure, it gives the following error:
ORA-00911: invalid character
cause: Identifiers may not start with any ASCII characters other than letters and numbers.$#_ are also allowed after the first character. Identifiers enclosed by double quotes may contain any character other than a double quote. Alternative quotes(q'#....#') can not use spaces, tabs, or carriage returns as delimiters. For all other contexts, consult the SQL language reference Manual.
I think the dynamic SQL is not executed because of the pipe character in the SQL statement.
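For what it's worth, the pipes are innocent; the usual culprits in a statement built like this are the semicolon left inside the string (a dynamic statement must not end with one - that is precisely what raises ORA-00911) and the stray quotes around i_table_name. A corrected sketch of the assignment, with the max-count values bound instead of concatenated:

v_sql := 'UPDATE ' || i_table_name ||
         ' SET status = ''IN_PROCESS-' || i_thread_id || '''' ||
         ' WHERE final_status = ''STATUS_ACCEPTED''' ||
         ' AND (:mc IS NULL OR ROWNUM <= :mc)';   -- note: no trailing semicolon
EXECUTE IMMEDIATE v_sql USING i_max_count, i_max_count;  -- one value per placeholder occurrence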
I want to convert the SQL below to dynamic SQL, executed from an EXECUTE IMMEDIATE statement:
UPDATE transaction SET loannum = lpad(loannum,12,'0')
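A minimal version, assuming the statement is wanted exactly as written - the quote around the pad character is doubled inside the string, and no semicolon goes inside it:

BEGIN
  EXECUTE IMMEDIATE 'UPDATE transaction SET loannum = lpad(loannum, 12, ''0'')';
END;
/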
How do I use the DBMS_SQL package? I tried the following procedure:
CREATE OR replace PROCEDURE Crt_tab_inst(tab_name VARCHAR2,
col1_name VARCHAR2,
col1_value VARCHAR2)
IS
cur BINARY_INTEGER := dbms_sql.open_cursor;
fdbk BINARY_INTEGER;
[code]........
But when I execute the procedure, it throws the error below:
SQL> EXEC crt_tab_inst('MYTAB','MYCOL','NAME1');
BEGIN crt_tab_inst('MYTAB','MYCOL','NAME1'); END;
*
ERROR at line 1:
ORA-00984: column not allowed here
ORA-06512: at "SYS.DBMS_SYS_SQL", line 909
ORA-06512: at "SYS.DBMS_SQL", line 39
ORA-06512: at "SCOTT.CRT_TAB_INST", line 21
ORA-06512: at line 1
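ORA-00984 usually means a character value entered the generated statement unquoted, so the parser took NAME1 for a column name. The procedure body is elided above, so this is a guess, but if it builds an INSERT from its arguments, quoting the value - or better, binding it - is the fix. A sketch reusing the post's variable names (sql_stmt is hypothetical):

-- quoted literal
sql_stmt := 'INSERT INTO ' || tab_name || ' (' || col1_name
         || ') VALUES (''' || col1_value || ''')';

-- or, preferably, a bind variable
sql_stmt := 'INSERT INTO ' || tab_name || ' (' || col1_name || ') VALUES (:v)';
dbms_sql.parse(cur, sql_stmt, dbms_sql.native);
dbms_sql.bind_variable(cur, ':v', col1_value);
fdbk := dbms_sql.execute(cur);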
I'm using dynamic SQL (DBMS_SQL) to define the columns of a ref cursor. It works OK, but the problem arises when a PL/SQL CURSOR expression is used inside the REF CURSOR. Then I get:
Error at line 3
ORA-00932: inconsistent datatypes: expected NUMBER got CURSER
ORA-06512: at "SYS.DBMS_SQL", line 1830
ORA-06512: at "TW.PRINT_REF_CURSOR", line 28
ORA-06512: at line 9
Here is my code:
set serveroutput on
exec DBMS_OUTPUT.ENABLE(1000000);
declare
l_cursor sys_refcursor;
begin
[code]....
Is there a solution or workaround?
1. When querying the alert_log table I created from the alert log using the script below, two new files were created: ALERT_LOG_30499.bad and ALERT_LOG_30499.log.
The ALERT_LOG_30499.log contains this error message:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 82, maximum: 80)
The ALERT_LOG_30499.bad file, so far, only contains datafile resize information. The datafiles have plenty of space, and there is plenty of space on the SAN slice where the datafiles reside.
2. Each time I recreate the table with an increased VARCHAR2 size, the "actual" size in the log file also increases:
error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log ORA-12899: value too large for column MSG (actual: 92, maximum: 90)
3. When I increased the VARCHAR2 size to 120+, I got this log file:
[oracle@tds_dw bdump]$ cat ALERT_LOG_30715.log
LOG file opened at 03/09/11 14:46:20
Field Definitions for table ALERT_LOG
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Rows with all null fields are accepted
Fields in Data Source:
MSG CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
TABLE DDL:
create table
alert_log ( msg varchar2(80) )
organization external (
type oracle_loader
default directory BDUMP
access parameters (
records delimited by newline
)
location('alert_damistst.log')
)
reject limit 1000;
**** QUESTION
I can still query the alert_log table in SQL*Plus, but those .log and .bad files are generated - is this an issue?
An example of a piece of the results from "select * from alert_log;":
MSG
--------------------------------------------------------------------------------
Thread 1 advanced to log sequence 5254 (LGWR switch)
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Thread 1 cannot allocate new log
Checkpoint not complete
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Wed Mar 9 14:33:09 2011
Thread 1 advanced to log sequence 5255 (LGWR switch)
Current log# 2 seq# 5255 mem# 0: /tds_oradata/redo02a.log
Current log# 2 seq# 5255 mem# 1: /u02/damistst/REDO_LOGS/redo02b.log
13076 rows selected.
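The .bad and .log files are normal external-table behaviour: each row longer than msg is rejected with ORA-12899 and logged, file by file, on every query. Sizing the column to the source field's default of CHAR(255), and optionally suppressing the files, stops it; a sketch along those lines:

create table alert_log ( msg varchar2(255) )   -- match the CHAR(255) field definition
organization external (
  type oracle_loader
  default directory BDUMP
  access parameters (
    records delimited by newline
    nobadfile nologfile                        -- optional: stop generating the files
  )
  location ('alert_damistst.log')
)
reject limit unlimited;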
I keep getting "ORA-01401: inserted value too large for column". No biggie - I've dealt with this multiple times before (but obviously not enough in this instance).
The data being entered is a SINGLE-digit number - a number like 1, 2 or 3 - nothing fancy, just a plain, straight, everyday single-digit number. The field in question was set as field type "Integer". Now, there is no set field size for integers - not in Oracle anyway. Since it wasn't happy, I decided to try field types of "Number" and also "Varchar2" set to 10 bytes. I have also deleted the column from the table and re-created it.
Here's the even more puzzling bit: I can INSERT data into this field, BUT I cannot UPDATE the field with the exact same data. The data is being inserted from a csv file. The same exact csv file used to insert works, but the same data in the same file will not update only that particular column.
If I delete the specific column data from the csv file, all goes through fine. If I hard-code the update for the field (eg SET field2 = '1' or even SET field2 = ' ') it still doesn't work, so I know it is not the csv file that is causing problems. I deleted all data from the csv file except the field in question - still no luck.
So after eliminating:
1. The field type
2. The field length
3. The data being inserted
4. The external source of the data
What else could possibly be the problem?
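Two candidates the list doesn't cover: invisible characters (a CSV written on Windows often carries a trailing CR, CHR(13), which silently lengthens the value) and a trigger that fires on UPDATE but not on INSERT and writes into a smaller column. Both are cheap to check; the table and column names below are hypothetical:

-- see exactly which bytes are stored/arriving (49 is ASCII '1'; 13 would be a stray CR)
SELECT field2, DUMP(field2) FROM mytable WHERE ROWNUM <= 5;

-- look for update-time triggers on the table
SELECT trigger_name, triggering_event
FROM   user_triggers
WHERE  table_name = 'MYTABLE';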
I have declared a variable:
VAR1 VARCHAR2(20000);
But when I assign some strings to that variable, I still get a "value too large" message. What should I do?
I am using the TRIM function in my SELECT query, but I am still getting white space in my output. Because of this, I get the error "value too large for column ..." when I load the data into a table through SQL*Loader.
define APPName="&1"
set heading off;
set verify off;
set newpage 0
set feedback off;
set rtrimspool on;
set termout off;
set pagesize 40000;
[code].....
I have a 27 million row table in the following format:
MEDCLM_MTH_SUM_KEY PRIMARY_DIAG_CD DIAG_CD2 DIAG_CD3 DIAG_CD4 DIAG_CD5 DIAG_CD6 DIAG_CD7 DIAG_CD8 DIAG_CD9 DIAG_CD10
2212990780 5552 78907 53170 5368
2231127242 V5481 7812 71595 4019 2761 2859 496 V4364 30501
I need to unpivot this data to get it to look like this:
MEDCLM_MTH_SUM_KEY DIAG_CD_LEVEL DIAG_CD
2212990780 PRIMARY_DIAG_CD 5552
2212990780 DIAG_CD2 78907
2212990780 DIAG_CD3 53170
[code]...
I was wondering if there was a quicker, more efficient way to do this.
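If the database is 11g or later, the UNPIVOT clause does this in a single pass over the table, which is usually the most efficient option at this row count. A sketch, with the table name medclm_mth_sum hypothetical; EXCLUDE NULLS drops the empty diagnosis slots, matching the sparse rows above:

SELECT medclm_mth_sum_key, diag_cd_level, diag_cd
FROM   medclm_mth_sum
UNPIVOT EXCLUDE NULLS (
  diag_cd FOR diag_cd_level IN (
    primary_diag_cd, diag_cd2, diag_cd3, diag_cd4, diag_cd5,
    diag_cd6, diag_cd7, diag_cd8, diag_cd9, diag_cd10
  )
);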
I need to add a new column to a very large table and update it with 'N' (this is similar to specifying DEFAULT 'N'). I'm using Oracle 9i. Which is the best method, with regard to speed, to update this column on the entire table? The table contains ~30 million records. I've read that parallel DML (here UPDATE) does not work on unpartitioned tables, and my table is not partitioned. If I specify:
update /*+ full(p) parallel(p,10) */ my_table p set p.my_column = 'N';
this, I think, will not speed up the operation on 9i. Our business does not accept using CREATE TABLE AS SELECT, then renaming the table and recreating all indexes, and so on.
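Given those constraints, the common 9i-era pattern is to add the column and backfill it in batches, so each transaction's undo stays bounded; a sketch, with the batch size an arbitrary assumption to tune:

ALTER TABLE my_table ADD (my_column VARCHAR2(1));

BEGIN
  LOOP
    UPDATE my_table
       SET my_column = 'N'
     WHERE my_column IS NULL
       AND ROWNUM <= 100000;   -- batch size: tune to undo capacity
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/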
I'm trying to identify what PL/SQL or exact piece of code is causing massive UGA consumption. How can I identify it?
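A reasonable starting point is plain data-dictionary work: rank sessions by the 'session uga memory' statistic, then look at what the top sessions are running; a sketch:

SELECT s.sid, s.username, s.module, st.value AS uga_bytes
FROM   v$sesstat  st
JOIN   v$statname n ON n.statistic# = st.statistic#
JOIN   v$session  s ON s.sid = st.sid
WHERE  n.name = 'session uga memory'
ORDER  BY st.value DESC;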
View 5 Replies View RelatedCan an undo tablespace be too large and actually hurt performance? I have seen a system with a dedicated 1 TB drive for undo tablespace (no guarantee) with an undo_retention of 7 days. Would this hurt performance? What about setting an undo_retention of 24 hours with no guarantee? The only mention I could find online said that it would not hurt performance but I wanted to double check. You would think that Oracle does not care if it deletes the undo at 15 minutes or if it deletes the undo at a later date such as 7 days later and the performance should stay the same.
I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not sufficient memory in the buffer cache - so how does this work? Will the transaction succeed or fail? And I have another doubt: can Oracle read data from memory only, or can it also read directly from disk?
How do you FOR UPDATE large data in a table column?
claimClob clob := claim;   -- claim is the large data
v_buf varchar2(1000);
amount binary_integer := 1000;
position binary_integer := 1;
[Code]....
Why does the FOR UPDATE not do anything?
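For context: SELECT ... FOR UPDATE only locks the row and returns a writable locator - the writing itself is done through DBMS_LOB, and nothing persists without a COMMIT. A minimal sketch of the whole cycle, with the table and column names hypothetical:

DECLARE
  l_clob   CLOB;
  v_buf    VARCHAR2(1000) := RPAD('x', 1000, 'x');
  amount   BINARY_INTEGER := 1000;
  position INTEGER        := 1;
BEGIN
  SELECT claim INTO l_clob
  FROM   claims
  WHERE  id = 1
  FOR UPDATE;                   -- locks the row; required before writing the LOB

  DBMS_LOB.WRITE(l_clob, amount, position, v_buf);
  COMMIT;                       -- makes the write permanent
END;
/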
I was asked in a telephone interview about committing a delete (a few million rows) on an Oracle log table which had a trillion rows in total. He said that the delete took 2 days.
Then he asked: if the commit is then performed (assuming huge rollback segments are allocated), how long does that commit take?