Server Administration :: Reorganization Includes A Table With A LONG Column

Jan 18, 2012

I am trying to reorganize a tablespace with Enterprise Manager, converting it from manual to automatic, but script generation gives the following warning:

Quote:Reorganization includes a table with a LONG column. To support reorganization of tables with LONG columns that are greater than 32Kbytes the external procedure MGMT$REORG_MOVELONGCOMMAND must be configured properly. It has been determined this external procedure is not currently configured as expected. Configure SQL*Net appropriately to allow it to call the external procedure service process.

I made the changes suggested by [URL] ..... in the section "Using Reorganize Objects with LONG Columns", but the warning persists. I noticed, however, that $ORACLE_HOME/lib/libnmuc.so is an empty file (0 bytes). (A sketch of the standard extproc configuration follows the file excerpt below.)

Oracle Database 10g 10.1.0.2.0
Solaris (SunOS 5.9 Generic_117171-07 (64-bit))
Oracle home /data/lun1/oracle/product/10.1.0/db_1
{ORACLE_HOME}/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /usr/oracle/product/10.1.0/db_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
[code]....
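For reference, a minimal sketch of the extproc pieces that SQL*Net needs for the external procedure call. The entries below follow the default Oracle Net template and are assumptions, since the actual files are truncated above; only the ORACLE_HOME path is taken from the post.

# listener.ora: an IPC endpoint plus a SID entry for the extproc agent
# (the existing TCP address entries for the database stay as they are)
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /data/lun1/oracle/product/10.1.0/db_1)
      (PROGRAM = extproc)
    )
  )

# tnsnames.ora: the alias the database uses to reach the extproc agent
EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  )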

View 1 Replies


To Perform A Reorganization Of The Partition Table

May 6, 2010

I would like to perform a reorganization of a partitioned table. The table contains 144 partitions and 2038 index partitions. What is the best way to perform the reorg with the cascade option in a single pass?

Also, I am not sure how to determine whether an index partition is global or local. Is there any reference document that covers reorganization of partitioned tables?
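As a starting point, a sketch of how to check whether each index on the table is globally or locally partitioned (the owner and table name below are placeholders):

select index_name, locality, partitioning_type
  from dba_part_indexes
 where owner = 'MY_OWNER'
   and table_name = 'MY_PART_TABLE';

-- non-partitioned (single-segment global) indexes on the same table
select index_name, partitioned
  from dba_indexes
 where table_owner = 'MY_OWNER'
   and table_name = 'MY_PART_TABLE';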

View 7 Replies View Related

Semantic Technologies :: Can Bind LONG Value Only For Insert Into LONG Column

Dec 22, 2012

I got an exception when I was using the Sesame adapter to load a Turtle file, which contains long texts as objects, into the Oracle semantic database. The exception is:

org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column

ORA-06512: at "SF.ORACLE_ORARDF_ADDHELPER", line 1
ORA-06512: at line 1
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802) ...

View 1 Replies View Related

ORA-01461 - Can Bind LONG Value Only For Insert Into LONG Column

Sep 26, 2012

How can I resolve a problem with moving LOB objects? I moved a table partition and its LOB (BLOB) from one tablespace to another:

alter table EBIF.APO_T_VER_DISP_ACC_RESP MOVE PARTITION P1M20120901 LOB(SIGNATURE_PATTERN) STORE AS (TABLESPACE tmp);

After the MOVE PARTITION I have:

pbeb_ap1.SYS> select partition_name, tablespace_name from dba_lob_partitions where table_name='APO_T_VER_DISP_ACC_RESP';

PARTITION_NAME          |TABLESPACE_NAME
------------------------------|------------------------------
P1M20110901          |TD1M20110901
P1M20111001          |TMP
P1M20111101          |TMP
P1M20111201          |TMP
P1M20120101          |TD1M20120101
[code]....

I used a script to generate the move statements:

select 'alter table '||table_owner||'.'||table_name||' MOVE PARTITION '||partition_name||' LOB('||COLUMN_NAME||') STORE AS (TABLESPACE TD_PART_RW) PARALLEL 4;'

from dba_lob_partitions where tablespace_name='TMP';

When I started loading into this table I got: ORA-01461: can bind a LONG value only for insert into a LONG column.

When I recreate this table, everything works OK, but the new table is not partitioned.
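Unrelated to the ORA-01461 itself, but worth checking after this kind of move: MOVE PARTITION leaves local index partitions UNUSABLE unless they are rebuilt. A sketch (the schema filter is taken from the table owner above):

select index_owner, index_name, partition_name
  from dba_ind_partitions
 where status = 'UNUSABLE'
   and index_owner = 'EBIF';

-- generate the rebuild statements
select 'alter index '||index_owner||'.'||index_name||' rebuild partition '||partition_name||';'
  from dba_ind_partitions
 where status = 'UNUSABLE'
   and index_owner = 'EBIF';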

View 2 Replies View Related

Performance Tuning :: Create Partitioned Table With Column Of LONG Or LONGRAW?

Nov 3, 2010

What is the reason behind the statements below?

1) We can't create a partitioned table on a cluster, or an index on a cluster table.

2) We can't create a partitioned table with a column of LONG or LONG RAW. But how is it possible with BLOB and CLOB? (See the sketch below.)
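For illustration, a minimal sketch (the table name is made up) showing that a CLOB column is accepted in a partitioned table, while the same definition with a LONG column is rejected:

-- accepted: CLOB in a range-partitioned table
create table part_lob_demo (
  id  number,
  txt clob
)
partition by range (id) (
  partition p1   values less than (1000),
  partition pmax values less than (maxvalue)
);

-- rejected: replacing the CLOB with a LONG raises an error,
-- because LONG columns are not supported in partitioned tables.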

View 3 Replies View Related

Server Administration :: Long Data Type Illegal?

Dec 26, 2012

When I try the command create table a as select * from b where 1=2; it says illegal use of LONG datatype. I am bemused; what sin has the LONG datatype committed?
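CREATE TABLE ... AS SELECT cannot copy a LONG column. A common workaround, sketched here with made-up column names, is to list the columns explicitly and convert the LONG with TO_LOB:

create table a as
select id, name, to_lob(long_col) as long_col
  from b
 where 1 = 2;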

View 4 Replies View Related

Server Administration :: How To Drop A Column In Huge Table

Aug 8, 2011

I want to drop a column in a huge table which contains about 420,000,000 rows. I used the ALTER TABLE ... DROP COLUMN command and found that it takes a long time and generates huge redo.

Is there a quicker way to drop a column in a huge table?
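One option, sketched here with placeholder names, is to mark the column unused (a fast, dictionary-only change) and physically drop it later in a maintenance window, using CHECKPOINT to limit undo:

alter table big_table set unused column old_col;

-- later, during a quiet period:
alter table big_table drop unused columns checkpoint 10000;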

View 5 Replies View Related

Server Administration :: Column Limit Same For All Types Of Table

Aug 10, 2010

Question 1: What is the limit on the number of columns in a single table? Is it the same for all types of tables?
Question 2: How do you estimate the size of a table in advance, along with the undo generation, index size, and redo log size? (See the sketch after Question 4.)
Question 3: What is a real-world scenario for creating a table in another schema?
Question 4: I read the line below in a document, but I'm not able to understand it:

"we can't move types and extent tables to a different schema when the original data still exists in the database"?

View 6 Replies View Related

Server Administration :: Allocating Extent To A Clob Column Of A Table

Dec 30, 2010

I have a table with two CLOB columns and need to manually allocate space to the table and to its LOB segments. Are the following commands correct?

--to allocate extent to the table
alter table emp allocate extent;
--the table has columns named col1 and col2 which are clob
--to allocate extents to the columns
alter table emp modify lob (col1) (allocate extent (size 10m))
/
alter table emp modify lob (col2) (allocate extent (size 10m))
/

View 3 Replies View Related

Server Utilities :: DBMS_DATAPUMP.METADATA_FILTER And Long Table List?

Jun 9, 2010

I have a problem with DBMS_DATAPUMP.METADATA_FILTER. Let's suppose that I need to export a huge list of tables (a, b, c, d, e, f, g, h, i, ...), and that the list is dynamic, so I do NOT want to use:

DBMS_DATAPUMP.metadata_filter (handle => h1,
NAME => 'NAME_EXPR',
VALUE => 'IN (''a'', ''b'', ...)',
object_type => NULL);

In my_export_table there is the list:

CREATE TABLE my_export_table
(
EXPORT_OBJECT_NAME VARCHAR2(50 BYTE)
)

Now I'm trying to use this form:

DBMS_DATAPUMP.metadata_filter (handle => h1,
NAME => 'NAME_EXPR',
VALUE => 'IN (SELECT a.export_object_name FROM my_export_table a, user_objects b WHERE a.export_object_name = b.object_name AND b.object_type = ''TABLE'')',
object_type => NULL
);

but it results in the error below (a workaround sketch follows the output):

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-00942: table or view does not exist

[code]...
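One explanation often given is that the filter text is evaluated inside the Data Pump worker, where the subquery against my_export_table cannot be resolved. A workaround sketch: build the IN list as a literal string before passing it to METADATA_FILTER. The dump file and directory names below are placeholders.

declare
  h1     number;
  l_list varchar2(32767);
begin
  -- build a literal IN list from the driving table
  for r in (select export_object_name from my_export_table) loop
    l_list := l_list || ',''' || r.export_object_name || '''';
  end loop;
  l_list := ltrim(l_list, ',');

  h1 := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(handle => h1, filename => 'tab_list.dmp', directory => 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR',
                                value => 'IN ('||l_list||')', object_path => 'TABLE');
  dbms_datapump.start_job(h1);
  dbms_datapump.detach(h1);
end;
/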

View 1 Replies View Related

Server Administration :: Functions To Convert The Long Type Field Data To Varchar2 Type

Oct 17, 2011

Are there functions to convert LONG column data to VARCHAR2?
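There is no built-in SQL function for this, but PL/SQL will implicitly convert a LONG into a VARCHAR2 variable (up to 32,760 bytes). A minimal sketch with made-up table and column names:

declare
  l_text varchar2(32760);
begin
  select long_col
    into l_text
    from my_table
   where id = 1;
  dbms_output.put_line(substr(l_text, 1, 200));
end;
/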

View 2 Replies View Related

Updating Long Data Type Column

Aug 31, 2010

I have a task to update the LONG column in one of the rows of a table (the table has only two columns, a NUMBER and a LONG). We are on Oracle 10g. I am not sure how to use UPDATE on a LONG column.

I have tried using dbms_metadata_util.long2varchar, but still not getting what I want.
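For values up to 32,760 bytes, a LONG column can be updated by binding a PL/SQL VARCHAR2 variable; a sketch with placeholder names:

declare
  l_new varchar2(32760) := 'replacement text for the LONG column ...';
begin
  update my_long_table
     set long_col = l_new
   where id = 42;
  commit;
end;
/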

View 2 Replies View Related

SQL & PL/SQL :: Concat Or Append In Long-raw Type Column?

Feb 4, 2012

I have a mail-content column of type LONG RAW in a table. I just want to concatenate or append a value to that column, but when I tried it I got the error "illegal use of LONG datatype". How can I append a value to it? The value will be of string type.

View 15 Replies View Related

Forms :: ORA-01461 When Updating A LONG-Column Via Trigger

Jul 5, 2010

I have a non-base-table item which I want to update in the pre-update trigger of the current block.

If the content of the field exceeds 4000 characters, I get the error message

ORA-01461: can bind a LONG value only for insert into a LONG column.

The code is

update tab set long_col = :formsblock.long_col
where tab.tabpk = :formsblock.foreign_tabpk;

Workarounds would be,
1.) to delete the old dataset and insert the new one:

delete from tab
where tab.tabpk = :formsblock.tabpk;

insert into tab (tabpk, long_col) values
(:formsblock.foreign_tabpk, :formsblock.long_col);

or 2.) to change the item from a non-database item to a database item and use the internal update of the Forms module; but both workarounds are not very satisfying.

Do you know another way to update the LONG-column within the pre-update trigger (or any other PL/SQL part of forms)?

View 2 Replies View Related

Server Administration :: Auditing Lob Column Changes When Working With Dbms_lob

Oct 26, 2010

It seems that a DML trigger doesn't fire when a LOB column is updated using the DBMS_LOB package. As stated in the Oracle documentation:

Quote:Using OCI functions or the DBMS_LOB package to update LOB values or LOB attributes of object columns does not cause Oracle to fire triggers defined on the table containing the columns or the attributes.

I need to know that the table was updated (or is about to be updated). How can I do that when it is a LOB column that is being updated?

View 1 Replies View Related

Server Administration :: Get Size Of CLOB Column But Not Getting Any Output

Sep 10, 2012

I'm trying to get the size of a CLOB column but I'm not getting any output. (A DBMS_LOB.GETLENGTH sketch follows the describe output below.)

SQL> desc TABLE_STEP_INST234
Name Null? Type
----------------------------------------- -------- ----------------------------
NUM_PENDING_PREREQS NOT NULL NUMBER(10)
OBJID NOT NULL VARCHAR2(31)
OUTFLOW_BITS NUMBER(19)
PARAMS CLOB
PARENT2PROC_INST NOT NULL VARCHAR2(31)
ROOT2PROC_INST NOT NULL VARCHAR2(31)
START_TIME DATE
STATUS NOT NULL NUMBER(2)
[code]...
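A sketch of how the size is usually obtained with DBMS_LOB.GETLENGTH (for a CLOB it returns the length in characters):

select objid, dbms_lob.getlength(params) as params_length
  from TABLE_STEP_INST234
 where params is not null;

-- total across the table
select sum(dbms_lob.getlength(params)) as total_chars
  from TABLE_STEP_INST234;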

View 2 Replies View Related

Server Administration :: Stats Gather-LAST_ANALYZED Column Not Updated

Sep 11, 2012

Using the script below, I run a stats gather job, but the LAST_ANALYZED column is not updated for the table. Why was it not updated? My DB version is 11.1.0.7.0.

exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'XXXX', tabname => 'XXXXXX',estimate_percent =>5, method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO', degree => 8, granularity => 'ALL',Cascade => TRUE);

View 11 Replies View Related

Server Administration :: Move Partitioned Table Between Table Spaces Of Different Block Size?

Apr 4, 2011

I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.

So far the only option I have is to export and then import back the data.

Is there any way to move a partitioned table between tablespaces of different block sizes?
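If export/import stays the only route, a sketch of the Data Pump variant; the directory, dump file, schema, table, and tablespace names below are placeholders:

expdp system/*** directory=DATA_PUMP_DIR dumpfile=part_tab.dmp logfile=part_tab_exp.log tables=SCOTT.PART_TAB

impdp system/*** directory=DATA_PUMP_DIR dumpfile=part_tab.dmp logfile=part_tab_imp.log remap_tablespace=OLD_TBS:NEW_TBS table_exists_action=replace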

View 14 Replies View Related

Server Administration :: Reorganize A Table And Index After The Deletion Of Records From Table?

Feb 7, 2012

We deleted millions of records from a table.

1. Is it necessary to reorganize a table and its indexes after deleting records from the table? I see some change in table size after table and index reorganization. (See the sketch below.)

2. Will reorganizing the table and indexes improve database performance?
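For reference, two common ways to reclaim space after a large delete, sketched with placeholder names:

-- option 1: rebuild the table and its indexes
alter table my_table move;
alter index my_table_idx1 rebuild;

-- option 2: shrink in place (needs row movement and an ASSM tablespace)
alter table my_table enable row movement;
alter table my_table shrink space cascade;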

View 7 Replies View Related

Server Administration :: ORA-39726 / Unsupported Add / Drop Column Operation On Compressed Tables

Jan 11, 2012

My user is trying to drop columns, but she gets the error below:

SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables

39726. 00000 - "unsupported add/drop column operation on compressed tables"

I just checked whether the table is compressed or not; it seems it is not compressed:

select owner, table_name,COMPRESSION,COMPRESS_FOR from dba_tables
2 where owner = 'EQUIPMENT' AND TABLE_NAME = 'ETHRNT_VRTL_CNXN_HRLY_AGG_SWP';
OWNER TABLE_NAME COMPRESS COMPRESS_FOR
------------------------------ ------------------------------ -------- ------------
EQUIPMENT ETHRNT_VRTL_CNXN_HRLY_AGG_SWP DISABLED

1 row selected. I am able to set one column as UNUSED, and then I can see the count accordingly in the view below:

select * from DBA_UNUSED_COL_TABS
where owner ='EQUIPMENT' and table_name = 'ETHRNT_VRTL_CNXN_HRLY_AGG_SWP';

But I am not able to drop the unused columns. If I try to drop a column directly from the table, that also gives the above error.
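A workaround that is often suggested for ORA-39726, on the assumption that the table (or one of its partitions) was compressed at some point in the past, is to rebuild it uncompressed and then drop the column; a sketch:

alter table EQUIPMENT.ETHRNT_VRTL_CNXN_HRLY_AGG_SWP move nocompress;

-- rebuild any indexes left unusable by the move, then:
alter table EQUIPMENT.ETHRNT_VRTL_CNXN_HRLY_AGG_SWP drop unused columns;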

View 7 Replies View Related

SQL & PL/SQL :: Two Columns Can Have Long Datatype In Table?

Apr 27, 2010

I have a question related to the LONG datatype. From Google I learned that a table can have only one LONG column, and when I searched for the reason I found these points:

With 9i (I believe) and later versions, Oracle deprecates the LONG datatype in favor of the LOB (CLOB, NCLOB and BLOB) datatypes. It is only supported for backward compatibility.

Restrictions: LONG cannot be used in CREATE TYPE as an attribute of the defined type.

It cannot be used in WHERE conditions.

There can be no indexes on LONG columns.

Regular expressions are not possible.

A LONG cannot be returned from a stored function.

SQL cannot call functions that have an argument of type LONG.

And there are even more restrictions.

So I want to know: are these restrictions the only reason Oracle doesn't allow two LONG columns in a table, or is there a stronger reason that makes it more logical, such as how the data is stored in the row blocks or something else?

View 2 Replies View Related

SQL & PL/SQL :: Update Long String In Table By Command

Jan 23, 2011

I use SQL*Plus in Oracle (Linux). I have a table, and one of its string cells holds a very long string.

Like below :

column A Column B

A BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
.....................................BBBBBBBBBBB

So I need to update row A and the value in column B. But the string in column B is very long and I only need to edit one character. If I use an UPDATE command, I have to type a very long string, and it is easy to make a mistake.
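Two ways to change a single character without retyping the whole value, sketched with placeholder names and positions:

-- replace a short, distinctive substring around the character to change
update my_table
   set column_b = replace(column_b, 'BBXBB', 'BBYBB')
 where column_a = 'A';

-- or splice by position, e.g. replace character 57
update my_table
   set column_b = substr(column_b, 1, 56) || 'Y' || substr(column_b, 58)
 where column_a = 'A';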

View 8 Replies View Related

Drop Statement On Table Taking Long Time

Sep 20, 2010

We are firing a normal DROP command on our database; the database version is 10.2.0.4 and it is running on AIX v5. The command is taking more time than usual.

When I monitor the session I can see that a call is being made to the procedure "aw_drop_proc". Could this be what is taking more time than usual?

We do not have any partitions on the nested tables. We have a pack of tables and we drop this pack through a procedure. The pack comprises nested tables and normal tables. Dropping a nested table (with no rows) takes around 6 seconds, while a normal table (with no rows) takes 17 milliseconds. We do have a partition on the normal table.

The same operation on Windows takes much less time than on AIX.

View 5 Replies View Related

Precautions For Converting A Table Field From Long To Clob

Jul 31, 2012

I am more of a C/C++ guy and a relative amateur in Oracle. I have to change a table column from LONG to CLOB. I plan to do a simple ALTER TABLE, and as far as I know there won't be any issues. (A sketch follows the questions below.)

Queries:
1. Although I have triple checked, is there any scenario under which there can be any data loss during the data type change? The data is very critical and no data loss can be entertained.
2. Is there any easy way to update all the related views without having to do so manually?
3. Any particular precautions I should take before introducing the change?
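For reference, a sketch of the in-place conversion and of listing the views that reference the table so they can be reviewed and recompiled; the schema and object names are placeholders:

-- convert the column in place
alter table my_table modify (doc_col clob);

-- views that reference the table
select owner, name, type
  from dba_dependencies
 where referenced_owner = 'MY_SCHEMA'
   and referenced_name  = 'MY_TABLE'
   and type = 'VIEW';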

View 2 Replies View Related

SQL Net Round Trips On Long Data Type Table

Oct 12, 2012

We are working on a performance tuning issue where a table has a LONG column and the number of round trips increases with the number of rows fetched, i.e. one round trip for every row.

Background.

1. created a table with LONG data type.
2. inserted bulk load of data.
3. set autotrace on and executed the tests below.

SQL> select count(*) from test_long;

COUNT(*)
----------
6110

SQL> desc test_long
Name Null? Type
----------------------------------------------------------------------------------------------------- -------- ----------------------------------------------------------------------------
ID NUMBER
TESTLONG LONG
[code].....

But on the LONG column, irrespective of the array size, the round trips do not reduce.

View 1 Replies View Related

SQL & PL/SQL :: Extracting Data From LONG RAW Data Type Column

Oct 7, 2011

I have come across a requirement where I need to extract data from a LONG RAW column.
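One common technique, sketched here with placeholder names, is to copy the LONG RAW into a BLOB with TO_LOB (valid only in CREATE TABLE ... AS SELECT or INSERT ... SELECT) and then read the BLOB with DBMS_LOB:

create table my_table_copy as
select id, to_lob(long_raw_col) as blob_col
  from my_table;

select id, dbms_lob.getlength(blob_col) as blob_bytes
  from my_table_copy;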

View 7 Replies View Related

SQL & PL/SQL :: Find Constraint Name From User Table - Illegal Use Of LONG Datatype

Apr 30, 2010

I want to find the constraint name from User_Constraints table using the following query:

Select * From User_Constraints Where Table_Name='CHARGECODE' and Constraint_Type='C' And Search_Condition = '"PERCENTAGE" IS NOT NULL';

It shows the error "ORA-00997: illegal use of LONG datatype".

Is there any way to compare against a LONG value?
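SEARCH_CONDITION is a LONG, so it cannot be compared in SQL, but it can be compared in PL/SQL, where a LONG is fetched into a 32K variable. A sketch using the values from the post:

set serveroutput on
declare
  l_cond varchar2(32760);
begin
  for r in (select constraint_name, search_condition
              from user_constraints
             where table_name = 'CHARGECODE'
               and constraint_type = 'C') loop
    l_cond := r.search_condition;   -- implicit LONG-to-VARCHAR2 conversion in PL/SQL
    if l_cond = '"PERCENTAGE" IS NOT NULL' then
      dbms_output.put_line(r.constraint_name);
    end if;
  end loop;
end;
/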

View 11 Replies View Related

Server Utilities :: How To Get Parent Table Column Reference

Sep 24, 2013

I have a CSV file containing related parent and child data. I need to load the parent data into the parent table, get the parent reference ID, and store it in the child table along with the child data. I am not able to work out how to get the parent reference ID using the control file. In my CSV file each parent name appears on two rows. I need to create one 'A' record in the parent table, get that parent's primary key (P_ID) for the 'A' record, and put it into the child table for the c_name rows test1 and test2.

mydata.csv
---------------
P_Name C_Name
------ ------
A, test1;
A, test2;
B, test3;
B, test4;
C, test5;
C, test6;
D, test7;
D, test8;

1. table parent (P_ID, P_Name)

2. table child (C_ID, C_Name, P_ID)

I need the data loaded as below.

Parent Table data:

P_ID  P_Name
----  ------
1     A
2     B
3     C
4     D

Child Table data:

C_ID  C_Name  P_ID
----  ------  ----
1     test1   1
2     test2   1
3     test3   2
4     test4   2
5     test5   3
6     test6   3
7     test7   4
8     test8   4

My controll file
-------------------
LOAD DATA
INFILE 'mydata.data'
APPEND
INTO TABLE parent
when (1:1) = 'p'
fields terminated by ',' optionally enclosed by '"' trailing nullcols

[code].....

I am able to load the parent data and generate P_ID, but I am not able to get the P_ID for the child records.
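One way around this, sketched below, is to load the raw rows into a staging table with SQL*Loader and then populate parent and child in SQL. The staging table and the sequences parent_seq and child_seq are assumptions, not part of the original setup.

-- staging table for the raw CSV rows
create table stg_load (p_name varchar2(50), c_name varchar2(50));

-- after sqlldr has loaded stg_load:
insert into parent (p_id, p_name)
select parent_seq.nextval, p_name
  from (select distinct p_name from stg_load);

insert into child (c_id, c_name, p_id)
select child_seq.nextval, s.c_name, p.p_id
  from stg_load s
  join parent p on p.p_name = s.p_name;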

View 6 Replies View Related

Data Archive Script Is Taking Too Long To Delete A Large Table

Aug 8, 2013

We have data archive scripts; these scripts move data for a date range to a different table. The script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, even though the predicate is the primary key. More info below.

CREATE TABLE "APP"."MON_TXNS"    (    "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,     "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,     "ID_PAYER" NUMBER(12,0),     "ID_PAYER_PI" NUMBER(12,0),     "ID_PAYEE" NUMBER(12,0),     "ID_PAYEE_PI" NUMBER(12,0),     "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,     "STR_TEXT" VARCHAR2(60 CHAR),     "DAT_MERCHANT_TIMESTAMP" DATE,     "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),     "DAT_EXPIRATION" DATE,     "DAT_CREATION" DATE,     "STR_USER_CREATION" VARCHAR2(30 CHAR),     "DAT_LAST_UPDATE"

[Code]...

Data is first moved to the table schema3.OTW, and then we delete all the rows that are in OTW from the original table. Below is the explain plan for the delete.

SQL> explain plan for
  2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2798378986

-------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

View 6 Replies View Related

Server Administration :: How To Compress Table

Sep 8, 2011

There is a huge table I want to compress. How can I do it? I ran alter table tb_my_table compress, but after executing the SQL the table size did not change.
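That is expected: ALTER TABLE ... COMPRESS only marks the table so that future direct-path loads are compressed; existing rows are compressed only when the table is rebuilt. A sketch (the index name is a placeholder):

-- rebuild the table so the existing rows are compressed
alter table tb_my_table move compress;

-- the move leaves indexes unusable; rebuild them
alter index tb_my_table_idx rebuild;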

View 2 Replies View Related






