SQL & PL/SQL :: Unpivoting Large Tables

Feb 6, 2012

I have a 27 million row table in the following format:

MEDCLM_MTH_SUM_KEY PRIMARY_DIAG_CD DIAG_CD2 DIAG_CD3 DIAG_CD4 DIAG_CD5 DIAG_CD6 DIAG_CD7 DIAG_CD8 DIAG_CD9 DIAG_CD10
2212990780 5552 78907 53170 5368
2231127242 V5481 7812 71595 4019 2761 2859 496 V4364 30501

I need to unpivot this data to get it to look like this:

MEDCLM_MTH_SUM_KEY DIAG_CD_LEVEL DIAG_CD
2212990780 PRIMARY_DIAG_CD 5552
2212990780 DIAG_CD2 78907
2212990780 DIAG_CD3 53170
[code]...

I was wondering if there was a quicker, more efficient way to do this.
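
If the database is 11g or later, the UNPIVOT clause can do this in a single pass and feed a CTAS. A hedged sketch; the source table name MEDCLM_MTH_SUM is an assumption (the post shows the columns but not the table name):

-- Sketch only: rows with empty diagnosis slots are dropped, which is
-- the default behaviour (add INCLUDE NULLS to keep them).
CREATE TABLE medclm_diag_unpvt AS
SELECT medclm_mth_sum_key, diag_cd_level, diag_cd
FROM   medclm_mth_sum
UNPIVOT (diag_cd FOR diag_cd_level IN
          (primary_diag_cd AS 'PRIMARY_DIAG_CD',
           diag_cd2        AS 'DIAG_CD2',
           diag_cd3        AS 'DIAG_CD3',
           diag_cd4        AS 'DIAG_CD4',
           diag_cd5        AS 'DIAG_CD5',
           diag_cd6        AS 'DIAG_CD6',
           diag_cd7        AS 'DIAG_CD7',
           diag_cd8        AS 'DIAG_CD8',
           diag_cd9        AS 'DIAG_CD9',
           diag_cd10       AS 'DIAG_CD10'));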




Maintain Large Tables / Cleanup Data From Our Tables

May 18, 2011

I have to clean up data from our tables (production environment) that contain millions of rows. The question is, apart from the partitioned-tables solution, what alternative solution does Oracle recommend?

Should we delete from these tables using a cursor in a PL/SQL block, or export the whole database and, for the tables whose old rows we want to remove, use the QUERY option of the Data Pump utility?

I have used both ways, and I have to admit that the Data Pump solution is much, much faster than the deletion, which suffers from disk I/O. The question, again, is which of these two methods is more reliable and less risky for the health of the database.
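
For what it's worth, a third option often discussed for this situation is deleting in batches with periodic commits, so that undo and redo per transaction stay bounded. A minimal sketch, assuming a hypothetical table BIG_LOG with a date column CREATED_DT (neither name is from the post):

-- Sketch only: BIG_LOG and CREATED_DT are made-up names.
-- Deletes old rows 50,000 at a time; slower than Data Pump but
-- restartable and gentle on undo.
BEGIN
  LOOP
    DELETE FROM big_log
     WHERE created_dt < ADD_MONTHS(SYSDATE, -36)
       AND ROWNUM <= 50000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/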


Upload Very Large Files In Tables?

Nov 2, 2008

Is it possible to upload very large files into Oracle tables? For example, a 1-2 gigabyte video file, or even more. In other words, is it possible to use Oracle as a file server, uploading very large files and storing them?
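
Yes, files of that size can be held in BLOB columns. A hedged sketch with made-up table, directory and file names (SECUREFILE needs 11g; drop that clause on older releases):

-- Sketch only: MEDIA_FILES, MEDIA_DIR and video1.mp4 are hypothetical.
CREATE TABLE media_files (
  file_id    NUMBER PRIMARY KEY,
  file_name  VARCHAR2(255),
  content    BLOB
) LOB (content) STORE AS SECUREFILE;

DECLARE
  l_bfile BFILE := BFILENAME('MEDIA_DIR', 'video1.mp4');
  l_blob  BLOB;
BEGIN
  INSERT INTO media_files (file_id, file_name, content)
  VALUES (1, 'video1.mp4', EMPTY_BLOB())
  RETURNING content INTO l_blob;

  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/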


SQL & PL/SQL :: Large Tables / Remove Unnecessary Columns

Mar 14, 2011

I have a large table with 450 columns, of which we are only using about 170, and our DB block size is 8K. The DBA informed us that row chaining is happening in the database. My question is: if we only have data in 170 columns, why is row chaining happening?

The DBA told us to remove the unnecessary columns. Do those empty columns have any impact on the chaining? And if we increase the DB block size to 32K, will that resolve the issue?
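
As an aside, one way to see how much chaining is actually present is to analyze the table (ANALYZE, not DBMS_STATS, populates CHAIN_CNT) and then look at the chained-row count. A hedged sketch, with BIG_TABLE standing in for the real table name:

-- Sketch only: replace BIG_TABLE with the real table; ESTIMATE keeps
-- the ANALYZE run short on a very large table.
ANALYZE TABLE big_table ESTIMATE STATISTICS SAMPLE 10 PERCENT;

SELECT table_name, num_rows, chain_cnt,
       ROUND(100 * chain_cnt / NULLIF(num_rows, 0), 2) AS pct_chained
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';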


Takes Long Time To Drop Tables With Large Numbers Of Partitions

Jul 17, 2013

11.2.0.3. This is for a build; we are still in development, so there is no risk of data loss. As part of the build, I drop the user, re-create it, and re-create the objects, which allows us to test the build all the way through. It's our process. This user has some tables with several thousand partitions. I ran a 10046 trace, and Oracle is using PL/SQL loops to do DML against the data dictionary.

Is there any way to speed this up? I am going to turn off the recyclebin during the build and turn it back on afterwards; anything else I can do? Right now I just issue DROP USER ... CASCADE. Part of the problem is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this), and we do fairly frequent builds. I can't change the build process; my only option is to try to make this run a little faster.
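
One thing sometimes tried in this situation is dropping the heavily partitioned tables explicitly with PURGE (so they never land in the recyclebin) before the DROP USER CASCADE. Whether it helps depends on where the time is going, so treat this as a hedged sketch; BUILD_USER is a placeholder:

-- Sketch only: BUILD_USER is hypothetical.
BEGIN
  FOR t IN (SELECT table_name
            FROM   dba_tables
            WHERE  owner = 'BUILD_USER') LOOP
    EXECUTE IMMEDIATE 'DROP TABLE build_user.' || t.table_name ||
                      ' CASCADE CONSTRAINTS PURGE';
  END LOOP;
END;
/

DROP USER build_user CASCADE;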


Performance Tuning :: Create Small Functional Indexes For Special Cases In Very Large Tables

Apr 5, 2012

Create small functional indexes for special cases in very large tables.

When a column holds one value in 99% of the records and another value that has to be searched for, it is possible to create an index that maps the common value to NULL. The index will be small and rebuilds will be fast.

Example

create index vh_tst_decode_ind_if1 on vh_tst_decode_ind
(decode(S,'I','I',null),style)

It is possible to make the index more selective when the key is updated and there are many records, which creates more levels in the B-tree.

create index vh_tst_decode_ind_if3 on vh_tst_decode_ind
(decode(S,'I','I',null),
decode(S,'I',style,null)
)

The records can then be accessed like this:

SQL> select --+ index(vh_tst_decode_ind_if3)
2 style ,count(*)
3 from vh_tst_decode_ind
4 where
5 decode(S,'I','I',null)='I'
6 group by style
7 ;

[code]....


Value Too Large For Column MSG

Mar 9, 2011

1. When querying the "alert_log" table I created from the alert log using the script below, two new files were created: ALERT_LOG_30499.bad and ALERT_LOG_30499.log.

The ALERT_LOG_30499.log. contains this error message:

error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log
ORA-12899: value too large for column MSG (actual: 82, maximum: 80)

The ALERT_LOG_30499.bad file, so far, only contains datafile resize information. The datafiles have plenty of space, and there is plenty of space on the SAN slice the datafiles reside on.

2. Then, each time I recreate the table with a larger VARCHAR2 size, the "actual" size in the log file also increases:

error processing column MSG in row 2910 for datafile /u02/damistst/admin/bdump/alert_damistst.log ORA-12899: value too large for column MSG (actual: 92, maximum: 90)

3. When I increased the VARCHAR2 size to 120+, it gave me this message:

[oracle@tds_dw bdump]$ cat ALERT_LOG_30715.log

LOG file opened at 03/09/11 14:46:20

Field Definitions for table ALERT_LOG
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Rows with all null fields are accepted

Fields in Data Source:

MSG CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader

TABLE DDL:

create table
alert_log ( msg varchar2(80) )
organization external (
type oracle_loader
default directory BDUMP
access parameters (
records delimited by newline
)
location('alert_damistst.log')
)
reject limit 1000;

**** QUESTION
I can still query the alert_log table in SQL*Plus, but those .log and .bad files are generated. Is this an issue?

example of a piece of the results from " select * from alert_log; "

MSG
--------------------------------------------------------------------------------
Thread 1 advanced to log sequence 5254 (LGWR switch)
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Thread 1 cannot allocate new log
Checkpoint not complete
Current log# 1 seq# 5254 mem# 0: /tds_oradata/redo01a.log
Current log# 1 seq# 5254 mem# 1: /u02/damistst/REDO_LOGS/redo01b.log
Wed Mar 9 14:33:09 2011
Thread 1 advanced to log sequence 5255 (LGWR switch)
Current log# 2 seq# 5255 mem# 0: /tds_oradata/redo02a.log
Current log# 2 seq# 5255 mem# 1: /u02/damistst/REDO_LOGS/redo02b.log

13076 rows selected.
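
For reference, the ORA-12899 rejections themselves usually go away if MSG is declared wide enough in both the column list and the access parameters, since the default external field size is CHAR(255) and the original column was only VARCHAR2(80). A hedged, untested sketch based on the DDL above:

-- Sketch only: same external table, with MSG wide enough that long
-- alert-log lines are no longer rejected into the .bad file.
CREATE TABLE alert_log (
  msg VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY bdump
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    NOBADFILE NOLOGFILE
    FIELDS (msg CHAR(4000))
  )
  LOCATION ('alert_damistst.log')
)
REJECT LIMIT UNLIMITED;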


Inserted Value Too Large?

Jun 12, 2008

I keep getting the "ORA-01401:inserted value too large for column". No biggie - I've dealt with this multiple times before (but obviously not enough in this instance).

The data being entered is a SINGLE digit number - a number like 1, 2 or 3 - nothing fancy, just a plain straight everyday single digit number. The field in question is / was set as field type "Integer". Now, there is no set field size for integers - not in Oracle anyway. Since it wasn't happy, I decided to try field types of 'Number' and also 'Varchar2' set to 10 bytes. I have deleted the column from the table and re-created it as well.

Here's the even more puzzling bit: I can INSERT data into this field, BUT I can not UPDATE the field with the exact same data. The data is being inserted from a csv file. The exact same csv file used for the insert works, but the same data in the same file will not update that particular column.

If I delete the specific column data from the csv file, all goes through fine. If I hard code the update for the field (eg SET field2 = '1' or even SET field2 = ' ') it still doesn't work. So I know it is not the csv file that is causing problems. I deleted all data from the csv file except the field in question - still no luck.

So after eliminating:
1. The field type
2. The field length
3. The data being inserted
4. The external source of the data

What else could possibly be the problem?


SQL & PL/SQL :: Value Too Large For Variable

Jan 21, 2011

I have declared a variable:

VAR1 VARCHAR2(20000);

But when I assign some strings to that variable I still get a "value too large" message. What should I do?
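
For context, a PL/SQL VARCHAR2 variable can hold at most 32,767 bytes, and a declaration of 20,000 counts bytes by default, not characters, so multibyte data or longer strings can still overflow it. A hedged sketch of the two usual workarounds (character-length semantics, or a CLOB):

-- Sketch only.
DECLARE
  var1 VARCHAR2(20000 CHAR);   -- 20000 characters rather than bytes
  var2 CLOB;                   -- for anything that may exceed 32767 bytes
BEGIN
  var2 := TO_CLOB(RPAD('x', 20000, 'x'));
  var2 := var2 || var2 || var2;           -- a CLOB can grow well past 32K
  DBMS_OUTPUT.PUT_LINE(LENGTH(var2));
END;
/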


SQL & PL/SQL :: Getting Error / Value Too Large For Column

Jul 29, 2011

I am using the TRIM function in my SELECT query, but I am still getting white space in my output. Because of this, I get the error "value too large for column..." when I load the data into a table through SQL*Loader.

define APPName="&1"
set heading off;
set verify off;
set newpage 0
set feedback off;
set rtrimspool on;
set termout off;
set pagesize 40000;

[code].....
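
As an aside, padding in spooled output usually comes from SQL*Plus fixed-width column formatting rather than from the data, so TRIM in the SELECT cannot remove it. A hedged sketch (column names and spool path are made up) that concatenates the columns into one delimited string and trims the spooled lines:

-- Sketch only: MY_TABLE, COL1..COL3 and the spool path are hypothetical.
SET TRIMSPOOL ON
SET LINESIZE 32767
SPOOL /tmp/&APPName..dat
SELECT TRIM(col1) || '|' || TRIM(col2) || '|' || TRIM(col3)
FROM   my_table;
SPOOL OFF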


SQL & PL/SQL :: Oracle Large Update

Sep 30, 2011

I need to add a new column to a very large table and update it with 'N' (this is similar to specifying DEFAULT 'N'). I'm using Oracle 9i. Which method is best, in terms of speed, for updating this column on the entire table? The table contains ~30 million records. I've read that parallel DML (here UPDATE) does not work on unpartitioned tables, and my table is not partitioned. If I specify:

update /*+ full(p) parallel(p,10) */ my_table p set p.my_column = 'N';

This, I think, will not speed up the operation on 9i. Our business does not accept using CREATE TABLE AS SELECT, then renaming the table and recreating all indexes and so on.
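
For comparison, a common compromise on 9i is to update in committed batches so each transaction stays small; each pass re-scans the table, so this trades raw speed for bounded undo. A hedged sketch using the names from the post:

-- Sketch only: fills the new column 100,000 rows at a time.
BEGIN
  LOOP
    UPDATE my_table
       SET my_column = 'N'
     WHERE my_column IS NULL
       AND ROWNUM <= 100000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/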


Identify Cause Of Large UGA Consumption?

Apr 27, 2013

I'm trying to identify which PL/SQL, or the exact code, is causing massive UGA consumption. How can I identify it?
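
One starting point is to rank sessions by their current UGA memory and then look at what those sessions are running. A hedged sketch:

-- Sketch only: sessions ordered by UGA memory, with the SQL_ID of
-- what each session is currently executing.
SELECT s.sid, s.serial#, s.username, s.module, s.sql_id,
       ROUND(st.value / 1024 / 1024) AS uga_mb
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'session uga memory'
ORDER  BY st.value DESC;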


Can Undo Tablespace Be Too Large

Apr 26, 2013

Can an undo tablespace be too large and actually hurt performance? I have seen a system with a dedicated 1 TB drive for undo tablespace (no guarantee) with an undo_retention of 7 days. Would this hurt performance? What about setting an undo_retention of 24 hours with no guarantee? The only mention I could find online said that it would not hurt performance but I wanted to double check. You would think that Oracle does not care if it deletes the undo at 15 minutes or if it deletes the undo at a later date such as 7 days later and the performance should stay the same.


Large Amount Of Data

Aug 6, 2013

I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not enough memory in the buffer cache, so how does that work? Will the transaction succeed or fail? I have another doubt: can Oracle read data from memory only, or can it also read directly from disk?


SQL & PL/SQL :: FOR UPDATE Large Data

Jan 23, 2012

How do I use FOR UPDATE to work with large data in a table column?

claimClob clob:=claim; -- claim large data
v_buf varchar2(1000);
amount binary_integer:=1000;
position binary_integer:=1;

[Code]....

Why does the FOR UPDATE not do anything?
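
For what it's worth, the usual pattern is to lock the row with SELECT ... FOR UPDATE into a LOB locator and then write through the locator with DBMS_LOB. A hedged sketch with made-up table and column names:

-- Sketch only: CLAIMS and CLAIM_DATA are hypothetical; the column is
-- assumed to already hold a non-null CLOB (use EMPTY_CLOB() otherwise).
DECLARE
  l_clob  CLOB;
  v_buf   VARCHAR2(1000) := RPAD('x', 1000, 'x');
  amount  BINARY_INTEGER := 1000;
BEGIN
  SELECT claim_data INTO l_clob
  FROM   claims
  WHERE  claim_id = 101
  FOR UPDATE;                                    -- without this, writes raise ORA-22920

  DBMS_LOB.WRITE(l_clob, amount, 1, v_buf);      -- overwrite from position 1
  DBMS_LOB.WRITEAPPEND(l_clob, amount, v_buf);   -- append another chunk

  COMMIT;
END;
/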


Large Rows Committing Delete

Nov 8, 2010

I was asked in a telephone interview about committing a delete (a few million rows) in an Oracle log table which had a trillion rows in total. He said that the delete took 2 days.

Then he asked me: if the commit is then performed (assuming huge rollback segments are allocated), how long does that commit take?


Adding Column To Large Table

Aug 12, 2013

I want to add a column to a table which has a huge amount of data, and fill it with data from another table. What is the best way to do it? Is it faster to use CTAS instead of ALTER TABLE ... ADD COLUMN?
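
For what it's worth, a hedged sketch of the CTAS variant with made-up names; it builds the new table with the extra column already populated and then swaps names (indexes, constraints and grants have to be recreated afterwards):

-- Sketch only: BIG_TABLE, SOURCE_TABLE and the join key are hypothetical.
CREATE TABLE big_table_new NOLOGGING AS
SELECT b.*, s.extra_value AS new_col
FROM   big_table b
LEFT   JOIN source_table s ON s.big_id = b.id;

RENAME big_table     TO big_table_old;
RENAME big_table_new TO big_table;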


Querying Large Non-consecutive Range

Aug 23, 2007

SELECT * FROM table WHERE id = $x

$x being a range of non-consecutive values like so:
1,3,5-9,13,18,21 and so on...

I realize I can query using an array of operands and such, but these ranges can contain upwards of 100 or more items. I want to minimize the number of queries I have to run and their length. Is there any resource you can point me to that can optimize something like this?
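
One approach that keeps the statement short is to load the ranges into a small (temporary) table, one row per range, and join with BETWEEN. A hedged sketch, with MY_TABLE standing in for the real table:

-- Sketch only: each range is a (lo, hi) row; single values use lo = hi.
CREATE GLOBAL TEMPORARY TABLE id_ranges (
  lo NUMBER,
  hi NUMBER
) ON COMMIT PRESERVE ROWS;

INSERT INTO id_ranges VALUES (1, 1);
INSERT INTO id_ranges VALUES (3, 3);
INSERT INTO id_ranges VALUES (5, 9);
INSERT INTO id_ranges VALUES (13, 13);
INSERT INTO id_ranges VALUES (18, 18);
INSERT INTO id_ranges VALUES (21, 21);

SELECT t.*
FROM   my_table t
JOIN   id_ranges r ON t.id BETWEEN r.lo AND r.hi;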


SQL & PL/SQL :: How To Store A Large String In Oracle

Nov 20, 2009

How do I store a large string in Oracle?
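
For context, a VARCHAR2 table column tops out at 4,000 bytes (before 12c), so longer strings normally go into a CLOB. A hedged sketch with made-up names:

-- Sketch only: DOCUMENTS and DOC_TEXT are hypothetical.
CREATE TABLE documents (
  doc_id   NUMBER PRIMARY KEY,
  doc_text CLOB
);

INSERT INTO documents (doc_id, doc_text)
VALUES (1, 'text of effectively unlimited length ...');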


SQL & PL/SQL :: Performance Tuning In Large Inserts

Jan 19, 2011

I want to load 10 million records from a staging table into a master table. One piece of logic must apply during the load: if a row is already present in the master table, we need to update the corresponding row in the master table; otherwise the row is inserted into the target table.

I have been using the bulk collect and FORALL method to load the data; it shows better performance compared to cursor row-by-row processing. As per the Oracle documentation, we cannot use SELECT statements inside a FORALL, so we could not put the logic inside the FORALL.
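
For reference, this update-or-insert logic can often be expressed as a single MERGE, which sidesteps the SELECT-inside-FORALL restriction entirely. A hedged sketch with made-up table and column names:

-- Sketch only: key and column names are hypothetical.
MERGE INTO master_table m
USING staging_table s
ON (m.id = s.id)
WHEN MATCHED THEN
  UPDATE SET m.col1 = s.col1,
             m.col2 = s.col2
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2)
  VALUES (s.id, s.col1, s.col2);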


SQL & PL/SQL :: Splitting Large Table Output

Sep 20, 2012

I need to dump the contents of a very large table into text files for archiving as we retire this old DB. The table has about 16 million rows, and a few of the columns are up to 4000 characters wide (VARCHAR2(4000)). I've got 2 problems:

1) How can I select records that occur in a certain month of a year (there is a date column) and put the selected records into a file?

2) I don't have access to the server OS, so UTL_FILE is not possible. The output is also so large that I'm having trouble with the DBMS_OUTPUT.PUT_LINE.

I'm trying to get the first block of the IF working first, so the rest is just placeholders.

DECLARE
v_mm number (2);
v_yyyy number (4);
min_mm number (2);
min_yyyy number (4);
max_mm number (2);
max_yyyy number (4);
min_date date;
[code]....
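
On question 1, a hedged sketch of selecting one month at a time with a plain range predicate (BIG_TABLE and CREATED_DT are made-up names); the result can then be spooled from a client-side SQL*Plus session, since UTL_FILE would need access to the server file system:

-- Sketch only: substitute the real table and date column.
SELECT t.*
FROM   big_table t
WHERE  t.created_dt >= TO_DATE('2012-09-01', 'YYYY-MM-DD')
AND    t.created_dt <  ADD_MONTHS(TO_DATE('2012-09-01', 'YYYY-MM-DD'), 1);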


SQL & PL/SQL :: Store Large XML Data In Blob?

May 8, 2010

Since XML-files only contain character data, we could/should store it in a CLOB, rather than a BLOB.

But one of my friends has a table where a column is defined as a BLOB, and I came to know that XML data is being stored in it. I searched for articles with the keyword 'How to insert large XML data in BLOB', but that did not work. How do I store large XML content in a BLOB, and how do I extract it?
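
A hedged sketch of one way to push XML held in a CLOB into a BLOB column (extraction goes the other way with DBMS_LOB.CONVERTTOCLOB); the table and column names are made up:

-- Sketch only: XML_STORE(ID NUMBER, XML_DATA BLOB) is hypothetical.
DECLARE
  l_clob        CLOB := '<doc><item>example</item></doc>';
  l_blob        BLOB;
  l_dest_offset INTEGER := 1;
  l_src_offset  INTEGER := 1;
  l_lang_ctx    INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  l_warning     INTEGER;
BEGIN
  INSERT INTO xml_store (id, xml_data)
  VALUES (1, EMPTY_BLOB())
  RETURNING xml_data INTO l_blob;

  DBMS_LOB.CONVERTTOBLOB(l_blob, l_clob, DBMS_LOB.LOBMAXSIZE,
                         l_dest_offset, l_src_offset,
                         DBMS_LOB.DEFAULT_CSID, l_lang_ctx, l_warning);
  COMMIT;
END;
/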


SQL & PL/SQL :: Create Large CSV And Send It As Attachment

Jun 22, 2012

Can I send a large file (over 1 million rows) as a .CSV email attachment?

I have a requirement to send a large ".CSV" file as an email attachment when the procedure / package is invoked. The data in the CSV file is pulled from a table (as below).

(1) I tried the code below, which executes "send_email" using the UTL_SMTP utility. It works fine with 100,000 records and I get an email with a .csv attachment.
(2) With more than 100,000 records I do not get any email / attachment.

I am looking to send a huge amount of data, around 1 million rows.
>>>>>>>>>>>>>>>>>>>>>
DECLARE
l_clob CLOB;
l_attach_text VARCHAR2 (32767);
l_attach_text_h VARCHAR2 (32767);
Cursor c1 is
SELECT LOCATION,PARTY_NAME,ADDRESS1,CITY,STATE_PROV,COUNTRY,POSTAL_CODE FROM emp_table;
[code].......
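
The 32,767-byte cap on the VARCHAR2 attachment variable is the usual reason this stops working past a certain row count. One workaround is to build the attachment in a CLOB and stream it to UTL_SMTP in chunks; a hedged sketch, assuming a connection whose MIME attachment headers have already been written:

-- Sketch only: p_conn is an already-open UTL_SMTP connection,
-- p_csv the CSV body accumulated in a CLOB.
CREATE OR REPLACE PROCEDURE stream_csv_attachment (
  p_conn IN OUT NOCOPY UTL_SMTP.CONNECTION,
  p_csv  IN CLOB
) IS
  l_offset PLS_INTEGER := 1;
  l_chunk  PLS_INTEGER := 32000;
  l_len    PLS_INTEGER := DBMS_LOB.GETLENGTH(p_csv);
BEGIN
  WHILE l_offset <= l_len LOOP
    UTL_SMTP.WRITE_DATA(p_conn, DBMS_LOB.SUBSTR(p_csv, l_chunk, l_offset));
    l_offset := l_offset + l_chunk;
  END LOOP;
END stream_csv_attachment;
/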


SQL & PL/SQL :: Insert From Large String Into CLOB

Jun 7, 2012

I'm looking for a way to insert strings larger than 40,000 characters into a CLOB field without getting "ORA-01461: can bind a LONG value only for insert into a LONG column".

Something like this:

insert into MyClobTable(ID,Data) values ('101','A string containing more than 40000 characters...')

The problem is that a Java application concatenates the string from an MSSQL DB, so the string is not stored in my Oracle DB first. As far as I'm aware this means I can't chop my string into pieces and use DECLARE to put the pieces into variables, right?

Below is an example I found, but I don't think I can apply it to my case, correct?

SQL> CREATE TABLE myClob
2 (id NUMBER PRIMARY KEY,
3 clob_data CLOB);

Table created.

SQL>
SQL> INSERT INTO myClob VALUES (101,null);

1 row created.

SQL>
SQL> declare
2 clob_pointer CLOB;
3 v_buf VARCHAR2(1000);
4 Amount BINARY_INTEGER :=1000;

[Code]...

PL/SQL procedure successfully completed.

SQL>
SQL> drop table myClob;

Table dropped.

SQL>
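
For what it's worth, one pattern that avoids ORA-01461 is to bind the value as a CLOB rather than a long VARCHAR2, for example through a small stored procedure that the Java side calls with a CLOB / character-stream bind. A hedged sketch using the MyClobTable from the post:

-- Sketch only: the caller binds p_data as a CLOB (in JDBC, via setClob
-- or setCharacterStream), so no 4000/32K literal limit applies.
CREATE OR REPLACE PROCEDURE insert_my_clob (
  p_id   IN NUMBER,
  p_data IN CLOB
) IS
BEGIN
  INSERT INTO myclobtable (id, data) VALUES (p_id, p_data);
END insert_my_clob;
/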


How To Get Fast Output From Large Table

Jan 25, 2013

I have three tables, and each of these tables has around 30 lakh (3 million) records.

Using a join I am retrieving records from these tables, but it is taking a long time to get the output.

Can partitioning improve performance?
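
Partitioning may or may not be the fix; the usual first step is to look at the join plan. A hedged sketch with made-up table names:

-- Sketch only: substitute the real three-table join.
EXPLAIN PLAN FOR
SELECT t1.col1, t2.col2, t3.col3
FROM   table1 t1
JOIN   table2 t2 ON t2.t1_id = t1.id
JOIN   table3 t3 ON t3.t2_id = t2.id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);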


SQL & PL/SQL :: Data Of Column Is Large - How To Insert

Oct 9, 2013

I have encountered some problems in SQL. I want to create a table with a bunch of prepared data. For ease of use, I chose to generate a SQL file which contains all the SQL statements used to create the table and insert the data, so all the data can only be inserted into the table using SQL statements.

My questions:
1) If the data for a column is large (for example, 1 MB of text), how do I insert it using SQL? Is there a piecewise method?
2) And how can I insert BLOB data using a SQL statement?

What I want is to enclose all the operations in a single SQL file, and when the table is needed, just execute this SQL file.
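
On question 1, a hedged piecewise pattern that works inside a plain SQL script is to insert a first chunk and then append further chunks; CLOBs can be appended with ||, BLOBs with DBMS_LOB.WRITEAPPEND from hex chunks. Table and column names are made up:

-- Sketch only: DOCS(ID NUMBER, BODY CLOB, BIN BLOB) is hypothetical.
INSERT INTO docs (id, body, bin)
VALUES (1, TO_CLOB('first chunk of the large text ...'), EMPTY_BLOB());

-- CLOB: each UPDATE appends another chunk (keep each literal under 4000 bytes).
UPDATE docs SET body = body || ' second chunk ...' WHERE id = 1;
UPDATE docs SET body = body || ' third chunk ...'  WHERE id = 1;

-- BLOB: append binary data supplied as hex, chunk by chunk.
DECLARE
  l_blob BLOB;
  l_raw  RAW(2000) := HEXTORAW('CAFEBABE');
BEGIN
  SELECT bin INTO l_blob FROM docs WHERE id = 1 FOR UPDATE;
  DBMS_LOB.WRITEAPPEND(l_blob, UTL_RAW.LENGTH(l_raw), l_raw);
  COMMIT;
END;
/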


Estimate Next Extent Size For Very Large Table?

May 13, 2011

How do I estimate the next extent size for a very large table? What should I take into account? Is there any formula for that?


SQL & PL/SQL :: Displaying N Number Of Rows From Large Result Set

Nov 29, 2012

I have a query that returns 11 million rows, but not all of them can be displayed in SQL Developer or DbVisualizer because of limited memory or other issues. I need to copy the entire result set to Excel for further calculations.

Is there any way I can select N rows at a time out of my actual result set?

For example:
a) A result set contains 10 Million rows in total.
b) I want to display first 5 Million rows by executing a query
c) Then I want to display the remaining 5 Million rows by executing the query again with any parameter changes.

So all I want is to extract the rows of my actual result set in two or more executions, depending on the number of rows.
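
A hedged sketch of the classic ROWNUM pagination pattern, which returns one slice of the ordered result per execution (table and ORDER BY column are made up):

-- Sketch only: e.g. :lo = 0, :hi = 5000000 for the first half,
--              :lo = 5000000, :hi = 10000000 for the second.
SELECT *
FROM  (SELECT q.*, ROWNUM rn
       FROM  (SELECT *                  -- the original query,
              FROM   my_big_table       -- with a deterministic ORDER BY
              ORDER  BY id) q
       WHERE ROWNUM <= :hi)
WHERE rn > :lo;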


Oracle SQL - Query A Large Library File?

Dec 1, 2008

I'm currently new to Oracle 10g SQL and I need to query a large library file. I need to find the distinct first keywords associated with references written by authors who have more than 10 references in the readings table.
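
A hedged sketch of the general shape of such a query; the column names are made up since the real schema isn't shown:

-- Sketch only: assumes READINGS(AUTHOR, FIRST_KEYWORD, ...).
SELECT DISTINCT r.first_keyword
FROM   readings r
WHERE  r.author IN (SELECT author
                    FROM   readings
                    GROUP  BY author
                    HAVING COUNT(*) > 10);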


Server Administration :: Use Large Pages Parameter And AMM?

Jul 29, 2011

It always used to be that Automatic Memory Management and Linux huge pages were incompatible: you had to use one or the other. But 11.2.0.2 has a new parameter, USE_LARGE_PAGES. This isn't documented apart from a few articles on MetaLink, but Googling it suggests that if it is set to TRUE (the default) or ONLY then I can use AMM with huge pages.







