Estimate Next Extent Size For Very Large Table?

May 13, 2011

How do I estimate the next extent size for a very large table? What should I take into account? Is there a formula for that?




Server Utilities :: Estimate Size Of Flat File Based On Table Size?

May 8, 2013

We are planning to export the table data to a pipe-delimited flat file. How do I estimate the size of the flat file based on the table size or the average row length?
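One ballpark approach is to work from the optimizer statistics: rows times average row length, plus roughly one byte per column for the pipe delimiters. A sketch, assuming statistics are reasonably current and using MY_TABLE as a placeholder name (the external text form of numbers and dates differs from their stored length, so treat the result as an estimate only):

-- Rough flat-file size: num_rows * (avg_row_len + one delimiter per column)
SELECT t.num_rows * (t.avg_row_len + c.col_cnt)                         AS est_bytes,
       ROUND(t.num_rows * (t.avg_row_len + c.col_cnt) / 1024 / 1024, 1) AS est_mb
FROM   user_tables t,
       (SELECT COUNT(*) AS col_cnt
        FROM   user_tab_columns
        WHERE  table_name = 'MY_TABLE') c
WHERE  t.table_name = 'MY_TABLE';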


How Does Oracle Determine Initial Extent And Max Extent Size?

Aug 10, 2012

Suppose a tablespace's allocation_type is SYSTEM; how does Oracle then determine the initial extent and max extent sizes?
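For reference, each tablespace's allocation policy can be checked in the data dictionary; with SYSTEM (autoallocate) allocation Oracle chooses the extent sizes itself, typically starting at 64KB and stepping up to 1MB, 8MB and 64MB as the segment grows. A minimal sketch:

-- Extent allocation policy per tablespace
SELECT tablespace_name,
       extent_management,   -- LOCAL or DICTIONARY
       allocation_type,     -- SYSTEM (autoallocate) or UNIFORM
       initial_extent,
       next_extent
FROM   dba_tablespaces
ORDER  BY tablespace_name;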


Partitioned Table - What Extent Size To Be Set

Sep 28, 2012

I have a partitioned table (one partition per month). About 1GB of data is added every month. What extent size should I set? Is 1GB OK?

What if the data grows beyond 1GB? Allocating a new 1GB extent probably takes a long time, and clients may see delays while they're inserting during that time (it's an OLTP system).

When is a new extent allocated? Exactly when the existing extent runs out of space, or before? Partitions are dropped after one year, so free space isn't a problem.
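One way to see how a monthly partition actually ends up laid out - how many extents and how large - is to query DBA_EXTENTS for that partition's segment. A sketch with placeholder owner, table and partition names:

-- Extent layout of a single partition's segment
SELECT segment_name,
       partition_name,
       COUNT(*)               AS extent_cnt,
       SUM(bytes)/1024/1024   AS total_mb,
       MIN(bytes)/1024/1024   AS min_extent_mb,
       MAX(bytes)/1024/1024   AS max_extent_mb
FROM   dba_extents
WHERE  owner          = 'MY_SCHEMA'
AND    segment_name   = 'MY_PART_TABLE'
AND    partition_name = 'P_2012_09'
GROUP  BY segment_name, partition_name;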


Server Administration :: Calculation Of Initial Extent Size Of Table

Apr 21, 2010

I need to create table A, which is going to have more than 8 lakh (800,000) records. Table A will be truncated and all 800,000 records reinserted daily. The number of records will also grow by about 50,000 per month. What should the storage clause parameters be, mainly the initial and next extents?
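A rough initial-extent figure can be derived from the expected row count times the average row length, padded for block overhead and PCTFREE. A sketch, assuming statistics exist on a representative copy of the data and using MY_TABLE_A as a placeholder name:

-- Ballpark segment size: expected rows * avg_row_len, padded ~15% for overhead
SELECT ROUND(800000 * avg_row_len * 1.15 / 1024 / 1024) AS est_initial_mb
FROM   user_tables
WHERE  table_name = 'MY_TABLE_A';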


SQL & PL/SQL :: View Sample Data From A Table Which Is Large In Size?

Apr 26, 2010

I have a question on how to view sample data from a very large table (more than 10 million rows).

I just need to see some sample data from the large table (to see what kind of application-related data it contains).

My question is :

Select *
from Sample_table
where rownum < 10

Is this a good way to view the sample data?

My understanding is that ROWNUM is assigned to the rows only once all the rows are retrieved.

So what is the best way to view it? I am not sure of any condition to put in at the initial time of querying.
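For what it's worth, ROWNUM is actually assigned as rows are produced, so WHERE ROWNUM < 10 stops the scan early rather than fetching everything; it just returns the first rows encountered, not a random sample. The SAMPLE clause is another option when a more representative slice is wanted. A sketch:

-- Cheap: returns the first rows encountered (not random)
SELECT *
FROM   sample_table
WHERE  ROWNUM < 10;

-- More representative: reads roughly 0.01% of the rows
SELECT *
FROM   sample_table SAMPLE (0.01);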


What Will Be Initial Extent Size

Oct 29, 2010

I've read the documentation that describes storage management. I create a tablespace as:

CREATE TABLESPACE MY_TABLESPACE_NAME
DATAFILE 'path/filename1.dbf' SIZE 3000M AUTOEXTEND ON NEXT 200M MAXSIZE 4000M
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
BLOCKSIZE 8k
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;

As the extent management is local, does that mean the storage clause of any object (tables, indexes, etc.) placed in it isn't taken into consideration? I mean in the case of placing a table in the mentioned tablespace with storage parameters defined as follows:

CREATE TABLE MY_TABLE(
...
)
TABLESPACE MY_TABLESPACE_NAME
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 100M
NEXT 20M
MINEXTENTS 1
MAXEXTENTS 50
BUFFER_POOL DEFAULT
)

1. what will be the initial extent size? 1M or 100M?
2. what will be the next extent size? 1M or 20M?
3. will the maxextents parameter be taken into consideration?
4. when I'm sure the tablespace is dedicated to keeping only one object [MY_TABLE], what should be the relation between the initial datafile size [filename1.dbf] and the initial extent size? Should they be equal, or doesn't it matter?
5. as the SEGMENT SPACE MANAGEMENT is AUTO, the PCTFREE param doesn't make sense, right?
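One way to answer questions 1-3 empirically is to create the table and look at the extents it actually receives. In a locally managed UNIFORM SIZE 1M tablespace every extent is 1MB, so the INITIAL 100M request is typically satisfied by allocating 100 x 1MB extents, while NEXT and MAXEXTENTS are effectively ignored - but this is easy to verify. A sketch:

-- Actual extents allocated to MY_TABLE (expect uniform 1MB extents)
SELECT segment_name,
       extent_id,
       bytes/1024/1024 AS extent_mb
FROM   user_extents
WHERE  segment_name = 'MY_TABLE'
ORDER  BY extent_id;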


Performance Tuning :: Initial Extent For Table?

Mar 19, 2012

The table has 1.2 million chained rows, 1.7 million blocks, etc. The initial extent for this table is 64K and the next extent is 1MB. I would like to calculate this out better for efficiency and performance, because it will not be efficient as it stands. How do I calculate the size?
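A hedged starting point for the arithmetic: required space is roughly rows times average row length, adjusted for PCTFREE; and the chained/migrated rows can be confirmed with ANALYZE ... LIST CHAINED ROWS (the CHAINED_ROWS table comes from utlchain.sql). The table name below is a placeholder:

-- Ballpark table size from statistics, allowing for PCTFREE 10
SELECT ROUND(num_rows * avg_row_len / (1 - 0.10) / 1024 / 1024) AS est_mb
FROM   user_tables
WHERE  table_name = 'BIG_TABLE';

-- Confirm chained/migrated rows (CHAINED_ROWS is created by utlchain.sql)
ANALYZE TABLE big_table LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows WHERE table_name = 'BIG_TABLE';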


How To Make System Table Space Extent Management Dictionary

Dec 21, 2010

I want to create the SYSTEM tablespace with dictionary extent management, using this syntax:

create database
logfile
group 1 ('/u01/app/oradata/anand/redo1a.log') size 100M,
group 2 ('/u01/app/oradata/anand/redo2a.log') size 100M,
group 3 ('/u01/app/oradata/anand/redo3a.log') size 100M
datafile '/u01/app/oradata/anand/system.dbf' size 400M extent management dictionary
sysaux datafile '/u01/app/oradata/anand/sysaux.dbf' size 300M
default temporary tablespace temp tempfile '/u01/app/oradata/anand/temp.dbf' size 50M

but it is giving this error:
ERROR at line 6:
ORA-25141: invalid EXTENT MANAGEMENT clause

How can I make the SYSTEM tablespace's extent management dictionary-managed?


Server Administration :: Allocating Extent To A Clob Column Of A Table

Dec 30, 2010

I have a table with two CLOB columns and need to manually allocate space to the table and to its LOB segments. Is the following command correct?

--to allocate extent to the table
alter table emp allocate extent;
--the table has columns named col1 and col2 which are clob
--to allocate extents to the columns
alter table emp modify lob (col1) (allocate extent (size 10m))
/
alter table emp modify lob (col2) (allocate extent (size 10m))
/


Exadata :: Estimate Storage Savings For Hybrid Columnar Compression?

Mar 27, 2013

Is there a way to estimate the storage savings for Hybrid Columnar Compression (HCC) in Oracle Exadata x3-2 machine ?


Adding Column To Large Table

Aug 12, 2013

I want to add a column to a table which has a huge amount of data, and fill it with data from another table. What is the best way to do it? Is it faster to use CTAS instead of ALTER TABLE ADD COLUMN?
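For reference, a minimal CTAS sketch of the idea (all names are placeholders; NOLOGGING and PARALLEL are optional and carry the usual recoverability caveats):

-- Build the widened table in one pass instead of ALTER + UPDATE
CREATE TABLE big_table_new
NOLOGGING PARALLEL 4
AS
SELECT b.*, s.new_col
FROM   big_table b
LEFT JOIN source_table s ON s.id = b.id;

-- Then swap the names; indexes, grants and constraints must be recreated
-- RENAME big_table     TO big_table_old;
-- RENAME big_table_new TO big_table;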


SQL & PL/SQL :: Splitting Large Table Output

Sep 20, 2012

I need to dump the contents of a very large table into text files for archiving as we retire this old DB. The table has about 16 million rows, and a few of the columns are up to 4000 characters wide (VARCHAR2(4000)). I've got 2 problems:

1) How can I select records that occur in a certain month of a year (there is a date column) and put the selected records into a file?

2) I don't have access to the server OS, so UTL_FILE is not possible. The output is also so large that I'm having trouble with DBMS_OUTPUT.PUT_LINE.

I'm trying to get the first block of the IF working first, so the rest is just placeholders.

DECLARE
v_mm number (2);
v_yyyy number (4);
min_mm number (2);
min_yyyy number (4);
max_mm number (2);
max_yyyy number (4);
min_date date;
[code]....
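If a client-side tool is acceptable, note that SQL*Plus spooling writes on the client machine (unlike UTL_FILE, which needs the server file system), so one hedged approach to problem 1 is to spool one month at a time. A sketch with assumed column and table names:

-- SQL*Plus, client-side spool, one file per month
SET HEADING OFF FEEDBACK OFF PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON

SPOOL archive_2012_01.txt
SELECT col1 || '|' || col2 || '|' || TO_CHAR(date_col, 'YYYY-MM-DD')
FROM   big_table
WHERE  date_col >= DATE '2012-01-01'
AND    date_col <  DATE '2012-02-01';
SPOOL OFF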


How To Get Fast Output From Large Table

Jan 25, 2013

I have three tables, each with around 30 lakh (3 million) records.

I am retrieving records from these tables using joins, but it is taking a long time to get the output.

Can partitioning improve the performance?


Server Utilities :: Estimate Tablespace Growth While Loading Data Using Sqlldr?

Jun 1, 2011

We load a large amount of data into multiple tables using sqlldr. The amount of data we need to load varies with the situation. We want to estimate the tablespace usage growth due to this data load, so we can verify/extend the tablespaces before the load. Though setting the tablespaces to autoextend would work in this case, we want to avoid extending a tablespace while sqlldr is executing, for performance reasons.

Our initial attempt was to note the tablespace size before and after executing sqlldr and use the delta. But this delta was not consistent across environments for the same amount of data. Different environments mean different Oracle servers, different existing tablespace sizes, one data file vs. multiple data files, etc.

How do we reliably estimate how much tablespace we need for the given amount of data?


SQL & PL/SQL :: How To Implement Pagination For Large Table Joins

Sep 2, 2011

I have two large tables (rptbody and rpthead) which have millions of records or even more. Below is the table schema:

describe rpthead
Name Null Type
--------------------------- -------- -------------
RPTNO NOT NULL NUMBER
RPTDATE NOT NULL DATE
RPTD_BY NOT NULL VARCHAR2(25)
PRODUCT_ID NOT NULL NUMBER
[code]...

What I want is to get all the data from the rptbody table whose referenced RPTNO belongs to a particular product_id in the rpthead table; here's the SQL:

SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
t0.RPTNO IN
(
SELECT t1.RPTNO FROM RPTHEAD t1 where t1.PRODUCT_ID IN ('4647')
)
)
ORDER BY t0.LINENO

Since the result set is pretty large, my application (think of it as a couple of jobs, each of which should finish within a time window) can only process a subset of the data at a time, so I need pagination so that the next job can continue the processing until all data is processed. Below is the SQL with pagination:

select * from (
select a.*, ROWNUM rnum from
(
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
[code]....

As you can see, each query will take 100 rows from the DB. The problem for now is that the query is taking too much time (10+ minutes). I know the slowness is due to "ORDER BY t0.LINENO", but it's required for pagination.
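For reference, the classic ROWNUM pagination pattern that the truncated block above appears to follow looks roughly like this (the 100/200 page bounds are placeholders); the ORDER BY column generally needs a supporting index for the top-N optimization to avoid sorting the whole result set:

SELECT *
FROM  (SELECT a.*, ROWNUM rnum
       FROM  (SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
              FROM   RPTBODY t0
              WHERE  t0.RPTNO IN (SELECT t1.RPTNO
                                  FROM   RPTHEAD t1
                                  WHERE  t1.PRODUCT_ID IN ('4647'))
              ORDER  BY t0.LINENO) a
       WHERE  ROWNUM <= 200)   -- upper bound of the page
WHERE rnum > 100;              -- lower bound of the page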


SQL & PL/SQL :: Unable To Create Index On Large Table

Sep 30, 2012

I am trying to create a new index on a large table of around 100GB, but I am getting the following error:

ORA-1652: unable to extend temp segment by 128 in tablespace TEMP.

The temp tablespace size is 20GB.

Does it mean that the whole index will be created in the temp tablespace first?


PL/SQL :: Deleting Large Number Of Rows From Table

Apr 30, 2013

Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, D are dependent on table A (with foreign key constraints). When I delete records from all the tables, tables B, C, D each take at most 30-40 seconds, while table A takes 30-40 minutes. All tables have indexes.

Method I have used:

1. Created Temp table

2. Then deleted all records from B, C, D, E, F matching the records in the temp table, in batches of 500:
delete from B where exists (select 1 from temp where b.col1=temp.col1);

3. Why is it taking so much time to delete records from table A?

Is it the case that, while deleting data from such a master table, Oracle refers to all the dependent tables even when no dependent data is present?
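Deletes from a parent table do force a check of each child table for referencing rows, and if the foreign key columns on the children are not indexed that check can turn into a full scan per parent row. A hedged sketch to spot foreign key columns with no index starting on that column (simplified - it compares leading columns only):

-- Foreign key columns with no index whose leading column matches
SELECT c.table_name, cc.column_name, c.constraint_name
FROM   user_constraints  c
JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE  c.constraint_type = 'R'
AND    NOT EXISTS (SELECT 1
                   FROM   user_ind_columns ic
                   WHERE  ic.table_name      = cc.table_name
                   AND    ic.column_name     = cc.column_name
                   AND    ic.column_position = cc.position);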


Performance Tuning :: Managing Large Table?

Aug 26, 2011

I am working with an online application with the database in Oracle 10g. We have a table with 10 million rows, and this table will keep growing in the future. Moreover, we cannot archive some of these rows, as they are required for referencing.

We have all the necessary indexes on the table, but querying it takes a lot of time, especially when it is joined with other tables. What are some methods with which I can manage this table better so that queries joining it execute faster?

SELECT
TAB1.C6,
TAB1.C8,
TAB1.C10,
TAB3.C4,

[code]....


Server Utilities :: Export Dump Of Large Table

Apr 9, 2010

We have two databases running on 10.2.0.4 and 9.2.0.8. Both have the same unpartitioned table of size 80GB. I am exporting the table on 10g using parallel=8 and a dumpfile with the %U option. That took around 4 hours.

On 9.2.0.8, I am exporting using the parameters below, and it takes around 5 hours.

buffer=2000000
recordlength=64000

What options can I try to speed up the export in both versions?


SQL & PL/SQL :: Update Statement - Calculating Few Values From Large Table

Sep 2, 2011

I have a large table and want to calculate just a few values. Therefore I don't want to create a new table; I want to update the existing one. Here is an example:

I want to calculate VALUE_L1 for ID = 4 only (-> two values).

create table zTEST
( PRODUCT number,
ID number,
VALUE number,
VALUE_L1 number );

[Code]..

I tried this, but obviously window functions are not allowed in an UPDATE statement.

update zTEST
set VALUE_L1 = lag(VALUE) over (partition by PRODUCT order by ID)
where ID = 4

How can I do this?
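One common workaround is to compute the LAG in an inline view and join it back to the table by ROWID, for example with MERGE (the WHERE clause on the update branch needs 10g or later). A sketch against the zTEST definition above:

-- Window function computed in the USING clause, applied row by row via ROWID
MERGE INTO zTEST t
USING (SELECT ROWID AS rid,
              ID,
              LAG(VALUE) OVER (PARTITION BY PRODUCT ORDER BY ID) AS prev_value
       FROM   zTEST) s
ON (t.ROWID = s.rid)
WHEN MATCHED THEN
  UPDATE SET t.VALUE_L1 = s.prev_value
  WHERE s.ID = 4;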


How To Subdivide 1 Large TABLE Based On The Output Of A VIEW?

Aug 15, 2012

I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers which is the result set of a view.

I am grappling with the following strategy in PL/SQL:

1 -- set up cursor, execute the view (so I have the list of identifiers)

2 -- create a second cursor (or loop?) which accepts each of the identifiers in turn, executes a query (EXECUTE IMMEDIATE?) on the larger table, and INSERTs (or appends?) each result set into the GTT

3 -- Then the GTT contains just the required subset of the larger table for further processing and eventual import into iReport for reporting.

The GTT is defined and ready to go; the larger table contains approx 40,000 rows and I need to extract a dozen or so subsets which add up to approx 1,000 rows.
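If the identifiers come from an ordinary view, dynamic SQL may not be needed at all - a single INSERT ... SELECT with an IN subquery (or a join) usually covers it. A sketch with placeholder names:

-- Fill the GTT with just the rows whose identifier appears in the view
INSERT INTO my_gtt
SELECT t.*
FROM   big_table t
WHERE  t.identifier IN (SELECT v.identifier FROM my_id_view v);

COMMIT;  -- with ON COMMIT PRESERVE ROWS the GTT keeps its rows after the commit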


Use Range-hash Partitioning Of A Large Dimension Table

Apr 12, 2013

At the moment we use range-hash partitioning of a large dimension table (dimensional model warehouse), with 2 levels - range partitioned on columns only available at the bottom level of the hierarchy: date and issue_id.

The result is a partition with a null value - I assume we would also get a null partition in the large fact table if it were partitioned by reference to the large dimension. The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.

It was suggested that we would get automatic partition-wise joins if we used reference partitioning. I would have thought we would get that anyway with range-hash partitioning on both tables.


RAC & Failsafe :: Inserting Large Data Locks The Destination Table In RAC

Oct 18, 2010

Scenario:

Our application uses two instances, one for the live active data and the other for the reports data. We have a process which moves the data from the live instance to the reports instance every night. In a single-DB environment the process works without any issues. However, when we move to the RAC environment, the inserts into the large table in the reports DB get locked and we are unable to insert data into the reports DB.

What we are performing is:

Insert into my_table_rpt select * from may_table_live@db_link_to_livedb;

Issues:

my_table_rpt gets locked

We have found a workaround: disable table locking on the destination, and re-enable it after the insert.

ALTER TABLE my_table_rpt DISABLE TABLE LOCK;

Insert the data to the reports database table

Then

ALTER TABLE my_table_rpt ENABLE TABLE LOCK

Question:

Why does the large destination table (my_table_rpt) get locked in the RAC environment?


Performance Tuning :: Split Large Table To Small Pieces?

Mar 28, 2011

I have several large tables in the live system. Those tables store historical information.

current situation:

Currently the table has 129 million rows.

About 4.5 million records are added to this table every month.

The table data size is 17GB and the index size is 28GB.

I have only 30 GB available free space on disk!

How can I split this table into small pieces (partition the table by month)?

What is the best approach?

I would like to do partitioning on this table month by month.
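A hedged outline of one common approach: create an empty range-partitioned copy, move the data across month by month (or use DBMS_REDEFINITION for an online change), then swap the table names; with only 30GB free, moving and then freeing one month at a time keeps the footprint down. A sketch of the target DDL with placeholder names, columns and dates:

-- Target structure: one partition per month
CREATE TABLE hist_data_part
(
  txn_date  DATE          NOT NULL,
  txn_id    NUMBER        NOT NULL,
  payload   VARCHAR2(200)
)
PARTITION BY RANGE (txn_date)
(
  PARTITION p_2011_01 VALUES LESS THAN (DATE '2011-02-01'),
  PARTITION p_2011_02 VALUES LESS THAN (DATE '2011-03-01'),
  PARTITION p_2011_03 VALUES LESS THAN (DATE '2011-04-01')
  -- ... one partition per month, plus a MAXVALUE partition if needed
);

-- Old months can later be aged out cheaply:
-- ALTER TABLE hist_data_part DROP PARTITION p_2011_01;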


Data Archive Script Is Taking Too Long To Delete A Large Table

Aug 8, 2013

We have data archive scripts; these scripts move data for a date range to a different table. So the script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. More info below.

CREATE TABLE "APP"."MON_TXNS"
(
  "ID_TXN" NUMBER(12,0) NOT NULL ENABLE,
  "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE,
  "ID_PAYER" NUMBER(12,0),
  "ID_PAYER_PI" NUMBER(12,0),
  "ID_PAYEE" NUMBER(12,0),
  "ID_PAYEE_PI" NUMBER(12,0),
  "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE,
  "STR_TEXT" VARCHAR2(60 CHAR),
  "DAT_MERCHANT_TIMESTAMP" DATE,
  "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE),
  "DAT_EXPIRATION" DATE,
  "DAT_CREATION" DATE,
  "STR_USER_CREATION" VARCHAR2(30 CHAR),
  "DAT_LAST_UPDATE"

[Code]...

Data is first moved to the table schema3.OTW, and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:

SQL> explain plan for
  2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------
Plan hash value: 2798378986

-------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


Application Express :: How To Show Data From A Table Having Large Number Of Columns

Oct 8, 2013

I have a report with a single row having a large number of columns. I have to use a scroll bar to see all the columns. Is it possible to design the report in the format below (half the columns on one side of the page, half on the other side)?

Column1  Data    Column11  Data
Column2  Data    Column12  Data
Column3  Data    Column13  Data
Column4  Data    Column14  Data
Column5  Data    Column15  Data
Column6  Data    Column16  Data
Column7  Data    Column17  Data
Column8  Data    Column18  Data
Column9  Data    Column19  Data
Column10 Data    Column20  Data

I am using Apex 4.2.3 on Oracle 11g XE.


Performance Tuning :: How Expensive (speed) Is Unique Versus Primary Key In Large Table

Aug 15, 2011

I have two design alternatives and need to understand how expensive (in terms of speed) one is compared to the other for a medium-size table (100K-200K records):

create table xyz
(
f1 number not null,
f2 varchar2(20) not null,
f3 number not null,
f4 varchar2(50),

[code]....

The idea is to optimize the design by using a PK instead of the 3 keys, and there is a debate over whether searching a unique index field (2nd scenario) is the same speed as searching a PK field (1st scenario).


Server Administration :: Move Partitioned Table Between Table Spaces Of Different Block Size?

Apr 4, 2011

I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.

So far the only option I have is to export and then import back the data.

Is there any way to move a partitioned table between tablespaces of different block sizes?


Row Size Of Two Different Rows In Table?

Sep 13, 2011

One of my users is asking me for the size (bytes) of row 39 and the size (bytes) of row 49 of a table ZETR.

I am not aware of how to collect the size (bytes) of a particular row.
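One hedged way to get the stored size of an individual row is to sum VSIZE() over its columns. The column names below are placeholders, and note that a heap table has no inherent "row 39" or "row 49" - the row has to be identified by a key or ROWID:

-- Approximate stored bytes for one row of ZETR (col1..col3 are placeholders)
SELECT NVL(VSIZE(col1), 0)
     + NVL(VSIZE(col2), 0)
     + NVL(VSIZE(col3), 0) AS row_bytes
FROM   zetr
WHERE  ROWID = :the_rowid;   -- or WHERE <primary key column> = <value>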







