How Oracle Determines Initial Extent And Max Extent Size
Aug 10, 2012
Suppose a tablespace's allocation_type is SYSTEM; how does Oracle then determine the initial extent and max extent size?
I've read the documentation that describes storage management. I create a tablespace as:
CREATE TABLESPACE MY_TABLESPACE_NAME
DATAFILE 'path/filename1.dbf' SIZE 3000M AUTOEXTEND ON NEXT 200M MAXSIZE 4000M
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
BLOCKSIZE 8k
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
Since the extent management is local, does that mean the storage clause of the objects (tables, indexes, etc.) placed in it isn't taken into consideration? I mean, in the case of placing a table in the tablespace above with storage parameters defined as follows:
CREATE TABLE MY_TABLE(
...
)
TABLESPACE MY_TABLESPACE_NAME
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 100M
NEXT 20M
MINEXTENTS 1
MAXEXTENTS 50
BUFFER_POOL DEFAULT
)
1. What will the initial extent size be? 1M or 100M?
2. What will the next extent size be? 1M or 20M?
3. Will the MAXEXTENTS parameter be taken into consideration?
4. When I'm sure the tablespace is dedicated to holding only one object [MY_TABLE], what should the relation be between the initial datafile size [filename1.dbf] and the initial extent size? Should they be equal, or doesn't it matter?
5. Since SEGMENT SPACE MANAGEMENT is AUTO, the PCTFREE parameter doesn't make sense, right?
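For what it's worth, a quick way to see how the storage clause was actually honoured is to create the table and then query the dictionary; with EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M one would expect every extent to come back as 1M, with INITIAL simply rounded up to a number of uniform extents. A minimal sketch, assuming the object names used above and that the table lives in your own schema:
-- how the requested INITIAL 100M was translated into real extents
select extent_id, bytes/1024/1024 as extent_mb
from dba_extents
where owner = user
and segment_name = 'MY_TABLE'
order by extent_id;
-- what the dictionary recorded for the segment
select initial_extent, next_extent, extents
from dba_segments
where owner = user
and segment_name = 'MY_TABLE';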
I need to create table A, which is going to have more than 8 lakh (800,000) records. Daily, this table A will be truncated and all 800,000 records reinserted. Also, the number of records will increase by about 50K per month. What should the storage clause parameters be, mainly the initial and next extent?
1.2 million chained rows, 1.7 million blocks, etc. The initial extent for this table is 64K and the next is 1MB. I would try to calculate this out better for efficiency and performance. This will not be efficient as it stands. How do I calculate the size?
ORA-01658: unable to create INITIAL extent for segment in tablespace
I am getting the above error when a batch job is running. It's inserting data into tables. I checked the free space; I don't know why it's not using the available space.
Database Version: 11.2.0.2
select sum(bytes)/1024/1024/1024, sum(maxbytes)/1024/1024/1024 from dba_data_files where tablespace_name='test_data';
sum(bytes) sum(maxbytes)
---------------- -------------
60 0
select sum(bytes/1024/1024/1024) from dba_free_space where tablespace_name='test_data';
sum(bytes/1024/1024/1024)
----------------------
50
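ORA-01658 usually means there is no single contiguous free chunk large enough for the segment's INITIAL extent, even when the total free space looks fine, so comparing the largest free chunk against the requested initial extent can be revealing. A diagnostic sketch only (dictionary values are normally stored in upper case, so 'test_data' may also need to be 'TEST_DATA'; the table name below is a placeholder):
-- total free space vs. the largest single contiguous chunk
select sum(bytes)/1024/1024 as free_mb,
       max(bytes)/1024/1024 as largest_chunk_mb
from dba_free_space
where tablespace_name = 'TEST_DATA';
-- the INITIAL extent the failing object is asking for (placeholder table name)
select initial_extent/1024/1024 as initial_mb
from dba_tables
where table_name = 'MY_BATCH_TABLE';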
I executed the statements below to move a bulk of tables from the source tablespace SHOP3_DATA to the destination tablespace LINUX_CATA. Before executing them, the source tablespace was showing an 85% full message. While executing the statements, I get the error for the source tablespace, which means space is being consumed in the source TS. Currently no one is using the tablespace SHOP3_DATA. If I add a datafile to SHOP3_DATA the problem may be solved, but why is space being consumed in the source? Now the tablespace SHOP3_DATA is 95% full.
SQL> alter table LINUX_CATA.TST_FOLDERS move tablespace LINUX_CATA;
Table altered.
SQL> alter table LINUX_CATA.TST_SEARCH_TESTS move tablespace LINUX_CATA;
alter table LINUX_CATA.TST_SEARCH_TESTS move tablespace LINUX_CATA
*
ERROR at line 1:
ORA-01658: unable to create INITIAL extent for segment in tablespace SHOP3_DATA
[code]....
While importing data I got this error in my log file, and I cannot import my data successfully.
In my log file I found errors like this:
ORA-01658: unable to create INITIAL extent for segment in tablespace USERS
IMP-00017: following statement failed with ORACLE error 1658:
IMP-00003: ORACLE error 1658 encountered
ORA-01658: unable to create INITIAL extent for segment in tablespace USERS
IMP-00017: following statement failed with ORACLE error 1658:
I can import my data using the imp utility with this syntax:
C:\Users\Administrator>imp tiger/****@tcs file=E:\DUMP\s.dmp log=E:\DUMP\s.log fromuser=tiger121 touser=tiger statistics=none
USERS is my user tiger's default tablespace; it has autoextend on and is locally managed, and I have enough space on my drive as well.
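One thing that might be worth checking here, as a sketch only: whether the USERS datafiles can still autoextend (BYTES vs MAXBYTES) and how big the largest contiguous free chunk is, since import creates each segment with its full INITIAL extent:
select file_name,
       bytes/1024/1024    as size_mb,
       maxbytes/1024/1024 as max_mb,
       autoextensible
from dba_data_files
where tablespace_name = 'USERS';
select max(bytes)/1024/1024 as largest_free_chunk_mb
from dba_free_space
where tablespace_name = 'USERS';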
I have a partitioned table (one partition per month). About 1GB of data is added every month. What extent size should I set? Will 1GB be OK?
What if the data turns out to be greater than 1GB? Adding a new 1GB extent probably takes a lot of time, and clients may see delays while they're inserting during that time (it's an OLTP system).
When is a new extent allocated? Exactly when space runs out in the existing extent, or before? Partitions are dropped after one year, so free space isn't a problem.
How do I estimate the next extent size for a very large table? What should I take into account? Is there a formula for that?
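There is no universal formula, but a common starting point is to measure how big each monthly partition actually ends up and how many extents it takes, then size the extents from that observed growth. A sketch only; the owner and table name are placeholders:
-- observed size and extent count per partition, as a basis for extent sizing
select partition_name,
       sum(bytes)/1024/1024 as partition_mb,
       count(*)             as extent_count
from dba_extents
where owner = 'APP_OWNER'          -- placeholder owner
and segment_name = 'MY_PART_TAB'   -- placeholder table name
group by partition_name
order by partition_name;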
I am creating the table below, but why is Oracle not taking the next extent value that I specified in the storage clause?
create table acs (
timestamp varchar2(19),
source varchar2(19),
message varchar2(35),
sev varchar2(10),
[Code]....
How do I get the rowids of the first and last row in every extent using dba_extents?
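One approach that is sometimes sketched for this is to build boundary rowids from dba_extents with DBMS_ROWID: the first block of each extent gives a lower-bound rowid and the last block an upper bound. These are block boundaries rather than guaranteed actual rows; owner and table name below are placeholders:
select e.extent_id,
       dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id, 0)                    as first_rowid,
       dbms_rowid.rowid_create(1, o.data_object_id, e.relative_fno,
                               e.block_id + e.blocks - 1, 32767) as last_rowid
from dba_extents e
join dba_objects o
  on o.owner = e.owner
 and o.object_name = e.segment_name
 and o.object_type = 'TABLE'
where e.owner = 'SCOTT'            -- placeholder owner
  and e.segment_name = 'EMP'       -- placeholder table
order by e.extent_id;
To find the real first and last rows you would still need to probe those boundary blocks, since the edge blocks of an extent may be empty.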
I did some Google searches about large numbers of extents and ASSM. I see bits and pieces on the web. This is something I need to look at while testing an application. I'm not looking to go into 'why' I would use smaller extents; I just want to make sure I have what I need to look for during testing. Issues with massive numbers of extents:
1. DBA_EXTENTS query is really slow.
2. issues truncating tables (due to having to read lots of extents)
3. issues splitting maxvalue partitions and with dropping partitions.
4. if I stay away from ASSM, would this reduce these issues? Are there any other performance issues or other issues I need to know about to check when I do tests?
Any issues with query or insert wait time? The tables that would get smaller extents would have thousands of partitions/sub-partitions. Most of these sub-partitions will be rather small. I just want to test a variety of different cases; the 'why' will come out during testing.
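For the testing itself, a simple way to watch how many extents each segment accumulates (and so which of the symptoms listed above to expect) is an aggregation over dba_extents; a monitoring sketch with a placeholder schema name:
select segment_name,
       segment_type,
       count(*)             as extent_count,
       sum(bytes)/1024/1024 as size_mb
from dba_extents
where owner = 'TEST_OWNER'         -- placeholder schema under test
group by segment_name, segment_type
order by extent_count desc;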
Whenever the "unable to extend extent" error occurs, we either add datafiles or increase the size (autoextend on). But in an interview, I was asked to handle the error without increasing the size or adding new datafiles.
How can I handle this error without increasing the size?
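Not a definitive answer, but one family of responses that fits "without adding space" is to give space back inside the existing files, for example by shrinking segments that are mostly empty below their high water mark, or by coalescing adjacent free extents in a dictionary-managed tablespace. A hedged sketch; the table and tablespace names are placeholders, and SHRINK SPACE requires an ASSM tablespace:
-- reclaim space below the high water mark of a bloated table
alter table big_history_tab enable row movement;
alter table big_history_tab shrink space cascade;
-- merge adjacent free extents (mainly relevant for dictionary-managed tablespaces)
alter tablespace app_data coalesce;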
There was a tablespace of size 100MB. That tablespace is full and hence throwing an error like "unable to extend". But I verified that there is around 20MB of free space present in the tablespace. What would be the reason behind this error?
I am receiving this error in production databases... There are 2 probable extent failures for tablespace
I want to create the SYSTEM tablespace with dictionary extent management, using this syntax:
create database
logfile
group 1 ('/u01/app/oradata/anand/redo1a.log') size 100M,
group 2 ('/u01/app/oradata/anand/redo2a.log') size 100M,
group 3 ('/u01/app/oradata/anand/redo3a.log') size 100M
datafile '/u01/app/oradata/anand/system.dbf' size 400M extent management dictionary
sysaux datafile '/u01/app/oradata/anand/sysaux.dbf' size 300M
default temporary tablespace temp tempfile '/u01/app/oradata/anand/temp.dbf' size 50M
but it is giving an error:
ERROR at line 6:
ORA-25141: invalid EXTENT MANAGEMENT clause
How can I make the SYSTEM tablespace's extent management dictionary?
I am trying to restore to a backup instance on a backup server. When I try to recreate the tables I keep getting ORA-01659: unable to allocate MINEXTENTS. The tablespaces and datafiles on both servers show as the same size in OEM.
I have dropped all tables and OEM shows tablespaces are empty. Then I run a script to recreate all tables. Most of the tables don't get created because their TS is full. After the script to recreate all tables runs, the main tablespaces are full, more full than on the production machine. I have also tried ALTER TABLESPACE xxx COALESCE; on each tablespace right after dropping all tables and before recreating them to reclaim free space. Why is it full? I've only dropped and created the tables, there shouldn't be any data in them yet.
ORA-01659: unable to allocate MINEXTENTS beyond 2 in tablespace PLUS_T... The backup instance was already there; all I did was drop the tables. Here's what I ran on prod to build a script to recreate the tables on the backup server (got it off Burleson somewhere):
SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name) ||'; '
FROM USER_TABLES u;
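To see what is still occupying those tablespaces after the drops (on 10g and later, dropped tables sit in the recycle bin and keep their space until purged), a space-accounting query like the sketch below can help; the tablespace name is a placeholder for the real one:
select segment_type,
       count(*)             as segments,
       sum(bytes)/1024/1024 as used_mb
from dba_segments
where tablespace_name = 'MY_TABLESPACE'   -- placeholder
group by segment_type
order by used_mb desc;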
I have a table with two clob columns and need to manually allocate space to the table and to its lob segment. Is the following command correct?
--to allocate extent to the table
alter table emp allocate extent;
--the table has columns named col1 and col2 which are clob
--to allocate extents to the columns
alter table emp modify lob (col1) (allocate extent (size 10m))
/
alter table emp modify lob (col2) (allocate extent (size 10m))
/
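One way to sanity-check what those statements did is to look up the LOB segment names and their allocated space afterwards; a verification sketch only, using the table name from the post above:
select l.column_name,
       l.segment_name,
       s.bytes/1024/1024 as allocated_mb,
       s.extents
from user_lobs l
join user_segments s
  on s.segment_name = l.segment_name
where l.table_name = 'EMP';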
How do I find the tables starting with the smallest size, and vice versa, at the schema level and at the database level?
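A sketch for this using the dictionary: at the schema level order user_segments by bytes, and for the whole database switch to dba_segments with the owner column; reverse the ORDER BY for the "vice versa" case:
-- schema level: tables from smallest to largest
select segment_name as table_name,
       bytes/1024/1024 as size_mb
from user_segments
where segment_type = 'TABLE'
order by bytes;
-- database level (requires access to the DBA views)
select owner, segment_name, bytes/1024/1024 as size_mb
from dba_segments
where segment_type = 'TABLE'
order by bytes;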
The DB is 11.2.0.3 on a Linux machine. I would like to know the "fetch size" of an application, but I was not able to find any related metrics in v$statname.
The application configuration is invisible to me. Do I need to do some calculations based on statistic metrics from v$statname?
If so, what metrics should be considered to estimate the "fetch size"?
The following is from the manual, but the application configuration is invisible to me.
[URL]
Setting the Fetch Size
The following methods are available in all Statement, PreparedStatement, CallableStatement, and ResultSet objects for setting and getting the fetch size:
•void setFetchSize(int rows) throws SQLException
•int getFetchSize() throws SQLException
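Since the application configuration is not visible, one indirect estimate that is sometimes used is rows per fetch call from v$sql (ROWS_PROCESSED divided by FETCHES for SELECT statements); treat the result as an approximation of the JDBC fetch size rather than an exact reading:
select sql_id,
       executions,
       fetches,
       rows_processed,
       round(rows_processed / nullif(fetches, 0), 1) as approx_rows_per_fetch
from v$sql
where command_type = 3     -- SELECT statements
  and fetches > 0
order by executions desc;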
Here is my problem: I create a tablespace named ts_photos. I then create a table on this tablespace and insert some BLOB records. The space grows up to 2GB.
I perform a truncate statement. Then the space grows up to 4GB. I re-insert the same records and the size stays at 4GB. The database is in archivelog mode. How can I return to the initial size (2GB of effective data)? Should I drop the tablespace and recreate it?
Which is the correct method to calculate the actual data size in a table? Because when I search on Google, I see the line below:
"Oracle thumb rule says (actual space required for a table + 30 % space) will calculate the original space requirement for a table."
Method 1:
actual space = num_rows*avg_row_len
Method 2:
actual space = (Num of rows in a table) * (Avg_row_len) + ((Num of rows in a table) * (Avg_row_len)* 0.3)
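As a cross-check on either method, the same inputs can be read straight from the dictionary and compared with the allocated segment size (statistics need to be reasonably fresh for num_rows and avg_row_len to mean anything). A sketch with a placeholder table name:
select t.num_rows,
       t.avg_row_len,
       t.num_rows * t.avg_row_len / 1024 / 1024       as est_mb_method1,
       t.num_rows * t.avg_row_len * 1.3 / 1024 / 1024 as est_mb_method2,
       s.bytes / 1024 / 1024                          as allocated_mb
from user_tables t
join user_segments s
  on s.segment_name = t.table_name
where t.table_name = 'MY_TABLE';   -- placeholder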
Is it possible to create a .sql script that, when executed, will determine which OS (i.e. Windows or Linux) Oracle is being hosted on? At the moment, all our scripts are written for Windows and I believe that for Linux the slashes must point the other way in order for the script to run.
Or, would the easiest thing be to create two copies of the script - one for Windows and one for Linux? :)
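If it helps, the database can report its host platform itself, so a single script could branch on that rather than maintaining two copies; PLATFORM_NAME exists in V$DATABASE from 10g onward, and DBMS_UTILITY.PORT_STRING works on older releases as well. A small sketch:
-- e.g. 'Linux x86 64-bit' or 'Microsoft Windows x86 64-bit'
select platform_name from v$database;
-- alternative that also exists on older releases
select dbms_utility.port_string from dual;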
Before I installed Oracle Database 11.2.0.1 on a Linux server (2.6.39-400.209.1.el6uek.x86_64), I created the following groups: oinstall, dba, oper, and asmadmin.
groupadd oinstall   # required, from training
groupadd dba        # required, from training
groupadd oper       # and groups asmdba, asmoper, from training
groupadd asmadmin   # optional, from training
I made a mistake when I created the Oracle user account. I created it with dba as the initial group ("useradd -g dba -G oinstall,oper,asmadmin oracle") instead of "useradd -g oinstall -G dba,oper,asmadmin oracle". After all that I installed the Oracle database, and now I have concerns and questions. Should I use usermod to change the Oracle user's initial group to oinstall, or just leave it alone? If I now do "usermod -g oinstall -G dba,oper,asmadmin oracle", will it break anything? Any impact to the database?
I am unable to start the DBCONSOLE immediately after installing Oracle 11g R2 onto my Windows 7 box. Here is what I ran from the command line:
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\sandifer>emctl start dbconsole
Environment variable ORACLE_UNQNAME not defined. Please set ORACLE_UNQNAME to database unique name.
C:\Users\sandifer>SET ORACLE_HOSTNAME=localhost
[code].....
I checked the above listed file path, and sure enough, OC4J_DBConsole_localhost_orcl is missing.
We have 96GB of memory on the UNIX server, and 85% of its usage shows Oracle processes. I want to determine which Oracle processes are taking most of the memory.
SGA is around 36G
SGA_TARGET is 40G
PGA is around 4G
The total of around 40-45GB of usage is understandable, but what other Oracle processes are chewing up the remaining 30-40GB on the server is not known.
load averages: 7.35, 6.46, 6.15; up 248+11:33:21 12:25:03
2202 processes: 2196 sleeping, 1 zombie, 5 on cpu
CPU states: 83.8% idle, 10.5% user, 5.8% kernel, 0.0% iowait, 0.0% swap
Memory: 96G phys mem, 15G free mem, 128G total swap, 128G free swap
PID USERNAME LWP PRI NICE SIZE RES STATE TIME CPU COMMAND
21720 oracle 258 0 0 40G 40G cpu/48 215:28 2.04% oracle
10709 oracle 1 0 2 1816K 1448K cpu/9 0:02 0.90% res_conf_email_
[code]......
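To attribute memory to individual Oracle processes, the PGA columns in v$process are usually the first stop; note that top's RES figure for each server process largely reflects shared SGA pages, so summing it badly overstates real per-process usage. A monitoring sketch only:
select spid,
       program,
       round(pga_used_mem /1024/1024) as pga_used_mb,
       round(pga_alloc_mem/1024/1024) as pga_alloc_mb,
       round(pga_max_mem  /1024/1024) as pga_max_mb
from v$process
order by pga_alloc_mem desc;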
We are planning to export the table data to a pipe-delimited flat file. How do I estimate the size of the flat file based on the table size or average row length?
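A rough rule of thumb is rows times (average row length plus one delimiter per column plus a line terminator); avg_row_len is an internal byte length, so treat the result only as an order-of-magnitude estimate. A sketch with a placeholder table name:
select t.num_rows,
       t.avg_row_len,
       c.col_count,
       round(t.num_rows * (t.avg_row_len + c.col_count + 1) / 1024 / 1024)
         as est_flat_file_mb
from user_tables t
join (select table_name, count(*) as col_count
      from user_tab_columns
      group by table_name) c
  on c.table_name = t.table_name
where t.table_name = 'MY_EXPORT_TAB';   -- placeholder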
I am using an Oracle 8.1.5 database and my temp01.dbf file has grown to 19.8GB. Now I want to reduce its size.
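On a release that old, the approach usually described is to create a new, smaller temporary tablespace, point the users at it, and drop the oversized one; the sketch below uses placeholder names and sizes and has not been verified against 8.1.5 specifically:
create temporary tablespace temp2
  tempfile '/u01/oradata/orcl/temp02.dbf' size 500M;   -- placeholder path and size
alter user scott temporary tablespace temp2;           -- repeat for each user
drop tablespace temp including contents;               -- then remove temp01.dbf at the OS level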
What settings can I make for Initial and Next extents? I am working at a client site where they have an in-house E-Business Suite database that is running very fast. In most cases the initial extent on the big tables is set to something like 64MB, with the next extent set to figures varying between 25MB and 50MB. What this means is that we don't have many extents.
They have recently outsourced their database to Oracle On Demand and are getting poor performance. Over at OOD the initial extents remain as set in the local database - taken from a clone. However, all of the next extents are set small, to numbers like 128KB. Some of the big Oracle EBS tables have more than 128KB in a single record, and we are seeing many thousands of extents already.
Could the large number of extents and the small size of the next extent be the cause of a substantial decrease in performance? I'm looking for some justification to go back to OOD with a view to asking them to reinstate the settings as they were in the client database before it was migrated.
I have a table that holds the definition of schedules, a schedule defines when a document should be submitted to a specific party. The schedule definition contains a start date, and end date, a recurrence type (is this submitted one time, or on a recurring schedule) and the frequency at which the document should be submitted. The second table provides a history of the submissions, this stores when it was due and when it was received. At the beginning of the month we pre-populate the submittal table with a list of records that will be due for the month. For instance, on September 1 we look through all of the schedules and determine which ones would have a record due at some point in 9/12 and then create a record in the submittal table.
I am having issues getting the calculated list of submittal records to work properly.
The DDL and DML will be in a follow-up post
Here is the query that I am currently using and fails to work properly.
with schedules as (
-- generate a list of valid permit schedules
select s.schedule_id,s.submittal_frequency_months,s.recurrence_type,
s.first_due_date,s.requires_approval,
round(round(months_between(to_date('09/01/2012','mm/dd/yyyy'),s.first_due_date))/decode(s.submittal_frequency_months,0,1,s.submittal_frequency_months)) recurrence_number
[Code]....
-- create a list of all potential due dates for these schedules
select submittal_id_seq.nextval,schedule_Id,8,requires_approval,
case
when recurrence_type='One Time'
then first_due_date
when recurrence_type='Recurring' and trunc(first_due_date)=to_date('09/01/2012','mm/dd/yyyy')
[Code]....
-- exclude those that already have a submittal record ;
Basically I found all possible records in the schedule table that could have a record due in September, then generate a result for all possible instances and then look at only those whose calculated due date is 09/01/2012. I've determined that the root problem I have right now is this line:
(select level iteration from dual connect by level <= (select max(schedules.recurrence_number) from schedules)) d
Schedule ID 469907 has a start date of 05/15/1992 and a frequency of every 2 months. I calculate what I call the recurrence number, which is the number of times the schedule has occurred from its start date to now. I use that to do an add_months calculation from the start date and then eventually compare these calculated dates with my target month (09/12). In this one record's case the calculated recurrence number is 122. So when I generate the connect by level it does 122 rows for every schedule, and I end up with duplicate records in the submittal table for many of the schedules. This current query could probably work if I could figure out a way to make the level be schedule-ID specific, but I've failed at that thus far.
Aside from the fact that this is returning the wrong results, I am thinking there must be a better, more efficient method to determine which records are due for a given month. I was thinking there is probably some cool way to use the model clause here, but I haven't got a grasp on that one yet.
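One pattern that is sometimes used to make the row generation schedule-specific, instead of one global CONNECT BY LEVEL against dual, is to connect by level over the schedules row source itself with the PRIOR sys_guid() trick, so each schedule only expands to its own recurrence count. A standalone sketch with made-up sample values, not a drop-in replacement for the full query:
with schedules as (
  select 469907 as schedule_id, date '1992-05-15' as first_due_date,
         2 as submittal_frequency_months, 122 as recurrence_number from dual
  union all
  select 500100, date '2012-06-01', 1, 3 from dual     -- made-up second schedule
)
select s.schedule_id,
       level as iteration,
       add_months(s.first_due_date,
                  (level - 1) * s.submittal_frequency_months) as calc_due_date
from schedules s
connect by level <= s.recurrence_number + 1            -- each schedule expands only to its own count
       and prior s.schedule_id = s.schedule_id
       and prior sys_guid() is not null
order by s.schedule_id, iteration;
Filtering that result on trunc(calc_due_date, 'MM') equal to the target month would then give at most one row per schedule for September, rather than 122 copies.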
If you run the following insert statement you'll see that it inserts over 2400 records:
insert into submittal (submittal_id,schedule_id,submittal_status_type_id,requires_approval,due_date,created_by,created_date,modified_by,modified_Date)
with schedules as (
-- generate a list of valid permit schedules
select s.schedule_id,s.submittal_frequency_months,s.recurrence_type,
s.first_due_date,s.requires_approval,
[Code]....
-- only submittals whose last due date has not passed, null last date included
and trunc(s.first_due_date,'mm') <= to_date('09/01/2012','mm/dd/yyyy') -- only valid start dates
--and round(round(months_between(to_date('09/01/2012','mm/dd/yyyy'),s.first_due_date))/decode(s.submittal_frequency_months,0,1,s.submittal_frequency_months)) >0
)
-- create a list of all potential due dates for these schedules
select submittal_id_seq.nextval,schedule_Id,8,requires_approval,
case
when recurrence_type='One Time'
then first_due_date
when recurrence_type='Recurring' and trunc(first_due_date)=to_date('09/01/2012','mm/dd/yyyy')
[Code]...
You can see the problem afterwards:
select schedule_id,count(0)
from submittal
where trunc(due_date,'mm')=to_date('09/01/2012','mm/dd/yyyy')
and submittal_status_type_id=8
having count(0) >1
group by schedule_id;
Tony