I have a partitioned table (one partition per month). About 1GB of data is added every month. What extent size should I set? Will 1GB be OK?
What if the data grows beyond 1GB? Allocating a new 1GB extent probably takes a long time, and clients may see delays on their inserts while that happens (it's an OLTP system).
When is a new extent allocated? Exactly at the moment the existing extent runs out of space, or earlier? Partitions are dropped after one year, so free space isn't a problem.
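For context, something like this is what I had in mind; the tablespace, table, and partition names are made up, and it assumes the table is range-partitioned by date with no MAXVALUE partition:

-- Uniform 64M extents: a 1GB monthly partition grows in roughly 16 small steps
-- instead of one huge allocation (sizes here are only an example).
CREATE TABLESPACE sales_2013
  DATAFILE '/u01/oradata/db/sales_2013.dbf' SIZE 1100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M
  SEGMENT SPACE MANAGEMENT AUTO;

-- Pre-creating next month's partition before its data arrives, so OLTP inserts
-- never wait on the partition being created.
ALTER TABLE sales ADD PARTITION p_2013_02
  VALUES LESS THAN (TO_DATE('2013-03-01', 'YYYY-MM-DD'))
  TABLESPACE sales_2013;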
I need to create a table A that will hold more than 8 lakh (800,000) records. Every day this table A will be truncated and all 8 lakh records reinserted. The record count will also grow by about 50K per month. What should the storage clause parameters be, mainly the INITIAL and NEXT extent?
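This is the kind of thing I was considering; the column definitions, sizes, and tablespace name below are just placeholders, not calculated values:

-- In a locally managed tablespace the exact INITIAL/NEXT matter far less;
-- sizing INITIAL near the full reload volume avoids repeated extent allocation.
CREATE TABLE a (
  id      NUMBER,
  payload VARCHAR2(200)
)
TABLESPACE users
STORAGE (INITIAL 200M NEXT 20M);

-- Keeping the already-allocated extents across the daily reload:
TRUNCATE TABLE a REUSE STORAGE;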
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export the data and then import it back.
Is there any way to move a partitioned table between tablespaces of different block sizes?
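The only SQL-only workaround I could think of is rebuilding the table in the target tablespace and swapping it in; this is just a sketch with made-up table, column, and tablespace names, and it ignores indexes, constraints, and grants:

CREATE TABLE part_tab_new
  PARTITION BY RANGE (load_date) (
    PARTITION p1 VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')) TABLESPACE new_ts,
    PARTITION p2 VALUES LESS THAN (MAXVALUE) TABLESPACE new_ts
  )
AS SELECT * FROM part_tab;

-- after recreating indexes, constraints, and grants on the new table:
RENAME part_tab TO part_tab_old;
RENAME part_tab_new TO part_tab;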
I've read the documentation that describes storage management. I create a tablespace as:
CREATE TABLESPACE MY_TABLESPACE_NAME
  DATAFILE 'path/filename1.dbf' SIZE 3000M AUTOEXTEND ON NEXT 200M MAXSIZE 4000M
  LOGGING ONLINE PERMANENT
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  BLOCKSIZE 8K
  SEGMENT SPACE MANAGEMENT AUTO
  FLASHBACK ON;
As the extent management is local, does that mean that any storage clause of the objects (tables, indexes, etc.) placed in it isn't taken into consideration? I mean in the case of placing a table in the mentioned tablespace with storage parameters defined as follows:
1. What will be the initial extent size, 1M or 100M?
2. What will be the next extent size, 1M or 20M?
3. Will the MAXEXTENTS parameter be taken into consideration?
4. When I'm sure the tablespace is dedicated to holding only one object [MY_TABLE], what should the relation be between the initial datafile size [filename1.dbf] and the initial extent size? Should they be equal, or doesn't it matter?
5. As the SEGMENT SPACE MANAGEMENT is AUTO, the PCTFREE parameter doesn't make sense, right?
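To see what actually gets allocated, I was planning to compare the requested storage clause against the extents Oracle really creates, something like this (the tablespace name is the one from my CREATE TABLESPACE above):

-- With UNIFORM SIZE 1M, every extent should report 1 MB here, whatever
-- INITIAL/NEXT were requested in the table's storage clause.
SELECT segment_name, extent_id, bytes/1024/1024 AS mb
FROM   dba_extents
WHERE  tablespace_name = 'MY_TABLESPACE_NAME'
ORDER  BY segment_name, extent_id;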
1.2 million chained rows, 1.7 million blocks, etc. The initial extent for this table is 64K and the next is 1MB. I would try to calculate this out better for efficiency and performance; it will not be efficient as it stands. How should I calculate the size?
I have a table with two CLOB columns and need to manually allocate space to the table and to its LOB segments. Is the following command correct?
-- to allocate an extent to the table
alter table emp allocate extent;

-- the table has two CLOB columns named col1 and col2
-- to allocate extents to their LOB segments
alter table emp modify lob (col1) (allocate extent (size 10m));
alter table emp modify lob (col2) (allocate extent (size 10m));
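To check whether the allocation actually happened, I was going to look at the LOB segments afterwards with something like this (assuming the EMP table above):

SELECT l.column_name, s.segment_name, s.bytes/1024/1024 AS mb
FROM   user_lobs l
JOIN   user_segments s ON s.segment_name = l.segment_name
WHERE  l.table_name = 'EMP';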
CREATE TABLE CARGA_P_RECARGA_NEW1 (
  TELE_NUM VARCHAR2(10) NOT NULL,
  FECHA    DATE NOT NULL,
[Code].....
Then I tried to insert some rows into that table; every insert statement looks like this:
INSERT INTO CARGA_P_RECARGA_NEW1 VALUES ('3134769595','20/01/2013 07:22:50','1107','CONFB_20121121_20121122175002 60000000000000000090.TXT',0,16,'8327--7991284',1);
Every insert I executed had month 01 because I expected to query results only from partition p_0113, but no matter which query I execute the result is always the same. I mean, if I execute this statement:
SELECT * FROM CARGA_P_RECARGA_NEW1 P_0113;
I get the same result as when I execute any other query like it.
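For reference, what I was trying to do was read only one partition; as far as I can tell from the syntax diagrams, a partition-extended query would have to be written like this (without the PARTITION keyword, P_0113 is apparently just treated as a table alias):

SELECT * FROM CARGA_P_RECARGA_NEW1 PARTITION (P_0113);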
We recently had to delete data from the table. This was a simple delete statement with a WHERE clause, without taking any partition/subpartition clauses into consideration. After committing the delete we have a count mismatch problem with two queries in particular:
select count(0) count_without_parallel FROM TRANSACTION_TABLE t;
-- this returns a count of 15774811

select /*+ parallel(t,default) */ count(0) count_with_parallel FROM TRANSACTION_TABLE t;
-- this returns a count of 15777617, which is the actual expected count
I also ran the following just to summarize:
select (select count_with_parallel
        from (select /*+ parallel(t,default) */ count(0) count_with_parallel
              from TRANSACTION_TABLE t))
     - (select count_without_parallel
        from (select count(0) count_without_parallel
              from TRANSACTION_TABLE t)) as false_difference
from dual;
The difference is 2806 rows, as expected. To re-affirm my counts I ran:
select /*+ parallel(t,default) */ 'count_on_t',count(*) from TRANSACTION_TABLE t group by 'count_on_t' order by 1;
-- this returns a count of 15777617
Removing the parallel hint reverts to the lesser count. I'm not sure what is wrong, but something seems to prevent the serial query from reading the whole table and/or all of its partitions and subpartitions.
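One thing I was going to try, just as a guess at a diagnostic, is forcing a serial full table scan and comparing the access paths, in case the serial count is coming from an index that is out of sync with the table:

-- serial count forced to read the table itself rather than an index
SELECT /*+ full(t) no_parallel(t) */ COUNT(*) FROM TRANSACTION_TABLE t;

-- and checking which access path the original serial count used
EXPLAIN PLAN FOR SELECT count(0) FROM TRANSACTION_TABLE t;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);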
I have partitioned the table on a field, but when I select by partition or by that field the explain plan shows a full table access. I am pasting the SQL and explain plan here. The table has two partitions on BOOKING_DT_WID: one less than 20100801 and the other less than 99991231.
SELECT * FROM WC_BOOKING_SALESREP_F WHERE BOOKING_DT_WID >= 20100801;
SELECT * FROM WC_BOOKING_SALESREP_F PARTITION (SALESREP_LESS1_99991231);

Here is the explain plan for the same:

SELECT STATEMENT ALL_ROWS  Cost: 1,501  Bytes: 293,923,641  Cardinality: 809,707
  4 PX COORDINATOR
[code]....
How do I know if the SQL is doing partition pruning?
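From what I've read, pruning should show up in the Pstart/Pstop columns of the plan, so I was going to check it with something like this (I haven't confirmed the output myself):

EXPLAIN PLAN FOR
  SELECT * FROM WC_BOOKING_SALESREP_F WHERE BOOKING_DT_WID >= 20100801;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', NULL, 'BASIC +PARTITION'));
-- Pstart/Pstop showing a specific partition number (or KEY) should mean pruning;
-- a range covering all partitions means every partition is scanned.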
PARTITION t1p1 VALUES LESS THAN (TO_DATE('2011-11-01', 'YYYY-MM-DD')),
PARTITION t1p2 VALUES LESS THAN (TO_DATE('2011-11-02', 'YYYY-MM-DD')),
....
PARTITION t1p4 VALUES LESS THAN (MAXVALUE)
Every year, partitions will be added for the next 12 months. A partition will be dropped from the table every month (I have to keep data from the last six months, so in July I could drop partition t1p1, in August t1p2, and so on). How many tablespaces should I create for this table, and how should I place the partitions in them, so that I keep data for the last six months while using the minimum disk space?
I was thinking about one tablespace for the whole table, because the space of each dropped partition will be reused. What do you think about that?
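In other words, the monthly job would just be the following (using the partition names from above); my understanding is that the extents freed by the drop are reused by the partitions added later, so the tablespace stays at roughly six months of data:

ALTER TABLE t1 DROP PARTITION t1p1;
-- (or DROP PARTITION t1p1 UPDATE GLOBAL INDEXES, if there are global indexes)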
I just got confused while looking at the two create table statements below:
CREATE TABLE Test (
  TestID integer not null,
  Name   varchar2(20) not null
)
PARTITION BY LIST (TestID) (
  PARTITION testPart1 VALUES (1) TABLESPACE tbspc1,
  PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer
);
and
CREATE TABLE Test (
  TestID integer not null,
  Name   varchar2(20) not null
)
TABLESPACE tbspc1
PARTITION BY LIST (TestID) (
  PARTITION testPart1 VALUES (1) TABLESPACE tbspc1,
  PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer
);
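To see what each variant actually does, I was going to check where the partitions land and what the table-level default tablespace ends up being, along these lines:

SELECT partition_name, tablespace_name
FROM   user_tab_partitions
WHERE  table_name = 'TEST';

-- table-level default, used for partitions that don't name a tablespace
SELECT def_tablespace_name
FROM   user_part_tables
WHERE  table_name = 'TEST';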
I have a partitioned table that is streamed to another database. I need to archive data in that table; that is, I need to add a partition and remove a partition.
If I make those changes to the source table, will it stream over to the destination table?
If not, can I ...
pause streaming, make the changes to the source table, make the same changes to the destination table, and then re-enable streaming? I know making data changes to the destination table can break Streams, but I'm not sure whether that holds for DDL.
My table, HMTX, has 10 partitions, each with about 6 million rows on average. We have 7 partitioned LOCAL indexes on that table. Every month we load data into a new partition (approximately 6 million rows) and drop the oldest partition of HMTX.
To do that we have a script that contains the following statements:
-- drop all indexes
drop index n1;
drop index n...;
drop index n7;
[Code]...
-- create the indexes again with the same storage and degree parameters
CREATE INDEX hmtx_TST_N1 ON hmtx (campo1, campo2, campo3 .... campo8)
  TABLESPACE xxxx
  PCTFREE 0
  INITRANS 2
[Code]....
My problem is in the index creation section: despite using PARALLEL with degree 8 and NOLOGGING, the index was created in:
Elapsed: 02:43:50.85.
In past months that index was created in:
Elapsed: 01:43:36.94
Elapsed: 04:48:31.24
Elapsed: 00:57:16.28
Is there another way to speed up the index creation, or another way to disable the indexes?
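One thing I was wondering about, since the indexes are LOCAL, is whether we even need to drop them: maybe only the new partition's index segments have to be built. A sketch of what I mean (the partition name is hypothetical):

ALTER TABLE hmtx MODIFY PARTITION p_new UNUSABLE LOCAL INDEXES;
-- ... bulk load the ~6 million rows into p_new ...
ALTER TABLE hmtx MODIFY PARTITION p_new REBUILD UNUSABLE LOCAL INDEXES;

-- or rebuild per index partition, which accepts a PARALLEL clause:
ALTER INDEX hmtx_TST_N1 REBUILD PARTITION p_new PARALLEL 8 NOLOGGING;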
I have a 45M-row table (not partitioned). I want to compress the old data, everything other than the last 3 months, and I should not go for partition compression. Select queries will only rarely be fired against that old data. How can I compress the table without affecting its indexes and dependencies (procedures, packages, functions)?
I have a table with 4 range partitions. I am bulk-loading into the latest partition. Before the load I drop the index, and after the load I recreate it. When I drop the index it is dropped for all partitions, and when I create it I have to create it for all partitions. When I try to create the index as LOCAL, it tells me I have to create the local index for all partitions at the same time. Because of that I have to drop and recreate all the indexes again, and then gather stats for the whole table again.
I was thinking we could build the index for just one partition and leave the index as-is for the old partitions. If that is not possible, how should I plan my bulk load into the latest partition of a partitioned table?
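For the stats part, my understanding is that they can be gathered for just the loaded partition rather than the whole table; a sketch with made-up owner/table/partition names:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'SCOTT',
    tabname     => 'MY_PART_TAB',
    partname    => 'P_LATEST',
    granularity => 'PARTITION',
    cascade     => TRUE);   -- also gathers stats for that partition's local index segments
END;
/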
I have normal (non-partitioned) tables with huge data volumes and would like to increase performance by the following means:
1) Add a new column to each table; say this column's name is IS_LIVE. The new column has only two values, 1 (LIVE) or 0 (NOT LIVE).
2) Change the normal tables to partitioned tables. There would be only two partitions in each table, the partition key column would be IS_LIVE, and the two sets of partitioned records would be in two different tablespaces.
3) Add a policy function to these partitioned tables to always add a query predicate of IS_LIVE = 1 to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not include this new column.
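To make the question concrete, this is roughly what I'm weighing up (table and column names are made up):

-- a local index is equipartitioned with the table, so dropping or exchanging
-- one partition leaves the other partition's index segment untouched
CREATE INDEX my_tab_cust_idx ON my_tab (customer_id) LOCAL;

-- the primary key index would stay non-partitioned (global), because a UNIQUE
-- local index must include the partitioning key (IS_LIVE), which the PK does not
CREATE UNIQUE INDEX my_tab_pk ON my_tab (id);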
Is it possible for the DBMS_STATS "LIST STALE" command to show a stale partition but NOT have its table show as stale?
I had a scenario where the table itself AND one partition showed as stale. I ran an FND_STATS gather table stats on just that one partition. Once it completed, the partition no longer showed as stale, and the table no longer showed as stale either. So I guess I do not need to run stats on the whole table as well?
So if that is the case, when would I ever need to run stats on the full partitioned table, if running them on the partitions themselves removes the staleness of the table?
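The way I've been checking staleness at both levels is something like this (the table name here is just a placeholder); the row with a NULL partition name is the table-level entry:

SELECT partition_name, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  table_name = 'MY_PART_TAB'
ORDER  BY partition_name NULLS FIRST;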