We are evaluating partition strategies with a view to achieving performance gains, particularly in reporting. How efficient are partitioned indexes in this regard, e.g. just partitioned indexes on an unpartitioned table?
We have one large fact table with surrogate keys, on which we have bitmap indexes linking to the unique keys of the associated dimensions. We are considering partitioning the bitmap index that links to the largest dimension, and similarly partitioning the dimension key on the largest dimension.
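For reference, here is a minimal sketch of the layout being described, using hypothetical fact and dimension names. Two points worth keeping in mind: bitmap indexes on a partitioned table must be created LOCAL (one index partition per table partition), and a partitioned index on an unpartitioned table is only possible as a global B-tree index, not as a bitmap index.

-- Hypothetical names; range partitioning on the date key is only an example.
CREATE TABLE sales_fact (
    sale_date    DATE         NOT NULL,
    customer_key NUMBER(10)   NOT NULL,   -- surrogate key of the largest dimension
    product_key  NUMBER(10)   NOT NULL,
    amount       NUMBER(12,2)
)
PARTITION BY RANGE (sale_date) (
    PARTITION p2012 VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD')),
    PARTITION p2013 VALUES LESS THAN (TO_DATE('2014-01-01','YYYY-MM-DD'))
);

-- Bitmap indexes on a partitioned table have to be LOCAL, so each index
-- partition automatically covers exactly one table partition.
CREATE BITMAP INDEX sales_fact_cust_bix ON sales_fact (customer_key) LOCAL;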
Here is what I have done: I have created a procedure that takes three input parameters (please see the attached script):
1) TABLE_OWNER 2) TABLE_NAME 3) PARTITION_NAME - which requires querying the particular table to get the partition name.
Now there is a situation where users will input dates, since the table is partitioned on a DATE column. My challenge is how to change the procedure to accept a DATE as input, which requires querying the dictionary for the particular table to map the date to a partition. I thought of using (HIGH_VALUE - 1) to get the dates from ALL_IND_PARTITIONS.
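One possible approach (a sketch only, not the attached procedure; the function and variable names below are made up): since HIGH_VALUE is a LONG column holding a TO_DATE literal, it can be evaluated with dynamic SQL and compared against the input date to find the owning partition.

-- Sketch: map an input DATE to a range partition name by evaluating HIGH_VALUE.
CREATE OR REPLACE FUNCTION get_partition_name (
    p_owner      IN VARCHAR2,
    p_table_name IN VARCHAR2,
    p_date       IN DATE
) RETURN VARCHAR2
IS
    v_high DATE;
BEGIN
    FOR r IN (SELECT partition_name, high_value
                FROM all_tab_partitions
               WHERE table_owner = p_owner
                 AND table_name  = p_table_name
               ORDER BY partition_position)
    LOOP
        -- HIGH_VALUE is stored as text such as
        -- TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', ...);
        -- a MAXVALUE partition, if present, would need special handling here.
        EXECUTE IMMEDIATE 'SELECT ' || r.high_value || ' FROM dual' INTO v_high;
        IF p_date < v_high THEN
            RETURN r.partition_name;
        END IF;
    END LOOP;
    RETURN NULL;  -- the date lies above all partition boundaries
END;
/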
I'm trying to find a way to ADD new partitions to local indexes and at the same time specify their tablespaces without having to DROP and RECREATE.
Here's an example table based on yearly partitioning:
CREATE TABLE "TABL_ANOM" ( ANOM_TS TIMESTAMP(6) NOT NULL , ANOM_TIPO NUMBER(2, 0) NOT NULL , ANOM_NIVEL NUMBER(2, 0) NOT NULL , ANOM_ID NUMBER(10, 0) NOT NULL
[code]...
Here's an index def for the table:
CREATE INDEX "TABL_ANOM_INDEX1" ON "TABL_ANOM" ("ANOM_NIVEL") LOCAL (PARTITION DGSCOPSX_2011 TABLESPACE DGSCOPSX_2011 ,PARTITION DGSCOPSX_2012 TABLESPACE DGSCOPSX_2012 )
OK. Now I want to add partitions for 2013 so for the table I use:
ALTER TABLE TABL_ANOM ADD PARTITION DGSCOPS_2013 VALUES LESS THAN (TO_DATE('2014-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS')) TABLESPACE DGSCOPS_2013;
and this works fine for the table but I can't find a similar command to simply add additional partitions to the indexes. I know that I can drop and recreate the indexes with the additional partition defs but on some of my tables, I'm dealing with hundreds of millions of rows and I think it would take way too long to drop and recreate all indexes on all partitions.
Also related are the PRIMARY KEY index partitions. Is there a way to add partitions (specifying the tablespaces) without having to DROP and re-ADD the CONSTRAINT with the additional partition for 2013?
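For what it's worth, a sketch of the two approaches that come to mind, reusing the names from the example above (worth verifying on a test copy first): since the indexes are LOCAL, ALTER TABLE ... ADD PARTITION creates the matching index partitions automatically (typically named after the new table partition), so only their placement needs dealing with. The same REBUILD PARTITION technique applies to the index backing the PRIMARY KEY, without touching the constraint itself.

-- Option 1: set the index default tablespace first, so the automatically
-- created index partition already lands where it should.
ALTER INDEX TABL_ANOM_INDEX1 MODIFY DEFAULT ATTRIBUTES TABLESPACE DGSCOPS_2013;
ALTER TABLE TABL_ANOM ADD PARTITION DGSCOPS_2013
  VALUES LESS THAN (TO_DATE('2014-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS'))
  TABLESPACE DGSCOPS_2013;

-- Option 2: add the table partition first, then relocate just the
-- auto-created index partition (only that one partition is rebuilt).
ALTER INDEX TABL_ANOM_INDEX1 REBUILD PARTITION DGSCOPS_2013 TABLESPACE DGSCOPS_2013;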
While trying the partition exchange feature of Oracle with two hash-partitioned tables, I came to learn that I can't directly exchange partitions between two partitioned tables.
I have two hash-partitioned tables, so moving partition data from one table to the other will involve:
1) Exchange from the partitioned table to a non-partitioned table. 2) Exchange from the non-partitioned table to the new partitioned table.
But I am not sure which hash partition my data will go into in the new partitioned table (the data to be moved has a single value of the key on which the tables are partitioned).
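A sketch of the two-step exchange, with one possible way to find the target hash partition up front (all table, column and bind names below are placeholders): because every row carries the same partitioning key value, all rows hash to one partition, and a throw-away test row reveals which one.

-- Step 1: move the source partition's rows into a plain staging table.
CREATE TABLE stage_tab AS SELECT * FROM source_tab WHERE 1 = 0;
ALTER TABLE source_tab EXCHANGE PARTITION src_part WITH TABLE stage_tab;

-- Find the partition the key value hashes to in the target table: insert one
-- test row (supplying any other mandatory columns), look up the segment it
-- landed in, then roll the test row back.
INSERT INTO target_tab (part_key) VALUES (:the_key_value);
SELECT uo.subobject_name AS target_partition
  FROM target_tab t
  JOIN user_objects uo
    ON uo.data_object_id = DBMS_ROWID.ROWID_OBJECT(t.ROWID)
 WHERE t.part_key = :the_key_value
   AND ROWNUM = 1;
ROLLBACK;

-- Step 2: exchange the staging table into the partition found above.
ALTER TABLE target_tab EXCHANGE PARTITION <partition_found_above> WITH TABLE stage_tab;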
A basic select and group by query I am optimising for my database course has returned results that indicate it will perform better on a clustered index when returning a smaller number of rows (5% of the largest table) and on a hash clustered index when returning higher volumes (50% and 80%). I understand that it is possible to use more than one index type on a table to improve performance, but I am struggling to understand how I might establish a hash cluster and a cluster on the same table, and then use hints to drive the query down one access path or the other.
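In Oracle terms (assuming Oracle table clusters are what is meant), a table segment can be stored in only one cluster, so this comparison is usually set up with two physical copies of the data, one per cluster, plus access-path hints per query. A rough sketch with invented names:

-- Index cluster copy (the cluster index must exist before rows are inserted).
CREATE CLUSTER emp_idx_clu (dept_id NUMBER(4)) SIZE 512;
CREATE INDEX emp_idx_clu_ix ON CLUSTER emp_idx_clu;
CREATE TABLE emp_ic (
  emp_id  NUMBER(6),
  dept_id NUMBER(4),
  salary  NUMBER(8,2)
) CLUSTER emp_idx_clu (dept_id);

-- Hash cluster copy.
CREATE CLUSTER emp_hash_clu (dept_id NUMBER(4)) SIZE 512 HASHKEYS 1000;
CREATE TABLE emp_hc (
  emp_id  NUMBER(6),
  dept_id NUMBER(4),
  salary  NUMBER(8,2)
) CLUSTER emp_hash_clu (dept_id);

-- The CLUSTER and HASH hints then steer each query down one access path.
SELECT /*+ CLUSTER(e) */ dept_id, COUNT(*) FROM emp_ic e WHERE dept_id = 10 GROUP BY dept_id;
SELECT /*+ HASH(e) */    dept_id, COUNT(*) FROM emp_hc e WHERE dept_id = 10 GROUP BY dept_id;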
CREATE TABLE CARGA_P_RECARGA_NEW1 ( TELE_NUM VARCHAR2(10) NOT NULL, FECHA DATE NOT NULL,
[Code].....
Then I tried to insert some rows into that table; every insert statement is like this:
INSERT INTO CARGA_P_RECARGA_NEW1 VALUES ('3134769595',TO_DATE('20/01/2013 07:22:50','DD/MM/YYYY HH24:MI:SS'),'1107','CONFB_20121121_20121122175002 60000000000000000090.TXT',0,16,'8327--7991284',1);
Every insert I executed had month 01, because I expected to get results only from partition P_0113; but no matter what query I execute, the result is always the same. I mean, if I execute this statement:
SELECT * FROM CARGA_P_RECARGA_NEW1 P_0113;
I get the same result as when I execute any other similar query.
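A note on the statement above, assuming FECHA is the partitioning column: in SELECT * FROM CARGA_P_RECARGA_NEW1 P_0113 the identifier P_0113 is parsed as a table alias, not as a partition reference, so the whole table is scanned every time. Reading a single partition explicitly, or filtering on the partition key so the optimizer can prune, would look like this:

-- Read one partition explicitly.
SELECT * FROM CARGA_P_RECARGA_NEW1 PARTITION (P_0113);

-- Or let the optimizer prune by filtering on the partitioning column.
SELECT *
  FROM CARGA_P_RECARGA_NEW1
 WHERE FECHA >= TO_DATE('01/01/2013','DD/MM/YYYY')
   AND FECHA <  TO_DATE('01/02/2013','DD/MM/YYYY');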
We recently had to delete data from the table. This was a simple delete statement with a where clause, without taking into consideration any partition/subpartition clauses. After committing the delete we have a count mismatch problem with two queries in particular:
select count(0) count_without_parallel FROM TRANSACTION_TABLE t;
--THIS RETRIEVES 15774811 ROWS
select /*+ parallel(t,default) */ count(0) count_with_parallel FROM TRANSACTION_TABLE t;
--THIS RETRIEVES 15777617 ROWS, WHICH IS THE ACTUAL EXPECTED COUNT.
I also ran the following just to summarize:
select (select count_with_parallel from (select /*+ parallel(t,default) */ count(0) count_with_parallel FROM TRANSACTION_TABLE t)) - (select count_without_parallel from (select count(0) count_without_parallel FROM TRANSACTION_TABLE t)) as false_difference from dual;
The difference is 2806 rows, as expected. To re-affirm my counts I ran:
select /*+ parallel(t,default) */ 'count_on_t',count(*) from TRANSACTION_TABLE t group by 'count_on_t' order by 1;
--THIS RETRIEVES 15777617 ROWS
Removing the parallel hint reverts back to the lesser count. I am not sure what is wrong, but something prevents the serial query from reading the whole table and/or all of its partitions and subpartitions.
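One way to narrow this down (a sketch; P_EXAMPLE is a placeholder for each partition name returned by the dictionary query) is to compare serial and parallel counts partition by partition, which localises the 2806-row gap to a specific segment:

-- List the partitions of the table.
SELECT partition_name
  FROM user_tab_partitions
 WHERE table_name = 'TRANSACTION_TABLE'
 ORDER BY partition_position;

-- For each partition listed above, compare the two counts.
SELECT COUNT(*) FROM TRANSACTION_TABLE PARTITION (P_EXAMPLE);
SELECT /*+ parallel(t,default) */ COUNT(*) FROM TRANSACTION_TABLE PARTITION (P_EXAMPLE) t;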
I have partitioned the table based on a field, but when I select by partition or by that field I am getting an explain plan with a full table access. I am pasting the SQL and the explain plan here. The table has two partitions on BOOKING_DT_WID: one less than 20100801 and the other less than 99991231.
SELECT * FROM WC_BOOKING_SALESREP_F WHERE BOOKING_DT_WID >= 20100801;
SELECT * FROM WC_BOOKING_SALESREP_F PARTITION(SALESREP_LESS1_99991231);
Here is the explain plan for the same:
SELECT STATEMENT ALL_ROWS Cost: 1,501 Bytes: 293,923,641 Cardinality: 809,707 4 PX COORDINATOR [code]....
How do I know if the SQL is doing partition pruning?
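One way to check for pruning (a sketch; the default DBMS_XPLAN output includes the Pstart/Pstop columns for partitioned objects): literal partition numbers, or KEY, in Pstart/Pstop mean the scan was pruned to those partitions, while a range covering all partitions means the whole table is read.

EXPLAIN PLAN FOR
SELECT * FROM WC_BOOKING_SALESREP_F WHERE BOOKING_DT_WID >= 20100801;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);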
PARTITION t1p1 VALUES LESS THAN (TO_DATE('2011-11-01', 'YYYY-MM-DD')) PARTITION t1p2 VALUES LESS THAN (TO_DATE('2011-11-02', 'YYYY-MM-DD')) .... PARTITION t1p4 VALUES LESS THAN (MAXVALUE)
Every year, partitions will be added for the next 12 months. A table partition will be dropped every month (I have to keep data from the last six months, so in July I could drop partition t1p1, in August t1p2, and so on). How many tablespaces should I create for this table, and how should I place the partitions in them, so as to keep data for the last six months while using the minimum space on disk?
I was thinking about one tablespace for the whole table, because the space of each dropped partition will be reused. What do you think about that?
I just got confused while looking at the two create table statements below:
CREATE TABLE Test ( TestID integer not null, Name varchar2(20) not null ) PARTITION BY LIST (TestID) ( PARTITION testPart1 VALUES (1) TABLESPACE tbspc1, PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer);
and
CREATE TABLE Test ( TestID integer not null, Name varchar2(20) not null ) tablespace tbspc1 PARTITION BY LIST (TestID) ( PARTITION testPart1 VALUES (1) TABLESPACE tbspc1, PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer);
I have a partitioned table that is streamed to another database. I need to archive data on that table; that is, I need to add a partition and remove a partition.
If I make those changes to the source table, will it stream over to the destination table?
If not, can I ...
pause streaming, make the changes to the source table, make the same changes to the destination table, then re-enable streaming? I know making data changes to the destination table can screw up Streams, but I'm not sure if that holds for DDL.
I have a partitioned table (one partition per month). Every month about 1GB of data is added. What extent size should I set? Will 1GB be OK?
What if the data grows beyond 1GB? Adding a new 1GB extent probably takes a lot of time, and clients may see delays while they're inserting during that time (it's an OLTP system).
When is a new extent allocated? Exactly at the moment the existing extent runs out of space, or earlier? Partitions are dropped after one year, so free space isn't a problem.
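As far as I know, a new extent is allocated on demand, at the moment an insert no longer fits into the space already allocated to the segment. If that allocation is a concern during peak OLTP hours, one option is to pre-allocate an extent off-peak; a sketch with a made-up table and partition name:

-- Pre-allocate space for next month's partition during a quiet period.
ALTER TABLE monthly_tab MODIFY PARTITION p_2013_01 ALLOCATE EXTENT (SIZE 1G);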
My table, HMTX, has 10 partitions, each of which has 6 million rows on average. We have 7 partitioned LOCAL indexes on that table. Every month we load data into a new partition (approximately 6 million rows) and drop the oldest partition of table HMTX.
In order to do that we have a script that contains the following statements:
drop all indexes: drop index n1; drop index n...; drop index n7;
[Code]...
create the indexes again with the same storage and degree parameters: CREATE INDEX hmtx_TST_N1 ON hmtx (campo1, campo2, campo3 .... campo8) TABLESPACE xxxx PCTFREE 0 INITRANS 2
[Code]....
My problem is in the index creation section: despite using parallel with degree 8 and nologging, the index was created in:
Elapsed: 02:43:50.85.
In past months that index was created in: Elapsed: 01:43:36.94, Elapsed: 04:48:31.24, Elapsed: 00:57:16.28.
Is there another way to speed up the index creation, or another way to disable the index?
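Since the indexes are LOCAL, one alternative worth testing (a sketch; p_new stands for whichever partition is being loaded) is to leave the indexes in place and touch only the new partition's index segments: mark them unusable before the load and rebuild just those afterwards, instead of dropping and recreating all 7 indexes across all 10 partitions.

ALTER INDEX hmtx_tst_n1 MODIFY PARTITION p_new UNUSABLE;   -- repeat for each of the 7 indexes
ALTER SESSION SET skip_unusable_indexes = TRUE;            -- the load then ignores the unusable segments

-- ... load the ~6 million rows into the new partition ...

ALTER INDEX hmtx_tst_n1 REBUILD PARTITION p_new PARALLEL 8 NOLOGGING;   -- repeat for each index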
I have a table of 45M rows [not partitioned]. Now I want to compress the old data, other than the last 3 months' data; I should not go for partition compression. Rarely, some select queries will be fired on that old data. Now, how can I compress that table without affecting the indexes and the dependent procedures, packages and functions?
I have a table with 4 partitions, partitioned by range. I am loading the table in bulk mode into the latest partition. Before the load I drop the index, and after the load I recreate it. When I drop the index, it is dropped from all the partitions, and when I create it, I am creating it for all partitions. When I try to create the index as LOCAL, it tells me I have to create the local index for all partitions at the same time; because of that I have to drop and recreate all indexes again, and then gather stats for the whole table again.
I was thinking we could build the index for just one partition and leave the index as it is for the old partitions. If this is not possible, how do I plan my load for a partitioned table when bulk loading into the latest partition?
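A sketch of an approach that may fit here (index, partition and table names below are placeholders): keep a LOCAL index in place for the old partitions and only take the partition being loaded out of play, then rebuild it and gather stats for that partition alone.

ALTER INDEX my_local_ix MODIFY PARTITION p_latest UNUSABLE;

-- ... bulk load into p_latest ...

ALTER INDEX my_local_ix REBUILD PARTITION p_latest NOLOGGING;

-- Gather statistics for just the loaded partition instead of the whole table.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'MY_TABLE',
    partname    => 'P_LATEST',
    granularity => 'PARTITION');
END;
/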
I have normal tables with huge data volumes and would like to increase the performance by the following means:
1) Add a new column to each table. Say this column's name is IS_LIVE. This new column has only two values: 1 (LIVE) or 0 (NOT LIVE). 2) Change the normal tables to partitioned tables. There would be only two partitions in each table. The partition key column would be IS_LIVE, and the two sets of partitioned records would be in two different tablespaces. 3) Add a POLICY function to these partitioned tables to always add a query predicate of '1' to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not have this new column in it.
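Just to make the design concrete, a sketch with invented names (not a recommendation): a two-value LIST partition on IS_LIVE, one partition per tablespace. A local index is equipartitioned with the table, so the policy predicate on IS_LIVE prunes it to a single partition, while the primary key index, which does not contain IS_LIVE, is created as a nonpartitioned global index.

CREATE TABLE orders_part (
  order_id NUMBER(10)    NOT NULL,
  is_live  NUMBER(1)     NOT NULL,
  payload  VARCHAR2(200),
  CONSTRAINT orders_part_pk PRIMARY KEY (order_id)  -- nonpartitioned global index by default
)
PARTITION BY LIST (is_live) (
  PARTITION p_live     VALUES (1) TABLESPACE ts_live,
  PARTITION p_not_live VALUES (0) TABLESPACE ts_archive
);

-- A local index has one partition per table partition, so predicates on
-- IS_LIVE only ever touch one of its partitions.
CREATE INDEX orders_part_loc_ix ON orders_part (payload) LOCAL;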
Is it possible for the DBMS_STATS "LIST STALE" command to show a stale partition but NOT have its table show as stale?
I had a scenario where the table itself AND one partition showed as stale. I ran an FND_STATS gather table stats on just that one partition. Once it completed, the partition no longer showed as stale, and neither did the table. So I guess I do not need to run stats on the whole table as well?
If this is the case, when would I ever need to run stats on the full partitioned table, if running them on the partitions themselves removes the staleness of the table?
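For reference, a sketch of listing stale objects and gathering on a single partition with plain DBMS_STATS (FND_STATS is the E-Business Suite wrapper around it; the table and partition names below are made up). The GRANULARITY argument is what decides whether table-level (global) statistics are refreshed along with the partition.

DECLARE
  l_stale DBMS_STATS.OBJECTTAB;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => USER,
    options => 'LIST STALE',
    objlist => l_stale);
  FOR i IN 1 .. l_stale.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_stale(i).objname || ' / ' ||
                         NVL(l_stale(i).partname, '(table level)'));
  END LOOP;
END;
/

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'MY_PART_TABLE',
    partname    => 'P_2013_01',
    granularity => 'PARTITION');
END;
/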
A datafile was deleted for a partitioned table which has 31 partitions; that datafile contains the data from day 25 to day 31. I do not have any backups of this database, but I have all the dumps of that table. What is the solution to get the data of those 6 partitions back?
I am trying to export a partition of a table and import it into another database. I get the error below when I try to import:
ORA-14400: inserted partition key does not map to any partition
If I export the table (for that particular partition) and import it in the destination after dropping the table there, the partitions and subpartitions are created without any problem.
The table is range partitioned and subpartitioned by list. So I had to perform the operations below if I want to retain the other data in the destination table:
1. Drop the existing partition. 2. Create the partition and subpartitions, the same as in the source. 3. Execute imp.
In fact, I had to perform step #2 because, if I split the partition instead, the subpartitions get replicated into the new partition, which again throws the same error. Is there a better way of managing the partitions and subpartitions in the destination with the exp/imp utility, so that I need not perform steps #1 and #2 manually?
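One alternative that might avoid steps #1 and #2 altogether (a sketch under the assumption that a staging schema can be used; all names below are made up, and this should be tested on a copy first): import the partition's rows into a plain, non-partitioned staging table of the same structure, then swap that segment into the destination (sub)partition with an exchange, which is a dictionary-only operation and sidesteps ORA-14400.

-- 1) In a staging schema, pre-create a non-partitioned table with the same
--    column layout, then import the dump into it with IGNORE=Y so imp skips
--    the failing CREATE TABLE and just loads the rows:
--      imp stage_user/pwd file=part.dmp fromuser=src_user touser=stage_user tables=MY_TABLE ignore=y

-- 2) Swap the loaded segment into the target subpartition.
ALTER TABLE dest_user.MY_TABLE
  EXCHANGE SUBPARTITION MY_SUBPART
  WITH TABLE stage_user.MY_TABLE
  INCLUDING INDEXES WITHOUT VALIDATION;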