I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export the data and then import it back.
Does anyone know if there is any way to move a partitioned table between tablespaces of different block sizes?
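For what it's worth, the workaround I have seen suggested (a rough sketch only, with made-up object, column and tablespace names) is to build an empty copy of the table entirely in the target tablespace and copy the data across, since all partitions of one table must live in tablespaces of the same block size:

-- build the new partitioned table entirely in the target (different block size) tablespace
CREATE TABLE sales_part_new
  PARTITION BY RANGE (sale_date)
  ( PARTITION p_2012 VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD')) TABLESPACE ts_16k,
    PARTITION p_max  VALUES LESS THAN (MAXVALUE)                           TABLESPACE ts_16k )
AS SELECT * FROM sales_part WHERE 1 = 0;

-- copy the rows with a direct-path insert, then swap the names
-- (indexes, grants and constraints still have to be recreated on the new table)
INSERT /*+ APPEND */ INTO sales_part_new SELECT * FROM sales_part;
COMMIT;
RENAME sales_part TO sales_part_old;
RENAME sales_part_new TO sales_part;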
CREATE TABLE CARGA_P_RECARGA_NEW1 ( TELE_NUM VARCHAR2(10) NOT NULL, FECHA DATE NOT NULL,
[Code].....
Then I tried to insert some rows into that table; every insert statement looks like this:
INSERT INTO CARGA_P_RECARGA_NEW1 VALUES ('3134769595','20/01/2013 07:22:50','1107','CONFB_20121121_20121122175002 60000000000000000090.TXT',0,16,'8327--7991284',1);
Every insert I executed had month 01 because I expected to query results only from partition p_0113, but no matter which query I execute, the result is always the same. I mean, if I execute this statement:
SELECT * FROM CARGA_P_RECARGA_NEW1 P_0113;
I get the same result as when I execute any other query like this:
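As a side note on syntax (just something worth checking, not necessarily the cause): without the PARTITION keyword Oracle treats P_0113 as a table alias rather than a partition name, so a partition-specific query would normally be written like this (assuming FECHA is the partition key):

-- query only the named partition (partition-extended table name)
SELECT * FROM CARGA_P_RECARGA_NEW1 PARTITION (P_0113);

-- or rely on partition pruning via the partition key column
SELECT * FROM CARGA_P_RECARGA_NEW1
WHERE  FECHA >= TO_DATE('01/01/2013', 'DD/MM/YYYY')
AND    FECHA <  TO_DATE('01/02/2013', 'DD/MM/YYYY');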
We recently had to delete data from the table. This was a simple delete statement with a WHERE clause, without taking any partition/subpartition clauses into consideration. After committing the delete we have a count mismatch between two queries in particular:
select count(0) count_without_parallel FROM TRANSACTION_TABLE t;
-- this returns a count of 15774811
select /*+ parallel(t,default) */ count(0) count_with_parallel FROM TRANSACTION_TABLE t;
-- this returns a count of 15777617, which is the actual expected count
I also ran the following just to summarize
select (select count_with_parallel
          from (select /*+ parallel(t,default) */ count(0) count_with_parallel
                FROM TRANSACTION_TABLE t))
     - (select count_without_parallel
          from (select count(0) count_without_parallel
                FROM TRANSACTION_TABLE t)) as false_difference
from dual;
The difference is 2806 rows, as expected. To re-affirm my counts I ran:
select /*+ parallel(t,default) */ 'count_on_t',count(*) from TRANSACTION_TABLE t group by 'count_on_t' order by 1;
-- this returns a count of 15777617
Removing the parallel hint reverts to the lesser count. I am not sure what is wrong, but something prevents the query from reading the whole table and/or all of its partitions and subpartitions.
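One way I can think of to narrow this down (just a sketch; the partition name below is made up) is to count each partition individually with and without the hint and see where the two figures diverge:

-- serial count of a single partition
SELECT COUNT(*) FROM TRANSACTION_TABLE PARTITION (p_2013_01) t;

-- parallel count of the same partition
SELECT /*+ parallel(t,default) */ COUNT(*) FROM TRANSACTION_TABLE PARTITION (p_2013_01) t;

-- repeat per partition (or subpartition) to find where the 2806-row difference lives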
I have partitioned the table based on a field, but when I select by partition or by that field the explain plan shows a full table access. I am pasting the SQL and the explain plan here. The table has two partitions on BOOKING_DT_WID: one for values less than 20100801 and the other for values less than 99991231.
SELECT * FROM WC_BOOKING_SALESREP_F WHERE BOOKING_DT_WID >= 20100801;
SELECT * FROM WC_BOOKING_SALESREP_F PARTITION(SALESREP_LESS1_99991231);
Here is the explain plan for the same:
SELECT STATEMENT ALL_ROWS Cost: 1,501 Bytes: 293,923,641 Cardinality: 809,707
4 PX COORDINATOR
[code]....
How do I know if the SQL is doing partition pruning?
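One way to check for pruning (a minimal sketch) is to look at the Pstart/Pstop columns that DBMS_XPLAN prints whenever partitioned objects are involved; specific partition numbers or KEY there suggest pruning, while a PARTITION RANGE ALL row source covering all partitions means the whole table is scanned:

EXPLAIN PLAN FOR
SELECT * FROM WC_BOOKING_SALESREP_F WHERE BOOKING_DT_WID >= 20100801;

-- the plan output includes Pstart and Pstop columns for the partitioned steps
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);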
PARTITION t1p1 VALUES LESS THAN (TO_DATE('2011-11-01', 'YYYY-MM-DD'))
PARTITION t1p2 VALUES LESS THAN (TO_DATE('2011-11-02', 'YYYY-MM-DD'))
....
PARTITION t1p4 VALUES LESS THAN (MAXVALUE)
Every year, partitions will be added for the next 12 months. A partition will be dropped every month (I have to keep data from the last six months, so in July I could drop partition t1p1, in August t1p2, and so on). How many tablespaces should I create for this table, and how should I place the partitions in them, to keep data for the last six months while using minimum disk space?
I was thinking about one tablespace for the whole table, because the space of each dropped partition will be reused; what do you think about that?
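For context, the monthly maintenance I have in mind would look roughly like this (dates and the new partition name are illustrative only); my understanding is that with a single locally managed tablespace the extents freed by the drop become available for the next partition:

-- split the MAXVALUE partition to carve out the next month, since ADD PARTITION
-- is not allowed while a MAXVALUE partition exists
ALTER TABLE t1 SPLIT PARTITION t1p4 AT (TO_DATE('2011-12-01', 'YYYY-MM-DD'))
  INTO (PARTITION t1p_2011_11, PARTITION t1p4);

-- drop the partition that is now older than six months
ALTER TABLE t1 DROP PARTITION t1p1 UPDATE GLOBAL INDEXES;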
I just got confused while looking at the two CREATE TABLE statements below:
CREATE TABLE Test ( TestID integer not null, Name varchar2(20) not null ) PARTITION BY LIST (TestID) ( PARTITION testPart1 VALUES (1) TABLESPACE tbspc1, PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer);
and
CREATE TABLE Test ( TestID integer not null, Name varchar2(20) not null ) tablespace tbspc1 PARTITION BY LIST (TestID) ( PARTITION testPart1 VALUES (1) TABLESPACE tbspc1, PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer);
I have a partitioned table that is streamed to another database. I need to archive data on that table. That is I need to add a partition and remove a partition.
If I make those changes to the source table, will it stream over to the destination table?
If not, can I ...
Pause streaming, make the changes to the source table, make the same changes to the destination table, then re-enable streaming. I know making data changes to the destination table can screw up Streams, but I am not sure whether that holds for DDL.
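If it turns out the DDL does not propagate, the pause/resume sequence I have in mind is roughly this (the capture and apply names are placeholders):

-- on the source database: stop capture
BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'MY_CAPTURE');
END;
/
-- on the destination database: stop apply
BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'MY_APPLY');
END;
/
-- run the same ADD PARTITION / DROP PARTITION DDL on both tables, then restart:
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'MY_CAPTURE');
END;
/
BEGIN
  DBMS_APPLY_ADM.START_APPLY(apply_name => 'MY_APPLY');
END;
/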
I have a partitioned table (one partition per month). Every month about 1GB of data is added. What extent size should I set? Will 1GB be OK?
What if the data turns out to be greater than 1GB? Adding a new 1GB extent probably takes a lot of time, and clients may see delays while they are inserting during that time (it's an OLTP system).
When is a new extent allocated? Exactly when the existing extent runs out of space, or earlier? Partitions are dropped after one year, so free space isn't a problem.
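For reference, the kind of setup I am considering (names and sizes are just an example) is a locally managed tablespace with uniform extents much smaller than a month's data, so each individual allocation is quick and the space freed by dropped partitions is reused:

CREATE TABLESPACE part_data
  DATAFILE '/u01/oradata/part_data01.dbf' SIZE 12G AUTOEXTEND ON NEXT 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64M;

-- each monthly partition then grows in 64M steps instead of one big 1GB allocation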
My table HMTX has 10 partitions, each with about 6 million rows on average. We have 7 LOCAL partitioned indexes on that table. Every month we load data into a new partition (approx. 6 million rows) and drop the oldest partition in table HMTX.
In order to do that we have a script that contains the following statements:
drop all indexes:
drop index n1;
drop index n...;
drop index n7;
[Code]...
create indexes again with the same storage and degree parameters:
CREATE INDEX hmtx_TST_N1 ON hmtx (campo1, campo2, campo3 .... campo8) TABLESPACE xxxx PCTFREE 0 INITRANS 2
[Code]....
My problem is in the index creation section: despite using parallel with degree 8 and nologging, the index was created in:
Elapsed: 02:43:50.85.
In past months that index was created in: Elapsed: 01:43:36.94, Elapsed: 04:48:31.24, Elapsed: 00:57:16.28
Is there another way to speed up the index creation, or another way to disable this index?
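One alternative I am wondering about (a sketch only; the partition name is made up) is to skip dropping and recreating all 7 local indexes and instead mark only the new partition's index partitions unusable, then rebuild just those after the load:

-- before loading the new partition
ALTER TABLE hmtx MODIFY PARTITION p_new UNUSABLE LOCAL INDEXES;

-- ... load the ~6 million rows into p_new ...

-- after the load, rebuild only that partition's local index partitions
ALTER TABLE hmtx MODIFY PARTITION p_new REBUILD UNUSABLE LOCAL INDEXES;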
I have a table of 45M rows (not partitioned). I want to compress the old data, everything other than the last 3 months, and I should not go for partition compression. Select queries will rarely be fired against that old data. How can I compress that table without affecting the indexes and the dependent procedures, packages and functions?
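For what it's worth, the only way I know of to compress rows that already exist is ALTER TABLE ... MOVE COMPRESS, and that is exactly the catch: the move changes rowids and leaves the indexes UNUSABLE, so they have to be rebuilt afterwards. A rough sketch with made-up names, shown only to illustrate the trade-off:

ALTER TABLE big_log_tab MOVE COMPRESS;   -- rewrites the existing blocks in compressed form

-- every index on the table is left UNUSABLE by the move and must be rebuilt
ALTER INDEX big_log_tab_ix1 REBUILD;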
I have created a materialized view which holds a few million records. Do I have to analyze the view and compute statistics after I create the materialized view?
Also, just in case I need further indexing, do I have to gather statistics for the table again?
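In case it helps frame the question, the call I would otherwise run against the materialized view's container table is this (owner and name are placeholders):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',
    tabname => 'MY_MVIEW',
    cascade => TRUE);   -- cascade => TRUE also gathers stats on the MV's indexes
END;
/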
I have a table with 4 partitions, partitioned by range. I am loading the table in bulk mode into the latest partition. Before I load, I drop the indexes, and after the load I create them again. When I drop an index, it is dropped from all partitions, and when I create the index, I create it for all partitions. When I try to create the index as LOCAL, it tells me I have to create the local index for all partitions at the same time. Because of that I have to drop and recreate all indexes again, and then gather stats for the whole table again.
I was thinking we could build the index for just one partition and leave the index as-is for the old partitions. If that is not possible, how do I plan my load for a partitioned table when bulk loading into the latest partition?
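What I was hoping for is something along these lines (a sketch with placeholder names): touch only the latest partition of the local index, and gather stats only for that partition:

-- make only the latest partition of the local index unusable before the bulk load
ALTER INDEX my_tab_ix1 MODIFY PARTITION p4 UNUSABLE;

-- ... bulk load into partition P4 ...

-- rebuild just that index partition
ALTER INDEX my_tab_ix1 REBUILD PARTITION p4;

-- gather statistics for the loaded partition only
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'MY_TAB',
    partname    => 'P4',
    granularity => 'PARTITION');
END;
/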
I have normal tables with huge data and would like to increase performance by the following means:
1) Add a new column to each table; say this column's name is IS_LIVE. This new column has only two values: 1 (LIVE) or 0 (NOT LIVE).
2) Change the normal tables to partitioned tables. There would be only two partitions in each table; the partition key column would be IS_LIVE, and the two partitions' records would be in two different tablespaces.
3) Add a policy function to these partitioned tables to always add a query predicate of '1' to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not include this new column.
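To make the design concrete, this is roughly what I have in mind (table, column and tablespace names are illustrative); since IS_LIVE is not part of the primary key, the PK index would have to stay global, while the other indexes could be local:

CREATE TABLE orders_hist (
  order_id  NUMBER        NOT NULL,
  is_live   NUMBER(1)     NOT NULL,
  details   VARCHAR2(100),
  CONSTRAINT orders_hist_pk PRIMARY KEY (order_id)   -- global index: IS_LIVE is not in the key
)
PARTITION BY LIST (is_live)
( PARTITION p_live     VALUES (1) TABLESPACE ts_live,
  PARTITION p_not_live VALUES (0) TABLESPACE ts_hist );

-- secondary indexes can be LOCAL so each partition carries its own index segment
CREATE INDEX orders_hist_ix1 ON orders_hist (details) LOCAL;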
Is it possible for the DBMS_STATS "LIST STALE" command to show a stale partition but NOT have its table show as stale?
I had a scenario where the table itself AND one partition showed as stale. I ran an fnd_stats gather table stats just on that one partition. Once it completed, it showed the partition was no longer stale; it also showed that the table was no longer stale. So I guess I do not need to run stats on the whole table as well?
If that is the case, when would I ever need to run stats on the full partitioned table, if running them on the partitions themselves removes the staleness of the table?
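A query I have been using to double-check the staleness flags at both levels (the table name is a placeholder):

SELECT table_name, partition_name, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  table_name = 'MY_PART_TAB'
ORDER  BY partition_name NULLS FIRST;   -- the row with a NULL partition_name is the table-level entry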
A datafile was deleted from a partitioned table which has 31 partitions; that datafile contains data from day 25 to day 31. I do not have backups of this database, but I have all the dumps of that table. What is the solution to get the data of those 6 partitions back?
I am trying to export a partition of a table and import it to another database. I get the below error when I try to import.
ORA-14400: inserted partition key does not map to any partition
If I export the table (for that particular partition) and import it (after dropping the table) in the destination, the partitions and subpartitions are created without any problem.
The table is range partitioned and subpartitioned by list. So I have to perform the operations below if I want to retain the other data in the destination table.
1. Drop the existing partition
2. Create the partition and subpartitions, same as in the source
3. Execute imp
In fact I have to perform step 2 because, even if I split the partition instead, the subpartitions get replicated in the new partition, which again throws the same error. Is there a better way of managing the partitions and subpartitions in the destination with the exp/imp utility, so that I need not perform steps 1 and 2 manually?
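One alternative I have been considering (a sketch only; the names and the list key are made up) is to import the exported rows into a staging table shaped like one partition and then exchange it in, so the destination's existing partition and subpartition layout stays untouched. Because the destination is range-list composite, the staging table would itself be list partitioned to match the subpartitions:

-- staging table list-partitioned exactly like the subpartitions of one range partition
CREATE TABLE stg_2013_01
  PARTITION BY LIST (region_code)
  ( PARTITION sp_east VALUES ('E'),
    PARTITION sp_west VALUES ('W') )
AS SELECT * FROM dest_tab WHERE 1 = 0;

-- import the exported rows into STG_2013_01, then swap it with the target partition
ALTER TABLE dest_tab EXCHANGE PARTITION p_2013_01
  WITH TABLE stg_2013_01
  INCLUDING INDEXES WITHOUT VALIDATION;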
How can I check database performance in non-sequential parts of a day using the AWR tool? For example, suppose we divide a day into 2 parts, low-traffic time and high-traffic time, with the following time intervals:
low-traffic time: 7:00am - 10:00am and 7:00pm - 11:00pm
high-traffic time: 10:00am - 1:00pm and 4:00pm - 6:00pm
I want to examine performance using snapshots gathered by AWR. If I get 2 AWR reports for the low-traffic time of day (one for 7:00am - 10:00am, another for 7:00pm - 11:00pm) and compute the average of the values in the 2 reports, could I call that the database performance of the low-traffic part of the day, or not?
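The kind of thing I plan to run for each window (the dbid, instance number and snapshot ids below are placeholders) is one AWR text report per interval, picking the snapshot ids that bracket the window from DBA_HIST_SNAPSHOT:

-- find the snapshots that bracket each window
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;

-- text AWR report for one window, e.g. 7:00am - 10:00am
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
               1234567890,   -- dbid
               1,            -- instance number
               101,          -- begin snapshot id
               104));        -- end snapshot id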
We are looking to use Oracle 11g as a log database using partitioned tables.
- The tables will have only 3-5 columns, each around VARCHAR2(50) in size.
- We are looking at a volume of ~33 million rows (inserts) per day.
1) Will partitioned tables be able to handle this type of volume? 2) If yes, would composite partitioning (using the last-modified datetime as the range key) and then subpartitioning by range be the best choice?
If this type of volume is too high for 11g, what are some alternative products we can use?
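For scale, the sort of structure we were sketching is below (column names and the initial boundary are made up, and it shows plain daily interval partitioning rather than the composite scheme mentioned above); 11g creates the daily partitions automatically as rows arrive:

CREATE TABLE app_log (
  log_ts   DATE          NOT NULL,
  source   VARCHAR2(50),
  message  VARCHAR2(50)
)
PARTITION BY RANGE (log_ts)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
( PARTITION p_initial VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')) );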