PL/SQL :: Oracle Exchange Partition Feature Not Working As Expected?
Aug 17, 2012
I used the Exchange Partition feature to swap segments between two tables, one partitioned and one non-partitioned. The exchange went well. However, all the data in the partitioned table has ended up in the partition that holds the MAXVALUE bound.
/** actual table names changed due to client confidentiality issues */
-- Drop the 2 intermediate tables if they already exist
drop table ordered_inv_bkp cascade constraints ;
drop table ordered_inv_t cascade constraints ;
/**
First create a non-partitioned table from ORDERED_INV and then add the primary key and unique index(es):
*/
create table ordered_inv_bkp as select * from ordered_inv ;
alter table ordered_inv_bkp add constraint ordinvb_pk primary key (ordinv_id) ;
--
create unique index ordinv_scinv_uix on ordered_inv_bkp(
SCP_ID ASC,
[code]....
-- Next, we have to create a partitioned table ORDERED_INV_T with a structure
-- similar to ORDERED_INV.
-- This is a bit tricky and involves some PL/SQL code.
-- Add section to set default values for the intermediate table OL_ORDERED_INV_T
FOR crec_cols IN (
SELECT u.column_name ,u.nullable, u.data_default,u.table_name
FROM USER_TAB_COLUMNS u WHERE
u.table_name ='ORDERED_INV' AND
u.data_default IS NOT NULL )
LOOP
[code]....
-- Next, use exchange partition for the actual swap
-- Between ordered_inv_t and ordered_inv_bkp
-- Analyze both tables : ordered_inv_t and ordered_inv_bkp
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_T');
DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME =>'ORDERED_INV_BKP');
END;
/
SET TIMING ON;
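For reference, a hedged sketch of what the exchange step probably looked like (the partition name pmax is made up; the original statement is not shown in the post). EXCHANGE PARTITION swaps segments one-for-one, so every row of the non-partitioned table lands in whichever single partition is named, which is why all the data shows up in the partition holding the MAXVALUE bound:
-- hypothetical exchange into the MAXVALUE partition
ALTER TABLE ordered_inv_t
  EXCHANGE PARTITION pmax              -- made-up name for the MAXVALUE partition
  WITH TABLE ordered_inv_bkp
  INCLUDING INDEXES WITHOUT VALIDATION;
-- To spread the rows across the range partitions instead, either exchange one
-- partition at a time with a staging table that holds only that partition's
-- rows, or simply reload:
-- INSERT /*+ APPEND */ INTO ordered_inv_t SELECT * FROM ordered_inv_bkp;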
I ran EXCHANGE PARTITION from a non-partitioned table to a partitioned table with the following options: WITHOUT VALIDATION UPDATE GLOBAL INDEXES, since we have a global index (the global index is a must). After the exchange, if I run a simple query on the first column of the PK the plan is very bad, and the EM advisor advises me to build an index on that column. I'm using 11.2.0.2.
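One thing worth checking after an exchange like this (a hedged suggestion, not from the original post) is whether the global index is still VALID and whether its statistics were refreshed, since a stale or unusable index often produces exactly this kind of plan regression:
SELECT index_name, status, last_analyzed
FROM   user_indexes
WHERE  table_name = 'YOUR_PARTITIONED_TABLE';   -- placeholder table name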
I am trying to exchange a partition and I am seeing an ORA-14096. I've done this several times before with other tables without problems. I am pretty sure my columns and indexes really do match.
Connected to Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
SQL> ALTER TABLE se_keywords EXCHANGE PARTITION p_user_1068 WITH TABLE se_keywords_1068 INCLUDING INDEXES WITHOUT VALIDATION;
ALTER TABLE se_keywords EXCHANGE PARTITION p_user_1068 WITH TABLE se_keywords_1068 INCLUDING INDEXES WITHOUT VALIDATION
ORA-14096: tables in ALTER TABLE EXCHANGE PARTITION must have the same number of columns
SQL> SELECT COUNT(*) cnt
       FROM all_tab_columns a
       FULL OUTER JOIN all_tab_columns b ON a.column_name = b.column_name
        AND b.owner = USER
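ORA-14096 is often caused by columns that do not show up in ALL_TAB_COLUMNS at all, such as unused or hidden/virtual columns. A hedged variation of the comparison above that also catches them, using USER_TAB_COLS and the two table names from the post:
SELECT NVL(a.column_name, b.column_name) AS column_name,
       a.data_type     AS part_tab_type,
       b.data_type     AS exch_tab_type,
       a.hidden_column AS part_tab_hidden,
       b.hidden_column AS exch_tab_hidden
FROM   (SELECT * FROM user_tab_cols WHERE table_name = 'SE_KEYWORDS')      a
FULL OUTER JOIN
       (SELECT * FROM user_tab_cols WHERE table_name = 'SE_KEYWORDS_1068') b
ON     a.column_name = b.column_name
WHERE  a.column_name IS NULL          -- column missing on one side
   OR  b.column_name IS NULL
   OR  a.data_type <> b.data_type;    -- or datatype mismatch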
While trying the partition exchange feature of Oracle with two hash-partitioned tables, I came to know that I can't directly exchange partitions between two partitioned tables.
I have two hash-partitioned tables, so moving partition data from one table to the other will involve:
1) Exchange from the partitioned table to a non-partitioned table.
2) Exchange from the non-partitioned table to the new partitioned table.
But I am not sure which hash partition my data will go into in the new partitioned table (the data that needs to be moved has a single key value for the column the tables are partitioned on); see the sketch below.
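A hedged sketch with made-up names (OLD_PART_TAB, NEW_PART_TAB, TMP_NONPART) and a made-up key value of 12345. If both tables are hash partitioned on the same key with the same number of partitions, a given value maps to the same partition position in both, and the PARTITION FOR() clause lets you address that partition without knowing its name:
-- 1) move the partition holding key 12345 out of the old table
ALTER TABLE old_part_tab EXCHANGE PARTITION FOR (12345) WITH TABLE tmp_nonpart;
-- 2) move it into the new partitioned table
ALTER TABLE new_part_tab EXCHANGE PARTITION FOR (12345) WITH TABLE tmp_nonpart;
-- confirm which hash partition the rows ended up in
SELECT COUNT(*) FROM new_part_tab PARTITION FOR (12345);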
My company already works with Oracle 10g. I developed an application using this database and ASP.NET. It allows management of the database and lets clients work with the data via ASP pages.
Now I must use Microsoft Exchange (especially the calendar) to retrieve data from the Oracle database and embed links in my ASP pages to the calendar.
Being new to Exchange, my questions are:
- Can it be done? - Should I use a particular driver? - Any ideas?
Is anyone using Oracle 11g's compression feature in production? I haven't read anything negative yet, but that doesn't mean there isn't anything that could have an adverse effect. I wanted to check whether there are any effects on performance or any disadvantages of using this compression feature. I have tested it on one of my major tablespaces and I did see a big difference in the reduced size of the tablespace, but I am still hesitant to put it into production.
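For what it's worth, a hedged example (table and index names are made up) of enabling 11gR2 OLTP compression on an existing table and verifying it. Note that MOVE rebuilds the segment, so its indexes must be rebuilt afterwards, and OLTP compression requires the separately licensed Advanced Compression option:
-- compress an existing table and rebuild its index
ALTER TABLE big_history MOVE COMPRESS FOR OLTP;
ALTER INDEX big_history_pk REBUILD;

-- verify the compression setting
SELECT table_name, compression, compress_for
FROM   user_tables
WHERE  table_name = 'BIG_HISTORY';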
Environment: Oracle RDBMS 10g R2. DB OS: HP Itanium
We use Oracle EBS R12.1.2 in our company, and one of the analysts reported a performance issue when saving the configuration in the Pricing module. The common fix is to gather stats on the BOM_EXPLOSIONS table. Recently, when the issue occurred, I collected statistics on the table. The performance didn't improve. I went ahead and decided to trace the Oracle Forms session using the profile 'Initialization SQL Statement - Custom' at user level.
I also monitored the session in OEM 10g Grid. The analyst performed the same set of steps and the performance was normal and acceptable. The analyst tried again and the performance matched expectations. I cleared the trace profile and the analyst tried again. This time the performance was as bad as the original issue. The issue got fixed later in the day on its own. This has made me curious and prompted me to discuss it here.
I have had similar experiences with 10g and 11g: when I enable tracing the issue cannot be reproduced, and when tracing is off the issue pops back up.
I have a table with range partitions and list sub-partitions. Can I add one more list sub-partition? If it is not possible, do I have to drop the first sub-partition?
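A hedged sketch (partition, subpartition, and value names are made up): on a range-list composite table you can normally add another list subpartition to an existing partition without dropping anything, provided the new values are not already covered by an existing subpartition (for example a DEFAULT one):
ALTER TABLE my_range_list_tab
  MODIFY PARTITION p_2012_q3
  ADD SUBPARTITION sp_region_west VALUES ('WEST');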
I am trying to xcom a file from a Linux server location to a Windows server location through a shell script. When I execute my DBMS_SCHEDULER job to trigger the shell script I get the error mentioned below.
Error report:
ORA-27369: job of type EXECUTABLE failed with exit Exchange full
ORA-06512: at "SYS.DBMS_ISCHED", line 150
ORA-06512: at "SYS.DBMS_SCHEDULER", line 441
ORA-06512: at "CUSTDOMAIN.TSC_041_REPORT_PRC", line 50
[code]....
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
The requirement is to format given multi-line text into a specified number of lines, each line with a specified number of characters. Words should not be broken; instead they must move to a new line if lines are still available.
The function could be of the sort:
CREATE OR REPLACE FUNCTION wrap_text(
    p_text VARCHAR2,
    p_len  NUMBER,   -- no of lines
    p_chr  NUMBER    -- no of chr/line
) RETURN VARCHAR2;
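A minimal sketch of one possible implementation, assuming words are separated by whitespace and that any text beyond the available lines is simply dropped (a single word longer than p_chr is left unbroken):
CREATE OR REPLACE FUNCTION wrap_text (
    p_text VARCHAR2,
    p_len  NUMBER,   -- maximum number of lines
    p_chr  NUMBER    -- maximum characters per line
) RETURN VARCHAR2
IS
    l_line   VARCHAR2(32767);
    l_result VARCHAR2(32767);
    l_lines  PLS_INTEGER := 1;
BEGIN
    -- walk through the text word by word
    FOR r IN (SELECT REGEXP_SUBSTR(p_text, '\S+', 1, LEVEL) AS word
              FROM   dual
              CONNECT BY REGEXP_SUBSTR(p_text, '\S+', 1, LEVEL) IS NOT NULL)
    LOOP
        IF l_line IS NULL THEN
            l_line := r.word;                              -- first word on a line
        ELSIF LENGTH(l_line) + 1 + LENGTH(r.word) <= p_chr THEN
            l_line := l_line || ' ' || r.word;             -- word still fits on this line
        ELSIF l_lines < p_len THEN
            l_result := l_result || l_line || CHR(10);     -- start a new line
            l_line   := r.word;
            l_lines  := l_lines + 1;
        ELSE
            EXIT;                                          -- out of lines; drop the rest
        END IF;
    END LOOP;
    RETURN l_result || l_line;
END wrap_text;
/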
I need to exclude a single schema from the autostats gathering feature in 11g. The tables in this schema are analyzed at the appropriate time via the application code. The autostats gathering job sometimes kicks in at a time when the tables are being updated or loaded, which can skew explain plans during the updates/inserts.
I've searched through the Oracle documentation and cannot find a way to simply exclude the schema without locking it. I see it is possible to disable autostats at the entire-database level but not at the schema level.
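Even though the question hopes to avoid locking, the 11g maintenance job has no documented per-schema exclusion preference, so the workaround usually suggested is to lock the schema's statistics and have the application's own job override the lock with force => TRUE. A hedged sketch (schema and table names are made up):
BEGIN
   -- the nightly autostats job will now skip this schema's tables
   DBMS_STATS.LOCK_SCHEMA_STATS(ownname => 'APPDATA');
END;
/

BEGIN
   -- the application's own gather, run at the appropriate time, overrides the lock
   DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APPDATA',
                                 tabname => 'ORDERS',
                                 force   => TRUE);
END;
/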
I know we can create dynamic (interval) partitions on a table in Oracle 11g. Is it possible to create both the normal partition and the sub-partition dynamically? I have to create a normal range partition on a date and a list sub-partition on Batch ID (VARCHAR2).
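A hedged sketch (all names and values are made up): 11g supports interval-list composite partitioning, so both the range partition on the date and its list subpartitions on the VARCHAR2 batch id can be created automatically via a subpartition template:
CREATE TABLE batch_data (
    load_date DATE          NOT NULL,
    batch_id  VARCHAR2(20)  NOT NULL,
    payload   VARCHAR2(4000)
)
PARTITION BY RANGE (load_date) INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
SUBPARTITION BY LIST (batch_id)
SUBPARTITION TEMPLATE (
    SUBPARTITION sp_b1  VALUES ('BATCH1'),
    SUBPARTITION sp_b2  VALUES ('BATCH2'),
    SUBPARTITION sp_def VALUES (DEFAULT)
)
(PARTITION p0 VALUES LESS THAN (DATE '2012-01-01'));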
I have a table that is partitioned into six partitions. Each partition is placed in a different tablespace, and every two tablespaces are placed on a different hard disk.
When I run a query whose condition is not on the partition key, how does the search proceed? Does it scan sequentially from partition 1 to partition 6, or is a partition on one hard disk accessed at the same time as the partitions on the other disks (as in the image: partitions 1 and 4 accessed at the same time as 2 and 5, and as 3 and 6)?
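A hedged way to see what the optimizer actually does (table and column names are made up): with a predicate that is not on the partition key, the plan typically shows PARTITION RANGE ALL with Pstart=1 and Pstop=6, i.e. all six partitions are candidates. Whether the disks are then read concurrently depends on the degree of parallelism of the scan, not on the partitioning or tablespace layout itself:
EXPLAIN PLAN FOR
  SELECT * FROM inv_part WHERE customer_name = 'ACME';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);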
I have Oracle 9i Release 2 on my DB server. I am getting the following error every time I try to create a bitmap index:
ORA-00439: feature not enabled: Bit-mapped indexes
I have queried the V$OPTION view. There, the value of the parameter 'Bit-mapped indexes' is FALSE.
The result of V$VERSION is:
Oracle9i Release 9.2.0.1.0 - 64bit Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for Solaris: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production
Actually, when we created the database our installation was halted, so we manually created the database using the CREATE DATABASE command.
We have a 10g physical standby set up in our environment and we are migrating to 11g now. We want to use the Active Data Guard feature of 11g to run live reports on the standby rather than on production. The questions I have are:
1) In our current 10g standby environment we use db_name=cusms, which exactly matches the production database name. I don't see that we are using db_unique_name on our standby. But I have read several blogs where everyone talks about setting db_unique_name on the standby while db_name exactly matches production on 11g. I wanted to know: is db_unique_name a new requirement on 11g? Can I go ahead without db_unique_name and just have db_name exactly matching production? What are the implications of doing so? The reason we want to stick to this is that in case of failover we want the database name to be the same. But I want to hear your thoughts on this.
2) While building the standby, I noticed a few things and want your clarification:
a. On the standby database, should I mount the instance using a pfile or an spfile, or does it not matter?
b. Let's say I use either an spfile or a pfile: can I just have db_unique_name in that file, start the instance in NOMOUNT, and run the duplicate from RMAN?
c. As soon as my DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE finishes, I usually exit the RMAN session, go to SQL*Plus, and shut down the standby instance. (Is this OK to do?)
d. Then I start the standby instance with STARTUP (mount and open the database); this should open the standby database in read-only mode. Following that, I issue ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT to put the database in recovery mode. (Any steps missing here?)
e. Then I go to the primary, do a few log switches, and come back to the standby to see if the primary changes moved to the standby or not.
But what I have observed is:
a. When I run the duplicate it completes successfully. But during the course of the duplicate, the primary generates a few archives which are not shipped to or applied on the standby. When I go to the standby to recover the database, it says media recovery is needed and asks for the archive files. I need to manually move these files from primary to standby to apply them. Isn't this taken care of automatically?
b. I also noticed that I cannot open the standby database in read-only mode after the duplicate command. While trying to open, it says database media recovery is needed. What's the best procedure to open the database in read-only mode immediately? (See the sketch below.)
c. In my standby init.ora, let's say I use db_unique_name: where would my control file be placed? Will Oracle create the control file from the primary, put it on my standby database, and update the entry in my pfile or spfile?
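For point b, a hedged sketch (names are made up) of the standby side once the duplicate has finished: db_name stays cusms on both sides and only db_unique_name differs in the standby parameter file (e.g. db_unique_name='cusms_stby'). The read-only open combined with real-time apply is what makes it Active Data Guard:
-- resolve the archive-log gap created while the duplicate was running
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;

-- once the gap is applied: stop recovery, open read only, restart real-time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;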
Below is sample code that works fine in 10g but is not working now in 11g.
CREATE OR REPLACE AND RESOLVE JAVA SOURCE NAMED "PSTest" AS
import java.sql.SQLData;
import java.sql.SQLException;
import java.sql.SQLInput;
import java.sql.SQLOutput;
import java.util.List;
[code]....
We got the error below:
ORA-00932: inconsistent datatypes: expected an IN argument at position 1 that is an instance of an Oracle type convertible to an instance of a user defined Java class got an Oracle type that could not be converted to a java class
The current Oracle version is Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit, and the version we are upgrading to is Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit.
I have a question related to partitioning and dividing an existing table into subpartitions. The situation is as follows:
1. We have an inventory table with a list partition on one column, sales_desk_id. 2. This table contains millions of records. Due to concurrency and the high volume of data inserts, there is now a need to create sub-partitions based on sale_date.
Question: is there any way to create the subpartitions without dropping the table? (See the sketch after the version banner below.)
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
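A hedged sketch of one way to do this without dropping the table: create an interim table (INVENTORY and INVENTORY_INTERIM are made-up names) with the desired composite list-range layout and switch over online with DBMS_REDEFINITION; the original table keeps its name, and dependents are copied across:
DECLARE
   l_errors PLS_INTEGER;
BEGIN
   -- the interim table INVENTORY_INTERIM must already exist with the
   -- composite (list on sales_desk_id, range/list on sale_date) layout
   DBMS_REDEFINITION.CAN_REDEF_TABLE(uname => USER, tname => 'INVENTORY');
   DBMS_REDEFINITION.START_REDEF_TABLE(uname      => USER,
                                       orig_table => 'INVENTORY',
                                       int_table  => 'INVENTORY_INTERIM');
   -- copy indexes, constraints, triggers and grants to the interim table
   DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(uname      => USER,
                                           orig_table => 'INVENTORY',
                                           int_table  => 'INVENTORY_INTERIM',
                                           num_errors => l_errors);
   -- swap: INVENTORY is now the composite-partitioned table
   DBMS_REDEFINITION.FINISH_REDEF_TABLE(uname      => USER,
                                        orig_table => 'INVENTORY',
                                        int_table  => 'INVENTORY_INTERIM');
END;
/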
We have a table with interval partitioning. This table is accessed very frequently. We are supposed to exchange partitions between this actual table and its corresponding staging table.
In order to keep the newly created partitions empty, is there a way to restrict other applications from using them before we push data from the staging table into the actual table?
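A hedged sketch (table names and the date literal are made up): the PARTITION FOR() clause addresses the system-generated interval partition without knowing its name, and holding an exclusive lock on just that partition is one way to keep other sessions from inserting into it while the staging data is being prepared:
-- block other sessions' DML on this interval partition; the lock is released
-- by the implicit commit of the DDL below, which then takes its own lock
LOCK TABLE inv_actual PARTITION FOR (DATE '2012-08-01') IN EXCLUSIVE MODE;

-- the exchange itself; PARTITION FOR() avoids looking up the generated name
ALTER TABLE inv_actual
  EXCHANGE PARTITION FOR (DATE '2012-08-01')
  WITH TABLE inv_stg
  INCLUDING INDEXES WITHOUT VALIDATION;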