DECLARE
  v_name VARCHAR2(256);
BEGIN
  -- look up the current schema name
  SELECT sys_context('userenv', 'current_user') INTO v_name FROM dual;
  -- raises an exception if the table cannot be redefined online by ROWID
  DBMS_REDEFINITION.CAN_REDEF_TABLE(v_name, 'SO33070_ORIGINAL', dbms_redefinition.CONS_USE_ROWID);
END;
/
Success
3. Creating a duplicate table
CREATE TABLE SO33070_NEW
(
  SERIAL_ID     NUMBER(15,0),
  INSERTED_TIME DATE DEFAULT SYSDATE
)
PARTITION BY RANGE ("INSERTED_TIME") INTERVAL (NUMTODSINTERVAL(1,'DAY'))
(
  PARTITION "p1_1" VALUES LESS THAN (TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
);
I recently started working with legacy code and noticed that some huge tables (5 years' worth of data; I don't have more details on me right now but can post them later if needed) are partitioned on a time-sequence number column, while the majority of queries filter on time itself (a different column). Query performance is degrading, and I'd like to change the partitioning and run some tests to evaluate the improvement.
My only concern is that with so much live data I have to come up with a way to switch partitioning with the least impact on applications running 24x7. Is this something you have done in the same situation, and did it work?
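For reference, the lowest-impact route on a 24x7 system is usually online redefinition, as in the DBMS_REDEFINITION walkthrough at the top of this page. A minimal sketch, assuming a hypothetical table ORDERS with a time column ORDER_TIME (indexes, constraints, and grants would also be carried over with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS):

CREATE TABLE orders_interim
( order_id   NUMBER,
  order_time DATE )
PARTITION BY RANGE (order_time) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
( PARTITION p0 VALUES LESS THAN (DATE '2012-01-01') );

BEGIN
  -- verify the table qualifies for online redefinition by ROWID
  DBMS_REDEFINITION.CAN_REDEF_TABLE(USER, 'ORDERS', DBMS_REDEFINITION.CONS_USE_ROWID);
  -- start the copy; applications keep reading and writing ORDERS
  DBMS_REDEFINITION.START_REDEF_TABLE(USER, 'ORDERS', 'ORDERS_INTERIM',
                                      options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
  -- apply the changes made during the copy, then swap the two tables
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER, 'ORDERS', 'ORDERS_INTERIM');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'ORDERS', 'ORDERS_INTERIM');
END;
/

The swap in FINISH_REDEF_TABLE needs only a brief exclusive lock, so the application outage is seconds rather than hours.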
I have a problem transferring data from a non-partitioned table to a partitioned table. I have a non-partitioned table, and I created a new partitioned table with the same columns and types as the non-partitioned one. How can I transfer the data from the non-partitioned table to the partitioned one?
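A minimal sketch of the usual direct-path copy, assuming hypothetical tables T_OLD (non-partitioned) and T_NEW (partitioned) with identical column lists:

ALTER SESSION ENABLE PARALLEL DML;

-- APPEND requests a direct-path insert; rows land in the right partitions automatically
INSERT /*+ APPEND */ INTO t_new
SELECT * FROM t_old;

COMMIT;

Afterwards, rebuild or create the indexes on T_NEW, gather statistics, and rename the tables if T_NEW is to take over T_OLD's name.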
I partitioned a source table of around 100 million rows (62 GB) on a DEV server. The target database was newly created. The table was range-partitioned on a date column as follows:
PARTITION BY RANGE (ENTRY_DATE_TIME)
( PARTITION ppre2012 VALUES LESS THAN (TO_DATE('01/01/2012','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
  PARTITION p2012    VALUES LESS THAN (TO_DATE('01/01/2013','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
  PARTITION p2013    VALUES LESS THAN (TO_DATE('01/01/2014','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
  PARTITION p2014    VALUES LESS THAN (MAXVALUE) TABLESPACE WST_LRG_D )
That is a yearly basis: anything before 2012 went to ppre2012, then p2012, p2013, and so forth. There are 20 million rows in p2012 and around 75 million rows in ppre2012. We needed both the source (un-partitioned) and target (partitioned) tables in DEV for comparison. Queries normally hit the current-year partition. Just to state that I am a developer and don't have full visibility into the production instance.
Now that our tests are complete, we would like to promote this to production. Obviously, in production we would not need both source and target tables. In all probability this will be performed over a weekend window. Therefore I would like to suggest the following:
1) Use expdp to export the source table.
2) Drop the source table.
3) Create a new source table, partitioned, with no indexes.
4) Use impdp to get the data back into the table.
5) Create the global index (a unique index to enforce uniqueness) and the rest of the indexes as local.
6) Run dbms_stats.gather_table_stats(user, 'SOURCE', cascade => true); this takes around 2 hours in DEV.
My question is whether importing 100 million rows will cause issues with the undo segments. Could we import the data in stages, say the current partition p2012 (20 million rows) first?
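Data Pump import normally loads table data via direct path, which should keep undo usage for the rows themselves modest, especially with no indexes in place yet. If you still want to commit the data in slices, the QUERY filter can bring in one partition's worth at a time; a hedged sketch (credentials, directory object, and file names are hypothetical):

# export the whole source table once
expdp user/pass directory=DP_DIR dumpfile=source.dmp logfile=source_exp.log tables=SOURCE

# import only the p2012 rows in this pass
impdp user/pass directory=DP_DIR dumpfile=source.dmp logfile=imp_2012.log \
  tables=SOURCE content=DATA_ONLY table_exists_action=APPEND \
  query=SOURCE:\"WHERE entry_date_time >= DATE '2012-01-01' AND entry_date_time < DATE '2013-01-01'\"

Repeat the impdp step with the other date ranges until all 100 million rows are in.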
I have a big table into which we load about 37 million records. We have an Informatica ETL job that loads the data in bulk mode and creates the indexes after completion. The data load takes about 1 hour and index creation about half an hour, so in total it takes 90 to 95 minutes.
I thought that if I partitioned the table and loaded the partitions in parallel, it would improve performance. We created 4 partitions with about 9 million records each. The bulk-mode data load now completes in 25 minutes, but creating the index afterwards takes about 40 minutes, so the total load time is 65 minutes.
Is there a way to improve performance further and complete the load in half an hour?
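One thing worth trying is building the indexes as LOCAL with PARALLEL and NOLOGGING, so each of the 4 partitions gets its own small index segment built concurrently; a hedged sketch with hypothetical names:

CREATE INDEX big_tab_ix ON big_tab (load_date)
  LOCAL PARALLEL 4 NOLOGGING;

-- restore normal attributes once the build is finished
ALTER INDEX big_tab_ix NOPARALLEL;
ALTER INDEX big_tab_ix LOGGING;

NOLOGGING skips redo generation during the build, so take a backup afterwards if recoverability matters.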
At the moment we use range-hash partitioning on a large dimension table (dimensional-model warehouse) with 2 levels; it is range-partitioned on columns only available at the bottom level of the hierarchy: date and issue_id.
The result is a partition with a null value, and I assume we would get a null partition in the large fact table too if it were partitioned by reference to the large dimension. The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.
It was suggested that we would get automatic partition-wise joins if we used reference partitioning; I would have thought we would get that anyway with range-hash partitioning on both the dimension and the fact table.
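For comparison, a minimal sketch of reference partitioning with hypothetical names. The foreign key must be NOT NULL and enabled, which is exactly why a reference-partitioned fact table cannot end up with a null partition:

CREATE TABLE dim_issue
( issue_id  NUMBER PRIMARY KEY,
  load_date DATE NOT NULL )
PARTITION BY RANGE (load_date)
( PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01') );

CREATE TABLE fact_sales
( issue_id NUMBER NOT NULL
    CONSTRAINT fk_fact_issue REFERENCES dim_issue (issue_id),
  amount   NUMBER )
PARTITION BY REFERENCE (fk_fact_issue);

The child inherits the parent's partitioning exactly, so the optimizer can always perform a full partition-wise join between them; with two independently range-hash partitioned tables that only happens when the partitioning keys and bounds line up.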
Database version and environment:
DB : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
OS : HP-UX nduhi18 B.11.31 U ia64 1022072414 unlimited-user license
APP : SAP - ERP

I have to RANGE-partition one table on either UPDATED_ON or PROFILE; the table has the following structure:
Name              Null?    Type
----------------- -------- ------------
MANDT             NOT NULL VARCHAR2(9)
MR_ID             NOT NULL VARCHAR2(60)
PROFILE           NOT NULL VARCHAR2(54)
REGISTER_ID       NOT NULL VARCHAR2(30)
INTERVAL_DATE     NOT NULL VARCHAR2(24)
AGGR_CONSUMPTION  NOT NULL NUMBER(21,6)
MDM_VERS_NO       NOT NULL VARCHAR2(9)
MDP_UPDATE_DATE   NOT NULL VARCHAR2(24)
MDP_UPDATE_TIME   NOT NULL VARCHAR2(18)
NMI_CONFIG        NOT NULL VARCHAR2(120)
NMI_CONFIG_FLAG   NOT NULL VARCHAR2(3)
MDM_DATA_STRM_ID  NOT NULL VARCHAR2(6)
NSRD              NOT NULL VARCHAR2
[Code]....
As far as I know, RANGE suits DATE or NUMBER columns best, and INTERVAL partitioning is only possible on DATE or NUMBER. The PROFILE column is of VARCHAR2 datatype. I know I can still RANGE-partition on it, since Oracle compares VARCHAR2 partition keys as strings when routing inserts, but INTERVAL is not possible. How do I RANGE-partition on PROFILE? The CREATED_ON column is a NUMBER with decimals.
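RANGE partitioning on a VARCHAR2 key is legal; the partition bounds are simply compared as strings. A hedged sketch with hypothetical bounds, splitting PROFILE alphabetically:

CREATE TABLE mr_profile_part
( mandt   VARCHAR2(9)  NOT NULL,
  profile VARCHAR2(54) NOT NULL )
PARTITION BY RANGE (profile)
( PARTITION p_a_to_m VALUES LESS THAN ('N'),
  PARTITION p_n_to_z VALUES LESS THAN (MAXVALUE) );

INTERVAL, as noted, remains unavailable for VARCHAR2, since Oracle cannot compute "the next" string bound automatically.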
I am creating a table from an existing table in another schema. The existing table contains data. When I use the query create table m_voucher as select * from ipm.m_voucher, I get all the data from m_voucher, but I want an empty m_voucher table. What would the query be to create the empty m_voucher table?
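The usual trick is to keep the SELECT for its structure but add a predicate that can never be true, so no rows are copied:

CREATE TABLE m_voucher AS
SELECT * FROM ipm.m_voucher
WHERE 1 = 0;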
We have a transaction table with 30 million rows. The table is not partitioned to date, and we need to create partitions on it. Our idea was to move the data to a temporary table, create range partitioning on the original table, and move the data back.
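A hedged alternative that avoids copying 30 million rows twice: swap the existing segment into a one-partition table with EXCHANGE PARTITION, then SPLIT that partition into the real ranges (table names and bounds are hypothetical; the column lists must match exactly):

CREATE TABLE txn_part
( txn_id   NUMBER,
  txn_date DATE )
PARTITION BY RANGE (txn_date)
( PARTITION p_all VALUES LESS THAN (MAXVALUE) );

-- instantaneous dictionary swap, no data movement
ALTER TABLE txn_part
  EXCHANGE PARTITION p_all WITH TABLE txn
  WITHOUT VALIDATION;

-- carve the single partition into real ranges, then rename txn_part to txn
ALTER TABLE txn_part SPLIT PARTITION p_all
  AT (DATE '2013-01-01')
  INTO (PARTITION p_pre2013, PARTITION p_all);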
Can we modify an existing constraint on a table? I have a table-level UNIQUE constraint on 3 columns of a table, and I need to reduce the UNIQUE constraint to 2 columns. Instead of dropping and recreating the constraint, is there any option to modify the existing constraint?
Example:
CREATE TABLE TEST_CONST (NUM1 NUMBER, NUM2 NUMBER, NUM3 NUMBER, UNIQUE (NUM1, NUM2, NUM3));
SELECT * FROM USER_CONS_COLUMNS UCC WHERE UCC.TABLE_NAME = 'TEST_CONST';
ALTER TABLE TEST_CONST MODIFY CONSTRAINT SYS_C0025132 UNIQUE(NUM1,NUM2);
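ALTER TABLE ... MODIFY CONSTRAINT can only change a constraint's state (ENABLE, DISABLE, VALIDATE, RELY, and so on), not its column list, so the statement above fails. The usual route is drop-and-recreate, ideally with an explicit name this time:

ALTER TABLE test_const DROP CONSTRAINT sys_c0025132;
ALTER TABLE test_const ADD CONSTRAINT test_const_uq UNIQUE (num1, num2);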
I have tables property and flat, and both tables have a common column, cvp_id. Now I want to add a cluster on this column; how can I add a cluster to tables that already exist?
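An existing table cannot be altered into a cluster; the usual approach is to create the cluster and its index, then rebuild each table inside it. A hedged sketch (cluster and new-table names are hypothetical):

CREATE CLUSTER cvp_cluster (cvp_id NUMBER)
  SIZE 1024;

-- an index cluster needs its cluster index before any rows go in
CREATE INDEX cvp_cluster_ix ON CLUSTER cvp_cluster;

CREATE TABLE property_new
  CLUSTER cvp_cluster (cvp_id)
  AS SELECT * FROM property;

Repeat the CREATE TABLE ... CLUSTER step for flat, then drop the old tables and rename the new ones into place.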
I have 2 tables that don't have primary keys, and both have the same number of rows. I want to create a new table by taking some columns from table1 and some columns from table2, combining the first row of table1 with the first row of table2, and so on.
Below is an example:
TABLE1
ACOL1 ACOL2 ACOL3
A1    A2    A3
B1    B2    B3
C1    C2    C3
TABLE2
BCOL1 BCOL2 BCOL3
11    12    13
21    22    23
31    32    33
COMBINED_TABLE
ACOL1 BCOL2 BCOL3
A1    12    13
B1    22    23
C1    32    33
I tried the query below, but no luck; it gives this error:
Query:
create table COMBINED_TABLE AS
select a.ACOL1, b.BCOL2, b.BCOL3
from (select ACOL1, rownum from TABLE1) a,
     (select BCOL2, BCOL3, rownum from TABLE2) b
WHERE a.rownum = b.rownum
Error : ORA-01747:"invalid user.table.column, table.column, or column specification"
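ORA-01747 is raised because ROWNUM is a pseudocolumn: it has to be aliased inside each subquery before the outer query can reference it. A corrected sketch:

CREATE TABLE combined_table AS
SELECT a.acol1, b.bcol2, b.bcol3
FROM  (SELECT acol1, ROWNUM AS rn FROM table1) a,
      (SELECT bcol2, bcol3, ROWNUM AS rn FROM table2) b
WHERE a.rn = b.rn;

Note that without an ORDER BY inside the subqueries the pairing of rows is not guaranteed to be stable; with no primary keys there is no reliable "first row" in a relational table.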
declare
  vnum  number;
  vname varchar2(50) := 't1';
begin
  -- check whether the table already exists (the dictionary stores names in upper case)
  begin
    select count(*) into vnum from dba_tables where table_name = upper(vname);
    dbms_output.put_line('table count ' || vnum);
  exception
    when others then
      vnum := 0;
  end;
  -- drop it if present
  begin
    if vnum > 0 then
      execute immediate 'drop table ' || vname;
      dbms_output.put_line('table dropped');
    end if;
  exception
    when others then
      dbms_output.put_line('table does not exist');
  end;
  -- recreate it
  execute immediate 'create table ' || vname || ' ( n number)';
  dbms_output.put_line('table created');
end;
I have a 45-million-row table (not partitioned). I want to compress the old data, everything other than the last 3 months, and I should not go for partition compression. Select queries will rarely be fired against that old data. How can I compress the table without affecting the indexes or the dependent procedures, packages, and functions?
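A hedged sketch: basic table compression only applies to direct-path operations, so existing rows are compressed by moving the segment. On 12.2 and later the move can run online with indexes maintained; on earlier releases the move marks indexes UNUSABLE and they must be rebuilt (names are hypothetical). Note that a move compresses the whole segment; compressing only the rows older than 3 months really does call for partitioning.

-- 12.2+: indexes stay usable during and after the move
ALTER TABLE big_tab MOVE ONLINE COMPRESS;

-- pre-12.2 equivalent: move, then rebuild every index
ALTER TABLE big_tab MOVE COMPRESS;
ALTER INDEX big_tab_ix REBUILD;

Dependent procedures, packages, and functions are unaffected by a move beyond a one-time cursor invalidation.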
I created a new table named USERLOG with two fields from a previous view. The table already consists of about 9000 records. The two fields, taken from the view weblog_views, are C_IP (the IP address) and WEB_LINK (the URL). This is the code I used:
CREATE TABLE USERLOG AS SELECT C_IP, WEB_LINK FROM weblog_views;
I want to add another column to this table called USER_ID, which would consist of a sequence running from 1 to 9000 to give each existing row a unique id. I'm using Oracle SQL Developer: ODMiner version 3.0.04. I tried using the AUTO-INCREMENT option:
ALTER TABLE USERLOG ADD USER_ID INT UNSIGNED NOT NULL AUTO_INCREMENT;
But I get an error with this:
Error report: SQL Error: ORA-01735: invalid ALTER TABLE option 01735. 00000 - "invalid ALTER TABLE option"
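AUTO_INCREMENT is MySQL syntax, hence the ORA-01735. In Oracle (before 12c identity columns) the usual pattern is a plain NUMBER column backfilled once, plus a sequence for future inserts (sequence name is hypothetical):

ALTER TABLE userlog ADD (user_id NUMBER);

UPDATE userlog SET user_id = ROWNUM;
COMMIT;

-- keep the numbering going for new rows
CREATE SEQUENCE userlog_seq START WITH 9001;
-- e.g. INSERT INTO userlog (user_id, c_ip, web_link) VALUES (userlog_seq.NEXTVAL, ..., ...);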
1) How can I add a new column at a particular position in an existing table, instead of at the end?
2) I created a table without specifying datatype sizes, as in create table dummy (name char, age number). What default sizes are allocated for those columns?
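On question 1, a heap table always appends new columns at the end; placing a column elsewhere means recreating the table (for example, CTAS with the desired column order). On question 2, the quickest way to see the answer is to create the table and inspect the dictionary (a small sketch):

CREATE TABLE dummy (name CHAR, age NUMBER);

SELECT column_name, data_type, data_length, data_precision
FROM   user_tab_columns
WHERE  table_name = 'DUMMY';

NAME comes back as CHAR with data_length 1, i.e. CHAR defaults to a single byte; AGE comes back as NUMBER with null precision, meaning it accepts up to the maximum of 38 significant digits.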