SQL & PL/SQL :: Exchange Partitions Between Actual Table And Its Corresponding Staging Table
Nov 26, 2012
We have an interval-partitioned table that is accessed very frequently. We are supposed to exchange partitions between this actual table and its corresponding staging table.
In order to keep the newly created partitions empty, is there a way to restrict other applications from using them before we push data from the staging table into the actual table?
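For reference, a minimal sketch of the exchange step for an interval-partitioned table, with hypothetical names fact_tab (interval-partitioned on a date column) and fact_stage (a matching staging table); the PARTITION FOR clause sidesteps the system-generated names that interval partitions get:

ALTER TABLE fact_tab
  EXCHANGE PARTITION FOR (TO_DATE('2012-11-26', 'YYYY-MM-DD'))
  WITH TABLE fact_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;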
I have 2 similar tables with the same columns. Data matching a condition is inserted into the second table from the first, and the first table is truncated. Then the data is moved back from the second table to the first and the second table is truncated. The second table only stores data for a short period of time. The first table has indexes. Do I need any index on the second table?
The only query against the second table is INSERT INTO second SELECT * FROM first, and similarly INSERT INTO first SELECT * FROM second.
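A minimal sketch of the cycle described, with the poster's generic table names; the APPEND hint is an optional direct-path variant, not something the original post states:

INSERT /*+ APPEND */ INTO second SELECT * FROM first WHERE <condition>;
TRUNCATE TABLE first;
INSERT /*+ APPEND */ INTO first SELECT * FROM second;
TRUNCATE TABLE second;

For the two statements shown, an index on the table being inserted into is only maintenance overhead during the insert; it would matter only for other reads against that table.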
I have a table that has 2 columns of nested table type. Now, in the purge process, when I try to truncate or drop a partition from this table, I get an error saying I can't do this (because the table has nested tables). How will I be able to truncate/drop a partition from this table? If I change the column types from nested table to varray, will it work?
Also, is there any short method of moving existing data from a nested table column to a varray column (having the same fields as the nested table)?
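If the element types are compatible, one hedged option is a single INSERT ... SELECT using CAST ... MULTISET; all names here (old_tab, new_tab, nt_col, v_col, my_varray_t) are hypothetical:

INSERT INTO new_tab (id, v_col)
SELECT o.id,
       CAST(MULTISET(SELECT t.column_value
                     FROM TABLE(o.nt_col) t) AS my_varray_t)
FROM old_tab o;
-- for an object element type, select VALUE(t) instead of t.column_value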
I have a staging table STG_TABLEA which has a primary key, discount_seq_no.
I am creating a PL/SQL procedure to populate the primary key (discount_seq_no) from a database sequence. The intent is to keep it populated with the sequence's next value, incrementing by 1.
I am using Oracle version 11.2.0.2.
I've put together the code below but am not sure of the next steps...
BEGIN
  -- assumes a sequence (here called stg_tablea_seq) already exists
  UPDATE stg_tablea
     SET discount_seq_no = stg_tablea_seq.NEXTVAL;
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    RAISE;
END;
/
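If the sequence does not exist yet, something like the following would be needed first; the sequence name stg_tablea_seq is an assumption, not from the original post:

CREATE SEQUENCE stg_tablea_seq
  START WITH 1
  INCREMENT BY 1
  NOCACHE;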
While trying the partition exchange feature of Oracle with 2 hash-partitioned tables, I came to learn that I can't directly exchange partitions between 2 partitioned tables.
I have two hash-partitioned tables, so moving partition data from one table to the other will involve:
1) Exchange from the partitioned table to a non-partitioned table.
2) Exchange from the non-partitioned table to the new partitioned table.
But I am not sure which hash partition my data will go into in the new partitioned table (the data to be moved has a single value of the key on which the tables are partitioned).
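A hedged sketch of the two-step move, all names hypothetical. In step 2, the PARTITION FOR clause should resolve to whichever hash partition the key value maps to, so you do not need to know the partition name in advance (worth verifying on your exact version):

-- step 1: park the source partition's rows in a plain heap table
ALTER TABLE src_tab EXCHANGE PARTITION p_src WITH TABLE tmp_tab;

-- step 2: exchange into the target partition that the key value 12345 hashes to
ALTER TABLE dst_tab EXCHANGE PARTITION FOR (12345) WITH TABLE tmp_tab;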
I am having a similar problem to the one above, but ONLY on a UNIX box, where my datafile is delimited by "|". The last field is ITM_CMNT, declared as VARCHAR2(60) in Oracle. When I have exactly 60 bytes in the last field, the record is rejected with "actual 61, max allowed is 60". If I reduce it to fewer than 60 bytes, the value is stored enclosed in double quotes, with the closing double quote on the next line.
"PROC,RAM,FLPY,HD,ACT MTX CLR DSP,D/PCMCIA,TRKBAL,LIT ION BA"
Expected: the value below is exactly 60 bytes.
PROC,RAM,FLPY,HD,ACT MTX CLR DSP,D/PCMCIA,TRKBAL,LIT ION BAT

LOAD DATA
INFILE *
INTO TABLE TMPTLI_LAWSON_ITM_MST
TRUNCATE
FIELDS TERMINATED BY "|"
(ITM_NO,
 HAZ_MAT_CD,
 ITM_SHRT_DS,
 ITM_SON "TRIM(:ITM_SON)",
 ADDED_DT DATE "YYYY-MM-DD",
 AVL_CD,
 ITM_CST_AMT,
 ITM_SLL_AMT,
 EXCHG_PRC_AMT,
 ITM_UOM "TRIM(:ITM_UOM)",
 PCK_QTY INTEGER EXTERNAL,
 SPC_HNDL_CD "TRIM(:SPC_HNDL_CD)",
 EFF_DT DATE "YYYY-MM-DD",
 ITM_CMNT "TRIM(:ITM_CMNT)")
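If the file was created on Windows, a trailing carriage return would explain both symptoms (the unexpected 61st byte, and the stray character pushing the closing quote onto the next line). A hedged control-file tweak for the last field: CHAR(61) lets SQL*Loader read the extra byte, and the expression strips a trailing CR if present:

ITM_CMNT CHAR(61) "RTRIM(:ITM_CMNT, CHR(13))"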
My developer came to me with a requirement to create partitions on a table which has 40 million records. His exact requirement is to create as many partitions as needed such that no partition exceeds 5k-10k records, where the records in a partition are inserted/updated on the same date (i.e. using a source_timestamp column). How can this be accomplished?
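If records that share a date should share a partition, daily interval partitioning on the timestamp column is one hedged way to get there; all names in this sketch are hypothetical:

CREATE TABLE big_tab (
  id               NUMBER,
  source_timestamp DATE
)
PARTITION BY RANGE (source_timestamp)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(
  PARTITION p_initial VALUES LESS THAN (TO_DATE('2012-01-01', 'YYYY-MM-DD'))
);

Whether each daily partition stays in the 5k-10k range depends entirely on how evenly the rows spread across dates; interval partitioning fixes the boundary, not the row count.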
I have tables in production which have a huge number of partitions (say more than 100), but I would like to extract the table definition along with only a few specified partitions (say 10 partitions). How do I do that, and which is the best way to extract the DDL in the right format?
Because when I use the metadata package, the format of the extracted DDL is not good. Is there a way to extract a table definition with only the specified partition names?
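DBMS_METADATA's output can usually be cleaned up with session transform parameters before calling GET_DDL; a hedged sketch (table name hypothetical):

BEGIN
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'PRETTY', TRUE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
END;
/
SELECT DBMS_METADATA.GET_DDL('TABLE', 'MY_PART_TABLE', USER) FROM dual;

Note that GET_DDL still emits the full partition list; as far as I know there is no transform that filters the table DDL down to a named subset of partitions, so trimming to 10 partitions would remain a post-processing step.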
I couldn't either DROP or TRUNCATE the table partitions that were created. Here are the DDLs and DMLs I'm using.
Create table student (no number(2), name varchar(2))
partition by range (no)
(partition p1 values less than (10),
 partition p2 values less than (20),
 partition p3 values less than (30),
 partition p4 values less than (40));

Insert into student values (1, 'a');
Insert into student values (11, 'b');
Insert into student values (21, 'c');
Insert into student values (31, 'd');
When I do the following query, it returns data.
SELECT * FROM STUDENT PARTITION(p1);
But, when I try to perform any of the following queries, it says invalid partition name.
ALTER TABLE STUDENT DROP PARTITION p4;
ALTER TABLE STUDENT TRUNCATE PARTITION p3;
I am using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit
I have to drop some partitions from a table in the production environment (to free up space). The environment has to be continuously available. I was considering using ALTER TABLE ... DROP PARTITION ... UPDATE INDEXES, but it is slow because of the UPDATE INDEXES clause. Is there another way to remove these data?
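One commonly discussed alternative (a sketch, index name hypothetical) is to drop without maintaining the global indexes and rebuild them afterwards; note that the global indexes are UNUSABLE until the rebuild completes, which may conflict with the availability requirement:

ALTER TABLE big_tab DROP PARTITION p_old;   -- global indexes become UNUSABLE
ALTER INDEX big_tab_gidx REBUILD ONLINE;    -- rebuild afterwards, online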
What is the limit on the number of partitions on a table? On many forums, 1024K-1 is given as the maximum limit, but I am not able to understand exactly what this 1024K-1 means (here K is 1024, so 1024K-1 = 1024 × 1024 − 1 = 1,048,575 partitions).
I ran an exchange partition from a non-partitioned table into a partitioned table with the following parameters: WITHOUT VALIDATION UPDATE GLOBAL INDEXES, since we have a GLOBAL index (the GLOBAL index is a must). After the exchange, if I run a simple query on the first column of the PK, the plan is very bad, and the EM advisor advises me to build an index based on that column. I'm using 11.2.0.2.
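For reference, the statement described presumably looked something like this (names hypothetical):

ALTER TABLE part_tab EXCHANGE PARTITION p1 WITH TABLE stage_tab
  WITHOUT VALIDATION
  UPDATE GLOBAL INDEXES;

If the exchanged segment carried no statistics, or stale ones, a poor plan right after the exchange would not be surprising; that is a guess, not something the original post confirms.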
I am trying to exchange a partition and I am seeing an ORA-14096. I've done this several times before with other tables without problems. I am pretty sure my columns and indexes really do match.
Connected to Oracle Database 11g Enterprise Edition Release 11.2.0.3.0
SQL> ALTER TABLE se_keywords EXCHANGE PARTITION p_user_1068 WITH TABLE se_keywords_1068 INCLUDING INDEXES WITHOUT VALIDATION;
ALTER TABLE se_keywords EXCHANGE PARTITION p_user_1068 WITH TABLE se_keywords_1068 INCLUDING INDEXES WITHOUT VALIDATION
ORA-14096: tables in ALTER TABLE EXCHANGE PARTITION must have the same number of columns
SQL> SELECT COUNT(*) cnt
  2  FROM all_tab_columns a
  3  FULL OUTER JOIN all_tab_columns b ON a.column_name = b.column_name
  4  AND b.owner = USER
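A common cause of ORA-14096 when the visible columns match is a hidden column on one side (for example from SET UNUSED or a function-based index). ALL_TAB_COLUMNS does not show those; a hedged check against USER_TAB_COLS, which includes them (assuming both tables are in the current schema):

SELECT table_name, column_name, column_id, hidden_column
FROM user_tab_cols
WHERE table_name IN ('SE_KEYWORDS', 'SE_KEYWORDS_1068')
ORDER BY table_name, column_id;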
I am facing a problem while inserting data into the core table from staging (both the core table and the staging table are in the same database but in different schemas).
I am using the command below:
INSERT /*+ APPEND PARALLEL(core_table, DEFAULT, DEFAULT) */ INTO core_table
SELECT * FROM staging;
The core table is quite big: it contains millions of records, is partitioned on a date column, and still carries old statistics from 2007. Fetching the data from staging is very fast, approx. 1 million records in 2 minutes. We insert one day of data into the core table from staging daily, and it is taking 3 hours.
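Given the 2007 statistics mentioned above, refreshing them is an obvious first step to try; a hedged sketch (the owner name is hypothetical):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'CORE_SCHEMA',   -- hypothetical owner
    tabname     => 'CORE_TABLE',
    granularity => 'AUTO',
    cascade     => TRUE);
END;
/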
We have Development, Staging/UAT (installed on XX.XX.XX.10), and Production (installed on XX.XX.XX.20) environments respectively. I have questions about getting the data from the Production environment into the Staging environment. The overall PROD database size is around 250 GB.
STAGING DATABASE DETAILS
SID: STG_DB
Staging schema name: schema_UAT
Replication schema name: schema_PrdReplica (this is the schema where the production data gets loaded daily)
PROD DATABASE DETAILS
SID: PROD_DB
Prod schema name: schema_PROD
What is happening now:
----------------------------
There is a script (a stored procedure) on the staging (STG_DB.schema_PrdReplica) environment which executes nightly and does the replication. Currently we use DBMS_DATAPUMP to get the ENTIRE data/metadata (everything) from Production to Staging. It is taking significantly more time: approx. 8 hours to replicate everything from PROD_DB.schema_PROD to STG_DB.schema_PrdReplica.
What I am expecting:
-----------------------------
I want to reduce the replication time.
I have heard about Level 0 (full backup) and Level 1 (incremental cumulative) backups in RMAN. I am planning to take a full backup (Level 0) of PROD_DB.schema_PROD on Sunday and restore it to STG_DB.schema_PrdReplica immediately. On weekdays (Mon - Fri) I will take a Level 1 (incremental cumulative) backup and restore that to STG_DB.schema_PrdReplica.
I am assuming that by doing so, the overall replication time will be reduced. How can I implement this with a script, given that the two servers are on different machines?
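The backup side of that plan would look roughly like the following RMAN commands. Note this is only a sketch of the backup half: RMAN works at the database/datafile level rather than the schema level, so restoring into a single schema of a different database is not something these commands do by themselves.

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             -- Sunday
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  -- Mon-Fri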
I used the Exchange Partition feature to swap segments between 2 tables, one partitioned and one non-partitioned. The exchange went well. However, all the data in the partitioned table has gone into the partition that holds the MAXVALUE bound.
/** actual table names changed due to client confidentiality issues */
-- Drop the 2 intermediate tables if they already exist
drop table ordered_inv_bkp cascade constraints;
drop table ordered_inv_t cascade constraints;

/**
 1st, create a non-partitioned table from ORDERED_INV and then add the
 primary key and unique index(es):
*/
create table ordered_inv_bkp as select * from ordered_inv;

alter table ordered_inv_bkp add constraint ordinvb_pk primary key (ordinv_id);

-- create unique index ordinv_scinv_uix on ordered_inv_bkp( SCP_ID ASC,
[code]....
-- Next, we have to create a partitioned table ORDERED_INV_T with a similar
-- structure as ORDERED_INV.
-- This is a bit tricky and involves some PL/SQL code
-- Add section to set default values for the intermediate table OL_ORDERED_INV_T
FOR crec_cols IN (
    SELECT u.column_name, u.nullable, u.data_default, u.table_name
    FROM USER_TAB_COLUMNS u
    WHERE u.table_name = 'ORDERED_INV'
      AND u.data_default IS NOT NULL
) LOOP
[code]....
-- Next, use exchange partition for the actual swap
-- Between ordered_inv_t and ordered_inv_bkp
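-- A hedged sketch of what the elided exchange step presumably looked like
-- (the partition name p_max is hypothetical):
--   ALTER TABLE ordered_inv_t EXCHANGE PARTITION p_max WITH TABLE ordered_inv_bkp
--     INCLUDING INDEXES WITHOUT VALIDATION;
-- Note: an exchange swaps segments with exactly one named partition, so every
-- row from the non-partitioned table lands in that one partition regardless of
-- its key value; exchanging into the MAXVALUE partition would produce exactly
-- the symptom described above.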
-- Analyze both tables : ordered_inv_t and ordered_inv_bkp
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_T');
  DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'HENRY220', TABNAME => 'ORDERED_INV_BKP');
END;
/
SET TIMING ON;
My company already works with Oracle 10g. I developed an application using this database and ASP.NET. It allows management of the DB and exploitation of the data by clients via ASP pages.
Now I must use Microsoft Exchange (especially the calendar) to retrieve data from the Oracle database and embed links in my ASP pages to the calendar.
Being new to Exchange, my questions are:
- Can it be done?
- Should I use a particular driver?
- Any ideas?
I am trying to xcom a file from a Linux server location to a Windows server location through a shell script. When I execute my DBMS_SCHEDULER job to trigger the shell script, I get the error below.
Error report:
ORA-27369: job of type EXECUTABLE failed with exit code: Exchange full
ORA-06512: at "SYS.DBMS_ISCHED", line 150
ORA-06512: at "SYS.DBMS_SCHEDULER", line 441
ORA-06512: at "CUSTDOMAIN.TSC_041_REPORT_PRC", line 50
[code]....
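For context, a minimal sketch of how such an external job is typically defined; the job name and script path here are hypothetical, not from the original post:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'XCOM_FILE_JOB',                      -- hypothetical name
    job_type   => 'EXECUTABLE',
    job_action => '/home/oracle/scripts/xcom_file.sh',  -- hypothetical path
    enabled    => TRUE);
END;
/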
When we try to create a NUMBER column whose scale is greater than its precision (e.g. NUMBER(2,5)), the table definition is accepted, but we are unable to insert any values into the table. How does it store the value internally?
SQL> drop table precision_test;
Table dropped
SQL> create table precision_test(name number(2,5));
Table created
SQL> insert into precision_test values (1);
insert into precision_test values (1)
[code]....
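For what it's worth, NUMBER(p,s) with s > p only admits values whose significant digits fit in p after rounding to s decimal places, i.e. absolute values below 10^(p-s); with NUMBER(2,5) that means below 0.001. A hedged illustration:

INSERT INTO precision_test VALUES (0.00099);  -- fits: 99 x 10^-5 is 2 significant digits
INSERT INTO precision_test VALUES (0.001);    -- fails: 100 x 10^-5 needs 3 digits (ORA-01438)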
I have a procedure which executes every Monday. It was not executed last Monday. Can I execute the procedure on some other day without changing the actual procedure?
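If the Monday run is driven by a scheduler job, it can be kicked off ad hoc on any day without touching the procedure itself; a hedged sketch (the job and procedure names are hypothetical):

BEGIN
  DBMS_SCHEDULER.RUN_JOB('WEEKLY_MONDAY_JOB');  -- hypothetical job name
END;
/

-- or simply invoke the procedure directly:
BEGIN
  my_weekly_proc;  -- hypothetical procedure name
END;
/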
I look after a team of DBAs, and I have a request to free up space on our very expensive storage system. However, the answers on how to do this differ, so I'd like to ask for external input. Not being a technical person, I see the world as quite black and white, meaning you delete data and you free space. But after doing much reading I understand this is not the case: you essentially create fragmentation within the datafile, so the DB has lots more space to write into, but you are not actually freeing space at the OS level, and even if you shrink the file it doesn't free space unless you do a reorg?
We have, as an example, a DB with 2 billion rows of data in 1 table, with no partitioning, just one large table. We have worked out that we can probably delete 1 billion rows, or even better keep only a rolling 3-month window of data. What would be the suggested way of deleting this data and reclaiming the disk space so that additional space is actually made available at the OS level?
How about deleting the data and then reclaiming the space? From my reading it looks like it might be something like: delete, then create new tablespace partitions from this data. This in theory would create a new tablespace in newly created datafiles, which would result in the data being reorganised and taking up less physical space, and when completed you point to the newly created partitions and drop the old tables.
I'd like to hear how others have done this, as it must be a common problem that people have solved in different ways. What commands and procedures have been used? One sketch is given below.
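A hedged sketch of the rebuild-into-partitions approach described above (all names hypothetical): create a partitioned copy holding only the 3-month window, swap names, and drop the old segment so the space can actually be handed back at the OS level:

CREATE TABLE big_tab_new
PARTITION BY RANGE (event_dt)
(
  PARTITION p_2012_10 VALUES LESS THAN (DATE '2012-11-01'),
  PARTITION p_2012_11 VALUES LESS THAN (DATE '2012-12-01'),
  PARTITION p_2012_12 VALUES LESS THAN (DATE '2013-01-01')
)
AS
SELECT * FROM big_tab
WHERE event_dt >= DATE '2012-10-01';

-- then: drop big_tab, rename big_tab_new to big_tab, recreate indexes,
-- constraints and grants, and resize or drop the now-empty datafiles
-- to release the space to the OS.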