Server Administration :: Reorganize A Table And Index After The Deletion Of Records From Table?
Feb 7, 2012
We deleted millions of records from a table.
1. Is it necessary to reorganize a table and index after the deletion of records from the table? I ask because I see some change in table size after table and index reorganization.
2. Will reorganizing the table and index improve database performance?
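For reference, a minimal sketch of the two usual ways to reclaim the space (the table and index names EMP_HIST and EMP_HIST_PK are placeholders, not from the original post):

-- Option 1: rebuild the segment; indexes become UNUSABLE after a MOVE and must be rebuilt
ALTER TABLE emp_hist MOVE;
ALTER INDEX emp_hist_pk REBUILD;

-- Option 2: online shrink (needs row movement and an ASSM tablespace)
ALTER TABLE emp_hist ENABLE ROW MOVEMENT;
ALTER TABLE emp_hist SHRINK SPACE CASCADE;  -- CASCADE also shrinks dependent indexes

Either of these lowers the high-water mark, which is what makes full scans cheaper after a mass delete; whether other queries get faster depends on how they access the table.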
It is not possible to run SHRINK SPACE against a table with a function-based index. This is documented, and I've tested it on the current release. I've reverse engineered it a bit, and the issue is in fact that you cannot SHRINK SPACE if there is an index on a virtual column:

SQL> create table t1(c1 number, c2 as (c1*2)) segment creation immediate;
Table created.
SQL> alter table t1 enable row movement;
Table altered.
SQL> alter table t1 shrink space;
Table altered.
SQL> create index i1 on t1(c2);
Index created.
SQL> alter table t1 shrink space;
alter table t1 shrink space
ERROR at line 1:
ORA-10631: SHRINK clause should not be specified for this object.
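A possible workaround, given the behaviour above (just a sketch, not something the original poster confirmed): drop the index on the virtual column, shrink, and recreate it afterwards.

SQL> drop index i1;
SQL> alter table t1 shrink space;
SQL> create index i1 on t1(c2);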
I would like to reorganize a table in order of its primary key, but I'm not sure if I'm expecting the right thing from the dbms_redefinition package.
I am working on Oracle 9i (9.2.0.8).
I have the following table:
SELECT * from SCOTT.TESTTABLE

        ID REF
---------- ----------
         1 FF
         2 BB
         3 CC
         4 DD
         8 EE
         6 ZZ
         7 YY
         5 GG
When I use:

exec dbms_redefinition.start_redef_table('SCOTT', 'TESTTABLE', 'TESTTABLE2', 'id id, ref ref', dbms_redefinition.cons_use_pk);
Would the newly created scott.testtable be created in order of the primary key (ID), so that a select * from scott.testtable would give me an ordered result?
When I do the test, the table before and after the redefinition is exactly the same, so why use CONS_USE_PK if it doesn't order the table by primary key?
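As far as I know, CONS_USE_PK only tells dbms_redefinition to track rows by primary key rather than by ROWID during the sync; it does not control the physical order, and a heap table never guarantees an ordered result from SELECT * without an ORDER BY anyway. If the goal is a one-off rebuild in PK order, a simpler sketch (using the table from the example above) is a CTAS with ORDER BY followed by a swap:

CREATE TABLE scott.testtable_ordered AS
  SELECT * FROM scott.testtable ORDER BY id;
-- then drop or rename the old table, rename testtable_ordered,
-- and recreate the primary key, indexes and grants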
I have to reorganize one table that is related to several other tables. The reorg is too slow when it runs on this table. I would like to create an image of the table and sync it with the original one in real time, so that when I run the reorg I can use the image table, which is not constrained by indexes and other objects. Once the reorg is done, I would like to rename the table. How could I do the replication in real time?
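One way to keep an image table in sync in near real time (a sketch only, assuming the source table is called BIG_TABLE and has a primary key) is a fast-refreshable materialized view:

CREATE MATERIALIZED VIEW LOG ON big_table WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW big_table_image
  REFRESH FAST ON COMMIT
  AS SELECT * FROM big_table;

ON COMMIT keeps the copy current at all times but adds work to every transaction on the source; REFRESH FAST ON DEMAND plus a frequent refresh job is the less intrusive variant.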
Let's consider a table where all rows fit into a single block:
SQL> create table test as select rownum id, '$'||rownum name from dual connect by level <= 530;
Table created.
SQL> create index i_test on test(id);
Index created.
SQL> begin
[code].....
Why does the approach with a full scan take longer even though the table occupies only one data block? PS: 11gR2.
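Without the elided code it is hard to say exactly what was compared, but one way to see where the extra work goes (a sketch using the TEST table created above) is to compare the session statistics of the two access paths with autotrace:

SQL> set autotrace traceonly statistics
SQL> select /*+ full(t) */ * from test t;
SQL> select * from test where id between 1 and 530;

The 'consistent gets' and 'physical reads' figures should show whether the difference comes from extra block visits (a full segment scan also reads the segment header and every formatted block up to the high-water mark, not just the one data block) or from something else such as fetch array size.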
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export and then import back the data.
Does anyone know if there is any way to move a partitioned table between tablespaces of different block sizes?
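If export/import turns out to be the only route, a Data Pump sketch (10g or later; schema, table, directory and tablespace names are placeholders):

expdp scott/tiger tables=SALES_PART directory=DATA_PUMP_DIR dumpfile=sales_part.dmp

impdp scott/tiger tables=SALES_PART directory=DATA_PUMP_DIR dumpfile=sales_part.dmp remap_tablespace=TS_8K:TS_16K table_exists_action=replace

REMAP_TABLESPACE rewrites the tablespace references on import, so the partitions are recreated in the target tablespace with its own block size.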
If, as part of housekeeping of the source database, we delete some records (older records), will the materialized view also be updated with the deletion? I believe the answer is yes. In that case, can we ensure that this delete does not happen?
Is there any way we can prevent the MView refresh from deleting records that were once inserted, even if we delete the same records in the source DB?
I am importing the dump. All the tables are getting imported, but when it comes to the part shown below it gets stuck there and nothing is shown after:
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
The dump I am importing is huge - around 46 GB.
select sl.sid, sl.serial#, sl.sofar, sl.totalwork
  from v$session_longops sl, v$session s, dba_datapump_sessions d
 where s.saddr = d.saddr
   and s.sid = sl.sid
   and s.serial# = sl.serial#
 order by start_time;
It is showing the lock for two SIDs. Should I kill the process and import the dump again? I used the below command to import the dumps
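Before killing anything, it may be worth checking whether the job is still working rather than hung. A sketch (the job name SYS_IMPORT_FULL_01 is only an example; take the real one from DBA_DATAPUMP_JOBS):

SQL> select owner_name, job_name, state from dba_datapump_jobs;

-- attach to the running job from the OS prompt and ask for its status
impdp system/password attach=SYS_IMPORT_FULL_01
Import> status

Building indexes for a 46 GB import can legitimately run for hours, so SOFAR/TOTALWORK slowly advancing in v$session_longops is a better sign of progress than the log appearing stuck on one object type.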
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly. The explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
SELECT DISTINCT EXPOSURE_REF
  FROM KBNAS.VW_EXPOSUREDETS_FOR_CCYREVAL
 WHERE EXPOSURE_CURRENCY='THB'
   AND BASE_TXN_CCY='USD'
   AND BRANCH_CODE='7000'
   AND (REVAL_STATUS='O')
   AND CONV_RATE<>'62'
   AND (EXPOSURE_AMOUNT<>0)
UNION
SELECT DISTINCT ED.EXPOSURE_REF
  FROM KBNAS.EXPOSURE_DETAILS ED,
[code].....
I have attached the DDL for the tables EXPOSURE_DETAIL (partitioned), LEDGERCARD and LEDGERCARDDETAILS, the DDL for the indexes on those tables, and the DDL for the views.
Issue: we have created the indexes, but when we check the explain plan a full table scan is still going on. I have attached the explain plan.
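For reference, one way to double-check what the optimizer is doing with the query (a sketch; run it against the full statement above):

EXPLAIN PLAN FOR
SELECT DISTINCT exposure_ref
  FROM kbnas.vw_exposuredets_for_ccyreval
 WHERE exposure_currency = 'THB'
   AND base_txn_ccy = 'USD'
   AND branch_code = '7000';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

If the plan still shows a full scan of EXPOSURE_DETAILS, the usual suspects are index columns that do not match the leading predicates, implicit datatype conversion on an indexed column (for example comparing a numeric column against the string '62'), or the view not merging.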
We need to load data from an index-by table into a table. The below code is working fine.
declare
  query varchar2(200);
  Type l_emp is TABLE OF emp%rowtype INDEX BY Binary_Integer;
  rec_1 l_emp;
begin
[Code]....
But the source table and target table are dynamic. For example, in the above code the source table (emp) and the target table (emp_b) are static. But in our scenario the target depends on the source table, as below: if the source is emp then the target is emp_b; if the source is emp1 then the target is emp_b1; and so on.
create or replace procedure p(source in varchar2, target in varchar2) as
  query varchar2(200);
  Type l_emp is TABLE OF emp%rowtype INDEX BY Binary_Integer;
  rec_1 l_emp;
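A sketch of one way to make the load dynamic (the procedure name COPY_TABLE and the DBMS_ASSERT calls are my additions, and it assumes source and target have the same column list):

create or replace procedure copy_table(p_source in varchar2, p_target in varchar2) as
begin
  -- table names cannot be bind variables, so the statement is built as a string;
  -- dbms_assert (10gR2+) guards against injection through the parameters
  execute immediate 'insert into ' || dbms_assert.simple_sql_name(p_target) ||
                    ' select * from ' || dbms_assert.simple_sql_name(p_source);
  commit;
end;
/

-- usage
exec copy_table('EMP', 'EMP_B');
exec copy_table('EMP1', 'EMP_B1');

For a straight copy like this, the index-by collection from the static version is not needed; it only becomes useful if each row has to be processed in PL/SQL before the insert.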
Oracle 11g. I have a large table of 125 million records - t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company. I need to select records from this large table that fit certain demographic criteria and insert them into a smaller table - T3_Leads - that will be updated with regard to when the lead is mailed and with other relevant information. In short: select records from this 125 million record table to insert into the smaller table.
I have tried a variety of things - views, materialized views, direct insert into the smaller table... I think I am probably missing other approaches. My current attempt has been to create a view using the query that selects the records, as shown below, and then use a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.
CREATE VIEW V_Market AS
WITH got_pairs AS
( SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */
         l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state,
         l.household_key, l.hh_type AS l_hh_type, l.address_key, l.narrowband_income,
         l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data,
         l.p1_seq_no, l.p2_seq_no,
         ROW_NUMBER () OVER ( PARTITION BY l.address_key ORDER BY l.hh_verification_date DESC ) AS r_num
    FROM t3_universe e
    JOIN t3_universe l ON l.address_key = e.address_key
                      AND l.zip_code = e.zip_code
                      AND l.p1_gender != e.p1_gender
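On the specific question of INSERT plus WITH: in Oracle the WITH clause is part of the subquery, so it can go straight after INSERT INTO without the intermediate view. A trimmed-down sketch of the shape (only a few of the columns; the join and filter conditions from the view above would go inside got_pairs):

INSERT /*+ APPEND */ INTO t3_leads (zip_code, address_key, household_key)
WITH got_pairs AS (
  SELECT zip_code, address_key, household_key,
         ROW_NUMBER() OVER (PARTITION BY address_key
                            ORDER BY hh_verification_date DESC) AS r_num
    FROM t3_universe
)
SELECT zip_code, address_key, household_key
  FROM got_pairs
 WHERE r_num = 1;

The APPEND hint requests a direct-path insert, which usually helps for bulk loads into T3_Leads, at the cost of locking the table for other DML during the insert.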
Step 1. Insert all records from the external table into the export table (truncate the export table first).
Step 2. Read in a record from the export map table.
Step 3. Search through the export table records looking for the key words BRANCH =. Compare the branch code with the branch code from the map table.
Step 4. If a match is found, mark all records in the export table for that worksheet with the global ID from the export map table, as follows: the first line of a worksheet is marked by the word WKSHTS; the last line of the worksheet is marked by the words COMPANY CONFIDENTIAL. We will need to capture the line break, so also mark the next line after the COMPANY CONFIDENTIAL line.
Step 5. Continue with Steps 2 - 4 until all records have been processed from the export map table.
First I have to create a procedure to insert data from the external table into the export table. Global ID will be blank; it will be updated with the mapping table's Global ID when the EB column's data (i.e. 8p, 2B etc.) matches the BRANCH = NA, 2B etc. of the datasheet loaded from the external table. The following is a sample datasheet:
WKSHTS  AAAAA BBBBBBBBBBB ELECTRONICS INC.  TIME REPORT-DATE  PAGE
SORT - BR, SLSREP  AEC FIELD SALES REPRESENTATIVE  16:14 09/21/12  1
BRANCH = 2B
EMPLOYEE NAME  SALVAAG, GREGG
Days in the Month  28
[code]....
There are 2 pages. I have to split this long report, stored in the WKSHT_LINE column of the export table, into 2 records; likewise, 500 pages means 500 records. Then find BRANCH = and the word that comes after it (i.e. NA, 2B etc.); if it matches the mapping table's EB column data, then the mapping table's GLOBAL ID will be copied to the export table's GLOBAL ID, which is blank.
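A sketch of the matching update (the names EXPORT_TABLE, EXPORT_MAP, WKSHT_LINE, EB and GLOBAL_ID are taken from the description above and may not match the real DDL exactly, and the SUBSTR length assumes two-character branch codes):

UPDATE export_table e
   SET e.global_id =
       ( SELECT m.global_id
           FROM export_map m
          WHERE m.eb = TRIM(SUBSTR(e.wksht_line,
                                   INSTR(e.wksht_line, 'BRANCH =') + 9, 2)) )
 WHERE e.wksht_line LIKE '%BRANCH =%'
   AND e.global_id IS NULL;

This only stamps the BRANCH = line itself; carrying the global ID to every line of the worksheet (from WKSHTS down to the line after COMPANY CONFIDENTIAL) needs a second pass driven by whatever column preserves the line order.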
Trying to auto-insert the newest records from one table into another table. I have a vendor-provided table that is part of my database (running Oracle 9i), so I can't change its underlying structure or their process stops working. However, I can add a trigger to it. What I want to do is this:
When the vendor's software inserts a new row (through their own automated process), I want to insert data from that same new record into another table of my own (where, of course, I can re-format it, etc., and make the data my own).
The original vendor table does not have an insertion timestamp field to work off of. What is the best way to trigger an insert off the latest inserted record? It works to replace all the records in the entire vendor table, but I only want to insert one record at a time.
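A sketch of a row-level trigger for this (VENDOR_TABLE, MY_COPY and the column names are placeholders, not the real vendor schema):

CREATE OR REPLACE TRIGGER trg_vendor_copy
AFTER INSERT ON vendor_table
FOR EACH ROW
BEGIN
  -- :NEW exposes only the row just inserted, so there is no need for a timestamp
  -- column or for re-reading the whole vendor table
  INSERT INTO my_copy (id, name, loaded_at)
  VALUES (:NEW.id, :NEW.name, SYSDATE);
END;
/

If the reformatting is heavy or can fail, a safer pattern is to have the trigger write only the key into a small staging table and do the transformation in a separate job, so errors in your code never break the vendor's insert.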
I am trying to update records in the target table based on the records coming in from the source. For instance, if an incoming record is present in the target table I update it in the target; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned based on the calendar key. I implement this whole logic using Informatica. Looking at the Informatica session log I find that the Informatica code is perfectly fine, but it is the update part that takes a long time (more than 5 days to update one million records). Find the TARGET TABLE query and the UPDATE query below.
TARGET TABLE:

CREATE TABLE OPERATIONS.DENIAL_REGRET_FACT
( CALENDAR_KEY          INTEGER NOT NULL,
  DAY_TIME_KEY          INTEGER NOT NULL,
  SITE_KEY              NUMBER NOT NULL,
  RESERVATION_AGENT_KEY INTEGER NOT NULL,
  LOSS_CODE             VARCHAR2(30) NOT NULL,
  PROP_ID               VARCHAR2(5) NOT NULL,
[code].....
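On the database side, one thing worth testing is a single set-based MERGE instead of row-by-row updates (a sketch only; the staging table STG_DENIAL_REGRET and the choice of join and update columns are assumptions, since the real UPDATE query is not shown):

MERGE INTO operations.denial_regret_fact t
USING stg_denial_regret s
   ON (    t.calendar_key          = s.calendar_key
       AND t.day_time_key          = s.day_time_key
       AND t.site_key              = s.site_key
       AND t.reservation_agent_key = s.reservation_agent_key )
WHEN MATCHED THEN
  UPDATE SET t.loss_code = s.loss_code,
             t.prop_id   = s.prop_id
WHEN NOT MATCHED THEN
  INSERT (calendar_key, day_time_key, site_key, reservation_agent_key, loss_code, prop_id)
  VALUES (s.calendar_key, s.day_time_key, s.site_key, s.reservation_agent_key, s.loss_code, s.prop_id);

If the join includes CALENDAR_KEY, partition pruning on the target plus a single pass over the source is usually a matter of minutes for a million rows, not days.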
SQL> Alter Table tb_hxl_user Shrink Space Cascade;
Alter Table tb_hxl_user Shrink Space Cascade
*
ERROR at line 1:
ORA-10635: Invalid segment or tablespace type
SQL> desc tb_hxl_user;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STATEDATE                                 NOT NULL DATE
 USERNUMBER                                NOT NULL VARCHAR2(13)
 PROVCODE                                  NOT NULL NUMBER
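ORA-10635 usually means the segment is not in an ASSM (automatic segment space management) tablespace, or is a segment type that SHRINK does not support. A quick check (a sketch):

SELECT tablespace_name, segment_space_management
  FROM dba_tablespaces
 WHERE tablespace_name IN (SELECT tablespace_name
                             FROM dba_tables
                            WHERE table_name = 'TB_HXL_USER');

If SEGMENT_SPACE_MANAGEMENT comes back MANUAL, SHRINK SPACE is not available for that table, and the alternatives are ALTER TABLE ... MOVE plus index rebuilds, or relocating the table to an ASSM tablespace.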
We are running 11g (11.2.0.3). We have a "working table" that is empty at the beginning of the day. Then we start adding rows (insert) with a key column called STATE with a value of 100. At the same time, there are other apps that pick up data in state 100, process that data, and change the state to 200 or 300. There is also another app that picks up data in state 200, processes that data, and changes the state to 300 or 400.
So in summary, the table starts out empty, then all the rows are in state 100, they slowly move to different states (200, 300, etc.), and by the end of the day they are all in 400.
My question is what would be the best way to collect stats on this table?
I was thinking of creating an hourly job to collect stats on that table:

exec dbms_stats.gather_table_stats ( ownname => 'SCOTT', tabname => 'WORK_T
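An alternative that is often suggested for volatile tables like this (a sketch; WORK_TABLE stands in for the real table name, which is cut off above) is to gather statistics once while the table holds a representative volume and then lock them, so every execution sees the same "typical" picture instead of whatever state the hourly job last caught:

exec dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname => 'WORK_TABLE');
exec dbms_stats.lock_table_stats(ownname => 'SCOTT', tabname => 'WORK_TABLE');

dbms_stats.unlock_table_stats reverses this if the data pattern ever changes.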
Name                           Null     Type
------------------------------ -------- ------------------------
ENTITY_ID                      NOT NULL VARCHAR2(100 CHAR)
ENTITY_TYPE_ID                 NOT NULL NUMBER
SOURCE_ID                      NOT NULL VARCHAR2(512 CHAR)
XML_SCHEMA_ID                  NOT NULL NUMBER
JOB_ID                         NOT NULL NUMBER
FINGERPRINT                    NOT NULL VARCHAR2(100 CHAR)
ENTITY_XML_DATA                         CLOB()
ARCHIVED                                NUMBER(1)
CREATION_DATE                           TIMESTAMP(6)
MODIFICATION_DATE                       TIMESTAMP(6)
ARCHIVING_DATE                          TIMESTAMP(6)
CREATED_BY                              VARCHAR2(50 CHAR)
MODIFIED_BY                             VARCHAR2(50 CHAR)
The problem is that the data in the table is 40 GB, while in the DB the table occupies 400 GB! How can I shrink and reuse that space, other than by drop/recreate or drop/import?
The table has no initial data, so I can play with the INITIAL parameter. Data is inserted, updated and deleted all the time. I have run DBMS_ADVISOR, which recommended to SHRINK the table. I have performed the shrink:
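The commands actually run are cut off above, but since most of the 400 GB is very likely in the ENTITY_XML_DATA LOB segment, the shrink generally has to address the LOB as well as the table. A sketch (the table name ENTITY_XML is a placeholder):

ALTER TABLE entity_xml ENABLE ROW MOVEMENT;
ALTER TABLE entity_xml SHRINK SPACE CASCADE;                         -- table and its indexes
ALTER TABLE entity_xml MODIFY LOB (entity_xml_data) (SHRINK SPACE);  -- BasicFile LOB segment

If that still does not return the space, the remaining online options are usually ALTER TABLE ... MOVE LOB (entity_xml_data) STORE AS (TABLESPACE ...) or DBMS_REDEFINITION.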