Takes Long Time To Drop Tables With Large Numbers Of Partitions
Jul 17, 2013
11.2.0.3. This is for a build; we are still in development, so there is no risk of data loss. As part of the build I drop the user, re-create it, and re-create the objects, which allows us to test the build all the way through. It's our process. This user has some tables with several thousand partitions. I ran a 10046 trace, and Oracle is using PL/SQL loops to do DML against the data dictionary. Is there any way to speed this up? I am going to turn off the recyclebin during the build and turn it back on afterwards; is there anything else I can do? Right now I just issue 'drop user cascade'. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this), and we do fairly frequent builds. I can't change the build process; my only option is to try to make this run a little faster.
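One thing worth trying, as a minimal sketch (BUILD_USER is a placeholder schema name, and whether this beats a plain 'drop user cascade' depends on your dictionary contention): turn the recyclebin off for the session and drop each table with PURGE before dropping the user, so nothing passes through the recyclebin at all.
alter session set recyclebin = off;
begin
  for t in (select table_name from dba_tables where owner = 'BUILD_USER') loop
    -- PURGE skips the recyclebin entirely for this drop
    execute immediate 'drop table BUILD_USER."' || t.table_name || '" purge';
  end loop;
end;
/
drop user BUILD_USER cascade;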
View 3 Replies
Jun 22, 2012
My DB version: 10.2.0
OS: Windows Server 2003
I am trying to import one table for which I have an export dump file, taken previously with expdp when I loaded that table on the same host, using the command below:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
After that I zipped the dump and moved it to an external USB drive; now that I need that table again, I copied the dump back and unzipped it.
The command I am using to do the import is:
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
But the import is still running and is not showing any count of rows imported.
I already created the tablespace the table was in before it was dropped, but when I check the space of the tablespace, it is not being consumed either. One error I got previously while performing this task is:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_03": cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
[code]....
I checked streams_pool_size; it showed zero, so I set it to 48M, and after that:
SQL> show parameter streams_pool_size;
NAME                 TYPE        VALUE
-------------------- ----------- -----
streams_pool_size    big integer 48M
But it still takes a long time.
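When an import appears to hang like this, it may help to check whether the Data Pump job is actually making progress before tuning memory. A hedged sketch (the job name SYS_IMPORT_TABLE_03 is taken from the log above; DBA_DATAPUMP_JOBS, ATTACH, and the interactive STATUS command are standard Data Pump features):
select job_name, state from dba_datapump_jobs;

impdp scott/tiger@db10g attach=SYS_IMPORT_TABLE_03
Import> status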
View 13 Replies
View Related
Sep 20, 2010
We are firing a normal DROP command on our database; the database version is 10.2.0.4 and it runs on AIX v5. The command is taking more time than usual.
When I monitor the session I can see that a call is being made to the procedure "aw_drop_proc". Could I ask if this is something that would take more time than usual?
We do not have any partitions on the nested tables. We have a pack of tables, and we drop this pack through a procedure; the pack comprises nested tables and normal tables. Dropping a nested table takes around 6 seconds (for a table with no rows), while a normal table (with no rows) takes 17 milliseconds. We do have a partition on the normal table.
The same operation on Windows takes far less time compared to AIX.
View 5 Replies
View Related
Sep 15, 2011
Is there any syntax to drop all the tables in a user at one time?
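There is no single DDL statement for it, but a short PL/SQL loop over USER_TABLES does the job. A sketch (the PURGE keyword assumes 10g or later; run it as the user whose tables you want gone):
begin
  for t in (select table_name from user_tables) loop
    execute immediate 'drop table "' || t.table_name || '" cascade constraints purge';
  end loop;
end;
/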
View 12 Replies
View Related
Aug 20, 2012
[URL]...
I have a quite complex view that selects from approximately 10 large tables (approx 4M records each) and builds one "customer sentence" per customer id. I will always be getting just one row from this view, e.g. select * from my_view where party_id = XYZ. I will NEVER EVER select the whole view.
The problem is that running a query:
select * from my_view where party_id = XYZ takes a really long time, while putting the party_id = XYZ condition directly into the view executes in 0.0 seconds.
After putting an ORDERED FIRST_ROWS(1) hint into the view, the execution plans seem to be the same (or very similar) for both queries. Unfortunately, I cannot transfer anything but screenshots from the environment, therefore I paste the exec plans as screenshots only; please follow the link: [URL]...
View DDL:
create or replace view my_view as
select /*+ ORDERED FIRST_ROWS(1) */ pt.party_id,
pt.party_id as id_klienta_mdm,
pt.master_reference_no as id_klienta_ref_mdm,
[code]...
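To see whether the predicate actually gets pushed into the view in each case, comparing the Predicate Information section of the two plans is usually enough. A sketch using the standard dbms_xplan package:
explain plan for
  select * from my_view where party_id = :xyz;

select * from table(dbms_xplan.display(null, null, 'ALL'));
-- compare the "Predicate Information" section with the plan of the
-- variant that has party_id hard-coded inside the view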
View 6 Replies
View Related
Sep 26, 2011
Query to drop partitions of a table whose partitions have zero rows:
select 'ALTER TABLE ' || table_name || ' DROP PARTITION ' || partition_name from dba_tab_partitions where table_owner = 'XYZ';
select count(*) from table_name partition (partition_name);
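The two statements can be combined into one PL/SQL block that counts each partition and drops only the empty ones. A sketch (XYZ is the owner from the question; note that dropping the last remaining partition of a table raises ORA-14083, so guard for that case if it can occur):
declare
  v_cnt number;
begin
  for p in (select table_name, partition_name
              from dba_tab_partitions
             where table_owner = 'XYZ') loop
    execute immediate 'select count(*) from XYZ.' || p.table_name ||
                      ' partition (' || p.partition_name || ')' into v_cnt;
    if v_cnt = 0 then
      execute immediate 'alter table XYZ.' || p.table_name ||
                        ' drop partition ' || p.partition_name;
    end if;
  end loop;
end;
/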
View 14 Replies
View Related
Jul 8, 2011
I have to drop some partitions from a table in a production environment (to get free space). The environment has to be continuously available. I was considering using ALTER TABLE ... DROP PARTITION ... UPDATE INDEXES, but it is slow because of the UPDATE INDEXES clause. Is there another way to remove this data?
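If a short window is acceptable during which the global indexes are unusable, a common alternative is to drop the partition without maintaining them and rebuild afterwards. A sketch with assumed names (SALES, P_2010_Q1, SALES_GIDX are placeholders; ONLINE rebuild requires Enterprise Edition):
alter table sales drop partition p_2010_q1;   -- global indexes become UNUSABLE
alter index sales_gidx rebuild online;        -- rebuild later; ONLINE keeps the table available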
View 2 Replies
View Related
Sep 29, 2011
How do I write a trigger to drop partitions with zero rows which are older than 6 months, drop the local indexes, and rebuild the global indexes, for any schema in a database?
I have tried the below code :
declare
v_statement varchar2(600);
v_rows number;
begin
for x in (select *
from dba_tab_partitions
[code]........
I want to avoid counting rows partition by partition, and I also want to select the schema dynamically when executing the script.
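A hedged sketch of the whole thing, assuming statistics are reasonably current (so DBA_TAB_PARTITIONS.NUM_ROWS can stand in for a live count) and using LAST_ANALYZED as a rough proxy for partition age, which are both assumptions; the &schema_name substitution variable makes the schema dynamic. Local indexes on a dropped partition are removed automatically, and UPDATE GLOBAL INDEXES keeps the global ones usable:
declare
  v_owner varchar2(30) := upper('&schema_name');
begin
  for p in (select table_name, partition_name
              from dba_tab_partitions
             where table_owner = v_owner
               and num_rows = 0
               and last_analyzed < add_months(sysdate, -6)) loop
    execute immediate 'alter table ' || v_owner || '.' || p.table_name ||
                      ' drop partition ' || p.partition_name ||
                      ' update global indexes';
  end loop;
end;
/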
View 39 Replies
View Related
Nov 12, 2012
In my database, STALE_PERCENT is set to 10, and I have a partitioned table. I dropped a table partition that held 10% of the data. I would like to know whether Oracle considers only inserts, updates, and deletes for the stale percentage, or whether it also includes the data in the dropped partition, because my stats gathering is not running. When I include the dropped partition's data it exceeds the 10% STALE_PERCENT, but excluding the dropped partition it does not exceed 10% stale.
TABLE_NAME PARTITION_NAME SUBPARTITION_NAME INSERTS UPDATES DELETES TIMESTAMP TRU DROP_SEGMENTS
------------------------------ ------------------------------ ------------------------------ ---------- ---------- ---------- ----------- --- -------------
sample_DATA_DATA 235825577 0 0 11-NOV-2012 NO 3
test_DATA_DATA 811618472 0 0 11-NOV-2012 NO 12
sample_DATA_DATA SYS_P2665099 3005966 0 0 11-NOV-2012 NO 0
sample_DATA_DATA SYS_P2665119 3873671 0 0 11-NOV-2012 NO 0
[code].....
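One way to see how the database itself classifies the table, rather than working it out from the modification counters, is to ask for the staleness flag directly. A sketch (DBA_TAB_STATISTICS.STALE_STATS is a standard 11g column; the owner name is a placeholder):
select table_name, partition_name, stale_stats
  from dba_tab_statistics
 where owner = 'MY_SCHEMA'
   and stale_stats = 'YES';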
View 6 Replies
View Related
Apr 25, 2013
I am using Oracle 9.2.0.8 on RHEL 4.8 (64-bit). I am facing a strange problem. I have one job in the database that takes almost 12-15 minutes to execute, but when I execute the procedure in that job manually, it executes in one minute. Even when no other job is running in the database, it takes more than 10 minutes to execute.
View 1 Replies
View Related
Jan 26, 2011
I have a long-running transaction (more than 3 hours), and at the same time other operations are occurring on different tables, using the same UNDO space.
Sometimes we see ORA-30036, but this error occurs very late in the process. The transaction normally takes 3 hours, but when UNDO space is full we do not get ORA-30036 until 8 or 9 hours into the process.
I am wondering what could be happening in the background, when UNDO space is full, that makes the transaction stretch to 8 or 9 hours (please note, this transaction normally completes within 3 hours). This is in 11g; UNDO space is managed manually.
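While the transaction is running, it may be worth watching its undo consumption and the undo error counters. A sketch against the standard views (nothing here is specific to this system):
-- undo blocks and records held by each active transaction
select s.sid, s.username, t.used_ublk, t.used_urec
  from v$transaction t
  join v$session s on s.taddr = t.addr;

-- per-interval undo activity, including out-of-space error counts
select begin_time, undoblks, ssolderrcnt, nospaceerrcnt
  from v$undostat
 order by begin_time desc;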
View 2 Replies
View Related
Feb 13, 2012
1) My database dump is about 4GB in size, provided by the vendor.
2) The dump contains 364,949 objects in total, where
Table : 121316
LOB object : 121315
(Normal+LOB) indexes : 122317
3) Now when I run the import as SYSTEM or another user, it hangs at the stage below for 70+ hours:
impdp ntest/ntest directory=test_dir dumpfile=JBLLIVE.31Jan2012.11.50AM.dmp remap_schema=JBLLIVE:NTEST logfile=ntest_10feb.log
Import: Release 11.2.0.1.0 - Production on Fri Feb 10 09:49:50 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "NTEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "NTEST"."SYS_IMPORT_FULL_01": ntest/******** directory=test_dir dumpfile=JBLLIVE.31Jan2012.11.50AM.dmp remap_schema=JBLLIVE:NTEST logfile=ntest_10feb.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NTEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
----
In this situation I observed the worker status and saw that some tables and some LOB objects, including LOB indexes, were being imported. The worker process does this in the background, but it does not show up in the import log file (I don't understand why it is not shown there). It imports one table, one LOB, one LOB index; then again one table, one LOB, one LOB index; and so on.
My observation is that it first inserts data into the LOB segments and then into the normal table, and only when it starts inserting data into the normal table does that table appear in the import log file.
an example of our data type :
Objects :
===================================================
LOB_FD17_RGS_TSTCD2 LOB
FD17_RGS_VERSION TABLE
(here I see each table has one LOB segment; in this way 121,316 tables have 121,316 LOBs)
SQL> desc FD17_RGS_VERSION
Name Null? Type
----------------------------------------- -------- ----------------------------
RECID VARCHAR2(255)
XMLRECORD BLOB
Our observation is that inserting the BLOBs is the main cause of the slowness. Is there any patch, or any known bug, regarding BLOB/LOB objects in Oracle 11gR2?
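Before hunting for a patch, it can be worth attaching to the job to confirm it is progressing rather than hung, and to raise parallelism if the server can take it. A hedged sketch (ATTACH and the interactive STATUS/PARALLEL commands are standard Data Pump features; the job name is from the log above):
impdp ntest/ntest attach=SYS_IMPORT_FULL_01
Import> status
Import> parallel=4    -- optional; only if CPU and I/O headroom exist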
View 6 Replies
View Related
Sep 13, 2010
Here is one way to create an Excel file from an Oracle SQL query and prevent Excel from displaying large numbers in scientific notation (exponential notation):
set feedback off
set verify off
set heading off
spool c:\excel_test.xls
select 'PO_NUMBER'||chr(9)||'VENDOR_NUMBER' from dual
union
select '=PROPER('||po_number||')'||chr(9)||'=PROPER('||vendor_number||')'||chr(9)
from invoices
where rownum < 12
order by 1 desc;
spool off
Note that PO_NUMBER is 16 characters, VENDOR_NUMBER is 15 characters in invoices table.
View 1 Replies
View Related
Aug 8, 2013
We have data archive scripts; these scripts move data for a date range to a different table, so the script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is going into a full table scan, but the predicate itself is the primary key. More info below:
CREATE TABLE "APP"."MON_TXNS" ( "ID_TXN" NUMBER(12,0) NOT NULL ENABLE, "BOL_IS_CANCELLED" VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL ENABLE, "ID_PAYER" NUMBER(12,0), "ID_PAYER_PI" NUMBER(12,0), "ID_PAYEE" NUMBER(12,0), "ID_PAYEE_PI" NUMBER(12,0), "ID_CURRENCY" CHAR(3 BYTE) NOT NULL ENABLE, "STR_TEXT" VARCHAR2(60 CHAR), "DAT_MERCHANT_TIMESTAMP" DATE, "STR_MERCHANT_ORDER_ID" VARCHAR2(30 BYTE), "DAT_EXPIRATION" DATE, "DAT_CREATION" DATE, "STR_USER_CREATION" VARCHAR2(30 CHAR), "DAT_LAST_UPDATE"
[Code]...
Data is first moved to the table in schema3 (OTW), and then we delete all the rows present in OTW from the original table. Below is the explain plan for the delete:
SQL> explain plan for
  2  delete from schema1.mon_txns where id_txn in (select id_txn from schema3.OTW);

Explained.

SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
Plan hash value: 2798378986

--------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT       |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   1 |  DELETE                | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN| OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL   | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
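For what it's worth, at 14,260 rows the optimizer's full scan is cheap (cost 83), so hours-long runtimes often come from elsewhere, such as unindexed foreign keys referencing MON_TXNS or row-level triggers. If the aim is simply to test whether driving through the primary-key index changes anything, a hedged sketch (MON_TXNS_PK is an assumed index name):
delete /*+ index(t mon_txns_pk) */
  from schema1.mon_txns t
 where t.id_txn in (select id_txn from schema3.otw);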
View 6 Replies
View Related
May 18, 2011
I have to clean up data from our tables (production environment) that contain millions of rows. The question is: apart from the solution of partitioned tables, what alternative solution does Oracle recommend?
Should we delete from these tables using a cursor in a PL/SQL block, or export the whole database and, for the tables from which we want to remove the old rows, use the QUERY option of the Data Pump utility?
I have used both ways, and I have to admit that the Data Pump solution is much, much faster than the deletion, which suffers from disk I/O. The question again is which of these two methods is more reliable and less risky for the health of the database.
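For reference, the Data Pump variant being compared looks roughly like this sketch (table name, directory, and date column are placeholders); the rows matching QUERY are kept in the dump, after which the table can be truncated and re-imported:
expdp app/secret parfile=keep_rows.par

-- keep_rows.par:
tables=BIG_TABLE
directory=DATA_PUMP_DIR
dumpfile=keep_rows.dmp
logfile=keep_rows.log
query=BIG_TABLE:"WHERE created_date >= SYSDATE - 365"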
View 5 Replies
View Related
Sep 7, 2012
I have a table that has 2 columns of type nested table. Now, in the purge process, when I try to truncate or drop a partition from this table, I get an error that I can't do this (because the table has nested tables). How will I be able to truncate/drop a partition from this table? If I change the column types from nested table to varray, will it work?
Also, is there any short method of moving existing data from a nested table column to a varray column (having the same fields as the nested table)?
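For the second question, CAST(MULTISET(...)) can convert the collection contents in a single UPDATE. A sketch, assuming a varray type VARRAY_T whose element type matches the nested table's element type (all names here are placeholders):
update my_tab t
   set t.varray_col =
       cast(multiset(select column_value
                       from table(t.nested_col)) as varray_t);
-- COLUMN_VALUE applies to scalar element types; for an object element
-- type, select the object attributes instead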
View 1 Replies
View Related
Feb 4, 2012
In an attempt to take older data offline and allow database refreshes to be faster, the tablespaces associated with partitioned table data for a given time period were taken offline, leaving only the tablespaces that relate to the current time period online. In effect, tablespaces related to 2010 and earlier were taken offline for a table.
1. Without a filter on the partition key (the business date) restricting scans to dates beyond those in the offlined tablespace partitions, we get ORA-376/ORA-1110 errors (data file cannot be read at this time).
2. Materialized views using fast refresh or refresh on commit will also not work because of the partitions being offline.
Queries directly against the tables are manageable from an application point of view, but the materialized views failing to aggregate is a bigger problem.
How can we manage this situation? I know that I can move the partitions to a different table in a tablespace to be taken offline, but if possible we wanted to solve this without doing a move partition.
View 2 Replies
View Related
Feb 6, 2012
I have a 27 million row table in the following format:
MEDCLM_MTH_SUM_KEY PRIMARY_DIAG_CD DIAG_CD2 DIAG_CD3 DIAG_CD4 DIAG_CD5 DIAG_CD6 DIAG_CD7 DIAG_CD8 DIAG_CD9 DIAG_CD10
2212990780 5552 78907 53170 5368
2231127242 V5481 7812 71595 4019 2761 2859 496 V4364 30501
I need to unpivot this data to get it to look like this:
MEDCLM_MTH_SUM_KEY DIAG_CD_LEVEL DIAG_CD
2212990780 PRIMARY_DIAG_CD 5552
2212990780 DIAG_CD2 78907
2212990780 DIAG_CD3 53170
[code]...
I was wondering if there is a quicker, more efficient way to do this.
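If the database is 11g or later (an assumption), the UNPIVOT operator does this in one pass over the table; rows with NULL diagnosis codes are excluded by default. A sketch (MEDCLM_MTH_SUM is an assumed table name):
select medclm_mth_sum_key, diag_cd_level, diag_cd
  from medclm_mth_sum
unpivot (diag_cd for diag_cd_level in
          (primary_diag_cd as 'PRIMARY_DIAG_CD',
           diag_cd2  as 'DIAG_CD2',  diag_cd3  as 'DIAG_CD3',
           diag_cd4  as 'DIAG_CD4',  diag_cd5  as 'DIAG_CD5',
           diag_cd6  as 'DIAG_CD6',  diag_cd7  as 'DIAG_CD7',
           diag_cd8  as 'DIAG_CD8',  diag_cd9  as 'DIAG_CD9',
           diag_cd10 as 'DIAG_CD10'));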
View 3 Replies
View Related
Nov 2, 2008
Is it possible to upload very large files into Oracle tables, for example a 1-2 gigabyte video file or even more? In other words, is it possible to use Oracle as a file server for uploading and storing very large files?
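Yes; a BLOB column can hold files of this size. A minimal sketch of such a table (SECUREFILE storage assumes 11g or later; on earlier versions drop that clause):
create table media_files (
  id        number primary key,
  file_name varchar2(255),
  content   blob
) lob (content) store as securefile;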
View 2 Replies
View Related
Nov 3, 2012
In one shot, can I drop more than one user?
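Not with a single DROP USER statement, but a short anonymous block handles it. A sketch (the user names are placeholders):
begin
  for u in (select column_value as username
              from table(sys.odcivarchar2list('USER1', 'USER2', 'USER3'))) loop
    execute immediate 'drop user ' || u.username || ' cascade';
  end loop;
end;
/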
View 6 Replies
View Related
Dec 5, 2011
I have made one application form where users need to enter some data. This data gets inserted into four tables. For now, the data is properly saved and retrieved as long as only one user works at a time. But the problem arrives when more than one user simultaneously makes an entry and saves the data at the same time: the same number is generated for the users who save at the same moment, which should not happen.
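This symptom usually points to the number being derived with something like SELECT MAX(col) + 1, which two sessions can evaluate identically before either commits (an assumption about the code here). An Oracle sequence hands out unique values even under concurrency; a sketch with placeholder names:
create sequence entry_no_seq start with 1 increment by 1;

insert into app_entries (entry_no, created_by)
values (entry_no_seq.nextval, user);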
View 4 Replies
View Related
Mar 14, 2011
I have a large table with 450 columns, of which we are only using nearly 170, and our DB block size is 8K. The DBA informed us that row chaining is happening in the database. My question is: if we only have data in 170 columns, why is row chaining happening?
The DBA told us to remove the unnecessary columns. Do those empty columns have any impact on the chaining? And if we increase the DB block size to 32K, will that resolve the issue?
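One likely explanation: Oracle stores any row with more than 255 columns in multiple row pieces, even when the trailing columns are NULL, and that intra-block chaining is reported as chained rows; a bigger block size does not change the 255-column limit. To measure the chaining, a sketch using the standard utlchain.sql script (BIG_TAB is a placeholder):
@?/rdbms/admin/utlchain.sql      -- creates the CHAINED_ROWS table

analyze table big_tab list chained rows into chained_rows;

select count(*) from chained_rows;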
View 4 Replies
View Related
Jul 23, 2012
I am working on my production database, Oracle version 10.2.0.4.
When I executed a simple DROP command, the total time it took was 26 seconds, yet on average the table holds only 20 records. This has been happening for the last few weeks; prior to that it took under a second.
05:22:58 SQL> drop table C$_100GL_INTERFACE ;
Table dropped.
Elapsed: 00:00:26.67
But on successive executions the elapsed time falls to 10-15 seconds.
And in the same test environment we achieve the expected result of under a second.
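A cheap first check is the recyclebin: every DROP without PURGE inserts into it, and a bloated recyclebin can slow drops noticeably. A sketch:
select count(*) from dba_recyclebin;

purge recyclebin;                        -- clear it out
drop table C$_100GL_INTERFACE purge;     -- or bypass the recyclebin per drop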
View 15 Replies
View Related
Dec 17, 2012
I want to drop all tables in Oracle 8.
View 2 Replies
View Related
Sep 14, 2010
I have upgraded an Oracle database from 9i to 11g using the export and import utilities. After migration we are facing a performance issue in report generation: we have observed that the first execution of a report takes a very long time, and when we generate the same report 2-3 times the execution time improves considerably compared with the first execution.
Two days back I restarted the database and found the same issue. There are around 300 reports, and it is not feasible to generate every report 2-3 times each time we restart the database.
View 5 Replies
View Related
Sep 4, 2013
I have a long SELECT which operates on 5 tables and has a lot of conditions in the WHERE clause (many combinations of values of just a few columns). Could reducing those conditions improve performance, or would it have only a small impact?
I think that if I have a lot of conditions on the same column, checking them should not take much time, because the values are in memory.
View 3 Replies
View Related
Apr 27, 2012
I have a query (report) which runs in under 5 minutes in one schema, whereas it runs for a long time in a second schema. I have identified that an index scans more than 2,000 million records in the second schema but only 440 million in the first schema, which is why the first is fast. I expect the same behaviour in the second schema.
I have verified the following
All records in the tables in the 2 schemas are the same.
All indexes are the same.
Analyzed the tables.
Gathered histograms on all the columns as per the first schema.
But I still have the same problem, and I don't know what the cause could be.
Table_name  Num_Rows  Blocks
PRPSL_LST_T5866107159
PRPSL_WKFLW_ACTVTY_T5829904030
ITEM_CHR_VAL_T5134340104049020
ITEM_RGN_ASSN_T8571220137215
Also attached are 2 screenshots of the OEM plans.
View 2 Replies
View Related
Mar 20, 2012
We take backups regularly with RMAN (catalog). For the last two days we could not complete a backup: the backup starts and backs up some datafiles, but then RMAN does not do anything.
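To see where it is stuck, the standard RMAN and long-operation views are usually the first stop. A sketch:
select operation, status, start_time, mbytes_processed
  from v$rman_status
 where status = 'RUNNING';

select sid, opname, sofar, totalwork
  from v$session_longops
 where opname like 'RMAN%'
   and sofar <> totalwork;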
View 4 Replies
View Related
Aug 2, 2012
I am trying to upload big files into an individual table with a BLOB column. During the upload process, after a long time (approx. 2h), I get the following error message:
[#|2012-08-01T19:03:01.667+0200|WARNING|sun-appserver2.1|java.lang.Class|_ThreadID=27;_ThreadName=httpSSLWorkerThread-8082-2;_Reques
tID=4cec5fc8-b9e1-4017-a859-8759ec1f5d37;|oracle.jdbc.driver.OracleBlobOutputStream.flushBuffer(OracleBlobOutputStream.java:236)
java.io.IOException: ORA-01013: user requested cancel of current operation
[code]....
I am using Glassfish Server v2.1.1 with APEX Listener v1.1.3.243.11.40. The timeout parameters for the JDBC settings in the APEX Listener are at their defaults; thus I would expect it to abort earlier if this were a JDBC connection issue?
View 1 Replies
View Related