What Happens When UNDO Space Is Full / Transaction Takes More Time To ERROR
Jan 26, 2011
I have a long-running transaction (more than 3 hours), and at the same time other operations are occurring on different tables that use the same UNDO space.
Sometimes we see ORA-30036, but this error occurs very late in the process. The transaction normally takes 3 hours, but when UNDO space is full we do not get ORA-30036 until 8 or 9 hours into the process.
I am wondering what is happening in the background when UNDO space is full that makes the transaction stretch to 8 or 9 hours (please note, this transaction normally completes within 3 hours). This is on 11g, and UNDO space is managed manually.
I am using Oracle 9.2.0.8 on RHEL 4.8 (64-bit) and am facing a strange problem. I have one job in the database that takes almost 12-15 minutes to execute, but when I execute the procedure from that job manually, it completes in one minute. Even when no other job is running in the database, the job takes more than 10 minutes to execute.
We had a situation where both undo tablespaces were almost full (UNDOTBS1 at 99% and UNDOTBS2 at 100%), so I added datafiles to them. I then found a lot of blocking sessions and was killing them through EM. I also stopped the front-end listener and brought the service down. Now I don't have any blocking sessions, but EM is showing a big WAIT. The alert log shows nothing serious; it was showing a deadlock, but that is over as well.
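A quick way to see what that big WAIT in EM actually is (a minimal sketch, assuming 10g or later, where v$session carries the current wait event and wait class):

SQL> SELECT event, state, COUNT(*) AS sessions
     FROM   v$session
     WHERE  wait_class <> 'Idle'
     GROUP  BY event, state
     ORDER  BY COUNT(*) DESC;

If the top event is something like 'enq: TX - row lock contention', there is still a blocker left somewhere, even if EM no longer flags it.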
I'm facing a problem while inserting millions of records from one table to another: the undo tablespace reaches 100% full and execution is aborted. How can I free the undo tablespace? Many of the extents show as offline. Will it flush automatically, or what should I do?
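A quick check of how much of that undo is actually reusable (a minimal sketch, assuming automatic undo management and access to dba_undo_extents):

SQL> SELECT status, COUNT(*) AS extents, ROUND(SUM(bytes)/1024/1024) AS mb
     FROM   dba_undo_extents
     GROUP  BY status;

Extents reported as EXPIRED are effectively free and are reused automatically, so a "full" undo tablespace usually does not need to be emptied by hand; only ACTIVE and UNEXPIRED extents are protected, and those shrink on their own once the transactions commit and undo_retention passes.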
Import: Release 11.2.0.1.0 - Production on Fri Feb 10 09:49:50 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "NTEST"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "NTEST"."SYS_IMPORT_FULL_01": ntest/******** directory=test_dir dumpfile=JBLLIVE.31Jan2012.11.50AM.dmp remap_schema=JBLLIVE:NTEST logfile=ntest_10feb.log
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NTEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE

In this situation I observed the worker status and saw that some tables and some LOB objects, including LOB indexes, were being imported. The worker process does this in the background, but it does not show up in the import log file (I don't understand why it is not shown there). It imports one table, one LOB, one LOB index, then again one table, one LOB, one LOB index, and so on.
My other observation is that the data goes into the LOB segments first and only then into the normal table, and it is only when the insert into the normal table starts that the table's rows appear in the import log file.
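To see what the workers are doing while the log file stays quiet, one option (a sketch, assuming the job is still running and you can connect as the importing user) is to attach to the Data Pump job interactively and ask for its status:

impdp ntest/******** attach=SYS_IMPORT_FULL_01
Import> status

The STATUS command reports each worker, the object it is currently processing, and the bytes completed, which is more detail than ever reaches the log file.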
I am using Oracle 10.2.0.3. Since yesterday I have been seeing a session with SID 1160 using the undo tablespace, but I am not able to find out how much it is using. I need to know which sessions, from which module, and how much undo is being used by those sessions. I have tried searching, but the queries I found give me different results each time.
I also need the same information for the REDO being generated.
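A minimal sketch of both checks, assuming access to the v$ views. Undo usage per active transaction comes from v$transaction (USED_UBLK is in undo blocks, USED_UREC in undo records), joined back to the owning session:

SQL> SELECT s.sid, s.username, s.module, t.used_ublk, t.used_urec
     FROM   v$transaction t
     JOIN   v$session     s ON s.taddr = t.addr
     ORDER  BY t.used_ublk DESC;

Redo generated so far by each session comes from v$sesstat and the 'redo size' statistic:

SQL> SELECT s.sid, s.username, s.module, st.value AS redo_bytes
     FROM   v$sesstat  st
     JOIN   v$statname n ON n.statistic# = st.statistic#
     JOIN   v$session  s ON s.sid = st.sid
     WHERE  n.name = 'redo size'
     ORDER  BY st.value DESC;

Both figures change from run to run because they reflect only what is in flight (undo) or what has accumulated since logon (redo), which may explain why different queries give different answers each time.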
11.2.0.3. This is for a build; we are still in development, so there is no risk of data loss. As part of the build I drop the user, re-create it, and re-create the objects, which allows us to test the build all the way through. It's our process. This user has some tables with several thousand partitions. I ran a 10046 trace, and Oracle is using PL/SQL loops to run DML against the data dictionary. Is there any way to speed this up? I am going to turn off the recycle bin during the build and turn it back on afterwards; is there anything else I can do? Right now I just issue 'drop user cascade'. Part of it is the weak hardware we have in the development environment. It takes about 20 minutes just to run through this part of the script (the script has a lot more pieces than this), and we do fairly frequent builds. I can't change the build process; my only option is to try to make this run a little faster.
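One thing that sometimes helps here (a sketch, not a change to the build process itself; BUILD_USER is a placeholder schema name) is to drop the heavily partitioned tables explicitly with PURGE, with the recycle bin off for the session, before issuing the final DROP USER, so the cascade has far less dictionary work left to do:

ALTER SESSION SET recyclebin = OFF;

BEGIN
  FOR t IN (SELECT table_name
            FROM   dba_tables
            WHERE  owner = 'BUILD_USER'
            AND    partitioned = 'YES') LOOP
    -- PURGE skips the recycle bin entirely for these large objects
    EXECUTE IMMEDIATE 'DROP TABLE BUILD_USER."' || t.table_name || '" PURGE';
  END LOOP;
END;
/

DROP USER build_user CASCADE;

The per-partition dictionary work still has to happen, but splitting it out makes the slow part visible and keeps recycle bin bookkeeping out of the picture.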
But the import is still running and does not show any count of rows being imported.
I already created the tablespace in which the table previously resided before it was dropped, but when I check the space in that tablespace, nothing is being consumed either. One error I got previously while performing this task is:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_03": cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
[code]....
I checked streams_pool_size and it showed zero, so I set it to 48M, and after that:
SQL> show parameter streams_pool_size;

NAME               TYPE        VALUE
------------------ ----------- ------
streams_pool_size  big integer 48M
This is regarding sizing the undo tablespace and the undo_retention parameter. We have to implement the database in a production system with 40 users, but how much space should be allocated to the undo tablespace? Is there any proportion relating it to virtual memory and the parameter? I have gone through the Oracle docs and some related sites. It is an ERP application that contains 20 modules, and I am new at this DBA level.
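A commonly used sizing estimate (a sketch, assuming automatic undo management and that v$undostat already reflects a representative workload) is undo_retention in seconds, times the peak rate of undo blocks generated per second, times the database block size:

SQL> SELECT ROUND((p.retention * u.ups * b.blk_size) / 1024 / 1024) AS required_undo_mb
     FROM  (SELECT value AS retention FROM v$parameter WHERE name = 'undo_retention') p,
           (SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS ups FROM v$undostat) u,
           (SELECT value AS blk_size FROM v$parameter WHERE name = 'db_block_size') b;

Add some headroom on top of the result. Undo sizing is driven by transaction volume and retention, not by the number of users or by virtual memory.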
I cannot drop the old undo tablespace. While dropping it we get an error:
ERROR at line 1:
ORA-01548: active rollback segment '_SYSSMU77$' found, terminate dropping tablespace
SQL> select tablespace_name, status, segment_name from dba_rollback_segs where status != 'OFFLINE';
TABLESPACE_NAME                STATUS           SEGMENT_NAME
------------------------------ ---------------- ------------------------------
SYSTEM                         ONLINE           SYSTEM
APPS_UNDO                      NEEDS RECOVERY   _SYSSMU77$
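The workaround usually posted for a rollback segment stuck in NEEDS RECOVERY relies on an undocumented parameter, so treat the following only as a heavily hedged sketch of the sequence people report, ideally done with Oracle Support involved and a good backup in place:

# in the pfile (or via ALTER SYSTEM ... SCOPE=SPFILE), then restart the instance
_corrupted_rollback_segments = ('_SYSSMU77$')

SQL> STARTUP
SQL> DROP ROLLBACK SEGMENT "_SYSSMU77$";
SQL> DROP TABLESPACE apps_undo INCLUDING CONTENTS AND DATAFILES;
SQL> -- then remove the underscore parameter again and restart

Marking a segment corrupted discards whatever the dead transaction still had to roll back, which can leave logical corruption behind; that is why this is normally a last resort.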
I created this trigger to block insert transactions with any date greater than 30-04-2013 on a certain table:
CREATE OR REPLACE TRIGGER INVALID_DATE_VALUE
BEFORE INSERT OR UPDATE ON ra_cust
FOR EACH ROW
BEGIN
  IF :NEW.TRX_DATE > '30-APR-2013' THEN
    -- RAISE_APPLICATION_ERROR('ora-0000','DATE CANNOT FALL IN RANGE...Invalid Date');
    RAISE_APPLICATION_ERROR(-20000, 'IT IS NOT ALLOWED TO INSET DATE VALUE GREATER THEN ''30-04-2013''');
  END IF;
END;
The trigger was created successfully,
but when I try to insert data into the table ra_cust, even with a date earlier than 30-APR-2013,
I get this error:
ORA-20000: IT IS NOT ALLOWED TO INSET DATE VALUE GREATER THEN '30-04-2013'
ORA-06512: at "ARASK_HAGAR.INVALID_DATE_VALUE", line 5
ORA-04088: error during execution of trigger 'ARASK_HAGAR.INVALID_DATE_VALUE'
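One common cause of a trigger like this misfiring is the comparison :NEW.TRX_DATE > '30-APR-2013': the character literal is converted to a DATE using the session's NLS_DATE_FORMAT, so the check depends on client settings. A sketch of the same trigger with an explicit, NLS-independent date (and the message text cleaned up):

CREATE OR REPLACE TRIGGER invalid_date_value
BEFORE INSERT OR UPDATE ON ra_cust
FOR EACH ROW
BEGIN
  IF :NEW.trx_date > TO_DATE('30-04-2013', 'DD-MM-YYYY') THEN
    RAISE_APPLICATION_ERROR(-20000,
      'It is not allowed to insert a date value greater than 30-04-2013');
  END IF;
END;
/

It is also worth checking what is actually being bound into TRX_DATE on insert; if the application passes a string, the same implicit conversion happens on that side too.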
While making a full DB export I got this error, even though the export completed with this warning. What do I need to do about it? My Oracle version is 10.1.0.2 and the server is Windows 2003.
Ideally we would put the full description in tnsnames.ora, but it is taking time to get hold of the DBA here, so we tried this workaround. The same connect string works fine with SQL*Plus, but SQL*Loader gives this error:
LRM-00116: syntax error at 'ADDRESS' following '('
SQL*Loader: Release 11.2.0.2.0 - Production on Thu Jan 10 21:31:32 2013
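LRM-00116 at 'ADDRESS' usually just means the parentheses of the inline connect descriptor were broken up before SQL*Loader's parameter parser saw them. The usual workaround (a sketch; scott/tiger, myhost, 1521 and mysvc are placeholders) is to quote the whole descriptor so it arrives as one token, for example on a Linux shell:

sqlldr userid=scott/tiger@'"(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=myhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=mysvc)))"' control=load.ctl

On Windows the double quotes typically have to be escaped instead. Where easy connect is enabled, the shorter form avoids the parentheses altogether:

sqlldr userid=scott/tiger@//myhost:1521/mysvc control=load.ctl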
I am trying to export a full database using the command: EXP [username/password]@[CS] FILE=PATH\[filename.dmp] LOG=PATH\[logname.log] INDEXES=n STATISTICS=none COMPRESS=Y
The database begins to export as shown below, but the export terminates with the error below,
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user [username]
. exporting PUBLIC type synonyms
. exporting private type synonyms
[code]....
considering that I have exported full databases successfully before using the command mentioned above.
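One hedged observation, since the full error text is not shown: the command above has no FULL=Y, so exp is running a user-mode export, which is exactly why the log says 'About to export specified users'. A full database export would normally look more like this (the username and paths are placeholders, and the account needs the EXP_FULL_DATABASE role):

EXP system/[password]@[CS] FULL=Y FILE=PATH\[filename.dmp] LOG=PATH\[logname.log] INDEXES=n STATISTICS=none COMPRESS=Y

If earlier runs succeeded with the user-mode form, the lines immediately after the last '. exporting ...' message in the log are the ones that identify the failing object.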
One of my friends gets an error while taking a Data Pump export backup of the full database.
The error details are below:
ORA-31693: Table data object "RADIOMIRCHI_PIP_HRMS"."GM_DEPT" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-00604: error occurred at recursive SQL level 3
ORA-21780: Maximum number of object durations exceeded.
[code]......
lv_ret := WEBUTIL_FILE_TRANSFER.AS_To_Client_with_progress(lv_clnt_file, lv_srvr_file, 'Download from Application Server in progress', 'Please wait');
to download a file to my H: drive. Here lv_ret is a Boolean variable. The file is not downloaded to my H: drive when there is not enough space. How can I capture that error?
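WEBUTIL_FILE_TRANSFER only hands back a Boolean here, so the most reliable hook is simply to test the return value (a minimal sketch; the message text is only an example):

lv_ret := WEBUTIL_FILE_TRANSFER.AS_To_Client_with_progress(
            lv_clnt_file, lv_srvr_file,
            'Download from Application Server in progress', 'Please wait');

IF NOT lv_ret THEN
  -- the transfer failed: insufficient space on the target drive, a bad
  -- path, or a WebUtil configuration problem all end up here
  MESSAGE('File download failed - check free space and the path on H:');
  RAISE FORM_TRIGGER_FAILURE;
END IF;

Checking the client drive's free space before the transfer would need a separate WebUtil/host call; it cannot be read from this return value.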
While trying to refresh a materialized view, Oracle throws a "cannot extend temp tablespace" error. When the refresh starts, the temp tablespace is empty, but once the refresh is running the temp tablespace keeps growing until it throws the "cannot extend" error. The temp tablespace is 200 GB. When I monitor the session, it is doing a sort on one table (ammt_pol_ag_comm); only about 4% of this sort completes before it fails, having occupied the entire 200 GB tablespace. The MView script is below.
CREATE MATERIALIZED VIEW ammv_agent_pol_persis_emas
NOLOGGING
PARALLEL 10
BUILD IMMEDIATE
REFRESH ON DEMAND WITH PRIMARY KEY
AS
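To see which step of the refresh is actually consuming temp while it runs, one option (a sketch, assuming access to the v$ views; the 8K block size is an assumption) is to watch temp segment usage per session:

SQL> SELECT s.sid, s.username, u.segtype, u.tablespace,
            ROUND(u.blocks * 8192 / 1024 / 1024) AS approx_mb  -- assumes 8K blocks
     FROM   v$tempseg_usage u
     JOIN   v$session s ON s.saddr = u.session_addr
     ORDER  BY u.blocks DESC;

With PARALLEL 10 in the MView definition, each parallel slave gets its own sort work area, so the combined temp demand of one big sort can be far larger than a serial run of the same statement.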
I am trying to evaluate it as an IMDB cache and am facing this error repeatedly. I have increased the perm size from 6.25 GB to 10 GB. After inserting about 460,000 rows I get the error again. Is it possible that 460,000 rows need 3.75 GB of space?
In the Oracle database these rows occupy about 200 MB.
I need a stored procedure that takes a list of IDs as an input parameter. For all these IDs, the procedure should return data from another table as an OUT ref cursor. It sounds very simple, yet I am stuck on how to pass the input list of IDs.
SQL> SELECT * FROM V$VERSION;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
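One approach that works on 10.2 (a sketch; id_list_t, get_rows_for_ids and some_table are made-up names for illustration) is to pass the IDs as a SQL collection type and un-nest it with TABLE() inside the ref cursor query:

CREATE OR REPLACE TYPE id_list_t AS TABLE OF NUMBER;
/

CREATE OR REPLACE PROCEDURE get_rows_for_ids (
    p_ids     IN  id_list_t,
    p_results OUT SYS_REFCURSOR
) AS
BEGIN
  OPEN p_results FOR
    SELECT t.*
    FROM   some_table t
    WHERE  t.id IN (SELECT column_value FROM TABLE(p_ids));
END;
/

-- example call
DECLARE
  rc SYS_REFCURSOR;
BEGIN
  get_rows_for_ids(id_list_t(101, 102, 103), rc);
  -- fetch from rc here, or hand it back to the client
END;
/

The type has to be created at SQL level (not inside a package) for TABLE() to work in this release; clients such as JDBC or ODP.NET can then bind the collection directly.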
After I backed up my database, I checked the alert log and found the following errors:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
[code]....
The trace file for process 26967 contains:
[oracle@shenzhengair archivelog]$ cat /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc
/u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
The query below takes more than 30 minutes to return data. All the objects used are views; there is no direct reference to any table. The views with _mnth_ in their names have data for 7 distinct months. The base tables for all the views have a composite PK on the columns AR_ID (or ACCT_AR_ID) and MSRMNT_PRD_ID.
I need the ORDER BY, as the query is part of Informatica code and the ordering is required for the downstream processing.
SELECT ac.ar_id AS acct_ar_id,
       m.msrmnt_prd_dt AS msrmnt_prd_dt
       -- removed the rest of the column list to reduce the size of the code
FROM   edxf.ar_rsrv_mnth_v ac,
       edxf.crdt_acct_mnth_v c,
       edxf.crdt_acct_v ca,
       (SELECT msrmnt_prd_id, msrmnt_prd_dt
        FROM   edxf.msrmnt_prd_v
        WHERE  msrmnt_prd_id =
[code]....
Also, the row counts in the views are as below.
View               Total count   Count for 1 msrmnt_prd_id
-----------------  ------------  -------------------------
ar_rsrv_mnth_v     1841892       281945
crdt_acct_mnth_v   66494145      7087369
crdt_acct_v        12258728      NA
The SYSAUX tablespace is running low. We set the AWR retention time to 60 days, and we do not want to extend the SYSAUX tablespace size. Usually we take an AWR report once a week; sometimes we also run ADDM and ASH reports.
SQL> select tablespace_name, file_name, bytes/(1024*1024), autoextensible, maxbytes/(1024*1024)
     from dba_data_files
     where tablespace_name = 'SYSAUX';
1. What's the best solution?
2. Can I shrink the SYSAUX tablespace?
3. I think the size of all occupants in the SYSAUX tablespace is less than 200 MB, so how do I find the actual contents of the SYSAUX tablespace?
4. What could be the reason for the growth? Is there any way to free space from the SYSAUX tablespace?
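For point 3, the documented view v$sysaux_occupants breaks down exactly what is stored in SYSAUX and how much each component uses (a minimal sketch):

SQL> SELECT occupant_name, occupant_desc,
            ROUND(space_usage_kbytes / 1024) AS space_mb
     FROM   v$sysaux_occupants
     ORDER  BY space_usage_kbytes DESC;

If the SM/AWR occupant dominates, the growth is driven by snapshot retention and interval, which can be tuned with DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS rather than by resizing the tablespace; shrinking SYSAUX itself only works to the extent that free space happens to sit at the end of its datafile.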
I'm joining two tables, event_types and tmp_acc.
event_types contains 2 billion records; tmp_acc contains 20,000 records.
The result is about 300,000 rows. In the event_types table, the join columns end_t and account_obj_id0 are indexed;
there are no indexes on tmp_acc.
When I run the query below with a nested loop it takes 6 hours to complete, but when I run it with a hash join, it was still running even after 4 days. What is wrong with the hash join here? Why does it take so long? I'm joining only 20,000 rows, so I think there should be a way to get the result rows quickly.
show parameters hash_area_size
NAME             TYPE        VALUE
---------------- ----------- ------------------------------
hash_area_size   integer     2097152
explain plan for
select --+ parallel(e,6)
[code]....
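One hedged thing to check before blaming the hash join itself: hash_area_size only applies when workarea_size_policy is MANUAL. Under automatic PGA management the join's memory comes out of pga_aggregate_target, and a one-pass or multi-pass hash join against a 2-billion-row input can spill heavily to temp. The parameters and the live work areas can be inspected like this:

SQL> show parameter workarea_size_policy
SQL> show parameter pga_aggregate_target

SQL> SELECT operation_type,
            ROUND(actual_mem_used / 1024 / 1024) AS mem_mb,
            ROUND(tempseg_size    / 1024 / 1024) AS temp_mb,
            number_passes
     FROM   v$sql_workarea_active;

If the build input of the hash join turns out to be the 2-billion-row table rather than the 20,000-row tmp_acc, the optimizer has picked the wrong build side, and that alone can explain days of temp I/O.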