I have an SAP R/3 system which runs on an Oracle 9 database. The problem is that the SQL activity produces an awful lot of logs, so my disk is full after a very short time.
I do not need the logs since it is a development environment. Are there any tools that erase the logs automatically?
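If the space is being eaten by archived redo logs (an assumption on my part), a minimal sketch of a periodic clean-up with RMAN would be:

RMAN> connect target /
RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all completed before 'sysdate-1';

Alternatively, since it is a development system, the database could simply be switched to NOARCHIVELOG mode so that archived logs are not produced at all.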
Using 11gR2, Windows 7 client machine. I need to update the table missing_volume (below), where I need to calculate the estimated_missing column. The calculation of the estimated_missing column for the current month needs the previous month's numbers (as commented inside the code below). I want the output like the first table. Notice the records start from January, hence estimated_missing for January can't be calculated, but for the rest of the months it can be done by simply changing 'yr' and 'mnth' (commented inside the code towards the end).
yr    mnth      location  volume  actual_missing  expected_missing  estimated_missing
--------------------------------------------------------------------------------------
2013  January   loc1      48037   24              57
2013  February  loc1      47960   3660            53                24
2013  March     loc1      55007   78              57                28
2013  April     loc1      54345   72              58                77

The code:
UPDATE missing_volume g
[Code]....
The code does calculate the correct number for 'estimated_missing' as I run it for each month, but the problem is that while updating the current month it also erases the record for the previous month. E.g., as can be seen below, after I updated April the column only has the record for April; the previous month's record is gone. Similarly, updating March removed February, etc. I can't understand why it's happening! Here is the output I get:
yr    mnth      location  volume  actual_missing  expected_missing  estimated_missing
--------------------------------------------------------------------------------------
2013  January   loc1      48037   24              57
2013  February  loc1      47960   3660            53
2013  March     loc1      55007   78              57
2013  April     loc1      54345   72              58                77
Why is it happening (I mean, where is the flaw in the code), and how do I get the desired output (the first table)?
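I can't see the full statement behind [Code]...., but this symptom usually appears when the outer UPDATE has no WHERE clause of its own: every row of missing_volume gets updated, and for rows other than the current month the correlated subquery returns NULL, wiping the previously calculated values. A purely hypothetical sketch of a month-restricted UPDATE (the subquery formula is a placeholder, not the real calculation):

UPDATE missing_volume g
   SET g.estimated_missing =
       (SELECT p.actual_missing              -- placeholder for the real previous-month formula
          FROM missing_volume p
         WHERE p.location = g.location
           AND p.yr       = 2013             -- previous month (March) for the April run
           AND p.mnth     = 'March')
 WHERE g.yr       = 2013                     -- without this outer WHERE clause every row is
   AND g.mnth     = 'April'                  -- updated, and the non-April rows get set to NULL
   AND g.location = 'loc1';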
I got the Oracle ORA-00494 error many times and the database went down, but since the 29th of July the database has not been killed. The error message is below:
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by inst 1, osid 176484
ORA-00028: your session has been killed
My database is used as a data warehouse of many terabytes.
Initially the redo log size was 500 MB and I've set it to 3 GB. At the moment a log switch happens as often as every 5 minutes; I want the log to be switched only every 20 or 30 minutes.
To obtain the recommended redo log size, I've executed this query:
SQL> select OPTIMAL_LOGFILE_SIZE from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
--------------------
               54763
Isn't 53.5 GB very big for a redo log size? What's the maximum size of a redo log? To set a very big redo log size, what are the requirements? Which precautions should I take beforehand? What are the risks? Are there any other ways to change the log switch frequency?
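For reference, changing the switch frequency means replacing the online log groups with bigger ones. A sketch, assuming three existing groups and hypothetical file paths (the 8 GB size is only a placeholder to be tuned against the actual redo rate):

SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/DWH/redo04.log') SIZE 8G;
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/DWH/redo05.log') SIZE 8G;
SQL> ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/DWH/redo06.log') SIZE 8G;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;
-- once an old group shows INACTIVE in V$LOG it can be dropped:
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;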
I am using Oracle version 11.2.1.0. I want to restrict archiving for some tables. I think NOLOGGING will solve this problem. Is there any option for restricting archiving?
For example, I have three tables called A, B and C. I want to archive only two tables, A and B, but not C.
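A sketch of the NOLOGGING approach (staging_c is a hypothetical source table). Note that NOLOGGING only reduces redo/archive for direct-path operations such as CREATE TABLE ... AS SELECT, index rebuilds and INSERT /*+ APPEND */; conventional INSERT/UPDATE/DELETE on C is still fully logged and archived.

SQL> ALTER TABLE c NOLOGGING;
SQL> INSERT /*+ APPEND */ INTO c SELECT * FROM staging_c;   -- direct-path load, minimal redo
SQL> COMMIT;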
java.sql.SQLException: Unexpected exception while enlisting XAConnection
java.sql.SQLException: XA error: XAResource.XAER_RMERR start() failed on resource 'weblogic.jdbc.jta.DataSource': XAER_RMERR : A resource manager error has occured in the transaction branch
javax.transaction.xa.XAException: Unexpected error during start for XAResource 'EOD': null
    at weblogic.jdbc.wrapper.XA.createException(XA.java:103)
    at weblogic.jdbc.jta.DataSource.start(DataSource.java:765)
    at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1182)
    at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1115)
I am trying to create materialized views based on a few tables in a logical standby database.
The target database (11g R2) where the MVs will be created is a stand-alone database.
The DB where the base tables reside is a logical standby database (11g R2).
The requirement is to do a "FAST REFRESH" of the Materialized Views.
My questions are:
1. Can I create MV logs in the logical standby DB?
2. If the answer to question no. 1 is "Yes", do I need to do anything different or configure the logical standby DB in a specific manner in order to create MV logs? From what I understand, the objects in the logical standby database are in a locked state.
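For reference, a sketch of what the fast-refresh setup would look like if MV logs can indeed be created there (the table name scott.emp and the database link stby_link are hypothetical):

-- on the logical standby (where the base table resides):
CREATE MATERIALIZED VIEW LOG ON scott.emp WITH PRIMARY KEY;

-- on the stand-alone target database:
CREATE MATERIALIZED VIEW emp_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM scott.emp@stby_link;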
Assume you have a 9i database with archive mode enabled, yet you are constantly deleting the archived redo logs due to space constraints.
Will you be able to perform a full level 0 backup, and the following incremental backups, in the absence of the archived redo logs? And are these incremental backups enough to recover the database, or particular data files, at least to the point of the backup itself?
I'm currently working on a project in which I do not have permission to access the server where the database is installed and configured. Because of company policies, I do not have admin rights over Oracle, but I do have an account that can select from DBA_USER_PRIVS, for instance.
I would like to know if there is any way to access the database logs to find out whether there was any kind of problem within the database, because one of my schemas mysteriously went clean (all tables, sequences, triggers, ... vanished).
I have a sequence for one of my tables whose current value was 3000 yesterday, but today when I checked it I was surprised because the value had changed to 50. Can I check who changed my sequence? Is there any data dictionary view that shows logs of modified database objects?
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for 32-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
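A drop in the current value usually means the sequence was altered, or dropped and re-created. Without auditing already enabled there is no guaranteed record of who did it, but a sketch of what can be checked (MY_SEQ is a placeholder name):

SELECT object_name, created, last_ddl_time
  FROM dba_objects
 WHERE object_type = 'SEQUENCE'
   AND object_name = 'MY_SEQ';

-- only useful if DDL auditing was already enabled before the change:
SELECT username, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE obj_name = 'MY_SEQ';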
We are in the process of setting up our backup policy. After the archived logs have been backed up, we need to delete them after 7 days, including the actual files on disk.
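A hedged sketch of how that 7-day rule could look in RMAN (device type sbt for the media manager is an assumption):

RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';

The BACKED UP 1 TIMES TO DEVICE TYPE sbt qualifier can be added to that command so that logs which have not yet been backed up are never removed; DELETE removes both the repository records and the actual files on disk.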
I always find it difficult to understand the alert log and the other cluster logs, so I am wondering what to read. Do I need operating system knowledge, Oracle architecture knowledge, or some other concepts? I see many experts, and even ordinary DBAs, always talking about reading the logs, and the logs are quite technical, so how can I learn to understand them completely?
Here I have a question about Oracle database backup strategy. My question is:
How do I back up my Oracle database called DB11G, without archived logs, while the database is open for user activity, such that this backup can also be the base for an incremental backup strategy?
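A minimal sketch with RMAN (the tag is an assumption; the database has to be in ARCHIVELOG mode to be backed up while open):

RMAN> CONNECT TARGET /
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE TAG 'DB11G_LVL0';

Omitting PLUS ARCHIVELOG keeps the archived logs out of the backup, and the level 0 backup then serves as the base for later INCREMENTAL LEVEL 1 backups.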
I have two Oracle 10.2.0.4 databases, on two SUN M5000 servers (Solaris 5.10). One is the primary (EXP1), the other is the standby (EXPBKP). Both SIDs are 'EXP'. If I do a manual recover (after I sync the archive logs from the primary to the standby), it works. If I do a manual log switch, logs are shipped AND applied on the standby. But after a while I can see some errors in the primary alert log, and logs are not shipped anymore:
Errors in file /applications/oracle/admin/EXP/bdump/exp_arc2_1277.trc:
ORA-03135: connection lost contact
Tue Oct 26 08:30:14 2010
FAL[server, ARC3]: FAL archive failed, see trace file.
Tue Oct 26 08:30:14 2010
Errors in file /applications/oracle/admin/EXP/bdump/exp_arc3_1279.trc:
ORA-16055: FAL request rejected
ARCH: FAL archive failed. Archiver continuing
[code]....
I'm an Oracle novice, and from what I've read so far it seems that you should be able to do rollbacks and data recovery using the redo logs. I'm having difficulty understanding the need for the undo tablespace.
My supervisor wants to remove all the archive logs, since it was just a test for 1 year and the DB has not actually been used YET. The problem is they want it usable as soon as possible without recreating the database, just removing some data from tables and removing the archive logs. How do I safely remove the existing archived logs and create a full backup, with archiving starting fresh from sequence 1?
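A hedged sketch of the clean-up itself (note this alone does not restart the log sequence at 1; the sequence only resets after an OPEN RESETLOGS, for example following an incomplete recovery):

RMAN> CONNECT TARGET /
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;   -- fresh full backup, plus whatever new logs exist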
I have an environment in which backups of Oracle 10/11 databases are performed with RMAN and Tivoli Storage Manager (Data Protection for Oracle). There are several databases, and for every one there is a daily full backup and an hourly archive log backup.
Sometimes, when the full DB backup takes longer (up to 4 hours), archive log backups are missed, as the TSM node cannot perform two backups at a time. I would like not to have those missed backups.
Option A was to delete the association of the archive log scheduler during the full backup. But when removing the association we lose historical data about the backup, and we need that historical data to be able to create weekly / monthly / quarterly statistics of completed backups. We need to have 99% completed.
Option B was to create two nodes in TSM (TDPO): one would do only the full backup and the other only the archive log backups. So the problem is moved to RMAN. But from an RMAN specialist I heard that this may cause problems with the full backup. During the full backup, archive logs are also backed up (at the start and at the end), so there might be a problem with accessing a file that is in use by another process. And this may cause a problem with the full backup, which is what we especially want to avoid.
Back up your entire database, without archived logs, while the database is open for user activity. This backup should be the base for an incremental backup strategy.
I have version 11.2.0.3 installed on an AIX server, and I'd like to know where alerts and messages like "max extents reached in table ..." are recorded. I don't see them in the alert.log.
What is the best way to monitor this class of alerts and logs?
How can I see how many extents are actually consumed by tables and indexes?
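For the last question, a sketch against the standard dictionary views (the schema name is a placeholder):

SELECT owner, segment_name, segment_type, extents, max_extents,
       ROUND(bytes/1024/1024) AS size_mb
  FROM dba_segments
 WHERE owner = 'MYSCHEMA'
   AND segment_type IN ('TABLE', 'INDEX')
 ORDER BY extents DESC;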
We are using the following commands to take a hot backup of our RAC database. The hot backup is fired by the "backup" user on the Linux system.
=======================
rman target / nocatalog <<EOF
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$backup_dir/$date/%F';
run {
allocate channel oem_backup_disk1 type disk format '$backup_dir/$date/%U';
#--Switch archive logs for all threads
[code]......
=======================

After the command "sql 'alter system archive log current';" (used 2 times) I see the following lines in the alert log 2 times. Because of this, not all of the online logs are getting archived (2 logs missing per day), and the backup taken is unusable when restoring. I am worried about this. Is there any way to avoid this situation?

=======================
Errors in file /u01/oracle/admin/rac/udump/rac1_ora_3546.trc:
ORA-19504: failed to create file "+DATA/rac/1_32309_632680691.dbf"
ORA-17502: ksfdcre:4 Failed to create file +DATA/rac/1_32309_632680691.dbf
ORA-15055: unable to connect to ASM instance
ORA-01031: insufficient privileges
=======================
I'm using Oracle 10.2.0.3 on Windows 2003. I've implemented a physical standby and it was working fine until last week. There is no problem on the primary DB. Archived logs are sent normally to the standby, and the standby DB is also able to apply the archived logs coming from the primary. I've already checked the data on the standby DB after opening it with the READ ONLY option: the data are exported normally and I can read them.
The issue is the status of the logs in the view v$log on the standby database.
select group#, status from v$log;

group#  status
------  ----------------
     1  clearing_current
     2  clearing
     3  clearing
I've tried this:
On the standby DB:
SQL> RECOVER MANAGED STANDBY DATABASE CANCEL;
Media recovery complete.
SQL> alter database open read only;
complete.
SQL> alter database clear logfile group 1;
Database altered.
SQL> alter database clear logfile group 2;
Database altered.
SQL> alter database clear logfile group 3;
Database altered.
One of the Hitachi support guys has suggested creating a separate disk group for the online redo logs. His rationale was that ORLs are write-only files, and it would be better to put them in a separate disk group.
This morning when I checked my archive logs, I was surprised to see that redo files are being generated every 3 minutes, each with a file size of 50 MB, which is the actual size of both log members. I am using a RAC database with a DR server. Usually the total number of redo logs for one day is 4 to 5, but from 10 pm yesterday to 7 am today the number of log files is 109, each of 50 MB.
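A sketch to confirm the switch rate per hour from the log history (assumes SELECT access to V$LOG_HISTORY):

SELECT TO_CHAR(TRUNC(first_time, 'HH24'), 'DD-MON-YYYY HH24:MI') AS hour,
       COUNT(*) AS log_switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 1
 GROUP BY TRUNC(first_time, 'HH24')
 ORDER BY TRUNC(first_time, 'HH24');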
I have a primary database and a physical standby. Logs are shipping perfectly from the primary to the standby. The logs that are applied on the standby are stored in a mount point (Linux /opt2) which consumes a lot of space (160 GB). Would it be right to go ahead and delete them?
My RMAN configuration on the primary is:
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 4;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
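If the question is whether the applied logs on the standby can be removed safely, a hedged sketch to be run with RMAN on the standby itself (the 2-day window is an assumption):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-2';

With that deletion policy in place, RMAN refuses to delete any log that has not yet been applied, which is the main safety concern here.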