A developer is importing a large volume of data. As the filesystem was filling up fast, I removed all the archive log files. Will this affect the functioning of the database? It is a development environment.
I am using Oracle 11.2.1.0. I want to restrict archiving for some tables, and I think NOLOGGING will solve this problem. Is there any option for restricting archiving?
For example, I have three tables called A, B and C. I want to archive only two tables, A and B, but not C.
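What I had in mind is roughly the following (a minimal sketch; staging_c is just a hypothetical source table for a direct-path load). As far as I understand, NOLOGGING only reduces redo for direct-path operations and is ignored if the database is in FORCE LOGGING mode, so it would not stop archiving of normal DML on C:

-- Skip redo for direct-path loads into C only; A and B keep the default
-- LOGGING attribute, so their changes still end up in the archived logs.
ALTER TABLE c NOLOGGING;

-- Direct-path insert that can take advantage of NOLOGGING
-- (staging_c is a hypothetical source table).
INSERT /*+ APPEND */ INTO c
SELECT * FROM staging_c;
COMMIT;

-- Conventional INSERT/UPDATE/DELETE against C is still fully logged.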
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for 32-bit Windows: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
We are in the process of setting up our backup policy. After the archived logs have been backed up, we need to delete them after 7 days, including the actual files on disk.
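What I have in mind so far is something along these lines (a minimal sketch, assuming the logs are backed up to tape via SBT first; the exact combination of clauses may need adjusting for our setup):

# Back up any archived log that has not yet been backed up at least once.
RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;

# Remove from disk the logs older than 7 days that already have a tape backup.
RMAN> DELETE NOPROMPT ARCHIVELOG ALL
        COMPLETED BEFORE 'SYSDATE - 7'
        BACKED UP 1 TIMES TO DEVICE TYPE SBT;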
My supervisor wants to remove all the archive logs, since the database has only been used for testing for a year and is not actually in use YET. The problem is they want it to go live as soon as possible without recreating the database, just removing some data from tables and removing the archive logs. How can we safely remove the existing archived logs and take a full backup, with archiving starting fresh from sequence 1?
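What I am considering so far is roughly the following (a minimal sketch with disk backups; note that it does not restart the log sequence at 1, which as far as I know only happens after the database is opened with RESETLOGS, so that part is still open):

# Full backup of the database together with the current archived logs.
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

# Then remove the old archived logs that are covered by at least one backup.
RMAN> DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;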
We are using the following commands to take a hot backup of our RAC database. The hot backup is fired by the "backup" user on the Linux system.
=======================
rman target / nocatalog <<EOF
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$backup_dir/$date/%F';
run {
allocate channel oem_backup_disk1 type disk format '$backup_dir/$date/%U';
#--Switch archive logs for all threads
[code]......
=======================

Due to this, after the command "sql 'alter system archive log current';" (used 2 times) I see the following lines in the alert log 2 times. Because of this, not all of the online logs are getting archived (2 logs per day are missing), and the backup taken is unusable when restoring. I am worried about this. Is there any way to avoid this situation?

=======================
Errors in file /u01/oracle/admin/rac/udump/rac1_ora_3546.trc:
ORA-19504: failed to create file "+DATA/rac/1_32309_632680691.dbf"
ORA-17502: ksfdcre:4 Failed to create file +DATA/rac/1_32309_632680691.dbf
ORA-15055: unable to connect to ASM instance
ORA-01031: insufficient privileges
=======================
I am trying to find a best practices document from Oracle regarding the use of ASM to store the archive logs in RAC. Most of the DBAs I know create a non-ASM location to store the archive logs. Are there any considerations if you are also using Data Guard?
I am looking for a way to capture TXs (transactions) from one database and create a script, or use a tool, so we can capture TXs close to production while testing changes being implemented in the database.
Env:
OS: Redhat Linux 5.0
DB: 10.2.0.4
Actual requirement: we have an active-active GoldenGate setup for one of our databases, and once a quarter we make changes in the database using DDL (CREATE / ALTER on tables, functions, triggers, etc.). We are a 24/7 environment, and hence no downtime is affordable. What I would like to know is a way to capture all TXs from one of the DBs and create a script so we can run them while we are testing new changes in the database. Something like the DBMS_WORKLOAD_CAPTURE package (available only from 11gR1 onwards) or some tool available on the market.
I also tried looking into LoadRunner, but I felt it works at the app tier rather than the DB tier; I may be wrong there.
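For reference, on 11g and later the capture side would look roughly like this (a minimal sketch; the directory path and capture name are made-up examples, and Database Replay needs the Real Application Testing option, which is exactly why it does not help us on 10.2.0.4):

-- Directory object for the capture files (path is a made-up example).
CREATE DIRECTORY workload_cap_dir AS '/u01/app/oracle/workload_capture';

-- Start capturing the workload for one hour (duration is in seconds).
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'QTR_DDL_TEST_CAPTURE',
    dir      => 'WORKLOAD_CAP_DIR',
    duration => 3600);
END;
/

-- Or stop it explicitly when no duration was given.
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/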
I am trying to back up archive logs using RMAN on a standby database. I am able to back up the archive logs with the simple command, and it completes successfully:

RMAN> BACKUP ARCHIVELOG ALL;

When I try the same with the KEEP clause on the physical standby database, it fails:

RMAN> BACKUP ARCHIVELOG ALL KEEP UNTIL TIME 'SYSDATE+100' TAG = 'TEST';
I have an Oracle 10gR2 database on a Windows server, and I have scheduled an RMAN backup for this database from a Solaris server:
set echo on
run {
  crosscheck archivelog all;
  backup check logical database plus archivelog;
  delete noprompt obsolete device type DISK;
}
exit;
It is deleting the obsolete backup pieces, but it is not deleting the old (obsolete) archive logs, and it does not even show the old archive logs as obsolete when I check through REPORT OBSOLETE.
Currently I am at the point where the configuration has been completed, and I just need to sync the standby database to the primary one. I can see in the log files that the archive logs are being shipped, but they are not applied on the standby system.
If I run "recover standby database;" manually in sqlplus I can see that it is trying to apply an archive log which is way too old (ORA-00279: change 9656498443 generated at 04/29/2008 08:45:08 needed for thread 1). I
n the alert log I can also see this error: Warning: Recovery target destination is in a sibling branch of the controlfile checkpoint. Recovery will only recover changes to datafiles.
At this point I was thinking that the standby database might be on a different incarnation compared to the primary, but this is not the case; they are both on incarnation 6:

6 6 MVF 4023175798 CURRENT 48493546257 13-06-21
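For reference, these are the queries I am running on the standby (and comparing against the primary) to check incarnations and the received-versus-applied gap, in case I am reading them wrong:

-- Incarnation history; the CURRENT row should match on primary and standby.
SELECT incarnation#, resetlogs_change#, resetlogs_time, status
FROM   v$database_incarnation
ORDER  BY incarnation#;

-- Highest sequence received vs. highest sequence applied, per thread.
SELECT thread#,
       MAX(sequence#) AS last_received,
       MAX(CASE WHEN applied = 'YES' THEN sequence# END) AS last_applied
FROM   v$archived_log
GROUP  BY thread#;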
We have a nightly RMAN backup that was complete and clean. The backup contains a data file backup, a control file backup and an archive log backup. The database server crashed, and all the subsequent archive logs were lost.
We are trying to recover from the three backup files. We are running into "backup piece missing" errors once all the archive logs in the backup have been applied. How can we restore with the existing backup files? The lost archive logs are probably salvageable.
Is there any way I can get all the archive log files present in a single backup piece? I tried the following, but it always gives me all the log files for every piece:
SQL> select s.set_stamp, s.set_count, s.PIECES, s.backup_type, s.controlfile_included
     from v$backup_set s, v$backup_piece p
     where s.set_stamp = p.set_stamp
       and s.set_count = p.set_count
       and p.handle = '5mjv4do0_1_1';
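A minimal sketch of what I am trying instead, assuming v$backup_redolog is populated for these backup sets (it maps individual archived log sequences to the set behind a given piece):

-- Archived log sequences contained in the backup set behind one piece.
SELECT p.handle, r.thread#, r.sequence#, r.first_change#, r.next_change#
FROM   v$backup_piece   p,
       v$backup_redolog r
WHERE  r.set_stamp = p.set_stamp
AND    r.set_count = p.set_count
AND    p.handle    = '5mjv4do0_1_1'
ORDER  BY r.thread#, r.sequence#;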
I'm currently working on a project in which I do not have permission to access the server where the database is installed and configured. Because of company policies, I do not have admin rights over Oracle, but I do have an account that can run selects against DBA_USER_PRIVS, for instance.
I would like to know if there is any way to access the database logs to find out whether there was any kind of problem within the database, because one of my schemas mysteriously went clean (all tables, sequences, triggers, ... vanished).
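What I was planning to check first, assuming my account can read the DBA_* views (MY_SCHEMA below is just a placeholder for the affected schema, and the second query only helps if standard auditing was enabled):

-- Objects from the schema that were dropped but still sit in the recycle bin.
SELECT owner, original_name, type, operation, droptime
FROM   dba_recyclebin
WHERE  owner = 'MY_SCHEMA'
ORDER  BY droptime;

-- DROP statements recorded in the audit trail, if auditing was turned on.
SELECT username, obj_name, action_name, timestamp
FROM   dba_audit_trail
WHERE  owner = 'MY_SCHEMA'
AND    action_name LIKE 'DROP%'
ORDER  BY timestamp;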
I always find it difficult to understand the alert logs and the other cluster logs, so I am wondering what to read up on. Do I need operating system knowledge, Oracle architecture knowledge, or some other concepts? I have seen many experts, and even ordinary DBAs, constantly talk about reading the logs, and the logs are quite technical, so how can I get to the point of understanding them completely?
I have version 11.2.0.3 installed on an AIX server, and I'd like to know where alerts and logs such as "max extents reached in table ..." are recorded. I don't see them in the alert.log.
What is the best way to monitor this class of alerts and logs?
How can I see how many extents are actually consumed by tables and indexes?
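Something like the query below is what I would expect to need (a minimal sketch against dba_segments; MY_SCHEMA is a placeholder owner), unless there is a better place to look:

-- Extents currently allocated vs. the maximum allowed, per table/index.
SELECT owner, segment_name, segment_type, extents, max_extents
FROM   dba_segments
WHERE  owner = 'MY_SCHEMA'
AND    segment_type IN ('TABLE', 'INDEX')
ORDER  BY extents DESC;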
I need to verify whether the current date is greater than the 15th of the current month. If it is greater than the 15th, I need to perform one action; if it is less than the 15th, I need to perform another operation.
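What I have sketched so far is roughly this (the two procedures inside the branches are hypothetical placeholders for the real actions):

BEGIN
  -- Day-of-month of the current date, compared against the 15th.
  IF TO_NUMBER(TO_CHAR(SYSDATE, 'DD')) > 15 THEN
    do_after_mid_month;           -- hypothetical action for dates after the 15th
  ELSE
    do_on_or_before_mid_month;    -- hypothetical action for the 15th and earlier
  END IF;
END;
/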
### Changes made ###

One week ago we changed tablespace segment management from MANUAL to AUTO using the following method:

1. Create the INVD2, INVX2 and LOBD tablespaces.
2. Move the tables from INVD to INVD2.
3. Rebuild the indexes from INVX into INVX2.
4. Move the LOB segments from INVD to the LOBD tablespace.
5. After confirming that no segments remain in the old tablespaces, take INVD and INVX offline and drop them.
6. Change the default tablespace for the INV user to INVD2.
7. Rename tablespace INVD2 to INVD, and INVX2 to INVX.
8. Change the default tablespace for the INV user back.
9. Run the gather-schema-stats job for INV via the UNIX scheduler, which usually works. However, it ended with errors ORA-03113 and ORA-03114.
10. A manual execution of the same statement the following day completed successfully.
One week later, the inventory forms hit error FRM-40735 in all forms. I checked, and the gather schema stats job had run that morning, before the users reported the problem.
After referring to notes on Metalink, I understand this is a bug where the tablespace could not be renamed to the previous name, because the entry for the deleted tablespace still exists in sys.ts$?
No segments exist in the deleted tablespace, nor is any user's default tablespace assigned to the deleted tablespace.
My questions: how can we delete the stale entry from sys.ts$? And should we rename the tablespace from INVD to INVD3 (or can we reuse INVD2) to avoid any unforeseen error again?
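For now I am only reading the dictionary, along these lines (a minimal sketch; I am assuming from the note that online$ = 3 marks the leftover entry of a dropped tablespace, and I would not modify sys.ts$ directly without Oracle Support):

-- Entries left in the dictionary for the tablespaces involved in the move.
SELECT ts#, name, online$
FROM   sys.ts$
WHERE  name IN ('INVD', 'INVD2', 'INVX', 'INVX2', 'LOBD')
ORDER  BY ts#;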
I've got to collect statistics with Statspack. When spcreate.sql was called for the first time, however, it returned the following error:
If this script is automatically called from spcreate (which is the supported method), all STATSPACK segments will be created in the PERFSTAT user's default tablespace.
Using perfstat tablespace to store Statspack objects
create public synonym STATS$SNAPSHOT_ID for STATS$SNAPSHOT_ID
*
ERROR at line 1:
ORA-00955: name is already used by an existing object

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
And then I checked again to see who owns it:
SQL> select owner, synonym_name
  2  from dba_synonyms
  3  where synonym_name='STATS$SNAPSHOT_ID';
OWNER                          SYNONYM_NAME
------------------------------ ------------------------------
PUBLIC                         STATS$SNAPSHOT_ID
Then, I checked again after dropping schema PERFSTAT: