I'm in the midst of installing a patch on my UIM. However, in the planning phase, the Oracle guide recommends that I back up my database schema and the domain for UIM, in case the patching fails and affects the whole UIM app. May I know how to do this?
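For the schema, a Data Pump schema-level export is the usual approach; a minimal sketch, assuming the UIM schema is named UIM, with a made-up directory object and credentials:

-- on the database server, as a DBA user
SQL> create directory uim_bkp_dir as '/u01/backup/uim';
SQL> grant read, write on directory uim_bkp_dir to system;

-- schema-level export (schema name, directory and credentials are assumptions)
expdp system/password schemas=UIM directory=uim_bkp_dir dumpfile=uim_schema.dmp logfile=uim_schema.log

For the domain, UIM runs on WebLogic, so one common way is a file-system copy of the domain directory taken while the servers are stopped, e.g. tar -czf uim_domain.tar.gz $DOMAIN_HOME; check your guide's exact instructions for what it expects.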
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
Due to some network issues, we are planning to move our Oracle database from one domain to another, changing the domain name and IP address of the database server. For Oracle Database 10g (10.2.0.5.0) 32-bit on Windows, what settings need to be changed in the database?
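At minimum, the Oracle Net configuration files that embed the host name or IP need updating; a sketch of the relevant entries (the host names below are placeholders), both under %ORACLE_HOME%\network\admin:

# listener.ora -- change HOST to the new address
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = new-host.new-domain.com)(PORT = 1521)))

# tnsnames.ora -- same change for the client connect descriptors
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = new-host.new-domain.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))

After editing, restart the listener (lsnrctl stop, then lsnrctl start). If the instance registers itself against the old name, also check the LOCAL_LISTENER parameter and any hard-coded hosts in application connect strings.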
I am on 11.2.0.3 Enterprise Edition. We are using the new feature "Composite Domain Index" for a domain index on a very large table (>250,000,000 rows). It really works well with mixed queries. We added two number columns using FILTER BY. We have lots of DML on this table, therefore we execute synchronize and optimize once a week. The sync behaves pretty normally, but "optimize_index" takes a very, very long time to complete. I have switched on 'logging' for the optimize process. The $I table takes some time but finishes normally. But the optimization of the $S table (the table created for the CDI feature) has been running for over 12 hours now, and is far from finished. From the logfile, I can see that it optimizes 1000 rows every 20 minutes. Here is the output of the logfile:
Oracle Text, 11.2.0.3.0
14:33:05 06/26/12 begin logging
14:33:05 06/26/12 event
14:33:05 06/26/12 process $N for optimize: SEQDEV.GEN_GES_DESCRIPTION_CTX_I
14:33:16 06/26/12
14:33:16 06/26/12
[code]....
I haven't found a recommendation from Oracle against using "optimize_index" for domain indexes with CDI. But in my case, it would be much faster just to drop and recreate the domain index in question.
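For reference, the weekly maintenance described above in a minimal PL/SQL form (the index name is taken from the logfile; the MAXTIME cap is an assumption added to bound the run):

begin
  -- bring pending DML into the index
  ctx_ddl.sync_index(idx_name => 'GEN_GES_DESCRIPTION_CTX_I');
  -- FULL optimize capped at 120 minutes; a capped FULL run is resumable,
  -- so the next scheduled run continues where this one stopped
  ctx_ddl.optimize_index(idx_name => 'GEN_GES_DESCRIPTION_CTX_I',
                         optlevel => ctx_ddl.optlevel_full,
                         maxtime  => 120);
end;
/

Capping MAXTIME does not make the $S optimization faster, but it keeps a slow pass from monopolizing the maintenance window.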
I want to move the tables, with their data, from the user scott (full) to another schema named test. In my case scott is in the USERS tablespace, and for the test schema I have created a different tablespace named test_tbs.
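One way to do this is export/import with Data Pump, remapping both the schema and the tablespace; a sketch, assuming a directory object dp_dir already exists and scott's segments live in USERS:

expdp system/password schemas=SCOTT directory=dp_dir dumpfile=scott.dmp logfile=scott_exp.log
impdp system/password directory=dp_dir dumpfile=scott.dmp logfile=scott_imp.log remap_schema=SCOTT:TEST remap_tablespace=USERS:TEST_TBS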
I have Oracle 10g installed on my system and the name of the database is "ORCL", for which I have scheduled an incremental backup every day. Mentioned below are the steps followed:
*************PARAMETERS TO BE CHANGED******************
configure channel 1 device type disk format '\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\std_%U';
configure channel 2 device type disk format '\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\std_%U';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\cntrl_%U';
CONFIGURE RETENTION POLICY TO REDUNDANCY 7;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
*******************************************************
*******COMMAND FOR CONNECTING TO RMAN******************
rman LOG=\\192.16.17.140\dbbackups\192.16.17.152\oracle_rman_backup_incremental\rmanlog_%date:~4,2%-%date:~7,2%-%date:~10%.txt APPEND
CONNECT TARGET SYS/ORACLE@ORCL
*******************************************************
********INCREMENTAL BACKUP COMMAND*********************
RUN {
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
  BACKUP ARCHIVELOG ALL DELETE INPUT;
}
********************************************************************
Now I want to restore this backup to some other system with a new database. How do I do this recovery to another database on a new system?
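A rough outline of restoring this on a different system with only the backup pieces (the DBID and all paths below are placeholders; SET DBID is required because there is no mounted controlfile yet, and note that an autobackup format must contain %F):

rman TARGET /
RMAN> SET DBID 1234567890;
RMAN> STARTUP NOMOUNT;
RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'D:\restore\cntrl_%F';
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH 'D:\restore';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;

STARTUP NOMOUNT needs a minimal pfile prepared on the new system, and CATALOG START WITH registers the copied backup pieces with the restored controlfile.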
I want to take a backup of selected data from the tables of a schema (there is a lot of data that is not used, which is causing slow query performance). I plan to create a separate backup schema and tablespace to store the data from these tables, then write procedures that can move the data to and fro between the tables of those schemas, and create partitioned indexes on those backup tables.
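A minimal sketch of one such mover procedure (every name here is made up: app.orders as the live table, bkp.orders_arch as the backup-schema copy with identical structure, and a date cutoff as the selection rule):

create or replace procedure move_old_rows (p_cutoff in date) as
begin
  -- copy rows older than the cutoff into the backup schema's table
  insert into bkp.orders_arch
    select * from app.orders
    where  order_date < p_cutoff;
  -- then remove them from the live table; both steps commit together,
  -- so a failure in either statement rolls the whole move back
  delete from app.orders
  where  order_date < p_cutoff;
  commit;
end;
/

For very large volumes it is usually worth doing this in limited-size batches so undo usage and lock time stay bounded.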
I am facing a problem while taking a backup on a Windows 7 client of an Oracle 11g R2 database. I have installed Oracle 11gR2 for Windows on the Windows 7 machine. I have created a directory like below in the database.
On the database server:
SQL> create directory win_expdp_dir as 'd:\expimp';
Directory created.
SQL> grant read, write on directory win_expdp_dir to lab;
Grant succeeded.
On the Windows 7 machine (the client):
D:\app\product\11.2.0\client_1\BIN>expdp lab/lab@wbdata.wbh-db11g DIRECTORY=win_expdp_dir DUMPFILE=lab.dmp LOGFILE=lab.log
Export: Release 11.2.0.1.0 - Production on Mon May 2 12:51:44 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
I have given full sharing rights on the d:\expimp directory. My main question is why I'm getting this error. Is there anything missing in the setup? How do I take an export from a Windows 7 client?
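One thing worth checking: the directory object's path is resolved on the database server, not on the client running expdp, so d:\expimp must exist on the server machine and be writable by the account running the Oracle service. A quick sanity check from SQL*Plus:

-- confirm the path the server will actually use
select directory_name, directory_path from dba_directories
where  directory_name = 'WIN_EXPDP_DIR';

-- confirm the read/write grant reached the exporting user
select grantee, privilege from dba_tab_privs
where  table_name = 'WIN_EXPDP_DIR';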
I am trying to clone a database onto another server with a different directory structure, so the paths on the source DB server are /u04 whereas on the target DB server they would be /u03.
Since I am testing this on a small database initially, I have kept all datafiles, backups and archivelogs under /u04 and /u03 on the source and target DB servers respectively. Now I have copied the backups from the source DB server to the target, and since the path changed, I cataloged them.
However, during the restore I am getting:

ORA-19870: error reading backup

Here are the session details:
RMAN> catalog start with '/u03/oradata/db7fra';
searching for all files that match the pattern /u03/oradata/db7fra
List of Files Unknown to the Database
=====================================
File Name: /u03/oradata/db7fra/DB7/archivelog/2011_03_15/o1_mf_1_16_6qyvpb3w_.arc
File Name: /u03/oradata/db7fra/DB7/backupset/2011_03_15/o1_mf_annnn_TAG20110315T123018_6qypyv5v_.bkp
[code].....
I have altered permissions on the backup files as well, but to no avail:
oracle@dev-biz:/u03/oradata/db7fra/DB7/backupset/2011_03_15 $ls -ltr
total 545260
-rwxrwxrwx 1 oracle dba 12419072 Mar 15 13:52 o1_mf_ncsnf_TAG20110315T123008_6qypyrz2_.bkp
-rwxrwxrwx 1 oracle dba     3072 Mar 15 13:52 o1_mf_annnn_TAG20110315T125043_6qyr54to_.bkp
-rwxrwxrwx 1 oracle dba   426496 Mar 15 13:52 o1_mf_annnn_TAG20110315T125006_6qyr3zk4_.bkp
-rwxrwxrwx 1 oracle dba    14336 Mar 15 13:52 o1_mf_annnn_TAG20110315T123018_6qypyv5v_.bkp
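For the path change itself, the usual shape of the restore is a SET NEWNAME run (a sketch; the target datafile path is an assumption, and SET NEWNAME FOR DATABASE with %b needs 11gR2 -- on older releases use SET NEWNAME FOR DATAFILE per file):

run {
  # relocate all datafiles under the new mount point, keeping the file names (%b)
  set newname for database to '/u03/oradata/DB7/%b';
  restore database;
  # record the new locations in the controlfile
  switch datafile all;
  recover database;
}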
I've read a lot about the different types of backup available with Oracle (hot and cold backup). However, I was thinking of a different way of performing this task. I'm currently using Windows Server 2008 R2 and Oracle 11g Standard Edition. I'd like to schedule an entire backup of my server via the "Windows Server Backup" utility (available for free). That way, I could recover my entire server with all its programs and files in case it crashes. I'm wondering if this solution could be used as a way of backing up (and recovering) the Oracle database. Should I still set up a regular hot backup with ARCHIVELOG mode enabled, in case some operations/transactions were in progress at the time of the crash (for data integrity)?
SQL> SELECT * FROM V$VERSION;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
After I back up my database, I check the alert log, and I found the following error:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
Mon May 14 09:19:42 2012
Errors in file /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc:
[code]....
The trace file for process 26967:
[oracle@shenzhengair archivelog]$ cat /u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc
/u01/app/oracle/admin/szcargo/udump/szcargo_ora_26967.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
An online backup is done through RMAN. Suppose I am taking an online backup of the full database. During the backup, users are inserting/deleting/modifying data, and this activity is captured in the archived logs. Once the database backup is finished, how are these archives applied to the database to bring it up to date?
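For illustration, the archived logs are applied during recovery, not during the backup itself; a bare-bones restore-and-recover looks like this:

RMAN> RESTORE DATABASE;    # lay down the datafiles from the online backup
RMAN> RECOVER DATABASE;    # RMAN applies incrementals, then archived logs, automatically
RMAN> ALTER DATABASE OPEN;

The RECOVER step is where RMAN reads the archived logs generated while the backup ran and rolls the fuzzy datafiles forward to a consistent point.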
I need to back up the entire database, without archived logs, while the database is open for user activity; this backup should also be the base for an incremental backup strategy.
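That description maps to an RMAN incremental level 0 backup of an open database (which requires ARCHIVELOG mode); a minimal form:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   # full copy of used blocks, no archived logs included
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   # later runs back up only blocks changed since level 0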
We are running a 5 TB database backup which takes a really long time and generally impacts performance while it runs. We have a Data Guard standby database in place which we want to utilize to run the RMAN backups, to avoid the performance impact and use the DG database for backup and DR purposes. My question is: if we run the backups on the Data Guard database, how would an RMAN backup on the DG database clear the archived logs from the primary production database and from the Data Guard database during the backup process?
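The piece that usually ties this together is the archived-log deletion policy, configured on each site (a sketch; a recovery catalog is assumed so that backups taken on the standby are visible as backups of the primary):

# on the primary: archived logs become eligible for deletion only once applied on the standby
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

# on the standby: back up and remove the logs in one step
BACKUP ARCHIVELOG ALL DELETE INPUT;

With a fast recovery area, logs that satisfy the deletion policy on the primary are aged out automatically when space is needed; the standby's backup job does not delete the primary's copies itself.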
I need to restore a backup of database A into database B. Both have the same name. Database B is already up and running.
I have a full RMAN backup of database A (it was taken with a recovery catalog, which I don't have access to now as it was deleted). I just have the full backup pieces, including the controlfiles. Is it possible to recover this database into database B from this standpoint?
I was thinking:

2. Shut down database B.
3. Mount database B. It has the same name as database A.
4. With RMAN, restore the controlfile. Will a new controlfile be created from the backup directory where I have the RMAN pieces for the full backup?
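A sketch of that flow without a recovery catalog (note the controlfile restore has to happen in NOMOUNT, before the mount step, because database B's own controlfile knows nothing about A's backup; the restore overwrites B's controlfiles at the locations named in CONTROL_FILES; the piece and directory names below are placeholders):

RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/backup/A/ctl_autobackup_piece.bkp';
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/backup/A/';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;

CATALOG START WITH makes the copied pieces known to the freshly restored controlfile, standing in for what the deleted recovery catalog used to track.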
I have created a local partitioned index with the parameter NOPOPULATE:

create index lb_text_idx on ac_maint (DISC_CMPLNT_TX)
  indextype is ctxsys.context local
  parameters ('lexer skipJoinLexer datastore logbookTextDS NOPOPULATE');
When a new value of project_id is inserted into table1, I create a partition using the following command, where prjId is the new value of project_id:

ALTER TABLE EVENT split PARTITION pmax AT ('||prjId||')
  INTO (PARTITION p_'||prjId||', PARTITION pmax)
Now I run a huge data load into the EVENT table for a project id, say 1, and insert millions of records. I want to rebuild lb_text_idx for this partition alone so that the data is available for searching by the application. I did a rebuild using the command "alter index lb_text_idx rebuild partition p_1".
However, when I run the SQL below, I get a count of 0. I know that the text MAINT is definitely present in the table.
SELECT COUNT(1) FROM EVENT
WHERE  PROJECT_ID = 1
AND    contains(DISC_CMPLNT_TX, 'MAINT') > 0
When I run the SQL select parameters, status from user_ind_partitions where partition_name = 'P_1', I get parameters as null and status as USABLE. Am I getting 0 records because the parameters are set to null? How do I rebuild with the same parameters for this specific partition alone?
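Two notes worth checking against the Oracle Text documentation for your release. First, the lexer and datastore are index-level preferences, so a null PARAMETERS column for a partition does not by itself mean they were lost. Second, since the partition was created NOPOPULATE, rows loaded after the index partition existed sit in the pending queue until they are synced; a hedged alternative to the plain rebuild is to sync just that partition (the memory size here is an assumption):

begin
  -- index the pending rows for partition P_1 only
  ctx_ddl.sync_index(idx_name  => 'LB_TEXT_IDX',
                     memory    => '500M',
                     part_name => 'P_1');
end;
/

After the sync, the CONTAINS query above should start returning rows if the pending queue was the cause.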