Rolling Upgrade Of CRS / ASM / DBHomes And Database From 10.2.0.4 To 10.2.0.5?
Nov 5, 2012
I have an upgrade question. My customer wants to minimize downtime and so wants to do a rolling upgrade of CRS, ASM, DB homes and databases from 10.2.0.4 to 10.2.0.5.
The customer wants to upgrade server1 (CRS/ASM/DB homes) during normal business hours, then move the databases from server2 to server1 and upgrade them during planned downtime, then upgrade server2 (CRS/ASM/DB homes) during normal business hours.
Three steps, taking three days to complete. Is this possible? I know CRS can be done as a rolling upgrade, but what about ASM?
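As a side note, a minimal sketch of how the clusterware state could be checked on each node while a rolling CRS upgrade is in flight (assuming 10.2 clusterware; $CRS_HOME is a placeholder for the clusterware home):
$CRS_HOME/bin/crsctl query crs activeversion     # cluster-wide active version; stays at the old version until all nodes are done
$CRS_HOME/bin/crsctl query crs softwareversion   # software version installed on the local node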
We have a physical Data Guard configuration on version 10.2.0.4. We need to upgrade the primary and standby databases to 11gR2. Can we perform a rolling upgrade?
I am trying to find reference note IDs for a DB upgrade from 11.2.0.2 to 11.2.0.3.2. I am finding only Exadata notes, which I don't want; I need notes for an E-Business Suite database, on OS Solaris 10 9/10 s10s_u9wos_14a SPARC.
Along with existing RMAN backups we do exports of our DB using an OS user and Oracle Wallet. On the DBs we have upgraded, the Data Pump directory can be checked with:
Select * from dba_directories; (there are other commands to get this info as well).
I captured screens from the DBUA upgrades, but did not see an option to change this information. Is there a way to feed this information to the install moving forward, i.e., ./DBUA -silent?
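For reference, a minimal SQL sketch of checking and re-pointing the Data Pump directory after an upgrade (the directory name DATA_PUMP_DIR and the new path are assumptions, not taken from the post):
SELECT directory_name, directory_path FROM dba_directories;
-- re-point the directory object if the path changed with the new home
CREATE OR REPLACE DIRECTORY data_pump_dir AS '/u01/app/oracle/admin/ORCL/dpdump';
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO system;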
Also, has anyone tracked the percentage of storage increase from 10.2/11.1 to 11.2?
We are planning to upgrade our database from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit to Oracle Standard Edition 11g. We also have Oracle APEX 3.1 installed on that Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit database.
Now our plan is to upgrade the database and Oracle APEX to 4.2. Since Oracle Enterprise Edition licensing is very expensive, we thought of buying Standard Edition and upgrading to that version.
Can we upgrade the Oracle database from Enterprise Edition to the latest Standard Edition?
After many tries at upgrading Oracle I decided to make a backup of my database, then remove 11.2.0.1 and install 11.2.0.3 to recover the DB. Unfortunately I am not sure how I should perform the restoration. I backed up the data by calling RMAN> backup database include current controlfile;
After that I moved the files from the fast_recovery_area and cleared my machine of the current Oracle release. I also made a copy of some directories.
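Not an authoritative procedure, but a minimal RMAN sketch of the restore under these assumptions: the backup pieces were moved to a hypothetical /backups directory, and an init.ora/spfile for the instance exists in the new 11.2.0.3 home. The catalog command registers the relocated backup pieces.
rman target /
RMAN> startup nomount;
RMAN> restore controlfile from '/backups/<controlfile_backup_piece>';
RMAN> alter database mount;
RMAN> catalog start with '/backups/';
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;
Because the backup was taken under 11.2.0.1 and the new binaries are 11.2.0.3, expect the open step to require the UPGRADE option and a catalog upgrade (catupgrd.sql or DBUA) as described in the 11.2.0.3 patch set notes.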
I want to convert this query to return just a single row for cidterr.rnam, cidterr.rnum, cidterr.tnam, cidterr.tnum,
with an average sum by week, similar to how if I did a sum by week from the original query, placed the results into an Excel pivot, and said show total as average.
Main aim: to find all those IDs who have taken all the tests within a rolling window of 45 days.
I have a table MBS_FIRST_DATE with the following data. This table has the patients who have the test along with the first date. It is derived such that it has only one record per patient, with the first date of the test irrespective of the test.
create table MBS_FIRST_DATE ( medical_record_number VARCHAR2(600), requested_test_name VARCHAR2(39), result_date DATE
[code]..
Process (explained with one patient ID):
1) Consider the patient 1001274 from the mbs_first_date table.
2) This patient has a date of July 08th 2008 and test SBP from the first table (keep this test as an anchor).
3) For the patient above, loop through the all_recs table with test and result date ordered for the patient (excluding SBP).
4) The first record we have is CHL with 08/05/2009 (May 8th 2009).
5) Since this record is not within 45 days from the SBP date for the patient, we go to the next record of SBP for the patient.
6) The next record for SBP is 11/05/2009 (May 11th 2009).
7) Consider the CHL date again, which is 08/05/2009 (May 8th 2009). Since both are within 45 days, store both values, keeping the SBP date as the anchor date as it's the test that has the minimum date from table 1.
8) Even though there is one more CHL date which is within 45 days from SBP, we don't care about it.
9) Go to the next test for the same patient, which is DBP.
10) The DBP first date is July 08th 2008.
11) Since it's not within 45 days from the previously stored SBP date (11/05/2009), ignore the record.
12) Go to the next record, which is 10/05/2009; as this is within 45 days from SBP, and CHL (stored date) is already within 45 days, grab all 3 dates as all are within 45 days from the anchor date (SBP date).
So the output will be:
1001274 SBP 11/05/2009
1001274 CHL 08/05/2009
1001274 DBP 10/05/2009
Code which I wrote (I know somewhere I am missing the loop):
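The code itself is not reproduced here, but as a rough sketch of the anchor-window idea (assuming the second table is ALL_RECS with the same three columns; this only checks each test against the patient's first-date anchor and does not implement the "slide the anchor to the next occurrence" step described above):
SELECT a.medical_record_number,
       a.requested_test_name,
       MIN(a.result_date) AS first_date_in_window
FROM   all_recs a
JOIN   mbs_first_date f
  ON   f.medical_record_number = a.medical_record_number
WHERE  a.result_date BETWEEN f.result_date AND f.result_date + 45
GROUP  BY a.medical_record_number, a.requested_test_name;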
I have a shell script which invokes a SQL file. However, even with AUTOCOMMIT OFF and WHENEVER SQLERROR EXIT ROLLBACK, SQL*Plus fails to roll back.
My SQL file has 3 INSERT statements; 2 are correct and 1 is incorrect (missing closing quote). For example:
INSERT INTO TEST_ROUTING VALUES (24, 'ROUTING');
INSERT INTO TEST_ROUTING VALUES (25, 'ROUTING');
INSERT INTO TEST_ROUTING VALUES (26, 'ROUTING);
Let's say the file is called 1.sql.
My shell script invokes this SQL as follows: (Where $File1 = 1.sql)
$SQLPLUS_PATH/sqlplus -s /nolog <<-EOF>> ${LOGFILE}
connect $DB_USER/$Password1@$Database1
SET AUTOCOMMIT OFF
@$File1
WHENEVER SQLERROR EXIT ROLLBACK;
EOF
[code]......
So I tried SET AUTOCOMMIT, tried WHENEVER SQLERROR ROLLBACK, and tried a few variations.
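One thing worth noting: WHENEVER SQLERROR only affects statements issued after it, so placed after @$File1 it never fires for those inserts. A sketch of the reordered heredoc, using the same variables as above:
$SQLPLUS_PATH/sqlplus -s /nolog <<-EOF >> ${LOGFILE}
connect $DB_USER/$Password1@$Database1
WHENEVER SQLERROR EXIT ROLLBACK
SET AUTOCOMMIT OFF
@$File1
EXIT
EOF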
I have a 2-node RAC environment (11.2.0.3) where each node has its own local Grid_home and RDBMS_home.
I am installing a Rolling Bundle Patch with OPatch in this environment. The installation document says that "The order of patching in RAC install is GRID_HOME, then RDBMS_HOME", so I did the following:
1. Stopped all Oracle-related services on node1
2. Set oracle_home=<Grid_home>
3. Applied the patch with OPatch
4. OPatch succeeded on node1 and it says "The node 'NODE2' will be patched next... Is the node ready for patching?"
My questions:
1. Should I shut down the Oracle services on node2 and continue to patch the Grid_home? If yes, then the DB will be completely down for user access, which defeats the purpose of rolling mode, which says there is no downtime.
2. Should I patch the RDBMS_home on node1, start all the Oracle services on node1, stop the Oracle services on node2, and then resume the OPatch session on node1 which is waiting to patch the Grid_home on node2?
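For what it's worth, a conceptual sketch of the usual rolling order on an 11.2 RAC with local homes (database and node names are placeholders; the authoritative per-node steps, including any GI home unlock/lock, come from the patch README):
# node1: stop everything running from the homes being patched, on this node only
srvctl stop instance -d MYDB -i MYDB1
<GRID_HOME>/bin/crsctl stop crs
# patch GRID_HOME and then RDBMS_HOME on node1 with opatch, then restart the stack
<GRID_HOME>/bin/crsctl start crs
srvctl start instance -d MYDB -i MYDB1
# only then repeat the same sequence on node2; node1 stays up, so users keep one node throughout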
I want to know whether there is any difference between upgrading a SID and upgrading a database. While upgrading my database from Oracle 11.2.0.2 to 11.2.0.3 in DBUA, it shows my SID (ORCL) is being upgraded, but I have a few other databases, e.g. test and prod.
Is it enough if we upgrade the one SID, or must we perform other actions as well?
And my second query:
I have 2 Oracle homes on my server with different versions (11.2.0.2 and 11.2.0.3), and I have a few databases and their tablespaces. How do I determine which database was created on which version, and which database was upgraded from 11.2.0.2 to 11.2.0.3?
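A small SQL sketch of checks that can answer this, run inside each database:
SELECT banner FROM v$version;
-- upgrade and patch actions recorded by previous catupgrd/catbundle runs
SELECT action_time, action, version, comments
FROM   dba_registry_history
ORDER  BY action_time;
On the server itself, /etc/oratab shows which Oracle home each SID is associated with.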
When upgrading the database from 10.2.0.1.0 to 10.2.0.4.0, I am getting the following error:
$ ./runInstaller
Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be redhat-3, SuSE-9, SuSE-10, redhat-4, redhat-5, UnitedLinux-1.0, asianux-1, asianux-2 or asianux-3    Passed
All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-09-07_01-25-05AM. Please wait ...
error: invalid compressed data to inflate /tmp/OraInstall2013-09-07_01-25-05AM/oui/jlib/ewt3.jar
error: invalid compressed data to inflate /tmp/OraInstall2013-09-07_01-25-05AM/oui/guide/htmlguide.jar
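"invalid compressed data to inflate" usually points at a corrupt download or a full /tmp. A sketch of the first checks, with the archive name as a placeholder:
df -h /tmp                    # OUI unpacks itself under /tmp; make sure there is free space
unzip -t <patchset>.zip       # test the archive for corruption
cksum <patchset>.zip          # compare with the checksum published on the download page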
We have a very critical application running and the backend database is 10.2.0.3. We are planning to upgrade from 10g to 11.2.0.2 and are looking for ways to do it with minimal downtime of production. What are the steps for an upgrade with very minimal application downtime?
I tried upgrading the Oracle database from version 11.2.0.1 to 11.2.0.3 but I have no clear procedure. I chose the installer option "Upgrade database" and installed the software in a new home. To migrate, must all Oracle services be stopped? In the migration phase, is it normal for it to ask me for listener details, etc., or does the installer now detect the existing database?
I want to upgrade our database from version 9.2.0.7 to 10g (10.2.0.2).
I know that if I set the COMPATIBLE parameter to 10.2.0.2 right away, then I can't downgrade to 9.2.0.7 if any problem occurs. I am not sure whether our application can deal with 10g or not, so I think it's better to leave the COMPATIBLE parameter at 9.2.0 for several days to be sure, and if all goes well then change COMPATIBLE to 10.2.0.2. (I need to say that I can't test our application in a test environment.)
Now, do you think leaving COMPATIBLE at 9.2.0 (after the upgrade) for many days can cause any problem for the database?
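For completeness, a minimal sketch of checking the current setting and raising it later, once confident (raising COMPATIBLE is one-way and takes effect only after a restart):
SHOW PARAMETER compatible

ALTER SYSTEM SET compatible = '10.2.0.2' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP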
I am trying to upgrade a 32-bit database from 10.2.0.1 to 10.2.0.4 on 32-bit Linux. I hit a version incompatibility error during the patch set installation, and hence ran the patchset installation with the -ignoresysprereqs option.
Now, during the patchset installation, I encountered the below error in the install logfile:
**************************************
INFO: Start output from spawned process:
INFO: --------------------------------------------------------------------------------
INFO:
INFO: /u01/app/oracle/product/10.2.0/bin/genclntsh
INFO: genclntsh: genclntsh: Could not locate /u01/app/oracle/product/10.2.0/network/admin/shrept.lst
INFO: make: *** [client_sharedlib] Error 1
INFO:
INFO: End output from spawned process.
INFO: --------------------------------------------------------------------------------
INFO: Exception thrown from action: make
Exception Name: MakefileException
Exception String: Error in invoking target 'client_sharedlib' of makefile '/u01/app/oracle/product/10.2.0/network/lib/ins_net_client.mk'. See '/u01/app/oracle/oraInventory/logs/installActions2012-06-24_11-45-11AM.log' for details.
Exception Severity: 1
***********************************************
Please provide the certified OS platforms and versions for Oracle DB 10gR2.
After upgrading 11gR1 database (11.1.0.7.0) to 11gR2 (11.2.0.3.0), the datapump exports have been taking quite a bit longer. When database was 11gR1, a full expdp took approx. 40-45 minutes. After upgrade, it takes approx. 1 hour 40-50 minutes. These times were with parallel=4. I tried with parallel=8 and parallel=12, both of these took around 1 hour 5-10 minutes, better but still quite a bit slower than pre-11gR2 upgrade. I tried with exclude=statistics, index_statistics, indexes; it still took approx. 1 hour 40-45 minutes. This is a PeopleSoft database so there are many, many objects to be exported. The database was upgraded using dbua.
We have our Oracle RAC database as the primary and a standby database as DR. It is running 10g (10.2.0.1.0) on the HP-UX 11i platform.
We need to upgrade both setups from 10.2.0.1.0 to 10.2.0.4.0. I will go through the upgrade guide, but I would like to know whether there is any special consideration when upgrading both the primary and standby databases.
Also, do I need to create a new Oracle home with the 10.2.0.4.0 software, or can I upgrade in the same Oracle home which already exists?
I need to upgrade a database from 9.2.0.6.0 to 11.1.0.7.0, but the 11.1.0.7.0 database is not available for download in Oracle downloads. How do I download 11.1.0.7.0, and where are the links for the upgrade process?
I need to migrate database from Windows to Linux. The current size is ~50GB.
Current Env: OS = Windows 2003 DB version = 10.2.0.3
Proposed Env: OS = Linux DB version = 11.2.0.2
Would using Data Pump be a correct choice for this migration? Also, do the steps below seem correct?
01. Pre-create tablespaces on target 11g database
02. Export full database of source 10g database
03. Copy dump file to destination server
04. Grant IMPORT_FULL_DATABASE system privilege to user SYSTEM of target 11g database
05. Import full database into target 11g database
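A sketch of the export/import commands behind those steps (the directory object DP_DIR and the file names are assumptions):
-- On the 10.2.0.3 source (Windows):
expdp system/<password> FULL=Y DIRECTORY=dp_dir DUMPFILE=full_db.dmp LOGFILE=full_exp.log
-- On the 11.2.0.2 target (Linux), after copying full_db.dmp into the directory path:
impdp system/<password> FULL=Y DIRECTORY=dp_dir DUMPFILE=full_db.dmp LOGFILE=full_imp.log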
I have installed Oracle 11gR2 on Windows and created a database of version 11.2.0.1.0. I tried to upgrade the database to a higher version using the DBUA utility, but in this utility both source and destination are showing as 11.2.0.1.0. How do I upgrade the database from 11.2.0.1.0 to a higher version using the DBUA utility?
I am trying to upgrade an Oracle 9.2.0.8 database to Oracle 11.2.0.2. I have installed the new Oracle home for 11g and, after running the pre-upgrade script and setting all the environment variables like ORACLE_BASE, ORACLE_HOME, ORACLE_SID and ORA_NLS,
I am running the DBUA, but after clicking the Finish button it does about 2% of the processing and then I get:
-------------------------------------------------
ORA-12709: error while loading create database character set
--------------------------------------------------
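ORA-12709 at that point is often tied to the NLS data environment variable pointing at the old home. A sketch of the environment check worth doing before rerunning DBUA (the home path is a placeholder):
# ORA_NLS33 is the 9i-era variable; for the 11g home the variable is ORA_NLS10
unset ORA_NLS33
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/dbhome_1
export ORA_NLS10=$ORACLE_HOME/nls/data
$ORACLE_HOME/bin/dbua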
When we upgraded the database from 10g to 11g, Apex upgrade failed. Due to the failure, I removed the old schema and did a fresh install of Apex 4.1. It is working but we lost the applications from previous Apex.
What is the best strategy when you need both database and Apex upgrades? We are a Linux shop, setting up Apex in an Oracle Fusion Middleware 11g environment.