Oracle Clustered Production Environment
May 26, 2013
I have a requirement to upgrade Oracle Data Integrator (ODI) from 10.1.3.5 to 11.1.1.6.3. We have a clustered production environment where N1 will be up while N2 is down, and vice versa.
Here N1 and N2 are the ODI servers as well as the DB (11g Release 2) servers. They both access the shared clustered database. From ODI we generally point to the Oracle clustered IP (virtual IP), which internally points to either N1 or N2, whichever is active. On the ODI application side we are clear about the procedure.
We have some questions about the DB-related activities:
1. Do I definitely have to break the cluster? Can't I do the activity without breaking the cluster?
2. Do I need to point to N1, N2, or the clustered (virtual) IP while doing the activities?
3. Since it is a clustered database, do I need to do the DB-related activities once or twice? (Twice meaning manually on both servers.)
4. As they use the same file structures (RAC), if the virtual IP points to N1 by default, assume that I create two new users a
View 1 Replies
Feb 6, 2013
I faced a problem on our production primary DB: the listener was running, but remote systems could not connect when accessing the primary database.
When I restarted the listener, connections started working again. I want to know the reason for this crash/freeze.
Oracle version: Oracle Database 11g Release 11.2.0.1.0 SE1 64-bit Production
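If it happens again, it may be worth capturing the listener's view of things before restarting it (a sketch only; LISTENER is the default listener name and may differ in your setup):

lsnrctl status LISTENER
lsnrctl services LISTENER      # shows whether the DB service is still registered and its handler state
# if the service list is merely stale, a reload is less disruptive than a full restart
lsnrctl reload LISTENER

Comparing that output with the listener log (its location depends on your ADR/diag settings in 11.2) around the time of the hang usually shows whether the listener itself froze or the database simply stopped registering its service.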
View 3 Replies
View Related
Jun 25, 2013
What patch strategy is recommended, especially for a production environment? We have a production environment and we are planning to patch our databases using PSU patches with no downtime. Our production environment has a standby. One option we have is to make the standby the primary. Are there any other options we can use?
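One option, where the database version supports it (a sketch; 'stby_db' is a placeholder broker name and this assumes Data Guard broker is configured), is a standby-first approach: patch the standby home, switch over so the patched side runs production, then patch the old primary:

# on the standby node, with its instance shut down
opatch apply
# once the standby is mounted again on the patched home
dgmgrl sys/password
DGMGRL> SWITCHOVER TO 'stby_db';
# repeat opatch apply on the former primary, then switch back if desired;
# the SQL part of the PSU (catbundle.sql or datapatch, depending on version)
# still has to be run once against the primary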
View 2 Replies
View Related
Jul 8, 2011
I have to drop some partitions from a table in a production environment (to free space). The environment has to be continuously available. I was considering using ALTER TABLE ... DROP PARTITION ... UPDATE INDEXES, but it is slow because of the UPDATE INDEXES clause. Is there another possibility to remove this data?
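For reference, the two usual variants look like this (a sketch; table, partition and index names are made up). Local indexes on the partitioned table need no maintenance at all when a partition is dropped; only global indexes force the choice between UPDATE INDEXES and a later rebuild:

ALTER TABLE sales DROP PARTITION p_2009 UPDATE GLOBAL INDEXES;
-- or, if a window with an unusable global index is acceptable, the drop itself is instant:
ALTER TABLE sales DROP PARTITION p_2009;
ALTER INDEX sales_gidx REBUILD ONLINE;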
View 2 Replies
View Related
Sep 25, 2013
What is a non-clustered index, and how do I create one?
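In Oracle terms, an ordinary B-tree index on a heap table is the closest equivalent of what SQL Server calls a non-clustered index, while the "clustered" side corresponds roughly to an index-organized table. A minimal sketch (table and column names are made up):

CREATE INDEX emp_last_name_idx ON employees (last_name);
-- contrast with an index-organized table, where the rows themselves are stored in key order:
CREATE TABLE emp_iot (
  emp_id    NUMBER PRIMARY KEY,
  emp_name  VARCHAR2(100)
) ORGANIZATION INDEX;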
View 10 Replies
View Related
Jun 19, 2012
I'm trying to move my backup sets from a Windows database environment to an OEL 5.7 environment on another server.
I've found a manual [URL] by which I am trying to do it. I took the backup sets from last night's backup using RMAN, and the current parameter file (initSID.ora) from the running live database. Now I need to configure the control files in the pfile accordingly.
1. Can I take the current control files from the running system to restore and recover the backup sets from last night, to the state the database was in at backup time?
2. How can I find out if the control files are backed up and known to RMAN? "list backup completed after '2012-JUN-19';" gives me archived redo logs and datafiles, but I don't see the control files (or don't recognize them).
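On the control file questions, a couple of hedged points (I can't see your configuration): for a restore on another server to the state at backup time you would normally use a control file that RMAN backed up with last night's backup, not the live one, and then RESTORE/RECOVER ... UNTIL TIME. Whether such a backup exists is easy to check from RMAN:

RMAN> LIST BACKUP OF CONTROLFILE;
RMAN> SHOW CONTROLFILE AUTOBACKUP;          # shows whether autobackup is ON
# on the new server, after SET DBID ... and STARTUP NOMOUNT:
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;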
View 15 Replies
View Related
Jan 1, 2013
I am trying to use APEX (as a substitute for my PHP forms-based application) for a new client. Can I use APEX and Oracle XE in production without spending any money on a license from Oracle, i.e. for the listener, web server, etc.?
I already have APEX set up and working fine on my laptop, and I am able to see and work with the forms in the browser, so can I take the same setup to a server as production?
View 4 Replies
View Related
Apr 25, 2013
We just got a new Dell R720 server that will host our Oracle DB. The server hasn't even been turned on yet but we know that the load on the server will be very low for a long time.
One of our problems is that we need to run a VERY important application. Since it is not very resource-consuming compared to its importance, we chose to run it on a not-so-new Xeon 5110 1.60 GHz / 4 GB RAM server. The vendor said it's not a good idea and that we should buy a new server (money is very tight).
The software vendor suggested virtualizing our R720 server: host a VM running our database and, alongside it, other smaller machines like the one I described above. I suggested using Oracle VM and Oracle Linux for the database host and converting the physical servers to VMs with P2V.
Our IT Manager didn't like that; he said it's not recommended to run a database on a virtual machine. But our software vendor said that many of their clients run their solution this way.
View 7 Replies
View Related
Nov 1, 2010
Where can I download the Oracle software for building a production server, and what are the steps?
View 1 Replies
View Related
Jul 18, 2013
Is there a link where I can get Oracle 9i for the Linux OS? Is 9i available for 64-bit?
View 4 Replies
View Related
Aug 23, 2012
We are planning to put the same Oracle environment on two machines.
What is the best way to accomplish that: installation on each machine, or installation on one machine and copying the environment to the second?
View 8 Replies
View Related
Aug 29, 2012
Recently I tried to upgrade from Oracle 11.2.0.2 to 11.2.0.3 in a test environment (a virtual machine).
There I set the SID to ORCL, but when we took the same snapshot to another machine (VM), I was unable to connect to the VM when I configured the local net service name as ORCL. When I ran the query "select * from GLOBAL_NAME" the output was ORCL.DOMAIN.COM, and after I configured the local net service name as ORCL.DOMAIN.COM it started working.
But I had given the SID as ORCL.
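A likely explanation: with db_domain set, the instance registers with the listener under its global service name (ORCL.DOMAIN.COM here), so a connect descriptor asking for SERVICE_NAME=ORCL will not match even though the SID is still ORCL. A tnsnames.ora entry along these lines is the usual fix (the host name is a placeholder); alternatively, request the SID explicitly:

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = vm-hostname)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORCL.DOMAIN.COM))
  )
# or keep the short alias but connect by SID instead of service:
#   (CONNECT_DATA = (SID = ORCL))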
View 5 Replies
View Related
Sep 23, 2013
I have a scenario in which I have, say, 4 AQs to which I will post messages. I also have, say, 2 databases. I am planning to create an MDB which will poll these AQs, so whenever I post a message the MDB will read it and perform a specific action. I believe I can create only one MDB per queue; if so, then I have to create 8 MDBs,
as there are 2 data sources and 4 MDBs per source. Is there any other way to handle this, without creating 8 MDBs? The data sources can increase to 10 or 20, so the number of MDBs would grow to 20 to 40, and I guess this will affect application performance. Can I make some changes in the application so that only a few MDBs are required?
View 0 Replies
View Related
Jun 28, 2013
How can we bring down the databases in an Oracle Fail Safe environment?
We have one database, X, on two Windows servers, A and B, in an Oracle Fail Safe environment.
What procedure should we follow to bring down database X?
Today I was struggling to bring down the database because it kept coming back up automatically once I brought it down. What procedure should we follow to bring down a database in an OFS environment?
View 1 Replies
View Related
Dec 13, 2012
I am reinstalling Oracle 11g on my Windows 7 64-bit machine after uninstalling it. However, I ran into an issue: the environment variable PATH check failed during the installation process. This didn't happen when I first installed Oracle successfully.
View 7 Replies
View Related
Jul 15, 2008
I am running a job that uses Pro*C code. I am running it on an Oracle 10g database with an Oracle 9 client on a UNIX platform. The code compiled fine. The job runs fine sometimes, but other times it fails with a Segmentation Fault error.
The same job runs in an Oracle 8i environment with no problems.
View 1 Replies
View Related
Jul 26, 2012
How do I change non-SYS Oracle users' passwords in a Data Guard environment? We all know the drill for a SYS password change in Data Guard: the DBA changes it on the primary database, either with "alter user SYS identified by xxxx" or by recreating the password file with orapwd,
and then scp's the password file to the standby database. However, if I want to change the SYSTEM or DBSNMP passwords, I change them on the primary with an "alter user ..." statement, and the new passwords go into the data dictionary. Will this new SYSTEM password be shipped with the redo to the standby, and will the SYSTEM password on the standby be updated? I need a technical answer to this question.
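For what it's worth, non-SYS password changes are ordinary data dictionary updates, so they are carried to the standby by redo apply like any other change; only SYS is special because it lives in the password file. A sketch (passwords are placeholders):

-- on the primary; the change reaches the standby through normal redo apply
ALTER USER system IDENTIFIED BY new_system_password;
ALTER USER dbsnmp IDENTIFIED BY new_dbsnmp_password;

-- SYS is the exception: regenerate the password file and copy it to the standby, e.g.
--   orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=new_sys_password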
View 7 Replies
View Related
Feb 7, 2012
We are in the process of setting up a DR environment for our SAP and Oracle databases. NetApp and our architects came up with the following solution:
1. Standby databases are built for all production databases.
2. The SAP file systems are replicated to the secondary site.
3. The Oracle log files and control files are replicated by NetApp SnapMirror at a 10-minute interval.
4. The database is recovered via RECOVER STANDBY DATABASE every 15 minutes at the standby site.
5. Please note there is no Data Guard involved.
6. To test the failover, the mirror is broken. The standby control file is replaced with the production control file and redo log files.
7. The standby database was issued a startup command and it worked.
I would like to know whether step 6 is a correct approach. I tried to convince the architects that this will result in a very disastrous situation for us, but no one listened.
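For what it's worth, the more conventional way to open a manually recovered (non Data Guard) standby during a failover test keeps the standby control file in place and activates the standby, rather than overwriting it with the production control file. A rough sketch, run on the standby in SQL*Plus:

-- apply whatever archived logs SnapMirror has delivered so far
RECOVER STANDBY DATABASE;
-- convert the standby control file into a primary one (this ends its life as a standby)
ALTER DATABASE ACTIVATE STANDBY DATABASE;
-- then open the database; a clean SHUTDOWN IMMEDIATE / STARTUP is the cautious route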
View 3 Replies
View Related
Jan 12, 2013
How do I integrate Application Express with Oracle EBS? Are there any companies that offer a hosted EBS environment with APEX and allow developer-level access (i.e. database, application server, front-end responsibilities)?
View 1 Replies
View Related
Mar 23, 2013
How can I avoid the user ID and password prompt when connecting to an Oracle DB from a Unix environment?
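One supported way to avoid the prompt (a sketch; the wallet directory, TNS alias and credentials are placeholders) is a secure external password store, so scripts connect with just a TNS alias and no password in the script:

# create the wallet and store a credential for the alias PRODDB
mkstore -wrl /u01/app/oracle/wallet -create
mkstore -wrl /u01/app/oracle/wallet -createCredential PRODDB app_user app_password
# sqlnet.ora on the client must point at the wallet:
#   WALLET_LOCATION = (SOURCE = (METHOD = FILE)(METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
#   SQLNET.WALLET_OVERRIDE = TRUE
# then scripts connect without embedding a password:
sqlplus /@PRODDB

For local scripts running as the oracle OS user, OS authentication (sqlplus / as sysdba, or an OS-authenticated OPS$ user) is the other common route.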
View 2 Replies
View Related
Jun 13, 2012
I've just patched a development Oracle environment (RAC 11gR2) with temporary patch 8730312. I would like to do the same thing on the production RAC. Is it necessary to test all applications in the dev environment with this kind of patch before deploying it to Production?
View 2 Replies
View Related
Jul 11, 2012
Can we create a non-clustered index on a clustered index?
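If this is about Oracle's table clusters: the cluster needs its own cluster index before rows can be inserted, and you can still create ordinary B-tree indexes on columns of a table stored in the cluster. A sketch with made-up names:

CREATE CLUSTER emp_dept_clu (deptno NUMBER(2));
CREATE INDEX emp_dept_clu_idx ON CLUSTER emp_dept_clu;   -- the cluster ("clustered") index
CREATE TABLE emp_c (
  empno   NUMBER PRIMARY KEY,
  ename   VARCHAR2(30),
  deptno  NUMBER(2)
) CLUSTER emp_dept_clu (deptno);
CREATE INDEX emp_c_ename_idx ON emp_c (ename);            -- an additional, non-clustered index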
View 5 Replies
View Related
Oct 13, 2012
I will have to proceed with an Oracle 9 database refresh from the production server to the integration server. The 5 biggest schemas must be exported and imported. They constitute 97% of the space used in the database. This is a very big database, so I would like to be sure that everything goes smoothly. That is why I want to ask you some questions.
Have you got any clues for me before I start with exp/imp? From my side I will tell you that I will have to exp/imp schema by schema, because there is little space for a dump on both the production and integration disks. The first thing I thought of is the dependencies between the schemas that are exported and those that are not, and also between the schemas that are exported/imported one by one.
This is procedure that I plan:
For every schema that is to be refreshed
{
1. Export schema with ROWS=N CONSTRAINTS=Y
2. EXPORT schema with ROWS=y CONSTRAINTS=N
3. Import schema from step one
4. Disable all the foreign key constraints using ALTER TABLE DISABLE CONSTRAINT.
5. Import schema with rows
}
Then, once all schemas are imported, re-enable the foreign key constraints with ALTER TABLE ... ENABLE CONSTRAINT.
With the above procedure I think I will avoid problems with dependencies between the schemas exported/imported one by one. But my concern is whether there are any dependencies between those schemas and the schemas that are not exported. Is there a way to check this before the refresh?
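One way to check for dependencies that cross into schemas you are not exporting is to query the dictionary before the refresh (a sketch; replace the IN lists with your five schema names):

-- foreign keys in the exported schemas that reference tables owned by other schemas
SELECT c.owner, c.table_name, c.constraint_name,
       r.owner AS referenced_owner, r.table_name AS referenced_table
FROM   dba_constraints c
JOIN   dba_constraints r
  ON   r.owner = c.r_owner
 AND   r.constraint_name = c.r_constraint_name
WHERE  c.constraint_type = 'R'
AND    c.owner IN ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5')
AND    r.owner NOT IN ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
-- run it the other way round as well (c.owner NOT IN ..., r.owner IN ...) to catch
-- foreign keys from non-exported schemas pointing at the tables you are refreshing,
-- and check DBA_DEPENDENCIES similarly for PL/SQL, views and synonyms.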
View 4 Replies
View Related
Aug 3, 2010
Can anyone give an explanation of the difference between an index and a clustered index?
It would be great if I could also get an explanation of how memory allocation and execution take place.
View 4 Replies
View Related
Jan 10, 2013
I am facing a row lock issue in production. I have been trying to resolve the issue but I couldn't. I traced, using different queries, which SQL statement is locking which, but everything looks good.
I also checked that connections are opened and closed properly; everything is in place, but I am still unable to resolve the issue. We run a batch file every night; some of the records are processed, and if any one record fails it blocks the other records. My Oracle version is 10.2.0.
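When the batch job hangs again, a snapshot of who is blocking whom at that moment is usually more telling than tracing the SQL afterwards. A sketch against the 10.2 dictionary views:

-- sessions currently waiting on a lock, and the session holding it
SELECT sid, serial#, username, blocking_session, sql_id, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- the objects actually locked and by whom
SELECT lo.session_id, lo.oracle_username, o.owner, o.object_name, lo.locked_mode
FROM   v$locked_object lo
JOIN   dba_objects o ON o.object_id = lo.object_id;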
View 1 Replies
View Related
Mar 22, 2012
I want to create a table from another database's table by using a dblink, but while creating the new table I am getting a temp space error.
The source table has around 2 million records.
I am using the syntax below to create the table:
CREATE TABLE UDT_Fcst_arch_72 PARALLEL (DEGREE 4)
AS SELECT * FROM UDT_Fcst_arch@PRD
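If the error really is temp (or undo) space during the big CTAS, one workaround that is often suggested (a sketch; it assumes you can create the structure empty first) is to split the copy into a metadata-only CTAS plus a direct-path insert:

CREATE TABLE UDT_Fcst_arch_72 AS
  SELECT * FROM UDT_Fcst_arch@PRD WHERE 1 = 0;   -- structure only, no rows copied

INSERT /*+ APPEND */ INTO UDT_Fcst_arch_72
  SELECT * FROM UDT_Fcst_arch@PRD;               -- direct-path load over the dblink
COMMIT;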
View 9 Replies
View Related
Nov 2, 2012
Oracle 10.2.0.4
I partitioned a source table of around 100 million rows (62 GB) on the DEV server. The target database was created new. It was range partitioned on a date column as follows:
PARTITION BY RANGE (ENTRY_DATE_TIME)
(
PARTITION ppre2012 values less than (TO_DATE('01/01/2012','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
PARTITION p2012 values less than (TO_DATE('01/01/2013','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
PARTITION p2013 values less than (TO_DATE('01/01/2014','DD/MM/YYYY')) TABLESPACE WST_LRG_D,
PARTITION p2014 values less than (MAXVALUE) TABLESPACE WST_LRG_D
)
That is, on a yearly basis: anything before 2012 went to ppre2012, then p2012, p2013 and so forth. There are 20 million rows in p2012 and around 75 million rows in ppre2012. We needed both the source (un-partitioned) and target (partitioned) tables in DEV for comparison. The queries are normally on the current year's partition. Just to state that I am a developer and don't have full visibility into the production instance.
Now that our tests are complete, we would like to promote this to production. Obviously in production we would not need both source and target tables. In all probability this will be performed over a weekend window. Therefore I would like to suggest the following:
1) use expdp to export source table
2) drop the source table
3) create a new source table "partitioned" with no indexes
4) use impdp to get data back into table
5) create the global index (it is a unique index to enforce uniqueness) and the rest of the indexes as local
6) perform dbms_stats.gather_table_stats(user,'SOURCE', cascade=>true). This takes around 2 hours in dev
My question is whether importing 100 million rows will cause issues with the undo segments. Can we import the data for the current partition p2012 (20 million rows) first?
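On the undo question: a Data Pump import into a table that has no indexes or enabled constraints will normally load via direct path and generate very little undo, so the 100 million rows themselves are rarely the problem; the global index build afterwards is the heavier step. Loading the current year first is possible with the QUERY clause, roughly as sketched below in parameter-file form (directory, dump file and credentials are placeholders):

# expdp_p2012.par  -- run as: expdp system/password parfile=expdp_p2012.par
TABLES=SOURCE
DIRECTORY=DP_DIR
DUMPFILE=source_p2012.dmp
QUERY=SOURCE:"WHERE entry_date_time >= TO_DATE('01/01/2012','DD/MM/YYYY')"

# impdp_p2012.par  -- run as: impdp system/password parfile=impdp_p2012.par
TABLES=SOURCE
DIRECTORY=DP_DIR
DUMPFILE=source_p2012.dmp
TABLE_EXISTS_ACTION=APPEND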
View 18 Replies
View Related
Feb 22, 2011
I would like to know if I can replicate a production database (10.2.0.2) to test (10.2.0.4).
What is the process to do that in a Windows environment?
I also wonder if I can do it with two different Oracle versions.
View 4 Replies
View Related
Aug 30, 2011
We have Development, Staging/UAT (installed on XX.XX.XX.10) and Production (installed on XX.XX.XX.20) environments, respectively. I have queries regarding getting the data from the Production environment into the Staging environment. The overall PROD database size is around 250 GB.
STAGING DATABASE DETAILS
SID : STG_DB
Staging Schema Name :schema_UAT
Replication Schema Name :schema_PrdReplica ( This is the schema where the production data gets loaded daily)
PROD DATABASE DETAILS
SID : PROD_DB
Prod Schema Name : schema_PROD
What is happening now:
----------------------------
There is a script (stored procedure) written on the staging (STG_DB.schema_PrdReplica) environment which executes every night and does the replication. Currently we use DBMS_DATAPUMP to get the ENTIRE data/metadata (everything) from Production to Staging. It is taking significantly more time: approx 8 hours to replicate everything from PROD_DB.schema_PROD to STG_DB.schema_PrdReplica.
What I am expecting :
-----------------------------
I want to reduce the replication time.
I have heard about Level 0 (full) and Level 1 (cumulative incremental) backups in RMAN. I am planning to take a Level 0 backup of PROD_DB.schema_PROD on Sunday and restore it on STG_DB.schema_PrdReplica immediately. On weekdays (Mon-Fri) I will take Level 1 (cumulative incremental) backups and restore them on STG_DB.schema_PrdReplica.
I am assuming that by doing so, the overall replication time will be reduced. How can I implement this with a script, given that the two servers are different machines?
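One caveat before scripting it (a hedged note, since I can't see the target setup): RMAN level 0/1 backups restore a whole database or tablespace, not a single schema, so restoring them "onto STG_DB.schema_PrdReplica" would effectively mean refreshing a separate staging database rather than one schema inside STG_DB; for a schema-level refresh, narrowing the Data Pump job (SCHEMAS plus EXCLUDE filters) is the usual route. The RMAN commands themselves are simple:

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;              # Sunday
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE PLUS ARCHIVELOG;   # Monday to Friday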
View 1 Replies
View Related
Feb 4, 2012
I am planning a production cutover from AIX to Linux. I thought of GoldenGate (GG) as an option so that I can have two DBs running in parallel, replicate between them, and do the cutover to Linux during the change window.
Now the problem I see is that only half of the tables have a primary key, so I think GoldenGate cannot be used as an option.
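For what it's worth, missing primary keys do not by themselves rule GoldenGate out: without a primary or unique key, GG falls back to using all columns as a pseudo-key (slower, and it needs supplemental logging of all columns), or you can nominate a usable column set yourself with KEYCOLS. A rough sketch with made-up table and column names:

-- GGSCI: enable supplemental logging for the chosen columns on the source
ADD TRANDATA scott.orders, COLS (order_id, order_date)

-- Extract parameter file: declare the pseudo-key
TABLE scott.orders, KEYCOLS (order_id, order_date);

-- Replicat parameter file on the Linux side
MAP scott.orders, TARGET scott.orders, KEYCOLS (order_id, order_date);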
View 2 Replies
View Related