Moving Schema From One Database To Another Of Different Unix Flavour With Minimal Downtime
Aug 14, 2013
Oracle version: 10g, 11g (applicable to both). Consider a schema named 'UNIVERSE' in database A, running on a Linux platform, that needs to be moved to another database B running on a Windows or AIX platform, with no downtime allowed and the data required to stay consistent. Is this practically possible?
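For the schema-level move itself, Data Pump is the usual tool, and its dump files are platform-independent, so the Linux-to-Windows/AIX difference is not a problem at this level (truly zero downtime is a separate question, addressed by the replication approaches discussed further down). A minimal sketch, assuming a directory object named DP_DIR exists on both databases; credentials and file names are placeholders:

expdp system/password schemas=UNIVERSE directory=DP_DIR dumpfile=universe.dmp logfile=universe_exp.log

Copy universe.dmp to the target host in binary mode, then on database B:

impdp system/password schemas=UNIVERSE directory=DP_DIR dumpfile=universe.dmp logfile=universe_imp.log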
We would be moving Oracle 11g on Sun Solaris to Oracle 11g on Red Hat Linux. What would be the disadvantages, and what items need to be verified? Basically, what are the advantages of Oracle 11g on Red Hat Linux?
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/******** schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
I have installed Oracle XE 10g on Linux (32-bit) with modest resources (i.e. 512MB RAM). Everything seems to run well. I use this installation to run very basic integration tests only, so it won't be used in a multi-user or production environment.
I imagine it should be possible to tweak the XE configuration and decrease its footprint even more, e.g. to instruct automatic resource management not to be ready for 20 simultaneous connections (as I understand that's the default) but for 5, and so on.
1) Is it possible to decrease the memory allocation? I have learned a bit about SGA and PGA settings, but I'm not sure what a reasonable limit is here.
2) Is it possible to reduce the number of connections, as explained above?
3) Is it possible to completely disable unused components, like application servers, Java applications, and such? I have already tried setting the HTTP port to 0, but a more aggressive policy may be possible.
4) Any chance to cut down the number of spawned processes?
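All four are adjustable to some degree. A minimal sketch, connected as SYS; the values below are illustrative guesses for a test-only box, not recommendations:

ALTER SYSTEM SET sga_target = 128M SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 32M SCOPE = SPFILE;
ALTER SYSTEM SET processes = 30 SCOPE = SPFILE;        -- SESSIONS derives from PROCESSES by default
ALTER SYSTEM SET job_queue_processes = 0 SCOPE = SPFILE;  -- fewer background job slaves
EXEC DBMS_XDB.SETHTTPPORT(0);                          -- switches off the embedded HTTP listener
SHUTDOWN IMMEDIATE
STARTUP

Lowering PROCESSES is also what cuts down the number of spawned server processes (question 4).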
I am using Oracle 9i and Unix on my system and am trying to execute a UNIX shell command through an external procedure in C. I created a shared lib (libextproc.so) for the following function.
#include <stdlib.h>  /* for system(3) */
int sysrun(char *command) { return system(command); }
This function runs fine when called through a driver function in C, meaning that the shared lib is fine. In PL/SQL, I have used the following method to invoke a UNIX command:

create or replace library shell_lib as '/home/ECETRAonsite/oracle/OraHome1/lib/libextproc.so';
/
create or replace function sysrun (syscomm in varchar2)
return binary_integer
as language C
   name "sysrun"
   library shell_lib
   parameters (syscomm string);
/
Now when I call this PL/SQL function to invoke the command, it completes successfully but does not create the file.
PL/SQL procedure successfully completed.

I have verified that the path for 'touch' is correct. Following are my configuration files.

listener.ora
-------------
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
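A quick way to narrow this down is to check the shell's exit status that system() returns; a sketch, assuming the function above is in place (a common culprit is that the extproc agent runs with a minimal environment, so an absolute path to the command and a world-writable target directory are safer bets than relying on PATH or the current working directory):

SET SERVEROUTPUT ON
DECLARE
  rc BINARY_INTEGER;
BEGIN
  rc := sysrun('/bin/touch /tmp/extproc_test.txt');
  DBMS_OUTPUT.PUT_LINE('shell exit status: ' || rc);  -- 0 means the command itself succeeded
END;
/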
I need to move everything from database 'X' to database 'Y' (assume both are Oracle 11.2). Which would be the most appropriate way to achieve that? I thought of Transportable Tablespaces (URL.....), but I'm worried about its limitations, especially this one: "SYSTEM Tablespace Objects - You cannot transport the SYSTEM tablespace or objects owned by the user SYS. Some examples of such objects are PL/SQL, Java classes, callouts, views, synonyms, users, privileges, dimensions, directories, and sequences." This means all PL/SQL code (packages/procedures/functions) will be affected.
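If that limitation is a deal-breaker, a full Data Pump export/import is the usual alternative, since it does carry PL/SQL, views, synonyms, users, privileges and sequences. A rough sketch; DP_DIR is a hypothetical directory object and the credentials are placeholders:

expdp system/password full=y directory=DP_DIR dumpfile=full_x.dmp logfile=full_x_exp.log
impdp system/password full=y directory=DP_DIR dumpfile=full_x.dmp logfile=full_x_imp.log

A middle ground is transportable tablespaces for the bulk data plus a metadata-only Data Pump export (content=metadata_only) for the code and other SYSTEM-resident objects.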
I need to move the tables (with data) present in the user SCOTT (full) to another schema named TEST. In my case SCOTT is in the USERS tablespace, and for the TEST schema I have created a different tablespace named TEST_TBS.
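Data Pump's remap options cover exactly this case. A sketch, assuming SCOTT's segments sit in the USERS tablespace and a DP_DIR directory object exists; credentials are placeholders:

expdp system/password schemas=SCOTT directory=DP_DIR dumpfile=scott.dmp logfile=scott_exp.log
impdp system/password directory=DP_DIR dumpfile=scott.dmp remap_schema=SCOTT:TEST remap_tablespace=USERS:TEST_TBS logfile=scott_imp.log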
Does it work to copy datafiles from 10gR2 to 11gR2? I want to move one huge tablespace (which contains one table) from 10gR2 to 11gR2. What is the best method to do that?
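Transporting a tablespace from 10gR2 into 11gR2 is supported (moves to a higher version are allowed; if the platforms' endianness differs, check V$TRANSPORTABLE_PLATFORM and convert with RMAN). A rough outline, with BIGTS and all paths as placeholders:

ALTER TABLESPACE bigts READ ONLY;

expdp system/password transport_tablespaces=BIGTS directory=DP_DIR dumpfile=bigts.dmp logfile=bigts_exp.log

Copy the datafile(s) and bigts.dmp to the 11gR2 host, then:

impdp system/password directory=DP_DIR dumpfile=bigts.dmp transport_datafiles='/u01/oradata/bigts01.dbf'

ALTER TABLESPACE bigts READ WRITE;  -- back on the source, if it stays in use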
I need to move database ORCL into our existing central database CNTR (both are on the same OS and Oracle version). I started exp-ing each schema from ORCL and imp-ing it into CNTR.
But there is one schema, EXMP, in database ORCL which also exists in the CNTR database with the same tables and indexes. The data under schema EXMP in ORCL should be added to schema EXMP in CNTR.
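With classic exp/imp, IGNORE=Y makes the import skip the "table already exists" errors and insert the rows into the existing tables (primary/unique keys must not collide, or those rows will be rejected). A sketch; file names and credentials are placeholders:

exp system/password owner=EXMP file=exmp.dmp log=exmp_exp.log
imp system/password fromuser=EXMP touser=EXMP file=exmp.dmp ignore=y log=exmp_imp.log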
We're currently in a situation where the primary database server's filesystem size does not match the standby database server's filesystem size.
The standby database filesystem is almost 100% utilized, and we suggested moving some of the datafiles first to avoid threshold alerts and archive gaps.
Now, if we're going to move datafiles on the physical standby, I believe the process would be: stop managed redo apply -> shutdown standby -> OS move -> startup mount -> alter rename -> start managed redo apply. Is this correct? If not, how should it be done?
Also, would it have an effect if the controlfiles of primary and standby do not match because of the movement?
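The sequence proposed above is essentially right; the detail that usually trips people up is STANDBY_FILE_MANAGEMENT, which must be MANUAL for the rename to be accepted on the standby. A sketch with placeholder paths; note that the RENAME updates only the standby controlfile, and it is normal for primary and standby controlfiles to differ in file paths, since redo apply keeps the data itself consistent:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SHUTDOWN IMMEDIATE
-- move the datafile(s) at the OS level, then:
STARTUP MOUNT
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
ALTER DATABASE RENAME FILE '/u01/oradata/users01.dbf' TO '/u02/oradata/users01.dbf';
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;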
I need to develop a form which has to read and display the contents of a text file that is stored on the Unix system where the Oracle database is installed. So basically it's the database server, not the Forms application server.
1. Create an external table for the file every time the form is loaded, by dropping and re-creating the table; base the form's data block on that table, then execute_query and display the contents.
2. I am confused whether to use the webutil or utl_file packages to read from the file and display it on the screen, as the file resides on the database/Oracle server, not on the Forms application server or the client machine.
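Since the file sits on the database server, webutil (which targets client-side files) is not needed; the external-table route from option 1 can also be kept much simpler than drop-and-recreate. A minimal sketch, with the directory, file and column names all hypothetical:

CREATE OR REPLACE DIRECTORY txt_dir AS '/u01/app/files';

CREATE TABLE file_lines_ext (
  line VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY txt_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    NOBADFILE NOLOGFILE
    FIELDS TERMINATED BY 0x'01'   -- a byte that will not occur, so each whole line lands in LINE
    MISSING FIELD VALUES ARE NULL
    (line CHAR(4000))
  )
  LOCATION ('display_me.txt')
)
REJECT LIMIT UNLIMITED;

The form's data block can then be based on file_lines_ext once; there is no need to drop and re-create it on every load, because the external table reads the file afresh at query time.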
I would like to ask if you know which built-in I can use for transferring an Excel file from our Unix box to a table in an Oracle database. Right now we are using webutil_file_transfer.Client_To_DB_with_progress in Forms Developer, but I need to run it as an automatic process, uploading from Unix into Oracle directly without using Forms.
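Outside Forms there is no webutil equivalent; a common substitute is to save the Excel file as CSV and load it with SQL*Loader from a cron job. A sketch, with all names (control file, table, paths, credentials) hypothetical:

sqlldr userid=scott/tiger control=/u01/files/upload.ctl log=/u01/files/upload.log

upload.ctl:
LOAD DATA
INFILE '/u01/files/upload.csv'
APPEND INTO TABLE upload_target
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)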
When migrating from 32-bit Linux to a 64-bit Windows version of the database (Standard Edition), is server media needed? If yes, can you give me more details on what it consists of?
We are trying to eliminate/minimize the downtime for our application. As part of new code deployments we sometimes need to modify the DB structure as well. As it takes time to back up the current DB and apply the new DDL, the application is down.
Is there a way to eliminate the downtime if I leverage Data Guard, GoldenGate or RAC concepts?
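None of those three by itself removes DDL downtime (RAC covers node outages, Data Guard covers failover, GoldenGate covers migrations), but for many structure changes online redefinition keeps the table usable while the new shape is applied. A compressed sketch with hypothetical names; the interim table APP.ORDERS_INTERIM must already exist with the new structure:

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS');       -- raises an error if not redefinable
  DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('APP', 'ORDERS', 'ORDERS_INTERIM',
      DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
END;
/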
Is there any way I can find out what caused the database to crash, e.g. a history of commands executed within the database? I lost my bdump directory before the scheduled backup ran, and the only logs available are from after I re-created the directory.
SQL> startup
ORA-00444: background process "PMON" failed while starting
ORA-07446: sdnfy: bad value '' for parameter .
We are facing a project where it is mandatory that the migration (from 9i to 11g) happens without any downtime. We thought about using GoldenGate for this migration, but I would like to hear from somebody who has already done this kind of migration (I have never used GoldenGate before). The basic steps for such a migration would be:
1) Install the GoldenGate client on both source and target.
2) Export only the metadata (the structure of the tables, for example) from source to target (here is one point of doubt of mine: can this export only be done using exp/imp?).
3) Perform the initial load from source to target (here I have another doubt: is it possible to perform an initial load of a whole database?).
4) Configure Manager, Extract and Replicat to perform the migration with the source database open in read-write.
With the steps above, would I be able to perform a migration without downtime? What other considerations do you have?
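Broadly yes, that is the standard shape of a zero-downtime GoldenGate migration: start Extract first, take the initial load (a whole-database load is possible; Data Pump with FLASHBACK_SCN gives a consistent starting point, so exp/imp is not the only option), then let Replicat catch up before switching the application over. A minimal parameter-file sketch with hypothetical names (EXT1/REP1, GoldenGate user ggadmin, application schema APP); real setups also need trail management, checkpointing and supplemental logging:

ext1.prm (source):
EXTRACT ext1
USERID ggadmin, PASSWORD ggpass
EXTTRAIL ./dirdat/aa
TABLE APP.*;

rep1.prm (target):
REPLICAT rep1
USERID ggadmin, PASSWORD ggpass
ASSUMETARGETDEFS
MAP APP.*, TARGET APP.*;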
I have a new virtual UNIX machine, and I installed the Oracle client under /usr/lib/oracle. I also have an Oracle database, and I am able to connect to it from SQL Developer on my desktop.
So now I am trying to connect from the new UNIX machine. There I created a tnsnames.ora file under /usr/lib/oracle/network/admin, and before connecting I exported the following
ORA-12545: Connect failed because target host or object does not exist. Not sure what I missed here; using the same tns file I am able to connect from SQL Developer on Windows.
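ORA-12545 almost always means the HOST value in the connect descriptor does not resolve from the new machine (the Windows box may know a short hostname that the UNIX box does not). Checking name resolution from the UNIX side, or switching HOST to the IP address, is the usual fix. A placeholder entry for reference, plus the variable that tells the client where to find it:

export TNS_ADMIN=/usr/lib/oracle/network/admin

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )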
We have an Oracle server database with a size of 50 GB, holding 10 GB of data, and we are planning to get a new 200 GB database server. So my question is: after moving all the 10 GB of data to the 200 GB database server, will the performance of the system come down? Will it reduce the speed?
I am often tasked with refreshing schemas from one DB to another. The first thing I need to check is the space the objects take up in the source DB. I need a SQL statement that prints the size of the following objects.
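As a starting point, DBA_SEGMENTS gives the allocated space per object type; a sketch, assuming DBA-level access and with 'UNIVERSE' standing in for whichever schema is being refreshed:

SELECT owner,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'UNIVERSE'
GROUP  BY owner, segment_type
ORDER  BY size_mb DESC;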
I have two identical DB schemas (same structure, same data), and I need updates applied to one of them whenever data in the other one is updated. It is single direction only (we change data in DB schema A and synchronize the data into DB schema B; there is no opposite direction). Only a small portion of the data (compared to the size of the DB schema) might be changed or added this way.
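Since the flow is strictly A to B and the change volume is small, fast-refresh materialized views are a natural fit: the MV log captures only the changed rows, and B pulls them on demand or on a schedule. A sketch per table, with a.orders as a hypothetical example:

CREATE MATERIALIZED VIEW LOG ON a.orders WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW b.orders_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM a.orders;

-- refresh manually or from a scheduler job ('F' = fast/incremental):
EXEC DBMS_MVIEW.REFRESH('B.ORDERS_MV', 'F');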