Replication :: Create Same Database Replica In 25 Systems?
Jan 2, 2010
I need to create the same database replica on 25 systems. Which method can I use for that? Each server will hold exactly the same data and structures.
I need to create a replica of an existing table in the same schema, dynamically, so that modifications made to the primary table are reflected on the replica table. We need to maintain this table as a snapshot: if a column is added or deleted, it should automatically be added or deleted on the replica table as well. How should I approach this?
I need to create a physical standby database for DR without using Data Guard. I would like to know in more detail:
1. What would be required to switch over to the standby database in case of failure of the primary database? Is it going to be a manual process each time, or can it be automated using scripts?
2. How will the archive logs be applied to the standby database in a timely manner to keep it in sync with the primary database? (See the sketch after this list.)
3. How will the primary database be brought back in sync with the standby once the issue on the primary is resolved?
4. Where can I get a complete script to create a physical standby database on the Windows platform?
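Regarding question 2, a minimal sketch of a manual apply cycle on the standby, assuming the standby was built from a backup of the primary and an OS script copies the new archived logs across; all commands run in SQL*Plus on the standby and the names are illustrative:

-- Mount the standby (it stays mounted between apply cycles)
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;

-- Apply whatever archived logs have been copied over since the last cycle;
-- the command stops once it runs out of available logs
RECOVER AUTOMATIC STANDBY DATABASE;

-- Only during an actual failover: activate and open the standby
-- ALTER DATABASE ACTIVATE STANDBY DATABASE;
-- ALTER DATABASE OPEN;

The copy-and-recover steps can be wrapped in a scheduled OS script, which also addresses the automation part of question 1.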
Is Oracle Database 11g Release 2 (11.2.0.3.0) available for IBM AIX on POWER Systems (64-bit), or do we need to upgrade 11.2.0.1 to 11.2.0.3.0?
I need to set up a central server with all the master tables and two other local databases which will hold updatable materialized views of the master tables. The databases must be synchronized with the central server, and users will work on the materialized view databases.
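A minimal sketch of the two sides, assuming a database link named central_link from each local site to the central server and a master table master_app.orders (both illustrative); for the materialized views to be truly updatable, the master table must also be registered in a replication group with DBMS_REPCAT, which is omitted here:

-- On the central server: a log so the local sites can fast refresh
CREATE MATERIALIZED VIEW LOG ON master_app.orders WITH PRIMARY KEY;

-- On each local database: an updatable materialized view of the master table
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST
  FOR UPDATE
  AS SELECT * FROM master_app.orders@central_link;

Local changes made to orders_mv are queued and pushed back to the central server when the materialized view group is refreshed.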
The Oracle machine A (11gR1) is located in San Jose.
The Oracle machine B (10gR2) is located in Sydney.
I need to develop a daily replica from A to B. I don't need the entire DB, only a single schema.
The entire DB size is 1 TB. The gzipped Data Pump export of the schema will be 5+ GB. Tnsping from A to B takes 500+ ms. The usual approach (Data Pump export on A, ftp, Data Pump import on B) will be too slow because of the poor network. Which way would be best to implement this?
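One option is a network-mode Data Pump import run on B, which pulls the schema directly over a database link and avoids shipping and unzipping a dump file; a minimal PL/SQL sketch, assuming a link a_link from B to A and a schema APP_SCHEMA (both illustrative), and noting that the 11gR1-to-10gR2 direction may additionally require the VERSION parameter:

DECLARE
  h NUMBER;
BEGIN
  -- Schema-mode import that reads straight from A over the link (no dump file)
  h := DBMS_DATAPUMP.OPEN(operation   => 'IMPORT',
                          job_mode    => 'SCHEMA',
                          remote_link => 'A_LINK');
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', q'[IN ('APP_SCHEMA')]');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);
END;
/

Whether this beats the dump-and-ftp route depends on how well the 500 ms latency is hidden by Data Pump's bulk transfer, so it is worth timing both approaches on a small table first.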
I would like to develop a form which will be a replica of Windows Explorer. It should be able to read all files, or the files from a specific folder, on a Unix platform.
If yes, can I have a template? I do not want to start from scratch since I do not have much time.
We are using Oracle 10g Release 2. The replication is set up on one server which is in City A, and the snapshot server is in City B.
On the City A database:
CREATE MATERIALIZED VIEW LOG ON table_a
  WITH PRIMARY KEY
  INCLUDING NEW VALUES;
On the City B database:
CREATE USER test_rep IDENTIFIED BY test;
GRANT CONNECT, RESOURCE, CREATE MATERIALIZED VIEW, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO test_rep;
CREATE MATERIALIZED VIEW city_a_db_mv
  REFRESH FAST
  AS SELECT * FROM citya.table_a@city_a_db;
When I select from city_a_db_mv, it shows the complete table_a from the City A database.
Now, if we make any changes to the City A table at the master site, will they be propagated automatically to the MV site?
I guess we need to create jobs to push or to fast refresh, don't we? But exactly how to do it is the question.
Secondly, if we make a replication group at the master site on the City A database, how do we refresh that group, and how do we monitor whether it is refreshing on time or not? Do we need to check the jobs every now and then?
There are still a lot of questions unanswered, even though I had read the documents earlier.
1. The MView was created without specifying after what interval it will be fast refreshed.
2. How do I manually refresh it? Does it support ON COMMIT? I think it does not.
3. Where should we make a group, add the materialized view to that group, and refresh that group?
Should this group belong to the master site or to the MV site? (See the sketch below.)
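A minimal sketch of the refresh side on the MV site, using the objects created above; the refresh group belongs to the MV site, and ON COMMIT is indeed not available for a materialized view over a database link, so an interval or on-demand refresh is used instead:

-- Manual refresh of a single materialized view ('F' = fast)
EXEC DBMS_MVIEW.REFRESH('TEST_REP.CITY_A_DB_MV', 'F');

-- A refresh group on the MV site, refreshed automatically every hour
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'test_rep.city_a_grp',
    list      => 'test_rep.city_a_db_mv',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');
END;
/

-- Manual refresh of the whole group, when needed
EXEC DBMS_REFRESH.REFRESH('test_rep.city_a_grp');

-- Monitoring: the scheduled refreshes appear as jobs in the job queue
SELECT job, what, last_date, next_date, failures FROM dba_jobs;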
I have to create a materialized view for a table which does not have an index on any field.
While creating the MView I am getting the error "TABLE DOES NOT HAVE THE PRIMARY KEY CONSTRAINT".
The application developers do not want to create an index on the base table on which the MView is to be created.
Is there any way to create a materialized view for a table without an index, or is it necessary to have the index (primary key) on the base table before creating an MView on it? (See the sketch below.)
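A ROWID-based materialized view avoids the primary-key requirement; a minimal sketch, assuming a fast-refresh setup is still wanted (table, owner, and link names are illustrative):

-- On the master: a ROWID-based log, so no primary key or index is needed
CREATE MATERIALIZED VIEW LOG ON app_owner.base_table WITH ROWID;

-- On the MV site: a ROWID-based, fast-refreshable materialized view
CREATE MATERIALIZED VIEW base_table_mv
  REFRESH FAST WITH ROWID
  AS SELECT * FROM app_owner.base_table@master_link;

If only a complete refresh is needed, the log can be skipped entirely and the MV created with REFRESH COMPLETE, which also has no primary-key requirement.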
How can I create a materialized view without having any data in it?
For example:
I create a materialized view based on a view.
CREATE MATERIALIZED VIEW test_mv
  REFRESH FORCE ON DEMAND
  AS SELECT * FROM test_view;
In the above case, the data fetched by the view test_view gets stored in the materialized view test_mv. Suppose I want the materialized view test_mv to be created with all the columns of test_view but none of the data; I will refresh test_mv for data later, as and when required.
What should I do to create the materialized view test_mv immediately, but without data? (See the sketch below.)
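A minimal sketch using BUILD DEFERRED, which creates the materialized view structure empty; the first refresh (which must be a complete one) populates it later on demand:

CREATE MATERIALIZED VIEW test_mv
  BUILD DEFERRED                 -- structure only, no rows yet
  REFRESH FORCE ON DEMAND
  AS SELECT * FROM test_view;

-- Later, when the data is actually wanted ('C' = complete refresh)
EXEC DBMS_MVIEW.REFRESH('TEST_MV', 'C');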
I am trying to create replication of data from the MV site to the master site.
/* AT MASTER SITE */
1. MASTER REPLICATE GROUP
2. MASTER REPLICATE OBJECT
3. REPLICATION SUPPORT
4. REPLICATION ACTIVITY
5. M.V. LOG
/* AT M.V. SITE */
1. MATERIALIZED VIEW (M.V)
2. DBMS_REFRESH.MAKE (REFRESH GROUP)
AND I AM NOT ABLE TO CREATE
1. CREATE_MVIEW_REPGROUP
2. CREATE_MVIEW_REPOBJECT
at the MV site, and I am getting the following error while creating the REPGROUP:
SQL> BEGIN
2 DBMS_REPCAT.CREATE_MVIEW_REPGROUP (
3 gname => 'emp_repg',
4 master => 'orc1',
[code]..
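For reference, a hedged sketch of the two calls at the MV site; this assumes the master group emp_repg has already been created and generated at the master site orc1, that the MV site has working links to it, and the schema scott and materialized view emp_mv shown here are illustrative:

BEGIN
  -- Shadow replication group at the MV site for the master group at 'orc1'
  DBMS_REPCAT.CREATE_MVIEW_REPGROUP(
    gname            => 'emp_repg',
    master           => 'orc1',
    propagation_mode => 'ASYNCHRONOUS');

  -- Register the materialized view itself in that group
  DBMS_REPCAT.CREATE_MVIEW_REPOBJECT(
    gname => 'emp_repg',
    sname => 'scott',
    oname => 'emp_mv',
    type  => 'SNAPSHOT');
END;
/

If CREATE_MVIEW_REPGROUP itself fails, the usual suspects are the database links from the MV site to the master and the replication administrator grants at the MV site, so the full error text after the truncated code above is what pins it down.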
I have faced a problem with reports. I had created 4 reports. Recently we designed a new application using Oracle Forms 6i; we created more than 10 forms and added them to the application, and they were working fine. But when I added these reports, they are not generated. On the server the reports do run; I mean the report is generated on the server but not on the local system.
View 4 Replies View Relatedmake one form for many systems i have (i.e) I need to make it like portal many button for many systems When i enter the button i login to specified system (i.e) I need to make new connection to this schema and disconnect the previous one.
CREATE MATERIALIZED VIEW Matview1
NOLOGGING
NOCACHE
NOPARALLEL
REFRESH COMPLETE ON DEMAND
START WITH sysdate
NEXT sysdate + 1
WITH ROWID
ENABLE QUERY REWRITE AS
select Query;
If I run the select query on its own it works fine. Also, the user has the CREATE MATERIALIZED VIEW and QUERY REWRITE privileges. I am not sure why I am still getting an insufficient privileges error.
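One common cause is that those privileges were granted through a role, which is not enough for DDL; a minimal check-and-fix sketch, assuming the owning user is mv_owner (illustrative):

-- Grant the system privileges directly, not via a role
GRANT CREATE MATERIALIZED VIEW TO mv_owner;
GRANT QUERY REWRITE TO mv_owner;
-- If the base tables belong to other schemas, the global variant is required
GRANT GLOBAL QUERY REWRITE TO mv_owner;

-- Verify what the user actually holds as direct grants
SELECT privilege FROM dba_sys_privs WHERE grantee = 'MV_OWNER';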
I need an example of creating a materialized view with REFRESH FAST ON COMMIT, together with the matching materialized view log.
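A minimal sketch, assuming a local table orders with columns customer_id and amount (ON COMMIT refresh is not available for materialized views over a database link); the COUNT columns are included because fast refresh of a SUM requires them:

-- Materialized view log with the columns and new values needed for fast refresh
CREATE MATERIALIZED VIEW LOG ON orders
  WITH SEQUENCE, ROWID (customer_id, amount)
  INCLUDING NEW VALUES;

-- Aggregate materialized view that refreshes automatically at commit time
CREATE MATERIALIZED VIEW orders_sum_mv
  REFRESH FAST ON COMMIT
  AS SELECT customer_id,
            COUNT(*)      AS row_cnt,
            COUNT(amount) AS amount_cnt,
            SUM(amount)   AS total_amount
       FROM orders
      GROUP BY customer_id;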
Our application has been installed at customers in North America, Europe and South America for several years, in some cases over 10 years. At least one of our customers has hundreds of gigabytes of data. We are considering options for cleaning out the old data.
The database runs on a variety of systems (Linux, Windows, Unix) and in several versions (Oracle 9, 10, and 11). We need a solution that works in all environments.
Two of our main criteria for a successful solution are that:
-It maintains application data referential integrity. Our application makes little use of foreign key constraints, so the cleanup process will apply critical business rules to candidate data to determine if it can be deleted or not.
-The operation of the cleanup program does not impact use of the system in production.
For various reasons (license cost, installation issues) the partitioning option is not available to us.
Alternative 1: Flag records for cleanup
This requires adding a 1-character column to each table. That is a one-time operation done during implementation. The procedure applies the business rules and sets the flag according to whether a row is to be deleted or not. Rows marked for deletion can be checked, reset, exported, etc. Finally, a separate process deletes all marked rows.
The advantage of this is that the deletion process will use a full table scan to find the marked records. There is no index navigation, so hopefully less overhead. The disadvantage is that it is updating application data, which might affect users' perceived system response. There is some undefined concern that locking or other table activity involved with updating the flags could impact users. (A sketch of this alternative follows.)
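A minimal sketch of the flag approach, assuming a table app_orders and an example business rule; the table, column, and rule are all illustrative:

-- One-time setup during implementation
ALTER TABLE app_orders ADD (purge_flag CHAR(1) DEFAULT 'N');

-- Mark rows that the business rules allow to be deleted
UPDATE app_orders
   SET purge_flag = 'Y'
 WHERE order_date < ADD_MONTHS(SYSDATE, -120)   -- example rule: older than 10 years
   AND status = 'CLOSED';
COMMIT;

-- Separate pass: delete the marked rows in batches to limit undo and locking
DELETE FROM app_orders WHERE purge_flag = 'Y' AND ROWNUM <= 10000;
COMMIT;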
Alternative 2: Build a list of keys for data to be deleted
We will build a list table during implementation. The first process examines the application data, applies the deletion rules and writes key information to the list for data that can be deleted. The list can be checked, reset, rebuilt and listed rows can be exported as required. Finally, the cleanup process uses the list to find and delete the data.
The advantage is that it doesn't update the application data while it is building the list. The disadvantages are that there is some overhead in building and checking the list. The list requires more space than the flags in alternative 1, but we can handle that in various ways. The procedure needs to navigate key structures during the delete step as well as in the list-building phase. (A sketch of this alternative follows.)
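A minimal sketch of the key-list approach for the same illustrative app_orders table; the list table layout is an assumption:

-- One-time setup: a list table holding the keys of deletable rows
CREATE TABLE purge_list (
  table_name VARCHAR2(30),
  row_key    VARCHAR2(100),
  created    DATE DEFAULT SYSDATE
);

-- Phase 1: apply the business rules and record candidate keys
INSERT INTO purge_list (table_name, row_key)
SELECT 'APP_ORDERS', TO_CHAR(order_id)
  FROM app_orders
 WHERE order_date < ADD_MONTHS(SYSDATE, -120)
   AND status = 'CLOSED';
COMMIT;

-- Phase 2: delete using the list (the list can be checked or exported first)
DELETE FROM app_orders
 WHERE order_id IN (SELECT TO_NUMBER(row_key)
                      FROM purge_list
                     WHERE table_name = 'APP_ORDERS');
COMMIT;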
What type of storage systems are used for saving archive files and backup files, especially for Oracle databases?
I want to set up one-way replication in an Oracle 10g database.
Example: there are 2 databases in two locations, DB1 and DB2.
DB1 needs to get data from DB2 every hour and update the data in the DB1 database. (See the sketch below.)
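A minimal sketch using a fast-refreshable materialized view on DB1 with an hourly interval; the link, schema, and table names are illustrative:

-- On DB2 (the source): a log so DB1 can fast refresh
CREATE MATERIALIZED VIEW LOG ON src_schema.customers WITH PRIMARY KEY;

-- On DB1 (the target): a link to DB2 and an hourly-refreshing materialized view
CREATE DATABASE LINK db2_link CONNECT TO src_schema IDENTIFIED BY pwd USING 'DB2';

CREATE MATERIALIZED VIEW customers_mv
  REFRESH FAST
  START WITH SYSDATE
  NEXT  SYSDATE + 1/24          -- every hour
  AS SELECT * FROM src_schema.customers@db2_link;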
Is it possible to create replication on the same database with a different schema?
How can I maintain a replication scheme between a production database and a standby? I was looking at Oracle's advanced replication methods, but what I want is to run a process in the evening that updates the standby database incrementally and then leaves it until the next night.
The server I want to allocate to the standby database also runs other processes, so my setup would be:
Production: Oracle Database 11g on Linux 5.5
Standby: Oracle Database 11g on Windows 2003
That detail may be important; let me make clear that what I want is for the standby database to be updated incrementally.
I have a table MYTABLE in database mydb1, duplicated via a materialized view, a materialized view log and refresh_snapshot commands to a MYTABLE on the mydb2 database.
I would like to duplicate this table MYTABLE to a third database, mydb3, using the same method (materialized view and refresh_snapshot command).
Is it possible? What happens to the materialized view log when I launch a refresh_snapshot on mydb2? How does this materialized view log get truncated?
There is a database DB1 which has a user U1 in it, containing a table T1.
Likewise,
there is another database DB2 which has a user named U2, containing a table T2.
Now
I want to use the concept of joins: join table T1 of database DB1 with table T2 of database DB2, and access the result from database DB3 using the materialized view concept.
What should I do to access the tables of DB1 and DB2 from database DB3 using a materialized view? (See the sketch below.)
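A minimal sketch on DB3, assuming database links to DB1 and DB2; a join materialized view spanning two links generally has to be complete-refreshed, and the column names are illustrative:

-- On DB3: links to both source databases
CREATE DATABASE LINK db1_link CONNECT TO u1 IDENTIFIED BY pwd1 USING 'DB1';
CREATE DATABASE LINK db2_link CONNECT TO u2 IDENTIFIED BY pwd2 USING 'DB2';

-- Materialized view joining the two remote tables
CREATE MATERIALIZED VIEW t1_t2_mv
  REFRESH COMPLETE ON DEMAND
  AS SELECT t1.id, t1.name, t2.detail
       FROM u1.t1@db1_link t1
       JOIN u2.t2@db2_link t2 ON t2.t1_id = t1.id;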
I am planning to move the databases in multi-master replication from HP-UX to an AIX server. I am planning to use export/import to move the databases. But do I need to drop the replication administrator user before I do the export/import?
I am implementing GoldenGate 11gR2 for a 12c database, but I am getting the error below. My question: why does GoldenGate need this specific package ... since it supports both homogeneous and heterogeneous configurations?
/u01/12c_database_software/goldengate/dirtmp.
2013-08-30 05:28:44 INFO OGG-01513 Oracle GoldenGate Capture for Oracle, ext1.prm: Positioning to Sequence 66, RBA 25067536, SCN 0.0.
2013-08-30 05:28:44 ERROR OGG-01028 Oracle GoldenGate Capture for Oracle, ext1.prm: ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_INTERNAL_CLKM' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored.
2013-08-30 05:28:44 ERROR OGG-01668 Oracle GoldenGate Capture for Oracle, ext1.prm: PROCESS ABENDING.
Database details
----------------
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM' and object_type in ('PACKAGE');
no rows selected
SQL> select object_name, object_type from dba_objects where object_name='DBMS_INTERNAL_CLKM';
no rows selected
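The missing package is called by the GoldenGate capture process (it relates to TDE key handling). In similar cases the fix described in GoldenGate setup notes is to create the package as SYS and grant execute on it to the extract user; a hedged sketch, assuming the script name prvtclkm.plb under $ORACLE_HOME/rdbms/admin and a GoldenGate user named ggadmin (both assumptions to verify against your install guide):

SQL> -- As SYS on the source database
SQL> @?/rdbms/admin/prvtclkm.plb
SQL> GRANT EXECUTE ON SYS.DBMS_INTERNAL_CLKM TO ggadmin;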
We have four locations with four separate database servers, and we need to synchronize all the servers.
For example, all the servers have a table PARTS; if the parts_count column changes on one server, it should change on all four servers.
I have a client who is currently implementing an application that uses Oracle 10g in 20 distributed locations. Each location will have its own database server, locally managed. I plan to create a Disaster Recovery (DR) centre for this client in a centralised location. I plan to set up 20 application servers but only one database server with 20 instances. My question: can Data Guard manage the replication between the 20 databases (each a single instance) and the single server hosting 20 instances? The reason we designed it this way is to reduce the Oracle license cost.
View 9 Replies View RelatedThere are 2 databases, database A and database B. Database A is Oracle 11.2.0.2 which runs on linux and Database B is Oracle 11.2.0.2 which runs on windows xp machine. In database A, there are 100's of tables which are being updated every 10 minutes or 15 minutes. For reporting purpose, the developer wants to run report for the tables. But since database A is being updated every now and then, generating reports takes almost 15 to 20 minutes. So the reports can be generated in Database B. Once in a day the database B should have the updated data from database A so that the reports can be generated in database B with less time. What could be the best solution for the database B to have the updated data on daily basis from database A in oracle?
We have three Unix servers with four databases (10gR2) containing "HP Operation Management Unix" (OMU) server messages for monitoring purposes, and we now want to transfer these data to one new database on a new server for reporting purposes.
The message table in each OMU database keeps the message row until it is "Acknowledged", or for a maximum of fourteen days; then it is moved to an historic table where it stays for another three days. Keeping data for only seventeen days is done for performance reasons. The new "reporting database" is intended to hold message data for the last 90 days.
I wonder which method to use to move/replicate data from these databases. Materialized views over database links, with a view on top of the MVs? How do I keep rows longer than in the master (source) table, avoiding deletion when the master row is deleted?
Oracle Streams, with local capture and remote apply? How will this influence the master database's performance? There are about 10,000 new messages in each OMU database every day. Is it possible to have four Streams connections against the reporting database?
Or should I simply use database triggers which fire after insert and update and apply the changes to the reporting database over database links? (A trigger sketch follows.)
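A minimal sketch of the trigger option on one OMU database, assuming a link report_link to the reporting database and a messages table keyed by id (columns are illustrative); note that the remote DML becomes part of the local transaction, so an outage of the reporting database would then affect inserts on the OMU side:

CREATE OR REPLACE TRIGGER messages_to_report
AFTER INSERT OR UPDATE ON messages
FOR EACH ROW
BEGIN
  -- Upsert into the reporting copy; deletes are deliberately not propagated,
  -- so the reporting database can keep rows longer than the source does
  MERGE INTO messages@report_link r
  USING (SELECT :NEW.id AS id, :NEW.msg_text AS msg_text, :NEW.created AS created FROM dual) s
  ON (r.id = s.id)
  WHEN MATCHED THEN UPDATE SET r.msg_text = s.msg_text, r.created = s.created
  WHEN NOT MATCHED THEN INSERT (id, msg_text, created)
                        VALUES (s.id, s.msg_text, s.created);
END;
/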
We have an Oracle database link to MySQL. We want to force execution of a remote statement that contains a GROUP BY on the remote MySQL server, and let MySQL do the aggregation instead of Oracle.
We also tried DRIVING_SITE, but it doesn't work: the query that was sent to MySQL over the database link didn't include the GROUP BY, and it looks like the GROUP BY was executed on the local Oracle side.
Is there a way to force GROUP BY statements to execute on the remote DB instead of the local Oracle DB? (See the sketch below.)
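One option is to bypass the Oracle optimizer entirely and send the statement text to MySQL through the heterogeneous services pass-through package; a minimal sketch, assuming the link is named mysql_link and a remote table sales with columns category and amount (all illustrative):

DECLARE
  c   NUMBER;
  n   NUMBER;
  cat VARCHAR2(100);
  amt NUMBER;
BEGIN
  -- The statement below is parsed and executed by MySQL as-is, GROUP BY included
  c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@mysql_link;
  DBMS_HS_PASSTHROUGH.PARSE@mysql_link(
    c, 'SELECT category, SUM(amount) FROM sales GROUP BY category');
  LOOP
    n := DBMS_HS_PASSTHROUGH.FETCH_ROW@mysql_link(c);
    EXIT WHEN n = 0;
    DBMS_HS_PASSTHROUGH.GET_VALUE@mysql_link(c, 1, cat);
    DBMS_HS_PASSTHROUGH.GET_VALUE@mysql_link(c, 2, amt);
    DBMS_OUTPUT.PUT_LINE(cat || ': ' || amt);
  END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@mysql_link(c);
END;
/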
Actually, I am trying to replicate two DB servers, one in Hong Kong and another in China. When I am trying to establish the replication, I am getting the error 'ORA-04052: error occurred when looking up remote object', like this...
But when I tried the same approach on my local network, it works fine. I have tried schema replication through Enterprise Manager Grid Control.