Replication :: How Much Data Over The Network
Mar 1, 2011
I have implemented multi-master replication between two servers.
How much data is transferred over the network? How can I calculate this value?
We want to replicate data between databases. We have 4 databases on a network. If there is any change in database 1, e.g. an update to any table, it should automatically replicate to the other 3 databases. Likewise, if a user changes something in database 2, it should replicate to the other 3 databases, and vice versa. All 4 databases have the same schema and the same configuration.
What are the recommended network requirements for implementing Oracle multi-master replication?
I have an Oracle 11gR2 database on Linux. Its total SGA size is only 500 MB. Now, if a user wants to read 1 GB of data from the database, there is not enough memory in the buffer cache, so how will it work? Will the transaction succeed or fail? I have another doubt: can Oracle read data from memory only, or can it also read directly from disk?
I need to extract a huge amount of data from a couple of views... the problem is that they want it in TXT files with fixed record length. There will be about 6 files, for a total of about 10 GB.
How can I export those tables in the fastest possible way? If I'm not mistaken, exp and expdp can't create TXT files, so do I really need to use UTL_FILE or spool?
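For what it's worth, a plain SQL*Plus spool can produce fixed-length text quickly; a minimal sketch, assuming a hypothetical view MY_VIEW with a numeric COL1 and a text COL2 (names, widths, and the spool path are placeholders, not from the original post):

-- Fixed-record-length extract via spool (sketch).
SET PAGESIZE 0
SET LINESIZE 200
SET FEEDBACK OFF
SET TERMOUT OFF
SET TRIMSPOOL ON
SPOOL /tmp/extract_01.txt
SELECT LPAD(col1, 10, '0')          -- numeric field, fixed width 10
       || RPAD(NVL(col2, ' '), 50)  -- text field, padded to width 50
FROM   my_view;
SPOOL OFF

For volumes like 10 GB, spooling is usually faster than row-by-row UTL_FILE writes, and the extract can be split across parallel sessions by key range if needed.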
I am working in a bank as a system consultant. I have a SAN storage area and Oracle set up as below.
SAN 1
This interface includes the DATA FILES of the Oracle tablespaces.
SAN 2
SAN 1 mirrors the DATA FILES of the Oracle tablespaces to SAN 2.
1. Can I rely on real-time data recovery from SAN 2?
2. If SAN 1's data files are corrupted, will the SAN 2 data files be corrupted as well?
3. If SAN 2 is corrupted, what Oracle features can be used to recover uncorrupted data?
I'm using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production and TNS for Linux: Version 11.2.0.3.0 - Production. The requirement is to create a script to add a LIST partition to selected tables in a schema (the tables have no data and are not partitioned). There are about 300 such tables (the number can vary) and their names are maintained in a separate table. Example:
Existing table:
CREATE TABLE test_part(id number (2),
name varchar2(20),
audit_userid number (9));
Expected table:
CREATE TABLE test_part(id number (2), name varchar2(20), audit_userid number (9))
PARTITION BY LIST (audit_userid) (PARTITION p1_audit_userid VALUES (1));
The ultimate goal is to add more partitions later based on the amount of data to be populated.
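Since the tables are empty, one hedged approach is to loop over the list of table names and rebuild each one as a partitioned copy via CTAS, then swap the names. A sketch only; the driver table PART_TABLES and the assumption that every table has an AUDIT_USERID column are placeholders, not from the original post:

-- Sketch: assumes a driver table PART_TABLES(table_name) listing the ~300 tables.
-- Caveats: CTAS does not carry over grants, indexes, or constraints, and names
-- longer than 25 characters would overflow the 30-char limit with '_PART' added.
BEGIN
  FOR r IN (SELECT table_name FROM part_tables) LOOP
    -- Build an empty partitioned copy (the tables hold no data).
    EXECUTE IMMEDIATE
      'CREATE TABLE ' || r.table_name || '_part' ||
      ' PARTITION BY LIST (audit_userid)' ||
      ' (PARTITION p1_audit_userid VALUES (1))' ||
      ' AS SELECT * FROM ' || r.table_name || ' WHERE 1 = 0';
    -- Swap the old table out and rename the partitioned copy into place.
    EXECUTE IMMEDIATE 'DROP TABLE ' || r.table_name;
    EXECUTE IMMEDIATE 'RENAME ' || r.table_name || '_part TO ' || r.table_name;
  END LOOP;
END;
/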
I have created 2 PCs (one for the Primary DB (IT_SERVER), one for the Standby DB (IT_SERVER2)) on Oracle VM VirtualBox in order to check my Data Guard configuration.
I have created a shared folder on the 2nd PC (IT_SERVER2) for the Oracle standby DB. Now when I issue a query from the 1st PC (Primary DB) to create a parameter file in the 2nd PC's shared Oracle folder, it returns the following errors:
SQL> CREATE PFILE='\\IT_SERVER2\E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE;
CREATE PFILE='\\IT_SERVER2\E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE
*
ERROR at line 1:
ORA-09210: sftopn: error opening file
OSD-04002: unable to open file
O/S-Error: (OS 67) The network name cannot be found.
Then I try this:
SQL> CREATE PFILE=\\IT_SERVER2\'E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE;
CREATE PFILE=\\IT_SERVER2\'E:\ORACLE\PFILE\STLDB2.ORA' FROM SPFILE
*
ERROR at line 1:
ORA-00911: invalid character
What to do?
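For what it's worth, CREATE PFILE expects a single quoted OS path that the database server process can resolve, and a UNC path cannot embed a drive letter. A sketch, assuming a hypothetical share named ORACLE_SHARE exported from IT_SERVER2 (the share name is an assumption, not from the original post):

-- Sketch: \\IT_SERVER2\ORACLE_SHARE is a hypothetical share; the Oracle service
-- account on the primary must also have network rights to reach it.
CREATE PFILE='\\IT_SERVER2\ORACLE_SHARE\STLDB2.ORA' FROM SPFILE;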
I have a primary database which ships logs successfully to a standby database, where they are applied successfully as well.
I can connect in SQL*Plus using the TNS names defined for Data Guard (which are already used for log shipping).
Yet, when configuring the Data Guard broker, the command add database 'standby' connect identifier is 'to_standby'; fails.
Looking into drc<instance>.log, I see the error ORA-03113 (network I/O).
I haven't seen any error like that online, and I can't explain why it occurs when logs already ship with the same connect identifier.
I have edited dest_2 with and without the LGWR and SYNC parameters and reconfigured the broker, without success. I've removed the configuration files and tried again.
I've checked that the password file is the same on both servers as well. Nothing seems to work. Is there a parameter I'm missing somewhere?
We have an application that fetches and writes data in an Oracle database through Pro*C. The Oracle database is on another server.
We are storing some secure information in the Oracle database, so we want to encrypt the data sent by our application to the Oracle database. We do not want to use SSL (i.e. certificates), do not want to use the Advanced Security Option available in Oracle, and do not want to make any changes to the sqlnet.ora file on the server side.
How can we achieve encryption of traffic between our application and the Oracle database?
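With the wire-level options ruled out, what remains is encrypting the sensitive values in the Pro*C client itself before they leave the application (e.g. with a C crypto library), so only ciphertext crosses the network. If the same algorithm and key scheme are used, the server can decrypt with DBMS_CRYPTO when needed. A sketch of the server-side counterpart; the SECURE_DATA table and the hard-coded key are placeholders, not from the original post:

-- Sketch: server-side decrypt of a value the client encrypted with
-- AES-256 / CBC / PKCS5 before sending. Hard-coding the key is for
-- illustration only; real key management must live in the application.
DECLARE
  l_key   RAW(32) := UTL_RAW.cast_to_raw('0123456789abcdef0123456789abcdef');
  l_plain VARCHAR2(2000);
BEGIN
  SELECT UTL_RAW.cast_to_varchar2(
           DBMS_CRYPTO.decrypt(
             src => payload,
             typ => DBMS_CRYPTO.encrypt_aes256
                    + DBMS_CRYPTO.chain_cbc
                    + DBMS_CRYPTO.pad_pkcs5,
             key => l_key))
  INTO   l_plain
  FROM   secure_data
  WHERE  id = 1;
END;
/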
I am trying to write data to a network shared folder. When I write to a local file it works perfectly. Below is my procedure.
CREATE OR REPLACE procedure nbpsbp_file as
type r_cursor is ref cursor;
refr r_cursor;
tab_name varchar2(20):= null;
tab_name1 varchar2(20) := null;
tab_name2 varchar2(20) := null;
[code]....
When I execute the above procedure, it gives me the following error
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
ORA-06512: at "NBPSBP_FILE", line 36
ORA-06512: at line 1
I have also set the parameter utl_file_dir = '\\10.16.10.225\temp'. When I set utl_file_dir to a local folder, for example c:\temp, and use the same path in UTL_FILE.FOPEN, it works fine and writes the desired output to the text file. But when I give it a network address, it raises the above error.
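One thing worth checking: UTL_FILE runs under the OS account of the database server process, and on Windows the default LocalSystem service account typically cannot see UNC paths at all, whatever utl_file_dir says. Also, directory objects are the preferred alternative to utl_file_dir. A sketch, using the share path from the post and a hypothetical directory object name and grantee:

-- Sketch: NET_DIR and NBPSBP_USER are placeholder names.
CREATE OR REPLACE DIRECTORY net_dir AS '\\10.16.10.225\temp';
GRANT READ, WRITE ON DIRECTORY net_dir TO nbpsbp_user;

DECLARE
  f UTL_FILE.file_type;
BEGIN
  f := UTL_FILE.fopen('NET_DIR', 'output.txt', 'w');  -- directory object, not a path
  UTL_FILE.put_line(f, 'test line');
  UTL_FILE.fclose(f);
END;
/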
Suppose we have one main database on one side (say A) and an external system on the other side (say C). Midway, we have one more staging database (say B). Let's say we have one record related to bank information in both database A and database B, and the following activities are performed:
1) The "Bank Name" column in database B is updated by the external system C. When this Bank Name is updated in database B by external system C, we need to update the value of this field in database A.
2) Suppose a couple of days later, the bank phone number of the same bank record is updated in the main database A, and the same update needs to be reflected in staging database B.
How can we take care of both these data-synchronization activities? What are the different approaches we can take? FYI, we are on Oracle 10g Release 2 and Windows OS.
We have Development, Staging/UAT (installed on XX.XX.XX.10) and Production (installed on XX.XX.XX.20) environments respectively. I have queries regarding getting data from the Production environment into the Staging environment. The overall PROD database size is around 250 GB.
STAGING DATABASE DETAILS
SID : STG_DB
Staging Schema Name :schema_UAT
Replication Schema Name :schema_PrdReplica ( This is the schema where the production data gets loaded daily)
PROD DATABASE DETAILS
SID : PROD_DB
Prod Schema Name : schema_PROD
What is happening now:
----------------------------
There is a script (stored procedure) on the staging (STG_DB.schema_PrdReplica) environment which executes nightly and does the replication. Currently we use DBMS_DATAPUMP to get the ENTIRE data/metadata (everything) from Production to Staging. It is taking significant time: approximately 8 hours to replicate everything from PROD_DB.schema_PROD to STG_DB.schema_PrdReplica.
What I am expecting :
-----------------------------
I want to reduce the replication time.
I have heard about Level 0 (full backup) and Level 1 (incremental cumulative) backups in RMAN. I am planning to take a full (Level 0) backup of PROD_DB.schema_PROD on Sunday and restore it to STG_DB.schema_PrdReplica immediately. On weekdays (Mon - Fri) I will take Level 1 (incremental cumulative) backups and restore them to STG_DB.schema_PrdReplica.
I am assuming that by doing so, the overall replication time will be reduced. How can I implement this with a script, given that the two servers are on different machines?
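One caveat: RMAN backups operate at the database or tablespace level, not at the schema level, so this approach would refresh the entire staging database rather than just schema_PrdReplica. With that constraint in mind, a sketch of Oracle's incrementally updated image copy pattern, which fits the weekly-plus-daily cycle (the tag is a placeholder):

# Daily run: the first execution creates the level 0 image copy; later runs
# take a level 1 incremental and roll the previous one into the copy.
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'stg_sync' DATABASE;
RECOVER COPY OF DATABASE WITH TAG 'stg_sync';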
In multimaster replication, I am getting the following error in the master sites.
ORA-01403: no data found
ORA-01403: no data found
But when I checked for the data, I found that the records are present.
We need to transfer data from Oracle 10g to Oracle 9i under the following conditions.
There will be two database servers: one is the online server, where online users fill in forms generated by Java/Spring/Hibernate, using a 10g database. At day end I need to execute a process that transfers data from the online server to an offline server running Oracle 9i. This process is scheduled. For security reasons the client does not keep these two databases on the same network. My challenge is to transfer data from the online server to the offline server while complying with the client's security norms. I have options like:
1) Using the Oracle replication method: creating a materialized view on the remote server and refreshing it at regular intervals. But database connectivity is not continuous; should I go for that?
2) Writing a Java application on an intermediate server that obtains connections to both database servers. From the Java application we call a procedure that selects data from Oracle 10g and inserts it into the Oracle 9i database, using a flag on both sides to identify how many rows are transferred and how many remain.
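If a connectivity window exists at day end, option 2 can also be done entirely in PL/SQL over a database link, avoiding the intermediate server. A sketch, assuming a hypothetical link TO_9I, a source table ORDERS, and a TRANSFERRED flag column (all placeholders, not from the original post):

-- Sketch: pushes untransferred rows to the 9i side, then flags them.
-- In production, capture the set of transferred IDs first, so rows inserted
-- while this runs are not flagged without being copied.
CREATE OR REPLACE PROCEDURE push_to_offline AS
BEGIN
  INSERT INTO orders@to_9i (order_id, order_data)
    SELECT order_id, order_data
    FROM   orders
    WHERE  transferred = 'N';

  UPDATE orders
  SET    transferred = 'Y'
  WHERE  transferred = 'N';

  COMMIT;  -- both statements form one distributed transaction
END;
/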
We are getting the following error when trying to extract data from ASM.
GGS ERROR 500 Oracle GoldenGate Capture for Oracle, ext_1.prm: Getting attributes for ASM file +DATA/testgg/onlinelog/group_1.257.742844671, SQL <BEGIN dbms_diskgroup.getfileattr('+DATA/testgg/onlinelog/group_1.257.742844671', :filetype, :filesize, :lblksize); END;>: (6550) ORA-06550: line 1, column 7: PLS-00201: identifier 'DBMS_DISKGROUP.GETFILEATTR' must be declared ORA-06550: line 1, column 7: PL/SQL: Statement ignored
Not able to establish initial position for begin time 2011-02-16 16:42:05.
Database version is 10.2.0.4.
Oracle data replication: I am using Oracle 10g Release 10.1 and I want to replicate my data from one machine to another machine.
Actually I am trying to replicate two DB servers, one in Hong Kong and another in China. When I try to establish the replication, I get an error like 'ORA-04052: error occurred when looking up remote object'...
But the same approach works fine in my local network. I have tried schema replication through Enterprise Manager Grid Control.
We have four locations with four separate database servers, and we need to synchronize all the servers.
For example, all the servers have a table PARTS. If the PARTS_COUNT column changes on one server, the change should propagate to all four servers.
Materialized views are normally used for summarized data access.
CREATE MATERIALIZED VIEW mv_snapshot_A
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 20/1440
WITH PRIMARY KEY
AS SELECT * FROM A;
This does not seem to be the case here, as the materialized view is just a full SELECT. The overhead of the snapshot logs is concerning for this core table. Can we turn off logging in 10g? The materialized view is defined as fast refresh / build immediate.
The main requirement is to keep the snapshot refreshed every 15 minutes so that users can see updated information (the flow of data from one location to another).
Users get a location-wise count of data and can drill down into details, such as location-wise and system-wise data counts. As the base table is volatile, the materialized view is used so that when the user clicks for location-wise details the data stays static for 15 minutes and the user doesn't get confused.
I need to copy data from certain tables from one DB to another at random intervals. The table structure for the ones being copied can be the same in both DBs.
After reading various posts here I have understood that it can be done using Oracle Replication and Oracle Streams.
How do these 2 methods work, and how are they different from each other?
I have a client who is currently implementing an application that uses Oracle 10g in 20 distributed locations. Each location will have its own locally managed database server. I plan to create a Disaster Recovery (DR) centre for this client in a centralised location. I plan to set up 20 application servers but only one database server with 20 instances. My question: can Data Guard manage the replication between the 20 databases (each with a single instance) and the single database server (with 20 instances)? The reason we designed it this way is to reduce the Oracle license cost.
How can I create a materialized view without having any data in it?
For example:
I create a materialized view based on a view.
CREATE MATERIALIZED VIEW test_mv
REFRESH FORCE ON DEMAND
AS
SELECT * FROM test_view
In the above case the data fetched by the view test_view gets stored in the materialized view test_mv. Suppose I want materialized view test_mv to be created with all the columns of test_view but not the data; I will refresh test_mv for data later, as and when required.
What should I do for immediate creation of materialized view test_mv without data?
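Oracle has a documented clause for exactly this: BUILD DEFERRED creates the materialized view empty, to be populated on the first refresh. A minimal sketch based on the view from the post (the first refresh of a BUILD DEFERRED MV must be a complete refresh):

-- Created empty; rows appear only after the first refresh.
CREATE MATERIALIZED VIEW test_mv
BUILD DEFERRED
REFRESH FORCE ON DEMAND
AS
SELECT * FROM test_view;

-- Later, populate on demand with a complete refresh:
EXEC DBMS_MVIEW.REFRESH('TEST_MV', method => 'C');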
I'm trying to create replication of data from the M.V. site to the master site.
/* AT MASTER SITE */
1. MASTER REPLICATE GROUP
2. MASTER REPLICATE OBJECT
3. REPLICATION SUPPORT
4. REPLICATION ACTIVITY
5. M.V. LOG
/* AT M.V. SITE */
1. MATERIALIZED VIEW (M.V)
2. DBMS_REFRESH.MAKE (REFRESH GROUP)
AND I AM NOT ABLE TO CREATE
1. CREATE_MVIEW_REPGROUP
2. CREATE_MVIEW_REPOBJECT
in the M.V. site, and I am getting the following error while creating the REPGROUP:
CODE
SQL> BEGIN
2 DBMS_REPCAT.CREATE_MVIEW_REPGROUP (
3 gname => 'emp_repg',
4 master => 'orc1',
[code]..
Replication happens across the servers to make sure that the data is the same (synchronized). If one of the servers crashes and later recovers, does it need to repair its data to keep it in sync with the other servers?
View 4 Replies View Relatedi need to set up a central server with all the master tables and two other local database which will hold the updatable materialized view of the master table...the databases must be synchronized with central server..and user will work on the materialized view database...
Some materialized views get a broken status on refresh, but only sometimes. When I try to refresh them manually I get the following message:
"ORA-01400: cannot insert NULL into...".
But I know for sure that there are no NULL values in the master table; the MV and master tables are declared in the same way, and all columns in the master tables are NOT NULL columns. Another thing: I get this error only on columns with data type CLOB.
I have created an MV on Oracle 10gR2 using a dblink. The source database is AS400 DB2. MV script:
CREATE MATERIALIZED VIEW MV_BU
TABLESPACE TB_XXX
PCTUSED 0
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
[code]...
The above definition should refresh the data every minute, but I realized that it's not doing so.
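Since the refresh clause itself is elided in the post, one common cause worth hedging on: scheduled MV refreshes in 10g run through the job queue, so they silently never fire if JOB_QUEUE_PROCESSES is 0. A sketch of what the once-a-minute clause and the instance check might look like (the clause shown is an assumption, not the poster's actual one):

-- Refresh clause for a once-a-minute schedule (sketch):
--   REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1/1440

-- Scheduled refreshes need job queue processes; 0 means they never run.
SHOW PARAMETER job_queue_processes
ALTER SYSTEM SET job_queue_processes = 10;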
I have two servers:
10g server 10.1.4.30
11g server 10.1.4.32
I have a table testing_mview on 10g which does not have a primary key. I have created an MV log for it:
CREATE MATERIALIZED VIEW LOG ON testing_mview WITH ROWID;
Materialized view log created.
My requirement is:
1) if there is an UPDATE/DELETE/INSERT on testing_mview, it should be written to the MATERIALIZED VIEW LOG (this has been achieved)
2) materialized view testing_mview1 on the 11g server 10.1.4.32 should pull these changes on a scheduled basis (I have created the database link ORCL10R2 here from 11g to 10g)
On 11g:
SQL> create database link ORCL10R2 connect to omig identified by pswd using
'(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.4.30)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ICS3)
)
)';
Database link created
SQL> create materialized view testing_mview1 REFRESH FAST with rowid as select * from testing_mview@ORCL10R2;
create materialized view testing_mview1 REFRESH FAST with rowid as select * from testing_mview@ORCL10R2
*
ERROR at line 1:
ORA-04052: error occurred when looking up remote object SYS.DBMS_SNAPSHOT@ORCL10R2
ORA-00604: error occurred at recursive SQL level 2
ORA-06544: PL/SQL: internal error, arguments: [55916], [], [], [], [], [], [], []
ORA-06553: PLS-801: internal error [55916]
ORA-02063: preceding 2 lines from ORCL10R2
I have 3 reporting tables, with 2.2 million records each, being rebuilt nightly. The data is used online 24/7 by users, so snapshot tables are built from the refreshed reporting tables. The current method to do this:
delete from snapshot table;
insert into snapshot table (select * from report table);
<repeat for other 2 tables>
commit;
This seems resource-intensive on the system, even though the table is defined with the NOLOGGING option.
Is it better to create an MV (SELECT only, with REFRESH COMPLETE ON DEMAND)? The query is very simple, without joins, so at first it seems like overkill. However, I also see that DBMS_MVIEW.REFRESH allows an atomic option: if 1 of the 3 MVs fails during refresh, all 3 roll back, which is a nice feature.
Are there better ways to replicate a snapshot table that I've missed? Is a delete-and-insert strategy a bad idea?
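For comparison, the MV version of this pattern is short; a sketch using placeholder names. Note the trade-off in DBMS_MVIEW.REFRESH: atomic_refresh => TRUE gives the all-or-nothing behaviour mentioned above (delete plus insert in one transaction), while FALSE lets Oracle use TRUNCATE plus direct-path insert, which is usually faster but briefly leaves the table empty to readers:

-- Sketch: REPORT_T1 and SNAP_T1..SNAP_T3 are placeholder names.
CREATE MATERIALIZED VIEW snap_t1
REFRESH COMPLETE ON DEMAND
AS SELECT * FROM report_t1;

-- Refresh all three in one atomic transaction: if one fails, all roll back.
BEGIN
  DBMS_MVIEW.REFRESH(
    list           => 'SNAP_T1,SNAP_T2,SNAP_T3',
    method         => 'C',
    atomic_refresh => TRUE);
END;
/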