Replication :: Replicate (A) Schema Objects To (B) At Server?
Mar 30, 2011
My question is about Oracle multi-master replication, described below.
Can I replicate some objects of schema "X" on server A to schema "Y" on server B using multi-master replication?
I need the steps to replicate one schema to another system. I am using Standard Edition One, 11gR2. What are the necessary steps to implement basic replication?
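Multi-master (Advanced Replication) is an Enterprise Edition feature, so on Standard Edition One the usual route is read-only materialized view replication. What follows is only a minimal sketch; the schema, database link and table names are hypothetical.

-- On the source database: a materialized view log allows fast refresh.
CREATE MATERIALIZED VIEW LOG ON x.my_table WITH PRIMARY KEY;

-- On the target database: a link back to the source and a read-only materialized view.
CREATE DATABASE LINK src_link CONNECT TO x IDENTIFIED BY password USING 'SRC';

CREATE MATERIALIZED VIEW y.my_table
  REFRESH FAST WITH PRIMARY KEY
  AS SELECT * FROM x.my_table@src_link;

-- Refresh every 10 minutes through a refresh group.
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'Y.SRC_GROUP',
    list      => 'Y.MY_TABLE',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 10/1440');
END;
/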
How can I add an EXTTRAIL to a Replicat?
I have a table MYTABLE in database mydb1, duplicated via a materialized view, a materialized view log and refresh commands to a MYTABLE in database mydb2.
I would like to duplicate this table MYTABLE to a third database as well, using the same method (materialized view and refresh).
Is it possible? What happens to the materialized view log when I launch a refresh from mydb2? How is this materialized view log truncated?
I have a table in one database and I want to replicate it to two other databases with materialized views. The refresh should be a fast refresh. Is this possible?
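A hedged sketch of the fast-refresh setup, assuming a primary-key table SRC.T1 on the source and a database link called TO_SOURCE on each target database (all names hypothetical). One materialized view log on the source serves both sites, and its rows are purged only after every registered materialized view has refreshed.

-- On the source database (run once):
CREATE MATERIALIZED VIEW LOG ON src.t1 WITH PRIMARY KEY;

-- On each of the two target databases:
CREATE MATERIALIZED VIEW src.t1
  REFRESH FAST ON DEMAND
  AS SELECT * FROM src.t1@to_source;

-- Fast refresh on demand (or put it in a refresh group / scheduler job):
EXEC DBMS_MVIEW.REFRESH('SRC.T1', method => 'F');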
I have the following query. After a schema refresh using export and import, I need to compare the object counts between databases by getting the row count of each table.
-- SELECT 'TRUNCATE TABLE '||OWNER||'.'||TABLE_NAME||' ;' FROM DBA_TABLES WHERE OWNER='PRICING' order by TABLE_NAME;
SQL> SELECT 'select count(*) from '||OWNER||'.'||TABLE_NAME||' ;' FROM DBA_TABLES WHERE OWNER='PRICING' order by TABLE_NAME;
The expected output was each table name in the schema followed by its number of rows, but it was not displayed that way.
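One way around that is to run the counts from a PL/SQL block so each table name is printed next to its own count. A minimal sketch, assuming SELECT access to DBA_TABLES and to the PRICING tables:

SET SERVEROUTPUT ON
DECLARE
  v_count NUMBER;
BEGIN
  FOR t IN (SELECT owner, table_name
              FROM dba_tables
             WHERE owner = 'PRICING'
             ORDER BY table_name)
  LOOP
    EXECUTE IMMEDIATE
      'SELECT COUNT(*) FROM "' || t.owner || '"."' || t.table_name || '"'
      INTO v_count;
    -- print the table name padded next to its row count
    DBMS_OUTPUT.PUT_LINE(RPAD(t.table_name, 32) || v_count);
  END LOOP;
END;
/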
I was importing one schema from Oracle 10g into 11g using traditional import (imp). I imported as the SYS user, so all the objects were created in the SYS schema. How can I remove these objects and retain only the default SYS objects?
I heard that the USERS tablespace should not contain any application schema objects.
If that statement is true, why should it not contain other schema objects?
Has anyone implemented the GoldenGate replication tool, in particular to replicate data from Oracle to Sybase?
No details are needed, just a quick confirmation that it has been done successfully.
I get an import error while trying to import objects into a schema.
Export file created by EXPORT:V11.02.00 via direct path
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
. importing DEMO's objects into TEST
. . importing table "TAB1"
IMP-00058: ORACLE error 1950 encountered
ORA-01950: no privileges on tablespace 'USERS'
I understand we need to grant the user quota on the tablespace, as below:
ALTER USER <user> QUOTA UNLIMITED ON <tablespace_name>;
My other question is: can we grant QUOTA UNLIMITED on <tablespace_name> to a user?
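As a sketch (user and tablespace names taken from the log above as placeholders): quota is set with ALTER USER, while the GRANT form gives unlimited quota on every tablespace, which is broader than a per-tablespace quota.

-- per-tablespace quota
ALTER USER test QUOTA UNLIMITED ON users;

-- unlimited quota on all tablespaces (the UNLIMITED TABLESPACE system privilege)
GRANT UNLIMITED TABLESPACE TO test;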
I imported the HR schema from an export dump. I can find all the objects of schema HR in the imported database, but I got an error for a PLAN_TABLE that is assigned to the USERS tablespace in the source database.
Here is the error I got during the import:
[oracle@localhost mom]$ imp file=exp_schema_ref.dmp log=imp.ref.log fromuser=hr touser=hr commit=y
Import: Release 10.2.0.1.0 - Production on Mon Jul 26 00:03:57 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Username: sys/sys as sysdba
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via direct path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing HR's objects into HR
. . importing table "COUNTRIES" 25 rows imported
. . importing table "DEPARTMENTS" 27 rows imported
. . importing table "EMPLOYEES" 107 rows imported
[code]....
I work on an application that replicates a publisher table to a remote subscriber table using a snapshot and a trigger. Replication of data changes works perfectly (update, insert, delete), but when I try to add or drop a column on the publisher table, replication fails. I know my method does not replicate DDL statements such as CREATE or ALTER TABLE, so I would like to know the best way to replicate the DDL statements without losing the data in the subscriber table. I am working with Oracle XE.
I am looking at a performance issue at the moment and trying to replicate it on a test system. I am initially looking at the impact of up-to-date statistics on the main schema's objects.
For this I wanted to:
1. run the batch with whatever stats were present in the database;
2. flash the database back to before the batch;
3. gather stats;
4. re-run the batch with updated stats and compare the results.
However, I inadvertently ran the stats job before running the load the first time. I have the SCN from when the environment was set up like production (i.e. before the stats were gathered), so am I correct in saying that if I flash the database back to this point the stats will be "old" and I can then just run the batch? I know I can verify this after the flashback by looking at LAST_ANALYZED on the tables, but it would be good to know beforehand as it is a 12-hour batch.
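For reference, a hedged sketch of the flashback itself (the SCN value and schema name are placeholders). FLASHBACK DATABASE runs with the database mounted, and optimizer statistics gathered after that SCN are rolled back along with everything else.

SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN 1234567;
ALTER DATABASE OPEN RESETLOGS;

-- confirm the statistics look "old" again
SELECT table_name, last_analyzed
  FROM dba_tables
 WHERE owner = 'MYSCHEMA'
 ORDER BY table_name;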
We transferred our Oracle 11.1.0.7 database from Windows 2003 Enterprise Edition 32-bit to a Windows 2008 Enterprise Edition 64-bit server. The database is working fine, but we have 53 invalid (uncompiled) objects related to OLAPSYS and PUBLIC, as follows:
OLAPSYS  ALL$OLAP2_AW_CUBE_AGG_LVL    VIEW
OLAPSYS  ALL$OLAP2_AW_CUBE_AGG_MEAS   VIEW
OLAPSYS  MRAC_OLAP2_AW_DIMENSIONS_V   VIEW
OLAPSYS  ALL$OLAP2_AW_CUBE_AGG_OP     VIEW
OLAPSYS  ALL$OLAP2_AW_CUBE_AGG_SPECS  VIEW
[code]....
Our business application is working fine and no other schema has any invalid objects. How can we revalidate the OLAPSYS and PUBLIC objects?
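A minimal sketch of revalidating them, run as SYSDBA: utlrp.sql recompiles everything invalid, and DBMS_UTILITY.COMPILE_SCHEMA can target a single schema.

-- recompile all invalid objects in the database
@?/rdbms/admin/utlrp.sql

-- or target one schema at a time
EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => 'OLAPSYS', compile_all => FALSE);

-- check what is still invalid afterwards
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_name;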
Actually I am trying to replicate two DB servers, one in Hong Kong and another in China. When I try to establish the replication I get an error: 'ORA-04052: error occurred when looking up remote object'.
The same setup works fine on my local network. I have tried schema replication through Enterprise Manager Grid Control.
Let's say we have Table A and we would like to replicate specific row transactions to Table B.
Here are the rows in *Table A*
Time: let's say 15:00
A1 - just updated @15:00
A2 - just inserted @15:01
A3
B1 - Daily delete row, i.e. just deleted a while back - non-scheduled process, executed by the application @15:02
B2 -
B3 - Daily delete row, i.e. just deleted a while back - non-scheduled process, executed by the application @15:05
B4 - Just recently purged (as part of the 180-day purge) - scheduled process executed by the operations team @15:10
B5 - Just recently purged (as part of the 180-day purge) - scheduled process executed by the operations team @15:10
B6 - Just recently purged (as part of the 180-day purge) - scheduled process executed by the operations team @15:10
Current Data in Table B (Before Replication) @15:00
A1 (without updates)
A3
B1
B2
B3
B4
B5
B6
Expected rows in Table B (via replication/snapshot/materialized view / or any other method)
*Replication at 15:30*
Table B - Read Only
Expected rows after replication-
A1 -- Newly updated details
A2 -- Newly inserted row
A3
B1 - Daily delete row is expected to be replicated
B2
B3 - Daily delete row is expected to be replicated
***Note: the purge of rows B4, B5 and B6 is not expected to be replicated to Table B.
Questions:
1) How can we get updates, inserts and daily deletes replicated while ignoring large purges? (One possible approach is sketched after these questions.)
2) How can large purge changes be reflected in the replicated tables as well, without deleting the daily deletes?
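One possible approach to the first question, offered only as a hedged sketch (table and column names, and the purge job's module name, are hypothetical): have the replication trigger skip DELETEs issued by the bulk purge job, which identifies itself via DBMS_APPLICATION_INFO, while normal single-row changes keep flowing to Table B.

CREATE OR REPLACE TRIGGER trg_a_to_b
AFTER INSERT OR UPDATE OR DELETE ON table_a
FOR EACH ROW
DECLARE
  v_module VARCHAR2(64);
  v_action VARCHAR2(64);
BEGIN
  -- the 180-day purge job is assumed to call
  -- DBMS_APPLICATION_INFO.SET_MODULE('PURGE_180_DAYS', NULL) before it runs
  DBMS_APPLICATION_INFO.READ_MODULE(v_module, v_action);

  IF DELETING THEN
    IF NVL(v_module, 'none') <> 'PURGE_180_DAYS' THEN
      DELETE FROM table_b WHERE id = :OLD.id;      -- daily deletes are propagated
    END IF;                                        -- bulk purges are ignored
  ELSIF INSERTING THEN
    INSERT INTO table_b (id, payload) VALUES (:NEW.id, :NEW.payload);
  ELSE
    UPDATE table_b SET payload = :NEW.payload WHERE id = :NEW.id;
  END IF;
END;
/

If Table B lives in another database, the same DML could be issued over a database link, though for higher volumes a log-based mechanism (materialized views, Streams or GoldenGate with a filter on the purge) is usually preferred over a row-level trigger.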
I was wondering whether we ever think about the disadvantages of schema objects, or whether we just love the concept and use them. So recently I started to think about what the disadvantages of schema objects might be.
For example, packages have lots of advantages: information hiding, the scope of variables declared in the spec, avoiding the dependency crisis, and so on. Then I read somewhere that if a package is referenced, the entire package is loaded into memory.
So my question is: if the package is very large, will it fit in memory without problems? I am assuming something like a least-recently-used mechanism applies. I have a question about the disadvantages of:
1. Packages
2. Cursors
3. Collections, includes Associative arrays, VARRAY, nested table
4. materialized views
5. views
6. indexes
7. having more than 100 columns in a table - is it necessary to split it, or keep everything in the same table?
Any object can have its own disadvantages.
I got the error below while executing the procedure. What the procedure does is recreate the objects of one schema in another. After recreating some objects it throws the error.
The code is:
CREATE OR REPLACE PROCEDURE SCHEMA_IMPORT6
AUTHID CURRENT_USER
AS
  V_Objid NUMBER;
  V_Xml XMLTYPE;
  V_Xml_Ind XMLTYPE;
[code]....
I want to move all database objects from one schema to another schema.
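A hedged sketch of one way to do it with Data Pump (schema, directory and file names are placeholders):

expdp system/password schemas=OLD_OWNER directory=DATA_PUMP_DIR dumpfile=old_owner.dmp logfile=old_owner_exp.log
impdp system/password directory=DATA_PUMP_DIR dumpfile=old_owner.dmp logfile=old_owner_imp.log remap_schema=OLD_OWNER:NEW_OWNER

After the import the objects exist as copies in the new schema; the originals in the old schema have to be dropped separately if a true move is wanted.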
Recently I migrated our Oracle database to a new machine using exp/imp on a schema basis. After the import finished I had tons of invalid objects in the database. I ran the utlrp.sql script and lots of them were validated. Then I manually recompiled the remaining ones in EM, but two invalid objects (the MGMT_JOB_UI package specification and body) in the SYSMAN schema gave errors while recompiling.
Now when I click on any scheduled job to edit it or view its schedule, EM throws the following error:
X Error
jobType - jobType page property expected
I think it is related to that invalid package. The errors while compiling the specification are as follows:
Line # = 50 Column # = 1 Error Text = PL/SQL: Declaration ignored
Line # = 65 Column # = 9 Error Text = PLS-00201: identifier 'JOBRUNTABLETYPE' must be declared
Line # = 88 Column # = 1 Error Text = PL/SQL: Declaration ignored
Line # = 107 Column # = 9 Error Text = PLS-00201: identifier 'JOBEXECTABLETYPE' must be declared
The Package specification is as below,
AS
  --------------------------------------------------------------------------------
  -- type definition
  TYPE CURSOR_TYPE IS REF CURSOR;

  -- Get the targets for this step, if any
  -- if p_return_display_names is false, return the target's internal name
  FUNCTION get_step_targets (
    p_step_id              NUMBER,
    p_return_display_names BOOLEAN DEFAULT true
  ) RETURN SMP_EMD_STRING_ARRAY;

  -- Get the targets for this step, if any, as a comma separated string
  -- if p_return_display_names is false, return the target's internal name
  FUNCTION get_step_targets_str (
    p_step_id              NUMBER,
    p_return_display_names BOOLEAN DEFAULT true
  ) RETURN VARCHAR2;

  -- Get the parameters for this job, filter as specified for this jobtype
  PROCEDURE get_visible_params (
    p_job_id     RAW,
    p_exec_id    RAW,
    p_params_out OUT CURSOR_TYPE
  );

  -- Get the URI for this uri_use
  -- see emSDK/job/dtd/UriSource.java for uri_use constants
  FUNCTION get_display_uri (
    p_job_type IN VARCHAR2,
    p_uri_use  IN
[code]....
I have schema A and I want to check who (which schemas) can access schema A's objects and what privileges those other schemas have on the objects of schema A.
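A minimal sketch of the dictionary queries (the owner name 'A' is taken from the question):

-- object privileges granted on schema A's objects
SELECT grantee, table_name, privilege, grantable
  FROM dba_tab_privs
 WHERE owner = 'A'
 ORDER BY grantee, table_name, privilege;

-- users holding "ANY" system privileges can also reach the objects; check those separately
SELECT grantee, privilege
  FROM dba_sys_privs
 WHERE privilege IN ('SELECT ANY TABLE', 'INSERT ANY TABLE', 'UPDATE ANY TABLE', 'DELETE ANY TABLE');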
I am building an information service that manages Suppliers. The suppliers are used by our billing system, tender system and sales system. Although 60% of the attributes of a supplier are unique to each system, 40% of the attributes are shared across the systems.
My objective is to build a flexible system, so that a change to one individual system's data does not impact the other systems. For example, if I need to take certain tables offline to upgrade them, it should not impact the rest of the systems that need supplier information. What is the best way of achieving this? Should all the context-specific attributes live in one schema, but be deployed in different tablespaces? Also, reads and updates may happen more for one set of attributes than for the others. How should I represent them logically in one model, yet deploy them in such a fashion that they can evolve independently?
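A hedged sketch of one possible layout (all names are hypothetical): the shared attributes live in a core table, each system gets a satellite table in its own tablespace, and every system reads through its own view, so one satellite can be taken offline or rebuilt without touching the others.

CREATE TABLE supplier_core (
  supplier_id NUMBER PRIMARY KEY,
  name        VARCHAR2(200),
  tax_number  VARCHAR2(50)
) TABLESPACE supplier_core_ts;

CREATE TABLE supplier_billing (
  supplier_id   NUMBER PRIMARY KEY REFERENCES supplier_core,
  payment_terms VARCHAR2(50)
) TABLESPACE supplier_billing_ts;

-- each consuming system sees only its own view
CREATE OR REPLACE VIEW v_supplier_billing AS
  SELECT c.supplier_id, c.name, c.tax_number, b.payment_terms
    FROM supplier_core c
    JOIN supplier_billing b ON b.supplier_id = c.supplier_id;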
I have a function that returns the comments on all objects of a schema using the metadata API. I used DATABASE_EXPORT as the object_type in the OPEN function. Which filter name (e.g. BASE_OBJECT_NAME, BASE_OBJECT_SCHEMA, ...) should I use to get the comments of a single object? My code is:
1 CREATE OR REPLACE function f_depen_obj
2 return clob
3 as
4 a number;
5 b number;
[code].....
I get the comments of all objects when I comment out the 11th and 12th lines. Now I want the comments of the EMP table in the SCOTT schema, but I get the following error:
SQL> select f_depen_obj from dual;
ERROR:
ORA-31603: object "EMP" of type TABLE not found in schema "EMP"
ORA-06512: at "SYS.DBMS_METADATA", line 1546
ORA-06512: at "SYS.DBMS_METADATA", line 1583
ORA-06512: at "SYS.DBMS_METADATA", line 1901
ORA-06512: at "SYS.DBMS_METADATA", line 3806
ORA-06512: at "SYS.DBMS_METADATA", line 3784
ORA-06512: at "SCOTT.F_DEPEN_OBJ", line 17
No rows selected. I do get output (the comments of a single object) when I use COMMENT instead of DATABASE_EXPORT in the OPEN function (line 9), but I need to use DATABASE_EXPORT as the object_type. Which filter name is suitable for the SET_FILTER procedure instead of 'NAME'?
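For comparison, here is a minimal sketch of the single-object case using the dependent object type COMMENT with the BASE_OBJECT_* filters. This is not the DATABASE_EXPORT path the question asks about; for the heterogeneous DATABASE_EXPORT type the expression-style filters (e.g. SCHEMA_EXPR / NAME_EXPR) are the documented route, which would need to be verified for this use case.

SET SERVEROUTPUT ON
DECLARE
  v_handle    NUMBER;
  v_transform NUMBER;
  v_doc       CLOB;
BEGIN
  v_handle    := DBMS_METADATA.OPEN('COMMENT');
  DBMS_METADATA.SET_FILTER(v_handle, 'BASE_OBJECT_SCHEMA', 'SCOTT');
  DBMS_METADATA.SET_FILTER(v_handle, 'BASE_OBJECT_NAME', 'EMP');
  v_transform := DBMS_METADATA.ADD_TRANSFORM(v_handle, 'DDL');
  v_doc       := DBMS_METADATA.FETCH_CLOB(v_handle);
  DBMS_METADATA.CLOSE(v_handle);
  DBMS_OUTPUT.PUT_LINE(SUBSTR(v_doc, 1, 4000));
END;
/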
Is there a way to restore the schema and its associated objects once it is dropped?
1. The flash back database feature is not ON in the database where schema was dropped.
2. The data files/ table space has not been dropped
3. There was no export taken for the schema.
I need a job that executes every hour: a query that identifies the invalid objects in the database per schema and per object type, run on an hourly schedule. How do I schedule it? I only know that DBA_JOBS holds the job information.
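A minimal sketch using DBMS_SCHEDULER rather than DBA_JOBS (table and job names are hypothetical; the job owner needs direct SELECT on DBA_OBJECTS):

CREATE TABLE invalid_object_log (
  check_time  DATE,
  owner       VARCHAR2(128),
  object_name VARCHAR2(128),
  object_type VARCHAR2(30)
);

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'LOG_INVALID_OBJECTS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN
                          INSERT INTO invalid_object_log
                            SELECT SYSDATE, owner, object_name, object_type
                              FROM dba_objects
                             WHERE status = ''INVALID'';
                          COMMIT;
                        END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=HOURLY;INTERVAL=1',
    enabled         => TRUE);
END;
/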
Is it possible to create replication on the same database between different schemas?
I want to import a schema from one database into another: from database STBTST to STATST, and from schema CMSSTAGINGB to schema CMSSTAGINGA.
First I want to test this into my own schema (mvanmannekes); CMSSTAGINGA is populated at the moment.
So I created a dump of CMSSTAGINGB from STBTST. For the import I am using this statement:
impdp mvanmannekes/password schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
remap_tablespace=cmsliveb_index:cmslivea_index
remap_schema=cmsstagingb:mvanmannekes directory=expdp_dir dumpfile=cmstagingb.dmp
I'm getting this:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "MVANMANNEKES"."SYS_IMPORT_SCHEMA_01": mvanmannekes/********
schemas=cmsstagingb remap_tablespace=cmsliveb_data:cmslivea_data
[code]....
We have a single master schema which many developers access, and they all share the same password.
Now I would like to trace all the changes made by each user, so I am creating individual users for everyone and granting them permission to access that schema. Is there a way to audit the changes made by each user on that particular schema?
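A hedged sketch using standard auditing, assuming the AUDIT_TRAIL parameter is set to DB and the shared schema is called APP_OWNER (a placeholder):

-- audit DML and ALTER statements, one audit record per statement execution
AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE, ALTER TABLE BY ACCESS;

-- later: who changed which object in the shared schema, and when
SELECT username, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE owner = 'APP_OWNER'
 ORDER BY timestamp;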
Is it possible to replicate table data in real time from SQL Server (2005 32-bit or SQL Server 2000 32-bit) to Oracle 10g running on 64-bit Linux? If yes, what are the steps?
It will be one-way replication from SQL Server to Oracle. Which option is better for replicating the table data: SQL Server DTS or Oracle Streams replication?
How do I take a metadata export of all schemas except one (SCOTT)?
Can I use something like EXCLUDE=SCHEMA:"IN ('SCOTT')"?
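Something like that should work. A hedged sketch using a parameter file to avoid the shell-quoting problems around the double quotes (directory and file names are placeholders):

Contents of exp_meta.par:
full=y
content=metadata_only
directory=DATA_PUMP_DIR
dumpfile=meta_only.dmp
logfile=meta_only.log
exclude=SCHEMA:"IN ('SCOTT')"

Then run:
expdp system/password parfile=exp_meta.par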