Using Total Recall With Streams - Reporting Database?
Jun 2, 2010

This topic was already discussed in "Physical standby on oracle 11g - Reporting needs" but was never finished.
A Streams apply process that applies changes to a SQL Server database keeps increasing its PGA use until I stop and restart it. I have to bounce it once a week or it consumes too much PGA and the database hangs, causing paging and so on.
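A query along these lines can confirm it really is the apply servers growing their PGA (a minimal sketch; v$streams_apply_server is the documented 10g view, the formatting is my own):

-- PGA use of each Streams apply server; sample periodically and compare.
SELECT a.apply_name,
       s.sid,
       ROUND(p.pga_used_mem  / 1024 / 1024, 1) AS pga_used_mb,
       ROUND(p.pga_alloc_mem / 1024 / 1024, 1) AS pga_alloc_mb
FROM   v$streams_apply_server a,
       v$session s,
       v$process p
WHERE  s.sid     = a.sid
AND    s.serial# = a.serial#
AND    p.addr    = s.paddr;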
I would like to know which replication method is fastest and the best approach: we need two schemas to be moved/replicated to a new reporting database, and the data flows one way only. Should we proceed with materialized view replication, or should we look at Oracle Streams or Advanced Replication? What are the factors in deciding on a replication method?
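For a strictly one-way feed, fast-refresh materialized views are usually the simplest of the three. A minimal sketch with hypothetical table and link names:

-- On the source: an MV log so the reporting site can fast-refresh.
CREATE MATERIALIZED VIEW LOG ON app_owner.orders WITH PRIMARY KEY;

-- On the reporting database: a fast-refresh MV over a database link.
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM app_owner.orders@source_db;

-- Refresh from a scheduled job:
EXEC DBMS_MVIEW.REFRESH('ORDERS_MV', 'F');

Streams avoids the periodic-refresh model entirely (changes are mined from the redo), so the deciding factors are usually latency requirements, transformation needs, and administration effort.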
I want to audit user connections on my reporting database and send a report to the application team monthly, listing users who have not connected for a month, then remove them.
What would be the best method? I know there is the LOGON trigger, or database-level auditing.
Database Version - 10.2.0.4.0
OS - Red Hat Linux
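Standard database auditing covers this without a LOGON trigger. A minimal sketch, assuming audit_trail=DB is already set:

-- Record every logon in the audit trail.
AUDIT SESSION;

-- Users with no recorded logon in the last month.
SELECT u.username
FROM   dba_users u
WHERE  NOT EXISTS (
         SELECT 1
         FROM   dba_audit_session a
         WHERE  a.username  = u.username
         AND    a.timestamp > ADD_MONTHS(SYSDATE, -1));

The second query, scheduled monthly, gives the removal candidate list for the application team.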
We have configured one-way Oracle Streams between two databases. The source database captures the changes (no downstream capture is configured). The configuration was working fine, but the destination database was lagging far behind, about 15 days behind the source. We are OK with that, but the problem now is that, at the client's request, we restored a previous backup and opened the source database with the RESETLOGS option. After the RESETLOGS the archivelog sequence changed and Streams stopped working.
Can I still apply the earlier archive logs (the ones from before the RESETLOGS) on the destination database? The source database is a production database.
I am working on a production environment where our database runs on IBM AIX. I need to implement a logical standby database for reporting purposes, in order to improve performance; currently transactions and reporting run on the same site. I want to know whether the operating system and platform architecture (RAM size, number of CPUs, ORACLE_HOME, etc.) on the primary and standby systems must be the same or can differ, and what the complete Data Guard configuration would look like. My version is:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I encountered the following error while trying to set up Streams replication at the database level using dbms_streams_adm.maintain_global:

begin
*
ERROR at line 1:
ORA-23616: Failure in executing block 6 for script
E00C49DDDB27C899E040A8C04C0119DA with
ORA-06550: line 21, column 3:
PL/SQL: ORA-00942: table or view does not exist
ORA-06550: line 21, column 3:
PL/SQL: SQL Statement ignored
ORA-06550: line 23, column 3:
PL/SQL: ORA-00942: table or view
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 659
ORA-06512: at "SYS.DBMS_RECOVERABLE_SCRIPT", line 682
ORA-06512: at "SYS.DBMS_STREAMS_MT", line 2427
ORA-06512: at "SYS.DBMS_STREAMS_ADM", line 3004
ORA-06512: at line 2

SQL> select forward_block from dba_recoverable_script_blocks where script_id = '
[code]....
We have three Unix servers with four databases (10gR2) containing "HP Operation Management Unix" (OMU) server messages for monitoring purposes, and we now want to transfer these data to one new database on a new server for reporting purposes.
The message table in each OMU database keeps a message row until it is acknowledged, or for a maximum of fourteen days; then the row is moved to a historic table where it stays for another three days. Keeping data for only seventeen days is done for performance reasons. The new reporting database is intended to hold message data for the last 90 days.
I wonder which method to use to move/replicate the data from these databases. Materialized views over database links, with views on top of the MVs? Then how do we keep rows longer than the master (source) table does, avoiding deletion when the master row is deleted?
Oracle Streams, with local capture and remote apply? How would this affect performance on the master databases? There are about 10,000 new messages in each OMU database every day. Is it possible to have four Streams connections into the reporting database?
Or should I simply use database triggers that fire after insert and update and apply the changes to the reporting database over database links? (This option is sketched below.)
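A minimal sketch of the trigger option, with hypothetical table, column, and link names; note the OMU transaction then pays the remote round trip on every insert:

-- On each OMU database: push new messages to the reporting database.
CREATE OR REPLACE TRIGGER omu_msg_fwd
AFTER INSERT ON omu.messages
FOR EACH ROW
BEGIN
  INSERT INTO report_owner.messages@reporting_db
    (msg_id, node, severity, msg_text, received_at)
  VALUES
    (:NEW.msg_id, :NEW.node, :NEW.severity, :NEW.msg_text, :NEW.received_at);
END;
/

Because nothing here deletes the reporting copy, rows outlive the seventeen-day window on the source; a scheduled job on the reporting side can purge anything older than 90 days.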
11.1.0.7
Is this the command to check the total disk space a database has used?
select sum(bytes) from dba_data_files;
Is the temp size included?
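It is not: temporary tablespaces are listed in dba_temp_files, not dba_data_files. A minimal sketch that totals both (online redo logs and control files would still be extra):

SELECT (SELECT NVL(SUM(bytes), 0) FROM dba_data_files)
     + (SELECT NVL(SUM(bytes), 0) FROM dba_temp_files) AS total_bytes
FROM   dual;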
I am trying to find the space occupied on disk by the tablespaces of the database that contain tables, some (and not all) of whose columns are encrypted. My query is like this:
select distinct a.tablespace_name, file_name, bytes /(1024*1024*1024) File_Size_In_GB
from dba_data_files a, dba_tables b,
(select distinct owner, table_name from DBA_ENCRYPTED_COLUMNS) c
where
a.tablespace_name = b.tablespace_name and
b.owner = c.owner and
b.table_name = c.table_name
order by a.tablespace_name;
The output of the query is as shown in the attached file:
TABLESPACE_NAME  FILE_NAME                                    FILE_SIZE_IN_GB
DMS_DATA         M:\ORACLE\ORADATA\SPOPROD\DMS_DATA_0044.DBF  29.296875
DMS_DATA         M:\ORACLE\ORADATA\SPOPROD\DMS_DATA_0045.DBF  29.296875
DMS_DATA         M:\ORACLE\ORADATA\SPOPROD\DMS_DATA_0051.DBF  29.296875
DMS_DATA         N:\ORACLE\ORADATA\SPOPROD\DMS_DATA_0012.DBF  19.53125
[code]...
Since the output under the heading "Total Size of the tablespace" is probably the sum of all the data files returned by the query, and is obviously incorrect, I have not included the rest of it. I also tried the following:
select distinct a.tablespace_name, file_name, bytes /(1024*1024*1024) File_Size_In_GB,
sum (bytes/(1024*1024*1024))over (partition by a.tablespace_name order by file_name) "Total Size of the tablespace"
from dba_data_files a, dba_tables b,
(select distinct owner, table_name from DBA_ENCRYPTED_COLUMNS) c
where
a.tablespace_name = b.tablespace_name and
b.owner = c.owner and
b.table_name = c.table_name
order by a.tablespace_name ;
[code]...
Here, the figures under the heading "Total Size of the tablespace" are probably the sums over all the rows the query would return without DISTINCT, i.e. every data file counted once per matching table.
How can I tune my query to get the desired results? I think this can be achieved with GROUP BY with ROLLUP, CUBE, ORDER BY and the GROUPING functions, but I am not sure how to proceed. I know I can get the results from the Enterprise Manager console in two minutes, but I would still like to get them with queries. One possibility is sketched below.
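One way to get correct totals is to de-duplicate the tablespace list before touching dba_data_files, so each file is counted exactly once. A minimal sketch using the same dictionary views as above:

SELECT a.tablespace_name,
       a.file_name,
       a.bytes / POWER(1024, 3) AS file_size_in_gb,
       SUM(a.bytes / POWER(1024, 3))
         OVER (PARTITION BY a.tablespace_name) AS total_size_of_tablespace_gb
FROM   dba_data_files a
WHERE  a.tablespace_name IN (
         SELECT b.tablespace_name
         FROM   dba_tables b, dba_encrypted_columns c
         WHERE  c.owner      = b.owner
         AND    c.table_name = b.table_name)
ORDER  BY a.tablespace_name, a.file_name;

With the join pushed into the IN subquery, the analytic SUM partitions over one row per data file rather than one row per (file, table) pair.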
I am using Oracle 11g. I have a set of tables in a schema, DDXX_UTIL.
The table names all share the high-level qualifier DDXX_REJ_EMP*:
DDXX_REJ_EMPLOYEE
DDXX_REJ_EMPLOYEE_DEPEND
DDXX_REJ_EMPLOYEE_ADDRESS
DDXX_REJ_EMPLOYEE_SAL
DDXX_REJ_EMPLOYEE_EXP
My requirement is to get the number of rejections that occurred that day, after the batch cycle, for the tables with the high-level qualifier DDXX_REJ_EMP*, and to list each table followed by the rejection cause and count.
Expected Result
-----------------------------------------------------------------
TABLE NAME : DDXX_REJ_EMPLOYEE
REJECTION_CAUSE            COUNT
ID LIST MISSING            25
MANDATORY FIELDS IS NULL   56

TABLE NAME : DDXX_REJ_EMPLOYEE_DEPEND
REJECTION_CAUSE            COUNT
ID LIST MISSING            25
MANDATORY FIELDS IS NULL   56
Given below is the CREATE statement for one of the tables.
CREATE TABLE DDXX_UTIL.DDXX_REJ_EMPLOYEE
(
REJECTION_CAUSE VARCHAR2(510 BYTE),
INTEG_REJ_DATE VARCHAR2(10 BYTE)
);
SAMPLE_DATA
DDXX_REJ_EMPLOYEE
REJECTION_CAUSE INTEG_REJ_DATE
INVALID ID 31-OCT-10
INCORRECT FIELD 31-OCT-10
INVALID ID 31-OCT-10
INCORRECT FIELD 31-OCT-10
I need to group by rejection_cause and integ_rej_date for all of these tables; a sketch follows.
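A minimal dynamic-SQL sketch, assuming every DDXX_REJ_EMP* table has the same two columns shown above and that INTEG_REJ_DATE holds a DD-MON-RR string:

-- Print per-table rejection counts for today's date (SET SERVEROUTPUT ON first).
DECLARE
  TYPE t_causes IS TABLE OF VARCHAR2(510);
  TYPE t_counts IS TABLE OF NUMBER;
  l_causes t_causes;
  l_counts t_counts;
BEGIN
  FOR r IN (SELECT table_name
            FROM   all_tables
            WHERE  owner = 'DDXX_UTIL'
            AND    table_name LIKE 'DDXX_REJ_EMP%') LOOP
    EXECUTE IMMEDIATE
      'SELECT rejection_cause, COUNT(*) FROM ddxx_util.' || r.table_name ||
      ' WHERE integ_rej_date = :run_date GROUP BY rejection_cause'
      BULK COLLECT INTO l_causes, l_counts
      USING TO_CHAR(SYSDATE, 'DD-MON-RR');
    DBMS_OUTPUT.PUT_LINE('TABLE NAME : ' || r.table_name);
    FOR i IN 1 .. l_causes.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE(RPAD(l_causes(i), 30) || l_counts(i));
    END LOOP;
  END LOOP;
END;
/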
I am using OEM Grid Control 10g for monitoring/reporting purposes and am a bit new to OEM. I can generate database alerts sent via email, say for DB_abc1, and also generate reports. I installed the Oracle 10g agent on another box, say with DB_abc2 running on it, and did not realise there was already a Database Control installed on that box, so I:
1. Uninstalled Database Control on the DB_abc2 box.
2. Uninstalled the Oracle 10g agent and re-installed it on the DB_abc2 box.
3. Configured the DB_abc2 database from the OEM Grid Control GUI (i.e. the dbsnmp user password) and was able to connect to it.
But when I set up alerts for things like ORA-XXXX errors, I am not getting any emails, and when I schedule database usage reports (to be sent via email) the reports do not pick up any data; they are empty.
I checked the agent on the box in question and the heartbeat is OK, and I manually uploaded using "emctl upload" and it was successful. The only alerts I am getting for DB_abc1 are the agent up/down ones, that's it.
DB_abc1 is a 9.2.0.8 database and DB_abc2 is a 10.2.0.1 database.
I have a SQL query that pulls all payments from vendors for a specific time period; however, I am now tasked with showing only the address of each Tax Reporting Site. I have tried several commands but have not been successful.
I have a table PRODUCT with the following structure:
Productcode Productname Quantity
AF Ade Fgh 100
LO Ldo Ope 50
SK Ske Kro 47
....
There may be any number of records in the table. Now I need to print the distinct products as separate columns, such as:
AF LO SK
100 50 47
There may be any number of products, and I need to print all of them as separate columns as shown above. How do I write a stored procedure for this? One approach is sketched below.
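A minimal sketch of such a procedure, using dynamic SQL to build one MAX(DECODE(...)) column per product code and returning the single pivoted row through a ref cursor (the procedure name is hypothetical):

CREATE OR REPLACE PROCEDURE pivot_products(p_rc OUT SYS_REFCURSOR) IS
  l_sql VARCHAR2(32767) := 'SELECT ';
BEGIN
  -- One column per distinct product code.
  FOR r IN (SELECT DISTINCT productcode FROM product ORDER BY productcode) LOOP
    l_sql := l_sql || 'MAX(DECODE(productcode, ''' || r.productcode ||
             ''', quantity)) AS "' || r.productcode || '",';
  END LOOP;
  l_sql := RTRIM(l_sql, ',') || ' FROM product';
  OPEN p_rc FOR l_sql;
END;
/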
I need to write a query based on a bunch of user-supplied IDs. The IDs will be pasted as plain text, one per row, by the end user into a memo field in the reporting environment, and I need to do something like this:
SELECT PHONE_NUMBER FROM TELEPHONE
WHERE
ID IN('MEMO_ID1', 'MEMO_ID2',.....)
The reporting environment does not provide any tools to automatically convert the pasted text into IDs; a workaround is sketched below.
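One workaround is to split the pasted block inside the query itself, so the whole memo can be passed as a single bind value. A minimal sketch, assuming the IDs arrive separated by newlines:

-- :memo holds the pasted IDs, one per line.
SELECT phone_number
FROM   telephone
WHERE  id IN (
         SELECT TRIM(REGEXP_SUBSTR(:memo, '[^' || CHR(10) || ']+', 1, LEVEL))
         FROM   dual
         CONNECT BY REGEXP_SUBSTR(:memo, '[^' || CHR(10) || ']+', 1, LEVEL)
                    IS NOT NULL);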
We copy our transaction tables' data into another database for our reporting applications (a refresh runs every day at midnight).
The transaction database has some 30 tables. The existing system follows the steps below and takes 2 hours to complete:
1) Truncate the data in the reporting database (or schema).
2) Direct-path insert into the reporting database (or schema) as SELECT * from the transaction tables.
3) Rebuild indexes and enable constraints.
Note: each table holds between 30 and 50 lakh rows (3 to 5 million). Dump/export/import is not acceptable to the client.
I want to cut the time down to below 2 hours. Instead of the above method, can we add a field to each table recording the time of each record's insert/update, then pick up only the modified records and copy them into the reporting DB? A sketch of this idea follows.
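A minimal sketch of that incremental approach, with hypothetical table, column, and link names; only rows touched since the last refresh cross the link:

-- One-time change on each transaction table: a change timestamp.
ALTER TABLE orders ADD (last_modified DATE DEFAULT SYSDATE);

CREATE OR REPLACE TRIGGER orders_set_mod
BEFORE INSERT OR UPDATE ON orders
FOR EACH ROW
BEGIN
  :NEW.last_modified := SYSDATE;
END;
/

-- Nightly on the reporting side: merge only the changed rows.
MERGE INTO rpt_orders r
USING (SELECT *
       FROM   orders@txn_db
       WHERE  last_modified > (SELECT NVL(MAX(last_modified), DATE '1900-01-01')
                               FROM rpt_orders)) s
ON (r.order_id = s.order_id)
WHEN MATCHED THEN
  UPDATE SET r.amount = s.amount, r.last_modified = s.last_modified
WHEN NOT MATCHED THEN
  INSERT (order_id, amount, last_modified)
  VALUES (s.order_id, s.amount, s.last_modified);

One caveat: this pattern never sees deletes, so deleted source rows need a soft-delete flag or a separate delete log.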
My DBA gave me a table with only one date column, say Table1.Date. It is of DATE type and holds, say, 7/23/2012.
Now I have to create my own reporting view (used for reporting) based on that date column, like below:
It should be a column with values in the following format:
2012-07
2012-06
2012-05 etc....upto
2010-01
So I started out by creating it like this:
select
to_char(Table1.Date,'yyyy-mm')
from Table1
Union
[Code]....
...and so on until I get 2010-01.
There has to be a better way to do this.
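There is: a row generator can produce the whole month series without one UNION branch per month. A minimal sketch (CONNECT BY LEVEL against dual; the 2010-01 floor comes from the requirement above):

SELECT TO_CHAR(ADD_MONTHS(DATE '2010-01-01', LEVEL - 1), 'yyyy-mm') AS report_month
FROM   dual
CONNECT BY ADD_MONTHS(DATE '2010-01-01', LEVEL - 1) <= TRUNC(SYSDATE, 'MM')
ORDER  BY report_month DESC;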
Why do we seem to be unable to report against the AP invoice hold release date using Discoverer in Oracle 11.5.10? The person who wrote our current report used a DECODE statement that looks at the last update date of the release lookup code to derive a release date, but I am trying to recreate this in a different tool (QlikView) and just wanted to understand why we seem unable to report on the field as-is.
I'm running a query like the one below, but now I would like the last line to actually say "Grand Total" instead of just "Total".
SELECT decode (grouping (farinva_invh_Code),0,null,'Total') farinva_invh_Code,
--decode ( grouping (amt),0,null,'GrantTotal')Grant_Total,
farinva_invh_Code,
spriden_id,
--spriden_last_name "last Name"
[Code]....
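A minimal sketch of the usual approach: have DECODE on GROUPING() emit the label itself, so the rollup row can say anything, including 'Grand Total' (the table and amount column names here are hypothetical; farinva_invh_code is from the query above):

SELECT DECODE(GROUPING(farinva_invh_code), 1, 'Grand Total', farinva_invh_code)
         AS farinva_invh_code,
       SUM(amt) AS total_amt
FROM   invoices
GROUP  BY ROLLUP(farinva_invh_code);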
Source DB is logical standby 10.2.0.5
Destination DB is 11.2.0.3
Is it possible to set up Streams on the logical standby DB to replicate some schemas to the 11gR2 DB?
I am trying to implement Oracle Streams replication (using MetaLink note 733691.1). I have configured the steps, but in my alert log I am getting the error below:
Check that the primary and standby are using a password file and remote_login_passwordfile is set to SHARED or EXCLUSIVE, and that the SYS password is same in the password files. returning error ORA-16191
I am able to connect to both DBs from either server:
From 1st server
sqlplus sys@2nddb as sysdba
From 2nd server
sqlplus sys@1stdb as sysdba
When connecting, it prompts for the password.
Both DBs were created with the same Oracle SYS password. After this I also disabled password case sensitivity, but the error still persists. (A password-file rebuild is sketched below.)
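ORA-16191 usually traces back to mismatched or missing password files. A minimal sketch of rebuilding them consistently on both servers (the file path and password are placeholders; ignorecase is an 11g option):

# On each server, recreate the password file with the same SYS password.
orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=MySysPwd entries=10 ignorecase=y

-- And in each instance (static parameter, takes effect after restart):
ALTER SYSTEM SET remote_login_passwordfile=EXCLUSIVE SCOPE=SPFILE;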
Platform: Windows 2003
Streams Set up: One way Streaming at table level
The error: ORA-26786: A row with key ("REPT_NUM", "STATE_CODE", "SURVEY_ID") = (067305669, 49, J) exists but has conflicting column(s) "DATE_TIME", "PREV_PARENT_ID" in table TOPCATI_JOLTS.UNIT ORA-01403: no data found
We get this error consistently every other day, some weeks more often, on different records of course. On the capture site the application runs a process called "split cases". In this process the application takes an old PK case number and inserts a new PK case number with all of the data of the old case (parent level); at the unit (child) level it changes all of the units to the new FK case number, leaving the old case (parent) with no child units. This is all one transaction.
Is it possible that Streams is applying the LCRs out of order, especially since that whole process is one transaction?
I am trying to set up a one-way queue that will be consumed by an in-house ESB. We are currently able to create a multi-consumer queue using:
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => 'strmadmin.streams_queue_table',
    queue_name  => 'strmadmin.streams_queue');
END;
/
but unfortunately the ESB we are using has a JMS component which seems to accept only a mono-consumer queue. So we created our queue using the following:
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => '"STRMADMIN"."STREAMS_QUEUE_TABLE"',
    queue_payload_type => 'SYS.ANYDATA',
    storage_clause     => 'PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 TABLESPACE
[code]...
This gives us a queue/queue table which pushes data to a single consumer. The problem comes when we try to add a table rule using the following command:
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'source.t1',
    streams_type => 'apply',
    streams_name => 'streams_apply',
    queue_name   => 'strmadmin.streams_queue',
[code]...
We then get the following error:
ORA-06512: at line 2
24039. 00000 - "Queue %s not created in queue table for multiple consumers"
*Cause: Either an ADD_SUBSCRIBER, ALTER_SUBSCRIBER, or REMOVE_SUBSCRIBER procedure, or an ENQUEUE with a non-empty recipient list, was issued on a queue that was not created for multiple consumers.
*Action: Create the queue in a queue table that was created for multiple consumers and retry the call.
We are able to create the capture rule without any problem, but without the apply rule nothing seems to end up in the queue table. Plain AQ is not a viable solution, since it is troublesome when it comes to deletes and mass updates.
We have a core banking database, and it includes our customer-related tables, which are really huge.
We also have another database for some applications, and it needs fresh customer data from the core banking database. Our current method is materialized views, but we have some performance problems with it.
What can give better performance for synchronizing the tables between databases?
-oracle streams?
-triggers?
-dbms_comparison?
I'm trying to write a PL/SQL block to find all missing archived logs that are needed for Streams replication.
There is already an Oracle MetaLink note for this, but it only gives the archive log name that contains my dba_capture.start_scn, and we need to check whether the files actually exist on disk.
The problem is that when using ASM, the dba_registered_archived_log view truncates the file name, and it is really difficult to pinpoint the logs. So is it fine to join this view with v$archived_log? Would the DELETED and STATUS columns do the trick? I modified the PL/SQL as below. Is this fine/accurate?
dbms_output.put_line('Capture will restart from SCN ' || lScn || ' in the following file:');
for cr in
  (select decode(a.name, NULL, 'NOT FOUND', a.name) name,
          to_char(a.completion_time, 'hh24:mi:ss') completion_time
   from   v$archived_log a, dba_registered_archived_log b
   where  lscn between b.first_scn and b.next_scn
   and    a.deleted = 'YES' and a.status != 'A')
loop
  f_rec := 1;
[code]......
Can we use materialized views in Streams the way we use tables?
My setup is the following:
host22 (Oracle Linux 5.9)                |  host30 (Windows XP) with SQL Server Express 2008
DB22                 | Gateway to MSSQL  |  DB30
Local       Local    |                   |
Capture --> Apply ------> Gateway -----------------> DUMMYUSR.DUMMYTBL
Process     Process  |                   |
The capture process on DB22 captures changes made to the local DUMMYUSR.DUMMYTBL table, enqueues them to the capture queue and propagates them to the apply queue for the local apply process, which, using database link DB30, is supposed to apply these changes to the table DUMMYUSR.DUMMYTBL in the remote SQL Server database DB30.
The problem is that when the apply process executes a transaction, it aborts with the error:
ORA-26713: remote object does not exist or is inaccessible.
ORA-26712: remote object is "DUMMYUSR"."DUMMYTBL"@"DB30".
In the trace log I can see the following message:
kniNoPrimaryKeyError: ORA-26713, remote obj doesn't exist,
DUMMYUSR.DUMMYTBL@DB30
[code]...
What can cause the ORA-26713 error in an apply process applying changes to a SQL Server database through Oracle Database Gateway, if the object being modified exists and is accessible from the Oracle database for the strmadmin user?
By the way, the database link and tables were created with the following commands:
DB22(Oracle): (connected as strmadmin)
create database link DB30 connect to "dummyusr" identified by "dummypwd" using 'DB30';
DB22(Oracle): (connected as dummyusr)
create table DUMMYTBL (no number, name char(6), ddate date, payload varchar2(2013));
DB30(MSSQL): (connected as dummyusr)
create schema DUMMYUSR authorization dummyusr;
create table DUMMYUSR.DUMMYTBL (no float(53), name char(6), ddate datetime2(0), payload varchar(2013));
Until now my company has used SharePlex to replicate 6 DBs from different locations to one server (used for BI operations).
1) Replication is in real time.
2) All DBs are Oracle, but different versions (9 and 10).
3) We have horizontal and vertical replication: we replicate just some tables, for some tables just some columns, and for some tables just records matching certain conditions.
4) We have different charsets.
5) SharePlex has a compare/repair tool to check whether the DBs are in sync.
I have started looking at Oracle Streams to use instead of SharePlex. Can it handle these types of replication? Are there any Oracle Streams limitations that would rule it out?
Due to a firewall restriction, I have to configure Streams with queue forwarding. The intermediate database will have two different IPs (one toward the source and another toward the target), and the source and target will not talk to each other directly.
I created the following update conflict handlers, one after the other. Each works correctly; my problem is that when I create the 2nd it replaces the 1st, and when I create the 3rd it replaces the 2nd. I want all three in action simultaneously. How can I do this?
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'MODIFIED_DATE';
  cols(2) := 'MODIFIED_BY';
  cols(3) := 'RATE_CHANGE_DATE';
  cols(4) := 'PENSION_AMOUNT';
  cols(5) := 'EASYPAISA_WORK_ON';
  cols(6) := 'PPO_NO';
  cols(7) := 'NEW_NIC_NO';
  cols(8) := 'OLD_NIC_NO';
  cols(9) := 'CLAIM_NO';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'CORE_BUSINESS.CB_PENSIONER',
    method_name       => 'OVERWRITE',
    resolution_column => 'CLAIM_NO',
    column_list       => cols);
END;
/

DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'MODIFIED_DATE';
  cols(2) := 'MODIFIED_BY';
  cols(3) := 'RATE_CHANGE_DATE';
  cols(4) := 'PENSION_AMOUNT';
  cols(5) := 'CLAIM_NO';
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(object_name =>
[code]....