The SQL cache is getting overwritten. I would like to capture information from AWR snapshots and feed it (as a workload) into DBMS_ADVISOR. I can't see where that's possible (other than manually creating my workload from AWR information).
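For what it's worth, I believe the supported route is to load the AWR statements into a SQL Tuning Set and then reference that STS from the advisor task. A minimal sketch, with the snapshot IDs and the task/STS names made up:

DECLARE
  l_cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  -- Create an empty SQL Tuning Set and fill it from an AWR snapshot range
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'AWR_STS');
  OPEN l_cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   begin_snap => 1234, end_snap => 1240)) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'AWR_STS', populate_cursor => l_cur);
  -- Point an existing advisor task (e.g. SQL Access Advisor) at the STS
  DBMS_ADVISOR.ADD_STS_REF('MY_TASK', USER, 'AWR_STS');
END;
/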
I have created a table A with, say, columns a, b, c, d, e, and an application that inserts, updates, and deletes rows in this table. I want to capture when a user inserts, updates, or deletes rows.
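A minimal sketch of a row-level trigger that records who did what and when; the audit table and its columns here are my own invention:

CREATE TABLE a_audit (
  action_cd  VARCHAR2(1),    -- 'I', 'U' or 'D'
  changed_by VARCHAR2(30),
  changed_on DATE
);

CREATE OR REPLACE TRIGGER trg_a_audit
AFTER INSERT OR UPDATE OR DELETE ON a
FOR EACH ROW
BEGIN
  INSERT INTO a_audit (action_cd, changed_by, changed_on)
  VALUES (CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D'
          END,
          USER, SYSDATE);
END;
/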
I have a table MYTABLE in database mydb1, duplicated via a materialized view, a materialized view log, and refresh_snapshot commands to a MYTABLE in the mydb2 database.
I would like to duplicate this table MYTABLE to a third database, mydb3, using the same method (materialized view and refresh_snapshot command).
Is it possible? What happens to the materialized view log when I launch a refresh_snapshots on mydb2? How does this materialized view log get truncated?
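For what it's worth, a single materialized view log on the master can serve multiple snapshot sites; the log is purged only after every registered site has refreshed, so adding a third site mostly means the log holds rows longer. On the third database, something like this sketch (the database link name is an assumption) should work:

-- On the third database; mydb1_link is a made-up db link back to mydb1
CREATE MATERIALIZED VIEW mytable
  REFRESH FAST
  AS SELECT * FROM mytable@mydb1_link;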
How can I check database performance for non-sequential parts of a day using the AWR tool? For example, suppose we divide a day into 2 parts, low-traffic time and high-traffic time, with the following time intervals:
low-traffic time: 7:00am - 10:00am and 7:00pm - 11:00pm
high-traffic time: 10:00am - 01:00pm and 4:00pm - 06:00pm
I want to examine performance using snapshots gathered by AWR. If I get 2 AWR reports for the low-traffic time of day (one for 7:00am - 10:00am, another for 7:00pm - 11:00pm) and compute the average of the values in the 2 reports, could I call that the database performance for the low-traffic part of the day, or not?
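Each AWR report can only cover one contiguous snapshot range, so the two low-traffic windows need two reports. A sketch of generating one report in SQL rather than via awrrpt.sql (the dbid, instance number, and snapshot IDs are placeholders):

SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
               1111111111,   -- dbid (placeholder)
               1,            -- instance number
               101,          -- begin snapshot id (the 7:00am snapshot, assumed)
               104));        -- end snapshot id (the 10:00am snapshot, assumed)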
Simplifying the data structure my problem concerns: let's say there are two materialized views whose data are in a one-to-many relationship (the relationship can be logical, without a need to create any foreign keys).
The data should be as current as possible with respect to the content of the master tables; let's say it should be refreshed every 5 minutes.
As far as I know, the jobs related to each snapshot work independently, even if their START WITH and NEXT parameters are set to the same values. So: what would be the best way to synchronize the jobs so as to make sure the data of both snapshots are coherent?
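Refresh groups exist for exactly this purpose: all materialized views in the group are refreshed in a single transaction, so they remain mutually consistent. A sketch, with the MV names assumed:

BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'mv_sync_group',
    list      => 'mv_parent, mv_child',   -- the two related snapshots
    next_date => SYSDATE,
    interval  => 'SYSDATE + 5/1440');     -- every 5 minutes
END;
/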
If I create an AWR report via OEM or awrrpt.sql for an interval between two snapshots, the report contains statements that were not executed in this interval.
Are there any known bugs for this problem, or do you have an explanation for this behavior?
I have some questions about Oracle + EMC shared storage. I have an Oracle 11gR1 RAC (2-node) + ASM environment with an EMC CLARiiON AX4 shared storage array.
The database is running in NOARCHIVELOG mode. I'd like to implement point-in-time recovery using SnapView snapshots.
Currently my AX4 platform has the following LUNs:
LUN1 - registry 1
LUN2 - registry 2
LUN3 - vote 1
LUN4 - vote 2
LUN5 - vote 3
LUN6 - ASM - DISKGROUP DATA DISK DATA01 (actual db datafiles)
LUN7 - ASM - DISKGROUP DATA DISK DATA02 (actual db datafiles)
Putting the source LUNs in a consistent session will take synchronized snapshots of all the LUNs backing my database. Once something happens to the database, the LUNs can be returned to the specific point in time when the consistent snapshot session was taken, and I expect the database will come up and continue to work.
Questions:
1. Is my approach correct at all? (The database is running in NOARCHIVELOG mode, so I am not dealing with hot backups; point-in-time recovery is implemented purely via EMC snapshot consistent sessions.)
2. The AX4 platform has a limit of 8 source LUNs per session. If I understand correctly, I can't place more than 8 LUNs in a session. What would be the workaround if my database occupies more than 8 LUNs and I'd like to take a consistent snapshot of them? For example:
LUN1 - registry 1
LUN2 - registry 2
LUN3 - vote 1
LUN4 - vote 2
LUN5 - vote 3
LUN6 - ASM - DISKGROUP DATA DISK DATA01 (actual db datafiles)
LUN7 - ASM - DISKGROUP DATA DISK DATA02 (actual db datafiles)
LUN8 - ASM - DISKGROUP DATA DISK DATA03 (actual db datafiles)
LUN9 - ASM - DISKGROUP DATA DISK DATA04 (actual db datafiles)
I am returning a refcursor as an OUT parameter in my stored procedure. I would like to detect a no-data-found condition for the refcursor. Is there a way I can raise the exception without compromising performance?
I have tried the options below, but they do not work.
1. If I run a SELECT query to check for records and then OPEN the refcursor for that SELECT, it takes a performance hit because I am reading the table twice.
2. I can FETCH the refcursor into a table type and check the count in the table to raise the exception. But once I fetch from a refcursor, the data is gone, so this option does not work either.
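One pattern that avoids the double read, sketched under the assumption that the result set is small enough to hold in memory (the type, table, and column names are made up): bulk-collect once, raise if empty, then open the cursor over the collection:

CREATE OR REPLACE TYPE t_num_tab AS TABLE OF NUMBER;
/
CREATE OR REPLACE PROCEDURE get_data (p_cur OUT SYS_REFCURSOR) IS
  l_ids t_num_tab;
BEGIN
  SELECT id BULK COLLECT INTO l_ids FROM my_table;   -- single read of the table
  IF l_ids.COUNT = 0 THEN
    RAISE NO_DATA_FOUND;                             -- caller sees the exception
  END IF;
  OPEN p_cur FOR SELECT COLUMN_VALUE AS id FROM TABLE(l_ids);
END;
/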
OS: Windows 2003
DB: 10.2.0.4
I am doing capture and replay for the first time. I want to take 2 captures at a time: the 1st capture for 2 schemas and the 2nd capture for the other schemas. Is it possible? I have searched the internet but didn't get any clue about it.
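As far as I know, only one workload capture can be active on a database at a time, so two simultaneous captures are not possible; the usual alternative is one capture restricted with filters. A sketch (directory and schema names are assumptions, and filter semantics differ between 10.2 and 11g, so check the docs for your exact patch level):

BEGIN
  -- One filter per schema to restrict which sessions are captured
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname      => 'keep_scott',
                                   fattribute => 'USER',
                                   fvalue     => 'SCOTT');
  DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname      => 'keep_hr',
                                   fattribute => 'USER',
                                   fvalue     => 'HR');
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'CAPTURE_1',
                                      dir  => 'CAPTURE_DIR');
END;
/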
I am running 11.2. Within a database trigger, I need to capture the IP address of the client making the change. I see there is a SYS_CONTEXT('USERENV','IP_ADDRESS') built-in. Is this the correct way to capture the IP of the client making the change?
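Yes, that is the documented attribute for the client IP (for TCP connections; it is NULL for local bequeath connections). A minimal sketch of using it, with the table and column names assumed:

CREATE OR REPLACE TRIGGER trg_capture_ip
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
  :NEW.client_ip := SYS_CONTEXT('USERENV', 'IP_ADDRESS');
END;
/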
I was reading about the Oracle SPM feature, but I have some questions about the automatic behavior:
* If we have set both the capture and use parameters to TRUE, Oracle will automatically capture plans, but will it automatically evolve new plans to ACCEPTED, or will it wait for them to be evolved manually before using a new plan?
* If it evolves them automatically, when and how does it do so?
* If it does so automatically, will it change the old plan's ACCEPTED flag to NO, or just add one more accepted plan?
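For context: in 11g, automatic capture only adds new plans as non-accepted, and they are not used until they have been evolved, which you must do (or schedule) yourself; accepting a new plan does not un-accept the old one. A sketch of a manual evolve, with the SQL handle made up:

ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SYSTEM SET optimizer_use_sql_plan_baselines     = TRUE;

SET SERVEROUTPUT ON
DECLARE
  l_report CLOB;
BEGIN
  -- Verifies non-accepted plans for this handle and accepts any that perform better
  l_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(sql_handle => 'SQL_123abc456def');
  DBMS_OUTPUT.PUT_LINE(l_report);
END;
/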
I need the effective dates (start and end) of marital status changes in sequential order, without duplicate rows over the same time frame (per_all_people_f table only). In the example below, I only need the items that are marked. I am very new to PL/SQL and cannot figure out how to do this.
When I do this in SQL with MIN date and MAX date, the 1st and 2nd blocks are correct, the 3rd block has the wrong end date, and the 4th block is entirely missing because the 'M' is already counted in block 1, even though it occurred after other status changes.
With no gaps in time, capturing the effective date range for each particular marital status, I need to get:
1st block 'S' 10/23/2000 - 4/12/2004
2nd block 'M' 4/13/2004 - 10/1/2006
3rd block 'D' 10/2/2006 - 5/23/2007
4th block 'M' 5/24/2007 - 12/31/4712
Actual data in the table, as returned by a query with no restrictions (the dates I need marked with asterisks):
490 *10/23/2000* *4/12/2004* 0 US S F
490 *4/13/2004* *10/1/2006* 0 US M F
490 *10/2/2006* 2/12/2007 0 US D F
490 2/13/2007 *5/23/2007* 0 US D F
490 *5/24/2007* 10/7/2010 0 US M F
490 10/8/2010 11/15/2012 0 US M F
490 11/16/2012 *12/31/4712* 0 US M F
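This is a classic "gaps and islands" problem, and analytic functions handle it in plain SQL. A sketch against per_all_people_f (column names assumed; adjust to the real ones):

SELECT person_id,
       marital_status,
       MIN(effective_start_date) AS start_date,
       MAX(effective_end_date)   AS end_date
FROM  (SELECT q.*,
              -- running count of status changes = island number
              SUM(chg) OVER (PARTITION BY person_id
                             ORDER BY effective_start_date) AS grp
       FROM  (SELECT person_id, marital_status,
                     effective_start_date, effective_end_date,
                     -- 1 whenever the status differs from the previous row
                     CASE WHEN marital_status =
                               LAG(marital_status)
                                 OVER (PARTITION BY person_id
                                       ORDER BY effective_start_date)
                          THEN 0 ELSE 1
                     END AS chg
              FROM   per_all_people_f) q)
GROUP  BY person_id, marital_status, grp
ORDER  BY start_date;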
We have one Oracle server and many clients, and clients send SQL statements to the server. How can I capture a SQL statement for analysis before it is sent to the server and executed?
lv_ret := WEBUTIL_FILE_TRANSFER.AS_To_Client_with_progress(lv_clnt_file, lv_srvr_file, 'Download from Application Server in progress', 'Please wait');
to download a file to my H: drive. Here lv_ret is a boolean variable. The file is not downloaded to my H: drive when there is not enough space. How do I capture that error?
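As far as I know, that WebUtil call only reports success or failure through the boolean it returns, so checking lv_ret is the main hook; a sketch of handling it in the calling trigger:

lv_ret := WEBUTIL_FILE_TRANSFER.AS_To_Client_with_progress(
            lv_clnt_file, lv_srvr_file,
            'Download from Application Server in progress', 'Please wait');
IF NOT lv_ret THEN
  -- Covers any failure, including insufficient space on the target drive
  MESSAGE('File download failed - check free space on the target drive.');
  RAISE FORM_TRIGGER_FAILURE;
END IF;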
How can I capture an image using a webcam connected to my PC and show it in an image item in my Developer/2000 Forms 6.0 form? I want to put the functionality in a WHEN-BUTTON-PRESSED trigger.
How do I capture a key press in Oracle Forms 6i? My intention is to create a form with 2 blocks: the first block has one text item into which the user inputs data, and the second block displays data based on the value being typed in the text item (much like an LOV).
I have a 3rd-party tool which runs SQL queries. I need to see what queries the tool is running and capture them. I enabled tracing but that's not working, as the tool doesn't keep a connection established; it connects when it has to run a query and then disconnects.
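One workaround I would consider: an AFTER LOGON trigger that switches on SQL trace for just the tool's sessions, so even short-lived connections get traced. The account name TOOL_USER is an assumption:

CREATE OR REPLACE TRIGGER trc_tool_logon
AFTER LOGON ON DATABASE
BEGIN
  IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'TOOL_USER' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12''';
  END IF;
END;
/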
I am using the code below to capture all important information when a user fails to log in to the database, but I can't capture the username that is failing to connect.
CREATE OR REPLACE TRIGGER failed_logon_notifications
AFTER SERVERERROR ON DATABASE
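A known limitation of the SERVERERROR approach is that the failing username is often not available inside the trigger. An alternative that does record it is standard auditing, sketched here:

-- Record unsuccessful logon attempts, username included
AUDIT CREATE SESSION WHENEVER NOT SUCCESSFUL;

SELECT username, userhost, timestamp, returncode
FROM   dba_audit_trail
WHERE  returncode = 1017;   -- ORA-01017: invalid username/password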
I have a table with some columns, plus four columns intended to capture user details. These details must be captured programmatically: whenever a user inserts, updates, or deletes, his credentials must be captured in these columns. But I cannot figure out which trigger I have to write, or how.
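A sketch of a BEFORE row trigger that fills such columns; the table name and the four column names are assumptions. Note a DELETE cannot write into the row being removed, so delete auditing usually goes to a separate log table instead:

CREATE OR REPLACE TRIGGER trg_user_details
BEFORE INSERT OR UPDATE ON my_table
FOR EACH ROW
BEGIN
  IF INSERTING THEN
    :NEW.created_by := USER;
    :NEW.created_on := SYSDATE;
  END IF;
  :NEW.updated_by := USER;      -- refreshed on every insert/update
  :NEW.updated_on := SYSDATE;
END;
/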
I have big source tables to load into a data warehouse. We are in a full Oracle environment, so I need to extract only the delta since the last extract.
I need to capture even deleted rows from the source table.
I have tested the following solution:
- declare a materialized view log on the source table
- load the content of this view log into my ODS
- empty this view log
- load my DWH with the captured delta
It is very simple and seems to work perfectly. I am just confused by the fact that nobody seems to have implemented such a solution.
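For reference, a sketch of the cycle described above; be aware that reading and deleting from the MLOG$_ table directly is undocumented and unsupported, and the internal columns (DMLTYPE$$ and friends) can change between releases:

-- 1. One-time setup on the source table
CREATE MATERIALIZED VIEW LOG ON src_table WITH PRIMARY KEY, SEQUENCE;

-- 2. Each extract: copy the log content into an ODS staging table
INSERT INTO ods_delta
SELECT pk_col, dmltype$$, sequence$$, snaptime$$
FROM   mlog$_src_table;

-- 3. Empty the log (in this unsupported pattern, a plain DELETE;
--    a registered MV would normally drive the purging)
DELETE FROM mlog$_src_table;
COMMIT;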
I am looking for a way to capture transactions from one database and create a script (or use it with a tool), so we can replay transactions close to production while testing changes being implemented in the database.
Env:
OS: Red Hat Linux 5.0
DB: 10.2.0.4
Actual requirement: We have an active-active GoldenGate setup for one of our DBs, and once a quarter we make changes in the database using DDL (CREATE / ALTER - tables, functions, triggers, etc.). We are a 24/7 environment, so no downtime is affordable. What I would like to know is a way to capture all transactions from one of the DBs and create a script so we can run them while we are testing new changes in the database. Something like the DBMS_WORKLOAD_CAPTURE package (available only from 11gR1 onward) or some tool available on the market.
I also tried looking into LoadRunner, but I felt it works at the app tier rather than the DB tier; I may be wrong there.
I have a .sql file that is used as a wrapper: when executed within SQL*Plus (9.2.0.1.0), it executes a bunch of .sql files within it. Example below:
Each of the .sql files (A, B, C, D, E) spools the individual output of the SQL statements within it. Each individual .sql file queries different tables with different filter (WHERE) clauses.
I would like to capture any error (OS, SQL, DB) into an individual error file (e.g. A_ERROR.log). The reason is that later in the process we need to check whether there were any errors before processing and loading the data into our SQL database.
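On 9.2 there is no SET ERRORLOGGING, so one common approach is WHENEVER directives in each script plus a per-script SQL*Plus invocation whose output is redirected; a sketch (file and log names follow the ones above; the driver is shown as a Unix shell, but a .bat checking %ERRORLEVEL% works the same way):

-- at the top of each individual script (A.sql ... E.sql)
WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR  EXIT FAILURE

# driver script: one SQL*Plus run per file, errors captured per file
sqlplus -s user/pwd @A.sql > A_ERROR.log 2>&1
if [ $? -ne 0 ]; then echo "A.sql failed" >> run_errors.log; fi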
Oracle 11.2.0.3. I have insert-only tables that receive records from multiple processes at a rate of about 200/second. Each transaction can have up to 100 records. I have another set of processes that queries this table for the latest data. These processes run anywhere from once a minute to once an hour. The processes do not get all of the data; they get data based on a type field.
Both of these are Java middle tiers. The process that queries data (the subscriber) does so at the request of many remote servers (there will be vast numbers of them). I am not allowed to expose these downstream databases to the internet (they are not Oracle DBs anyway), so I cannot use Streams or GoldenGate.
So basically: Insert process: multiple sessions that, combined, insert up to 200 records/second, with between 1 and 100 records per commit.
Query process: a downstream process makes a request to my middle tier. The middle tier runs a query to get the latest data and passes it back. This design is set and I cannot change it.
1. Right now we capture the insert time of each record. However, at this rate of inserts some processes will commit faster than others, so I can't use a 'greater than my insert time' query.
2. Streams/GoldenGate won't work; I can't register these DBs.
3. I don't want to serialize my inserts, since I am not sure I could keep up with the insert rate. I don't even know what the specs will be for the production hardware; I have to deliver this before that's decided, so I am being conservative.
4. I really want to avoid updates on this table if possible, in part due to my limited ability to test.
5. Due to the number of downstream processes, it is possible that one will request data and for some reason fail to insert it locally. The downstream application keeps track of the latest data it received, which means a subscriber may need to request the same data again.
Is there a way to set up Change Data Capture with multiple subscribers to handle this, given that my subscribers are just queries? All the queries come from the same servers (there will be several, but all doing the same thing). If so, when I performance-test this, are there any wait events I should keep an eye on?
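Oracle's Change Data Capture (deprecated in 11.2, but present) does support multiple subscribers, each with its own subscription window, so a slow subscriber does not lose changes consumed by others. A sketch of the subscriber side against the built-in synchronous change set (schema, table, and column names are assumptions); note that synchronous CDC captures via triggers on the source table, which adds overhead to your 200/second insert path, so it needs load-testing:

BEGIN
  DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
    change_set_name   => 'SYNC_SET',          -- predefined synchronous change set
    description       => 'Subscriber 1',
    subscription_name => 'SUB1');
  DBMS_CDC_SUBSCRIBE.SUBSCRIBE(
    subscription_name => 'SUB1',
    source_schema     => 'APP',
    source_table      => 'MY_TABLE',
    column_list       => 'ID, TYPE_CD, PAYLOAD',
    subscriber_view   => 'SUB1_VIEW');
  DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION(subscription_name => 'SUB1');
END;
/

-- each polling cycle:
EXEC DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW(subscription_name => 'SUB1');
SELECT * FROM SUB1_VIEW;   -- only rows new since the last cycle
EXEC DBMS_CDC_SUBSCRIBE.PURGE_WINDOW(subscription_name => 'SUB1');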
We have a requirement to publish only the delta/changes made to an existing materialized view. This MV is fast-refreshed over a DB link. Since the MV will have all the data, we won't be able to pick out just the changes from the MV itself. Here is what I have done so far:
1) Created the MV log:
CREATE MATERIALIZED VIEW LOG ON SHRTS_FLNG_PRCSD
WITH PRIMARY KEY, SEQUENCE
INCLUDING NEW VALUES;
2) Created the MV:
CREATE MATERIALIZED VIEW MV_SHRTS_FLNG_PRCSD
(ITRTN_NB, UNQ_ID, XCHNG_ORG_ID, FLNG_ST, MAX_PRCSD_ITRTN_NB, SBMTD_SCRTS_AM, SBMTD_PSTN_AM)
BUILD IMMEDIATE
REFRESH FAST ON DEMAND WITH PRIMARY KEY
AS
4) Created a temp table to capture the diff between the MV and the newly inserted deltas. I added a SEQ_NB column, populated from an Oracle sequence, to capture the diff:
CREATE TABLE MKT_CMPLC.TMP_SHRTS_FLNG_PRCSD (
  ITRTN_NB      NUMBER(2)     NOT NULL,
  UNQ_ID        CHAR(38 BYTE) NOT NULL,
  XCHNG_ORG_ID  NUMBER(8)     NOT NULL,
  FLNG_ST       VARCHAR2(9 BYTE),
[code]....
5) Refreshed the MV:
exec dbms_mview.refresh('MV_SHRTS_FLNG_PRCSD');
6) In order to capture the new inserts, I am using the following DML:
INSERT INTO tmp_SHRTS_FLNG_PRCSD
SELECT ITRTN_NB, UNQ_ID, XCHNG_ORG_ID, FLNG_ST, MAX_PRCSD_ITRTN_NB, SBMTD_SCRTS_AM,
[code]....
We need only the delta records. Is there any way to extract the actual rows from the MV logs?
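A sketch of pulling the changed master rows by joining back through the log's key column; UNQ_ID is assumed to be the primary key, direct MLOG$_ access is unsupported, and the query has to run before a refresh purges the log (rows deleted from the master will be in the log but will not appear in this join):

SELECT s.*
FROM   SHRTS_FLNG_PRCSD s
WHERE  s.UNQ_ID IN (SELECT l.UNQ_ID FROM MLOG$_SHRTS_FLNG_PRCSD l);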