Data Guard :: ARCH Wait On SENDREQ While Configured With LGWR ASYNC Attributes?
Oct 5, 2010
In Statspack I see a lot of "ARCH wait on SENDREQ" wait events. The database is configured with the LGWR ASYNC attribute. The documentation says the "ARCH wait" events may occur when the destination is configured with the ARCH attribute. How is it possible that I get these "ARCH wait" events, and do they have any effect on the online performance of the database? Is it possible that a user who saves data in the database has to wait on these "ARCH wait" events?
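A minimal sketch of how one might confirm the transport attributes actually in effect and the accumulated ARCH wait time, assuming SYSDBA access on the primary and that the standby destination is dest_id 2 (an assumption):

-- Confirm the archiver/transport settings in effect for the remote destination
SELECT dest_id, archiver, transmit_mode, status
FROM   v$archive_dest
WHERE  dest_id = 2;

-- Accumulated time for the ARCH wait events since instance startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'ARCH wait%';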
View 1 Replies
Oct 25, 2012
Redo log information can be transmitted in one of two ways from the primary database to the standby database: either by ARCH or LGWR.
1. When is ARCH involved?
2. When is LGWR involved?
FAL_CLIENT = (should I enter a net service name, a DB name, or a service_name?)
FAL_SERVER = (should I enter a net service name, a DB name, or a service_name?)
FAL_CLIENT='whichone'
[code]...
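Both FAL_SERVER and FAL_CLIENT take Oracle Net service names (tnsnames.ora aliases), not database names. A hedged init.ora sketch for the standby side, where PRIM and STBY are hypothetical aliases:

# net service name the standby uses to fetch missing archived logs from the primary
FAL_SERVER=PRIM
# net service name the primary uses to connect back to this standby
FAL_CLIENT=STBY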
View 9 Replies
Mar 10, 2013
I have few queries regarding standby database.
1) On the primary database a standby redo log is required for switchover, and on the standby database a standby redo log is required for:
--Real time Apply
--Maximum protection or Maximum availability
Am I correct?
2) My database is in Maximum Performance mode. I set up the following entry in init.ora:
LOG_ARCHIVE_DEST_2='service=standby LGWR ASYNC'. My question is: do I need standby redo log files on the standby database in order to use the LGWR (LGWR ASYNC) transport mode from the primary? Without standby redo logs on the standby database, can it still transport redo data from primary to standby using LGWR ASYNC? (A sketch of adding standby redo logs follows this list.)
3) I have changed the LOG_ARCHIVE_DEST_n initialization parameter from the ARCH attribute to the LGWR attribute, but I have not changed the protection mode. Is there any impact on the behaviour of the database if we do not change the mode from MAXIMUM PERFORMANCE to MAXIMUM AVAILABILITY?
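For question 2, a minimal sketch of adding standby redo logs on the standby; the group numbers, paths and 50M size are assumptions and should match the size of the online redo logs:

-- Run on the standby (paths/sizes hypothetical)
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/stby/srl04.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/oradata/stby/srl05.log') SIZE 50M;
-- Verify
SELECT group#, bytes, status FROM v$standby_log;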
View 1 Replies
Nov 14, 2013
How can I find the number of standby databases configured for a primary database from the OS level?
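If querying the primary itself (rather than the OS) is acceptable, a hedged sketch that lists the remote archive destinations, assuming the standbys are configured through LOG_ARCHIVE_DEST_n:

SELECT dest_id, type, database_mode, status, destination
FROM   v$archive_dest_status
WHERE  type <> 'LOCAL'
AND    status <> 'INACTIVE';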
View 9 Replies
Dec 11, 2012
I am trying to look at wait events for a long-running query in TOAD. I start the query in one instance of TOAD and open the Session Browser in another instance. I am surprised to find that in "Total Waits" on the right-hand side, SQL*Net message from client is taking the longest time, already at 178577 units, even though I have just started the query.
In the Current Waits, however, it correctly shows db file scattered read at a few seconds.
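SQL*Net message from client is an idle wait (the server is simply waiting for the next request from the client), so its cumulative total since the session started is usually not meaningful. A sketch of pulling a session's non-idle waits directly, with :sid as a placeholder for the session ID:

SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = :sid
AND    wait_class <> 'Idle'
ORDER  BY time_waited DESC;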
View 5 Replies
Oct 8, 2013
DB 11.2.0.2, AIX 6
I am getting following two top wait events from AWR report
1)SQL*Net more data from client
2)log file sync
Do these point to network latency or hardware configuration issues? What should I do about the first wait event?
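A hedged starting point: 'SQL*Net more data from client' often points at large SQL text or bind payloads and SDU/network sizing rather than the database itself, while 'log file sync' is usually driven by commit frequency or redo write latency. One way to drill into log file sync is to compare it with the underlying redo write event:

-- If 'log file parallel write' latencies track 'log file sync', redo write I/O is
-- the likely driver; if not, look at commit rates or CPU starvation.
SELECT event, wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event IN ('log file sync', 'log file parallel write')
ORDER  BY event, wait_time_milli;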
View 11 Replies
Jul 5, 2011
We have physical Data Guard configured, version 10.2.0.4. We need to upgrade the primary and standby databases to 11gR2. Can we perform a rolling upgrade?
View 3 Replies
Oct 17, 2012
User access to tables in a logical standby database can be controlled using the following command:
ALTER DATABASE GUARD STANDBY;
My simple question is: how can I know the current active Guard setting in the standby database?
Oracle 11g R2.
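For reference, a quick way to check the current setting (GUARD_STATUS reports ALL, STANDBY or NONE):

SELECT guard_status FROM v$database;
-- and to change it, e.g. back to the default:
-- ALTER DATABASE GUARD ALL;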
View 1 Replies
May 29, 2013
I have an RMAN archive log backup set up to run every 5 hours. It seems to run fine and backs up the archive logs, but it is not removing them from the archive log location. I have a DR setup for this environment; how does RMAN remove archive logs in a DR environment?
Here is the information.
skipping inaccessible file /PRD1/arch02/arch/PRD1_1_8349_790425892.arc
RMAN-06061: WARNING: skipping archived log compromises recoverability
skipping archived logs of thread 1 from sequence 8440 to 8478; already backed up
skipping archived logs of thread 1 from sequence 8440 to 8478; already backed up
channel disk1: starting compressed archived log backup set
channel disk1: specifying archived log(s) in backup set
input archived log thread=1 sequence=8479 RECID=151073 STAMP=816692206
input archived log thread=1 sequence=8480 RECID=151076 STAMP=816695235
[code]....
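A hedged RMAN sketch of one common approach: let the archive log backup delete what it has just backed up, and tell RMAN never to delete anything that has not yet been applied on the standby (the exact policy options vary by version):

# only allow deletion of archived logs already applied on the standby
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
# back up all archived logs and delete the local copies that were backed up
BACKUP ARCHIVELOG ALL DELETE INPUT;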
View 2 Replies
Aug 26, 2013
I have an environment in which backups of Oracle 10/11 databases are performed with RMAN and Tivoli Storage Manager (Data Protection for Oracle). There are several databases, and for each one there is a daily full backup and an hourly archive log backup.
Sometimes, when the full database backup takes longer (up to 4 hours), archive log backups are missed, because the TSM node cannot perform two backups at a time. I would like to avoid those missed backups.
Option A was to delete the association of the archive log scheduler during the full backup. But when removing the association we lose historical data about the backup, and we need that history to create weekly/monthly/quarterly statistics of completed backups. We need 99% completed.
Option B was to create two nodes in TSM (TDPO): one would do only the full backup and the other only the archive log backup, so the problem moves to RMAN. But an RMAN specialist told me this may cause problems with the full backup: during the full backup, archive logs are also backed up (at the start and end), so there might be a problem with accessing a file that is in use by another process, and this may cause a problem with the full backup, which is what we especially want to avoid.
View 3 Replies
Nov 23, 2010
We have a table (call it the PROBLEM table) that varies between 60 and 100 million records from one data archival process to the next. The table holds data from all 50 states, is range partitioned by each state's unique code, and has a primary key based on the following:
STATE - the state the data is from
BMRK YEAR - the year the data was last benchmarked
AREA - the location the data is from
SERIES - the industry the data is located in
YEAR - the year the data was reported
MONTH - the month the data was reported
DATA TYPE - the type of data that was reported
ESTIMATE TYPE - the type of statistical estimate that was produced from the data
REPORT ID - report id unique to each state
REPT SFC - the state the reporter resided in
We have developed a job that processes data from other tables and creates some statistics, resulting in approximately 2.2 million inserts, 0.5 million updates and 0.5 million deletes on the PROBLEM table during each run. Once the table is loaded, another process (call it PROBLEM_PROCESS) takes off and reads the updated information to produce statistical data that is stored in another table. This data is produced by summing anywhere between 2 and 5000 records per query from the PROBLEM table. Here is the query (call it PROBLEM_QUERY):
select round(sum(cm_value*sample_weight),3),round(sum(pm_value*sample_weight),3)
from matched_sample
where state_fips_code = :1
and area_fips_code = :2
and series_type_code = 'B'
and year = :3
and month = :4
[code]...
When we run this job it is very fast (20 minutes) until it gets to the PROBLEM_QUERY; at that point we get db file scattered read waits, db file sequential read waits and async IO waits, and the job has run for as long as 2 days. When I look at Grid, the PROBLEM query is doing a full table scan and ignoring the index.
Our systems people advised us to reorganize and re-index the data in the PROBLEM table after all of the DML and before PROBLEM_PROCESS takes off. If we do this, it works and cuts our time down a lot. However, the reorganize and re-index process adds eight hours. Note: when we reorganize the data we create a copy table on another set of tablespaces and insert the data into the copy, sorting on the primary key columns. After this we re-index the new table, drop the old table and rename the new one. On the next run we move the data back to the other set of tablespaces, so we bounce back and forth between tablespace sets A and B, so to speak.
We have Oracle installed on an M5000 server with Solaris as the OS; the binaries and data files are stored on a NetApp storage array (model 3160) of 500 1TB SATA 7200 RPM drives. However, there are 128 other databases on the NetApp filer as well.
IMO the array and the slow disks are the problem. I believe this because we are catering to the slow disks by reorganizing and re-indexing the PROBLEM table during each run. I don't believe this should be necessary; we normally reorganize and re-index our data each week in our production system after many more transactions than this.
Our systems people say it is our application. Oracle Support tells us the statistics are out of date, but has not answered why the statistics are out of date and why the index is abandoned after one run.
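One hedged check before resorting to the full reorganize: re-gather statistics on the affected table (or just the touched partitions) immediately after the DML and see whether the optimizer returns to the index. Owner and partition names below are placeholders:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',          -- hypothetical owner
    tabname          => 'MATCHED_SAMPLE',
    partname         => NULL,                 -- or a specific state partition
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);                -- include index statistics
END;
/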
View 7 Replies
Jan 6, 2013
I have tried to create the index in a different tablespace as well, but I get the same error:
SQL> create index inx_tbl_voicechat_unsub_ani on tbl_voicechat_unsub (ani) tablespace ideadb_index;
create index inx_tbl_voicechat_unsub_ani on tbl_voicechat_unsub (ani) tablespace ideadb_index
*
ERROR at line 1:
ORA-01115: IO error reading block from file 201 (block # 144265)
ORA-27070: async read/write failed
OSD-04016: Error queuing an asynchronous I/O request.
O/S-Error: (OS 23) Data error (cyclic redundancy check).
ORA-01115: IO error reading block from file 201 (block # 144265)
ORA-27070: async read/write failed
OSD-04016: Error queuing an asynchronous I/O request.
O/S-Error: (OS 23) Data error (cyclic redundancy check).
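The OS-level CRC error (OS 23) suggests the block cannot be read from disk at all. A hedged sketch to identify which file "file 201" refers to; absolute file numbers above the DB_FILES parameter (200 by default) usually denote temp files, reported as DB_FILES + tempfile number:

-- Is it a regular datafile?
SELECT file#, name FROM v$datafile WHERE file# = 201;
-- Or a tempfile (if DB_FILES = 200, file 201 would be tempfile 1)?
SHOW PARAMETER db_files
SELECT file#, name FROM v$tempfile;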
View 10 Replies
Aug 21, 2008
I have 3 sites which need to be updated as soon as any other site updates its data (within 5 seconds), so I have chosen a multimaster solution, synchronous, regardless of resource considerations.
1. What is the difference between multimaster async replication and updatable materialized views, leaving resource consumption aside? In both of them each site can receive and also propagate data. I know that with materialized views we can work offline and then apply the changes manually; is there anything else?
2. I cannot understand the difference between multimaster sync and async given the 5-second window (because 5 seconds is a very short time). In this case, are they really different from each other or not?
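For comparison, a hedged sketch of the materialized view side only: a fast-refreshable, updatable MV pulled every 5 seconds over a database link. Table and link names are hypothetical, fast refresh needs a materialized view log on the master, and pushing local changes back to the master additionally requires replication groups (DBMS_REPCAT), which is omitted here:

-- On the master site
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;
-- On the remote site: updatable MV refreshed every 5 seconds
CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 5/86400
  FOR UPDATE
  AS SELECT * FROM orders@master_link;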
View 1 Replies
Oct 2, 2012
We have an Oracle 10g database with RAC and Data Guard. When we look at the AWR report, the wait time shown by Oracle for this database is very high:
Service Time : 15.36%
Wait Time : 84.64%
This would imply Oracle is waiting for resources 85% of the time and only processing SQL queries during 15% of its non-idle time. However, when we check the OS (RHEL), iowait is only about 10% and the CPU is 80% idle, which means processing horsepower is available.
As such, the results from the OS and the Oracle database (AWR report) seem contradictory: the OS says we have CPU/IO capacity, while Oracle says we don't.
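One hedged way to reconcile the two views is to see which wait classes dominate database time; waits such as enqueues, cluster or network waits consume database time without showing up as OS CPU or iowait:

SELECT wait_class, total_waits, time_waited
FROM   v$system_wait_class
WHERE  wait_class <> 'Idle'
ORDER  BY time_waited DESC;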
View 17 Replies
Jul 6, 2012
I am seeing log switches (LGWR switch) in my database alert log every 3-4 minutes. Is that appropriate? If not, what measures can I take to reduce this?
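A common rule of thumb is to size the online redo logs so switches occur roughly every 15-20 minutes at peak; if they switch every 3-4 minutes, the usual remedy is larger redo log groups. A sketch to check the current size and switch rate:

SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;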
View 1 Replies
Jul 13, 2010
We had an issue last week where we had a session with a very basic SQL query lock up the database, spiking the CPU at 100%. When we killed the session, the lock would just jump to another session, and so on. We finally had to restart the database since our clients were being kicked out. After the restart, the LGWR ended up locking and held the CPU between 85-95%. The archive logs were switching every 5 minutes, when normally it would be every 45 minutes. We spoke with Oracle Support, but they brushed the issue off, saying it was a hardware issue, and were not able to provide any backing for that.
View 4 Replies
Jun 30, 2010
I have configured Data Guard on Windows XP with the primary and standby on the same server. It is not able to apply logs to the standby database. When I queried the following, I got these errors:
sql>select message from v$dataguard_status where dest_id=2;
FAL[server, ARC0]: Error 12514 creating remote archivelog file 'STNDBY'
PING[ARCk]: Heartbeat failed to connect to standby 'STNDBY'. Error is 12528.
LGWR: I/O error 1089 archiving log 1 to 'STNDBY'
ARCk: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3113)
ARCk: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
PING[ARCk]: Error 3113 when pinging standby STNDBY.
LNS: Closing remote archive destination LOG_ARCHIVE_DEST_2: 'STNDBY' (error 1089)
LGWR: Error 1041 closing archivelog file 'STNDBY'
************************************************
Also, the query
SQL> select sequence#,applied from v$archived_log;
gives the following output:
SEQUENCE# APP
---------- ---
7 YES
5 NO
8 NO
9 NO
6 NO
10 NO
10 NO
11 NO
11 NO
12 NO
12 NO
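The ORA-12514/ORA-12528 errors mean the primary cannot reach a service called STNDBY through the listener (12528 typically appears when the standby instance is only in NOMOUNT and its dynamic registration is blocking connections). A hedged check from the primary, assuming dest_id 2 is the standby destination:

SELECT dest_id, status, error, archiver, transmit_mode
FROM   v$archive_dest
WHERE  dest_id = 2;
-- the value should resolve to a working tnsnames.ora alias for the standby
SHOW PARAMETER log_archive_dest_2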
View 4 Replies
Jul 2, 2010
Is it possible to have multiple LGWR processes for a single database? If it is possible, how do the multiple processes write from the redo log buffer to the online redo log file?
View 3 Replies
Jan 16, 2013
I have configured a physical standby on my local system. To check log shipping I created a table on the primary DB, but when I try to check it on the standby, it says the table does not exist. Below are the primary and standby alert log entries.
Primary alert log
Fatal NI connect error 12514, connecting to:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=172.16.0.98)(PORT=1522))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=STAND)(SERVER=dedicated)(CID=(PROGRAM=d:\oracle11g\app\administrator\product\11.1.0\db_1\bin\ORACLE.EXE)(HOST=A960M)(USER=SYSTEM))(SERVER=dedicated)))
VERSION INFORMATION:
TNS for 64-bit Windows: Version 11.1.0.6.0 - Production
[code]....
View 1 Replies
Nov 5, 2010
I want to change the DB name, SID and data/control file locations in an operational Data Guard setup. I plan to proceed as below:
1) Shut down primary and standby (stop managed recovery)
2) Change the DB name in the init.ora of primary and standby; change the control file locations
3) Create a control file for the primary from trace (script), making the changes for DB name and file locations
4) Mount and open the primary database
5) Create a standby control file
6) Transfer the standby control file to the standby
7) Mount the standby database and start managed recovery
If these steps are error-free, do I need anything in addition to this, is there a better way to do it, or is it not possible at all?
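A hedged sketch of steps 3 and 5 as they are normally run on the primary (the file paths are hypothetical); renaming the database itself is usually done by recreating the control file from the edited trace script with the SET DATABASE clause:

-- Step 3: dump a CREATE CONTROLFILE script to edit (DB name, file locations)
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/recreate_ctl.sql';
-- Step 5: create the standby control file for the renamed database
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';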
View 1 Replies
Feb 24, 2013
Oracle Version: 11.1.0.7.0
Active Dataguard
Statspack has been configured for Active Data Guard on the primary database. We got a spike of buffer busy waits for about 5 minutes in Active Data Guard, and it was causing worse application SQL response times during this 5-minute window. Below is what I got from the Statspack report for one hour:
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 18611 21-Feb-13 22:00:02 236 2.2
End Snap: 18613 21-Feb-13 23:00:02 237 2.1
Elapsed: 60.00 (mins)
[code]...
Why could there be a sudden spike of demand on UNDO data in Active Data Guard?
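A hedged way to see which block class the buffer busy waits were against (undo header/undo block versus data block), which is what would tie the spike to UNDO demand:

SELECT class, count, time
FROM   v$waitstat
ORDER  BY time DESC;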
View 2 Replies
Mar 29, 2011
I am working at a bank as a system consultant. I have a SAN storage area and Oracle set up as below.
SAN 1
This interface includes the DATA FILES of the oracle tablespace
SAN 2
SAN1 Mirrors the DATA FILES of the oracle tablespace to SAN 2
1. Can I rely on real-time data recovery from SAN 2?
2. If the data files on SAN 1 are corrupted, will the SAN 2 data files be corrupted as well?
3. If SAN 2 is corrupted, what Oracle features can be used to have uncorrupted data?
View 5 Replies
Jul 23, 2010
I did everything as written, but when I run SQL> alter database recover managed standby database disconnect from session;
and then look in the standby database alert log file, this is what is written:
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1: 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB\SYSTEM01.DBF'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
[code]....
The strange thing is that it resolves the data file to the primary database's path on drive F: and tries to go to it; I don't understand what the reason for this could be, although I am running this command while the primary database is shut down!
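ORA-01157 with OS error 3 here usually means the standby control file still carries the primary's file paths. A hedged check of the conversion parameters on the standby (the paths in the example are hypothetical):

SHOW PARAMETER db_file_name_convert
SHOW PARAMETER log_file_name_convert
SHOW PARAMETER standby_file_management
-- if unset, something along these lines, then restart the standby:
-- ALTER SYSTEM SET db_file_name_convert = 'F:\ORACLE\PRODUCT\10.2.0\ORADATA\DB', 'D:\ORADATA\STBY' SCOPE=SPFILE;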
View 2 Replies
May 2, 2012
I am configuring a logical standby online. When I execute DBMS_LOGSTDBY.BUILD, first:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
it was blocked by another session, so I killed the holding session, but that did not work. Then I cancelled this step and executed it again. The error is:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
BEGIN DBMS_LOGSTDBY.BUILD; END;
*
ERROR at line 1:
ORA-01354: Supplemental log data must be added to run this command
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3669
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3755
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 12
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 370
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 157
[code]....
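ORA-01354 indicates supplemental logging is not (or is no longer) enabled on the primary. A hedged sketch of checking and re-enabling it before re-running the build:

SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui
FROM   v$database;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;
EXECUTE DBMS_LOGSTDBY.BUILD;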
View 3 Replies
Jun 15, 2012
I have set up a cross-platform (Microsoft Windows IA 32-bit -> Linux x86 64-bit) Data Guard configuration and it worked fine. Then I did a switchover (which again worked) and found out the data is not getting replicated at all. I checked the data files visible from the new primary database and found they are in the Windows format, as below:
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSTEM01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\SYSAUX01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\UNDOTBS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\USERS01.DBF
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\RMAN\RMAN_TS01.DBF
and physically they were created at '/home/app/oracle/product/11.2.0/db_1/dbs/' with names such as:
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\REDO02.LOG
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\REDO03.LOG
D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\RMAN\RMAN_TS01.DBF
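A hedged sketch of the usual direction for the fix: set file name conversion on the Linux side so Windows-style names map to Linux paths (the Linux target path below is an assumption). Files that were already created under wrong names would still need to be moved and renamed with ALTER DATABASE RENAME FILE:

ALTER SYSTEM SET db_file_name_convert  = 'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\', '/u01/oradata/MFS/' SCOPE=SPFILE;
ALTER SYSTEM SET log_file_name_convert = 'D:\ORACLE\APP\ADMINISTRATOR\ORADATA\MFS\', '/u01/oradata/MFS/' SCOPE=SPFILE;
ALTER SYSTEM SET standby_file_management = AUTO;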
View 5 Replies
Jan 16, 2013
I configured Data Guard on my local system.
1) The SCN on the standby differs from the primary (I checked; there is a one-day difference). How do I make the SCNs the same?
2) I created a table on the primary, but it is not reflected on the standby (alert log entries are pasted below):
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Errors in file d:\oracle11g\app\administrator\diag\rdbms\stand\stand\trace\stand_dbw0_6916.trc:
ORA-01157: cannot identify/lock data file 2 - see DBWR trace file
ORA-01110: data file 2: 'D:\ORACLE11G\APP\ADMINISTRATOR\ORADATA\STAND\SYSAUX01.DBF'
[code]....
3) When I try to open the standby database in read-only mode, I get the error below:
ERROR at line 1:
ORA-16004: backup database requires recovery
ORA-01157: cannot identify/lock data file 1 - see DBWR trace file
ORA-01110: data file 1:
'D:\ORACLE11G\APP\ADMINISTRATOR\ORADATA\STAND\SYSTEM01.DBF'
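A hedged sketch of the basic checks: is managed recovery running and applying, and how far apart are the two sides? Run the first two on the standby and the SCN query on both databases:

SELECT process, status, thread#, sequence#
FROM   v$managed_standby;
SELECT max(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES';
SELECT current_scn FROM v$database;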
View 31 Replies
Oct 21, 2012
Our organization has recently decided to go with a storage metro cluster solution for disaster recovery. In a Data Guard environment, we normally calculate how much archive log is being generated and base the required bandwidth on that value.
For the storage metro cluster, we need to find out how many blocks are changing in our primary database, since the same rate of change would apply to the DR cluster. So I need to estimate how much change is happening in my system. How do I calculate the change rate?
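One hedged approach is to sample cumulative statistics twice over a known interval and take the delta: 'db block changes' gives a block-change count and 'redo size' a byte volume. AWR/Statspack snapshots report the same statistics as per-second rates if snapshots are already being taken:

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('db block changes', 'redo size', 'physical writes');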
View 4 Replies
Mar 15, 2011
I have 2 databases (A and B) which share a common ORACLE_HOME. I configured 2 listeners through netca on different ports, and the two listener names are different.
But when I stop LISTENER, both listeners stop, and when I start it, both listeners are up and running, and I can see database B's services (and vice versa). And when I tnsping, I see port 1521, the default port, for database A.
When I run a bash shell, I have included the ORACLE_HOME path; should I also add ORACLE_SID there? If so, which SID should I add, A or B? From where does the listener pick up its default information?
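On why both listeners show both databases' services: by default each instance dynamically registers with the listener on port 1521 of the local host unless LOCAL_LISTENER points it elsewhere. A hedged sketch, where LISTENER_A is assumed to be a tnsnames.ora alias resolving to database A's listener address (repeat analogously for B):

ALTER SYSTEM SET local_listener = 'LISTENER_A' SCOPE=BOTH;
ALTER SYSTEM REGISTER;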
>> When I run a bash shell, I have included the ORACLE_HOME path; should I also add ORACLE_SID there?
Yes, among MANY other things.
#*************************************************************
# Create an alias for every $ORACLE_SID on the UNIX server
#*************************************************************
for DB in `cat /var/opt/oracle/oratab|\
grep -v \#|grep -v \*|cut -d":" -f1`
[code]....
"ERROR:ORA-12514: TNS:listener does not currently know of service requested in connect descriptor"
View 4 Replies
Oct 29, 2013
My requirement is: I want the first three nullable attributes. For example, if I have 60 columns in a table, I need to fetch the first three columns that are NULL in a row.
View 7 Replies
May 1, 2010
I have configured and tested WebUtil on a Windows operating system and it worked perfectly. Now I am trying to deploy a customized form that uses WebUtil to EBS R12, and after making the same configuration as before, I get a compilation error.
The form is very simple (just a button with the following PL/SQL code):
show_webutil_information (TRUE);
The error is: "FRM-40039: Cannot attach library WEBUTIL while opening form XXR12_DEMO.fmb"
What are the WebUtil configuration steps for R12, or what is the solution to this error?
View 1 Replies