How To Tune Log File Sync Waits In 11gR2
Mar 22, 2013
OS version is Solaris 10
DB version is 11.2.0.3
How do I tune 'log file sync' waits in 11gR2? Which parameters are involved in tuning 'log file sync', and what would their optimal values be?
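As a hedged starting point (a diagnostic sketch, not an official tuning recipe): before touching any parameters it usually helps to compare 'log file sync' against 'log file parallel write'. If LGWR's own write time accounts for most of the sync time, the bottleneck is redo I/O rather than commit frequency or CPU. A minimal check, assuming SELECT access on v$system_event:

SELECT event, total_waits,
       ROUND(time_waited_micro / 1000) AS time_waited_ms,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write');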
Platform: linux64 + GlassFish. I run msync as a client on the same server, but on the Mobile Manager web page only download sync publication history can be found. What should I do to sync the local Berkeley DB content to Oracle? Do I need to create upload sync publications? Do I need to use the Mobile Server workbench to create new applications?
When a user session commits, it waits on 'log file sync' until LGWR sends a message back to the user session after writing the log buffer contents to the redo log file. As far as I understand this is a serialized operation (one at a time).
So how come I have a 7 ms average 'log file sync' wait time and can still perform 200 commits per second?
7 ms * 200 waits = 1400 ms = 1.4 sec
How can I set the DB home name at the time of installation of Oracle 11.2.0.3?
I am trying to install Oracle 11g R2 (64-bit) on a Windows 7 64-bit OS. While creating a database using DBCA I am getting an error; below is the screenshot.
Below is the error in the Trace file...
oracle.sysman.assistants.util.step.StepExecutionException:
Error in Process: C:\Oracle\app\oracle\product\11.2.0\server\bin\orapwd.exe
Unable to find error file %ORACLE_HOME%\RDBMS\opw<lang>.msb
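One hedged way to narrow this down (the paths below are illustrative, not taken from the post) is to run orapwd by hand from the same Oracle home DBCA is using. If the message file really is missing the same error reproduces, and if a second Oracle home is set in the environment, correcting ORACLE_HOME often resolves it:

set ORACLE_HOME=C:\Oracle\app\oracle\product\11.2.0\server
%ORACLE_HOME%\bin\orapwd file=%ORACLE_HOME%\database\PWDorcl.ora password=secret entries=5
rem the message files normally live under RDBMS\mesg
dir "%ORACLE_HOME%\RDBMS\mesg\opw*.msb"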
Our product uses Oracle 11gR1 and in the new release we are going to use Oracle 11gR2. For this we are performing the following steps:
(1) Install Oracle 11gR2 on a machine where our product (Oracle 11gR1) is already installed.
(2) Upgrade the Oracle 11gR1 schema to Oracle 11gR2.
(3) For using the upgraded schema in our product installer we create a clone of the upgraded schema.
(4) For creating the clone we use the Oracle 11gR2 DBCA utility.
(5) The clone files are created successfully (DBC, CTL, DBF).
We then performed the same steps on another machine and the DBF file size differed greatly: on one machine it was 89 MB and on the second it was 150 MB. There is no difference in the schema and both machines are Windows 7 machines.
To reduce the size of the schema we tried different space reclamation commands, but the size is not changing.
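For the record, a hedged sketch of the usual shrink sequence (object names and the path are placeholders). Note that even after a segment shrink, the DBF on disk only gets smaller if the datafile itself is resized below its high-water mark, which may explain why the file size "is not changing":

ALTER TABLE app_owner.big_table ENABLE ROW MOVEMENT;
ALTER TABLE app_owner.big_table SHRINK SPACE CASCADE;
-- only this step shrinks the physical file; path and size are illustrative
ALTER DATABASE DATAFILE 'C:\ORACLE\ORADATA\CLONE\USERS01.DBF' RESIZE 100M;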
Can we restore a database password file into ASM in version 11gR2?
I have a deadlock trace file to analyse, and I used to be able to see the rowid as a 16-character hex value in the trace file, which I could then query on to get the actual real-world row. I see that in the 11gR2 deadlock trace the formatting is different, and for the life of me I am unable to find the rowid. Has Oracle stopped reporting it now, or is there another way to get this value? The deadlock graph shows me:
*** 2013-05-14 14:49:15.047
Submitting asynchronized dump request [28]
Global blockers dump end:-----------------------------------
Global Wait-For-Graph(WFG) at ddTS[0.16e] :
BLOCKED 0x4362890f8 5 wq 2 cvtops x1 TX 0x3d001b.0x18a3021 [FF000-0001-00000002] inst 1
BLOCKER 0x436288f38 5 wq 1 cvtops x28 TX 0x3d001b.0x18a3021 [102000-0001-00000002] inst 1
[code]....
I have a .dmp file and I want to use the data in it for my further practice, so I need to load the data from the .dmp file into a schema that exists in the database.
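A hedged sketch of the usual approach, assuming the file was produced by Data Pump export and sits in a location covered by a directory object (all names below are placeholders). If the file instead came from the older exp utility, the classic imp tool with FROMUSER/TOUSER would be used:

impdp system/password DIRECTORY=data_pump_dir DUMPFILE=practice.dmp REMAP_SCHEMA=source_schema:target_schema LOGFILE=practice_imp.log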
I am using Release 11.2.0.3.0 of Oracle. In our database we observe high 'Application' waits followed by 'Other' and 'User I/O'. Investigating the wait class 'Other', I found that 90% of the wait is due to the wait event 'enq: WF - contention', occurring from exactly 5 PM to 5:30 PM daily. I then found from dba_hist_active_sess_history that the sessions experiencing these waits are Oracle internal (M004).
Now I can see that two of the Oracle jobs are scheduled at this particular time:
1)dbms_autotask_prvt.age
2)dbms_scheduler.auto_purge
And these run with a daily frequency, so I suspect they are experiencing the waits. Now my question is: can I decrease the frequency of these jobs (maybe to weekly) without any negative impact on my DB? Or should I change the schedule to a less busy hour, or are they 'not required' and safe to disable? (A sketch of changing the schedule follows.)
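As a hedged sketch (job names should be verified in your own dictionary first; PURGE_LOG is the usual default job that runs dbms_scheduler.auto_purge), the schedule of a scheduler job can be changed without disabling it:

SELECT job_name, repeat_interval, last_start_date
FROM   dba_scheduler_jobs
WHERE  UPPER(job_action) LIKE '%AUTO_PURGE%';

BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.PURGE_LOG',                      -- job name found above
    attribute => 'repeat_interval',
    value     => 'freq=weekly; byday=sun; byhour=5');  -- illustrative off-peak schedule
END;
/
-- if the job points at a named schedule, alter that schedule's repeat_interval instead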
How can I trim down the following SQL to within 255 characters?
select indate
from (
select case count(inputDate)
when 1 then inputdate
end as indate
from commLeaseBut5
[code]...
This SQL checks a date field in the database for a record: if the date field is blank it should be a new record, and the current timestamp is assigned and stored in the new record; otherwise the SQL returns the record's timestamp for display. (See the sketch below.)
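Under heavy assumptions (the elided WHERE clause is kept as-is, and at most one matching row exists, which the CASE on COUNT implies), one much shorter equivalent is to collapse the aggregate and the CASE into NVL over MAX, falling back to the current timestamp when no dated record exists:

SELECT NVL(MAX(inputDate), SYSTIMESTAMP) AS indate
FROM   commLeaseBut5
WHERE  ...   -- the original, elided predicate goes here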
Can we tune the query below without creating any indexes?
SELECT /*+ PARALLEL(a,64) */x_abc_41, x_abc_44, CALLED_FROM_NUM, X_abc_25, created, evt_stat_cd, last_date, x_abc_number_from,
x_abc_complete_date
FROM table a
WHERE a.todo_cd in('MNPIN','MNPOUT')
AND a.x_abc_25 = 'NP RFS'
AND a.created BETWEEN sysdate-180 AND sysdate
Explain Plan :
PLAN_TABLE_OUTPUT
Plan hash value: 3718629559
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 11 | 2376 | 9462 (4)| 00:02:51 | | | |
[code]...
Note: currently this query takes 3 minutes to retrieve 8000 records, but it should execute in less than 1 minute.
We know and perform lots of tuning on our databases, mainly on the primary side, but the set of queries running on the standby databases (which we like to use for reporting purposes) is totally different from the set running on the primary. So how can we tune our standby DB to perform well?
I am using the dbms_sqltune package to tune some resource-intensive queries, and in doing so I am trying to understand how a SQL profile works.
I created a task for one SQL using dbms_sqltune; the report then recommended accepting its SQL profile (with a potential benefit of 65%).
I then accepted that SQL profile with
exec dbms_sqltune.accept_sql_profile(task_name => 'pc1_61d2dhmdwzc8d', replace => TRUE);
and re-executed the same query, but there was NO difference in time. So what is the mentioned potential benefit of 65% about?
1) Originally the query takes 10 sec to execute; with this 65% I thought it would execute within 3-4 sec. Is that right?
2) Also, the query for which I created the tuning task has some hardcoded input values. If I change the input values next time, will the profile work with the new input values? (See the sketch after the report excerpt below.)
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------
1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.
Recommendation (estimated benefit: 65.17%)
------------------------------------------
- Consider accepting the recommended SQL profile.
execute dbms_sqltune.accept_sql_profile(task_name => 'pc1_61d2dhmdwzc8d',
replace => TRUE);
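Two hedged checks that bear on the questions above. First, whether the profile is actually being picked up at run time: the SQL_PROFILE column of v$sql is populated when it is. Second, question 2 is what the FORCE_MATCH parameter of ACCEPT_SQL_PROFILE exists for; with FORCE_MATCH => TRUE the profile also matches the statement when its literal input values change:

SELECT sql_id, child_number, sql_profile
FROM   v$sql
WHERE  sql_profile IS NOT NULL;

BEGIN
  DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(
    task_name   => 'pc1_61d2dhmdwzc8d',
    replace     => TRUE,
    force_match => TRUE);   -- treat literals like binds for matching
END;
/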
We had a massive jump in cluster waits, specifically 'gc buffer busy acquire', during an RMAN backup. We traced the waits to a few hot blocks in a table that may well need rebuilding in terms of ITLs and PCTFREE (although I thought ASSM would manage PCTFREE and PCTUSED..).
What happens during an RMAN backup that could cause huge cluster waits on hot tables? Is there some crazy redo issue going on, or maybe flashback?
Either way the spike is there and we can pinpoint the activity on the database, but we just don't understand why RMAN would cause the issue. We have also just found that the LARGE_POOL has not been set; in fact there is no SGA_TARGET either! Could this have an effect on RMAN and cluster waits?
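If the missing large pool turns out to be a factor, a hedged sketch (the size is illustrative, not a recommendation, and this assumes an spfile): when I/O slaves are configured, RMAN's buffer memory is taken from the shared pool unless a large pool exists, so giving it an explicit value is a common first step:

ALTER SYSTEM SET large_pool_size = 256M SCOPE=BOTH;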
In the AWR report I saw some segments in the Segments by ITL Waits section. The number of waits reaches 100 in an 8-hour snapshot. Is that a small or a big number? Should I consider increasing INITRANS, or is there nothing to worry about because values of 80-100 are not too high? (A sketch follows.)
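For reference, a hedged sketch of raising INITRANS (the names are placeholders). The first statement only affects newly formatted blocks, so a MOVE (plus index rebuilds) is needed for existing blocks to pick the new value up:

ALTER TABLE app_owner.hot_table INITRANS 10;
ALTER TABLE app_owner.hot_table MOVE INITRANS 10;   -- rewrites existing blocks, invalidates indexes
ALTER INDEX app_owner.hot_table_pk REBUILD;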
I am trying to find some way to tune and optimize server performance in the following situation. There are hundreds of sessions inserting records into one table. The sessions are communication threads in a Java application; each thread receives messages that are to be stored in the table. Each message must be committed, and only then is an ACK sent to the remote client. Two problems arise, of course: heavy ITL contention on the table and lots of very small transactions. I can adjust the Java application, but can't do much about the design.
I was thinking about some "caching": if the messages were held in memory and bulk-inserted into the database by a single thread, the performance would be much higher. However, data could then be lost: a message could disappear from the memory cache after the client had already received the ACK.
We have a table (call it the PROBLEM table) that varies between 60 and 100 million records from one data archival process to the next. The table has data from all 50 states, is range partitioned by each state's unique code, and has a primary key based on the following:
STATE - the state the data is from
BMRK YEAR - the year the data was last benchmarked
AREA - the location the data is from
SERIES - the industry the data belongs to
YEAR - the year the data was reported
MONTH - the month the data was reported
DATA TYPE - the type of data that was reported
ESTIMATE TYPE - the type of statistical estimate produced from the data
REPORT ID - a report id unique to each state
REPT SFC - the state the reporter resided in
We have developed a job that processes data from other tables and creates some statistics, resulting in approx 2.2 million inserts, 0.5 million updates and 0.5 million deletes on the PROBLEM table during each job run. Once this table is loaded, another process (call it PROBLEM_PROCESS) takes off and reads the updated information to produce statistical data that is stored in another table. This data is produced by summing anywhere between 2 and 5000 records per query from the PROBLEM table. Here is the query (call it PROBLEM_QUERY):
select round(sum(cm_value*sample_weight),3),round(sum(pm_value*sample_weight),3)
from matched_sample
where state_fips_code = :1
and area_fips_code = :2
and series_type_code = 'B'
and year = :3
and month = :4
[code]...
When we run this job it is very fast (20 minutes) until it gets to the PROBLEM_QUERY; at that point we get 'db file scattered read' waits, 'db file sequential read' waits and async IO waits, and the job has run for as long as 2 days. When I look at Grid Control, the PROBLEM query is doing a full table scan and ignoring the index.
Our systems people advised us to re-organize and re-index the data in the PROBLEM table after all of the DML and before PROBLEM_PROCESS takes off. If we do this, it works and cuts our time down a lot. However, the re-organize and re-index process adds eight hours to the run. Note: when we re-organize the data we create a copy table on another set of tablespaces and insert the data into the copy, sorting it on the primary key columns. After this we re-index the new table and drop the old table, renaming the new table. On the next run we move the data back to the other set of tablespaces. So we move the data back and forth between tablespaces A and B, so to speak.
We have Oracle installed on an M5000 server with Solaris as the OS; the binaries and data files are stored on a NetApp storage array (model 3160) of 500 1 TB SATA 7200 RPM drives. However, there are 128 other databases on the NetApp filer as well.
IMO the array and the slow disks are the problem. I believe this because we are catering to the slow disks by re-organizing and re-indexing the PROBLEM table during each run, and I don't believe that should be necessary. In our production system we normally re-organize and re-index our data once a week, after many more transactions than this.
Our systems people state it is our application. Oracle Support tells us the statistics are out of date, but has not answered why the statistics go out of date and the index is abandoned after one run. (A sketch follows.)
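One hedged middle ground between doing nothing and the eight-hour reorg: re-gather optimizer statistics on the PROBLEM table (and its indexes) right after the bulk DML and before PROBLEM_PROCESS starts, since stale statistics alone can make the optimizer abandon the index for a full scan. A sketch, with the owner and degree as placeholders:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',
    tabname => 'MATCHED_SAMPLE',   -- the PROBLEM table queried above
    cascade => TRUE,               -- include its indexes
    degree  => 4);
END;
/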
We are experiencing network waits on one of our 2-node clustered databases: in every hour of clock time we find 700-900 seconds of network waits.
From the AWR data I find that "ARCH wait on SENDREQ" is one of the main constituents of these network waits, and as such I suspect the network between the production database and its corresponding standby database might be slow.
Question 1) Does this understanding look correct?
Question 2) Apart from the above, what could be the other causes of the network waits? Can we point to any particular area from the following AWR extract? Seeing some gc* waits, I initially thought it might be due to a slow interconnect between the cluster nodes, but some searching suggests that is not the case. So what else could be the cause? I mean, which network link should I check?
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 22631 22-May-13 10:00:11 976 7.9
End Snap: 22632 22-May-13 11:00:28 978 8.1
Elapsed: 60.29 (mins)
DB Time: 795.66 (mins)
[code].....
I am working for an agency which needs to synchronize (bi-directionally) a master-master database configuration (Oracle <-> DB2) between sites hosted in Washington, DC and Europe.
We are doing high-level estimates right now of in-house development, Oracle GoldenGate, DBMOTO, MQ, etc. for bi-directional synchronization between Oracle and DB2 in decentralized geographic locations.
We would also welcome production-level experience (perf. issues, maintenance, drawbacks) with any of the products mentioned.
How can I avoid the sort operation caused by an ORDER BY clause without changing the sort area size? What hints or changes should be made to the query so that the ORDER BY clause works faster? (A sketch follows.)
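A hedged sketch of the usual trick (table and column names are placeholders): if an index exists on the ORDER BY columns and the optimizer can use it, rows come back in index order and the sort step disappears. For a single-column index a NOT NULL constraint or an IS NOT NULL predicate is needed, because NULL keys are not stored in B-tree indexes:

CREATE INDEX emp_hiredate_ix ON emp (hiredate);

SELECT /*+ INDEX(e emp_hiredate_ix) */ empno, ename, hiredate
FROM   emp e
WHERE  hiredate IS NOT NULL
ORDER  BY hiredate;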
Is there any relationship between tuning the BUFFER CACHE and BUFFER BUSY WAITS?
1) Buffer busy waits happen when a user process finds that the same data block is already in use by another user in the BUFFER CACHE.
2) They also happen when the server process finds the same data blocks being used in the datafile. (See the sketch below.)
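A hedged first step for connecting the two: v$waitstat breaks buffer busy waits down by block class (data block, segment header, undo header, and so on), which indicates whether the contention is on data blocks in the cache at all:

SELECT class, count, time
FROM   v$waitstat
ORDER  BY time DESC;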
I'm trying to work out how to synchronize a source and target database in an Oracle CDC implementation. We're in the architecture design stage of a near-real-time Operational Data Store style solution. Both the ODS and our pilot source system will be Oracle 10g. Our plan is to use Oracle Asynchronous HotLog Change Data Capture to capture change data in near-real-time so that it can be applied to the ODS.
I understand the CDC apply process once the ODS and the source system are synchronized: DBMS_CDC_SUBSCRIBE.EXTEND_WINDOW to release the next window of change data, select from the publish views, then DBMS_CDC_SUBSCRIBE.PURGE_WINDOW to register that the change data is no longer required.
But how do we do the initial synchronization if the source system is live and contains data, and the ODS is new (and empty)? The easiest way would be to somehow flag EVERY row as change data, e.g. truncate every table and import, but this would not be so good for existing CDC subscribers.
A more logical way would be to:
- Take a hot backup of the live prod database
- Activate CDC (DBMS_CDC_SUBSCRIBE.ACTIVATE_SUBSCRIPTION) on the source system to start tracking changes
- Manually build the ODS from the snapshot
- Start applying changes from CDC
If the source system is live, how can we GUARANTEE that the first two steps (snapshot and ACTIVATE_SUBSCRIPTION) are performed at EXACTLY the same time (i.e. the same SCN)? (See the sketch below.)
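One commonly used pattern (a sketch, not necessarily the right fit here) is to stop chasing wall-clock simultaneity and anchor both steps to an SCN instead: activate the subscription first, record the current SCN, then build the ODS from a read-consistent export as of that SCN. Changes that land between activation and the export SCN simply arrive again through CDC and must be applied idempotently. The file and directory names below are placeholders:

SELECT dbms_flashback.get_system_change_number AS snap_scn FROM dual;

expdp system/password FULL=Y FLASHBACK_SCN=1234567890 DIRECTORY=data_pump_dir DUMPFILE=ods_seed.dmp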
The customer is sending data from a legacy system (source) via a web service, which in turn calls a package on the Oracle server (target). This package simply inserts the data passed by the legacy system into a master staging table in the Oracle database. When they started this process in Sep 2011, 4 lakh records were inserted into the staging table. In Oct 11 it was 0 records, Nov 11 2 lakh records, Dec 11 1 lakh records, Jan 12 1 lakh records, Feb 12 73k records, Mar 12 0 records, Apr 12 52k records.
As we can see, the number of records inserted into the table has dropped over time. What should be the starting point here? Since the web service calls the package on the fly, how can I enable trace for that package? I cannot replicate this in Dev as the process only runs in PROD. (A sketch follows.)
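A hedged sketch for the tracing part (the service name is hypothetical): since the package is invoked on the fly by web-service connections, DBMS_MONITOR can switch on SQL trace for every session that connects through a given service, without touching the package itself:

BEGIN
  DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
    service_name => 'LEGACY_WS_SVC',          -- placeholder: the service the web service connects through
    module_name  => DBMS_MONITOR.ALL_MODULES,
    waits        => TRUE,
    binds        => TRUE);
END;
/
-- and later: DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE(service_name => 'LEGACY_WS_SVC');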
One of the redo log members was corrupted; to overcome the problem I removed the corrupted redo log member and later added a new member to the group.
I would like to know: will the newly added log member get in sync with the existing log member, and how does it get in sync? (See the note below.)
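For reference, a hedged note with placeholder paths: a member added with the statement below starts out with STATUS = 'INVALID' in v$logfile and is simply written in parallel with the surviving member from the next time LGWR uses that group (i.e. after a log switch into it), so no explicit copy step is involved:

ALTER DATABASE ADD LOGFILE MEMBER '/u01/oradata/ORCL/redo01b.log' TO GROUP 1;
SELECT group#, member, status FROM v$logfile WHERE group# = 1;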
The following query hangs with either a 'sequential read' or a 'latch free' wait event. An important point is that the table self-joined in the subquery has no index at all. While it was hung I tried to get a trace of it and terminated it twice, so I haven't got the row source generation. The table has only 120,000 records and the statement should update 34,000 of them.
UPDATE invoice_header inv
SET inv.modified_due_date =
(SELECT inv1.btn_due_date
FROM invoice_header inv1
WHERE inv.dct_code = inv1.dct_code AND inv1.release = 'A5')
[code]...
During the 'sequential read' waits I used the p1,p2 values to find what the session was reading, and found that it was reading the table itself.
During the 'latch free' waits I found the following:
SELECT name, 'Child '||child#, gets, misses, sleeps
FROM v$latch_children
WHERE addr= (select p1raw from v$session_wait where sid=18)
UNION
[code]...
However, instead of the self join, I created a global temporary table as
create global temporary table t as select * from invoice_header where release='A5'
and used it in the update as
UPDATE invoice_header inv
SET inv.modified_due_date =
(SELECT t.btn_due_date
FROM t
WHERE inv.dct_code = t.dct_code AND t.release = 'A5')
WHERE inv.release = 'A5' AND inv.btn_due_date >= TRUNC (SYSDATE)
It updated the records in a second!
Questions:
1) Why does it produce a 'sequential read' wait event when there is no index access? Or rather, why is it doing single-block access when an FTS is required?
2) Why is the 'latch free' wait event here, and what does it indicate here with 'cache buffer handles'?
Is it because we are reading and updating the same segment?
Let me know in case the DDL of the table is required. It has all nullable columns and no index at all. Since it is 9i I am unable to use MERGE effectively in this case.
We have a Siebel server on a Windows machine and the DB server on a Unix box. We have made some modifications (added columns) at the Siebel Tools level. Now we want to run a DDL sync from the command-line prompt. How do we do the DDL sync?
We have a 10gR2 DR setup.
We are planning a cold backup of the primary database every Sunday.
Will it affect DR sync if the primary database is not available at the time of the cold backup?
I have my Oracle 10g database on Windows 7 and I have installed SQL Server 2005 on Windows Server 2003. Now I want to sync my Oracle 10g database with SQL Server 2005, meaning that whenever I make any changes to my Oracle table, they automatically take effect in SQL Server as well.
I have a few queries regarding standby databases.
1) On the primary database a standby redo log is required for switchover, and on the standby database a standby redo log is required for:
- Real-time apply
- Maximum protection or maximum availability
Am I correct?
2) My database is in maximum performance mode. I set up the following entry in init.ora:
LOG_ARCHIVE_DEST_2='service=standby LGWR ASYNC'. My question is: do I need a STANDBY redo log file on the standby database in order to use the LGWR transport (LGWR ASYNC) mode from the primary? Without a standby redo log on the standby database, can it transport redo data from primary to standby using the LGWR transport mode (LGWR ASYNC)? (See the sketch below.)
3) I have changed from the "ARCH" attribute to the "LGWR" attribute of the LOG_ARCHIVE_DEST_n initialization parameter, but I have not changed the protection mode. I would like to know whether there is any impact on the behavior of the database if we do not change the mode from "MAXIMUM PERFORMANCE" to "MAXIMUM AVAILABILITY".
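For question 2, a hedged sketch of adding standby redo logs on the standby (the paths, group numbers and the 512M size are placeholders; the usual guidance is to match the online redo log size and configure one more SRL group than there are ORL groups per thread):

ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/STDBY/srl04.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u01/oradata/STDBY/srl05.log') SIZE 512M;

SELECT group#, bytes/1024/1024 AS mb, status FROM v$standby_log;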