Performance Tuning :: Wrong REDO Log File Location
Feb 25, 2011
I tried to change the current redo log file locations to a multiplexed redo log configuration, and I made this change on the DB. I was importing a backup file into the DB when I made the change, but the import process runs much more slowly than it did under the old configuration, approximately 4 times slower.
When I restore the old redo log configuration, the import process runs normally. Why does this change hurt DB import performance? The old redo log file locations are below:
GROUP#  MEMBER                          BYTES
4       /crtest1/oradata/redo04a.log    524288000
4       /crtest1/oradata/redo04b.log    524288000
3       /crtest1/oradata/redo03a.log    524288000
3       /crtest1/oradata/redo03b.log    524288000
2       /crtest1/oradata/redo02a.log    524288000
2       /crtest1/oradata/redo02b.log    524288000
1       /crtest1/oradata/redo01a.log    524288000
1       /crtest1/oradata/redo01b.log    524288000
The new multiplexed redo log file locations are below:
GROUP#  MEMBER                          BYTES
4       /crtest1/oradata/redo04a.log    524288000
4       /opt/redolog/redo04b.log        524288000
3       /opt/redolog/redo03a.log        524288000
3       /usr/redolog/redo03b.log        524288000
2       /usr/redolog/redo02a.log        524288000
2       /disk1/redolog/redo02b.log      524288000
1       /disk1/redolog/redo01a.log      524288000
1       /crtest1/oradata/redo01b.log    524288000
I think the new configuration is better than the old one from a safety standpoint, since members of the same group are on different disks. Here are the disk partitions on the server:
-bash-3.00$ df -lh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9.6G 2.2G 7.3G 23% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
[code]....
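For reference, a multiplexed member is normally relocated by adding a new member in the target directory and then dropping the old one. A minimal sketch using the paths from the post (a member of the CURRENT group cannot be dropped until after a log switch):

ALTER DATABASE ADD LOGFILE MEMBER '/opt/redolog/redo04b.log' TO GROUP 4;
-- switch so group 4 is no longer CURRENT before dropping the old member
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE DROP LOGFILE MEMBER '/crtest1/oradata/redo04b.log';

If the new destinations sit on slower disks, or several members end up on the same physical disk, log writer waits during the heavy redo of an import would increase, which is one hedged explanation for the slowdown.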
View 4 Replies
Apr 6, 2011
Is the redo log file size an important DB performance factor only when the DB runs in ARCHIVELOG mode? In other words, if the DB runs in NOARCHIVELOG mode, does the redo log file size have no impact on DB performance?
View 3 Replies
View Related
Feb 10, 2011
I would like to make a change on the live system. I have read in a book that the redo log file size impacts DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What redo log file size is best for my Oracle database?
-- Optimal log file size:
select optimal_logfile_size
from v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
1842
[code]....
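If the advisor's figure is adopted, the usual approach is to add new, larger groups and drop the old ones once they are inactive. A rough sketch (group numbers, paths, and the 1845M size are illustrative, based on the advisor output above):

ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/redo05a.log') SIZE 1845M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/redo06a.log') SIZE 1845M;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- drop an old group only when V$LOG shows it as INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;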
View 9 Replies
View Related
Aug 16, 2013
This is a vendor app, and sometimes the dynamic SQL works great. Sometimes, not so much. Something is leading the optimizer to use an incorrect index.
1) Is there a memory setting I can use so that the optimizer determines the correct index to use?
2) How do I figure out what influenced the optimizer's decision to choose a particular index?
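A hedged sketch of two ways to see what the optimizer actually did (the sql_id below is a placeholder; ALLSTATS LAST needs plan statistics, e.g. the GATHER_PLAN_STATISTICS hint or STATISTICS_LEVEL=ALL):

-- show the plan and row-source statistics for a cursor already in the shared pool
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('9babjv8yq8ru3', NULL, 'ALLSTATS LAST'));

-- dump the optimizer's costing decisions (10053 trace) for the next hard parse in this session
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';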
View 3 Replies
View Related
Mar 17, 2006
How can I disable redo generation for DML statements?
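Redo cannot be switched off for conventional DML; only direct-path operations against NOLOGGING segments generate minimal redo (index maintenance and undo still produce some). A hedged sketch with hypothetical table names; note that NOLOGGING loads are unrecoverable until the next backup:

ALTER TABLE stage_tab NOLOGGING;
INSERT /*+ APPEND */ INTO stage_tab SELECT * FROM source_tab;  -- direct-path insert
COMMIT;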
View 25 Replies
View Related
Jun 18, 2012
In one of our environments I can see that the redo size is high. I am trying to understand why it is so large.
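One rough way to see where the redo is coming from is to rank sessions by the 'redo size' statistic, for example:

-- redo generated per session since connect (a coarse first check)
SELECT s.sid, s.username, st.value AS redo_bytes
  FROM v$sesstat st
  JOIN v$statname sn ON sn.statistic# = st.statistic#
  JOIN v$session  s  ON s.sid = st.sid
 WHERE sn.name = 'redo size'
 ORDER BY st.value DESC;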
View 18 Replies
View Related
Mar 24, 2013
How do you limit Oracle redo generation?
View 2 Replies
View Related
Jul 1, 2010
How can I find out which SQL statements are causing excessive redo generation?
View 3 Replies
View Related
Aug 13, 2010
This morning when I checked my archive logs, I was surprised to see that redo is being archived every 3 minutes, each file being 50 MB, which is the actual size of both log members. I am using a RAC database with a DR server. Usually the total number of redo logs for one day is 4 to 5, but from 10 pm yesterday to 7 am today, 109 log files of 50 MB each were generated.
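A quick way to see exactly when the burst started is to count log switches per hour from V$LOG_HISTORY, for example:

SELECT TRUNC(first_time, 'HH24') AS hour, COUNT(*) AS switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 2
 GROUP BY TRUNC(first_time, 'HH24')
 ORDER BY 1;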
View 2 Replies
View Related
Jun 18, 2013
My production DB has a couple of datafiles that were created in the wrong place, and they are tiny, 100 MB each. What is the best way to get rid of them?
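If a misplaced datafile is still empty, it can often be dropped directly; otherwise the file has to be emptied or relocated first. A hedged sketch with hypothetical tablespace and path names:

-- works only if the file contains no extents
ALTER TABLESPACE users DROP DATAFILE '/wrong/path/users02.dbf';
-- if it does hold data, one option is to move the segments out, or take the
-- tablespace offline, copy the file at the OS level, and then:
-- ALTER DATABASE RENAME FILE '/wrong/path/users02.dbf' TO '/right/path/users02.dbf';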
View 3 Replies
View Related
Aug 11, 2011
I need to capture a database trace for each page of my web application. I am issuing the following commands:
BEGIN
dbms_monitor.session_trace_enable(session_id=>122, serial_num=>NULL, waits=>true, binds=>true);
END;
Then I access the web application. Once it renders completely, I execute the following command:
BEGIN
dbms_monitor.session_trace_disable(session_id=>122);
END;
In the user_dump_dest folder, I expect to see a new trace file each time I execute these commands, but the same trace file keeps getting updated. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g on CentOS 5.x.
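For what it's worth, a quick way to confirm which trace file the traced session is currently writing to (SID 122 is taken from the commands above):

SELECT p.tracefile
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 WHERE s.sid = 122;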
View 3 Replies
View Related
Apr 10, 2013
I want to learn about trace files and TKPROF, with an example.
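As a minimal example (the trace file path and sort options are illustrative), a raw SQL trace from the diag trace directory can be formatted with tkprof from the shell:

tkprof /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_12345.trc report.txt sys=no sort=prsela,exeela,fchela

The resulting report.txt shows parse/execute/fetch counts, CPU and elapsed times, and the wait events for each statement in the trace.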
View 1 Replies
View Related
Oct 15, 2013
I am getting back into Oracle (after a long haul in an MS-only environment) and am now testing Oracle installs. I have been given the task of comparing 12c and 10.2g. I set up two VMs (exactly the same configuration) and used the same dmp file on both environments to restore data and settings for our jobs to run. We have some aggregated data and cubes with DIM tables, each being run on the VMs. We run nightly jobs to rebuild our cubes.
I am supposed to analyze the value of 12c, and I understand things might vary from company to company, but I am perplexed by my result: 12c is half the speed of 10.2g, with both environments the same out of the box, with the same dmp file and the same hardware.
I am using the same dmp file, with the same jobs on each machine, with both VMs having 10.2g or 12c installed out of the box as is. What default Oracle settings might have changed from 10.2g to 12c that could make the exact same environment run twice as slowly on 12c?
My expectation was that out of the box, with both machines running the same jobs on the same data (from the dmp files), 10.2g would be slower than 12c; instead, 12c takes twice as long to run the jobs. I have reviewed every possibility, as I know the problem is usually the person sitting in the chair and not the PC, but I confirmed everything was identical from one VM environment to the other, except the version of Oracle out of the box.
What could be done to bring the defaults back to at least equal time between the two? That would give me a great starting point. Otherwise, I would have to chalk this up to bloatware.
I have read up a bit on the CBO and know this might have changed in 12c. Is there a way to bring it back to an earlier, backward-compatible configuration, so as to at least match the execution plans between both environments?
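One hedged way to test whether optimizer changes alone explain the gap is to point the 12c optimizer back at the older behaviour and rerun the jobs (the exact version string should match the 10.2 patch level in use; this is a diagnostic step, not a recommendation to run this way permanently):

ALTER SYSTEM SET optimizer_features_enable = '10.2.0.5' SCOPE = BOTH;

If the 12c run times then match 10.2, the difference is most likely plan changes rather than anything else in the stack.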
View 19 Replies
View Related
Jan 13, 2011
Please find the attached tkprof'd file of the session.
I started the trace after the query had already started (upon the user's complaint).
However, even after tracing the session for more than 30 minutes, I cannot see where those 30 minutes are accounted for in this file.
View 11 Replies
View Related
Nov 16, 2011
I executed a query which ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (tkprof) for the same query. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.06          0         10          0           0
Fetch      758      0.03       1.77          0          0          0       11345
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      760      0.03       1.83          0         10          0       11345
[code]....
View 1 Replies
View Related
Apr 13, 2011
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days ago, but from the ninth day back it jumped and has consistently shown double figures. I ran AWR reports on all three nodes, found the one node with high CPU utilization, and it shows the following top events:
EVENT                           WAITS     TIME(S)   AVG WAIT(MS)   %TOTAL CALL TIME   WAIT CLASS
CPU time                                    5,802                        34.9
RFS ping                           15       5,118         33,671        30.8           Other
Log file sequential read      234,831       5,036             21        30.3           System I/O
SQL*Net more data from client  24,171       1,087             45         6.5           Network
Db file sequential read       130,939         453              3         2.7           User I/O
Findings:
In the AWR report (file attached) for node sipd207, we can see that the "RFS ping" wait event accounts for 30% of the waits and the "log file sequential read" wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer?
2) I feel the network wait can be reduced by tweaking the SDU and TDU values based on the MTU.
View 2 Replies
View Related
Apr 21, 2011
I ran an scp command to transfer a file from a local server to a remote server. When I try to kill the process, it gives an error:
ksh: kill: 19258: No such process
If I do ps -ef | grep scp it shows the process running, but if I try to kill it, it shows me that error.
View 7 Replies
View Related
Jul 12, 2010
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database, i.e. the physical structures? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required or available (third party as well as Oracle-supplied) for these types of tuning scenarios?
View 1 Replies
View Related
Aug 30, 2013
I want to upload a CSV file from a share location (on another host) and store its data in a table.
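Assuming the share can be mounted (or the file copied) so the database server can see it, one common route is SQL*Loader. A minimal sketch; the path, table, and column names are all hypothetical:

-- load_csv.ctl
LOAD DATA
INFILE '/mnt/share/data.csv'
APPEND
INTO TABLE csv_data
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)

-- run from the shell:
-- sqlldr userid=scott/tiger control=load_csv.ctl log=load_csv.log

An external table over a DIRECTORY object pointing at the mounted path is the other usual option.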
View 2 Replies
View Related
Sep 16, 2013
I have a simple external table
CREATE TABLE xyz
(
UNIQUE_ID VARCHAR2(255 BYTE),
FULL_BUSINESS_NAME VARCHAR2(255 BYTE),
ADDRESS_1 VARCHAR2(255 BYTE),
ADDRESS_2 VARCHAR2(255 BYTE),
CITY VARCHAR2(255 BYTE),
[code].......
I need to make the location parameterized so we don't have to hard-code the file name file1. Can I do this in Oracle, or do I need to do this in a shell script on the UNIX server?
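One option, as a hedged sketch, is to leave the table definition alone and switch the file from the calling job at run time (the file name here is a placeholder):

ALTER TABLE xyz LOCATION ('file2.csv');

The ALTER can be issued from PL/SQL with EXECUTE IMMEDIATE, so the file name can come from a parameter or a control table rather than being hard-coded.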
View 4 Replies
View Related
Oct 1, 2011
What exactly does a redo log file contain? Does it contain the DDL or DML statements themselves, or more than that?
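One way to look inside the redo stream is LogMiner. A rough sketch, assuming the online catalog is used as the dictionary; the log file path is a placeholder:

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('/crtest1/oradata/redo01a.log', DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
SELECT operation, sql_redo FROM v$logmnr_contents WHERE ROWNUM <= 20;
EXEC DBMS_LOGMNR.END_LOGMNR;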
View 1 Replies
View Related
Jul 14, 2011
I just started working for a company and am modifying existing reports. This report has a logo in the layout in the form of an image. Is there a way to find out where this image is located? The report cannot find the image when I open the layout or run it, so I assume I am not mapped to the drive/directory where the images are located. Is this information noted anywhere in the actual report?
View 7 Replies
View Related
Aug 2, 2013
I need to take an RMAN full backup every Sunday night and Wednesday night. I have the NetBackup script which will take the backup to media.
Questions:
1. We have 15 backup scripts for 15 DBs on that server. If I configure crontab for the backups, do I need to list all 15 scripts one by one, i.e.
00 20 * * 3,7 /app/oracle/rman/scripts/hot_db_<sid>.sh
or will a wildcard (*) work instead, as sketched below, i.e.
00 20 * * 3,7 /app/oracle/rman/scripts/hot_db_*.sh
2. My backup already creates a .output file in the same location; do I need to give an output file location in crontab using ( > )?
3. Is the above crontab timing correct, i.e. (3,7) for Wednesday and Sunday at 8 pm? And is Sunday 0 (zero) or 7?
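As a hedged sketch (the wrapper script name and log path are hypothetical), rather than relying on a glob in the crontab line itself, one entry can call a small wrapper that loops over the per-database scripts; in most cron implementations both 0 and 7 mean Sunday:

# m  h  dom mon dow  command
00 20 *   *   0,3    /app/oracle/rman/scripts/run_all_backups.sh >> /app/oracle/rman/logs/backup.log 2>&1

# run_all_backups.sh
#!/bin/sh
for f in /app/oracle/rman/scripts/hot_db_*.sh; do
    "$f"
done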
View 6 Replies
View Related
Aug 3, 2011
I'm trying to restore my spfile (and later the controlfile) to a new location:
RMAN> connect catalog rman11cv/pwd@metarep
Connected...
RMAN> connect target
connected to target database: DATABASE (DBID=510270843)
RMAN> run
2> {
3> allocate channel ch1 type 'sbt_tape'
4> PARMS="BLKSIZE=262144,ENV=(CV_mmsApiVsn=2,CV_channelPar=ch1)"
5> TRACE 0;
6> Restore spfile to pfile 'y:\restorepfile.ora';
7> }
allocated channel: ch1
channel ch1: sid=537 devtype=SBT_TAPE
channel ch1: CommVault Systems for Oracle: Version 9.0.0(BUILD84)
Starting restore at 2011-08-03
released channel: ch1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: restore-command failed at 08/03/2011 11:39:26
ORA-27191: sbtinfo2 returned an error
Backups of the spfile and control file do exist:
RMAN> list backup of spfile
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
736698  Full    256.00K    SBT_TAPE    00:00:04     2011-08-02
        BP Key: 736705   Status: AVAILABLE  Compressed: NO  Tag: TAG20110802T181240
        Handle: dkmj0i8p_1_1   Media: V_73870
  SPFILE Included: Modification time: 2011-08-02
View 5 Replies
View Related
Jul 6, 2010
I was wondering if there is any way to know in which tablespace and datafile my table is located. I have exported a table and am about to delete it, as I am partitioning it.
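As a quick check (the owner and table names below are placeholders), DBA_EXTENTS joined to DBA_DATA_FILES shows which tablespace and datafiles hold a table:

SELECT DISTINCT e.tablespace_name, f.file_name
  FROM dba_extents e
  JOIN dba_data_files f ON f.file_id = e.file_id
 WHERE e.owner = 'SCOTT'
   AND e.segment_name = 'EMP';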
View 9 Replies
View Related
Aug 2, 2010
I am creating a physical standby database with the RMAN DUPLICATE command from a 2-node RAC cluster. RMAN did all its work. Now I am trying to start the MRP process on the physical standby database, and I am getting the following errors:
------------------------------------------------------------
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
returning error ORA-16191
------------------------------------------------------------
ORA-16191: Primary log shipping client not logged on standby
------------------------------------------------------------
I copied the same password file from the primary to the standby and verified it many times, but I still get the same error.
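For reference, a common fix for ORA-16191 is to recreate the password file on the primary with orapwd and copy that exact file to the standby (the file name and password below are placeholders; on Unix the file is normally $ORACLE_HOME/dbs/orapw<SID>):

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=MySysPwd entries=10

After copying, both instances should show remote_login_passwordfile set to EXCLUSIVE (or SHARED), and the SYS password used for log shipping must match on both sides.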
View 4 Replies
View Related
Oct 31, 2011
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I am running the following MERGE query, which takes 13 hours to update the records, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */
INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid,
detail_rerate_flag_code,
rerate_sel_key,
[code].....
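One hedged observation, not a guaranteed fix: the PARALLEL hint on the MERGE target only parallelises the DML part of the statement if parallel DML has been enabled for the session, e.g.:

ALTER SESSION ENABLE PARALLEL DML;

Without it, the SELECT side may run in parallel while the update of DWH_BILL_DET stays serial.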
View 39 Replies
View Related
Sep 30, 2010
How does the width of a column affect index performance?
For example, if I had an IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if in column "job" I stored not names but numbers identifying the job names, e.g. storing "1" instead of 'ANALYST' and "2" instead of 'NIGHT_WORKED'?
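For concreteness, a sketch of the IOT as described in the post:

CREATE TABLE emp_iot (
  id    NUMBER,
  job   VARCHAR2(20),
  time  DATE,
  plan  NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job, time)
) ORGANIZATION INDEX;

Since the key columns form the index entries themselves in an IOT, a shorter JOB value (a small NUMBER code instead of a VARCHAR2 name) does shrink each entry, though whether that is measurable depends on row size and access patterns.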
View 24 Replies
View Related
Oct 4, 2013
We have taken an export (expdp) backup from the production database (the primary database in a Data Guard configuration).
1) The import (impdp) is very slow, about 10 GB/hr, on the staging database (Data Guard MAXIMUM AVAILABILITY), even though the server configuration, database version and configuration, and operating system are all the same as production. There are no blocking, locking, or waiting sessions.
2) The import (impdp) is fast, about 90 GB/hr, on a standalone test database, and this test database is running in NOARCHIVELOG mode with Oracle Standard Edition; beyond that there is no difference.
CPU, memory, network, and disk I/O all look normal while importing on both databases. Why is there that much difference in import speed?
View 1 Replies
View Related
Aug 20, 2010
This is about redo generation. I found the statement below in another forum: "The undo segment generates redo data as well, because undo segment changes are database changes, so they generate redo data too."
How can an undo segment generate both redo and undo data?
View 12 Replies
View Related