Performance Tuning :: Tablespace With Different Block Size Inside Same Database?
Nov 25, 2011
All the analysis done so far proves that our system is clearly I/O bound, and db file sequential read is the biggest culprit.
We have even identified the index being affected by sequential reads. I am thinking of creating a new tablespace with a 32K block size (currently all tablespaces are 8K) and migrating this index to the new tablespace. That way, Oracle will have to do fewer reads to get the required data.
But is there anything wrong with having just one tablespace with a different block size? Or is there anything that I have to be watchful about while doing it?
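For reference, a minimal sketch of what that setup involves (names, sizes, and paths are illustrative): a tablespace with a non-default block size needs its own buffer cache configured first.

ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH;

CREATE TABLESPACE ts_idx_32k
   DATAFILE '/u01/oradata/ts_idx_32k_01.dbf' SIZE 2G
   BLOCKSIZE 32K;

ALTER INDEX big_index REBUILD TABLESPACE ts_idx_32k;

One thing to be watchful about: that 32K cache is a separate memory pool that must be sized manually and is not auto-tuned along with the rest of the SGA.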
I have written this code and I am facing the error ORA-04030: out of process memory when trying to allocate 16408 bytes.
/* Formatted on 2011/11/26 11:52 (Formatter Plus v4.8) */
DECLARE
   row_id       VARCHAR2 (50);
   v_batch_id   temp.batch_id%TYPE;
   v_slab_id    temp.slab_id%TYPE;
   flag         NUMBER (2);
   num          VARCHAR2 (50) := &row_id;
I am running Oracle 10.2.0.1.0 on MS Windows 2003 Server 64-bit with 16G RAM.
Here are the findings for my Oracle database.
SQL> select * from v$sgainfo;

NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      1293560 No
Redo Buffers                        7094272 No
Buffer Cache Size                 830472192 Yes
[code]...
I find that the SGA component "Buffer Cache" has been shrinking from 1.8G at startup down to 0.8G now. On the other hand, the "Shared Pool" component has grown from 0.3G at startup to 1.2G now. I noticed that there are about 100 shrink operations on the "Buffer Cache", with matching growth of the "Shared Pool", in Oracle every day. Is that an indicator that I should raise SGA_MAX_SIZE?
I tried to increase SGA_MAX_SIZE to 4G, but I could not start Oracle afterwards. Is that a limitation of MS Windows (the OS) or of Oracle? I then set SGA_MAX_SIZE to 3G, and this time I could start Oracle. What is the optimum/maximum value I can set for SGA_MAX_SIZE? Is there any adverse effect or concern when setting SGA_MAX_SIZE to more than 2G?
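One way to confirm the churn (assuming SELECT access to the view) is to count the automatic resize operations per SGA component:

SELECT component, oper_type, COUNT(*) AS ops
  FROM v$sga_resize_ops
 GROUP BY component, oper_type
 ORDER BY component;

If the buffer cache shows a steady stream of SHRINK operations paired with GROW operations on the shared pool, that is the auto-tuning behaviour described above.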
Objective: to find a solution to archive data from the two big tables that occupy the most space in the database. With current data (from Jan 2005 to Sept 2011) they have the record counts mentioned below:
We need to load data and run monthly batches from October 2011 up to the current month, which will increase this space further.
1. The issue is that we will not have that much space available.
2. Maintenance of such tables is difficult now, and there is a huge impact on performance. Can we consider partitioning the table based on date, since we query the first table on certain date ranges (see the sketch after this list)?
3. Most reports use this table, which creates performance issues.
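As a sketch of the partitioning idea from point 2 (table and column names are hypothetical), monthly range partitions let date-range queries prune to a few partitions and let old months be archived or dropped individually:

CREATE TABLE txn_history (
   txn_id   NUMBER,
   txn_date DATE,
   amount   NUMBER
)
PARTITION BY RANGE (txn_date) (
   PARTITION p_2005_01 VALUES LESS THAN (DATE '2005-02-01'),
   PARTITION p_2005_02 VALUES LESS THAN (DATE '2005-03-01'),
   PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);

-- archiving an old month then becomes a metadata operation
ALTER TABLE txn_history DROP PARTITION p_2005_01;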
Redo log file size is an important DB performance issue when the DB runs in ARCHIVELOG mode. If the DB runs in NOARCHIVELOG mode, the redo log file size does not impact DB performance.
I would like to make a change on the live system! I have read a book and found information about how redo log file size impacts DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor says the optimal log file size is 1845 MB. What redo log file size is best for my Oracle database?
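For what it's worth, the advisor's figure can be re-checked directly (the OPTIMAL_LOGFILE_SIZE column is only populated when FAST_START_MTTR_TARGET is set):

SELECT optimal_logfile_size FROM v$instance_recovery;

The value is reported in megabytes.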
We have a table emp_details with 23,772,889 records. Our requirement is to increase the size of a few of the columns in emp_details. We are using the ALTER statement below, which is taking around 2 hours.
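The post's actual statement was not included; a hypothetical example of the kind of change described might look like:

ALTER TABLE emp_details MODIFY (first_name VARCHAR2(100), last_name VARCHAR2(100));  -- hypothetical columns

Note that simply widening a VARCHAR2 or NUMBER column is normally a dictionary-only change and should be near-instant regardless of row count.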
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     96000
undo_tablespace                      string      undo
Following are the details from the AWR report (00:00 till 01:00 of 21-Apr-2013); note that the error was produced at 00:42.
Undo Segment Summary                 DB/Inst: DBCPY/dbcpy01  Snaps: 18853-18854
-> Min/Max TR (mins) - Min and Max Tuned Retention (minutes)
-> STO - Snapshot Too Old count, OOS - Out of Space count
-> Undo segment block stats:
-> uS - unexpired Stolen, uR - unexpired Released, uU - unexpired reUsed
[code]....
Undo Advisor information taken 'now' is as follows:
SQL> select dbms_undo_adv.longest_query(sysdate-2, sysdate) from dual;

DBMS_UNDO_ADV.LONGEST_QUERY(SYSDATE-2,SYSDATE)
----------------------------------------------
                                        379650

SQL> select dbms_undo_adv.required_retention from dual;
[code]....
In the above situation, what should be my first choice (assuming increasing space is not an issue): increasing the undo tablespace or increasing undo retention?
If the latter is the choice, then what should the value be? As I understand it, the present value of 96000 is taken as a lower limit, and because of auto-tuning the actual value being used (TUNED_UNDORETENTION) was 345600. In that case, shall I set it to something greater than max(maxquerylen), i.e. 379,650 + X? Or shall I increase the undo tablespace size?
From the Undo Advisor output it looks to me that even if I increase the undo retention to 379650, the current undo size will be able to support it (maybe at the expense of DMLs). Is that right?
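To see the figures this reasoning relies on, both values can be read from V$UNDOSTAT (assuming SELECT access to the view):

SELECT MAX(maxquerylen)         AS longest_query_secs,
       MAX(tuned_undoretention) AS max_tuned_retention_secs
  FROM v$undostat;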
If I have NO datafiles other than those of the default block size, would I need to define a size for those other buffer pools? Is there any process that would benefit from these pools?
I am using Oracle version 11.2.0.3.0. We are planning to move some ~40 tables/indexes to a new encrypted tablespace as part of TDE (transparent data encryption). Currently three tables are ~30GB in size, one is ~800GB, and the others are <2GB each; the tables/indexes are spread across different tablespaces.
Should I create as many encrypted tablespaces as there were unencrypted ones, or should I create one encrypted tablespace and move all the tables/indexes into it?
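As a sketch of the single-tablespace option (names and sizes are hypothetical, and it assumes the wallet is already configured and open):

CREATE TABLESPACE ts_enc
   DATAFILE '/u01/oradata/ts_enc_01.dbf' SIZE 10G
   ENCRYPTION USING 'AES256'
   DEFAULT STORAGE (ENCRYPT);

ALTER TABLE big_tab MOVE TABLESPACE ts_enc;
ALTER INDEX big_tab_ix REBUILD TABLESPACE ts_enc;

Note that the MOVE/REBUILD rewrites every segment, so the ~800GB table will need comparable free space and time.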
I don't have any DBA privileges. Can you share a script which can tell how many blocks my query is fetching, with or without indexes? How do I also get the buffer hit ratio, and how can I get I/O figures without SQL trace, as I don't have access to dump_dest?
I have the below query:
SELECT DISTINCT ser_id AS STA_ser_id, rct_name AS STA_name
  FROM sd_servicecalls, rep_codes, rep_codes_text
 WHERE ser_sta_oid = rcd_oid
   AND rcd_oid = rct_rcd_oid
   AND rct_name IN ('New', 'Awaiting Approval', 'Approved', 'In Progress',
                    'Awaiting Supplier', 'Awaiting RFC', 'Awaiting Release',
                    'Pending Release', 'On Hold', 'Resolved', 'Implemented',
                    'Closed');
Does a large hash value in the explain plan mean more resources are needed and more time to execute the query? How can I use ADDM for the above SQL?
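On the block-count question, one privilege-light approach (it assumes only SELECT on V$MYSTAT and V$STATNAME, not DBA rights) is to snapshot your own session statistics before and after running the query; the deltas are the work done:

SELECT sn.name, ms.value
  FROM v$mystat ms
  JOIN v$statname sn ON sn.statistic# = ms.statistic#
 WHERE sn.name IN ('consistent gets', 'physical reads');

'consistent gets' counts logical (buffer) reads, and 'physical reads' counts blocks read from disk.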
We have a situation where both undo tablespaces were almost full, i.e. UNDOTBS1 99% and UNDOTBS2 100% filled, so I added datafiles to them. Then I found a lot of blocking sessions and was killing them through EM. I then stopped my front-end listener and also brought the service down. Now I don't have any blocking sessions, but EM is showing a big WAIT. The alert log shows nothing serious; it was showing a deadlock, but that is over now as well.
I am looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance is made up of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database, i.e. the physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third-party as well as Oracle-supplied) for these types of tuning scenarios?
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I have just forgotten the name and can't recall it. What is this type of row-reduction optimization called?
Testing our 9i to 11g upgrade, we've imported the entire DB onto the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not like we have this problem with every procedure; it's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
One of our customers has a problem with the following SQL statement:
SELECT c.table_name, c.column_name
  FROM user_tab_columns c, user_tables t
 WHERE c.table_name = t.table_name
   AND c.data_type IN ('CLOB', 'BLOB');
During execution it consumes the entire TEMP tablespace (8GB).
I gathered dictionary stats (dbms_stats.gather_dictionary_stats(estimate_percent => NULL)), but that doesn't resolve the problem. The above SQL statement works fine with the RULE hint, but I want to know the reason for the problem with the temporary tablespace.
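While the statement runs, the temp consumption can be watched from another session (assuming access to the V$ views; this is the 10g view name):

SELECT s.sid, s.username, u.segtype, u.blocks
  FROM v$tempseg_usage u
  JOIN v$session s ON s.saddr = u.session_addr
 ORDER BY u.blocks DESC;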
I can't understand the following cursor declaration (inside the DECLARE section of a PL/SQL block):
CURSOR c_emps IS
   SELECT emp_large_ot (empno, ename, job, mgr, hiredate, sal, comm, deptno)
     FROM emp_large;

emp_large_ot is an object type created as:

CREATE TYPE emp_large_ot AS OBJECT (
   empno NUMBER,
   ename VARCHAR2(10),
   job   VARCHAR2(9)
[code]...
and emp_large is similar to the standard EMP table.
While writing a procedure I ran into this problem. Whenever I run the query SELECT * FROM dba_pending_transactions on its own, it works fine.
But whenever I use the same SELECT query inside a PL/SQL block, it gives the error "table or view does not exist". DBA_PENDING_TRANSACTIONS is a view.
SQL> declare
  2     v_count number(2);
  3  begin
  4     execute immediate 'select count(*) from dba_pending_transactions' into v_count;
  5     dbms_output.put_line(v_count);
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-00942: table or view does not exist
ORA-06512: at line 4
I get the same error when I use it inside a procedure.
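One likely cause, though the grants aren't shown here: access to DBA_* views often comes via a role such as SELECT_CATALOG_ROLE, and role-granted privileges are disabled inside definer's-rights PL/SQL. A direct object grant, run as a DBA, avoids that (the grantee name is hypothetical):

GRANT SELECT ON sys.dba_pending_transactions TO app_owner;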
Can anyone give insight into the overheads of mutually authenticated SSL for database connections? This is over a fast local network, to a RAC cluster, with a DB firewall in front. There's always a large element of "it depends".
The information I'm interested in includes latency for initial session setup and for subsequent data transfer, the increase in network packet size, and the increase in CPU cost for the database server. I guess there are some implications for session memory usage as well.
I have one big database which I need to migrate to Oracle, because Oracle handles big databases better than other databases. I wrote the transfer software and everything works great, except for one thing: during the process I found that Oracle keeps filling the redo log and undo segments. My question is how to migrate (or whether I can migrate) the database to Oracle without filling undo (deactivating this process), and afterwards put the database back to working normally, because I just need to transfer the data as-is, and from that point Oracle takes over...
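If direct-path loading fits the scenario, a minimal sketch (table names are hypothetical): INSERT /*+ APPEND */ generates almost no undo for the table data, and combined with NOLOGGING almost no redo either.

ALTER TABLE target_tab NOLOGGING;

INSERT /*+ APPEND */ INTO target_tab
SELECT * FROM source_tab;
COMMIT;

ALTER TABLE target_tab LOGGING;

Take a backup afterwards, since NOLOGGING changes cannot be recovered from the archived logs.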
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
I have one problem with trigger execution. I have a small PL/SQL block in a trigger, and I want to execute it dynamically, but it is giving an error. Please find the trigger code below. My intention is that the column name used in the trigger should be dynamic, so that in future, if I want to switch the column name, I can do it without modifying the trigger. The error I'm getting is "ORA-01008: not all variables bound".
CREATE OR REPLACE TRIGGER ETM_AR_IU
   AFTER UPDATE ON EXTERNAL_MAPPING
   REFERENCING NEW AS NEW OLD AS OLD
   FOR EACH ROW
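Since the rest of the trigger isn't shown, here is a minimal sketch of the usual ORA-01008 fix (table, column, and variable names are hypothetical): an identifier cannot be a bind variable, so the column name is concatenated into the statement text, while the value is bound through USING; a :placeholder without a matching USING argument is exactly what raises ORA-01008.

CREATE OR REPLACE TRIGGER demo_aiu
   AFTER UPDATE ON demo_tab
   FOR EACH ROW
DECLARE
   v_col  VARCHAR2 (30)  := 'OLD_STATUS';   -- switchable column name (hypothetical)
   v_stmt VARCHAR2 (200);
BEGIN
   v_stmt := 'INSERT INTO demo_hist (' || v_col || ', changed_on) '
          || 'VALUES (:1, SYSDATE)';
   EXECUTE IMMEDIATE v_stmt USING :OLD.status;   -- bind the value, not the name
END;
/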