Performance Tuning :: Used Same Dmp File To Restore Data And Settings
Oct 15, 2013
I am getting back into Oracle (after a long haul in an MS-only environment) and am now testing Oracle installs. I have been given the task of seeing the difference between 12c and 10gR2. I set up 2 VMs (exactly the same configs) and used the same dmp file (in both environments) to restore data and settings for our jobs to run. We have some aggregated data, and cubes with DIM tables, each being run on the VMs. We run nightly jobs to rebuild our cubes.
I am supposed to see/analyze the value of 12c, and I understand things might vary from company to company, but I am perplexed at my result: 12c is half the speed of 10gR2, even though both environments are the same out of the box, with the same dmp file and the same hardware.
I am using the same dmp file, with the same jobs on each machine, with both VMs having 10gR2 or 12c installed out of the box as-is. What default Oracle settings might have changed from 10gR2 to 12c that could make the exact same environment run twice as slow on 12c?
The expectation was that, out of the box, with both machines running the same jobs on the same data (from the dmp files), 10gR2 would be slower than 12c; instead, 12c takes 2 times as long to run the jobs. I have reviewed every possibility, as I know the problem is usually the person sitting in the chair and not the PC, but I confirmed everything was identical from one VM environment to the other, except the version of Oracle out of the box.
What could be done to bring those default settings back to at least equal time between the 2? That would give me a great starting point. Otherwise, I would have to chalk this up to bloatware.
I read up a bit on the CBO and know this might have changed in 12c. Is there a way to bring it back to an earlier config, so as to at least match the execution plans in both environments?
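One hedged starting point, not a full answer (many defaults besides the optimizer changed between releases): the OPTIMIZER_FEATURES_ENABLE parameter tells the 12c CBO to plan as an earlier release did, which at least makes the execution plans comparable across the two VMs:

-- session-wide, for testing a single job:
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';
-- or instance-wide (requires an spfile):
ALTER SYSTEM SET optimizer_features_enable = '10.2.0.4' SCOPE=BOTH;

If the 12c plans then match 10gR2 and the runtime gap closes, the regression is plan-related; if not, compare statistics and memory parameters next.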
We are running Oracle 9i on Windows 2000 Server; Oracle is the only database running on the server. The RAM capacity is 4 GB, but the SGA showing is as below:
Total System Global Area  118255568 bytes
Fixed Size                   282576 bytes
Variable Size              83886080 bytes
Database Buffers           33554432 bytes
Redo Buffers                 532480 bytes
PGA size:

NAME                        VALUE
--------------------------- ----------
aggregate PGA auto target   0
global memory bound         0
total expected memory       0
total PGA inuse             19937280
[code]...
How can we make use of the 4 GB of RAM, deducting roughly 20% for the OS? Sometimes users are complaining about slow processing.
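A minimal sizing sketch for 9i on a 4 GB box, leaving roughly 20% for the OS (these init.ora values are illustrative assumptions, not recommendations, and a 32-bit Windows build will cap the usable SGA well below 4 GB unless /3GB-style options are used):

# init.ora sketch -- test before applying in production
sga_max_size         = 1600M
db_cache_size        = 1024M
shared_pool_size     = 300M
pga_aggregate_target = 800M
workarea_size_policy = AUTO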
I have a few queries on PGA memory management. Since these queries are based on 2-3 examples, not exactly the same by nature, I am summarising my understanding of them below.
As I understand it, many work areas can be allocated to a single SQL statement, and the number and sizes of these work areas are controlled internally by Oracle when automatic PGA memory management is in effect (PGA_AGGREGATE_TARGET set and WORKAREA_SIZE_POLICY=AUTO). Since many sessions share the PGA memory, the amount of memory available to each session may vary, and if not enough memory is available to a session for sorting, the TEMP tablespace is used.
[1] Can we say that paging happens at this time, and can it be checked?
[2] Is there a difference in how memory is handled while populating PL/SQL tables?
I have encountered ORA-04030 while some of our developers were populating PL/SQL tables, but never encountered this error for sorting, hash joins, etc. Though I don't remember the width of the PL/SQL table, I am sure the developer used the LIMIT clause during BULK COLLECT and still faced the issue.
With a single session on the server, I noticed that the values displayed by the Linux 'free' command and the output values from v$sesstat did not match at all, even though no heavy OS process was running during the period. I was expecting the 'used' and 'free' values displayed by the free command to change by approximately the difference between the before and after values of session PGA memory.
[3] Isn't it expected to match?
[4] Can we say that in a dedicated server configuration, at any moment in time, the SUM of 'session pga memory' across all sessions represents all the PGA memory used by the instance (i.e., everything outside the SGA) at that point in time?
select sum(value)/1024/1024 "memory in MB" from v$sesstat where statistic#=20;
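Note that STATISTIC# is not guaranteed to be stable across versions, so hard-coding 20 is fragile; a safer variant of the same query resolves the statistic by name:

select sn.name, sum(ss.value)/1024/1024 "memory in MB"
from v$sesstat ss, v$statname sn
where ss.statistic# = sn.statistic#
and sn.name = 'session pga memory'
group by sn.name;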
During one of the tests I got the following output (values are divided by 10 for readability and to avoid formatting issues):
SQL> select a.name, to_char(b.value/10, '999,999,999') value
  2  from v$statname a, v$mystat b
  3  where a.statistic# = b.statistic#
  4  and a.name like '%ga memory%';
[code]...
The above query still shows these values even though the PL/SQL block finished executing 30 minutes earlier.
[5] Do we call this a 'memory leak', where memory is not released even after some time has passed since the session last did something? Of course I am not checking at the OS level; as mentioned in question [3] above, the values won't match!
Still, the output of the free command for reference (after the PL/SQL block executed):
SQL> select * from v$pgastat;

NAME                             VALUE      UNIT
aggregate PGA target parameter   524288000  bytes
aggregate PGA auto target        456256512  bytes
global memory bound              26214400   bytes
total PGA inuse                  17328128   bytes
[code]...
[6] What could be the significance of negative values of 'session pga memory/max'?
Lastly, we have an OLTP system, and at night we run batch processes in 2-4 sessions.
Suppose I have 10 GB of RAM and a PGA setting of 3.5 GB. I want the batch-process sessions to use the maximum possible memory during the night and then toggle the setting back in the morning.
[7] With the above settings (10 GB RAM and 3.5 GB PGA), how can I divide the memory among the 4 sessions?
Shall I set: 1) PGA_AGGREGATE_TARGET=0, 2) WORKAREA_SIZE_POLICY=MANUAL, 3) SORT_AREA_SIZE, 4) HASH_AREA_SIZE?
[8] What would be approximate values for parameters 3 and 4? Would it simply be 3.5 GB / 4?
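For what it's worth, a hedged sketch of the manual approach, set only in the batch sessions themselves (caveat: SORT_AREA_SIZE and HASH_AREA_SIZE are per work area, not per session, so one statement with several sorts and joins can consume a multiple of these values; the sizes below are illustrative, not a recommendation):

-- run inside each nightly batch session
alter session set workarea_size_policy = MANUAL;
alter session set sort_area_size = 524288000;   -- ~500 MB per sort work area
alter session set hash_area_size = 524288000;   -- ~500 MB per hash work area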
I need to take a database trace of each page of my web application. I am issuing the following commands:
BEGIN dbms_monitor.session_trace_enable(session_id=>122, serial_num=>NULL, waits=>true, binds=>true); END;
Then I access the web application. Once it renders completely, I execute the following command:
BEGIN dbms_monitor.session_trace_disable(session_id=>122); END;
In the user_dump_dest folder, I expect to see a new trace file whenever I execute these commands, but the same trace file keeps getting updated. How do I make Oracle create a new trace file for each iteration? I am using Oracle 11g Release on CentOS 5.x.
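The trace file name is tied to the server process, so re-enabling tracing for the same session keeps appending to the same file. One workaround, if a statement can be executed in the traced session itself (it has no effect when issued from another session), is to change TRACEFILE_IDENTIFIER between iterations, which forces a new file name:

-- issued in the traced session before each page test; 'page_test_1' is a placeholder
ALTER SESSION SET tracefile_identifier = 'page_test_1';

Alternatively, reconnecting between iterations creates a new server process and therefore a new trace file.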
What are the principal things to look at when we get different performance results for the same query? I have 2 different databases: the plan and the data are the same, but the performance results are very different.
The REDO log file size is an important DB performance factor when the DB runs in ARCHIVELOG mode. If the DB runs in NOARCHIVELOG mode, the REDO log file size does not impact DB performance.
The control files were corrupted due to some issue, so I had to restore the most recent backed-up control file and then restore the database.
The weird thing I noticed is this: I restored the control file that was backed up yesterday, so I would expect the backup pieces it picks to be from yesterday; however, the backup pieces are getting restored from last week's backup.
I would like to make a change on the live system! I read in a book that the REDO log file size impacts DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What REDO log file size is best for my Oracle database?
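For reference, the advisor's figure can be read directly from v$instance_recovery (a hedged pointer: OPTIMAL_LOGFILE_SIZE is reported in megabytes and is only populated when FAST_START_MTTR_TARGET is set):

SELECT optimal_logfile_size FROM v$instance_recovery;

A common rule of thumb is to size the logs so that switches occur roughly every 15-20 minutes at peak load; an advisor value of 1845 MB against 100 MB logs suggests the current logs are switching far too often.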
I tried changing the current redo log configuration to a multiplexed redo log configuration. I imported a backup file into the DB after making this change, but the import process runs much more slowly than under the old configuration, approximately 4 times slower.
When I restore the old redo log configuration, the import process runs normally. Why does this change hit DB import performance? The old redo log file location is below:
I executed a query which itself ran quickly (1.7 seconds), but since its output took time to display on the console, the time shown by 'set timing on' was 39.5 seconds.
I also took a trace (tkprof) of the same. My question is why the timings under 'Total Waited' (43.19 and 1.69) are not added to the elapsed time of 1.83 seconds.
In a 3-node RAC setup, one node is showing high CPU utilization, around 40-50%. The CPU utilization was less than 20% ten days back, but from about nine days ago it jumped and has consistently stayed at double figures. I ran AWR reports on all three nodes and found one node with high CPU utilization showing the below top events:
EVENT                          WAITS    TIME(S)  AVG WAIT(MS)  %TOTAL CALL TIME  WAIT CLASS
CPU time                                5,802                  34.9
RFS ping                       15       5,118    33,671        30.8              Other
log file sequential read       234,831  5,036    21            30.3              System I/O
SQL*Net more data from client  24,171   1,087    45            6.5               Network
db file sequential read        130,939  453      3             2.7               User I/O
Findings: In the AWR report (file attached) for node sipd207, we can see that the 'RFS ping' wait event accounts for 30% of the waits and the 'log file sequential read' wait event accounts for another 30% of the waits occurring in the database.
1) Are these symptoms of an undersized log buffer? 2) I feel the Network waits can be reduced by tweaking the SDU and TDU values based on the MTU.
1) Split values from the "INST" column: suppose 23.
2) Find all values from the "NUM" column for the above split value, i.e. 23.
E.g., for INST 23, its corresponding "NUM" values are 1234 and 1298.
3) Save these values into a table Y, with INST and NUM as the column names:

INST  NUM
23    1234,1298
1) I have a thousand records in Table X, and for all of those records I need to split and save the data into Table Y. Hence, I need to do this task with the best possible performance.
2) After this, whenever new data comes into Table X, the above 'split & save' operation should automatically be called and append the corresponding data wherever possible.
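A hedged sketch of the bulk step, assuming X(inst, num) holds one row per NUM value and Y(inst, num) stores the comma-joined list; LISTAGG requires 11gR2 or later. The automatic part in point 2 could then be an AFTER INSERT trigger on X or a scheduled MERGE, depending on volume:

INSERT INTO y (inst, num)
SELECT inst,
       LISTAGG(TO_CHAR(num), ',') WITHIN GROUP (ORDER BY num)
FROM   x
GROUP  BY inst;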
Sometimes when I re-run a query a few times, the speed after the first run becomes much faster. This is a problem when I'm trying to optimize a query. Is there some sort of cache? Can it be disabled?
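For what it's worth: the first run typically pays for physical reads and a hard parse, while repeat runs are served from the buffer cache and shared pool. On a test system (never production) the caches can be flushed between runs to get repeatable cold timings:

ALTER SYSTEM FLUSH BUFFER_CACHE;   -- discards cached data blocks (10g+)
ALTER SYSTEM FLUSH SHARED_POOL;    -- discards parsed statements and plans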
CREATE OR REPLACE procedure fast_proc (p_rows out number) is
  TYPE object_id_tab IS TABLE OF all_objects.object_name%TYPE INDEX BY BINARY_INTEGER
  lt_object_id object_id_tab;
  CURSOR c IS
[Code]....
Warning: Procedure created with compilation errors.
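The error is consistent with the missing semicolon after the collection type declaration (INDEX BY BINARY_INTEGER needs a ';' before lt_object_id). A minimal compiling sketch of the apparent intent; the cursor body here is an assumption, since the original was elided:

CREATE OR REPLACE PROCEDURE fast_proc (p_rows OUT NUMBER) IS
  TYPE object_id_tab IS TABLE OF all_objects.object_name%TYPE
    INDEX BY BINARY_INTEGER;                -- semicolon that was missing above
  lt_object_id object_id_tab;
  CURSOR c IS
    SELECT object_name FROM all_objects;    -- assumed body
BEGIN
  OPEN c;
  FETCH c BULK COLLECT INTO lt_object_id;
  CLOSE c;
  p_rows := lt_object_id.COUNT;
END fast_proc;
/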
We have a few tables in our production database which are huge in size and will keep increasing in the future too, so as part of the corrective measures we have jotted down the below 3 methods to manage the size of those tables:
1> Partition the table, take an export of the identified partitions, and after that truncate those partitions.
2> Create history tables and move not-so-current data from the original table to the history table.
I'm extracting/retrieving data from the Oracle database using a Java application, and it's a bit slow. However, when I retrieve from SQL Server, it's faster than Oracle.
We have data migration scripts written for Oracle. The data is not huge, but we are observing that the migration is faster in the development labs and 5x slower at the production site.
The development Oracle setup is on Windows and the production setup on Solaris. I have attached the AWR report generated for a period in which the migration ran for 3 hours and was stopped due to slow performance.
Here is my initial analysis.
1) The first timed event is DB CPU. Hence I feel the migration scripts could be modified to run in parallel so that they finish faster. However, the question then arises: why does it run faster in the development environment if this is the issue?
2) I tried increasing: a. large_pool_size to 512M, b. sga_max_size to 8G, c. sga_target to 8G (from 0, 4G, and 4G respectively).
I have attached the AWR report, and below are the /etc/system contents for the Solaris settings.
* Begin MDD root info (do not edit)
rootdev:/pseudo/md@0:0,1,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
set noexec_user_stack_log=1
* IBMdpo vpath_START (do not remove)
* default SCSI timeout is 60 seconds
* uncomment to change SCSI timeout
* set sd:sd_io_time=0x1e
forceload: drv/vpathdd
* IBMdpo vpath_END (do not remove)
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
P.S. The AWR report was renamed from .html to .txt to be able to upload the file.
We are on Oracle 10.2.0.4 on Solaris 10. There is a table in my production DB that has 872,944 rows. Most of its data is now unnecessary; based on a date column in the table, we need to retain just the last one month's data and delete the rest. After that, the table will have just around 3,000 rows.
However, as the table was huge earlier (872k rows prior to the delete), does deleting the data release its Oracle blocks and reduce the size of the table? If not, should we rebuild the table online (online redefinition) so that the query that does a full scan on this table goes faster?
I checked using an example table that a delete alone does not give back the Oracle blocks: they remain in USER_TABLES for that table, and the cost of a full table scan stays the same. We have a query that does a full scan on this table, so I am thinking that after this delete I should do an online table redefinition. Is that the right decision?
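A delete leaves the high-water mark where it was, so yes, some reorganization is needed. Two common options, sketched with a placeholder table name (shrink requires an ASSM tablespace; with only ~3,000 surviving rows, CTAS into a new table is a third option):

ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE;   -- lowers the high-water mark (10g+)

Online redefinition via DBMS_REDEFINITION achieves the same end without blocking DML, at the cost of more setup.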
I created a view on the production server which takes almost 10 to 12 minutes to return data. This view joins 3 or 4 tables, on which all primary and unique key columns are indexed. Which kind of index would be better for fast retrieval of the data?
I have a table which contains 8,21,177 rows in total. Now I am trying to delete around 4,84,000 rows from this table using just one filter, i.e. my query is something like below:
DELETE /*+ parallel(resource,4) */ FROM resource where created_by = 'MIGN'
This is going to delete 4,84,000 rows of data, but my current issue is that it is taking a lot of time. To be precise, it is taking almost 25 hours to delete this data. The created_by column is indexed.
Execution Plan
----------------------------------------------------------
Plan hash value: 2389236532

--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
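One thing worth verifying, as a hedged note: a PARALLEL hint on a DELETE is silently ignored for the DML step unless parallel DML is enabled in the session. Also, when well over half the table is being removed, copying the rows to keep via CTAS and swapping tables is often much faster than the delete plus its index maintenance.

ALTER SESSION ENABLE PARALLEL DML;

DELETE /*+ PARALLEL(resource, 4) */ FROM resource
WHERE  created_by = 'MIGN';

COMMIT;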
We copy our transaction tables' data into another database for our reporting applications (an every-day midnight refresh).
The transaction database has some 30 tables. The existing system follows the below steps and takes 2 hours to complete:
1) Truncate the data in the reporting database (or schema).
2) Direct-path insert into the reporting database (or schema) as SELECT * FROM the transaction tables.
3) Rebuild indexes and enable constraints.
Note: Each table's data varies from 30 lakh to 50 lakh (3 to 5 million) rows. Dump/import/export is not advised by the client.
I want to cut the time down to below 2 hours. Instead of the above method, can I go for a field in each table specifying the time of each record's insert/update operation, and then pick only the modified records and copy them into the reporting DB?
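A hedged sketch of that incremental idea (table and column names are placeholders): stamp each row with a last-modified timestamp via the application or a trigger, then MERGE only the rows touched since the previous refresh:

MERGE INTO rpt.orders r
USING (SELECT order_id, status, last_modified
       FROM   txn.orders
       WHERE  last_modified >= TRUNC(SYSDATE) - 1) s
ON    (r.order_id = s.order_id)
WHEN MATCHED THEN UPDATE
  SET r.status = s.status, r.last_modified = s.last_modified
WHEN NOT MATCHED THEN INSERT (order_id, status, last_modified)
  VALUES (s.order_id, s.status, s.last_modified);

Materialized views with MV logs (fast refresh) are the built-in alternative if columns or triggers cannot be added to the transaction tables.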
I am inserting data using a procedure for the years 2012 and 2013 into partitioned tables, with crores of rows in a partition, and it is taking a lot of time, even months. Is there any other way by which I can insert the data faster from our query?
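A common speed-up to try, sketched with placeholder names (direct-path inserts bypass the buffer cache and generate minimal undo for the table data, but they lock the segment and the loaded rows are not visible to the session until commit):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t, 4) */ INTO sales_fact t
SELECT *
FROM   sales_staging              -- placeholder source
WHERE  sale_year IN (2012, 2013); -- placeholder filter

COMMIT;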
I have created a materialized view and also a normal view; the same 3 tables are used in both views. When new records are inserted they are reflected in the normal view, but when I select from the materialized view I can't see the updated data.
Here is the materialized view I created:
CREATE MATERIALIZED VIEW pct_sales_materialized
BUILD IMMEDIATE
REFRESH ON DEMAND
ENABLE QUERY REWRITE
AS
SELECT A.DEP_NAME, B.EMP_ID, C.EMP_NAME
FROM   department_head A, department_child B, emp_detail C
WHERE  A.DEP_ID = B.DEP_ID
AND    B.EMP_ID = C.EMP_ID;
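That behavior is expected: REFRESH ON DEMAND makes the MV a snapshot that only picks up base-table changes when a refresh is requested, for example:

EXEC DBMS_MVIEW.REFRESH('PCT_SALES_MATERIALIZED', 'C');   -- 'C' = complete refresh

For the MV to track changes automatically it would have to be created REFRESH FAST ON COMMIT, which in turn requires materialized view logs on the three base tables.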
We have a table with a huge amount of data which is skewed on a 'status' column. The 'status' column has 6 distinct values, with 1 particular value occupying 80-85% of the records.
In the batch process we query the data on status and process the retrieved records. My senior is insisting on partitioning, which I see as not very feasible considering the cost implications for just one part of the functionality.
There are 6 statuses, 'A', 'B', 'C', 'D', 'E', 'F',
with 'A' occupying 80% of the records and 'B' to 'F' each occupying between 2% and 14% of the records in the table (approx.).
1) Create a conditional index on status (using CASE) covering records with all statuses except 'A', then create an IF-ELSE structure:
IF the input parameter is 'A' THEN
  select /*+ FULL Parallel(t) */ * from t where status = 'A';
ELSE
  select /*+ INDEX(t conditional_index) */ * from t where status in ('B','C');
END IF;
I want to create the conditional index here for 2 reasons:
1] Since it will have values for every status except 'A', this nullifies the chance that the index will be picked when status='A' is queried, which would make performance worse (status='A' accounts for 80% of the records); the IF-ELSE is additional protection. 2] There is less impact on DML, as the index will not cover status='A', which contributes the large chunk of records. (A sketch of such an index appears after this list.)
2) Populate a dummy table which would contain the rowid and status. Since the business closes at 21:00 and the batch process starts at 21:30, between those times refresh the dummy table every day using MERGE (to catch the business transactions made during the day).
Then, during the batch process, retrieve records from the main table using the rowids in the dummy table, depending on the input status value.
3) Create an index on status, make sure hard-coded status values are used in the database procedures, gather stats with histograms, and leave it to the optimizer to choose the best possible path.
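Regarding option 1, a sketch of what the conditional index could look like as a function-based index (one caveat: the predicate must use the same expression as the index; a plain status='B' filter plus an INDEX hint will not use it, so the queries in the IF-ELSE above would need the CASE form):

CREATE INDEX conditional_index
  ON t (CASE WHEN status <> 'A' THEN status END);

-- rows with status 'A' map to NULL and are not stored in the index
SELECT * FROM t
WHERE (CASE WHEN status <> 'A' THEN status END) = 'B';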
My SQL query has three tables in the FROM clause, so it has two join conditions and one WHERE condition.
account_no is of NUMBER data type and v_account_no is of VARCHAR2 data type.
The WHERE clause is:
"where account_no = to_number(v_account_no)"; with this condition, the SQL query has a cost of 392.
We then modified the WHERE clause to "where v_account_no = to_char(account_no)"; with this condition, the SQL query has a cost of 11.
What is the impact of this data type conversion, and what is the difference between TO_NUMBER() and TO_CHAR() performance-wise that reduces the cost of the query?
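A plausible explanation, sketched with assumed names: a function wrapped around a column disables any regular index on that column, so the rewrite likely moved the conversion off an indexed column (here, an assumed index on v_account_no). TO_NUMBER(v_account_no) also risks ORA-01722 if any row holds a non-numeric string, which the TO_CHAR form avoids.

CREATE INDEX acct_vno_ix ON acct (v_account_no);   -- assumed table and index

-- index on v_account_no NOT usable: the column sits inside TO_NUMBER()
SELECT * FROM acct WHERE TO_NUMBER(v_account_no) = :acct_no;

-- index usable: the bare column is compared with a converted value
SELECT * FROM acct WHERE v_account_no = TO_CHAR(:acct_no);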