FOR i IN (SELECT table_name, sqlstring FROM table_list)  -- returns a maximum of 30 tables
LOOP
  EXECUTE IMMEDIATE 'UPDATE ' || i.table_name || ' SET status = ''done''';
END LOOP;
All the tables here are independent, but this is a bit slow because the UPDATE statements execute sequentially. How can I execute the UPDATE statements in parallel?
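A minimal sketch, assuming the same TABLE_LIST table as above: submit each UPDATE as its own DBMS_SCHEDULER job so the statements run concurrently (the UPD_ job-name prefix is hypothetical, and the generated names must stay within identifier length limits):

BEGIN
  FOR i IN (SELECT table_name FROM table_list) LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'UPD_' || i.table_name,   -- hypothetical naming scheme
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN UPDATE ' || i.table_name ||
                    ' SET status = ''done''; COMMIT; END;',
      enabled    => TRUE,    -- start immediately
      auto_drop  => TRUE);   -- remove the job once it completes
  END LOOP;
END;
/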
I have a procedure that outputs 10 ref cursors. It is called by a .NET utility, which then writes the data to 18 different flat files. The procedure has the following format:
CREATE OR REPLACE PROCEDURE FILE_EXTRACT (OUTREC1 OUT SYS_REFCURSOR, OUTREC2 OUT SYS_REFCURSOR) AS
BEGIN
  OPEN OUTREC1 FOR SELECT /*+ PARALLEL(A,8) */ COL1, COL2 FROM TABLE1 A;
[code]....
I used the parallel hint because all the tables used in the queries are huge tables and this is a nightly batch job. I expected Oracle to use 8 processes for this execution, since all my SELECT statements have a parallel hint with degree 8. Unusually, though, the procedure is erroring out on the production database because the maximum number of connections is being spawned and the database cannot create any new sessions. When I debugged with a quick test procedure that used only one OUT ref cursor, it ran only 9 threads; when I added one more OUT ref cursor, it spawned 17 threads. I think it is the way .NET fetches data from each of these cursors. When I fetch from the first ref cursor, I can see the queries for the second cursor running along with the first, causing more parallel queries to run than expected. The cause of the problem is that all the ref cursors are executed and waiting to return data, so when .NET starts reading the first cursor, the other queries also run.
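One possible mitigation (an assumption on my part, not something from the post): cap the parallel server pool so that ten concurrently open cursors cannot exhaust the PROCESSES/SESSIONS limits.

ALTER SYSTEM SET parallel_max_servers = 64 SCOPE = BOTH;  -- instance-wide ceiling on PX slaves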
11.2.0.1, AIX 6.1 5L (quad-core, 16GB RAM). I am still confused about how to take full advantage of these monitoring tools. Our database performance is currently satisfactory, except for occasional few-minute spikes where CPU goes above 80%. I just want to catch the culprit process/program responsible for these spikes. Is it wise to run ASH, AWR, and ADDM with an input window from 1AM to 1AM the next day? What I mean is that I will analyze a one-day period, so that I can catch the program/process with the highest CPU/memory usage for the day.
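A minimal sketch (the date window is hypothetical): instead of a full-day report, you can query the ASH history directly for the sessions with the most ON CPU samples in a window.

SELECT *
FROM  (SELECT session_id, session_serial#, COUNT(*) AS cpu_samples
       FROM   dba_hist_active_sess_history
       WHERE  sample_time BETWEEN TIMESTAMP '2012-06-01 01:00:00'
                              AND TIMESTAMP '2012-06-02 01:00:00'
       AND    session_state = 'ON CPU'
       GROUP  BY session_id, session_serial#
       ORDER  BY cpu_samples DESC)
WHERE ROWNUM <= 10;  -- top 10 CPU consumers in the window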
We had a problem where one of our Oracle jobs scheduled using DBMS_JOB was inactive for a long time. One of our developers noticed this and reached out to me. When I checked, DBA_JOBS showed the job as running but without an associated process.
SQL> select sid,serial#,paddr from gv$session where osuser='oracle';
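As a quick diagnostic sketch (an assumption, not from the original post), you can cross-check DBA_JOBS_RUNNING against V$SESSION to spot a job that is marked running with no live session behind it:

SELECT j.job, j.sid, s.serial#, s.status
FROM   dba_jobs_running j
LEFT JOIN v$session s ON s.sid = j.sid;  -- a NULL serial# means no session exists for that job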
We are trying to use the methods/constructors in object types and find them very similar to the procedures and functions in packages. I am wondering how they differ from stored procedures and functions, and what the advantages are?
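A minimal illustration (the type and column names are hypothetical): a member function is bound to an object instance and can reference its attributes through SELF, which a packaged function cannot do.

CREATE OR REPLACE TYPE t_point AS OBJECT (
  x NUMBER,
  y NUMBER,
  MEMBER FUNCTION dist_from_origin RETURN NUMBER
);
/
CREATE OR REPLACE TYPE BODY t_point AS
  MEMBER FUNCTION dist_from_origin RETURN NUMBER IS
  BEGIN
    RETURN SQRT(SELF.x * SELF.x + SELF.y * SELF.y);  -- operates on instance state
  END;
END;
/
SELECT t_point(3, 4).dist_from_origin() AS dist FROM dual;  -- returns 5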
I look after a database that contains GIS mapping data. We do not use Oracle Spatial; it's just a plain Oracle Standard Edition database. It is running in NOARCHIVELOG mode (I know it's not a good idea, but that will be sorted when our new Sun T4-1 arrives).
There are only a couple of users who actually edit data in the database, but about 100 simultaneous users who access it. In day-to-day use we have no performance issues. The DB has three 50MB redo log groups, and these switch about every hour or so during normal use.
Every few weeks we do a bulk update of our underlying map data. This involves loading about 4GB of data into the database (which is about 15GB in total). It takes about 5 hours, and whilst I'm sure our old Sun V240 server's lack of power is a substantial cause, I think the lack of redo space makes matters worse. Last time we did this, the system clocked just over 200 redo log switches in 5 hours, and there were lots of "Checkpoint not complete" messages in the log file too.
The software we use to load the map data doesn't allow the data to be loaded with a NOLOGGING option.
I could resize the redo logs, but if I size them for the update workload (3 x 500MB) we'll have some days where we don't get a redo log switch at all. Is this necessarily a problem?
The alternative I'm thinking of is: prior to performing this update, add an extra redo log group with a 1GB file, run the update, then remove the redo log group and delete the file afterwards. Is there anything wrong with this approach?
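For what it's worth, a sketch of that approach (the group number and file path are hypothetical):

ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/gis/redo04.log') SIZE 1G;
-- ... run the bulk load ...
ALTER SYSTEM SWITCH LOGFILE;          -- move the current log off group 4
ALTER DATABASE DROP LOGFILE GROUP 4;  -- allowed only once the group is INACTIVE
-- then delete /u01/oradata/gis/redo04.log at the OS level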
What is the optimal redo log size for the database, and how many log files are required if we want to enable archive log mode? What value should fast_start_mttr_target have? I think if this parameter is set, we can use the redo log advisor to get the optimal redo log size. We have 2 redo log groups with 2 members each, sized at 1GB. Will it degrade DB performance?
Database version: 11.1.0.7, Oracle Apps R12, OS: Red Hat Linux 5.5
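A hedged sketch (300 seconds is just an example target): once fast_start_mttr_target is set, V$INSTANCE_RECOVERY exposes the redo log size advisor column.

ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;
SELECT optimal_logfile_size FROM v$instance_recovery;  -- advised size, in MB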
I want to know whether redo log members are mirror copies. Do all member files from the same redo group contain the same data? Is there any difference between mirroring and multiplexing a file?
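As a quick check, you can list the members of each group; multiplexed members of the same group hold identical redo:

SELECT group#, member, status FROM v$logfile ORDER BY group#;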
I want to resize the redo log groups on my production database. I have 10 redo log groups of 50MB each, with 2 members each. I want 4 redo groups of 250MB each with 2 members, and then I will drop the old 10 redo log groups (50MB), so that I will have only 4 redo log groups of 250MB each with 2 members. However, I have a physical standby and a logical standby configured on the production database.
Please find attached the redo log configuration of the production database (CBSPROD), the logical standby database (CBSMIS), and the physical standby database (CBSDR).
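A hedged outline of the resize itself (the group numbers and file paths are hypothetical); with a physical standby, remember the standby redo logs should be resized to match as well:

ALTER DATABASE ADD LOGFILE GROUP 11
  ('/u01/oradata/CBSPROD/redo11a.log',
   '/u02/oradata/CBSPROD/redo11b.log') SIZE 250M;
-- ...repeat for groups 12-14, then cycle the logs:
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
SELECT group#, status FROM v$log;     -- old groups must be INACTIVE before dropping
ALTER DATABASE DROP LOGFILE GROUP 1;  -- repeat for each old 50MB group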
I have gotten the ORA-00494 error many times and the database went down, but since the 29th of July the database has not gone down. The error message is below:
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by inst 1, osid 176484
ORA-00028: your session has been killed
My database is used for a data warehouse of many terabytes.
Initially the redo log size was 500MB and I've set it to 3GB. At peak, logs switch as often as every 5 minutes. I want the log to be switched every 20 minutes or every 30 minutes.
To obtain the optimal redo log size, I executed this query:
SQL> select OPTIMAL_LOGFILE_SIZE from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
--------------------
               54763
OPTIMAL_LOGFILE_SIZE is reported in megabytes, so that is about 53.5GB. Isn't that very big for a redo log? What's the maximum size of a redo log? To set a very big redo log size, what are the requirements? Which precautions should I take beforehand? What are the risks? Are there any other ways to change the log switch frequency?
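One way to see the current switch frequency before resizing (a sketch, assuming roughly a day of history is enough):

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;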
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production

The redo logs are multiplexed. [code]....
+ When redo log group 5 was archived, how does the archiving process work?
+ Let's say both log members are clean; in that case, which one will be archived, 5a or 5b?
+ I can also see that only one archive log is created during a log switch.
Redo log information can be transmitted from the primary database to the standby database in one of two ways: by ARCH or by LGWR.
1. When ARCH is involved
2. When LGWR is involved
FAL_CLIENT = (should I enter the net service name, the DB name, or the service_name?)
FAL_SERVER = (should I enter the net service name, the DB name, or the service_name?)
FAL_CLIENT = 'which one?'
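A hedged example (the TNS aliases are hypothetical): both FAL parameters take Oracle Net service names defined in tnsnames.ora and are set on the standby; FAL_SERVER points at the database that serves the gap, and FAL_CLIENT names the standby itself.

ALTER SYSTEM SET fal_server = 'PRIMARY_TNS' SCOPE = BOTH;
ALTER SYSTEM SET fal_client = 'STANDBY_TNS' SCOPE = BOTH;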
I am creating a database instance from a template. I have specified the location of the redo log files. When I run the DBCA utility it does create the redo log files in the specified directory, but the installation fails. When I checked the trace file, it says it is unable to locate the specified file (redo.log), yet when I check the directory, the files have been created.
I learned that Oracle uses a supplemental logging mechanism to add changed rows to the redo log files and identify the changed rows on the target replication database. Is that mechanism mandatory for handling the replication of data between the updated and backup databases?
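If it is needed, enabling minimal supplemental logging is a one-liner (a sketch; whether your replication tool requires it depends on the tool):

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SELECT supplemental_log_data_min FROM v$database;  -- should now return YES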
I was asked by my systems administrator if I could tell him how much redo log volume, on average, we generate in a day.
Just wondering how I might calculate this?
We have several production databases. If I wanted to calculate the above for one of them, would I take all the redo logs for a day and total up their size in bytes? Maybe take a 5-day work week and average over the 5 days?
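A sketch of one way to do it, assuming the database is in archivelog mode (filter on a single destination if logs are archived to several):

SELECT TRUNC(completion_time) AS day,
       ROUND(SUM(blocks * block_size) / 1024 / 1024 / 1024, 2) AS redo_gb
FROM   v$archived_log
WHERE  dest_id = 1
GROUP  BY TRUNC(completion_time)
ORDER  BY day;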
As we know, an MV generates more redo during a FAST refresh, but I need more clarification on that.
See the examples below:
exec dbms_mview.REFRESH ( LIST => 'mv_test', method=>'F');
PL/SQL procedure successfully completed.
select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo size                                                            147144
See, the redo size is 147144 bytes. Immediately afterwards, I refreshed the MV again. This time there were no update, insert, or delete statements against the source tables, but I still see high redo generation for a no-data refresh.
select a.name, b.value-147144 from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
NAME                                                                  VALUE
---------------------------------------------------------------- ----------
redo size                                                             42352
For a refresh of no rows, it generated 42352 bytes. Why does Oracle generate redo when no DML operations have happened in the source table?
What's the difference between a dirty buffer and a redo buffer?
My understanding is that a dirty buffer is a changed buffer: whenever data changes in the buffer cache, the buffer is marked dirty. A redo buffer keeps track of the changes that were made to the data, so it also refers to changed data. DBWn writes dirty buffers to disk and LGWR writes redo data to the redo log files. How can we differentiate between the two?
I have Oracle 9i running on HP-UX. I would like to find out how much redo we are generating in a given period of time. Is there any script I can use to get this information?
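A minimal sketch: sample the system-wide 'redo size' statistic once at the start and once at the end of the period, then subtract; the difference is the redo generated (in bytes) in between.

SELECT name, value
FROM   v$sysstat
WHERE  name = 'redo size';  -- run at the start and at the end of the window, then diff the values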
I learnt that LGWR writes to the redo log files when the redo log buffer is one-third full. Does that mean 66% of the redo log buffer is always empty and never used?
If so, isn't that a waste of memory (66% always empty!)?