SQL & PL/SQL :: How To Find Degree For Parallel DML
Feb 12, 2013
I am inserting 50 million records into a table MAIL_LOG, using the hint /*+ append parallel (MAIL_LOG, 12) */, but the degree defined on my table is 1.
SELECT table_name, degree
FROM user_tables
WHERE table_name = 'MAIL_LOG';
I have the following questions:
1) What degree should I use?
2) On what basis should I choose the degree?
3) Should I use a constant degree all the time?
4) How can I check that my insert statement is actually using the parallel degree?
5) How can I find the degree at session level? (One way to check points 4 and 5 is sketched below.)
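A minimal way to check points 4 and 5, assuming the session can query the V$ views; the statistic names below are standard, but the exact output depends on your version:

-- Parallel DML must be enabled in the session before a parallel
-- direct-path insert can actually run in parallel:
ALTER SESSION ENABLE PARALLEL DML;

-- After running the insert (and before committing), check the PX statistics
-- accumulated for this session: "DML Parallelized" > 0 means the statement
-- really ran as parallel DML, and "Allocation Height" is the degree that
-- was actually granted.
SELECT statistic, last_query, session_total
  FROM v$pq_sesstat
 WHERE statistic IN ('DML Parallelized', 'Queries Parallelized', 'Allocation Height');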
I have been told that I should use multiples of 4 as the degree in the parallel hint to get maximum performance, so I am wondering: is it true that I should always use multiples of 4, or can I use any number inside the parallel hint?
I have table A with 120 million rows and two more tables: table B with about 2 million rows and table C with less than 1 million rows. I created table A partitioned and with parallel degree 4, table B with parallel degree 2, and table C with NOPARALLEL.
My query joins the above tables and inserts into table D using the hint /*+ APPEND NOLOGGING */. Running explain plan on this version shows a cost of 20767.
Later I recreated tables A, B and C with NOPARALLEL, kept the /*+ APPEND NOLOGGING */ hint on the insert into table D, and also applied the hints /*+ PARALLEL(A, 4) PARALLEL(B, 2) PARALLEL(C, 2) */ to the SELECT that feeds the insert into table D.
My question: which is the better practice for specifying the parallel degree, at table level or at query level?
a) Creating the table with PARALLEL (degree 4), or b) applying the hint /*+ PARALLEL(A, 4) */ at query level. (Both variants are sketched below.)
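For reference, a sketch of the two variants; the degrees are those from the post, while the column names in the insert are purely illustrative:

-- a) Table-level default degree: any statement touching A may now be
--    parallelized, so remember to reset it after the load.
ALTER TABLE a PARALLEL 4;
ALTER TABLE a NOPARALLEL;

-- b) Statement-level degree: table A keeps DEGREE = 1 and only this
--    statement is parallelized (the id/val columns are hypothetical).
INSERT /*+ APPEND */ INTO d (id, val)
SELECT /*+ PARALLEL(a, 4) PARALLEL(b, 2) */ a.id, b.val
  FROM a JOIN b ON b.a_id = a.id;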
In my recent performance improvement tasks I have been experimenting with parallel hints. I have been getting good results with them, but the question arises: how does the parallel degree relate to the number of CPUs used during execution?
/*+ parallel(emp_tab,8) */
Oracle version: Oracle 10g
Server OS: UNIX
Client OS: Windows XP
V$PARAMETER values: total CPUs available: 10, CPUs per core: 2, max parallel: 100, min parallel: 5
Question 1: How do I map the degree of 8 to CPUs? I believe that a degree of 8 does not mean it is going to use 8 CPUs; if so, how do I prove it? I tried SQL_TRACE and the CPU figure comes back as 0, and I believe the degree is not a CPU count.
Question 2: How do I know how many CPUs were used to execute a SQL query?
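One hedged way to see what was actually allocated; note that this counts parallel execution (PX) server processes rather than physical CPUs, since the operating system decides how those processes are spread across CPUs. Run it from another session while the query is executing:

-- PX slaves attached to each query coordinator (QC) right now.
SELECT qcsid,
       COUNT(*)        AS px_slaves,
       MIN(degree)     AS degree,
       MIN(req_degree) AS requested_degree
  FROM v$px_session
 WHERE sid <> qcsid              -- exclude the coordinator itself
 GROUP BY qcsid;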
I am executing a SQL statement which is doing a full table scan in parallel mode. The server has 8 CPUs and threads_per_cpu is 2.
v$sql shows PX_SERVERS_EXECUTIONS as 8:
select px_servers_executions, sql_text from v$sql where sql_id = '0q0nk5117yth2';
-- 8, select /*+ full(a) parallel(a)....
However, v$px_session shows 17 sessions (16 parallel sessions plus 1 parent session, where sid = qcsid). In v$px_session these 16 parallel sessions are divided into two server sets (1 and 2), and the values for degree and required degree are 8 and 16 respectively.
However, only the 8 sessions belonging to server set 1 were ever active, and even their state was waiting, on the event "PX Deq Credit: send blkd".
The other sessions, belonging to server set 2, were never active and always had the wait event "PX Deq: Execution Msg".
What could be the reason that 16 parallel sessions could not be started, even though I am the only person using the server and there aren't any batch jobs, dbms_jobs, or even archive logs (it is not a production system)?
Note that the parallel_max_servers setting is 16.
Another issue: at the start of the query approximately 100-115 blocks per second were being read (checked from longops), but after 60-70% of the blocks have been read, the read rate falls to 10-20 blocks per second across all parallel sessions.
If we have not set a parallel degree on a table, we can (try to) force parallel execution on it using a parallel hint. Does this parallelism work for the index searches in the query as well?
In which situations will a non-parallel, non-partitioned table with a parallel index (degree > 2) help a query?
There is an index with degree 16 in a RAC environment. The base table has no degree set, and neither the table nor the index is partitioned. Does the degree of the index only affect index DDL (alter, rebuild, etc.), or does it have any effect on queries (parallel query)?
I have a query with 5 UNION branches. Each branch of the UNION takes 1 hour to run, so the results come back in 5 hours. Is there any way I can make these branches run in parallel? (One possible approach is sketched below.)
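Older releases execute the branches of a UNION one after another, so one commonly used workaround is to turn each branch into its own INSERT and submit the inserts as separate scheduler jobs that run concurrently. This is only a sketch under assumptions: RESULT_TAB and the per-branch views BRANCH_VIEW_1..5 are hypothetical stand-ins for the real target table and the five branch queries:

BEGIN
  -- One job per UNION branch; each job runs its own INSERT ... SELECT.
  FOR i IN 1 .. 5 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'UNION_BRANCH_' || i,               -- hypothetical name
      job_type   => 'PLSQL_BLOCK',
      job_action => 'BEGIN
                       INSERT INTO result_tab
                       SELECT * FROM branch_view_' || i || ';
                       COMMIT;
                     END;',
      enabled    => TRUE);   -- starts immediately, runs once, then drops itself
  END LOOP;
END;
/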
CREATE OR REPLACE PROCEDURE EBILL_BULK_UPDATE_SERVICE (in_cycle VARCHAR2) AS
  v_cnt NUMBER;  -- variable used to check whether the table is partitioned or not
  CURSOR cur_update IS  -- cursor for updating the EBILL tables for service_id
    SELECT table_name, cycle_name
      FROM NNP_EBILL_UPDATE
[code]....
Our requirement is that the EXECUTE IMMEDIATE updates should run for 5 or more tables in parallel at a time. When one table completes, the loop should pick up the next table, and so on until all tables are done. (A job-based sketch of this is below.)
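One way to get that behaviour without threads is to submit each table's update as a DBMS_JOB and let the job queue throttle concurrency: job_queue_processes controls how many jobs run at once, so setting it to 5 gives roughly the "five at a time, take the next one when a slot frees up" behaviour. A rough sketch, assuming the NNP_EBILL_UPDATE driver table from the post and a hypothetical per-table procedure EBILL_UPDATE_ONE_TABLE:

DECLARE
  v_job BINARY_INTEGER;
BEGIN
  FOR r IN (SELECT table_name, cycle_name FROM nnp_ebill_update) LOOP
    -- Each submitted job runs the per-table update independently;
    -- at most JOB_QUEUE_PROCESSES of them execute at the same time.
    DBMS_JOB.SUBMIT(
      job  => v_job,
      what => 'BEGIN ebill_update_one_table(''' || r.table_name || ''', '''
              || r.cycle_name || '''); END;');
  END LOOP;
  COMMIT;   -- jobs only become visible to the job queue after commit
END;
/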
On a tab page, the results of four different queries, each based on a stored procedure, should be displayed. At the moment the queries are processed serially, by the statements:
I am trying to export a table using Data Pump in Oracle 10g. The expdp takes 5 hours, so I want to use the PARALLEL keyword in expdp. My question is: how should I determine the number of parallel processes I can use? (A hedged example follows.)
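A common rule of thumb is to start with the number of CPUs (or twice that) and make sure the DUMPFILE clause can produce at least as many files as the PARALLEL setting, typically via the %U substitution variable. A sketch of the command line, with the credentials, directory object, table name and file names purely illustrative:

# OS command line; scott/tiger, dp_dir and big_table are placeholders.
expdp scott/tiger DIRECTORY=dp_dir TABLES=big_table DUMPFILE=big_table_%U.dmp PARALLEL=4 LOGFILE=big_table_exp.log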
Oracle: 10.2.0.5.7. I can get this to work, but not the way the docs seem to say. I am wondering if I am reading the docs wrong or missing something.
The docs seem to say that to get a query to run in parallel using an index, you use the PARALLEL_INDEX hint. This doesn't seem to work for me. I have to do one of the following:
1. change the parallel degree with an ALTER INDEX and then use the PARALLEL hint (the parallel_index hint does nothing in this case), or 2. use both the parallel_index and parallel hints.
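For reference, a sketch of the two workarounds described above; the table, index and column names are illustrative, not taken from the post:

-- 1. Set a degree on the index, then drive the query with a PARALLEL hint.
ALTER INDEX emp_tab_ix PARALLEL 4;
SELECT /*+ parallel(t, 4) index(t emp_tab_ix) */ *
  FROM emp_tab t
 WHERE deptno = 10;

-- 2. Leave the index at DEGREE 1 and combine both hints in the statement.
SELECT /*+ parallel_index(t, emp_tab_ix, 4) parallel(t, 4) index(t emp_tab_ix) */ *
  FROM emp_tab t
 WHERE deptno = 10;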
Just a general question on parallel query: my customer has 4 CPUs and is running the database on 11.2.0.3 on AIX 5.3 (one node is on AIX 6.1). Under which circumstances can we propose that the user enable the parallel query option?
We have a data migration for our application coded in PL/SQL. The DB server has 64 cores available (Solaris 10), yet running the migration code, written as a function, utilizes very little CPU; CPU utilization is at most 2%. To use the available CPU power and increase the speed of the migration, we are using DBMS_JOB to schedule this function multiple times.
However, when scheduling the function 10 times, we see that at any moment only 4-5 Oracle processes are active and using CPU, and CPU utilization has only gone up to 5-6%. The speed of the migration has increased, but not to the extent I feel it would if we could utilize more CPU.
I see that the parameter job_queue_processes is currently set to 10 in the database, and I am planning to increase it (to 25 for now, as I don't have an exact count of how many other jobs may be running in the database).
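If you do raise it, the change itself is a one-liner (the parameter is dynamic, assuming an spfile is in use), and checking how many job slaves are actually busy first is also cheap:

-- How many DBMS_JOB slaves are running right now?
SELECT COUNT(*) FROM dba_jobs_running;

-- Raise the ceiling on concurrent job slaves (dynamic, no restart needed).
ALTER SYSTEM SET job_queue_processes = 25 SCOPE = BOTH;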
I am trying to execute two scripts at the same time (concurrently) in Oracle SQL Developer. I know we can schedule a job using the DBMS_JOB package and define the job, but is there any other way of doing it, for example using threads? (A scheduler-based sketch is below.)
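SQL Developer itself runs each worksheet serially, but you can open two unshared worksheets (separate connections) or push the work into the database as two one-off scheduler jobs. A minimal sketch, assuming the two scripts have been wrapped into hypothetical procedures SCRIPT_ONE and SCRIPT_TWO:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(job_name   => 'RUN_SCRIPT_ONE',
                            job_type   => 'STORED_PROCEDURE',
                            job_action => 'SCRIPT_ONE',
                            enabled    => TRUE);
  DBMS_SCHEDULER.CREATE_JOB(job_name   => 'RUN_SCRIPT_TWO',
                            job_type   => 'STORED_PROCEDURE',
                            job_action => 'SCRIPT_TWO',
                            enabled    => TRUE);
END;
/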
We have a very large table containing more than 1,000 million rows. We have divided this table into four physical tables, say A, B, C and D. The physical horizontal partitioning of the original table's data was done according to business policy.
Each physical table contains the data of a particular business entity, and each table also has partitions and subpartitions based on business rules.
We have to retrieve data from all these tables as follows:
select a1, a2, a3, a4, a5, a6 from A where <logical filter condition>
union all
select b1, b2, b3, b4, b5, b6 [code].....
We observed that each query block above executes serially, one after another, although each individual query block is capable of processing the data from its own table in parallel.
How can this query be made to execute its query blocks in parallel?
At work we have a Linux server with Oracle 10.2.0.5, and to save time we need to use parallel processes launched in the background (&) or via 'at now'.
The table we are inserting into is partitioned by date and subpartitioned by source file name, and we are having a lot of problems with this.
Our DBA staff say that it is impossible to work this way, but to me, after ten years of working with Oracle, that sounds strange. Sure, maybe we are not using this procedure entirely correctly, but I assume Oracle is able to receive 20 insert requests against the same table.
The situation is that we used this procedure on another server and it works there, but the answer from our DBA staff is that there we have less data to store, and so on. Sometimes these procedures run quickly and sometimes they do not, and finding out why is beyond me.
We have a DWH with a staging area, an ODS level, a DDS level and a DM level. The staging area is partitioned by file source, the ODS area is partitioned by range (date) and subpartitioned by name (file source), the DDS area is partitioned by range (date) and subpartitioned by origin system (which covers all the file sources), and the DM area is partitioned only by date.
The biggest problems we meet are between the staging area and the ODS area, and between the ODS area and the DDS area. The most important thing is that the DDS table is a monster of nearly 500,000,000 rows (around 1 TB of data), but we only work on the slice for the date being processed.
The clean solution is obvious: divide this table in two, one online and one for history, as a normal situation would usually and correctly require, but unfortunately this is a situation we inherited from an old system and at the moment the change request has not been approved at this site.
The really strange thing is that sometimes it works and sometimes it does not, and we do not understand why. My opinion is that the database is not configured correctly, but the system staff say that everything is correct and there is no problem.
My first problem is to understand, if possible, what the limit of this way of operating is: can I insert at the same time with twenty parallel processes that all write to the same partition but to different subpartitions? Is it correct to act this way to save time on the data load, or is it better to do it one by one? In my experience Oracle can manage (it is its job) a lot of concurrent requests, but on this database I keep seeing problems that make it feel as if we are using a tool that is not working correctly...
Maybe we went beyond some limit, but in the end it is fewer than 5,000,000 records per day that we move; I think a DWH has to support more than that...
I need to know whether PARALLEL is enabled in my session. Would this be a session parameter or something else? Is there a view I have to query, or some SQL*Plus command to execute?
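One minimal check, assuming access to V$SESSION (the three status columns below exist in 10g and later); ENABLED/FORCED/DISABLED here reflect the ALTER SESSION ENABLE/FORCE/DISABLE PARALLEL settings for the current session:

SELECT pq_status,      -- parallel query:  ENABLED / DISABLED / FORCED
       pdml_status,    -- parallel DML
       pddl_status     -- parallel DDL
  FROM v$session
 WHERE sid = SYS_CONTEXT('USERENV', 'SID');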
P.S. Is there a way to correct the title spelling after submitting?
I need to call the same procedure with different parameters multiple times, in parallel.
I have a job_control table:
CREATE TABLE JOB_CONTROL
(
  JOB_CONTROL_ID     NUMBER NOT NULL,
  JOB_SEQ_NO         NUMBER NOT NULL,
  MODULE_NAME        VARCHAR2(32 BYTE) NOT NULL,
  JOB_STATUS         VARCHAR2(15 BYTE),
  NO_OF_RECORDS      NUMBER,
  PROCESSED_RECORDS  NUMBER
);
Insert into JOB_CONTROL (JOB_CONTROL_ID, JOB_SEQ_NO, MODULE_NAME, JOB_STATUS, NO_OF_RECORDS)
Values (20, 1, 'SALES', NULL, 5);
Insert into JOB_CONTROL [code]........
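One way to launch the same procedure once per JOB_CONTROL row, passing MODULE_NAME as the parameter, is a one-off scheduler job per row. A sketch only: the procedure name PROCESS_MODULE is hypothetical, while the DBMS_SCHEDULER calls themselves are standard:

BEGIN
  FOR r IN (SELECT job_control_id, module_name FROM job_control) LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name            => 'PROC_JOB_' || r.job_control_id,
      job_type            => 'STORED_PROCEDURE',
      job_action          => 'PROCESS_MODULE',   -- hypothetical procedure
      number_of_arguments => 1,
      enabled             => FALSE);             -- set the argument first
    DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(
      job_name          => 'PROC_JOB_' || r.job_control_id,
      argument_position => 1,
      argument_value    => r.module_name);
    DBMS_SCHEDULER.ENABLE('PROC_JOB_' || r.job_control_id);
  END LOOP;
END;
/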
We have a database that is accessed by ArcSDE, a product to modify maps. It uses BLOBs to store those maps.
We ran a load against the server and the response time was slow, so we investigated by running the following query:
select event, total_waits, time_waited, avg_ms,
       round(ratio_to_report(time_waited) over () * 100) percent
  from (select substr(event, 1, 30) event, total_waits, time_waited,
               round(time_waited_micro / total_waits / 1000, 2) avg_ms
          from v$system_event
         where wait_class in ('System I/O')
        union
        select 'CPU' event, NULL, value, NULL
          from v$sysstat
         where statistic# = 12
         order by 3 desc)
 where rownum <= 10;
I have a batch of generated SQL queries to run, say around 1000 update statements, each updating a different table. To save execution time, is it possible to run them in multiple sessions from a shell script or Java?
For example, putting 100 queries in each of several separate SQL files and running all the files in parallel. The only problematic part is that if an error is encountered in any of the queries, the whole execution has to be rolled back.
If multiple queries are run in parallel (at the same time) against a table or a set of tables (a query referencing multiple tables), does that reduce performance?
In other words, is Oracle capable of reading from (selecting from) the same table multiple times in parallel?
I'm running Oracle EE 11.2.0.2, and when I look at OEM SQL Monitoring, it shows nearly every SQL statement running with a degree of parallelism of 2.
I've checked dba_tables and the DEGREE for all tables is only 1. I've looked at the actual SQL statements and there are no hints telling them to use parallelism. So why and how is the database using parallelism?
I do see that parallel_threads_per_cpu is set to 2, but this is the default for our Solaris 10 operating system.
REF: (for 11.2) =========== PARALLEL_THREADS_PER_CPU specifies the default degree of parallelism for the instance and determines the parallel adaptive and load balancing algorithms. The parameter describes the number of parallel execution processes or threads that a CPU can handle during parallel execution.
The default is platform-dependent and is adequate in most cases. You should decrease the value of this parameter if the machine appears to be overloaded when a representative parallel query is executed. You should increase the value if the system is I/O bound.
How can I tell whether my database is actually I/O bound or not? (A rough check is sketched below.)
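One rough, hedged check is to compare the cumulative time the instance has spent on CPU with the time spent waiting on user I/O; both views exist in 10g and later, and note that the units differ (microseconds vs. centiseconds):

SELECT (SELECT value / 1e6 FROM v$sys_time_model
         WHERE stat_name = 'DB CPU')                    AS cpu_seconds,
       (SELECT time_waited / 100 FROM v$system_wait_class
         WHERE wait_class = 'User I/O')                 AS user_io_wait_seconds
  FROM dual;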
I need to identify whether my current session is one of the slave sessions scheduled as an Oracle Scheduler job via DBMS_PARALLEL_EXECUTE. I have already succeeded using a join of dba_scheduler_running_jobs, user_parallel_execute_chunks and v$session, but I feel this is not the optimal approach.
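For reference, a compact version of that join, run from inside the session being tested; this is only a sketch of the approach the poster already described, still relies on the same dictionary views, and assumes the session can see DBA_SCHEDULER_RUNNING_JOBS:

-- Returns 1 if the current session is running a DBMS_PARALLEL_EXECUTE
-- chunk job belonging to the current user, 0 otherwise.
SELECT COUNT(*) AS is_px_task_slave
  FROM dba_scheduler_running_jobs rj
 WHERE rj.session_id = SYS_CONTEXT('USERENV', 'SID')
   AND rj.job_name IN (SELECT job_name FROM user_parallel_execute_chunks);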
I have prepared shell scripts to do parallel inserts into my DB table (LEGACY_SYSTEM).
There is a trigger (AFTER INSERT, FOR EACH ROW) on that table. Inside the trigger I call a package function to do the required operations, which finally insert records into my target table (PRICE_CHANGE).
Expectation: if I insert 10 rows into the LEGACY_SYSTEM table, it should do a few updates and finally insert 10 rows into the PRICE_CHANGE table.
Result: 10 rows get inserted into LEGACY_SYSTEM and all the updates are successful, but I can see only 4 rows in the PRICE_CHANGE table. If I run it a second or third time, the results are all correct.
If, instead of using these shell scripts, I insert the rows into the LEGACY_SYSTEM table one by one manually, I get all the expected results and the results are consistent. If you look at my scripts below, you will understand the problem better.
I am calling test_global.sh from the UNIX session; all the records get inserted into the LEGACY_SYSTEM table, but a few rows are missing from the PRICE_CHANGE table.
If I remove the '&' symbol and execute, the results are perfect, but the requirement is not to remove the '&' symbol. I have been facing this problem for the past month.
I am inserting data into a global temporary table and then using a 'parallel' hint to query this temporary table. I remember reading that queries on the temporary table may not run in parallel, as the parallel sessions may not be able to see the data in the temporary table.
However, the execution plan as well as v$px_session and v$sql indicate that the query on the temporary table does in fact run in parallel mode.
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------
SQL_ID 7d68g52g0mskz, child number 0
-------------------------------------
select /*+ gather_plan_statistics parallel(t,4) */ * from dbo_gtt t order by id, object_id