PL/SQL :: XML Aggregate Function And Out Of Process Memory
Aug 8, 2012
Most of the time I am getting 'out of process memory' when I run the 'ord' procedure. Here I am providing the tables and the procedure.
I have 2 tables: orders, which holds distinct values, and departments, whose ordvalue column holds a long string of values for a particular record from the orders table. For example, if the values in the orders table are as follows:
ord_code ord_level ordid ordstatus ord_num user utimestamp
SR11 1 2 A 101 V SYSDATE
SR11 1 2 A 102 V SYSDATE
SR11 1 2 A 103 V SYSDATE
SR11 1 2 A 104 V SYSDATE
SR11 1 2 A 105 V SYSDATE
SR11 1 2 A 106 V SYSDATE
SR11 1 1 B 101 R SYSDATE
SR11 1 1 B 102 R SYSDATE
SR11 1 1 B 103 R SYSDATE
SR11 1 1 B 104 R SYSDATE
SR11 1 1 B 105 R SYSDATE
SR11 1 1 B 106 R SYSDATE
ETC...
In the departments table the data will be like this:
ord_code ord_level ordid ordstatus ord_num user utimestamp
SR11 1 2 A 101,102,103,104,105,106 V sysdate
SR11 1 2 B 101,102,103,104,105,106 R sysdate
In the get_ord procedure the data is aggregated using XMLELEMENT and stored as a long string value in the ord_num column of the departments table (see the sketch after the table definition below).
CREATE TABLE test.orders
(
  ord_code   VARCHAR2(4 BYTE) NOT NULL,
  ord_level  VARCHAR2(1 BYTE) NOT NULL,
  ordid      NUMBER(5)        NOT NULL,
  ordstatus  VARCHAR2(1 BYTE) NOT NULL,
  ord_num    NUMBER(3)        NOT NULL,
  "USER"     VARCHAR2(8 BYTE),   -- USER is a reserved word, so the column name must be quoted
  utimestamp DATE
[code]...
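For reference, a minimal sketch of the kind of XML-based string aggregation described above, assuming the grouping columns follow the sample data (the getclobval() call keeps the aggregated list as a CLOB, which tends to be gentler on process memory than building one very long VARCHAR2):

SELECT ord_code,
       ord_level,
       ordid,
       ordstatus,
       RTRIM(XMLAGG(XMLELEMENT(e, ord_num || ',') ORDER BY ord_num)
               .EXTRACT('//text()')
               .getclobval(), ',') AS ord_num_list
FROM   test.orders
GROUP  BY ord_code, ord_level, ordid, ordstatus;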
Can I know the internal process of initialising the database into memory in TimesTen when a new connection is being established? Will TimesTen create the tables and indexes in RAM when the first connection is established if the RAM policy is the default?
I want to know the internal functional flow of TimesTen when any command is fired against it.
SQL> select name, decode(unit,'bytes',value/1024/1024,value) as mb from v$pgastat;

NAME                                                             MB
---------------------------------------------------------------- ----------
aggregate PGA target parameter                                   25600
aggregate PGA auto target                                        2724.14648
global memory bound                                              1024
total PGA inuse                                                  22601.7333
total PGA allocated                                              26653.6230
maximum PGA allocated
[code]....
I understand that I have a soft limit (aggregate PGA target parameter) which was exceeded (maximum PGA allocated = 35374.4638), hence over allocation count > 0. Extra bytes read/written = 13 GB, so we had an excess of 13 GB that had to be flushed to disk (excessive I/O operations) because of the 1024 MB limitation (global memory bound); that is not enough to join or sort in memory, so we must go one-pass or multipass. Global memory bound defines the size of a single sort or join operation, so does that mean it is some kind of sort_area_size and hash_area_size for automatic workarea_size_policy, and in that case what about _smm_max_size? Aggregate PGA auto target is the total amount of space that Oracle can give to work areas running in automatic mode.
So I can't understand the ratio between global memory bound and aggregate PGA auto target: why is aggregate PGA auto target so tiny (relative to the process count)? Is global memory bound static for a particular aggregate PGA target parameter?
Can I change it only by redefining the aggregate PGA target parameter? And what would happen to aggregate PGA auto target if I started 10 sort operations, each taking about 1 GB of memory: how large would it be? 10 GB?
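For reference, how often work areas actually spill to one-pass or multipass executions can be checked from the cumulative counters in v$sysstat (a minimal query, not tied to the exact figures above):

SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'workarea executions%';
-- returns the 'workarea executions - optimal', '- onepass' and '- multipass' counters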
SELECT field1, COUNT(x) AS COUNT FROM my_table GROUP BY field1;
For field1 I want to get a count, but if field1 is like 'ABC%' then I want to combine all of those.
So if I have the following:

ABC1 | 5
ABC2 | 10
XYZ1 | 3
I want results like this:

ABC  | 15
XYZ1 | 3
I've tried using some case statements like
SELECT CASE WHEN field1 LIKE 'ABC%' THEN 'ABC' ELSE field1 END AS field1,
       COUNT(x) AS COUNT
FROM   my_table
GROUP  BY CASE WHEN field1 LIKE 'ABC%' THEN 'ABC' ELSE field1 END;
but this just gives me:

ABC  | 5
ABC  | 10
XYZ1 | 3
How can I combine record 1 and 2 from the last record set example above?
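One variant worth trying (a sketch built from the query above) is to derive the normalised value in an inline view first and then group on it, which should collapse the ABC rows into a single group:

SELECT field1, COUNT(x) AS cnt
FROM  (SELECT CASE WHEN field1 LIKE 'ABC%' THEN 'ABC' ELSE field1 END AS field1,
              x
       FROM   my_table)
GROUP BY field1;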
I can't use a sequence in the GROUP BY clause, and even if I find an equivalent analytic for the GROUP BY above, I still can't use ROW_NUMBER because its ORDER BY returns the detail records.
I don't want to wrap this SELECT inside another SELECT.
I created a materialized view, and whenever I try to refresh the data using DBMS_MVIEW.REFRESH I get the ORA-04030 error:
ORA-12008: error in materialized view refresh path
ORA-04030: out of process memory when trying to allocate 1052696 bytes (callheap,kllcqas:kllsltba)
As far as I know there is no shortage of RAM on this machine, and I am not sure what is causing this error. I then tried to do a bulk insert using INSERT /*+ APPEND */ in a package, and when I execute it I get the same error again:
ORA-04030: out of process memory when trying to allocate 1052696 bytes (callheap,kllcqas:kllsltba)
The number of records I am trying to insert is around 600,000, but if I repeat the same process with 50,000 records it works fine. Is there any Oracle parameter I need to change? I am afraid that if I do, it might later affect other modules in production.
This is not working; I always get the following output:
 x  processes w/o controlling ttys   t  by tty
*********** output format **********  *********** long options ***********
-o,o user-defined  -f full            --Group --User --pid --cols --ppid
-j,j job control   s  signal          --group --user --sid --rows --info
-O,O preloaded -o  v  virtual memory  --cumulative --format --deselect
-l,l long          u  user-oriented   --sort --tty --forest --version
-F   extra full    X  registers       --heading --no-heading --context
                   ********* misc options *********
-V,V show version       L  list format codes   f  ASCII art forest
-m,m,-L,-T,H threads    S  children in sum     -y change -l format
-M,Z security data      c  true command name   -c scheduling class
-w,w wide output        n  numeric WCHAN,UID   -H process hierarchy
I got the following error when upgrading from Oracle 9.2.0.8 to Oracle 11.2.0.2 version using catupgrd.sql script.
=================================================================
CREATE OR REPLACE PACKAGE BODY kupm$mcp wrapped
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ERROR:
ORA-03114: not connected to ORACLE
SP2-1519: Unable to write to the error log table sys.registry$error
ORA-04030: out of process memory when trying to allocate 8392728 bytes (pga heap,redo read buffer)
ORA-04030: out of process memory when trying to allocate 8392728 bytes (pga heap,redo read buffer)
ORA-04030: out of process memory when trying to allocate 8168 bytes (callheap,kcbtmal allocation)
Process ID: 81984
Session ID: 1 Serial number: 5
I have been facing this problem continuously for the last week on my production Oracle database. I have tried all kinds of parameter changes but I am still getting this error.
Oracle version -> 11g
O/S -> Windows Server 2003
RAM -> 11 GB
#########################INIT CONFIG###################
convdb.__db_cache_size=536870912
convdb.__java_pool_size=10485760
convdb.__large_pool_size=4194304
I have a stored procedure which uses BULK COLLECT, and the table has 16 nested tables within it. When I limit the number of rows processed it works fine; if I let it run with all 10,000 rows I get this:
ERROR: ORA-04030: out of process memory when trying to allocate 16408 bytes (koh-kghu call ,pmuccst: adt/record)
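For what it's worth, the usual way to keep this kind of load inside a bounded amount of PGA is to fetch in batches with the LIMIT clause instead of collecting everything at once. A minimal sketch, with source_table and target_table as hypothetical stand-ins for the real objects:

DECLARE
  CURSOR c IS SELECT * FROM source_table;          -- hypothetical source
  TYPE t_rows IS TABLE OF c%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;   -- bounds memory per batch
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target_table VALUES l_rows(i);   -- hypothetical target
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/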
How can I determine whether a function is worth pinning in memory? I want to come up with a percentage, the idea being that if the function is already in memory 80%+ of the time then pinning is not worth it.
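One rough way to get at that percentage (a sketch; the owner and function name below are placeholders, and DBMS_SHARED_POOL has to be installed) is to compare how many times the object has been loaded versus executed in v$db_object_cache, and only pin it if it keeps getting reloaded:

SELECT name, type, kept, loads, executions, sharable_mem
FROM   v$db_object_cache
WHERE  owner = 'APP_OWNER'            -- placeholder schema
AND    name  = 'MY_FUNC'              -- placeholder function
AND    type IN ('FUNCTION', 'PACKAGE');

-- if LOADS keeps climbing relative to EXECUTIONS, consider:
EXEC dbms_shared_pool.keep('APP_OWNER.MY_FUNC', 'P');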
NAME          TYPE         VALUE
------------- ------------ ------
lock_sga      boolean      FALSE
pre_page_sga  boolean      FALSE
sga_max_size  big integer  3G
sga_target    big integer  2G
From what I have read, I believe this will initially grab 2 GB of memory on startup and can grow up to a total of 3 GB for the SGA. The total memory can be reallocated between the different pieces of the SGA when needed but will never exceed 3 GB. Is this correct, or would these settings infringe on the available memory of a system that is already tight on memory?
Secondly, what happens if both these values are set to the same value?
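For illustration, with these settings sga_target can be resized online, but only up to sga_max_size (a sketch; the second statement is expected to be rejected because it exceeds sga_max_size):

ALTER SYSTEM SET sga_target = 3G SCOPE = MEMORY;   -- allowed: within sga_max_size
ALTER SYSTEM SET sga_target = 4G SCOPE = MEMORY;   -- should fail: above sga_max_size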
TFIDF_TABLE
ID  | TERMS
309 | 'computer,phone,mp3....'
Now I want to add the TERMS column of TERMS_TABLE to the TERMS column of TFIDF_TABLE, but if TFIDF_TABLE already contains a term from TERMS_TABLE then that term should not be inserted into NEW_TFIDF_TABLE. The result should be:
NEW_TFIDF_TABLE
ID  | TERMS
309 | 'computer,phone,mp3....,hardware,software'
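A possible PL/SQL sketch for this merge, assuming TERMS_TABLE, TFIDF_TABLE and NEW_TFIDF_TABLE all have (ID, TERMS) with comma-separated terms, and that every ID in TERMS_TABLE also exists in TFIDF_TABLE:

DECLARE
  v_result VARCHAR2(32767);
BEGIN
  FOR src IN (SELECT id, terms FROM terms_table) LOOP
    SELECT terms INTO v_result FROM tfidf_table WHERE id = src.id;
    -- walk the comma-separated list coming from TERMS_TABLE
    FOR tok IN (SELECT REGEXP_SUBSTR(src.terms, '[^,]+', 1, LEVEL) AS term
                FROM   dual
                CONNECT BY REGEXP_SUBSTR(src.terms, '[^,]+', 1, LEVEL) IS NOT NULL) LOOP
      -- append only the terms that are not already present
      IF INSTR(',' || v_result || ',', ',' || tok.term || ',') = 0 THEN
        v_result := v_result || ',' || tok.term;
      END IF;
    END LOOP;
    INSERT INTO new_tfidf_table (id, terms) VALUES (src.id, v_result);
  END LOOP;
  COMMIT;
END;
/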
The value is an aggregate year-to-date figure, and I was wondering what the best method is of splitting this data out into a monthly figure so that it would look like below:
Year  Month  Mth Value
2011  01     15
2011  02     11
2011  03     8
2011  04     9
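A simple sketch of one way to do that with LAG, assuming the source sits in a table called ytd_figures (a hypothetical name) with one cumulative YTD value per year and month:

SELECT year,
       month,
       ytd_value
       - NVL(LAG(ytd_value) OVER (PARTITION BY year ORDER BY month), 0) AS mth_value
FROM   ytd_figures
ORDER  BY year, month;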
I intend to get, for every client, the start and end date of a contiguous range of days. For example, if the same client has two records, the first going from day 1 to day 5 and the second from day 3 to day 9, I intend to get a single record for this client indicating that the range starts on day 1 and ends on day 9.
SELECT 123 as CLI_ID, TO_DATE('20100101', 'YYYYMMDD') as DT_START, TO_DATE('20100105', 'YYYYMMDD') as DT_END FROM DUAL
UNION
SELECT 123 as CLI_ID, TO_DATE('20100208', 'YYYYMMDD') as DT_START, TO_DATE('20100321', 'YYYYMMDD') as DT_END FROM DUAL
UNION
SELECT 123 as CLI_ID, TO_DATE('20100219', 'YYYYMMDD') as DT_START, TO_DATE('20100228', 'YYYYMMDD') as DT_END FROM DUAL
UNION
SELECT 123 as CLI_ID, TO_DATE('20100227', 'YYYYMMDD') as DT_START, TO_DATE('20100405', 'YYYYMMDD') as DT_END FROM DUAL
UNION
SELECT 123 as CLI_ID, TO_DATE('20100901', 'YYYYMMDD') as DT_START, TO_DATE('20101013', 'YYYYMMDD') as DT_END FROM DUAL
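One way to collapse the overlapping ranges is the standard start-of-group technique; a sketch over the sample rows above, wrapped here as src:

WITH src AS (
  SELECT 123 AS cli_id, TO_DATE('20100101','YYYYMMDD') AS dt_start, TO_DATE('20100105','YYYYMMDD') AS dt_end FROM dual UNION ALL
  SELECT 123, TO_DATE('20100208','YYYYMMDD'), TO_DATE('20100321','YYYYMMDD') FROM dual UNION ALL
  SELECT 123, TO_DATE('20100219','YYYYMMDD'), TO_DATE('20100228','YYYYMMDD') FROM dual UNION ALL
  SELECT 123, TO_DATE('20100227','YYYYMMDD'), TO_DATE('20100405','YYYYMMDD') FROM dual UNION ALL
  SELECT 123, TO_DATE('20100901','YYYYMMDD'), TO_DATE('20101013','YYYYMMDD') FROM dual
),
flagged AS (
  SELECT cli_id, dt_start, dt_end,
         CASE WHEN dt_start <= MAX(dt_end) OVER (PARTITION BY cli_id ORDER BY dt_start
                                                 ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
              THEN 0 ELSE 1 END AS new_grp    -- 1 marks the start of a new range
  FROM src
),
grouped AS (
  SELECT cli_id, dt_start, dt_end,
         SUM(new_grp) OVER (PARTITION BY cli_id ORDER BY dt_start) AS grp
  FROM flagged
)
SELECT cli_id, MIN(dt_start) AS dt_start, MAX(dt_end) AS dt_end
FROM   grouped
GROUP  BY cli_id, grp
ORDER  BY cli_id, MIN(dt_start);

If ranges that merely touch on consecutive days should also be merged, compare against MAX(dt_end) + 1 instead.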
I'm calculating a Z score based on some simple numerical data thus:
create table t (id number, val number);
insert into t values(1, 1795);
insert into t values(2, 1753);
insert into t values(3, 1743);
insert into t values(4, 1876);
insert into t values(5, 1848);
[Code] .....
The logic is quite simple: calculate a moving average over the previous 12 rows, and a stdev over the same window; then subtract the prior row's moving average from the current value and divide by the prior row's stdev.
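(For context, the nested version of that logic presumably looks something like the sketch below; the table and column names follow the test data above.)

SELECT id, val,
       (val - LAG(mov_avg) OVER (ORDER BY id))
         / NULLIF(LAG(mov_sd) OVER (ORDER BY id), 0) AS z_score
FROM  (SELECT id, val,
              AVG(val)    OVER (ORDER BY id ROWS BETWEEN 11 PRECEDING AND CURRENT ROW) AS mov_avg,
              STDDEV(val) OVER (ORDER BY id ROWS BETWEEN 11 PRECEDING AND CURRENT ROW) AS mov_sd
       FROM   t);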
The issue is I want to expose this logic in a BI tool (OBI EE v10g), meaning I can't use the nested analytic functions. How can I achieve this logic in a single analytic pass? The SQL above took about 2 minutes to write this morning; I've then spent all day looking at user-defined aggregate functions but haven't even been able to get the first step, the moving average, working. I understand that I can probably create a UDAF to replicate the avg(val) over (order by id ROWS BETWEEN 11 PRECEDING AND 0 FOLLOWING) functionality, but I can't see how to bundle the logic for the other three steps of the calculation into it.
From what I've read, the ODCIAggregateMerge should allow me to combine different threads that can return the different values I need for the current row calculation. Is this correct?
The only example UDAFs I can find are either not relevant (STRAGG) or very simple (i.e. they don't appear to invoke multiple passes). I've also had a look at the COLLECT function, but again I can't see a way to use it.
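For what it's worth, the bare ODCIAggregate interface is small. The skeleton below is only a trivial user-defined SUM (hypothetical names, not the z-score logic itself), but it shows where ODCIAggregateIterate does the per-row work and where ODCIAggregateMerge combines the parallel contexts:

CREATE OR REPLACE TYPE my_sum_impl AS OBJECT (
  running_total NUMBER,
  STATIC FUNCTION ODCIAggregateInitialize(ctx IN OUT my_sum_impl) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateIterate(self IN OUT my_sum_impl, value IN NUMBER) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateMerge(self IN OUT my_sum_impl, ctx2 IN my_sum_impl) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateTerminate(self IN my_sum_impl, return_value OUT NUMBER, flags IN NUMBER) RETURN NUMBER
);
/
CREATE OR REPLACE TYPE BODY my_sum_impl IS
  STATIC FUNCTION ODCIAggregateInitialize(ctx IN OUT my_sum_impl) RETURN NUMBER IS
  BEGIN
    ctx := my_sum_impl(0);
    RETURN ODCIConst.Success;
  END;
  MEMBER FUNCTION ODCIAggregateIterate(self IN OUT my_sum_impl, value IN NUMBER) RETURN NUMBER IS
  BEGIN
    self.running_total := self.running_total + NVL(value, 0);        -- per-row work
    RETURN ODCIConst.Success;
  END;
  MEMBER FUNCTION ODCIAggregateMerge(self IN OUT my_sum_impl, ctx2 IN my_sum_impl) RETURN NUMBER IS
  BEGIN
    self.running_total := self.running_total + ctx2.running_total;   -- combine parallel contexts
    RETURN ODCIConst.Success;
  END;
  MEMBER FUNCTION ODCIAggregateTerminate(self IN my_sum_impl, return_value OUT NUMBER, flags IN NUMBER) RETURN NUMBER IS
  BEGIN
    return_value := self.running_total;
    RETURN ODCIConst.Success;
  END;
END;
/
CREATE OR REPLACE FUNCTION my_sum (input NUMBER) RETURN NUMBER
  PARALLEL_ENABLE AGGREGATE USING my_sum_impl;
/
-- usage: SELECT my_sum(val) FROM t;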
I'm trying to build a materialized view with an aggregate and FAST REFRESH for INSERT, UPDATE and DELETE, with no success, yet nothing I have read on the web says it is impossible.
What are the best practices for maintaining aggregate columns? Suppose I have the following 2 tables:
DROP TABLE TEST.ORDERS_DET_T;
DROP TABLE TEST.ORDERS_T;

CREATE TABLE TEST.ORDERS_T
(
  ID         NUMBER        NOT NULL PRIMARY KEY,
  ORDER_CODE VARCHAR2(100) NOT NULL,
[code]....
I want the following test script to act in the way described in the comments. Basically this means that the sum of TEST.ORDERS_DET_T.ORDER_DET_SUMA must be equal to TEST.ORDERS_T.ORDER_SUMA after the transaction commits.
INSERT INTO TEST.ORDERS_T(ID, ORDER_CODE, ORDER_SUMA) VALUES (1, 'FRUITS', 100);
INSERT INTO TEST.ORDERS_DET_T(ID, ORDERS_T_ID, ORDER_DET_CODE, ORDER_DET_SUMA) VALUES (2, 1, 'APPLES', 40);
INSERT INTO TEST.ORDERS_DET_T(ID, ORDERS_T_ID, ORDER_DET_CODE, ORDER_DET_SUMA) VALUES (3, 1, 'PEAT', 60);
COMMIT; -- SHOULD BE OK, 40 + 60 = 100
[code]....
P.S. Creating views based on ORDERS_T and ORDERS_DET_T is not an option if the users are still allowed to modify data in the tables directly (as in the test scenarios). This is because of the current business situation, where the client has 2 teams (an outsourced team for the back-office solution and an in-house team for the web solution) that are accessing and modifying the data in the same tables.
I am trying to use RANK() with a windowing clause... is it possible to use both together?
like
select col1, col2, col3,
       RANK() OVER (ORDER BY col3 desc RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) RK
from table t
but I get this error in SQL Developer:
ORA-00907: missing right parenthesis
00907. 00000 - "missing right parenthesis"
*Cause:
*Action:
Error at Line: 2 Column: 33
The reason why I need to rank within a window clause is because I have data like this:
Name  Marks  Quiz
Ali   10     1
John  20     1
Sara  30     1
John  40     2
Sara  50     2
Ali   20     2
... and so on
I want to rank them based on their cumulative sum of marks after every quiz; the ranking should be done in such a way that it only looks at the current row and the preceding rows,
like this
Name   Marks  Quiz  cumulative_marks  rk
Ali    10     1     10                4
John   20     1     20                3
Sara   30     1     30                2
Peter  100    1     100               1
John   40     2     60                3   ==> because John now has the third highest overall cumulative marks (60) after quiz 2
Sara   50     2     90                2   ==> because Sara now has the 2nd highest overall cumulative marks (90) after quiz 2
Ali    20     2     30                4   ==> because Ali now has the fourth highest overall cumulative marks (30) after quiz 2
I want to create a blood donor report, which consists of the donor's profile, donations_n_tests details, and donor_to_patient tracking. Each is a different table which I have imported into the layout.
Now the problem is that for the first layout I make via the Report Builder wizard, it offers me the choice of aggregate functions (like sum, count, avg, etc.), but for the rest of the tables that I import into this layout via the ADDITIONAL LAYOUT OPTION (report block), it does not offer me these summary functions.
Actually I can only configure aggregate functions via the report layout wizard; perhaps Report Builder is not as friendly as Forms Developer, where we can set a display item's calculation mode to summary and specify the column for the aggregate function.
I have a confusion with the MEMORY_TARGET and MEMORY_MAX_TARGET parameters. If I set SGA_TARGET and SGA_MAX_SIZE along with MEMORY_TARGET and MEMORY_MAX_TARGET, then how will Oracle manage the memory? Because as per my understanding, if we set MEM
When you create a MAV (materialized aggregate view), you automatically get a hidden column and an index. Here's an example:

drop user jon cascade;
grant dba to jon identified by jon;
conn jon/jon
create table emp as select * from scott.emp;
create materialized view mv1 enable query rewrite as
  select deptno, sum(sal) from emp group by deptno;
select object_name, object_type from user_objects;
select index_name, column_name from user_ind_columns where table_name = 'MV1';
select column_name, hidden_column from user_tab_cols where table_name = 'MV1';
select deptno, "SUM(SAL)", sys_nc00003$ from mv1;
I have a function that returns the situation for one month for some database. I need to implement it in some report medium for one year. The one year function works ok.
My problem is that when I try to make another function that runs the monthly function 12 times, I get the error "PLS-00653: aggregate/table functions are not allowed in PL/SQL scope". I am trying to get around some restrictions, and somehow up to this point things seem to be ok.
I tried to use a UNION with 12 blocks, but it works very slowly in the reporting environment, so now I want to try to make another function that runs the other function 12 times depending on the parameter.
Here is the code (there might be some name misuse since I had to change the names from the original code):
CREATE OR REPLACE FUNCTION anual(monthh in varchar2, year IN VARCHAR2)
  return anual_REP_var PIPELINED
is
  BR     anual_REP := anual_REP(NULL, NULL, NULL, NULL);
  contor NUMBER(2);
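A hedged sketch of the usual way around PLS-00653: rather than calling the monthly pipelined function directly in PL/SQL, the yearly function queries it through TABLE() in a cursor and pipes the rows on. The monthly function name (monthly_rep) and the record columns below are hypothetical stand-ins, since the original code is not shown in full:

CREATE OR REPLACE FUNCTION anual_year (year IN VARCHAR2)
  RETURN anual_REP_var PIPELINED
IS
BEGIN
  FOR m IN 1 .. 12 LOOP
    -- the table function is only legal inside a SQL statement, hence the cursor
    FOR r IN (SELECT * FROM TABLE(monthly_rep(TO_CHAR(m, 'FM00'), year))) LOOP
      PIPE ROW (anual_REP(r.col1, r.col2, r.col3, r.col4));   -- map the real columns here
    END LOOP;
  END LOOP;
  RETURN;
END;
/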