Please find the table below. The entire data set sits between DOCSTART and DOCEND, and within that it is further enclosed in BACCSTART/BACCEND blocks; this type of block is repeatable. For each block between BACCSTART and BACCEND I have to pick up every ABCD row (also repeatable), plus TOTAL (occurring once per block) and ACCNAME (occurring once per block), and form an XML for each such block.
At present I am using a FOR loop, but the performance is not up to the mark: there will be around 200 such blocks, the XMLs have to be formed within 15 seconds, and the loop currently takes around 53 seconds.
ROWNUM NAME VALUE
1 DOCSTART null
2 BACCSTART null
3 ABCD abcd
4 ABCD abcd2
5 PQRS pqrs
6 PQRS pqrs2
7 TOTAL 100
8 ACCNAME name
9 BACCEND null
10 BACCSTART null
11 ABCD abcd
12 ABCD abcd2
13 PQRS pqrs3
14 PQRS pqrs4
15 TOTAL 150
16 ACCNAME name
17 BACCEND null
18 DOCEND null
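For reference, here is a minimal set-based sketch of the kind of statement that avoids the row-by-row loop; the names src_rows, rn, name and val are assumptions standing in for the real table and columns. The idea is to tag every row with a running count of BACCSTART markers and then build one XML per block with XMLELEMENT/XMLAGG in a single query, which is usually much faster than looping over the rows one at a time.

-- tag each row with the block it belongs to, then aggregate per block
WITH tagged AS (
  SELECT rn, name, val,
         SUM(CASE WHEN name = 'BACCSTART' THEN 1 ELSE 0 END)
           OVER (ORDER BY rn) AS blk
  FROM   src_rows
)
SELECT XMLELEMENT("Block",
         XMLELEMENT("ACCNAME", MAX(CASE WHEN name = 'ACCNAME' THEN val END)),
         XMLELEMENT("TOTAL",   MAX(CASE WHEN name = 'TOTAL'   THEN val END)),
         XMLAGG(CASE WHEN name = 'ABCD' THEN XMLELEMENT("ABCD", val) END
                ORDER BY rn)
       ) AS block_xml
FROM   tagged
WHERE  blk >= 1        -- skip the rows before the first BACCSTART
GROUP  BY blk;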
While increasing the tablespace I am getting the error below. How do I handle this?
SQL> set lin 300
SQL> col TABLESPACE_NAME for a25
SQL> col FILE_NAME for a65
SQL> select TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE,sum(BYTES/1024/1024) MB
  2  from dba_data_files where TABLESPACE_NAME='SYSAUX' group by TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE order by sum(BYTES/1024/1024) DESC,file_name;
TABLESPACE_NAME              FILE_ID FILE_NAME                            AUT         MB
------------------------- ---------- ------------------------------------ --- ----------
SYSAUX                             3 /ora2/oradata/dbname/sysaux_01.dbf   NO         300
SQL> Alter database datafile 3 RESIZE 60000M;
Alter database datafile 3 RESIZE 60000M
*
ERROR at line 1:
ORA-01144: File size (7680000 blocks) exceeds maximum of 4194303 blocks
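For context on the error: ORA-01144 is the smallfile datafile limit - a single datafile cannot exceed 4194303 blocks, which with this database's 8 KB block size is roughly 32 GB, so a 60000M resize is rejected no matter how much disk is free. A hedged sketch of the usual workaround (the second file's name and sizes are assumptions based on the existing file):

-- resize only up to the per-file limit ...
Alter database datafile 3 RESIZE 32767M;

-- ... and add another datafile to SYSAUX for the rest of the space
Alter tablespace SYSAUX
  add datafile '/ora2/oradata/dbname/sysaux_02.dbf'
  size 10G autoextend on next 1G maxsize 32767M;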
I have a block of code that looks something like this; I'll write it in pseudo code to avoid pasting hundreds of lines of code:
IF <condition> THEN
  FOR record IN (select query here...) LOOP
    -- add data
  END LOOP;
END IF;

IF <condition> THEN
  -- do something
END IF;

IF <condition> THEN
  -- do something
END IF;
Now, in the 1st IF statement there is a FOR loop which builds up a tab type array - this takes at least an hour to execute, because the select query returns thousands of records.
The strange thing is that my log files show that, at the time the procedure was executed, the stuff in the 2nd IF statement ran almost straight away. Is this even possible?
If the loop in the 1st IF statement takes over an hour to finish, how is it possible for the 2nd IF statement to process straight away? My log files show the loop in the 1st IF statement running for a good hour, yet the 2nd IF statement was executed straight away.
I am confused by partitioned tables: when I select from a single partition of a table, how does Oracle scan the blocks? Does it scan all blocks of the table, or only the blocks of that single partition?
SQL> Explain Plan For
  2  Select Count(1) From Tb_Hxl_List Partition(p_L3);
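For what it's worth, the plan for the statement above answers the question: with an explicit PARTITION clause (or a predicate on the partition key) the optimizer prunes to that one partition, and the plan shows a single-partition operation (e.g. PARTITION LIST SINGLE or PARTITION RANGE SINGLE with matching Pstart/Pstop), so only that partition's blocks are scanned, not the whole table. The plan can be displayed right after the EXPLAIN PLAN:

SQL> Select * From Table(DBMS_XPLAN.DISPLAY);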
I am trying to create a SQL query which will check that various postcode formats are valid, but I am having trouble getting Oracle SQL to check for values within character ranges - for example, the following returns no rows, even though most of the postcodes I am dealing with start with 'P'.
select postcode from mytable.addresses where postcode like '[N-R]%' ;
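A note on why this returns nothing: Oracle's LIKE only understands the % and _ wildcards, so '[N-R]%' is matched literally and no postcode starts with a literal '['. A character range needs a regular expression; a minimal sketch of the same check:

select postcode
from mytable.addresses
where regexp_like(postcode, '^[N-R]');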
Below is the output of Tom Kyte's show_space script, which I have run on one of my indexes.
Unformatted Blocks ..................... 0
FS1 Blocks (0-25)  ..................... 0
FS2 Blocks (25-50) ..................... 102,936
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks        ..................... 28,615,887
Total Blocks............................ 28,748,800
Total Bytes............................. 235,510,169,600
Total MBytes............................ 224,600
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 233
Last Used Ext BlockId................... 1,574,409
Last Used Block......................... 12,800

PL/SQL procedure successfully completed.
If I look at the unformatted blocks, it's zero, which tells me that data is being placed into every block (pretty well compressed). But what I don't understand is why there are 102,936 blocks that are only 25-50% full. I would have expected to see some blocks in the 75%-100% full range, as this index was dropped and rebuilt only 2 months ago.
This index is on a partitioned table, where the 90th-day and higher partitions are dropped daily.
Here is the layout of the index:
CREATE INDEX T1.FEAT_IDX ON T1.FEAT
  (R_SEQ, SYSTEM_NAME, FEATURE, FLAG)
  NOLOGGING
  TABLESPACE TB1
[Code] .........
What would I need to do to get the value of FS4 (# of blocks 75%-100% used) higher?
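In case it helps, here is a hedged sketch of two follow-ups I am considering (index name taken from the DDL above). VALIDATE STRUCTURE locks the table against DML while it runs and will take a long time on an index of this size, so it is not something to run casually in production:

-- populate INDEX_STATS to see how full the leaf blocks really are
ANALYZE INDEX T1.FEAT_IDX VALIDATE STRUCTURE;
SELECT lf_blks, pct_used, del_lf_rows FROM index_stats;

-- if the space is left behind by the daily partition drops, a coalesce
-- packs adjacent sparse leaf blocks without a full rebuild
ALTER INDEX T1.FEAT_IDX COALESCE;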
I am interested in the fastest way to access all the data in a physical block. What is the quickest way to fetch a block's data using the rowid? I found this script, but I wonder whether I can have faster access:

select *
from table_name t
where rowid between 'AAAUaOAAEAAHkJiAAA' and 'AAAUaOAAEAAHkJiAA8';

where 'AAAUaOAAEAAHkJiAAA' is the first row in the block and 'AAAUaOAAEAAHkJiAA8' is the last one.
My question is: can I retrieve all the data in one block more quickly than with this query?
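For comparison, a sketch of the same single-block access built with DBMS_ROWID, so the rowid range is computed instead of hard-coded; the object number (83598), relative file number (4) and block number (2065954) are made-up placeholders, normally taken from DBMS_ROWID.ROWID_OBJECT / ROWID_RELATIVE_FNO / ROWID_BLOCK_NUMBER of a known rowid. It still performs the same rowid range scan, which is already about as direct as SQL access to one block gets:

select *
from table_name t
where t.rowid >= dbms_rowid.rowid_create(1, 83598, 4, 2065954, 0)  -- first row of the block
  and t.rowid <  dbms_rowid.rowid_create(1, 83598, 4, 2065955, 0); -- start of the next block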
We are maintaining a DR copy of our database server (Oracle 10g R2 atop SUSE SP1 Linux) using Platespin ([URL]......).
Platespin is set to replicate (block-based) the incremental data (delta) every 1.5 hours from Production to the DR site over a 30 Mbps dedicated fiber link.
Our maximum data change per day (during business hours) won't exceed 300 MB, yet during business hours Platespin replicates at least 1 GB in every replication cycle, and during off hours it replicates 300 to 500 MB per cycle. We are facing this strange issue with this box only (SLES 10 SP1 + Oracle 10g R2); we have protected MS Exchange 2007 Server based workloads without it, i.e. in the case of Exchange only the delta replicates from the production server to the DR site via Platespin.
Platespin support tells us that Oracle re-indexes its database for better performance, so it is possible that the re-indexing causes block-level changes on the storage, and since Platespin works at block level, that is why it replicates so much (even though the data has not changed that much).
Here are the actual words of Platespin support:
<snip>
I think whenever Oracle database Indexing happens, it changes almost most of the Blocks of database and Platespin replicate all those Blocks.
As you know, Platespin checks the Date/Time attribute of every blocks before replication and if Date/Time attribute changes from last replication, it considers as changed block and replicate those blocks on Platespin Appliance. So, my suggestion is just look into the Oracle server behaviour before/after Data indexing process and do needful or do some workaround to overcome this issue.
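One cross-check we are considering (our own idea, not something Platespin suggested) is to compare the block changes and redo Oracle has actually generated with the volumes Platespin ships, to see whether the database really is rewriting that many blocks between cycles:

-- redo size is in bytes since instance startup; db block changes is a count
select name, value
from   v$sysstat
where  name in ('redo size', 'db block changes');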
I understand what empty blocks are and how to remove them. What I'd like to do is not have empty blocks in the first place when loading a table. I load a lot of "static" tables and would like to have no wasted space at the end, with minimal shenanigans.
I've set PCTFREE to 0, set INITIAL close to the final table size, set NEXT to 1M, and set PCTINCREASE to 0; the block size is 8k.
Yet I still need to at least do an ALTER TABLE ... DEALLOCATE UNUSED.
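For reference, a condensed sketch of the settings described above (the table name, columns and the 48M figure are placeholders), ending with the trailing trim that still seems to be needed:

create table static_codes (
  code   varchar2(10),
  descr  varchar2(100)
)
pctfree 0
storage (initial 48M next 1M pctincrease 0);

-- ... load the data ...

alter table static_codes deallocate unused;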
I have two blocks, both multi-record blocks. The 1st block is a control block and the second is a database block. Both blocks have the same fields (for example: Location, Location_Name, Location_Type). In the first block (the control block) I have a check box. My goal is: when I check the checkbox and click the Add button, all the records selected in the first block should go to the second block.
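For context, this is a hedged sketch of the WHEN-BUTTON-PRESSED logic I have in mind; the block names CTRL_BLK and DB_BLK, the checkbox item CHK and the VARCHAR2 sizes are placeholders for the real names:

DECLARE
  v_loc       VARCHAR2(100);
  v_loc_name  VARCHAR2(100);
  v_loc_type  VARCHAR2(100);
BEGIN
  GO_BLOCK('CTRL_BLK');
  FIRST_RECORD;
  LOOP
    IF CHECKBOX_CHECKED('CTRL_BLK.CHK') THEN
      -- capture the checked row, then append it to the database block
      v_loc      := :CTRL_BLK.LOCATION;
      v_loc_name := :CTRL_BLK.LOCATION_NAME;
      v_loc_type := :CTRL_BLK.LOCATION_TYPE;
      GO_BLOCK('DB_BLK');
      LAST_RECORD;
      CREATE_RECORD;
      :DB_BLK.LOCATION      := v_loc;
      :DB_BLK.LOCATION_NAME := v_loc_name;
      :DB_BLK.LOCATION_TYPE := v_loc_type;
      GO_BLOCK('CTRL_BLK');
    END IF;
    EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
    NEXT_RECORD;
  END LOOP;
END;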
We have been suffering from very bad application response for the last few days. When I tried to check and drill down into where the actual contention is, I found that there may be contention on data blocks, which may be the prime reason for the degraded performance. Herewith I'm pasting the actual stats gathered from v$waitstat. I went through some AskTom docs and found that there may be a problem with the freelist or with segment space management. My data tablespace uses segment space management = MANUAL.
My main questions are:
1) Should I increase the FREELISTS value (right now it is 1)?
2) Or do I have to move to segment space management = AUTO?
SQL> select * from v$waitstat;
CLASS                   COUNT       TIME
------------------ ---------- ----------
data block               2022       4052
sort block                  0          0
save undo block             0          0
segment header              1          1
save undo header            0          0
free list                   0          0
extent map                  0          0
1st level bmb               0          0
2nd level bmb               0          0
3rd level bmb               0          0
bitmap block                0          0
bitmap index block          0          0
file header block           0          0
unused                      0          0
system undo header          0          0
system undo block           0          0
undo header                 6          0
undo block                  0          0

18 rows selected.
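For completeness, hedged sketches of the two options in questions 1) and 2) above (segment and file names are placeholders). FREELISTS can be raised in place on an MSSM segment, whereas segment space management is a tablespace attribute that cannot be altered, so moving to ASSM means a new tablespace plus a segment move (and an index rebuild afterwards):

-- option 1: more freelists on the hot segment
alter table my_hot_table storage (freelists 4);

-- option 2: an ASSM tablespace and a move into it
create tablespace data_assm
  datafile '/ora2/oradata/dbname/data_assm_01.dbf' size 10G
  extent management local
  segment space management auto;

alter table my_hot_table move tablespace data_assm;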
I have a master-detail form; both are database blocks.
I have inserted values for the master block but not for the detail record. My problem is that the user should not be allowed to move to the next record of the master block before saving the current record.
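For what it's worth, the sort of thing I am picturing is a KEY-NXTREC trigger on the master block along these lines (a sketch only; the FORM_STATUS check could be narrowed to the block's own status):

begin
  if :system.form_status = 'CHANGED' then
    message('Please save the current record before moving to the next one.');
    raise form_trigger_failure;
  end if;
  next_record;
end;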
We have two data blocks, Earnings and Deductions. We need to export them to Excel, side by side in a single sheet [imagine your payslip format].
If we use normal TEXT_IO we are not able to get the result we want, so we tried a package called export2excel and achieved what we wanted. The form works perfectly in the client/server setup, but when we move the same form to our Unix application server, it does not work.
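One suspicion (our assumption, not confirmed): on the web-deployed form TEXT_IO writes files on the Unix application server rather than on the user's PC, so a file meant for the client has to go through WebUtil's CLIENT_TEXT_IO instead. A minimal sketch, assuming WebUtil is attached to the form; the path is a placeholder:

declare
  f client_text_io.file_type;
begin
  f := client_text_io.fopen('C:\temp\payslip.csv', 'W');
  client_text_io.put_line(f, 'Earnings,Amount,Deductions,Amount');
  -- ... write the Earnings and Deductions rows side by side ...
  client_text_io.fclose(f);
end;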
I have this error. To put it simply, I have two blocks.
Block1 contains two drop-down lists with PL/SQL statements for queries. Block2 contains a tabular form created with the block wizard (I also tried building it manually) that will catch the results of the Block1 queries.
Now I have a button with a WHEN-BUTTON-PRESSED trigger that contains:
BEGIN
  INSERT INTO dummy1 VALUES ('hello', 1, 2, 3);
  COMMIT;
END;
My goal is to insert into dummy1 the values from :block2.item_name1, :block2.item_name2 and :block2.item_name3, but to put it simply I tried the hard-coded values above and still received the same error.
When I run the form and click the button straight away, the values are added to the dummy1 table; but when I execute the Block1 drop-down list queries first and then press the button, I receive the error.
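One guess at the cause (only a guess, since I have not pasted the exact error text): COMMIT inside a Forms trigger runs full commit processing, which also tries to validate and post the queried database block, and that may be where it fails. If that turns out to be it, issuing the commit through FORMS_DDL keeps the button's insert out of the form's own commit processing - a sketch:

BEGIN
  INSERT INTO dummy1 VALUES ('hello', 1, 2, 3);
  FORMS_DDL('COMMIT');
END;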
I have 2 blocks, e.g. master-detail, and I have created them manually. How do I insert values through these blocks using a button called "save" and COMMIT_FORM? Suppose the form fields are:
master block: emp_id, emp_name, age
detail block: address, city
where emp_id is the primary key of the master table as well as a foreign key in the detail table.
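A minimal sketch of what the "save" button's WHEN-BUTTON-PRESSED trigger could contain (the status checks are optional safety nets):

begin
  if :system.form_status = 'CHANGED' then
    commit_form;
    if :system.form_status <> 'QUERY' then
      message('Save failed - please correct the errors and try again.');
      raise form_trigger_failure;
    end if;
  else
    message('Nothing to save.');
  end if;
end;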
I have created two non-base-table text items (from_date, to_date). If no values are entered from the front end (i.e. they are null), a PRE-QUERY trigger hard-codes them to default values. But even when I do enter values from the front end and press the fetch button, it still takes the default values, i.e. it does not accept the values I entered.
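For reference, the shape of PRE-QUERY logic I believe should behave that way (the block name ctrl and the default values are placeholders); with the null guard, dates typed by the user are left alone and the defaults apply only when the items really are empty:

begin
  if :ctrl.from_date is null then
    :ctrl.from_date := trunc(sysdate, 'MM');   -- e.g. first day of the current month
  end if;
  if :ctrl.to_date is null then
    :ctrl.to_date := trunc(sysdate);
  end if;
end;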
I have 4 blocks in my form, which is used for travel booking for the employees of a company within India:
1) Header block: contains info about the person who is booking the tickets for a number of employees. Here I have made booking_no the primary key.
2) Employee Detail: here the basic info of an employee is entered. I have taken booking_no as a foreign key and made emp_cd plus booking_no a composite primary key.
3) Travel Detail: here the travel details of each individual employee are entered, and a unique trv_no is generated for every single travel. Again I have taken bkng_no from the 1st block and emp_cd from the 2nd block as foreign keys, and a composite primary key comprising bk_no, emp_cd and trv_no; this maintains uniqueness for a single travel.
4) Vehicle Hotel Details: this block is placed on a different canvas of the same form. It is meant for the other details, where the hotel, vehicle, etc. bookings after reaching the destination are entered. In this block there is no primary key, but I have taken the composite primary key of the 3rd block as a foreign key, since there will be multiple entries for this one entire travel.
At every level there will be multiple entries for each corresponding entered record. I am able to enter one single record properly, i.e. for one employee I can enter multiple travel details and his other requirements, but as soon as I try to enter more than one employee's info, travel details and other requirements, I get an error stating that a foreign key constraint was violated (parent key not found) for the 3rd-level block.
How can I get the desired output, where all the multiple records for every single subsequent record are stored correctly, taking all the constraints into consideration?
I receive the output from only one of the nested blocks:
bad
PL/SQL procedure successfully completed.
SQL>
I understand that I don't need nested blocks for the example above, but this was just a condensed version of what I'm trying to do. I think nesting blocks will be easier to read and maintain, instead of having a huge CASE statement.
How can I execute only the nested block for which the condition is true and ignore the nested blocks that follow?
Are nested blocks not the correct answer here? Should I be looking at invoking procedures/functions instead?
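For illustration, a condensed, runnable sketch of the shape I am aiming for: each nested block stays self-contained, but IF/ELSIF decides which single one runs, so the blocks after the matching branch are skipped (the variable, conditions and messages are placeholders):

declare
  v_mode varchar2(10) := 'B';
begin
  if v_mode = 'A' then
    declare
      v_msg varchar2(20) := 'block A ran';
    begin
      dbms_output.put_line(v_msg);
    end;
  elsif v_mode = 'B' then
    declare
      v_msg varchar2(20) := 'block B ran';
    begin
      dbms_output.put_line(v_msg);
    end;
  end if;
end;
/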
We have 5 tempfiles (each of 65 GB) allocated to the TEMP tablespace, and we are still running short of space. When I checked the TEMP segment usage, I could see a lot of FREE blocks. How do we release that space?
1 row selected. Further, when I checked the session details using the TEMP segment, I got the output below:
SELECT b.tablespace, b.segfile#, b.segblk#, b.blocks,
       a.sid, a.serial#, a.username, a.osuser, a.status
FROM   v$session a, v$sort_usage b
WHERE  a.saddr = b.session_addr
ORDER  BY b.tablespace, b.segfile#, b.segblk#, b.blocks;
TABLESPACE   SEGFILE#    SEGBLK#     BLOCKS        SID    SERIAL# USERNAME    OSUSER   STATUS
---------- ---------- ---------- ---------- ---------- ---------- ----------- -------- --------
TEMP            15001    3549184        576        475       1237 EQUIPMENT   infa     ACTIVE
TEMP            15001    4002368         64        796       4677 CRM         infa     ACTIVE
TEMP            15002     580608      20352        868        615 EDW         infa     ACTIVE
TEMP            15002    3962112        832         92       1065 EDWSTG      infa     ACTIVE
TEMP            15002    4021120        576       1236       7257 EQUIPMENT   infa     ACTIVE
TEMP            15003      23936         64        819       5586 EDW         infa     ACTIVE
TEMP            15003    3798400        832        855       1801 EDWSTG      infa     ACTIVE
TEMP            15004     205056      21632        795       8171 EDW         infa     ACTIVE
TEMP            15004    4031488        832        403       1299 EDWSTG      infa     ACTIVE
TEMP            15004    4131456        576         19       6802 EQUIPMENT   infa     ACTIVE
TEMP            15005    3617856        832       1166       6204 EDWSTG      infa     ACTIVE
TEMP            15005    3741760        576        862        953 EQUIPMENT   infa     ACTIVE
TEMP            15005    4042752      18176       1226       5379 CDM         infa     ACTIVE
13 rows selected. If I kill SID 1226, will those temp blocks (18,176 blocks) be released, and can other sessions then use that space?
There is one more column, SEGBLK#. What exactly does this column mean?
To reclaim the space, should I issue the command below?
SQL> alter tablespace TEMP coalesce;
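What I have found so far, hedged because it depends on the version: the FREE blocks reported for TEMP are cached sort extents that sessions reuse automatically, so they normally do not need to be released at all, and killing SID 1226 should simply return its 18,176 blocks to that cache for other sessions. COALESCE is aimed at dictionary-managed permanent tablespaces rather than a locally managed TEMP. On 11g and later the tablespace can be shrunk directly (the KEEP size and tempfile path below are placeholders):

-- 11g and later only
alter tablespace temp shrink space keep 10G;

-- or per tempfile
alter tablespace temp shrink tempfile '/ora2/oradata/dbname/temp01.dbf' keep 8G;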
If the data blocks in the buffer cache continuously get updated, such that they never reach the least recently used end of the LRU list, when will they be written to disk?
I have two blocks, blockA and blockB, both based on the same table, tableX.
When I query from blockA and it returns results, then go to blockB, edit some data and save, there is no problem. But when I don't execute a query from blockA, or the query returns zero results, and I then go to blockB, edit some data and save, I get this error. The cursor then goes to blockA's first item, so now I have to delete that empty row before saving.
I am working in Form Builder 6i. I have a header block A (a multi-record block) and a child block B. Both are base-table (non-control) blocks. My requirement is: if I put the cursor on any header record, I want to auto-populate the corresponding child records, as in a master-detail relation.
In which trigger do I need to write the logic to auto-populate the child records?
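For what it's worth, the simplest route seems to be a block relation created through the Data Block Wizard, which generates this coordination automatically; done by hand, the usual place is a WHEN-NEW-RECORD-INSTANCE trigger on header block A, along these lines (the item name header_id is a placeholder, and block B's WHERE clause is assumed to reference :A.header_id):

begin
  if :a.header_id is not null then
    go_block('B');
    execute_query;
    go_block('A');
  end if;
end;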
I have noticed that the Data Pump estimate is 9.902 GB, but when I check the size of the .dmp file it shows 1.44 GB.
Export: Release 11.2.0.1.0 - Production on Fri Apr 5 02:00:05 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** dumpfile=expdp_LVGITRN_30_24_050413.dmp directory=DP_DIR logfile=expdp_LVGITRN_30_24_050413.log full=y exclude=statistics
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 9.902 GB
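A note on the gap (my understanding, not taken from the log): ESTIMATE=BLOCKS sizes the job from allocated segment blocks, so half-empty, fragmented or compressed segments make it overshoot, while the .dmp file contains only the actual row data. An estimate based on optimizer statistics usually lands much closer; the sketch below simply adds that parameter to the same job:

expdp system/******** full=y exclude=statistics estimate=statistics directory=DP_DIR dumpfile=expdp_LVGITRN_30_24_050413.dmp logfile=expdp_LVGITRN_30_24_050413.log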