Forms :: Any Way To Speed Up Performance Of GO_RECORD Built-in
Mar 4, 2010
Is there any way to speed up the performance of the GO_RECORD built-in, or is there an alternative way to do it?
I have a table with nearly 30,000 rows and I would like to implement a text field that lets the user jump to a specified record. The only problem is that if they try to jump too far ahead it takes a long time to load (jumping from the beginning to the end of the 30,000 rows takes over a minute).
This problem doesn't arise if all the records, or up to the one they are jumping to, have been fetched already, but even if I fetch all records at the beginning it will still take a long time to initially load them.
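One hedged alternative, a sketch only (the block, item and column names EMP_BLOCK, RECORD_ID and :CTRL.JUMP_ID are made up), is to skip scrolling through fetched rows altogether and re-query the block at the record the user asked for, e.g. from a WHEN-BUTTON-PRESSED trigger:
DECLARE
  v_where VARCHAR2(200);
BEGIN
  -- Build a WHERE clause from the user's input and requery the block there.
  v_where := 'record_id = ' || TO_NUMBER(:CTRL.JUMP_ID);
  SET_BLOCK_PROPERTY('EMP_BLOCK', DEFAULT_WHERE, v_where);
  GO_BLOCK('EMP_BLOCK');
  EXECUTE_QUERY;
  SET_BLOCK_PROPERTY('EMP_BLOCK', DEFAULT_WHERE, '');  -- restore for later queries
END;
The trade-off is that the block then holds only the queried rows, so Previous/Next navigation no longer walks the full 30,000-row result set.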
View 10 Replies
Jun 2, 2011
I am gathering stats using the block below for some 3 million records, and there are 6 indexes on the table. What is the relevance of the value 4 here (i.e., method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4')? If I increase 4 to 250, will the stats be gathered any faster? My intention is to speed up the gathering of stats.
begin
  dbms_stats.gather_table_stats(
    ownname    => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'),
    tabname    => 'LEGAL_VIEW_TARGET',
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4',
    cascade    => TRUE
  );
end;
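For what it is worth, the SIZE value in METHOD_OPT sets the maximum number of histogram buckets per column, so raising 4 to 250 mainly changes histogram detail rather than elapsed time. The parameters that usually change gathering speed are ESTIMATE_PERCENT and DEGREE; a hedged sketch (the values are only illustrative):
begin
  dbms_stats.gather_table_stats(
    ownname          => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'),
    tabname          => 'LEGAL_VIEW_TARGET',
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE 4',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- or a small fixed percentage
    degree           => 4,                            -- gather in parallel, size to spare CPU
    cascade          => TRUE                          -- still gathers all 6 index stats
  );
end;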
View 12 Replies
View Related
Oct 24, 2013
How can I increase data retrieval / insertion speed? My database has more than 0.5 million records and the Forms sometimes respond very slowly.
View 8 Replies
View Related
Jul 31, 2011
I got the error below even though I did not call GO_RECORD myself: FRM-40738: Argument 1 to builtin GO_RECORD cannot be null.
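FRM-40738 simply means that something (often a KEY trigger, a menu item, or code called indirectly) invoked GO_RECORD with a NULL argument. A hedged sketch of a defensive wrapper; the item name :CTRL.TARGET_REC is made up:
DECLARE
  v_rec NUMBER := TO_NUMBER(:CTRL.TARGET_REC);
BEGIN
  -- Validate the target record number before it ever reaches GO_RECORD.
  IF v_rec IS NULL OR v_rec < 1 THEN
    MESSAGE('Enter a valid record number first.');
    RAISE FORM_TRIGGER_FAILURE;
  END IF;
  GO_RECORD(v_rec);
END;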
View 1 Replies
View Related
Aug 15, 2011
I have two design alternatives and need to understand how expensive (in terms of speed) one is compared to the other for a medium-size table (100K-200K records):
create table xyz
(
  f1 number       not null,
  f2 varchar2(20) not null,
  f3 number       not null,
  f4 varchar2(50),
[code]....
The idea is to optimize the design by using a single PK instead of the 3 keys, and there is a debate over whether searching a unique index field (2nd scenario) is the same speed as searching a PK field (1st scenario).
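For illustration only (column names follow the snippet above, table and constraint names are made up), the two scenarios would look something like this:
-- Scenario 1: surrogate primary key
create table xyz_pk (
  id number       constraint xyz_pk_pk primary key,
  f1 number       not null,
  f2 varchar2(20) not null,
  f3 number       not null,
  f4 varchar2(50)
);
-- Scenario 2: composite unique key on the natural columns
create table xyz_uk (
  f1 number       not null,
  f2 varchar2(20) not null,
  f3 number       not null,
  f4 varchar2(50),
  constraint xyz_uk_uq unique (f1, f2, f3)
);
Both constraints are enforced through a B-tree index, and a single-row lookup is a comparable unique scan in either case; the wider composite index simply takes more space and may be one level deeper, which is usually a marginal difference at 100K-200K rows.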
View 5 Replies
View Related
Feb 4, 2011
I've got a query running a select count(*) over a table. The default plan takes on the order of 15 minutes to return; a plan hinted to use a different index takes 3 minutes.
Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running autotrace against the two queries I see fairly different values, as one would expect.
Query
select count (*) from fulfilmentitem bfi where created >= sysdate-30 AND bfi.status = 'FA' AND bfi.fulfilmentmethod = 'D'
Slow run
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 33119 (1)|
| 1 | SORT AGGREGATE | | 1 | 15 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM | 12525 | 183K| 33119 (1)|
|* 3 | INDEX RANGE SCAN | IDX_FULFIL_METHODSTATUS | 250K| | 1786 (1)|
----------------------------------------------------------------------------------------------
[code]....
IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD & STATUS in that order.
IDX_BFI_CREATED is on CREATED and is approx 70% of the size of the other index
The row counts estimated in the explain plan are off; the count(*) actually comes in at 32.8K rows. As you will have seen, the fast run shows a pretty significant increase in consistent gets compared to the slow run, and a decent though not dramatic drop in physical reads.
My uncertainty is whether these changes in consistent gets/physical reads would typically be enough to explain the real-time improvements I'm observing, or whether other (albeit perhaps temporary) factors are involved. It is a production OLTP environment, so the data is changing rapidly and that may be a factor.
I know it can never be an exact science without intimately knowing the hardware, current load and so on, but there's enough experience on these boards to have a loose handle on whether the time shifts between the queries are likely to reflect the stat changes, or whether those differences alone typically wouldn't account for it.
I'm thinking about instructing the query to ignore its original plan, but I'm hesitant to do so without being a little more confident that it's not just a timing thing, or something other than the change of index, that is causing the improvement. On the autotrace stat changes observed alone I couldn't put my hand on heart and say "yup - that change is good, ignore the default index all the time for this job".
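For reference, the hinted variant being compared is presumably something like the sketch below (the hint text is only illustrative); depending on version, a stored outline or SQL plan baseline may be a less intrusive way to pin the plan than editing the SQL:
select /*+ index(bfi IDX_BFI_CREATED) */ count(*)
from   fulfilmentitem bfi
where  bfi.created          >= sysdate - 30
and    bfi.status            = 'FA'
and    bfi.fulfilmentmethod  = 'D';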
View 11 Replies
View Related
Jun 19, 2012
I have one form with master-detail blocks. I enter one record in the master block and a corresponding entry in the detail block. If I then enter a new record in the master block, the corresponding detail entry is of course cleared, since the block changes.
After entering the data in the master block I want to pop up a message asking whether the user wants to duplicate the same entry in the detail block. If yes, how can I copy the same details that I entered for the previous record? Can I use the DUPLICATE_RECORD built-in? If yes, how?
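DUPLICATE_RECORD only copies from the record with the next lower sequence number in the same block, so it cannot help once the detail block has been cleared for the new master record. A hedged sketch of an alternative (all block, item, alert and package names are made up): stash the detail values in package variables before the master changes, then offer to re-apply them:
DECLARE
  v_choice NUMBER;
BEGIN
  v_choice := SHOW_ALERT('ALT_COPY_DETAIL');   -- "Duplicate the same entry in detail block?"
  IF v_choice = ALERT_BUTTON1 THEN
    GO_BLOCK('DETAIL_BLK');
    CREATE_RECORD;
    :DETAIL_BLK.ITEM_CODE := pkg_copy.g_item_code;   -- values saved earlier, e.g. in a
    :DETAIL_BLK.QTY       := pkg_copy.g_qty;         -- WHEN-VALIDATE-RECORD trigger on the detail block
  END IF;
END;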
View 1 Replies
View Related
May 7, 2010
I have been testing a form using 10.1.2.0.2 against a 10.2.0.1 database, and in my local environment the form works correctly: if I make a change, POST it, then exit and press No when asked to save changes, the value in the database is correctly left as it originally was.
The process works by the user pressing a button in form A (a read-only form), which opens form B (using open_mode, session, activate). The user makes their changes in form B (a POST is issued in a When-New-Record-Instance trigger on a database block when the user navigates to a new record within the same block, if it is determined that the block status <> 'QUERY') before returning to form A and pressing No when prompted to save changes.
However, if I run the same process in the TEST environment, using the same executable against the same database, it actually updates the database value.
I have tested this by adding a debug message at the end of form B to retrieve the database value back AFTER having issued a CLEAR_FORM(NO_COMMIT), just for the sake of the test, and it still returns the 'new', i.e. amended, value - which is obviously incorrect. From what I can see it would appear that the commit occurs straight after the POST has been issued, well before the user even exits the form.
Is this a known bug with the POST built-in, or could a parameter be set to act in this way (i.e. is there an 'autocommit' setting that is ON) within the application server?
View 8 Replies
View Related
May 28, 2011
I have a client who is facing the above error. Everything used to work perfectly, but for the past week he has been getting this error message on one specific form. No updates have been made to the form.
The FMX files are stored on the database server (10g R3, Windows 2003 R2) and the client terminals access them through a network drive. When executing the offending form directly on the server, there is no error.
I call the DBMS_ERROR_CODE built-in from a form-level ON-ERROR trigger:
DECLARE
  ERR NUMBER;
BEGIN
  IF DBMS_ERROR_CODE = -1 THEN
    MSG('Client existant', ERR, 'E');
  ELSE
[code]....
Code for procedure MSG :
PROCEDURE MSG(TXT_MSG    IN VARCHAR2,
              RET_REP    OUT NUMBER,
              TYPE_MSG   IN CHAR DEFAULT 'I',
              LABEL_BUT1 IN VARCHAR2 DEFAULT 'OK',
              LABEL_BUT2 IN VARCHAR2 DEFAULT NULL,
              LABEL_BUT3 IN VARCHAR2 DEFAULT NULL) IS
  VL_ALERT ALERT;
BEGIN
[code]....
View 6 Replies
View Related
Jul 21, 2012
I still have a legacy application built with Forms 6i and Reports 6i running against an Oracle 10g database, on Windows XP clients and a Windows 2008 server. It seems to work well on a Windows 7 client (32-bit only), but I think the application still needs to be tested there properly.
View 11 Replies
View Related
Feb 16, 2013
How can I build a form (Oracle Forms 6i) that connects to Report Builder?
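In Forms 6i the usual route is the RUN_PRODUCT built-in with a parameter list; a hedged sketch (the report file name, parameter names and :CTRL.DEPTNO item are only examples):
DECLARE
  pl_id ParamList;
BEGIN
  pl_id := CREATE_PARAMETER_LIST('rep_params');
  ADD_PARAMETER(pl_id, 'PARAMFORM', TEXT_PARAMETER, 'NO');
  ADD_PARAMETER(pl_id, 'P_DEPTNO',  TEXT_PARAMETER, TO_CHAR(:CTRL.DEPTNO));
  RUN_PRODUCT(REPORTS, 'my_report.rdf', SYNCHRONOUS, RUNTIME,
              FILESYSTEM, pl_id, NULL);
  DESTROY_PARAMETER_LIST(pl_id);
END;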
View 2 Replies
View Related
Mar 18, 2011
How can I hide the Oracle built-in messages that appear on the message line at the bottom left of the window (just above the task bar)?
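Two common approaches, sketched below with example message codes only: raise :SYSTEM.MESSAGE_LEVEL so low-severity messages are suppressed, or trap specific codes in a form-level ON-MESSAGE trigger.
-- Option 1, e.g. in WHEN-NEW-FORM-INSTANCE: hide messages with severity below 5.
:SYSTEM.MESSAGE_LEVEL := '5';
-- Option 2, form-level ON-MESSAGE trigger (40350/40352 are just examples).
BEGIN
  IF MESSAGE_CODE IN (40350, 40352) THEN   -- "no records retrieved" / "last record retrieved"
    NULL;                                  -- swallow the built-in message
  ELSE
    MESSAGE(MESSAGE_TYPE || '-' || TO_CHAR(MESSAGE_CODE) || ': ' || MESSAGE_TEXT);
  END IF;
END;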
View 2 Replies
View Related
Sep 10, 2012
How can I speed up INSERT INTO ... SELECT?
When I used create table a as select * from table1;
(for five rows only)
Elapsed: 00:00:02.19
it was faster than insert into a select * from table1;
Elapsed: 00:00:15.19
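CTAS is a direct-path operation, while a conventional INSERT ... SELECT generates full undo and redo and maintains indexes as it goes. A hedged sketch that gets closer to CTAS behaviour, assuming the restrictions of direct-path insert are acceptable for this copy:
-- Direct-path insert: loads above the high-water mark with minimal undo.
-- The table is locked against other DML until commit, and the inserting
-- session cannot read the new rows before committing.
insert /*+ append */ into a
select * from table1;
commit;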
View 15 Replies
View Related
Oct 7, 2013
We have a large table of equally sized data blobs in our Oracle system and we'd like to select the whole table into memory once. The corresponding tablespace is stored on fast SSD disks and is managed by ASM. However, the achievable select speed (reading the data into memory) in Oracle is not satisfactory. When we store the data on the SSD disks using custom methods (e.g. in SQLite DB files) and load them into memory with multithreading (8 threads), the speed is more than 15 times that of Oracle.
Is there any way to tune Oracle and ASM to speed up full-table selects in our case? We tried FULL TABLE SCAN and PARALLEL hints and DB_FILE_MULTIBLOCK_READ_COUNT too, but with no success.
FYI: our data blobs are about 4.2 KB each and DB_BLOCK_SIZE is 8 KB. The ASM segment is configured for AUTO-ALLOCATE. The table is partitioned by HASH. Our Oracle system is not RAC.
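One hedged angle, purely a sketch (the table and column names are made up, and whether the payload is a LOB column is an assumption, not something stated above): make sure the read really runs as a parallel full scan, and note that NOCACHE LOBs are read chunk by chunk with direct reads, so the LOB storage options may matter more than the scan itself.
-- Force a parallel full scan; degree 8 mirrors the 8 loader threads and is only an example.
select /*+ full(t) parallel(t, 8) */ *
from   blob_table t;
-- If the payload is a BLOB column, consider caching it (assumption, adjust names):
alter table blob_table modify lob (payload) (cache);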
View 1 Replies
View Related
Jul 1, 2010
This error occurs while trying to print a report in a PDF format using Oracle Application Express 3.0. I'm not sure what this means and how to fix it. I'm running Oracle 10g on Windows Vista.
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1186
ORA-12570: TNS: packet reader failure
This is the first time I'm attempting to print to a PDF. I did follow configuration according to the Oracle documentation. I selected the Standard configuration that uses the built-in Apache FOP or another standard XSL-FO processing engine.
View 6 Replies
View Related
May 30, 2012
I saw the HOST function being used in one procedure in the manner below:
v_status := host(v_cmd);
But when I searched for the HOST function, it was not available.
My question is: is there any predefined built-in function called HOST in Oracle 10g?
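For what it is worth, HOST exists as an Oracle Forms built-in procedure (and as a SQL*Plus command), but there is no predefined database function called HOST in 10g, so v_status := host(v_cmd) most likely calls a user-defined wrapper. A hedged sketch of the Forms built-in (the command string is just an example):
DECLARE
  v_cmd VARCHAR2(200) := 'cmd /c dir C:\temp > C:\temp\list.txt';
BEGIN
  HOST(v_cmd, NO_SCREEN);            -- built-in procedure, returns nothing
  IF NOT FORM_SUCCESS THEN
    MESSAGE('Host command failed');
  END IF;
END;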
View 1 Replies
View Related
Feb 13, 2007
Is there a built-in function to convert an Oracle CURSOR to VARCHAR? Or how about an XMLType to VARCHAR?
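For the XMLType half there are member functions for this; for a REF CURSOR there is no single cast, although DBMS_XMLGEN can serialise the result set to XML first. A hedged sketch:
DECLARE
  v_xml  XMLTYPE := XMLTYPE('<emp><name>Scott</name></emp>');
  v_text VARCHAR2(4000);
  v_clob CLOB;
BEGIN
  v_text := v_xml.GETSTRINGVAL();   -- errors once the document exceeds 4000 bytes
  v_clob := v_xml.GETCLOBVAL();     -- safe for large documents
  DBMS_OUTPUT.PUT_LINE(v_text);
END;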
View 1 Replies
View Related
Nov 19, 2012
Is it possible to have query rewrite using MV's built from remote tables?
11.2.0.2
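It can be set up; a hedged sketch is below (the object and database link names are made up). Note that because the freshness of remote data cannot be verified, rewrite against such MVs may only kick in under a relaxed QUERY_REWRITE_INTEGRITY setting.
create materialized view sales_remote_mv
  build immediate
  refresh complete on demand
  enable query rewrite
as
select prod_id, sum(amount) as total_amount
from   sales@remote_db
group  by prod_id;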
View 1 Replies
View Related
Jun 27, 2012
I was wondering if there is a substitution string (or something else) that could be used to return the name of the form item that currently has cursor focus?
View 3 Replies
View Related
Oct 4, 2013
We have taken an expdp export backup from the production database (the primary database in a Data Guard configuration).
1.) The impdp import is very slow, about 10 GB/hr, on the staging database (Data Guard MAXIMUM AVAILABILITY), even though the server configuration, database version and configuration, and operating system are all the same as production. There are no blocking, locking or waiting sessions.
2.) The impdp import is fast, about 90 GB/hr, on a standalone test database; this test database runs in NOARCHIVELOG mode on Oracle Standard Edition. Beyond that there is no difference.
CPU, memory, network and disk I/O all look normal while importing on both databases. Why is there that much difference on import?
View 1 Replies
View Related
Dec 19, 2011
I have a problem related to the performance and responsiveness of a customized application.
Suppose the application runs on 2 clients. Those 2 clients log on to the same database and the same mapped application folder, and the 2 PC configurations are the same.
The first client runs Windows XP, the second runs Windows 7. I have noticed that the second client (Win7) performs worse than the first one (WinXP); the application is visibly slower on the second client.
I know that Oracle Forms 6i on Windows 7 is desupported, and I know I should upgrade to 10g or change the OS to XP. But is that the only way to improve the performance, or is there something else I can do without changing anything (a patch, etc.)?
View 4 Replies
View Related
May 13, 2010
I have installed Oracle 10g on one system and Oracle Developer on another machine, i.e. I have different machines for the DB server and the application server. It all works excellently inside the company premises, but when I want to access my Oracle DB and application server from outside the company it gives me problems. How can I access the application (forms and reports) remotely from outside the company, against the same database?
View 2 Replies
View Related
Oct 10, 2011
I am trying to import some information from Excel into Oracle using OLE2 in Oracle Forms 6i, but it is very slow when importing around 10k lines. Is there anything I can do to optimize it? The code used follows...
application OLE2.Obj_Type;
workbooks OLE2.Obj_Type;
workbook OLE2.Obj_Type;
[Code]....
View 2 Replies
View Related
Sep 30, 2010
How does the length (width) of a column affect index performance?
For example if i had IOT table emp_iot with columns:
(id number,
job varchar2(20),
time date,
plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc...).
What performance increase could I expect if, in column JOB, I stored not the names but numeric codes identifying the job names?
E.g. I would store 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'.
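The gain would mostly be a narrower key, so more entries per leaf block and possibly one less B-tree level, at the cost of a join for display. A hedged sketch of the numeric-code variant (the lookup table and constraint names are made up):
create table job_lu (
  job_id   number       constraint job_lu_pk primary key,
  job_name varchar2(20) not null unique
);
create table emp_iot (
  id     number,
  job_id number constraint emp_iot_job_fk references job_lu,
  time   date,
  plan   number,
  constraint emp_iot_pk primary key (id, job_id, time)
) organization index;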
View 24 Replies
View Related
Jun 16, 2010
I have a question about database fragmentation. I know that fragmentation can degrade query performance: the blocks are distributed over many extents and scans take a long time, because the Oracle engine has to locate the address of each next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed I will re-create, move or rebuild the table or index, but first I want to know whether the degree of fragmentation is actually high.
Is there any useful script or query to do this, or any interesting Oracle system view?
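A hedged starting point (assuming an 8K block size and reasonably fresh statistics; MY_TABLE is a placeholder) is to compare what the segment has allocated with what the rows should roughly need; DBA_EXTENTS/USER_EXTENTS then shows how many extents the segment is spread across:
select t.table_name,
       t.num_rows,
       s.blocks                                  as allocated_blocks,
       t.blocks                                  as used_blocks_per_stats,
       round(t.num_rows * t.avg_row_len / 8192)  as rough_blocks_needed
from   user_tables   t
join   user_segments s on s.segment_name = t.table_name
where  t.table_name = 'MY_TABLE';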
View 2 Replies
View Related
Jun 16, 2011
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc...) and good storage?
Are 300 million rows in only one table, with 500K transactions/day, too much?
Simple database with simple schema.
How many records begin to be too many?
View 2 Replies
View Related
Nov 15, 2010
Testing our 9i to 11g upgrade, we've imported the entire DB onto the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure; it's only a couple.
Is there any possible reason why we'd have to re-install a procedure to correct a performance problem?
View 13 Replies
View Related
Apr 12, 2013
I need to check the package's performance and then improve it.
1. How can I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to remove all records, and I have observed that the delete takes a long time to remove all the records in the table (about 7,000,000 rows). This is a staging-like table that needs to be cleaned daily before new data is inserted into it. What can I use instead of DELETE?
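For question 1, DBMS_PROFILER (or SQL tracing via DBMS_MONITOR plus tkprof) gives per-statement timings inside a package. For question 2, a staging table that is emptied completely every day is the textbook case for TRUNCATE instead of DELETE; a hedged sketch (the table name is a placeholder):
begin
  -- DDL inside PL/SQL needs dynamic SQL; TRUNCATE resets the high-water mark,
  -- generates minimal undo/redo and commits implicitly (it cannot be rolled back).
  execute immediate 'truncate table staging_tbl';
  -- or keep the allocated extents for the next load:
  -- execute immediate 'truncate table staging_tbl reuse storage';
end;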
View 13 Replies
View Related
Aug 9, 2010
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment and, once the desired execution plan is achieved, adjust the 'statistics' so the optimizer follows that plan without hints.
Q1. If it is true what statistics do we adjust for influencing the execution plan and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname
from emp e, dept d
where e.deptno=d.deptno;
emp.empid, emp.deptno and dept.deptno have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. The type of join taking place is a nested loop.
Questions: With respect to the above query,
Q 2. If I want to make dept the driving table and emp the driven table then how can I adjust the statistics to achieve that?
Q 3. If I want to use a hash join instead of a nested loop join, then how can I adjust the statistics to achieve that?
I can put in the ORDERED and USE_HASH hints to effect this, but again I have heard that altering statistics is a more robust way to control an execution plan compared to hints.
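Adjusting the statistics usually means the DBMS_STATS SET_* procedures (SET_TABLE_STATS, SET_COLUMN_STATS, SET_INDEX_STATS). A hedged sketch only, with made-up numbers: changing a table's apparent size shifts the optimizer's join-order and join-method choices, but finding values that yield exactly the plan you want is trial and error.
begin
  dbms_stats.set_table_stats(
    ownname       => user,
    tabname       => 'DEPT',
    numrows       => 1000000,    -- pretend DEPT is large; purely illustrative
    numblks       => 50000,
    no_invalidate => false       -- invalidate cached cursors so new plans are parsed
  );
end;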
View 6 Replies
View Related
Dec 6, 2011
I have an issue with export (expdp).
When I export a user with the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.
expdp sys/xxxx directory=dbpdump dumpfile=expdp_trk_backup.dmp logfile=expdp_trk_backup.log exclude=statistics schemas=trk
Do I need to look into any memory parameters for this?
View 1 Replies
View Related