I have tried several alternative solutions, such as rearranging the order of tables in the join and moving WHERE conditions earlier, but with no success. It is a bottleneck, and I cannot add indexes to these tables in production. I want to change the approach in the subquery.
SELECT g.COLUMN1, g.COLUMN2, e.COLUMN3, g.COLUMN4,
       MIN(e.dat1) KEEP (DENSE_RANK FIRST ORDER BY date2 DESC) * -1,
       MIN(TO_CHAR(date3, 'dd-mm-yyyy'))
[code]....
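For what it's worth, a common way to restructure this kind of aggregate is to move the ranking into a subquery with ROW_NUMBER and filter on it, which sometimes gives the optimizer more room than MIN(...) KEEP (DENSE_RANK FIRST ...). This is a minimal sketch only, with hypothetical table names and an assumed join condition, and not an exact drop-in (ROW_NUMBER picks a single row per group, while KEEP aggregates over tied rows):

-- Rank each group's rows by date2 descending in a subquery, keep rank 1.
SELECT column1, column2, column3, column4,
       dat1 * -1,
       TO_CHAR(date3, 'dd-mm-yyyy')
FROM  (SELECT g.column1, g.column2, e.column3, g.column4,
              e.dat1, e.date3,
              ROW_NUMBER() OVER (PARTITION BY g.column1, g.column2,
                                              e.column3, g.column4
                                 ORDER BY e.date2 DESC) rn
       FROM   table_g g
       JOIN   table_e e ON e.key_col = g.key_col)  -- assumed join
WHERE  rn = 1;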
I have a view on base tables holding historical data for the previous 60 months (one table per month), combined with UNION ALL operators. Will creating indexes on those base tables improve performance, or will creating a primary key with DISABLE NOVALIDATE improve data retrieval?
The view has around 8 million rows and is used as a fact table with 4 dimension tables. A DTS package on the MS SQL side refreshes an OLAP cube by retrieving data from these tables in Oracle.
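One point on the DISABLE NOVALIDATE idea: a primary key created disabled and novalidated builds no index at all, so by itself it will not speed up retrieval; plain indexes on the filter/join columns of the monthly base tables are what queries through the UNION ALL view can actually use. A minimal sketch, with hypothetical table and column names:

-- An ordinary index on each monthly base table is what the view's
-- queries can actually use for retrieval.
CREATE INDEX hist_201201_dt_ix ON hist_201201 (txn_date);

-- A disabled, novalidated constraint creates no index; adding RELY only
-- gives the optimizer trust metadata (e.g. for query rewrite), not speed.
ALTER TABLE hist_201201
  ADD CONSTRAINT hist_201201_pk PRIMARY KEY (txn_id)
  RELY DISABLE NOVALIDATE;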
I want to tune the attached query. I have tried creating normal indexes and composite indexes on the fields. I feel that only a normal index is required for this instead of a composite index?
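For reference, the difference in DDL, with hypothetical table and column names since the query itself is in the attachment. A composite index helps only when its leading column(s) appear in the predicates; otherwise a normal single-column index is enough:

-- Normal index: useful when queries filter on cust_id alone.
CREATE INDEX orders_cust_ix ON orders (cust_id);

-- Composite index: useful when queries filter on cust_id and order_date
-- together (or on the leading column cust_id by itself).
CREATE INDEX orders_cust_dt_ix ON orders (cust_id, order_date);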
We have a DELETE statement that, when issued from the application, does not use the index, but when run from Toad or SQL*Plus as the same user it does use the index. The explain plan also shows the index being used. I ran a query on v$sql; below is the output (I have also attached it as a txt file). All the stats are up to date, and the developer confirmed that the bind variable :B1 uses the same datatype as column MAXMKY.
SQL_TEXT                      SQL_ID         DISK_READS   OPTIMIZER_HASH_VALUE
DELETE LOTA WHERE MAXMKY=:B1  2g2prrp3z56ah  19,099,189   1,846,735,884
DELETE LOTA WHERE MAXMKY=:B1  2g2prrp3z56ah  0            1,846,735,884

OPTIMIZER_COST  HASH_VALUE  PLAN_HASH_VALUE  MODULE  PARSING_SCHEMA_NAME
(values for these columns were truncated in the original output)
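One check worth running from the dictionary side: whether the bind really arrives from the application with the expected datatype, since an implicit conversion on MAXMKY would disable the index. A sketch against v$sql_bind_capture (10g and later; note that bind capture is sampled, so the values can be stale):

-- Captured bind datatype for each child cursor of the DELETE; a VARCHAR2
-- bind against a NUMBER column would explain the skipped index.
SELECT sql_id, child_number, name, datatype_string, value_string
FROM   v$sql_bind_capture
WHERE  sql_id = '2g2prrp3z56ah';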
My ERP application responds quickly when running reports or saving entries if Oracle 10g Express Edition (XE) is installed. But on Oracle 10g Enterprise Edition or Standard Edition, the same application runs very slowly.
I have created a program that accesses an Oracle database using Oracle Data Access (ODP.NET), .NET Framework 4, and Visual Studio 2010 in C#. The program runs without any problem. If I try to create a Windows service, I receive the error:
Service cannot be started. System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=2.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format. File name: 'Oracle.DataAccess, Version=2.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342'
when I execute the command
GlobalVariables.conn = new OracleConnection(oracelString);
Name some database tools with which I can check the SQL queries that my application is running.
NOTE: I do not want to check the queries that I execute at the SQL command prompt, but the queries that are being run by my application on the backend.
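A sketch of the kind of dictionary query that does this: join v$session to v$sql and filter on the application's MODULE (or PROGRAM/MACHINE), so only statements issued by the application show up, not your own SQL prompt sessions. The module name below is hypothetical:

-- What each application session is currently executing (10g+).
SELECT s.sid, s.serial#, s.module, s.program, q.sql_text
FROM   v$session s
JOIN   v$sql     q ON  q.sql_id       = s.sql_id
                   AND q.child_number = s.sql_child_number
WHERE  s.module = 'MyApp.exe';   -- hypothetical module name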
What could be the reasons that some queries execute quickly when run in SQL*Plus on the server, whereas the same queries run more slowly with the same input values fed from the application screen?
One issue, I guess, would be bind variable peeking when using the application, whereas executing from SQL*Plus causes hard parsing and thus gets rid of the "peeking".
If displaying the data on the application screen takes time after the data has been fetched, where can I see this delay?
I understand that the elapsed time under 'Fetch' in tkprof shows the time taken to fetch from the database, not the time taken to display the data in the application GUI.
Finally, how do I set the arraysize in JDBC to improve performance by reducing round trips?
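On comparing the two environments: one way to see exactly what the application session does, including bind values and wait events, is extended SQL trace via DBMS_MONITOR, followed by tkprof on the trace file. A minimal sketch; the SID and serial# are placeholders to be taken from v$session:

-- Trace one application session with waits and binds (10g+).
BEGIN
  DBMS_MONITOR.session_trace_enable(
    session_id => 123,     -- placeholder SID
    serial_num => 45678,   -- placeholder SERIAL#
    waits      => TRUE,
    binds      => TRUE);
END;
/
-- ...reproduce the slow screen from the application...
BEGIN
  DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 45678);
END;
/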
I'm an application developer at an automotive company, developing a lot of database-based applications with either Oracle Forms or C#. Since we've moved from a 10g RAC to 11g using a shared server configuration, the prevailing and overwhelming topic of ADDM performance analysis is an "unusual network wait event" caused by virtual circuit waits. Therefore I can no longer use Grid Control to detect bad SQL as I could in 10g, because all "tunable" SQL is drowned out by virtual circuit waits. In Top Activity, I see virtual circuit waits on every type of statement (SELECT, INSERT, ...) and on PL/SQL execution.
What do I have to do as an application developer to avoid virtual circuit waits? Especially in C#: we normally use auto-committed DML statements and SELECTs to fill either a DataTable or a generic list with a data reader. Usually we close the connection after each statement, but we are using connection pooling. How can such activity cause virtual circuit waits? In Oracle Forms, we seem to get a virtual circuit wait when we show sorted data in a block where not all records have been fetched from the database. It doesn't make sense to us to rewrite all blocks to always fetch all records, for performance reasons.
How do I have to write and execute my statements in C#, Oracle Forms and/or PL/SQL to avoid virtual circuit waits?
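Before changing application code, the scale of the problem can at least be quantified from the wait interface. A sketch (event name as it appears in 11g; time_waited is in centiseconds):

-- Cumulative shared-server virtual circuit waits for the instance.
SELECT event, total_waits, ROUND(time_waited / 100) AS seconds_waited
FROM   v$system_event
WHERE  event LIKE '%virtual circuit%';

One commonly suggested mitigation, which may or may not fit this setup, is to route the busiest application connections to dedicated servers (SERVER=DEDICATED in the connect descriptor) while leaving shared server in place for the rest.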
Unless I'm misunderstanding this error, it means that it cannot find the ACTION part that is attached to the WHAT part of this Dynamic Action? The Dynamic Action does work when the application is run (in Development). Also, there are 3 others that are similar to this one. The export was created by the export utility in the Application Builder.
If I export only the page and import that into Production, the import is successful and the page runs correctly. This error happens only when I try to import the entire application. There are many other changes made, which is why I was trying to do an application export/import instead of individual pages.
I have a query that is optimized as to its indexes and so on; it returns immediately when the answer is a few records, in SQL Server as well as Oracle. However, when the result is large, e.g. 20,000 records, the data access times are very different. The query returns many fields (about 20), some of type VARCHAR(250) and some VARCHAR(2000), and I understand the problem may lie here; but for similar results (20,000 records) SQL Server runs in 2 seconds, while Oracle takes around 30 seconds to return the full data. The problem really is in bringing back all these fields, since the same query returning only a numeric field completes in 2 seconds. I have run the tests through ODBC, in Toad, and in the Oracle console on the server itself, so it is not a driver problem or the flow of data through the network. I would like to think there is some setting that explains this much difference between Oracle and SQL Server. The databases are Oracle 10g and SQL Server 2008.
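One client-side setting worth ruling out before blaming the engine: the fetch array size. SQL*Plus, for instance, fetches only 15 rows per round trip by default, so a 20,000-row result pays for over a thousand network round trips. A quick test sketch in SQL*Plus (the table name is a hypothetical stand-in for the real query):

SET AUTOTRACE TRACEONLY STATISTICS
SET ARRAYSIZE 15
SELECT * FROM big_wide_table;   -- hypothetical stand-in

SET ARRAYSIZE 500
SELECT * FROM big_wide_table;
-- Compare "SQL*Net roundtrips to/from client" between the two runs.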
A few days ago, my database server lost access to its storage box; after a reboot it worked fine again. But now the DB import process is too slow. Previously, a 100 GB DB import completed within 10 hours when the server was running normally. Now it has been running for 2 days and has not completed.
How do I investigate this issue? Maybe I need to increase some parameters on the server or in Oracle?
Here is some brief info about my server:
RAM is 16GB, SWAP size is 16GB, CPU 12 cores
SQL> show sga;
Total System Global Area 4294967296 bytes
Fixed Size                  1984144 bytes
Variable Size             369105264 bytes
Database Buffers         3909091328 bytes
Redo Buffers               14786560 bytes
Looking to understand the difference between instance tuning and database tuning.
What is the difference between these two tuning exercises? I understand that an instance consists of memory-based (logical) structures, whereas a database consists of physical structures.
However, how does one tune the database's physical structure? Does it have to do with file placement, block sizes, etc.? Would you agree that a lot of that is taken care of by ASM now in 11g? What tools are required/available (third party as well as Oracle-supplied) for these types of tuning scenarios?
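As a concrete example of the instance side, Oracle ships advisor views that estimate the effect of resizing memory components, while the database/physical side is more about file placement, block size, and segment layout. A sketch against one such view (10gR2 and later):

-- Estimated physical reads at different SGA sizes.
SELECT sga_size, sga_size_factor, estd_physical_reads
FROM   v$sga_target_advice
ORDER  BY sga_size;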
I have a procedure which mainly runs queries on a table which has nearly 9.5 million records. This procedure takes nearly 15 minutes to complete execution on our main database. I exported and imported the schema to our backup database, and the same procedure took just 3 seconds to complete.
I tried to analyze the table in our main database and executed the procedure again, but it did not show any improvement:
ANALYZE TABLE DN_ACTIONS COMPUTE STATISTICS;
I am not sure whether computing statistics for all the tables in the schema will help. I have also checked that there is enough disk space where the Oracle data files are stored, and I am turning on SQL trace to see which SQL statements in the procedure take the longest.
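Worth noting: ANALYZE is deprecated for gathering optimizer statistics, and DBMS_STATS is the supported route; it also covers indexes and histograms. A minimal sketch for the table in question (the schema name is a placeholder):

-- Gather table, column, and index statistics for DN_ACTIONS.
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname    => 'APP_OWNER',   -- placeholder schema
    tabname    => 'DN_ACTIONS',
    cascade    => TRUE,          -- include indexes
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/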
I am trying to use APEX (as a substitute for my PHP forms-based application) for a new client. Can I use APEX and Oracle XE in production without spending any money on licenses from Oracle, i.e. for the listener, web server, etc.?
I have already set up APEX and it works fine on my laptop; I am able to see and work with the forms in the browser. So can I take the same setup to a server for production?
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query to update records. It runs for 13 hours, which is quite a long time.
SQL> explain plan for MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
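One thing to verify with a parallel MERGE: the hint only parallelizes the DML itself if parallel DML has been enabled in the session first; otherwise only the query (USING) side runs in parallel and the update stays serial. A sketch:

-- Must be set before the MERGE in the same session.
ALTER SESSION ENABLE PARALLEL DML;
-- Then run the MERGE as posted above, and afterwards confirm with:
SELECT * FROM v$pq_sesstat WHERE statistic = 'DML Parallelized';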
I will have to perform an Oracle 9i database refresh from the production server to the integration server. The 5 biggest schemas must be exported and imported; they constitute 97% of the space used in the database. This is a very big database, so I would like to be sure that everything will go smoothly. That is why I want to ask you some questions.
Have you got any clues for me before I start with exp/imp? From my side, I will tell you that I will have to exp/imp schema by schema, because there is little space for a dump on both the production and integration disks. The first thing I thought of was dependencies between the schemas that are exported and those that are not, and also between the schemas that are exported/imported one by one.
This is the procedure that I plan:
For every schema that is to be refreshed:
1. Export the schema with ROWS=N CONSTRAINTS=Y.
2. Export the schema with ROWS=Y CONSTRAINTS=N.
3. Import the schema from step 1.
4. Disable all the foreign key constraints using ALTER TABLE ... DISABLE CONSTRAINT.
5. Import the schema with rows.
Then, after the loop: ALTER TABLE ... ENABLE CONSTRAINT.
With the above procedure I think I will avoid problems with dependencies between the schemas exported/imported one by one. But my concern is whether there are any dependencies between those schemas and the schemas that are not exported. Is there a way to check this before the refresh?
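Assuming the dependencies in question are foreign keys, one way to check before the refresh is to look for referential constraints that cross schema boundaries in dba_constraints. A sketch; the schema list is a placeholder for the five exported schemas:

-- Foreign keys owned by the exported schemas that reference tables
-- in schemas outside the export list.
SELECT c.owner, c.table_name, c.constraint_name,
       r.owner AS referenced_owner, r.table_name AS referenced_table
FROM   dba_constraints c
JOIN   dba_constraints r
       ON  r.owner           = c.r_owner
       AND r.constraint_name = c.r_constraint_name
WHERE  c.constraint_type = 'R'
AND    c.owner     IN ('SCHEMA1', 'SCHEMA2', 'SCHEMA3', 'SCHEMA4', 'SCHEMA5')
AND    r.owner NOT IN ('SCHEMA1', 'SCHEMA2', 'SCHEMA3', 'SCHEMA4', 'SCHEMA5');

For PL/SQL and view dependencies, the same idea works against dba_dependencies (owner vs. referenced_owner).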
How does the length of a column affect index performance?
For example, suppose I had an IOT table emp_iot with the columns: (id number, job varchar2(20), time date, plan number)
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if in column JOB I stored not names but concrete numbers identifying the job names, e.g. storing 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'?
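A sketch of what that change would look like, with a small lookup table resolving the codes back to names (all names hypothetical; "time" is renamed time_key purely for clarity). A NUMBER holding a small code occupies 2-3 bytes versus up to 20 bytes for the VARCHAR2 names, so every key entry in the IOT gets narrower:

-- Lookup table mapping numeric codes to job names.
CREATE TABLE job_lookup (
  job_id   NUMBER       PRIMARY KEY,
  job_name VARCHAR2(20) NOT NULL UNIQUE
);

-- IOT keyed on the narrower numeric code instead of the name.
CREATE TABLE emp_iot (
  id       NUMBER,
  job_id   NUMBER REFERENCES job_lookup (job_id),
  time_key DATE,
  plan     NUMBER,
  CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time_key)
) ORGANIZATION INDEX;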
I have a question about database fragmentation. I know that fragmentation can reduce performance in query times: the blocks are distributed across many extents, and a scan takes a long time because the Oracle engine has to locate the address of each next extent.
I want to know if there is any system view in which you can check whether your table or index is highly fragmented. If needed, I will re-create, move, or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Is there any useful script or query to do this, or any interesting Oracle system view?
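As a starting point, the statistics in dba_tables already allow a rough check: compare the blocks a table holds below its high-water mark with an estimate of the blocks its rows actually need. This assumes fresh statistics and an 8K block size, and is only an approximation:

-- Tables whose allocated blocks far exceed what their rows should need;
-- a large gap suggests wasted space below the high-water mark.
SELECT table_name, num_rows, avg_row_len, blocks,
       CEIL(num_rows * avg_row_len / 8192) AS est_blocks_needed
FROM   dba_tables
WHERE  owner = 'APP_OWNER'   -- placeholder schema
AND    num_rows > 0
ORDER  BY blocks - CEIL(num_rows * avg_row_len / 8192) DESC;

For a single index, ANALYZE INDEX ... VALIDATE STRUCTURE populates the index_stats view (lf_rows vs. del_lf_rows), which gives a similar picture on the index side.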
There is a simple way to increase the performance of a query by reducing the row size of the table it hits. I used it in the past by dividing the table into smaller parts and querying the respective smaller table in each query.
What is this method called? I've just forgotten it and can't recall it. What is this type of row-reduction optimization called?
How many records could I have in a single table without performance degradation on Standard Edition, without partitioning, with a cutting-edge server (8 or 12 cores, 72 GB RAM, 4 Gbit FC, etc.) and good storage?
Are 300 million records in a single table, with 500K transactions per day, too much?
Testing our 9i to 11g upgrade, we've imported the entire DB onto the new machine. We've found that certain procedures are really suffering performance problems. BUT we've also found that if we check out a production copy of the procedure from our source code control and reinstall it, the performance issue goes away. Just altering the procedure and recompiling does NOT work.
The new machine where the 11g database lives is slightly different from the source, but it's not as if we have this problem with every procedure. It's only a couple.
Is there any possible reason that we'd have to re-install a procedure to correct a performance problem?
I need to check the package performance and improve it.
1. How can I check the package's performance (each and every statement in the package)?
2. The package uses a DELETE statement to delete all records, and I have observed that the delete takes a long time to remove all the records in the table (7,000,000 records). This is like a staging table: the data needs to be cleaned daily before new data is inserted into it. What can I use instead of DELETE? (See the sketch below.)
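On point 2, since the table is emptied completely every day, the usual alternative is TRUNCATE: it is DDL, generates almost no undo, and resets the high-water mark, unlike DELETE. It does assume no enabled foreign keys point at the table, and it cannot be rolled back. For point 1, DBMS_PROFILER can time individual statements in a package run (the profiler tables must first be created with proftab.sql). A sketch with hypothetical names:

-- Point 2: empty the staging table without row-by-row undo.
TRUNCATE TABLE staging_table;

-- Point 1: profile a package run to get per-statement timings.
BEGIN
  DBMS_PROFILER.start_profiler('staging load run');
  my_pkg.load_staging;   -- hypothetical package call
  DBMS_PROFILER.stop_profiler;
END;
/
-- Results land in plsql_profiler_runs / plsql_profiler_units / plsql_profiler_data.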
Somewhere I read that we should not use hints in Oracle production environments, but that we can use hints in the development environment, and on achieving the desired execution plan we can adjust the statistics so that the optimizer follows that plan without hints.
Q1. If this is true, which statistics do we adjust to influence the execution plan, and how?
For example, I have the following simple query:
select e.empid, e.ename, d.dname from emp e, dept d where e.deptno=d.deptno;
The emp.empid, emp.deptno, and dept.deptno columns have indexes, and the tables have the standard structure found in the basic Oracle examples.
If I look at the execution plan of the above query, I see that the driving table is emp and the driven table is dept. Also, the type of join taking place is a nested loop.
Questions, with respect to the above query:
Q2. If I want to make dept the driving table and emp the driven table, how can I adjust the statistics to achieve that?
Q3. If I want to use a hash join instead of a nested loop join, how can I adjust the statistics to achieve that?
I can use the ORDERED and USE_HASH hints to effect this, but again, I have heard that altering statistics is a more robust way to control an execution plan than hints. A sketch of the statistics approach follows below.
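For what it's worth, the documented way to set statistics by hand is DBMS_STATS.SET_TABLE_STATS (with set_column_stats and set_index_stats as siblings). The optimizer chooses the driving table and join method largely from cardinality estimates, so shifting numrows/numblks shifts the plan. A sketch with purely illustrative numbers, not a recommendation:

-- Make DEPT look tiny and EMP look large, nudging the optimizer to
-- drive from DEPT; large expected row sources also favour hash joins.
BEGIN
  DBMS_STATS.set_table_stats(ownname => USER, tabname => 'DEPT',
                             numrows => 4, numblks => 1);
  DBMS_STATS.set_table_stats(ownname => USER, tabname => 'EMP',
                             numrows => 1000000, numblks => 20000);
END;
/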
When I export a user using the expdp utility, the load on the server goes up to 5. The size of the database is 180 GB. Below is the command that I use for the export.