Performance Tuning :: Create Small Functional Indexes For Special Cases In Very Large Tables
Apr 5, 2012
Create small functional indexes for special cases in very large tables.
When a column has one value in 99% of the records and another, rarely occurring value that has to be searched for, it is possible to create a function-based index that maps the common value to NULL. The index will be small and rebuilds will be fast.
Example
create index vh_tst_decode_ind_if1 on vh_tst_decode_ind
(decode(S,'I','I',null), style);
It is possible to make the index more selective when the key is updated and there are enough records to create additional levels in the b-tree.
create index vh_tst_decode_ind_if3 on vh_tst_decode_ind
(decode(S,'I','I',null),
decode(S,'I',style,null)
);
The records can then be accessed like this:
SQL> select --+ index(vh_tst_decode_ind_if3)
2 style ,count(*)
3 from vh_tst_decode_ind
4 where
5 decode(S,'I','I',null)='I'
6 group by style
7 ;
So our situation is pretty simple. We have 3 tables.
A, B and C
the model is A->>B->>C
Currently A, B and C are range partitioned on a key, created_date; however it's typical that only C is ever qualified by created_date. There are foreign keys from B -> A and C -> B. We have many queries where the data is identified by state, which is currently indexed (non-partitioned) on columns in A ... there are also indexes on the foreign keys that get from C -> B -> A. Again, these are non-partitioned indexes at this time.
It is typical that we qualify A on either account or user or both. There are (non-partitioned) indexes on these. We have a problem now because many of the queries use leading wildcards, i.e. account LIKE '%ACCOUNT' etc. This often results in large full table scans. Our solution so far has been to remove the leading wildcard.
We are wondering how we can benefit from partitioning and/or subpartitioning table A, since it's partitioned on created_date but rarely qualified by that. We are also wondering where and how we can benefit from either global partitioned indexes or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
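A rough sketch of one direction, with hypothetical column names (account, user_name, b_id) since the real DDL isn't posted: keep A range partitioned on created_date but add hash subpartitions on account, and make the index on C's foreign key a local partitioned index. Bear in mind that hash subpartition pruning only kicks in for equality predicates on account, so it won't help the leading-wildcard LIKE queries.

-- Hypothetical names; the real column list for A is not shown in the post.
CREATE TABLE a_new (
  a_id         NUMBER        NOT NULL,
  account      VARCHAR2(30)  NOT NULL,
  user_name    VARCHAR2(30),
  created_date DATE          NOT NULL
)
PARTITION BY RANGE (created_date)
SUBPARTITION BY HASH (account) SUBPARTITIONS 8
( PARTITION p_2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- Local (equi-partitioned) index on the assumed foreign-key column of C:
CREATE INDEX c_b_fk_li ON c (b_id) LOCAL;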
I have tried a lot of alternative solutions, like rearranging the order of tables in the join and moving WHERE conditions earlier, but with no success... It's a bottleneck and I cannot add indexes on these tables in production... I want to change the approach in the subquery.
SELECT g.COLUMN1, g.COLUMN2, e.COLUMN3, g.COLUMN4, MIN(e.dat1) KEEP ( DENSE_RANK FIRST ORDER BY date2 Desc) * -1, min(to_char(date3,'dd-mm-yyyy')) [code]....
I want to load 10 million records from a staging table into a master table. One piece of logic must be applied during the load: if a row is already present in the master table, we need to update the corresponding row in the master table; otherwise the row should be inserted into the target table.
I have been using the bulk collect and FORALL method to load the data; it shows better performance compared to cursor row-by-row processing. As per the Oracle documentation, we cannot use SELECT statements inside a FORALL, so we could not put this logic inside the FORALL.
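If the only requirement driving FORALL is the update-or-insert logic, a single set-based MERGE is often faster than both row-by-row processing and BULK COLLECT/FORALL at this volume. A minimal sketch, with hypothetical table and column names:

-- Hypothetical names; adjust to the real staging and master tables.
MERGE INTO master_table m
USING staging_table s
ON (m.id = s.id)
WHEN MATCHED THEN
  UPDATE SET m.col1 = s.col1,
             m.col2 = s.col2
WHEN NOT MATCHED THEN
  INSERT (id, col1, col2)
  VALUES (s.id, s.col1, s.col2);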
So I was reading about indexes here: [URL]... Is there any reason NOT to use an index? If there isn't, then should you use an index on every column of every table?
What is the general best practice with indexes? After reading the section, it seems that there are only positive impacts of using an index, so why are they not automatically created?
The query below is getting delayed because of bitmap indexes on the table. I am trying to avoid the indexes by using hints in the query but have been unable to do so. Details are as follows.
explain plan for
SELECT cbu_cid, cbu_cid_customer_en_nm,
       COUNT (billg_acct_no) AS billg_acct_no,
       SUM (subscriber_cnt) AS subscriber_cnt
FROM daily_view
WHERE (billg_system_id = 'TM' AND mktg_sub_segment_a_nm = 'TM')
  AND (cbu_cid NOT IN ('0001988048', '0001379962', '0001350469'))
GROUP BY cbu_cid, cbu_cid_customer_en_nm
HAVING SUM (subscriber_cnt) > 10
ORDER BY subscriber_cnt DESC;
[code]....
I have tried with the ALL_ROWS and PARALLEL hints. How can I avoid the above two indexes in the query?
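One documented option is the NO_INDEX hint, which tells the optimizer not to consider specific indexes. A sketch against the query above, with hypothetical index names since the real bitmap index names aren't shown; note that if DAILY_VIEW is a view, the hint may need to name the underlying base table (or be placed inside the view's query block) to take effect.

-- Hypothetical index names; replace with the actual bitmap index names.
SELECT /*+ NO_INDEX(daily_view daily_view_bix1 daily_view_bix2) */
       cbu_cid,
       cbu_cid_customer_en_nm,
       COUNT(billg_acct_no) AS billg_acct_no,
       SUM(subscriber_cnt)  AS subscriber_cnt
FROM   daily_view
WHERE  billg_system_id = 'TM'
AND    mktg_sub_segment_a_nm = 'TM'
AND    cbu_cid NOT IN ('0001988048', '0001379962', '0001350469')
GROUP  BY cbu_cid, cbu_cid_customer_en_nm
HAVING SUM(subscriber_cnt) > 10
ORDER  BY subscriber_cnt DESC;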
I am working with an online application with the database on Oracle 10g. We have a table with 10 million rows, and this table is expected to grow in the future. Moreover, we cannot archive some of these rows, as the records are required for referencing.
We have all the necessary indexes on the table, but querying it takes a lot of time, especially when it is joined with other tables. What are some methods with which I can manage this table better so that queries joining it execute faster?
I don't have any DBA privileges. Can you share scripts which can tell how many blocks my query is fetching with or without indexes? How do I also get the buffer hit ratio, and how can I get I/O figures without SQL trace, as I don't have access to dump_dest?
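Two options that don't require DBA access, sketched below: SQL*Plus AUTOTRACE (it needs the PLUSTRACE role or SELECT on a handful of V$ views), or reading your own session statistics from V$MYSTAT before and after the query, assuming SELECT on V$MYSTAT and V$STATNAME has been granted.

-- Option 1: AUTOTRACE statistics (assumes the PLUSTRACE role or equivalent grants).
SET AUTOTRACE TRACEONLY STATISTICS
SELECT COUNT(*) FROM my_table;   -- hypothetical query
SET AUTOTRACE OFF

-- Option 2: snapshot your own session statistics before and after the query
-- (assumes SELECT access to V$MYSTAT and V$STATNAME).
SELECT sn.name, st.value
FROM   v$mystat st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
WHERE  sn.name IN ('consistent gets', 'db block gets',
                   'physical reads', 'bytes sent via SQL*Net to client');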
I have the below query:
SELECT DISTINCT ser_id AS STA_ser_id, rct_name AS STA_name
FROM sd_servicecalls, rep_codes, rep_codes_text
WHERE ser_sta_oid = rcd_oid
  AND rcd_oid = rct_rcd_oid
  AND rct_name IN ('New', 'Awaiting Approval', 'Approved', 'In Progress',
                   'Awaiting Supplier', 'Awaiting RFC', 'Awaiting Release',
                   'Pending Release', 'On Hold', 'Resolved', 'Implemented', 'Closed');
Does a large hash value in the explain plan mean more resources are needed and more time to execute the query? How can I use ADDM for the above SQL?
I have Oracle 9i Release 2 on my DB server. I am getting the following error every time I try to create a bitmap index:
ORA-00439: feature not enabled: Bit-mapped indexes
I have queried the v$option view. Here the value of the parameter 'Bit-mapped indexes' is FALSE.
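For reference, the check described above is simply:

-- Check whether the option is enabled in this installation:
SELECT parameter, value
FROM   v$option
WHERE  parameter = 'Bit-mapped indexes';

In 9i, bitmap indexes are an Enterprise Edition feature, so a FALSE here usually means the installed edition (e.g. Standard Edition) does not include them, rather than anything being wrong with the manually created database.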
The result of v$version is:
Oracle9i Release 9.2.0.1.0 - 64bit Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for Solaris: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production
Actually, when we created the database, our installation was halted, so we manually created the database using the CREATE DATABASE command.
I want to tune the attached query. I have tried creating normal indexes and composite indexes on the fields. I feel that only a normal index is required for this instead of a composite index; is that right?
I have two design alternatives and need to understand how expensive (in terms of speed) one of them is against the other for a medium-sized table (100K-200K records):
create table xyz ( f1 number not null, f2 varchar2(20) not null, f3 number not null, f4 varchar2(50),
[code]....
The idea is to optimize the design by using a PK instead of the 3 keys, and there is a debate over whether searching a unique-index field (2nd scenario) is the same speed as searching a PK field (1st scenario).
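For concreteness, a sketch of the two designs as I read them (column names taken from the fragment above; the surrogate-key column is hypothetical). Since Oracle backs both a primary key and a unique constraint with a unique index, a single-row lookup through either should cost essentially the same; the difference is mainly the extra surrogate column and the width of any foreign keys that reference the table.

-- Design A: composite (natural) primary key on the three columns.
CREATE TABLE xyz_natural (
  f1 NUMBER        NOT NULL,
  f2 VARCHAR2(20)  NOT NULL,
  f3 NUMBER        NOT NULL,
  f4 VARCHAR2(50),
  CONSTRAINT xyz_natural_pk PRIMARY KEY (f1, f2, f3)
);

-- Design B: surrogate primary key plus a unique constraint on the same columns.
CREATE TABLE xyz_surrogate (
  id NUMBER        NOT NULL,   -- hypothetical surrogate key
  f1 NUMBER        NOT NULL,
  f2 VARCHAR2(20)  NOT NULL,
  f3 NUMBER        NOT NULL,
  f4 VARCHAR2(50),
  CONSTRAINT xyz_surrogate_pk PRIMARY KEY (id),
  CONSTRAINT xyz_surrogate_uk UNIQUE (f1, f2, f3)
);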
I have to optimize a query with the following characteristics:
- Takes 3 hours to process
- Performs inner joins with 30 tables
- Produces an output of 280 million records with 450 fields
First of all it is not feasible to make 30 updates (one for each table) to 280 million records.
The best solution that I have found so far was to create 3 temporary tables, each of them doing the join with 1/3 of the 30 tables, and in the end joining the main table with these three temporary tables.
I know that you will ask (or maybe not) for the query and sample data, but it is impossible to create 30 examples.
How do you optimize this type of query, which joins multiple tables and produces a large output with (too) many columns?
We have a procedure which truncates some of the tables. Most of the time it finishes in a short span of time, but over the last few days it has been taking much longer.
The literature equates dimension hierarchies with functional dependencies between the levels. I would like to test the strength of this assumption against the implementation of CREATE DIMENSION, which allows you to create roll-up hierarchies.
My question to put it simply is this: Given:
CREATE DIMENSION location_dim
  LEVEL location IS (location.loc_id)
  LEVEL city     IS (location.city)
  LEVEL state    IS (location.state)
  HIERARCHY geog_rollup (
    location CHILD OF
    city     CHILD OF
    state
  )
Can I insert the following rows into the dimension?
loc_id, city, state
1, Epping, NSW
2, Epping, VIC
Please note that the two Eppings are different cities.
Given the roll-up hierarchy city -> state, will it require that for every city there can be only one state, in which case the FD between city and state cannot hold? Or does the roll-up hierarchy defined here have nothing to do with FDs?
The second part of the question: if the answer to the above is that the roll-up is not the same as an FD, then is the ATTRIBUTE clause meant to define the n:1 (functional dependency) instead?
I am using Oracle version 11.2.0.3.0. We are planning to move some ~40 tables/indexes to a new encrypted tablespace as part of TDE (transparent data encryption). Currently three tables are ~30GB in size, one is ~800GB, and the others are <2GB in size. The tables/indexes are currently spread across different tablespaces.
Should I create as many encrypted tablespaces as there were unencrypted ones before, or should I create one encrypted tablespace and move all the tables/indexes into it?
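Either way, the mechanics in 11.2 look roughly like the sketch below (hypothetical names and paths; assumes the TDE wallet/keystore is already configured and open). Whether you use one encrypted tablespace or several is mostly a manageability decision: encryption is applied per tablespace, so a single encrypted tablespace is sufficient unless you want to preserve the existing separation for backup, I/O or space-management reasons.

-- Hypothetical names/paths; assumes an open TDE wallet (11.2).
CREATE TABLESPACE enc_data
  DATAFILE '/u01/oradata/ORCL/enc_data01.dbf' SIZE 10G AUTOEXTEND ON
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

-- Move a table and rebuild its indexes into the encrypted tablespace:
ALTER TABLE app_owner.big_table    MOVE    TABLESPACE enc_data;
ALTER INDEX app_owner.big_table_pk REBUILD TABLESPACE enc_data;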
What are ways to improve the performance of a table which holds millions of records in Oracle? Currently we have partitioning and indexing, but they don't seem to help.
SELECT department_id
FROM (SELECT department_id FROM employees
      UNION
      SELECT department_id FROM employees_old)
WHERE department_id = 100;
[code]....
The index has been created on department_id for both tables. The only difference between the two that I observed was the one recursive call for the first SQL, and also one additional view in the plan. There is a small difference in bytes sent over the network.
|* 35 |  TABLE ACCESS BY INDEX ROWID | S_ORG_EXT    | 3064K| 2472M| | 1 (0)| 00:00:01 |
|  36 |   INDEX FULL SCAN            | S_ORG_EXT_U1 |   14 |      | | 1 (0)| 00:00:01 |

Predicate Information (identified by operation id):
---------------------------------------------------
  35 - filter("T2"."ACCNT_FLG"<>'N' AND ("T2"."INT_ORG_FLG"<>'Y' OR "T2"."PRTNR_FLG"<>'N'))
This unselective index scan at step 36 of the plan is returning 14 rows, but the optimizer is selecting 3064K rows from the table.
I tried creating a combined index on all 3 columns mentioned in the predicates for step 35, but it is not being used.
How can I index this whole expression:
(ACCNT_FLG<>'N' AND (INT_ORG_FLG<>'Y' OR PRTNR_FLG<>'N'))
Something like CREATE INDEX xyz ON table ((ACCNT_FLG<>'N' AND (INT_ORG_FLG<>'Y' OR PRTNR_FLG<>'N'))) COMPUTE STATISTICS;
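You can't index a bare boolean predicate like that, but you can wrap it in a CASE expression so that non-matching rows map to NULL and are excluded from the index, in the spirit of the small functional indexes at the top of this page. A sketch, using the table/column names from the predicate above (the index name is hypothetical):

-- Rows failing the condition map to NULL, so only the interesting rows are indexed.
CREATE INDEX s_org_ext_flg_fbi ON s_org_ext (
  CASE
    WHEN accnt_flg <> 'N'
     AND (int_org_flg <> 'Y' OR prtnr_flg <> 'N')
    THEN 1
  END
);

-- The query has to use exactly the same expression for the index to be usable:
SELECT COUNT(*)
FROM   s_org_ext t2
WHERE  CASE
         WHEN t2.accnt_flg <> 'N'
          AND (t2.int_org_flg <> 'Y' OR t2.prtnr_flg <> 'N')
         THEN 1
       END = 1;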
The scale of the tests that generate the following scenario is not huge right now, only 50 users simulated (or you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) are all running against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of IO (5GB per request), making requests queue for disk availability (the disks are set up as RAID 0/1); it's even noticeable for as few as 3 threads.
I have rendered the SQL on one occasion to execute in 13 seconds for a single user, but this appears short-lived, as when stats were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, increasing the PGA/SGA (currently 4/6GB respectively) will only delay the queuing for a bit, and compression won't work either. In short, it looks as though we've hit hardware restrictions already for this particular scenario.
As I can't really explain how my rendered query no longer takes 13 seconds, it's niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising these types of queries against such wide tables, in other words possibilities that we haven't considered...
I have a view over base tables holding historical data for the previous 60 months (one table per month), combined with UNION ALL operators. Will creating indexes on those base tables improve performance, or will creating a primary key with DISABLE NOVALIDATE improve data retrieval?
The view has around 8 million rows and is used as a fact table with 4 dimension tables. A DTS package on the MSSQL side refreshes an OLAP cube by retrieving data from these tables in Oracle.
We have a huge table in production with a LONG column. We are trying to change its datatype to CLOB. The table has 120 million records and is 270 GB in size.
We tried using the Oracle expdp/impdp option for the conversion in our performance environment. With a parallelism of 32, the export completed in 1.5 hrs. However, the import took 13 hrs.
I also tried the TO_LOB option using inserts; it ran for 20 hrs and I killed the process. Are there any ways to improve the performance of LONG to CLOB conversion on huge tables?
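Two options worth benchmarking, sketched with hypothetical names. TO_LOB can be used in a CREATE TABLE ... AS SELECT, which gives a direct-path, NOLOGGING rebuild that you can swap in afterwards; alternatively, ALTER TABLE ... MODIFY (col CLOB) converts the column in place, but it rewrites the whole table in one long-running operation, so on 270 GB it needs a generous maintenance window.

-- Option 1: direct-path rebuild into a new table, then swap names
-- (re-create indexes, constraints and grants on the new table before the swap).
CREATE TABLE big_table_new NOLOGGING
AS
SELECT id,
       other_col,
       TO_LOB(long_col) AS long_col   -- LONG -> CLOB during the copy
FROM   big_table;

-- Option 2: in-place conversion (rewrites the table; plan for time, undo/redo and space).
ALTER TABLE big_table MODIFY (long_col CLOB);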
I have a column containing three values: N, E, Y. I want to get results with only the E and Y values. Is it possible to create an index which would not include the N values?
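Yes, following the same DECODE-to-NULL technique from the first post on this page: map 'N' to NULL in a function-based index so those rows are simply not stored in it. A sketch with hypothetical table/column names:

-- 'N' rows become NULL and are excluded from the index entries.
CREATE INDEX t_flag_fbi ON t (DECODE(flag, 'N', NULL, flag));

-- Queries must use the identical expression to be able to use the index:
SELECT *
FROM   t
WHERE  DECODE(flag, 'N', NULL, flag) IN ('E', 'Y');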