Performance Tuning :: Create Small Functional Indexes For Special Cases In Very Large Tables
Apr 5, 2012
Create small functional indexes for special cases in very large tables.
When a column has one value in 99% of the records and another, rarer value that has to be searched for, it is possible to create a function-based index that maps the common value to NULL. The index will be small and rebuilds will be fast.
Example
create index vh_tst_decode_ind_if1 on vh_tst_decode_ind
(decode(S,'I','I',null),style)
The index can be made even more selective, which helps when the key is updated and when there are many records that would otherwise create more levels in the B-tree.
create index vh_tst_decode_ind_if3 on vh_tst_decode_ind
(decode(S,'I','I',null),
decode(S,'I',style,null)
)
The records can then be accessed like this:
SQL> select --+ index(vh_tst_decode_ind_if3)
2 style ,count(*)
3 from vh_tst_decode_ind
4 where
5 decode(S,'I','I',null)='I'
6 group by style
7 ;
[code]....
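One way to check that such an index really only holds the rare rows is to gather statistics and look at the index figures in user_indexes; a minimal sketch using the table from the example above:
exec dbms_stats.gather_table_stats(user, 'VH_TST_DECODE_IND', cascade => TRUE);
select index_name, num_rows, leaf_blocks
from   user_indexes
where  table_name = 'VH_TST_DECODE_IND';
For vh_tst_decode_ind_if3, num_rows should roughly match the number of rows with S = 'I', because rows whose index key evaluates entirely to NULL are not stored in a B-tree index.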
View 2 Replies
Mar 28, 2011
I have several large tables in the live system. These tables store historical information.
current situation:
One table currently has 129 million rows.
Every month about 4.5 million records are added to this table.
The table data is 17 GB and its indexes are 28 GB.
I have only 30 GB of free space available on disk!
How can I split this table into smaller pieces (partition the table by month)?
What is the best approach?
I would like to partition this table month by month.
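One common approach is online redefinition into a range-partitioned interim table. The sketch below is hypothetical (table, column, partition names and the date column are invented, and it assumes the table has a primary key); note that the interim table needs room for a full copy of the data, so with only 30 GB free you may have to add space first or copy the indexes separately after the switch.
-- interim table, partitioned by month on an assumed DATE column txn_date
create table hist_data_part (
  txn_id   number,
  txn_date date,
  payload  varchar2(4000)
)
partition by range (txn_date) (
  partition p2011_01 values less than (date '2011-02-01'),
  partition p2011_02 values less than (date '2011-03-01'),
  partition p_max    values less than (maxvalue)
);
declare
  v_errors pls_integer;
begin
  dbms_redefinition.can_redef_table(user, 'HIST_DATA');              -- uses the PK by default
  dbms_redefinition.start_redef_table(user, 'HIST_DATA', 'HIST_DATA_PART');
  dbms_redefinition.copy_table_dependents(user, 'HIST_DATA', 'HIST_DATA_PART',
                                          num_errors => v_errors);  -- indexes, constraints, grants
  dbms_redefinition.finish_redef_table(user, 'HIST_DATA', 'HIST_DATA_PART');
end;
/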
View 12 Replies
View Related
Jan 12, 2011
So our situation is pretty simple. We have 3 tables.
A, B and C
the model is A->>B->>C
Currently A, B and C are range partitioned on a created_date key; however, it is typical that only C is ever qualified by created_date. There is a foreign key from B to A and from C to B. We have many queries where the data is identified by a state column in A, which currently has a non-partitioned index. There are also indexes on the foreign keys used to get from C to B to A; again, these are non-partitioned indexes at this time.
It is typical that we qualify A on either account or user, or both. There are non-partitioned indexes on these columns. We have a problem now because many of the queries use leading wildcards, e.g. account LIKE '%ACCOUNT'. This often results in large full table scans. Our solution so far has been to remove the leading wildcard.
We are wondering how we can benefit from partitioning and/or sub-partitioning table A, since it is partitioned on created_date but rarely qualified by that. We are also wondering where and how we can benefit from either global partitioned indexes or local partitioned indexes on table A. We suspect that the index on the foreign key from C to B could be a local partitioned index.
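For the foreign-key index on C, a local partitioned index is simply the existing index definition with the LOCAL keyword; a minimal sketch with assumed names (table C partitioned on created_date, FK column b_id):
create index c_b_id_lix on c (b_id) local;
Since b_id is not a prefix of the partition key, every partition of this index has to be probed unless the query also restricts created_date, so it pays off most when queries prune by the partition key as well.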
View 3 Replies
View Related
Sep 27, 2013
I have tried several alternative solutions, such as rearranging the order of the tables in the join and moving WHERE conditions earlier, but with no success. It is a bottleneck, and I cannot add indexes on these tables in production. I want to change the approach used in the subquery.
SELECT
g.COLUMN1,
g.COLUMN2,
e.COLUMN3,
g.COLUMN4,
MIN(e.dat1) KEEP ( DENSE_RANK FIRST ORDER BY date2 Desc) * -1,
min(to_char(date3,'dd-mm-yyyy'))
[code]....
View 5 Replies
View Related
Feb 4, 2005
16:28:32 SQL> create bitmap index bp_idx_ag_id on transactions(type);
create bitmap index bp_idx_ag_id on transactions(type)
*
ERROR at line 1:
ORA-25122: Only LOCAL bitmap indexes are permitted on partitioned tables
How do I create a bitmap index on partitioned tables?
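The error message itself points at the fix: on a partitioned table a bitmap index must be created LOCAL, i.e. equipartitioned with the table.
create bitmap index bp_idx_ag_id on transactions (type) local;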
View 3 Replies
View Related
Aug 5, 2010
I have to create hash partitions on fact tables. Can we use a temporary tablespace, or does it have to be a permanent tablespace?
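Table partitions can only live in permanent tablespaces; a temporary tablespace holds sort/temp segments, not table data. A hedged sketch with invented names, spreading the hash partitions over several permanent tablespaces:
create table fact_sales (
  sale_id number,
  cust_id number,
  amount  number
)
partition by hash (cust_id)
partitions 8
store in (fact_ts1, fact_ts2, fact_ts3, fact_ts4);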
View 10 Replies
View Related
Jan 19, 2011
I want to load 10 million records from a staging table into a master table. One piece of logic must be applied during the load: if a row is already present in the master table, we need to update the corresponding row in the master table; otherwise the row is inserted into the target table.
I have been using the BULK COLLECT and FORALL method to load the data; it performs better than a cursor-based row-by-row process. As per the Oracle documentation, we cannot use SELECT statements inside FORALL, so we could not implement this logic inside the FORALL.
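One alternative worth testing is a single set-based MERGE, which expresses the "update if present, otherwise insert" logic in one statement without any PL/SQL looping; the table and column names below are placeholders:
merge into master_table m
using staging_table s
on (m.id = s.id)
when matched then
  update set m.col1 = s.col1,
             m.col2 = s.col2
when not matched then
  insert (id, col1, col2)
  values (s.id, s.col1, s.col2);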
View 2 Replies
View Related
Dec 10, 2010
So I was reading about indexes here: [URL].... Is there any reason NOT to use an index? If there isn't, then should you use an index on every column of every table?
What is the general best practice with indexes? After reading the section, it seems that there are only positive impacts of using an index, so why are they not automatically created?
View 3 Replies
View Related
Sep 26, 2012
What does analyzing a table do to existing indexes? Do I need to rebuild the indexes after a dbms_stats.gather_table_stats command?
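Gathering statistics with dbms_stats only refreshes the optimizer statistics; it does not modify or invalidate the indexes themselves, so no rebuild is needed afterwards. Index statistics can be collected in the same call with the cascade option (the table name below is a placeholder):
exec dbms_stats.gather_table_stats(ownname => user, tabname => 'MY_TABLE', cascade => TRUE);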
View 4 Replies
View Related
May 20, 2011
The query below is being delayed because of bitmap indexes on the table. I am trying to make the optimizer avoid the indexes by using hints in the query, but have been unable to do so. Details are as follows.
explain plan for
SELECT cbu_cid, cbu_cid_customer_en_nm,
COUNT (billg_acct_no) AS billg_acct_no,
SUM (subscriber_cnt) AS subscriber_cnt
FROM daily_view
WHERE (billg_system_id = 'TM' AND mktg_sub_segment_a_nm = 'TM')
AND (cbu_cid NOT IN ('0001988048', '0001379962', '0001350469'))
GROUP BY cbu_cid, cbu_cid_customer_en_nm
HAVING SUM (subscriber_cnt) > 10
ORDER BY subscriber_cnt DESC;
[code]....
I have tried with the ALL_ROWS and PARALLEL hints. How can I avoid the above two indexes in the query?
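A hedged option is the NO_INDEX hint, which tells the optimizer not to consider specific indexes. Because the statement selects from a view, the hint has to reference the underlying table through the view (global hint syntax); the alias t and the bitmap index names below are assumptions and need to be replaced with the real ones:
select /*+ NO_INDEX(daily_view.t bm_billg_system_idx) NO_INDEX(daily_view.t bm_mktg_sub_idx) */
       cbu_cid, cbu_cid_customer_en_nm,
       count(billg_acct_no) as billg_acct_no,
       sum(subscriber_cnt)  as subscriber_cnt
from   daily_view
where  billg_system_id = 'TM'
and    mktg_sub_segment_a_nm = 'TM'
and    cbu_cid not in ('0001988048', '0001379962', '0001350469')
group by cbu_cid, cbu_cid_customer_en_nm
having sum(subscriber_cnt) > 10
order by subscriber_cnt desc;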
View 28 Replies
View Related
Aug 26, 2011
I am working with an online application with the database in Oracle 10g. We have a table with 10 million rows, and this table is expected to keep growing. Moreover, we cannot archive some of these rows because these records are required for referencing.
We have all the necessary indexes on the table, but querying it takes a lot of time, especially when it is joined with other tables. What methods would let me manage this table better so that queries joining it execute faster?
SELECT
TAB1.C6,
TAB1.C8,
TAB1.C10,
TAB3.C4,
[code]....
View 7 Replies
View Related
Aug 16, 2012
I need a script that lists all indexes in a particular schema.
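A hedged starting point: dba_indexes (or all_indexes/user_indexes if you lack the DBA views) lists the indexes, and dbms_metadata generates the full DDL; the schema name is a placeholder.
select owner, index_name, index_type, table_name, uniqueness, status
from   dba_indexes
where  owner = 'SCOTT'
order  by table_name, index_name;
select dbms_metadata.get_ddl('INDEX', index_name, owner)
from   dba_indexes
where  owner = 'SCOTT';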
View 4 Replies
View Related
Jun 1, 2012
I don't have any DBA privileges. Can you share a script which can tell how many blocks my query is fetching, with or without indexes? How do I also get the buffer hit ratio, and how can I get I/O figures without SQL trace, since I don't have access to the dump destination?
I have the query below:
SELECT DISTINCT ser_id AS STA_ser_id, rct_name AS STA_name
FROM sd_servicecalls, rep_codes, rep_codes_text
WHERE ser_sta_oid = rcd_oid
AND rcd_oid = rct_rcd_oid
AND rct_name IN ('New', 'Awaiting Approval', 'Approved', 'In Progress', 'Awaiting Supplier', 'Awaiting RFC', 'Awaiting Release', 'Pending Release', 'On Hold', 'Resolved', 'Implemented', 'Closed');
Does a large hash value in the explain plan mean that more resources and more time are needed to execute the query? How can I use ADDM for the above SQL?
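Without trace access, one hedged option is to run the statement and then read its cumulative execution statistics from v$sql, where buffer_gets approximates logical I/O and disk_reads physical I/O (SELECT access to the v$ views is assumed):
select sql_id,
       executions,
       buffer_gets,
       disk_reads,
       round(buffer_gets / nullif(executions, 0)) as gets_per_exec
from   v$sql
where  sql_text like 'SELECT DISTINCT ser_id AS STA_ser_id%';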
View 7 Replies
View Related
Mar 19, 2012
How can I find how many local and global indexes exist on a particular Oracle table?
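A hedged sketch: dba_part_indexes records the locality of every partitioned index, and any non-partitioned index on a partitioned table is effectively a global index. The owner and table names are placeholders.
select locality, count(*)
from   dba_part_indexes
where  owner      = 'SCOTT'
and    table_name = 'MY_PART_TABLE'
group  by locality;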
View 2 Replies
View Related
Mar 29, 2004
I have Oracle 9i Release 2 on my DB server. I am getting the following error every time I try to create a bitmap index:
ORA-00439: feature not enabled: Bit-mapped indexes
I have queried the v$option view; there the value of the parameter 'Bit-mapped indexes' is FALSE.
The result of v$version is:
Oracle9i Release 9.2.0.1.0 - 64bit Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for Solaris: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production
Actually, when we created the database our installation was halted, so we created the database manually using the CREATE DATABASE command.
How can we enable the bitmap index feature now?
View 10 Replies
View Related
Feb 14, 2013
I want to tune the attached query. I have tried creating normal indexes and composite indexes on the fields. I feel that only a normal index is required for this instead of a composite index; is that correct?
11:15:19 SQL> @slot.sql
11:16:03 SQL>
11:16:03 SQL> drop table slot purge;
Table dropped.
Elapsed: 00:00:00.05
11:16:03 SQL>
11:16:03 SQL> create table slot
11:16:03 2 (
11:16:03 3 id varchar2 (40) not null,
[code]....
- dynamic sampling used for this statement
22 rows selected.
View 8 Replies
View Related
Aug 15, 2011
I have two design alternatives and need to understand how expensive (in terms of speed) one is compared to the other for a medium-size table (100K-200K records):
create table xyz
(
f1 number not null,
f2 varchar2(20) not null,
f3 number not null,
f4 varchar2(50),
[code]....
The idea is to optimize the design by using a PK instead of the 3 keys, and there is a debate about whether searching on a unique index field (2nd scenario) is the same speed as searching on a PK field (1st scenario).
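For reference, the two alternatives look roughly like this (a sketch, assuming f1-f3 form the natural key). Both a primary key and a unique constraint are enforced through a unique index, so a single-row lookup by either is the same kind of unique scan; the practical differences are key length, join convenience and storage rather than raw probe speed.
-- alternative A: surrogate primary key, natural key kept unique
create table xyz_pk (
  id number       not null,
  f1 number       not null,
  f2 varchar2(20) not null,
  f3 number       not null,
  f4 varchar2(50),
  constraint xyz_pk_pk primary key (id),
  constraint xyz_pk_uk unique (f1, f2, f3)
);
-- alternative B: composite natural key as the primary key
create table xyz_nk (
  f1 number       not null,
  f2 varchar2(20) not null,
  f3 number       not null,
  f4 varchar2(50),
  constraint xyz_nk_pk primary key (f1, f2, f3)
);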
View 5 Replies
View Related
Jan 16, 2012
I have to do the optimization of a query that has the following characteristics:
- Takes 3 hours to process
- Performs the inner join with 30 tables
- Produces an output of 280 million records with 450 fields
First of all, it is not feasible to run 30 updates (one for each table) against 280 million records.
The best solution I have found so far is to create 3 intermediate tables, each of which joins with 1/3 of the 30 tables, and at the end join the main table with these three intermediate tables.
I know that you will ask (or maybe not) for the query and sample data, but it is impossible to create 30 examples.
How can I optimize this type of query, which joins many tables and produces a large output with (too) many columns?
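A hedged sketch of the intermediate-table approach: build each intermediate table with a direct-path, parallel CTAS so each pass over the data is as cheap as possible (all names and the parallel degree are placeholders):
create table stage_1 nologging parallel 8 as
select m.main_id, t1.col_a, t2.col_b               -- plus the columns needed from tables 1-10
from   main_table m
join   table_01 t1 on t1.main_id = m.main_id
join   table_02 t2 on t2.main_id = m.main_id;      -- continue the pattern up to table_10
-- repeat for stage_2 (tables 11-20) and stage_3 (tables 21-30), then:
create table final_result nologging parallel 8 as
select s1.*, s2.col_x, s3.col_y
from   stage_1 s1
join   stage_2 s2 on s2.main_id = s1.main_id
join   stage_3 s3 on s3.main_id = s1.main_id;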
View 15 Replies
View Related
Jan 13, 2012
We have a procedure which truncates some of the tables. Most of the time it finishes in a short span of time, but over the last few days it has been taking much longer.
Where should I start the investigation?
View 4 Replies
View Related
Sep 28, 2010
The literature equates dimension hierarchies with functional dependencies between the levels. I would like to test the strength of this assumption against the implementation of CREATE DIMENSION, which allows you to create roll-up hierarchies.
My question to put it simply is this: Given:
CREATE DIMENSION location_dim
LEVEL location IS (location.loc_id)
LEVEL city IS (location.city)
LEVEL state IS (location.state)
HIERARCHY geog_rollup (
location CHILD OF
city CHILD OF
state
);
Can I insert the following rows into the dimension:
loc_id, city, state
1, Epping, NSW
2, Epping, VIC
Please note that the two Eppings are different cities.
Given the roll-up hierarchy City -> State, will it require that for every city there can be only one state, in which case the FD between City and State cannot hold for this data? Or does the roll-up hierarchy defined here have nothing to do with FDs?
The second part of the question: if the answer to the above is that the roll-up is not the same as an FD, then is the ATTRIBUTE clause meant to define the n:1 relationship (functional dependency) instead?
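One way to test this empirically (10g and later) is DBMS_DIMENSION.VALIDATE_DIMENSION, which checks the loaded rows against the declared hierarchy and writes the rowids of violating rows to the DIMENSION_EXCEPTIONS table; a hedged sketch, assuming the exceptions table has been created with ?/rdbms/admin/utldim.sql:
begin
  dbms_dimension.validate_dimension('LOCATION_DIM',  -- dimension in the current schema
                                    false,           -- full validation, not incremental
                                    true,            -- also check for nulls
                                    'CHK1');         -- statement id
end;
/
select * from dimension_exceptions where statement_id = 'CHK1';
If the two Epping rows violate the declared city-to-state roll-up, they should show up here.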
View 4 Replies
View Related
Aug 25, 2013
I am using Oracle version 11.2.0.3.0. We are planning to move around 40 tables/indexes to a new encrypted tablespace as part of TDE (Transparent Data Encryption). Currently three tables are around 30 GB in size, one is around 800 GB, and the others are under 2 GB each; the tables and indexes are spread across different tablespaces.
Should I create as many encrypted tablespaces as there were unencrypted tablespaces before, or should I create one encrypted tablespace and move all the tables/indexes into it?
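Whichever layout you choose, the mechanics per segment are the same; a hedged sketch with placeholder file, table and index names (an open wallet/keystore is assumed):
create tablespace enc_data
  datafile '/u01/oradata/db/enc_data01.dbf' size 10g autoextend on
  encryption using 'AES256'
  default storage (encrypt);
alter table big_table move tablespace enc_data;
alter index big_table_pk rebuild tablespace enc_data;  -- indexes go unusable after the move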
View 2 Replies
View Related
Mar 11, 2013
What are some ways of improving the performance of an Oracle table which holds millions of records? Currently we have partitioning and indexing, but they don't seem to work.
View 2 Replies
View Related
Jul 30, 2010
SELECT department_id
FROM (SELECT department_id
FROM employees
UNION
SELECT department_id
FROM employees_old )
WHERE department_id=100;
[code]....
The index has been created on department_id for both tables. The only difference I observed between the two was 1 recursive call for the first SQL, and also one additional view in the plan. There is a small difference in bytes sent over the network.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
[code]....
Do you see any performance difference between the two SQLs above when you compare them?
View 14 Replies
View Related
Mar 30, 2013
I am going through this scenario:
|* 35 |  TABLE ACCESS BY INDEX ROWID | S_ORG_EXT    | 3064K| 2472M|       |     1   (0)| 00:00:01 |
|  36 |   INDEX FULL SCAN            | S_ORG_EXT_U1 |    14 |      |       |     1   (0)| 00:00:01 |
Predicate Information (identified by operation id):
---------------------------------------------------
35 - filter("T2"."ACCNT_FLG"<>'N' AND ("T2"."INT_ORG_FLG"<>'Y' OR "T2"."PRTNR_FLG"<>'N'))
This unselective index scan at step 36 of the plan returns 14 rows, but the optimizer selects 3064K rows from the table.
I tried creating a combined index on all 3 columns mentioned in the predicates for step 35, but it is not used.
How can I index this whole expression:
(ACCNT_FLG<>'N' AND (INT_ORG_FLG<>'Y' OR PRTNR_FLG<>'N'))
Something like CREATE INDEX XYZ on table((ACCNT_FLG<>'N' AND (INT_ORG_FLG<>'Y' OR PRTNR_FLG<>'N')) compute statistics ;
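A boolean expression cannot be indexed directly, but a function-based index on a CASE expression that returns a value only for the rows of interest gets close, provided the query repeats the identical expression; a hedged sketch (only useful if the combined condition is actually selective):
create index s_org_ext_flg_fbi on s_org_ext (
  case when accnt_flg <> 'N' and (int_org_flg <> 'Y' or prtnr_flg <> 'N') then 1 end
);
exec dbms_stats.gather_table_stats(user, 'S_ORG_EXT', cascade => TRUE);
select count(*)
from   s_org_ext t2
where  case when t2.accnt_flg <> 'N' and (t2.int_org_flg <> 'Y' or t2.prtnr_flg <> 'N') then 1 end = 1;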
View 3 Replies
View Related
Nov 13, 2012
The scale of the tests that generate the following scenario is not huge right now: only 50 users are simulated (or you can think of them as independently running threads if you like). But here is the crunch: the queries generated (from a generic transaction layer) all run against a table that has 600 columns! We can't really control this right now, but it is causing massive amounts of I/O (5 GB per request), making requests queue for disk availability (the disks are set up as RAID 0/1); it's even noticeable with as few as 3 threads.
I have managed on one occasion to get the SQL to execute in 13 seconds for a single user, but this appears short-lived, as when statistics were freshly gathered it went back up to the normal 90-120 seconds. I've added the original query to the file; however, the findings here, along with our DBA (whom I trust implicitly), suggest that no amount of editing the query will improve the response times, that increasing the PGA/SGA (currently 4/6 GB respectively) will only delay the queuing for a bit, and that compression won't work either. In short, it looks as though we've already hit hardware restrictions for this particular scenario.
As I can't really explain why my rewritten query no longer takes 13 seconds, it's niggling me that we might be missing a trick. So I was hoping for some guidance on possible ways of optimising this type of query against such wide tables; in other words, possibilities that we haven't considered...
Attached is the query and plan.
View 9 Replies
View Related
Dec 9, 2010
I have a view on base tables holding historical data for the previous 60 months (one table per month), combined with UNION ALL operators. Will creating indexes on those base tables improve performance, or will creating a primary key with DISABLE NOVALIDATE improve data retrieval?
The view has around 8 million rows and is used as a fact table with 4 dimension tables. A DTS package on the MS SQL side refreshes an OLAP cube by retrieving data from these tables in Oracle.
View 1 Replies
View Related
Jan 25, 2013
We have a huge table in production with a LONG column. We are trying to change its datatype to CLOB. The table has 120 million records and is 270 GB in size.
We tried using the Oracle expdp/impdp option for the conversion in our performance environment. With a parallel degree of 32, the export completed in 1.5 hours; however, the import took 13 hours.
I also tried the TO_LOB option using inserts; it ran for 20 hours and I killed the process. Are there any ways to improve the performance of LONG to CLOB conversion on huge tables?
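One option that avoids export/import altogether is the direct datatype change, which Oracle supports for LONG to CLOB. It still rewrites every row, so for 270 GB it needs generous undo/redo and a maintenance window, and the table is usually reorganised afterwards to reclaim space; a hedged sketch with placeholder names:
alter table big_table modify (long_col clob);
alter table big_table move;   -- compacts the segment; rebuild any unusable indexes afterwards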
View 6 Replies
View Related
Dec 6, 2011
I have a column containing three values: N, E, and Y. I want to get results with only the E and Y values. Is it possible to create an index which would not include the N values?
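Yes; the NULL-suppression technique from the article at the top of this page applies here as well. Map 'N' to NULL in a function-based index so only the E and Y rows are stored; the table and column names below are placeholders:
create index t_flag_fbi on t (decode(flag, 'N', null, flag));
select *
from   t
where  decode(flag, 'N', null, flag) in ('E', 'Y');   -- must repeat the indexed expression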
View 13 Replies
View Related
Feb 4, 2011
How can I find the tables in the database against which heavy DML is being run?
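A hedged starting point is dba_tab_modifications, which tracks inserts, updates and deletes per table since statistics were last gathered; flush the in-memory counters first (SELECT access to the DBA view is assumed):
exec dbms_stats.flush_database_monitoring_info;
select table_owner, table_name, inserts, updates, deletes,
       inserts + updates + deletes as total_dml
from   dba_tab_modifications
order  by total_dml desc;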
View 5 Replies
View Related
Feb 15, 2011
I am trying to update a million rows in one table with values from another table.
Table being updated: CI_ADJ_CHAR, column CHAR_VAL_FK1
Table from which the values will be taken: CK_ADJ, columns (cx_id, ci_id)
The CI_ADJ_CHAR.CHAR_VAL_FK1 values match CK_ADJ.CX_ID and should be updated with the value CK_ADJ.CI_ID.
The CK_ADJ table has 1.3 million rows, and both columns have indexes defined. The table definition is mentioned below.
The CI_ADJ_CHAR table has 14 million rows and will update 1 million rows and has an index on the ADJ_ID column but not on the CHAR_VAL_FK1 column.
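A MERGE cannot update a column that appears in its own ON clause (ORA-38104), so a correlated UPDATE limited to the matching rows is the usual pattern here; a hedged sketch, assuming cx_id is unique in CK_ADJ. Without an index on CHAR_VAL_FK1 this drives a full scan of CI_ADJ_CHAR, which for a one-off million-row update is often acceptable:
update ci_adj_char t
set    t.char_val_fk1 = (select s.ci_id
                         from   ck_adj s
                         where  s.cx_id = t.char_val_fk1)
where  exists (select 1
               from   ck_adj s
               where  s.cx_id = t.char_val_fk1);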
View 1 Replies
View Related