I have a requirement where user input values are passed to the backend as a comma-separated string, which is queried against the table using INSTR. But the query is not using the index present on the table, because of the INSTR function. How can I create an index in such a way that the INSTR predicate can use it?
The query below goes for a full table scan because of this.
select * from test_idx where (INSTR (','||'E10000'||',', ',' || ccn || ',') <> 0 OR 'E10000' = 'DEFAULT')
and mod='90396' and rpt_flag='O' and smp_identifier=2
How can I recreate the index so that these queries use it?
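If rewriting the query is an option, one common alternative is to split the comma-separated string into rows so that a normal index on ccn can be used instead of INSTR. Below is only a sketch of that idea; the bind name :ccn_list and the REGEXP_SUBSTR/CONNECT BY split are my assumptions, not part of the original query.

select t.*
from   test_idx t
where  t.mod = '90396'
and    t.rpt_flag = 'O'
and    t.smp_identifier = 2
and    ( :ccn_list = 'DEFAULT'
         or t.ccn in ( -- split :ccn_list into one row per value
                       select regexp_substr(:ccn_list, '[^,]+', 1, level)
                       from   dual
                       connect by level <= regexp_count(:ccn_list, ',') + 1 ) );

With an equality/IN predicate on ccn, an ordinary b-tree index on (ccn), or on (mod, rpt_flag, smp_identifier, ccn), becomes usable; no index can help the original predicate because the indexed column is buried inside the INSTR call.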
Example: I want to search 'Hello world' for the first instance of the letter 'o' starting from the end, i.e. backwards. As you can see, the result for DBMS_LOB.INSTR is null when -1 is entered for the offset.
select DBMS_LOB.instr('Hello world','o',-1,1) lob_i, instr('Hello world','o',-1,1) std_i from dual;
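DBMS_LOB.INSTR requires an offset of 1 or greater, which is why the negative offset returns NULL instead of searching backwards the way the standard INSTR does. One possible workaround is to scan forwards and keep the last hit; the function below is only a sketch (the name lob_instr_last is made up for illustration):

create or replace function lob_instr_last (p_lob clob, p_pattern varchar2)
  return integer
is
  l_last integer := 0;
  l_hit  integer;
begin
  -- dbms_lob.instr only searches forwards, so remember the most recent match
  l_hit := dbms_lob.instr(p_lob, p_pattern, 1, 1);
  while l_hit > 0 loop
    l_last := l_hit;
    l_hit  := dbms_lob.instr(p_lob, p_pattern, l_last + 1, 1);
  end loop;
  return l_last;   -- 0 means the pattern was not found
end;
/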
There is an 'emp' table with a column named 'mgr' of datatype NUMBER. Following is the detailed description of the table:
SQL> desc emp;
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------------------
 EMPNO                                     NOT NULL NUMBER(4)
 ENAME                                              VARCHAR2(10)
 JOB                                                VARCHAR2(9)
 MGR                                                NUMBER(4)
 HIREDATE                                           DATE
 SAL                                                NUMBER(7,2)
 COMM                                               NUMBER(7,2)
 DEPTNO                                             NUMBER(2)
Now when I run the query 'select mgr from emp e;' the output I get is:
7902 7698 7698 7839 7698 7839 7839 7566
7698 7788 7698 7566 7782
Note: one of the values in between is null. What I need is to print the character value 'President' in place of the null.
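Since mgr is a NUMBER column, the null has to be substituted after converting the value to a character type; a minimal sketch:

select nvl(to_char(mgr), 'President') as mgr from emp;
-- coalesce(to_char(mgr), 'President') behaves the same way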
create or replace directory UTL_FILE_DIR as 'k:lob_test';
GRANT READ, WRITE ON DIRECTORY UTL_FILE_DIR TO user1;
and then I log in as user user1:
create or replace package body p_utl_mail_attach_raw as
  procedure send_single_user_attach_raw(........) is
    fil    BFILE;
    filenm VARCHAR2(50) := 'aaa.jpg';
  begin
    fil := BFILENAME('UTL_FILE_DIR', filenm);
  end;
The result it shows is: UTL_FILE_DIR/aaa.jpg, exists=F, length=0, open=F
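exists=F generally means the file name cannot be resolved under the directory's OS path on the database server, so it is worth re-checking both. A small verification sketch (assuming DBMS_OUTPUT is enabled); note that the name passed to BFILENAME is the directory object name and is case-sensitive:

select directory_name, directory_path from all_directories where directory_name = 'UTL_FILE_DIR';

set serveroutput on
declare
  fil bfile := bfilename('UTL_FILE_DIR', 'aaa.jpg');   -- directory object name, then file name
begin
  dbms_output.put_line('fileexists = ' || dbms_lob.fileexists(fil));   -- 1 = found, 0 = not found
end;
/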
I came across a situation where a nullable column's index is not used for an 'order by' clause. I added a NOT NULL condition in the 'where' clause but it wasn't useful. I didn't want to make a composite index with a not-null column or a constant, or modify the column to NOT NULL.
So I carried out test cases, during which I found that in one case the SQL statement uses a 'fast full scan' for data access but does not use the index for the 'order by' sort.
Here are the steps.
Initially I kept the column Nullable
SQL> create sequence s5;
Sequence created.
SQL> create table t5 as select s5.nextval id, a.* from dba_objects a where rownum<1001;
Table created.
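(The index i5 referenced by the hint and the row source plans below was presumably created around this point, with something like:)

SQL> create index i5 on t5(id);
Index created.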
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=16 pr=0 pw=0 time=4771 us)
   1000   TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1157 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68       49.49         49.72
********************************************************************************
select /*+ index(t i5) */ * from t5 t where id is not null order by id
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=5167 us)
   1000   INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3141 us)(object id 4673065)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      69        0.00          0.00
  SQL*Net message from client                    69       22.89         28.04
Now I modified the 'id' column to Not Null
SQL> alter table t5 modify id not null;
SQL> set pages 100
SQL> select column_name,nullable from user_tab_columns where table_name='T5';
COLUMN_NAME                    N
------------------------------ -
ID                             N
OWNER                          Y
OBJECT_NAME                    Y
SUBOBJECT_NAME                 Y
OBJECT_ID                      Y
DATA_OBJECT_ID                 Y
OBJECT_TYPE                    Y
CREATED                        Y
LAST_DDL_TIME                  Y
TIMESTAMP                      Y
STATUS                         Y
TEMPORARY                      Y
GENERATED                      Y
SECONDARY                      Y
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=16 pr=0 pw=0 time=2398 us)
   1000   TABLE ACCESS FULL T5 (cr=16 pr=0 pw=0 time=1152 us)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68       37.74         37.91
********************************************************************************
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  TABLE ACCESS BY INDEX ROWID T5 (cr=150 pr=0 pw=0 time=4166 us)
   1000   INDEX FULL SCAN I5 (cr=71 pr=0 pw=0 time=3142 us)(object id 4673065)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68        8.28          8.45
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY (cr=6 pr=0 pw=0 time=1342 us)
   1000   INDEX FAST FULL SCAN I5 (cr=6 pr=0 pw=0 time=1093 us)(object id 4673065)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      68        0.00          0.00
  SQL*Net message from client                    68        1.88          1.89
My questions are:
1) Why wasn't adding 'where id is not null' enough for the index to get used for the 'order by'?
2) When we got the 'fast full scan', why wasn't the index used for the 'order by' clause?
3) Do we need the indexed column in the where clause for it to be used for the 'order by' clause too?
4) Do we need an 'order by' clause at all if we are selecting only the indexed column with sequence-generated values?
I'm currently moving an IOT from one database to another using expdp/impdp. The IOT is non-partitioned and about 100 GB in size, containing ~1.1 billion rows.
The dumpfile contains nothing but the IOT. I'm importing with no special parameters and no pre-created IOT, just an ordinary dumpfile import (impdp username/password dumpfile=impdp:iot.dmp nologfile=y).
During the import I got 'unable to extend TEMP' errors from impdp:
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
I had to add 2 additional files to my TEMP tablespace (96 GB of temp in total) before the import could finish.
Is this temp usage to be expected when importing IOTs?
How do I determine if a function is worth pinning in memory? I want to come up with a percentage, the idea being that if the function is already in memory 80%+ of the time then pinning is not worth it.
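One rough way to arrive at such a percentage is to compare (re)loads against executions in V$DB_OBJECT_CACHE. The query below is only a sketch (MY_FUNC is a placeholder name), and the ratio is an approximation of how often the object was already in the shared pool rather than an exact residency figure:

select owner, name, type, sharable_mem, loads, executions, kept,
       -- executions that did not require a (re)load, as a percentage
       round(100 * (1 - loads / nullif(executions, 0)), 1) as pct_already_in_memory
from   v$db_object_cache
where  type in ('FUNCTION', 'PROCEDURE', 'PACKAGE')
and    name = 'MY_FUNC';

-- if the percentage is low and the object is reloaded often, pinning itself would be done with:
-- exec dbms_shared_pool.keep('OWNER.MY_FUNC', 'P')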
I have a table which sees a lot of query activity:
CREATE TABLE CASE_STAGE
(
  ID            NUMBER(9) NOT NULL,
  STAGE_ID      NUMBER(9) NOT NULL,
  CASE_PHASE_ID NUMBER(9) NOT NULL,
  "CURRENT"     NUMBER(1) NOT NULL,
  --and other columns
)
ID is the primary key and CASE_PHASE_ID is a foreign key. "CURRENT" should only ever have values of 0 or 1, and when it has a value of 1 it must be unique for that CASE_PHASE_ID.
What I have tried, which doesn't work, is:
create unique index case_stage_F_IDX1 on case_stage("CURRENT", case_Phase_id)
which gives me
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
What is the correct syntax? Something like ("CURRENT"=1, case_phase_id) seems right but fails with an error about a missing bracket. Do I need to use a CASE statement here?
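Yes, a CASE expression inside a function-based unique index is the usual way to express this kind of conditional uniqueness. A minimal sketch based on the columns above:

create unique index case_stage_f_idx1 on case_stage
( case when "CURRENT" = 1 then case_phase_id end );
-- rows with "CURRENT" = 0 evaluate to NULL, are not stored in the index and never collide,
-- so at most one row with "CURRENT" = 1 is allowed per CASE_PHASE_ID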
I have created a function-based index (FBI) on trim(header_date), but when I query the table with a hard-coded date it does not work, and I have to manually apply trim to get the result.
My query after creating the FBI is:
select * from abc where header_date = '21-JUN-11' — no results are returned, and when I apply trim to header_date it works fine.
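A function-based index is only considered when the query predicate contains the same expression the index was built on; a plain header_date = '21-JUN-11' predicate cannot use an index on trim(header_date), and the string comparison also relies on implicit conversion. A hedged sketch of a predicate that can match the FBI, assuming header_date is a character column with padding (which is what trim suggests):

select * from abc where trim(header_date) = '21-JUN-11';   -- predicate expression matches the FBI expression

If header_date is really a DATE column, both trim() and the string literal involve implicit, NLS-dependent conversions, and an explicit to_date('21-JUN-11','DD-MON-RR') comparison against the raw column would be the safer route.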
This table is used by a query where one of the conditions is AND STATUS <> 'C'.
The data is as follows:
select count(*) record_count, status from new_business group by status;
RECORD_COUNT STATUS
------------ ------
     4298025 C
          15 N
          13 Q
         122 S
I want to know whether the following index would be useful in this case, given that the condition in the where clause is
"AND STATUS <> 'C'"
create index nb_index_1 on new_business
(case when status in ('N','Q','S') then 1 else NULL end);

Or

create index nb_index_1 on new_business
(case when status = 'N' then 'N' when status = 'Q' then 'Q' when status = 'S' then 'S' else NULL end);
I tried it on a sample table but the index is simply not picked up, even when hinted. Following are the DB-level settings
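For either of those indexes to have a chance of being used, the query has to filter on exactly the indexed expression rather than on STATUS <> 'C' itself; the CBO does not translate one into the other. A hedged sketch of the matching rewrite for the first index:

select *
from   new_business
where  (case when status in ('N','Q','S') then 1 else NULL end) = 1;
-- the predicate expression is identical to the index expression, so only the ~150
-- non-'C' rows are stored in the index and need to be visited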
One of our queries is not using a function-based index. The required privileges are granted to the user executing the query and the table stats are gathered, so what could be the reason for the query not picking the FB index? The table is a huge one with millions of records; is it that the CBO thinks that not picking the FB index is the best execution plan? How can we make the query use the FB index, given the restriction that we cannot force it with hints?
Using Oracle 10g R2 on Sun Solaris 10 (SPARC 64-bit). In the MIS system we have a lot of ad-hoc queries coming up. We have proper indexing. Here is an example which runs very slowly:
Here GLUPMJ is already indexed, so the second query returns an index scan, but the first query naturally does an FTS. Now I plan to create a function-based index on the highlighted expression, but how?
create index glupmj_idx on f0911
(TO_DATE('1 JAN' || (19 + substr(GLUPMJ, 1, 1)) || substr(GLUPMJ, 2, 2)) + substr(GLUPMJ, 4, 3));
.. Error

If I don't use an FBI my query will result in FTS.
1> How can I create an FBI in this case?
2> In MIS systems, where any number of ad-hoc queries can come up, how do we avoid FTS?
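The create index statement above most likely fails because TO_DATE without an explicit format mask, together with the NLS-dependent 'JAN' literal, is not a deterministic expression, and function-based indexes require deterministic, NLS-independent expressions. A possible variant that avoids character date formats altogether is sketched below; it mirrors the original arithmetic on the JDE-style Julian value in GLUPMJ (century digit, two-digit year, day offset) and is an assumption about the intended decoding, not a tested statement:

create index glupmj_idx on f0911
( add_months(DATE '1900-01-01', 12 * trunc(glupmj / 1000))   -- 1 Jan of the decoded year
  + mod(glupmj, 1000) );                                     -- add the day offset, as in the original expression

Two caveats: if the last three digits are a 1-based day-of-year, subtracting 1 from the offset may be needed, and the ad-hoc queries only benefit if they use exactly the same expression (or a view that exposes it), which also bears on question 2: FTS can only be avoided for predicates that some index expression actually covers.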
It is not possible to run SHRINK SPACE against a table with a function-based index. This is documented, and I've tested it on the current release. I've reverse engineered it a bit, and the issue is in fact that you cannot SHRINK SPACE if there is an index on a virtual column:

SQL> create table t1(c1 number, c2 as (c1*2)) segment creation immediate;
Table created.
SQL> alter table t1 enable row movement;
Table altered.
SQL> alter table t1 shrink space;
Table altered.
SQL> create index i1 on t1(c2);
Index created.
SQL> alter table t1 shrink space;
alter table t1 shrink space
ERROR at line 1:
ORA-10631: SHRINK clause should not be specified for this object.
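For completeness, the only workaround I am aware of (a sketch, not an officially documented procedure) is to drop the index on the virtual column for the duration of the shrink and recreate it afterwards:

SQL> drop index i1;
SQL> alter table t1 shrink space;
SQL> create index i1 on t1(c2);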
I have a script that uses the INSTR function to search through a block of data for a specific string (CALL). I am ONLY looking for that exact string, but unfortunately there are other words within that block of text that contain it (e.g. CALL_MY_PHONE). Is there any way to make the INSTR search match only the exact string? Below is the code that I am using.
This is used in the where clause of the REF CURSOR SELECT query which sends the data back to SSRS,
i.e., SELECT BU.* FROM BU_DETAIL BU WHERE INSTR(V_BU_LST, BU_ID) <> 0;
INSTR has a chance to fail in this scenario if the value sent from the front end is 123456,3456,4577.
Here 123456 does not exist in the table, but INSTR will still evaluate as true, and the value 1234 from the table will be sent back to SSRS, which is wrong. Earlier I was using a function to convert the comma-separated values into multiple rows and treat them like a lookup table.
But the main table has around a million records, and each row has to be processed against each row of the lookup table, which makes it slower. To avoid this I used INSTR, which is faster but can give wrong results.
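One common way to keep the INSTR approach but avoid partial matches is to wrap both the list and the looked-up value in the delimiter, so only whole comma-bounded values can match; a sketch based on the query above:

SELECT BU.*
FROM   BU_DETAIL BU
WHERE  INSTR(',' || V_BU_LST || ',', ',' || BU_ID || ',') > 0;
-- '123456,3456,4577' becomes ',123456,3456,4577,', so BU_ID 1234 can no longer match
-- inside 123456; the same idea (wrapping with spaces or another delimiter) applies to the
-- CALL vs CALL_MY_PHONE case above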
I am on 11.2.0.3 Enterprise Edition. We are using the new "Composite Domain Index" feature for a domain index on a very large table (>250,000,000 rows). It really works well with mixed queries. We added two number columns using FILTER BY. We have lots of DML on this table; therefore, we execute synchronize and optimize once a week. The synchronize behaves pretty normally, but "optimize_index" takes a very, very long time to complete. I have switched on 'logging' for the optimize process. The $I table takes some time but finishes normally. However, the optimization of the $S table (the table created for the CDI feature) has been running for over 12 hours now and is far from finished. From the logfile, I can see that it optimizes 1000 rows every 20 minutes. Here is the output of the logfile:
Oracle Text, 11.2.0.3.0
14:33:05 06/26/12 begin logging
14:33:05 06/26/12 event
14:33:05 06/26/12 process $N for optimize: SEQDEV.GEN_GES_DESCRIPTION_CTX_I
14:33:16 06/26/12
14:33:16 06/26/12
...
I haven't found a recommendation from Oracle not to use "optimize_index" for Domain Indexes with CDI. But in my case, it would be much faster just to drop and recreate the Domain Index in question.
I have a huge table (about 60 GB), range partitioned. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly. The explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
I am facing the error "ORA-01502: index or partition of such index is in unusable state" while loading text data using SQL*Loader with the direct path option (direct=y, rows=10000). The table has a composite non-unique index. If I query DBA_INDEXES for the affected index it shows the index status as VALID. No maintenance was done on the affected table or index. I have tried loading the same data using the conventional path and didn't find any issues with it.
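ORA-01502 during a direct path load usually points at an index or index partition that is already UNUSABLE, often left over from an earlier direct load that did not complete its index maintenance, so it is worth checking partition-level status as well as DBA_INDEXES and rebuilding anything unusable. A sketch (table and index names are placeholders):

select index_name, status from dba_indexes where table_name = 'MY_TABLE';
select index_name, partition_name, status
from   dba_ind_partitions
where  status = 'UNUSABLE';

alter index my_index rebuild;
-- alter index my_index rebuild partition part_name;   -- per partition, if partitioned

-- sqlldr can also skip index maintenance during the direct load entirely
-- (indexes must then be rebuilt afterwards):
-- sqlldr ... direct=true skip_index_maintenance=true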
where @var is user-supplied input at runtime... We had an index on a.c2. The CBO would use this index to generate an optimised query plan. We found some records from table "b" were being dropped because of the inner join, so we changed the join. It now looks like:
a.c1(+)=b.c1 and nvl(a.c2,@var)=@var
This query is no longer using the index; instead it is doing a full table scan, causing the query to slow down. I have tried creating an index on nvl(a.c2,'31-dec-9999').
But the CBO won't use it. Is there any way to create an index on this column so that the full table scan can be avoided?
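A function-based index can only be used when the query contains the exact indexed expression, and here the NVL default is the bind value itself (nvl(a.c2,@var)), so an index on nvl(a.c2,'31-dec-9999') can never match. One possible approach, sketched under the assumption that the predicate simply means "a.c2 equals the value or is null", is to rewrite the predicate and index c2 with a trailing constant so that rows where c2 is null are still present in the index; whether the optimizer actually picks it will depend on the data distribution.

create index a_c2_idx on a (c2, 0);   -- trailing constant keeps rows with null c2 in the index

-- logically the same rows as nvl(a.c2, :var) = :var
... where a.c1(+) = b.c1
      and (a.c2 = :var or a.c2 is null)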