I have read and used the AWR script (mentioned in the page "Finding unused indexes") for finding unused customised (Z) indexes in our SAP system, which uses Oracle 10.2.0.2 as the SAP database.
But it returns no rows. Is there any precondition? I want to know how often the indexes are used... We suspect that there are a lot of unused indexes in the database.
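One thing worth checking, as a hedged suggestion: AWR only captures the top SQL per snapshot and only keeps it for the retention window, so indexes touched by cheap or infrequent statements can look unused there. Independently of AWR, Oracle's built-in index usage monitoring can confirm whether an index is used at all (it records used/not used, not a count). A minimal sketch, with a hypothetical index name:

ALTER INDEX my_z_index MONITORING USAGE;
-- ... let a representative workload run ...
SELECT index_name, used, monitoring, start_monitoring
FROM   v$object_usage
WHERE  index_name = 'MY_Z_INDEX';
ALTER INDEX my_z_index NOMONITORING USAGE;

Note that V$OBJECT_USAGE only lists indexes owned by the schema you are connected as.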
CREATE TABLESPACE my_ts
  DATAFILE 'C:\Oracle\oradata\db\my_ts.dbf' SIZE 5M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
ALTER DATABASE DATAFILE 'C:\Oracle\oradata\db\my_ts.dbf' AUTOEXTEND ON;
It was created successfully and the my_ts.dbf file is 5 MB.
Loading it with data...
create table big_table tablespace my_ts as select * from dba_objects;
select * from big_table;
begin
  for i in 1..10 loop
    insert into big_table select * from dba_objects;
  end loop;
end;
/
Now the my_ts.dbf file is 90 MB.
Now I want to drop this table:
drop table big_table purge;
And my tablespace file is still 90 MB.
I have already tried restarting the database, but it doesn't work...
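For context, a hedged note: dropping a segment frees extents inside the tablespace, but Oracle never shrinks the datafile itself; the file keeps its grown size until it is resized manually. If the freed space sits at the end of the file, something like this reclaims it at the OS level:

ALTER DATABASE DATAFILE 'C:\Oracle\oradata\db\my_ts.dbf' RESIZE 5M;

If other segments still occupy blocks near the end of the file, the RESIZE fails with ORA-03297 and the target size has to be raised accordingly.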
I have been trying to drop an unused column in a partitioned table, and the number of rows holding data in this unused column was very high. I kept running into errors such as:
ORA-01562: failed to extend rollback segment number 10
ORA-01650: unable to extend rollback segment R09 by 256 in tablespace RBS
I tried to "SET TRANSACTION USE ROLLBACK SEGMENT <name>" with a larger rollback segment, but it still did not work. Can I drop the "unused column" from each partition instead?
How would I do that? Or what are my options besides increasing the size of the rollback segment?
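One standard option here, offered as a sketch rather than a guaranteed fix: DROP UNUSED COLUMNS accepts a CHECKPOINT clause that commits after every N rows processed, which caps the undo/rollback usage (the table name below is hypothetical):

ALTER TABLE my_part_table DROP UNUSED COLUMNS CHECKPOINT 10000;

If the operation is interrupted partway, the table is left in a partially dropped state and the work is resumed with ALTER TABLE my_part_table DROP COLUMNS CONTINUE. Dropping the column partition by partition is not possible; column operations apply to the whole table.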
If you mark a column unused, is there any way to project it? I know the docs say you can't, but since the data is still there, I would have thought it should be possible. I can see the column in dba_tab_cols, but the obvious ways of making it usable don't work:
orcl> select column_name,hidden_column from user_tab_cols where table_name='DEPT';
COLUMN_NAME                    HID
------------------------------ ---
LOC                            NO
DNAME                          NO
DEPTNO                         NO
orcl> alter table dept set unused column loc;
Table altered.
orcl> select column_name,hidden_column from user_tab_cols where table_name='DEPT';
COLUMN_NAME                    HID
------------------------------ ---
SYS_C00003_13071316:19:02$     YES
DNAME                          NO
DEPTNO                         NO
orcl> select "SYS_C00003_13071316:19:02$" from dept; select "SYS_C00003_13071316:19:02$" from dept * ERROR at line 1: ORA-00904: "SYS_C00003_13071316:19:02$": invalid identifier
orcl> alter table dept rename column "SYS_C00003_13071316:19:02$"
  2  to loc;
alter table dept rename column "SYS_C00003_13071316:19:02$"
*
ERROR at line 1:
ORA-00904: "SYS_C00003_13071316:19:02$": invalid identifier
orcl> alter table dept modify "SYS_C00003_13071316:19:02$"
  2  visible;
alter table dept modify "SYS_C00003_13071316:19:02$"
*
ERROR at line 1:
ORA-00904: "SYS_C00003_13071316:19:02$": invalid identifier
I am working on a cleanup of tables. I need to find the list of unused tables and procedures. Is there any way to find out when a table was last queried?
Is there an SQL query that lists unused tables and procedures?
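Oracle does not record a "last queried" timestamp by default (DBA_TAB_MODIFICATIONS tracks DML only, and auditing would have had to be enabled in advance), so a common hedged approximation is to look for tables that never appear in any execution plan captured by AWR. A sketch with a hypothetical schema name, subject to AWR's top-SQL capture and retention limits:

SELECT o.owner, o.object_name
FROM   dba_objects o
WHERE  o.object_type = 'TABLE'
AND    o.owner = 'MYSCHEMA'
AND    NOT EXISTS ( SELECT 1
                    FROM   dba_hist_sql_plan p
                    WHERE  p.object_owner = o.owner
                    AND    p.object_name  = o.object_name );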
What happens if you mark a column unused in a compressed table and then run ALTER TABLE ... DROP UNUSED COLUMNS? We had a customer do this, and Oracle threw an ORA-03113 (end-of-file on communication channel) error. They did a system restore before contacting us, which blew away any evidence in the alert logs and trace files. They did this on a 400 GB compressed table.
My question is: when you drop an unused column from a compressed table, does the table uncompress? Where does this decompression occur? In the instance's default tablespace? In the tablespace configured for the table?
Basically, we are wondering whether the error was due to poor error handling when the system ran out of space during decompression, and we are trying to see if we can reproduce it. This was on an 11.1.0.7 system.
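If it helps, a minimal scaled-down repro sketch under obvious assumptions (hypothetical names; whether the drop is even permitted depends on the compression type and version, and provoking the original failure would presumably also require a cramped tablespace):

CREATE TABLE comp_test COMPRESS AS SELECT * FROM dba_objects;
ALTER TABLE comp_test SET UNUSED (object_name);
ALTER TABLE comp_test DROP UNUSED COLUMNS;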
I am on 11.2.0.3 Enterprise Edition. We are using the new "Composite Domain Index" feature for a Domain index on a very large table (>250,000,000 rows). It works really well with mixed queries. We added two number columns using FILTER BY. We have lots of DML on this table, so we run synchronize and optimize once a week. The sync behaves normally, but "optimize_index" takes a very, very long time to complete. I have switched on logging for the optimize process. The $I table takes some time but finishes normally, but the optimization of the $S table (the table created for the CDI feature) has now been running for over 12 hours and is far from finished. From the logfile, I can see that it optimizes 1000 rows every 20 minutes. Here is the output of the logfile:
Oracle Text, 11.2.0.3.0
14:33:05 06/26/12 begin logging
14:33:05 06/26/12 event
14:33:05 06/26/12 process $N for optimize: SEQDEV.GEN_GES_DESCRIPTION_CTX_I
14:33:16 06/26/12
14:33:16 06/26/12
...
I haven't found a recommendation from Oracle against using "optimize_index" for Domain indexes with CDI. But in my case, it would be much faster just to drop and recreate the Domain index in question.
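One thing we are considering, as a hedged sketch rather than a recommendation: CTX_DDL.OPTIMIZE_INDEX accepts a MAXTIME argument (in minutes) for FULL optimization, so the run can be time-boxed and resumed where it left off on the next call, instead of running unbounded:

BEGIN
  ctx_ddl.optimize_index(
    idx_name => 'GEN_GES_DESCRIPTION_CTX_I',
    optlevel => ctx_ddl.optlevel_full,
    maxtime  => 120);  -- stop after ~2 hours; the next run resumes
END;
/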
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query that runs very slowly, and the explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
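For illustration, a hedged sketch with hypothetical names: a LOCAL index is equipartitioned with the table, which is what makes Pstart/Pstop (and thus partition pruning) appear in the plan; whether that is appropriate here depends on whether the queries also filter on the partition key:

CREATE INDEX my_idx_local ON my_part_tab (c1, c2, c3, c4) LOCAL;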
I am facing the error "ORA-01502: index or partition of such index is in unusable state" while loading text data using SQL*Loader with the direct path option (direct=y, rows=10000). The table has a composite non-unique index. If I query DBA_INDEXES for the affected index, it shows the index status as VALID. There was no maintenance done on the affected table or index. I have tried loading the same data using the conventional path and didn't find any issues.
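One hedged check: for a partitioned index, DBA_INDEXES reports the top-level status as 'N/A' and the real per-partition status lives in DBA_IND_PARTITIONS, so a healthy-looking top-level status can hide an unusable partition:

SELECT index_name, partition_name, status
FROM   dba_ind_partitions
WHERE  status = 'UNUSABLE';
-- any hit can be repaired with, e.g.:
-- ALTER INDEX my_idx REBUILD PARTITION <partition_name>;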
where @var is user-supplied input at runtime... We had an index on a.c2, and the CBO would use this index to generate an optimised query plan. We found that some records from table "b" were being dropped due to the inner join, so we changed the join to:
a.c1(+)=b.c1 and nvl(a.c2,@var)=@var
This query no longer uses the index; instead it does a full table scan, causing the query to slow down. I have tried creating an index on nvl(a.c2,'31-dec-9999'),
but the CBO won't use it. Is there any way to create an index on this column so that the full table scan can be avoided?
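For what it's worth, a hedged sketch of why the function-based index is ignored: the CBO only uses an FBI when the query predicate textually matches the indexed expression, and nvl(a.c2,@var) embeds a runtime bind, so it can never match nvl(a.c2,'31-dec-9999'). An equivalent rewrite exposes both cases to the optimizer, and a composite index with a constant trailing column stores the NULL keys that a single-column index would omit (names taken from the question):

-- equivalent rewrite of nvl(a.c2,@var)=@var:
--   AND (a.c2 = @var OR a.c2 IS NULL)
-- index that also covers the IS NULL branch:
CREATE INDEX a_c2_idx ON a (c2, 0);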
We are seeing occurrences of "enq: TX - index contention" waits in the database. Using the SQL ID, we have identified the INSERT statement and the table being inserted into.
This table has almost 25 different indexes, some of which are unique as well. I am wondering how to identify the actual index causing the issue, out of these 25 indexes.
Is there any way to pinpoint the name of the index that is causing the contention? My plan is, once the index is identified, to check the extents, INITRANS and other attributes of that index in order to fix it.
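A hedged starting point (assumes Diagnostics Pack licensing for the AWR/ASH views): Active Session History records the object a session was waiting on in CURRENT_OBJ#, so the contended index usually falls out of a query like this:

SELECT h.current_obj#, o.object_name, o.object_type, COUNT(*) AS waits
FROM   dba_hist_active_sess_history h
JOIN   dba_objects o ON o.object_id = h.current_obj#
WHERE  h.event = 'enq: TX - index contention'
GROUP  BY h.current_obj#, o.object_name, o.object_type
ORDER  BY COUNT(*) DESC;

For very recent waits, V$ACTIVE_SESSION_HISTORY can be substituted for the DBA_HIST view.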
I'm not really sure why Oracle is not finding my foreign key. I'm creating a simple set of tables for a company, and I'm declaring all primary keys and foreign keys as necessary. These are my columns:
mission_id, mission_type_id, security_level and code_name.
What I have to do is get the 10 most recent missions and change their security level to the highest one in their mission_type_id, but ONLY if the code_name length is > 7.
So far I have the statement below; the problem is that Oracle complains about the ORDER BY and wants me to close the bracket before it:
UPDATE missions m
SET    m.security_level = ( SELECT MAX(m2.security_level)
                            FROM   missions m2
                            WHERE  m2.mission_type_id = m.mission_type_id
                            AND    length(m2.code_name) > 7 )
WHERE  m.mission_id IN ( SELECT m3.mission_id
                         FROM   missions m3
                         ORDER BY m3.mission_id DESC )
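One way around it, sketched under the assumption that a higher mission_id means a more recent mission: Oracle does not allow ORDER BY directly inside an IN subquery, so order in an inline view and cut it down with ROWNUM:

UPDATE missions m
SET    m.security_level = ( SELECT MAX(m2.security_level)
                            FROM   missions m2
                            WHERE  m2.mission_type_id = m.mission_type_id
                            AND    length(m2.code_name) > 7 )
WHERE  m.mission_id IN ( SELECT mission_id
                         FROM  ( SELECT mission_id
                                 FROM   missions
                                 ORDER  BY mission_id DESC )
                         WHERE  rownum <= 10 );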
How can I find out which alert log file the database is currently using? Is there a command at the database level to find the current alert log file the database is writing to?
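A hedged pointer: on 11g and later, the location is exposed through V$DIAG_INFO (the text alert log is alert_<SID>.log in the 'Diag Trace' directory, and 'Diag Alert' holds the XML version); on older releases it lives under the background_dump_dest parameter:

SELECT name, value
FROM   v$diag_info
WHERE  name IN ('Diag Trace', 'Diag Alert');

-- pre-11g, from SQL*Plus:
-- show parameter background_dump_dest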