I have normal tables with huge data and would like to increase the performance by the following means:
1) Add a new column to each table, say IS_LIVE. This new column has only two values: 1 (LIVE) or 0 (NOT LIVE).
2) Convert the normal tables to partitioned tables. There would be only two partitions in each table; the partition key column would be IS_LIVE and the two partitions' records would be in two different tablespaces.
3) Add a POLICY function to these partitioned tables to always append a query predicate of IS_LIVE = 1 to all queries.
I am interested to know what kind of indexes (global or local) would be suitable for this kind of design. Is there any use in having a local index on IS_LIVE? Please note that the primary key does not include this new column.
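For illustration, here is a minimal sketch of the design being described; the table, column, tablespace, and policy names (orders_part, ts_live, ts_offline, live_rows_policy) are assumptions, not taken from the question:

CREATE TABLE orders_part (
    order_id   NUMBER        NOT NULL,
    order_data VARCHAR2(100),
    is_live    NUMBER(1)     NOT NULL,
    CONSTRAINT orders_part_pk PRIMARY KEY (order_id)  -- PK without IS_LIVE, so its index must be global
)
PARTITION BY LIST (is_live) (
    PARTITION p_live    VALUES (1) TABLESPACE ts_live,
    PARTITION p_offline VALUES (0) TABLESPACE ts_offline
);

-- Within each partition every row has the same IS_LIVE value, so a local
-- index on IS_LIVE alone cannot discriminate; local indexes on other
-- filter columns are the more typical candidates:
CREATE INDEX orders_part_i1 ON orders_part (order_data) LOCAL;

-- VPD policy function returning the predicate to append to every query
CREATE OR REPLACE FUNCTION live_rows_policy (
    p_schema VARCHAR2,
    p_object VARCHAR2) RETURN VARCHAR2 IS
BEGIN
  RETURN 'is_live = 1';
END;
/

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => USER,
    object_name     => 'ORDERS_PART',
    policy_name     => 'LIVE_ROWS',
    policy_function => 'LIVE_ROWS_POLICY');
END;
/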
Is it possible for the DBMS_STATS "LIST STALE" command to show a stale partition but NOT have its table show as stale?
I had a scenario where the table itself AND one partition showed as stale. I ran an fnd_stats gather table stats on just that one partition. Once it completed, the partition no longer showed as stale, and neither did the table. So I guess I do not need to run stats on the whole table as well?
If that is the case, when would I need to run stats on the full partitioned table, if running them on the partitions themselves removes the staleness of the table?
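For reference, a sketch of a partition-only gather (fnd_stats is the EBS wrapper; this shows the underlying DBMS_STATS call with illustrative table and partition names):

BEGIN
  -- Gather statistics for a single partition only; with
  -- granularity => 'PARTITION' the table-level (global) statistics
  -- are not regathered by this call
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'SALES',
    partname    => 'SALES_2012Q1',
    granularity => 'PARTITION',
    cascade     => TRUE);
END;
/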
As per an article on Oracle Base, I have converted a non-partitioned table (1 million rows) into a range-partitioned table, but I don't see any performance improvement in the explain plan.
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly, and the explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
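As a point of comparison, a sketch of what recreating the index as a local one would look like (big_tab, big_tab_ix, and the column names are placeholders): with a local index the plan shows Pstart/Pstop, though without a predicate on the partitioning key a local index still has to probe every partition.

-- Recreate the four-column global index as a locally partitioned one
DROP INDEX big_tab_ix;
CREATE INDEX big_tab_ix ON big_tab (col1, col2, col3, col4) LOCAL;

-- Check partition pruning via the Pstart/Pstop columns of the plan
EXPLAIN PLAN FOR SELECT * FROM big_tab WHERE col1 = :b1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);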
If we have not set a parallel degree for a table, then we can (try to) force parallel execution on it using a parallel hint. Does this parallelism work on the index search in the query as well?
In which situations will a parallel index (degree > 2) on a non-parallel, non-partitioned table benefit a query?
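A sketch of the two hints in question (big_tab and big_tab_ix are placeholder names): PARALLEL drives the table scan, while the index side needs its own PARALLEL_INDEX hint.

-- Parallel full scan of a table whose dictionary DEGREE is 1
SELECT /*+ FULL(t) PARALLEL(t, 8) */ COUNT(*)
FROM   big_tab t;

-- For a non-partitioned index, only an index FAST FULL scan can run in
-- parallel; PARALLEL_INDEX requests that (partitioned indexes can also
-- do parallel range scans, one slave set per partition)
SELECT /*+ INDEX_FFS(t big_tab_ix) PARALLEL_INDEX(t, big_tab_ix, 8) */ COUNT(col1)
FROM   big_tab t;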
#  TABLE_NAME                INDEX_NAME        COLUMN_NAME  COLUMN_POSITION
1  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  ITEMID       4
2  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  ITEMTYPE     3
3  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  OWNERID      2
4  DGT_ITEMEFFORTDATA_DAILY  HCLT_IDX_DGT_IFD  OWNERTYPE    1

There is no index on the DGT_ITEMEFFORTDATA_TEMP table.
We have a program that is taking about 13-14 hours to run, and we need to generate traces to see where it is taking so long. I usually use event 10046 for the tracing; I'm wondering if the traces can be built incrementally so that the output doesn't become one huge trace file.
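One approach, sketched below: a 10046 trace normally goes to a single file per session, but changing TRACEFILE_IDENTIFIER mid-run makes Oracle switch to a new trace file, so the trace can be split into phases (the identifier values here are made up):

ALTER SESSION SET tracefile_identifier = 'phase1';
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... run the first stage of the program ...

-- Changing the identifier makes Oracle start writing a new trace file
ALTER SESSION SET tracefile_identifier = 'phase2';
-- ... run the next stage ...

ALTER SESSION SET events '10046 trace name context off';

If needed, the per-phase files can later be stitched back together with the trcsess utility.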
I have always been under the impression that we should use the minimum length for a VARCHAR2 field that can store the data we need to manipulate. But recently I was told that it has little impact on performance if we assign a much longer size.
I have a query (report) which runs in under 5 minutes in one schema, whereas the same query runs for a long time in a second schema. I have identified that an index scan reads more than 2,000 million rows in the second schema, but only 440 million in the first schema, which is why the first is fast. I am expecting the same behavior in the second schema.
I have verified the following: all records in the tables in the 2 schemas are the same; all indexes are the same; I analyzed the tables; and I gathered histograms on all the columns as per the first schema.
But I still have the same problem and don't know what the cause could be.
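For what it's worth, a sketch of forcing a fresh, histogram-inclusive gather on the slow schema (the owner and table names SCHEMA2 and BIG_TAB are placeholders):

BEGIN
  -- cascade => TRUE also refreshes the index statistics, which matter
  -- for an index range scan's cardinality estimate
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCHEMA2',
    tabname          => 'BIG_TAB',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/

Comparing the two plans side by side (EXPLAIN PLAN plus DBMS_XPLAN.DISPLAY in each schema) usually shows which cardinality estimate diverges between the schemas.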
We have a huge table in production with a LONG column. We are trying to change its datatype to CLOB. The table has 120 million records and is 270 GB in size.
We tried using the Oracle expdp/impdp option to test the conversion in our perf environment. With a parallel degree of 32, the export completed in 1.5 hrs; however, the import took 13 hrs.
I also tried the TO_LOB option using inserts; it ran for 20 hrs and I killed the process. Are there any ways to improve the performance of a LONG to CLOB conversion on huge tables?
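One alternative worth testing (a sketch; big_tab and its columns are placeholders): a direct-path CTAS with TO_LOB, then swapping the tables, typically beats row-by-row inserts. Note that a LONG column blocks parallel query on the source, so the read side stays serial; NOLOGGING still avoids most redo.

CREATE TABLE big_tab_new NOLOGGING AS
SELECT id,
       col2,
       TO_LOB(long_col) AS long_col   -- TO_LOB converts the LONG to a CLOB here
FROM   big_tab;

-- Afterwards: recreate indexes and constraints on big_tab_new,
-- drop big_tab, and rename big_tab_new into place.

ALTER TABLE ... MODIFY (long_col CLOB) is the in-place alternative, but it also rewrites the whole table internally, so it is unlikely to be faster.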
I am facing a challenge while running an update query on a newly added column in an existing table.
Environment details: Oracle 9i, version 9.2.0.6; OS: Unix AIX 6.1
Number of records in table: 12572770

Below are the steps I followed.
1. In table testtablename, I added a new column COLUMNNAME29 with datatype VARCHAR2(8).
2. After adding the new column, I executed the update query to populate the data from COLUMNNAME1 into COLUMNNAME29.
3. The query uses COLUMNNAME24 in the where clause so that it is driven by an index.
SQL> desc testtablename
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COLUMNNAME1                                        VARCHAR2(8)
 COLUMNNAME2                                        CHAR(1)
 COLUMNNAME3                                        CHAR(1)
 COLUMNNAME4                                        VARCHAR2(8)
 COLUMNNAME5                                        VARCHAR2(11)
[Code]...
Table altered.
SQL> select index_name, column_position, column_name from dba_ind_columns where table_name = 'TESTTABLENAME' order by index_name,column_position;
1. The update query hangs in the database; it does not progress. (A single update should modify approximately 40000 records.)
2. No Oracle error is thrown in the alert log or in the session where the query is executed.
3. The wait event for the query is "db file sequential read".
4. When I update the newly added column COLUMNNAME29 with the static value '1', the update completes successfully in a few seconds.
5. When I change the static value to '1111' and execute the update statement, the query hangs in the database.
6. When I update the existing column (COLUMNNAME1) with the static value '1111', the update completes successfully.
Below are the queries that completed successfully:
UPDATE testtablename
SET columnname29 = '1'
WHERE columnname24 >= TO_DATE('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
AND columnname24 < TO_DATE('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS');
[Code]...
Below are the queries that hang in the database:
UPDATE testtablename
SET columnname29 = columnname1
WHERE columnname24 >= TO_DATE('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
AND columnname24 < TO_DATE('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS');
UPDATE testtablename
SET columnname29 = '1111'
WHERE columnname24 >= TO_DATE('01-12-2002 00:00:00', 'DD-MM-YYYY HH24:MI:SS')
AND columnname24 < TO_DATE('01-01-2003 00:00:00', 'DD-MM-YYYY HH24:MI:SS');
Below is the character set information from the database:
SQL> select * from v$nls_parameters;

PARAMETER                 VALUE
------------------------- ----------
NLS_LANGUAGE              AMERICAN
NLS_TERRITORY             AMERICA
NLS_CURRENCY              $
NLS_ISO_CURRENCY          AMERICA
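A diagnostic sketch: rather than assuming the session is truly hung, it is worth checking what the "db file sequential read" waits are on and whether they are still advancing (:sid is the hanging session's SID):

SELECT sid, event, p1text, p1, p2text, p2, seconds_in_wait
FROM   v$session_wait
WHERE  sid = :sid;

-- For "db file sequential read", P1 is the file# and P2 the block#.
-- If P1/P2 keep changing between samples, the update is progressing
-- (likely index maintenance or migrated-row I/O), just slowly.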
I just got confused while looking at the two create table statements below:
CREATE TABLE Test (
    TestID integer not null,
    Name varchar2(20) not null
)
PARTITION BY LIST (TestID) (
    PARTITION testPart1 VALUES (1) TABLESPACE tbspc1,
    PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer
);
and
CREATE TABLE Test (
    TestID integer not null,
    Name varchar2(20) not null
)
TABLESPACE tbspc1
PARTITION BY LIST (TestID) (
    PARTITION testPart1 VALUES (1) TABLESPACE tbspc1,
    PARTITION testPart2 VALUES (2) TABLESPACE tbspc2@RemoteServer
);
How does column width affect index performance?
For example, suppose I had an IOT table emp_iot with columns (id number, job varchar2(20), time date, plan number).
The table key consists of (id, job, time).
Column JOB has a fixed list of distinct values ('ANALYST', 'NIGHT_WORKED', etc.).
What performance increase could I expect if, in the JOB column, I stored not names but numeric codes identifying the job names, e.g. 1 instead of 'ANALYST' and 2 instead of 'NIGHT_WORKED'?
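A sketch of the numeric-code variant (the job_codes lookup table is an assumption): a one- or two-digit NUMBER stores in roughly 2 bytes per key entry versus up to 20 bytes for the VARCHAR2 name, which shrinks every key in the IOT.

-- Lookup table holding the job names once
CREATE TABLE job_codes (
    job_id   NUMBER(3)    PRIMARY KEY,
    job_name VARCHAR2(20) NOT NULL UNIQUE
);

-- IOT keyed on the compact numeric code instead of the name
CREATE TABLE emp_iot (
    id     NUMBER,
    job_id NUMBER(3) REFERENCES job_codes,
    time   DATE,
    plan   NUMBER,
    CONSTRAINT emp_iot_pk PRIMARY KEY (id, job_id, time)
) ORGANIZATION INDEX;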
I got an exception when I was using the Sesame adapter to load a Turtle file, which contains long texts as objects, into the Oracle semantic database. The exception information is:
org.openrdf.repository.RepositoryException: org.openrdf.sail.SailException: java.sql.SQLException: ORA-01461: can bind a LONG value only for insert into a LONG column
ORA-06512: at "SF.ORACLE_ORARDF_ADDHELPER", line 1
ORA-06512: at line 1
        at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:439)
        at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:395)
        at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:802)
        ...
|* 35 |  TABLE ACCESS BY INDEX ROWID | S_ORG_EXT    |  3064K|  2472M|       |     1   (0)| 00:00:01 |
|  36 |   INDEX FULL SCAN            | S_ORG_EXT_U1 |    14 |       |       |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
---------------------------------------------------

   35 - filter("T2"."ACCNT_FLG"<>'N' AND ("T2"."INT_ORG_FLG"<>'Y' OR "T2"."PRTNR_FLG"<>'N'))
This unselective index scan at step 36 of the explain plan returns 14 rows, but the optimizer is selecting 3064K rows from the table.
I tried creating a combined index on all 3 columns mentioned in the predicates for step 35, but it is not used.
How can I index this whole expression:
(ACCNT_FLG<>'N' AND (INT_ORG_FLG<>'Y' OR PRTNR_FLG<>'N'))
Something like: CREATE INDEX xyz ON table ((ACCNT_FLG<>'N' AND (INT_ORG_FLG<>'Y' OR PRTNR_FLG<>'N'))) COMPUTE STATISTICS;
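A boolean expression of that shape cannot be indexed directly, but a common workaround (a sketch; the index name is made up) is a function-based index on a CASE expression that is non-NULL only for qualifying rows. Entirely NULL keys are not stored in a B-tree index, so the index stays small:

CREATE INDEX s_org_ext_flg_fbi ON s_org_ext (
    CASE WHEN accnt_flg <> 'N'
          AND (int_org_flg <> 'Y' OR prtnr_flg <> 'N')
         THEN 1 END
);

-- The query must repeat the identical expression for the index to be usable:
SELECT COUNT(*)
FROM   s_org_ext t2
WHERE  CASE WHEN t2.accnt_flg <> 'N'
             AND (t2.int_org_flg <> 'Y' OR t2.prtnr_flg <> 'N')
            THEN 1 END = 1;

The catch is that the predicate must be rewritten to match the expression, which may not be possible if the SQL is generated by the application (as Siebel SQL typically is).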
How do I resolve a problem with moving LOB objects? I moved a table partition and its LOB (BLOB) from one tablespace to another:
alter table EBIF.APO_T_VER_DISP_ACC_RESP MOVE PARTITION P1M20120901
  LOB (SIGNATURE_PATTERN) STORE AS (TABLESPACE tmp);

After the move I have:

pbeb_ap1.SYS> select partition_name, tablespace_name
              from dba_lob_partitions
              where table_name = 'APO_T_VER_DISP_ACC_RESP';
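For comparison, a sketch of moving the partition and its LOB in one statement while keeping indexes usable; note that MOVE PARTITION normally marks global indexes UNUSABLE unless UPDATE INDEXES is specified:

ALTER TABLE ebif.apo_t_ver_disp_acc_resp
  MOVE PARTITION p1m20120901
  LOB (signature_pattern) STORE AS (TABLESPACE tmp)
  UPDATE INDEXES;

-- Verify that the LOB partition landed in the new tablespace
SELECT partition_name, tablespace_name
FROM   dba_lob_partitions
WHERE  table_name = 'APO_T_VER_DISP_ACC_RESP';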
I have two tables, with 113M records in DWH_BILL_DET and 103M in prd_rerate_chg_que, and I'm running the following MERGE query, which runs for 13 hrs to update the records, which is quite a long time.
SQL> explain plan for
MERGE /*+ parallel (rq, 16) */ INTO DWH_BILL_DET rq
USING (SELECT rated_que_rowid, detail_rerate_flag_code, rerate_sel_key,
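One thing worth checking, sketched here: the PARALLEL hint on the MERGE target only parallelizes the DML part if parallel DML has been enabled in the session; otherwise only the read side runs in parallel.

ALTER SESSION ENABLE PARALLEL DML;
-- then run the MERGE as before; in the plan, the MERGE operation
-- appearing below the PX coordinator means the DML itself is parallel,
-- while MERGE above the PX coordinator means only the query side is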
I have a column containing three values: N, E, Y. I want to get results with only the E and Y values. Is it possible to create an index which would not include the N values?
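Yes, via a function-based index that maps N to NULL (a sketch; the table and column names t and status are placeholders). Since fully NULL keys are not stored in a B-tree index, the N rows simply never enter it:

CREATE INDEX t_status_fbi ON t (
    CASE WHEN status IN ('E', 'Y') THEN status END
);

-- Queries must use the same expression to pick up the index:
SELECT COUNT(*)
FROM   t
WHERE  CASE WHEN status IN ('E', 'Y') THEN status END = 'E';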
On Oracle 10g, I create, delete and drop a lot of tables, and as a result the disk is highly fragmented. The execution of a very simple create statement takes more than a minute. If, instead of creating the table, I truncate an existing one and insert the data, it takes less than a second!
I think this has to do with the high fragmentation of the disk. Obviously I can defragment the disk, but I will always end up with high fragmentation, since I use a lot of creates, deletes and drops.
How can I improve the performance of create statements on highly fragmented disks?
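One hedged workaround, if the slowdown is space-allocation related: keep the scratch tables in a locally managed tablespace with uniform extents and reuse them with TRUNCATE, which the timings above already show to be fast (scratch_ts, the datafile path, and scratch_tab are illustrative names):

-- Locally managed uniform extents make allocation cheap and predictable
CREATE TABLESPACE scratch_ts
  DATAFILE 'scratch01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Reuse an existing table rather than dropping and recreating it
TRUNCATE TABLE scratch_tab REUSE STORAGE;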