I am trying to use global temporary tables, with an index on the table, to get my results faster. But I can see that even when I run a query on this table that should use the index, Oracle does a full table scan and not an index scan:
create global temporary table abc_tab on commit preserve rows
as select a,b,c from xyz;
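For what it's worth, the usual cause of the full scan is that the GTT carries no statistics, so the optimizer guesses at its size. A minimal sketch under that assumption, reusing the names above (the bind variable and the statistic values are invented):

create index abc_tab_ix on abc_tab (a);

-- option 1: have the optimizer sample the GTT at parse time
select /*+ dynamic_sampling(t 2) */ b, c
from   abc_tab t
where  a = :val;

-- option 2: set representative statistics once (values are assumptions)
begin
   dbms_stats.set_table_stats(user, 'ABC_TAB', numrows => 100, numblks => 4);
end;
/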
I am writing a procedure that will be called from a Java wrapper.
The procedure does a lot of data manipulation; in between, I create a global temp table and save data into it for each request that is given as a parameter to the procedure. After all the processing I have to write the data from this global temp table into a physical table and, at last, drop the temp table.
create or replace procedure proc_name as
begin
   update table........
   delete from ..........
   -- DDL inside PL/SQL needs dynamic SQL:
   execute immediate 'create global temporary table tsaag (
      supplier_id   number(10) not null,
      supplier_name varchar2(50) not null,
      contact_name  varchar2(50))';
   insert into............
   execute immediate 'drop table tsaag';
end;
Creating a global temp table inside a procedure is expensive...
Do we have anything like creating the table beforehand and just using that instance of it inside the procedure?
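Yes; a sketch of the usual pattern, reusing the names from the skeleton above: a GTT is a permanent definition with session-private contents, so it can be created once at install time, and with ON COMMIT DELETE ROWS there is nothing to drop afterwards.

-- once, at install time, outside the procedure:
create global temporary table tsaag (
   supplier_id   number(10) not null,
   supplier_name varchar2(50) not null,
   contact_name  varchar2(50)
) on commit delete rows;

-- the procedure then only does DML against it:
create or replace procedure proc_name as
begin
   insert into tsaag (supplier_id, supplier_name, contact_name)
   select ... ;   -- per-request data (elided, as in the original)
   -- process, copy into the permanent table; commit empties the GTT
end;
/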
I have a huge table (about 60 GB) partitioned by range. The index on this table is a global index created on 4 columns together. I have a query which is running very slowly, and the explain plan shows the use of this global index. The explain plan does not show Pstart and Pstop because the index is global.
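Not an answer to the slowness itself, but for context, a sketch of the alternative that brings Pstart/Pstop back (all names are hypothetical, since the DDL isn't shown): a LOCAL prefixed index, i.e. one led by the partitioning key.

create index big_tab_loc_ix on big_tab (range_key_col, col2, col3, col4) local;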
We had an issue with a PL/SQL package taking hours to run as a concurrent program. Database version is 10.2.0.4.0, running on Linux x86 64-bit. A tkprof'd trace file revealed the problem SQL statement to be a cursor. This one SQL statement would run for 3+ hours. I copied the SQL statement and ran it in TOAD and it completed in seconds, returning the exact same result set. To resolve the issue in the PL/SQL package I created a global temp table and ran the exact same SQL statement as an INSERT into the global temp table.
Again, instead of hours, the SQL statement completes in seconds. If I revert the change, it goes back to taking hours. I've attached the relevant sections of the tkprof showing the two SQL statements (identical other than the insert in front of one) and the resulting explain plans and performance data. I've always been under the impression that a cursor was a better option than a temp table and I've never run into a situation where the same SQL statement runs so much longer when executed as a cursor.
Attached File(s): SQL_As_Cursor.jpg, Explain_for_SQL_As_Cursor.jpg, SQL_as_Insert.jpg, Explain_for_SQL_As_Insert.jpg
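For readers without the attachments, the workaround described boils down to this sketch (all object names hypothetical; the actual statement is only in the images):

-- created once, not inside the package:
create global temporary table gtt_results (...) on commit preserve rows;

-- in the package, instead of opening the slow cursor directly:
insert into gtt_results
select ... ;   -- the exact statement that ran 3+ hours as a cursor

-- the package cursor then simply reads:
select * from gtt_results;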
I have a view which contains a union (the union is required because of the nature of the data fetched). This view is later joined with a global temp table which holds, say, the employee IDs the user selects.
So at runtime there is a join between the global temp table and the view, but the performance is really bad. I have tried various hints, like MATERIALIZE, /*+ CARDINALITY(gtmp 1) */, etc.
When I query the view alone, the performance is good. When I remove the union, the performance is good. Somehow, with the union, there is a full table scan on one of the joined tables.
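One more hint worth trying, as a sketch (the aliases are assumed): PUSH_PRED asks the optimizer to push the join predicate into each UNION branch, so the branches can use their own indexes instead of one being fully scanned.

select /*+ push_pred(v) */ v.*
from   gtmp g, emp_view v
where  v.emp_id = g.emp_id;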
I have a table partitioned by row_create_date, let's call it TABLE_X, which has about 300 million records. This table has 7 columns, including the primary key and a non-unique, locally partitioned index on a column called trace_id; 99% of queries access this table via this column.
Lately, querying TABLE_X via the trace_id has been very, very bad: queries run for over an hour in some cases. So we decided to change the index on trace_id to a global index. Now queries against TABLE_X return in seconds. So far so good.
However, when the query has to join TABLE_X to another table, it sometimes runs for over an hour; back to the same old problem. Here is an illustration:
SELECT COUNT(1) FROM TABLE_X WHERE TRACE_ID = 'XXXXX';
-- returns in seconds

SELECT COUNT(1)
FROM   TABLE_X, TABLE_Y
WHERE  TABLE_X.TRACE_ID = 'XXXXX'
AND    TABLE_X.TRACE_ID = TABLE_Y.TRACE_ID;
-- runs for over 1 hr, even when TABLE_Y.TRACE_ID is a primary key
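A hedged experiment (the index name is invented): force a nested-loop from the handful of TABLE_X rows found via the global index into TABLE_Y's primary key, in case the optimizer is instead hash-joining and scanning one side.

SELECT /*+ leading(x) use_nl(y) index(x table_x_trace_gix) */ COUNT(1)
FROM   table_x x, table_y y
WHERE  x.trace_id = 'XXXXX'
AND    x.trace_id = y.trace_id;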
ALTER TABLE SNYT.PART_ESTD SPLIT PARTITION ESTD_M13_S22 AT ('ESTD', 13, '22') INTO (PARTITION ESTD_M13_S21, PARTITION ESTD_M13_S22) update indexes;
(per http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1401247200346349807)

If a global index is used:
ALTER TABLE SNYT.PART_ESTD SPLIT PARTITION ESTD_M13_S22 AT ('ESTD', 13, '22') INTO (PARTITION ESTD_M13_S21, PARTITION ESTD_M13_S22) UPDATE GLOBAL INDEXES;
I have a partitioned table like the one below. I want to create a B-tree index on the SALES_RGN column, which is neither part of the primary key nor the partitioning key. Should I create this index as local or global?
CREATE TABLE sales_dtl (
   txn_id         number(9),
   salesman_id    number(5),
   salesman_name  varchar2(30),
   sales_rgn      varchar2(10),   -----------------------------> This column needs to be indexed
   sales_amount   number(10),
   sales_date     date,
   constraint pk_sales_dtl primary key (txn_id)
[code]....
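Both are legal, since a local index does not have to contain the partitioning key (it is then just non-prefixed). As a sketch, with an invented index name, the two options are:

-- local: one index segment per partition; cheaper partition maintenance
create index ix_sales_dtl_rgn on sales_dtl (sales_rgn) local;

-- global: a single segment; often better when queries don't prune on the partition key
create index ix_sales_dtl_rgn on sales_dtl (sales_rgn);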
I have two databases: DB1, an EBS database, and DB2, a Portal database. DB2 is always up.
DB1 uses some global temporary tables to write and store session-level information.
I also have a secondary (standby) database for DB1. Whenever DB1 is down and its secondary database is up, my requirement is to enable write operations to these global temporary tables. Since the secondary database is opened read-only, I can't write to these GTTs.
DB2 is always up, so I want to create copies of these GTTs in the DB2 portal database. Is there any harm in doing this?
Is there any harm in storing session-level information from the DB1 database in the DB2 database through a DB link?
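A heavily hedged sketch of what writing to DB2's copies over a DB link would look like (link, user, and table names are all invented); the comment flags the main fragility of this design.

create database link db2_portal
   connect to portal_user identified by pwd using 'DB2';

insert into session_info_gtt@db2_portal (sid, info)
values (sys_context('userenv', 'sessionid'), 'some state');
-- caveat: the rows belong to the remote session that the link holds open,
-- so they are visible only for as long as that link session lasts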
I'm currently moving an IOT from one database to another using expdp/impdp. The IOT is non-partitioned and about 100 GB in size, containing ~1.1 billion rows.
The dumpfile contains nothing else but the IOT. I'm importing with no special parameters, no pre-created IOT, just an ordinary dumpfile import (impdp username/password dumpfile=impdp:iot.dmp nologfile=y).
During the import I got 'unable to extend TEMP' errors from impdp:
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-39171: Job is experiencing a resumable wait.
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
I had to add 2 additional files to my TEMP tablespace (96 GB of temp in total) before the import could finish.
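For reference, adding a temp file looks like this (the path and sizes are assumptions):

alter tablespace temp add tempfile '/u01/oradata/db/temp02.dbf'
   size 16g autoextend on next 1g maxsize 32767m;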
Is this temp usage to be expected when importing IOTs?
Is it possible to create a local temp, variable-based table within SQL procedures? I am planning on writing a few SELECT queries and want to store the result of each SELECT in a temp, variable-based table.
Within SQL Server, I can create temp tables within a procedure and then, at the end, after I have used the data, drop the table.
Is this possible, since I don't want to create any physical tables?
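Oracle has no direct equivalent of SQL Server table variables, but a global temporary table comes close: the definition is permanent and created once, while the rows are private to each session and vanish on commit or at session end. A hedged sketch (all names made up):

-- one-time DDL, created outside any procedure:
create global temporary table step1_results (
   id  number,
   val varchar2(100)
) on commit delete rows;

-- inside the procedure: populate and read it like any table;
-- COMMIT empties it automatically, so there is nothing to drop
insert into step1_results (id, val)
select employee_id, last_name from employees;   -- stand-in for one of the SELECTs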
There's an Application Express application which is based on a schema named TRAFOGLED. In order to let users test new features, there's a test application (Apex has export/import capabilities; no problem about that) which is based on another schema whose name is TRAFOTEST.
I'd like to export TRAFOGLED and import it into TRAFOTEST. I'm using the 10gR2 EXPDP utility with a parameter file. Everything seems to be OK, except for the fact that I'm unable to export global temporary tables (GTTs). How can I tell? I didn't see them after import!
These are my GTTs:

SQL> show user
USER is "TRAFOGLED"
SQL> select table_name from user_tables where temporary = 'Y';
[code]...
C:\TEMP> No tables were exported. Certainly, I don't expect any data to be exported, but I'd be happy with the CREATE TABLE statements so that I don't have to create these tables separately.
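One workaround, assuming only the DDL is needed: pull the CREATE TABLE statements with DBMS_METADATA while connected as TRAFOGLED, then run them in TRAFOTEST:

select dbms_metadata.get_ddl('TABLE', table_name)
from   user_tables
where  temporary = 'Y';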
There is a read-only user on our reporting server. Developers want to use global temporary tables with this user, but I don't want the user to have permissions beyond read-only. I can grant the user the CREATE TABLE privilege without granting quota on any permanent tablespace; the user would then not be able to create permanent tables but should still be able to create global temporary tables.
The question is: would a user with such permissions still be able to utilize temp tables as part of a scheduled job?
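The setup described would look roughly like this (the user name is invented); since GTT rows live in the user's temporary tablespace, no quota on a permanent tablespace is needed for the data:

grant create table to report_user;   -- note: no QUOTA granted anywhere

-- run as report_user; this should still work, because the GTT's
-- segment is allocated in the TEMP tablespace at runtime
create global temporary table scratch (
   id number
) on commit preserve rows;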
One of our customers has a problem with the following SQL statement:
SELECT c.table_name, c.column_name FROM user_tab_columns c, user_tables t WHERE c.table_name = t.table_name AND c.data_type IN ('CLOB', 'BLOB');
During execution it takes all of the TEMP tablespace (8 GB).
I gathered dictionary stats (dbms_stats.gather_dictionary_stats(estimate_percent=>null)) but it didn't resolve the problem. The SQL statement above works fine with the RULE hint, but I want to know the reason for the problem with the temporary tablespace.
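One hedged suggestion: dictionary stats don't cover the X$ fixed tables underneath views like USER_TAB_COLUMNS, so gathering fixed-object stats is worth a try as well:

begin
   dbms_stats.gather_fixed_objects_stats;
end;
/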
I am trying to run an Oracle report via an Oracle Applications concurrent job. The concurrent job completes normally but I don't get anything on the printout page. In the log file of this request I see the message 'MSG-01003: Errors =>ORA-01652: unable to extend temp segment by 128 in tablespace TEMP'. I have almost doubled the TEMP tablespace in size but still I am not able to get rid of this error message.
SELECT STATEMENT
  HASH JOIN
    MERGE JOIN CARTESIAN
      TABLE ACCESS BY INDEX ROWID WAT_SOURCE_DATA
        BITMAP CONVERSION TO ROWIDS
          BITMAP INDEX SINGLE VALUE INDX_WAT_SRC_DATA_BIT
[code]....
Error Message: ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
I have this huge report that uses inline views, and I keep getting the error message above when running the script through TOAD. I was thinking about using USE_HASH hints. The SQL optimizer we use in TOAD is very buggy. I'm using Oracle database version 10.2.0.3.
I have a TEMP tablespace with AUTOEXTEND ON NEXT 10M and MAXSIZE 5120M, and now the tablespace is 99.98% full. I am getting 'ORA-1652: unable to extend temp segment by 128 in tablespace temp' errors. Can I fix this by increasing the MAXSIZE value to 10240M?
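Yes; raising MAXSIZE on the existing temp file is done like this (the file name is an assumption):

alter database tempfile '/u01/oradata/db/temp01.dbf'
   autoextend on next 10m maxsize 10240m;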
I would like to ask about indexes in partitioned tables. I have indexes on a partitioned table which is partitioned by range, i.e. based on a creation date-time. All SELECT queries sent to the table use the creation date-time, and I have an index on it. Here is an example:
SELECT col1, col2, col3
FROM  (SELECT col1, col2, col3
       FROM   table1
       WHERE  date_time BETWEEN TO_DATE('20120117 10:00:00','YYYYMMDD HH24:MI:SS')
                            AND TO_DATE('20120117 13:00:00','YYYYMMDD HH24:MI:SS')
       AND    frmt_name = 'XXXX'
       AND    sender = 'YYYYY'
       AND    nature = 'ZZZZ'
       AND    type LIKE '548'
       ORDER BY date_time)
WHERE  ROWNUM <= 5000   -- ROWNUM filter wrapped around the ORDER BY so the top 5000 are taken after sorting
[code]....
Do I have to add DATE_TIME to all of the indexes (IX_NAME_FORMAT_TYPE, IX_CCY) or not?
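For illustration, "adding DATE_TIME" to one of those indexes would look like this (the column list is inferred from the query above; LOCAL is assumed since the table is range-partitioned on the date):

create index ix_name_format_type on table1 (frmt_name, type, date_time) local;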
SELECT department_id
FROM  (SELECT department_id FROM employees
       UNION
       SELECT department_id FROM employees_old)
WHERE  department_id = 100;
[code]....
The index has been created on both department_id columns of the two tables. The only difference between the two that I observed was one recursive call for the first SQL, and also one additional VIEW step in the plan. There is a small difference in bytes sent over the network.
I have a view on base tables holding historical data for the previous 60 months (one table per month), combined with UNION ALL operators. Will creating indexes on those base tables improve performance, or will creating a primary key with DISABLE NOVALIDATE improve data retrieval?
The view returns around 8 million rows and is used as a fact table with 4 dimension tables. A DTS package on the MS SQL side refreshes an OLAP cube by retrieving data from these tables in Oracle.
I am on 11.2.0.3 Enterprise Edition. We are using the new feature "Composite Domain Index" (CDI) for a domain index on a very large table (>250,000,000 rows). It really works with mixed queries. We added two number columns using FILTER BY.
We have lots of DML on this table, therefore we execute synchronize and optimize once a week. The sync behaves pretty normally, but "optimize_index" takes a very, very long time to complete. I have switched on 'logging' for the optimize process. The $I table takes some time but finishes normally. However, the optimization of the $S table (the table created for the CDI feature) has been running for over 12 hours now and is far from finished. From the logfile, I can see that it optimizes 1000 rows every 20 minutes. Here is the output of the logfile:
Oracle Text, 11.2.0.3.0
14:33:05 06/26/12 begin logging
14:33:05 06/26/12 event
14:33:05 06/26/12 process $N for optimize: SEQDEV.GEN_GES_DESCRIPTION_CTX_I
14:33:16 06/26/12
14:33:16 06/26/12
[code]....
I haven't found a recommendation from Oracle not to use "optimize_index" for domain indexes with CDI. But in my case, it would be much faster just to drop and re-create the domain index in question.
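The drop/re-create alternative would look roughly like this (the index name comes from the log above; the table, text column, and FILTER BY columns are guesses):

drop index gen_ges_description_ctx_i;

create index gen_ges_description_ctx_i on gen_ges_description (description)
   indextype is ctxsys.context
   filter by num_col1, num_col2;   -- the two number columns added for the CDI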
I am facing the error "ORA-01502: index or partition of such index is in unusable state" while loading text data using SQL*Loader with the direct path option (direct=y, rows=10000). The table has a composite non-unique index. If I query dba_indexes for the affected index, it shows the index status as VALID. There was no maintenance done on the affected table or index. I have tried loading the same data using the conventional path and found no issues with it.
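Worth checking before anything else, as a sketch: with a partitioned index, DBA_INDEXES can report VALID while the unusable piece hides one level down in the partition views (the owner, index, and partition names below are placeholders):

select index_name, partition_name, status
from   dba_ind_partitions
where  status = 'UNUSABLE';

alter index owner.index_name rebuild;                       -- non-partitioned index
alter index owner.index_name rebuild partition part_name;   -- a single partition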
where @var is user-supplied input at runtime... We had an index on a.c2. The CBO would use this index to generate an optimised query plan. We found some records from table "b" were being dropped due to the inner join, so we made a change to the join. It is now like:
a.c1(+)=b.c1 and nvl(a.c2,@var)=@var
This query no longer uses the index; instead it does a full table scan, causing the query to slow down. I have tried creating an index on nvl(a.c2,'31-dec-9999'),
but the CBO won't use it. Is there any way to create an index on this column so that the full table scan can be avoided?
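A function-based index is only usable when the query text matches the stored expression exactly, and nvl(a.c2, @var) with a runtime value can never match one. Two hedged alternatives: rewrite the predicate into its logical equivalent, and (because a single-column B-tree does not index NULL keys) add a constant second column to the index so the IS NULL branch is indexable too.

-- logically equivalent to nvl(a.c2, @var) = @var:
... and (a.c2 = @var or a.c2 is null)

-- index with a constant second column so NULL values of c2 are indexed:
create index a_c2_ix on a (c2, 0);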