High Speed Full Table Select?
Oct 7, 2013
We have a large table of equally sized data blobs in our Oracle system, and we'd like to select the whole table into memory once. The corresponding tablespace is stored on fast SSD disks and is managed by ASM. However, the select speed Oracle achieves (reading the data into memory) is not satisfactory. When we store the data on the SSD disks using custom methods (e.g. in SQLite DB files) and load them into memory with 8 threads, the speed is more than 15 times that of Oracle.
Is there any way to optimize Oracle and ASM for faster full-table selects in our case? We tried the FULL and PARALLEL hints and DB_FILE_MULTIBLOCK_READ_COUNT too, but with no success.
FYI: each data blob is about 4.2 KB and DB_BLOCK_SIZE is 8 KB. The ASM tablespace is configured for AUTOALLOCATE. The table is partitioned by HASH. Our Oracle system is not RAC.
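For reference, a minimal sketch of the kind of hinted parallel scan we tried (table name, degree and read count are placeholders):

SELECT /*+ FULL(t) PARALLEL(t, 8) */ *
FROM blob_table t;

-- session-level multiblock read setting we also experimented with
ALTER SESSION SET db_file_multiblock_read_count = 128;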
View 1 Replies
Aug 15, 2011
I have two design alternatives and need to understand how they compare in speed for a medium-sized table (100K-200K records):
create table xyz
(
f1 number not null,
f2 varchar2(20) not null,
f3 number not null,
f4 varchar2(50),
[code]....
The idea is to optimize the design by using a PK instead of the 3 keys, and there is a debate over whether searching a unique-index field (2nd scenario) is as fast as searching a PK field (1st scenario).
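For illustration, the two alternatives might look roughly like this (columns beyond the fragment above are assumed):

-- 1st scenario: single surrogate primary key
CREATE TABLE xyz_pk (
  id NUMBER PRIMARY KEY,
  f1 NUMBER NOT NULL,
  f2 VARCHAR2(20) NOT NULL,
  f3 NUMBER NOT NULL,
  f4 VARCHAR2(50)
);

-- 2nd scenario: composite natural key enforced by a unique constraint/index
CREATE TABLE xyz_uk (
  f1 NUMBER NOT NULL,
  f2 VARCHAR2(20) NOT NULL,
  f3 NUMBER NOT NULL,
  f4 VARCHAR2(50),
  CONSTRAINT xyz_uk_unq UNIQUE (f1, f2, f3)
);

Both a PK and a unique constraint are enforced through a B-tree index, so a single-row lookup through either should perform essentially the same; the practical difference is mostly key width and join convenience.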
View 5 Replies
View Related
May 23, 2013
I am only able to extract 4000 characters from the CLOB column "DESCRIPTION". How can I get more characters, or the maximum for that column, with the same query concept?
select distinct
o.id "Organization ID",
en.entity_id "Contact ID",
en.entity_cd "Note Entity Code",
ed.entity_name "Note Entity",
en.entity_id "Note Entity_id",
[code]....
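For what it's worth, a sketch of the usual workaround: SUBSTR on a CLOB in plain SQL is capped at 4000 characters, but PL/SQL can walk the whole value in 32K chunks with DBMS_LOB.SUBSTR (the table name and key here are placeholders):

DECLARE
  l_clob   CLOB;
  l_offset PLS_INTEGER := 1;
  l_chunk  VARCHAR2(32767);
BEGIN
  SELECT description INTO l_clob
  FROM some_table          -- placeholder for the real source of DESCRIPTION
  WHERE id = :some_id;

  WHILE l_offset <= DBMS_LOB.GETLENGTH(l_clob) LOOP
    l_chunk := DBMS_LOB.SUBSTR(l_clob, 32767, l_offset);  -- amount, offset
    -- process or append l_chunk here
    l_offset := l_offset + 32767;
  END LOOP;
END;
/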
View 28 Replies
View Related
Sep 10, 2012
How can I speed up INSERT INTO ... SELECT? When I use
create table a as select * from table1;
(for five rows only) it finishes in
Elapsed: 00:00:02.19
which is faster than
insert into a select * from table1;
Elapsed: 00:00:15.19
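One possible explanation, and a sketch worth trying: CTAS does a direct-path load, while a conventional INSERT ... SELECT goes through the buffer cache and generates full undo. The APPEND hint requests a direct-path insert:

INSERT /*+ APPEND */ INTO a
SELECT * FROM table1;
COMMIT;  -- a direct-path insert must be committed before the session can query the table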
View 15 Replies
View Related
Feb 3, 2012
When I execute the following statement:
SELECT DISTINCT EXPOSURE_REF FROM KBNAS.VW_EXPOSUREDETS_FOR_CCYREVAL
WHERE EXPOSURE_CURRENCY='THB' AND BASE_TXN_CCY='USD' AND BRANCH_CODE='7000'
AND (REVAL_STATUS='O') AND CONV_RATE<>'62' AND (EXPOSURE_AMOUNT<>0)
UNION
SELECT DISTINCT ED.EXPOSURE_REF FROM KBNAS.EXPOSURE_DETAILS ED,
[code].....
I have attached the DDL for the table EXPOSURE_DETAIL (partitioned), LEDGERCARD and LEDGERCARDDETAILS, the DDL for the indexes on those tables, and the DDL for the views.
Issue: we have created the indexes, but when we check the explain plan, a full table scan is still going on. I have attached the explain plan.
View 11 Replies
View Related
Mar 4, 2010
Is there any way to speed up the performance of the GO_RECORD built-in, or is there an alternative way to do it?
I have a table with nearly 30,000 rows and I would like to implement a text field that will allow the user to jump to a specified record. The only problem is if they try to jump too far away it will take a long time to load (beginning to end of 30,000 takes over a minute).
This problem doesn't arise if all the records, or up to the one they are jumping to, have been fetched already, but even if I fetch all records at the beginning it will still take a long time to initially load them.
View 10 Replies
View Related
Jun 2, 2011
I am gathering stats using the block below, for some 3 million records, and there are 6 indexes on the table. What is the relevance of the value 4 here (i.e., method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4')? If I increase 4 to 250, will the speed of gathering stats change? My intention is to speed up the gathering of stats.
begin
dbms_stats.gather_table_stats(
ownname => SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA'),
tabname => 'LEGAL_VIEW_TARGET',
method_opt => 'FOR ALL INDEXED COLUMNS SIZE 4',
cascade => TRUE
);
END;
View 12 Replies
View Related
Aug 11, 2012
I have a query like the one below.
select
(SELECT MIC.MICR_BANK_NAME FROM M_ECS_MICR_MST MIC WHERE MIC.PK_MICR_ID = BANK.ECS_MICR_CODE) AS BANKNAME,
(SELECT PAY.PAYMENTCODE FROM M_ECS_PAYMENT_MODE PAY WHERE TO_CHAR(PAY.ID) = BANK.ECS_FACTORING_HOUSE) AS
[code].....
I have created a composite index on the columns used from the tbl_bank_statement table: tbl_bank_statement (policy_no, ecs_micr_code, ecs_factoring_house, ecs_mandate_status).
But this table still shows TABLE ACCESS FULL.
View 3 Replies
View Related
Sep 24, 2010
I am trying to update a full table row by row using a PL/SQL block, but I am getting an "invalid ROWID" error. FYI, the following is the screenshot from SQL*Plus.
SQL> declare
2 cursor csr is
3 select upc_code from pos.tbk_pos_fact_newslink_bk FOR UPDATE OF upc_code nowait;
4
5 begin
6 For row in csr
[code]....
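In case it's useful, a minimal sketch of the usual pattern (the SET expression is a placeholder). A common cause of an invalid-ROWID error here is a COMMIT inside the loop: committing releases the FOR UPDATE locks and invalidates WHERE CURRENT OF for the still-open cursor:

DECLARE
  CURSOR csr IS
    SELECT upc_code
    FROM pos.tbk_pos_fact_newslink_bk
    FOR UPDATE OF upc_code NOWAIT;
BEGIN
  FOR r IN csr LOOP
    UPDATE pos.tbk_pos_fact_newslink_bk
       SET upc_code = r.upc_code   -- placeholder transformation
     WHERE CURRENT OF csr;
    -- no COMMIT here
  END LOOP;
  COMMIT;  -- commit once, after the cursor loop completes
END;
/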
View 2 Replies
View Related
Aug 18, 2012
I'm using Oracle 11. I have a table with 16 million rows and an index (let's call it the employee table with an index on department). I need to select all the employees whose departments are located in the UK. I achieve this by selecting all the department numbers from departments where location = 'UK' in a subselect, then plugging this into the main query as follows:
SELECT *
FROM employees
WHERE department IN (SELECT department from departments where location = 'UK');
It takes ages, 25 seconds or more, and the explain plan shows it is doing a full table scan on employees. I need it to use the index. The subquery on its own is instant and returns only 5 rows. If I explicitly put the 5 numbers in the IN clause, the query uses the index and executes in 0.04 seconds. See below:
SELECT *
FROM employees
WHERE department IN (1,2,3,4,5);
I need it to use the subquery once and then use the index on the main table.
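A sketch of one hinted rewrite that may steer the optimizer that way (the index name emp_dept_idx is hypothetical): drive from the small departments result and probe the employees index with nested loops:

SELECT /*+ LEADING(d) USE_NL(e) INDEX(e emp_dept_idx) */ e.*
FROM departments d
JOIN employees e ON e.department = d.department
WHERE d.location = 'UK';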
View 2 Replies
View Related
Oct 24, 2013
How can I increase data retrieval/insertion speed? My database has more than 0.5 million records, and forms sometimes respond very slowly.
View 8 Replies
View Related
Nov 16, 2012
My work relates to Oracle Data Pump.
I would like to use a command such as:
expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp COMPRESSION=ALL LOGFILE=export.log SCHEMAS=hr
But for some tables in schema HR I don't want to export the data; I only need the table structure.
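A sketch of one option, using Data Pump's EXCLUDE filter to skip just the row data of named tables while still exporting their definitions (EMPLOYEES and DEPARTMENTS are placeholder table names; the filter usually goes in a parfile to avoid shell-escaping the quotes):

expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=expdat.dmp LOGFILE=export.log SCHEMAS=hr EXCLUDE=TABLE_DATA:"IN ('EMPLOYEES','DEPARTMENTS')"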
View 2 Replies
View Related
Jan 14, 2011
We have an old full export .dmp file from a 10g db and there are 451 records in one specific table that we need to export. Is it possible to IMP just the one specific table from a full dump? Or, another option, can we extract the records from the one table in the .dmp file into an xml file?
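For what it's worth, a sketch of pulling one table out of a full dump with classic imp (schema, table and file names are placeholders):

imp system/password FILE=full_export.dmp FROMUSER=scott TOUSER=scott TABLES=(emp) IGNORE=y

Extracting straight to XML isn't something imp does; the usual route is to import the table into a scratch schema and then generate the XML from it (e.g. with DBMS_XMLGEN).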
View 14 Replies
View Related
Jan 2, 2013
I am doing an export using the following parfile information:
userid=/
directory=datapump_nightly_export
dumpfile=test_expdp.dmp
logfile=test_expdp.log
full=y
content=all
However, when I run this I do not see sys.aud$ in the log file. I know I can do a separate export to specifically get the sys.aud$ table, but is there any way to include it in my full export?
View 8 Replies
View Related
Feb 4, 2011
I've got a query running a select count(*) over a table. The default plan takes on the order of 15 minutes to return; a plan hinted to use a different index takes 3 minutes to return.
Unfortunately I can't get at the index stats and a few other areas which I suspect may be key here. When running autotrace against the two queries I see fairly different values, as one would expect.
Query
select count (*) from fulfilmentitem bfi where created >= sysdate-30 AND bfi.status = 'FA' AND bfi.fulfilmentmethod = 'D'
Slow run
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 15 | 33119 (1)|
| 1 | SORT AGGREGATE | | 1 | 15 | |
|* 2 | TABLE ACCESS BY INDEX ROWID| FULFILMENTITEM | 12525 | 183K| 33119 (1)|
|* 3 | INDEX RANGE SCAN | IDX_FULFIL_METHODSTATUS | 250K| | 1786 (1)|
----------------------------------------------------------------------------------------------
[code]....
IDX_FULFIL_METHODSTATUS is across FULFILMENTMETHOD & STATUS in that order.
IDX_BFI_CREATED is on CREATED and is approx 70% of the size of the other index
The row counts estimated in the explain plan are off; the count(*) comes in at 32.8k rows. As you will have seen, the fast run shows a pretty significant increase in consistent gets compared to the slow run, and a decent though not dramatic drop in physical reads.
My uncertainty is around if these changes in consistent get/phys read values would typically be enough to suggest the real time improvements I'm observing or if other (albeit perhaps temporary) factors are involved. It is a prod OLTP environment so the data will be rapidly changing and that may be a factor.
I know it can never be an exact science without intimately knowing the hardware, current loads, etc., but I also know there's enough experience on these boards to have a loose handle on whether the time shifts between queries are likely to reflect the stat changes, or whether those differences alone typically wouldn't account for it.
I'm thinking about instructing the query to ignore its original plan, but I'm hesitant to do so without being more confident that the improvement isn't just a timing thing, or something other than the change of index. From the autotrace stat changes observed alone, I couldn't put my hand on heart and say "yup - that change is good, ignore the default index all the time for this job".
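For reference, the hinted variant being compared was along these lines (a sketch; the hint just names the CREATED index described above):

select /*+ index(bfi IDX_BFI_CREATED) */ count(*)
from fulfilmentitem bfi
where created >= sysdate-30
  and bfi.status = 'FA'
  and bfi.fulfilmentmethod = 'D';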
View 11 Replies
View Related
Apr 23, 2012
I am rebuilding some UNUSABLE local index partitions on an Oracle 8.1.7.4.0 (64-bit) database. The platform is an HP-UX machine.
The DDL of the partition table/indexes:
=========================
CREATE TABLE TESTME
( INST_NO CHAR(3) NOT NULL,
ACCT_NO CHAR(16) NOT NULL,
REC_NO CHAR(9) NOT NULL,
TRAN_TYPE CHAR(2) DEFAULT ' ',
STAT CHAR(2) DEFAULT ' ',
[code]...
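For reference, a sketch of the rebuild pattern (index and partition names are placeholders); the dictionary query finds what actually needs rebuilding:

SELECT index_name, partition_name
FROM user_ind_partitions
WHERE status = 'UNUSABLE';

ALTER INDEX testme_local_idx REBUILD PARTITION part_001;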
View 2 Replies
View Related
Jul 11, 2013
The query below is degrading the performance of the database. As we know, without a WHERE clause the query does a full table scan. It is written to generate the next sequence number.
SQL> explain plan for
2 SELECT NVL(MAX(P.NUM_SERIAL_NO), 0) + 1 FROM CNFGTR_IRDA_ENVELOPE_DTLS P
3 /
Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 3345343365
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
[code].....
No index has been created on the column.
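A sketch of the usual fix, assuming the requirement stays MAX()+1: an index on the column lets the optimizer answer MAX() with an INDEX FULL SCAN (MIN/MAX) instead of a full table scan (a sequence would avoid both the scan and the concurrency risk of MAX()+1 entirely):

CREATE INDEX cnfgtr_irda_env_serial_idx
  ON cnfgtr_irda_envelope_dtls (num_serial_no);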
View 6 Replies
View Related
May 12, 2011
I have created a non-unique index lk_fein on lookup_fein (code, map_id, trash). When I check the explain plan it does a full table scan on lookup_fein. If I force it to use the index with a hint, it does, and the cost also decreases.
SQL> SELECT WORK_FEIN,
2 NON_FEIN ,
3 FI_FEIN ,
4 MFEIN ,
5 TOTAL_FEIN ,
[code]...
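For reference, the kind of hint used to force it (a sketch; the WHERE clause is a placeholder for the real predicates on the leading index column):

SELECT /*+ INDEX(lf lk_fein) */ work_fein,
       non_fein,
       fi_fein,
       mfein,
       total_fein
FROM lookup_fein lf
WHERE code = :code;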
View 1 Replies
View Related
Oct 5, 2013
Let's consider a table whose rows all fit into a single block:
SQL> create table test as select rownum id, '$'||rownum name from dual connect by level <= 530;
Table created.
SQL> create index i_test on test(id);
Index created.
SQL>
SQL> begin
[code].....
Why does the full-scan approach take longer even though the table occupies only one data block? P.S. This is 11gR2.
View 8 Replies
View Related
Apr 9, 2013
Let us say the mytest schema has 6 tables:
tname tabtype
myt table
myaxpertlog table
abb table
ccc table
ddd table
xxx table
Now I want a full dump of this schema, but from the myaxpertlog table I need metadata only, no records.
c:\> exp mytest/log file=20130409mytest0904pm.dmp tables=(myaxpertlog) rows=n
When I try this I get only the one table, and it still comes out with records.
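A sketch of one way to get both pieces with classic exp, in two passes (file names are placeholders): a full schema-level export, plus a structure-only export of the one table:

c:\> exp mytest/log file=mytest_full.dmp owner=mytest
c:\> exp mytest/log file=myaxpertlog_ddl.dmp tables=(myaxpertlog) rows=n

A single exp run can't mix rows and no-rows per table; Data Pump's EXCLUDE=TABLE_DATA filter can do it in one pass, if expdp is available.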
View 6 Replies
View Related
Oct 4, 2013
We have taken an expdp export backup from the prod database (primary database - Data Guard).
1) The impdp import is very slow, 10 GB/hr, on the staging database (Data Guard MAXIMUM AVAILABILITY), even though the server configuration, database version and configuration, and operating system are all the same as production. There are no blocking, locking or waiting sessions.
2) The impdp import is fast, 90 GB/hr, on a standalone test database, which runs in NOARCHIVELOG mode on Oracle Standard Edition; beyond that there is no difference.
CPU, memory, network and disk I/O all look normal while importing on both databases. Why is there that much difference on import?
View 1 Replies
View Related
Aug 31, 2010
I have the below data in table test_1.
select * from test_1
ID  Name  Total
--  ----  -----
1   A     100
2   B     100
3   C     100
4   D     100
The test_2 table contains a comma-separated concatenation of IDs. In this table the ID column is of datatype VARCHAR2.
select * from test_2
ID
----
1,2,3
My requirement is to select the data from the test_1 table where the ID values in that table exist in the test_2 table. I tried the select statement below, but could not get any data.
SELECT * FROM test_1 WHERE to_char(id) IN (SELECT id FROM test_2)
create table test_1 (id number, name varchar2(100), total number)
create table test_2(id varchar2(100))
insert into test_1 values (1,'A',100)
insert into test_1 values (2,'B',100)
insert into test_1 values (3,'C',100)
insert into test_1 values (4,'D',100)
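A sketch of one way to match against the delimited list without splitting it apart (wrapping both sides in commas prevents partial matches, e.g. 1 matching 12):

SELECT t1.*
FROM test_1 t1
WHERE EXISTS (SELECT 1
              FROM test_2 t2
              WHERE ',' || t2.id || ',' LIKE '%,' || TO_CHAR(t1.id) || ',%');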
View 4 Replies
View Related
Jan 19, 2011
I have a table with zip codes and their plus four values. For ex: zip code of 10000, which has corresponding plus four values of 001, 002, 003, and 008, 009, 010. The issue is just that--a zip code can have sequential plus four values, and then it will skip several potential plus four values, and then start again. I would like to assign a low plus 4 value and high plus four value to a zip code, keeping in mind that the plus four values are not always sequential. So, it would be similar to this:
zip    plus4 low  plus4 high
10000  001        003
10000  008        010
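A sketch of the standard gaps-and-islands approach, assuming a source table zip_plus4(zip, plus4): within a zip, consecutive plus4 values share the same value of plus4 minus their row number, so grouping on that difference yields one row per run:

SELECT zip,
       MIN(plus4) AS plus4_low,
       MAX(plus4) AS plus4_high
FROM (SELECT zip,
             plus4,
             TO_NUMBER(plus4)
               - ROW_NUMBER() OVER (PARTITION BY zip ORDER BY TO_NUMBER(plus4)) AS grp
      FROM zip_plus4)
GROUP BY zip, grp
ORDER BY zip, plus4_low;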
View 2 Replies
View Related
Jun 24, 2010
My requirement is that while sending a data file from Oracle to a mainframe, the first 3 bytes of the header row should contain low-values and the trailer should contain high-values.
How can I produce mainframe high- and low-values from Oracle?
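For what it's worth, a sketch under the assumption that low-values/high-values are meant in the COBOL sense (bytes 0x00 and 0xFF) and that a directory object DATA_DIR exists; writing in binary mode with raw buffers keeps character-set conversion from altering the bytes:

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('DATA_DIR', 'feed.dat', 'wb');  -- binary mode
  UTL_FILE.PUT_RAW(f, HEXTORAW('000000'));            -- 3 bytes of low-values
  -- ... write the data rows here ...
  UTL_FILE.PUT_RAW(f, HEXTORAW('FFFFFF'));            -- 3 bytes of high-values
  UTL_FILE.FCLOSE(f);
END;
/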
View 7 Replies
View Related
Oct 27, 2011
I have a priority column (possible values are 1 or 0) in a table, where
I need to get 70% high (1) and 30% low (0), and the maximum I can fetch in a select is 50 records.
Eg 1: if I have 60 in total, of which 20 are high and 40 low, then 70% of 20 = 14, and the remainder should be taken from low, i.e. 36 from low, so the total will be 50 transactions.
Eg 2: if I have 60 in total, of which 40 are high and 20 low, then 70% of 40 = 28, and the remainder should be taken from low, i.e. 22 from low.
Eg 3: if I don't have any high, then the total should be picked from low, and vice versa.
I have the query below, but it has a problem when there are no low-priority rows.
SELECT ID,PRI FROM temp tbl WHERE pri = '1' AND ROWNUM < ((70/100)*50)+1
UNION ALL SELECT * FROM temp WHERE pri = '0'
AND ROWNUM < 50-(SELECT COUNT(*) FROM temp WHERE pri = '1' AND ROWNUM < ((70/100)*50)+1)
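A sketch of one analytic alternative: rank rows within each priority, order the high-priority quota first (70% of the high count, per the examples), then the low rows, then any spare high rows to cover a shortfall, and cap the result at 50 with ROWNUM:

SELECT id, pri
FROM (SELECT id, pri
      FROM (SELECT t.id,
                   t.pri,
                   ROW_NUMBER() OVER (PARTITION BY t.pri ORDER BY t.id) AS rn,
                   COUNT(CASE WHEN t.pri = '1' THEN 1 END) OVER () AS hi_cnt
            FROM temp t)
      ORDER BY CASE
                 WHEN pri = '1' AND rn <= CEIL(hi_cnt * 0.7) THEN 0  -- high quota first
                 WHEN pri = '0'                              THEN 1  -- fill from low
                 ELSE 2                                              -- spare high rows last
               END,
               rn)
WHERE ROWNUM <= 50;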
View 3 Replies
View Related
Oct 22, 2010
I have set up a two-node (prod-db1, prod-db2) clustered 11gR2 database on Windows 2008 R2 servers. Everything is working fine in this setup.
My question: is there a way to make Enterprise Manager Database Control run and be available on both nodes independently? What I see is that even on node 2 (prod-db2) the EM DB Control URL is https://prod-db1:1158/em, which means the agent is running on node 1 (prod-db1) only.
How can I make EM DB Control also run separately on prod-db2? My idea is to make EM DB Control highly available (in case the prod-db1 machine is down).
View 2 Replies
View Related
Nov 15, 2010
I am in the very early planning stages of a project the goal of which is to identify separate organizations which may in fact be the same organization.
Our first implementation of this task was a process designed to look for a few thousand organizations in a pool of a few hundred thousand organizations. To accomplish this we made heavy use of Oracle's Text index as well as a custom index type we created which utilized n-grams. This approach worked quite well for on-demand editing of the organizations, in which a user might log in and say in addition to what we already know about organization A we also know x, y and z does that change anything and worked acceptably well for the bulk processing we did on our "known" information once a week running for a couple of hours on the weekend.
We have now been tasked with reworking this initial implementation only now we want to look at a set consisting of several million organizations for potential matches which exist within the set. As in our initial implementation we will be breaking what we know about organizations into groupings so we aren't comparing a phone number to an email address and normalizing the data as much as we can so we ignore things like case and punctuation. Even after all this we are still talking about looking for similar values in a group which might be in the tens of millions (some types of data will have more than one value per organization).
My initial thought on the problem is to use n-grams, though not in the way we did in the past. The basic idea is to break each search value up into all the substrings it is made of and look for other values which have a high number of those substrings in common.
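To make the n-gram idea concrete, a minimal sketch (the sample value and the gram size of 3 are illustrative only):

-- break a normalized value into its 3-character substrings (trigrams)
SELECT SUBSTR(val, LEVEL, 3) AS trigram
FROM (SELECT 'acme corp' AS val FROM dual)
CONNECT BY LEVEL <= LENGTH(val) - 2;

Candidate matches are then pairs of values sharing a high proportion of trigrams.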
I'm not sure SQL & PL/SQL is the best place for this question, but I could not think of a better one.
View 10 Replies
View Related
Jul 23, 2010
I am facing a performance issue in which the query cost is very low compared to the CPU cost, and as a result the CPU graph is always high. I am also attaching the gv$sql and gv$sql_plan data for this query.
This is the query:
SELECT PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID,
       SUM(PREVIOUSHOURS) AS PREVIOUSHOURS,
       MIN(STARTDATE) AS STARTDATE,
       MAX(STARTDATE) AS ENDDATE
FROM PROJECTTIMELOGSSTAGE PTLS, PROJECTTIMESHEETITEM PTSI, TIMESHEET TS
WHERE PTLS.PROJECTID = :B2
  AND TS.TIMESHEETID = PTSI.TIMESHEETID
  AND TS.USERID = :B1
  AND PTSI.TIMESHEETID = PTLS.TIMESHEETID
  AND PTSI.ITEMTYPE = PTLS.ITEMTYPE
  AND PTSI.ITEMID = PTLS.ITEMID
  AND (PTSI.ISPWFITEM = 'N' OR PTSI.ISPWFITEM IS NULL)
  AND PTLS.ITEMTYPE NOT IN ('OtherTsk', 'NewTsk', 'Loc', 'Glb')
  AND (PTLS.ITEMTYPE, PTLS.ITEMID) IN (SELECT ITEMTYPE, ITEMID
                                       FROM PROJECTTIMELOGSSTAGE PTLS1
                                       WHERE PTLS1.PROJECTID = :B2
                                         AND PTLS1.TIMESHEETID = :B3)
GROUP BY PTLS.ITEMTYPE, PTLS.ITEMID, PTLS.STAGEID, TS.USERID
View 17 Replies
View Related
Jan 25, 2013
Could you send me a query to find high-CPU users in Oracle 10g?
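In case it's useful, a sketch built on the session statistic 'CPU used by this session' (reported in centiseconds); the names are standard v$ views:

SELECT s.sid,
       s.username,
       st.value AS cpu_centiseconds
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
JOIN   v$session  s  ON s.sid = st.sid
WHERE  sn.name = 'CPU used by this session'
AND    s.username IS NOT NULL
ORDER  BY st.value DESC;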
View 3 Replies
View Related
Aug 10, 2012
I plan to use the "CPU Time Per User Call" metric.
The thresholds are:
Warning: 8000
Critical: 10000
But this alarm is raised every minute.
I think the threshold is too low, but what is the correct value for identifying performance problems?
View 4 Replies
View Related