1) Split values from "INST" Column : suppose 23
2) Find all values from "NUM" column for above splitted value i.e 23 ,
Eg:
For Inst : 23 ,
It's corresponding "NUM" values are : 1234,1298
3) Save these values into
A table Y : INST, NUM are column names.
INST NUM
23 1234,1298
1) I have a thousand records in Table X, and for all of those records I need to split and save the data into Table Y. Hence, I need to do this task with the best possible performance.
2) After this, whenever new data comes into Table X, the above 'split & save' operation should automatically be called and append the corresponding data wherever applicable. (A sketch of one approach follows.)
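A minimal sketch of one reading of this, assuming Oracle 11gR2 or later (for LISTAGG) and the table/column names X, Y, INST and NUM from the question: for each INST in Table X, collect its NUM values into one comma-separated string and store it in Table Y.

INSERT INTO y (inst, num)
SELECT inst,
       LISTAGG(num, ',') WITHIN GROUP (ORDER BY num) AS num_list
FROM   x
GROUP  BY inst;

-- Hypothetical statement-level trigger to keep Y current as rows arrive in X.
-- Re-aggregating all of X on every insert is simple but heavy; for high
-- insert rates a scheduled MERGE would scale better.
CREATE OR REPLACE TRIGGER trg_x_to_y
AFTER INSERT ON x
BEGIN
  MERGE INTO y
  USING (SELECT inst,
                LISTAGG(num, ',') WITHIN GROUP (ORDER BY num) AS num_list
         FROM   x
         GROUP  BY inst) s
  ON (y.inst = s.inst)
  WHEN MATCHED     THEN UPDATE SET y.num = s.num_list
  WHEN NOT MATCHED THEN INSERT (inst, num) VALUES (s.inst, s.num_list);
END;
/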
I am trying to split a comma-separated string. My table has more than 5 lakh (500,000) rows. I have tried the following SQL, but it takes more than 5 minutes. Is there an alternative solution that returns the data quickly?
SELECT REGEXP_SUBSTR(order_id, '[^,]+', 1, LEVEL) order_id
FROM   order_detail
CONNECT BY REGEXP_SUBSTR(order_id, '[^,]+', 1, LEVEL) IS NOT NULL;

SELECT REGEXP_SUBSTR(order_id, '[^,]+', 1, LEVEL) order_id
FROM   order_detail
CONNECT BY LEVEL <= LENGTH(order_id) - LENGTH(REPLACE(order_id, ',')) + 1;
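One commonly suggested rewrite, sketched here under the assumption that order_detail has a unique key (pk_col is a hypothetical column name): correlating the CONNECT BY to each source row stops Oracle from cross-joining the split results of every row with every other row, which is what makes the uncorrelated CONNECT BY versions above explode on large tables.

SELECT d.pk_col,
       REGEXP_SUBSTR(d.order_id, '[^,]+', 1, LEVEL) AS order_id
FROM   order_detail d
CONNECT BY LEVEL <= LENGTH(d.order_id) - LENGTH(REPLACE(d.order_id, ',')) + 1
       AND PRIOR d.pk_col = d.pk_col
       AND PRIOR SYS_GUID() IS NOT NULL;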
I have Oracle 9i Release 2 on my DB server. I get the following error every time I try to create a bitmap index:
ORA-00439: feature not enabled: Bit-mapped indexes
I have queried the v$option view. There, the value of the parameter 'Bit-mapped indexes' is FALSE.
The result of v$version is:
Oracle9i Release 9.2.0.1.0 - 64bit Production
PL/SQL Release 9.2.0.1.0 - Production
CORE 9.2.0.1.0 Production
TNS for Solaris: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production
Actually, when we created the database our installation was halted, so we manually created the database using the CREATE DATABASE command.
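For what it's worth, bitmap indexes are an Enterprise Edition feature; on Standard Edition v$option reports FALSE and CREATE BITMAP INDEX raises ORA-00439. The edition can be confirmed from the banner (an Enterprise Edition banner says 'Enterprise Edition'; the one quoted above does not):

SELECT parameter, value FROM v$option WHERE parameter = 'Bit-mapped indexes';
SELECT banner FROM v$version;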
I created one collection report. In the report, when I click Next (>>), the report shows the next columns so I can enter data in some of them, but the data I entered before clicking Next is not retained (it is lost) when I come back by clicking Previous (<<). I want the data to be retained even when I click Next (>>) or Previous (<<), so that I can enter a large amount of data at a time in the report columns by paging back and forth with >> and <<, then click the Submit button to save all the data.
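If this is an APEX report backed by APEX_COLLECTION, one common pattern is a page process that writes the submitted values back into the collection before pagination, so they survive Next/Previous. A hedged sketch, assuming the editable column is posted as f01 and each row's collection sequence as f02 (both names hypothetical):

BEGIN
  FOR i IN 1 .. APEX_APPLICATION.G_F01.COUNT LOOP
    APEX_COLLECTION.UPDATE_MEMBER_ATTRIBUTE(
      p_collection_name => 'MY_COLLECTION',           -- hypothetical name
      p_seq             => APEX_APPLICATION.G_F02(i),
      p_attr_number     => 1,
      p_attr_value      => APEX_APPLICATION.G_F01(i));
  END LOOP;
END;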
I have two tables with the same columns (15 of them). I am trying to find the difference between the two tables using the MINUS operator and then insert the result into a stage table using the code below.
The issue is that table1 has 50 million records and table2 is empty,
so the first time we execute this, the collections v_collection1 and v_collection2 will hold 50 million records in memory. I think this is not good, because keeping them in memory will eat memory and resources during sorting and other activities.
After fetching the records into the collections we insert them into the stage table and then COMMIT, which I also think is not good, because committing 50 million rows will generate a large amount of redo.
Below is a snippet of my code:
DECLARE
  TYPE lst_collection1 IS TABLE OF table1.col1%TYPE INDEX BY BINARY_INTEGER;
  TYPE lst_collection2 IS [code].......
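A set-based alternative, sketched here with the hypothetical stage-table name stage_tab: a single direct-path INSERT avoids loading 50 million rows into PGA collections at all, and the one COMMIT at the end is no worse for redo than committing the same rows piecemeal (direct-path into a NOLOGGING stage table can actually reduce redo).

INSERT /*+ APPEND */ INTO stage_tab
SELECT * FROM table1
MINUS
SELECT * FROM table2;

COMMIT;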
Here is what I have in MS SQL Server (T-SQL); how do I convert this into PL/SQL?
-- @MortgagePurposeID is a parameter with comma-separated values ('1,2,3,4')
if (substring(@MortgagePurposeID, LEN(@MortgagePurposeID)-1, 1) <> ',')
    Set @MortgagePurposeID = @MortgagePurposeID + ','
Set @pos = 0
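A hedged sketch of the same "ensure trailing comma" logic in PL/SQL, with l_mortgage_purpose_id and l_pos as hypothetical local variables standing in for the T-SQL parameters:

DECLARE
  l_mortgage_purpose_id VARCHAR2(4000) := '1,2,3,4';
  l_pos                 PLS_INTEGER;
BEGIN
  -- append a trailing comma if the string does not already end with one
  IF SUBSTR(l_mortgage_purpose_id, -1) <> ',' THEN
    l_mortgage_purpose_id := l_mortgage_purpose_id || ',';
  END IF;
  l_pos := 0;
END;
/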
I have a few questions about querying using ranges and comma-separated lists. The basic situation is that a request comes in with part numbers that can be formatted as a range, a comma-separated list, or both. For example, the request contains the following part numbers:
<pnum> 1-10, 14, 17, 11, 21-24 </pnum>
I can muster a basic SQL statement to query for this by hand (there is more than one way to do this):
SELECT *
FROM   part_table
WHERE  pnum BETWEEN '1' AND '10'
   OR  pnum BETWEEN '21' AND '24'
   OR  pnum IN (14, 17, 11);
Is there a way to write the BETWEEN clause so that the dash doesn't need to be parsed out of the request (something like BETWEEN '1-10'), or something that functions to that effect? Is it also possible to nest BETWEEN clauses (or their functionality) inside an IN clause? (A sketch of one approach follows.)
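BETWEEN itself cannot parse '1-10', but the request string can be split into tokens first and each token reduced to its two endpoints. A minimal sketch, assuming the raw list is passed as one bind and pnum holds numeric values:

WITH tokens AS (
  SELECT TRIM(REGEXP_SUBSTR(:pnum_list, '[^,]+', 1, LEVEL)) AS tok
  FROM   dual
  CONNECT BY REGEXP_SUBSTR(:pnum_list, '[^,]+', 1, LEVEL) IS NOT NULL
)
SELECT p.*
FROM   part_table p
JOIN   tokens t
  ON   TO_NUMBER(p.pnum)
         BETWEEN TO_NUMBER(REGEXP_SUBSTR(t.tok, '^[0-9]+'))
             AND TO_NUMBER(NVL(REGEXP_SUBSTR(t.tok, '[0-9]+$'),
                               REGEXP_SUBSTR(t.tok, '^[0-9]+')));

A single number like '14' yields the degenerate range 14..14, so one predicate handles both plain list items and ranges.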
Outside of a convoluted loop using the SUBSTR() function, is there an easy way to extract each element from a comma-separated list that's passed in to a stored proc?
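One widely used alternative is a pipelined table function; a minimal sketch (the type and function names here are made up):

CREATE OR REPLACE TYPE t_vc_tab AS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE FUNCTION split_csv (p_list IN VARCHAR2)
  RETURN t_vc_tab PIPELINED
IS
  l_elem VARCHAR2(4000);
  i      PLS_INTEGER := 1;
BEGIN
  LOOP
    l_elem := REGEXP_SUBSTR(p_list, '[^,]+', 1, i);
    EXIT WHEN l_elem IS NULL;
    PIPE ROW (TRIM(l_elem));   -- one list element per row
    i := i + 1;
  END LOOP;
  RETURN;
END split_csv;
/
-- usage: SELECT column_value FROM TABLE(split_csv('10, 20, 30'));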
I have a table like this:

No   err
---  ------------
1    rishi,rahul
2    rishi,ak

I want output like:

No   err
---  ------
1    rishi
1    rahul
2    rishi
2    ak

I am using the below query for this:

select no, regexp_substr(err, '[^,]+', 1, level)
from   abcd
connect by regexp_substr(err, '[^,]+', 1, level) is not null;

but this query gives me duplicated, cross-matched rows (each no paired with split values from the other rows as well). If I use DISTINCT then the desired output comes:

select distinct no, regexp_substr(err, '[^,]+', 1, level)
from   abcd
connect by regexp_substr(err, '[^,]+', 1, level) is not null;

but I don't want to use DISTINCT, because my table has millions of rows and err contains comma-separated VARCHAR2(6000) values.
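One way to avoid DISTINCT is to correlate the CONNECT BY with the source row, sketched here on the assumption that no uniquely identifies each row of abcd:

select no, regexp_substr(err, '[^,]+', 1, level) as err
from   abcd
connect by regexp_substr(err, '[^,]+', 1, level) is not null
       and prior no = no
       and prior sys_guid() is not null;

The PRIOR no = no condition keeps each row's split confined to itself, and PRIOR SYS_GUID() defeats the cycle detection that the self-referencing condition would otherwise trigger.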
I have a requirement to sort a comma-separated string. For example, if I pass '1234,432,123,45322,56786' as the string, then it should return '123,432,1234,45322,56786', after sorting the numbers inside the string.
I have done it by creating a global temporary table. Is there a way without creating the temp table? I understand I can write the whole logic to sort and append the string, but I am asking if there is any direct way.
CREATE GLOBAL TEMPORARY TABLE temp_tab (col1 VARCHAR2(100)) ON COMMIT DELETE ROWS;

CREATE OR REPLACE FUNCTION func_sort_string(
  pi_string    IN VARCHAR2,
  pi_delimiter IN VARCHAR2 DEFAULT ','
) RETURN VARCHAR2 IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_str VARCHAR2(2000) DEFAULT pi_string || ',';
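For comparison, a minimal sketch without any temporary table, assuming Oracle 11gR2 or later (for LISTAGG) and purely numeric elements:

SELECT LISTAGG(tok, ',') WITHIN GROUP (ORDER BY TO_NUMBER(tok)) AS sorted_str
FROM  (SELECT REGEXP_SUBSTR(:str, '[^,]+', 1, LEVEL) AS tok
       FROM   dual
       CONNECT BY REGEXP_SUBSTR(:str, '[^,]+', 1, LEVEL) IS NOT NULL);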
I am building a search for use in one of our major applications. I have written a PL/SQL package that deals with it. I would like to present the requirement list to the group and see what, if anything, you may have done differently than I have.
1.) The search interface must have a single box, like Google's.
2.) Multiple search terms will be separated by a comma.
3.) The table has the following columns: Name, Title, addr, addr2, city, state, zip, phone, email.
4.) Number of Search Terms per query will be unlimited. (for now, as practicality dictates)
5.) Each search term will be checked against various columns.
6.) Search terms must not have a preference in order: 'Name, Address' must return the same results as 'Address, Name'.
7.) Records will be returned only for the rows where all search terms are found. (See the sketch below.)
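A minimal sketch of one way to meet requirements 2, 6 and 7, assuming a splitter such as the pipelined split_csv function sketched earlier and a hypothetical table name people: a row qualifies only when no search term fails to match any column, which makes the terms all-required and order-independent.

SELECT p.*
FROM   people p
WHERE  NOT EXISTS (
         SELECT 1
         FROM   TABLE(split_csv(:search_box)) t
         WHERE  INSTR(UPPER(p.name  || '|' || p.title || '|' || p.addr  || '|' ||
                            p.addr2 || '|' || p.city  || '|' || p.state || '|' ||
                            p.zip   || '|' || p.phone || '|' || p.email),
                      UPPER(t.column_value)) = 0
       );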
I am running a fairly busy Oracle 10gR2 DB; one of the tables has about 120 columns and receives on average 1500 insertions per second. The table is partitioned, and the partitioning is based on the more important of the two timestamp columns. There are two timestamps; they hold different times.
Out of these 120 columns, about 15 need to be indexed. Two of the 15 are timestamps, and at least one of these two timestamp columns is always in the WHERE clause of the queries.
Now the challenge is that the queries we run can have any combination of the 13 other columns plus one timestamp. In reality the queries never have more than 7 or 8 columns in the WHERE clause, but even if we had only 4 columns in the WHERE clause we would still have the same problem.
So if I create one concatenated index for all these columns it will not be very efficient, because after the 4th or 5th column the sorting is no longer very useful and I believe the optimiser would simply not use the rest of the index. So queries that use the leading columns of the index in sequence work well, but if I need to query the 10th column then I have performance issues.
Now, if I create multiple single-column indexes, Oracle will have to work a lot harder to maintain all of them, which creates performance issues (I have tried that). Besides, with multiple single-column indexes the optimiser will do nested loops two or three times and will hit only the first few columns of the WHERE clause, so I think it will end up much the same as the long concatenated index.
What I am trying to do is exactly what a bitmap index would do; it would be very good if I could use the AND combination that a bitmap index allows. That way I could have N single-column indexes which the optimiser could pick from and serve the query with exactly the ones it needs. But unfortunately, using a bitmap index here is not an option, given the large number of inserts this table gets. (One thing worth checking is sketched below.)
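One thing that may be worth checking before ruling out single-column indexes entirely: on Enterprise Edition the optimizer can combine several single-column B-tree indexes at query time via BITMAP CONVERSION FROM ROWIDS and BITMAP AND, without any bitmap index being stored, so the bitmap insert penalty does not apply. A hypothetical plan check, with made-up table and column names:

EXPLAIN PLAN FOR
SELECT *
FROM   big_table
WHERE  ts_col >= SYSTIMESTAMP - INTERVAL '1' HOUR
AND    col_03 = :v3
AND    col_10 = :v10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- look for BITMAP CONVERSION FROM ROWIDS / BITMAP AND steps in the plan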
I have been looking for alternatives. I have considered creating multiple shorter concatenated indexes, but this still would not address the issue, since many queries would still not be served properly and would therefore take a very long time to complete.
What I had in mind is some sort of multidimensional index; I am not even sure such a thing exists. But essentially it would be an index that could serve a query efficiently regardless of whether the WHERE clause has the 1st, 3rd, and last columns of the index.
So considering how widely used Oracle is and how many super large databases there are out there, this problem must be common.
What are the principal things to look at when the same query gives different performance results? I have two different databases: the plan and the data are the same, but the performance results are very different.
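With identical plans and data, the difference is usually environmental (buffer cache warmth, I/O latency, memory, concurrent load). A hedged starting point on 10g and later is to compare actual run-time statistics on both systems (some_table stands in for the real query):

SELECT /*+ GATHER_PLAN_STATISTICS */ *
FROM   some_table;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
-- compare A-Rows, Buffers and Reads line by line between the two systems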
I have the following query. For the :P_LEG_NUM parameter, when I pass values like 1,2,5 as a string I get an 'invalid number' error. I have defined an IN clause for it, but it still does not work; for individual values like 2 it works. How can I pass comma-separated values for this bind variable?
select trip_number as prl_trip_number,
       flight_number as prl_f_number,
       trip_leg_id as prl_trip_leg_id,
       leg_number as prl_leg_num,
       dicao as prl_dicao,
       [code]........
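One common workaround is to split the bind inside the IN clause, so that '1,2,5' is compared number by number instead of as a single string (trip_legs is a stand-in for the real table):

select trip_number, leg_number
from   trip_legs
where  leg_number in (
         select to_number(trim(regexp_substr(:P_LEG_NUM, '[^,]+', 1, level)))
         from   dual
         connect by regexp_substr(:P_LEG_NUM, '[^,]+', 1, level) is not null
       );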
Sometimes when I re-run a query a few times, the runs after the first become much faster. This is a problem for me when I'm trying to optimize a query. Is there some sort of cache? Can it be disabled?
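Yes: repeat runs are served from the buffer cache (data blocks) and the shared pool (parsed statements). On 10g and later both can be flushed between test runs, though only on a test system; it also helps to compare cache-independent figures such as consistent gets, rather than elapsed time alone.

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH SHARED_POOL;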
CREATE OR REPLACE PROCEDURE fast_proc (p_rows OUT NUMBER) IS
  TYPE object_id_tab IS TABLE OF all_objects.object_name%TYPE INDEX BY BINARY_INTEGER
  lt_object_id object_id_tab;
  CURSOR c IS
[Code]....
Warning: Procedure created with compilation errors.
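The warning itself does not show what failed; the actual errors can be listed as below. (In the visible fragment, a semicolon also appears to be missing after INDEX BY BINARY_INTEGER, before the lt_object_id declaration.)

SHOW ERRORS PROCEDURE fast_proc

SELECT line, position, text
FROM   user_errors
WHERE  name = 'FAST_PROC'
ORDER  BY sequence;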
We have a few tables in our production database which are huge in size and will keep growing, so as part of the corrective measures we have jotted down the below 3 methods to manage the size of those tables:
1> Partition the table, take an export of the identified partitions, and after that truncate those partitions (a sketch of this follows below).
2> Create history tables and move not-so-current data from the original table to the history tables.
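A hedged sketch of method 1, assuming a hypothetical table SALES with a partition SALES_2012Q1 and the standard DATA_PUMP_DIR directory object:

-- export the partition with Data Pump (run from the OS shell):
--   expdp system/... directory=DATA_PUMP_DIR dumpfile=sales_2012q1.dmp tables=SALES:SALES_2012Q1
-- then, once the dump file is verified:
ALTER TABLE sales TRUNCATE PARTITION sales_2012q1;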
I'm extracting/retrieving data from the Oracle database using a Java application, and it's a bit slow. However, when I retrieve from SQL Server it's faster than Oracle.