SQL & PL/SQL :: How To Implement Pagination For Large Table Joins
Sep 2, 2011
I have two large tables (rptbody and rpthead) which have millions of records or even more. Below is the table schema
describe rpthead
Name Null Type
--------------------------- -------- -------------
RPTNO NOT NULL NUMBER
RPTDATE NOT NULL DATE
RPTD_BY NOT NULL VARCHAR2(25)
PRODUCT_ID NOT NULL NUMBER
[code]...
What I want is to get all rows from rptbody whose referenced RPTNO belongs to a particular PRODUCT_ID in rpthead. Here's the SQL:
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
t0.RPTNO IN
(
SELECT t1.RPTNO FROM RPTHEAD t1 where t1.PRODUCT_ID IN ('4647')
)
)
ORDER BY t0.LINENO
Since the result set is pretty large, my application (think of it as a couple of jobs, each of which must finish within a time window) can only process a subset of the data at a time, so I need pagination so that the next job can continue the processing until all data is processed. Below is the SQL with pagination:
select * from (
select a.*, ROWNUM rnum from
(
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE
(
[code]....
As you can see, each query will take 100 rows from the db. The problem for now is that the query is taking too much time (10+ minutes). I know the slowness is due to "ORDER BY t0.LINENO", but it's required for pagination.
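For reference, the complete double-wrap that the truncated snippet appears to follow is the classic ROWNUM pagination pattern. This is a hedged reconstruction; :min_row and :max_row are bind placeholders I introduced (e.g. 0 and 100 for the first job's window):

select *
from (
    select a.*, ROWNUM rnum
    from (
        SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
        FROM RPTBODY t0
        WHERE t0.RPTNO IN (SELECT t1.RPTNO FROM RPTHEAD t1
                           WHERE t1.PRODUCT_ID IN ('4647'))
        ORDER BY t0.LINENO           -- the sort identified as the bottleneck
    ) a
    where ROWNUM <= :max_row         -- upper bound of this job's window
)
where rnum > :min_row;               -- lower bound of this job's window

Whether Oracle can avoid the full sort depends on the available access paths; that would need the actual execution plan.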
We have to load 10 million rows into a table from another table based on multiple joins. How much tablespace should we allocate to the table, and from a performance point of view, how large should the SGA be?
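There is no universal answer, but a common starting point is to size the target from the source segments. A minimal sketch, assuming access to DBA views; the owner/table names and the 20% overhead factor are my assumptions:

-- rough size (GB) of the data to be loaded, padded for PCTFREE/index overhead
SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024 * 1.2, 2) AS est_gb
FROM   dba_segments
WHERE  owner        = 'APP_OWNER'     -- hypothetical owner
AND    segment_name = 'SOURCE_TABLE'; -- hypothetical source table

SGA sizing depends on concurrency and the rest of the workload, not just this one load.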
From the report attributes, in the layout and pagination section, I have selected a pagination template where the numbers 1, 2, 3 are displayed (customized CSS). Now when the number of records in the report is very small, I can still see page 1, which I don't want.
.classname li:first-child { display: none } - this is not working.
I need to implement a foreign key on a column of a table that references 2 tables. My requirement is below:
drop table t1;
create table t1 (slno number, acc_no number);
drop table t2;
create table t2 (acc_no number primary key, acc_name varchar2(100));
drop table t3;
create table t3 (acc_no1 number primary key, acc_name1 varchar2(100));
[code]...
It is given that the values of acc_no in t2 and acc_no1 in t3 are unique. Now it is required that while inserting into t1, the system checks that the value exists in either the t2 or the t3 table.
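A declarative foreign key cannot reference two tables, so one common workaround is a row-level trigger. A minimal sketch, assuming the insert should fail when the account number exists in neither t2 nor t3:

create or replace trigger t1_acc_no_chk
before insert or update of acc_no on t1
for each row
declare
    v_cnt number;
begin
    -- look for the incoming acc_no in either parent table
    select count(*) into v_cnt
    from (
        select acc_no  from t2 where acc_no  = :new.acc_no
        union all
        select acc_no1 from t3 where acc_no1 = :new.acc_no
    );
    if v_cnt = 0 then
        raise_application_error(-20001,
            'acc_no ' || :new.acc_no || ' not found in t2 or t3');
    end if;
end;
/

Unlike a real constraint, a trigger does not protect against concurrent deletes from t2/t3, so this is an approximation of the requirement.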
I want to add a column to a table which has a huge amount of data, and fill it with data from another table. What is the best way to do it? Is it faster to use CTAS instead of ALTER TABLE ... ADD COLUMN followed by an UPDATE?
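CTAS is often faster for this kind of bulk change because it can run direct-path, nologging, and in parallel, at the cost of having to recreate indexes, grants, and constraints afterwards. A hedged sketch with hypothetical table/column names:

-- build the new table with the extra column already populated
create table big_tab_new nologging parallel 4 as
select b.*, s.extra_col
from   big_tab b
left join source_tab s on s.id = b.id;

-- then swap names and recreate indexes/grants/constraints:
-- rename big_tab to big_tab_old;
-- rename big_tab_new to big_tab;

ALTER TABLE ... ADD COLUMN itself is instant; it is the follow-up UPDATE of every row that tends to be slower than CTAS.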
I need to dump the contents of a very large table into text files for archiving as we retire this old DB. The table has about 16 million rows, and a few of the columns are up to 4000 characters wide (VARCHAR2(4000)). I've got 2 problems:
1) How can I select records that occur in a certain month of a year (there is a date column) and put the selected records into a file?
2) I don't have access to the server OS, so UTL_FILE is not possible. The output is also so large that I'm having trouble with DBMS_OUTPUT.PUT_LINE.
I'm trying to get the first block of the IF working first, so the rest is just placeholders.
DECLARE
    v_mm     number(2);
    v_yyyy   number(4);
    min_mm   number(2);
    min_yyyy number(4);
    max_mm   number(2);
    max_yyyy number(4);
    min_date date;
[code]....
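Since UTL_FILE is out and DBMS_OUTPUT buffers poorly at this volume, one client-side alternative is SQL*Plus spooling, where the file is written on the client machine. A minimal sketch, assuming a date column named RPT_DATE (hypothetical) and one file per month:

-- run from SQL*Plus on the client; the spool file lands client-side
set pagesize 0 linesize 32767 trimspool on feedback off heading off
spool archive_2011_09.txt
select *
from   big_table
where  rpt_date >= date '2011-09-01'
and    rpt_date <  date '2011-10-01';
spool off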
Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, D are dependent on table A (with foreign key constraints). When I am deleting records from all the tables, B, C, D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.
Method I have used:
1. Created Temp table
2. Then deleted all records from B, C, D, E, F for all records in the temp table, in batches of 500: delete from B where exists (select 1 from temp where b.col1 = temp.col1);
3. Why is it taking so much time to delete records from table A?
Could it be that, when deleting data from such a master table, Oracle refers to all dependent tables even if no dependent data is present?
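One classic cause of exactly this symptom is unindexed foreign key columns: for every parent row deleted from A, Oracle must check each child table for referencing rows, and without an index on the FK column that check is a scan. A hedged dictionary query to spot child FK columns of A that lack a supporting index (a heuristic - it only matches index columns by position):

select c.table_name, cc.column_name, c.constraint_name
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.constraint_type = 'R'                             -- foreign keys
and    c.r_constraint_name in (select constraint_name
                               from   user_constraints
                               where  table_name = 'A')
and    not exists (select 1
                   from   user_ind_columns ic
                   where  ic.table_name      = c.table_name
                   and    ic.column_name     = cc.column_name
                   and    ic.column_position = cc.position);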
I am working with an online application with the database in Oracle 10g. We have a table with 10 million rows, and this table is expected to keep growing. Moreover, we cannot archive some of these rows, as those records are required for referencing.
We have all the necessary indexes on the table, but querying it takes a lot of time, especially when it is joined with other tables. I am looking for methods with which I can manage this table better, so that queries joining it would execute faster.
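One option worth evaluating (a suggestion, not a guaranteed fix) is range partitioning on a date or similar filtering column, so that joins and scans can prune to a few partitions instead of visiting all 10 million rows. A minimal sketch with hypothetical names:

create table big_hist (
    id       number,
    txn_date date,
    payload  varchar2(200)
)
partition by range (txn_date) (
    partition p2010 values less than (date '2011-01-01'),
    partition p2011 values less than (date '2012-01-01'),
    partition pmax  values less than (maxvalue)
);

-- local index: each partition carries its own small index segment
create index big_hist_ix on big_hist (id) local;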
We have two databases running on 10.2.0.4 and 9.2.0.8. Both have the same unpartitioned table of size 80G. I am exporting the table on 10g using parallel=8 and a dumpfile with the %U option. That took around 4 hours to export the table.
And on 9.2.0.8, I am exporting using the parameters below, which takes around 5 hours:
buffer=2000000 recordlength=64000
What options can I try to speed up the export in both versions?
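Two commonly cited levers, offered as suggestions rather than guarantees: classic exp supports direct-path mode on 9i, and on 10g Data Pump parallelism only pays off if the %U dump files and the I/O subsystem can keep up. Hypothetical command lines:

-- 9.2.0.8: direct path bypasses the evaluation buffer (buffer= is ignored)
exp system/*** tables=BIG_TAB file=big_tab.dmp direct=y recordlength=65535

-- 10.2.0.4: parallel slaves need enough %U files and disk bandwidth
expdp system/*** tables=BIG_TAB dumpfile=dp_dir:big_tab_%U.dmp parallel=8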
I have a large table and want to calculate just a few values. Therefore, I don't want to create a new table; I want to update the existing one. Here is an example:
I want to calculate VALUE_L1 for ID = 4 only (-> two values).
create table zTEST ( PRODUCT number, ID number, VALUE number, VALUE_L1 number );
[Code]..
I tried this, but obviously, window functions are not allowed in an update statement:
update zTEST set VALUE_L1 = lag(VALUE) over (partition by PRODUCT order by ID) where ID = 4
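A common workaround is to compute the LAG in a subquery and apply it with MERGE. A minimal sketch against the zTEST definition above; the rowid join is my choice, not from the original post:

merge into zTEST z
using (
    select rowid as rid,
           ID,
           lag(VALUE) over (partition by PRODUCT order by ID) as prev_val
    from   zTEST
) s
on (z.rowid = s.rid)
when matched then
    update set z.VALUE_L1 = s.prev_val
    where s.ID = 4;   -- only the ID = 4 rows are touched

The LAG is evaluated over all rows, but the UPDATE branch filters down to the rows of interest.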
I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers which is the result set of a view.
I am grappling with the following strategy in PL/SQL:
1 -- set up cursor, execute the view (so I have the list of identifiers)
2 -- create a second cursor (or loop?) which accepts each of the identifiers in turn, executes a query (EXECUTE IMMEDIATE?) on the larger table, and INSERTs (or appends?) each result set into the GTT
3 -- Then the GTT contains just the required subset of the larger table for further processing and eventual import into iReport for reporting.
The GTT is defined and ready to go; the larger table contains approx 40,000 rows, and I need to extract a dozen or so subsets which add up to approx 1,000 rows.
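At this size (40,000 rows, ~1,000 extracted), the cursor-per-identifier loop may be unnecessary: a single set-based INSERT ... SELECT joined to the view does steps 1 and 2 in one statement. A minimal sketch, with my_gtt, my_view and big_table as hypothetical names:

-- one pass: the view supplies the identifier list, the IN filters big_table
insert into my_gtt (id, col1, col2)
select b.id, b.col1, b.col2
from   big_table b
where  b.id in (select v.id from my_view v);

EXECUTE IMMEDIATE is only needed if the table or column names vary at runtime.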
At the moment we use range-hash partitioning of a large dimension table (dimensional model warehouse) with 2 levels - range partitioned on columns only available at the bottom level of the hierarchy - date and issue_id.
The result is a partition with a null value - I assume we would get a null partition in the large fact table too if it were partitioned by reference to the large dimension. The large fact table is similarly partitioned (date range-hash) with local bitmap indexes.
It was suggested that we would get automatic partition-wise joins if we used reference partitioning. I would have thought we would get that with range-hash partitioning on both the dimension and the fact table.
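For reference, this is roughly what the reference-partitioned alternative looks like (11g syntax); a hedged sketch with invented names, only to illustrate the construct under discussion:

create table dim_issue (
    issue_id number primary key,
    issue_dt date
)
partition by range (issue_dt) (
    partition p2011 values less than (date '2012-01-01')
);

create table fact_sales (
    issue_id number not null,
    amount   number,
    constraint fk_fact_dim foreign key (issue_id)
        references dim_issue (issue_id)
)
partition by reference (fk_fact_dim);  -- the child inherits the parent's partitioning

Reference partitioning requires a NOT NULL foreign key, which is also why the null-value partition question above matters.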
Our application uses two instances: one for the live active data and the other for the reports data. We have a process which moves the data from the live instance to the reports instance every night. In a single-db environment the process works without any issues. However, when we move to the RAC environment, inserts into a large table in the reports db get locked and we are unable to insert data into the reports db.
What we are performing is:
Insert into my_table_rpt select * from my_table_live@db_link_to_livedb;
Issues:
my_table_rpt gets locked
We have found a workaround: disable table locking on the destination and re-enable it after the insert.
ALTER TABLE my_table_rpt DISABLE TABLE LOCK;
Insert the data to the reports database table
Then
ALTER TABLE my_table_rpt ENABLE TABLE LOCK;
Question:
Why does the large destination table (my_table_rpt) get locked in the RAC environment?
We have data archive scripts; these scripts move data for a date range to a different table. So the script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion is taking too long, around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is doing a full table scan, but the predicate itself is the primary key. More info below:
Plan hash value: 2798378986

-------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT        |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   1 |  DELETE                 | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI  |            |  2520 |   233K|    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN | OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL    | MON_TXNS   | 14260 |  1239K|    83   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
I have a report with a single row having a large number of columns, so I have to use a scroll bar to see all of them. Is it possible to design the report in the format below (half the columns on one side of the page, half on the other side):
Column1  Data    Column11 Data
Column2  Data    Column12 Data
Column3  Data    Column13 Data
Column4  Data    Column14 Data
Column5  Data    Column15 Data
Column6  Data    Column16 Data
Column7  Data    Column17 Data
Column8  Data    Column18 Data
Column9  Data    Column19 Data
Column10 Data    Column20 Data

I am using Apex 4.2.3 on Oracle 11g XE.
I have two design alternatives and need to understand how expensive (in terms of speed) one is against the other for a medium-size table (100K-200K records):
create table xyz (
    f1 number not null,
    f2 varchar2(20) not null,
    f3 number not null,
    f4 varchar2(50),
[code]....
The idea is to optimize the design by using a PK instead of the 3 keys, and there is a debate whether searching a unique index field (2nd scenario) is the same speed as searching a PK field (1st scenario).
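For concreteness, a hedged reconstruction of the two alternatives (the original DDL is truncated, so these names are guesses):

-- scenario 1: surrogate primary key
alter table xyz add (id number);
alter table xyz add constraint xyz_pk primary key (id);

-- scenario 2: unique index over the natural 3-column key
create unique index xyz_uq on xyz (f1, f2, f3);

Since Oracle enforces a PK through an index anyway, a single-row lookup through either path should cost a comparable index probe; the main difference is the width of the composite key.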
describe rpthead

Name                        Null     Type
--------------------------- -------- -------------
RPTNO                       NOT NULL NUMBER
RPTDATE                     NOT NULL DATE
RPTD_BY                     NOT NULL VARCHAR2(25)
PRODUCT_ID                  NOT NULL NUMBER

describe rptbody

Name          Null     Type
------------- -------- -------------
RPTNO         NOT NULL NUMBER
LINENO        NOT NULL NUMBER
COMMENTS               VARCHAR2(240)
UPD_DATE               DATE
The fact is that we store some header data in RPTHEAD and the real data in RPTBODY. The question concerns using the SQL below to query all data for a PRODUCT_ID:
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM   RPTBODY t0, RPTHEAD rpthead
WHERE  t0.RPTNO = rpthead.RPTNO
AND    t0.UPD_DATE >= to_date('1970/01/01 00:00:00', 'YYYY/MM/DD hh24:mi:ss')
AND    rpthead.PRODUCT_ID IN ('4647')
I do not want an 'ORDER BY' clause, since the data set is too large and the sorting takes a long time. Is there any way to get the result rows in the order sorted by RPTNO? We have an index on RPTNO on RPTBODY.
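Strictly speaking, without an ORDER BY Oracle guarantees no row order at all; the safer compromise is to keep the ORDER BY and give the optimizer an access path that satisfies it without a sort. A hedged sketch, where RPTBODY_RPTNO_IX is my invented name for the existing index:

SELECT /*+ INDEX(t0 RPTBODY_RPTNO_IX) */
       t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM   RPTBODY t0, RPTHEAD rpthead
WHERE  t0.RPTNO = rpthead.RPTNO
AND    rpthead.PRODUCT_ID IN ('4647')
ORDER BY t0.RPTNO;  -- if the index drives access, the plan's sort step disappears

Check the execution plan: the hint is only a nudge, not a guarantee.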
In my stored procedure I am using a subquery with an INNER JOIN. This subquery calls a user-defined function which takes main query fields as parameter values. But I am facing an issue accessing the main query fields. My query is like below:
SELECT ID, Name, sub.Desc, sub.Date
FROM MainTable main
INNER JOIN ( SELECT * FROM RMF_GetData(main.ID) ) sub
        ON main.ContactID = sub.ContactID
I need data from a function in table format based on some case conditions, hence I need to join it with the main table. But here Oracle gives an "invalid identifier" error for the main.ID parameter.
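An inline view cannot see main's columns in pre-12c Oracle, but a TABLE() collection expression can, since left lateral correlation is allowed there (12c also adds CROSS APPLY/LATERAL). A hedged sketch, assuming RMF_GetData is a pipelined or collection-returning function, and renaming the reserved words Desc/Date to descr/dt:

SELECT main.ID, main.Name, sub.descr, sub.dt
FROM   MainTable main,
       TABLE(RMF_GetData(main.ID)) sub   -- correlated table function
WHERE  main.ContactID = sub.ContactID;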
SELECT so.*
FROM shipping_order so
WHERE (so.submitter = 20)
   OR (so.requestor_id IN (SELECT poc.objid
                           FROM point_of_contact poc
                           WHERE poc.ain = 20))
   OR so.objid IN (SELECT ats.shipping_order_id
                   FROM ac_to_so ats
                   WHERE ats.access_control_id IN (SELECT ac.objid
                                                   FROM access_control ac
                                                   WHERE ac.ain = 20
                                                      OR ac.group IN ('buyers', 'managers')))
How can I rewrite this query to use joins? That would greatly simplify my SQL query building code. The ids, objids, submitter, and ain columns are numeric, and group is a varchar.
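Because the three conditions are OR'ed together, a join version needs outer joins plus DISTINCT to avoid duplicated shipping orders. A hedged sketch (GROUP is a reserved word, hence the quoting):

SELECT DISTINCT so.*
FROM   shipping_order so
LEFT JOIN point_of_contact poc
       ON poc.objid = so.requestor_id AND poc.ain = 20
LEFT JOIN ac_to_so ats
       ON ats.shipping_order_id = so.objid
LEFT JOIN access_control ac
       ON ac.objid = ats.access_control_id
      AND (ac.ain = 20 OR ac."GROUP" IN ('buyers', 'managers'))
WHERE  so.submitter = 20
   OR  poc.objid IS NOT NULL
   OR  ac.objid IS NOT NULL;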
I pulled 1,121 SSNs into a table and am using that table as the basis for returning data from other tables, including how many documents a user has in their folder.
My query, however, is only returning 655 rows: those rows that have documents in their folders. I want to return ALL rows, whether or not they have a document count (count(*)). How can I get all 1,121 rows to return? I would like the output to look like:
SSN          LOCATION  EMP_STATUS  FOL_STATUS  COUNT(*)

-- For those folders containing documents:
XXX-XX-XXXX  WHATEVER  WHATEVER    WHATEVER    12

-- For those folders containing 0 documents:
XXX-XX-XXXX  WHATEVER  WHATEVER    WHATEVER    NULL
Here is the query in it's current state:
-- Get User/Folder/Doc Count Information
SELECT b.ssn, b.location, b.emp_status, c.fol_status, COUNT(*)
[Code]....
So again, my problem here is that not all FOLDERS contain DOCUMENTS, but I still want the folder data listed; I just need it listed with either a zero count (0) or a NULL in the COUNT(*) column.
I'm trying the various joins, but none of them seem to be working.
I've tried the old 8i (+) join as follows:
AND c.fol_id = d.doc_fol_id(+)
I've tried the inner join:
AND c.fol_id(+) = d.doc_fol_id
...and I've tried the 9i method (left outer and full outer) using the following types of notations:
folder c full outer join document d on c.fol_id = d.doc_fol_id
...so far, no luck. I'm still getting only 655 rows returned (the 655 are those folders that HAVE a document count > 0; any folder that has zero documents in the document table just isn't being returned by the query).
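The usual culprits are counting with COUNT(*) (which counts joined rows even when the document side is null) or filtering the outer-joined table in the WHERE clause, which silently turns the outer join back into an inner join. A hedged sketch of the shape that preserves zero-document folders; the table and join columns are guesses assembled from the fragments above:

SELECT b.ssn, b.location, b.emp_status, c.fol_status,
       COUNT(d.doc_fol_id) AS doc_count      -- 0 (not NULL) when a folder is empty
FROM   ssn_list a                            -- hypothetical: the 1,121 pulled SSNs
JOIN   emp      b ON b.ssn = a.ssn           -- hypothetical join
JOIN   folder   c ON c.fol_owner = b.ssn     -- hypothetical join
LEFT OUTER JOIN document d ON d.doc_fol_id = c.fol_id
GROUP BY b.ssn, b.location, b.emp_status, c.fol_status;

Any extra predicate on d must live in the ON clause, not the WHERE clause, or the 655-row behaviour comes back.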
What is the best practice to implement for indexing: global indexing or local indexing? I would like to implement one of them on an object that has been partitioned horizontally. I don't know exactly what to make of it.
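For orientation, the two options differ in how the index is partitioned relative to the table. A minimal sketch on a hypothetical partitioned table named sales:

-- local: one index partition per table partition; cheap to maintain and
-- effective when queries supply the partition key
create index sales_dt_ix on sales (sale_date) local;

-- global: partitioned independently of the table (a nonpartitioned index is
-- global by default); better for selective lookups across all partitions, but
-- table partition maintenance invalidates it unless done with UPDATE INDEXES
create index sales_cust_ix on sales (cust_id)
global partition by hash (cust_id) partitions 8;

As a rule of thumb, warehouses lean local and OLTP unique lookups lean global, but it depends on the queries.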