I need to dump the contents of a very large table into text files for archiving as we retire this old DB. The table has about 16 million rows, and a few of the columns are up to 4000 characters wide (VARCHAR2(4000)). I've got 2 problems:
1) How can I select records that occur in a certain month of a year (there is a date column) and put the selected records into a file?
2) I don't have access to the server OS, so UTL_FILE is not possible. The output is also so large that I'm having trouble with DBMS_OUTPUT.PUT_LINE.
I'm trying to get the first block of the IF working first, so the rest is just placeholders (a client-side spool alternative is sketched after the code).
DECLARE
  v_mm       NUMBER(2);
  v_yyyy     NUMBER(4);
  min_mm     NUMBER(2);
  min_yyyy   NUMBER(4);
  max_mm     NUMBER(2);
  max_yyyy   NUMBER(4);
  min_date   DATE;
[code]....
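Since UTL_FILE needs access to the server file system, one common workaround is to run the extract from a client-side SQL*Plus session and SPOOL the output to a local file. A minimal sketch, assuming a hypothetical table BIG_TABLE with a date column TXN_DATE, and picking May 2013 as the month to extract:

SET PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON FEEDBACK OFF HEADING OFF
SPOOL big_table_2013_05.txt

-- delimit the columns so the wide VARCHAR2 values can be reloaded later
SELECT t.id || '|' || TO_CHAR(t.txn_date, 'YYYY-MM-DD') || '|' || t.wide_col
FROM   big_table t
WHERE  t.txn_date >= TO_DATE('01-05-2013', 'DD-MM-YYYY')
AND    t.txn_date <  ADD_MONTHS(TO_DATE('01-05-2013', 'DD-MM-YYYY'), 1);

SPOOL OFF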
Currently I have a requirement where I need to create 2 more output rows from each result row.
In my requirement I am populating a charges table with types of charges. On each charge line item I need to apply 2 types of taxes and populate them along with the charge line item. I will store the charges in the CHARGES table and the 2 taxes to be applied in the TAXES table. For each row of CHARGES I need to apply the 2 taxes present in the TAXES table, resulting in 3 output rows per charge, as in the sketch below.
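A simple way to get three output rows per charge is a UNION ALL of the charge row with a cross join against the taxes table. A minimal sketch, with hypothetical column names CHARGE_ID and CHARGE_AMT in CHARGES, and TAX_CODE and TAX_RATE in TAXES:

-- one row for the charge itself, plus one row per tax applied to it
SELECT c.charge_id, 'CHARGE' AS line_type, c.charge_amt AS amount
FROM   charges c
UNION ALL
SELECT c.charge_id, t.tax_code AS line_type, c.charge_amt * t.tax_rate AS amount
FROM   charges c
CROSS JOIN taxes t
ORDER BY 1, 2;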
I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers, which is the result set of a view.
I am sketching out the following strategy in PL/SQL:
1 -- Set up a cursor and execute the view (so I have the list of identifiers).
2 -- Create a second cursor (or loop?) which accepts each of the identifiers in turn, executes a query (EXECUTE IMMEDIATE?) on the larger table, and INSERTs (or appends?) each result set into the GTT.
3 -- Then the GTT contains just the required subset of the larger table for further processing and eventual import into iReport for reporting.
The GTT is defined and ready to go; the larger table contains approx 40,000 rows and I need to extract a dozen subsets or so, which add up to approx 1,000 rows.
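Unless the per-identifier query really has to be dynamic, the cursor-per-identifier loop can usually be collapsed into one set-based statement driven by the view. A minimal sketch, assuming a view ID_LIST_VW with a column ID, a large table BIG_TABLE keyed on the same column, and a GTT called GTT_SUBSET with a matching column list (all names hypothetical):

-- one INSERT ... SELECT instead of a loop; the GTT keeps the rows for the
-- transaction or session depending on its ON COMMIT setting
INSERT INTO gtt_subset
SELECT b.*
FROM   big_table b
WHERE  b.id IN (SELECT v.id FROM id_list_vw v);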
The view data should display as follows:

vivaram            varavu   pattru
sambalam kamishan  (null)   101.00
panam kamishan     51.00    (null)
panam sambalam     50.00    (null)
Logic: each table row will have a value in only one of varavu_patti or pattru_patti. On selecting the row, thogai must be posted in varavu when varavu_patti is not null, or posted in pattru when pattru_patti is not null. On selecting the table row, vivaram should contain the varavu_patti and pattru_patti values of all other rows with the same chitta_enn.
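For the thogai-posting part of that logic, a pair of CASE expressions is usually enough. A minimal sketch, assuming the base table is called KANAKKU (a hypothetical name) with the columns described above; the self-join needed to build vivaram from the other rows with the same chitta_enn is left out:

SELECT chitta_enn,
       CASE WHEN varavu_patti IS NOT NULL THEN thogai END AS varavu,
       CASE WHEN pattru_patti IS NOT NULL THEN thogai END AS pattru
FROM   kanakku;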
11.2.0.3... just trying to learn the syntax. I have not worked with IOTs and I am exploring a feature I have not really used, to try to learn something new. I know about intervals. The exact split syntax below works against a regular heap table without errors, but when I run it against the IOT I get:
alter table MYTABLE split partition "FUTURE"
  at ( to_date('09_MAY_2013_13','DD_MON_YYYY_HH24') )
  into ( partition "B4_09_MAY_2013_13", partition "FUTURE" )
  update global indexes
*
ERROR at line 1:
ORA-00932: inconsistent datatypes: expected BINARY got NUMBER
I want to add a column to a table which has a huge amount of data and fill it with data from another table. What is the best way to do it? Is it faster to use CTAS instead of ALTER TABLE ... ADD COLUMN?
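For a very large table, CTAS (optionally NOLOGGING, followed by a rename) is often faster than ALTER TABLE ... ADD COLUMN plus a huge UPDATE, because the rows are written once instead of being updated in place. A sketch with hypothetical table and column names; indexes, constraints, grants and triggers would have to be recreated on the new table:

CREATE TABLE big_table_new NOLOGGING AS
SELECT b.*, o.extra_value AS new_col
FROM   big_table b
LEFT JOIN other_table o ON o.key_col = b.key_col;

-- once the copy is verified:
-- RENAME big_table TO big_table_old;
-- RENAME big_table_new TO big_table;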
I have two large tables (rptbody and rpthead) which have millions of records or even more. Below is the table schema:
describe rpthead
Name                        Null     Type
--------------------------- -------- -------------
RPTNO                       NOT NULL NUMBER
RPTDATE                     NOT NULL DATE
RPTD_BY                     NOT NULL VARCHAR2(25)
PRODUCT_ID                  NOT NULL NUMBER
[code]...
What I want is to get all data from rptbody whose referenced RPTNO belongs to a particular product_id; here's the SQL:
SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
FROM RPTBODY t0
WHERE ( t0.RPTNO IN ( SELECT t1.RPTNO FROM RPTHEAD t1 WHERE t1.PRODUCT_ID IN ('4647') ) )
ORDER BY t0.LINENO
Since the result set is pretty large, my application (think of it as a couple of jobs, each of which should finish within a time window) can only process a subset of all the data, so I need pagination so that the next job can continue processing until all data is processed. Below is the SQL with pagination:
select * from (
  select a.*, ROWNUM rnum from (
    SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
    FROM RPTBODY t0
    WHERE (
[code]....
As you can see, each query takes 100 rows from the db. The problem for now is that the query is taking too much time (10+ mins). I know the slowness is due to "ORDER BY t0.LINENO", but it's required for pagination.
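For reference, the classic ROWNUM pagination wrapper around that query looks roughly like the sketch below, with hypothetical bind variables :low and :high marking the page boundaries:

SELECT *
FROM  (SELECT a.*, ROWNUM rnum
       FROM  (SELECT t0.LINENO, t0.COMMENTS, t0.RPTNO, t0.UPD_DATE
              FROM   RPTBODY t0
              WHERE  t0.RPTNO IN (SELECT t1.RPTNO
                                  FROM   RPTHEAD t1
                                  WHERE  t1.PRODUCT_ID IN ('4647'))
              ORDER  BY t0.LINENO) a
       WHERE  ROWNUM <= :high)
WHERE rnum > :low;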
Consider tables A, B, C, D, E, F, all having 100,000+ records. Tables B, C, D are dependent on table A (with foreign key constraints). When I delete records from all the tables, tables B, C, D take at most 30-40 seconds each, while table A takes 30-40 minutes. All tables have indexes.
Method I have used:
1. Created Temp table
2. Then deleted all records from B, C, D, E, F for all records in the temp table, limiting to 500 at a time: delete from B where exists (select 1 from temp where b.col1 = temp.col1);
3. Why is it taking so much time to delete records from table A?
Is there anything that makes Oracle refer to all the dependent tables while deleting data from such a master table, even if no dependent data is present?
I am working with an online application with the database in Oracle 10g. We have a table with 10 million rows and this table is expected to grow in future too. Moreover, we cannot archive some of these rows because these records are required for referencing.
We have all the necessary indexes on the table, but querying it takes a lot of time, especially when it is joined with other tables. I am looking for methods with which I can manage this table in a better way, so that queries joining this table would execute faster.
We have two databases running on 10.2.0.4 and 9.2.0.8. Both have the same unpartitioned table of size 80 GB. I am exporting the table on 10g using parallel=8 and a dumpfile with the %U option. That took around 4 hours to export the table.
And on 9.2.0.8 I am exporting using the parameters below, taking around 5 hours:
buffer=2000000 recordlength=64000
What options can I try to speed up the export in both versions?
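For comparison, the commands being described would look roughly like the sketch below (directory, schema and file names are hypothetical): Data Pump with PARALLEL and %U dump files on 10g, and the classic exp on 9i, where DIRECT=Y (direct path) is one further option to try:

# 10.2.0.4: Data Pump export, 8 workers each writing their own dump piece
expdp system DIRECTORY=dump_dir DUMPFILE=bigtab_%U.dmp LOGFILE=bigtab_exp.log TABLES=app_owner.big_table PARALLEL=8

# 9.2.0.8: classic export with direct path
exp system FILE=bigtab.dmp LOG=bigtab_exp.log TABLES=app_owner.big_table RECORDLENGTH=64000 DIRECT=Y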
I have a large table and want to calculate just a few values. Therefore, I don't want to create a new table; I want to update the existing one. Here is an example:
I want to calculate the lagged value (VALUE_L1) for ID = 4 only (-> two values).
create table zTEST (
  PRODUCT  number,
  ID       number,
  VALUE    number,
  VALUE_L1 number
);
[Code]..
I tried this, but obviously window functions are not allowed in an UPDATE statement:
update zTEST
set VALUE_L1 = lag(VALUE) over (partition by PRODUCT order by ID)
where ID = 4
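One common workaround is to compute the analytic in an inline view and apply it with MERGE. A sketch of that shape; note the LAG is computed over all rows, and the ID = 4 restriction is applied only when updating:

MERGE INTO zTEST t
USING (SELECT PRODUCT, ID,
              LAG(VALUE) OVER (PARTITION BY PRODUCT ORDER BY ID) AS value_lag
       FROM   zTEST) s
ON (t.PRODUCT = s.PRODUCT AND t.ID = s.ID)
WHEN MATCHED THEN
  UPDATE SET t.VALUE_L1 = s.value_lag
  WHERE t.ID = 4;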
At the moment we use range-hash partitioning of a large dimension table (dimensional model warehouse) with 2 levels - range partitioned on columns only available at the bottom level of the hierarchy - date and issue_id.
The result is a partition with a null value - I assume we would get a null partition in the large fact table if it were partitioned by reference to the large dimension. The large fact table is similarly partitioned: date range-hash, with local bitmap indexes.
It was suggested that we would get automatic partition-wise joins if we used reference partitioning. I would have thought we would get that with range-hash on both tables.
Our application uses two instances: one for the live active data and the other for the reports data. We have a process which moves the data from the live instance to the reports instance every night. In a single-database environment the process works without any issues. However, when we move to the RAC environment, the inserts into the large table in the reports db get locked and we are unable to insert data into the reports db.
What we are performing is:
Insert into my_table_rpt select * from my_table_live@db_link_to_livedb;
Issues:
my_table_rpt gets locked
We found a workaround: disable table locking on the destination and re-enable it after the insert.
ALTER TABLE my_table_rpt DISABLE TABLE LOCK;
Insert the data to the reports database table
Then
ALTER TABLE my_table_rpt ENABLE TABLE LOCK
Question:
Why does the large destination table (my_table_rpt) get locked in the RAC environment?
We have data archive scripts; these scripts move data for a date range to a different table. So the script has two parts: first, copy data from the original table to the archive table; second, delete the copied rows from the original table. The first part executes very fast, but the deletion takes too long, i.e. around 2-3 hours. The customer analysed the delete query and says the script is not using an index and is going into a full table scan, but the predicate itself is the primary key. More info below.
Plan hash value: 2798378986

--------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT        |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   1 |  DELETE                 | MON_TXNS   |       |       |            |          |
|*  2 |   HASH JOIN RIGHT SEMI  |            |  2520 |  233K |    87   (2)| 00:00:02 |
|   3 |    INDEX FAST FULL SCAN | OTW_ID_TXN |  2520 | 15120 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL    | MON_TXNS   | 14260 | 1239K |    83   (0)| 00:00:02 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
I have a report with a single row having a large number of columns. I have to use a scroll bar to see all the columns. Is it possible to design the report in the format below (half the columns on one side of the page, half on the other side):
Column1   Data     Column11   Data
Column2   Data     Column12   Data
Column3   Data     Column13   Data
Column4   Data     Column14   Data
Column5   Data     Column15   Data
Column6   Data     Column16   Data
Column7   Data     Column17   Data
Column8   Data     Column18   Data
Column9   Data     Column19   Data
Column10  Data     Column20   Data

I am using Apex 4.2.3 on Oracle 11g XE.
I have two design alternatives and need to understand how expensive (in terms of speed) one is against the other for a medium-size table (100K-200K records):
create table xyz (
  f1 number not null,
  f2 varchar2(20) not null,
  f3 number not null,
  f4 varchar2(50),
[code]....
The idea is to optimize the design by using a PK instead of the 3 keys, and there is a debate whether searching a unique index field (2nd scenario) is the same speed as searching a PK field (1st scenario).
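For concreteness, the two alternatives being compared would look something like the sketch below (column roles are hypothetical): a surrogate primary key versus a composite unique constraint over the three natural keys. Both are ordinary B-tree lookups, so single-row access through either is typically comparable at this table size.

-- 1st scenario: surrogate primary key
CREATE TABLE xyz_pk (
  id NUMBER       NOT NULL,
  f1 NUMBER       NOT NULL,
  f2 VARCHAR2(20) NOT NULL,
  f3 NUMBER       NOT NULL,
  f4 VARCHAR2(50),
  CONSTRAINT xyz_pk_pk PRIMARY KEY (id)
);

-- 2nd scenario: composite unique key over the three natural key columns
CREATE TABLE xyz_uk (
  f1 NUMBER       NOT NULL,
  f2 VARCHAR2(20) NOT NULL,
  f3 NUMBER       NOT NULL,
  f4 VARCHAR2(50),
  CONSTRAINT xyz_uk_uk UNIQUE (f1, f2, f3)
);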
I have a table Product as:

desc product
Name                        Null     Type
--------------------------- -------- --------------
PRODUCT_ID                  NOT NULL NUMBER
INGREDIENT                           VARCHAR2(20)
The data in INGREDIENT is separated by ','.

PRODUCT_ID  INGREDIENT
----------  ----------
1           A,B,C
2           A,D
3           E,F
I need to write a SQL statement which will retrieve one product/ingredient pair in each row, as:

PRODUCT_ID  INGREDIENT
----------  ----------
1           A
1           B
1           C
2           A
2           D
3           E
3           F
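One common way to do this split without a custom function is REGEXP_SUBSTR plus a CONNECT BY that generates one level per list element (10g+ syntax), using the PRODUCT table above:

SELECT product_id,
       regexp_substr(ingredient, '[^,]+', 1, level) AS ingredient
FROM   product
CONNECT BY regexp_substr(ingredient, '[^,]+', 1, level) IS NOT NULL
       AND PRIOR product_id = product_id
       AND PRIOR sys_guid() IS NOT NULL
ORDER BY product_id, ingredient;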
Suppose I have an Oracle BLOCK_SIZE of 16k and my Linux OS-level block size is 4k. How will a 16k Oracle block be stored at the OS level, and what will the internal block-splitting process be?
I was wondering if there is an Oracle function available to split a string based on a delimiter character. For example, if I have a table consisting of:
Aim: architecture change in an existing application. Domain: health care. Background: there are 2 applications (front end: one in Oracle Forms, dealing with the accounts module, and another in some legacy application, dealing with the patient, clinical and diagnosis modules) using and sharing the same Oracle 9i database.
The patient-related modules are being moved into another database (Java as the front end, Oracle 10g as the backend) which is normalized - eliminating duplicate tables and columns; also its table and column names do not match the existing (patient) system.
Now the requirement is to make the existing application related only to the Accounts module (having complicated business logic written in packages) work as it is, without changing the code or design drastically.
Questions:
1. Now how best can this task be completed without affecting the existing Accounts system drastically (with minimal changes)?
2. What are the possible best approaches to achieve this?
3. What are the best ways of communicating between the 2 DBs in this scenario (maybe creating synonyms, views, etc.)? See the sketch after this list.
4. What challenges need to be addressed?
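On question 3, the usual mechanics for letting the Accounts database see the new patient database are a database link plus synonyms or views that map the new names back to the old ones the packages expect. A small sketch with hypothetical names:

-- in the accounts (9i) database, pointing at the new 10g patient database
CREATE DATABASE LINK patient_db
  CONNECT TO patient_app IDENTIFIED BY secret
  USING 'PATIENT10G';

-- expose a renamed remote table under the old name the packages expect
CREATE OR REPLACE VIEW patient AS
SELECT pat_id   AS patient_id,
       pat_name AS patient_name
FROM   patient_master@patient_db;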
I need to join the ISSUED_REMOVED table with the ITL table, producing one row per unit of quantity.
E.g. if a unit with serial no '354879019900009' has a part (1015268) that was issued 8 times and then unissued 4 times, the part was finally issued 4 times, so I need to show 4 rows with quantity 1 each for that part and unit serial number.
create table ISSUED_REMOVED_ITEM (REPAIRED_ITEM_ID, ISSUED_REMOVED_ITEM_ID, ISSUED_PART_ID, OPER_ID, ISSUED_REMOVED_QUANTITY) as
select 122013187, 1323938, 1015268, 308, 2 from dual union all
select 122013187, 1323939, 1015269, 308, 2 from dual union all
select 122013187, 1323940, 1015268, 308, 2 from dual union all
select
[code]....
-- The way I need to join the Issued_Removed Table
select *
from ITL_TEST ITL
left join issued_removed_item iri
  on iri.REPAIRED_ITEM_ID = ITL.ITEM_ID   --ITL.ITEM_ID --rlsn2.item_id --126357561
 and iri.oper_id = 308                    --in ( 308, 309)
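For the one-row-per-unit-of-quantity part, the usual trick is to aggregate to a net quantity and then multiply the rows out with CONNECT BY LEVEL. A sketch under the assumption that issues and removals can be netted by summing a signed ISSUED_REMOVED_QUANTITY (the real sign or flag convention may differ):

WITH net AS (
  SELECT repaired_item_id,
         issued_part_id,
         SUM(issued_removed_quantity) AS net_qty   -- assumed: issues positive, removals negative
  FROM   issued_removed_item
  GROUP BY repaired_item_id, issued_part_id
)
SELECT repaired_item_id,
       issued_part_id,
       1 AS quantity                               -- one row per remaining unit
FROM   net
WHERE  net_qty > 0
CONNECT BY LEVEL <= net_qty
       AND PRIOR repaired_item_id = repaired_item_id
       AND PRIOR issued_part_id = issued_part_id
       AND PRIOR sys_guid() IS NOT NULL;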