I have a table with plenty of rows. In production, I would estimate it to be from 30 million to 300 million. I need to update one column (flag) in all the rows created before a certain date. Now, just saying:
UPDATE MyTable SET flag = 3 WHERE created < to_date('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
COMMIT;
does not seem like a good idea - the undo generated by one huge transaction would become too big. I will write a PL/SQL script for this. The question is whether I should:
a) update each row separately, and commit after every 10,000 rows (WHERE RowId = [rowId]), or
b) update 10,000 rows at a time by range (WHERE RowId > [some_row_id] AND RowId < [some_row_id_2])
In the latter case the some_row_ids would naturally be fetched first. The RowIds come from a sequence. So which one would be more effective? I am not too familiar with PL/SQL, or Oracle for that matter.
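A minimal sketch of option (b), assuming the sequence-driven column is named id (an assumption - substitute the real column name). Note that committing inside the loop keeps undo small but exposes the loop to ORA-01555 if other sessions read the table concurrently:

DECLARE
  l_min_id MyTable.id%TYPE;
  l_max_id MyTable.id%TYPE;
  l_step   CONSTANT PLS_INTEGER := 10000;
BEGIN
  SELECT MIN(id), MAX(id) INTO l_min_id, l_max_id FROM MyTable;
  FOR i IN 0 .. TRUNC((l_max_id - l_min_id) / l_step) LOOP
    UPDATE MyTable
       SET flag = 3
     WHERE id >= l_min_id + i * l_step
       AND id <  l_min_id + (i + 1) * l_step
       AND created < TO_DATE('2010-10-08 23:59:59', 'YYYY-MM-DD HH24:MI:SS');
    COMMIT;  -- one commit per 10,000-id window
  END LOOP;
END;
/

Range updates like this do one index range scan per batch, whereas committing per row (option a) pays the single-row lookup and commit overhead tens of millions of times, so (b) is normally the more effective of the two.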
Now, if there is more than one row with the same email, the one with the latest edit_date should have its missing fields filled in using the same field's value from the other rows (if the field is present in more than one row, the one with the next-latest edit_date is to be considered), and the archived status of all rows with the same email except this master row must be set to 1.
The create_date must be set to the minimum of all the create_date values of rows with the same email value. The create table statement is as follows:
CREATE TABLE student(
  Id          NUMBER PRIMARY KEY,
  first_name  VARCHAR2(30) NOT NULL,
  last_name   VARCHAR2(30) NOT NULL,
  email       VARCHAR2(30) NOT NULL,
  contact     NUMBER,
  adress1     VARCHAR(30),
  adress2     VARCHAR(30),
  city        VARCHAR(30),
  edit_date   DATE,
  create_date DATE,
  archived    CHAR(1))
Sample insert statements would be: insert into student values
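Under the assumptions that "missing" means NULL, that Id breaks edit_date ties, and that the master row's archived flag should end up '0' (none of which the post states explicitly), a single MERGE with analytic functions can cover all three requirements; a hedged sketch:

MERGE INTO student s
USING (
  SELECT id,
         ROW_NUMBER() OVER (PARTITION BY email ORDER BY edit_date DESC, id DESC) AS rn,
         FIRST_VALUE(contact IGNORE NULLS) OVER (PARTITION BY email ORDER BY edit_date DESC, id DESC
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS contact,
         FIRST_VALUE(adress1 IGNORE NULLS) OVER (PARTITION BY email ORDER BY edit_date DESC, id DESC
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS adress1,
         FIRST_VALUE(adress2 IGNORE NULLS) OVER (PARTITION BY email ORDER BY edit_date DESC, id DESC
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS adress2,
         FIRST_VALUE(city IGNORE NULLS) OVER (PARTITION BY email ORDER BY edit_date DESC, id DESC
             ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS city,
         MIN(create_date) OVER (PARTITION BY email) AS min_create
    FROM student
) d
ON (s.id = d.id)
WHEN MATCHED THEN UPDATE SET
  s.contact     = CASE WHEN d.rn = 1 THEN d.contact    ELSE s.contact     END,
  s.adress1     = CASE WHEN d.rn = 1 THEN d.adress1    ELSE s.adress1     END,
  s.adress2     = CASE WHEN d.rn = 1 THEN d.adress2    ELSE s.adress2     END,
  s.city        = CASE WHEN d.rn = 1 THEN d.city       ELSE s.city        END,
  s.create_date = CASE WHEN d.rn = 1 THEN d.min_create ELSE s.create_date END,
  s.archived    = CASE WHEN d.rn = 1 THEN '0'          ELSE '1'           END;

FIRST_VALUE ... IGNORE NULLS ordered by edit_date descending picks each field from the latest row that actually has it, which is exactly the "next-latest edit_date" fallback rule; the explicit ROWS frame is needed so the master row can see values in older rows.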
I have a table A, whose table structure is in the below format.
Table A
ID   DESC  VALUE
123  A     454
123  B     1111
123  C     111
123  D     222
124  A     123
124  B     1
124  C     111
124  D     44
Now I need to insert the data from this table into another table B, the structure of which is as below.
Table B
ID   A    B     C    D
123  454  1111  111  222
124  123  1     111  44
How do I frame a query to fetch data from table A and insert it into table B? I don't want to use the max and decode combination, as it would return only a single row for an ID; I need all the IDs to be displayed.
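On 11g, the PIVOT clause avoids hand-written decode. Note it still aggregates internally, but it produces one row per ID carrying all four values - which is exactly the shape of Table B, so every ID is displayed. A sketch, assuming the DESC and VALUE columns are really named descr and val (DESC is a reserved word in Oracle and cannot be a column name):

INSERT INTO table_b (id, a, b, c, d)
SELECT id, a, b, c, d
  FROM table_a
 PIVOT (MAX(val) FOR descr IN ('A' AS a, 'B' AS b, 'C' AS c, 'D' AS d));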
I have a set of rows based on a complex view over multiple tables.
I will be updating some of its columns from the front end. Is there any way to lock those rows of data while updating, so that no other users can update them?
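The standard Oracle answer is pessimistic locking with SELECT ... FOR UPDATE. Complex views are often not key-preserved and therefore not lockable directly, so the usual pattern is to lock the rows in the base table being edited (table and column names below are hypothetical):

SELECT o.order_id, o.status
  FROM orders o
 WHERE o.order_id = :chosen_id
   FOR UPDATE NOWAIT;  -- raises ORA-00054 immediately if another session holds the lock

The lock is released at COMMIT or ROLLBACK. If the front end cannot hold a database transaction open across user think time (typical for web applications), optimistic locking - a version or last-updated column checked in the UPDATE's WHERE clause - is the usual alternative.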
I need to find the identical rows in the below table based on the ID column and update the previous identical record's end_date to the latest record's start_date - 1.
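The table definition was not included, so assume a hypothetical history_t(id, start_date, end_date). A LEAD-based MERGE expresses "previous row's end_date = next row's start_date - 1" in one statement:

MERGE INTO history_t t
USING (SELECT rowid AS rid,
              LEAD(start_date) OVER (PARTITION BY id ORDER BY start_date) AS next_start
         FROM history_t) s
ON (t.rowid = s.rid)
WHEN MATCHED THEN UPDATE
   SET t.end_date = s.next_start - 1
 WHERE s.next_start IS NOT NULL;  -- the latest record per id keeps its end_date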
I have two tables, eim_asset and eim_asset1. I want to update the table eim_asset1 using the following update SQL (or logic):
update eim_asset1
   set emp_emp_login =
       (select login from s_user where row_id in
         (select row_id from s_emp_per where row_id in
           (select pr_emp_id from s_postn where row_id in
             (select position_id from s_accnt_postn where ou_ext_id in
               (select row_id from s_org_ext where row_id in
                 (select owner_accnt_id from s_asset where owner_accnt_id is not null))))))
It gives me the ORA error ORA-01427: single-row subquery returns more than one row. I know why I am getting it: because of the one-to-many relationship between owner accounts and their assets.
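The essential fix is to correlate the subquery to each eim_asset1 row so it yields exactly one login per row. A sketch - the asset_row_id correlation column is hypothetical (whatever column ties eim_asset1 back to s_asset), and ROWNUM = 1 is a blunt guard against residual duplicates:

UPDATE eim_asset1 ea
   SET ea.emp_emp_login =
       (SELECT u.login
          FROM s_user u
          JOIN s_emp_per ep     ON ep.row_id = u.row_id
          JOIN s_postn p        ON p.pr_emp_id = ep.row_id
          JOIN s_accnt_postn ap ON ap.position_id = p.row_id
          JOIN s_org_ext oe     ON oe.row_id = ap.ou_ext_id
          JOIN s_asset a        ON a.owner_accnt_id = oe.row_id
         WHERE a.row_id = ea.asset_row_id  -- hypothetical correlation column
           AND ROWNUM = 1);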
I am trying to delete one month of data from a table which contains end-customer sales data. The total row count in the table is 30 crores (300 million), and the data for one month is nearly 10 crores (100 million). I am using Oracle 10g, and the date field to be used in the delete condition is indexed.
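Deleting a third of a 300-million-row table with DELETE generates enormous undo and redo, and index maintenance makes it slower still. A common alternative, where a maintenance window allows it, is to copy the rows to keep and swap the tables; names and dates below are placeholders only:

CREATE TABLE sales_keep NOLOGGING AS
  SELECT *
    FROM sales_data
   WHERE sale_date <  DATE '2012-06-01'
      OR sale_date >= DATE '2012-07-01';  -- keep everything outside the month

-- recreate indexes, constraints, grants and triggers on sales_keep, then:
DROP TABLE sales_data PURGE;
ALTER TABLE sales_keep RENAME TO sales_data;

Longer term, range-partitioning the table by month turns this job into a single ALTER TABLE ... DROP PARTITION.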
I have a problem viewing tens of millions of rows in Oracle 11g. I have a table Book_Issue that has tens of millions of rows, but when I execute a query to see all the data:

select * from Book_Issue

it only displays 5,000 rows. What query should I execute to view millions of rows in Oracle 11g?
There is a table in the database with millions of records, and a query - select rowid, ANI, DNIS, message from tbl_sms_talkies where rownum <= :"SYS_B_0" - is using high CPU and also has a high number of executions.
Which is the fastest way of inserting 60 million records from a view into a table?
method 1:
create table t_temp_table as select * from v_dump_data;
method 2:
through bulk collect
---Bulk_Collect With FORALL----------
DECLARE
  TYPE srvc_tab IS TABLE OF t_temp_table%ROWTYPE;
  l_srvc_tab    srvc_tab := srvc_tab();
  l_start_time  NUMBER;
  l_end_time    NUMBER;
  l_error_count NUMBER;
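For raw volume, method 1 (CTAS, optionally with NOLOGGING and PARALLEL) is usually the fastest because it uses a direct-path load and bypasses most undo and redo. For completeness, a runnable version of the bulk-collect fragment above might look like this, assuming v_dump_data's columns match t_temp_table:

DECLARE
  CURSOR c_src IS SELECT * FROM v_dump_data;
  TYPE srvc_tab IS TABLE OF t_temp_table%ROWTYPE;
  l_srvc_tab srvc_tab;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_srvc_tab LIMIT 10000;  -- 10k rows per fetch
    EXIT WHEN l_srvc_tab.COUNT = 0;
    FORALL i IN 1 .. l_srvc_tab.COUNT
      INSERT INTO t_temp_table VALUES l_srvc_tab(i);
    COMMIT;
  END LOOP;
  CLOSE c_src;
END;
/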
We are having major issues with the batch run. We are using an Oracle 11g DB. We run the scripts to populate the tables and then call scripts to run the extractions. The issue is that each time we run the SQL it takes wildly inconsistent amounts of time. We have created indexes and run the DB stats, then run the extractions. The SQL sometimes takes 10 minutes and sometimes takes hours to run. This is a major show-stopper for the project.
I am executing multiple PL/SQL files (.sql) with a single batch file. The batch file sql.bat has three SQL sub-tasks to complete once it is run. The sql.bat is shown below.
@Echo off
CD C:\Report
echo Loadin tables from text file Report.txt
sqlplus security/password <c:\Report\loader_security.sql
echo Creating Security table
sqlplus security/password <c:\Report\creating_security_final.sql
echo Inserting text file Security table
sqlplus security/password <c:\Report\insert_security_final.sql
PAUSE
The sql.bat runs perfectly if I double-click the sql.bat file on its own. But if I call sql.bat from a different batch file, Final.bat, it throws the below error.
Error
-----------
Executing SQL commands and loading file into SQL tables
Loadin tables from text file Report.txt
Error 6 initializing SQL*Plus
SP2-0667: Message file sp1<lang>.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
The Final.bat file calls other bat files too. It is as shown below.
CD C:\Report\Security
echo Merging all Files
CALL merge.bat

CD C:\Report\Security
echo Deleting old files
CALL del.bat

CD C:\Report\Security
echo Executing SQL commands and loading file into SQL tables
CALL sql.bat
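SP2-0667/SP2-0750 usually mean the sqlplus launched from Final.bat cannot resolve its Oracle environment, which can differ from the environment an interactive double-click inherits. One hedged fix is to set the variables explicitly in Final.bat before the CALL (the paths below are examples; use your actual Oracle home):

REM sketch only - adjust ORACLE_HOME to your installation
set ORACLE_HOME=C:\oracle\product\11.2.0\dbhome_1
set PATH=%ORACLE_HOME%\bin;%PATH%
CALL sql.bat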
I'm looking for a query which returns the batches for which all the children are in either 'A_STATUS', 'B_STATUS' or 'C_STATUS'. For the data below I'm expecting the query to return batches 2, 3 and 4.
create table batch (batchid number);
insert into batch values(1);
insert into batch values(2);
insert into batch values(3);
insert into batch values(4);
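The child table's DDL was not posted; assuming a hypothetical batch_child(batchid, status), the "all children in the allowed set" condition is a NOT EXISTS on any child outside the set:

SELECT b.batchid
  FROM batch b
 WHERE NOT EXISTS (SELECT 1
                     FROM batch_child c
                    WHERE c.batchid = b.batchid
                      AND c.status NOT IN ('A_STATUS', 'B_STATUS', 'C_STATUS'))
   AND EXISTS (SELECT 1 FROM batch_child c
                WHERE c.batchid = b.batchid);  -- exclude batches with no children at all

Drop the final EXISTS guard if childless batches should also qualify.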
I have a .bat file on my client system which opens a web page when executed (by double-clicking on it). I want to execute the same batch file from my PL/SQL block, so that after my PL/SQL block runs, the .bat file executes and opens that web page.
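One caveat first: PL/SQL executes on the database server, so it can only launch a .bat that lives on the server - it cannot reach out to a client PC (for that, the client tool itself has to launch the file, e.g. Forms WebUtil/HOST). If a server-side launch is acceptable, DBMS_SCHEDULER can run an external executable; a sketch, with hypothetical job name and script path, requiring the external-job privileges and (on Windows) the OracleJobScheduler service:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name            => 'open_page_job',
    job_type            => 'EXECUTABLE',
    job_action          => 'C:\Windows\System32\cmd.exe',
    number_of_arguments => 2);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('open_page_job', 1, '/c');
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('open_page_job', 2, 'C:\scripts\mypage.bat');
  DBMS_SCHEDULER.ENABLE('open_page_job');  -- no schedule, so it runs immediately
END;
/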
I'm having an issue with stale optimizer statistics for some SQLs that are run in a batch process. The problem is that the process runs many times during the day - sometimes 20 to 30 times - and each time, the tables are updated, i.e. rows are inserted or deleted, etc.
So eventually the optimizer statistics for those tables become stale and the performance of the SQLs slows down (a lot). How is it best to gather the optimizer stats on the tables so they don't become stale as the batch process runs? The added difficulty is that I can't add or modify the code in the batch process, because it is delivered by the vendor as-is.
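Since the vendor code can't change, the stats work has to happen outside it. One option is a DBMS_SCHEDULER job that regathers stats on the volatile tables between runs; a sketch (schema and table names are placeholders):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,
    tabname          => 'BATCH_WORK_TBL',                 -- hypothetical table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE,                             -- include the indexes
    no_invalidate    => FALSE);                           -- force dependent cursors to re-parse
END;
/

The other standard approach for tables whose contents churn constantly is the opposite: gather stats once when the tables hold representative data, then DBMS_STATS.LOCK_TABLE_STATS so plans stop flip-flopping as row counts swing.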
1. Make the jobname distinct, because it keeps giving me multiple entries for each jobname.
2. Add the start_time of SOD_start_data9_UAT1 to the end_time of fodba_MUAT1 to get the combined duration.
3. CONCAT the jobnames SOD_start_data9_UAT1 and fodba_MUAT1.
4. Generate the last seven days' batch run times.
5. Generate a report in .csv format and email it out.
6. I have access to SQL*Plus and PL/SQL Developer.
I am trying to create a batch file which will be executed as a Windows scheduled task. This batch file will have a SQL*Plus script running an Oracle query. I can run this query from the command prompt with no problem.
We have a critical application batch that runs daily and has a 7.5-hour window, after which the application needs to come online. During the peak batch times the truncates run very slowly, slowing the batch jobs down considerably, due to which the batch is going beyond the window.
The wait events that show up while the truncates are running are:

- local write wait
- enq: RO - fast object reuse
- enq: CR - block range reuse ckpt
- db file parallel write
The ASH reports show that the top sessions are those executing DMLs (insert, merge, update) and DDLs (create/alter index and truncate). They also show that the blocking sessions are the background processes CKPT and DBWR. The changes made to the DB configuration to address these issues are:
1) We have increased the DBWR processes to 2.
2) Reduced the buffer cache size to 20G (from the original 30G).
3) Flushing the buffer cache before the batch begins, in order to reduce the load on DBWR during the batch peak time.
4) Set the parameter filesystemio_options to SETALL (from NONE).
5) Tuned the EVA (SAN storage) to improve its performance - by distributing the loads evenly between the controllers, reducing the IO transfer block size, etc.
6) Suggested using the REUSE STORAGE clause to improve truncate performance.
All of these have helped bring a semblance of control, but the fact remains that the batch is generating more jobs (and hence increasing data volume) over time, due to it being the peak season. This causes an inevitable increase in the number of sessions, all running DMLs and DDLs, which are IO-intensive operations.
Suggestions pending from our end:
1) Increase DBWR beyond 2. For this we need a H/W upgrade, since we have maxed out the number of DBWR processes that can be configured.
2) Implement asynchronous IO for DBWR, which on HP-UX requires moving to raw disks; hence we have suggested using ASM.
3) Tune the application to either reduce the IO generated or redistribute the jobs so that those with maximum loads don't run together.
Instead of truncating tables, can we rename the tables and delete them later? Will this improve performance?
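A sketch of that idea, with placeholder names:

ALTER TABLE stage_tbl RENAME TO stage_tbl_old;
CREATE TABLE stage_tbl AS SELECT * FROM stage_tbl_old WHERE 1 = 0;  -- empty copy for the next run
-- later, off-peak:
DROP TABLE stage_tbl_old PURGE;

Two caveats: the CTAS copy does not carry over indexes, constraints, grants or triggers, so those must be recreated; and the DROP triggers the same object-level checkpoint that makes truncate slow, so this defers the cost to a quieter time rather than removing it.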
I'm working on a self-assessment project regarding our tax returns. Currently, this is how it works: a lodged return generates a return number, but is batched later. In the proposed change, they want the same process, whereby a return is still generated, but once 10 returns have been generated on the same screen, a batch is to be created and those 10 returns added to it. We are on Oracle 10g and work with Forms and Reports 10g and TOAD/SQL*Plus as tools, so I was thinking of changing it in Post-Query, but the suggestion is to add on to the System Parameter table.
I want to run several scripts in batch, and I use autoRun.bat to call main.sql, which includes several scripts. If there is a PL/SQL error in a script, that script should stop running, but not exit SQL*Plus. If the PL/SQL must exit, can it output the error messages to a file?
Please don't suggest "whenever sqlerror exit|continue ...", because it will either exit SQL*Plus or continue running the other SQL, and then it's not easy to know where the error happened.
autoRun.bat
---------------------------------------------------------------------------
sqlplus "sys/manager@ORADB as sysdba" @main.sql
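One option that avoids WHENEVER SQLERROR entirely is SQL*Plus error logging (available from SQL*Plus 11.1): each failing statement is recorded, with the script name, in the SPERRORLOG table, which SQL*Plus creates automatically on first use. A sketch for main.sql - the script names are examples:

SET ERRORLOGGING ON IDENTIFIER batch_run
@script1.sql
@script2.sql
SET ERRORLOGGING OFF

-- afterwards, find out what failed and where:
SELECT timestamp, script, statement, message
  FROM sperrorlog
 WHERE identifier = 'batch_run';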
When I double-click the .bat file, it gets the proper info. But when I call this .bat file through a form, it shows a blank screen. Why is this .bat file not running?