I have written a purge package that deletes records older than 10 years. Since the data volume is huge, the purge was taking 14+ hours. To improve performance, I disabled the constraints, deleted the records, and then re-enabled the constraints. This was quite quick, but the one problem is rollback: if enabling a constraint fails for some reason, there is no way to roll back, because enabling and disabling constraints is DDL and triggers an implicit commit.
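A minimal sketch of that pattern, with hypothetical table, column, and constraint names (t, created_date, fk_t):

-- Each ALTER TABLE below is DDL and implicitly commits, which is why
-- the DELETE can no longer be rolled back once the re-enable step runs.
ALTER TABLE t DISABLE CONSTRAINT fk_t;

DELETE FROM t
 WHERE created_date < ADD_MONTHS(SYSDATE, -120);   -- older than 10 years

ALTER TABLE t ENABLE CONSTRAINT fk_t;              -- if this fails, the delete is already final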
TABLE:
create table TEST_PREC (NO NUMBER(4,2));

BEGIN
  INSERT INTO TEST_PREC VALUES (12.34);
  DBMS_OUTPUT.PUT_LINE('the no of records before commit '||SQL%ROWCOUNT);
  COMMIT;  /* What's happening inside commit? */
END;
/
I want to purge a table that has more than 98M rows. Here are the details.
Purge process I followed:

Step 1. Created a backup table from the main table with the required data only:

create table abc_98M_purge as
select * from abc_98M
 where trunc(tran_date) >= '01-jul-2011';

-> table created with 5,325,411 rows
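As an aside, the TRUNC call and the implicit string-to-date conversion in that predicate can be avoided; for a >= comparison against a midnight date, TRUNC(tran_date) >= the date is equivalent to tran_date >= that date, and the rewrite leaves any index on tran_date usable:

create table abc_98M_purge as
select * from abc_98M
 where tran_date >= DATE '2011-07-01';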
Step 2. Truncated the main table: truncate table abc_98M
Step 3. Inserted all 5,325,411 rows back into abc_98M from abc_98M_purge using the procedure below:
DECLARE
  TYPE ARRROWID IS TABLE OF ROWID INDEX BY BINARY_INTEGER;
  tbrows ARRROWID;
  row    PLS_INTEGER;
  CURSOR cur_insert IS
    SELECT rowid FROM abc_98M_purge ORDER BY rowid;
BEGIN
  OPEN cur_insert;
  LOOP
    [code]....
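The loop is cut off above, but since the whole of abc_98M_purge is being copied back in one go, the row-by-row approach can be replaced by a single direct-path insert; a minimal sketch (whether the APPEND hint is appropriate depends on your logging and recovery requirements):

INSERT /*+ APPEND */ INTO abc_98M
SELECT * FROM abc_98M_purge;

COMMIT;

With a direct-path insert there is no need to order by ROWID or to commit in batches, since the 5.3M rows are written above the high-water mark in one pass.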
I'm currently working on a project to archive old data and then purge that same data from the main table.
Here is a detailed description:
There are around 50-odd tables from which I need to archive the old data (matching certain filter conditions, not date-based). That means I have to store the data in a temp table, and once it is stored there, delete those rows from the main table. The temp table will later be exported and stored in an archive database (a separate database). These tables are very large: one of them is actually 250 GB in size, and all of them carry many indexes, both normal and bitmap. The 250 GB table has 40 million rows that need to be archived and purged, out of 540 million rows in total. On this table alone there are 50 bitmap indexes and 2 normal indexes. It is partitioned on a date column, but that date column is not useful for identifying the old data. Around 20 of the tables are quite similar in size to the one described above; the rest are somewhat smaller.
We have to execute this activity over a weekend, which gives us about 48 hours to complete it. What are the best possible ways to handle it? Most importantly, we must be able to finish within the specified 48-hour window.
The solution we are now thinking of is:
1. Create the temp table: create table tmp_tbl as select * from main_table where <<conditions identifying old data>>
2. Once the temp table is created, save a copy of the index definitions that exist on the main table and then drop those indexes.
3. Execute a PL/SQL script to perform the bulk delete from the main table, committing every 100,000 rows (see the sketch after this list).
4. Once the bulk delete is finished, recreate the indexes on the main table from the copy made in the earlier step.
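A minimal sketch of the step 3 delete, assuming the filter is something like status = 'CLOSED' (a hypothetical stand-in for the real conditions) and using the 100,000-row commit interval from the plan:

DECLARE
  TYPE t_rowids IS TABLE OF ROWID;
  l_rowids t_rowids;
  CURSOR c_old IS
    SELECT rowid
      FROM main_table
     WHERE status = 'CLOSED';      -- hypothetical 'old data' condition
BEGIN
  OPEN c_old;
  LOOP
    FETCH c_old BULK COLLECT INTO l_rowids LIMIT 100000;
    EXIT WHEN l_rowids.COUNT = 0;
    FORALL i IN 1 .. l_rowids.COUNT
      DELETE FROM main_table WHERE rowid = l_rowids(i);
    COMMIT;                        -- commit every 100,000 rows
  END LOOP;
  CLOSE c_old;
END;
/

One caveat: fetching across commits like this can hit ORA-01555 on a long run, so undo retention needs to cover the whole delete.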
Our main worry is step 4: considering the size of these tables and the number of indexes to be built, we are not sure how long the index re-creation will run for each table.
Depending on how it goes, we may have to split the activity into 2-3 phases spread across 2-3 weekends. Even then we are not sure whether we will be able to pull it off.
I'm working on a Java-based web application, and we have unit tests covering all code that interacts with the database or with our DB code. The Spring framework allows us to perform some DML within a transaction before each test and then roll back the changes. For the most part this works; however, when I run the full suite of unit tests, it randomly commits data to the database, causing the rest of the tests to fail.
Will Oracle's auditing let me see where this odd-ball commit is occurring? Is there another way for me to see when data is being committed?
This does not appear to be happening on any of the systems we've deployed; however, it is a bit unsettling, and I would like to know why it is occurring so that we can prevent it from happening in production.
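On the "another way" question: as far as I know, standard auditing does not record commits, but SQL trace does. A sketch, assuming you can find the test session's SID and SERIAL# (the numbers below are placeholders):

BEGIN
  -- Enable SQL trace for the suspect session (needs suitable privileges).
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123,
                                    serial_num => 4567,
                                    waits      => TRUE,
                                    binds      => FALSE);
END;
/

In the resulting trace file, every transaction end appears as an XCTEND line: rlbk=0 means a commit, rlbk=1 a rollback, so you can see exactly which statement preceded the stray commit.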
Actually, the program is supposed to return the records instead of returning the error. I've included the table that I'm trying to retrieve the records from as well.
What is the difference between these two constructs? When I finally read asktom.oracle.com, I was totally confused, because Tom says we can retrieve more than one row with an implicit cursor. If that is the case, what is the difference between these two cursors, and when should each be used? My understanding was: implicit cursors -> single-row query; explicit cursors -> multi-row query. Experts?
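For reference, a minimal sketch of the two constructs side by side, using the HR sample schema's employees table:

-- Implicit cursors: SELECT INTO (one row) and the cursor FOR loop,
-- which does fetch many rows without an explicit OPEN/FETCH/CLOSE.
DECLARE
  l_name employees.last_name%TYPE;
BEGIN
  SELECT last_name INTO l_name
    FROM employees
   WHERE employee_id = 100;                          -- must return exactly one row

  FOR r IN (SELECT last_name FROM employees) LOOP    -- implicit cursor, many rows
    DBMS_OUTPUT.PUT_LINE(r.last_name);
  END LOOP;
END;
/

-- Explicit cursor: declared, opened, fetched, and closed by hand,
-- giving control over when and how many rows are fetched.
DECLARE
  CURSOR c_emp IS SELECT last_name FROM employees;
  l_name employees.last_name%TYPE;
BEGIN
  OPEN c_emp;
  LOOP
    FETCH c_emp INTO l_name;
    EXIT WHEN c_emp%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(l_name);
  END LOOP;
  CLOSE c_emp;
END;
/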
In my project, many queries have join conditions between NUMBER and VARCHAR2 columns. Will this give wrong results or an invalid-number exception, even if the VARCHAR2 column always contains numeric data?
Example:

select * from table_t1 t1, table_t2 t2
 where t1.num_col = t2.var_col   -- here var_col always holds only numeric data
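For what it's worth, in a comparison like this Oracle implicitly converts the VARCHAR2 side to NUMBER, so a single non-numeric row anywhere in the scanned data raises ORA-01722, and any index on var_col is bypassed. A common defensive rewrite is to compare as strings instead (same tables as the example):

select * from table_t1 t1, table_t2 t2
 where to_char(t1.num_col) = t2.var_col;   -- no conversion can fail

Note the semantics shift slightly: '01' or '1.0' in var_col would match the number 1 under numeric comparison but not under string comparison.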
I wrote a SELECT statement in Pro*C that contains 44 columns.
When I precompile it, it shows the error: implicit conversion of string literal to "char *" is deprecated. When I compile the same SELECT with 40 columns, it does not show any error.
But for more than 40 columns (41-44) it shows the above error.
create or replace procedure inserta_en_B (numregistros in integer) as
  ultimo_año_nuevo   date      := trunc(sysdate, 'year');
  dias_transcurridos number(3) := sysdate - ultimo_año_nuevo;
begin
[code]........
First I insert 400,000 rows into B using:
execute inserta_en_b(400000);
commit;
But then I need to insert 100,000 more rows using the stored procedure, without removing the 400,000 rows stored before. I think I need to use COMMIT, but I don't know where.
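A minimal sketch of one way to sequence it, assuming the procedure only inserts rows and never deletes or truncates: commit after each call, so the first batch is already permanent before the second call runs:

-- First load: 400,000 rows, made permanent immediately.
execute inserta_en_b(400000);
commit;

-- Second load: adds 100,000 more rows on top of the committed ones.
execute inserta_en_b(100000);
commit;

Strictly speaking, even without the first COMMIT the second call would not remove the earlier rows; the COMMIT just protects them from a later ROLLBACK or a session failure.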
Tell me the restrictions on COMMIT, i.e., where this keyword cannot be used. For example, I read somewhere that we can't use COMMIT in a trigger, and that instead we use PRAGMA AUTONOMOUS_TRANSACTION.
But my confusion arises because I see COMMIT used in a trigger on one of our database tables.
Can COMMIT be used in a trigger? If not, what should be used instead?
Another question: can COMMIT be used while creating a procedure or function?
create or replace trigger comt
after insert on tbl_city
declare
  pragma autonomous_transaction;
begin
  commit;
  dbms_output.put_line('Value is committed');
end;
/
Now when I perform an insert into tbl_city, the trigger fires properly and prints the output. But if I then perform a ROLLBACK, the data is rolled back in the table.
Why? I thought that after the commit (the one in the trigger attached to the insert on the table) no rollback should be possible on the table.
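A minimal script reproducing the sequence, assuming the trigger above is in place and a single-column layout for tbl_city; note the autonomous transaction commits only work done inside the trigger body, which is why the triggering INSERT itself can still be rolled back:

create table tbl_city (name varchar2(30));

insert into tbl_city values ('Delhi');   -- trigger fires, autonomous COMMIT runs
rollback;                                -- yet this still undoes the insert

select count(*) from tbl_city;           -- 0 rows, matching the behaviour described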
I am using SQL Developer to access and manipulate a database. I am not very sure about the format of the database (I'm new to databases), but I had to set up the TNS folder.
Anyway, I guess the problem is the same for any database.
I have a table with the BOM (bill of material) positions of certain articles, and I want to change the BOM quantities of some of the articles. What happens is that I can only change some of the rows. For other rows I get a message like this (it is in German, so I am translating):
"data was commited in another/the same session already. row cannot be updated"
This error message makes it look like somebody else has a lock on the database and is manipulating it, correct? Is it possible to see somewhere which processes/people are currently accessing the database?
I saw that there is one process/another database which has the authorization to access this database. But where can I check whether that process is accessing the database?
BTW: I used to do this process before, and it worked; I was able to manipulate arbitrary entries in the database. I guess the process or person mentioned above wasn't accessing the database at that time.
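For the "who is currently accessing" part, one common starting point, assuming your account can read the V$ views (a plain application account often cannot), is V$SESSION for connections and V$LOCKED_OBJECT for row locks:

-- Who is connected right now, and from which machine/program.
select sid, serial#, username, status, machine, program
  from v$session
 where username is not null;

-- Which sessions hold locks on which objects.
select s.sid, s.username, o.object_name, lo.locked_mode
  from v$locked_object lo
  join dba_objects o on o.object_id = lo.object_id
  join v$session   s on s.sid       = lo.session_id;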
I have made a correlated UPDATE statement using ROWID (see my attachment). It updates all the columns I wanted, but the issue is that it does not update everything on the first commit.
Suppose 6 rows are to be updated: the first commit updates 1 record, the second commit updates the 2nd record, and so on. And in Toad it shows 6 rows updated on the first commit, then 5 rows updated on the second commit, and 1 row updated on the last. I want all records to be updated on the first commit only.
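The attachment is not shown here, but a correlated ROWID update normally has this shape (table and column names below are hypothetical); a single statement like this touches every matching row in one transaction, so if rows are arriving one commit at a time, the statement is most likely being run inside a loop that commits per iteration:

update target_t t
   set t.amount = (select s.amount
                     from source_t s
                    where s.target_rowid = t.rowid)
 where exists (select 1
                 from source_t s
                where s.target_rowid = t.rowid);

commit;   -- one commit covers all updated rows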
I am calling a child form from a parent form. It works perfectly if the parent form is adding records: while entering records, when I press the button to call the child form, the whole thing works according to plan.
The problem begins when I run the EXECUTE_QUERY command in the parent form and then call the child form: it does not COMMIT_FORM. So my problem is that the child form does not work properly when the parent form has run EXECUTE_QUERY.
My findings so far:
1) I read in the documentation that when the parent form's status is QUERY_ONLY, the child will have the same mode regardless of the parameter given in CALL_FORM. So I checked :SYSTEM.FORM_STATUS in both the parent and the child form; it shows CHANGED, so this point is covered. (I don't know how the parent form ended up in CHANGED status, but at least it is doing my work.)
2) I read further and found that the COMMIT_FORM procedure sets :SYSTEM.FORM_STATUS to QUERY. Here is where I face the problem: in the child form, when I make changes and press save, both before and after COMMIT_FORM, :SYSTEM.FORM_STATUS remains CHANGED. You can see this in the following code, which I have written in the save button:
message(:SYSTEM.CURRENT_FORM || ' a ' || :SYSTEM.FORM_STATUS);
pause;
commit_form;
message(:SYSTEM.CURRENT_FORM || ' b ' || :SYSTEM.FORM_STATUS);
pause;
IF Form_Success THEN
  Commit;
  IF :System.Form_Status <> 'QUERY' THEN
    Message('Error prevented Commit');
    RAISE Form_Trigger_Failure;
  END IF;
ELSE
  message('FAIL');
END IF;
exit_form;
Then at the end, EXIT_FORM shows "You have unsaved data in the form. Save? Yes-No-Cancel."
I have a situation where a web service interacts with the database.
Here the web service will first make a request to the database for some operation, but I don't want the database to commit the changes on that first request. A response will be sent to the web service, and then a second request will be sent to the database to commit the changes. Can that be done?
What would be the best way to commit after every 10,000 records inserted from one table to the other using the following script:
DECLARE
  l_max_repa_id x_received_p.repa_id%TYPE;
  l_max_rept_id x_received_p_trans.rept_id%TYPE;
BEGIN
  SELECT MAX(repa_id)
    INTO l_max_repa_id
[code].........
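The script is truncated above, but the usual shape of a batched table-to-table copy with interval commits looks like this; it is only a sketch, and it assumes the target's rept_id corresponds to the source's repa_id and that repa_id is unique (adjust the column lists to the real tables):

DECLARE
  l_batch CONSTANT PLS_INTEGER := 10000;   -- commit interval
  l_done  PLS_INTEGER;
BEGIN
  LOOP
    -- Copy the next batch of rows not yet present in the target.
    INSERT INTO x_received_p_trans (rept_id)
      SELECT p.repa_id
        FROM x_received_p p
       WHERE NOT EXISTS (SELECT 1
                           FROM x_received_p_trans t
                          WHERE t.rept_id = p.repa_id)
         AND ROWNUM <= l_batch;
    l_done := SQL%ROWCOUNT;
    COMMIT;                                -- commit after every batch
    EXIT WHEN l_done < l_batch;
  END LOOP;
END;
/

The NOT EXISTS restart condition makes the loop safe to re-run after a failure, which is usually the point of committing in batches.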