Best Method To Remove Old Data Running On Variety Of Systems
Oct 17, 2011
Our application has been installed at customers in North America, Europe and South America for several years, in some cases, over 10 years. At least one of our customers has hundreds of gigabytes of data. We are considering options for cleaning out the old data.
The database runs on a variety of systems (Linux, Windows, Unix) and in several versions (Oracle 9, 10, and 11). We need a solution that works in all environments.
Two of our main criteria for a successful solution are that:
-It maintains application data referential integrity. Our application makes little use of foreign key constraints, so the cleanup process will apply critical business rules to candidate data to determine if it can be deleted or not.
-The operation of the cleanup program does not impact use of the system in production.
For various reasons (license cost, installation issues) the partitioning option is not available to us.
Alternative 1: Flag records for cleanup
This requires adding a 1-character column to each table. That is a one-time operation done during implementation. The procedure applies the business rules and sets the flag according to whether a row is to be deleted or not. Rows marked for deletion can be checked, reset, exported, etc. Finally, a separate process deletes all marked rows.
The advantage of this is that the deletion process will use a full table scan to find the marked records. There is no index navigation, so hopefully less overhead. The disadvantage is that it updates application data, which might affect users' perceived system response. There is some undefined concern that locking or other table activity involved with updating the flags could impact users.
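For illustration, here is a rough sketch of Alternative 1, assuming a hypothetical table ORDERS with an ORDER_DATE-based purge rule (the table, column, and rule are placeholders, not our actual schema):

-- One-time setup: add the flag column
ALTER TABLE orders ADD (purge_flag CHAR(1));

-- Marking pass: apply the business rule and set the flag
UPDATE orders
SET    purge_flag = 'Y'
WHERE  order_date < ADD_MONTHS(SYSDATE, -120)   -- placeholder rule: older than 10 years
AND    purge_flag IS NULL;
COMMIT;

-- Deletion pass: full-scan delete of marked rows, in batches to limit undo
DELETE FROM orders WHERE purge_flag = 'Y' AND ROWNUM <= 10000;
COMMIT;   -- repeat until no rows are deleted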
Alternative 2: Build a list of keys for data to be deleted
We will build a list table during implementation. The first process examines the application data, applies the deletion rules and writes key information to the list for data that can be deleted. The list can be checked, reset, rebuilt and listed rows can be exported as required. Finally, the cleanup process uses the list to find and delete the data.
The advantage is that it doesn't update the application data while it is building the list. The disadvantages are that there is some overhead in building and checking the list. The list requires more space than the flags in alternative 1, but we can handle that in various ways. The procedure needs to navigate key structures during the delete step as well as in the list-building phase.
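A comparable sketch of Alternative 2, with the same placeholder names (the purge_list structure and the rule are illustrative only):

-- One-time setup: the list table
CREATE TABLE purge_list (table_name VARCHAR2(30), row_key NUMBER);

-- Build the list by applying the deletion rules
INSERT INTO purge_list (table_name, row_key)
SELECT 'ORDERS', order_id
FROM   orders
WHERE  order_date < ADD_MONTHS(SYSDATE, -120);   -- placeholder rule
COMMIT;

-- Cleanup pass: delete application rows found in the list
DELETE FROM orders
WHERE  order_id IN (SELECT row_key FROM purge_list WHERE table_name = 'ORDERS');
COMMIT;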
We have an Oracle 9i database named A with two schemas, A1 and A2. Users enter data in the A1 schema, and the incremental data is moved to the A2 schema after some verifications performed by a scheduled job on A1.
We want to move this incremental data to the B1 schema of another Oracle 9i database, B, in real time as the data is entered into the A2 schema of database A. We have access to the A2 schema of database A and the B1 schema of database B. What is the best way or best practice to do this? Is there any third-party tool or Oracle utility to perform it?
Note: the A2 schema of database A and the B1 schema of database B have a one-to-one mapping. We want to avoid using triggers on the A2 schema, given how data is populated from A1 to A2.
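For context, one trigger-free option we have looked at is materialized view replication over a database link, which already exists in 9i; a rough sketch with placeholder names (table T, link A_LINK), assuming a short refresh interval counts as near-real-time (Oracle Streams would be the other built-in 9i candidate):

-- On database A, in schema A2: record changes to T
CREATE MATERIALIZED VIEW LOG ON t WITH PRIMARY KEY;

-- On database B, in schema B1: pull changes over a link, refreshing every minute
CREATE DATABASE LINK a_link CONNECT TO a2 IDENTIFIED BY a2_password USING 'A';

CREATE MATERIALIZED VIEW t
REFRESH FAST
START WITH SYSDATE NEXT SYSDATE + 1/1440   -- refresh every minute
AS SELECT * FROM t@a_link;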
I have a few tables in Oracle 9i/10g, and they already have data in them. I am trying to migrate data coming from various source systems into these Oracle tables. There is a chance that after loading I might get some unwanted data into these tables.
How do I remove just the data that I have loaded recently, without disturbing the original data the tables already have?
I could back up those tables and reload the data if there is a problem, but I am looking for a different approach. I just don't want to change the existing system, as a lot of users use it.
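One low-impact idea, if acceptable: tag each load with a batch identifier in a nullable column (existing rows keep NULL), so a bad load can be backed out precisely. A sketch with hypothetical names:

-- One-time: add a nullable batch column; existing rows stay NULL
ALTER TABLE target_table ADD (load_batch_id NUMBER);

-- During each load, stamp the new rows (e.g. batch 42)
INSERT INTO target_table (col1, col2, load_batch_id)
SELECT col1, col2, 42 FROM staging_table;

-- Back out exactly that load if it turns out to be bad
DELETE FROM target_table WHERE load_batch_id = 42;
COMMIT;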
I have faced a problem with reports. I had created 4 reports. Recently we designed a new application using Oracle Forms 6i. We created more than 10 forms and added them to the application, and they were working fine. But when I added these reports, they are not generated. On the server the reports do run; I mean the report is generated on the server but not on the local system.
I need to make one form for many systems, i.e. a portal-like form with many buttons for many systems. When I press a button I want to log in to the specified system, i.e. make a new connection to that schema and disconnect the previous one.
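In Forms, a WHEN-BUTTON-PRESSED trigger can switch connections with the LOGOUT and LOGON built-ins; a minimal sketch, assuming the target schema's credentials are available to the form (all names below are placeholders):

-- WHEN-BUTTON-PRESSED: reconnect to the schema behind this button
DECLARE
  un VARCHAR2(30) := 'SYSTEM_A';      -- placeholder username
  pw VARCHAR2(30) := 'password_a';    -- placeholder password
  db VARCHAR2(30) := 'ORCL';          -- placeholder connect string
BEGIN
  LOGOUT;                             -- drop the previous connection
  LOGON(un, pw || '@' || db);         -- connect to the new schema
END;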
As you can see, I removed the first four columns because the eventkey is the same. In this case, only the applicant is different, so the rest should be blank.
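If the goal is to suppress repeated values in the query output itself, one way is an analytic comparison against the previous row; a sketch with hypothetical table and column names:

-- Show eventkey only on the first row of each group; blank it on repeats
SELECT CASE
         WHEN eventkey = LAG(eventkey) OVER (ORDER BY eventkey, applicant)
         THEN NULL
         ELSE eventkey
       END AS eventkey,
       applicant
FROM   events
ORDER  BY eventkey, applicant;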
I have a CLOB column in one of my tables (Table1), which stores very large (150MB+) XML files. I have a new table (Table2) in the DB with an XMLType column. I want to take the CLOB data (XML) from Table1, remove some part of it, and store the rest in the XMLType column of Table2.
I want to remove the data inside the XML tags
<Attachments>
very long data goes here... which I don't need, which should be replaced with a single word
</Attachments>
Then I want to store the CLOB in the XMLType column after removing the unwanted data.
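A sketch of one way to do this with REGEXP_REPLACE on the CLOB before converting; it assumes each document has a single Attachments element (the 'n' flag lets '.' match newlines), and with 150MB+ documents the regex work and XMLType conversion can be slow and memory-hungry, so this needs testing:

INSERT INTO table2 (xml_col)
SELECT XMLType(
         REGEXP_REPLACE(t1.clob_col,
                        '<Attachments>.*</Attachments>',
                        '<Attachments>removed</Attachments>',
                        1, 1, 'n'))
FROM   table1 t1;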
We would like to remove the partitions from a particular table. The table in question has 12 partitions. Based on some initial investigation, I've come up with the following options. Because the table whose partitions we are going to remove has millions of records, and considering the DB downtime involved, we are looking for an alternative way. Is there a better way?
-Copy data into another table, drop all partitions, then copy the data back into the original table.
-Copy data into another table, drop the original table, then rename the new table and rebuild the indexes.
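For illustration, a sketch of the second option, assuming the table is called SALES (the name, the NOLOGGING/PARALLEL choices, and the index are placeholders); note the table is unavailable between the drop and the rename:

-- Copy the data into a new, non-partitioned table
CREATE TABLE sales_new NOLOGGING PARALLEL AS SELECT * FROM sales;

-- Swap the tables
DROP TABLE sales;
ALTER TABLE sales_new RENAME TO sales;

-- Recreate indexes, constraints and grants on the new table
CREATE INDEX sales_ix1 ON sales (sale_date);   -- placeholder index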
In a RAC multi-node environment with a single-instance Data Guard standby, recovery is not running automatically. I have to re-run the recovery command to make them sync, and this command works only for already-archived log files; no upcoming archive log files are applied, so I have to re-run the recovery command again for newly archived log files.
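For reference, managed recovery can be left running in the background so newly arrived archived logs are applied without re-running anything; a sketch, run on the standby:

-- Start managed recovery detached; it keeps applying new archived logs
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Check that the apply (MRP) process is running and which sequence it is on
SELECT process, status, sequence# FROM v$managed_standby;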
The ASM instance SID = +ASM and the database installation was under SID = rdbms112.
I'm just in the process of running DBCA and have specified which ASM disks to use for data, but on the next screen of DBCA I am asked for the ASMSNMP password.
The password for the 'oracle' user is 'oracle', but I'm getting this message:
"Could not validate ASMSNMP password due to the following error: "ORA-01031: insufficient privileges". Do you want to continue? If you continue, ASM will not be configured to be monitored by Database Control."
When I was asked for the privileged operating system groups while setting up ASM,
OSDBA, OSOPER and OSASM were all set to group 'dba'.
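In case it helps, ORA-01031 here often means the ASMSNMP account is missing or lacks SYSDBA in the ASM instance; a hedged sketch of one commonly suggested fix (connect to the ASM instance as SYSASM and create the monitoring user yourself; verify against your own setup first):

$ sqlplus / as sysasm

SQL> CREATE USER asmsnmp IDENTIFIED BY oracle;
SQL> GRANT sysdba TO asmsnmp;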
Primary RAC and Standby RAC databases are using version 10.2.0.4. They are configured & controlled via Data Guard Broker. Currently, the observer runs from one of the Standby nodes. All are on Solaris x86 64bit OS version 10.
Now I have to move the observer to a 3rd DC, and I am wondering if I could use another OS for the observer instead of the same OS as the databases. Why? Our new infrastructure is on RHEL Linux; therefore, I would like to deploy the observer once and then wait for the existing databases to be upgraded and moved to RHEL. Also, I understand that the database and observer software should be of the same version, e.g. 10g, 11g.
My question is: what if I install and configure the observer on RHEL? The end result would be:
1. Primary DB: Solaris 10, Oracle software binaries version 10.2.0.4
2. Secondary/Standby DB: Solaris 10, Oracle software binaries version 10.2.0.4
3. Observer: RHEL 5.8, Oracle software binaries version 10.2.0.4
Is this mixed configuration acceptable to run for, say, six months? Also, it would be great to know if there are any license implications when running the observer on a separate node running no databases at all.
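For what it's worth, the observer itself is just started from DGMGRL on the new host, so the mix is easy to test; a sketch (connection details are placeholders):

$ dgmgrl
DGMGRL> CONNECT sys/password@primary_db
DGMGRL> START OBSERVER

START OBSERVER keeps the DGMGRL session attached, so it is typically run in a dedicated terminal or in the background, e.g. nohup dgmgrl -silent sys/password@primary_db "start observer" &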
I thought this was the easy bit in APEX: you just create a form based on a table, with some validations etc., and use it to insert and update data. However, on inserting the first record, I get the following error:
is_internal_error: false
ora_sqlcode: 100
ora_sqlerrm: ORA-01403: no data found
The form is based on a table with a primary key and the primary key is populated from an APEX-generated sequence.
I tried recreating the form, but it's still no good, and now I get the no-data error even when clicking "RUN" at page level, so the page does not even display.
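One frequent cause of ORA-01403 in an APEX form is the Automated Row Fetch process running against a primary key that is still NULL because the sequence is applied too late; a sketch of the usual trigger-based PK population, with placeholder names:

CREATE OR REPLACE TRIGGER my_table_bi
BEFORE INSERT ON my_table
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  -- populate the primary key from the sequence before the row is stored
  SELECT my_table_seq.NEXTVAL INTO :new.id FROM dual;
END;
/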
I am using Oracle 10g with Data Guard configured. I have a primary database (A) and a standby database (B). Because of some unavoidable conditions the primary database (A) was shut down and would not start. We shifted the standby database (B) to a new location and changed it to primary with the following commands:
startup mount;
alter database recover managed standby database finish;
alter database commit to switchover to physical primary;
shutdown;
startup;
This new primary (B) was open to end users for 2 days, during which the old primary (A) was shut down.
I took a backup of B, restored it on A, and shut down B. Now A is acting as the primary database and server B is shut down. I want to change server B back to a standby database with A running as primary. Is it possible?
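It should be; this is essentially recreating the standby from the current primary. A rough outline using RMAN in 10g (connected to A as target and B as auxiliary; names are placeholders, and the exact steps depend on your backup strategy):

-- On A (primary): take a backup including a standby controlfile
RMAN> BACKUP DATABASE;
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY;

-- On B: rebuild it as a standby from that backup
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;

-- Then on B, resume managed recovery
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;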
We have a requirement to update one column as shown in the PL/SQL block below. The challenge is updating a large volume of data every day. Generally this kind of one-to-one update takes a long time (approximately 45 minutes). We need a solution that can update 10 million records in a few minutes.
begin
  for i in (select t3.event_id, t1.header_id
            from table_1 t1, table_2 t2, table_3 t3
            where t1.header_id = t2.header_id
              and t2.entity_id = t3.entity_id)
  loop
    update table_1 set event_id = i.event_id where header_id = i.header_id;
  end loop;
  commit;
end;
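Row-by-row loops like this are usually the bottleneck; a single set-based MERGE does the same update in one statement and is the standard way to speed this up (this assumes header_id is unique in the join source, and that the database is 10g or later, where MERGE no longer requires an insert branch):

MERGE INTO table_1 t1
USING (SELECT t2.header_id, t3.event_id
       FROM   table_2 t2, table_3 t3
       WHERE  t2.entity_id = t3.entity_id) src
ON (t1.header_id = src.header_id)
WHEN MATCHED THEN UPDATE SET t1.event_id = src.event_id;
COMMIT;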
1) CREATE TABLE mutu (id NUMBER, text VARCHAR2(20));
2) INSERT INTO mutu SELECT rownum, 'mutu' FROM dual CONNECT BY LEVEL <= 10000;
3) Insert a duplicate row:
INSERT INTO mutu VALUES (42, 'DUPLICATE');
4) COMMIT;
5) Create a constraint backed by a NON-UNIQUE index:
ALTER TABLE mutu ADD CONSTRAINT mutu_pk PRIMARY KEY (id) USING INDEX (CREATE INDEX mutu_pk ON mutu (id)) ENABLE NOVALIDATE;
6) Insert another duplicate row:
INSERT INTO mutu VALUES (42, 'DUPLICATE');
7) ERROR at line 1: ORA-00001: unique constraint (SYS.MUTU_PK) violated
Why does it show this error, even though I created the constraint using only a NON-UNIQUE index?
I have 2 test Oracle instances (a 10g and an 11g) in which somebody added many users (almost 50k), every user having an empty TEST table in its schema.
I have to drop these users without recreating the instances, but DROP USER username CASCADE takes almost 1 minute per user. At that rate this will take me almost a month.
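A sketch of scripting the drops so they at least run unattended, with the username filter as a placeholder for however the 50k users are identifiable:

BEGIN
  FOR u IN (SELECT username
            FROM   dba_users
            WHERE  username LIKE 'TESTUSER%')   -- placeholder filter
  LOOP
    EXECUTE IMMEDIATE 'DROP USER ' || u.username || ' CASCADE';
  END LOOP;
END;
/

Splitting the username range across a few concurrent sessions, each running its own slice of this loop, is often the simplest way to cut the elapsed time.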
I have a table that had 5000 rows of data. I've deleted around 2000 to decrease the DB size, but with no success: my hard drive is still showing the same size, with no increase in free MB.
I've looked at shrink and other methods, but some are not compatible with 8i.
I take it the DB is still reserving the space from those deleted rows, thinking it may be used again, which is the reason for no increase in free space.
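That is correct: deleting rows never shrinks the datafiles. Space is only returned to the OS by rebuilding the segment and then resizing the file. An 8i-compatible sketch (names and sizes are placeholders, and the resize only succeeds if the freed space sits at the end of the file):

-- Rebuild the table below its old high-water mark
ALTER TABLE mytable MOVE;

-- Indexes become UNUSABLE after a move and must be rebuilt
ALTER INDEX mytable_pk REBUILD;

-- Then shrink the datafile itself
ALTER DATABASE DATAFILE '/u01/oradata/db/users01.dbf' RESIZE 100M;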
How do I find out the status of an import job (how much work is done, ETA)? I started the job a few days back and it is still running on the server. I am using the classic import, not Data Pump, because I was given a dump taken with export. The DB version is 11.2.0.2.0.
For Data Pump I know I can use the following queries, but they are not giving any output; I think they only work for Data Pump:
select sid, serial#, sofar, totalwork from v$session_longops;
select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;
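For classic imp those views indeed show nothing useful; one rough way to watch progress is to find the imp session and see which table it is currently inserting into, then compare row counts against the export log:

SELECT s.sid, s.serial#, sa.sql_text
FROM   v$session s, v$sqlarea sa
WHERE  s.sql_address    = sa.address
AND    s.sql_hash_value = sa.hash_value
AND    UPPER(s.program) LIKE '%IMP%';

-- Then check how far the current table has loaded (placeholder names)
SELECT COUNT(*) FROM target_owner.current_table;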
I have a table of products, and I was wondering if there is a quick method of populating it with calendar data so it would look like the following:
Product  Year  Month
A        2008  Jan
A        2008  Feb
A        2008  Mar
A        2008  Apr
A        2008  May
A        2008  Jun
A        2008  Jul
A        2008  Aug
A        2008  Sep
A        2008  Oct
A        2008  Nov
A        2008  Dec
A        2009  Jan
A        2009  Feb
Etc.
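A sketch of generating the rows with CONNECT BY and cross-joining them to the products (the products table and column names are assumptions; adjust the start date and month count as needed):

SELECT p.product,
       TO_CHAR(ADD_MONTHS(DATE '2008-01-01', m.n - 1), 'YYYY') AS year,
       TO_CHAR(ADD_MONTHS(DATE '2008-01-01', m.n - 1), 'Mon')  AS month
FROM   products p,
       (SELECT LEVEL AS n FROM dual CONNECT BY LEVEL <= 24) m   -- 24 months
ORDER  BY p.product, m.n;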
I have a user feedback form with SMS sending, which I gave to all my clients. Whenever a client enters feedback, they press the Send SMS button and I receive an SMS; if they press the button 2 times I get 2 identical messages, if 3 times I get 3, and so on. I want to restrict them to sending only 2 SMS messages per feedback. Is there any method for the WHEN-BUTTON-PRESSED trigger?
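A minimal sketch for WHEN-BUTTON-PRESSED using a global counter; the global name and the SMS call are placeholders for your own routine, and the counter should be reset to '0' whenever a new feedback record is started:

-- WHEN-BUTTON-PRESSED on the Send SMS button
BEGIN
  DEFAULT_VALUE('0', 'GLOBAL.sms_count');        -- create the global on first use
  IF TO_NUMBER(:GLOBAL.sms_count) >= 2 THEN
    MESSAGE('Only 2 SMS messages are allowed per feedback.');
    RAISE FORM_TRIGGER_FAILURE;
  END IF;
  :GLOBAL.sms_count := TO_NUMBER(:GLOBAL.sms_count) + 1;
  send_sms_procedure;                            -- placeholder for the existing SMS call
END;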