I have a requirement to develop audit trails for a non-updatable view. To do this I created an audit trail table and row-level triggers on the view's underlying tables. When I update any column value from the front-end Oracle form, both triggers (one on each table) fire and log audit information for all columns (of both tables) into a generic audit table. So far so good, but there is a small time gap between the two triggers firing, so I see a timestamp difference between the audit data of the two tables.
sample audit data:
changed_by  changed_on              changed_type  table_name  column_name  old_val  new_val
myself      10/23/2012 10:15:48 AM  U             TABLE1      COLUMN1      X        Y
myself      10/23/2012 10:15:48 AM  U             TABLE1      COLUMN2      C        D
myself      10/23/2012 10:15:49 AM  U             TABLE2      COLUMN5      A        B
myself      10/23/2012 10:15:49 AM  U             TABLE2      COLUMN6      F        G

(Note the seconds: 48 for TABLE1, 49 for TABLE2.)
My requirement is to see the column data from both tables in the audit table with the same timestamp, because we will query the audit table by timestamp to show the old and new values of the non-updatable view as of a particular time. How can I make the two triggers log the same timestamp in the audit table?
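One approach that seems to fit this (a minimal sketch only; the package and column names below are mine, not from the original form) is to capture SYSTIMESTAMP once per transaction in a PL/SQL package variable and have both row-level triggers log that shared value instead of calling SYSDATE themselves:

CREATE OR REPLACE PACKAGE audit_ctx AS
  FUNCTION change_ts RETURN TIMESTAMP;
  PROCEDURE reset;
END audit_ctx;
/

CREATE OR REPLACE PACKAGE BODY audit_ctx AS
  g_ts TIMESTAMP;

  FUNCTION change_ts RETURN TIMESTAMP IS
  BEGIN
    IF g_ts IS NULL THEN
      g_ts := SYSTIMESTAMP;  -- the first trigger to fire fixes the value
    END IF;
    RETURN g_ts;
  END change_ts;

  PROCEDURE reset IS
  BEGIN
    g_ts := NULL;            -- call from an AFTER STATEMENT trigger (or after commit)
  END reset;
END audit_ctx;
/

-- Inside each row-level trigger, log the shared timestamp:
--   INSERT INTO audit_table (changed_by, changed_on, changed_type, table_name, ...)
--   VALUES (USER, audit_ctx.change_ts, 'U', 'TABLE1', ...);

With this, every audit row written by the same update carries exactly the same timestamp, whichever trigger happens to fire first.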
I need to set up a central server holding all the master tables, plus two other local databases that will hold updatable materialized views of the master tables. The local databases must be synchronized with the central server, and users will work against the materialized view databases.
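A rough sketch of the pieces this usually involves (object names and the database link are hypothetical, and full two-way synchronization additionally needs the materialized views registered in an Advanced Replication group at the master):

-- On the central (master) server:
CREATE MATERIALIZED VIEW LOG ON master_tab;

-- On each local database (CENTRAL_DB is a database link back to the central server):
CREATE MATERIALIZED VIEW mv_master_tab
  REFRESH FAST
  FOR UPDATE
AS SELECT * FROM master_tab@central_db;

-- Refresh the local materialized views together on a schedule:
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'local_refresh_grp',
    list      => 'mv_master_tab',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 5/1440');  -- every 5 minutes
END;
/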
We are facing a project where it is mandatory that the migration (from 9i to 11g) happens without any downtime. We thought about using GoldenGate to do this migration, but I would like to hear from somebody who has already done this kind of migration (I have never used GoldenGate before). The basic steps for such a migration would be:
1) Install GoldenGate on both source and target.
2) Export only the metadata (the table structures, for example) from source to target (here is one point of doubt of mine: can this export only be done using exp/imp?).
3) Perform the initial load from source to target (here I have another doubt: is it possible to perform an initial load of a whole database?).
4) Configure the manager, extract and replicat processes to perform the migration with the source database open in read-write.
With the steps above, would I be able to perform a migration without downtime? What other considerations do you have?
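For what it is worth, a bare-bones sketch of what step 4 could look like in the parameter files (process names, schema, credentials and trail paths are all placeholders, and a 9i source implies classic capture):

-- Source: primary extract (dirprm/ext1.prm)
EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
EXTTRAIL ./dirdat/aa
TABLE scott.*;

-- Source: data pump extract shipping the trail to the target (dirprm/pmp1.prm)
EXTRACT pmp1
USERID ggadmin, PASSWORD ggadmin
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/bb
TABLE scott.*;

-- Target: replicat applying the changes (dirprm/rep1.prm)
REPLICAT rep1
USERID ggadmin, PASSWORD ggadmin
ASSUMETARGETDEFS
MAP scott.*, TARGET scott.*;

The initial load itself is normally done separately (Data Pump, or an initial-load extract) while the change-capture extract above is already running; the replicat is then started, typically with HANDLECOLLISIONS until it has caught up.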
I have a form that uses two canvases the user can toggle between. There is a database column that I would like to appear on both canvases and be updatable from either place. Searching here first, I found that I could set up a non-database item and copy the value into it in POST-QUERY, and that comes close, but I need both items not just to reflect the database column but also to be able to update it.
I am about to try using a second trigger to move the value from the non-database item back into the database item, but I wondered whether there is a better way. When I first compiled in the designer with the duplicate item also set up as a database item, it only gave me warnings (which I could have lived with), but the compile my system does outside of the designer treats it as an error.
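If you do go the trigger route, a minimal sketch (block and item names are hypothetical) could be as simple as this; Forms also has a "Synchronize with Item" property on the mirror item that is intended for exactly this mirrored-item situation:

-- POST-QUERY on the block: populate the mirror item after each fetch
:emp_blk.sal_mirror := :emp_blk.sal;

-- WHEN-VALIDATE-ITEM on EMP_BLK.SAL_MIRROR: push edits back into the real database item
:emp_blk.sal := :emp_blk.sal_mirror;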
CREATE MATERIALIZED VIEW LOG ON ABC;

CREATE MATERIALIZED VIEW MV_ABC
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 20/(24*60)
  FOR UPDATE
AS SELECT * FROM ABC WHERE TMSTP > SYSDATE - 1;
It is working fine; the query of the MV_ABC materialized view has the WHERE clause appended. But when I drop and recreate it the same way:
DROP MATERIALIZED VIEW LOG ON ABC;
DROP MATERIALIZED VIEW MV_ABC;

CREATE MATERIALIZED VIEW LOG ON ABC;

CREATE MATERIALIZED VIEW MV_ABC
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 20/(24*60)
  FOR UPDATE
AS SELECT * FROM ABC WHERE TMSTP > SYSDATE - 1;
ORA-12013: updatable materialized views must be simple enough to do fast refresh
SELECT waarde1,
       waarde2,
       APEX_ITEM.POPUP_FROM_QUERY(
         3,
         waarde3,
         'select select ((waarde1-0.1)+(level*0.1)) d, ((waarde1-0.1)+(level*0.1)) r
            from (select * from lov_test where waarde1 = c001)
          connect by level <= (((waarde2-waarde1)*10)+waarde1)') waarde3
  FROM lov_test
 ORDER BY 1
The idea is to get a popup or dropdown box for "waarde3" in which the selectable values run from waarde1 to waarde2, rising by 0.1 at a time.
The error I get is:

Error in init lov: ORA-00936: missing expression ("Ontbrekende uitdrukking").
p_lov: select select ((waarde1-0.1)+(level*0.1)) d, ((waarde1-0.1)+(level*0.1)) r from (select * from lov_test where waarde1 = c001) connect by level <= (((waarde2-waarde1)*10)+waarde1)
wwv_flow_security.g_security_group_id: 1264429985836387
wwv_flow_security.g_curr_flow_security_group_id: 1264429985836387
Unable to initialize query.

This happens for every row in the lov_test table.
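Just a guess, not verified against the page: the LOV query handed to APEX_ITEM.POPUP_FROM_QUERY starts with "select select", which on its own raises ORA-00936. A sketch with the duplicated keyword removed and everything else left exactly as posted:

SELECT waarde1,
       waarde2,
       APEX_ITEM.POPUP_FROM_QUERY(
         3,
         waarde3,
         'select ((waarde1-0.1)+(level*0.1)) d, ((waarde1-0.1)+(level*0.1)) r
            from (select * from lov_test where waarde1 = c001)
          connect by level <= (((waarde2-waarde1)*10)+waarde1)') waarde3
  FROM lov_test
 ORDER BY 1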
I'm planning to upgrade a small database (~150GB) from 10.2.0.3 on Windows 2003 32-bit to 11.2.0.3 RAC on Linux 5.8. The database contains Oracle Spatial as well. I'm looking for a suitable method and a link to the documentation to follow.
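Since the operating system changes along with the version, one commonly used route (only a sketch; credentials, file names and parallelism are placeholders, and the Spatial component should be checked against the 11.2 upgrade notes) is a full Data Pump export from the 10.2.0.3 database into a pre-created 11.2.0.3 database:

-- On the 10.2.0.3 source (Windows)
expdp system/*** FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp LOGFILE=exp_full.log PARALLEL=2

-- On the 11.2.0.3 target (Linux), after copying the dump files across
impdp system/*** FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_%U.dmp LOGFILE=imp_full.log PARALLEL=2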
I recently performed an upgrade on a new server from Oracle 10gR2 to Oracle 11gR2 (11.2.0.3).
I took the RMAN backup from the Oracle 10g server and restored it on the new server where I had installed Oracle 11gR2.
On my previous Oracle 10gR2 server I had auditing enabled. After the apparently successful upgrade, when I try to log in with any user except SYS I receive the following error:
SQL> conn scott/tiger
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "OBJ$EDITION": invalid identifier
ORA-02002: error while writing to audit trail
ORA-00604: error occurred at recursive SQL level 1
ORA-00904: "OBJ$EDITION": invalid identifier
I found a workaround by setting the parameter audit_trail=FALSE (the previous value was DB_EXTENDED), but I want auditing to stay enabled as per my requirements.
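For reference, the workaround described above is just a parameter change plus a bounce (a sketch):

ALTER SYSTEM SET audit_trail=NONE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP

Separately (this is an assumption on my part, not something stated above), ORA-00904 on OBJ$EDITION after restoring a 10g backup into an 11g home usually means the data dictionary was never actually upgraded; it is worth checking what the registry reports:

SELECT comp_name, version, status FROM dba_registry;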
I'm trying to find the reference note IDs for a DB upgrade from 11.2.0.2 to 11.2.0.3.2. I am only finding Exadata notes, which I don't want; I need notes for an E-Business Suite database on OS Solaris 10 9/10 s10s_u9wos_14a SPARC.
Along with the existing RMAN backups, we do exports of our DB using an OS user and Oracle Wallet. On the DBs we have upgraded, the Data Pump directory can be checked with:

SELECT * FROM dba_directories; (there are other ways to get this information as well).

I captured screens from the DBUA upgrades but did not see an option to change this information. Is there a way to feed this information to the installer going forward, i.e. ./dbua -silent?
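If the goal is simply to repoint the default Data Pump directory after the upgrade, that part at least can be scripted (the path below is hypothetical):

CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/u01/app/oracle/admin/ORCL/dpdump';
GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO system;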
Also, has anyone tracked the percentage of storage increase going from 10.2/11.1 to 11.2?
I am going to upgrade a database from 11.1.0.7 to 11.2.0.3.

1) If COMPATIBLE stays set to 10.2.0 in 11.2.0.3, will it work?
2) If COMPATIBLE is set to the maximum level, will it affect our application?
3) Can any code-related problems occur after upgrading, e.g. in PL/SQL code?
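Regarding questions 1) and 2), the relevant knob is the COMPATIBLE parameter; a small sketch of checking and raising it after the upgrade (keep in mind that once raised it cannot be lowered again, so this is a one-way step):

SHOW PARAMETER compatible

ALTER SYSTEM SET compatible = '11.2.0' SCOPE=SPFILE;
-- restart the instance for the new setting to take effect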
We are planning to upgrade our database from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64-bit to Oracle Standard Edition 11g. We also have Oracle APEX 3.1 installed on that 10.2.0.1.0 Enterprise Edition database.
Now our plan is to upgrade the database and Oracle APEX to 4.2. Since Oracle Enterprise Edition licensing is very expensive, we thought of buying Standard Edition and upgrading to that version.
Can we upgrade the Oracle database from Enterprise Edition to the latest Standard Edition?
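My understanding (please verify against the licensing and upgrade documentation before relying on it) is that moving from Enterprise Edition to Standard Edition is normally done by exporting from the EE database and importing into a freshly created SE database, rather than upgrading in place. A rough Data Pump sketch with a hypothetical schema name:

expdp system/*** SCHEMAS=APP_OWNER DIRECTORY=DATA_PUMP_DIR DUMPFILE=app_owner.dmp LOGFILE=exp_app_owner.log

impdp system/*** SCHEMAS=APP_OWNER DIRECTORY=DATA_PUMP_DIR DUMPFILE=app_owner.dmp LOGFILE=imp_app_owner.log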
I am trying to upgrade a 32-bit database from 10.2.0.1 to 10.2.0.4 on 32-bit Linux. I hit a version incompatibility error during the patchset installation, so I reran the patchset installation with the -ignoresysprereqs option. Now during the patchset installation I encounter the error below in the install logfile:
INFO: Start output from spawned process:
INFO: ----------------------------------
INFO:
INFO: /u01/app/oracle/product/10.2.0/bin/genclntsh
INFO: genclntsh: genclntsh: Could not locate /u01/app/oracle/product/10.2.0/network/admin/shrept.lst
[code]....
I need the certified OS and versions for Oracle DB 10gR2.
After many tries at upgrading Oracle I decided to make a backup of my database, then remove 11.2.0.1, install 11.2.0.3 and recover the DB into it. Unfortunately I am not sure how I should perform the restoration. I backed up the data by running RMAN> BACKUP DATABASE INCLUDE CURRENT CONTROLFILE;
After that I moved the files out of the fast_recovery_area and cleared my machine of the current Oracle release. I also made a copy of a few directories:
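Regarding the restoration itself, a rough sketch of a backup-based restore into the freshly installed home (the paths are placeholders, not values from the post, and this is not a tested recipe):

RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM '/backup/ctl_backup_piece';   -- hypothetical path to the controlfile backup
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/backup/';                         -- register the backup pieces moved out of the FRA
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;

Note that because the backup was taken under 11.2.0.1 and the new home is 11.2.0.3, the data dictionary will still need the patchset upgrade run against it afterwards; the open step may need the UPGRADE clause (ALTER DATABASE OPEN RESETLOGS UPGRADE) so that catupgrd.sql can then bring the dictionary up to 11.2.0.3.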
I am trying to come up with a plan for an upgrade that is needed for a server I maintain. It is a Windows 2003 32-bit server running Oracle 10.2.0.3 on old hardware. It also has two obscure third-party applications running on it that access the database directly. These applications are supported by off-site consultants.
My initial plan was to create a Windows 2008 R2 virtual server and install the same version of Oracle, 10.2.0.3, then use RMAN to clone the database to the new server and have the consultants come in and get the applications working. Once everything in the new environment seemed to be working fine, we would run RMAN again and re-clone the database to pick up the latest data, and then upgrade the database to 11g 32-bit at a later time. Virtually no downtime, and we could spend all the time we needed getting the applications working and testing the new environment.
That plan is dead right off the bat, though, because I realize 10.2.0.3 is not supported on Windows 2008 R2. I really did not want to add an Oracle DB upgrade into the mix at the same time, simply because there are so many changes from the old environment to the new that I want to break this down into manageable chunks, and I can maybe get by with one day of downtime.
So now I am looking at installing 11g on my virtual server, cloning the database, upgrading the database, and having the consultants come in and get the applications working, all while we are down. If we run into any problems, which you always do, it just completely blows the schedule.