I am trying to maintain a data audit in the database using triggers, and I want to write the row-level trigger in a generic way using the following concept. Using the USER_TAB_COLUMNS view inside the trigger, I want to bind all column values of the row into a single string in the following format:

COLUMN_NAME = Value(:new/:old.COLUMN_NAME)

where the value would be bound dynamically. Is it possible to create such a string for each row instance in the trigger at run time, using the format above and the USER_TAB_COLUMNS view?
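For reference, :NEW/:OLD cannot be referenced through dynamic SQL at run time, but the same format can be produced by generating the trigger body from USER_TAB_COLUMNS at creation time. A minimal sketch, assuming a table EMP and an audit table AUDIT_LOG(AUDIT_TEXT VARCHAR2) (both hypothetical):

DECLARE
  l_expr VARCHAR2(32767);
BEGIN
  -- Build one concatenation expression: 'COL = ' || :NEW.COL || ', ' || ...
  FOR c IN (SELECT column_name,
                   ROW_NUMBER() OVER (ORDER BY column_id) AS rn
              FROM user_tab_columns
             WHERE table_name = 'EMP'        -- hypothetical table
             ORDER BY column_id) LOOP
    IF c.rn > 1 THEN
      l_expr := l_expr || ' || '', '' || ';
    END IF;
    -- Non-character columns may need an explicit TO_CHAR here
    l_expr := l_expr || '''' || c.column_name || ' = '' || :NEW.' || c.column_name;
  END LOOP;

  -- DDL does not bind variables, so :NEW inside the trigger text is safe
  EXECUTE IMMEDIATE
    'CREATE OR REPLACE TRIGGER emp_audit_trg
       AFTER INSERT OR UPDATE ON emp
       FOR EACH ROW
     BEGIN
       INSERT INTO audit_log (audit_text) VALUES (' || l_expr || ');
     END;';
END;
/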
For auditing, I need to insert the user, among other data, into different tables. The thing is, I have an application with database account authentication, so a real database user is connected; yet when auditing, the user field inserted is "ANONYMOUS".

APEX 4.2 (EPG), Oracle Enterprise Linux 5.5, Database 11.2 EE
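With the EPG, the database session itself runs as ANONYMOUS; the authenticated name is only known to APEX. A minimal sketch of an audit trigger that falls back to the APEX application user, assuming a table EMP with an AUDITED_BY column (both hypothetical):

CREATE OR REPLACE TRIGGER emp_audit_user_trg
  BEFORE INSERT OR UPDATE ON emp            -- hypothetical table
  FOR EACH ROW
BEGIN
  -- v('APP_USER') is set inside an APEX session; USER is the fallback
  -- for ordinary database connections
  :NEW.audited_by := COALESCE(v('APP_USER'), USER);
END;
/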
I have a database in which DB extended auditing is enabled, but there are no audit specifications for privileges, statements, or objects. So what will be audited in that case?
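One way to see exactly what the database will record is to query the audit-option views; with no statement, privilege, or object options set, all three should come back empty:

SELECT * FROM dba_stmt_audit_opts;   -- statement options
SELECT * FROM dba_priv_audit_opts;   -- privilege options
SELECT * FROM dba_obj_audit_opts;    -- object options ('-/-' means not audited)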
I have enabled auditing in my Oracle 9i database; it is running fine and generating trails that I can capture. Recently I checked the DBA_AUDIT_SESSION view and found OS_USERNAME, USERHOST, and TERMINAL showing NULL values, whereas USERNAME is captured as my own account (which has DBA privileges). The strange thing is that this doesn't happen every day.

One possibility is that running batch files causes such issues, but I run this batch every day, so why does it occur only on some days?
I'm working on a Java-based web application, and we have unit tests that exercise all of our code that interacts with the database, or that interacts with our DB code. The Spring framework allows us to perform some DML within a transaction before each test and then roll back the changes. For the most part this works; however, when I run the full suite of unit tests, it will randomly commit data to the database, causing the rest of the tests to fail.

Will Oracle's auditing let me see where this odd-ball commit is occurring? Is there another way for me to see when data is being committed?

This does not appear to be happening on any of the systems we've deployed, but it is a bit unsettling, and I would like to know why it is occurring so that we can prevent it from happening in production.
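Standard auditing records statements rather than commits, but one way to narrow things down is to audit the test user's DML by access and correlate the timestamps against the failing test run. A minimal sketch, assuming the tests connect as a dedicated user TESTUSER (hypothetical):

AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE BY testuser BY ACCESS;

-- afterwards, line the audit records up against the test log
-- (SQL_TEXT is populated only with AUDIT_TRAIL=DB,EXTENDED)
SELECT timestamp, action_name, obj_name, sql_text
  FROM dba_audit_trail
 WHERE username = 'TESTUSER'
 ORDER BY timestamp;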
In SQL*Plus, I can enable/disable auditing when I connect as SYSDBA by using these commands:

SQL> ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE;
SQL> shutdown
SQL> startup

I've done this successfully from the SQL*Plus command line. But how can I do that in PHP? How do I execute "shutdown" and "startup" from PHP? I've found this code for connecting to Oracle as SYSDBA:
I would like to be aware of all SELECT statements that are run against the schema I am responsible for (for performance-analysis reasons). My privileges are restricted, and I don't think I will get access to any DBA views.

So is there a recommended way to solve this requirement?
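If a DBA can grant EXECUTE on DBMS_FGA, a fine-grained auditing policy with a handler that writes into one of your own tables avoids the DBA views entirely. A minimal sketch, assuming a table MY_TABLE and a log table MY_SELECT_LOG (both hypothetical):

CREATE OR REPLACE PROCEDURE log_select (
  p_schema VARCHAR2,
  p_object VARCHAR2,
  p_policy VARCHAR2
) AS
  PRAGMA AUTONOMOUS_TRANSACTION;      -- so logging commits independently
BEGIN
  INSERT INTO my_select_log (logged_at, object_name, sql_text)
  VALUES (SYSTIMESTAMP, p_object, SYS_CONTEXT('USERENV', 'CURRENT_SQL'));
  COMMIT;
END;
/

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => USER,
    object_name     => 'MY_TABLE',
    policy_name     => 'AUDIT_SELECTS',
    statement_types => 'SELECT',
    handler_schema  => USER,
    handler_module  => 'LOG_SELECT');
END;
/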
I'd like to audit a table for any SELECT queries that are executed against it with no WHERE clause. I've read the documentation on DBMS_FGA carefully, and as far as I can tell, creating a policy with a NULL audit_condition causes all queries against the table to be audited, which isn't what I'm looking for.
I'm attempting to audit unsuccessful SELECT statements in order to trap a problem we're experiencing with our application. I have set the AUDIT_TRAIL initialization parameter to DB_EXTENDED and bounced our database.

I've issued the AUDIT SELECT ANY TABLE WHENEVER NOT SUCCESSFUL command, but when I issue a SELECT statement as an application user, nothing appears in SYS.AUD$, even though the application has issued a SELECT statement that returned no rows.
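A quick sanity check is to confirm the option actually took effect (statement audit options only apply to sessions started after the AUDIT command):

SELECT audit_option, success, failure
  FROM dba_stmt_audit_opts
 WHERE audit_option = 'SELECT ANY TABLE';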
I have a problem with a PCI DSS requirement in Oracle 11.2 (PCI DSS = Payment Card Industry Data Security Standard).
Problem:
We connect via 'ssh -2 -X -l oracle hostname' to the database server and become the OS user 'oracle'. We also have two offshore locations with DBAs; each DBA comes to the jumphost with his personalized user and then reaches the database server with the ssh command above.

The problem is that each DBA assumes the oracle OS account and can then connect to the database with '/ as sysdba'. Under PCI DSS this is not allowed!

Now my question: how can I audit these '/ as sysdba' connections and prove which user connected at which time with the '/ as sysdba' command?

The database is in audit mode, and we log to syslog on Red Hat Linux 5. I know one solution could be setting the SQLNET.AUTHENTICATION_SERVICES parameter to NONE in sqlnet.ora, which makes it impossible to connect to the database as SYSDBA without a password (sqlplus / as sysdba). But we have too many applications and jobs, so this is not really a solution in this case.

I think I can only solve this problem with personalized OS-level DBA accounts in the dba group, and the OS user oracle should not be used in the future. I also need personalized DBA user accounts in the database; using SYS and SYSTEM is not allowed. These accounts have to be locked and unlocked only for special administration work.
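For the record, SYSDBA logons are always written to the OS audit trail, and AUDIT_SYS_OPERATIONS additionally records everything SYS does; routed to syslog, this gives a timestamped record the DBAs cannot edit. A sketch of the relevant settings (both require a restart):

ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE=SPFILE;
-- facility.priority is site-specific; LOCAL1.WARNING is just an example
ALTER SYSTEM SET audit_syslog_level = 'LOCAL1.WARNING' SCOPE=SPFILE;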
When I run the query

SQL> SELECT username, extended_timestamp, owner, obj_name, action_name
       FROM dba_audit_trail
      WHERE owner = <Username>;

there are many, many rows. My question is: is it possible to truncate the audit trail from time to time? If not, does it affect the performance of the database?
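Truncating SYS.AUD$ directly is possible, but on 11g the supported route is DBMS_AUDIT_MGMT. A minimal sketch, assuming 11g and a 90-day retention window (the retention period is hypothetical):

BEGIN
  IF NOT DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
    DBMS_AUDIT_MGMT.INIT_CLEANUP(
      audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
      default_cleanup_interval => 24);
  END IF;

  -- everything older than the archive timestamp becomes purgeable
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP(
    audit_trail_type  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    last_archive_time => SYSTIMESTAMP - INTERVAL '90' DAY);

  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/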
It seems that a DML trigger doesn't fire when a LOB column is being updated using the DBMS_LOB package.

As stated in the Oracle documentation:

Quote: Using OCI functions or the DBMS_LOB package to update LOB values or LOB attributes of object columns does not cause Oracle to fire triggers defined on the table containing the columns or the attributes.

I need to know that the table was updated (or is about to be updated); how can I do that when it is a LOB column that is being updated?
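One detail worth noting: the application has to fetch the LOB locator with SELECT ... FOR UPDATE before calling DBMS_LOB, and only a plain SQL UPDATE fires the row trigger. A minimal sketch contrasting the two paths, assuming a table DOCS(ID NUMBER, BODY CLOB) (hypothetical):

DECLARE
  l_loc CLOB;
BEGIN
  -- Path 1: locator update via DBMS_LOB; row triggers on DOCS do NOT fire
  SELECT body INTO l_loc FROM docs WHERE id = 42 FOR UPDATE;
  DBMS_LOB.WRITEAPPEND(l_loc, 5, 'hello');

  -- Path 2: plain SQL UPDATE of the LOB column; row triggers DO fire
  UPDATE docs SET body = body || ' world' WHERE id = 42;
END;
/

If the client code cannot be changed, a fine-grained auditing policy on SELECT might at least record the locator fetch, since the FOR UPDATE query is still an ordinary SELECT.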
I have enabled auditing, but when I run the statement given below, I get a count of 20 rows with a NULL username. I tried running NOAUDIT ALL, but the result is the same.

Why does it show auditing for a NULL username, and how can I disable it?

SQL> select count(*) from DBA_STMT_AUDIT_OPTS where user_name is null;

  COUNT(*)
----------
        20
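Rows in DBA_STMT_AUDIT_OPTS with a NULL USER_NAME are system-wide statement options rather than per-user ones. One approach that is often suggested is to generate an explicit NOAUDIT per option and run the output as a privileged user (some options may additionally need NOAUDIT ALL PRIVILEGES):

SELECT 'NOAUDIT ' || audit_option || ';' AS noaudit_cmd
  FROM dba_stmt_audit_opts
 WHERE user_name IS NULL;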
I am importing some data from Oracle into another database on a regular basis. It works fine for most of the queries, but a couple of queries randomly don't work: I don't get any errors, and no data either.

We switched on Oracle auditing to find out which queries are being sent to the Oracle DB, and we can see all of them in the audit log. Is it possible to configure auditing to capture the number of rows returned by SELECT statements, so that we can be sure some data was returned?
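Standard auditing has no row-count column, so the trail only shows which statements ran. With AUDIT_TRAIL=DB,EXTENDED the statement text is at least captured alongside each record:

SELECT timestamp, username, action_name, sql_text
  FROM dba_audit_trail
 WHERE action_name = 'SELECT'
 ORDER BY timestamp DESC;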
I downloaded a sample chart application from the Oracle website:
[URL].......
It works perfectly, but I want to know exactly how it is built. This is one of the queries:

select null link,
       task_name,
       id,
       parent_task,
       start_date,
       end_date,
       decode(status, 'Closed', 100, 'Open', 60, 'On-Hold', 10, 'Pending', 0) status
  from eba_demo_chart_tasks
 order by project

But I don't know where eba_demo_chart_tasks is stored; where can I find it?
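One way to locate it (visibility permitting) is the data dictionary:

SELECT owner, object_type
  FROM all_objects
 WHERE object_name = 'EBA_DEMO_CHART_TASKS';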
On my page I have two items of type Popup LOV, P2_APP and P2_MOD, and I've created an LOV for each item. What I want is that when I select a value in the first LOV, the second LOV shows only the data related to the value selected in the first.

My table logic in the database is fine, and the SELECT statements are all right on their own.

I think the SELECT statement in the second LOV is not picking up the value from the first LOV item:
select MOD_NAME as display_value,
       MOD_CODE as return_value
  from MODS
 where APPLICATION = :P2_APP   -- the first LOV item, with its value already selected
 order by 1
I would like my application to have access to two different data sources (reached by two different database links):

- DS_ONLINE, the main data source, dedicated to online access
- DS_OFFLINE, consisting of some materialized views referring to the objects of the DS_ONLINE data source, maintained in case of DS_ONLINE's temporary inaccessibility

A flag, DS_ONLINE_ACCESS_FLAG, indicating DS_ONLINE's accessibility, would be maintained.

I would like to make the choice of the current data source TRANSPARENT to my PL/SQL application (I wouldn't like the logic of determining the current data source to be embedded in my application code). How can I do it?
I thought I could write the code as follows:
OPEN c FOR SELECT A, B, C FROM TABLE_SYNONYM;
where the definition of TABLE_SYNONYM would change as a function of the DS_ONLINE_ACCESS_FLAG flag. This could be done with a "CREATE OR REPLACE SYNONYM ..." DDL statement placed in a procedure dedicated to setting the DS_ONLINE_ACCESS_FLAG flag...

...but I'm not sure whether it is going to work, and even so...
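A minimal sketch of the synonym-flipping procedure, assuming the online object is reached as MY_TABLE@DS_ONLINE and the offline copy is the materialized view MY_TABLE_MV (names hypothetical):

CREATE OR REPLACE PROCEDURE set_ds_online_access_flag (p_online IN BOOLEAN) AS
BEGIN
  IF p_online THEN
    EXECUTE IMMEDIATE
      'CREATE OR REPLACE SYNONYM table_synonym FOR my_table@ds_online';
  ELSE
    EXECUTE IMMEDIATE
      'CREATE OR REPLACE SYNONYM table_synonym FOR my_table_mv';
  END IF;
END;
/

One caveat: CREATE OR REPLACE SYNONYM is DDL, so it issues an implicit commit and invalidates cursors and PL/SQL units that depend on the synonym; these are typically re-parsed transparently on the next execution, but it is worth testing under load.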
We are planning to migrate data from an application called Clintrace to another application called Argus Safety. Both applications are related to pharmacovigilance safety operations and have similar functionality, so the two databases hold the same data even though the table structures might be different. Both databases are Oracle: the Clintrace DB is 9i and the Argus DB is 11g.
I am using Oracle XE (11g) with APEX 4.1.1.0023 and GlassFish for the APEX Listener. I created a data upload set of pages and things worked great. I then exported the whole application and imported it into a new environment that was the same except that the schema name (the owner) was different. When I tested the data upload in the new schema/environment, the data loading would not recognize the table. Comparing the Shared Components between the two environments, I discovered that the imported application in the new environment was still looking for the original schema name. The name is not editable via the Shared Components page; I had to recreate the pages and have APEX create a new Data Loading object before things worked again.
I have one question: is there any way to get some user data from Active Directory? I already have an authentication scheme that interacts with AD, but now I need to get the e-mail address of the user who logs in to the application. Our APEX version is 4.1.
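Since the authentication scheme already talks to AD, one option is to look the attribute up with DBMS_LDAP after login. A minimal sketch, assuming a hypothetical host, bind account, and base DN, and that the AD login name matches APP_USER; on 11g the database also needs a network ACL allowing access to the LDAP host:

DECLARE
  l_session dbms_ldap.session;
  l_attrs   dbms_ldap.string_collection;
  l_message dbms_ldap.message;
  l_entry   dbms_ldap.message;
  l_vals    dbms_ldap.string_collection;
  l_rc      pls_integer;
BEGIN
  dbms_ldap.use_exception := TRUE;
  l_session := dbms_ldap.init(hostname => 'ad.example.com', portnum => 389);  -- hypothetical host
  l_rc := dbms_ldap.simple_bind_s(l_session, 'bind_user@example.com', 'bind_password');

  l_attrs(1) := 'mail';                       -- the attribute we want back
  l_rc := dbms_ldap.search_s(
            ld       => l_session,
            base     => 'DC=example,DC=com',  -- hypothetical base DN
            scope    => dbms_ldap.scope_subtree,
            filter   => '(sAMAccountName=' || v('APP_USER') || ')',
            attrs    => l_attrs,
            attronly => 0,
            res      => l_message);

  l_entry := dbms_ldap.first_entry(l_session, l_message);
  IF l_entry IS NOT NULL THEN
    l_vals := dbms_ldap.get_values(l_session, l_entry, 'mail');
    IF l_vals.COUNT > 0 THEN
      dbms_output.put_line('E-mail: ' || l_vals(0));  -- values are 0-indexed
    END IF;
  END IF;

  l_rc := dbms_ldap.unbind_s(l_session);
END;
/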
I am new to uploading data from Excel to a table. How do I implement this? I need code to upload CSV/XLS files into a table; the table, named T_UPLOAD, contains 40 columns.
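Only the parsing side can be sketched here; in APEX 4.x the uploaded file typically lands as a BLOB (for example in WWV_FLOW_FILES via a File Browse item), which you read line by line and split. A minimal sketch for one CSV line, assuming T_UPLOAD's first three columns are COL1..COL3 (hypothetical column names; extend to all 40):

DECLARE
  l_line VARCHAR2(4000) := 'val1,val2,val3';  -- one line taken from the uploaded file
  l_vals APEX_APPLICATION_GLOBAL.VC_ARR2;
BEGIN
  -- split the line on commas (quoted fields would need extra handling)
  l_vals := APEX_UTIL.STRING_TO_TABLE(l_line, ',');
  INSERT INTO t_upload (col1, col2, col3)     -- hypothetical column names
  VALUES (l_vals(1), l_vals(2), l_vals(3));
END;
/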
If I click the RUN button here -> f?p=4000:1500, I get the error: ORA-01403: no data found.

The problem occurs if I set Application Builder > Application xxx > User Interfaces > User Interface Details > Home URL = f?p=&APP_ALIAS.:2:&SESSION. (&APP_ALIAS. resolves to the alias of the APEX builder application).
I'm trying to unload data from a table, so I go to SQL Workshop > Utilities > Data Workshop > Data Unload to Text. I select my table, select the columns, give the condition to unload the data of the last month (between sysdate-28 and sysdate), then select the comma separator, include the column names, and finally press "Unload Data". The file downloads correctly, but when I open it, the data is not ordered as I expected.