Server Administration :: Cannot Audit Entry In Dump
Jul 26, 2012
I am trying to enable auditing as follows:
SQL> alter system set audit_trail=OS SCOPE=SPFILE;
System altered.
SQL> STARTUP FORCE
ORACLE instance started.
Total System Global Area 171966464 bytes
Fixed Size 2019320 bytes
Variable Size 113246216 bytes
Database Buffers 50331648 bytes
Redo Buffers 6369280 bytes
Database mounted.
Database opened.
SQL> show parameter audit
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/admin/orcl/adump
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string
audit_trail                          string      OS
SQL>
SQL> create user apexos identified by abc1;
User created.
SQL> grant connect, resource to apexos;
Grant succeeded.
SQL> audit select table, insert table by apexos by access;
Audit succeeded.
SQL> audit table by apexos by access;
SQL> SELECT audit_option, failure, success, user_name
FROM dba_stmt_audit_opts;
AUDIT_OPTION                             FAILURE    SUCCESS    USER_NAME
---------------------------------------- ---------- ---------- ------------------------------
TABLE                                    BY ACCESS  BY ACCESS  APEXOS
SELECT TABLE                             BY ACCESS  BY ACCESS  APEXOS
INSERT TABLE                             BY ACCESS  BY ACCESS  APEXOS
SQL> conn apexos/abc1
Connected.
SQL> CREATE TABLE TAB1 (ID NUMBER, NAME VARCHAR2(20));
Table created.
SQL> insert into tab1 values (10, 'Michel');
1 row created.
SQL> insert into tab1 values (30, 'Andrew');
1 row created.
SQL> select * from tab1;
ID NAME
---------- --------------------
10 Michel
30 Andrew
SQL> /
ID NAME
---------- --------------------
10 Michel
30 Andrew
SQL>
SQL> select username, timestamp, action_name, action, ses_actions, sql_text
  2  from user_audit_trail where username = 'APEXOS';
no rows selected
SQL>
I also did not find any file containing the above statements as audit records in
/u01/app/oracle/admin/orcl/adump.
There are numerous old files in the /u01/app/oracle/admin/orcl/adump location, but when I executed the SQL statements, no new audit file was generated there.
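A note on where to look: with audit_trail=OS, audit records are written to files under audit_file_dest, not to the database audit trail, so USER_AUDIT_TRAIL and DBA_AUDIT_TRAIL stay empty by design. A minimal sketch of switching to database auditing so the views get populated, assuming the same APEXOS setup as above:

ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;
-- Restart the instance, re-run the audited statements as APEXOS, then:
SELECT username, timestamp, action_name
FROM   dba_audit_trail
WHERE  username = 'APEXOS';
-- SQL_TEXT is only captured with audit_trail=DB,EXTENDED (10gR2 onward);
-- with plain DB auditing that column stays NULL.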
### Changes made ###
One week earlier we changed tablespace segment space management from MANUAL to AUTO by the following method:
1. Create the INVD2, INVX2 and LOBD tablespaces.
2. Move tables from INVD to INVD2.
3. Rebuild indexes from INVX into INVX2.
4. Move LOB segments from INVD to the LOBD tablespace.
5. After confirming no segments exist in the old tablespaces, take INVD and INVX offline and drop them.
6. Change the default tablespace for the INV user to INVD2.
7. RENAME TABLESPACE INVD2 to INVD and INVX2 to INVX.
8. Change the default tablespace for the INV user back.
9. Run gather schema stats for INV using the UNIX scheduler, which usually works; however, it ended with ORA-03113 and ORA-03114 errors.
10. Manually executing the same statement the following day, the procedure completed successfully.
A week later, the inventory forms raised error FRM-40735 in all forms. I checked that the gather schema stats job had run that morning, before the user feedback.
After referring to notes on Metalink, I understand this is a bug where RENAME TABLESPACE cannot reuse the previous name, because the deleted entry still exists in sys.ts$.
No segments exist in the deleted tablespace, and no user's default tablespace is assigned to it.
My question: how can we delete the stale entry from sys.ts$? And should we rename the tablespace from INVD to INVD3 (or can we reuse INVD2) to avoid any unforeseen error again?
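A diagnostic sometimes suggested in this situation (a sketch only; interpret the output against the Metalink note, and note that deleting rows from sys.ts$ by hand is not a supported operation and should only ever happen under Oracle Support guidance):

SELECT ts#, name, online$
FROM   sys.ts$
ORDER  BY ts#;
-- Dropped tablespaces typically remain visible here with online$ = 3,
-- which is how the stale INVD/INVX entries would show up.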
How can I implement audit logs in Oracle XE? Is there any way to enable audit logging in Oracle XE? I also want to view the audit log; is there any tool to view it?
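Standard auditing in XE is configured the same way as in the other editions, through the audit_trail parameter; a minimal sketch, assuming SYSDBA access on the XE instance:

ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;
-- Restart the instance, then enable whatever audit options are needed, e.g.:
AUDIT SESSION;
-- The audit records are ordinary dictionary rows, so any SQL client can view them:
SELECT username, timestamp, action_name
FROM   dba_audit_trail
ORDER  BY timestamp;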
I am trying to record audit information about SQL statements run by a user, with only one audit entry per specific type of operation (such as CREATE TABLE or INSERT). For example, if a user creates three tables, the database should record only one CREATE TABLE entry per session.
I am giving you all the statements I issued...
SQL> create user saimon identified by abc1;
User created.
SQL> grant connect, resource to saimon;
Grant succeeded.
SQL> audit table, insert table by saimon by session;
Audit succeeded.
SQL> show parameter audit
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/admin/orcl/adump
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string
audit_trail                          string      DB
SQL>
[oracle@DBTEST ~]$ sqlplus saimon/abc1
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jul 19 21:45:09 2012
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL>
SQL> create table TB1 (id number, name varchar2(20));
Table created.
SQL> create table TB3 (id number, name varchar2(20));
Now my question is: I have enabled statement auditing BY SESSION, not BY ACCESS, so only one audit entry should have been recorded for the two table creations. Why is the database recording every CREATE statement?
SQL> show user
USER is "SYS"
SQL> SELECT audit_option, failure, success, user_name
  2  FROM dba_stmt_audit_opts;
AUDIT_OPTION                             FAILURE    SUCCESS    USER_NAME
---------------------------------------- ---------- ---------- ------------------------------
TABLE                                    BY SESSION BY SESSION SAIMON
INSERT TABLE                             BY SESSION BY SESSION SAIMON
alter session set events 'immediate trace name treedump level 51786'
/u01/app/oracle/admin/oracl/udump/oracl_ora_2679.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
I see that large .core files are generated in the $ORACLE_HOME/dbs folder, even though no dump destination parameter points to the dbs folder. How can I check what is causing these files to be generated?
I am facing a problem with the user_dump_dest directory. I have noticed a lot of trace files of huge size (in MB). I cleaned the directory out, and after 4 days there were 40 GB of trace files again.
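While the root cause is being tracked down, trace growth can at least be capped with max_dump_file_size (a sketch; this limits the size of each trace file, it does not stop whatever is generating them):

ALTER SYSTEM SET max_dump_file_size = '50M' SCOPE=BOTH;
-- Accepts a number of OS blocks, a value with a K/M suffix, or UNLIMITED
-- (often the default, which lets traces grow without bound).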
AVDF current version 12.1 does not support external/SAN storage. My question: if a customer generates a huge number of audit log and DBFW event records, what is the maximum size the Audit Vault server can support for online data (not archived data)? And can I use a hardware server with multiple HDDs for the AV Server?
Suppose I have two tables, emp and dept: emp records empid, emp_name, and deptid; dept records deptid and dept_name.
One record is for a president or some other special position in the company, so its deptid is set to NULL. Here comes the question: how can I print every emp_name with its department name?
I know how to print the emp_name values with their department names when they have a deptid, but is it possible to also include the record whose deptid is NULL?
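An outer join keeps the employee row even when there is no matching department; a minimal sketch against the two tables as described above:

SELECT e.emp_name,
       NVL(d.dept_name, 'NO DEPARTMENT') AS dept_name
FROM   emp e
LEFT OUTER JOIN dept d
       ON e.deptid = d.deptid;
-- The LEFT OUTER JOIN returns every emp row; where deptid is NULL (or has
-- no match in dept), dept_name comes back NULL and NVL substitutes a label.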
Is it possible to determine whether a dump file was created with Data Pump export or the conventional export method just by looking at the dump file? If yes, how?
Why I am asking: conventional export and Data Pump export both create a dump file with the same extension, filename.dmp. So, to avoid confusion during import, I want to determine which method created the dump file.
This would also be useful in the scenario where a customer sends me only the dump file and asks me to import it into a target database (the customer may not know which method created the dump file).
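Data Pump itself can identify the file type; a sketch using DBMS_DATAPUMP.GET_DUMPFILE_INFO (the file and directory names are placeholders; verify that the procedure exists in your release, it appeared around 10gR2):

SET SERVEROUTPUT ON
DECLARE
  v_info  sys.ku$_dumpfile_info;
  v_type  NUMBER;
BEGIN
  DBMS_DATAPUMP.GET_DUMPFILE_INFO(
    filename   => 'filename.dmp',  -- placeholder dump file name
    directory  => 'DP',            -- an existing directory object
    info_table => v_info,
    filetype   => v_type);
  -- Per the documented codes: 1 = Data Pump dump file,
  -- 2 = original export (exp) dump file, 0 = unknown.
  DBMS_OUTPUT.PUT_LINE('filetype = ' || v_type);
END;
/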
I have a daily hot backup using the expdp command on Oracle 10g R2 installed on a Linux server, and I'm trying to move the dump file to another directory on a Windows Server 2003 machine over the network, using an FTP script that will run automatically after the export process finishes.
select sum(bytes)/(1024*1024*1024) "GB" from dba_segments where owner='JACK';
The above query gives a schema size of 15 GB, yet when I export that same schema, the generated dump file is only 2 GB. What is the difference between the two scenarios, and how can there be such a variation in size?
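One thing worth checking is what the 15 GB is made of, because dba_segments counts all allocated space, including indexes, LOB segments, and empty blocks below the high-water mark, while an export writes out only the logical row data (indexes travel as DDL, not as blocks). A breakdown query against the same schema:

SELECT segment_type,
       ROUND(SUM(bytes)/1024/1024/1024, 2) AS gb
FROM   dba_segments
WHERE  owner = 'JACK'
GROUP  BY segment_type
ORDER  BY gb DESC;
-- Index and LOB segments plus allocated-but-unused space inflate the
-- dba_segments total but add little or nothing to the dump file.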
I am in the process of upgrading our 9i DB to 10g. As they are on different servers, I have installed 10g on the new server and applied the latest patch set, 10.2.0.4.
I am creating the production database and importing the 9i dump file into it. Then I will be testing the whole application that uses this database. After a week, I need to take the latest 9i dump and import it into the new 10g DB.
Do I just need to import the latest 9i dump into the 10g DB, or do I need to do anything else?
I am facing a problem importing a DMP file into 11g. While importing, it stops responding and gives an error. I have attached a JPG file to show exactly what goes wrong during the import. My dump is from 9i, and I want to import it into 11g R2.
We have two databases running on 10.2.0.4 and 9.2.0.8. Both have the same unpartitioned table, 80 GB in size. I am exporting the table on 10g using parallel=8 and a dumpfile with the %U option; that took around 4 hours.
On 9.2.0.8, I am exporting with the parameters below, which takes around 5 hours.
buffer=2000000 recordlength=64000
What options can I try to speed up the export on both versions?
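For the classic exp side, direct path is the usual first thing to try (a sketch; the table name is a placeholder, and note that BUFFER applies only to conventional-path exports, so it is ignored under DIRECT=Y):

exp username/password TABLES=bigtab FILE=bigtab.dmp DIRECT=Y RECORDLENGTH=65535

RECORDLENGTH=65535 is the maximum I/O record size. On the 10g side, confirm the PARALLEL setting actually fans out by checking that multiple %U dump files are being written, since parallelism beyond the number of usable dump files adds nothing.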
I want to import a dump file while skipping 2 tables. The dump file contains 100 tables plus indexes and constraints, so out of the 100 tables I want to import 98, excluding the other 2.
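If the dump was taken with Data Pump, impdp can exclude the two tables directly. A sketch using a parameter file to avoid shell quoting problems (file and table names here are placeholders):

exclude2.par:
DIRECTORY=dp
DUMPFILE=full_export.dmp
EXCLUDE=TABLE:"IN ('TAB_A','TAB_B')"

impdp username/password PARFILE=exclude2.par

Classic imp has no EXCLUDE parameter; there the workaround is listing the 98 wanted tables in TABLES=, or importing everything and dropping the two afterwards.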
I want to take a schema-level export. The schema is 115 GB in size. Do we require the same amount of free space on the server side (where we are writing the dump) as the schema size, or is less or more space needed?
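Data Pump can answer the space question empirically before anything is written; a sketch (credentials and schema name are illustrative):

expdp system/password SCHEMAS=big_schema ESTIMATE_ONLY=YES ESTIMATE=BLOCKS

ESTIMATE_ONLY=YES prints a size estimate in the log without creating a dump file (ESTIMATE=STATISTICS tends to be closer when statistics are current). The dump is generally smaller than the dba_segments figure for the schema, since index blocks and free space below the high-water mark are not written out.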
I tried to import into 11g a dump that was taken in Oracle 9i. The import started but hangs after some time; to be exact, it checks only the character sets of the databases and then hangs. Let me know if there are any specific procedures for importing a dump from 9i into 11g directly.
I got:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31619: invalid dump file "C:\P7DB\fpac052912_dp01.dmp"
Done in AIX:
create directory dp as '/bak';
grant read, write on directory dp to public;
grant exp_full_database to username;
Done in Windows:
create directory dp as 'C:\P7DB';
grant read, write on directory dp to public;
grant exp_full_database to username;
grant imp_full_database to username;
We are DB users (not DBAs) and have always used exp/imp before application upgrades.
I was googling around and read something like "Oracle Data Pump - Time to let go of Exp / Imp". It seems exp/imp is obsolete.
Our system doesn't have the "expdp" command:
> find . -name expdp >
Is this because our SQL*Plus is too old?
> sqlplus
SQL*Plus: Release 8.1.7.0.0 - Production on Tue May 29 16:05:28 2012
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Enter user-name: ^C^C
- Does our DBA need to give us privileges to run expdp/impdp?
- Is it true that an expdp/impdp dump ends up on the Oracle server (not the client machine)?
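On the last two points: expdp ships with 10g and later Oracle homes, so an 8.1.7 client will not have it, and with Data Pump the dump file is read and written by the database server through a directory object that a DBA must grant access to. A minimal sketch of that setup (object and path names are illustrative):

-- As a DBA on the server:
CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO appuser;

-- Afterwards, from any machine with a 10g+ client:
-- expdp appuser/password DIRECTORY=dp_dir DUMPFILE=app.dmp SCHEMAS=appuser
-- The file app.dmp is created in /u01/app/oracle/dpdump on the server,
-- not on the client machine.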
While trying to import a schema using Data Pump, I am facing the following issue: UDI-00018 - Import utility version cannot be more recent than the Data Pump server. The version information for the source and target DBs and the utilities follows:
Source DB server : 10.1.0.2.0
Export utility   : 10.1.0.2.0
Import utility   : 10.1.0.2.0

Target DB server : 10.1.0.2.0
Export utility   : 10.2.0.1.0
Import utility   : 10.2.0.1.0