alter session set events 'immediate trace name treedump level 51786'
/u01/app/oracle/admin/oracl/udump/oracl_ora_2679.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1
1) Can we fetch 'select statements' from the redo log files through the LogMiner utility or any other tool? (I think the redo log files contain only insert, update, delete and DDL/DCL commands.)
2) If the answer to the above is "No", then how can I fetch all select statements fired on the system for a day or a particular time window? (Setting sql_trace may be one option, but can it be done at the system level?)
SQL> alter system set audit_trail=OS SCOPE=SPFILE;
System altered.
SQL> STARTUP FORCE
ORACLE instance started.

Total System Global Area  171966464 bytes
Fixed Size                  2019320 bytes
Variable Size             113246216 bytes
Database Buffers           50331648 bytes
Redo Buffers                6369280 bytes
Database mounted.
Database opened.
SQL> show parameter audit
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/admin/orcl/adump
audit_sys_operations                 boolean     FALSE
audit_syslog_level                   string
audit_trail                          string      OS
SQL>
SQL> create user apexos identified by abc1;
User created.
SQL> grant connect, resource to apexos;
Grant succeeded.
SQL> audit select table, insert table by apexos by access;
Audit succeeded.
SQL> audit table by apexos by access;
SQL> SELECT audit_option, failure, success, user_name FROM dba_stmt_audit_opts;
AUDIT_OPTION                             FAILURE    SUCCESS    USER_NAME
---------------------------------------- ---------- ---------- ------------------------------
TABLE                                    BY ACCESS  BY ACCESS  APEXOS
SELECT TABLE                             BY ACCESS  BY ACCESS  APEXOS
INSERT TABLE                             BY ACCESS  BY ACCESS  APEXOS
SQL> conn apexos/abc1
SQL> CREATE TABLE TAB1 (ID NUMBER, NAME VARCHAR2(20));
Table created.
SQL> insert into tab1 values (10, 'Michel');
1 row created.
SQL> insert into tab1 values (30, 'Andrew');
1 row created.
SQL> select * from tab1;
        ID NAME
---------- --------------------
        10 Michel
        30 Andrew
SQL> /
        ID NAME
---------- --------------------
        10 Michel
        30 Andrew
SQL>
SQL> select username, timestamp, action_name, action, ses_actions, sql_text
  2  from user_audit_trail where username='APEXOS';
no rows selected
SQL>
I also did not find any file containing the above statements as audit records in /u01/app/oracle/admin/orcl/adump.
There are numerous old files in the /u01/app/oracle/admin/orcl/adump location, but when I executed the SQL statements, no new audit file was generated there.
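One detail that may matter here (only a guess based on the settings shown above): with audit_trail=OS the records are written only to files under audit_file_dest, and USER_AUDIT_TRAIL / DBA_AUDIT_TRAIL stay empty. Switching to the database audit trail makes the records queryable:

-- write audit records into SYS.AUD$ so they appear in the *_AUDIT_TRAIL views
alter system set audit_trail=DB scope=spfile;
-- restart, re-run the audited statements, then:
select username, timestamp, action_name, ses_actions
  from dba_audit_trail
 where username = 'APEXOS';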
I saw that large .core files are generated in the $ORACLE_HOME/dbs folder, even though no dump destination parameter points to the dbs folder. How can I check what is causing these files to be generated?
I am facing a problem with the user_dump_dest directory. I have noticed that there are a lot of very large trace files (many MBs each). I cleaned it out, and after 4 days there are 40 GB of trace files again.
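When user_dump_dest keeps filling up, a first step is usually to check whether tracing has been switched on instance-wide or for specific sessions/services. A minimal sketch of those checks, using standard 10g/11g parameters and views:

-- instance-wide settings that drive trace generation and size
select name, value
  from v$parameter
 where name in ('sql_trace', 'user_dump_dest', 'max_dump_file_size');

-- traces enabled through DBMS_MONITOR (service-, module- or session-level)
select trace_type, primary_id, waits, binds
  from dba_enabled_traces;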
I am in charge of several instances located on a CentOS Linux server, virtualized in an ESX 3.5 environment.
From time to time (every 4 to 5 days), I get some errors in the alert.log. The last occurrence was last night:
Corrupt block relative dba: 0x01004e12 (file 4, block 19986)
Fractured block found during buffer read
Data in bad block:
 type: 6 format: 2 rdba: 0x01004e12
 last change scn: 0x0000.131aaa5b seq: 0x2 flg: 0x04
...
We are doing a user-managed backup (with BEGIN/END BACKUP) every night at 8PM, ending at around 9PM, so the fractured blocks never occur during the backups. At 1AM the maintenance window opens, which explains the GATHER_STATS_JOB activity.
When I check for corruption early in the morning, I am never able to reproduce the problem. DBV completes without issues. We have never had a problem with the data itself, whether the reported block belongs to a table or an index.
I would like to know what could cause these corruptions, and how to stop them?
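A transient "fractured block found during buffer read" that later verifies clean often points at something below the database (I/O or storage layer) rather than a persistent block problem, but it can help to identify which segment owns the reported block while the error is fresh. A minimal check based on the dba reported above (file 4, block 19986):

-- which segment owns the block reported in the alert.log?
select owner, segment_name, segment_type
  from dba_extents
 where file_id = 4
   and 19986 between block_id and block_id + blocks - 1;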
I need to display the listener parameters/status and the Read Only status of the DB.
I know those values can be obtained from the command line, but can the listener and read-only information be obtained through SQL/PL*SQL, so that I can get them with a query against the DB?
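The read-only state is exposed through V$DATABASE, and the listener settings the instance knows about are visible in V$PARAMETER; the actual up/down status of the listener itself lives outside the database and is normally checked with lsnrctl. A sketch of what is reachable from SQL:

-- open mode (READ WRITE / READ ONLY / MOUNTED) and role of the database
select name, open_mode, database_role from v$database;

-- listener-related parameters known to the instance
select name, value
  from v$parameter
 where name in ('local_listener', 'remote_listener');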
I am facing a strange issue on 11gR2 (OEL 5.4): a standby database that is open read-only with apply is throwing ORA-16000: database open for read-only access during SELECTs.
Here is snapshot of errors.
ORA-00604: error occurred at recursive SQL level 1
ORA-16000: database open for read-only access
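ORA-16000 raised at recursive SQL level 1 usually means the SELECT itself triggered something that tried to write on the read-only standby. One common and easy-to-check suspect is database auditing writing to SYS.AUD$; this is only a guess, not a diagnosis:

-- auditing settings that would force recursive writes on a read-only database
select name, value
  from v$parameter
 where name in ('audit_trail', 'audit_sys_operations');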
I have created a user named "Raja" with a default tablespace of "Raja_TBS", which has a datafile "rajadata.dbf". I have taken the tablespace offline:
SQL> alter tablespace raja_tbs offline;
Tablespace altered.
When I take a tablespace offline, that means I cannot read from or write to it and it is unavailable to users. Yet I am still able to create a table on "Raja_TBS" while it is offline.
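Whether this is expected depends on whether the CREATE TABLE ever needed to allocate an extent in the offline tablespace (for example, with deferred segment creation the statement is a dictionary-only operation). A couple of quick checks, purely as a sketch, assuming the tablespace name is stored as RAJA_TBS:

-- is the tablespace really offline, and did the new table get a segment?
select tablespace_name, status from dba_tablespaces where tablespace_name = 'RAJA_TBS';
select segment_name, segment_type from dba_segments where tablespace_name = 'RAJA_TBS';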
Is it possible to determine whether a dump file was created using Data Pump export or the original export utility just by looking at the dump file? If yes, how?
Why I am asking such a question: original export and Data Pump export both create a dump file with the same extension, filename.dmp. So to avoid confusion during import, I would like to determine by which method the dump file was created.
This would also be useful in the scenario where a customer sends me only the dump file and asks me to import it into a target database (maybe the customer doesn't know with which method the dump file was created).
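One option worth trying (assuming the file is reachable through a directory object; DP_DIR and the file name below are placeholders) is DBMS_DATAPUMP.GET_DUMPFILE_INFO, which reads the dump file header and reports the file type, distinguishing a Data Pump dump from an original export dump:

set serveroutput on
declare
  l_info     ku$_dumpfile_info;
  l_filetype number;
begin
  -- read the dump file header; directory and file name are placeholders
  dbms_datapump.get_dumpfile_info(
    filename   => 'customer_file.dmp',
    directory  => 'DP_DIR',
    info_table => l_info,
    filetype   => l_filetype);
  -- filetype 1 = Data Pump dump, 2 = original export dump, 0 = unknown
  dbms_output.put_line('File type: ' || l_filetype);
end;
/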
I have a daily hot backup using the expdp command on Oracle 10gR2 installed on a Linux server, and I'm trying to move this dump file to another directory on a Windows Server 2003 machine over the network, using an FTP script that will run automatically after the export process finishes.
select sum(bytes)/(1024*1024*1024) "GB" from dba_segments where owner='JACK';
The above select query gives an output of 15 GB for the schema size. When I perform an export of the same schema, the dump file generated is 2 GB. What is the difference between the two scenarios, and how can there be such a variation in file size?
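Part of the gap is usually explained by what DBA_SEGMENTS actually measures: it reports allocated space, including index segments and blocks that are allocated but not full of row data, while an export writes only the table/LOB data plus DDL. A breakdown by segment type shows how much of the 15 GB is not row data at all:

-- allocated space per segment type; index space is exported only as DDL
select segment_type,
       round(sum(bytes)/1024/1024/1024, 2) as gb
  from dba_segments
 where owner = 'JACK'
 group by segment_type
 order by gb desc;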
I am in the process of upgrading our 9i DB to 10g. As they are on different servers, I have installed 10g on the new server and applied the latest patchset, 10.2.0.4.
I am creating the production database and importing the 9i dump file into it. Now I will be testing the whole application that uses this database. After a week, I need to take the latest 9i dump and import it into the new 10g DB.
Do I need to just import the latest 9i dump into the 10g DB, or do I need to do anything else?
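Beyond the import itself, a couple of routine follow-up steps are usually worth considering after refreshing a 10g database from a 9i dump. This is a generic sketch, not a required procedure, and the schema name is a placeholder:

-- recompile objects left invalid by the import
exec utl_recomp.recomp_serial
-- gather fresh optimizer statistics for the imported schema
exec dbms_stats.gather_schema_stats('APP_SCHEMA')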
I am facing a problem importing DMP file in 11g. While importing it gives me error not responding. I have to attached the jpg file for that to clear you my point whats wrong is going during import. My Dump is on 9i i want to import that on 11G R2.
We have two databases running on 10.2.0.4 and 9.2.0.8. Both have the same unpartitioned table of about 80 GB. I am exporting the table on 10g using parallel=8 and a dumpfile with the %U option. That took around 4 hours to export the table.
On 9.2.0.8, I am exporting using the parameters below, which takes around 5 hours.
buffer=2000000 recordlength=64000
What options can I try to speed up the export in both versions?
I want to import a dump file, excluding 2 tables. The dump file contains 100 tables plus indexes and constraints, so out of the 100 tables I want to import only 98.
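If the dump was taken with Data Pump, one way to skip specific tables on import is a NAME_EXPR metadata filter (the impdp EXCLUDE parameter does the same from the command line). A rough PL/SQL sketch, with the directory, file and table names as placeholders:

declare
  h      number;
  state  varchar2(30);
begin
  h := dbms_datapump.open(operation => 'IMPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(
    handle    => h,
    filename  => 'full_schema.dmp',
    directory => 'DP_DIR',
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  -- skip the two unwanted tables; everything else in the dump is imported
  dbms_datapump.metadata_filter(h, 'NAME_EXPR',
                                'NOT IN (''TAB_ONE'',''TAB_TWO'')', 'TABLE');
  dbms_datapump.start_job(h);
  dbms_datapump.wait_for_job(h, state);
  dbms_output.put_line('Job finished: ' || state);
end;
/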
I want to take a schema-level export. The schema size is 115 GB. Do we require the same amount of space to be available on the server side (where we are writing the dump) as the schema size, or is less or more space required?
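As a rough upper bound, the dump usually needs less space than the schema's allocated size, because indexes are written only as DDL and allocated-but-unused blocks are not exported. A quick estimate from the dictionary (the schema name is a placeholder); Data Pump's own ESTIMATE_ONLY=Y gives a more accurate figure:

-- approximate upper bound on exported data: everything except index segments
select round(sum(bytes)/1024/1024/1024, 1) as est_gb
  from dba_segments
 where owner = 'SCHEMA_NAME'
   and segment_type not like '%INDEX%';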
I tried to import into 11g a dump that was taken in Oracle 9i. The import started but hangs after some time; to be exact, it only checks the character set of the DBs and then hangs. Let me know if there are any specific procedures to import a dump from 9i into 11g directly.
I got:
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31619: invalid dump file "C:P7DBfpac052912_dp01.dmp"
Done in AIX:
create directory dp as '/bak';
grant read, write on directory dp to public;
grant exp_full_database to username;

Done in Windows:
create directory dp as 'C:P7DB';
grant read, write on directory dp to public;
grant exp_full_database to username;
grant imp_full_database to username;