We use Oracle 11gR2 on Windows 2008 R2. Due to an erroneous operation on the Oracle database, we ran into trouble where we could not log in to an application (Oracle Insight) running on Oracle 11gR2. From what I have read, after running ALTER DATABASE ADD SUPPLEMENTAL LOG DATA we can use LogMiner to find the SQL statements that were executed in the past from the SQL_REDO column of V$LOGMNR_CONTENTS, and the undo SQL statements to roll them back from the SQL_UNDO column of V$LOGMNR_CONTENTS. However, we are concerned about the adverse effects of enabling supplemental logging because this is a production environment. I have heard that the amount of redo generated will increase if supplemental logging is enabled.
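The flow I have in mind, based on the documentation, is roughly the following (the archived log file name is only a placeholder, and I assume the online catalog as the LogMiner dictionary):

-- enable minimal supplemental logging (the statement in question)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- register the redo/archived log to analyse (file name is a placeholder)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'C:\oracle\arch\arch_0001.arc', OPTIONS => DBMS_LOGMNR.NEW);

-- start LogMiner using the online catalog as the dictionary
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- redo and undo statements for the period covered by the registered logs
SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS;

-- release the LogMiner session
EXECUTE DBMS_LOGMNR.END_LOGMNR;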
Q1 - Roughly how much would the redo volume increase if supplemental logging is enabled? (I am concerned about redo log capacity and database performance.)
Q2 - Is it necessary to restart the DB instance for supplemental logging to take effect after executing ALTER DATABASE ADD SUPPLEMENTAL LOG DATA?
Q3 - Are there any other adverse effects we should be aware of when enabling supplemental logging?
Q4 - Is LogMiner usable if the DB is running in NOARCHIVELOG mode? My understanding is that it is possible if supplemental logging is enabled and we still have the redo logs we would like to analyse.
I renamed 100 tables and recreated them, and now I need to copy the supplemental logging information from the renamed tables to the new tables. The environment is Oracle 10g.
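What I am considering is to read the existing definitions from the dictionary and recreate them on the new tables; a rough sketch, with APP_OWNER and the OLD_ prefix only as placeholders for my actual schema and naming:

-- list the supplemental log groups defined on the renamed tables
SELECT g.table_name, g.log_group_name, g.log_group_type, c.column_name, c.position
  FROM dba_log_groups g
  LEFT JOIN dba_log_group_columns c
    ON c.owner = g.owner
   AND c.table_name = g.table_name
   AND c.log_group_name = g.log_group_name
 WHERE g.owner = 'APP_OWNER'                    -- placeholder schema
   AND g.table_name LIKE 'OLD\_%' ESCAPE '\'    -- placeholder naming pattern
 ORDER BY g.table_name, g.log_group_name, c.position;

-- then recreate each group on the corresponding new table, for example:
ALTER TABLE app_owner.new_table ADD SUPPLEMENTAL LOG GROUP lg_new_table (col1, col2) ALWAYS;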
What is the difference between running DBMS_STATS at table level and at partition level, and which gives the optimizer the best information? If table XXXX is partitioned into partitions 1 to 10, will gathering stats on table XXXX at whole-table level or at partition level give the best result for performance?
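For reference, this is how I understand the two levels would be gathered, driven by the GRANULARITY and PARTNAME parameters (schema, table and partition names are placeholders):

-- global plus partition-level statistics in one call
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'APP_OWNER',      -- placeholder schema
    tabname     => 'XXXX',
    granularity => 'ALL',            -- global, partition and subpartition stats
    cascade     => TRUE);
END;
/

-- statistics for a single partition only
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => 'APP_OWNER',
    tabname     => 'XXXX',
    partname    => 'P1',             -- placeholder partition name
    granularity => 'PARTITION',
    cascade     => TRUE);
END;
/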
I want to write a SQL query that fetches data from MANUAL_TEMP_MASTER and MANUAL_TEMP_DETL, but the PRICE_BKT_CDS values from MANUAL_TEMP_DETL should be displayed as columns. The output should look like the below:
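Something like the conditional-aggregation sketch below is the direction I am thinking of; the join column MASTER_ID, the value column PRICE and the bucket codes 'B1'/'B2' are placeholders, not my real column names:

SELECT m.master_id,
       MAX(CASE WHEN d.price_bkt_cds = 'B1' THEN d.price END) AS price_b1,
       MAX(CASE WHEN d.price_bkt_cds = 'B2' THEN d.price END) AS price_b2
  FROM manual_temp_master m
  JOIN manual_temp_detl   d ON d.master_id = m.master_id
 GROUP BY m.master_id;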
Can a table-level check constraint contain conditional logic (an IF/ELSE or CASE structure), and can its checks be limited to certain rows, like a WHERE clause inside the constraint? And can a table-level check constraint refer to a column in another table that should have the same value?
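To make the first part concrete, this is the kind of thing I mean (table and column names are made up). For the second part, as far as I know a check constraint cannot contain a subquery, so comparing against a column in another table would need a foreign key or a trigger instead:

-- conditional check implemented with a CASE expression
ALTER TABLE orders ADD CONSTRAINT chk_discount CHECK (
  CASE
    WHEN order_type = 'INTERNAL' THEN 1            -- no discount rule for internal orders
    WHEN discount_pct BETWEEN 0 AND 50 THEN 1      -- rule applies only to the remaining rows
    ELSE 0
  END = 1
);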
I am configuring a logical standby online. When I first executed DBMS_LOGSTDBY.BUILD,
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
it was blocked by another session, so I killed the holding session, but that did not help. Then I cancelled this step and executed it again. The error is:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
BEGIN DBMS_LOGSTDBY.BUILD; END;
*
ERROR at line 1:
ORA-01354: Supplemental log data must be added to run this command
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3669
ORA-06512: at "SYS.DBMS_LOGMNR_INTERNAL", line 3755
ORA-06512: at "SYS.DBMS_LOGMNR_D", line 12
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_INTERNAL_LOGSTDBY", line 370
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 157
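For completeness, this is the check and the statement I believe ORA-01354 is asking for on the primary (only a sketch; I have not confirmed it resolves the earlier hang):

-- check whether primary key / unique index supplemental logging is enabled
SELECT supplemental_log_data_pk, supplemental_log_data_ui FROM v$database;

-- add it if both columns show NO, then retry DBMS_LOGSTDBY.BUILD
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE) COLUMNS;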
I've been an Oracle DBA for almost twenty years. In that time I have worked in various organisations - sometimes as a "permanent" employee (although nothing is permanent these days) and other times as a contractor. I have been in my current role for the last six months, since being made redundant from my last position.
In this organisation we support a lot of Oracle databases on many different servers spanning various networks. There are two DBAs who have been here for several years.
However, they are both in the habit of logging in as SYSDBA for everything. They find it amusing that I refuse to do so for daily administration, as I know that this is bad practice. Imports and exports are even taken as SYS. Whenever I mention that it's bad practice they shrug it off, saying that they've been doing it for years with no problems.
So I have been scouring Tahiti and Metalink looking for a definitive document from Oracle that states just why this is such bad practice. I would imagine that an import as SYS could have catastrophic consequences for the data dictionary if you're not careful, but I can't think of much else.
We've all seen Michel's default "just don't do it" message whenever a poster mentions that they have done something as SYS.
I got a call today saying that some users were having a hard time logging on to Oracle using our in-house PowerBuilder-built application. I have Oracle 10.2.0 running on 32-bit Windows 2003. I poked around a little, looking in the dump directories, checking CPU utilization, and checking OEM to see if there were any obvious performance or blocking issues - I didn't see anything unusual. I logged in with SQL*Plus with no problem. Then I checked the audit logs. All we audit is user logons and logoffs. What I was seeing is that a handful of users were showing a logon event with a logoff (action 101) 10 to 15 seconds later. One user in particular was showing this over and over again.
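For reference, this is roughly the query I was using against the audit trail (the one-day window is just an example):

-- logon/logoff audit records, most recent first
SELECT username,
       action_name,
       timestamp      AS logon_time,
       logoff_time,
       returncode
  FROM dba_audit_session
 WHERE timestamp > SYSDATE - 1    -- last 24 hours, adjust as needed
 ORDER BY timestamp DESC;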
Materialized views are normally used for summarized data access.
CREATE MATERIALIZED VIEW mv_snapshot_A REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 20/1440 WITH PRIMARY KEY AS SELECT * FROM A;
That does not seem to be the case here, as the materialized view is just a full select. The overhead of the snapshot logs is a concern for this core table. Can we turn off logging in 10g? The materialized view is defined as fast refresh / build immediate.
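For context, fast refresh relies on a materialized view log on the master table, presumably created along these lines (a sketch; the actual log options are not shown here):

CREATE MATERIALIZED VIEW LOG ON A
  WITH PRIMARY KEY;    -- records the primary-key changes needed for fast refresh

Dropping that log would remove its overhead on the core table, but as far as I understand it would also force complete refreshes instead of fast ones.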
The main requirement here is to keep the snapshot refreshed every 15 minutes so that the users can see updated information (the flow of data from one location to another).
Users get a location-wise count of the data and can drill down further, for example into the location-wise, system-wise data count. As the base table is volatile, the materialized view is used so that, the moment the user clicks for location-wise details, the data stays static for 15 minutes and the user doesn't get confused.
We have our own portal page. We are trying to pass the username and password so users do not have to log in twice. This worked in version 4.1, but when we applied patch p13331096 to get to version 4.11.00.23 the form stopped working. Users have to log into our portal page and then into APEX. That does not make them happy.
I am trying to troubleshoot an issue with Oracle and want to see what commands are being sent by my process. From reading the documentation it appears there is client-side tracing for this, but I cannot get it to work. I added the following parameters to my SQLNET.ORA file, but no log file is created:
tnsping.trace_directory = C:\app\product\11.2.0\client_4\network\trace
tnsping.trace_level = admin
trace_level_client = user
trace_directory_client = C:\log
log_directory_client = C:\log
trace_unique_client = on
trace_timestamp_client = on
The TNSPING tracing works just fine; it logs the information I expect. The client tracing, however, never creates a file.
We have a database application in which tasks are performed frequently: we load data through SQL*Loader, we create DB instances, and we run several DML operations on the database.
Now, for such tasks in the application, we need to keep a logging track of each task performed, in PL/SQL procedures and packages.
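The pattern I am considering is an autonomous-transaction logging procedure writing to a dedicated table; the table, sequence and procedure names below are made up for illustration:

-- simple log table (names are placeholders)
CREATE TABLE task_log (
  log_id    NUMBER         NOT NULL,
  log_time  DATE           DEFAULT SYSDATE NOT NULL,
  task_name VARCHAR2(100)  NOT NULL,
  message   VARCHAR2(4000)
);

CREATE SEQUENCE task_log_seq;

CREATE OR REPLACE PROCEDURE log_task (p_task_name IN VARCHAR2, p_message IN VARCHAR2)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;   -- log rows survive even if the caller rolls back
BEGIN
  INSERT INTO task_log (log_id, task_name, message)
  VALUES (task_log_seq.NEXTVAL, p_task_name, p_message);
  COMMIT;
END log_task;
/

Each procedure in the packages would then call log_task at its start, at its end, and in its exception handlers.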
When I try to log into my DB with a specific user I get this message. Below is from the alert log. I can log in as SYSTEM just fine.
----- Error Stack Dump -----
ORA-00604: error occurred at recursive SQL level 1
ORA-01438: value larger than specified precision allowed for this column
ORA-06512: at line 2

Oracle 10g, OEL 5.5
I need to configure a simple standby database. I have followed this [URL]...-guard-setup-11gr2.php tutorial to do that. The problem is that the primary DB cannot log on to the standby DB. Information is provided below.
Primary DB: CentOS 6.4, Oracle 11gR2, ORACLE_SID=primdb1

SQL> SELECT MESSAGE FROM V$DATAGUARD_STATUS;

MESSAGE
--------------------------------------------------------------------------------
ARC0: Archival started
ARC1: Archival started
ARC2: Archival started
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC2: Becoming the heartbeat ARCH
ARC1: Beginning to archive thread 1 sequence 31 (336165-356856)
Error 12514 received logging on to the standby
PING[ARC2]: Heartbeat failed to connect to standby 'stbydb1'. Error is 12514.
ARC1: Completed archiving thread 1 sequence 31 (336165-
Can you take an incremental backup level 1 or level 0 without archivelogs?
The syntax would be: backup as compressed backupset cumulative level 1 database.
The reason I ask is that when I run backup as compressed backupset cumulative level 1 database plus archivelog it runs fine, but when I run backup as compressed backupset cumulative level 1 database it just hangs.
Between a statement-level and a row-level trigger, which trigger will execute first? We have BEFORE_UPDATE_ROWLEVEL_TRIGGER and BEFORE_UPDATE_STATEMENT_LEVEL_TRIGGER triggers on the PRODUCT table.
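To see the order for ourselves, a minimal pair of triggers that just print when they fire could be used (the bodies below are only a sketch):

CREATE OR REPLACE TRIGGER before_update_statement_level_trigger
BEFORE UPDATE ON product
BEGIN
  DBMS_OUTPUT.PUT_LINE('statement-level BEFORE UPDATE fired');
END;
/

CREATE OR REPLACE TRIGGER before_update_rowlevel_trigger
BEFORE UPDATE ON product
FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('row-level BEFORE UPDATE fired');
END;
/

-- with SET SERVEROUTPUT ON, run any UPDATE against PRODUCT: the statement-level
-- trigger fires once, before the row-level trigger fires for each affected row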
We have a single master schema that many developers access; they all share the same password.
Now I would like to trace all the changes made by each user, so I created individual users for everyone and granted them permission to access that schema. Is there a way to audit the changes made by each user in that particular schema?
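The direction I am looking at is standard auditing (the AUDIT_TRAIL parameter would need to be enabled first); the schema, table and user names below are placeholders:

-- audit DML on a specific table in the master schema, one record per statement
AUDIT INSERT, UPDATE, DELETE ON app_owner.orders BY ACCESS;

-- or audit DML statements issued by a particular developer account
AUDIT INSERT TABLE, UPDATE TABLE, DELETE TABLE BY dev_user1 BY ACCESS;

-- the recorded actions, including which user issued them, then appear here
SELECT username, obj_name, action_name, timestamp
  FROM dba_audit_trail
 ORDER BY timestamp DESC;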
I have a transaction table with some custom properties and two status columns. There are two different applications (.NET and a PL/SQL procedure) using the table. Both processes run in parallel, fetch records one by one, perform some calculation, and update the status column.
There is a strong chance that both applications will fetch the same record and try to update the same row, which will cause a lock. Can I use a row-level lock before the update in each application? Or is there any other method/process by which this can be handled?
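One option I am considering, sketched with made-up table and column names, is an explicit row lock with SKIP LOCKED so each application passes over rows the other has already claimed:

-- claim unprocessed rows and lock them; rows locked by the other session are skipped
SELECT id, status
  FROM transaction_table        -- placeholder table name
 WHERE status = 'NEW'
   FOR UPDATE SKIP LOCKED;

-- perform the calculation, then update and commit to release the lock
UPDATE transaction_table SET status = 'PROCESSED' WHERE id = :claimed_id;
COMMIT;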
I want to display the result with the respective levels. For example, P21 and P30 come under the first level; P22, P25, P31, P32 are 2nd level; P23, P24, P26, P27 are 3rd level; and P28, P29 are fourth-level ITEM_IDs.
I've already tried using the CONNECT BY PRIOR clause, but I still couldn't get the result.
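The shape of the query I am attempting is roughly this; the table name ITEMS and the parent column PARENT_ITEM_ID are assumptions, since I have not shown the actual structure:

SELECT item_id, LEVEL AS item_level
  FROM items                           -- placeholder table
 START WITH parent_item_id IS NULL     -- top-level rows (e.g. P21, P30)
 CONNECT BY PRIOR item_id = parent_item_id
 ORDER SIBLINGS BY item_id;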