Redo log information can be transmitted from the primary database to the standby database in one of two ways: by the ARCH process or by the LGWR process.
1. When is ARCH involved?
2. When is LGWR involved?
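For reference, the choice is made per destination with the LOG_ARCHIVE_DEST_n attributes (pre-11g syntax; the service name stby is a placeholder):
[code]
LOG_ARCHIVE_DEST_2='SERVICE=stby ARCH'        -- ARCH ships each log at log switch
LOG_ARCHIVE_DEST_2='SERVICE=stby LGWR ASYNC'  -- LGWR ships redo as it is generated
[/code]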
For the FAL parameters, should I enter the net service name, the DB name, or the service name? That is, what goes in each of:
FAL_CLIENT='which one?'
FAL_SERVER='which one?'
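Both parameters take Oracle Net service names (TNS aliases), not DB names. A minimal sketch, run on the standby, where the alias names PRIM_TNS and STBY_TNS are placeholders:
[code]
ALTER SYSTEM SET FAL_SERVER='PRIM_TNS';  -- TNS alias pointing at the primary
ALTER SYSTEM SET FAL_CLIENT='STBY_TNS';  -- this standby's own TNS alias
[/code]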
In Statspack I find a lot of "ARCH wait on SENDREQ" wait events. The database is configured with the LGWR ASYNC attribute. In the documentation I read that the "ARCH wait" events may happen when the database is configured with the ARCH attribute. How is it possible that I get these "ARCH wait" events, and do they have any effect on the online performance of the database? Is it possible that a user who saves data in the database has to wait on these "ARCH wait" events?
I have an RMAN archivelog backup set up to run every 5 hours. It seems to run perfectly and backs up the archives, but it is not removing them from the archive log location. I have a DR setup for this environment; how does RMAN remove archives in a DR environment?
Here is the information.
[code]
skipping inaccessible file /PRD1/arch02/arch/PRD1_1_8349_790425892.arc
RMAN-06061: WARNING: skipping archived log compromises recoverability
skipping archived logs of thread 1 from sequence 8440 to 8478; already backed up
channel disk1: starting compressed archived log backup set
channel disk1: specifying archived log(s) in backup set
input archived log thread=1 sequence=8479 RECID=151073 STAMP=816692206
input archived log thread=1 sequence=8480 RECID=151076 STAMP=816695235
[/code]
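If the goal is for RMAN to delete logs only after the standby has consumed them, a hedged sketch (11g syntax; in 10gR2 the policy reads APPLIED ON STANDBY):
[code]
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;  -- deletion honours the policy
[/code]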
I have an environment in which Oracle 10/11 databases are backed up with RMAN and Tivoli Storage Manager (Data Protection for Oracle). There are several databases, and for every one there is a daily full backup and an hourly archive log backup.
Sometimes, when the full DB backup takes longer (up to 4 hours), archive log backups are missed, as the TSM node cannot perform two backups at a time. I would like to avoid those missed backups.
Option A was to delete the association of the archive log scheduler during the full backup. But when removing the association we lose the historical data about the backup, and we need that historical data to create weekly/monthly/quarterly statistics of completed backups. We need to stay at 99% completed.
Option B was to create two nodes in TSM (TDPO): one would do only the full backup and the other only the archive log backup. So the problem moves to RMAN. But an RMAN specialist told me this may cause problems with the full backup: during a full backup, archive logs are also backed up (at the start and end), so there might be a problem accessing a file that is in use by the other process, and that could break the full backup, which is exactly what we want to avoid.
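The start-and-end behaviour the specialist describes corresponds to a full backup run like this (a sketch; whether two RMAN clients actually collide on the same log file depends on the configuration):
[code]
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;  -- archives and backs up logs before
                                        -- and after the database backup
[/code]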
We had an issue last week where a session with a very basic SQL query locked up the database, spiking the CPU at 100%. When we killed the session, the lock would just jump to another session, and so on. We finally had to restart the database since our clients were being kicked out. After the restart, LGWR ended up locking and held the CPU at 85-95%. The archive logs were switching every 5 minutes, when normally it would be every 45 minutes. We spoke with Oracle Support, but they just brushed the issue off, saying it was a hardware issue, and were not able to provide any kind of backing for that.
I have configured Data Guard on the same server, running Windows XP. It is not able to apply logs to the standby database. When I queried the following, I got the following errors...
Is it possible to have multiple LGWR processes for a single database? If so, how would the multiple processes write from the redo log buffer to the online redo log file?
1) On the primary database, a standby redo log is required for switchover, and on the standby database a standby redo log is required for:
--Real-time apply
--Maximum Protection or Maximum Availability mode
Am I correct?
2) My database is in Maximum Performance mode. I set up the following entry in init.ora: LOG_ARCHIVE_DEST_2='service=standby LGWR ASYNC'. My question is: do I need standby redo log files on the standby database in order to use the LGWR transport (LGWR ASYNC) mode from the primary? Without standby redo logs on the standby database, can it transport redo data from primary to standby using LGWR transport mode (LGWR ASYNC)? (See the sketch after question 3.)
3) I have changed the LOG_ARCHIVE_DEST_n initialization parameter from the "ARCH" attribute to the "LGWR" attribute, but I have not changed the protection mode. I would like to know whether there is any impact on the behavior of the database if we do not change the mode from "MAXIMUM PERFORMANCE" to "MAXIMUM AVAILABILITY".
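Regarding question 2: standby redo logs on the standby are what RFS writes into when redo arrives via LGWR transport; they are strongly recommended for LGWR transport and required for real-time apply (without them, behaviour varies by version). A minimal sketch of adding one on the standby, where the group number, path and size are assumptions (size them like the online logs):
[code]
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4
  ('C:\oradata\stby\srl04.log') SIZE 50M;
[/code]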
What does the LGWR process write to disk before DBWn flushes the data?
I think it is a kind of "ID" (maybe the ROWID?) of the data blocks, present in the block headers, that SMON uses to locate and exclude the uncommitted data during the roll-forward process.
I have been implementing a script to change a lot of data in a production database. Because of this, the database will be 100% dedicated to the execution of that script, in the sense that nothing else will be running in that period (the application will be stopped).
What can I do to improve the performance of that execution? Is there any Oracle manual online for this type of problem? I don't know if it's all possible, but I'm thinking of things like disabling the locking mechanism (so that, if possible, I could run many processes in parallel instead of one), disabling index maintenance during the process, and disabling constraints.
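A sketch of the usual bulk-change tactics (the object names are hypothetical, and locking itself cannot be disabled, but the rest can be; test on a copy and take a backup first):
[code]
ALTER SESSION ENABLE PARALLEL DML;                 -- allow parallel DML
ALTER TABLE big_tab DISABLE CONSTRAINT big_tab_fk; -- skip FK checking
ALTER INDEX big_tab_ix UNUSABLE;                   -- stop index maintenance
ALTER SESSION SET skip_unusable_indexes = TRUE;
-- ... run the bulk script here ...
ALTER INDEX big_tab_ix REBUILD NOLOGGING PARALLEL;
ALTER TABLE big_tab ENABLE CONSTRAINT big_tab_fk;
[/code]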
I am developing some automated test packages for my PL/SQL Packaged code. Going forward I can code the test package in conjunction with the code but I have some historic packages that I would like to develop these test packages for.
To save time I would like to employ the Oracle data dictionary views in order to construct the framework for my test package. This includes using SQL to get a list of the procedures/functions within the package in order to create the test procedures (spec and body). I can do this in a basic way using the user_procedures view with something like...
[code]
SELECT 'PROCEDURE test_' || LOWER(procedure_name)
       || ' (p_result OUT VARCHAR2) IS BEGIN JTA.ACCOUNT_PROFILE_MAINT.'
       || procedure_name || '; END test_' || LOWER(procedure_name) || ';'
  FROM user_procedures
 WHERE object_name = 'ACCOUNT_PROFILE_MAINT'
   AND subprogram_id != 0
 ORDER BY subprogram_id;
[/code]
However, the above only really works (in simplistic form, without parameters) for procedures within the package. I would also like to be able to determine whether the listed program unit is actually a function or a procedure (so that I can alter the syntax accordingly and generate a correctly formatted string calling the program unit).
So, initially, how do I determine which type of package program unit I have (procedure or function)? Do I need to go to all_source to get this information, or are there other views available that I can join to?
Eventually I would like to extend this to automatically include any parameters in the generated calling string. Again, is there any option apart from all_source to get this information?
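One option that avoids parsing all_source is user_arguments, which also exposes each parameter's name, position and datatype for building the call string. A sketch, assuming the same package: a packaged function has a row with POSITION = 0 (its return value), a procedure does not:
[code]
SELECT p.procedure_name,
       CASE WHEN EXISTS (
              SELECT 1
                FROM user_arguments a
               WHERE a.object_name  = p.procedure_name
                 AND a.package_name = p.object_name
                 AND a.position     = 0)
            THEN 'FUNCTION' ELSE 'PROCEDURE'
       END AS unit_type
  FROM user_procedures p
 WHERE p.object_name = 'ACCOUNT_PROFILE_MAINT'
   AND p.subprogram_id != 0;
[/code]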
The SQL cache is getting overwritten. I would like to capture information from AWR snapshots and feed it (as a workload) into DBMS_ADVISOR, but I can't see where that's possible (other than manually creating my workload from AWR information).
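One hedged possibility (11g-style; the snapshot IDs 100/110 and the set name AWR_STS are placeholders): pull the workload out of AWR into a SQL tuning set with DBMS_SQLTUNE, which the tuning advisors can then consume:
[code]
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'AWR_STS');
  OPEN cur FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(100, 110)) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name     => 'AWR_STS',
                           populate_cursor => cur);
END;
/
[/code]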
How can I retrieve the tables that have composite primary keys where only one of the primary key columns is a foreign key to another table?
Like:
[code]
CREATE TABLE club (
  clubid NUMBER,
  name   VARCHAR(20) NOT NULL,
  PRIMARY KEY (clubid)
);

CREATE TABLE team (
  teamid   NUMBER(10),
  clubid   NUMBER(10),
  teamname VARCHAR(10) NOT NULL,
  PRIMARY KEY (teamid, clubid),
  FOREIGN KEY (clubid) REFERENCES club (clubid)
);
[/code]
So in this case, the team table should be returned.
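A sketch against the data dictionary (user_ views; swap in all_/dba_ plus an owner filter as needed): primary keys with more than one column, where exactly one of those columns also belongs to a foreign key on the same table:
[code]
SELECT pk.table_name
  FROM user_constraints pk
 WHERE pk.constraint_type = 'P'
   -- composite PK: more than one column
   AND (SELECT COUNT(*) FROM user_cons_columns pc
         WHERE pc.constraint_name = pk.constraint_name) > 1
   -- exactly one PK column also appears in an FK of the same table
   AND (SELECT COUNT(DISTINCT pc.column_name)
          FROM user_cons_columns pc
          JOIN user_cons_columns fc
            ON fc.table_name  = pc.table_name
           AND fc.column_name = pc.column_name
          JOIN user_constraints fk
            ON fk.constraint_name = fc.constraint_name
           AND fk.constraint_type = 'R'
         WHERE pc.constraint_name = pk.constraint_name) = 1;
[/code]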
1) How do you view the value of a parameter being used by the instance? SHOW PARAMETER...?
2) How do you get information about hidden parameters?
3) What is the database object that stores information related to the various types of DB connections over the network?
4) How do you verify since when a DB session has been running?
5) How do you verify the originating machine details of a database session?
6) How do you verify the name of the program that a DB session is running?
7) What is the naming convention of base tables, and where is the information about base tables stored?
8) How are dynamic views created, and where is the information about dynamic views stored?
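Hedged sketches for a few of these (SQL*Plus; the SID is a placeholder):
[code]
SHOW PARAMETER optimizer_mode          -- (1) value in use by the instance
SELECT logon_time, machine, program    -- (4)(5)(6) session start time,
  FROM v$session                       -- originating machine, and program
 WHERE sid = 123;
[/code]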
We are running Oracle 10g. I need to pull the DDL information from our Oracle tables. The following SQL statement returns the result as a "HUGECLOB". Is there a way to convert the result to text in the VARCHAR2 data type? Since the tables being processed have numerous partitions, the DDL information for them is quite large; therefore, using substring would not be a viable alternative.
[code]select DBMS_METADATA.GET_DDL('TABLE','EMPLOYEE') from DUAL; [/code]
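The full DDL won't fit in a single VARCHAR2, but one hedged workaround is to walk the CLOB in chunks with DBMS_LOB (a sketch; the 4000-byte chunk size is an assumption, and long PUT_LINE output needs 10gR2 or later):
[code]
DECLARE
  v_ddl CLOB;
  v_len PLS_INTEGER;
  v_pos PLS_INTEGER := 1;
BEGIN
  v_ddl := DBMS_METADATA.GET_DDL('TABLE', 'EMPLOYEE');
  v_len := DBMS_LOB.GETLENGTH(v_ddl);
  WHILE v_pos <= v_len LOOP
    -- read and emit one VARCHAR2-sized piece at a time
    DBMS_OUTPUT.PUT_LINE(DBMS_LOB.SUBSTR(v_ddl, 4000, v_pos));
    v_pos := v_pos + 4000;
  END LOOP;
END;
/
[/code]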
I look after a database that contains GIS mapping data. We do not use Oracle Spatial; it's just a plain Oracle Standard Edition database. It is running in NOARCHIVELOG mode (I know it's not a good idea, but that will be sorted when our new Sun T4-1 arrives).
There are only a couple of users who actually edit data in the database, but about 100 simultaneous users who access it. In day-to-day use we have no performance issues. The DB has three 50MB redo log groups, and these switch about every hour or so during normal use.
Every few weeks we do a bulk update of our underlying map data. This involves loading about 4GB of data into the database (which is about 15GB in total). It takes about 5 hours, and whilst I'm sure our old Sun V240 server's lack of power is a substantial cause, I think the lack of redo space makes matters worse. Last time we did this, the system clocked just over 200 redo log switches in 5 hours, and there were lots of "checkpoint not complete" messages in the alert log too.
The software we use to load the map data doesn't allow the data to be loaded with a nologging switch.
I could resize the redo logs, but if I size them for the update workload - 3 x 500MB - we'll have some days where we don't get a redo log switch at all. Is this necessarily a problem?
The alternative I'm thinking of is: prior to performing this update, add an extra redo log group with a 1GB file, run the update, then remove the redo log group and delete the file afterwards. Is there anything wrong with this approach?
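A sketch of that add-then-drop approach (the group number and path are assumptions):
[code]
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04.log') SIZE 1G;
-- ... run the bulk map load ...
ALTER DATABASE DROP LOGFILE GROUP 4;  -- only once group 4 is INACTIVE
-- then delete the file at the OS level
[/code]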
Sequence NEXTVAL is a pseudocolumn, but where is the value of NEXTVAL stored, and why must we first initialize the sequence with NEXTVAL before we can use CURRVAL in our session?
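The behaviour is easy to demonstrate (seq_demo is a hypothetical sequence; CURRVAL is a session-scoped value, which is why it is undefined until the session's first NEXTVAL call):
[code]
SELECT seq_demo.CURRVAL FROM dual;  -- ORA-08002: CURRVAL not yet defined
SELECT seq_demo.NEXTVAL FROM dual;  -- initializes CURRVAL for this session
SELECT seq_demo.CURRVAL FROM dual;  -- now returns that same value
[/code]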
Is there any way to capture version-wise object (package) information from the Oracle database?
Example: I have package P1, which was created on 01-NOV-2012 as version 1. Ten days later the same package was updated to version 2 with some enhancements. The package keeps being updated like that according to the latest requirements.
Now I need to capture the complete audit-trail history of the package, with the version-specific changes and when each change occurred. How can I achieve this?
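Oracle doesn't version packages itself, but a hedged sketch is a schema-level DDL trigger that logs each change (the table and trigger names are assumptions; the source text at each point can then be pulled from USER_SOURCE and archived alongside):
[code]
CREATE TABLE pkg_change_log (
  obj_name   VARCHAR2(128),
  obj_type   VARCHAR2(30),
  changed_by VARCHAR2(128),
  changed_on DATE
);

CREATE OR REPLACE TRIGGER trg_pkg_change_log
AFTER CREATE ON SCHEMA   -- CREATE OR REPLACE fires this too
BEGIN
  IF ora_dict_obj_type IN ('PACKAGE', 'PACKAGE BODY') THEN
    INSERT INTO pkg_change_log
    VALUES (ora_dict_obj_name, ora_dict_obj_type, ora_login_user, SYSDATE);
  END IF;
END;
/
[/code]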
I am trying to set up logon/logoff auditing for our databases, which are on 9i and 10g on Sun Solaris servers. I have been asked to turn on auditing and send the audit data to syslog. How exactly do you do that?
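A hedged sketch for 10gR2 and later (9i has no AUDIT_SYSLOG_LEVEL parameter, so there you would have to forward the OS audit files to syslog yourself); the facility.level value is an assumption and must match an entry in /etc/syslog.conf:
[code]
ALTER SYSTEM SET audit_trail = OS SCOPE = SPFILE;
ALTER SYSTEM SET audit_syslog_level = 'LOCAL1.WARNING' SCOPE = SPFILE;
-- restart the instance, then capture logons/logoffs:
AUDIT SESSION;
[/code]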