How To Use Perl DBI Script To Poll Logs From Oracle Databases
Jul 27, 2010
I have two separate servers, each with an Oracle database installed. I am trying to pull Oracle logs from these two databases to my main Linux server using Perl DBI. On this Linux server I have one dbipoll.pl script and two wrapper scripts that pass parameters in to dbipoll.pl.
Here is the sample wrapper script that I've used to call dbipoll.pl (a separate wrapper script with different parameters is used for each database):
As you can see, the script should retrieve the logs, write them to oracledb1.log, and update countfile1 with the latest timestamp. The wrapper scripts are scheduled to run at 5-minute intervals.
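Conceptually, the polling query that dbipoll.pl runs against each database should be something along these lines (the log table, columns and bind variable here are only placeholders, not the real ones):

SELECT event_time, message
FROM app_event_log                                   -- placeholder log table
WHERE event_time > TO_DATE(:last_time, 'YYYY-MM-DD HH24:MI:SS')
ORDER BY event_time;
-- the newest event_time retrieved is then written back to the countfile for the next run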
The first wrapper script, which I've named call_dbipoll1.sh, runs fine. I am able to retrieve events, and both oracledb1.log and countfile1 are updated successfully.
However, the second wrapper script, call_dbipoll2.sh, succeeds only on the first run. On subsequent runs I've noticed that countfile2 does not get updated with the latest timestamp (i.e. the file returns to blank) and oracledb2.log returns to blank as well, so I am unable to pull any new events after that. I am unable to determine what went wrong here.
Here is a sample of my second wrapper script, which tries to retrieve events from the second database:
I am trying to load data into various tables through a Perl script using SQL*Loader. Log files are created which say the rows were loaded successfully, but there is no data in the database. Is there any way of explicitly issuing a commit with the SQL*Loader command (other than the ROWS option; I have tried that as well, with rows=1, but it doesn't work)?
I have the task of migrating entire databases (an exact copy is to be moved to another server); the current server is going to be reformatted. After I performed the following steps, the sizes of the four tablespaces (databases) match, but I am facing an issue where some default tablespaces, i.e. TEMP and SYSTEM, do not match.
TEMP tablespace: current server - 4.0 (approximately); migrating server - 160 MB
SYSTEM tablespace: current server - 580 MB; migrating server - 220 MB
I have also checked that the tables match across the four databases. Please provide the correct solution or method.
Steps done for migrating (by me):

EXPORTING DATA USING DATA PUMP
1. From the command prompt: MKDIR c:\oraclexe\app\tmp
2. From the SQL prompt: conn system/kotak;
3. create or replace directory dmpdir as 'c:\oraclexe\app\tmp';
4. grant read, write on directory dmpdir to kotak;
5. From the command prompt:
expdp system/kotak@xe full=Y directory=dmpdir dumpfile=xe.dmp logfile=expdpxe.log

IMPORTING DATA USING DATA PUMP (on the other server machine)
1. From the SQL prompt: conn system/kotak;
2. create or replace directory dmpdir as 'c:\oraclexe\app\tmp';
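Just as an illustrative check (not part of the steps above), the space actually used per tablespace can be compared on both servers with a query like this against DBA_SEGMENTS:

SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) AS used_mb
FROM dba_segments
GROUP BY tablespace_name
ORDER BY tablespace_name;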
I have cloned one database into another using the DBMS_METADATA API (exporting the metadata in XML form and recreating it on the destination). I need to synchronize the two periodically, which means I need to update the XML to keep the databases in sync.
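For reference, extracting a single object's metadata as XML with DBMS_METADATA looks roughly like this (the object type, name and schema are only examples):

SET LONG 2000000
SELECT DBMS_METADATA.GET_XML('TABLE', 'EMPLOYEES', 'HR') FROM dual;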
Is anyone using ShadowProtect to back up their Oracle databases? What are your experiences, and what are the pros and cons of using ShadowProtect as the one and only backup/recovery tool for Oracle databases? Can ShadowProtect potentially replace the commonly used Oracle backup/recovery tools?
The reason I raise this question is that my boss has the idea that ShadowProtect is THE best backup/recovery tool. Personally I don't think so, because there can be cases where we only need to recover the database and not the whole OS.
I have to migrate two different databases to Oracle. I created two migration repositories, one for each, to do the migration, and the migration is done. But I would like to know whether it could be done with one migration repository, and if so, what the best way to do it is. The data in the two databases is different, but the tables and stored procedures are 99% the same.
How can we bring down the databases in an Oracle Fail Safe environment?
We have one database X on two Windows servers, A and B, in an Oracle Fail Safe environment. What procedure should we follow to bring down database X?
Today I was struggling to bring down the database because it kept coming back up automatically once it was brought down. What procedure should we follow to bring down a database in an OFS environment?
I am using an Oracle database server and I want to show the existing databases on that server. Is there any SQL*Plus command to list all the databases on the server?
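From inside a session, queries such as the following show the database and instance you are currently connected to (this is only a sketch; finding every database on the host is a different matter, e.g. the listener configuration or oratab):

SELECT name FROM v$database;
SELECT instance_name, host_name, version FROM v$instance;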
I want to upgrade all the Oracle 10g Release 2 (10.2.0) master-site (bi-directional) databases to the latest Oracle 11g release. We are using bi-directional Oracle Streams and snapshot replication, meaning the capture, propagation and apply processes are running.
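As an illustrative pre-upgrade check only (not an upgrade procedure), the state of the Streams components on each site can be looked at with queries like these:

SELECT capture_name, status FROM dba_capture;
SELECT propagation_name, status FROM dba_propagation;
SELECT apply_name, status FROM dba_apply;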
We want to know the number of instances, the number of RAC databases, and the total disk space used by Oracle (not the file system size). 1. Is there any script that can be run from OEM Grid Control against all instances/databases? Or 2. We have a repository Unix server that holds the tnsnames entries for all the databases; is there any script we can run from there?
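Per database, a query along these lines gives the space allocated to data files, temp files and redo logs; it would still have to be run against each database in turn (for example over the tnsnames entries on the repository server):

SELECT (SELECT NVL(SUM(bytes),0) FROM dba_data_files)
     + (SELECT NVL(SUM(bytes),0) FROM dba_temp_files)
     + (SELECT NVL(SUM(bytes*members),0) FROM v$log) AS total_bytes
FROM dual;

SELECT COUNT(*) AS instance_count FROM gv$instance;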
I have gotten the Oracle ORA-00494 error many times and the database went down, but since the 29th of July the database has not been killed. The error message is below:
ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by inst 1, osid 176484
ORA-00028: your session has been killed
My database is used for a data warehouse of many terabytes.
Initially the redo log size was 500 MB and I have set it to 3 GB. At the moment a log switch happens as often as every 5 minutes; I want the log to switch only every 20 or every 30 minutes.
To obtain the optimal redo log size, I executed this query:
SQL> select OPTIMAL_LOGFILE_SIZE from v$instance_recovery;
OPTIMAL_LOGFILE_SIZE
--------------------
               54763
Isn't 53.5 GB very big for a redo log? What is the maximum size of a redo log? What are the requirements for setting a very large redo log size? Which precautions should I take beforehand, and what are the risks? Are there any other ways to change the log switch frequency?
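For reference, the recent switch frequency can be read from V$LOG_HISTORY, and resizing is normally done by adding new, larger groups and dropping the old ones once they are inactive (the group numbers, file path and size below are only placeholders):

SELECT TRUNC(first_time, 'HH') AS hour, COUNT(*) AS switches
FROM v$log_history
GROUP BY TRUNC(first_time, 'HH')
ORDER BY 1;

ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/redo04a.log') SIZE 8G;
-- switch out of the old groups, checkpoint, then:
ALTER DATABASE DROP LOGFILE GROUP 1;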
I am using Oracle version 11.2.1.0. I want to restrict archiving for some tables, and I think NOLOGGING will solve this problem. Is there any option for restricting archiving?
For example, I have three tables called A, B and C. I want to archive only two of the tables, A and B, but not C.
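The NOLOGGING attribute is set per table, roughly as in the sketch below (staging_c is a placeholder); note that it only suppresses redo generation for direct-path operations, so conventional DML on C would still be logged and archived:

ALTER TABLE c NOLOGGING;
-- redo is reduced only for direct-path operations, e.g.:
INSERT /*+ APPEND */ INTO c SELECT * FROM staging_c;
COMMIT;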
java.sql.SQLException: Unexpected exception while enlisting XAConnection
java.sql.SQLException: XA error: XAResource.XAER_RMERR start() failed on resource 'weblogic.jdbc.jta.DataSource': XAER_RMERR : A resource manager error has occured in the transaction branch
javax.transaction.xa.XAException: Unexpected error during start for XAResource 'EOD': null
at weblogic.jdbc.wrapper.XA.createException(XA.java:103)
at weblogic.jdbc.jta.DataSource.start(DataSource.java:765)
at weblogic.transaction.internal.XAServerResourceInfo.start(XAServerResourceInfo.java:1182)
at weblogic.transaction.internal.XAServerResourceInfo.xaStart(XAServerResourceInfo.java:1115)
I am trying to create materialized views based on a few tables in a logical standby database.
The target database (11g R2) where the MVs will be created is a stand-alone database.
The DB where the base tables reside is a logical standby database (11g R2).
The requirement is to do a "FAST REFRESH" of the Materialized Views.
My questions are:
1. Can I create MV logs in the logical standby DB?
2. If the answer to question no. 1 is "Yes", do I need to do anything differently, or configure the logical standby DB in a specific manner, in order to create the MV logs? From what I understand, the objects in the logical standby database are in a locked state.
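For context, this is the usual fast-refresh pattern on ordinary (non-standby) databases; the schema, table and database link names are placeholders, and whether the MV log can be created on a logical standby is exactly what is being asked above:

-- on the database holding the base table:
CREATE MATERIALIZED VIEW LOG ON hr.orders WITH PRIMARY KEY;

-- on the target database, over a database link to the source:
CREATE MATERIALIZED VIEW orders_mv
REFRESH FAST ON DEMAND
AS SELECT * FROM hr.orders@src_link;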
Assume you have a 9i database that is running in archive log mode, yet you are constantly deleting the archived redo logs due to space constraints.
Will you be able to perform a full level 0 backup, and the subsequent incremental backups, in the absence of the archived redo logs? And are these incremental backups enough to recover the database, or particular data files, at least to the point of the backup itself?
I'm currently working on a project in which I do not have permission to access the server where the database is installed and configured. Because of company policies, I do not have admin rights over Oracle, but I do have an account that can select from DBA_USER_PRIVS, for instance.
I would like to know if there is any way to access the database logs to find out whether there was any kind of problem within the database, because one of my schemas mysteriously went clean (all tables, sequences, triggers, ... vanished).
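If standard auditing happened to be enabled, and assuming the account can read the view (which is not guaranteed), a query like this might show who dropped the objects:

SELECT username, action_name, obj_name, timestamp
FROM dba_audit_trail
WHERE action_name LIKE 'DROP%'
ORDER BY timestamp;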
I have an SAP R/3 system which runs on an Oracle 9 database. The problem is that the SQL queries produce an awful lot of logs, so my disk fills up after a very short time.
I do not need the logs, since this is a development environment. Are there any tools that erase the logs automatically?
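If the archived redo logs are genuinely not needed on this development system, one option (instead of cleaning them up) is to take the database out of archive log mode; roughly, from SQL*Plus as SYSDBA (note this gives up point-in-time recovery):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;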
I have a sequence for one of my tables. The sequence's current value was 3000 yesterday, but today when I checked it I was surprised because the value had changed to 50. Can I check who changed my sequence? Is there any data dictionary view that shows logs of modified database objects?
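As far as I know there is no history of past sequence values, but DBA_OBJECTS records when an object was last changed by DDL, and object auditing can track future changes (the sequence and schema names are placeholders):

SELECT object_name, object_type, last_ddl_time
FROM dba_objects
WHERE object_name = 'MY_SEQ';

AUDIT ALTER ON scott.my_seq;   -- records future ALTER statements against the sequence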