Server Administration :: Oracle 10g Blocking Sessions
Aug 19, 2010
I am using Oracle 10g as the server in my lab. I faced some problems initially, but after increasing the USERS tablespace it has been working fine.
But there is still one problem: during query execution some queries get blocked, and that prevents any subsequent queries from the same user from executing.
The blocked sessions are displayed on the admin page under the blocking sessions link. There is an option to kill the session, but when I do that it affects all the users, every connection is lost, and I have to start the database up again from the beginning.
The TM lock that occurred three times appears to be disastrous. Historically, this could be caused by missing indexes on foreign key columns.
How can I be alerted when this event occurs, so that I can do some real-time investigation into the sessions and the SQL that hit it? I suppose I could schedule a job to query v$lock/dba_waiters/dba_blockers every few minutes, but is there a better way? Any standard edition scripts for this?
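If polling is acceptable, a lightweight check along these lines can surface blocker/waiter pairs as they appear (a sketch using the classic v$lock self-join; how it is scheduled, via DBMS_SCHEDULER or cron, and how you want to be notified is left open):

SELECT b.sid  AS blocking_sid,
       w.sid  AS waiting_sid,
       b.type AS lock_type,
       b.id1, b.id2
  FROM v$lock b, v$lock w
 WHERE b.block   = 1      -- session holding the lock
   AND w.request > 0      -- session waiting for it
   AND b.id1 = w.id1
   AND b.id2 = w.id2;

Joining the two SIDs back to v$session then gives the users and SQL involved at the moment the blocking is detected.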
One issue happens frequently in my database: the LGWR process blocks other sessions. When I checked the blocked sessions in v$session, the wait_class was Commit and the event was log file sync.
Based on this we suspected the log buffer might be full, so I just rebooted the server. Note that the server is 32-bit, so I can allocate only 1 GB of SGA; the system does not allow me to increase the SGA further. Is a server reboot the proper solution?
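Before settling on a reboot, it may help to quantify how much time is actually being lost to these waits; a sketch against v$system_event (totals since instance startup, times reported in centiseconds):

SELECT event,
       total_waits,
       time_waited  / 100 AS seconds_waited,
       average_wait / 100 AS avg_seconds_per_wait
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write');

If log file parallel write is also slow, the bottleneck is more likely redo log I/O than the size of the log buffer.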
My application is opening a lot of sessions on my DB server. I set resource_limit=true and idle_time=15 minutes and assigned this profile to all application users.
Now I am seeing a lot of sessions with status SNIPED in v$session.
I want to know what these sniped sessions mean and how to clean them up.
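For what it is worth, a session goes to SNIPED when it exceeds a profile limit (here idle_time with resource_limit=true); Oracle marks it for removal, but the v$session row and its server process typically remain until the client next touches the connection. A hedged sketch to list them together with the OS process that would have to be cleaned up:

SELECT s.sid, s.serial#, s.username, s.status, s.last_call_et, p.spid
  FROM v$session s, v$process p
 WHERE p.addr = s.paddr
   AND s.status = 'SNIPED';

Many sites then remove the lingering server process at the OS level (kill on Unix, orakill on Windows) rather than waiting for the client to reconnect.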
We had an issue last week where a session running a very basic SQL query locked up the database, spiking the CPU at 100%. When we killed the session, the lock would just jump to another session, and so on. We finally had to restart the database since our clients were being kicked out. After the restart of the database, LGWR ended up locking and held the CPU between 85-95%. The archive logs were switching every 5 minutes, when normally it would be every 45 minutes. We spoke with Oracle Support, but they just brushed the issue off, said it was a hardware issue, and were not able to provide any kind of backing for that.
I have a simple question about database sessions. The value of the "sessions" parameter is set to 500, and users connect to the database through an application server (JBoss). There are more than 500 users connecting to the database through the application.
My question is: how can more than 500 users connect to the database without any issue if we set the value of the "sessions" parameter to 500?
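The usual explanation is connection pooling: the JBoss pool multiplexes many application users over a much smaller set of database sessions, so the database-side count normally stays well under the limit. A sketch to compare the configured limit with what is actually open:

SELECT (SELECT value FROM v$parameter WHERE name = 'sessions') AS sessions_limit,
       (SELECT COUNT(*) FROM v$session)                        AS sessions_in_use
  FROM dual;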
I'm sure you are all familiar with proxy users, they've been around since 9i:

orcl> create user low identified by low;
User created.
orcl> create user high identified by high;
User created.
orcl> grant dba to high;
Grant succeeded.
orcl> alter user high grant connect through low;
User altered.
orcl> connect low[high]/low
Connected.
orcl> sho user
USER is "HIGH"
orcl>
Is there any way that I can find out which of the current sessions was proxied, and through what user? I know that from within the session I can query my userenv context and find out, but I can't see how to do it otherwise. It must be possible: the audit trail records both the real user and the proxy user.
I killed some 70 sessions which were inactive a couple of days ago, using alter system kill session 'sid,serial#' immediate;
But even after killing them they still exist in the v$session view when I query select sid, serial# from v$session where username is not null; They have the same sid and serial#. I have tried killing them again every now and then, but they still show up.
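One hedged explanation: a killed session is only marked KILLED; the row stays in v$session until the client next touches the connection or PMON finishes cleaning it up. A sketch to list what is left along with the server OS process behind each one:

SELECT s.sid, s.serial#, s.username, s.status, p.spid
  FROM v$session s, v$process p
 WHERE p.addr = s.paddr
   AND s.status = 'KILLED';

If the rows linger, removing the listed spid at the OS level (kill on Unix, orakill sid spid on Windows) usually makes the v$session entry disappear.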
We have quite a number of sessions in database MES (production) coming from another machine.
From v$session, the program is oracle@WID27 (TNS V1-V3). This WID27 (hostname) hosts quite a number of development databases. We have to trace which jobs are actually triggering this, as WID27 is not supposed to connect to production databases.
How can we tell whether the sessions came in through a dblink or from the machine itself?
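One hedged starting point: sessions arriving over a database link show the remote server's oracle executable as the PROGRAM (exactly the oracle@WID27 pattern above), whereas client tools and batch jobs show their own program name. Profiling those sessions on MES and then checking the links defined on the WID27 databases narrows it down:

-- on MES: profile the incoming sessions
SELECT sid, serial#, username, osuser, machine, program, module, logon_time
  FROM v$session
 WHERE program LIKE 'oracle@WID27%';

-- on each development database on WID27: links that could reach production
SELECT owner, db_link, username, host
  FROM dba_db_links;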
Our db shows more than 200 INACTIVE sessions, and the DBA plans to reboot the db to get rid of these sessions. Can we not KILL these sessions and avoid the reboot?
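Killing them selectively is certainly possible; a hedged sketch that generates the kill statements for review before running them (the one-hour idle threshold is an assumption, adjust as needed):

SELECT 'ALTER SYSTEM KILL SESSION ''' || sid || ',' || serial# || ''' IMMEDIATE;' AS kill_cmd
  FROM v$session
 WHERE status = 'INACTIVE'
   AND username IS NOT NULL
   AND last_call_et > 3600;

Reviewing the generated list first avoids killing pooled connections the application still expects to reuse.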
I have had the following problem open with Oracle support since March 2011 (8 months), and still no resolution.
When I export all our schemas on Sunday night it takes about 1 hour 50 minutes. When I export the same schemas on any other night it takes 7 hours. The only difference is that on Sunday at 4:00am we drop all connections in the connection pools and re-establish new connections. Then 19 hours later, on Sunday at 23:00, we perform the exports, which take only 2 hours to complete.
I have also tried recreating the connections in the connection pools during the week, and the exports have then only taken 2 hours to complete. But the following night after the connections have been used during the day, the exports again take 7 hours. So it appears the export speed gets significantly slower when there are many open connections that have been used and not closed.
From the statspack report I found 2 SQL statements internal to the export command that showed an order of magnitude difference in elapsed execution time between the fast export and the slow export (see below).
How can I speed up the exports without having to drop and recreate the database connections in the connection pools each night?
FAST: elapsed_time: 430.90  executions: 161,388  Module: exp@Oracle1 (TNS V1-V3)
SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL WHERE CNO = :1 ORDER BY COLNO
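To watch this statement on both a fast night and a slow night without waiting for the next statspack snapshot, something along these lines against v$sql could be run (a sketch; the LIKE pattern simply matches the statement text quoted above, and elapsed_time in v$sql is in microseconds):

SELECT sql_id, executions,
       ROUND(elapsed_time / 1e6, 2)                         AS elapsed_secs,
       ROUND(elapsed_time / NULLIF(executions, 0) / 1e3, 3) AS ms_per_exec
  FROM v$sql
 WHERE sql_text LIKE 'SELECT COLNAME, COLNO, PROPERTY, NOLOG FROM SYS.EXU10CCL%';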
I use the following export command for each schema: $ORACLE_HOME/bin/exp user/pass file=somefile.dmp owner=$SCHEMA log=somelog.log buffer=9000000
I have an Oracle Standard Edition 11.1.0.7 database on 64-bit Linux with a 7 GB SGA. I currently export approximately 200 schemas each night (I use exp rather than Data Pump because Data Pump is a lot slower and we can't use its parallel processing features on a Standard Edition database). The export normally takes 1 hour 50 minutes, which is approximately 2 schemas exported every minute. When the exports run slowly, each export takes almost 2 minutes to complete.
The database has about 20 GB of data and 50 GB of indexes. It also has approximately 500 connections via TopLink connection pools from 8 application servers.
I was just wondering how Oracle manages multiple sessions in a database performing DML. I believe this is related to 'read consistency', and I tried to search for it but could not find any satisfactory online documents.
CASE 1:
User A logs in to database1, issues a select on table A, and then inserts 4 rows.
User B logs in to database1, issues a select on table A, then inserts 5 rows and issues a rollback.
User C logs in to database1, issues a select on table A, then inserts 6 rows and issues a commit.
How many rows can user C see in table A when he issues the select?
CASE 2:
User A logs in to database1, issues a select on table A, and then inserts 4 rows.
User B logs in to database1, issues a select on table A, and then inserts 5 rows.
User C logs in to database1.
User B issues a rollback.
User C issues a select on table A, then inserts 6 rows and issues a commit.
How many rows can user C see in table A when he issues the select?
NOTE: All the users are currently logged in to the same database and none has logged out.
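For what it is worth, read consistency means each query sees only data that was committed when the query began, plus the querying session's own uncommitted changes. A hedged way to watch CASE 1 happen is to script it across three sessions; the table name A, its single NUMBER column, and the assumption that it starts empty are mine, not from the post:

-- one-off setup (assumption: table A starts empty)
CREATE TABLE a (n NUMBER);

-- Session A
SELECT COUNT(*) FROM a;
INSERT INTO a SELECT level FROM dual CONNECT BY level <= 4;   -- left uncommitted

-- Session B
SELECT COUNT(*) FROM a;
INSERT INTO a SELECT level FROM dual CONNECT BY level <= 5;
ROLLBACK;

-- Session C
SELECT COUNT(*) FROM a;   -- A's uncommitted rows and B's rolled-back rows are
                          -- invisible here, so with an empty starting table this is 0
INSERT INTO a SELECT level FROM dual CONNECT BY level <= 6;
COMMIT;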
I have some questions about Oracle + EMC shared storage. I have an Oracle 11gR1 RAC (2 nodes) + ASM environment with shared storage, an EMC CLARiiON AX4.
The database is running in noarchivelog mode. I'd like to implement point-in-time recovery using SnapView snapshots.
Currently my AX4 platform has the following LUNs:
LUN1 - registry 1
LUN2 - registry 2
LUN3 - vote 1
LUN4 - vote 2
LUN5 - vote 3
LUN6 - ASM - DISKGROUP DATA DISK DATA01 (actual db datafiles)
LUN7 - ASM - DISKGROUP DATA DISK DATA02 (actual db datafiles)
Using the source LUNs in a consistent session, I will take synchronized snapshots of all the LUNs backing my database. If something happens to the database, the LUNs can be returned to the specific point in time when the snapshot consistent session was taken, and I expect the database to come up and continue to work.
Questions:
1. Is my approach correct at all? (The database is running in noarchivelog mode and I am not dealing with hot backups; the point-in-time recovery is implemented purely via EMC snapshot consistent sessions.)
2. The AX4 platform has a limit of 8 source LUNs per session. If I understand it correctly, I can't place more than 8 LUNs in a session. What would be the workaround if my database occupies more than 8 LUNs and I'd like to take a consistent snapshot of them; for example:
LUN1 - registry 1
LUN2 - registry 2
LUN3 - vote 1
LUN4 - vote 2
LUN5 - vote 3
LUN6 - ASM - DISKGROUP DATA DISK DATA01 (actual db datafiles)
LUN7 - ASM - DISKGROUP DATA DISK DATA02 (actual db datafiles)
LUN8 - ASM - DISKGROUP DATA DISK DATA03 (actual db datafiles)
LUN9 - ASM - DISKGROUP DATA DISK DATA04 (actual db datafiles)
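For question 2, it may help to first see exactly which ASM disks (and therefore which LUNs) back each diskgroup, so every device that has to go into the consistent session is accounted for; a sketch run against the ASM instance:

SELECT g.name AS diskgroup, d.name AS disk, d.path, d.total_mb
  FROM v$asm_diskgroup g, v$asm_disk d
 WHERE d.group_number = g.group_number
 ORDER BY g.name, d.name;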
We are experiencing a problem with SSO causing a 2nd or 3rd concurrent Oracle session to hang. The Oracle application hangs during loading and Task Manager has to be used to close the application.
I have tested logging onto our application servers using SSO and I cannot load more than 3 concurrent Oracle sessions. When I bypass the SSO and log on to the same server I can load more than 20.
I am trying to install Oracle 10.10.2.0 on Windows Server 2003 Standard x64 Edition Service Pack, but when I try to run the installer or open the DVD it gives me the error below.
"The image file D: is Valid, but is for a machine type other than the current machine."
I was trying to delete the database on the test server. When I was deleting it the listener was already stopped; I continued deleting using DBCA, and it showed me an alert that the datafiles can't be deleted because the system couldn't find the database. Since the listener was stopped, only the service was deleted (the one shown under Windows Administrative Tools -> Services -> OracleServiceTEST).
All the datafiles and parameter files are still there. How can I delete the datafiles and parameter files belonging to that database, or how can I recreate the deleted service so that I can start the listener and complete the deletion of the database?
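One hedged way to get the Windows service back so DBCA can finish the job, assuming the instance name really is TEST (as the deleted OracleServiceTEST suggests):

oradim -NEW -SID TEST -STARTMODE manual

Once the service exists again and the listener is started, rerunning DBCA and deleting the database should remove the datafiles and parameter files it can locate.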
I am trying to find the Unix process for one of my applications in the database, but I am unable to view it. To simulate this, I did the following.
1. My database runs on a different server.
2. I invoked sqlplus from another Unix box to log in to the database.
3. I found the process id (ps -ef | grep sqlplus).
4. When I execute the query below, it does not display the process id that I am looking for, but the osuser, username, program and machine details are correct. How can I get the process details from the database?
SELECT SYS.GV_$SESSION.OSUSER,
       SYS.GV_$SESSION.USERNAME,
       SYS.GV_$PROCESS.SPID,
       SYS.GV_$SESSION.MACHINE,
       SYS.GV_$SESSION.PROGRAM,
       SYS.GV_$PROCESS.PROGRAM,
       SYS.GV_$SESSION.SQL_ID
  FROM SYS.GV_$PROCESS, SYS.GV_$SESSION
 WHERE SYS.GV_$PROCESS.ADDR = SYS.GV_$SESSION.PADDR
   AND SYS.GV_$SESSION.USERNAME = 'TEST'
   AND SYS.GV_$SESSION.MACHINE LIKE '%hostname%'
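One hedged explanation: GV_$PROCESS.SPID is the server process on the database host, while the sqlplus PID found with ps -ef on the other Unix box is the client process, which is exposed in GV_$SESSION.PROCESS. A sketch showing both side by side:

SELECT s.osuser, s.username, s.machine, s.program,
       s.process AS client_pid,    -- PID of sqlplus on the client box
       p.spid    AS server_pid,    -- PID of the dedicated server on the DB host
       s.sql_id
  FROM gv$session s, gv$process p
 WHERE p.addr = s.paddr
   AND p.inst_id = s.inst_id
   AND s.username = 'TEST'
   AND s.machine LIKE '%hostname%';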
I want to install Oracle 11g R2 on a Windows 2008 64-bit server. How can I know whether my server is ready for the Oracle installation, i.e. whether all required components are available on the server, whether any patch needs to be applied, and so on?
I'm trying to connect an Oracle client application on the client machine to a remote Oracle server on the server machine, but the connection fails.
On the server machine I configured the Oracle server in the following way:
Installed the Oracle server.
Created a database "DB_Test" with the Database Configuration Assistant.
Created a LISTENER with Oracle Net Manager with the following parameters:
Protocol: TCP/IP
Host: server pc hostname (ENZOVAIO) or server machine IP address (192.168.0.71) on the LAN
Port number: 1521
Created a "dbtest" service with Oracle Net Manager with the following parameters:
Service name: "dbtest"
Protocol: TCP/IP
Host: server pc hostname (ENZOVAIO) or server machine IP address (192.168.0.71) on the LAN
Port number: 1521
All services on the server machine are running, and I opened port 1521 in the router. On the client machine I installed SQL*Plus and SQL Developer.
With SQL*Plus, following the official documentation, I entered the following command:
CONNECT username/password@[//]host[:port][/service_name]. In my case this is: CONNECT SYSTEM/oracledb@//ENZOVAIO:1521/testdb.
With SQL Developer I entered the same parameters.
But with both SQLPlus and SQL Developer the connection fails.
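One hedged thing to verify: the connect string has to use a service name the listener actually knows about, and the post registers the service as "dbtest" while the CONNECT string uses "testdb". On the server (ENZOVAIO), list the service names registered with the listener:

lsnrctl status

Then connect from the client using the service name the listener reports, for example:

sqlplus SYSTEM/oracledb@//ENZOVAIO:1521/dbtest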
We performed an image copy of the production Oracle server (OS and instances) to a backup server. After a few weeks, we tried to restore the latest Oracle database backup from the production server onto the backup server. As we know, an Oracle instance must be unique on the network.
Even if we log on to the backup server and bring up the instance, I think it still points to the production instance, since the init file, TNSNAMES.ora and listener file are all still the same. If we restore the database, we will end up bringing down the production instance and restoring on top of production. How do we change the instance name on the backup server, including the TNSNAMES, sqlnet and listener files, so that we can restore the Oracle database from production onto the backup server?
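A minimal, hedged sketch of the kind of Oracle Net change involved, using backupsrv and BKPSID purely as placeholder hostname and SID (they are not from the post): the backup server's listener.ora and tnsnames.ora should resolve to the backup host and its own SID rather than to production.

listener.ora on the backup server:
  LISTENER =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = backupsrv)(PORT = 1521)))
  SID_LIST_LISTENER =
    (SID_LIST =
      (SID_DESC =
        (SID_NAME = BKPSID)
        (ORACLE_HOME = /u01/app/oracle/product/10.2.0/db_1)))

tnsnames.ora on the backup server:
  BKPSID =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = backupsrv)(PORT = 1521))
      (CONNECT_DATA = (SID = BKPSID)))

The ORACLE_HOME path above is also a placeholder; the instance itself is started under the new ORACLE_SID with its own copy of the init/spfile.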
I recently installed Oracle 10g on my Windows XP laptop, and it has become considerably slower since then. I want to start the database server only when I need it, and not every time I start my laptop. I looked around in OEM but could not find a way.
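On Windows the database, the listener and (in 10g) the DB Console each run as a Windows service, so one hedged option is simply to switch those services from Automatic to Manual startup and start them only when needed. The service names below assume a default ORCL SID and 10g home name and will differ on other installs:

sc config "OracleServiceORCL" start= demand
sc config "OracleOraDb10g_home1TNSListener" start= demand
sc config "OracleDBConsoleorcl" start= demand

The same change can be made interactively under Control Panel -> Administrative Tools -> Services.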
I am connected as SYSTEM. It was the only user I set up a password for when I installed the database on my personal computer.
SQL> alter user sys identified by mypass007
  2  /
User altered.
SQL> connect sys/mypass007
ERROR:
ORA-28009: connection as SYS should be as SYSDBA or SYSOPER
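As the ORA-28009 message says, SYS can only connect with a privileged role. Assuming the Windows account is in the ORA_DBA group or a password file is in place, the same connect works once AS SYSDBA is added:

SQL> connect sys/mypass007 as sysdba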
We will be having a meeting with our client regarding their database server migration (they are planning to buy a new server). Their current database is Oracle 10gR2; they will not upgrade to 11g, they just plan to migrate to a new, more powerful machine.
I was planning to ask the following questions.
1. Specifications of the current server and the new one.
2. Operating system (I think they will use the same OS, just an updated version).
3. Can the business afford full downtime on the current servers?
4. Size of the DB, because it can take hours to move large files.
And is there documentation regarding server migration (change of machine only, not a database upgrade or anything else)?