Listener Log File Size
Oct 1, 2013: I'm using Oracle 10g (10.1.0.2) Enterprise Edition. The listener.log file is approximately 2.46 GB in size. Is this responsible for any performance problem of any service in my current instance...
In one of our RAC environments, we have more than 15 databases on a server running the AIX operating system. In the listener file I can find only one entry (SID), but when I execute lsnrctl status it displays all 15 SIDs hosted on the server. I am not sure how this works.
We have now created another new database on the server; do I need to add its SID to the listener.ora file and reload the listener?
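From 8i onward an instance normally registers itself with the listener dynamically (PMON registers the service), so a new SID usually does not need a static entry at all. If a static entry is wanted anyway, a minimal sketch of a listener.ora addition follows; the SID name NEWDB and the ORACLE_HOME path are placeholders, not values from the post:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = NEWDB)                       # placeholder SID
      (ORACLE_HOME = /u01/app/oracle/product)  # placeholder ORACLE_HOME
    )
  )

After saving the file, lsnrctl reload re-reads listener.ora without interrupting existing connections.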
I am using an Oracle 8.1.5 database and my temp01.dbf file has grown to 19.8 GB; now I want to reduce its size.
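A temp file rarely shrinks in place; the usual approach on old releases is to create a fresh temporary tablespace, point users at it, and drop the bloated one. A hedged sketch (tablespace name, path and size are illustrative, not from the post):

CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/oradata/DB/temp02.dbf' SIZE 500M;    -- illustrative path and size

ALTER USER scott TEMPORARY TABLESPACE temp2;          -- repeat per user; 8i has no database-wide default temporary tablespace command

DROP TABLESPACE temp INCLUDING CONTENTS;              -- then delete temp01.dbf at the OS level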
My listener.trc file has grown to 56 GB and is continuously growing with the following message:
naeshow: [05-DEC-2012 19:56:49:669]
naeshow: [05-DEC-2012 19:56:49:669]
naeshow: [05-DEC-2012 19:56:49:670]
(the same naeshow line repeats continuously)
[code]...
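naeshow entries of this kind come from Net tracing, so the growth should stop once listener tracing is switched off. A hedged sketch of the two usual ways to do that, assuming the listener is named LISTENER:

# in listener.ora, then reload the listener
TRACE_LEVEL_LISTENER = OFF

# or interactively, without editing the file
lsnrctl
LSNRCTL> set trc_level off
LSNRCTL> save_config

Once tracing is off, the existing 56 GB trace file can simply be deleted at the OS level.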
I have a piece of software; I installed it and configured it against the database. I then copied the listener file from a working server to this one; both have the same name, configuration, databases and port number.
But when I try to log in it throws an error:
ORA-12154: TNS:could not resolve the connect identifier specified
I checked my listener.ora and tnsnames.ora files and both look correct, but I still cannot connect.
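ORA-12154 is raised on the client before the listener is ever contacted, so the thing to verify is that the connect identifier being used actually resolves in the tnsnames.ora the client is reading. A minimal sketch, with every name and address a placeholder rather than a value from the post:

MYDB =                                   # the alias the application must use
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb.example.com))
  )

Running tnsping MYDB from the client quickly shows whether the alias resolves and which sqlnet files are being picked up (a TNS_ADMIN setting can point the client at a different directory than expected).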
How do I purge the alert log and listener log file in 11g?
OS:IBM/AIX RISC System/6000
DB version: 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
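In 11g both logs live under the ADR, so adrci can purge them by age; a hedged sketch follows (the homepaths are a typical layout, not taken from the post, and the plain-text alert.log / listener.log can simply be renamed or truncated at the OS level once the old contents are no longer needed):

adrci
adrci> show homes                                   # list ADR homes for the database and the listener
adrci> set homepath diag/rdbms/mydb/MYDB            # placeholder database home
adrci> purge -age 10080 -type ALERT                 # purge XML alert log entries older than 7 days (age is in minutes)
adrci> set homepath diag/tnslsnr/myhost/listener    # placeholder listener home
adrci> purge -age 10080 -type TRACE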
11.2.0.2 on RHEL, 3 log groups with 1 member each; db_recovery_file_dest is /oracle/oraarch. To increase the log file size, if I use ALTER DATABASE ADD LOGFILE GROUP 1 SIZE 300M; it creates a log group with 2 members, one at /oracle/oraarch and the other at /oracle/oradata (db_create_file_dest).
We are using Oracle Managed Files. I want only 1 member, at /oracle/oraarch (to keep the previous setup intact, just increasing the size from 100M to 300M). If I manually give the path where the logfile member should be created, I get this error:
ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M
*
ERROR at line 1:
ORA-00301: error in adding log file '/oracle/oraarch/DB/onlinelog/' - file cannot be created
ORA-27038: created file already exists
Additional information: 1
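The ORA-00301/ORA-27038 pair appears because a directory was given where a file name is expected. With OMF the member placement follows db_create_online_log_dest_n / db_create_file_dest / db_recovery_file_dest, so either supply a complete file name or set a single online-log destination before adding the group. A hedged sketch (group number and file name are illustrative):

-- option 1: give a complete file name, not a directory
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/oracle/oraarch/DB/onlinelog/redo04a.log') SIZE 300M;

-- option 2: let OMF create a single member in the desired location
ALTER SYSTEM SET db_create_online_log_dest_1 = '/oracle/oraarch';
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 300M;

An existing group such as group 1 can only be dropped and re-added at the new size while it is INACTIVE.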
If the listener.log file destination is full, what will happen? Will connections through the listener hang? DB versions 10.2.0.4 and 11.2.0.3.
We are getting the below error messages in the listener log file, every 5 minutes:
10-SEP-2012 16:25:43 * (CONNECT_DATA=(SERVICE_NAME=dpm)(CID=(PROGRAM=W:Applicationdpm2010.exe)(HOST=ISSLDMUM01PC169)(USER=bharathi))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.80.100.39)(PORT=2241)) * establish * dpm * 0
10-SEP-2012 16:25:46 * service_update * DPM * 0
10-SEP-2012 16:25:47 * 12546
TNS-12546: TNS:permission denied
TNS-12560: TNS:protocol adapter error
TNS-00516: Permission denied
I cannot work out the reason for it.
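TNS-12546/TNS-00516 is an operating-system permission failure rather than a Net configuration error, so one hedged place to start is the ownership and permissions of the listener's log/trace directories and of the oracle executable (the paths below are typical defaults, not taken from the post):

ls -ld $ORACLE_HOME/network/log $ORACLE_HOME/network/trace   # pre-11g log/trace location
ls -ld $ORACLE_BASE/diag/tnslsnr/*/listener/trace            # 11g ADR location
ls -l  $ORACLE_HOME/bin/oracle                               # should be owned by the Oracle software owner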
Is there any information on this listener.ora parameter and the endpoints file? There is no reference in the docs (except for a sample listener.ora file in the RAC Installation Guide, with no explanation).
Following a Clusterware installation with GPnP, the listener.ora file has only IPC addresses, plus this parameter:
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))# line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))# line added by Agent
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))# line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))# line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON# line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON# line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON# line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON# line added by Agent
The endpoints_listener.ora file specifies the fixed and VIP addresses of the public NIC:
LISTENER_GR7244=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.182.201)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.180.44)(PORT=1521)(IP=FIRST))))# line added by Agent
The listener named LISTENER is in fact listening on both its IPC address, and the addresses in the endpoints_listener.ora file:
[grid@gr7244 admin]$ lsnrctl status listener
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 02-APR-2010 04:52:02
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
[code]...
The scan listener on this node is listening on its IPC address, and its VIP:
[grid@gr7244 admin]$ lsnrctl status listener_scan3
LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 02-APR-2010 04:46:15
Copyright (c) 1991, 2009, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN3
[code]...
I did wonder whether this parameter and file could be part of the mechanism by which a listener discovers the DHCP-assigned VIPs on which it has to listen.
I am getting the following error when I try to resize a data file, even though the data file has a lot of free space.
ORA-03297: file contains used data beyond requested RESIZE value
Presently the tablespace size is 220 GB with 8 data files. As part of performance tuning we moved data to different tablespaces. Now the used space of the tablespace is 90 GB, so I am trying to resize the datafiles, but it throws this error.
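ORA-03297 means an extent is still allocated above the requested size, so each file can only shrink down to its highest allocated block. A hedged query sketch (the tablespace name and the 8 KB block size are assumptions to adjust):

-- smallest size, in MB, each datafile can currently be resized to
SELECT file_id,
       CEIL(MAX(block_id + blocks - 1) * 8192 / 1024 / 1024) AS min_size_mb
FROM   dba_extents
WHERE  tablespace_name = 'MY_TBS'        -- placeholder tablespace name
GROUP  BY file_id
ORDER  BY file_id;

Segments sitting near the end of a file have to be relocated (ALTER TABLE ... MOVE, ALTER INDEX ... REBUILD) before the file can shrink below that point.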
I noticed my DB is generating a lot of "small" .arc files and I am unsure why. As you can see from the v$log query, my log file size is set to 50 MB, yet BLOCKS*BLOCK_SIZE never adds up to 50 MB.
Is there anything else I can look into to see how to make the .arc files larger?
SQL> select group#, thread#, bytes from v$log;
GROUP# THREAD# BYTES
---------- ---------- ----------
1 1 52428800
2 1 52428800
3 2 52428800
4 2 52428800
select blocks, block_size, blocks*block_size from v$archived_log where sequence# between 63876 and 72851 and thread# = 1
BLOCKS BLOCK_SIZE BLOCKS*BLOCK_SIZE
---------- ---------- -----------------
28 512 14336
28 512 14336
28 512 14336
55 512 28160
[code]...
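Archived logs that are consistently tiny compared with the online logs usually mean something is forcing switches before the logs fill, for example ARCHIVE_LAG_TARGET, a script or RMAN job issuing ALTER SYSTEM ARCHIVE LOG CURRENT, or standby-related forcing. A hedged pair of checks:

SQL> show parameter archive_lag_target

-- how often each thread actually switches, per hour
SELECT thread#, TRUNC(first_time, 'HH24') AS hour, COUNT(*) AS switches
FROM   v$log_history
GROUP  BY thread#, TRUNC(first_time, 'HH24')
ORDER  BY 2, 1;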
Redo log file size is an important DB performance factor when the DB runs in archivelog mode; if the DB runs in noarchivelog mode, the redo log file size does not impact DB performance.
We have an environment with many Oracle databases in different versions (V8, V9, V10 and V11). Besides that, there are many versions of the Oracle Client in use, ranging from V6 to V11. Many of the clients have a tnsnames.ora and, as was to be expected, many of these are old as well, with a lot of (by now) invalid information in them.
Is there any reasonable maximum size for the tnsnames.ora file? What are the consequences of going over that size? Are there alternatives to using tnsnames.ora which you would recommend?
I have a standalone DB, version 10.2.0.4. I am facing log file parallel write as one of the top wait events. I have increased the redo log file size to 500 MB, but even then redo switching is happening frequently.
select group#, bytes, archived, status, first_change#, first_time from v$log order BY first_change#;
GROUP# BYTES ARC STATUS FIRST_CHANGE# FIRST_TIME
---------- ---------- --- ---------------- ------------- ----------
10 262144000 NO INACTIVE 8509999 30-08-2012
12 524288000 NO INACTIVE 8612142 30-08-2012
11 262144000 NO INACTIVE 8676390 30-08-2012
9 262144000 NO CURRENT 8706330 30-08-2012
[code]....
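Worth noting from the output above: three of the four groups shown are still 250 MB (262144000 bytes), so switches keep happening at the smaller size until every group is recreated at 500 MB. A hedged sketch of rotating a small group out (the new group number is illustrative; member placement is left to OMF):

ALTER DATABASE ADD LOGFILE GROUP 13 SIZE 500M;   -- add a 500 MB replacement first
ALTER SYSTEM SWITCH LOGFILE;                     -- move CURRENT off the small group
ALTER SYSTEM CHECKPOINT;                         -- let the old group become INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 10;            -- then drop it; repeat for each remaining 250 MB group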
I get an ERR-7621 from APEX whenever I do anything in an application that tries to read a file, for example importing images, CSS files, themes, or applications. Even the data loader app gets the error if you choose to load a "csv" file. The following appears in my APEX Listener log (version 2 early adopter). I am running APEX 4.1.1 and also have another server running the same versions where the problem does not exist. This is the log output whenever the load occurs:
Sep 28, 2012 11:21:18 AM com.sun.grizzly.http.servlet.ServletAdapter doService
SEVERE: service exception:
java.lang.NumberFormatException: For input string: ""
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
[Code].....
I noticed that the listener file in diag/trace is increasing rapidly; it reached 1 GB in a couple of days. I am using 11g Release 1 on a Windows 2008 server. Is there any way I can turn it off?
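If it is the listener log (rather than a trace produced with tracing enabled), logging can be switched off entirely; a hedged sketch for a listener named LISTENER:

# in listener.ora, then lsnrctl reload
LOGGING_LISTENER = OFF

# or interactively
lsnrctl
LSNRCTL> set log_status off
LSNRCTL> save_config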
I am storing customers' snaps in a table (column data type LONG RAW) using Oracle Forms Webutil. There are now 250 snaps in the table. The file type of these snaps is JPG, with an average size of 30 KB.
I made a backup using the export utility before storing these snaps, and the exported DMP file's size was 36 MB. Now, after storing just these 250 snaps of 30 KB each, the DMP file's size has gone over 300 MB.
Do I need to change the column's datatype, or something in the Oracle Forms image item? On the Windows file system the total size of these files is just 8 MB.
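One commonly suggested direction (a hedged sketch, not a diagnosis of the export growth) is to move away from LONG RAW to a BLOB column, which Oracle can convert in place and which behaves far better with export and with image items:

-- in-place conversion of a LONG RAW column to BLOB (supported since 9i); table and column names are placeholders
ALTER TABLE customer_snaps MODIFY (snap BLOB);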
I am having I/O issues when I create 20 GB datafiles on a smallfile tablespace. What is the maximum data file size I can create on a Windows 2003 32-bit server?
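For context on the ceiling itself: a smallfile tablespace datafile is limited to 2^22 - 1 = 4,194,303 database blocks, so the maximum depends only on the DB block size (OS and file-system limits can be lower). A quick worked example assuming the common 8 KB block size:

max datafile size = 4194303 blocks * 8192 bytes/block
                  = 34,359,730,176 bytes  (~32 GB; ~64 GB with a 16 KB block size)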
I am writing a program to do file transfers between the client machine and the application server. I am using Webutil_File_Transfer.Client_To_AS to do the transfer, and Webutil_File.File_Size to check the file size at the source.
Once the transfer is complete, I also need to check the destination file size (on the application server, which runs Linux) for verification purposes, and I can't find a way to do it.
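One hedged option, which only helps if the database can see the destination folder's file system, is UTL_FILE.FGETATTR through a directory object (the directory and file names below are placeholders):

DECLARE
  l_exists  BOOLEAN;
  l_length  NUMBER;
  l_blksize BINARY_INTEGER;
BEGIN
  UTL_FILE.FGETATTR('UPLOAD_DIR', 'transferred_file.dat', l_exists, l_length, l_blksize);
  IF l_exists THEN
    DBMS_OUTPUT.PUT_LINE('Size on server: ' || l_length || ' bytes');
  END IF;
END;
/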
I would like to make a change on the live system. I have read a book and found information saying that redo log file size impacts DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor suggests an optimal log file size of 1845 MB. What redo log file size is best for my Oracle database?
#Optimal log file size:
select optimal_logfile_size
from v$instance_recovery
OPTIMAL_LOGFILE_SIZE
--------------------
                1842
[code]....
What's the maximum size of the control file in one database?
I calculated it according to the following steps:
SQL>SELECT (BLOCK_SIZE/1024/1024)*20000 MB FROM V$CONTROLFILE WHERE ROWNUM = 1;
MB
------
312.5
The maximum number of data blocks in one control file is 20000.
I am having an issue configuring a listener name on the Oracle server. My default listener is working, but I stopped the default listener and tried to create another listener on a different port, with no success. It always says the address of the specified listener name is incorrect. Below is the listener.ora file.
My DB name and SID are both etl.
LIST1=(DESCRIPTION=(ADDRESS=
(HOST=127.0.0.1)
(PROTOCOL=TCP)
(PORT=1530)
)
and the list1.log file says "The address of the specified listener is incorrect".
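For comparison, a hedged sketch of a complete second-listener entry; the DESCRIPTION must be fully closed with matching parentheses, and the listener is then managed under its own name (host and port copied from the post, layout illustrative):

LIST1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 127.0.0.1)(PORT = 1530))
  )

It is then started and checked with lsnrctl start LIST1 and lsnrctl status LIST1. Note that 127.0.0.1 only accepts connections from the local machine; the server's real hostname or IP address is normally used instead.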
What is the meaning of the port number in the listener.ora file, and what is it associated with? Is there any difference between a port number on Windows and on a Linux server?
I want to upload a file using a RESTful service. This is my code to send the file to the REST service:
MultipartEntity reqEntity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);
FileBody bin = new FileBody(f);
FormBodyPart bodypart = new FormBodyPart("file", bin);
reqEntity.addPart(bodypart);
[code]...
But how can I retrieve it on the server side in the RESTful service using PL/SQL?
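A hedged sketch of the receiving side, assuming an APEX Listener / ORDS POST resource handler where the raw request body is exposed through the :body BLOB bind variable; the table and sequence are placeholders, and a multipart/form-data body may still carry its part boundaries, so sending the file as a plain binary body (or parsing the multipart payload) may be needed:

BEGIN
  INSERT INTO uploaded_files (id, contents, uploaded_on)       -- placeholder table
  VALUES (uploaded_files_seq.NEXTVAL, :body, SYSDATE);          -- :body assumed to be the bind for the request body
END;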
SQL> select block_size from v$controlfile;
BLOCK_SIZE
----------
16384
SQL> show parameter db_block_size;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_block_size integer 8192
Can the two have different block sizes?
I'm facing a problem with archive log file size. Archive logs are generated at only 90 MB, 92 MB or 94 MB (variable sizes of less than 100 MB), although I set 100 MB for each of my redo log files. Here I'm providing my create database script for reference. I want to know why the log switches before it reaches 100 MB. Is there any connection with the initial 10 MB size of my .dbf files?
create database mydev
maxlogmembers 3
maxloghistory 100
maxdatafiles 50
maxinstances 1
logfile
[Code]....
My configuration is
APEX 4.2 on Windows XP with APEX Listener on GlassFish 3.1.2 Open Source Server Edition.
The database is 11G XE
Let me explain the scenario which is not working: a modal region has a file browse item based on WWV_FLOW_FILES and a button called Add File.
When I click on Add File it displays a blank page, and in the URL I can see [URL]... But it actually inserted the file into the table.
I have put wwv_flow* in the Listener configuration for allowed procedures. I have another page with a page item based on a BLOB column specified in the item source attribute, and that one inserts the file and displays the page correctly.
Our product uses Oracle 11gR1, and in the new release we are going to use Oracle 11gR2. For this we are performing the following steps:
(1) Install Oracle 11gR2 on a machine where our product (on Oracle 11gR1) is already installed.
(2) Upgrade the Oracle 11gR1 schema to Oracle 11gR2.
(3) To use the upgraded schema in our product installer, we create a clone of the upgraded schema.
(4) To create the clone we use the Oracle 11gR2 DBCA utility.
(5) The clone files are created successfully (DBC, CTL, DBF).
Now we performed the same steps on another machine and the DBF file size changed very much: on one machine it was 89 MB and on the second it was 150 MB. There is no difference in schema, and both machines are Windows 7 machines.
To reduce the size of the schema we tried different space reclamation commands, but the size is not changing.
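In case a step was missed, a hedged sketch of the usual reclamation order: shrink (or move) the segments first, and only then resize the datafile down toward its high-water mark; object and file names are placeholders:

ALTER TABLE big_table ENABLE ROW MOVEMENT;
ALTER TABLE big_table SHRINK SPACE CASCADE;       -- needs an ASSM tablespace; repeat for the larger tables

ALTER DATABASE DATAFILE '/path/to/users01.dbf' RESIZE 100M;   -- fails with ORA-03297 while extents still sit above 100M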
I want to export Oracle data into an Excel sheet. I have written the code using the UTL_FILE package, but I am getting the output as shown in the screenshot, without the column widths matching the width of the data. I want the output column widths to be set automatically according to the size of the data.
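UTL_FILE writes plain text, so there is no way to embed a column width that Excel will honour; two hedged workarounds are padding each column with RPAD to a fixed width, or writing an HTML table that Excel opens and sizes more gracefully. A minimal RPAD sketch with placeholder directory, table and column names:

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'emp.txt', 'W');
  FOR r IN (SELECT ename, job FROM emp) LOOP
    -- pad each column to a fixed width so the text lines up when opened in Excel
    UTL_FILE.PUT_LINE(l_file, RPAD(r.ename, 20) || RPAD(r.job, 15));
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/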