I am getting the following error when I try to resize a data file, even though the data file has a lot of free space.
ORA-03297: file contains used data beyond requested RESIZE value
Presently the tablespace size is 220GB with 8 data files. As part of performance tuning we moved data to different tablespaces, and the used space of the tablespace is now 90GB. So I am trying to resize the datafiles, but it throws this error.
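A minimal sketch of a query often used to diagnose this: ORA-03297 means an extent still sits beyond the requested size, so finding the highest allocated block per file shows the smallest size each file can shrink to. The tablespace name and the 8192-byte block size below are assumptions; substitute your own.

-- Hedged sketch: smallest possible size per datafile, assuming an 8K
-- block size (adjust 8192 to your db_block_size) and a tablespace MY_TS.
SELECT file_id,
       CEIL(MAX(block_id + blocks) * 8192 / 1024 / 1024) AS min_resize_mb
FROM   dba_extents
WHERE  tablespace_name = 'MY_TS'
GROUP  BY file_id;

Extents that sit beyond that point have to be moved (e.g. by rebuilding the owning segments) before the file can be resized any smaller.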
We have an Oracle 10g database on a Unix platform. The customer wants to reduce the size of the database as much as possible; the customer's aim is to move this database's storage area to another one. So we resized some datafiles and got lots of free space at the mount point, but when checking the utilization of the tablespaces the output looks inconsistent. Below is the output:
Tablespace Name       KBytes        Used        Free  % Used     Largest  MaxPoss Kbytes  % Max Used
DATA              45,875,200   8,740,992  37,134,208    19.1   1,728,512     100,663,248        45.6
HIGH_S_DATA       21,504,000   1,331,520  20,172,480     6.2   3,048,704               0          .0
HIGH_S_IND        15,360,000     853,568  14,506,432     5.6   1,661,504               0          .0
All of the above output differs and I am not able to understand it. Is there any way to reset the HWM at the datafile level, and how do we reset the HWM of tables that have materialized views?
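A minimal sketch of the two usual ways to lower a table's HWM on 10g; the owner, table, and index names are placeholders, not from the original post:

-- Hedged sketch; object names are illustrative.
-- Option 1: online segment shrink (needs ASSM and row movement)
ALTER TABLE app.big_table ENABLE ROW MOVEMENT;
ALTER TABLE app.big_table SHRINK SPACE;

-- Option 2: rebuild the segment (changes rowids; rebuild indexes after)
ALTER TABLE app.big_table MOVE;
ALTER INDEX app.big_table_pk REBUILD;

Note that MOVE changes rowids, so materialized view logs and MVs that are ROWID-based need to be recreated or fully refreshed afterwards; primary-key-based MVs are not affected by the rowid change.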
I've got a table that had 5000 rows of data...I've deleted around 2000 to decrease the db size, but with no success: my hard drive is still showing the same size, with no space freed up.
I've looked at shrink and similar methods, but some are not compatible with 8i.
I take it the db is still reserving the space from those deleted rows, thinking it may be used again, which is the reason no space has been released.
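A minimal sketch of what is typically done on 8i, where SHRINK SPACE does not exist; the table name is a placeholder:

-- Hedged sketch for 8i; MY_TABLE is illustrative.
-- Deleted rows stay below the HWM, so the segment keeps its space.
ALTER TABLE my_table DEALLOCATE UNUSED;  -- only frees space above the HWM
-- Lowering the HWM itself on 8i means recreating the segment, e.g.:
ALTER TABLE my_table MOVE;               -- then rebuild its indexes
-- or export the table, TRUNCATE it, and import it back.

Even then, the disk footprint only shrinks once the datafile itself is resized down with ALTER DATABASE DATAFILE ... RESIZE.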
How can I reduce the width of the ------------- column underline when a value is null? I'm in SQL*Plus and I type:

SELECT A,B,C,D,F,G,H FROM SOMEHERE WHERE B='GOAT1';

A is 10 characters long, B is 50, C is 10, D is 30, E is 10, F is 50.

If any of those don't have data, it still outputs the full-width ----------------------------- (50 for B), and that covers the whole screen. How can I make it show less when the value is null?
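A minimal sketch of the usual SQL*Plus fix: display width follows the column's declared size, not its data, so it has to be capped explicitly. The 20-character width and the '-' null text are arbitrary choices:

-- Hedged sketch: SQL*Plus formatting commands.
COLUMN B FORMAT A20 TRUNCATED   -- cap B's display width at 20 characters
SET NULL '-'                    -- print '-' for NULL instead of blanks
SELECT A, B, C, D, F, G, H FROM SOMEHERE WHERE B = 'GOAT1';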
I am having I/O issues when I create 20 GB datafiles in a smallfile tablespace. Guide me on the maximum data file size that I can create on a Windows 2003 32-bit server.
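For reference, a smallfile datafile is capped at 2^22 - 1 = 4,194,303 blocks, so the byte limit depends on the block size; a minimal sketch to compute it for your instance:

-- Hedged sketch: maximum smallfile datafile size for this instance.
SELECT value AS block_size,
       4194303 * TO_NUMBER(value) / 1024 / 1024 / 1024 AS max_file_gb
FROM   v$parameter
WHERE  name = 'db_block_size';
-- e.g. 8K blocks give roughly 32 GB per smallfile datafile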
I want to export Oracle data into an Excel sheet. I have written the code using the UTL_FILE package, but I am getting the output as shown in the screenshot (without the column widths following the width of the data). I want the output column width to be set according to the size of the data automatically.
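A minimal sketch of one common workaround: write comma-separated values rather than space-padded columns, so Excel derives column widths from the data when it opens the file. The directory object, file name, and EMP table here are assumptions:

-- Hedged sketch; EXPORT_DIR, emp.csv and the emp table are illustrative.
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('EXPORT_DIR', 'emp.csv', 'w');
  FOR r IN (SELECT empno, ename, sal FROM emp) LOOP
    -- one comma-separated line per row; Excel sizes columns from the data
    UTL_FILE.PUT_LINE(f, r.empno || ',' || r.ename || ',' || r.sal);
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/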
I have Oracle 10.2.0.4.0 in a RAC environment. I had 19 data files on ASM. However, one of my colleagues created a new data file on a node using a physical drive, like this:
G:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\DATA_001.DBF instead of +DATA/ref/datafile/data.400.70812332
Now this file is in recover state, and when I try to bring it online it says block #12 of this file is corrupted.
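For context, a minimal sketch of how such a file is commonly relocated into ASM with RMAN on 10g (the corruption itself still has to be recovered); the datafile number 20 is a placeholder for the actual file number:

-- Hedged sketch; datafile number 20 is illustrative.
RMAN> SQL 'ALTER DATABASE DATAFILE 20 OFFLINE';
RMAN> BACKUP AS COPY DATAFILE 20 FORMAT '+DATA';
RMAN> SWITCH DATAFILE 20 TO COPY;
RMAN> RECOVER DATAFILE 20;
RMAN> SQL 'ALTER DATABASE DATAFILE 20 ONLINE';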
I'm not able to understand what's wrong with the code. I am trying to import data into a table using a CSV file. I exported the data (CSV) from an interactive report, and I am just trying to insert the same data back into the table through a process. When I tried to do so, it threw an error message saying NO_DATA_FOUND, and the file is not getting inserted into the wwv_flow_files table.
But when I removed the data from the CSV file's comments field and then tried importing the file, the process worked. I don't understand what the problem with the code is.
I have a sample app set up in my workspace for this weird problem.
[URL]
Workspace details:
CSV file with comments field and data in it - importing throws the NO_DATA_FOUND error message
CSV file with comments field but without data in it - importing worked
I'm getting an error when trying to use the new Data Pump Export/Import utility.
I am able to create a directory using SQL*Plus, and I get the "Directory created" message, but no directory actually gets created on the server.
SQL> CREATE DIRECTORY datapump AS 'C:\Inetpub\datafile\datapump';
Directory created.

But I don't see the directory created on the server.
Then on the server:
C:\Documents and Settings\Administrator>expdp ******/****** FULL=y DIRECTORY=datapump DUMPFILE=expdata.dmp LOGFILE=expdata.log

Export: Release 10.2.0.1.0 - Production on Wednesday, 01 November, 2006 1:51:55
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
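For what it's worth, CREATE DIRECTORY only registers a name-to-path mapping in the data dictionary; it never creates the folder on disk. A minimal sketch of the usual sequence (the grantee SCOTT is a placeholder):

-- Hedged sketch: create C:\Inetpub\datafile\datapump in Windows first,
-- making sure the Oracle service account can write to it, then:
CREATE OR REPLACE DIRECTORY datapump AS 'C:\Inetpub\datafile\datapump';
GRANT READ, WRITE ON DIRECTORY datapump TO scott;  -- grantee is illustrative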
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production

I'm trying to load a table, small in size (110 rows, 6 columns). One of the columns, called NOTES, errors when I run the load, saying that the column size exceeds the max limit. As you can see here, the table column is set to 4000 bytes:
CREATE TABLE NRIS.NRN_REPORT_NOTES (
  NOTES_CN      VARCHAR2(40 BYTE) DEFAULT sys_guid() NOT NULL,
  REPORT_GROUP  VARCHAR2(100 BYTE) NOT NULL,
  AREACODE      VARCHAR2(50 BYTE) NOT NULL,
  ROUND         NUMBER(3) NOT NULL,
  NOTES         VARCHAR2(4000 BYTE),
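A minimal sketch of a frequent cause, assuming the load is done with SQL*Loader (the post doesn't say): character fields default to 255 bytes in the control file, and in a multi-byte character set 4000 bytes holds fewer than 4000 characters, so the field length usually has to be declared explicitly. The file and field details below are assumptions:

-- Hedged sketch of a SQL*Loader control file; names are illustrative.
LOAD DATA
INFILE 'notes.csv'
INTO TABLE nris.nrn_report_notes
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
( notes_cn, report_group, areacode, round, notes CHAR(4000) )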
I'm using Oracle 10g (10.1.0.2) Enterprise Edition. The size of the listener.log file is approximately 2.46GB. Is this responsible for any performance problem of any service in my current instance?
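Regardless of the performance question, a file that size is awkward to manage; a minimal sketch of the usual online rotation (the listener name LISTENER is the default, an assumption):

-- Hedged sketch: rotate listener.log without bouncing the listener.
LSNRCTL> set current_listener LISTENER
LSNRCTL> set log_status off
-- rename or compress listener.log at the OS level here
LSNRCTL> set log_status on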
11.2.0.2 on RHEL. 3 log groups with 1 member each; db_recovery_file_dest is /oracle/oraarch. To increase the log file size, if I use ALTER DATABASE ADD LOGFILE GROUP 1 SIZE 300M; it creates the log group with 2 members: one at /oracle/oraarch and the other at /oracle/oradata (db_create_file_dest).
We are using Oracle Managed Files. I want only 1 member, at /oracle/oraarch (to keep the previous setting intact, just increasing the size from 100M to 300M). If I manually give the path where the logfile member should be created, I get this error:

ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 1 '/oracle/oraarch/DB/onlinelog/' SIZE 300M
*
ERROR at line 1:
ORA-00301: error in adding log file '/oracle/oraarch/DB/onlinelog/' - file cannot be created
ORA-27038: created file already exists
Additional information: 1
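A minimal sketch of the likely fix: the path in ADD LOGFILE must be a full file name, not a directory, and the group number must not already exist (the group number and file name below are placeholders):

-- Hedged sketch; group number and file name are illustrative.
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/oracle/oraarch/DB/onlinelog/redo04.log') SIZE 300M;
-- then switch away from each old group, drop it, and re-add it at 300M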
I noticed my DB is generating a lot of "small" .arc files and I am unsure why. As you can see from the v$log query, my log file size is set to 50MB, yet BLOCKS*BLOCK_SIZE never adds up to 50MB.
Is there anything else I can look into to see how to make the .arc files larger?
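A minimal sketch of two things commonly checked here: whether something is forcing time-based switches, and how full each archived log actually was:

-- Hedged sketch: look for a forced time-based switch ...
SHOW PARAMETER archive_lag_target
-- ... and measure how full each archived log really was:
SELECT sequence#, blocks * block_size / 1024 / 1024 AS mb
FROM   v$archived_log
ORDER  BY sequence#;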
Is the REDO log file size only important for DB performance when the DB runs in archivelog mode? If the DB runs in noarchivelog mode, does the REDO log file size not impact DB performance?
We have an environment with many Oracle databases in different versions (V8, V9, V10 and V11). Besides that, there are many versions of the Oracle Client in use, ranging from V6 to V11. Many of the clients have a TNSnames.ORA and, as was to be expected, many of these are old as well, with a lot of (by now) invalid information in them.
Is there any reasonable maximum size for the TNSnames.ORA file? What are the consequences of going over the max size? Are there alternatives to using TNSnames.ora which you would recommend?
I have a standalone DB, version 10.2.0.4. I am facing 'log file parallel write' as one of the top events. I have increased the REDO log file size to 500M, but even then REDO switching is happening frequently.
select group#, bytes, archived, status, first_change#, first_time from v$log order BY first_change#;
GROUP#      BYTES ARC STATUS           FIRST_CHANGE# FIRST_TIME
------ ---------- --- ---------------- ------------- ----------
    10  262144000 NO  INACTIVE               8509999 30-08-2012
    12  524288000 NO  INACTIVE               8612142 30-08-2012
    11  262144000 NO  INACTIVE               8676390 30-08-2012
     9  262144000 NO  CURRENT                8706330 30-08-2012
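A minimal sketch of a query often used alongside this to quantify the problem, counting log switches per hour:

-- Hedged sketch: log switches per hour, from v$log_history.
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;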
I am storing customers' snaps in a table (column data type LONG RAW) using Oracle Forms WebUtil. There are now 250 snaps in the table. The file type of these snaps is JPG, with an average size of 30KB.
I made a backup using the export utility before storing these snaps, and the exported DMP file's size was 36MB. Now, after storing just these 250 snaps of 30KB each, the DMP file's size has gone over 300MB.
Do I need to change the column's datatype, or something in the Oracle Forms image item? Because on the Windows file system the size of these files is just 8MB.
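A minimal sketch of the usual direction here, since LONG RAW is deprecated and stores and exports inefficiently: convert the column to a BLOB with TO_LOB, which only works inside a CTAS or INSERT ... SELECT. Table and column names are placeholders:

-- Hedged sketch; table/column names are illustrative.
CREATE TABLE customer_snaps_new AS
  SELECT snap_id, TO_LOB(photo) AS photo   -- LONG RAW -> BLOB
  FROM   customer_snaps;
-- then drop/rename the tables and recreate indexes, grants, etc.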
I am writing a program to do some file transfers between the client machine and the application server. I am using Webutil_File_Transfer.Client_To_AS to do the transfer, and also using Webutil_File.File_Size to check the file size at the source.
Once the transfer is complete, I also need to check the destination file size (the application server runs on Linux) for verification purposes, and I can't find a way to do it.
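A minimal sketch of one option, assuming the database can see the application server's file system (e.g. they share a host or a mount): UTL_FILE.FGETATTR reports a file's existence and size. The directory object and file name are placeholders:

-- Hedged sketch; XFER_DIR and upload.dat are illustrative, and this only
-- works where the DB server can reach the application server's directory.
DECLARE
  l_exists BOOLEAN;
  l_length NUMBER;
  l_bsize  BINARY_INTEGER;
BEGIN
  UTL_FILE.FGETATTR('XFER_DIR', 'upload.dat', l_exists, l_length, l_bsize);
  IF l_exists THEN
    DBMS_OUTPUT.PUT_LINE('Destination size: ' || l_length || ' bytes');
  END IF;
END;
/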
I would like to make a change on the live system! I have read in a book that REDO log file size impacts DB performance. My DB's current log file size is 100 MB, but Oracle 10g's Redo Logfile Sizing Advisor says the optimal log file size is 1845 MB. What REDO log file size is best for my Oracle database?
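For reference, a minimal sketch of where the 10g advisor exposes that number (it only produces a value when fast_start_mttr_target is set):

-- Hedged sketch: the advisor's recommendation, in MB.
SELECT optimal_logfile_size FROM v$instance_recovery;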
I'm facing a problem with the archive log file size: archive logs are generated at only 90M, 92M, or 94M (variable sizes of less than 100M), although I set 100M for each of my redo log files. Here I'm providing my create-db script for your reference. I want to know why the log switches before it reaches 100M. Is there any connection with the initial 10M for my .dbf files?
We are working on migrating from 9.2.0.4 to 11.2 and we've set up a test machine so that we could test the install and the import (as well as test additional 11g features that we want to begin using).
So we created the database and created all of the tablespaces beforehand.
However, when we run the import, we get errors like so:
Import: Release 11.2.0.1.0 - Production on Tue Oct 5 15:01:19 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V09.02.00 via conventional path
First of all, the block size in our "newly" created tablespaces is 8192... and these are obviously trying to recreate the tablespaces with a block size of 2048.
1) Why is it not ignoring these create tablespace commands when those tablespaces already exist?
2) how in the world do we get around the block size issue? We've tried nearly everything we could find, but we've still not had any luck.
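A minimal sketch of one workaround, if keeping 2K-block tablespaces is acceptable: an 8K database can host them once a matching buffer cache exists. The cache size, path, and tablespace name are placeholders:

-- Hedged sketch; cache size, path and tablespace name are illustrative.
ALTER SYSTEM SET db_2k_cache_size = 16M SCOPE=BOTH;
CREATE TABLESPACE legacy_2k
  DATAFILE '/u01/oradata/test/legacy_2k01.dbf' SIZE 100M
  BLOCKSIZE 2048;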
Our product uses Oracle 11gR1, and in the new release we are going to use Oracle 11gR2. For this we are performing the following steps:
(1) Install Oracle 11gR2 on a machine where our product (Oracle 11gR1) is already installed.
(2) Upgrade the Oracle 11gR1 schema to Oracle 11gR2.
(3) To use the upgraded schema in our product installer, we create a clone of the upgraded schema.
(4) To create the clone we use the Oracle 11gR2 DBCA utility.
(5) The clone files are created successfully (DBC, CTL, DBF).
Now we have performed the same steps on another machine, and the DBF file size changed very much: on one machine it was 89 MB and on the second machine it was 150 MB. There is no difference in the schema, and both machines are Windows 7 machines.
To reduce the size of the schema we tried different space reclamation commands, but the size is not changing.
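A minimal sketch of a check worth doing here: compare allocated segment space with the physical file size, since a DBF only gets smaller once it is explicitly resized down toward its high water mark. The resize target below is a placeholder:

-- Hedged sketch: is the space in segments, or in file padding?
SELECT file_name, bytes/1024/1024 AS file_mb FROM dba_data_files;
SELECT SUM(bytes)/1024/1024 AS segment_mb FROM dba_segments;
-- if file_mb far exceeds segment_mb, shrink the file itself:
-- ALTER DATABASE DATAFILE '<file_name>' RESIZE 90M;  -- size illustrative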
I'm working on an RDF report. This report runs as part of the Payment Process in Oracle Payables, and it has the printer style BACS. The layout has only a main section; the body of the main section has only one repeating frame, with the body field inside the frame. The body field shows concatenated values from the SELECT query. I have to modify the SQL query (WHERE clause) of the report. After I change the query and save it, the report compiles successfully in Reports Builder.
However, after running the payment process and submitting the request, the report fails with this error: