I just want to know what precautionary measures to take when the tablespaces in a database are in autoextend mode. I'm wondering what happens if these tablespaces reach their maximum sizes.
In our case, we are administering a database (turned over by our outsourcer after a 2-year maintenance period) with an SAP interface, and we noticed that most of its tablespaces were created with an initial size of 2GB up to a maximum size of 10GB, all of them autoextensible.
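A monitoring query along these lines helps as a precaution (a minimal sketch against DBA_DATA_FILES; the threshold and ordering are just one way to slice it):

    SELECT tablespace_name,
           file_name,
           ROUND(bytes / 1024 / 1024)                  AS current_mb,
           ROUND(maxbytes / 1024 / 1024)               AS max_mb,
           ROUND(bytes / NULLIF(maxbytes, 0) * 100, 1) AS pct_of_max
      FROM dba_data_files
     WHERE autoextensible = 'YES'
     ORDER BY pct_of_max DESC;

Beyond that, watch the free space on the underlying filesystem as well: autoextend fails just as hard when the mount point fills up before MAXSIZE is reached.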
I am using Oracle 10g with sga_max_size = 4GB and a DB block size of 16K. Now I am creating a tablespace with a block size of 32K; what value should I select for the db_32k_cache_size parameter?
Is there a standard way to calculate the value of this parameter?
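There is no exact formula for it; a common approach is to start with a modest value sized to the expected working set of the 32K objects and tune from there. A sketch (the 256M figure is an assumed starting point, not a calculated optimum):

    -- Carve a 32K buffer cache out of the SGA (value is an assumption):
    ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE = BOTH;

    -- Only once that cache exists can a 32K tablespace be created:
    CREATE TABLESPACE ts_32k
      DATAFILE '/u01/oradata/ts_32k_01.dbf' SIZE 1G
      BLOCKSIZE 32K;

Keep in mind the 32K cache comes out of the same sga_max_size = 4GB, so it competes with the default 16K cache; V$DB_CACHE_ADVICE can guide resizing once there is a real workload.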
Is there any way I can calculate the percentage of space used in a block? E.g., if a table's size is 100 blocks, how can I check the percentage of used space in each block?
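For a segment in an ASSM tablespace, DBMS_SPACE.SPACE_USAGE gives a bucketed answer: it counts the blocks that are full and the blocks that are 0-25%, 25-50%, 50-75% and 75-100% free. A sketch (SCOTT.EMP is only a placeholder segment):

    SET SERVEROUTPUT ON
    DECLARE
      l_unf  NUMBER;  l_unf_b  NUMBER;  -- unformatted blocks
      l_fs1  NUMBER;  l_fs1_b  NUMBER;  -- 0-25%  free
      l_fs2  NUMBER;  l_fs2_b  NUMBER;  -- 25-50% free
      l_fs3  NUMBER;  l_fs3_b  NUMBER;  -- 50-75% free
      l_fs4  NUMBER;  l_fs4_b  NUMBER;  -- 75-100% free
      l_full NUMBER;  l_full_b NUMBER;  -- full blocks
    BEGIN
      DBMS_SPACE.SPACE_USAGE('SCOTT', 'EMP', 'TABLE',
                             l_unf, l_unf_b,
                             l_fs1, l_fs1_b, l_fs2, l_fs2_b,
                             l_fs3, l_fs3_b, l_fs4, l_fs4_b,
                             l_full, l_full_b);
      DBMS_OUTPUT.PUT_LINE('full=' || l_full ||
                           ' 0-25% free='   || l_fs1 ||
                           ' 25-50% free='  || l_fs2 ||
                           ' 50-75% free='  || l_fs3 ||
                           ' 75-100% free=' || l_fs4);
    END;
    /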
1) What is PHYSICAL/LOGICAL corruption?
2) How does it occur?
3) Does RMAN work on both types of corruption or only physical? (My senior told me it works on both.)
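For what it's worth, RMAN checks for physical corruption by default and can be told to look for logical corruption as well, e.g.:

    RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

    -- any corrupt blocks found are then listed in:
    SQL> SELECT * FROM v$database_block_corruption;

Note this detects corruption; repairing a block is a separate step (e.g. RMAN's BLOCKRECOVER against the listed file/block).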
One of our development team members created an anonymous block program to do something in the database, and he terminated the session without confirming the program's status, i.e., whether it had run to completion or not.
Is there any way to check the status of this program, which ran yesterday?
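Two places worth looking, sketched below (DEV_USER is a placeholder, and the history query needs AWR/Diagnostics Pack):

    -- is anything from that user still running right now?
    SELECT sid, serial#, status, sql_id, last_call_et
      FROM v$session
     WHERE username = 'DEV_USER';

    -- what was sampled for that user in the last day (retention permitting)?
    SELECT sample_time, session_id, sql_id, event
      FROM dba_hist_active_sess_history
     WHERE user_id = (SELECT user_id FROM dba_users WHERE username = 'DEV_USER')
       AND sample_time > SYSTIMESTAMP - INTERVAL '1' DAY
     ORDER BY sample_time;

ASH only samples active sessions, and only a fraction of the samples are persisted to the history view, so a short-lived block may not appear at all; ultimately, checking whether the block's expected changes are present and committed in the tables is the only definitive test.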
I am in charge of several instances located on a CentOS Linux server, virtualized in an ESX 3.5 environment.
From time to time (every 4 to 5 days), I get errors in the alert.log. The last occurrence was last night:
Corrupt block relative dba: 0x01004e12 (file 4, block 19986)
Fractured block found during buffer read
Data in bad block:
 type: 6 format: 2 rdba: 0x01004e12
 last change scn: 0x0000.131aaa5b seq: 0x2 flg: 0x04
We are doing a user-managed backup (with BEGIN/END BACKUP) every night at 8PM, ending at approximately 9PM. So the fractured blocks never occur during backups. At 1AM the maintenance window opens, which explains the GATHER_STATS_JOB job.
When I check for corruption early in the morning, I am always unable to reproduce the problem. DBV completes without issues. We have never had a problem with the data itself, whether the reported failed block belongs to a table or an index.
I would like to know what could cause these logical corruptions, and how to stop them.
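While investigating, it is worth mapping the reported block to its segment (file 4 / block 19986 come straight from the alert.log above):

    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = 4
       AND 19986 BETWEEN block_id AND block_id + blocks - 1;

Also note that a "fractured block found during buffer read" can be transient: if Oracle rereads the block and gets a consistent image, nothing is permanently wrong, which would fit DBV finding the files clean afterwards. Reads racing writes at the storage layer (not unheard of on virtualized storage) is one hedged explanation for that pattern.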
I am not able to get the relevant segment from the corruption information (see the DBVERIFY output below):
SQL> select segment_name, segment_type, owner
       from dba_extents
      where file_id = 4
        and 756652 between block_id and block_id + blocks - 1;
no rows selected
DBVERIFY Summary

DBVERIFY - Verification complete

Total Pages Examined         : 3932160
Total Pages Processed (Data) : 3119107
Total Pages Failing (Data)   : 0
Total Pages Processed (Index): 755048
I have uploaded the complete logfile; below is the relevant part of it:
DBVERIFY - Verification starting : FILE = /prd/dvp/ora/oradata/LHF/disk06/gds_t01_01.dbf

Block Checking: DBA = 21728172, Block Type = KTB-managed data block
**** kdxcoavs = -84 < 0, avail = 3129
---- end index block validation
Page 756652 failed with check code 6401

## note that 756652 is the same block# mentioned in v$database_block_corruption
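One sanity check: the DBA printed by DBVERIFY is an absolute block address, and DBMS_UTILITY can decode it to confirm which file the page really belongs to:

    SQL> SELECT dbms_utility.data_block_address_file(21728172)  AS file#,
                dbms_utility.data_block_address_block(21728172) AS block#
           FROM dual;

For 21728172 this decodes to relative file 5, block 756652 (5 * 2^22 + 756652 = 21728172), so a dba_extents query with file_id = 4 would find nothing. If the decoded file number does match the file you queried, the other common explanation for "no rows selected" is that the block now lies in free space, which no segment owns.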
I have 3 Oracle database instances running on 3 separate Linux (RHEL) nodes:
Instance 1 - named DS
Instance 2 - named MIS
Instance 3 - named OAS
Among these 3 nodes, we are facing block corruption issues in the SYSAUX tablespace. The error in the instance named DS is:
Errors in file /u01/home/dba/oracle/diag/rdbms/ds4db/DS/trace/DS_j000_655388.trc (incident=300847):
ORA-01578: ORACLE data block corrupted (file # 2, block # 38428)
ORA-01110: data file 2: '/DSdb1/dssysaux.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
GATHER_STATS_JOB encountered errors. Check the trace file.
For this I googled and found a solution in Oracle doc [430230.1] related to SYSAUX corruption. After applying it, we are facing the same issue in the SYSAUX tablespace again.
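ORA-26040 is the key detail: the blocks were invalidated by a NOLOGGING operation followed by recovery, so they cannot be restored from backup; the affected objects have to be identified and rebuilt. A hedged outline of the usual steps (file 2 / block 38428 come from the error; object names are placeholders):

    -- 1. identify the owning segment:
    SELECT owner, segment_name, segment_type
      FROM dba_extents
     WHERE file_id = 2
       AND 38428 BETWEEN block_id AND block_id + blocks - 1;

    -- 2. rebuild or recreate that object, e.g. for an index:
    --    ALTER INDEX <owner>.<index_name> REBUILD;

    -- 3. stop it recurring by overriding future NOLOGGING operations:
    ALTER DATABASE FORCE LOGGING;

If the problem still comes back after FORCE LOGGING, the invalidation is happening some other way, and the trace file named in each new incident is the place to look.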
I have written the code below and I am facing an "ORA-04030: out of process memory when trying to allocate 16408 bytes" error:
/* Formatted on 2011/11/26 11:52 (Formatter Plus v4.8) */
DECLARE
   row_id       VARCHAR2 (50);
   v_batch_id   temp.batch_id%TYPE;
   v_slab_id    temp.slab_id%TYPE;
   flag         NUMBER (2);
   num          VARCHAR2 (50) := &row_id;
My understanding of the DB_FILE_MULTIBLOCK_READ_COUNT parameter is that it affects only full table scans and index fast full scans; all other disk retrieval is single-block. If so, then maybe I'm reading this trace incorrectly:
select /*+ first_rows */ pk from test_join_tgt where pk >= 0 and rownum > 1
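In case it helps to compare, an extended SQL trace at level 8 records the wait events for each read, which shows directly whether a read was multiblock:

    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

    select /*+ first_rows */ pk from test_join_tgt where pk >= 0 and rownum > 1;

    ALTER SESSION SET EVENTS '10046 trace name context off';

In the trace, "db file scattered read" waits with a block count above 1 are multiblock reads and "db file sequential read" waits are single-block; be aware that index blocks can also arrive in batches via prefetching, so multiblock I/O is not strictly limited to full scans and fast full scans.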
Is the tablespace actually offline during a user-managed hot backup? I know the data blocks are copied into the redo stream, but I am not sure whether that means the tablespace is actually online or offline.
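For reference, a sketch of the mechanism (USERS is a placeholder tablespace): the tablespace stays fully online and writable for the whole window; BEGIN BACKUP freezes the datafile header checkpoint and causes the first change to each block afterwards to log a whole block image, which is what makes a fractured OS-level copy repairable.

    ALTER TABLESPACE users BEGIN BACKUP;

    -- copy the datafiles with OS tools while the tablespace stays online, e.g.
    -- cp /u01/oradata/users01.dbf /backup/

    ALTER TABLESPACE users END BACKUP;

    -- files currently in backup mode show as ACTIVE here:
    SELECT file#, status FROM v$backup;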
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [4194], [89], [83], [], [], [], [], []
I have a 10g Express system running, with 2 tablespaces in production. When taking a backup, it terminates unsuccessfully, saying system01.dbf is damaged. The application works fine and no data loss is apparent through the application interface.
So can I move the data to a new server using the .dbf files of the tablespaces in use?
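Before deciding anything, it may be worth asking DBV directly how bad system01.dbf is (the path is the usual Linux XE location and the 8K block size is the XE default; both are assumptions):

    dbv file=/usr/lib/oracle/xe/oradata/XE/system01.dbf blocksize=8192 logfile=system01_dbv.log

On the move itself: copying .dbf files only works as a whole-database cold copy (clean shutdown; all datafiles including SYSTEM, plus control files and online redo logs; same platform and Oracle version). You cannot transplant just the application tablespaces that way, and a damaged system01.dbf would travel with such a copy, so exporting and importing the application schemas may be the safer route.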
I'm trying to import schemas from a dump file that came from a different environment.
What I have is:
1. the dump file
2. the log file of the export
I'm trying to import the file (containing three schemas) with REMAP_SCHEMA, and it fails with a lot of ORA-00959: tablespace 'string' does not exist.
Now, I've read on OTN:
[URL]
that what you need to do in that case is to use the REMAP_TABLESPACE option to redirect the objects to a different tablespace.
I don't see the name of the tablespace I'm getting the error for in the export log, and I don't know how many more tablespaces I will have to redirect with REMAP_TABLESPACE.
I don't want to perform this three times: hit an error, discover from it the next tablespace needing redirection, and only then start over...
How can I know from the dump file and the export log which tablespace names I need to redirect to my own names? Or is the tablespace giving me the error the only one in the dump file?
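One low-risk way to list every tablespace the dump references is a SQLFILE pass, which writes out the DDL without importing anything (directory and file names below are placeholders):

    impdp system/password DIRECTORY=dp_dir DUMPFILE=exp.dmp SQLFILE=ddl_preview.sql FULL=Y

    # then scan the generated script for tablespace references:
    grep -o 'TABLESPACE "[^"]*"' ddl_preview.sql | sort -u

Every name that turns up and does not exist in the target needs its own old:new pair; REMAP_TABLESPACE can be specified multiple times in a single impdp command.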
I was about to move some tables from one tablespace to another, but it seems it is not possible to move partitioned tables between tablespaces of different block sizes.
So far the only option I have is to export and then import back the data.
Is there any way to move a partitioned table between tablespaces of different block sizes?
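If export/import remains the route, a Data Pump sketch (all names are placeholders) that lands the partitions in the new-blocksize tablespace in one pass:

    expdp app_owner/pwd TABLES=part_tab DIRECTORY=dp_dir DUMPFILE=part_tab.dmp

    impdp app_owner/pwd TABLES=part_tab DIRECTORY=dp_dir DUMPFILE=part_tab.dmp \
          REMAP_TABLESPACE=ts_8k:ts_16k TABLE_EXISTS_ACTION=REPLACE

Online alternatives exist (e.g. DBMS_REDEFINITION into an interim table pre-created in the target tablespace), but the underlying restriction stands: all partitions of one object must live in tablespaces of a single block size.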
What have people set for SGA and PGA sizes on their larger-usage, larger-data databases? I've been watching one of our warehouses grow both in terms of tables (number and sizes) and user groups querying the database. We're at a 96GB SGA and 10GB PGA, but I was thinking in terms of a 1/2 TB machine to pin some larger tables. I know we'll have SSDs soon, but I am seeing enormous numbers of reports using windowing and analytic functions, and inline sets being created. How big, in general, do you have your larger systems set?
I did some Google searches about large numbers of extents and ASSM. I see bits and pieces on the web. This is something I need to look at while testing an application. Without going into 'why' I would use smaller extents, I just want to make sure I know what to look for during testing. Issues with massive numbers of extents:
1. DBA_EXTENTS queries are really slow.
2. Issues truncating tables (due to having to read lots of extents).
3. Issues splitting MAXVALUE partitions and with dropping partitions.
4. If I stay away from ASSM, would that reduce these issues?

Are there any other performance issues or other issues I need to know about to check when I do tests?
Are there any issues with query or insert wait times? The tables that would get smaller extents would have thousands of partitions/sub-partitions, and most of these sub-partitions will be rather small. I just want to test a variety of different cases; the 'why' will come out during testing.
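For the baseline measurements, two probes worth timing as the extent counts grow (APP_OWNER is a placeholder):

    -- per-segment extent counts from the extent map (this is the slow one):
    SELECT segment_name, COUNT(*) AS extents, ROUND(SUM(bytes)/1024/1024) AS mb
      FROM dba_extents
     WHERE owner = 'APP_OWNER'
     GROUP BY segment_name
     ORDER BY extents DESC;

    -- the same numbers from the segment header, which stays fast:
    SELECT segment_name, extents, ROUND(bytes/1024/1024) AS mb
      FROM dba_segments
     WHERE owner = 'APP_OWNER'
     ORDER BY extents DESC;

Comparing the runtimes of the two at different extent counts gives a concrete measure of issue 1 on the list above.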
We are using Oracle 10g. With our code, Oracle partitions are currently all sized the same way: each partition uses 10MB for data and 12MB for indexes (with the 6 default indexes), even if very few records are written to the partition.
We create partitions in advance as part of a nightly job about 10 minutes in duration. Can some intelligence be added so that the size of a partition is decided dynamically based on statistics? A lot of space is being wasted because of this.
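One possibility, assuming range partitioning in a locally managed tablespace (table and partition names here are made up), is for the nightly job to set a small INITIAL extent explicitly whenever it pre-creates a partition expected to stay small:

    ALTER TABLE sales ADD PARTITION p_20111201
      VALUES LESS THAN (TO_DATE('2011-12-02', 'YYYY-MM-DD'))
      STORAGE (INITIAL 64K NEXT 64K);

The job could derive the INITIAL value from the measured size of recent partitions (e.g. DBA_SEGMENTS.BYTES) instead of a fixed 10MB; with an AUTOALLOCATE tablespace Oracle still rounds extent sizes, but the starting allocation drops dramatically for near-empty partitions.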
Before I begin, I want to clarify that I am a newbie in data warehouse administration. I need to know how to calculate the sizes of the archive logs and redo logs on a data warehouse DB, in order to make an initial sizing of the DB at the disk level.
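Redo (and therefore archive) volume is usually measured rather than derived. If a comparable existing system is available, V$ARCHIVED_LOG gives the daily archive volume directly:

    SELECT TRUNC(completion_time)                        AS day,
           ROUND(SUM(blocks * block_size) / 1024 / 1024) AS archived_mb
      FROM v$archived_log
     GROUP BY TRUNC(completion_time)
     ORDER BY day;

For a brand-new warehouse, a rough starting point is to estimate redo from the planned daily load volume (conventional loads generate redo on the order of the data loaded; NOLOGGING direct-path loads generate far less) and provision archive space for that volume times the retention window, plus headroom.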
Is there a way to set the default sizes for the canvas, Object Navigator and Properties windows in Forms Designer so that they don't maximize when opened? I tried to set them in the caupref and cagpref files to no avail.