Indexing Changes Most Of The Database Blocks On Storage
Jun 20, 2011
DB: Oracle 10g R2
OS: SUSE Linux (SLES 10 SP1) x86_64
We are maintaining a DR copy of our database server (Oracle 10g R2 on SUSE SP1 Linux) using PlateSpin (
[URL]......
PlateSpin is set to replicate block-based incremental data (deltas) every 1.5 hours from Production to the DR site over a 30 Mbps dedicated fiber link.
Our maximum data change per day (during business hours) does not exceed 300 MB. Yet during business hours PlateSpin replicates at least 1 GB per replication cycle, while during off hours it replicates 300 to 500 MB per cycle. We are facing this strange issue with this box only (SLES 10 SP1 + Oracle 10g R2); we have protected MS Exchange 2007 Server workloads without seeing it, i.e. in the case of Exchange only the delta replicates from the production server to the DR site.
PlateSpin support tells us that Oracle re-indexes its database for better performance, so it is possible that re-indexing causes block-level changes on storage, and since PlateSpin works at the block level, that is why it replicates so much (even though the data has not changed that much).
Here are the actual words of PlateSpin support:
<snip>
I think whenever Oracle database indexing happens, it changes most of the blocks of the database and PlateSpin replicates all those blocks.
As you know, PlateSpin checks the date/time attribute of every block before replication, and if the date/time attribute has changed since the last replication, it considers the block changed and replicates it to the PlateSpin appliance. So my suggestion is to look into the Oracle server behaviour before/after the data indexing process and do the needful or find a workaround to overcome this issue.
We can't delete the old database; we are using both and want to continue using both of them.
Now the problem:
Both DBs have a table called RETAILER.
I want to prevent duplication between them.
I.e. if retailer ID 12 has been created in the old one, the new one shouldn't allow us to create retailer
id---12
Similarly, if there is a retailer 13 in the new one, it shouldn't be possible to create it in the old table.
Retailer IDs are unique; it is not the case that retailer ID 5, for example, refers to a different retailer
in these two tables of the two different DBs. It is unique.
Now I have to enforce this rule.
How can I do it?
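One common approach is a database link from each database to the other plus a before-insert trigger that rejects an ID that already exists remotely. A minimal sketch for the new database, assuming a hypothetical link called OLD_DB_LINK and a RETAILER(RETAILER_ID) table; a mirror-image trigger would go on the old database. Note this adds a remote round trip to every insert and is not immune to race conditions:

CREATE OR REPLACE TRIGGER retailer_no_dup_bi
BEFORE INSERT ON retailer
FOR EACH ROW
DECLARE
   l_cnt NUMBER;
BEGIN
   -- OLD_DB_LINK is a hypothetical database link to the other database
   SELECT COUNT(*) INTO l_cnt
     FROM retailer@OLD_DB_LINK
    WHERE retailer_id = :NEW.retailer_id;

   IF l_cnt > 0 THEN
      RAISE_APPLICATION_ERROR(-20001,
         'Retailer ' || :NEW.retailer_id || ' already exists in the other database.');
   END IF;
END;
/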
Second question ===================
If a primary key is auto-increment, how can I skip some numbers? Let's say it is at 1, 2, 3, 4.
I want the next inserted record to get 6, not 5, and I don't want to feed the value 6 myself. I want to know how to change the numbering of the primary key so that it skips some numbers when it is set to auto-increment.
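If this is an Oracle database, the usual "auto increment" mechanism is a sequence (often fed by a before-insert trigger), and a sequence can be made to skip values without touching any data. A sketch, using a hypothetical sequence MY_TABLE_SEQ currently standing at 4:

-- Jump the sequence forward by 2 in one step, then restore the normal increment
ALTER SEQUENCE my_table_seq INCREMENT BY 2;
SELECT my_table_seq.NEXTVAL FROM dual;   -- consumes the jump (returns 6 if it was at 4)
ALTER SEQUENCE my_table_seq INCREMENT BY 1;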
While increasing the tablespace I am getting the error below. How do I handle this?
SQL> set lin 300
SQL> col TABLESPACE_NAME for a25
SQL> col FILE_NAME for a65
SQL> select TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE,sum(BYTES/1024/1024) MB
  2  from dba_data_files where TABLESPACE_NAME='SYSAUX' group by TABLESPACE_NAME,FILE_ID,FILE_NAME,AUTOEXTENSIBLE order by sum(BYTES/1024/1024) DESC,file_name;

TABLESPACE_NAME              FILE_ID FILE_NAME                            AUT         MB
------------------------- ---------- ------------------------------------ --- ----------
SYSAUX                             3 /ora2/oradata/dbname/sysaux_01.dbf   NO         300

SQL> Alter database datafile 3 RESIZE 60000M;
Alter database datafile 3 RESIZE 60000M
*
ERROR at line 1:
ORA-01144: File size (7680000 blocks) exceeds maximum of 4194303 blocks
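ORA-01144 means a single smallfile datafile cannot exceed 4194303 blocks (roughly 32 GB with an 8 KB block size), so a 60000M resize is rejected regardless of free disk space. A minimal sketch of one way around it, assuming an 8 KB block size; the second file name is only an example path:

-- Resize the existing file to just under the 4194303-block limit
ALTER DATABASE DATAFILE 3 RESIZE 32000M;

-- Add a second datafile to the tablespace for the remaining space
ALTER TABLESPACE SYSAUX ADD DATAFILE '/ora2/oradata/dbname/sysaux_02.dbf'
  SIZE 1000M AUTOEXTEND ON NEXT 500M MAXSIZE 30000M;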
I have a master-detail form; both are database blocks.
I have entered values in the master block but not in the detail block. My problem is that the user should not be allowed to move to the next record of the master block before saving the current record.
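A minimal sketch of one way to do this in Forms, assuming the user moves between records with the standard next-record key so a block-level KEY-NXTREC trigger on the master block fires; the message text is only a placeholder:

-- Block-level KEY-NXTREC trigger on the master block
IF :SYSTEM.RECORD_STATUS IN ('NEW', 'INSERT', 'CHANGED') THEN
   MESSAGE('Please save the current record before moving to the next one.');
   RAISE FORM_TRIGGER_FAILURE;
ELSE
   NEXT_RECORD;
END IF;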
My organisation is currently discussing different storage options for database storage. Our production database is nearly 2 TB and we do not want to continue with the existing NetApp storage (we run a two-node RAC on 11.2.0.2 with an NFS filesystem from a NetApp filer).
We were looking at different options and came across Nimble Storage; they are a fast-growing company targeting mid-range storage customers. The initial talks and demonstration looked very promising in terms of IO performance (they claim 40,000 - 60,000 IOPS for their CS400 series Nimble Storage array) and the other options they provide, but we understand that the majority of their customers use it for VDI and other infrastructure.
They have demonstrated it to us for an Oracle database with ASM storage over iSCSI LUNs. We are yet to do the POCs and benchmarking.
Has anyone come across Nimble Storage for running Oracle databases?
I have, for example, two tables: COMPANIES and EMPLOYEES. COMPANIES has a primary key on column COMP_ID. EMPLOYEES has a foreign key EMP_COMP_ID. I always query COMPANIES with a WHERE clause on COMP_ID and COMP_ADDRESS. What indexes should I create on COMPANIES in this case? First a unique index on COMP_ID, and second a composite index on COMP_ID and COMP_ADDRESS? Or will one unique index on COMP_ID and COMP_ADDRESS be enough?
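For illustration, a sketch of the trade-off using the names from the question; since COMP_ID is already unique through the primary key, an extra index mainly matters if you want the query answered from the index alone:

-- The existing primary key already provides a unique index on COMP_ID,
-- and an equality predicate on COMP_ID narrows the result to one row anyway,
-- with COMP_ADDRESS applied as a simple filter on that row.

-- A composite index is mostly useful for index-only access,
-- i.e. answering the query without visiting the table block:
CREATE INDEX COMPANIES_ID_ADDR_IX ON COMPANIES (COMP_ID, COMP_ADDRESS);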
What is the best practice to implement for indexing: global indexes or local indexes? I would like to implement one of them on an object that has been partitioned horizontally. I don't know exactly what to make of it.
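For reference, a minimal sketch of the two forms on a hypothetical range-partitioned SALES table; a local index is equipartitioned with the table (convenient for partition maintenance such as dropping old partitions), while a global index is a single structure spanning all partitions:

-- Local index: one index partition per table partition
CREATE INDEX SALES_CUST_LIX ON SALES (CUST_ID) LOCAL;

-- Global (non-partitioned) index: one index over the whole table
CREATE INDEX SALES_ORDER_GIX ON SALES (ORDER_DATE);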
Does Oracle support an index that only contains a subset of the records in a table?
For example, if a table has 1 million records, can we have an index which contains, say, 100k of them? We have used similar logic in DMSII on Unisys. I am not sure whether Oracle supports it.
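One common workaround (before the partial indexes introduced in 12c) is a function-based index that evaluates to NULL for the rows you want excluded, because entirely-NULL keys are not stored in a B-tree index. A sketch with a hypothetical ORDERS table and STATUS column:

-- Only rows with STATUS = 'OPEN' produce a non-NULL key, so only they are indexed
CREATE INDEX ORDERS_OPEN_IX
  ON ORDERS (CASE WHEN STATUS = 'OPEN' THEN ORDER_ID END);

-- Queries must use the same expression for the index to be considered
SELECT *
  FROM ORDERS
 WHERE CASE WHEN STATUS = 'OPEN' THEN ORDER_ID END = :order_id;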
I want to ask about indexing on a partitioned table. I have a table indexed by a local index. When I want to select all the data, I execute this query:
select * from Book_Issue_Part where status='Pinjam';
but it does not return all the data; only part of the data is selected. Is this the wrong query to select all the data from an indexed, partitioned table? What query should I execute to get all the data?
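A query like this should return every matching row regardless of how the table is indexed or partitioned, so a first diagnostic step might be to compare the indexed access with a forced full scan and to check the status of each local index partition. A sketch, where BOOK_ISSUE_PART_IX is a placeholder for the actual index name:

-- Compare row counts with and without the index (the hint forces a full table scan)
SELECT COUNT(*) FROM Book_Issue_Part WHERE status = 'Pinjam';
SELECT /*+ FULL(b) */ COUNT(*) FROM Book_Issue_Part b WHERE status = 'Pinjam';

-- Check whether any local index partition is unusable
SELECT index_name, partition_name, status
  FROM user_ind_partitions
 WHERE index_name = 'BOOK_ISSUE_PART_IX';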
I am having an issue with the LOBs in our database. Our production DB is about 71 GB right now; if I do an export/import using Data Pump, the data volume becomes 40 GB. I am sure it is because of the LOBs in my database: our application uses lots of LOB datatypes and I think the space is not being released.
The challenge is that I can't do a full export/import of my production DB as it is a 24/7 system, and an export/import would need a minimum of 5 hours, which the business will not permit. How can I reclaim the unused space here? I have used Segment Advisor, but it has only given me about 6 GB of space back.
Is there any script I can use to reclaim space in the database?
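A sketch of one online approach, assuming the segments live in ASSM tablespaces and using hypothetical table/column names (DOCS, DOC_BODY); SHRINK SPACE can run while the application is up, but it should be tested on a copy first:

-- Shrink a specific LOB segment online
ALTER TABLE DOCS MODIFY LOB (DOC_BODY) (SHRINK SPACE);

-- Or shrink the table together with its dependent segments (indexes, LOBs)
ALTER TABLE DOCS ENABLE ROW MOVEMENT;
ALTER TABLE DOCS SHRINK SPACE CASCADE;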
However, I am taking database size into account; I believe there has to be some sort of middle ground between infrastructure and performance.
This is specific to tables which map other tables. These tables that I'm creating usually consist of only 4 columns: 3 columns are mapped IDs and the other is a date-in field. The 3 ID columns need to be indexed because they are used in joins within views, and those columns are indexed in the other tables as well. See the sketch below.
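For illustration, a sketch of the two usual indexing options for such a mapping table (all names hypothetical); three single-column indexes support joins entering on any of the IDs, while one composite index is smaller but only helps joins that use its leading column(s):

-- Hypothetical 4-column mapping table
CREATE TABLE ORDER_ITEM_MAP (
  ORDER_ID     NUMBER NOT NULL,
  ITEM_ID      NUMBER NOT NULL,
  WAREHOUSE_ID NUMBER NOT NULL,
  DATE_IN      DATE   NOT NULL
);

-- Option 1: one index per join column
CREATE INDEX OIM_ORDER_IX     ON ORDER_ITEM_MAP (ORDER_ID);
CREATE INDEX OIM_ITEM_IX      ON ORDER_ITEM_MAP (ITEM_ID);
CREATE INDEX OIM_WAREHOUSE_IX ON ORDER_ITEM_MAP (WAREHOUSE_ID);

-- Option 2: a single composite index, useful when joins always lead with ORDER_ID
CREATE INDEX OIM_COMPOSITE_IX ON ORDER_ITEM_MAP (ORDER_ID, ITEM_ID, WAREHOUSE_ID);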
I have a question about Oracle schemas. An Oracle schema is a user together with its database objects (tables, indexes, etc.). In two different Oracle schemas there can be two tables with the same name, and when users connect to the same Oracle instance they can access these tables with the schema.tablename convention.
However, how is that structure handled in physical database storage? Are two tables with the same name created physically? I mean, are these two tables with the same name stored in different database files?
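They are simply two separate segments that happen to share a name; each is stored in whatever tablespace (and therefore datafiles) its owning schema placed it in, which may or may not be the same. A quick way to see this, using a hypothetical table name RETAILER owned by two schemas:

-- One row per physical segment, showing owner, tablespace and first datafile
SELECT owner, segment_name, segment_type, tablespace_name, header_file
  FROM dba_segments
 WHERE segment_name = 'RETAILER';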
Sun Solaris 10, Oracle 11gR2 x86. How are you doing? I am trying to build a test system and I have never built ASM before. Do I need to install the database first and then migrate it to ASM? The other problem I am having is that I only have one raw disk: one disk holds root and the Oracle software, and the other is a raw disk for ASM. Do you know whether it is possible to build ASM on one raw disk?
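A single disk is enough for a test system as long as the disk group uses external redundancy (so ASM does no mirroring). A minimal sketch, run from the ASM instance, with a hypothetical Solaris raw device path:

-- Create a one-disk disk group; EXTERNAL REDUNDANCY means no ASM mirroring
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/c1t1d0s6';   -- hypothetical device, must be owned by the grid/oracle user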
We have a standalone database running on ASM; it is 11gR2 on a Linux version 5 server. After a database bounce, the DB isn't coming up and is showing the error below.
SQL> startup nomount
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/test/spfiletest.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/test/spfiletest.ora
ORA-15056: additional error message
ORA-17503: ksfdopn:2 Failed to open file +DATA/test/spfiletest.ora
ORA-15001: diskgroup "DATA" does not exist or is not mounted
ORA-00450: background process 'ASMB' did not start
ORA-00443: background process "ASMB" did not start
ORA-06512: at line 4
I also checked the ASM disk groups and can see that all of them are MOUNTED properly. In fact I can also see the spfile physically present in the ASM disk group, so it looks like the instance simply cannot reach the spfile at startup even though it exists. Find the snapshot below.
State    Type    Rebal  Sector  Block  AU       Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N      512     4096   1048576  358400    329103   0                329103          0              N             DATA/
MOUNTED  EXTERN  N      512     4096   4194304  358368    358288   0                358288          0              N             FRA/
MOUNTED  EXTERN  N      512     4096   4194304  20480     18780    0                18780           0              N             REDO/
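One possible explanation for this pattern is that the database instance cannot reach the ASM instance at all (for example a wrong ORACLE_HOME/ORACLE_SID environment, or missing OS group membership for the oracle binary), since ORA-00450/ORA-00443 show that the ASMB background process never started even though the disk groups are mounted on the ASM side. As a first check, a sketch of two queries to run on the +ASM instance:

-- Confirm the disk group is mounted, and see which database instances
-- have actually managed to connect to ASM as clients
SELECT name, state FROM v$asm_diskgroup;
SELECT group_number, instance_name, db_name, status FROM v$asm_client;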
I cannot start ASM on my Oracle Database Appliance.
crsctl status resource -t says:
ora.asm
      ONLINE  ONLINE       node1        Started
      ONLINE  ONLINE       node2        Started
However, if I try to access ASMCMD it says:
[grid@node1 ~]$ asmcmd
Connected to an idle instance.
ASMCMD> startup
ORA-00304: requested INSTANCE_NUMBER is busy
Connected to an idle instance.
I am using 10.2.0.1 on OEL5. I have installed CRS, Oracle home, ASM on both nodes and everything is fine.
When I invoke DBCA to create a database, it says:
*'DBCA could not startup the ASM instance configured on this node. To proceed with database creation using ASM you need the ASM instance to be up and running. Do you want to recreate the ASM instance on this node?'*
Is it a bug? Some blogs say this is a bug and that patch 8288940 will solve it. They also say this patch is meant to fix an incompatibility between 11g ASM and 10g, but here I am using 10.2.0.1 for everything.
Below is the output of Tom Kyte's show_space script, which I have run on one of my indexes.
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................         102,936
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................      28,615,887
Total Blocks............................      28,748,800
Total Bytes............................. 235,510,169,600
Total MBytes............................         224,600
Unused Blocks...........................               0
Unused Bytes............................               0
Last Used Ext FileId....................             233
Last Used Ext BlockId...................       1,574,409
Last Used Block.........................          12,800

PL/SQL procedure successfully completed.
If I look at the unformatted blocks, the count is zero, which tells me that data is being placed into every block (pretty well packed). But what I don't understand is why there are 102,936 blocks that are only 25-50% full. I would have expected to see some blocks in the 75%-100% full range, as this index was dropped and rebuilt only 2 months ago.
This index is on a partitioned table, where the partitions for day 90 and older are dropped daily.
Here is the layout of the index:
CREATE INDEX T1.FEAT_IDX ON T1.FEAT (R_SEQ, SYSTEM_NAME, FEATURE, FLAG) NOLOGGING TABLESPACE TB1
[Code] .........
What do I need to do to get blocks into the FS4 range (blocks 75%-100% used) or higher?
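One way to see how full the leaf blocks really are, and how much space is tied up in deleted entries (common when partitions are dropped daily and a global index is maintained), is to validate the index and look at INDEX_STATS; note that ANALYZE ... VALIDATE STRUCTURE blocks DML while it runs and can take a long time on an index this size, so it is best done in a quiet window (for a partitioned index, one partition at a time with the PARTITION clause). A sketch:

-- Populates the single-row INDEX_STATS view for the current session
ANALYZE INDEX T1.FEAT_IDX VALIDATE STRUCTURE;

-- PCT_USED shows average leaf-block fullness; DEL_LF_ROWS shows deleted entries
SELECT name, height, lf_blks, del_lf_rows, pct_used
  FROM index_stats;

-- If most of the waste is deleted entries, coalescing repacks adjacent leaf blocks online
ALTER INDEX T1.FEAT_IDX COALESCE;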
I have a block of code that looks something like this; I'll write it in pseudo code to avoid pasting hundreds of lines:
IF <condition> THEN
   FOR record IN (select query here..) LOOP
      -- add data
   END LOOP;
END IF;

IF <condition> THEN
   -- do something
END IF;

IF <condition> THEN
   -- do something
END IF;
Now, in the 1st IF statement there is a FOR loop which basically builds up a table-type array; this takes at least an hour to execute, because the select query returns thousands of records.
The strange thing is that my log files show that, at the time the procedure was executed, the code in the 2nd IF statement ran almost straight away. Is this even possible?
If the loop in the 1st IF statement takes over an hour to finish, how is it possible for the 2nd IF statement's code to run straight away? My log files show that the loop in the 1st IF statement was going on for a good hour, yet the 2nd IF statement was executed straight away.
I am interested in the fastest way to access all the data in a physical block. What is a quick way to fetch a block's rows using ROWIDs? I found this script, but I would like to know whether I can get faster access:
select * from table_name t
 where rowid between 'AAAUaOAAEAAHkJiAAA' and 'AAAUaOAAEAAHkJiAA8';

where 'AAAUaOAAEAAHkJiAAA' is the first ROWID in the block and 'AAAUaOAAEAAHkJiAA8' is the last one.
My question is: can I retrieve all the data in one block more quickly than with this query?
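If the goal is just to read every row of a known block without hard-coding ROWID literals, one option is to build the first and last possible ROWIDs of that block with DBMS_ROWID. A sketch, where the data object id, relative file number and block number are placeholders you would substitute:

-- 1 = extended ROWID format; 32767 is the highest possible row slot in a block
SELECT *
  FROM table_name t
 WHERE t.rowid BETWEEN
       dbms_rowid.rowid_create(1, :data_object_id, :relative_fno, :block_no, 0)
   AND dbms_rowid.rowid_create(1, :data_object_id, :relative_fno, :block_no, 32767);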
See the table below. The entire data set sits between DOCSTART and DOCEND, and within that the data is enclosed between BACCSTART and BACCEND; this type of block is repeatable. For each block between BACCSTART and BACCEND I have to pick up every ABCD row (which is also repeatable), plus TOTAL (occurring once per block) and ACCNAME (occurring once per block), and form an XML document
for each such block. Presently I am using a FOR loop, but the performance is not up to the mark: there will be around 200 such blocks for which I have to form the XMLs within 15 seconds, and the FOR loop is currently taking around 53 seconds.
ROWNUM  NAME       VALUE
     1  DOCSTART   null
     2  BACCSTART  null
     3  ABCD       abcd
     4  ABCD       abcd2
     5  PQRS       pqrs
     6  PQRS       pqrs2
     7  TOTAL      100
     8  ACCNAME    name
     9  BACCEND    null
    10  BACCSTART  null
    11  ABCD       abcd
    12  ABCD       abcd2
    13  PQRS       pqrs3
    14  PQRS       pqrs4
    15  TOTAL      150
    16  ACCNAME    name
    17  BACCEND    null
    18  DOCEND     null
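One set-based alternative to the row-by-row loop is to tag each row with its block number using a running count of BACCSTART markers, then aggregate per block with the SQL/XML functions. A sketch, assuming the rows are available as SRC(rn, name, value) ordered by rn; since the target XML was not shown, the element names here are only placeholders:

WITH tagged AS (
  SELECT rn, name, value,
         SUM(CASE WHEN name = 'BACCSTART' THEN 1 ELSE 0 END)
             OVER (ORDER BY rn) AS block_no
    FROM src
   WHERE name NOT IN ('DOCSTART', 'DOCEND')
)
SELECT block_no,
       XMLELEMENT("BACC",
         XMLELEMENT("ACCNAME", MAX(CASE WHEN name = 'ACCNAME' THEN value END)),
         XMLELEMENT("TOTAL",   MAX(CASE WHEN name = 'TOTAL'   THEN value END)),
         XMLAGG(CASE WHEN name = 'ABCD'
                     THEN XMLELEMENT("ABCD", value) END ORDER BY rn)
       ) AS block_xml
  FROM tagged
 GROUP BY block_no;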
I know what empty blocks are and how to remove them. What I'd like is to not have empty blocks in the first place when loading a table. I load a lot of "static" tables and would like not to have any wasted space at the end, with minimal shenanigans.
I've set PCTFREE to 0.
I've set INITIAL close to the final table size.
I've set NEXT to 1M.
I've set PCTINCREASE to 0.
The block size is 8k.
Yet I still need to at least do an ALTER TABLE ... DEALLOCATE UNUSED afterwards.
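For comparison, a sketch of how such a "static" table is often loaded so that little empty space is left behind, using a hypothetical table STATIC_T and the storage settings listed above; any remaining space above the high-water mark still comes back with the DEALLOCATE UNUSED already mentioned:

-- Hypothetical static lookup table, packed as tightly as possible
CREATE TABLE STATIC_T (
  id    NUMBER,
  descr VARCHAR2(100)
)
PCTFREE 0
STORAGE (INITIAL 100M NEXT 1M PCTINCREASE 0)
TABLESPACE users;

-- Direct-path load fills blocks above the high-water mark back to back
INSERT /*+ APPEND */ INTO STATIC_T
SELECT object_id, object_name FROM all_objects;
COMMIT;

-- Return whatever allocated-but-unused space remains above the high-water mark
ALTER TABLE STATIC_T DEALLOCATE UNUSED;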
I have two blocks, both multi-record blocks. The 1st block is a control block and the second is a database block. Both blocks have the same fields (for example: Location, Location_Name, Location_Type). In the first block (the control block) I have a checkbox. My goal is that when I check the checkbox and click the Add button, all the records which are selected in the first block should go to the second block.
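A minimal sketch of a WHEN-BUTTON-PRESSED trigger for the Add button, assuming the control block is named CTRL_BLK, the database block DB_BLK, and the checkbox item CTRL_BLK.SELECT_FLAG with a checked value of 'Y'; all names are placeholders:

-- WHEN-BUTTON-PRESSED trigger on the Add button
GO_BLOCK('CTRL_BLK');
FIRST_RECORD;
LOOP
   IF :CTRL_BLK.SELECT_FLAG = 'Y' THEN
      -- copy the selected row into a new record of the database block
      GO_BLOCK('DB_BLK');
      LAST_RECORD;
      CREATE_RECORD;
      :DB_BLK.LOCATION      := :CTRL_BLK.LOCATION;
      :DB_BLK.LOCATION_NAME := :CTRL_BLK.LOCATION_NAME;
      :DB_BLK.LOCATION_TYPE := :CTRL_BLK.LOCATION_TYPE;
      GO_BLOCK('CTRL_BLK');
   END IF;
   EXIT WHEN :SYSTEM.LAST_RECORD = 'TRUE';
   NEXT_RECORD;
END LOOP;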