Hybrid Columnar Compression is dependent on the underlying storage system. See Oracle Database Licensing Information for more information.
The following is from the Oracle® Database PL/SQL Packages and Types Reference:
Compression Constant     Compression Level   Description
COMP_FOR_OLTP            2                   OLTP compression
COMP_FOR_QUERY_HIGH      4                   High compression level for query operations
COMP_FOR_QUERY_LOW       8                   Low compression level for query operations
COMP_FOR_ARCHIVE_HIGH    16                  High compression level for archive operations
COMP_FOR_ARCHIVE_LOW     32                  Low compression level for archive operations
To use compression level 4 or higher, do we have to have ZFS or Pillar storage?
I have noticed that Oracle Text related objects, particularly the $I tables, are some of the largest objects in our database. I have been actively pursuing Oracle Advanced Compression in our databases for OLTP table compression and LOB compression. I have been unable to find any documentation or notes on whether it is advisable to implement either OLTP table compression or LOB compression for Oracle Text objects.
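Before deciding, it helps to quantify which Text segments actually dominate. A minimal sketch, assuming the default DR$<index_name>$I naming that Oracle Text uses for its token tables:

[code]
-- Largest Oracle Text $I segments (token tables) in the database
SELECT owner, segment_name, segment_type,
       ROUND(bytes/1024/1024) AS size_mb
  FROM dba_segments
 WHERE segment_name LIKE 'DR$%$I'
 ORDER BY bytes DESC;
[/code]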
Is anyone using Oracle 11g's compression feature in production? I haven't read anything negative yet, but that doesn't mean there isn't anything that could have an adverse effect. I wanted to check whether there are any effects on performance, or any disadvantages, of using this compression feature. I have tested this on one of my major tablespaces and I did see a big difference in the reduced size of the tablespace, but I am still hesitant to put this into production.
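For anyone in the same position, a sketch for testing outside production (11gR2 syntax; the table names are hypothetical): build a compressed copy of a representative table and compare segment sizes before deciding:

[code]
CREATE TABLE orders_test COMPRESS FOR OLTP
  AS SELECT * FROM orders;

SELECT segment_name, ROUND(bytes/1024/1024) AS size_mb
  FROM user_segments
 WHERE segment_name IN ('ORDERS', 'ORDERS_TEST');
[/code]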
I am researching a performance problem on an Oracle pre-prod DB in a RAC cluster using AQ. The queue table has 88 records, is about 900 MB in size, and takes 90+ seconds to do a select count(*). In prod the same table has about 44 records, is 80 MB in size, and takes about 9 seconds to query. The DB is at 10.2.0.4 running on a Linux/Sun host. In the USER_DATA column I am seeing an entry in STR_VALUE that displays 'Unable to successfully deliver a message after "MaxDeliveryCnt" attempts. Please verify that the onMessage() method of the MDB does not throw a RuntimeException' for each record in the table. I don't have direct access to the prod DB, so I cannot verify this message in that environment. My question: is this table holding onto records that should be cleared out, and should it be dropped and rebuilt to reduce its size? I have seen this technique improve performance for other, non-queueing tables, but I don't know if it is possible with a queue table.
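If the stuck rows really are undeliverable messages that can be discarded, the documented purge API is the usual route (a sketch; the queue table name is hypothetical, and purging removes rows but does not by itself shrink the 900 MB segment):

[code]
DECLARE
  po dbms_aqadm.aq$_purge_options_t;
BEGIN
  po.block := FALSE;  -- do not take an exclusive lock on the queue table
  DBMS_AQADM.PURGE_QUEUE_TABLE(
    queue_table     => 'APPUSER.MY_QUEUE_TABLE',
    purge_condition => NULL,  -- NULL purges all messages
    purge_options   => po);
END;
/
[/code]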
I have a few DBs replicated using Advanced Replication; some tables are read-only (basic replication) and others are updatable (Advanced Replication).
The infrastructure is Materialized View Replication. My trouble is that I do not know how I should treat the updatable tables that have foreign keys. I read in other FAQs that you should replicate the indexes the same way you replicate tables (dbms_repcat.create_master_repobject).
What is the correct way to treat the foreign keys with this kind of updatable snapshot?
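For what it's worth, a minimal sketch of the dbms_repcat call mentioned above, applied to an index that supports a foreign key (the group, schema, and index names are hypothetical):

[code]
BEGIN
  DBMS_REPCAT.CREATE_MASTER_REPOBJECT(
    gname => 'REP_GROUP',
    sname => 'APP_OWNER',
    oname => 'ORDERS_FK_IDX',
    type  => 'INDEX');
END;
/
[/code]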
We are seeing a volume issue when taking an RMAN level 0 backup of a database. The database version is 11.2.0.2 and it is on RHEL 2.1. As 11g supports compression for RMAN, we have implemented it to reduce the backup space used:
" CONFIGURE COMPRESSION ALGORITHM 'LOW' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE "
However, during a full backup the volume size increases, meaning we have to grow the /data volume (currently 500 GB) to more than 1 TB just for RMAN to get through; otherwise the backup hangs. Once the backup is done we bring the volume back down to less than 1 TB. The other compression settings are HIGH and MEDIUM; however, I am not very sure whether changing to HIGH or LOW will work, as I couldn't find the right doc on My Oracle Support (Metalink), or maybe I didn't search correctly. I will continue to look for it.
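One thing worth checking (a hedged suggestion, not from the post): CONFIGURE COMPRESSION ALGORITHM only chooses which algorithm is used; the backup is actually compressed only when it is taken AS COMPRESSED BACKUPSET, either per command or via the device-type default. Also note that the LOW, MEDIUM, and HIGH algorithms require the Advanced Compression option, while BASIC does not. A sketch:

[code]
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';
RMAN> BACKUP AS COMPRESSED BACKUPSET INCREMENTAL LEVEL 0 DATABASE;
[/code]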
We want to use archive compression for our standby (standalone, non-Exadata). We don't have a license for Advanced Compression. Is archive compression possible without Advanced Compression?
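For context, what "archive compression" usually refers to here is the redo transport COMPRESSION attribute on the archive destination (a sketch; the service name and DB_UNIQUE_NAME are hypothetical). The licensing documentation lists redo transport compression under the Advanced Compression option, so check that before enabling it:

[code]
ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=stby_db COMPRESSION=ENABLE VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby_db';
[/code]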
I had created a primary key and wanted to compress it as per my senior's instructions. Below are my results; the size increased after compression.
select compression from dba_indexes where index_name = 'TEST_IDX';

COMPRESSION
-----------
DISABLED

select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024) size_MB
[code]....
We then ran compression on the primary key index TEST_IDX:
ALTER INDEX SCOTT.TEST_IDX REBUILD INITRANS 15 TABLESPACE DATA_01 COMPRESS;
ANALYZE INDEX SCOTT.TEST_IDX VALIDATE STRUCTURE;
Now when I ran the below select statements:
select compression from dba_indexes where index_name = 'TEST_IDX';

COMPRESSION
-----------
ENABLED

select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024) size_MB
[code]....
As you can see, after compression the blocks and size increased. When I ran this for many other tables and indexes, we observed the blocks and size were reduced by 50-70%; I am not sure why this happened with this index compression.
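A likely explanation (hedged): index COMPRESS works by de-duplicating repeated leading key values, so a single-column primary key index, which by definition has no duplicates, gains nothing and pays the per-block compression overhead, which can make it larger. After VALIDATE STRUCTURE, INDEX_STATS tells you in advance whether compression is worthwhile:

[code]
ANALYZE INDEX SCOTT.TEST_IDX VALIDATE STRUCTURE;

-- OPT_CMPR_COUNT   = recommended number of prefix columns to compress
-- OPT_CMPR_PCTSAVE = estimated % saving at that setting (0 = don't bother)
SELECT name, opt_cmpr_count, opt_cmpr_pctsave
  FROM index_stats;
[/code]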
What default features are available with 10g R2 Enterprise Edition, especially for RMAN backup? And which features require additional licensing in the 10g R2 EE version?
I am trying to enable OLTP compression on tables, and at the tablespace level for those tables.
Steps I am following are:
1. Move indexes to their own tablespace
2. Enable OLTP compression at the table level:
   alter table table_name move compress for OLTP
3. Rebuild indexes
4. The issue I have is what to do with tables that have LOB columns:
   ALTER TABLE lob_table MOVE LOB (LOB_COL) STORE AS (TABLESPACE index_tbsp); -- Is this correct?
5. alter tablespace data_tablespace default compress for OLTP;
My question is whether this sequence of steps is correct. For tables with LOB columns, do we need to move the LOB index to the index tablespace, given that the LOB segment and LOB index are created in the data tablespace?
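A sketch of step 4 under those assumptions (the schema, table, and column names are hypothetical). One point worth knowing: the LOB index is always stored in the same tablespace as its LOB segment, and a separately specified LOB index tablespace is ignored, so only the LOB segment's placement needs deciding:

[code]
ALTER TABLE app_owner.lob_table MOVE COMPRESS FOR OLTP
  LOB (lob_col) STORE AS (TABLESPACE data_tbsp);

-- MOVE changes rowids, so rebuild the table's indexes afterwards
ALTER INDEX app_owner.lob_table_pk REBUILD TABLESPACE index_tbsp;
[/code]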
3) As a workaround, I compressed these 2 SWP tables with the OLTP option, and then I was able to drop the column from these 2 SWP tables.
4) Is the statement below correct or not? If a table is using block-level compression, then this error will come: ORA-39726: unsupported add/drop column operation on compressed tables.
If the above statement is correct, then how do we find out whether the table data is using block-level compression?
5) We have DBMS_COMPRESSION.GET_COMPRESSION_TYPE. Using this I tried to find out, but I am getting "1" as output, and I am not sure of its exact meaning.
Can you confirm what the conclusion on this is?
SQL> declare
       rid rowid;
       n   number;
     begin
       select max(rowid) into rid from NOVAR.PAYMENT_SWP;
       n := dbms_compression.get_compression_type('NOVAR','PAYMENT_SWP',rid);
       dbms_output.put_line(n);
     end;
     /

PL/SQL procedure successfully completed.

SQL> SET SERVEROUTPUT ON
SQL> /
1

PL/SQL procedure successfully completed.

SQL> SELECT max(rowid) from NOVAR.PAYMENT_SWP;

MAX(ROWID)
------------------
AAsz4fAHSAAAD3IABs
(ii) 2nd table
SQL> set serveroutput on
SQL> declare
       rid rowid;
       n   number;
     begin
       select max(rowid) into rid from NOVAR.PREPAYMENT_SWP;
       n := dbms_compression.get_compression_type('NOVAR','PREPAYMENT_SWP',rid);
       dbms_output.put_line(n);
     end;
     /
1

PL/SQL procedure successfully completed.
SQL> SELECT max(rowid) from NOVAR.INVOICELINE_SWP;
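For reference against the constant table at the top of this page: DBMS_COMPRESSION also defines COMP_NOCOMPRESS = 1, so a return value of 1 means the sampled row is stored uncompressed. With OLTP compression, individual rows can legitimately remain uncompressed until their block fills, and GET_COMPRESSION_TYPE only reports on the single row you pass it. A sketch that decodes the result by name (the table names are from the post above):

[code]
SET SERVEROUTPUT ON
DECLARE
  rid ROWID;
  n   NUMBER;
BEGIN
  SELECT MAX(rowid) INTO rid FROM NOVAR.PAYMENT_SWP;
  n := DBMS_COMPRESSION.GET_COMPRESSION_TYPE('NOVAR', 'PAYMENT_SWP', rid);
  IF n = DBMS_COMPRESSION.COMP_NOCOMPRESS THEN
    DBMS_OUTPUT.PUT_LINE('1: row is not compressed');
  ELSIF n = DBMS_COMPRESSION.COMP_FOR_OLTP THEN
    DBMS_OUTPUT.PUT_LINE('2: row is OLTP-compressed');
  ELSE
    DBMS_OUTPUT.PUT_LINE(n || ': see the DBMS_COMPRESSION constants');
  END IF;
END;
/
[/code]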
I am using Oracle Forms 10g, and basically we have a system that takes over 300 photos on a daily basis. This all works fine with no issues, except for maybe 2-3 photos a month. Occasionally we will get a 'corrupt' photo (it's not actually corrupt; it displays as it should in everything except Forms). When we encounter these photos, Forms just crashes out and the user is unable to query the record with the associated photo until it is deleted and a new one is taken (or alternatively, if we take the photo from the database, open it in paint.net, and just hit save, it will then work). There is no difference that we can see between the photos which don't work and those that do. I have tried using WRITE_IMAGE_FILE to save the photo to disk and READ_IMAGE_FILE to read it back from disk to see if that makes a difference. If I save the file as JPEG with no compression it still crashes; if I save it with low compression it works fine, but we lose quality, which we don't want to lose. Bitmap won't work at all. Saving as JFIF or GIF works fine without any compression, but we still lose quality.
The photo will display fine if we use a javabean to display it but in this instance a javabean is not an option.
One weird thing we noticed: when we are in the form that crashes with these photos and query a working photo first, and THEN query the 'corrupt' photo, the corrupt photo displays fine; but if we go into the form and query a corrupt photo first, Forms crashes as explained.
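For anyone reproducing this, a minimal sketch of the WRITE_IMAGE_FILE/READ_IMAGE_FILE round trip described above (Forms built-ins; the block/item name and file path are hypothetical):

[code]
DECLARE
  v_file VARCHAR2(256) := 'c:\temp\photo_roundtrip.jpg';
BEGIN
  -- Write the displayed image to disk as JFIF with no compression,
  -- then read it straight back into the same image item.
  WRITE_IMAGE_FILE(v_file, 'JFIF', 'PHOTO_BLK.PHOTO_ITEM',
                   NO_COMPRESSION, ORIGINAL_DEPTH);
  READ_IMAGE_FILE(v_file, 'JFIF', 'PHOTO_BLK.PHOTO_ITEM');
END;
[/code]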
We have a requirement from the customer to start using data and index compression in our 11g database. Is this available in Oracle 10g/11g without any additional cost? We are not sure if this will work with our application, so we will have to test it in-house. Is it possible to compress the existing table data and indexes to test it out?
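On the licensing side (a hedged summary of the Licensing Information): basic table compression and index key compression are included in Enterprise Edition at no extra cost, while OLTP table compression requires the separately licensed Advanced Compression option. Existing objects can be compressed in place; a sketch with hypothetical names:

[code]
-- Basic compression; compresses current rows and future direct-path loads
ALTER TABLE app_owner.big_table MOVE COMPRESS;

-- MOVE invalidates the table's indexes, so rebuild them (optionally compressed)
ALTER INDEX app_owner.big_table_ix REBUILD COMPRESS 1;
[/code]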
Does Oracle have any capability of automatically checking which lossless compression algorithm it should apply by analyzing a data stream on data load? Does Oracle have any compression advisors/wizards that would make recommendations as to the type and level of compression?
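Not at load time, as far as I know, but 11gR2 does ship a Compression Advisor: DBMS_COMPRESSION.GET_COMPRESSION_RATIO samples a table and estimates the ratio for a chosen algorithm. A sketch (the schema, table, and scratch tablespace names are hypothetical):

[code]
SET SERVEROUTPUT ON
DECLARE
  blkcnt_cmp   PLS_INTEGER;
  blkcnt_uncmp PLS_INTEGER;
  row_cmp      PLS_INTEGER;
  row_uncmp    PLS_INTEGER;
  cmp_ratio    NUMBER;
  comptype_str VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',
    ownname        => 'APP_OWNER',
    tabname        => 'BIG_TABLE',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp     => blkcnt_cmp,
    blkcnt_uncmp   => blkcnt_uncmp,
    row_cmp        => row_cmp,
    row_uncmp      => row_uncmp,
    cmp_ratio      => cmp_ratio,
    comptype_str   => comptype_str);
  DBMS_OUTPUT.PUT_LINE(comptype_str || ' estimated ratio: ' ||
                       ROUND(cmp_ratio, 2));
END;
/
[/code]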
How do I configure Oracle EM with a newly created Oracle instance on an Oracle 10g DB (a single-instance DB, not RAC)? When I start Oracle EM, it starts against the default DB that was created during the Oracle server installation.
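A sketch of the usual approach, assuming 10g Database Control (dbconsole) rather than Grid Control: each instance needs its own DB Control repository and configuration, created with emca while ORACLE_SID points at the new instance:

[code]
export ORACLE_SID=NEWDB    # the newly created instance (hypothetical SID)
emca -config dbcontrol db -repos create
emctl start dbconsole
[/code]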
But I do a lot of workarounds to make sure the installation does not corrupt my OracleAgent, OracleService, and OracleDataGatherer. If I just let the installation do its thing, I have problems with my libraries and can't start anything.
The errors are: Procedure entry point 'BlaBla' could not be located in the dynamic link library 'AnyName'.
I notice that I have two versions of the libraries in ORACLE_HOME\BIN (one version 8, one version 11). The programs that start the OracleAgent, OracleService, and DataGatherer call the old libraries but expect to find entry points that exist only in the new libraries.
I have a requirement to put together an Oracle SQL template to create re-usable DDL/DML scripts for Oracle databases. Only the Oracle DBA will be running the scripts, so permissions are not an issue.
The workflow for any DDL is as follows (see the sketch after this list):-
1) New Table
a. Check if the table exists from the system/admin views.
b. If table exists then give message "Table Exists"
c. If table does not exist then execute DDL code
2) Add Column
a. Check if Column exists for a given table from system/admin views
b. If column exists in the specified table,
b1. backup table.
b2. alter table to make changes to the column
b3. verify data or execute dml script convert from backup to the new change.
c. If Column does not exist
c1. backup table
c2. alter table to add column
c3. execute dml to populate column with default value.
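A minimal sketch of steps 1 and 2 against the data dictionary (the table, column, and default value are hypothetical, and the backup step is reduced to a simple CTAS):

[code]
SET SERVEROUTPUT ON
DECLARE
  v_cnt PLS_INTEGER;
BEGIN
  -- 1) New table: create only if it does not already exist
  SELECT COUNT(*) INTO v_cnt
    FROM user_tables WHERE table_name = 'EMP_AUDIT';
  IF v_cnt > 0 THEN
    DBMS_OUTPUT.PUT_LINE('Table Exists');
  ELSE
    EXECUTE IMMEDIATE 'CREATE TABLE emp_audit (id NUMBER PRIMARY KEY)';
  END IF;

  -- 2) Add column: back up, alter, populate the default value
  SELECT COUNT(*) INTO v_cnt
    FROM user_tab_columns
   WHERE table_name = 'EMP_AUDIT' AND column_name = 'STATUS';
  IF v_cnt = 0 THEN
    EXECUTE IMMEDIATE 'CREATE TABLE emp_audit_bak AS SELECT * FROM emp_audit';
    EXECUTE IMMEDIATE 'ALTER TABLE emp_audit ADD (status VARCHAR2(10))';
    EXECUTE IMMEDIATE 'UPDATE emp_audit SET status = ''NEW''';
    COMMIT;
  END IF;
END;
/
[/code]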
The DML scripts are for populating base tables with data required for business operations (see the sketch after this list).
3) Add new row
a. check if row exists by comparing old values of each column with new values to be added for the new record.
b. If exists, give message row exists
c. If not exists, add new record.
4) Update existing record (We have createtime columns in these tables so changes can be tracked)
a. check if row exists using primary key.
b. If exists,
b1. deactivate the record using the "active" column of the table
b2. Add new record with the changes required.
c. If does not exist, add new record with the changes required.
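A sketch of steps 3 and 4 (the table and column names are hypothetical; "active" and "createtime" follow the conventions described above):

[code]
-- 3) Add the row only if an identical record does not already exist
MERGE INTO ref_codes t
USING (SELECT 'ACTIVE' AS code, 'Active record' AS descr FROM dual) s
   ON (t.code = s.code AND t.descr = s.descr)
 WHEN NOT MATCHED THEN
   INSERT (code, descr, active, createtime)
   VALUES (s.code, s.descr, 'Y', SYSTIMESTAMP);

-- 4) Update = deactivate the current row, then insert the changed version
UPDATE ref_codes SET active = 'N'
 WHERE code = 'ACTIVE' AND active = 'Y';

INSERT INTO ref_codes (code, descr, active, createtime)
VALUES ('ACTIVE', 'Active record v2', 'Y', SYSTIMESTAMP);

COMMIT;
[/code]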
We would be moving Oracle 11g on Unix (Sun Solaris) to Oracle 11g on Red Hat Linux. What would be the disadvantages, and what items need to be verified? Basically, what are the advantages of Oracle 11g on Red Hat Linux?
I would like to update an XML element without using the functions APPENDCHILDXML or INSERTCHILDXML, because they are not available in Oracle 10gR1. In my database, Oracle XDB is not installed.
The following query fails with this error: ORA-00904: "INSERTCHILDXML": invalid identifier
update scl_profile
   set profile_data = insertChildXML(profile_data,
         '/exportImportMarcheCriteria', 'colonnesExport',
         XMLType('<colonnesExport>ENTETE_GESTIONNAIRES_AUTORISES</colonnesExport>'))
 where profile_xmltype = 'fr.mipih.marches.marche.criteres.ExportImportMarcheCriteria'
   and profile_type = 'eMagh2.MRGS.AccesMarche.ListeMarche.Export.OptionsExportImport';
[code]........
If I try to use the package DBMS_XMLDOM, I get the following error:
ORA-06550: line 3, column 11:
PLS-00201: identifier 'DBMS_XMLDOM.DOMDOCUMENT' must be declared
ORA-06550: line 3, column 11:
PL/SQL: Item ignored
I think it's because the Oracle XDB component is not installed in my database.
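A possible workaround under those constraints (heavily hedged: it assumes profile_data is an XMLType whose serialized form ends with the literal closing tag, and that getClobVal() works even without the full XDB installation): treat the document as text and splice the new element in just before the closing tag:

[code]
UPDATE scl_profile
   SET profile_data = XMLType(REPLACE(
         profile_data.getClobVal(),
         '</exportImportMarcheCriteria>',
         '<colonnesExport>ENTETE_GESTIONNAIRES_AUTORISES</colonnesExport>'
         || '</exportImportMarcheCriteria>'))
 WHERE profile_xmltype = 'fr.mipih.marches.marche.criteres.ExportImportMarcheCriteria'
   AND profile_type = 'eMagh2.MRGS.AccesMarche.ListeMarche.Export.OptionsExportImport';
[/code]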
This time I am downloading Oracle 11g and then Oracle APEX... (they are not in one package, right? I mean they both have to be downloaded separately). I am really new to Oracle and PL/SQL. I'm not a programmer, but I am interested in database programming. For now I still use MS Access to develop applications, and I was really surprised when I found Oracle APEX, because it is a RAD tool too, same as Access, but more powerful I think.