Using Archive Compression For Standby?
Jul 13, 2013
Database version: 11.2.0.3 (Exadata)
We want to use archive compression for our standby (a standalone, non-Exadata machine). We don't have a license for Advanced Compression. Is archive compression possible without Advanced Compression?
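For context, the setting in question would be the COMPRESSION attribute of the remote archive destination; a minimal sketch, assuming a hypothetical TNS alias 'stby' for the standby (and note that, as far as I know, redo transport compression is licensed as part of the Advanced Compression option):
SQL> alter system set log_archive_dest_2 =
       'SERVICE=stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
        DB_UNIQUE_NAME=stby COMPRESSION=ENABLE' scope=both;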
View 0 Replies
Mar 14, 2013
My DB is in Maximum Protection mode. If the standby archive destination gets full and Oracle cannot archive the standby redo log to the standby archive destination, will the primary database shut down?
View 1 Replies
Mar 29, 2013
If the standby archive destination is full, what will happen? Will the standby DB freeze, or will Oracle stop redo transport from the primary to the standby?
DB Version: 11.2.0.2 running on RHEL5
View 1 Replies
Sep 3, 2010
ORACLE VERSION: 11.2
ENVIRONMENT: physical standby database
MODE: maximum performance mode.
SQL> select * from v$archive_gap;
no rows selected
SQL> select sequence#, applied from v$archived_log where applied='NO';
SEQUENCE# APPLIED
---------- ---------
10929 NO
10930 NO
10931 NO
10932 NO
10933 NO
10934 NO
10935 NO
10936 NO
10937 NO
11073 NO
11074 NO
11075 NO
11076 NO
11077 NO
11078 NO
11079 NO
11080 NO
11081 NO
11082 NO
11083 NO
11084 NO
11085 NO
11086 NO
11087 NO
11088 NO
11089 NO
11091 NO
11092 NO
What would be the cause?
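A couple of diagnostic queries that might narrow this down (just a sketch, assuming managed recovery should be running on the physical standby): the first, run on the standby, shows whether the MRP process is up and which sequence it is working on; the second, run on the primary, shows any transport error for the standby destination (dest_id 2 is an assumption):
SQL> select process, status, thread#, sequence# from v$managed_standby;
SQL> select dest_id, status, error from v$archive_dest_status where dest_id = 2;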
View 1 Replies
Jun 18, 2011
We are planning to set up Data Guard (maximum performance configuration) between two Oracle 9i databases on two different servers.
The archive logs on the primary server are deleted via an RMAN job based on a policy; I am just wondering how I should delete the archive logs that are shipped to the standby.
Is putting a cron job on the standby to delete archive logs that are, say, two days old the proper approach, or is there a built-in Data Guard option that would automatically delete archive logs that are no longer needed or are two days old?
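For what it's worth, a cron-driven RMAN script on the standby is a common approach on 9i; a minimal sketch of the delete it might run (the two-day window is just the example from the question). Later releases (10g/11g) also offer an RMAN archivelog deletion policy such as APPLIED ON STANDBY, which makes this safer.
RMAN> delete noprompt archivelog all completed before 'SYSDATE-2';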
View 1 Replies
Mar 25, 2013
We recently configured Data Guard on a test machine. Archives are not being applied on the physical standby. Where do I need to start the investigation?
Primary
SQL> select THREAD#,max(sequence#) from v$archived_log where applied='YES' group by thread#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 301
[code]...
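One place to start on the standby might be the Data Guard status messages; a sketch (assuming the view is queried on the physical standby):
SQL> select message from v$dataguard_status where severity in ('Error','Fatal') order by timestamp;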
View 8 Replies
Aug 30, 2012
My Oracle database is an 11.2.0.2 RAC RDBMS on RHEL 5.6.
We recently created a physical standby database. How do we delete standby archive logs from the physical standby?
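A sketch of one common 11.2 approach, run through RMAN on the standby itself (not a definitive answer): configure a deletion policy so that only applied logs are eligible, then delete them (or let the fast recovery area age them out automatically).
RMAN> configure archivelog deletion policy to applied on standby;
RMAN> delete noprompt archivelog all;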
View 2 Replies
Feb 15, 2012
The archive logs cannot be sent to the standby database. What should I do?
primary database spfile:
*.db_name=oracl
*.db_unique_name=oracl
[Code]....
Error 12170 received logging on to the standby
Error 12170 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'oraclbak'
Error 12170 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'oraclbak'
ORA-12170: TNS:Connect timeout occurred
*** 2012-02-15 08:31:50.678 60679 kcrr.c
PING[ARCq]: Heartbeat failed to connect to standby 'oraclbak'. Error is 12170.
*** 2012-02-15 08:31:50.680 58941 kcrr.c
kcrrfail: dest:2 err:12170 force:0 blast:1
kcrrwkx: nothing to do (end)
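Since the error is ORA-12170 (TNS connect timeout), the network path to 'oraclbak' is the first thing to verify; once that is fixed, a sketch of checking and re-enabling the destination (dest 2, as in the trace) might look like this:
SQL> select dest_id, status, error from v$archive_dest where dest_id = 2;
SQL> alter system set log_archive_dest_state_2 = 'ENABLE' scope = both;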
View 6 Replies
Nov 23, 2010
1) The primary database is UP. The physical standby database is DOWN. The current archive log sequence on the primary is 99.
We have to apply archive logs 51 to 99 to the standby database. Unfortunately, there is no backup of those archive logs, and archive logs 51 to 98 have been deleted on the primary.
Now how will you apply these archive logs from the primary database to the physical standby database?
Note: the physical standby database is DOWN.
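Since the needed archive logs no longer exist, one recovery path (a sketch only) is an SCN-based incremental backup: the first command runs on the primary; after copying the pieces across, the last two run on the standby once it is mounted. <SCN> and '/tmp/stby_incr' are placeholders for the standby's current SCN and the backup location.
RMAN> backup incremental from scn <SCN> database format '/tmp/stby_incr_%U';
RMAN> catalog start with '/tmp/stby_incr';
RMAN> recover database noredo;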
View 2 Replies
Nov 15, 2013
I am trying to back up archive logs using RMAN on a standby database. I am able to back up archive logs with the simple command, and it completes successfully:
RMAN> BACKUP ARCHIVELOG ALL;
When I try it with the KEEP clause on the physical standby, it fails:
RMAN> BACKUP ARCHIVELOG ALL KEEP UNTIL TIME 'SYSDATE+100' TAG = 'TEST';
View 1 Replies
Dec 9, 2010
I successfully created the standby database, and the archive logs were moving properly on both the primary and the standby databases. For the transfer of the archive logs to the STANDBY database I set FAL_CLIENT and FAL_SERVER in the pfile of the primary database, specifying the location of the primary and the standby respectively.
When I removed both parameters from the pfile of the primary database, the archive logs were still being transferred; if I am not wrong, they should not have been, since I removed both parameters.
Why is there still transfer of the archive logs to the standby database?
View 3 Replies
Jun 30, 2010
I have configured Data Guard on the same Windows XP server. It is not able to apply logs to the standby database. When I queried the following, I got these errors:
sql>select message from v$dataguard_status where dest_id=2;
FAL[server, ARC0]: Error 12514 creating remote archivelog file 'STNDBY'
PING[ARCk]: Heartbeat failed to connect to standby 'STNDBY'. Error is 12528.
LGWR: I/O error 1089 archiving log 1 to 'STNDBY'
ARCk: Attempting destination LOG_ARCHIVE_DEST_2 network reconnect (3113)
ARCk: Destination LOG_ARCHIVE_DEST_2 network reconnect abandoned
PING[ARCk]: Error 3113 when pinging standby STNDBY.
LNS: Closing remote archive destination LOG_ARCHIVE_DEST_2: 'STNDBY' (error 1089)
LGWR: Error 1041 closing archivelog file 'STNDBY'
************************************************
Also, the query
SQL> select sequence#,applied from v$archived_log;
gives the following output:
SEQUENCE# APP
---------- ---
7 YES
5 NO
8 NO
9 NO
6 NO
10 NO
10 NO
11 NO
11 NO
12 NO
12 NO
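The duplicate sequence numbers above are normally one row per archive destination; including DEST_ID makes it easier to see which copies (local vs. the failing remote destination) are marked as not applied. A sketch:
SQL> select dest_id, sequence#, applied from v$archived_log order by sequence#, dest_id;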
View 4 Replies
Dec 29, 2012
Which of the following views on the physical standby will give us correct information on synchronization with the primary database?
For example, when I checked v$archive_gap it did not return any rows, but max(applied_seq#) from v$archive_dest_status was lagging far behind max(sequence#) on the primary database.
select max(applied_seq#) from v$archive_dest_status where dest_id=2;
select max(sequence#) from v$archived_log where applied='YES';
select * from v$archive_gap;
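Another view worth comparing (a sketch; where it is populated depends on release and configuration) is v$dataguard_stats, which reports transport and apply lag directly:
SQL> select name, value from v$dataguard_stats where name in ('transport lag', 'apply lag');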
View 1 Replies
Oct 22, 2013
I have found an issue with log archiving to dest1. Yesterday, sequence number 76871 was not archived to dest1. The alert log content is below. I configure the standby and ship archives manually with the Windows copy command. I need this archive to complete recovery on the standby database.
Mon Oct 21 09:29:28 2013
ARC2: Completed archiving log# 3 seq# 76869
Mon Oct 21 09:39:28 2013
Thread 1 advanced to log sequence 76871
Current log# 2 seq# 76871 mem# 0: D:\ORACLE\ORADATA\ORC1\REDO02.LOG
[code]....
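If sequence 76871 is still sitting in an online redo log group (the alert log suggests it was the current log at the time), one option might be to force it to be archived again; a sketch only, to be used with care and only after confirming the group has not been reused:
SQL> alter system archive log sequence 76871;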
View 7 Replies
Nov 6, 2013
I have a primary database and a standby database, both in ASM. Recently my archive logs got deleted, and I am trying to recover my standby database with an incremental backup based on SCN from the primary database. But I get the error below when I recover the standby database with the incremental backup taken on the primary.
RMAN> recover database noredo;
Starting recover at 06-NOV-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +STDBY/11gdb/datafile/system.258.805921881
destination for restore of datafile 00002: +STDBY/11gdb/datafile/sysaux.259.805921967
destination for restore of datafile 00003: +STDBY/11gdb/datafile/undotbs1.260.805922023
destination for restore of datafile 00004: +STDBY/11gdb/datafile
[code]....
View 4 Replies
Mar 25, 2013
We have started developing a new application to compress tablespaces based on the business specification.
This is a Data Warehouse.
Version: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
And there is a requirement to find the uncompressed segments, i.e. to find whether a tablespace is already compressed or not.
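A starting point might be the dictionary views that expose the compression flags; a sketch for tables (dba_tab_partitions and dba_indexes have similar columns), with 'MY_TBS' as a placeholder tablespace name:
SQL> select owner, table_name, compression, compress_for
     from dba_tables
     where tablespace_name = 'MY_TBS'
     and compression = 'DISABLED';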
View 1 Replies
Aug 22, 2011
We are seeing a volume issue when taking an RMAN level 0 backup of a database; the database version is 11.2.0.2 and it is on RHEL 2.1. As 11g supports compression for RMAN, we have implemented it to reduce the backup space used.
" CONFIGURE COMPRESSION ALGORITHM 'LOW' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE "
However, during a full backup the volume size increases, meaning we have to grow the /data volume (currently 500G) to more than 1T just for RMAN to get through, else the backup hangs. Once the backup is done we bring the volume size back down below 1T. The other compression parameters are HIGH and MEDIUM; however, I am not very sure whether changing to high or low will work, as I couldn't find the right doc on My Oracle Support, or maybe I didn't search correctly; I will continue to look for that.
View 2 Replies
Aug 8, 2012
Hybrid Columnar Compression is dependent on the underlying storage system. See Oracle Database Licensing Information for more information.
The below is from the Oracle® Database PL/SQL Packages and Types Reference
Compression Constant       Compression Level   Description
COMP_FOR_OLTP              2                   OLTP compression
COMP_FOR_QUERY_HIGH        4                   High compression level for query operations
COMP_FOR_QUERY_LOW         8                   Low compression level for query operations
COMP_FOR_ARCHIVE_HIGH      16                  High compression level for archive operations
COMP_FOR_ARCHIVE_LOW       32                  Low compression level for archive operations
To use compression level 4 or higher, do we have to have ZFS or Pillar storage?
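For illustration only: levels 4 and above correspond to Hybrid Columnar Compression, and my understanding is that DDL like the sketch below is only usable when the segment lives on supported storage (Exadata, ZFS Storage Appliance or Pillar Axiom); elsewhere it is rejected. The table and source names are hypothetical.
SQL> create table sales_hcc compress for query high as select * from sales;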
View 4 Replies
Mar 22, 2011
We are trying to add two columns to a partitioned table, but we are getting the error below:
SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
But, it is not a compressed table.
select table_name, compression from user_tables
where table_name ='SVC_ORDER_CODE_FACT'
TABLE_NAME COMPRESSION
------------------------------ -----------
SVC_ORDER_CODE_FACT
actually we are trying to add 2 columns as below:
ALTER TABLE SVC_ORDER_CODE_FACT
ADD (MKT_FEATURE_KEY NUMBER default '-2', PREV_MKT_FEATURE_KEY NUMBER default '-2');
But, if we add column without default value,
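One thing that might be worth checking: for a partitioned table, the COMPRESSION column of user_tables can be blank even when individual partitions are compressed, so a partition-level query may explain the ORA-39726. A sketch:
SQL> select partition_name, compression, compress_for
     from user_tab_partitions
     where table_name = 'SVC_ORDER_CODE_FACT';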
View 2 Replies
Jun 21, 2012
I had created a primary key and wanted to compress its index, as per my senior's instructions. Below are my results; the size increased after compression.
select compression from dba_indexes where index_name = 'TEST_IDX';
Compression
----------
DISABLED
select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024)size_MB
[code]....
We ran a compression on the primary key index TEST_IDX
ALTER INDEX SCOTT.TEST_IDX REBUILD INITRANS 15 TABLESPACE DATA_01 COMPRESS;
ANALYZE INDEX SCOTT.TEST_IDX VALIDATE STRUCTURE;
Now when i ran the below select statement:
select compression from dba_indexes where index_name = 'TEST_IDX';
Compression
----------
ENABLED
select sum(blocks) no_of_blocks, (sum(blocks)*8192)/(1024*1024)size_MB
[code]....
As you can see, after compression the number of blocks and the size increased. I ran this for many tables and other indexes and observed that the blocks and size were reduced by 50-70%, so I am not sure why this happened with this index compression.
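One way to sanity-check whether prefix compression is expected to help a given index at all (a sketch; note that VALIDATE STRUCTURE locks the index while it runs) is to look at the optimal-compression estimate that ANALYZE populates in index_stats:
SQL> analyze index scott.test_idx validate structure;
SQL> select opt_cmpr_count, opt_cmpr_pctsave from index_stats;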
View 3 Replies
Oct 18, 2013
What are the default features available with 10g R2 Enterprise Edition, especially for RMAN backup? Which features require additional licensing in the 10g R2 EE version?
View 2 Replies
Mar 6, 2013
If I have a partitioned table set as COMPRESS FOR QUERY HIGH or COMPRESS FOR ARCHIVE HIGH, what would be the impact when a row in partition X has to move to partition Y?
View 3 Replies
Nov 16, 2012
Is anyone using Oracle 11g's compression feature in production? I haven't read anything negative yet, but that doesn't mean there isn't anything that could have an adverse effect. I wanted to check whether there are any effects on performance or any disadvantages of using this compression feature. I have tested it on one of my major tablespaces and did see a big reduction in the tablespace size, but I am still hesitant to put this into production.
View 1 Replies
May 26, 2013
I have a Data Guard configuration operating in maximum availability mode with a local standby db (A, LGWR SYNC, not using real-time apply) and a remote standby db (B, LGWR ASYNC). I then simulated a crash of my primary database with batch jobs running. Since standby A uses the LGWR SYNC option, all the committed data in the current online redo log has been transmitted to standby A and is present in its standby redo log (group 2). How do I apply this standby redo log to the remote standby db?
Tried the following methods.
1. FTPed the standby redo log to the remote db and tried to register it; got an error that it is not completely archived.
2. Issued the recover standby database command and supplied the standby redo log when it asked for the sequence; got an error saying there is corruption in a block (tried this option multiple times and ended up with the same result).
View 5 Replies
Aug 30, 2013
My steps for testing are as below:
1. Create a primary database.
2. Duplicate a physical standby database.
3. Turn on flashback on both databases.
4. Record SCN xxx on the physical standby database.
5. Convert the physical standby to a logical standby (using the KEEP IDENTITY clause).
6. Flash back the logical standby to SCN xxx.
7. Convert the logical standby back to a physical standby.
8. Using real-time apply, I got these errors:
Fast Parallel Media Recovery enabled
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery waiting for new incarnation during transient logical upgrade procedure
Errors in file /home/ora/app/oracle/diag/rdbms/ora11gr1dg/ora11gr1dg/trace/ora11gr1dg_mrp0_10120.trc:
ORA-19906: recovery target incarnation changed during recovery
Managed Standby Recovery not using Real Time Apply
Errors in file /home/ora/app/oracle/diag/rdbms/ora11gr1dg/ora11gr1dg/trace/ora11gr1dg_mrp0_10120.trc:
ORA-19906: recovery target incarnation changed during recovery
The errors appear every 10 seconds. It seems MRP0 has been waiting for a new incarnation for a long time, and so am I. Standby database incarnation:
List of Database Incarnations
DB Key  Inc Key  DB Name   DB ID       STATUS   Reset SCN  Reset Time
------- -------- --------- ----------- -------- ---------- -------------------
1       1        ORA11GR1  3853851354  CURRENT  1          08/09/2013 01:02:18
2       2        ORA11GR1  3853851354  ORPHAN   2127877    08/28/2013 19:22:01
BGV
View 2 Replies
Oct 18, 2010
Is there any way to compress data while exporting in Oracle 10g, or is there any other way I can reduce the space consumed by the datafiles?
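If Data Pump export is an option, 10g's expdp has a COMPRESSION parameter, although to my recollection in 10g it only compresses metadata; a sketch, where the directory and file names are placeholders:
expdp system directory=DPUMP_DIR dumpfile=full.dmp full=y compression=METADATA_ONLY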
View 10 Replies
Sep 2, 2012
I am trying to enable OLTP compression on some tables, and at the tablespace level for those tables.
Steps I am following are:
1. Move indexes to its own tablespace
2. enable OLTP compression at table level:
alter table table_name move compress for OLTP
3. Rebuild indexes
4. The issue I have is what to do with tables that have LOB columns:
ALTER TABLE lob_table MOVE LOB (LOB_COL) STORE AS (TABLESPACE index_tbsp); -- Is this correct?
5. alter tablespace data_tablespace default compress for OLTP;
Is the sequence of steps correct? For tables with LOB columns, do we need to move the lobindex to the index tablespace, given that the lobsegment and lobindex are created in the data tablespace?
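For step 4, a hedged sketch of what a combined move might look like (table, LOB segment, then a rebuild of a dependent index; all object and tablespace names here are hypothetical). As far as I know, the LOB index is always co-located with its LOB segment, so it is really only the lobsegment placement that you control:
SQL> alter table app_owner.lob_table move compress for oltp
     lob (lob_col) store as (tablespace data_tbsp);
SQL> alter index app_owner.lob_table_pk rebuild tablespace index_tbsp;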
View 2 Replies
Mar 13, 2013
1) I have two SWP tables. While dropping a column, I am getting this error:
ORA-39726: unsupported add/drop column operation on compressed tables.
2) When I checked their compression status, those tables were not compressed. But as per our code standard, SWP tables should not be in compressed mode.
OWNER TABLE_NAME COMPRESS COMPRESS_FOR
------------------------------ ------------------------------ -------- ------------
NOVAR PAYMENT_SWP DISABLED
OWNER TABLE_NAME COMPRESS COMPRESS_FOR
------------------------------ ------------------------------ -------- ------------
NOVAR PREPAYMENT_SWP DISABLED
3) As a workaround, I compressed these two SWP tables with the OLTP option, and then I was able to drop the column from them.
4) Is the statement below correct or not?
If a table is using block-level compression, then this error will come: ORA-39726: unsupported add/drop column operation on compressed tables.
If the above statement is correct, then how do we find out whether the table data is using block-level compression?
5) We have DBMS_COMPRESSION.GET_COMPRESSION_TYPE. Using this I tried to find out, but I am getting "1" as output and I do not know its exact meaning.
What is the conclusion on this?
SQL> declare
rid rowid;
n number;
begin
select max(rowid) into rid from NOVAR.PAYMENT_SWP;
n := dbms_compression.get_compression_type('NOVAR','PAYMENT_SWP',rid);
dbms_output.put_line(n);
end;
/
2 3 4 5 6 7 8 9 1
PL/SQL procedure successfully completed.
SQL>
SQL> SET SERVEROUTPUT ON
SQL> /
1
PL/SQL procedure successfully completed.
SQL> SELECT max(rowid) from NOVAR.PAYMENT_SWP;
MAX(ROWID)
------------------
AAsz4fAHSAAAD3IABs
(ii) 2nd table
SQL> set serveroutput on
SQL> declare
rid rowid;
n number;
begin
select max(rowid) into rid from NOVAR.PREPAYMENT_SWP;
n := dbms_compression.get_compression_type('NOVAR','PREPAYMENT_SWP',rid);
dbms_output.put_line(n);
end;
2 3 4 5 6 7 8 9
10 /
1
PL/SQL procedure successfully completed.
SQL> SELECT max(rowid) from NOVAR.INVOICELINE_SWP;
MAX(ROWID)
------------------
AAsz4ZAEkAAAp8XAAA
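Regarding the value returned by GET_COMPRESSION_TYPE: going by the constants listed earlier in this thread (2, 4, 8, 16, 32) plus COMP_NOCOMPRESS = 1, a return of 1 would mean the sampled row is not compressed at all. A sketch that prints a label instead of the raw number (owner and table name taken from the question):
SQL> set serveroutput on
SQL> declare
       rid rowid;
       n   number;
     begin
       select max(rowid) into rid from novar.payment_swp;
       n := dbms_compression.get_compression_type('NOVAR', 'PAYMENT_SWP', rid);
       -- map the return code onto the DBMS_COMPRESSION constants
       dbms_output.put_line(case n
         when 1  then 'COMP_NOCOMPRESS'
         when 2  then 'COMP_FOR_OLTP'
         when 4  then 'COMP_FOR_QUERY_HIGH'
         when 8  then 'COMP_FOR_QUERY_LOW'
         when 16 then 'COMP_FOR_ARCHIVE_HIGH'
         when 32 then 'COMP_FOR_ARCHIVE_LOW'
         else 'UNKNOWN (' || n || ')' end);
     end;
     /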
View 3 Replies
Sep 11, 2012
I have noticed that Oracle Text related objects, particularly the $I tables, are some of the largest objects in our database. I have been actively pursuing Oracle Advanced Compression in our databases for OLTP table compression and LOB compression. I have been unable to find any documentation or notes on whether it is advisable to implement either OLTP table compression or LOB compression for Oracle Text objects.
View 1 Replies
Feb 5, 2013
I am using Oracle Forms 10g. We have a system that takes over 300 photos on a daily basis, and this all works fine with no issues except for maybe 2-3 photos a month. Occasionally we get a 'corrupt' photo (it is not actually corrupt; it displays correctly in everything except Forms). When we encounter these photos, Forms just crashes and the user is unable to query the record with the associated photo until it is deleted and a new one is taken (alternatively, if we take the photo from the database, open it in paint.net and just hit save, it will then work). There is no difference that we can see between the photos that don't work and those that do. I have tried using WRITE_IMAGE_FILE to save the photo to disk and READ_IMAGE_FILE to read it back, to see if that makes a difference. If I save the file as JPEG with no compression it still crashes; if I save it with low compression it works fine, but we lose quality, which we don't want. Bitmap won't work at all. Saving as JFIF or GIF works fine without any compression, but we still lose quality.
The photo will display fine if we use a javabean to display it but in this instance a javabean is not an option.
One weird thing we noticed: when we are on the form that crashes with these photos and query a working photo first, and THEN query the 'corrupt' photo, the corrupt photo displays fine; but if we go into the form and query a corrupt photo first, Forms crashes as explained.
View 1 Replies