Server Administration :: Archive Log File Sequence Number Reduced
Mar 28, 2012
We are facing an unusual issue in our database. Since last night, the archive logs have been generated with a 5-digit sequence number in their file names, but they are supposed to have 6 digits. As a result, we are unable to apply the logs at the DR location.
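A hedged starting point for narrowing this down: the number of digits shown in the file name usually comes from the LOG_ARCHIVE_FORMAT parameter (an uppercase %S pads the sequence with zeros), while the real sequence numbers are recorded in V$ARCHIVED_LOG. A minimal check, assuming access to the primary:
show parameter log_archive_format
-- Compare the on-disk file names with the sequence numbers Oracle actually recorded
select sequence#, name, completion_time
from   v$archived_log
where  completion_time > sysdate - 1
order  by sequence#;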
View 4 Replies
Jul 17, 2012
SQL> select checkpoint_change#,controlfile_change# from v$database;
CHECKPOINT_CHANGE# CONTROLFILE_CHANGE#
------------------ -------------------
203454 204955
What is the difference between CHECKPOINT_CHANGE# and CONTROLFILE_CHANGE#?
What is CHECKPOINT_CHANGE# used for? Is it used for recovery?
What is CONTROLFILE_CHANGE# used for?
When does CONTROLFILE_CHANGE# increase?
SQL> select controlfile_sequence# from v$database;
CONTROLFILE_SEQUENCE#
---------------------
293
Q: What is CONTROLFILE_SEQUENCE#?
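For readers comparing these SCN columns, a small sketch that puts them next to the current SCN and the per-datafile checkpoints (CURRENT_SCN is available in 10g and later):
select checkpoint_change#,   -- SCN of the last completed full database checkpoint
       controlfile_change#,  -- SCN recorded when the controlfile was last changed
       current_scn           -- the SCN right now
from   v$database;
-- Per-datafile checkpoint SCNs, which media recovery compares against the redo stream
select file#, checkpoint_change#, checkpoint_time
from   v$datafile_header;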
View 6 Replies
Jan 10, 2011
I want to know how many archive logs are generated in one hour at peak time. We have a 6-node RAC with multiplexing of 2.
Is there any query through which I can achieve this?
Note: as this is a production instance, the client is not willing to implement the LogMiner utility.
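One way to get this without LogMiner is to read V$ARCHIVED_LOG, which already carries the thread number for every RAC instance. A sketch, assuming the local destination is DEST_ID 1 (adjust if yours differs) and that the controlfile still holds records for the period of interest:
-- Archive log count and size (MB) per hour and per thread
select trunc(completion_time, 'HH24')                as hour,
       thread#,
       count(*)                                      as logs,
       round(sum(blocks * block_size) / 1024 / 1024) as mb
from   v$archived_log
where  dest_id = 1
and    completion_time > sysdate - 7
group  by trunc(completion_time, 'HH24'), thread#
order  by 1, 2;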
View 9 Replies
Dec 16, 2010
I'm facing a problem with archive log file size. Archive logs are generated at only 90 MB, 92 MB or 94 MB (variable sizes under 100 MB), although I set 100 MB for each of my redo log files. I'm providing my create database script below for reference. I want to know why the log switches before it reaches 100 MB. Does this have any connection to the initial 10 MB size of my .dbf files?
create database mydev
maxlogmembers 3
maxloghistory 100
maxdatafiles 50
maxinstances 1
logfile
[Code]....
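A hedged way to check why the logs switch early: a switch can be forced by ARCHIVE_LAG_TARGET before the file fills, and the archived copy only contains the redo blocks actually written, so comparing the configured size with the redo written per archived log is a reasonable first step:
show parameter archive_lag_target
-- Configured online redo log size
select group#, bytes / 1024 / 1024 as log_mb from v$log;
-- Redo actually written into each recent archived log
select sequence#, round(blocks * block_size / 1024 / 1024) as written_mb
from   v$archived_log
where  completion_time > sysdate - 1
order  by sequence#;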
View 14 Replies
Apr 2, 2012
On normal days the volume of archives generated per day is 14-15 GB. But since yesterday morning, almost 150 GB of archives have been generated and they are still being generated (200 MB every 1-2 minutes).
There was a sudden reboot of the server yesterday morning. At that time there was a heavy transaction load on the database. Could it be that SMON is still doing recovery? (I am not sure about this.) Also, the undo tablespace has grown from 18 GB to 50 GB since yesterday (autoextend on).
Now we are running out of space on the archive file system (we can't delete the archives either, until they are shipped to DR). The redo log size is 200 MB. This database supports around 2,500 users.
Performance-wise I don't see any hit, and the wait events are normal (only a few db file sequential read waits). How can I find the query/session that is causing this huge volume of archives?
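To see which sessions are producing the redo, one common sketch (assuming DBA access) joins V$SESSTAT to the 'redo size' statistic; the top rows are the usual suspects:
-- Sessions ordered by redo generated since logon
select * from (
  select s.sid, s.serial#, s.username, s.program, st.value as redo_bytes
  from   v$sesstat st
         join v$statname n on n.statistic# = st.statistic#
         join v$session  s on s.sid = st.sid
  where  n.name = 'redo size'
  order  by st.value desc
) where rownum <= 20;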
View 7 Replies
May 7, 2010
We are facing an issue on one of our databases. The database has been generating a large number of trace files (around 14,000) over the last two days, consuming around 15 GB of disk space, and the content of the trace files does not contain any meaningful message to debug:
cat /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc
*** TRACE DUMP CONTINUES IN FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
Dump file /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (Many lines with above message)
The alert log shows one error repeated since yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file contains:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
[Code] .......
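For the ORA-04063 part, a hedged diagnostic is to look for invalid ORACLE_OCM objects and attempt a recompile; the OCM collector is optional, so recompiling (or, failing that, removing its jobs per Oracle Support guidance) is the usual direction:
-- Which ORACLE_OCM objects are invalid?
select object_name, object_type, status
from   dba_objects
where  owner = 'ORACLE_OCM'
and    status <> 'VALID';
-- Try recompiling the failing package body
alter package ORACLE_OCM.MGMT_DB_LL_METRICS compile body;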
View 2 Replies
Jun 13, 2013
I have a CSV file with 100 records, and one of the columns is FILE_ID. I want to load one unique number for all 100 records, not a different number for each record.
Suppose my sequence returns 3 as its next value; I want to load 3 for all 100 records. How can I implement this in the control file or the shell script? I am using a shell script to call sqlldr.
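Since SQL*Loader control files have no bind variables, one sketch (the sequence FILE_ID_SEQ, table MY_TABLE and column names here are hypothetical) is to fetch the next value once in the shell wrapper and write it into the control file as a CONSTANT column:
#!/bin/sh
# Get one sequence value and reuse it for the entire load
FILE_ID=`sqlplus -s user/pass <<EOF
set heading off feedback off pagesize 0
select file_id_seq.nextval from dual;
EOF`
# Generate the control file with that single value as a CONSTANT
cat > load_file.ctl <<EOF
LOAD DATA
INFILE 'data.csv'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ','
( col1, col2,
  file_id CONSTANT '$FILE_ID'
)
EOF
sqlldr user/pass control=load_file.ctl log=load_file.log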
View 2 Replies
Mar 16, 2012
On weekends we have too many archive logs generated. I have taken a week's worth of data and found that the average number of archive logs generated from Monday to Friday is 7 files per day, but on Saturday and Sunday the average is 60 files and FG1 gets full. On weekends we have all types of backups running (incremental, archive log and logical backups), and on Sunday we have a full physical backup.
What is the reason for so many archive log files being generated on weekends? Is it due to the hot and logical backups, and if so, how?
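A hedged way to line the spike up with the backup windows is to break the archive volume down by day from V$ARCHIVED_LOG and compare it against the backup schedule:
-- Archive log count and size per day over the last two weeks
select trunc(completion_time)                        as day,
       to_char(completion_time, 'DY')                as dow,
       count(*)                                      as logs,
       round(sum(blocks * block_size) / 1024 / 1024) as mb
from   v$archived_log
where  completion_time > sysdate - 14
group  by trunc(completion_time), to_char(completion_time, 'DY')
order  by 1;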
View 9 Replies
Mar 6, 2012
I have a small question to be clarified. Is there any way to find out which archive log files have already been applied at the DR site, and then delete them?
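A sketch of one approach, assuming a physical standby managed with RMAN: V$ARCHIVED_LOG on the standby shows which sequences have been applied, and an RMAN deletion policy then keeps DELETE from touching anything the standby still needs.
-- On the standby: which archived logs have been applied?
select thread#, sequence#, applied, completion_time
from   v$archived_log
order  by thread#, sequence#;
RMAN> configure archivelog deletion policy to applied on standby;
RMAN> delete archivelog all completed before 'sysdate-1';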
View 1 Replies
Mar 1, 2012
Our database was generating a 50 MB archive log every 30 seconds! I think this is not normal because, after I opened the database, I was the only one connected and I was not running anything, yet the database was still generating archive logs.
Our redo logs: 6 groups, 3 members each.
These are the messages I saw in our alert log:
- advanced to log sequence
- cannot allocate new log, sequence
- checkpoint not complete
- private strand flush not complete
What I did was switch the database to NOARCHIVELOG mode, open it, and then return it to ARCHIVELOG mode, and that fixed the problem. But after about 6 hours the abnormal behavior came back.
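'Checkpoint not complete' and 'cannot allocate new log' usually mean the redo logs are too small or too few for the redo rate, so a hedged sketch is to measure the switch rate first and then add larger groups (paths and sizes below are illustrative only):
-- Log switches per hour over the last day
select trunc(first_time, 'HH24') as hour, count(*) as switches
from   v$log_history
where  first_time > sysdate - 1
group  by trunc(first_time, 'HH24')
order  by 1;
SQL> alter database add logfile group 7 ('/u01/oradata/db/redo07.log') size 1g;
-- drop the small groups one by one once they become INACTIVE
SQL> alter database drop logfile group 1;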
View 3 Replies
Sep 20, 2010
version: 10.2.0.4
OS: windows server 2003
I am not able to manually delete a one-month-old archive log file on Windows. The file has no standby-related information in the V$ARCHIVED_LOG view of the primary database, the sequence has already been applied to the standby database, and the file shows status DELETED in V$ARCHIVED_LOG. When I try to delete the file manually, I get an error saying that another program or person is using it.
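On Windows the ARCn (or RMAN) processes often keep a handle on the file, so a hedged alternative to deleting it by hand is to let RMAN remove it and tidy up the controlfile records (the 30-day cut-off is only an example):
RMAN> crosscheck archivelog all;
RMAN> delete archivelog until time 'sysdate-30';
RMAN> delete expired archivelog all;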
View 10 Replies
Dec 28, 2011
What is the procedure to enable archive log mode in an Oracle database?
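A minimal sketch of the usual procedure for a single instance (the database must be mounted, not open, when the mode is changed):
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;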
View 4 Replies
Jul 6, 2011
Just to validate with you experts: if I change the destination of my archive logs, does it require a restart of the database?
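For what it's worth, the LOG_ARCHIVE_DEST_n parameters are dynamic, so a change along the lines of the sketch below (the path is illustrative only) should not need a restart; the archiver starts using the new location from the next log switch:
SQL> alter system set log_archive_dest_1='LOCATION=/u02/archivelogs' scope=both;
SQL> alter system switch logfile;
SQL> archive log list;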
View 10 Replies
Aug 23, 2011
I need to store history for two tables in my system. I thought that Flashback Data Archive would be the best option. There are other ways to do this, but please don't focus on those; I need to do it with FDA (Flashback Data Archive).
So my prerequisite was to create a tablespace and a flashback archive, and to alter the table to be archived.
create tablespace audit_archive datafile 'd:\oradata\teta\audit_archive.ora' size 100M;
create flashback archive audit_flash_archive
tablespace audit_archive quota 10G retention 10 year;
alter table teta_admin.t_prac flashback archive audit_flash_archive;
and everything works fine, but only as the SYS user.
I can query this table using the AS OF TIMESTAMP clause:
select prac_id, imie, imie_2, nazwisko, nr_ew from teta_admin.t_prac as of timestamp to_timestamp('2011-08-23 08:20:00','yyyy-mm-dd hh24:mi:ss')
But the final part of the plan was to create an additional user (INTERFACE), grant SELECT on the TETA_ADMIN.T_PRAC object, and query the archived data as the INTERFACE user. This is where I failed: it does not work for the new user.
The INTERFACE user has these system privileges:
SQL> SELECT * FROM dba_sys_privs
2 WHERE grantee = 'INTERFACE';
GRANTEE PRIVILEGE ADM
------------------------------ ---------------------------------------- ---
INTERFACE CREATE SESSION NO
and these table privileges:
SQL> SELECT * FROM dba_tab_privs
2 WHERE grantee = 'INTERFACE';
GRANTEE OWNER TABLE_NAME GRANTOR PRIVILEGE
------------------------------ ------------------------------ ------------------------------ ------------------------------ --------------------
INTERFACE TETA_ADMIN T_PRAC TETA_ADMIN INSERT
INTERFACE TETA_ADMIN T_PRAC TETA_ADMIN DELETE
INTERFACE TETA_ADMIN T_PRAC TETA_ADMIN ALTER
INTERFACE TETA_ADMIN T_PRAC TETA_ADMIN FLASHBACK
INTERFACE TETA_ADMIN T_PRAC TETA_ADMIN SELECT
What do I need to do to be able to query this flashback-archived table as the INTERFACE user? When I try to do it as this user, Oracle raises ORA-00942.
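A hedged way to narrow down the ORA-00942: first confirm that a plain SELECT (without AS OF) works as INTERFACE, then check the flashback archive mapping as a DBA. If the plain query already fails, the problem is ordinary name resolution or privileges rather than Flashback Data Archive itself.
-- As INTERFACE: does the base table resolve at all? (the schema prefix is required unless a synonym exists)
select count(*) from teta_admin.t_prac;
-- As a DBA: confirm the table is tracked by the flashback archive
select owner_name, table_name, flashback_archive_name, archive_table_name
from   dba_flashback_archive_tables
where  table_name = 'T_PRAC';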
View 9 Replies
Mar 26, 2010
We had a database (DB A) on version 9.2.0.6.0. This DB has multiple tables, with volumes of around 6 million rows in individual tables. Another database (DB B), also 9.2.0.6.0, has materialized views pointing to DB A. The mviews are refreshed every 15 minutes, with the fast refresh option in 90% of cases and complete refresh for the rest.
Last weekend we migrated DB B to version 10.2.0.4.0 (64-bit) on another server. After the version upgrade and migration, a complete refresh was done once for all mviews.
Now DB A is generating a huge amount of archive logs and its UNDO space is getting fully consumed, causing performance problems and database hangs. What has gone wrong, and what can we do to improve the response of DB A and reduce the volume of archive logs?
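A hedged first check on DB A is the state of its materialized view logs: if the migrated DB B is no longer purging them (for example because the old site registration is still lingering), the MLOG$ segments keep growing and every fast refresh reads and generates far more redo and undo. A sketch:
-- Materialized view logs defined on DB A
select log_owner, master, log_table
from   dba_mview_logs;
-- How large have the MLOG$ segments grown?
select owner, segment_name, round(bytes / 1024 / 1024) as mb
from   dba_segments
where  segment_name like 'MLOG$%'
order  by bytes desc;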
View 3 Replies
Oct 31, 2011
I want to drop a datafile in my test database, which is in NOARCHIVELOG mode. First I tried to take the datafile offline, but that failed. Is there any other way to do it?
SQL> archive log list;
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 3237
[code].......
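For reference, in NOARCHIVELOG mode a datafile cannot be taken offline normally (there would be no redo to recover it later); the usual choices are OFFLINE FOR DROP, or, from 10gR2 onwards, dropping an empty datafile directly. A sketch with illustrative file and tablespace names:
SQL> alter database datafile '/u01/oradata/test/users02.dbf' offline for drop;
-- 10gR2+: drop an empty datafile from its tablespace in one step
SQL> alter tablespace users drop datafile '/u01/oradata/test/users02.dbf';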
View 8 Replies
Aug 5, 2010
I have read in books that flashback uses undo data to create the flashback data, or to flash the database back to a point in the past. So what is the role of archive files in a flashback operation? Why is it mandatory to turn on archiving before turning on flashback? Also, if you remove the latest archive files, you can NOT flash the data back to a time in the past (Oracle complains about missing archive files).
View 8 Replies
Apr 13, 2011
We are getting the error below:
ORA-00257: archiver error. Connect internal only, until freed.
When we tried to remove the unwanted archive files through ASMCMD, we got the errors below:
ASMCMD> rm -ef 2011_04_05/
Unknown option: e
usage: rm [-rf] <name1 name2 . . .>
ASMCMD> rm -rf 2011_04_05/
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_2_seq_27215.1143.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_3_seq_21762.826.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15177: cannot operate on system aliases (DBD ERROR: OCIStmtExecute)
We then checked the FRA usage:
SQL> select * from v$flash_recovery_area_usage;
FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE 0 0 0
ONLINELOG 0 0 0
ARCHIVELOG 3.19 0 38
BACKUPPIECE 0 0 0
IMAGECOPY 0 0 0
FLASHBACKLOG 0 0 0
and checked whether any ARC process was holding a lock on the archive files:
> ps -ef | grep -i ora_arc*
oracle 6989 1 0 15:07 ? 00:00:00 ora_arc0_TXCOM1
oracle 6991 1 0 15:07 ? 00:00:00 ora_arc1_TXCOM1
oracle 12246 12164 0 15:17 pts/4 00:00:00 grep -i ora_arc*
oracle 13452 1 0 Mar23 ? 00:01:07 ora_arc0_TWEBAPPS1
oracle 13454 1 0 Mar23 ? 00:00:30 ora_arc1_TWEBAPPS1
oracle 15402 1 0 Mar23 ? 00:00:50 ora_arc0_SXCOM1
[Code] ........
But we were not able to remove those archive files from that folder. Finally we brought down all the instances and deleted the files manually.
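For the record, the usual way out of ORA-00257 without touching the ASM files by hand is to delete the archives through RMAN (so the controlfile and FRA accounting stay consistent) or to enlarge the FRA; a sketch, with the sizes and ages as examples only:
RMAN> crosscheck archivelog all;
RMAN> delete archivelog all completed before 'sysdate-2';
SQL> alter system set db_recovery_file_dest_size=200g scope=both;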
View 1 Replies
Oct 21, 2011
I am using Oracle 10g on Solaris 10. Currently the archived logs are generated by size, at 52 MB each. I want to know what the best practice is for archive log generation: should it be driven by a time interval or by size?
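If a time-based upper bound is wanted on top of the size-based switches, the ARCHIVE_LAG_TARGET parameter forces a switch after at most the given number of seconds; a sketch:
-- Force a log switch at least every 30 minutes (0 disables the behaviour)
SQL> alter system set archive_lag_target=1800 scope=both;
SQL> show parameter archive_lag_target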
View 1 Replies
Sep 17, 2011
When I connected to my database, I got this error:
SQL> select open_mode from v$database;
OPEN_MODE
----------
MOUNTED
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-16014: log 2 sequence# 80 not archived, no available destinations
ORA-00312: online log 2 thread 1:
'E:\ORACLE\PRODUCT\10.2.0\ORADATA\MOON\REDO02.LOG'
..
So what can I do?
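A hedged path out of ORA-16014 is to check first whether the archive destination is merely full or unset, and only as a last resort clear the log (which loses that redo and breaks any standby still waiting for that sequence):
SQL> archive log list;
SQL> show parameter log_archive_dest
-- Last resort, only if the destination cannot be fixed and the redo is expendable:
SQL> alter database clear unarchived logfile group 2;
SQL> alter database open;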
View 4 Replies
Dec 22, 2003
I have a control file like the following:
LOAD DATA
INFILE *
INTO TABLE member
REPLACE
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
(
control_id,
name,
address
)
The problem is that the CONTROL_ID value is not in the data file, and I have to assign the same value to each row, generated from a sequence or from a Unix variable.
For example, after I run sqlldr, the table should have records like the following:
control_id name address
---------- ---- -------
1847 Charlie 250 yonge st
1847 Peter 5 Brookbanks dr
1847 Ben 123 King st
.
.
.
How do I do that?
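One sketch that avoids touching the control file at all (the sequence name MEMBER_LOAD_SEQ is hypothetical): leave CONTROL_ID out of the load so it stays NULL, then stamp the whole batch with a single sequence value afterwards.
declare
  v_id number;
begin
  select member_load_seq.nextval into v_id from dual;
  update member
  set    control_id = v_id
  where  control_id is null;   -- only the freshly loaded rows
  commit;
end;
/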
View 29 Replies
Mar 9, 2011
I have the following select query, which works perfectly fine and returns 25 rows based on the descending order of price. But I want to add one more expression to the column list of this query (apart from customer_id).
The expression should look like Cust-01 for the first customer returned by the query below, all the way to Cust-25 for the last customer. How can I generate 01 to 25 in Oracle?
select customer_id from
(select customer_id from capitalPLAN
where member_status = 'MEMBER' AND customer_id NOT in ('156','201','1385','2125','3906','165')
order by price desc
)
where rownum <= 25
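One way to sketch this: keep the existing inline view (which fixes the ordering and the 25-row cut-off) and build the label from ROWNUM in the outer query with a zero-padded format mask.
select 'Cust-' || to_char(rownum, 'FM00') as cust_label,
       customer_id
from  (select customer_id
       from   capitalPLAN
       where  member_status = 'MEMBER'
       and    customer_id not in ('156','201','1385','2125','3906','165')
       order  by price desc)
where  rownum <= 25;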
View 4 Replies
Nov 3, 2012
I have two dictionary-managed tablespaces (SYSTEM and APPLSYSX). I tried to change them to locally managed, because they will cause problems later when we run the OATM migration. I did it successfully on APPLSYSX. To do it on SYSTEM, per the Oracle procedure, I have to make all other tablespaces read only; when I did that with tablespace APPLSYSD (alter tablespace APPLSYSD read only), I received errors:
SQL> alter tablespace APPLSYSD READ ONLY;
alter tablespace APPLSYSD READ ONLY
*
ERROR at line 1:
ORA-01230: cannot make read only - file 636 is offline
ORA-01111: name for data file 636 is unknown - rename to correct file
ORA-01110: data file 636: '/vol5u/oracle/prddb/9.2.0/dbs/MISSING00636'
I do not have this file on the OS.
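A hedged first step is to see what the controlfile currently records for file 636 and which tablespace it belongs to; a MISSINGnnnnn name usually means the controlfile was recreated without knowing that file's real path, so the decision to rename or drop it has to start from these views:
select file#, name, status, enabled
from   v$datafile
where  file# = 636;
select file_id, file_name, tablespace_name, status
from   dba_data_files
where  file_id = 636;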
View 1 Replies
Apr 24, 2012
I have a CSV file extracted from a mainframe that has to be loaded into Oracle using the sqlldr utility. The numbers are in the format +0000003333, -0000003232.44, etc.
I have to convert them to 3333 and -3232.44 and insert them into the table.
I have used syntax like:
Load file....append into table (t_num expression "to_number(':tnum,'99999.999')")
This gives me an invalid number error.
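The quoting in that expression is part of the problem (the colon must stay outside the literal quotes and the bind name must match the column), and the explicit '+' sign is what upsets the conversion. A sketch that strips the sign explicitly, assuming the table column is called T_NUM, the other columns are omitted, and '.' is the session decimal separator:
LOAD DATA
INFILE 'mainframe.csv'
APPEND INTO TABLE target_table
FIELDS TERMINATED BY ','
(
  -- the double-quoted SQL expression is evaluated for every row
  t_num "to_number(replace(:t_num, '+', ''))"
)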
View 3 Replies
Dec 5, 2012
Are there restrictions on the number of roles that can be defined in Oracle?
View 2 Replies
Sep 17, 2012
In production we have two nodes in the cluster. We use a sequence for the primary key of one of the main tables. Our application expects the sequence numbers to increase along with the created-date timestamp. Right now the sequence is cached on each node, and that creates a problem for the application. We would not like to use the NOCACHE option because it causes performance issues.
This is the current scenario -
Transaction #1 on Node 1 - Seq ID 1 - Time Stamp 12:01
Transaction #2 on Node 2 - Seq ID 51 - Time Stamp 12:02
Transaction #3 on Node 1 - Seq ID 2 - Time Stamp 12:03
When I try to query based on the time stamp, primary should also go up. To be very clear on what I would like to have, please consider the following example.Without using NOCACHE option, I need to have the data in the following order.
Transaction #1 on Node 1 - Seq ID 1 - Time Stamp 12:01
Transaction #2 on Node 2 - Seq ID 2 - Time Stamp 12:02
Transaction #3 on Node 1 - Seq ID 3 - Time Stamp 12:03
In other words, the sequence number should always increase along with the time.
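If strict time-ordering across the nodes is the hard requirement, the usual compromise is to keep a cache but add the ORDER clause, which makes the RAC instances coordinate so values are handed out in request order, at some interconnect cost but far less than NOCACHE. A sketch with a hypothetical sequence name:
SQL> alter sequence main_table_seq cache 100 order;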
View 2 Replies
May 8, 2013
I reduced the clustering factor of an index on a table having 50 rows from 38 to 6, and after some time 1,000 rows were inserted into this table. Does this affect retrieval speed? Also, will the data continue to be stored in the same compact way, or are the rows placed in data blocks based on the index/constraints?
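After the 1,000-row insert, one hedged way to judge the effect is to regather statistics and compare the clustering factor with the table's block and row counts (a value near the block count is good for index range scans, a value near the row count is poor). The table name MY_TABLE below is hypothetical:
exec dbms_stats.gather_table_stats(user, 'MY_TABLE', cascade => true);
select blocks, num_rows from user_tables where table_name = 'MY_TABLE';
select index_name, clustering_factor, leaf_blocks
from   user_indexes
where  table_name = 'MY_TABLE';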
View 1 Replies
Aug 12, 2013
How do I find the number of users logged in at the database level (Oracle)? At the OS level (Linux), the command is simply '$ users'.
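A minimal sketch at the database level:
-- Distinct database users currently connected, with their session counts
select username, count(*) as sessions
from   v$session
where  username is not null
group  by username
order  by sessions desc;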
View 4 Replies
Mar 31, 2012
①SQL> SELECT OBJECT_ID FROM DBA_OBJECTS WHERE OBJECT_NAME='T2012';
OBJECT_ID
---------
57082
②SQL> SELECT HEADER_BLOCK,BLOCKS FROM DBA_SEGMENTS WHERE SEGMENT_NAME = 'T2012';
HEADER_BLOCK BLOCKS
------------- --------
683 8
③SQL> SELECT DBMS_ROWID.rowid_block_number(ROWID)USED_BLOCK_NUMBER FROM SCOTT.T2012;
USED_BLOCK_NUMBER
----------------
684
④SQL> SHUTDOWN IMMEDIATE;
⑤SQL> STARTUP;
⑥SQL> SELECT BLOCK#,CLASS# FROM V$BH WHERE OBJD = '57082';
no data found
⑦SQL> SELECT * FROM SCOTT.T2012;
ID
-----
1
⑧SQL> SELECT BLOCK#,CLASS# FROM V$BH WHERE OBJD='57082';
BLOCK# CLASS#
------- ----------
686 1
684 1
687 1
685 1
688 1
683 4
⑨SQL> SELECT EMPTY_BLOCKS FROM DBA_TABLES WHERE TABLE_NAME='T2012';
EMPTY_BLOCKS
------------
3
QUESTION ONE:
In step ⑧, why are blocks 685, 686, 687 and 688 in the buffer cache after I query data from SCOTT.T2012?
QUESTION TWO:
In step ⑨, what are the block numbers of the empty blocks, in the same way that DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) identifies the used block?
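For question one, a hedged way to see why those block numbers appear is to map them back to the segment's extents: a full scan after startup reads every formatted block below the high water mark (here 684-688), and block 683 with CLASS# 4 is the segment header. A sketch:
select extent_id, file_id, block_id, blocks
from   dba_extents
where  owner = 'SCOTT'
and    segment_name = 'T2012';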
View 2 Replies
Aug 27, 2013
How can I determine the number of connections established from the application server to the database server for a particular user, and also find which query that user is running in the database?
The user is an application user created in the database.
The same user exists in the application.
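A sketch that counts the connections per source machine for that user and shows what each session is currently executing (the database user name APP_USER below is a placeholder; substitute the real one):
-- How many connections does the user hold, and from which machines?
select username, machine, count(*) as sessions
from   v$session
where  username = 'APP_USER'
group  by username, machine;
-- What is each of those sessions running right now?
select s.sid, s.serial#, s.status, q.sql_text
from   v$session s
       left join v$sql q
              on q.sql_id = s.sql_id
             and q.child_number = s.sql_child_number
where  s.username = 'APP_USER';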
View 15 Replies