Server Administration :: Archive Log File Sequence Number Reduced
Mar 28, 2012
We are facing a different issue in our database. Since last night, the archive logs have been generated with a 5-digit sequence number, but it is supposed to be 6 digits. Hence we are not able to apply the logs at the DR location.
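The width of the sequence number in the archive file name is controlled by LOG_ARCHIVE_FORMAT: an upper-case %S pads the sequence with zeros to a fixed width, while lower-case %s does not. A minimal sketch, assuming the padded form is what the DR apply process expects (the format string itself is an assumption; the change takes effect after a restart):

show parameter log_archive_format
alter system set log_archive_format = '%t_%S_%r.arc' scope=spfile;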
What's the difference between checkpoint_change# and controlfile_change#? What is checkpoint_change# used for; is it used for recovery? What is controlfile_change# used for, and when does it increase?
SQL> select controlfile_sequence# from v$database;
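Both columns sit in v$database and can be read side by side; a minimal diagnostic sketch:

select checkpoint_change#, controlfile_change#, controlfile_sequence#
from   v$database;

Broadly, checkpoint_change# records the SCN of the last full checkpoint (the point media recovery starts from), while controlfile_change# advances whenever the control file contents change.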
I'm facing a problem with archive log file size. Archive logs are generated at only 90 MB, 92 MB, or 94 MB (variable sizes of less than 100 MB), although I set 100 MB for each of my redo log files. I'm providing my create-database script here for your reference. I want to know why the log switches before it reaches 100 MB. Is there any connection to the initial 10 MB for my .dbf files?
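An archived copy is normally a little smaller than the online log, because the switch happens before the log is completely full and only used blocks are archived. A hedged sketch for comparing the configured size with what actually gets archived:

select group#, bytes/1024/1024 as redo_mb from v$log;

select sequence#, round(blocks * block_size / 1024 / 1024) as arch_mb
from   v$archived_log
where  first_time > sysdate - 1
order by sequence#;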
On normal days the size of archives generated in a day is 14-15 GB. But since yesterday morning almost 150 GB of archives have been generated, and they are still being generated (200 MB every 1-2 minutes).

There was a sudden reboot of the server yesterday morning. At that time there was a heavy load of transactions on the database. Could it be that SMON is still doing recovery? (I am not sure about this.) Also, the undo tablespace has grown from 18 GB to 50 GB since yesterday (autoextend on).

Now we are running out of space on the archive file system (we can't delete the archives until they are transferred to DR). The redo log size is 200 MB. This database supports around 2,500 users.

Performance-wise I don't see any hit, and wait events are normal (only a few db file sequential read). How can I find the query or session that is causing this huge amount of archives?
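A hedged sketch for spotting the sessions generating the most redo; v$sess_io.block_changes is a reasonable proxy for redo generation:

select s.sid, s.serial#, s.username, s.program, i.block_changes
from   v$session s, v$sess_io i
where  s.sid = i.sid
order by i.block_changes desc;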
We are facing an issue on one of our databases. The database has been generating a large number of trace files (about 14,000) over the last two days, consuming around 15 GB of disk space. The content of the trace files has no meaningful message to debug:
*** TRACE DUMP CONTINUED FROM FILE /apps/oracle/admin/fs90uat/bdump/fs90uat_p050_23966.trc ***
... (many lines repeating the above message)
The alert log shows one error repeated since yesterday:
Thu May 6 22:00:03 2010
Errors in file /apps/oracle/admin/fs90uat/bdump/fs90uat_j000_11811.trc:
ORA-12012: error on auto execute of job 2647927
ORA-04063: ORA-04063: package body "ORACLE_OCM.MGMT_DB_LL_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "ORACLE_OCM.MGMT_DB_LL_METRICS"
ORA-06512: at line 1
The corresponding trace file shows:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /apps/oracle/product/10.2.0/db_1
System name: SunOS
Node name: corpqadb30
...
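The ORA-04063 points at an invalid ORACLE_OCM (Oracle Configuration Manager) package rather than at user data. A hedged diagnostic sketch: list the invalid objects in that schema and try recompiling the package body named in the error:

select object_name, object_type, status
from   dba_objects
where  owner = 'ORACLE_OCM' and status = 'INVALID';

alter package ORACLE_OCM.MGMT_DB_LL_METRICS compile body;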
I have a CSV file with 100 records, and one of the columns is FILE_ID. I want to load one unique number for all 100 records, not a different number for every record. Suppose my sequence returns 3 as its next value; I want to load 3 for all 100 records. How can I implement this in the control file or the shell script? I am using a shell script to call sqlldr.
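One way is to fetch the sequence value once in the shell and pass it into the control file as a CONSTANT. A minimal sketch, assuming a sequence called file_seq; the credentials, table, and file names are placeholders:

# fetch the next sequence value once
FILE_ID=`sqlplus -s scott/tiger <<EOF
set heading off feedback off pagesize 0
select file_seq.nextval from dual;
EOF`

# generate a control file that loads the same value for every row
cat > load.ctl <<EOF
load data
infile 'data.csv'
append into table my_table
fields terminated by ','
(col1, col2, file_id constant "$FILE_ID")
EOF

sqlldr userid=scott/tiger control=load.ctl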
On weekends too many archive logs are generated. I took a week's worth of data and found that the average number of archive logs generated from Monday to Friday is 7 files per day, but on Saturday and Sunday the average is 60 files, and FG1 gets full. On weekends we have all types of backups running (incremental, archival, and logical), and on Sunday we have a full physical backup. What is the reason for so many archive log files being generated at weekends? Is it due to the hot and logical backups, and if yes, how?
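Hot-backup mode does log extra redo (whole block images on the first change to each block while a tablespace is in backup mode), so a weekend spike during backups is plausible. A hedged sketch to quantify the pattern directly from v$archived_log:

select trunc(completion_time) as day,
       count(*) as logs,
       round(sum(blocks * block_size) / 1024 / 1024) as mb
from   v$archived_log
group by trunc(completion_time)
order by 1;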
Our database was generating a 50 MB archive log every 30 seconds! I think this is not normal: I opened our database, I was the only one connected, and I was not running anything, but the database was still generating archive logs!
Our redo logs: 6 groups, 3 members each.
These are the messages I saw in our alert log:
- advanced to log sequence
- cannot allocate new log, sequence
- checkpoint not complete
- private strand flush not complete
What I did was change the log mode of our database to noarchivelog, open the database, and then return it to archivelog mode, and that fixed the problem. But after 6 hours the abnormal behavior came back.
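"checkpoint not complete" and "cannot allocate new log" usually mean the redo logs wrap around faster than DBWR can checkpoint, i.e. the groups are too small or too few. A hedged sketch for adding a larger group; the group number, paths, and size are assumptions:

alter database add logfile group 7
  ('/u01/oradata/db/redo07a.log', '/u02/oradata/db/redo07b.log') size 512m;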
I am not able to manually delete a one-month-old archive log file on Windows. The primary database's v$archived_log view has no information about the standby, but the sequence has already been applied to the standby database, and the file shows a status of DELETED in v$archived_log. When I try to delete the file manually, I get an error saying that another program or person is using it.
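Deleting through RMAN instead of the OS avoids the Windows file-lock problem and keeps the repository consistent; a hedged sketch (the 30-day retention window is an assumption):

RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all completed before 'sysdate-30';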
I need to store history for two tables in my system. I thought Flashback Data Archive would be the best option. There are other ways to do this, but don't focus on those; I need to do it with FDA (Flashback Data Archive). So my prerequisites were to create a tablespace and a flashback archive, and to alter the tables to be archived:
alter table teta_admin.t_prac flashback archive audit_flash_archive;
Everything works fine, but only as the SYS user; I can query this table using the "as of timestamp" clause:
select prac_id, imie, imie_2, nazwisko, nr_ew from teta_admin.t_prac as of timestamp to_timestamp('2011-08-23 08:20:00','yyyy-mm-dd hh24:mi:ss')
But the final part of the idea was to create an additional user (INTERFACE), grant SELECT on the teta_admin.t_prac object, and query the archived data as that interface user. This is the point of my failure: it does not work for the new user. The interface user has these system privileges:
SQL> SELECT * FROM dba_sys_privs
  2  WHERE grantee = 'INTERFACE';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
INTERFACE                      CREATE SESSION                           NO
and these table privileges:
SQL> SELECT * FROM dba_tab_privs
  2  WHERE grantee = 'INTERFACE';
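A flashback query needs the FLASHBACK object privilege (or the FLASHBACK ANY TABLE system privilege) in addition to SELECT, which would explain the failure for the new user; a minimal sketch:

grant flashback on teta_admin.t_prac to interface;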
We had a database (DB A) on version 9.2.0.6.0. This DB has multiple tables, with volumes of 6 million rows in individual tables. Another database (DB B), also 9.2.0.6.0, has mviews pointing to DB A. The mviews are refreshed every 15 minutes, with the fast refresh option in 90% of cases and complete refresh for the rest.

Last weekend we migrated DB B to version 10.2.0.4.0 (64-bit) on another server. After the version upgrade and DB migration, a complete refresh was done once for all mviews.

Now DB A is generating a huge amount of archive log, and its UNDO space is getting fully consumed, causing performance issues and DB hangs. What has gone wrong, and what can we do to improve the response of DB A and reduce the amount of archive log?
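If the fast-refresh registration was broken by the migration, the MLOG$ materialized view logs on DB A are no longer purged and grow on every change, making refreshes and redo/undo steadily more expensive. A hedged diagnostic sketch:

select segment_name, round(bytes / 1024 / 1024) as mb
from   dba_segments
where  segment_name like 'MLOG$%'
order by bytes desc;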
I have read in books that flashback uses undo data to create the flashback data or to flash the database back to a time in the past. Then what is the role of archive files in a flashback operation? Why is it mandatory to turn on archiving before turning on flashback? Also, if you remove the latest archive files, you can NOT flash the data back to a time in the past (Oracle complains of missing archive files).
ORA-00257: archiver error. Connect internal only, until freed.
When we tried to remove the unwanted arc files through ASMCMD, we got the error below:
ASMCMD> rm -ef 2011_04_05/
Unknown option: e
usage: rm [-rf] <name1 name2 . . .>
ASMCMD> rm -rf 2011_04_05/
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_2_seq_27215.1143.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15028: ASM file '+XCOM_BACKUP_DG/TXCOM/ARCHIVELOG/2011_04_05/thread_3_seq_21762.826.747641143' not dropped; currently being accessed (DBD ERROR: OCIStmtExecute)
ORA-15032: not all alterations performed
ORA-15177: cannot operate on system aliases (DBD ERROR: OCIStmtExecute)
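ORA-15177 comes from pointing rm at a system-created alias, and the "currently being accessed" errors mean another client still holds the files open. Deleting the archives through RMAN avoids both; a hedged sketch (the cutoff date is an assumption based on the directory name):

RMAN> delete force noprompt archivelog until time "to_date('2011-04-06','yyyy-mm-dd')";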
I am using Oracle 10g on Solaris 10. Currently, archived logs are generated by size, at 52 MB each. I want to know the best practice for archive log generation: should it be by time interval or by size?
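The size of each archived log simply follows the online redo log size; if a maximum time between switches is also wanted, ARCHIVE_LAG_TARGET can enforce one. A hedged sketch (the 1800-second interval is an assumption):

alter system set archive_lag_target = 1800 scope=both;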
SQL> select open_mode from v$database;

OPEN_MODE
----------
MOUNTED

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-16014: log 2 sequence# 80 not archived, no available destinations
ORA-00312: online log 2 thread 1: 'E:\ORACLE\PRODUCT\10.2.0\ORADATA\MOON\REDO02.LOG'
..
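ORA-16014 usually means the archive destination is full or not set, so the log can't be archived and the database can't open. A hedged recovery sketch; the destination path is an assumption and must have free space:

alter system set log_archive_dest_1 = 'location=E:\oracle\arch' scope=memory;
alter database open;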
I have the following select query that works perfectly fine. It returns 25 rows based on descending order of price. But I want to add one more expression to this query's list of columns (apart from customer_id). The expression should read Cust-01 for the first customer from the query below, all the way to Cust-25 for the last customer. How can I generate 01 to 25 in Oracle?
select customer_id
from (select customer_id
      from capitalPLAN
      where member_status = 'MEMBER'
        and customer_id not in ('156','201','1385','2125','3906','165')
      order by price desc)
where rownum <= 25
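ROWNUM in the outer query already numbers the 25 rows from 1 to 25, so a zero-padded label can be built with TO_CHAR; a minimal sketch (the column alias is mine):

select 'Cust-' || to_char(rownum, 'FM00') as cust_label,
       customer_id
from (select customer_id
      from capitalPLAN
      where member_status = 'MEMBER'
        and customer_id not in ('156','201','1385','2125','3906','165')
      order by price desc)
where rownum <= 25;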
I have two dictionary-managed tablespaces (SYSTEM, APPLSYSX). I tried to change them to locally managed because they will cause a problem in the future when we try to run the OATM migration. I did it successfully on APPLSYSX. When I did it on SYSTEM, following the Oracle procedure, I had to change all tablespaces to read only; when I did that with tablespace APPLSYSD (alter tablespace APPLSYSD read only), I received errors:
SQL> alter tablespace APPLSYSD READ ONLY;
alter tablespace APPLSYSD READ ONLY
*
ERROR at line 1:
ORA-01230: cannot make read only - file 636 is offline
ORA-01111: name for data file 636 is unknown - rename to correct file
ORA-01110: data file 636: '/vol5u/oracle/prddb/9.2.0/dbs/MISSING00636'

I do not have this file on the OS.
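A MISSING00636 entry means the control file knows of the datafile but not where it is. A hedged diagnostic sketch: confirm what the control file believes about file 636 before renaming or dropping anything:

select file#, name, status
from   v$datafile
where  file# = 636;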
I have a CSV file, extracted from a mainframe, which has to be loaded into Oracle using the sqlldr utility. The numbers are in the format +0000003333, -0000003232.44, etc. I have to convert them to 3333 and -3232.44 and insert them into the table. I have used syntax like:
load data
infile ...
append into table ...
(t_num "to_number(:t_num)")
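Note that plain TO_NUMBER already accepts a leading sign and leading zeros with the default number conversion; a quick sanity check of the conversion:

select to_number('+0000003333') a, to_number('-0000003232.44') b from dual;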
In our production system we have two nodes in the cluster. We use a sequence for the primary key of one of the main tables. Our application expects sequence numbers to increment along with the created-date timestamp. Right now sequences are cached on each node, and that creates a problem for the application. We would not like to use the NOCACHE option because it causes a performance issue.
This is the current scenario -
Transaction #1 on Node 1 - Seq ID 1  - Time Stamp 12:01
Transaction #2 on Node 2 - Seq ID 51 - Time Stamp 12:02
Transaction #3 on Node 1 - Seq ID 2  - Time Stamp 12:03
When I query based on the time stamp, the primary key should also go up. To be very clear about what I would like to have, consider the following example. Without using the NOCACHE option, I need to have the data in the following order:
Transaction #1 on Node 1 - Seq ID 1 - Time Stamp 12:01
Transaction #2 on Node 2 - Seq ID 2 - Time Stamp 12:02
Transaction #3 on Node 1 - Seq ID 3 - Time Stamp 12:03
In other words, the sequence number should always increment along with the time.
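A cached sequence with the ORDER option hands out values in request order across RAC nodes while keeping most of the caching benefit (at the cost of some inter-node messaging). A hedged sketch; the sequence name and cache size are assumptions:

alter sequence my_pk_seq cache 50 order;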
I reduced the clustering factor of a table having 50 rows from 38 to 6, and some time later 1,000 rows were inserted into this table. Does this affect retrieval speed? Also, will the data still be stored in the above compact format, or are rows stored in data blocks based on the index/constraints?
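A hedged diagnostic sketch, with names as placeholders: after the inserts (and fresh statistics), compare the clustering factor with the table's block count; a value near the block count is good, while one near the row count means rows are scattered relative to the index order:

select i.index_name, i.clustering_factor, t.blocks
from   user_indexes i, user_tables t
where  i.table_name = t.table_name
  and  t.table_name = 'MY_TABLE';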
How can I determine the number of connections established from the application server to the database server for a particular user, and also the query that user is running in the database? (The user is an application user created in the database; the same user exists in the application.)
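A hedged sketch (the user name APPUSER is a placeholder): count the sessions per client machine for the user, then show the SQL each of those sessions is currently executing:

select username, machine, count(*) as sessions
from   v$session
where  username = 'APPUSER'
group by username, machine;

select s.sid, s.serial#, q.sql_text
from   v$session s, v$sql q
where  s.sql_id = q.sql_id
  and  s.sql_child_number = q.child_number
  and  s.username = 'APPUSER';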