Server Administration :: Full Or Incremental Checkpoint
Mar 25, 2012
After I issue the following command, what checkpoint does it trigger (a full checkpoint or an incremental checkpoint)?
SQL> ALTER SYSTEM SWITCH LOGFILE;
I wanted to know the checkpoint information and the checkpoint interval for my instance. I have checked the alert.log file for this.
Is there any other way, or does any view exist, to get that information?
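For what it's worth, a few standard dictionary views expose checkpoint progress, which may be easier than reading alert.log; a minimal sketch:
SQL> SELECT checkpoint_change# FROM v$database;                                        -- SCN of the last full checkpoint recorded in the controlfile
SQL> SELECT name, checkpoint_change#, checkpoint_time FROM v$datafile_header;          -- per-datafile checkpoint SCN and time
SQL> SELECT target_mttr, estimated_mttr, ckpt_block_writes FROM v$instance_recovery;   -- incremental checkpoint activity and MTTR targets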
I found a clear-text password in an XML file generated during Oracle instance creation. The process is as follows:
1. Create the instance with the following command:
Login as user Oracle
/home/Oracle11home/bin/dbca -createDatabase -silent -templateName /home/Oracle11home/assistants/dbca/templates/Small.dbt -gdbName testDB -sid testSID -sysPassword testpwd -systemPassword testpwd
2. The file $ORACLE_HOME/checkpoints/dbca/OraDb11g_home2_oracle_creation_checkpoint.xml then lists the sys and system passwords (testpwd in this case) directly:
<CHECKPOINT LEVEL="MAJOR" NAME="db_oracle" DESC="db creation checkpoint" STATE="FAIL">
<PROPERTY_LIST>
<PROPERTY NAME="command" TYPE="STRING" VAL=" -silent -createDatabase -templateName Small.dbt -gdbName testDB -sid testpwd -sysPassword testpwd -systemPassword testpwd"/>
</PROPERTY_LIST>
</CHECKPOINT>
I would like to confirm:
1. What is the purpose of Oracle generating this checkpoint file?
2. Can we remove the file safely?
3. Is there a configuration option to avoid writing the clear-text password to the checkpoint file?
I have RMAN configured with RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS. I do a level 0 backup on Saturday; Sunday through Friday I do a level 1 (all to disk).
After I do a successful level 0 backup, why would I need to keep level 1 backups around on disk? I know the recovery window is 7 days, but is there any way to remove the other 6 days of level 1 backups after a successful level 0? To me this seems like a waste of space since I have a successful level 0, or am I missing something?
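For what it's worth, under a recovery window policy RMAN itself decides what is obsolete, and the level 1s inside the 7-day window are still needed to support point-in-time recovery to any moment in that window, so they will not be reported obsolete just because a newer level 0 exists. The standard commands to see and purge what RMAN does consider removable:
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;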
Seven days ago a full backup was taken to disk due to issues with the tape devices. Three days ago the tape devices were fixed and we switched to CommVault-managed tape backups. CommVault calls RMAN with the following command:
run {
allocate channel ch1 type 'sbt_tape'
PARMS="SBT_LIBRARY=/usr/local/bin/simpana/Base/libobk.so,BLKSIZE=1048576,ENV=(CV_mmsApiVsn=2,CV_channelPar=ch1,ThreadCommandLine=BACKUP -jm 45 -a 2:71 -cl 9 -ins 9 -at 22 -j 294321 -jt 294321:4:1 -bal 1 -rcp 0 -ms 1 -data -ma 89 -cn oraclehost -vm Instance001)"
TRACE 0;
setlimit channel ch1 maxopenfiles 8;
backup
incremental level = 0
filesperset = 32
database
include current controlfile spfile ;
}
exit;
These backups completed successfully. An archivelog backup was then taken the same way. But when I issue RESTORE DATABASE PREVIEW SUMMARY; RMAN starts with the full backup set, even though newer incremental level 0 ones are available. Why does it not use the newer ones?
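If the goal is just to confirm that RMAN is able to use the newer level 0 sets, one thing to try (CHANGE is a standard maintenance command, though treat the exact qualifier below as an assumption for your release) is to mark the old disk backups unavailable and re-run the preview:
RMAN> CHANGE BACKUP OF DATABASE DEVICE TYPE DISK UNAVAILABLE;
RMAN> RESTORE DATABASE PREVIEW SUMMARY;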
Oracle 10.2.0.5 for Linux on IBM POWER
I am using the following query to determine whether my RMAN backup succeeded or failed. I look for "COMPLETED WITH ERRORS".
col input_type format a10
col bck_hrs format 99.9 heading "Run|Time"
col status format a21
col end_dt format a20 heading "End|Time"
col mbytes_per_sec format 9,999 heading "Output|Rate|MB/sec"
[code]...
output
======
                                 End                    Run     Output   Rate
INPUT_TYPE STATUS                Time                   Time   Size GB  MB/sec
---------- --------------------- -------------------- ----- ---------- ------
ARCHIVELOG COMPLETED             2011-12-24 06:03:54     .5      189.4    106
DB INCR    COMPLETED             2011-12-24 05:33:05    9.3    3,392.6    103
ARCHIVELOG COMPLETED             2011-12-23 10:12:27     .2       73.3    105
I know that the DB INCR row is an INCR 0 backup, but is there some query I can join with my example above to tell me whether it is an INCR 0 or a full backup?
I was thinking of maybe setting COMMAND_ID to some text like INCR 0, INCR 1, or FULL BKUP. Does that sound feasible?
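One possible approach, assuming the report is built on V$RMAN_BACKUP_JOB_DETAILS: join it to V$BACKUP_SET_DETAILS, which carries an INCREMENTAL_LEVEL column (a sketch; join keys per the 10gR2 reference, so verify on your release):
SELECT j.input_type, j.status, d.incremental_level,
       TO_CHAR(j.end_time, 'YYYY-MM-DD HH24:MI:SS') end_dt
  FROM v$rman_backup_job_details j
  JOIN v$backup_set_details d
    ON d.session_key   = j.session_key
   AND d.session_recid = j.session_recid
   AND d.session_stamp = j.session_stamp;
As for tagging, RMAN does have SET COMMAND ID TO 'INCR 0'; inside the run block, though as far as I recall that string surfaces in V$SESSION.CLIENT_INFO rather than in the job views, so the join above may be the more direct route.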
I want to get clear on one thing: yesterday I installed Oracle 9i and Developer 2000 for my client.
When they run one report they get stuck with PL/SQL compilation error REP-1247.
When I checked that report in Report Builder, the query uses a table that does not belong to that schema. I qualified it as schema.tablename and compiled, but the same error appears for other reports too, so only then did I realize they are accessing other schemas as well. How can I sort this out?
Can I fix this by giving a full access privilege, or what privilege can I grant to get full access to another schema's tables?
Also, how can I check in the old database what roles and privileges were given to this user?
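To answer the last part, the usual dictionary queries (standard views; the account name here is hypothetical) are:
SQL> SELECT granted_role FROM dba_role_privs WHERE grantee = 'REPORT_USER';
SQL> SELECT privilege FROM dba_sys_privs WHERE grantee = 'REPORT_USER';
SQL> SELECT owner, table_name, privilege FROM dba_tab_privs WHERE grantee = 'REPORT_USER';
For the report problem itself, object-level grants (GRANT SELECT ON other_schema.some_table TO report_user;) plus synonyms are generally safer than a blanket SELECT ANY TABLE system privilege.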
Tablespace usage alerts are checked every ten minutes. You can request a check every minute when you set the threshold, and confirm that this has been set:
orclz> execute DBMS_SERVER_ALERT.SET_THRESHOLD(-
> metrics_id => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,-
> warning_operator => DBMS_SERVER_ALERT.OPERATOR_GT,-
> warning_value => '50',-
> critical_operator => DBMS_SERVER_ALERT.OPERATOR_GT,-
> critical_value => '75',-
[code]....
orclz>
...but you will still have to wait up to ten minutes for the alert to be raised. Does anyone know whether this frequency can be changed? And why does this particular alert behave differently from all the others? This is the behaviour in all releases since server alerts arrived.
I'm facing a problem while inserting millions of records from table to table: the undo tablespace reaches 100% full and execution is aborted. How can I free the undo tablespace? Many of the extents are offline. Will it flush automatically, or what should I do?
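Undo extents are reused automatically once they expire (after UNDO_RETENTION has passed), so there is normally nothing to flush by hand. To see how much is actually reusable, something along these lines against the standard DBA_UNDO_EXTENTS view:
SQL> SELECT status, ROUND(SUM(bytes)/1024/1024) AS mb
  2  FROM dba_undo_extents
  3  GROUP BY status;
ACTIVE extents belong to running transactions and cannot be reused; if the insert is one giant transaction, committing in batches keeps the ACTIVE footprint down.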
I am running Oracle 9.2.0.1 on Solaris 9. On just about a daily basis we perform sqlldr loads on the order of 300,000 rows. I frequently see in my alert log:
Checkpoint not complete
Current log# 5 seq# 176431
I have 5 redo logs, each 10M in size. If I check what is going on in v$log and correlate it to the alert log when it throws the checkpoint message, I always notice that I have one CURRENT log (which is good) and the rest are in a status of ACTIVE. It seems that when this happens I get the checkpoint error.
What can I do to get rid of this checkpoint error? Should I increase the size of my redo logs? Is there a good way to estimate what size redo logs I should have?
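A rough way to judge redo sizing is log switch frequency (a common rule of thumb is one switch every 15-20 minutes); a sketch, with a hypothetical path for the new group:
SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
  2  FROM v$log_history
  3  GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
  4  ORDER BY 1;
SQL> ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/orcl/redo06.log') SIZE 200M;
Adding more or larger groups gives DBWR time to complete the checkpoint before the same log is needed again.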
In my last post I asked about the issues I was facing while restoring a full RMAN backup on a new server, and I listed the steps as well. Now my question is: what if I want to restore an incremental backup on the new server?
What steps do I have to follow after restoring the level 0 backup on the new server?
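For what it's worth, RMAN applies incrementals automatically: once the level 0 is restored, a plain RECOVER DATABASE picks up the level 1 backups from the controlfile/catalog before switching to archived redo, so the outline is roughly (a sketch):
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;    # applies level 1 incrementals, then archived logs
RMAN> ALTER DATABASE OPEN RESETLOGS;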
First question:
I have my main database on server A. Now I have to take an RMAN backup of server A and then move the complete database to a new server B.
I am confused whether to use the BACKUP command with the AS COPY option or to take the backup as backup sets. Which is the best option to use for the RMAN backup?
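Both forms are standard RMAN syntax; image copies can be switched to directly (fast restore) but occupy the full datafile size, while backup sets are smaller and fit incremental strategies better. For reference (format strings hypothetical):
RMAN> BACKUP AS COPY DATABASE FORMAT '/backup/%U';
RMAN> BACKUP AS BACKUPSET DATABASE FORMAT '/backup/%U';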
Source: Oracle 11g 11.1.0.2 32-bit on Windows 2003. Destination: Oracle 11g 11.1.0.2 64-bit on Windows 2008.
In our source database, we are using the following to create a full export.
Source database_name=dbfour, sid=dbfour1.
I am creating the full export with:
expdp system/psswd full=y dumpfile=expdp_%date:~0,2%-%date:~3,2%-%date:~6,4%.dmp logfile=explog_%date:~0,2%-%date:~3,2%-%date:~6,4%.log FLASHBACK_TIME="to_timestamp(to_char(sysdate,'yyyy-mm-dd hh24:mi:ss'),'yyyy-mm-dd hh24:mi:ss')"
In my destination DB, with database_name=dbfive and sid=dbfive1, I am trying to import the whole database using the file created above, with:
impdp system/pwd@dbfive DIRECTORY=DATA_PUMP_DIR DUMPFILE=EXPDP_20-10-2011.DMP
The process started, but after some time it gave 1200 errors. Is it due to the different database name, or is it because I did not create the tablespaces in the destination database?
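A different database name by itself should not produce mass errors, but missing tablespaces will (every segment in a missing tablespace fails). One way to check before importing is SQLFILE, a standard impdp parameter that writes the DDL to a script instead of executing it; a sketch:
impdp system/pwd@dbfive DIRECTORY=DATA_PUMP_DIR DUMPFILE=EXPDP_20-10-2011.DMP FULL=Y SQLFILE=preview_ddl.sql
The CREATE TABLESPACE statements near the top of preview_ddl.sql show what needs to exist (or be precreated with corrected file paths) on the destination.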
I am trying to import a full DB export using Data Pump, and I get too many errors for objects that already exist (log file attached). The steps I did so far:
1- created the database.
2- imported the full DB export using:
impdp system/xxxxxxx full=yes directory=datapump dumpfile=palbe_full_20130322.dp log=palbe_full_22042013.log
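Since the database was created first, the built-in objects already exist, so a burst of ORA-31684 "object already exists" errors is expected on a full import and is usually harmless. Where application tables must be overwritten, the standard TABLE_EXISTS_ACTION parameter can be added; a sketch:
impdp system/xxxxxxx full=y directory=datapump dumpfile=palbe_full_20130322.dp logfile=palbe_full_22042013.log table_exists_action=replace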
We started a full DB export using exp, and we need to kill it. Is it "safe" to do so? Killing it won't affect the database in any way, right?
$ORACLE_HOME/bin/exp system/xxxxxx BUFFER=140000 FULL=Y FILE=/export/<sid>.exp VOLSIZE=2000M GRANTS=Y INDEXES=Y COMPRESS=Y
I want to do this connected on Windows 2008 R2 with Oracle 11g R2: execute an import that will do a full import from a Linux box running Oracle 10g, called "SUPORTE1".
I am trying this on the Windows 2008 machine:
impdp system/manager@w2k811r2 full=y DIRECTORY=dpump NETWORK_LINK=SUPORTE1;
and I get the following errors:
ORA-39001: invalid argument value
ORA-39200: link name "SUPORTE1;" is invalid
ORA-44004: invalid qualified SQL name
I tested the connection, db-link and created the directory.
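Note that ORA-39200 quotes the link name as "SUPORTE1;" with the semicolon included, so the trailing semicolon on the command line was taken as part of the parameter value (impdp is an OS command, not SQL, so no terminator is needed). Dropping it may be all that is required:
impdp system/manager@w2k811r2 full=y DIRECTORY=dpump NETWORK_LINK=SUPORTE1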
I started restoring and testing all the backups. So far so good, and I wanted to restore and test on a different host. I couldn't find any online documentation about restoring a full backup on a new, different server (with the same OS and the same version of Oracle) without an RMAN catalog database.
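Without a catalog the usual outline is to drive everything from the controlfile autobackup; a sketch (the DBID value is hypothetical, take the real one from the source database or from the autobackup file name):
RMAN> SET DBID 1234567890;
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;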
I have a full export dump file. From this, I need to import only one procedure that belongs to schema IC_MIGR_DATA, and it needs to be imported into schema REP_USER.
I am using this syntax:
impdp system/icg0ld@ICPRD directory=DUMPDIR dumpfile=IC_FULL_19062008.dmp logfile=imp_IC_FULL_190608.log schemas=rep_user parfile=imp_proc.par
parfile :
--------
INCLUDE=PROCEDURE:"LIKE 'IC_MIGR_DATA.JET_UPLIFT'"
While importing, I am getting the error below:
[oracle10@AIICDELL IC]$ impdp system/icg0ld@ICPRD directory=DUMPDIR dumpfile=IC_FULL_19062008.dmp logfile=imp_IC_FULL_190608.log schemas=rep_user parfile=imp_proc.par
Import: Release 10.2.0.2.0 - 64bit Production on Friday, 20 June, 2008 16:19:46
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-31694: master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" failed to load/unload
ORA-31644: unable to position to block number 30698425 in dump file "/AIIC_backup/expbkp/dumps/IC/IC_FULL_19062008.dmp"
How can I import this one procedure, JET_UPLIFT? It has to be imported into the REP_USER schema, and the owner of the procedure is IC_MIGR_DATA.
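Two things stand out: the INCLUDE name filter should contain only the object name (the owner is selected via SCHEMAS), and moving it from IC_MIGR_DATA to REP_USER needs REMAP_SCHEMA. A sketch of the parfile (though note the ORA-31644 positioning error may instead mean the dump file is truncated or corrupt, which no parameter change will fix):
SCHEMAS=IC_MIGR_DATA
REMAP_SCHEMA=IC_MIGR_DATA:REP_USER
INCLUDE=PROCEDURE:"IN ('JET_UPLIFT')"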
I have a stocking program. I need to compute opening balance + debit - credit at the beginning of my cursor, and then whatever the result is, I need to put it in a variable to be used for the next record. I have tried many times but failed.
My statement looks like this:
opening balance 1000
receive    used    balance
-----------------------------------------------
      0      50        950
      0     100        850
    100       0       1850
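Rather than carrying a variable across cursor iterations, an analytic running sum does this in one statement; a minimal sketch, assuming a hypothetical table STOCK_TXN(txn_id, receive, used) ordered by txn_id and an opening balance of 1000:
SELECT txn_id, receive, used,
       1000 + SUM(receive - used) OVER (ORDER BY txn_id) AS balance
  FROM stock_txn
 ORDER BY txn_id;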
"If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility is >=10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN. If compatibility <10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup, to be consistent with the behavior in previous releases."
So yes, that's it. Even in the Oracle 11g OCP course and exam, the information given is for Oracle 9 (pre-10g). It seems like a major functional regression.
------------------------------------------------------------------------------------------------------------------------------------------------
Original problem.
Am I not understanding something about RMAN?
Using Oracle 10g Standard Edition and RMAN.
No existing backups.
When I run repeated level 1 cumulative incrementals, they appear to back up everything (like a level 0 would).
My understanding is that if a level 1 is run without an existing level 0 backup, it will generate a level 0 backup.
All subsequent level 1 backups should then be level 1s as expected.
If I explicitly generate a level 0, followed by level 1s, it all works as expected.
I am determining what got backed up by the size of the resulting backup sets.
Do I have to do an explicit level 0 and then explicit level 1s? I thought not.
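For reference, the explicit sequence that behaves as expected (standard RMAN syntax):
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # each run copies all changes since the level 0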
I have already created a materialized view.
Now I want to set the MV to the incremental (fast) refresh option.
Is it possible to set this attribute after the MV has been created?
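Yes, the default refresh method can be changed after creation, but fast (incremental) refresh also needs a materialized view log on the master table, and the MV query itself must be fast-refresh capable (DBMS_MVIEW.EXPLAIN_MVIEW will tell you). A sketch with hypothetical names:
SQL> CREATE MATERIALIZED VIEW LOG ON base_table WITH ROWID;
SQL> ALTER MATERIALIZED VIEW my_mv REFRESH FAST;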
I am running Oracle 9i RAC on IBM AIX.
I have large partitioned tables (4 partitions are added every month). Is it possible to do incremental statistics gathering on these objects in 9i? If I collect stats with GRANULARITY => 'ALL' and ESTIMATE_PERCENT => 100 the stats are accurate, but it takes a very long time.
One way may be to collect stats with GRANULARITY => 'PARTITION' for each new partition (this is quite fast), but what about the global table stats?
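9i has no true incremental global statistics (that mechanism arrived in 11g), so per-partition gathers plus an occasional global gather is the usual compromise; a sketch with hypothetical owner/table/partition names:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP',
    tabname          => 'BIG_TABLE',
    partname         => 'P_2012_05',
    granularity      => 'PARTITION',
    estimate_percent => 100);
END;
/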
I've got a physical standby (10g) which is missing some archive logs, and thus the managed recovery is stuck. I followed the procedure for taking an incremental backup of the primary from the last SCN in the standby, and then recovering the standby from that backup using rman.
It seemed that everything went according to the instructions, except that the "recover database" of the standby from the backup went much faster than I expected. Afterwards I checked the SCN of the standby (select current_scn from v$database) and it had indeed updated to a number beyond where it was stuck originally. The standby controlfile was restored from a backup of the primary's current controlfile, and the standby restarted. The problem is that when I started up standby recovery again, the standby is still looking for the logs which were missing. I can't figure out why! I've been googling around and digging through the docs, and the only clues I can find suggest that this would happen if the standby controlfile wasn't updated. But I did that.
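One thing worth checking is whether every datafile really moved forward; the standby keeps requesting the old logs if any file header is still behind. A quick check on the standby (standard views):
SQL> SELECT current_scn FROM v$database;
SQL> SELECT file#, checkpoint_change# FROM v$datafile_header ORDER BY checkpoint_change#;
If the lowest checkpoint_change# is still below the SCN the incremental was taken from, that file did not get rolled forward.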
We are going to implement Oracle 11g. Now I want to use RMAN to take incremental backups in the following way:
Level 0 backup weekly, like this:
run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
BACKUP incremental level 0 DATABASE
TAG 'Weekly_full' FORMAT 'E:\RMAN_backup\weekly_full_%d_%Y%M%D_s%s_p%p.bak';
backup archivelog all not backed up 1 times delete input
TAG 'Weekly_full_arc' FORMAT 'E:\RMAN_backup\weekly_full_arc_%d_%Y%M%D_s%s_p%p.bak';
[code].....
and RETENTION POLICY TO RECOVERY WINDOW is set to 10 days, so that there is no chance that I will delete an expired backup before taking the next backup.
Now my questions are:
1) Is this setup correct, with no misconfiguration? My main concern is the archive log part; is it correct?
2) As I am deleting archive log files with the backup, and after that the expired archive log backups are also deleted, is there any chance of failure during recovery?
3) How will the archive log backups work with the 10-day retention policy?
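On the archivelog point, since this is 11g, an archivelog deletion policy (standard CONFIGURE syntax) can make the "delete input" step safer by refusing to remove logs that have not yet been backed up:
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;
With a 10-day recovery window, the archivelog backups needed to recover to any point in those 10 days are kept, and REPORT OBSOLETE shows the rest.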
We just purchased a NAS (RAID 5) unit to manage our data storage. I am planning to create virtual partitions on this NAS device and use one partition for Oracle data storage; another virtual partition will be used by other data (files and maybe SQL Server data files, etc.).
We will have Oracle installed on a separate Oracle server. Can we use RMAN to manage incremental backups in this environment? My main worry is that our storage device will hold many different types of data; will we be able to tell RMAN to make backups only from certain virtual drives?
what is the difference between incremental and differential backup?
I have set incremental stats for my partitioned table, as it takes more than 20 minutes to gather; though INCREMENTAL is set to 'TRUE', the table is still getting analyzed completely.
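For what it's worth, INCREMENTAL => 'TRUE' alone is not sufficient: incremental global stats only engage when PUBLISH is TRUE, ESTIMATE_PERCENT is left at AUTO_SAMPLE_SIZE, and GRANULARITY is 'AUTO' (or at least includes GLOBAL); otherwise the whole table is scanned anyway. A sketch with hypothetical names:
SQL> EXEC DBMS_STATS.SET_TABLE_PREFS('APP', 'BIG_TABLE', 'INCREMENTAL', 'TRUE')
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('APP', 'BIG_TABLE', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, granularity => 'AUTO')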
As part of an incremental backup strategy I recover copies of the tablespaces. By default these are stored in the disk backup location disk group. Is it possible to move these copies to another disk group/location while still backing up to the default disk backup location disk group? I know how to do it with the main datafiles but not with the copies.
e.g.
Current situation:
Original Datafiles in +DATA
Backups in +FRA
Recovered Datafile copies in +FRA
New setup to be:
Original Datafiles in +DATA
Backups in +FRA
Recovered Datafile copies in +BACKUP
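For the incrementally updated copy strategy, the copies live wherever they were first created, so seeding them with an explicit FORMAT should achieve this; a sketch (tag name hypothetical):
RMAN> BACKUP AS COPY DATABASE FORMAT '+BACKUP' TAG 'incr_upd';
RMAN> BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_upd' DATABASE;
RMAN> RECOVER COPY OF DATABASE WITH TAG 'incr_upd';
The level 1 backups themselves still go to the default disk backup location (+FRA), while the recovered copies stay in +BACKUP.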
In Oracle 11gR2, I created a replica of the HR.EMPLOYEES table and executed the following statement (although using the SUM() function is not logical in this case, it is just to test the result):
STEP - 1
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
-------------------------------------------------------------------------------------------------------
202 Pat Fay 6000
201 Michael Hartstein 13000
Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 130 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 2 | 130 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL | EMPLOYEES_COPY | 2 | 130 | 3 (0)| 00:00:01 |
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
690 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
STEP - 2
INSERT INTO HR.employees_copy
VALUES(200, 'Dummy', 'User','Dummy.User@email.com',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
STEP - 3
SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
FROM HR.Employees_copy
WHERE department_id = 20
GROUP BY employee_id, first_name, last_name;
EMPLOYEE_ID FIRST_NAME LAST_NAME SUM(SALARY)
--------------------------------------------------------------------------------------------------
202 Pat Fay 6000
201 Michael Hartstein 13000
200 Dummy User 5000
Elapsed: 00:00:00.03
Execution Plan
----------------------------------------------------------
Plan hash value: 3837552314
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 195 | 4 (25)| 00:00:01 |
| 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
| 2 | HASH GROUP BY | | 3 | 195 | 4 (25)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMPLOYEES_COPY | 3 | 195 | 3 (0)| 00:00:01 |
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
714 bytes sent via SQL*Net to client
416 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
3 rows processed
In the execution plan of STEP 3, the RESULT CACHE operation is shown against Id 1, which suggests the result was retrieved directly from the result cache. Does this mean that the Oracle server has incrementally refreshed the result set?
Because before the execution of STEP 2 the cache contained only 2 records; then 1 record was inserted, and after STEP 3 a total of 3 records was returned from the cache. Does this mean the newly inserted row was retrieved from the database and merged into the cached result of STEP 1?
If the Oracle server has incrementally retrieved and merged the newly inserted record, what mechanism is Oracle using to do so?
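As far as I understand, no: the server result cache does not merge incrementally. The committed INSERT invalidates the cached result, and STEP 3 re-executes the query against the table (note the 4 consistent gets versus 0 in STEP 1) and publishes a fresh result under the same cache ID. A quick way to watch this, using the standard V$RESULT_CACHE_OBJECTS view:
SQL> SELECT id, type, status, name
  2  FROM v$result_cache_objects
  3  WHERE name LIKE '%EMPLOYEES_COPY%';
After the INSERT commits, the old result shows STATUS = 'Invalid' and the next execution creates a new 'Published' entry.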
Oracle database concepts 11gr2 manual states that "An incremental checkpoint is a type of thread checkpoint partly intended to avoid writing large numbers of blocks at online redo log switches. DBWn checks at least every three seconds to determine whether it has work to do. When DBWn writes dirty buffers, it advances the checkpoint position, causing CKPT to write the checkpoint position to the control file, but not to the data file headers."
As I understand it, DBWR writes dirty buffers (those under active checkpoint requests in the ACQ) to the datafiles from the buffer checkpoint queues, and after that the CKPT process may write the checkpoint RBA to the control files. [URL]...
So the CKPT process writes the checkpoint RBA to the control file only after DBWR has written the dirty buffers to the datafiles; that means at some point the datafiles contain blocks newer than the RBA recorded in the control file. Am I wrong?
And if the instance crashes before CKPT records the checkpoint RBA in the control file, what will the recovery behavior be? Will the redo records be reapplied from the last recorded checkpoint RBA in the control file to the tail of the log?