Backup Optimization :
FILESPERSET : limits the number of datafiles in each backup set.
MAXOPENFILES : limits the number of files that can be simultaneously open for reads.
DISKRATIO : directs RMAN to read datafiles from the specified number of different disks.
Backup optimization skips backing up a file when an identical file, or the same version of the file, has already been backed up.
BACKUP DURATION 4:00 PARTIAL MINIMIZE TIME DATABASE FILESPERSET 1;  -- run as fast as possible, stopping after 4 hours
BACKUP DURATION 4:00 PARTIAL MINIMIZE LOAD DATABASE FILESPERSET 1;  -- spread the I/O evenly across the 4-hour window
large_pool_size = num_of_allocated_channels * (16 MB + (4 * size_of_tape_buffer))
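The large-pool formula above can be sanity-checked with a quick calculation (the channel count and 256 KB tape buffer below are illustrative values, not from the original):

```python
def large_pool_bytes(num_channels: int, tape_buffer_bytes: int) -> int:
    """Large pool needed for RMAN I/O slaves:
    per channel, 16 MB plus 4 tape buffers."""
    mb = 1024 * 1024
    return num_channels * (16 * mb + 4 * tape_buffer_bytes)

# e.g. 4 channels with a 256 KB tape buffer
print(large_pool_bytes(4, 256 * 1024))  # 71303168 bytes (68 MB)
```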
Duplicate or Standby using Rman :
By default, the DUPLICATE command creates the database by using the most recent backups of the target database and then performs recovery to the most recent consistent point contained in the incremental backups and archived logs.
You can register the duplicate database in the same recovery catalog as the target database, because RMAN assigns the duplicate a new DBID during duplication.
When you create a standby database with DUPLICATE ... FOR STANDBY, RMAN does not create a new DBID: the standby keeps the primary's DBID, so the catalog tracks it as the same database.
Order of precedence for datafile names during RMAN duplicate : SET NEWNAME takes precedence over CONFIGURE AUXNAME, which takes precedence over DB_FILE_NAME_CONVERT.
If the directory structure is the same on the new host, then you can specify the NOFILENAMECHECK option.
Do not use NOFILENAMECHECK when duplicating on the same host: the filename check is what stops RMAN from overwriting the target database's own files.
If control file autobackup is ON and the backup includes datafile 1, RMAN writes the control file and SPFILE to a separate backup set. If autobackup is OFF and the backup includes datafile 1, RMAN includes the current control file and SPFILE in the same backup set as the datafiles.
If you have configured control file autobackup, you do not need a recovery catalog or the target database control file to restore the control file if you lose all copies of it.
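The datafile-name precedence rule above (SET NEWNAME over AUXNAME over DB_FILE_NAME_CONVERT) can be sketched as a toy resolver; this is an illustration, not RMAN's actual implementation, and the paths are made up:

```python
def clone_datafile_name(set_newname=None, auxname=None,
                        convert_pairs=None, source_name=None):
    """Pick the clone-side datafile name the way RMAN DUPLICATE does:
    SET NEWNAME wins over CONFIGURE AUXNAME, which wins over
    DB_FILE_NAME_CONVERT; otherwise the source name is reused."""
    if set_newname:
        return set_newname
    if auxname:
        return auxname
    if convert_pairs and source_name:
        for old, new in convert_pairs:
            if source_name.startswith(old):
                return source_name.replace(old, new, 1)
    return source_name

print(clone_datafile_name(convert_pairs=[("/u01/prod", "/u01/dup")],
                          source_name="/u01/prod/system01.dbf"))
# /u01/dup/system01.dbf
```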
Restoring an RMAN Backup to Another Node (without duplicate command )
Using RMAN To Clone A Database
Create physical standby using RMAN duplicate
Active Database Duplication in Oracle 11g with Example
Creating physical standby database with duplicate (without backup)
Rman Backup :
http://dbasolutions.wikispaces.com/Backup+-+RMAN
Rman Recovery :
Time taken and size of RMAN backups :
select INPUT_TYPE,
--INPUT_TYPE||'-'||(select INCREMENTAL_LEVEL from v$backup_set where COMPLETION_TIME between r.START_TIME and r.END_TIME and rownum <2) INPUT,
to_char(trunc(START_TIME),'DAY') day,
STATUS,
TO_CHAR(START_TIME,'mm/dd/yy hh24:mi') start_time,
TO_CHAR(END_TIME,'mm/dd/yy hh24:mi') end_time,
ELAPSED_SECONDS/3600 hrs,
INPUT_BYTES/1024/1024/1024 SUM_BYTES_BACKED_IN_GB,
OUTPUT_BYTES/1024/1024/1024 SUM_BACKUP_PIECES_IN_GB,
OUTPUT_DEVICE_TYPE
FROM V$RMAN_BACKUP_JOB_DETAILS r
order by SESSION_KEY
/
select SID,OPEN_TIME,
close_time,filename,
round(TOTAL_BYTES/1024/1024/1024) GBTOBEREAD_FROM_FILE,
round(bytes/1024/1024/1024) GB_Written_sofar ,
--(EFFECTIVE_BYTES_PER_SECOND/1024/1024/1024)*3600 SPEED_GBPERHR,
round(EFFECTIVE_BYTES_PER_SECOND/1024/1024) SPEED_MBPS,
round(ELAPSED_TIME/100/60) Elapsed_Min --(sysdate-open_time)*24*60 ela_time_minutes,
from V$BACKUP_ASYNC_IO;
select sid,
serial,
100* sum (long_waits) / sum (io_count) as "LONG_WAIT_PCT",
sum (effective_bytes_per_second)/1024/1024 as "MB_PER_S"
from v$backup_async_io
group by sid, serial;
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
select sl.sid, sl.opname,
to_char(100*(sofar/totalwork), '990.9')||'%' pct_done,
sysdate+(TIME_REMAINING/60/60/24) done_by
from v$session_longops sl, v$session s
where sl.sid = s.sid
and sl.serial# = s.serial#
and sl.sid in (select sid from v$session where module like 'backup%' or module like 'restore%' or module like 'rman%')
and sofar != totalwork
and totalwork > 0;
Backup details of the last successfully completed level 0 backup :
col det for a70
set numw 15
Select INPUT_TYPE||' '||
STATUS||' '||
TO_CHAR(START_TIME,'mm/dd/yy hh24:mi:ss') ||' -> '||--start_time,
TO_CHAR(END_TIME,'mm/dd/yy hh24:mi:ss') ||' '||--end_time,
round(ELAPSED_SECONDS/3600,2) ||'hrs '|| --hrs,
round(INPUT_BYTES/1024/1024/1024) ||'gb '|| --SUM_BYTES_BACKED_IN_GB,
round(OUTPUT_BYTES/1024/1024/1024) ||'gb '||--SUM_BACKUP_PIECES_IN_GB,
OUTPUT_DEVICE_TYPE Det, bpd.*
from (
select min(CHECKPOINT_CHANGE#) minckpt ,min(d.CHECKPOINT_TIME) mintime, max(CHECKPOINT_CHANGE#) maxckpt ,max(d.CHECKPOINT_TIME) maxtime
--select min(CHECKPOINT_CHANGE#) ,min(TO_CHAR((d.CHECKPOINT_TIME), 'YYYY-MM-DD HH:MI.SS')) mintime, max(CHECKPOINT_CHANGE#) ,max(TO_CHAR((d.CHECKPOINT_TIME), 'YYYY-MM-DD HH:MI.SS'))
from v$backup_piece p, v$backup_datafile d
where d.set_stamp(+) = p.set_stamp
and d.set_count(+) = p.set_count
and p.RMAN_STATUS_STAMP in ( select max(pi.RMAN_STATUS_STAMP) from v$backup_piece pi, v$backup_datafile di
where di.set_stamp = pi.set_stamp
and di.set_count = pi.set_count
and di.COMPLETION_TIME < nvl(to_date('&qry_dt','mm/dd/yy'),sysdate)
and di.INCREMENTAL_LEVEL = 0 ) ) bpd, V$RMAN_BACKUP_JOB_DETAILS j
where j.status like 'COMPLETED%'
and mintime Between j.START_TIME and j.End_time;
Archive logs that started before the datafile backup began and were archived after all datafile backups ended (i.e. the log current during the backup) :
select THREAD#, SEQUENCE# , FIRST_CHANGE# from V$BACKUP_ARCHIVELOG_DETAILS where FIRST_CHANGE# <= &MINCKPT
and NEXT_CHANGE# >= &MAXCKPT;
Archive logs whose first or next SCN falls within the datafile backup window :
select THREAD#, SEQUENCE# , FIRST_CHANGE# from V$BACKUP_ARCHIVELOG_DETAILS
where (FIRST_CHANGE# between &MINCKPT and &MAXCKPT)
or
(NEXT_CHANGE# between &MINCKPT and &MAXCKPT);
Archive logs whose first or next SCN falls between the datafile backup start and the required recovery point :
select distinct THREAD#, SEQUENCE# , FIRST_CHANGE# from V$BACKUP_ARCHIVELOG_DETAILS
where (FIRST_CHANGE# between &MINCKPT and timestamp_to_scn(nvl(to_timestamp('&&Reqdt_time','DD/MM/YYYY HH24:MI:SS'),sysdate)) )
or
(NEXT_CHANGE# between &MINCKPT and timestamp_to_scn(nvl(to_timestamp('&Reqdt_time','DD/MM/YYYY HH24:MI:SS'),sysdate)));
First few archive logs created after the backup completed :
select * from (
select distinct THREAD#, SEQUENCE# , FIRST_CHANGE#,FIRST_TIME from V$BACKUP_ARCHIVELOG_DETAILS where FIRST_CHANGE# >= &MAXCKPT order by FIRST_CHANGE# ) where rownum < 5;
select * from (
select distinct THREAD#, SEQUENCE# , FIRST_CHANGE#,FIRST_TIME from v$archived_log where FIRST_CHANGE# >= &MAXCKPT order by FIRST_CHANGE# ) where rownum < 5;
select * from (
select distinct THREAD#, SEQUENCE# , FIRST_CHANGE#,FIRST_TIME from V$BACKUP_ARCHIVELOG_DETAILS order by FIRST_CHANGE# desc)
where FIRST_CHANGE# > &MAXCKPT;
RMAN backup types and sizes :
select ctime "Date"
, decode(backup_type, 'L', 'Archive Log', 'D', 'Full', 'Incremental') backup_type
, bsize "Size GB"
from (select trunc(bp.completion_time) ctime
, backup_type
, round(sum(bp.bytes/1024/1024/1024),2) bsize
from v$backup_set bs, v$backup_piece bp
where bs.set_stamp = bp.set_stamp
and bs.set_count = bp.set_count
and bp.status = 'A'
group by trunc(bp.completion_time), backup_type)
order by 1, 2;
Latest level 0 backup and details
col dfname for a65
col handle for a70
select (select name from v$datafile where FILE# = d.file# ) dfname, p.handle , --d.RECID , d.STAMP , d.SET_STAMP , d.SET_COUNT ,
-- TO_CHAR((d.CHECKPOINT_TIME), 'YYYY-MM-DD HH24:MI.SS') ,
TO_CHAR((d.COMPLETION_TIME), 'YYYY-MM-DD HH24:MI.SS') comptime --,d.INCREMENTAL_LEVEL
from v$backup_piece p, v$backup_datafile d
where d.set_stamp(+) = p.set_stamp
and d.set_count(+) = p.set_count
and p.RMAN_STATUS_STAMP in ( select max(pi.RMAN_STATUS_STAMP) from v$backup_piece pi, v$backup_datafile di
where di.set_stamp = pi.set_stamp
and di.set_count = pi.set_count
and di.INCREMENTAL_LEVEL = 0 )
--and d.CHECKPOINT_TIME in ( select max(df.CHECKPOINT_TIME) from v$backup_datafile df where df.INCREMENTAL_LEVEL = 0 )
order by d.COMPLETION_TIME;
Backup records of datafiles :
select set_stamp , to_char(ba.completion_time, 'MON-DD-YYYY HH24:MI') finish_time , df.name
from v$datafile df , v$backup_datafile ba
where df.file# = ba.file#
and ba.file# != 0
order by set_stamp, ba.file#;
Backup size of each backupset from latest level 0 backup:
col dfname for a90
col handle for a70
compute sum of sz on rr
compute sum of sz on report
break on rr
break on report
select HANDLE , sum(BYTES)/1024/1024/1024 sz from v$backup_piece p
where p.RMAN_STATUS_STAMP in ( select max(pi.RMAN_STATUS_STAMP) from v$backup_piece pi, v$backup_datafile di
where di.set_stamp = pi.set_stamp
and di.set_count = pi.set_count
and di.INCREMENTAL_LEVEL = 0 )
group by handle;
select substr(handle,1,4) rr ,HANDLE , sum(BYTES)/1024/1024/1024 sz from v$backup_piece p
where p.RMAN_STATUS_STAMP in ( select max(pi.RMAN_STATUS_STAMP) from v$backup_piece pi, v$backup_datafile di
where di.set_stamp = pi.set_stamp
and di.set_count = pi.set_count
and di.INCREMENTAL_LEVEL = 0 )
group by handle
order by substr(handle,1,4);
Apply incremental backups to image copies (incrementally updated backups) :
BACKUP AS COPY INCREMENTAL LEVEL 0 TAG = WEEKLY DATABASE;  -- initial level 0 image copy
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG WEEKLY DATABASE;  -- level 1 of changes since the copy
RECOVER COPY OF DATABASE WITH TAG WEEKLY;  -- roll the level 1 forward into the image copy
ControlFile and Spfile restore :
If the FRA is configured, the control file autobackup is written to the FRA; otherwise, specify a destination with CONFIGURE CONTROLFILE AUTOBACKUP FORMAT.
Spfile restore from autobackup or path(specific backup piece)
RMAN> restore spfile from autobackup;
RMAN> restore spfile from autobackup db_recovery_file_dest='C:\ORACLE\FLASH_RECOVERY_AREA' db_name='orcl';
C.F restore from autobackup or path(specific backup piece)
RMAN>run
{
set controlfile autobackup format for device type disk to '/u01/autobackup/%F';
restore controlfile from autobackup;
}
RMAN> connect target /
RMAN> startup nomount;
RMAN> restore controlfile to 'C:\ctl.bk' from autobackup;
RMAN> restore controlfile from 'E:\flash_recovery_area\orcl\BACKUPSET\2012_11_26\O1_MF_ANNNN_TAG20121126T160306_8C6PMX3O_.BKP';
To check which backup piece contains the control file :
RMAN> restore controlfile from autobackup preview;
Note : there may be many control file autobackups; RESTORE picks the most recent one.
To restore the control file from an older backup, use "SET UNTIL TIME" :
run
{
SET UNTIL TIME "TO_DATE('20111129:12:01','YYYYMMDD:HH24:MI')";
RESTORE CONTROLFILE FROM AUTOBACKUP preview;
}
RMAN> connect target /
RMAN> startup nomount;
RMAN> set dbid 2601506594;
RMAN> restore controlfile from autobackup maxdays 20;
Find exact SCN Number for Oracle Restore
col fuzz# format 99999999999999999999999999
col chkpnt# format 99999999999999999999999999
select max(absolute_fuzzy_change#) fuzz#, max(checkpoint_change#) chkpnt# from
(select file#, completion_time, checkpoint_change#, absolute_fuzzy_change# from v$backup_datafile
where incremental_level = 0
and trunc(completion_time) = to_date('FEB-06-2016','MON-DD-YYYY')
and file# <> 0
order by completion_time desc
);
FUZZ# CHKPNT#
--------------------------- ---------------------------
0 14185864593835
Use the higher of the two SCNs above as the restore/recover target.
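In code, the rule above is just a max over the two reported SCNs (a trivial sketch; the sample values mirror the output shown, and the function name is mine):

```python
def restore_until_scn(fuzzy_scn: int, checkpoint_scn: int) -> int:
    """Recovery must run to at least the higher of the two SCNs
    reported for the level 0 backup; 0 means no fuzziness recorded."""
    return max(fuzzy_scn, checkpoint_scn)

print(restore_until_scn(0, 14185864593835))  # 14185864593835
```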
rman target /
spool log to list_backup.log;
list backup;
exit;
select sequence#, to_char(first_time,'DD-MON-YYYY HH24:MI:SS') first_time, first_change#,
to_char(next_time,'DD-MON-YYYY HH24:MI:SS') next_time, next_change#
from v$archived_log
where completion_time between to_date('FEB-06-2016','MON-DD-YYYY') and sysdate - 10 ;
RMAN performance :
Performance can be improved with the parameters listed under Backup Optimization above (FILESPERSET, MAXOPENFILES, DISKRATIO) and a properly sized large pool.
Monitor RMAN
col dbsize_mbytes for 99,999,990.00 justify right head "DBSIZE_MB"
col input_mbytes for 99,999,990.00 justify right head "READ_MB"
col output_mbytes for 99,999,990.00 justify right head "WRITTEN_MB"
col output_device_type for a10 justify left head "DEVICE"
col complete for 990.00 justify right head "COMPLETE %"
col compression for 990.00 justify right head "COMPRESS|% ORIG"
col est_complete for a20 head "ESTIMATED COMPLETION"
col recid for 9999999 head "ID"
select recid
, output_device_type
, dbsize_mbytes
, input_bytes/1024/1024 input_mbytes
, output_bytes/1024/1024 output_mbytes
, (output_bytes/input_bytes*100) compression
, (mbytes_processed/dbsize_mbytes*100) complete
, to_char(start_time + (sysdate-start_time)/(mbytes_processed/dbsize_mbytes),'DD-MON-YYYY HH24:MI:SS') est_complete
from v$rman_status rs
, (select sum(bytes)/1024/1024 dbsize_mbytes from v$datafile)
where status='RUNNING'
and output_device_type is not null;
Notes :
A level 1 incremental backup can be either differential or cumulative (mnemonic C1Dall : restoring from Cumulative backups needs the level 0 plus the most recent level 1 ; restoring from Differential backups needs the level 0 plus all level 1 backups since it).
A differential backup backs up all blocks changed after the most recent incremental backup at level 1 or 0.
A cumulative backup backs up all blocks changed after the most recent incremental backup at level 0.
If using level 1 cumulative backups, then once the most recent level 1 is about half the size of the base level 0, take a new level 0 :
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0
AND BLOCKS / DATAFILE_BLOCKS > .5
ORDER BY COMPLETION_TIME;
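The C1Dall restore rule above can be sketched as follows (backups are modeled as plain timestamps; this illustrates the rule, not RMAN's actual restore planning):

```python
def restore_set(level0_time, level1_times, cumulative: bool):
    """Return which backups a restore needs.
    Cumulative: level 0 plus the most recent level 1.
    Differential: level 0 plus every level 1 taken after it."""
    l1 = sorted(t for t in level1_times if t > level0_time)
    if not l1:
        return [level0_time]
    if cumulative:
        return [level0_time, l1[-1]]
    return [level0_time] + l1

print(restore_set(0, [1, 2, 3], cumulative=True))   # [0, 3]
print(restore_set(0, [1, 2, 3], cumulative=False))  # [0, 1, 2, 3]
```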
delete archivelog all completed before 'sysdate -1';
crosscheck archivelog all;
DELETE FORCE OBSOLETE;
DELETE EXPIRED ARCHIVELOG ALL;
DELETE NOPROMPT BACKUP OF DATABASE COMPLETED BEFORE 'SYSDATE-7';
DELETE BACKUPSET TAG "weekly_bkup";
DELETE EXPIRED BACKUP;
DELETE NOPROMPT EXPIRED BACKUPSET OF TABLESPACE uwdata DEVICE TYPE disk COMPLETED BEFORE SYSDATE-15;
DELETE NOPROMPT ARCHIVELOG UNTIL TIME 'SYSDATE-5';
DELETE OBSOLETE REDUNDANCY = 2 DEVICE TYPE DISK;
LIST BACKUP;
LIST ARCHIVELOG ALL;
RESYNC CATALOG;
RMAN Commands
Other Links :
using-rman-repository
Duplicate internal steps :
contents of Memory Script:
{
set until scn 13169104914343;
set newname for datafile 1 to "/oravl02/oradata/dbtest/system01.dbf";
set newname for datafile 2 to "/oravl02/oradata/dbtest/sysaux01.dbf";
set newname for datafile 3 to "/oravl02/oradata/dbtest/undotbs01.dbf";
set newname for datafile 4 to "/oravl02/oradata/dbtest/users01.dbf";
restore
clone database -- after the restore, DUPLICATE creates the clone control file, switches the clone datafiles, recovers the clone database, and then opens it
;
}
...executing command: SET NEWNAME
...restoring datafile 00001 to /oravl02/oradata/dbtest/system01.dbf
CREATE CONTROLFILE REUSE SET DATABASE "DBTEST" RESETLOGS ARCHIVELOG
DATAFILE
'/oravl02/oradata/dbtest/system01.dbf'
contents of Memory Script:
{
switch clone datafile all;
}
contents of Memory Script:
{
set until time "to_date('2015-01-06 00:00:00','yyyy-mm-dd hh24:mi:ss')";
recover
clone database
delete archivelog
;
}
Manual way to run duplicate steps :
RMAN> RUN
2> {
3> shutdown clone immediate;
4> startup clone nomount ;
5> }
CREATE CONTROLFILE REUSE SET DATABASE "DBTEST" RESETLOGS ARCHIVELOG
DATAFILE
'/oravl02/oradata/dbtest/system01.dbf'
run
2> {
3> set newname for tempfile 1 to "/oravl02/oradata/dbtest/temp01.dbf";
4> switch clone tempfile all;
5> set newname for datafile 1 to "/oravl02/oradata/dbtest/system01.dbf";
6> set newname for datafile 2 to "/oravl02/oradata/dbtest/sysaux01.dbf";
11> catalog clone datafilecopy "/oravl02/oradata/dbtest/sysaux01.dbf";
16> switch clone datafile all;
17> }
RMAN> run
2> {
3> Alter clone database open resetlogs;
4> }
Points to consider in rman backup script :
- export NLS_DATE_FORMAT="YYYY/MM/DD HH24:MI:SS"
- Time the run :
  c_begin_time_sec=`date +%s`
  c_end_time_sec=`date +%s`
  v_total_execution_time_sec=`expr ${c_end_time_sec} - ${c_begin_time_sec}`
  echo "Script execution time is $v_total_execution_time_sec seconds"
- v_log_file=full_backup_${ORACLE_SID}.`date +%Y%m%d_%H%M%S`.log
- Back up the control file as the last step.
- DELETE NOPROMPT ARCHIVELOG ALL BACKED UP $v_del_arch_copies TIMES TO DISK COMPLETED BEFORE '$p_start_of_last_db_backup';
- CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
- Recommended retention policy is REDUNDANCY 2, which requires roughly 3 times the full backup size plus archive logs.
- delete force noprompt obsolete recovery window of ${TAPE_DAY_RETENTION} days device type disk;