ASM Queries
ASM Datafile Move
Mark disks for ASM
Rename ASM Disk Group
ASM missing disk
Offline or drop
Adding-new-asm-disks-what-is-best-practise
ora-15063-asm-discovered-insufficient-amount-of-disks-2
Oracle ASM disk failure
Other ASM Material Links
Space usage :
asmcmd lsdg | awk 'BEGIN { print "\nSize\tUsed\tAvail\tUse%\tName"; getline } { printf "%.0f\t%.0f\t%.0f\t%.0f%%\t%s\n", $7/1024, ($7-$8)/1024, $8/1024, ($7-$8)*100/$7, $12 }'
for dg in $(asmcmd ls + | awk -F "/" '{ print $1 }'); do echo "-----"$dg"-----"; asmcmd lsdsk -d $dg; echo; done;
dg=+; for dir in $(asmcmd ls $dg|egrep '/$'); do echo $dg/$dir; asmcmd du $dg/$dir; done
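The same space figures can also be pulled with SQL on the ASM instance; a minimal sketch against V$ASM_DISKGROUP (rounding to GB is only for readability, and the WHERE clause just skips dismounted groups to avoid dividing by zero):
-- Disk group space usage in GB
SELECT name,
       ROUND(total_mb/1024)                     AS size_gb,
       ROUND((total_mb - free_mb)/1024)         AS used_gb,
       ROUND(free_mb/1024)                      AS avail_gb,
       ROUND((total_mb - free_mb)*100/total_mb) AS pct_used
FROM   v$asm_diskgroup
WHERE  total_mb > 0;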
ASM NOTES :
Normal redundancy disk groups require at least two failure groups.
High redundancy disk groups require at least three failure groups.
External redundancy disk groups do not use failure groups.
Disk_repair_time (11g) : Default 3.6 hours. Determines how long ASM waits before an offline disk is permanently dropped from the disk group. It helps prevent unnecessary rebalancing when a disk failure is transient, i.e. lasts only a short time. In 10g such a disk would be dropped from the disk group as soon as it went offline. The only point of concern is that, for that duration, only one mirror copy is left of the extents that were on the failed disk if the disk group uses normal redundancy.
ASM_PREFERRED_READ_FAILURE_GROUPS (11g) : If storage is at different locations, it lets an instance fetch extents from the local mirror copy.
In Oracle 11.2 disk groups are managed as Clusterware resources.
11.1 requires you to start them manually or append the new disk group to ASM_DISKGROUPS.
Oracle 11g allows each node to define a preferred failure group, allowing nodes in extended clusters to access local failure groups in preference to remote ones, via:
ASM_PREFERRED_READ_FAILURE_GROUPS
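A minimal example of setting it per ASM instance in an extended cluster (the failure group names FG1/FG2 and instance names +ASM1/+ASM2 are hypothetical):
-- Each ASM instance prefers reads from its local failure group of disk group DATA
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.FG1' SID='+ASM1';
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.FG2' SID='+ASM2';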
From Oracle 11g onwards, each disk group can have its own AU_SIZE, specified in the CREATE DISKGROUP command (see the example after these notes).
From 11.1 onwards, only the extents that changed while a disk was offline are written to resynchronize it when the disk comes back online (fast mirror resync).
When the cluster is in rolling upgrade mode, each node in turn can be shut down, upgraded and started.
AU_SIZE can only be set when the disk group is created; it cannot be changed later.
Fine-grained striping : redo log files; stripe size 128 KB (10g and 11g).
Coarse striping : datafiles; the first 20,000 extents are 1 AU (1 MB in 10g and 11g), after ~20 GB extents grow to 8 AUs of 1 MB (11g), and after ~40 GB to 64 AUs of 1 MB (11g).
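A CREATE DISKGROUP sketch tying the redundancy, failure group and AU_SIZE notes above together (the disk group name, failgroup names and /dev/... paths are all hypothetical):
-- Normal redundancy needs at least two failure groups; au_size can only be set at creation
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/asm-disk1', '/dev/asm-disk2'
  FAILGROUP fg2 DISK '/dev/asm-disk3', '/dev/asm-disk4'
  ATTRIBUTE 'au_size' = '4M', 'compatible.asm' = '11.1';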
Notes :
An external redundancy disk group cannot be mounted if it has missing disks.
When a disk is dropped (after going offline), the disk group can still be mounted with the FORCE option if at least one failgroup remains intact for normal redundancy, or at least two failgroups for high redundancy.
That is, it tolerates the loss of one or more disks (up to and including all disks) in the same failgroup, as in the forced mount example below.
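A hedged example of such a forced mount, run on the ASM instance (disk group name is illustrative):
ALTER DISKGROUP data MOUNT FORCE;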
System-generated ASM file names end in name.ASMfileno.incarnation, i.e. the ASM file number followed by an incarnation number.
Files in different disk groups can have the same ASM file number.
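As a reference query on the ASM instance, file numbers and incarnations can be listed per disk group from V$ASM_FILE and V$ASM_ALIAS; a sketch:
-- ASM file number / incarnation per alias, grouped by disk group
SELECT g.name AS diskgroup, a.name AS alias_name, f.file_number, f.incarnation, f.type
FROM   v$asm_file f
       JOIN v$asm_alias     a ON a.group_number = f.group_number AND a.file_number = f.file_number
       JOIN v$asm_diskgroup g ON g.group_number = f.group_number
ORDER  BY g.name, f.file_number;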
Even in a normal redundancy disk group, the controlfile has 3 mirrored copies (the controlfile template uses high redundancy).
To change repair time : $ asmcmd setattr -G DATA disk_repair_time '8.0h'
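The SQL equivalent from the ASM instance is a simple attribute change (same disk group DATA as above):
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8.0h';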
kfod - ASM discovery tool : can be used when ASM has problems mounting a disk group, in particular when it complains that it cannot find one or more disks.
renamedg : Caution, the database will not have a clue that its files are now in a renamed disk group; references to the old name (controlfile entries, datafile names, spfile/init parameters) must be updated manually.
External redundancy : when a disk goes offline the disk group gets dismounted. Two cases:
If the problem was with access to ASM metadata : the disk group will not mount. Fix the access issue and try again; if it cannot be fixed, recreate the disk group and restore the database data from backups.
If the problem was with database data : the disk group will mount but the rebalance will fail. Stop the rebalance, drop the 'offending' file(s) and restore/recreate those files (see the example below).
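One way to stop an ongoing rebalance, as described above, is to set the rebalance power to 0 (sketch; disk group name illustrative):
ALTER DISKGROUP data REBALANCE POWER 0;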
ALTER DISKGROUP data OFFLINE DISK 'disk_0000', 'disk_0001';
ALTER DISKGROUP data OFFLINE DISKS IN FAILGROUP 'fg_0000';
ALTER DISKGROUP data ONLINE ALL;
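To see which disks are offline and how many seconds remain on their repair timer before they are dropped, a sketch query against V$ASM_DISK (REPAIR_TIMER is reported in seconds):
-- Offline/syncing disks and their remaining repair time
SELECT dg.name AS diskgroup, d.name AS disk, d.mode_status, d.repair_timer
FROM   v$asm_disk d
       JOIN v$asm_diskgroup dg ON dg.group_number = d.group_number
WHERE  d.mode_status <> 'ONLINE';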