ZFS is a combined file system and logical volume manager designed by Sun Microsystems. Its key features include high scalability, strong data integrity, drive pooling and support for multiple RAID levels.
ZFS uses the concept of storage pools to manage physical storage. Instead of relying on a separate volume manager, ZFS aggregates devices into a storage pool. The storage pool describes the physical characteristics of the storage and acts as an arbitrary data store from which file systems can be created. You also do not need to predefine the size of a file system; file systems inside a ZFS pool grow automatically within the disk space allocated to the pool.
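For example, once a pool exists, file systems (datasets) can be created inside it with a single command; they are mounted automatically and simply share the pool's free space until you choose to cap them with a quota. The pool and dataset names below are only placeholders for illustration.
bash-3.00# zfs create unixpool/data
bash-3.00# zfs create unixpool/logs
bash-3.00# zfs list -r unixpool
bash-3.00# zfs set quota=500m unixpool/data
Both datasets draw from the same pool, so there is nothing to resize when one of them grows.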
High scalability
ZFS is a 128-bit file system capable of addressing zettabytes of data (a zettabyte is a billion terabytes); no matter how many hard drives you have, ZFS can manage them.
Maximum Integrity
Every block of data in ZFS is stored together with a checksum, which guarantees its integrity, so you can be sure your data will not suffer silent corruption.
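The checksum algorithm is an ordinary dataset property, so you can inspect or change it with the zfs get and zfs set commands. The pool name below is just an example (any pool or dataset name will do):
bash-3.00# zfs get checksum unixpool
bash-3.00# zfs set checksum=fletcher4 unixpool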
Drive pooling
ZFS treats storage much like system RAM: whenever you need more space, you simply add another disk to the pool and its capacity becomes available straight away, with none of the usual headaches of formatting, initializing and partitioning.
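To be exact, a single zpool add command attaches the new disk to an existing pool and the extra capacity is usable immediately. The disk name c2t2d0 below is only an illustration, and note that in this ZFS release a device added to a pool cannot be removed again, so double-check the name first:
bash-3.00# zpool add unixpool c2t2d0
bash-3.00# zpool list unixpool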
Capability of different RAID levels
We can configure multiple RAID levels using ZFS, and performance-wise it is on par with hardware RAID.
Configuration Part
Striped pool
In this configuration data is striped across multiple disks and there is no redundancy, but data access speed is at its highest.
1. The following disks are attached to our server
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0
1. c0d1 <VBOX HAR-34e30776-506a21a-0001-1.01GB>
/pci@0,0/pci-ide@1,1/ide@0/cmdk@1,0
2. c1d1 <DEFAULT cyl 513 alt 2 hd 128 sec 32>
/pci@0,0/pci-ide@1,1/ide@1/cmdk@1,0
2. Create the striped pool
bash-3.00# zpool create unixpool c0d1 c1d1
3. Check the pool status
bash-3.00# zpool status unixpool
pool: unixpool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
unixpool    ONLINE       0     0     0
  c0d1      ONLINE       0     0     0
  c1d1      ONLINE       0     0     0
errors: No known data errors
bash-3.00# zpool list unixpool
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
unixpool 1.98G 78K 1.98G 0% ONLINE -
bash-3.00# zfs list unixpool
NAME USED AVAIL REFER MOUNTPOINT
unixpool 73.5K 1.95G 21K /unixpool
Mirrored Pool
As the name indicates, data in this configuration is mirrored across the disks, so a redundant copy is always kept. Read speed is high in this case, but writes are slower.
1. Creating the mirrored pool
bash-3.00# zpool create unixpool mirror c0d1 c1d1
2. Verifying the pool status
bash-3.00# zpool status unixpool
pool: unixpool
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ WRITE CKSUM
unixpool      ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    c0d1      ONLINE       0     0     0
    c1d1      ONLINE       0     0     0
errors: No known data errors
bash-3.00# zpool list unixpool
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
unixpool 1016M 108K 1016M 0% ONLINE -
bash-3.00# zfs list unixpool
NAME USED AVAIL REFER MOUNTPOINT
unixpool 73.5K 984M 21K /unixpool
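A mirror can also be reshaped after creation. Assuming a spare disk is available (the device name c2t2d0 here is hypothetical), zpool attach turns the two-way mirror into a three-way mirror, and zpool detach reverses it:
bash-3.00# zpool attach unixpool c0d1 c2t2d0
bash-3.00# zpool detach unixpool c2t2d0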
Mirroring multiple disks
1. Creating the mirrored pool
bash-3.00# zpool create unixpool2m mirror c0d1 c1d1 mirror c2t0d0 c2t1d0
2. Verifying the pool status
bash-3.00# zpool status unixpool2m
pool: unixpool2m
state: ONLINE
scrub: none requested
config:
NAME            STATE     READ WRITE CKSUM
unixpool2m      ONLINE       0     0     0
  mirror-0      ONLINE       0     0     0
    c0d1        ONLINE       0     0     0
    c1d1        ONLINE       0     0     0
  mirror-1      ONLINE       0     0     0
    c2t0d0      ONLINE       0     0     0
    c2t1d0      ONLINE       0     0     0
errors: No known data errors
bash-3.00# zpool list unixpool2m
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
unixpool2m 2.04G 81K 2.04G 0% ONLINE -
bash-3.00# zfs list unixpool2m
NAME USED AVAIL REFER MOUNTPOINT
unixpool2m 76.5K 2.01G 21K /unixpool2m
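If you want to confirm that writes are really being striped across the two mirrors, the per-vdev statistics are a quick, optional check:
bash-3.00# zpool iostat -v unixpool2m 5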
RAID 5 (RaidZ) pool
This requires a minimum of 3 disks and can sustain a single disk failure; one disk's worth of capacity goes to parity, which is why the roughly 3 GB raw pool below shows about 2 GB available. If the disks are of different sizes, we need to use the -f option while creating the pool.
1. Creating the raidz
bash-3.00# zpool create -f unixpoolraidz raidz c0d1 c1d1 c2t0d0
2. Checking the status
bash-3.00# zpool status unixpoolraidz
pool: unixpoolraidz
state: ONLINE
scrub: none requested
config:
NAME             STATE     READ WRITE CKSUM
unixpoolraidz    ONLINE       0     0     0
  raidz1-0       ONLINE       0     0     0
    c0d1         ONLINE       0     0     0
    c1d1         ONLINE       0     0     0
    c2t0d0       ONLINE       0     0     0
errors: No known data errors
bash-3.00# zpool list unixpoolraidz
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
unixpoolraidz 2.98G 150K 2.98G 0% ONLINE -
bash-3.00# zfs list unixpoolraidz
NAME USED AVAIL REFER MOUNTPOINT
unixpoolraidz 93.9K 1.96G 28.0K /unixpoolraidz
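To see the single-disk fault tolerance in action, you can temporarily offline one member and bring it back; the pool stays usable (in a DEGRADED state) while the disk is offline:
bash-3.00# zpool offline unixpoolraidz c0d1
bash-3.00# zpool status unixpoolraidz
bash-3.00# zpool online unixpoolraidz c0d1
For an actual disk failure, zpool replace <pool> <old-disk> <new-disk> swaps in a replacement disk and resilvers it.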
Raid 6 (RaidZ2)
This requires a minimum of 4 disks and can sustain 2 disk failures; two disks' worth of capacity goes to parity, so the roughly 4 GB raw pool below shows about 2 GB available.
1. Creating the raidz2
bash-3.00# zpool create -f unixpoolraidz2 raidz2 c0d1 c1d1 c2t0d0 c2t1d0
2. Checking the status
bash-3.00# zpool status unixpoolraidz2
pool: unixpoolraidz2
state: ONLINE
scrub: none requested
config:
NAME              STATE     READ WRITE CKSUM
unixpoolraidz2    ONLINE       0     0     0
  raidz2-0        ONLINE       0     0     0
    c0d1          ONLINE       0     0     0
    c1d1          ONLINE       0     0     0
    c2t0d0        ONLINE       0     0     0
    c2t1d0        ONLINE       0     0     0
errors: No known data errors
bash-3.00# zpool list unixpoolraidz2
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
unixpoolraidz2 3.97G 338K 3.97G 0% ONLINE -
bash-3.00# zfs list unixpoolraidz2
NAME USED AVAIL REFER MOUNTPOINT
unixpoolraidz2 101K 1.95G 31.4K /unixpoolraidz2
Destroying a zpool
A zpool can be destroyed even while it is still mounted.
bash-3.00# zpool destroy unixpoolraidz2
bash-3.00# zpool list unixpoolraidz2
cannot open 'unixpoolraidz2': no such pool
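Had the pool still been busy (for instance with datasets mounted and in use), the destroy could have been forced with -f; the command below only illustrates the flag, and of course destroying a pool discards all of its data:
bash-3.00# zpool destroy -f unixpoolraidz2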
Importing and exporting the pool
As a Unix administrator you may run into situations that call for storage migration. For such cases ZFS lets you migrate a storage pool from one storage system to another.
1. We have a pool called unixpool now
bash-3.00# zpool status unixpool
pool: unixpool
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ WRITE CKSUM
unixpool      ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    c0d1      ONLINE       0     0     0
    c1d1      ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    c2t0d0    ONLINE       0     0     0
    c2t1d0    ONLINE       0     0     0
errors: No known data errors
2. Exporting the unixpool
bash-3.00# zpool export unixpool
3. Checking the status
bash-3.00# zpool status unixpool
cannot open 'unixpool': no such pool
Once the pool has been exported, it is no longer visible on this system. Next we import it on the new system.
4. Check which pools are available for import with the zpool import command
bash-3.00# zpool import
pool: unixpool
id: 13667943168172491796
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
unixpool      ONLINE
  mirror-0    ONLINE
    c0d1      ONLINE
    c1d1      ONLINE
  mirror-1    ONLINE
    c2t0d0    ONLINE
    c2t1d0    ONLINE
5. Import the pool
bash-3.00# zpool import unixpool
bash-3.00# zpool status unixpool
pool: unixpool
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ WRITE CKSUM
unixpool      ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    c0d1      ONLINE       0     0     0
    c1d1      ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    c2t0d0    ONLINE       0     0     0
    c2t1d0    ONLINE       0     0     0
errors: No known data errors
We have successfully imported the unixpool now.
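As the listing above mentions, a pool can also be imported by its numeric identifier, which helps when two exported pools happen to share a name, and a pool that was not cleanly exported from its previous host can be forced in with -f (use either form, not both):
bash-3.00# zpool import 13667943168172491796
bash-3.00# zpool import -f unixpool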
Zpool scrub option for integrity checking and repair
ZFS offers an integrity check and repair facility much like fsck in a conventional Unix file system; zpool scrub is used to achieve this. In the output below you can see the scrub completion status and timing.
bash-3.00# zpool scrub unixpool
bash-3.00# zpool status unixpool
pool: unixpool
state: ONLINE
scrub: scrub completed after 0h0m with 0 errors on Mon Feb 1 18:30:31 2016
config:
NAME          STATE     READ WRITE CKSUM
unixpool      ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    c0d1      ONLINE       0     0     0
    c1d1      ONLINE       0     0     0
  mirror-1    ONLINE       0     0     0
    c2t0d0    ONLINE       0     0     0
    c2t1d0    ONLINE       0     0     0
errors: No known data errors
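Scrubs are not scheduled automatically in this release, so a common approach (only a suggestion, adjust the timing to your environment) is to run one periodically from root's crontab, for example every Sunday at 02:00:
bash-3.00# crontab -e
0 2 * * 0 /usr/sbin/zpool scrub unixpool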
Thank you. Please post your comments and suggestions.