COMP 3000 Essay 1 2010 Question 9
Question
What requirements distinguish the Zettabyte File System (ZFS) from traditional file systems? How are those requirements realized in ZFS, and how do other operating systems address those same requirements? (Please discuss legacy, current, and in-development systems.)
Answer
Introduction
ZFS was developed by Sun Microsystems (now owned by Oracle) as a server-class file system. This differs from most file systems, which were developed as desktop file systems that could also be used by servers. With servers as the target, particular attention was paid to data integrity, capacity, and speed.
One of the most significant ways in which ZFS differs from traditional file systems is the level of abstraction. While a traditional file system abstracts away the physical properties of the media upon which it lies (hard disk, flash drive, CD-ROM, etc.), ZFS also abstracts away whether the file system lives on one or many different pieces of hardware or media. Examples include a single hard drive, an array of hard drives, or a number of hard drives spread across separate systems.
One of the mechanisms that allows this abstraction is that the volume manager, normally a program separate from the file system in traditional designs, is built into ZFS.
ZFS is a 128-bit file system, allowing it to address 2^128 bytes of storage.
Major Features of ZFS
ZFS stands for Zettabyte File System. ZFS combines a file system with a logical volume manager. A ZFS file system can store up to 2^58 zettabytes, where each zettabyte is 2^70 bytes. A traditional file system (referred to as TFS hereafter) uses the idea of volumes to manage storage. By contrast, ZFS uses the idea of pooled storage.
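As a quick sanity check on those figures, here is a minimal Python sketch; the constants are simply the powers of two quoted above:

 # Back-of-the-envelope check of the capacity figures above.
 ADDRESS_BITS = 128                 # ZFS block pointers are 128-bit
 ZETTABYTE = 2 ** 70                # one (binary) zettabyte in bytes

 max_bytes = 2 ** ADDRESS_BITS      # total addressable storage
 max_zettabytes = max_bytes // ZETTABYTE

 print(max_zettabytes == 2 ** 58)   # True: 2^128 bytes == 2^58 ZB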
Issues with volume storage (used by TFS)
# The FS cannot use disks under a different volume.
# A volume manager (another layer) is needed to manage the volume.
# Hard to grow/shrink.
# Storage is fragmented.
Advantages of pooled storage
# No partitions to manage.
# All free storage space is always available.
# Easy to grow/shrink.
Problems a ZFS attempts to tackle/avoid
* Losing important files
* Running out of space on a partition
* Booting with a damaged root file system
Issues with existing File Systems
* No way to prevent silent data corruption: defects in a controller, disk, firmware, etc. can corrupt data silently.
* Hard to manage: limits on file sizes, number of files, files per directory, etc.
In ZFS, the ideas of files and directories are replaced by objects.
ZFS is composed of seven components (achieving, and going beyond, what a TFS provides):
# SPA (Storage Pool Allocator)
# DSL (Dataset and Snapshot Layer)
# DMU (Data Management Unit)
# ZAP (ZFS Attribute Processor)
# ZPL (ZFS POSIX Layer)
# ZIL (ZFS Intent Log)
# ZVOL (ZFS Volume)
Physical Layer Abstraction
- Volume management and the file system are combined in one system.
- File systems sit on top of zpools, which sit on top of vdevs, which sit on top of physical devices.
- File systems easily, and often, span many physical devices.
- Enormous capacity: 2^128 bytes of addressable storage.
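A hypothetical sketch of that layering follows; the class and method names (Vdev, Zpool, FileSystem) are illustrative, not ZFS's actual internals:

 class Vdev:
     """A virtual device: one physical disk (or a mirror/RAID group)."""
     def __init__(self, capacity_bytes):
         self.capacity_bytes = capacity_bytes

 class Zpool:
     """A storage pool aggregating many vdevs into one space."""
     def __init__(self, vdevs):
         self.vdevs = vdevs

     def free_space(self):
         # All free space in the pool is available to every file system.
         return sum(v.capacity_bytes for v in self.vdevs)

 class FileSystem:
     """A ZFS file system draws blocks from a pool, not a partition."""
     def __init__(self, name, pool):
         self.name, self.pool = name, pool

 # Two file systems sharing one pool that spans three disks:
 pool = Zpool([Vdev(2 * 2**40), Vdev(2 * 2**40), Vdev(4 * 2**40)])
 home = FileSystem("tank/home", pool)
 mail = FileSystem("tank/mail", pool)

Note that neither file system owns a fixed slice of the disks: both draw from, and return space to, the shared pool.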
Data Integrity
At the lowest level, ZFS uses checksums for every block of data that is written to disk. The checksum is checked whenever data is read to ensure that data has not been corrupted in some way. The idea is that if either the block or the checksum is corrupted, then recalculating the checksum for the block will result in a mismatch between the calculated and stored checksums. It is possible that both the block and checksum record could be corrupted, but the probability of the corruption being such that the corrupted block's checksum matches the corrupted checksum is exceptionally low.
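The following minimal sketch shows the write-then-verify idea. Real ZFS stores the checksum in the parent block pointer and defaults to the fletcher4 algorithm (with SHA-256 as an option); a flat table and SHA-256 are used here purely for illustration:

 import hashlib

 checksums = {}   # block address -> stored checksum

 def write_block(addr, data, disk):
     disk[addr] = data
     checksums[addr] = hashlib.sha256(data).digest()

 def read_block(addr, disk):
     data = disk[addr]
     if hashlib.sha256(data).digest() != checksums[addr]:
         raise IOError("checksum mismatch: block %r is corrupt" % addr)
     return data

 disk = {}
 write_block(7, b"important data", disk)
 disk[7] = b"important dat\x00"        # simulate silent corruption
 try:
     read_block(7, disk)
 except IOError as e:
     print(e)                          # corruption detected on read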
In the event that a bad checksum is found, replication of data, in the form of "ditto blocks", provides an opportunity for recovery. A block pointer in ZFS is actually capable of pointing to multiple blocks, each of which contains duplicate data. By default, duplicate blocks are only stored for file system metadata, but this can be expanded to user data blocks as well. When a bad checksum is read, ZFS is able to follow one of the other pointers in the block pointer to hopefully find a healthy block.
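A sketch of that fallback, under the same illustrative assumptions as above (a dict standing in for the disk, SHA-256 for the checksum):

 import hashlib

 def write_replicated(data, copies, disk, next_addr):
     """Write `copies` duplicates and return a block pointer to all of them."""
     addrs = []
     for i in range(copies):
         addr = next_addr + i
         disk[addr] = data
         addrs.append(addr)
     return {"addrs": addrs, "checksum": hashlib.sha256(data).digest()}

 def read_replicated(bp, disk):
     for addr in bp["addrs"]:
         data = disk[addr]
         if hashlib.sha256(data).digest() == bp["checksum"]:
             return data                 # first healthy copy wins
     raise IOError("all copies corrupt")

 disk = {}
 bp = write_replicated(b"superblock metadata", 3, disk, next_addr=0)
 disk[0] = b"garbage"                    # corrupt one copy
 print(read_replicated(bp, disk))        # recovered from a ditto copy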
RAID setups are particularly well suited to ZFS, since there is already an abstraction between the physical storage and the zpools. Besides protecting against outright disk failure, if a bad checksum is found, one of the alternate disks may hold a healthy version. If such errors accumulate, they can signal an impending drive failure. When a drive does fail, some of the system's tolerance for data loss is consumed; that is, the system is operating at less than full redundancy (however that is defined for the system at hand). To address this, ZFS supports "hot spares": idle drives that can be brought online automatically when another drive fails, so that full redundancy can be rebuilt with minimal delay, hopefully in time for the next drive failure.
With block-by-block data integrity well in hand, ZFS also employs a transactional update model to ensure that higher level data structures remain consistent. Rather than use a journal to allow for quick consistency checking in the event of a system crash, ZFS uses a copy-on-write model. New disk structures are written out in a detached state. Once these structures have been written and checked, then they are connected to the existing disk structures in one atomic write, with the structures they replace becoming disconnected.
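The essence of copy-on-write can be shown in a few lines. In this illustrative sketch (plain dicts stand in for ZFS's on-disk tree), new versions of the modified nodes are built detached from the live tree, and the commit is a single pointer swap:

 def cow_update(root, path, new_value):
     """Return a new tree sharing unmodified subtrees with the old one."""
     if not path:
         return new_value
     head, rest = path[0], path[1:]
     new_node = dict(root)             # shallow copy: detached from live tree
     new_node[head] = cow_update(root[head], rest, new_value)
     return new_node                   # old tree is untouched until commit

 live_root = {"home": {"alice": "v1"}, "mail": {"inbox": "v1"}}
 staged = cow_update(live_root, ["home", "alice"], "v2")

 # The "atomic write": repoint the root at the fully written new tree.
 live_root = staged

If the system crashes before the final swap, the old tree is still intact and consistent, which is why no journal replay is needed.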
At the user level, ZFS supports file-system snapshots: essentially, a clone of the entire file system at a certain point in time. In the event of accidental file deletion, a user can retrieve an older version from a recent snapshot.
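Under copy-on-write, snapshots fall out almost for free: since committed trees are never modified in place, a snapshot is just a retained root pointer. Continuing the cow_update sketch above:

 snapshots = {}

 def take_snapshot(name, root):
     snapshots[name] = root            # old blocks stay reachable

 root = {"home": {"alice": "v1"}}
 take_snapshot("monday", root)
 root = cow_update(root, ["home", "alice"], "v2")   # live tree moves on

 print(snapshots["monday"]["home"]["alice"])        # "v1": the old version
                                                    # is still readable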
Data Deduplication
Data Deduplication is a method of interfile storage compression, based around the idea of storing any one block of unique data only once physically, and logically linking that block to each file that contains that data. Effective use of data deduplication can reduce the space and power requirements of physical storage, but only if you have data that lends itself to deduplication.
Data Deduplication schemes are typically implemented using hash tables, and can be applied to whole files, sub-file units (blocks), or as a patch set. There is an inherent trade-off between the granularity of your deduplication algorithm and the resources needed to implement it. In general, as you consider smaller blocks of data for deduplication, you increase your "fold factor", that is, the difference between the logical storage provided vs. the physical storage needed. At the same time, however, smaller blocks mean more hash table overhead and more CPU time needed for deduplication and for reconstruction, as the arithmetic sketch below illustrates.
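An illustrative fold-factor calculation, using made-up numbers (100 GiB of logical data, 40 GiB unique, 128 KiB blocks):

 logical_bytes  = 100 * 2**30       # file data as seen by users
 physical_bytes =  40 * 2**30       # unique blocks actually on disk

 fold_factor = logical_bytes / physical_bytes
 print(f"fold factor: {fold_factor:.1f}x")        # 2.5x

 # Smaller blocks usually dedupe more (higher fold factor) but mean a
 # bigger hash table, since it needs one entry per unique block.
 block_size = 128 * 2**10
 table_entries = physical_bytes // block_size
 print(f"hash table entries: {table_entries}")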
The actual analysis and deduplication of incoming files can occur in-band or out-of-band. In-band deduplication means that the file is analyzed as it arrives at the storage server, and written to disk in its already compressed state. While this method requires the least overall storage capacity, resource constraints of the server may limit the speed at which new data can be ingested. In particular, the server must have enough memory to store the entire deduplication hash table in memory for fast comparisons. With out-of-band deduplication, inbound files are written to disk without any analysis (so, in the traditional way). A background process analyzes these files at a later time to perform the compression. This method means higher overall disk I/O is needed, which can be a problem if the disk (or disk array) is already at I/O capacity.
In the case of ZFS, which is typically hosted as a server-side file system, the server itself performs all of the deduplication and reconstruction; the entire process is transparent to the client. ZFS assumes that it is running on a highly multi-threaded operating system and that CPU cycles are in greater abundance than disk I/O cycles, and thus performs the deduplication in-band.
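A minimal sketch of in-band, block-level deduplication follows; the names are illustrative and do not reflect the internals of ZFS's deduplication table:

 import hashlib

 dedup_table = {}      # block hash -> physical address
 storage = []          # physical block store

 def ingest_block(data):
     """Return the physical address for `data`, writing it only if new."""
     key = hashlib.sha256(data).digest()
     if key in dedup_table:
         return dedup_table[key]       # logical link to existing block
     storage.append(data)              # write the unique block once
     dedup_table[key] = len(storage) - 1
     return dedup_table[key]

 file_a = [ingest_block(b"A" * 4096), ingest_block(b"B" * 4096)]
 file_b = [ingest_block(b"A" * 4096)]  # shares the "A" block with file_a
 print(len(storage))                   # 2 physical blocks for 3 logical ones

Because the hash is computed before the write, duplicate blocks never touch the disk, at the cost of CPU time and memory on the ingest path.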
Future File Systems
BTRFS
BTRFS (B-tree File System) is often compared to ZFS because it offers very similar functionality, even though much of the implementation differs. BTRFS is built on the b-tree structure: a subvolume is a named b-tree containing the files and directories stored within it.
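A very rough sketch of that idea, with a plain dict standing in for the actual b-tree:

 subvolumes = {}                 # subvolume name -> root of its tree

 def create_subvolume(name):
     subvolumes[name] = {}       # each subvolume gets its own tree

 create_subvolume("@home")
 subvolumes["@home"]["alice/notes.txt"] = b"file contents"

 # Each subvolume can be mounted, snapshotted, and managed independently,
 # because it is a self-contained tree keyed under its own name.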
WinFS
References
- Mandagere, N., Zhou, P., Smith, M. A., and Uttamchandani, S. Demystifying Data Deduplication. In Proceedings of the ACM/IFIP/USENIX Middleware '08 Conference Companion (Leuven, Belgium, December 1-5, 2008). ACM, New York, NY, 12-17.
- Geer, D. Reducing the Storage Burden via Data Deduplication. Computer, vol. 41, no. 12, pp. 15-17, Dec. 2008.
- Bonwick, J. ZFS Deduplication. Jeff Bonwick's Blog, November 2, 2009.
- Li, A., Department of Computing, Macquarie University. Zettabyte File System Autopsy: Digital Crime Scene Investigation for Zettabyte File System.
- Zhang, Y., Rajimwale, A., Arpaci-Dusseau, A. C., and Arpaci-Dusseau, R. H. End-to-end Data Integrity for File Systems: A ZFS Case Study. In FAST '10: Proceedings of the 8th USENIX Conference on File and Storage Technologies. USENIX Association, Berkeley, CA, USA.
- http://www.dhtusa.com/media/IOPerf_CMG09DHT.pdf (source for the BTRFS introduction, found through Google Scholar)