COMP 3000 Essay 1 2010 Question 10

From Soma-notes

Question

How do the constraints of flash storage affect the design of flash-optimized file systems? Explain by contrasting with hard disk-based file systems.

Answer

First introduced in the late 1980s, flash memory is a light, non-volatile, compact, shock-resistant and efficiently readable type of storage. Because of the particular limitations of this kind of memory, flash file systems require a fundamentally different architecture than disk-based file systems. The limitations that make this necessary are flash memory's limited number of erase cycles, the slowness of its writes and erasures, and its need to conduct erasures one entire block at a time. A typical disk-based file system is unsuitable for flash memory: it erases far too frequently and indiscriminately, while being optimized for other constraints that do not apply to flash. A different solution is therefore necessary, and that solution is the log-based file system, which is far better suited to flash memory because it extends a flash unit's lifespan through log-structured writes and wear leveling, and improves efficiency through better management of writes and erasures.

Flash Memory

Though flash memory has many advantages, it also has several key disadvantages that are of central relevance to anyone attempting to implement a file system for it.

Originally conceived as an improved alternative to EPROMs [7], flash memory is non-volatile (power-supply independent) storage that has recently become more popular due to its fast read times. There are two basic forms of flash storage, NOR and NAND, each with its own advantages and disadvantages. NOR has the fastest read times but is much slower at writing. NAND, on the other hand, has much greater capacity, faster write times, lower cost, and a much longer life expectancy. This makes NAND the superior form of flash for long-term storage, while NOR is preferred for embedded read-only functions. [2]

More and more people use flash memory, in drives of many sizes, ranging from USB keys of a few hundred megabytes to internal solid-state drives (SSDs) of a few terabytes. Two main reasons for this shift are flash's extremely fast read times and its falling price. A typical flash drive has read speeds up to 14 times faster than a hard disk drive (HDD). [17]

The extreme read speed makes flash drives a preferred medium for storing games, as it renders loading times virtually non-existent. There is, however, a downside to this use: games constantly save, modify and change files, wearing out blocks much more quickly. Flash drives have also been shown to be effective in web servers for serving static content such as CSS and HTML files.

Although flash drives offer far faster read speeds than HDDs, they have still not become the main medium of data storage. The reason is that HDDs are simply much cheaper, and flash drives still have significant faults. The most critical fault is that each block in flash memory can only be erased approximately 100,000 times (or just 10,000 for NOR flash). [14] This is compounded by flash memory's second significant drawback: when modifying a file, even by a single bit, the entire block must be erased and rewritten. This erase/rewrite cycle slows the write operation considerably, making it actually slower to write a file to flash than to an HDD. [8]
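
To make this cost concrete, here is a minimal C sketch of the read-modify-erase-rewrite cycle that even a one-byte change forces on flash. It is an illustration, not a real driver: the block size, the erase counter and all function names are assumptions.

  #include <stdio.h>
  #include <string.h>

  #define BLOCK_SIZE 4096                      /* assumed erase-block size */

  static unsigned char flash_block[BLOCK_SIZE];
  static int erase_count = 0;                  /* blocks wear out after ~100,000 erases */

  static void flash_erase(unsigned char *block) {
      memset(block, 0xFF, BLOCK_SIZE);         /* erased flash reads as all 1s */
      erase_count++;                           /* each erase consumes lifetime */
  }

  /* Change a single byte: read, modify, erase, rewrite the entire block. */
  static void modify_one_byte(unsigned char *block, int offset, unsigned char value) {
      unsigned char copy[BLOCK_SIZE];
      memcpy(copy, block, BLOCK_SIZE);         /* 1. read the whole block out */
      copy[offset] = value;                    /* 2. apply the one-byte change */
      flash_erase(block);                      /* 3. slow block-wide erase */
      memcpy(block, copy, BLOCK_SIZE);         /* 4. reprogram all 4096 bytes */
  }

  int main(void) {
      flash_erase(flash_block);
      modify_one_byte(flash_block, 42, 0xAB);
      printf("erases used so far: %d\n", erase_count);
      return 0;
  }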

Thus, though flash offers much faster read times, it comes with heavy constraints: a limited number of erase cycles, slow write and erase times, and the necessity of erasing a block at a time.


Traditionally Optimized File Systems

Although the kernel simply asks for a block number regardless of the underlying device, a conventional hard disk drive (HDD) file system is not optimized to work with flash memory. The reason is that conventional hard disks have different constraints from those of flash memory: their primary problem is to reduce seek time, while the primary problem when working with flash memory is to erase in a minimal and balanced way.

The most expensive operation for an HDD is seeking data by relocating the read head and spinning the magnetic disk. A traditional file system therefore optimizes the way it stores data by placing related blocks close together on the disk in order to minimize mechanical movement within the HDD. One of the great advantages of flash memory, which accounts for its fast read speed, is that there is no need to seek data physically. This is also why defragmentation, a procedure used on HDDs to put files into more convenient configurations and thus minimize seek times, loses its purpose in a flash memory context. Indeed, the unnecessary erasures it entails are both inefficient and harmful to a flash memory unit.

These problems follow directly from flash memory's aforementioned constraints: the slow block-sized erasures and the limited number of erase cycles. Because of these, a flash-optimized file system needs to minimize its erase operations and to spread its erasures out so as to avoid the formation of hot spots: sections of memory that have undergone a disproportionately high number of erasures and are thus in danger of burning out. This process of spreading out data is referred to as wear leveling. To minimize hot spots, a system using flash memory should write new data to empty memory blocks. This approach in turn calls for some form of garbage collection to conduct the necessary erasures while the system is idle, which makes sense given how slow erasures are on flash. There is, of course, no such feature in a traditional HDD file system. For these reasons, HDD file systems are unsuitable for use with flash memory.

Flash-Optimized File Systems

Log-based file systems, however, do not share the disadvantages of HDD file systems. Implemented on flash by way of a Flash Translation Layer (FTL), their distinguishing feature is a log that holds the data which must still be written to the device and then writes it all out in one large write. This is useful for flash because it allows the expensive write operation to be timed so as not to waste resources, and because, by filling up full blocks at a time, the log-structured file system ensures that space is used to its fullest capacity. This in turn means that the memory takes comparatively long to fill and that erasures and overwrites occur only when they absolutely must. As has been discussed, this is of great benefit to flash memory.
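
The batching idea can be sketched in a few lines of C: writes accumulate in a RAM buffer and are flushed only once a full erase block's worth of data is ready. The sector and block sizes and all names below are illustrative assumptions rather than details of any particular implementation.

  #include <stdio.h>
  #include <string.h>

  #define SECTOR_SIZE 512
  #define SECTORS_PER_BLOCK 8                  /* one 4 KB erase block, assumed */

  static unsigned char log_buf[SECTORS_PER_BLOCK][SECTOR_SIZE];
  static int log_fill = 0;                     /* sectors buffered so far */

  static void flush_block(void) {
      /* One big sequential program of a whole erase block. */
      printf("flushing %d sectors as one block write\n", log_fill);
      log_fill = 0;
  }

  /* Append a sector to the log; flush only when a full block is ready. */
  static void log_write(const unsigned char *data) {
      memcpy(log_buf[log_fill++], data, SECTOR_SIZE);
      if (log_fill == SECTORS_PER_BLOCK)
          flush_block();
  }

  int main(void) {
      unsigned char sector[SECTOR_SIZE] = {0};
      for (int i = 0; i < 20; i++)
          log_write(sector);                   /* 20 sector writes, only 2 block flushes */
      return 0;
  }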

A log-structured file system also keeps a record of how many times each block has been erased, which supports wear leveling: a process whose purpose is to prevent particular blocks from being erased more often than others and thus, as already discussed, to reduce hot spots. Wear leveling writes data that does not change to blocks that have been erased frequently, decreasing the probability that those blocks will be erased even more. Similarly, it tries to write evenly to all blocks so as to maximize the flash drive's overall lifespan. [3]
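
A minimal sketch of this placement policy, assuming the per-block erase counters just described, could look as follows; the table sizes, counts and names are hypothetical.

  #include <stdio.h>

  #define NUM_BLOCKS 8

  static int erase_count[NUM_BLOCKS] = {5, 120, 3, 98, 3, 40, 77, 1};
  static int is_free[NUM_BLOCKS]     = {1, 1,   0, 1,  1, 0,  1,  1};

  /* Pick the free block with the lowest erase count to avoid hot spots. */
  static int pick_block(void) {
      int best = -1;
      for (int i = 0; i < NUM_BLOCKS; i++)
          if (is_free[i] && (best < 0 || erase_count[i] < erase_count[best]))
              best = i;
      return best;                             /* -1 means no free block: time to clean */
  }

  int main(void) {
      printf("least-worn free block: %d\n", pick_block());  /* prints 7 */
      return 0;
  }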

The FTL is what makes it possible to combine the above features with a more traditional memory-management structure. It maintains a translation table in which each physical memory address is associated with an emulated block sector, allowing a traditional file system that uses block sectors to be used on the flash drive unchanged. The FTL achieves this by giving each block a flag that keeps track of its state.
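
A simplified picture of such a translation table, again as a hedged C sketch: each logical sector maps to a physical page, and an overwrite simply redirects the mapping and invalidates the old page. The structure is an illustrative assumption, not the layout of any specific FTL.

  #include <stdio.h>

  #define NUM_SECTORS 4
  #define NUM_PAGES   16

  static int map[NUM_SECTORS];                 /* logical sector -> physical page */
  static int page_valid[NUM_PAGES];            /* 1 = holds live data */
  static int next_free = 0;                    /* next unwritten page in the log */

  /* Out-of-place update: write to a fresh page, invalidate the old copy. */
  static void ftl_write(int sector) {
      int old = map[sector];
      if (old >= 0)
          page_valid[old] = 0;                 /* old copy becomes garbage */
      map[sector] = next_free;
      page_valid[next_free] = 1;
      next_free++;                             /* a real FTL wraps and garbage-collects */
  }

  int main(void) {
      for (int s = 0; s < NUM_SECTORS; s++)
          map[s] = -1;
      ftl_write(2);
      ftl_write(2);                            /* same logical sector, new physical page */
      printf("sector 2 now lives in page %d\n", map[2]);   /* prints 1 */
      return 0;
  }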

When a block is being written to, the FTL marks the blocks needed as "allocated". This prevents other data from being written to a block that has already been claimed. The FTL then writes the data into the allocated blocks; once this is finished, the relevant block's state becomes "pre-valid", and finally the block is marked as "valid".

On another technical level, the FTL also works with banks. A bank is essentially a group of sequential addresses that keeps track of when it was last updated using timestamps. When the FTL receives a request to write something to memory, it consults a list of banks to determine which area of the drive should be used. It then writes only to that bank until it is full before switching to another. Very importantly, the system also keeps a Cleaning Bank list indicating which banks need to be emptied out. This feature ensures that writes and erasures are never mixed in the same bank and is integral to the whole erasure process, which is in turn one of the key features of the log-based file system. [8]
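
The write sequence from [8] can be sketched as a small state machine; the enum values and function names below are illustrative choices, and the bank bookkeeping is omitted for brevity.

  #include <stdio.h>

  enum block_state { FREE, ALLOCATED, PREVALID, VALID, INVALID };

  static enum block_state state[4] = {FREE, FREE, FREE, FREE};

  /* Walk one block through the write sequence described above. */
  static void write_block(int b) {
      state[b] = ALLOCATED;                    /* reserved: no other data may claim it */
      /* ...data is programmed into the block's pages here... */
      state[b] = PREVALID;                     /* written but not yet committed */
      state[b] = VALID;                        /* mapping updated: data is now live */
  }

  int main(void) {
      write_block(0);
      state[0] = INVALID;                      /* a later overwrite elsewhere invalidates it */
      printf("block 0 state: %d (INVALID)\n", state[0]);
      return 0;
  }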

At the heart of this process is the garbage collector. When the FTL realizes that there is not enough room to write new data onto the drive, it runs a garbage collection routine. This routine selects a segment to be cleaned, copies all of its valid data into a new segment, and then erases everything in the old segment, freeing the otherwise useless invalidated blocks. Furthermore, because the collector does not erase every block the moment it becomes invalidated, but only bank by bank, time is saved when the expensive erase operation is finally called. Due to the slowness of erasures, the kernel typically conducts this cleaning while the drive is idle and many resources are available. [SOURCE]
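
The cleaning pass can be sketched as follows, assuming the collector greedily picks the segment containing the most invalidated pages; the segment layout and all names are assumptions for illustration.

  #include <stdio.h>

  #define NUM_SEGMENTS 4
  #define PAGES_PER_SEGMENT 8

  /* valid[s][p]: 1 = page holds live data, 0 = invalidated garbage */
  static int valid[NUM_SEGMENTS][PAGES_PER_SEGMENT] = {
      {1,1,1,1,1,1,1,1},                       /* all live: a poor cleaning target */
      {0,0,0,1,0,0,1,0},                       /* mostly garbage: a good target */
      {1,0,1,1,0,1,1,1},
      {1,1,0,1,1,1,1,0},
  };

  /* Greedily pick the segment with the most invalidated pages. */
  static int dirtiest_segment(void) {
      int best = 0, best_garbage = -1;
      for (int s = 0; s < NUM_SEGMENTS; s++) {
          int garbage = 0;
          for (int p = 0; p < PAGES_PER_SEGMENT; p++)
              garbage += !valid[s][p];
          if (garbage > best_garbage) { best_garbage = garbage; best = s; }
      }
      return best;
  }

  /* Copy the live pages out, then erase the whole segment in one pass. */
  static void clean(int s) {
      int moved = 0;
      for (int p = 0; p < PAGES_PER_SEGMENT; p++) {
          if (valid[s][p])
              moved++;                         /* live page copied to a fresh segment */
          valid[s][p] = 0;                     /* the single erase frees every page */
      }
      printf("segment %d: moved %d live pages, erased the rest\n", s, moved);
  }

  int main(void) {
      clean(dirtiest_segment());               /* picks segment 1 */
      return 0;
  }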


Thus, the advantages of the log-based file system arise from its log structure, but also from wear leveling and the optimization of writes and erasures at the block level. The former is accomplished by means of the log itself and its intelligent use in deciding where to write; the latter, through technical tools such as allocation flags, banks and, of course, the garbage collector.

Conclusion

In this way, thanks to its wear leveling, banks and garbage collector, the log-based file system is far better suited to working with flash memory than a traditional HDD file system. The latter is utterly unfit for the task because it places primacy on minimizing seeks rather than on minimizing and managing erasures. Dealing intelligently with erasures is extremely important for a flash memory file system, since that memory type's particular weaknesses all have to do with erasing: the limited number of erase cycles, the necessity of erasing by the block, and the relative slowness of the erasures themselves. A good flash memory file system must therefore be built to work around these weaknesses, and this is precisely why older disk-based file systems are not suitable for flash memory while log-based file systems are.

Questions

  1. Even though flash drives offer much faster reads than traditional HDDs, why are HDDs still the main method of data storage?
  2. Writing and erasing data are costly operations for a flash-based storage drive. Why does modifying data (even a single bit) take the most time?
  3. Why is the Flash Translation Layer so important to a flash drive's functionality? Why can the traditional interface not be used to deal with the block layer directly?
  4. What particularities/deficiencies of flash memory does any file system that uses it have to take into account? What are some ways of dealing with them?

References

[1] Kim, Han-joon; Lee, Sang-goo. A New Flash Memory Management for Flash Storage System. IEEE Xplore. Dept. of Comput. Sci., Seoul Nat. Univ., 06 Aug 2002. <http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=812717&tag=1#>

[2] Smith, Lance. NAND Flash Solid State Storage Performance and Capability. Flash Memory Summit. SNIA Education Committee, 18 Aug 2009. <http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2009/20090812_T1B_Smith.pdf>

[3] Chang, LiPin. On Efficient Wear Leveling for Large-Scale Flash-Memory Storage Systems. Association for Computing Machinery (ACM). Dept. of Comput. Sci.,Nat. ChiaoTung Univ., 15 Mar 2007. <http://portal.acm.org/citation.cfm?id=1244248>

[4] Nath, Suman; Gibbons, Phillip. Online maintenance of very large random samples on flash storage. Association for Computing Machinery (ACM). The VLDB Journal, 27 Jul 2007. <http://portal.acm.org/citation.cfm?id=1731355>

[5] Lim, Seung-Ho; Park, Kyu-Ho. An Efficient NAND Flash File System for Flash Memory Storage. CORE Laboratory. IEEE Transactions On Computers, Jul 2006. <http://vlsi.kaist.ac.kr/paper_list/2006_TC_CFFS.pdf>

[6] NAND vs. NOR Flash Memory Technology Overview. RMG and Associates. Toshiba America, Accessed 14 Oct 2010. <http://maltiel-consulting.com/NAND_vs_NOR_Flash_Memory_Technology_Overview_Read_Write_Erase_speed_for_SLC_MLC_semiconductor_consulting_expert.pdf>

[7] Bez, Roberto; Camerlenghi, Emilio; Modelli, Alberto; Visconti, Angelo. Introduction to Flash Memory. IEEE Xplore. STMicroelectronics, 21 May 2003. <http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1199079&tag=1>

[8] Kawaguchi, Atsuo; Nishioka, Shingo; Motoda, Hiroshi. A Flash-Memory Based File System. CiteSeerX. Advanced Research Laboratory, Hitachi, Ltd., 1995. <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.160.5142>

[9] Rosenblum, Mendel; Ousterhout, John. The Design and Implementation of a Log-structured File System. Association for Computing Machinery (ACM). University of California at Berkeley, Feb 1992. <http://portal.acm.org/citation.cfm?id=146943&coll=GUIDE&dl=GUIDE&CFID=108397378&CFTOKEN=72657973&ret=1#Fulltext>

[10] Shimpi, Anand. Intel X25-M SSD: Intel Delivers One of the World's Fastest Drives. AnandTech. AnandTech, 8 Sep 2008. <http://www.anandtech.com/show/2614>

[11] Shimpi, Anand. The SSD Relapse: Understanding and Choosing the Best SSD. AnandTech. AnandTech, 30 Aug 2009. <http://www.anandtech.com/show/2829>

[12] Shimpi, Anand. The SSD Anthology: Understanding SSDs and New Drives from OCZ. AnandTech. AnandTech, 18 Mar 2009. <http://www.anandtech.com/show/2738>

[13] Corbet, Jonathan. Solid-State Storage Devices and the Block Layer. Linux Weekly News. Linux Weekly News, 4 Oct 2010. <http://lwn.net/Articles/408428/>

[14] Woodhouse, David. JFFS : The Journalling Flash File System. CiteSeerX. Red Hat, Inc, Accessed 14 Oct 2010. <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.6156&rep=rep1&type=pdf>

[15] Agrawal, Nitin; Prabhakaran, Vijayan; Wobber, Ted; Davis, John; Manasse, Mark; Panigrahy, Rina. Design Tradeoffs for SSD Performance. Association for Computing Machinery (ACM), USENIX 2008 Annual Technical Conference, 2008. <http://portal.acm.org/citation.cfm?id=1404014.1404019>

[16] Lee, Sang-Won, et al. A Log Buffer-Based Flash Translation Layer Using Fully-Associative Sector Translation. Association for Computing Machinery (ACM). ACM Transactions on Embedded Computing Systems (TECS), Jul 2007. <http://portal.acm.org/citation.cfm?id=1275990>

[17] Reach New Heights in Computing Performance. Micron Technology Inc. Micron Technology Inc., Accessed 14 Oct 2010. <http://www.micron.com/products/solid_state_storage/client_ssd.html>

[18] Flash Memories. 1 ed. New York: Springer, 1999. Print.

[19] Nonvolatile Memory Technologies with Emphasis on Flash: A Comprehensive Guide to Understanding and Using Flash Memory Devices. IEEE Press Series on Microelectronic Systems. New York: Wiley-IEEE Press, 2008. Print.

[20] Nonvolatile Semiconductor Memory Technology: A Comprehensive Guide to Understanding and Using NVSM Devices. IEEE Press Series on Microelectronic Systems. New York: Wiley-IEEE Press, 1997. Print.

External links

Relevant Wikipedia articles: Flash Memory, LogFS, Hard Disk Drives, Wear Leveling, Hot Spots, Solid State Drive.