COMP 3000 Essay 1 2010 Question 10

=Question=

How do the constraints of flash storage affect the design of flash-optimized file systems? Explain by contrasting with hard disk-based file systems.

=Answer=
First introduced in the late 1980s, flash memory is a light, compact, shock-resistant and quickly readable type of non-volatile storage, meaning that it retains its contents without power. Because of the particular limitations of this kind of memory, flash file systems require a fundamentally different architecture than disk-based file systems. The limitations that force this are flash memory's limited number of erase cycles, the slowness of its writes and erasures, and its need to conduct erasures one entire block at a time. A typical disk-based file system is not suitable for working with flash: it erases far too frequently and indiscriminately, while being optimized for other constraints that do not affect flash memory at all. A different solution is therefore necessary, and that solution is the log-based file system. It is far better suited to working with flash because its log-structured writes and its wear-leveling extend a flash unit's lifespan, while its careful management of writes and erasures improves efficiency.


==Flash Memory==
Though flash memory has many advantages, it also has several disadvantages which are of key relevance for anyone attempting to implement a file system that works with it.
 
Flash memory is also divided into two sub-types, each with its own advantages and disadvantages, which are briefly examined here. The two types of flash memory are NOR and NAND: NOR has the faster read times, but is much slower at writing. NAND, on the other hand, has much more capacity, faster write times, is less expensive, and can last about ten times longer. This makes NAND the superior form of flash for long-term storage, while NOR is preferred for embedded, read-only uses such as executing code directly from the chip. [2] Although these two types of flash are different, they are not sufficiently so as to merit an independent file-system approach for each one, so we shall not differentiate between them for the rest of this article.
 
The use of flash has grown dramatically in recent years, primarily in portable devices and portable data storage. While this growth has been partially due to falling prices, it is more a result of flash's strengths, which make it a near-optimal fit for the many portable devices currently in vogue. Besides compactness, resistance to shock and the ability to retain data without a power supply, these strengths include extremely fast read times: the read speed of a flash device can be as much as fourteen times that of a hard disk drive. [17]


Despite these advantages, however, flash drives are not the preferred storage method for home PCs. The reason for this is not just that other kinds of storage are still significantly cheaper, but also that flash memory has certain limitations built into it. The most critical of these is that each block of flash memory can only be erased approximately 100,000 times (or only about 10,000 times for NOR flash).[14] This is compounded by flash memory's second significant drawback: when modifying a file, even by a single bit, the entire block must be erased and rewritten. This erase/rewrite cycle slows down the write operation considerably, making it actually slower to write or erase a file on flash than on a hard disk drive (HDD).[8]
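
To make this constraint concrete, the short Python sketch below models it under toy assumptions (the page and block sizes, class and function names are invented for illustration and are far smaller than real NAND geometry): pages can only be programmed while their block is in the erased state, so changing even a single byte means copying out the surviving data, erasing the whole block and reprogramming every page, which consumes one of the block's limited erase cycles.

<pre>
# Toy model of a NAND flash block: pages can only be written ("programmed")
# while the block is in the erased state, so modifying one byte means
# copying the block out, erasing it, and programming every page again.

PAGE_SIZE = 4        # bytes per page (tiny, for illustration only)
PAGES_PER_BLOCK = 4

class FlashBlock:
    def __init__(self):
        self.pages = [bytearray(b'\xff' * PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]
        self.erased = [True] * PAGES_PER_BLOCK
        self.erase_count = 0

    def program_page(self, n, data):
        if not self.erased[n]:
            raise RuntimeError("page must be erased before it can be programmed")
        self.pages[n][:] = data
        self.erased[n] = False

    def erase(self):
        # Erasure works only on the whole block, never on a single page.
        self.pages = [bytearray(b'\xff' * PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]
        self.erased = [True] * PAGES_PER_BLOCK
        self.erase_count += 1

def modify_one_byte(block, page_no, offset, value):
    """Change a single byte 'in place': copy, erase, reprogram the whole block."""
    snapshot = [bytes(p) for p in block.pages]     # 1. read out surviving data
    block.erase()                                  # 2. erase the entire block
    for i, old in enumerate(snapshot):             # 3. rewrite every page
        data = bytearray(old)
        if i == page_no:
            data[offset] = value
        block.program_page(i, data)

blk = FlashBlock()
for i in range(PAGES_PER_BLOCK):
    blk.program_page(i, bytes([i] * PAGE_SIZE))
modify_one_byte(blk, page_no=2, offset=1, value=0x7f)
print("erase cycles spent to change one byte:", blk.erase_count)  # -> 1
</pre>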


Thus, though flash does offer much faster read times, it comes with heavy constraints: a limited number of erase cycles, slow write and erase times, and the necessity of being erased a whole block at a time.


==Traditionally Optimized File Systems==


An HDD file system, however, is not optimized to work with flash memory. The reason is that conventional hard disks have different constraints from those of flash memory: their primary problem is to reduce seek time, while the primary problem when working with flash memory is to erase in a minimal and balanced way.


The most time-consuming operation for an HDD is seeking data: relocating the read head and spinning the magnetic disk. A traditional file system therefore optimizes the way it stores data by placing related blocks close together on the disk in order to minimize mechanical movement within the HDD. One of the great advantages of flash memory, which accounts for its fast read speed, is that there is no need to seek data physically. This is also why defragmentation, a procedure used with HDDs to put files into more convenient configurations and thus minimize seek times, loses its purpose in a flash memory context. Indeed, the unnecessary erasures that it entails are both inefficient and harmful for a flash memory unit.


This need to erase minimally comes directly out of flash memory's aforementioned constraints: the slow, block-sized erasures and the limited number of erase cycles. Because of these, a flash-optimized file system needs to minimize its erase operations and to spread its erasures out in such a way as to avoid the formation of hot-spots: sections of memory which have undergone a disproportionately high number of erasures and are thus in danger of burning out. This process of spreading out data is referred to as ''wear leveling''. To minimize hot-spots, a system using flash memory would have to write new data to empty memory blocks. [3] This method would also call for some sort of garbage collection to conduct the necessary erasure operations while the system is idle, since erasures are so slow. Of course, there is no such feature in a traditional HDD file system. For these reasons, HDD file systems are unsuitable for use with flash memory.
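
The following Python sketch illustrates the wear-leveling idea only; the allocator interface is made up rather than taken from any of the cited designs. It keeps an erase count for every block and always places new data in the free block that has been erased the fewest times, so that no single block turns into a hot-spot.

<pre>
# Minimal wear-leveling allocator sketch: new data always goes to the free
# block with the lowest erase count, spreading erasures evenly.

class WearLevelAllocator:
    def __init__(self, num_blocks):
        self.erase_count = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))

    def allocate(self):
        if not self.free_blocks:
            raise RuntimeError("no free blocks: garbage collection needed")
        # Pick the least-worn free block to avoid hot-spots.
        block = min(self.free_blocks, key=lambda b: self.erase_count[b])
        self.free_blocks.remove(block)
        return block

    def reclaim(self, block):
        # Erasing a block wears it out a little and returns it to the free pool.
        self.erase_count[block] += 1
        self.free_blocks.add(block)

alloc = WearLevelAllocator(num_blocks=4)
for i in range(20):            # 20 writes spread over 4 blocks
    b = alloc.allocate()
    alloc.reclaim(b)           # pretend the data was later invalidated and erased
print(alloc.erase_count)       # -> [5, 5, 5, 5]: wear is spread evenly
</pre>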


==Flash Optimized File Systems==


Log-based file systems do not share the disadvantages of HDD file systems. On flash, this design is implemented by the Flash Translation Layer (FTL); its distinguishing feature is a log that holds the data which must still be written to the device and then writes it all out in one big write. This is useful for flash systems because it allows them to time the expensive write operation so as not to waste resources. It is also useful because, by filling up whole blocks at a time, the log-structured file system makes sure that space is used to its fullest capacity. This in turn ensures that it takes a comparatively long time for the memory to fill up, so that erasures and over-writes occur only when they absolutely have to. As has been discussed, this is of great benefit to flash memory.
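
As a rough illustration of the log idea (a toy Python sketch, not the actual FTL described in [8]; the block size and data structures are assumptions), the writer below buffers incoming page updates in memory and flushes them as one sequential, block-sized write, so that each block is filled to capacity before a new one is touched.

<pre>
# Toy log-structured writer: updates are appended to an in-memory log and
# flushed as one big, block-sized sequential write.

PAGES_PER_BLOCK = 4

class LogWriter:
    def __init__(self, device):
        self.device = device      # list of blocks; each block is a list of pages
        self.log = []             # pending (logical_page, data) records
        self.mapping = {}         # logical page -> (block, slot) of its latest copy

    def write(self, logical_page, data):
        self.log.append((logical_page, data))
        if len(self.log) == PAGES_PER_BLOCK:
            self.flush()

    def flush(self):
        if not self.log:
            return
        block_no = len(self.device)                 # always append to a fresh block
        self.device.append([None] * PAGES_PER_BLOCK)
        for slot, (lpage, data) in enumerate(self.log):
            self.device[block_no][slot] = data      # one sequential block write
            self.mapping[lpage] = (block_no, slot)  # any older copy is now invalid
        self.log.clear()

dev = []
log = LogWriter(dev)
for p in range(6):
    log.write(p, f"data-{p}")
log.flush()                       # push out the partially filled last block
print(len(dev), "blocks written;", "page 5 lives at", log.mapping[5])
</pre>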


A log-structured file system also has advantages when it comes to wear leveling. These advantages mostly come from the use of banks and the garbage collector. A bank is essentially a group of sequential addresses which keeps track of when it was last updated using timestamps. When the FTL receives a request to write something to memory, it uses a list of these banks to determine which area of the drive should be used (i.e., the bank with the fewest timestamps). It then writes only to that bank until it is full, before switching to another. Very importantly, the system also keeps a Cleaning Bank list which indicates which banks need to be emptied out. This makes certain that writes and erasures are never mixed in the same bank, and it is also integral to the whole erasure process, which is in turn one of the key features of the log-based file system. [8]
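
The Python sketch below illustrates this bank bookkeeping under similarly invented assumptions (the Bank class, its capacity and the timestamp clock are made up for the example): the FTL picks the bank with the fewest timestamps, writes to it until it is full, and then moves it onto the cleaning list so that it never receives writes while it is waiting to be erased.

<pre>
# Sketch of bank selection: pick the bank with the fewest timestamps, write
# to it until it is full, then move it to the cleaning list.

import itertools

BANK_CAPACITY = 4
clock = itertools.count()              # monotonically increasing "timestamps"

class Bank:
    def __init__(self, bank_id):
        self.bank_id = bank_id
        self.timestamps = []           # one entry per update to this bank
        self.data = []

    def is_full(self):
        return len(self.data) >= BANK_CAPACITY

class BankedFTL:
    def __init__(self, num_banks):
        self.bank_list = [Bank(i) for i in range(num_banks)]
        self.cleaning_list = []        # full banks waiting to be erased
        self.active = None             # bank currently receiving writes

    def _pick_bank(self):
        if not self.bank_list:
            raise RuntimeError("all banks are full: run the garbage collector")
        # The least-used bank (fewest timestamps) becomes the active one.
        return min(self.bank_list, key=lambda b: len(b.timestamps))

    def write(self, data):
        if self.active is None or self.active.is_full():
            self.active = self._pick_bank()
        bank = self.active
        bank.data.append(data)
        bank.timestamps.append(next(clock))
        if bank.is_full():
            # Full banks move to the cleaning list so that later writes and
            # the pending erasure never target the same bank.
            self.bank_list.remove(bank)
            self.cleaning_list.append(bank)
        return bank.bank_id

ftl = BankedFTL(num_banks=2)
placements = [ftl.write(f"record-{i}") for i in range(6)]
print(placements)                              # -> [0, 0, 0, 0, 1, 1]
print([b.bank_id for b in ftl.cleaning_list])  # -> [0]
</pre>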


At the heart of this process is the garbage collector: when the FTL realizes that there is not enough room to write new data onto the drive, it runs a garbage collection routine. This routine selects a segment to be cleaned, copies all of the valid data into a new segment, then erases everything in the old segment. This frees up the otherwise useless invalidated blocks. Furthermore, because the collector does not erase every block the moment it becomes invalidated, but only does so bank by bank, time is saved when the expensive erase operation is called. Due to the slowness of erasures, the kernel typically conducts this cleaning while the drive is idle and many resources are available for the process.
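
Finally, here is a hedged Python sketch of the cleaning routine itself (the segment layout and names are again invented): the collector relocates only the still-valid copy of each page into a fresh segment, updates the mapping, and then erases the old segment in one operation, reclaiming all of its invalidated space at once.

<pre>
# Sketch of log-style garbage collection: copy the live pages out of a dirty
# segment, then erase the whole segment in one expensive operation.

class Segment:
    def __init__(self):
        self.pages = []            # (logical_page, data) records, in write order
        self.erase_count = 0

    def erase(self):
        self.pages = []
        self.erase_count += 1

def write(segments, mapping, lpage, data):
    # Appending a new copy of a logical page invalidates any older copy.
    seg_idx = len(segments) - 1
    segments[seg_idx].pages.append((lpage, data))
    mapping[lpage] = (seg_idx, len(segments[seg_idx].pages) - 1)

def collect(segments, mapping, victim_idx):
    """Clean one segment: relocate its valid pages, then erase it whole."""
    victim = segments[victim_idx]
    fresh_idx = len(segments)
    segments.append(Segment())
    for slot, (lpage, data) in enumerate(victim.pages):
        # A copy is valid only if the mapping still points at this exact slot.
        if mapping.get(lpage) == (victim_idx, slot):
            segments[fresh_idx].pages.append((lpage, data))
            mapping[lpage] = (fresh_idx, len(segments[fresh_idx].pages) - 1)
    victim.erase()                 # one block-sized erase frees every stale copy

segments, mapping = [Segment()], {}
for lpage, data in [(1, "old"), (1, "new"), (2, "live")]:
    write(segments, mapping, lpage, data)

collect(segments, mapping, victim_idx=0)
print(len(segments[0].pages), "pages left in the cleaned segment")   # -> 0
print(segments[1].pages)        # [(1, 'new'), (2, 'live')] survived the move
</pre>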


Thus, the advantages of the log-based file system arise from its log structure, but they also have to do with wear leveling and the optimization of writes and erasures at the block level. These advantages are realized through the log itself, through the use of banks, and through the garbage collector.


==Conclusion==


In this way, thanks to its log structure and its wear-leveling features, the log-based file system is far better suited to working with flash memory than a traditional HDD file system. The latter is unfit for this task because it places primacy on the minimization of seeks rather than on the minimization and management of erasures. Dealing smartly with erasures is extremely important for a flash memory file system, as that memory type's particular weaknesses - the limited number of erase cycles, the necessity to erase a whole block at a time and the relative slowness of the erasures themselves - all have to do with erasing. A good flash file system must therefore be built with the aim of working around these weaknesses, and this is precisely why older disk-based file systems are not suitable for flash memory while log-based file systems are.


=Questions=
#Even though flash drives are exponentially faster than traditional HDDs, why are HDDs still the main method of data storage?
#Writing and erasing data are costly operations for a flash based storage drive. Why does modifying data (even a single bit) take the most amount of time?
#Why is the Flash Translation Layer so important to a flash drive's functionality? Why can you not use the traditional interface to deal with the block layer?
#What particularities/deficiencies of flash memory does any file-system which implements it have to take into account? What are some ways of dealing with them?


=References=

[1] Kim, Han-joon; Lee, Sang-goo. A New Flash Memory Management for Flash Storage System. IEEE Xplore. Dept. of Comput. Sci., Seoul Nat. Univ., 06 Aug 2002. <http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=812717&tag=1#>

[2] Smith, Lance. NAND Flash Solid State Storage Performance and Capability. Flash Memory Summit. SNIA Education Committee, 18 Aug 2009. <http://www.flashmemorysummit.com/English/Collaterals/Proceedings/2009/20090812_T1B_Smith.pdf>

[3] Chang, LiPin. On Efficient Wear Leveling for Large-Scale Flash-Memory Storage Systems. Association for Computing Machinery (ACM). Dept. of Comput. Sci.,Nat. ChiaoTung Univ., 15 Mar 2007. <http://portal.acm.org/citation.cfm?id=1244248>

[4] Nath, Suman; Gibbons, Phillip. Online maintenance of very large random samples on flash storage. Association for Computing Machinery (ACM). The VLDB Journal, 27 Jul 2007. <http://portal.acm.org/citation.cfm?id=1731355>

[5] Lim, Seung-Ho; Park; Kyu-Ho. An Efficient NAND Flash File System for Flash Memory Storage. CORE Laboratory. IEEE Transactions On Computers, Jul 2006. <http://vlsi.kaist.ac.kr/paper_list/2006_TC_CFFS.pdf>

[6] NAND vs. NOR Flash Memory Technology Overview. RMG and Associates. Toshiba America, accessed 14 Oct 2010. <http://maltiel-consulting.com/NAND_vs_NOR_Flash_Memory_Technology_Overview_Read_Write_Erase_speed_for_SLC_MLC_semiconductor_consulting_expert.pdf>

[7] Bez, Roberto; Camerlenghi, Emilio; Modelli, Alberto; Visconti, Angelo. Introduction to Flash Memory. IEEE Xplore. STMicroelectronics, 21 May 2003. <http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1199079&tag=1>

[8] Kawaguchi, Atsuo; Nishioka, Shingo; Motoda, Hiroshi. A Flash-Memory Based File System. CiteSeerX. Advanced Research Laboratory, Hitachi, Ltd., 1995. <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.160.5142>

[9] Rosenblum, Mendel; Ousterhout, John. The Design and Implementation of a Log-structured File System. Association for Computing Machinery (ACM). University of California at Berkeley, Feb 1992. <http://portal.acm.org/citation.cfm?id=146943&coll=GUIDE&dl=GUIDE&CFID=108397378&CFTOKEN=72657973&ret=1#Fulltext>

[10] Shimpi, Anand. Intel X25-M SSD: Intel Delivers One of the World's Fastest Drives. AnAndTech. AnAndTech, 8 Sep 2008. <http://www.anandtech.com/show/2614>

[11] Shimpi, Anand. The SSD Relapse: Understanding and Choosing the Best SSD. AnAndTech. AnAndTech, 30 Aug 2009. <http://www.anandtech.com/show/2829>

[12] Shimpi, Anand. The SSD Anthology: Understanding SSDs and New Drives from OCZ. AnAndTech. AnAndTech, 18 Mar 2009. <http://www.anandtech.com/show/2738>

[13] Corbet, Jonathan. Solid-State Storage Devices and the Block Layer. Linux Weekly News. Linux Weekly News, 4 Oct 2010. <http://lwn.net/Articles/408428/>

[14] Woodhouse, David. JFFS : The Journalling Flash File System. CiteSeerX. Red Hat, Inc, Accessed 14 Oct 2010. <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.6156&rep=rep1&type=pdf>

[15] Agrawal, Nitin; Prabhakaran, Vijayan; Wobber, Ted; Davis, John; Manasse, Mark; Panigrahy, Rina. Design Tradeoffs for SSD Performance. Association for Computing Machinery (ACM), USENIX 2008 Annual Technical Conference, 2008. <http://portal.acm.org/citation.cfm?id=1404014.1404019>

[16] Lee, Sang-Won, et al. A Log Buffer-Based Flash Translation Layer Using Fully-Associative Sector Translation. Association for Computing Machinery (ACM). ACM Transactions on Embedded Computing Systems (TECS), Jul 2007. <http://portal.acm.org/citation.cfm?id=1275990>

[17] Reach New Heights in Computing Performance. Micron Technology Inc. Micron Technology Inc., accessed 14 Oct 2010. <http://www.micron.com/products/solid_state_storage/client_ssd.html>

[18] Flash Memories. 1 ed. New York: Springer, 1999. Print.

[19] Nonvolatile Memory Technologies with Emphasis on Flash: A Comprehensive Guide to Understanding and Using Flash Memory Devices. IEEE Press Series on Microelectronic Systems. New York: Wiley-IEEE Press, 2008. Print.

[20] Nonvolatile Semiconductor Memory Technology: A Comprehensive Guide to Understanding and Using NVSM Devices. IEEE Press Series on Microelectronic Systems. New York: Wiley-IEEE Press, 1997. Print.

=External links=

Relevant Wikipedia articles: [http://en.wikipedia.org/wiki/Flash_Memory Flash Memory], [http://en.wikipedia.org/wiki/LogFS LogFS], [http://en.wikipedia.org/wiki/Hard_disk Hard Disk Drives], [http://en.wikipedia.org/wiki/Wear_leveling Wear Leveling], [http://en.wikipedia.org/wiki/Hot_spot_%28computer_science%29 Hot Spots], [http://en.wikipedia.org/wiki/Solid-state_drive Solid State Drive].