I/O scheduling with mapping cache awareness for flash-based storage systems
Proceedings of the 13th International Conference on Embedded Software, 2016•dl.acm.org
NAND flash memory has been the default storage component in mobile systems. One of the key technologies for flash management is the address mapping scheme between logical addresses and physical addresses, which deals with the inability to update data in place in flash memory. A demand-based page-level mapping cache is often applied to meet the cache size constraints and performance requirements of mobile storage systems. However, recent studies showed that the management overhead of mapping cache schemes is sensitive to the host I/O patterns, especially when the mapping cache is small. This paper presents a novel I/O scheduling scheme, called MAP, to alleviate this problem. The proposed scheduling approach reorders I/O requests for performance improvement from two angles: prioritizing the requests that will hit in the mapping cache, and grouping requests with related logical addresses into large batches. Experimental results show that MAP improved upon traditional I/O schedulers by 30% and 8% in terms of read and write latencies, respectively.
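The two reordering angles the abstract describes can be sketched as a small scheduling function. This is a hedged illustration only, not the paper's actual MAP implementation: the names (`Request`, `ENTRIES_PER_MAP_PAGE`, `cached_map_pages`) and the assumption that one translation page holds a fixed number of mapping entries are ours, chosen to show the idea of serving cache hits first and batching misses by the translation page they share.

```python
# Sketch of a MAP-style scheduler (illustrative assumptions, not the paper's code):
# 1) serve requests whose mapping entry is already cached (no translation-page read),
# 2) group remaining requests by the translation page holding their mapping entry,
#    so one mapping-table load can serve a whole batch of related logical addresses.

from dataclasses import dataclass

ENTRIES_PER_MAP_PAGE = 512  # assumed mapping entries per translation page


@dataclass
class Request:
    lpn: int        # logical page number of the host request
    is_write: bool


def map_page(lpn: int) -> int:
    """Index of the translation page that holds the mapping entry for lpn."""
    return lpn // ENTRIES_PER_MAP_PAGE


def schedule(queue: list[Request], cached_map_pages: set[int]) -> list[Request]:
    """Reorder the queue: mapping-cache hits first, then misses grouped
    by shared translation page (stable sort keeps arrival order within a group)."""
    hits = [r for r in queue if map_page(r.lpn) in cached_map_pages]
    misses = [r for r in queue if map_page(r.lpn) not in cached_map_pages]
    misses.sort(key=lambda r: map_page(r.lpn))
    return hits + misses


queue = [Request(5, False), Request(2000, False),
         Request(7, True), Request(2010, False)]
order = schedule(queue, cached_map_pages={0})
# lpn 5 and 7 (translation page 0, cached) are served first; the two misses
# on translation page 3 (lpn 2000 and 2010) are batched together afterward.
```

In this toy model, batching the two misses means the mapping page for logical pages 2000 and 2010 is fetched once instead of twice, which is the overhead reduction the paper targets for small mapping caches.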