CORGi @ CMU


Computer Organization Research Group led by Prof. Nathan Beckmann



Making Data Access Faster and Cheaper with Smarter Flash Caches

Web services at the scale of Google and other hyperscale providers consume enormous amounts of data. These large datasets are essential to the value that web services provide, and to be useful they must be accessible quickly and cheaply. Storing data at this scale is extremely expensive in current storage systems. The traditional solution is a cache hierarchy that lets an application's “hot” data be accessed quickly while the bulk of the data is stored cheaply. Unfortunately, datasets have grown to the point where even the caches themselves are very expensive, so industry is increasingly moving caches onto cheaper storage media.
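To make the hot/cold split concrete, the Python sketch below is a minimal, hypothetical illustration (TieredCache, hot_capacity, and backing_store are made-up names, not the project's design) of a lookup path that serves hot objects from a small, fast tier, such as DRAM, and falls back to a large, cheap backing store on a miss.

from collections import OrderedDict

class TieredCache:
    """Minimal sketch of a two-tier cache hierarchy: a small, fast 'hot' tier
    (e.g., DRAM) in front of a large, cheap backing store (e.g., flash or disk).
    Names and structure are illustrative, not the project's design."""

    def __init__(self, hot_capacity, backing_store):
        self.hot = OrderedDict()          # LRU-ordered hot tier
        self.hot_capacity = hot_capacity
        self.backing = backing_store      # dict-like, cheap but slow store

    def get(self, key):
        if key in self.hot:               # fast path: hot-tier hit
            self.hot.move_to_end(key)
            return self.hot[key]
        value = self.backing[key]         # slow path: fetch from the cheap tier
        self._admit(key, value)           # promote into the hot tier
        return value

    def _admit(self, key, value):
        self.hot[key] = value
        if len(self.hot) > self.hot_capacity:
            self.hot.popitem(last=False)  # evict the least recently used object

# Example use: cache = TieredCache(hot_capacity=1000, backing_store=some_dict_like_store)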

Among available storage media, flash offers the best mix of capacity, performance, and cost, often beating spinning disk, DRAM, and emerging NVMs by orders of magnitude. However, flash also has two properties that complicate cache design: write asymmetry and write amplification. These properties are especially problematic for caches because caching is a uniquely write-heavy workload: every admitted object incurs a write, and caches admit new objects constantly. This project will dramatically reduce the cost of accessing large datasets quickly by designing new, smarter caches optimized for the unique challenges of flash.
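Because every admission costs a flash write, one generic way to tame write amplification is to filter admissions. The Python sketch below is hypothetical (SecondHitAdmission and the window length are illustrative, and this is a common technique rather than the design this project proposes): it writes an object to flash only on its second request within a time window.

import time

class SecondHitAdmission:
    """Minimal sketch of a write-reducing admission filter for a flash cache:
    an object is written to flash only on its second request within a time
    window. A generic technique, not necessarily this project's design."""

    def __init__(self, window_seconds=600):
        self.window = window_seconds
        self.recently_seen = {}            # key -> timestamp of last unadmitted request

    def should_admit(self, key, now=None):
        now = time.time() if now is None else now
        last = self.recently_seen.get(key)
        if last is not None and now - last <= self.window:
            del self.recently_seen[key]    # second hit inside the window: admit
            return True
        self.recently_seen[key] = now      # first hit: remember it, skip the write
        return False

The point of the sketch is only that admission decisions, not just eviction decisions, determine how many writes the flash absorbs.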


Projects

Smart Caching at Datacenter Scale

Publications

Brief Announcement: Spatial Locality and Granularity Change in Caching [pdf]

Nathan Beckmann, Phillip Gibbons, Charles McGuffey. SPAA 2022.

Spatial Locality and Granularity Change in Caching [pdf]

Nathan Beckmann, Phillip Gibbons, Charles McGuffey. arXiv 2022.

Brief Announcement: Block-Granularity-Aware Caching [pdf]

Nathan Beckmann, Phillip Gibbons, Charles McGuffey. SPAA 2021.