Hello!

I’m Nirav, a third-year Ph.D. student in the Computer Science Department (CSD) at Carnegie Mellon University (CMU), where I’m fortunate to be advised by Prof. Justine Sherry. My research area is Networking, and I’m part of the Systems, Networking, and Performance (SNAP) group at CMU; I’m secondarily affiliated with CONIX and CMU’s Parallel Data Lab (PDL). My research interests broadly lie at the intersection of systems and performance modeling (i.e., building computer systems and saying something mathematically rigorous about their performance).

Prior to starting graduate school, I completed my undergraduate degree (B.A.Sc.) in Computer Engineering at the University of Toronto, Canada, in 2018. Despite my apparent penchant for cold, capricious weather, I spent most of my life in Mumbai, India. In my spare time, I enjoy reading (fiction, philosophy), imbibing copious amounts of espresso, and playing chess, golf, and soccer.


Publications

Caching with Delayed Hits
Nirav Atre, Justine Sherry, Weina Wang, Daniel Berger
ACM SIGCOMM 2020

Abstract
Caches are at the heart of latency-sensitive systems. In this paper, we identify a growing challenge for the design of latency-minimizing caches called delayed hits. Delayed hits occur at high throughput, when multiple requests to the same object queue up before an outstanding cache miss is resolved. This effect increases latencies beyond the predictions of traditional caching models and simulations; in fact, caching algorithms are designed as if delayed hits simply didn’t exist. We show that traditional caching strategies – even so-called ‘optimal’ algorithms – can fail to minimize latency in the presence of delayed hits. We design a new, latency-optimal offline caching algorithm called BELATEDLY which reduces average latencies by up to 45% compared to the traditional, hit-rate optimal Belady’s algorithm. Using BELATEDLY as our guide, we show that incorporating an object’s ‘aggregate delay’ into online caching heuristics can improve latencies for practical caching systems by up to 40%. We implement a prototype, Minimum-AggregateDelay (MAD), within a CDN caching node. Using a CDN production trace and backends deployed in different geographic locations, we show that MAD can reduce latencies by 12-18% depending on the backend RTTs.
BibTeX
@inproceedings{atre-sigcomm20,
  author = {Atre, Nirav and Sherry, Justine and Wang, Weina and Berger, Daniel},
  title = {Caching with Delayed Hits},
  booktitle = {Proceedings of the 2020 Conference of the ACM Special Interest Group on Data Communication (SIGCOMM)},
  series = {SIGCOMM '20},
  year = {2020},
  publisher = {ACM},
  address = {New York, NY, USA},
  code = {https://github.com/cmu-snap/Delayed-Hits}
}
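
For intuition about the delayed-hits phenomenon described in the abstract above, here is a small, self-contained Python sketch (my illustration, not code from the paper or the linked repository). It contrasts the traditional hit/miss accounting of latency with accounting that also charges requests arriving during an outstanding fetch for their wait time; the fetch latency Z and the request timestamps are made-up values.

# Minimal sketch: why "delayed hits" inflate latency beyond a hit/miss model.
# Assumes a single cached object, a fixed backend fetch latency Z, and
# timestamped requests; all numbers below are illustrative.

Z = 100  # hypothetical fetch (miss) latency, in arbitrary time units

def latency_classic(request_times):
    """Traditional accounting: the first request is a miss (cost Z),
    every later request to the now-cached object is a free hit."""
    total = 0
    for i, _t in enumerate(request_times):
        total += Z if i == 0 else 0
    return total

def latency_with_delayed_hits(request_times):
    """Delayed-hit accounting: requests that arrive while the miss is
    still outstanding must wait for that same fetch to complete."""
    total = 0
    fetch_done = None  # time at which the in-flight fetch completes
    for t in request_times:
        if fetch_done is None:       # cold miss: start the fetch
            fetch_done = t + Z
            total += Z
        elif t < fetch_done:         # delayed hit: wait for the in-flight fetch
            total += fetch_done - t
        else:                        # true hit: object is already cached
            total += 0
    return total

if __name__ == "__main__":
    # Four requests to the same object, 10 time units apart, with Z = 100:
    reqs = [0, 10, 20, 30]
    print("classic model:    ", latency_classic(reqs))            # 100
    print("with delayed hits:", latency_with_delayed_hits(reqs))  # 100 + 90 + 80 + 70 = 340

The online heuristic mentioned in the abstract (MAD) builds on this observation by using an object’s ‘aggregate delay’ as its ranking signal when deciding what to keep in the cache.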

Research Talks

Caching with Delayed Hits

Industry Experience

Summer 2020 (4 months)
Summer 2018 (4 months)
Sep 2016 – Aug 2017 (1 year)
Summer 2016 (4 months)