9002 Gates Hillman Center
Computer Science Department
Carnegie Mellon University
5000 Forbes Ave
Pittsburgh, PA 15213-3891
Phone: 412-268-3069
email: amarp+ [at] cs [dot] cmu [dot] edu
Hi!
I am a Researcher at Microsoft Research in Redmond.
I was a PhD student in the Computer Science Department at Carnegie Mellon University, where I was advised by Prof. Dave Andersen. The goal of my research is to design, build, and evaluate scalable cluster-based systems and protocols that are robust to failures; my research interests lie in operating systems, networking, and distributed systems. My work so far has looked at network protocols within large datacenters, and at distributed systems and protocols that not only work in constrained environments (e.g., FAWN, an energy-efficient cluster architecture) but are also flexible enough to support a variety of application requirements.
My research has been supported by an IBM Research Fellowship and a ThinkSwiss Research Scholarship.
I completed my M.Eng. in Computer Science at Cornell University, where I worked with Prof. Ken Birman on research projects in the area of reliable distributed systems (specifically, scalable process group communication). I was fortunate to be associated with and surrounded by some incredibly bright people there.
The gap between CPU and I/O speeds has been growing over the years. For applications that are I/O-bound, this translates to a powerful CPU mostly sitting idle. Worse, CPU power consumption grows super-linearly with clock speed, and even with dynamic power scaling, systems as a whole are not energy proportional: a system running at 20% capacity may consume far more than 20% of its peak power.
With the objective of designing a well-balanced, energy-efficient system (closing the CPU-I/O speed gap), a FAWN node is designed to have a slow, energy-efficient CPU, a small amount of DRAM, and fast storage (Flash as opposed to hard disks). A single FAWN node, consuming between 3 and 5 watts, might have a 512 MHz CPU, 256 MB of DRAM, and 4 GB of Flash storage. In comparison, a server machine may consume 100 watts (say, a 2 GHz dual-core CPU, 2 GB of RAM, and 1 TB of disk-based storage and/or GBs of SSD-based storage). A single wimpy node, when serving data from a CompactFlash card, is two orders of magnitude better in queries/joule than a single server with a hard disk, and twice as good as a single server with an SSD. Here are some rough numbers to help in understanding FAWN.
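To make the queries/joule comparison concrete, here is a back-of-the-envelope sketch in Python. The query rates below are hypothetical placeholders chosen only to illustrate the ratios claimed above; they are not measured results.

# Illustrative efficiency arithmetic; the query rates are hypothetical.
def queries_per_joule(queries_per_sec, watts):
    return queries_per_sec / watts

wimpy = queries_per_joule(350, 4)     # FAWN node serving from CompactFlash
disk  = queries_per_joule(150, 100)   # server bottlenecked on disk seeks
ssd   = queries_per_joule(4000, 100)  # server serving from an SSD

print(f"{wimpy:.1f} vs {disk:.1f} vs {ssd:.1f} queries/joule")
print(f"wimpy vs disk: {wimpy/disk:.0f}x, wimpy vs ssd: {wimpy/ssd:.1f}x")
# Roughly two orders of magnitude over disk, and about 2x over SSD.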
But a single FAWN node is not enough to serve TBs of application data, or to serve tens of thousands of queries per second. Enter: an energy-efficient *array* of nodes. Our SOSP work describes FAWN and a distributed key-value store built on it, adapting known techniques in distributed systems to FAWN, and showcases the advantages of FAWN for applications with random-access, I/O-bound workloads. Our design centers on avoiding random writes to Flash by using a purely log-structured datastore. Replication and consistency are obtained using a variant of chain replication on a consistent hashing ring. Node additions and removals involve splitting/merging and transferring datastores; these operations avoid time-consuming random writes to the datastore on Flash.
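As a rough illustration of the datastore side of this design, here is a minimal log-structured key-value store sketch in Python: every write is a sequential append to the log, and a small in-memory index maps each key to the offset of its latest value. This is an assumption-laden sketch for intuition, not the FAWN datastore itself.

import os

class LogStore:
    def __init__(self, path):
        self.log = open(path, "a+b")   # append-only log, e.g. on Flash
        self.index = {}                # key -> log offset of latest value

    def put(self, key: bytes, value: bytes):
        self.log.seek(0, os.SEEK_END)
        offset = self.log.tell()
        # Record format: 4-byte key length, 4-byte value length, key, value.
        self.log.write(len(key).to_bytes(4, "big") +
                       len(value).to_bytes(4, "big") + key + value)
        self.log.flush()
        self.index[key] = offset       # the old record becomes dead space

    def get(self, key: bytes):
        if key not in self.index:
            return None
        self.log.seek(self.index[key])
        header = self.log.read(8)
        klen = int.from_bytes(header[:4], "big")
        vlen = int.from_bytes(header[4:8], "big")
        self.log.seek(klen, os.SEEK_CUR)  # skip past the key
        return self.log.read(vlen)

In a design like this, deletes would append tombstone records and a background compaction pass would reclaim dead space; during node additions and removals, contiguous log regions can be streamed sequentially between nodes, which is what keeps those operations free of random writes.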
Project Homepage: FAWN

Multi-hop wireless mesh networks provide cheap, easily deployable Internet access. A key challenge in mesh networks is maintaining high transfer throughput. Unfortunately, doing so is difficult because these networks inherently suffer from interference due to the broadcast nature of the wireless medium: subsequent hops interfere with each other, the probability of loss increases with the number of hops, and congestion increases near the gateway nodes as the medium around the gateway becomes a bottleneck.
Ditto is a system that improves transfer performance in mesh networks by opportunistically caching data both at nodes on the path of a transfer and at nodes that overhear the transfer. The system uses content-based naming to provide application-independent caching at the granularity of small chunks. Ditto targets traditional applications for content caching (e.g., popular Web content, large software and operating system updates, and large data downloads such as those on peer-to-peer networks), as well as emerging applications such as high-bandwidth video downloads.
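To sketch what content-based naming buys us: if data is split into small chunks and each chunk is named by a hash of its contents, any node that cached or overheard a chunk can serve later requests for it without knowing anything about the transfer or application that produced it. The fixed chunk size and SHA-1 choice below are illustrative assumptions, not Ditto's actual parameters.

import hashlib

CHUNK_SIZE = 8 * 1024  # assumed chunk granularity, for illustration only

def chunk_names(data: bytes):
    # Identical chunks yield identical names, so a cache keyed by these
    # names is shared across transfers and across applications.
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        yield hashlib.sha1(chunk).hexdigest(), chunk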
Experimental evaluation on two wireless testbeds shows that Ditto's opportunistic caching increases throughput by up to 7x over simpler on-path caching schemes, and by up to an order of magnitude over no caching.
Cluster-based storage systems that use TCP/IP over commodity low-cost Ethernet networks are faced with a scaling problem termed TCP Incast -- a catastrophic collapse of TCP throughput for barrier-synchronized read workloads. Clients read data one block at a time, with each block striped over multiple servers. As more storage servers send data to a client, servers begin to see packet losses due to limited buffering at Ethernet switches. Under severe packet loss, servers experience TCP timeouts lasting a minimum of 200 ms (the minRTO). TCP timeouts should be approximately equal to the round-trip time (RTT) of the network, but these 200 ms coarse-grained timeouts are three orders of magnitude higher than datacenter RTTs (around 100 microseconds). If a server experiences a timeout while the other servers have completed their transfers and have no more data to send for a given block, the link capacity is underutilized, resulting in a throughput (goodput) collapse.
Extended timeouts also hurt latency-sensitive applications in the datacenter. For example, the response time of search applications that need results from multiple servers before sorting and returning them can be dominated by coarse-grained timeouts. Similarly, timeouts on a single TCP flow can affect the response time of applications that don't necessarily have a high fan-in degree.
Our work explores the causes of TCP Incast and evaluates a range of possible solutions, concluding that microsecond-granularity TCP timeouts, combined with eliminating minRTO, are the most effective solution. This solution also improves the response time of latency-sensitive applications using TCP. Finally, we find that this solution is safe for use in the wide area.
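To see why the floor matters, here is a sketch of the standard TCP retransmission-timeout estimator (RFC 6298-style constants) applied to datacenter-scale RTTs; the RTT value is an illustrative assumption.

def rto(rtt_samples, min_rto=0.2, granularity=0.2):
    # Jacobson/Karels-style smoothed RTT and variance, per RFC 6298.
    srtt = rttvar = None
    for r in rtt_samples:
        if srtt is None:
            srtt, rttvar = r, r / 2
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - r)
            srtt = 0.875 * srtt + 0.125 * r
    return max(min_rto, srtt + max(granularity, 4 * rttvar))

samples = [100e-6] * 10  # steady 100-microsecond datacenter RTTs
print(rto(samples))                            # ~0.2 s: dominated by the 200 ms floor
print(rto(samples, min_rto=0, granularity=0))  # ~1e-4 s: tracks the actual RTT

With the 200 ms floor removed and a fine-grained timer, the timeout shrinks to roughly the RTT itself, which is the behavior this work argues for.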
Project Homepage: TCP Incast

As part of Prof. Ken Birman's research group at Cornell, I worked on evaluating multicast protocols that scale in the number of groups a node belongs to, as opposed to the traditional scaling metric of the number of members in a group. I helped evaluate Ricochet, a scalable multicast protocol for latency-sensitive cluster-based applications, and PLATO, a total-ordering protocol layered over Ricochet. Both of these efforts were led by Mahesh Balakrishnan. I also evaluated the scalability of JGroups for throughput-sensitive applications.
Project Homepage: Ricochet
Gandalf: All we have to decide is what to do with the time that is given to us.