Special Systems Design and Implementation / Intel Science & Technology Center / Database Seminar
- Robert Mehrabian Collaborative Innovation Center
- Panther Hollow Room - 4th Floor
- DAVID A. MALTZ
- Partner Group Software Engineering Manager
- Microsoft Research
Beyond Software Defined Networking: When Software Isn’t Fast Enough
The success of cloud computing stems from the ability to take extremely large pools of physical servers, connect them by a massive shared network, and then carve up those resources into separate virtual networks assigned to different tenants. For years, the ability to give each tenant an isolated virtual environment, configured exactly the way they want it, has depended on Software Defined Networking (SDN). In turn, Software Defined Networking has depended on software-based virtual switches running on the servers to modify each tenant's packets in ways that create the isolation. This worked fine when servers had 10 Gbps network interfaces, but as network speeds have increased to 40 Gbps and above, the Software in SDN has become a bottleneck to stable performance.
In this talk, I will explain how computing clouds like Azure use SDN to virtualize network resources and the limits of that approach. I will then explain how and why Microsoft deployed FPGAs into nearly all our servers, so that programmable hardware offloads can be used to create blazingly fast and predictable networks while still providing the flexibility of software-based solutions. I will end by sketching some of the other things that become possible once FPGAs are present on every server, such as machine learning and deep neural networks of unprecedented scale. This talk will be of interest to those who want to learn how cloud hosting platforms work, the state of the art in cloud networking, or how FPGAs provide a way to overcome the performance bottlenecks of software-only solutions.
Dr. David A. Maltz leads Azure's Physical Network team, which is responsible for developing, deploying, and operating the software and network devices that connect the servers of all of Microsoft's online services, including the Azure Public Cloud, Office 365, and Bing. We write the code for the software defined network v-switches on the servers and the SONiC firmware that runs many of our physical switches. We build the distributed systems that continuously monitor the network and ensure it remains healthy by automatically remediating problems. We design the cloud-scale networks and data centers that provide petabits of connectivity at low cost and high reliability.
My past projects include a broad array of hardware and software that strove to improve the usability, performance, and cost of cloud computing. Prior to joining Azure, I worked on the Microsoft Autopilot team, which won Microsoft's 2013 Technology Achievement Award for advances in cloud-scale data centers (http://www.microsoft.com/about/technicalrecognition/Autopilot.aspx). Prior to joining Autopilot, I worked in industry research and academia, including 5 years in Microsoft Research. I founded two startup companies in network traffic management and wireless networking. I was part of the 4D team that won the SIGCOMM 2015 Test of Time Award for the 2005 paper that spurred the field of Software Defined Networking, and I was part of the GroupLens team that won the 2010 ACM Software System Award.
Faculty Host: Justine Sherry