CMU Artificial Intelligence Seminar Series



Tuesday, Feb 8, 2022

Time: 12:00 - 1:00 PM ET
Recording of this Online Seminar on YouTube

Huan Zhang -- How We Trust a Black-box: Formal Verification of Deep Neural Networks


Abstract: Neural networks have become a crucial element in modern artificial intelligence. However, they are often black boxes that can behave unexpectedly and produce surprisingly wrong results. When applying neural networks to mission-critical systems such as autonomous driving and aircraft control, it is often desirable to formally verify their trustworthiness properties, such as safety and robustness. In this talk, I will first introduce the neural network verification problem and the challenges in guaranteeing bounds on network outputs under bounded input perturbations. Then, I will discuss bound-propagation-based verification algorithms such as CROWN and beta-CROWN, which efficiently propagate linear inequalities through the network in a backward manner. My talk will highlight the state-of-the-art verification techniques used in our α,β-CROWN (alpha-beta-CROWN) verifier, a scalable, powerful and GPU-accelerated neural network verifier that won the 2nd International Verification of Neural Networks Competition (VNN-COMP'21) with the highest total score.
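To give a flavor of the verification problem described above, the sketch below shows simple interval bound propagation (IBP) for a small ReLU network. This is an intentionally loose baseline, not the CROWN or beta-CROWN algorithm from the talk, which propagate tighter linear inequalities backward through the network; all function names here are illustrative.

```python
# Simplified interval bound propagation (IBP) sketch -- NOT CROWN itself.
# Given an input box [lb, ub], compute a box guaranteed to contain every
# possible network output. Sound, but looser than linear bound propagation.
import numpy as np

def ibp_affine(W, b, lb, ub):
    """Propagate a box [lb, ub] through the affine layer y = W @ x + b."""
    center = (lb + ub) / 2.0
    radius = (ub - lb) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius  # worst-case deviation per output unit
    return out_center - out_radius, out_center + out_radius

def ibp_relu(lb, ub):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

def output_bounds(layers, x, eps):
    """Bounds on outputs for all inputs within L_inf distance eps of x."""
    lb, ub = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lb, ub = ibp_affine(W, b, lb, ub)
        if i < len(layers) - 1:  # ReLU after every hidden layer
            lb, ub = ibp_relu(lb, ub)
    return lb, ub
```

If the resulting output box certifies the desired property (e.g., the correct class logit always exceeds the others), the network is verified for that input region; otherwise tighter methods like CROWN or branch-and-bound (beta-CROWN) are needed.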

Bio: Huan Zhang is a postdoctoral researcher at CMU, advised by Prof. Zico Kolter. He received his Ph.D. from UCLA in 2020. Huan's research focuses on the trustworthiness of artificial intelligence, especially on developing formal verification methods to guarantee the robustness and safety of machine learning. Huan was awarded an IBM Ph.D. Fellowship, and he led the winning team in the 2021 International Verification of Neural Networks Competition. He received the 2021 AdvML Rising Star Award, sponsored by the MIT-IBM Watson AI Lab.