CMU-Led Team Achieves Back-To-Back Wins at Neural Network Verification Competition

Kayla Papakie | Wednesday, October 12, 2022

An SCS-led team took top honors at the International Verification of Neural Networks Competition for their open-source tool α,β-CROWN.

For the second year in a row, School of Computer Science researcher Huan Zhang led a multi-institutional team to victory in the 2022 International Verification of Neural Networks Competition.

The team's open-source tool, α,β-CROWN (alpha-beta-CROWN), once again prevailed against 10 other teams whose neural network verification methods were evaluated on more than a dozen standardized benchmarks, each comprising a variety of verification problems. The competition brings together researchers working to establish formal methods to better understand and guarantee the behavior of neural networks.
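Verification in this context means mathematically proving that a network's outputs stay within guaranteed bounds for every input in some region, rather than testing a handful of inputs. As a rough illustration of the idea, the sketch below implements interval bound propagation, one of the simplest such techniques; it is not the team's α,β-CROWN algorithm, which computes much tighter bounds via linear relaxations and branch-and-bound. The toy network and input box are hypothetical.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x_lower, x_upper):
    """Propagate an input box [x_lower, x_upper] through a ReLU network,
    returning sound lower/upper bounds on every output neuron."""
    lower, upper = x_lower, x_upper
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split W into positive and negative parts so each term pairs
        # the correct input bound with the correct weight sign.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        # ReLU is monotone, so applying it to both bounds stays sound;
        # skip it on the final (output) layer.
        if i < len(weights) - 1:
            new_lower, new_upper = np.maximum(new_lower, 0.0), np.maximum(new_upper, 0.0)
        lower, upper = new_lower, new_upper
    return lower, upper

# Toy 2-layer network: bound the output for all inputs in the box
# [0.9, 1.1] x [-0.1, 0.1].
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lo, hi = interval_bound_propagation(
    [W1, W2], [b1, b2], np.array([0.9, -0.1]), np.array([1.1, 0.1]))
print(lo, hi)  # guaranteed output range: [1.05], [1.95]
```

If the certified range stays on one side of a decision threshold, the property is proved for the entire input region; methods like α,β-CROWN exist because plain interval bounds grow loose quickly on deep networks.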

"Many of this year's benchmarks were proposed by industrial participants, which shows the increasing interest in applying neural networks with verifiable guarantees in real-world applications," said Zhang, a postdoctoral researcher in the Computer Science Department (CSD). "The competition was more challenging, featuring bigger datasets and neural networks two orders of magnitude larger than last year's."

In addition to Zhang, the CMU-led team included Zico Kolter, an associate professor in CSD, and researchers from the University of Illinois Urbana-Champaign, Columbia University, Drexel University and the University of California, Los Angeles.

For this year's competition, the team enhanced α,β-CROWN's scalability, strength and user-friendliness, while also implementing a more advanced verification algorithm called GCP-CROWN. With this improvement, the team scored 1,274.9 out of 1,300 possible points, outperforming the second-place winner by nearly 260 points.

Deep neural networks have become a crucial part of modern artificial intelligence, but deploying them in safety-critical tasks, such as autonomous vehicle systems or medical procedures, remains challenging.

"We need to trust that an AI surgeon can successfully perform complex surgery or that an AI pilot can safely fly a plane," Zhang said.

In settings where societal risk is high, it is essential to guarantee that the behavior of neural networks is well understood and predictable. The team's back-to-back success marks progress toward this goal.

"The algorithms developed by our team during the past few years are pioneering in this field, and our continued work in neural network verification will ultimately enable the broader scientific and industrial communities to explore the capabilities of AI-based systems," Zhang said.

For More Information

Aaron Aupperlee | 412-268-9068 |