Learning with Limited Numerical Precision Using the Cascade-Correlation Learning Algorithm

Markus Hoehfeld and Scott E. Fahlman


Abstract


A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited numerical precision can be used with existing learning algorithms.  We present an empirical study of the effects of limited precision in Cascade-Correlation networks on three different learning problems.  We show that learning can fail abruptly as the precision of network weights or weight-update calculations is reduced below 12 bits.  We introduce techniques for dynamic rescaling and probabilistic rounding that allow reliable convergence down to 6 bits of precision, with only a gradual reduction in the quality of the solutions.
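To make the idea of probabilistic rounding concrete, here is a minimal sketch (not from the paper; the function name and parameters are illustrative) that quantizes a floating-point weight update onto a fixed-point grid with a given number of fractional bits, rounding up or down at random with probability proportional to the remainder:

```python
import math
import random

def probabilistic_round(value, frac_bits):
    """Quantize `value` to a fixed-point grid with `frac_bits` fractional bits.

    The value is rounded up or down to the nearest grid point at random,
    with the probability of rounding up equal to the fractional remainder,
    so the expected quantized value equals the original value.
    """
    scale = 1 << frac_bits           # grid spacing is 1 / scale
    scaled = value * scale
    lower = math.floor(scaled)
    remainder = scaled - lower       # in [0, 1)
    if random.random() < remainder:
        lower += 1                   # round up with probability = remainder
    return lower / scale

# Example: a small weight update that deterministic rounding would always lose
update = 0.003
print(probabilistic_round(update, 6))   # usually 0.0, occasionally 0.015625
```

Because the expected value of the quantized result equals the true value, small gradient contributions are preserved on average rather than being truncated to zero, which is one way such a scheme can help learning survive at low precision.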