On the Expressivity of Neural Networks

Project description

The project aims at a better understanding of the expressivity of neural networks (NNs). More precisely, we want to characterize the class of functions that can be represented by NNs with ReLU activations and a given architecture. In this context, it is of particular interest to derive upper and lower bounds on the size (width and depth) of ReLU NNs that exactly compute certain functions. Here, we are interested both in NNs computing general (piecewise linear) functions and in NNs exactly solving basic problems in combinatorial optimization. Our approach builds on techniques from mixed-integer and combinatorial optimization, polyhedral theory, and discrete and tropical geometry.
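As a small illustration of exact representation (a standard textbook example, not a result specific to this project): the function max(x, y) is piecewise linear and is computed exactly, on all inputs, by a ReLU network with one hidden layer of width 3, using the identities max(x, y) = ReLU(x - y) + y and y = ReLU(y) - ReLU(-y). A minimal NumPy sketch:

import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hidden layer: three ReLU units computing
#   h1 = ReLU(x - y),  h2 = ReLU(y),  h3 = ReLU(-y)
W1 = np.array([[1.0, -1.0],
               [0.0,  1.0],
               [0.0, -1.0]])

# Output layer (linear): h1 + h2 - h3 = ReLU(x - y) + y = max(x, y)
w2 = np.array([1.0, 1.0, -1.0])

def net_max(x, y):
    h = relu(W1 @ np.array([x, y]))
    return w2 @ h

# Exact equality, not approximation:
assert net_max(3.0, 5.0) == 5.0
assert net_max(-2.0, -7.0) == -2.0

Questions of the kind studied in the project then ask, for example, how width and depth must grow when such exact constructions are extended from max(x, y) to the maximum of n numbers, or to functions arising from combinatorial optimization problems.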