authors: Sanjeev Arora, Yingyu Liang, Tengyu Ma
journal: (ICLR 2016)
publication year: 2016
links: arxiv (preprint)

abstract: Generative models for deep learning are promising both for improving understanding of the model and for yielding training methods that require fewer labeled samples. Recent works use generative model approaches to produce the deep net's input given the value of a hidden layer several levels above. However, there is no accompanying "proof of correctness" for the generative model, showing that the feedforward deep net is the correct inference method for recovering the hidden layer given the input. Furthermore, these models are complicated. The current paper takes a more theoretical tack. It presents a very simple generative model for ReLU deep nets, with the following characteristics: (i) The generative model is just the reverse of the feedforward net: if the forward transformation at a layer is A, then the reverse transformation is A^T. (This can be seen as an explanation of the old weight-tying idea for denoising autoencoders.) (ii) Its correctness can be proven under a clean theoretical assumption: the edge weights in real-life deep nets behave like random numbers. Under this assumption, which is experimentally tested on real-life nets like AlexNet, it is formally proved that the feedforward net is a correct inference method for recovering the hidden layer. The generative model suggests a simple modification for training: use the generative model to produce synthetic data with labels and include it in the training set. Experiments are presented that support this theory of random-like deep nets and show that the suggested modification helps training.
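
As a rough numerical illustration of claim (i) (a sketch under the paper's random-weights assumption, not the authors' code; the sizes and the scaling constant are my own choices), a single ReLU layer and its transposed, weight-tied reverse map can be tested in a few lines of NumPy:

import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

n, m = 200, 600                                      # input and hidden sizes (illustrative)
A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))  # "random-like" edge weights

x = relu(rng.normal(size=n))       # a nonnegative input, as in a stack of ReLU layers
h = relu(A @ x)                    # feedforward (inference) direction
x_hat = relu(2.0 * A.T @ h)        # reverse/generative direction with tied weights A^T;
                                   # the factor 2 compensates for ReLU keeping only the
                                   # positive half of the pre-activations in expectation

cos = x @ x_hat / (np.linalg.norm(x) * np.linalg.norm(x_hat))
print(f"cosine similarity between x and its reconstruction: {cos:.3f}")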

Category: Paper Announcements

authors: Francois Malgouyres, Joseph Landsberg
journal: (SPARS 2017 proceedings)
publication year: 2017
links: arxiv (preprint), SPARS

abstract: We study a deep matrix factorization problem. It takes as input a matrix X obtained by multiplying K matrices (called factors). Each factor is obtained by applying a fixed linear operator to a short vector of parameters satisfying a model (for instance sparsity, grouped sparsity, non-negativity, constraints defining a convolutional network, etc.). We call the problem deep or multi-layer because the number of factors is not limited. In the practical situations we have in mind, we can typically have K = 10 or 100. This work aims at identifying conditions on the structure of the model that guarantee the stable recovery of the factors from the knowledge of X and the model for the factors. We provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We also provide a necessary and sufficient condition, called the Deep Null Space Property (because of the analogy with the usual Null Space Property in the compressed sensing framework), which guarantees that even an inaccurate optimization algorithm for the factorization stably recovers the factors. We illustrate the theory with a practical example where the deep factorization is a convolutional network.
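
A minimal sketch of the problem setup (my own toy instance, not the paper's model; the supports, sizes, and the identity offset are arbitrary illustrative choices): each factor is a fixed linear operator applied to a short parameter vector, and only the product X is observed.

import numpy as np

rng = np.random.default_rng(1)
K, n, p = 10, 20, 5                  # depth, factor size, parameters per factor

def factor_from_params(theta, support):
    # fixed linear operator: scatter the short parameter vector theta onto a fixed
    # sparse support; the identity offset merely keeps the toy product well-conditioned
    M = np.zeros((n, n))
    M[support] = theta
    return M + np.eye(n)

supports = [(rng.integers(0, n, p), rng.integers(0, n, p)) for _ in range(K)]
thetas = [rng.normal(size=p) for _ in range(K)]
factors = [factor_from_params(t, s) for t, s in zip(thetas, supports)]

X = np.linalg.multi_dot(factors)     # the observed matrix: a product of K structured factors
print(X.shape)                       # recovery question: identify the thetas from X alone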

Category: Paper Announcements

authors: Tomaso Poggio, Federico Girosi
journal: Proceedings of the IEEE
publication year: 1990
links: MIT, IEEE

abstract: The problem of approximating nonlinear mappings (especially continuous mappings) is considered. Regularization theory and a theoretical framework for approximation (based on regularization techniques) that leads to a class of three-layer networks called regularization networks are discussed. Regularization networks are mathematically related to radial basis functions, which are mainly used for strict interpolation tasks. Learning as approximation and learning as hypersurface reconstruction are discussed. Two extensions of the regularization approach are presented, along with the approach's connections to splines, regularization, Bayes formulation, and clustering. The theory of regularization networks is generalized to a formulation that includes task-dependent clustering and dimensionality reduction. Applications of regularization networks are discussed.
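
To make the connection to radial basis functions concrete, here is a minimal sketch (my illustration, not from the paper) of a Gaussian regularization/RBF network f(x) = sum_i c_i G(||x - x_i||), with coefficients obtained from a ridge-regularized linear system:

import numpy as np

rng = np.random.default_rng(2)

X = rng.uniform(-1, 1, size=(50, 1))                    # training inputs
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=50)     # noisy targets

sigma, lam = 0.3, 1e-2                                  # kernel width and regularization (hand-picked)
sqd = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
G = np.exp(-sqd / (2 * sigma ** 2))                     # Gaussian RBF Gram matrix

c = np.linalg.solve(G + lam * np.eye(len(X)), y)        # regularized coefficients

def f(x_new):
    d = np.sum((x_new[None, :] - X) ** 2, axis=-1)
    return np.exp(-d / (2 * sigma ** 2)) @ c

print(f(np.array([0.5])), np.sin(1.5))                  # prediction vs. ground truth at x = 0.5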

Category: Paper Announcements

authors: Hrushikesh Mhaskar, Qianli Liao, Tomaso Poggio
journal: (CBMM Memo)
publication year: 2016
links: arxiv (preprint), CBMM

abstract: While the universal approximation property holds both for hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks but with an exponentially smaller number of training parameters and VC-dimension. This theorem settles an old conjecture by Bengio on the role of depth in networks. We then define a general class of scalable, shift-invariant algorithms to show a simple and natural set of requirements that justify deep convolutional networks.
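
For intuition, a compositional function in the sense used here is one built from a hierarchy of low-dimensional constituent functions; a hedged toy example follows (the constituent function and the binary-tree structure are my arbitrary choices). A deep network whose connectivity mirrors the tree only needs to approximate bivariate pieces, which is the source of the exponential gap over shallow networks.

import numpy as np

def h(a, b):                 # one bivariate constituent function (arbitrary smooth choice)
    return np.tanh(a + 2 * b)

def compositional_f(x):      # evaluates a binary tree of h's, e.g. for x of length 8:
                             # f = h(h(h(x1,x2), h(x3,x4)), h(h(x5,x6), h(x7,x8)))
    level = list(x)
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(compositional_f(np.random.default_rng(3).uniform(size=8)))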

Category: Paper Announcements

authors: Vardan Papyan, Yaniv Romano, Michael Elad
journal: Journal of Machine Learning Research (submitted)
publication year: 2016
links: arxiv (preprint)

abstract: Convolutional neural networks (CNNs) have led to many state-of-the-art results spanning various fields. However, a clear and profound theoretical understanding of the forward pass, the core algorithm of CNNs, is still lacking. In parallel, within the wide field of sparse approximation, Convolutional Sparse Coding (CSC) has gained increasing attention in recent years. A theoretical study of this model was recently conducted, establishing it as a reliable and stable alternative to the commonly practiced patch-based processing. Herein, we propose a novel multi-layer model, ML-CSC, in which signals are assumed to emerge from a cascade of CSC layers. This is shown to be tightly connected to CNNs, so much so that the forward pass of the CNN is in fact the thresholding pursuit serving the ML-CSC model. This connection brings a fresh view to CNNs, as we are able to attribute to this architecture theoretical claims such as uniqueness of the representations throughout the network, and their stable estimation, all guaranteed under simple local sparsity conditions. Lastly, identifying the weaknesses in the above pursuit scheme, we propose an alternative to the forward pass, which is connected to deconvolutional, recurrent and residual networks, and has better theoretical guarantees.
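
A stripped-down sketch of the CNN-as-pursuit observation (my notation and dictionaries, not the paper's code): with two dense dictionaries standing in for convolutional ones, the forward pass corresponds to layered soft thresholding of D_i^T applied to the previous estimate.

import numpy as np

def soft_threshold(z, beta):
    return np.sign(z) * np.maximum(np.abs(z) - beta, 0.0)

rng = np.random.default_rng(4)
n0, n1, n2 = 64, 128, 256
D1 = rng.normal(size=(n0, n1)); D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.normal(size=(n1, n2)); D2 /= np.linalg.norm(D2, axis=0)

# synthesize a signal from sparse codes (the ML-CSC generative direction)
g2 = np.zeros(n2); g2[rng.choice(n2, 5, replace=False)] = rng.normal(size=5)
g1 = D2 @ g2
x = D1 @ g1

# layered thresholding pursuit (the role played by the forward pass)
g1_hat = soft_threshold(D1.T @ x, beta=0.1)
g2_hat = soft_threshold(D2.T @ g1_hat, beta=0.1)
print(np.count_nonzero(g2_hat), "nonzeros estimated at the deepest layer")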

Category: Paper Announcements

authors: Nadav Cohen, Amnon Shashua
journal: (Proceedings of Machine Learning Research 2016)
publication year: 2016
links: arxiv (preprint)


Category: Paper Announcements

authors: Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei
journal: IEEE Transactions on Information Theory (submitted)
publication year: 2017
links: arxiv (preprint)

abstract: Many practical machine learning tasks employ very deep convolutional neural networks. Such large depths pose formidable computational challenges in training and operating the network. It is therefore important to understand how many layers are actually needed to have most of the input signal's features be contained in the feature vector generated by the network. This question can be formalized by asking how quickly the energy contained in the feature maps decays across layers. In addition, it is desirable that none of the input signal's features be "lost" in the feature extraction network or, more formally, we want energy conservation in the sense of the energy contained in the feature vector being proportional to that of the corresponding input signal. This paper establishes conditions for energy conservation for a wide class of deep convolutional neural networks and characterizes corresponding feature map energy decay rates. Specifically, we consider general scattering networks, and find that under mild analyticity and high-pass conditions on the filters (which encompass, inter alia, various constructions of Weyl-Heisenberg filters, wavelets, ridgelets, alpha-curvelets, and shearlets) the feature map energy decays at least polynomially fast. For broad families of wavelets and Weyl-Heisenberg filters, the guaranteed decay rate is shown to be exponential. Our results yield handy estimates of the number of layers needed to have at least ((1 − ε) · 100)% of the input signal energy be contained in the feature vector.
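
As a back-of-the-envelope numerical companion (not the paper's construction; the filter, depth, and input are arbitrary choices), one can propagate a signal through a filter-plus-modulus cascade and watch the per-layer feature-map energy shrink with depth:

import numpy as np

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(size=1024))             # a predominantly low-frequency input (random walk)

high_pass = np.array([1.0, -1.0]) / np.sqrt(2)   # crude Haar-like high-pass filter

u, energy = x, [np.sum(x ** 2)]
for depth in range(1, 6):
    u = np.abs(np.convolve(u, high_pass, mode="same"))   # filter + modulus non-linearity
    energy.append(np.sum(u ** 2))

for d, e in enumerate(energy):
    print(f"depth {d}: remaining feature-map energy fraction {e / energy[0]:.4f}")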

Category: Paper Announcements

Neural networks were originally introduced in 1943 by McCulloch and Pitts as an approach to develop learning algorithms by mimicking the human brain. The major goal at that time was to introduce a sound theory of artificial intelligence. However, the limited amount of data and the lack of high

Read more ...

Category: Featured Articles

authors: Philipp Grohs, Thomas Wiatowski, Helmut Bölcskei
journal: Proceedings of the IEEE International Symposium on Information Theory (ISIT)
publication year: 2016
links: arxiv (preprint)

abstract: Wiatowski and Bölcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities. While the translation invariance result applies to square-integrable functions, the deformation stability bound holds for band-limited functions only. Many signals of practical relevance (such as natural images) exhibit, however, sharp and curved discontinuities and are hence not band-limited. The main contribution of this paper is a deformation stability result that takes these structural properties into account. Specifically, we establish deformation stability bounds for the class of cartoon functions introduced by Donoho, 2001.
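
For readers unfamiliar with the term, a hedged statement of the cartoon-function model (notation mine, not copied verbatim from the paper): a cartoon function is a smooth part plus a second smooth part switched on over a set with sufficiently regular boundary,

% cartoon-function model, stated informally (my notation):
f = f_1 + \mathbb{1}_B \, f_2,
\qquad f_1, f_2 \ \text{smooth (e.g., Lipschitz)}, \qquad
B \subset \mathbb{R}^d \ \text{compact with sufficiently regular boundary } \partial B .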

Category: Paper Announcements

Similar to neural networks, compressed sensing offers a perspective to handle high-dimensional data and works on the standing assumption that a simple low-complexity structure governs the data. Its core application targets efficient data acquisition from very few measurements. The reasonably young field

Read more ...

Category: Other Announcements

authors: Thomas Wiatowski, Helmut Bölcskei
journal: IEEE Transactions on Information Theory (submitted)
publication year: 2016
links: arxiv (preprint)

abstract: Deep convolutional neural networks have led to breakthrough results in numerous practical machine learning tasks such as classification of images in the ImageNet data set, control-policy-learning to play Atari games or the board game Go, and image captioning. Many of these applications first perform feature extraction and then feed the results thereof into a trainable classifier. The mathematical analysis of deep convolutional neural networks for feature extraction was initiated by Mallat, 2012. Specifically, Mallat considered so-called scattering networks based on a wavelet transform followed by the modulus non-linearity in each network layer, and proved translation invariance (asymptotically in the wavelet scale parameter) and deformation stability of the corresponding feature extractor. This paper complements Mallat's results by developing a theory of deep convolutional neural networks for feature extraction encompassing general convolutional transforms, or in more technical parlance, general semi-discrete frames (including Weyl-Heisenberg, curvelet, shearlet, ridgelet, and wavelet frames), general Lipschitz-continuous non-linearities (e.g., rectified linear units, shifted logistic sigmoids, hyperbolic tangents, and modulus functions), and general Lipschitz-continuous pooling operators emulating sub-sampling and averaging. In addition, all of these elements can be different in different network layers. For the resulting feature extractor we prove a translation invariance result which is of vertical nature in the sense of the network depth determining the amount of invariance, and we establish deformation sensitivity bounds that apply to signal classes with inherent deformation insensitivity such as, e.g., band-limited functions.
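
To fix ideas, here is a schematic sketch (my own simplification, not the authors' construction) of the kind of layered feature extractor analyzed: each layer applies a set of filters, a Lipschitz-continuous non-linearity, and a pooling operator, all of which may differ from layer to layer, and the feature vector stacks the feature maps collected across layers.

import numpy as np

def layer(signals, filters, nonlinearity, pool):
    # apply every filter to every incoming signal, then the non-linearity, then pooling
    out = []
    for u in signals:
        for g in filters:
            out.append(pool(nonlinearity(np.convolve(u, g, mode="same"))))
    return out

def average_pool(v, width=2):
    return v[: len(v) // width * width].reshape(-1, width).mean(axis=1)

rng = np.random.default_rng(6)
x = rng.normal(size=256)

# per-layer choices (all illustrative): filters, non-linearity, pooling
layers = [
    ([np.array([0.5, 0.5]), np.array([0.5, -0.5])], np.abs, average_pool),         # modulus
    ([np.array([0.25, 0.5, 0.25])], lambda z: np.maximum(z, 0.0), average_pool),   # ReLU
]

signals, features = [x], []
for filters, rho, pool in layers:
    signals = layer(signals, filters, rho, pool)
    features.extend(signals)          # collect feature maps from every layer

print(len(features), "feature maps; total feature-vector length:",
      sum(len(f) for f in features))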

Category: Paper Announcements