Graduiertenkolleg: Methods for Discrete Structures

Deutsche Forschungsgemeinschaft


Monday Lecture and Colloquium


Monday, January 4, 2016

Technische Universität Berlin
Institut für Mathematik
Straße des 17. Juni 136
10623 Berlin
room MA 041



Lecture - 14:15

Joachim Giesen - Friedrich-Schiller-Universität Jena


Parameterized Optimization Problems in Machine Learning

Abstract:
Many machine learning methods are given as parameterized optimization problems. Well-known examples include support vector classification, LASSO regression, elastic net regression, matrix completion, robust PCA, and sparse inverse covariance matrix estimation. These methods are parameterized by regularization and/or kernel hyperparameters, which have to be tuned carefully, since the choice of their values can have a significant impact on the statistical performance of the learning methods. In most cases the parameter space does not carry much structure, and parameter tuning essentially boils down to exploring the whole parameter space. In my talk I will present numerically robust and practically efficient parameter space exploration algorithms that come with theoretical approximation guarantees and matching lower bounds.
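As a minimal illustration of why the regularization parameter matters, the sketch below (not part of the talk; all names and the toy data are hypothetical) explores a grid of LASSO regularization values for an orthonormal design, where the LASSO solution has the closed form of soft-thresholding the least-squares coefficients:

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of the l1 norm: shrinks each entry toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Hypothetical toy data with orthonormal columns (X^T X = I), so the
# LASSO solution is beta(lam) = soft_threshold(X^T y, lam).
rng = np.random.default_rng(0)
X, _ = np.linalg.qr(rng.standard_normal((50, 5)))
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.0])
y = X @ beta_true

ols = X.T @ y  # least-squares coefficients under the orthonormal design
# Brute-force grid over the regularization parameter lambda; larger
# lambda yields sparser solutions.  (The talk's algorithms approximate
# the whole solution path with guarantees instead of naive gridding.)
for lam in [0.25, 1.5, 2.5]:
    beta = soft_threshold(ols, lam)
    print(f"lambda={lam}: {np.count_nonzero(beta)} nonzero coefficients")
```

The printed sparsity levels (3, 2, 1 nonzeros) show how the statistical behavior of the fitted model changes across the parameter grid, which is exactly what makes careful tuning necessary.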



Colloquium - 16:00

Shay Moran - Technion Haifa


Sample compression schemes for VC classes

Abstract:
Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms. Roughly speaking, a sample compression scheme of size k means that, given an arbitrary list of labeled examples, one can retain only k of them in a way that makes it possible to recover the labels of all other examples in the list. They showed that compression implies PAC learnability for binary-labeled classes, and asked whether the other direction holds. I will present a solution to this question, showing that every concept class C with VC dimension d has a sample compression scheme of size exponential in d. The proof uses an approximate minimax phenomenon for binary matrices of low VC dimension, which may be of independent interest.

Joint work with Amir Yehudayoff
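To make the definition concrete, here is a standard textbook example (not from the talk; the function names are for illustration only): threshold classifiers on the real line have VC dimension 1 and admit a sample compression scheme of size 1, since keeping only the smallest positively labeled point suffices to recover every label in the sample.

```python
def compress(sample):
    """Keep at most one example: the smallest positively labeled point.

    sample: list of (x, label) pairs consistently labeled by some
    threshold rule x -> 1 iff x >= t.
    """
    positives = [x for x, y in sample if y == 1]
    return [min(positives)] if positives else []

def reconstruct(kept):
    """Turn the compressed set back into a classifier."""
    if not kept:
        return lambda x: 0          # no positive retained: all-negative rule
    t = kept[0]
    return lambda x: 1 if x >= t else 0

# Usage: the labels of the whole list are recovered from one retained point.
sample = [(0.3, 0), (1.1, 1), (2.5, 1), (0.9, 0)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)
```

The result in the talk establishes the analogous (exponentially larger) guarantee for arbitrary concept classes of VC dimension d.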




Last updated: 29.10.2015