09:00 - 09:45 | Registration |
09:45 - 10:00 | Opening |
10:00 - 11:00 | Petter Brändén: Negative dependence and the geometry of polynomials |
11:00 - 11:30 | Prize Awarding Ceremony |
11:30 - 12:10 | Talk of the Prize Winner |
12:10 - 14:00 | Lunch |
14:00 - 15:00 | Britta Peis: A Modified Greedy Algorithm for Submodular Cover |
15:00 - 15:40 | Bento Natura: Faster Exact Linear Programming |
15:50 - 16:20 | Coffee Break |
16:20 - 17:00 | Christoph Hertrich: Facets of Neural Network Complexity |
17:00 - 17:40 | Arturo Merino: Greedy strategies for combinatorial generation |
18:30 - ... | Conference Dinner at 12 Apostoli |
10:00 - 11:00 | Francisco Santos: The space of Delzant polytopes |
11:00 - 11:40 | Meike Neuwohner: Improved Approximation Algorithms for Weighted k-Set Packing |
11:40 - 12:20 | Max Deppert: Algorithms for Scheduling Problems and Integer Programming |
12:20 - 13:00 | Nemanja Draganić: Size-Ramsey Numbers: Recent Developments |
13:00 - 14:00 | Lunch |
14:00 - 15:00 | László Végh: Approximating Nash Social Welfare |
15:00 - 16:00 | Talk of the Juror |
16:00 - 16:15 | Closing |
Speaker: Francisco Santos
A Delzant polytope is a simple polytope with rational edge directions such that all its vertex cones are unimodular. Equivalently, it is a polytope with a rational, simplicial, unimodular normal fan. The name comes from a theorem of Delzant stating that these polytopes are precisely the momentum polytopes of symplectic toric manifolds. Delzant polytopes are also important in toric algebraic geometry, since their normal fans are the fans of smooth projective toric varieties. We explore the space of Delzant polytopes of a given dimension $n$ from both a discrete and a continuous perspective:
- In the discrete perspective we look only at their fans and connect them via "blow-ups/blow-downs", that is, via unimodular stellar refinements/coarsenings.
- In the continuous perspective we consider Delzant polytopes as a subspace of the metric space of full-dimensional $n$-polytopes, with respect to either the Hausdorff distance or the symmetric-difference distance. These distances are not metrically equivalent (e.g., they produce two different completions), but they are topologically equivalent. The topological space so obtained can be modelled as a (not locally finite) cell complex by gluing secondary fans, but its topology is strictly coarser than the CW-complex topology.
In both perspectives we show drastic differences between the cases $n=2$ and $n=3$. (Joint work with Álvaro Pelayo.)
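As a concrete illustration of the Delzant condition (an editor's sketch, not part of the talk), the following checks unimodularity of all vertex cones of a lattice polygon: at each vertex, the primitive edge directions must form a matrix of determinant ±1. The unit square passes; the triangle conv{(0,0), (2,0), (0,1)} fails at the vertex (0,1).

```python
# A minimal sketch (editor's example): checking the Delzant condition
# for a lattice polygon by verifying that the primitive edge directions
# at every vertex form a unimodular (determinant +-1) matrix.

from math import gcd

def primitive(v):
    """Scale a nonzero integer vector to its primitive form."""
    g = gcd(abs(v[0]), abs(v[1]))
    return (v[0] // g, v[1] // g)

def is_delzant_2d(vertices):
    """vertices: lattice points in cyclic order around the polygon."""
    n = len(vertices)
    for i in range(n):
        p = vertices[i]
        prev, nxt = vertices[(i - 1) % n], vertices[(i + 1) % n]
        # the two primitive edge directions emanating from vertex p
        u = primitive((prev[0] - p[0], prev[1] - p[1]))
        w = primitive((nxt[0] - p[0], nxt[1] - p[1]))
        if abs(u[0] * w[1] - u[1] * w[0]) != 1:
            return False       # this vertex cone is not unimodular
    return True

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(is_delzant_2d(square))   # True: the square is a Delzant polygon
```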
Speaker: Arturo Merino
Combinatorial generation is the task of computing all solutions to a combinatorial problem instead of only one of them. For example, we may be interested in all spanning trees of a given graph, or in all optimal-weight perfect matchings of a given weighted graph. In recent years, there has been much interest in unifying techniques and general approaches that simultaneously apply to various combinatorial generation problems. In this talk we will discuss two simple dual approaches to combinatorial generation: the permutation-based jump framework of Hartung-Hoang-Mütze-Williams, and the binary-based framework of Merino and Mütze. These approaches have several remarkable commonalities:
- Both approaches are greedy in nature.
- Both can be applied to a wide range of combinatorial generation problems.
- Both compute Hamilton paths in natural polytopes associated with the combinatorial objects.
- Both allow us to design efficient state-of-the-art generation algorithms.
Furthermore, we will discuss the connections between these two approaches, and between them and other areas of mathematics, such as polytope theory, optimization, and symmetry.
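To give a flavour of greedy generation (a classical example, not the frameworks from the talk themselves): starting from the all-zero string and repeatedly flipping the lowest-index bit that yields an unvisited string recovers the binary reflected Gray code, in which consecutive strings differ in exactly one bit.

```python
# A hedged illustration of greedy combinatorial generation: the greedy
# Gray code. Start at the all-zero string; at each step, flip the
# lowest-index bit that produces a string not seen before.

def greedy_gray(n):
    current = (0,) * n
    seen = {current}
    order = [current]
    while True:
        for i in range(n):  # try flipping bits from lowest index up
            flipped = current[:i] + (1 - current[i],) + current[i + 1:]
            if flipped not in seen:
                current = flipped
                seen.add(current)
                order.append(current)
                break
        else:               # no flip gives a new string: stop
            return order

codes = greedy_gray(3)
print(len(codes))  # 8: every binary string of length 3 is visited once
```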
Speaker: Britta Peis
Submodular cover is a common generalisation of set cover, integer cover, and the minimum-weight spanning tree problem. In the submodular cover problem, we are given a submodular, nonnegative, monotone non-decreasing function f defined on all subsets of a finite ground set E, and the task is to find a minimum-cost subset S satisfying f(S)=f(E), for some given cost function on E. It is known that the naive greedy heuristic, which starts with the empty set and iteratively adds an element with optimal ratio between cost and marginal coverage, has a performance ratio of 1 + ln k, where k is the maximum f-value of a singleton (Wolsey 1982). This performance guarantee is nearly best possible, even in the special case of set cover (Feige 1996). We propose and discuss a modification of the greedy algorithm in which redundant elements are deleted in each iteration. We show that this modified greedy algorithm is guaranteed to terminate with an optimal solution to the submodular cover problem for submodular systems satisfying certain properties, which are easily seen to be fulfilled by, e.g., laminar set cover and weighted matroid rank functions. (Joint work with Niklas Rieken and José Verschae.)
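The naive greedy heuristic above can be sketched for the special case of weighted set cover, where f(S) is the number of elements covered by the chosen sets (the instance below is the editor's toy example, not from the talk):

```python
# A minimal sketch of the naive greedy for submodular cover,
# specialised to weighted set cover: f(S) = size of the union of the
# chosen sets; each iteration adds the set with the best ratio of
# cost to marginal coverage. Assumes the instance is feasible.

def greedy_submodular_cover(ground, sets, cost):
    """sets: dict name -> frozenset; cost: dict name -> positive cost."""
    covered, solution = set(), []
    while covered != ground:
        # pick the set minimising cost per newly covered element
        best = min(
            (s for s in sets if sets[s] - covered),
            key=lambda s: cost[s] / len(sets[s] - covered),
        )
        solution.append(best)
        covered |= sets[best]
    return solution

ground = set(range(6))
sets = {"A": frozenset({0, 1, 2, 3}), "B": frozenset({3, 4}),
        "C": frozenset({4, 5}), "D": frozenset({0, 5})}
cost = {"A": 4, "B": 3, "C": 3, "D": 2}
solution = greedy_submodular_cover(ground, sets, cost)
print(solution)  # ['A', 'C']
```

The modification discussed in the talk would additionally delete redundant elements in each iteration; that step is omitted here.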
Speaker: László Végh
The Nash social welfare (NSW) problem asks for an allocation of indivisible items to agents that maximizes the geometric mean of the agents' valuations. This is a common objective for fair division, representing a balanced tradeoff between the often conflicting requirements of fairness and efficiency. The problem is NP-hard already for additive valuation functions. In the talk, I give an overview of the state of the art in approximation algorithms for this problem. In particular, I will describe a constant-factor approximation for the general class of submodular valuations, using a simple combination of matching and local search techniques. I will also discuss the asymmetric version of the problem and its compatibility with envy-freeness criteria.
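The objective can be made concrete on a toy instance (editor's example, not from the talk): for additive valuations, the NSW of an allocation is the geometric mean of the agents' total values, and for small instances the optimum can be found by brute force.

```python
# A small illustration of the Nash social welfare objective for
# additive valuations, with brute-force search over all allocations
# of 3 indivisible items to 2 agents. Toy data, editor's example.

from itertools import product
from math import prod

valuations = [            # valuations[agent][item]
    [4, 1, 2],            # agent 0
    [1, 3, 3],            # agent 1
]

def nsw(allocation, valuations):
    """Geometric mean of agents' total values.
    allocation[item] = index of the agent receiving that item."""
    n = len(valuations)
    totals = [sum(valuations[a][i] for i, owner in enumerate(allocation)
                  if owner == a) for a in range(n)]
    return prod(totals) ** (1.0 / n)

best = max(product(range(2), repeat=3), key=lambda al: nsw(al, valuations))
print(best, round(nsw(best, valuations), 3))  # (0, 1, 1) 4.899
```

Here the NSW optimum gives item 0 to agent 0 and items 1 and 2 to agent 1, balancing the two agents' values (4 and 6) rather than maximizing their sum.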
Speaker: Max Deppert
For the well-known makespan minimization problem on identical parallel machines (P||Cmax) we give a new efficient polynomial-time approximation scheme (EPTAS). A novelty of this scheme is that it admits an implementation that beats the currently best approximation ratio of MULTIFIT. A generalization of P||Cmax are scheduling problems with batch setup times, and we present constant-factor approximation algorithms for the three major settings. We achieve similar results for many shared resources scheduling (MSRS), for which we present constant-factor approximation algorithms and two EPTAS results. For the scheduling problem of uniprocessor real-time task systems we give a new approach to computing worst-case response times, which even admits the first proof of a polynomial-time algorithm for harmonic systems in the presence of task release jitter. In more detail, we prove a duality between Response Time Computation (RTC) and the Mixing Set problem. Both problems can be expressed as block-structured integer programs, and closely related problems are Directed Diophantine Approximation and simultaneous congruences. In the most common setting of the latter, each congruence has to have a certain remainder; we relax this so that the remainder of each congruence may lie in a given interval. Interestingly, for harmonic divisors we can compute the smallest solution in polynomial time.
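For background on P||Cmax (this is the classical LPT heuristic, not the EPTAS from the talk): assigning jobs in decreasing size to the currently least-loaded machine guarantees a makespan within 4/3 - 1/(3m) of optimal.

```python
# A sketch of the classical LPT (longest processing time) rule for
# P||Cmax: sort jobs by decreasing size and always assign the next job
# to the least-loaded machine, tracked here with a min-heap.

import heapq

def lpt_makespan(jobs, m):
    loads = [0] * m                      # one load per machine
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True): # longest jobs first
        lightest = heapq.heappop(loads)  # least-loaded machine
        heapq.heappush(loads, lightest + p)
    return max(loads)

# LPT yields makespan 10 here, while the optimum {5,4} / {3,3,3} is 9,
# consistent with the 4/3 - 1/(3m) guarantee.
print(lpt_makespan([5, 4, 3, 3, 3], 2))  # 10
```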
Speaker: Petter Brändén
Negative dependence traditionally models repelling particles in statistical physics. In recent years it has become evident that negative dependence inequalities appear in several fundamental problems in diverse areas such as combinatorics, probability theory, algebraic geometry, quantum mechanics, and computer science. Attempts to create a mathematical theory for negative dependence had long failed. However, in recent years two different successful approaches to proving negative dependence inequalities have been developed: one using Hodge theory, and the other using the geometry of zeros of multivariate polynomials. The theory of Lorentzian polynomials merges these two approaches. In this talk we will introduce Lorentzian polynomials and give applications to matroid theory and the random cluster model. This talk is based on joint work with June Huh and Jonathan Leake.
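A classical negative dependence inequality can be verified directly on a tiny example (the example is the editor's, not from the talk): for a uniformly random spanning tree T, edges are negatively correlated, Pr[e ∈ T and f ∈ T] ≤ Pr[e ∈ T] · Pr[f ∈ T].

```python
# A toy check of negative correlation for uniform spanning trees,
# verified by enumerating the spanning trees of the 4-cycle C4.

from itertools import combinations

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]    # the cycle C4 on 4 vertices

def is_spanning_tree(tree):
    """A set of 3 edges on 4 vertices is a tree iff it is connected."""
    reached, frontier = {0}, [0]
    while frontier:
        v = frontier.pop()
        for a, b in tree:
            for u, w in ((a, b), (b, a)):
                if u == v and w not in reached:
                    reached.add(w)
                    frontier.append(w)
    return len(reached) == 4

trees = [t for t in combinations(edges, 3) if is_spanning_tree(t)]
n = len(trees)                               # 4 spanning trees of C4
for e, f in combinations(edges, 2):
    p_e = sum(e in t for t in trees) / n
    p_f = sum(f in t for t in trees) / n
    p_ef = sum(e in t and f in t for t in trees) / n
    assert p_ef <= p_e * p_f                 # negative correlation holds
print(n)  # 4
```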
Speaker: Nemanja Draganić
The study of Ramsey numbers is a cornerstone of combinatorics, offering insights into the unavoidable structure within large graphs. Among these, size-Ramsey numbers focus on the smallest graphs (in terms of the number of edges) that guarantee a specified monochromatic subgraph under any k-edge-coloring. Introduced by Erdős et al. in 1978, size-Ramsey numbers have been the subject of extensive research, particularly over the past few decades. This talk will explore both classical results and recent breakthroughs in size-Ramsey theory, with a particular emphasis on new bounds, methods, and open problems.
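The arrowing notion underlying size-Ramsey numbers can be checked by brute force on tiny cases (a classical fact, editor's example): every 2-edge-coloring of K6 contains a monochromatic triangle, while K5 admits one with none, i.e. R(3,3) = 6.

```python
# A small brute-force check of the Ramsey statement R(3,3) = 6:
# K6 "arrows" the triangle under 2-edge-colorings, K5 does not.

from itertools import combinations, product

def arrows_triangle(n):
    """True iff every 2-coloring of K_n's edges has a mono triangle."""
    edges = list(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    for coloring in product((0, 1), repeat=len(edges)):
        color = dict(zip(edges, coloring))
        if not any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                   for a, b, c in triangles):
            return False        # found a triangle-free 2-coloring
    return True

print(arrows_triangle(5), arrows_triangle(6))  # False True
```

Size-Ramsey numbers refine this by asking for the minimum number of edges, rather than vertices, of a graph that arrows the target.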
Speaker: Christoph Hertrich
How can discrete mathematics and theoretical computer science be used to understand neural networks? Guided by this question, I will focus on neural networks with rectified linear unit (ReLU) activations, a standard model and important building block in modern machine learning pipelines. The functions represented by such networks are continuous and piecewise linear. But how does the set of representable functions depend on the architecture? And how difficult is it to train such networks to optimality? In my talk I will answer fundamental questions like these using methods from polyhedral geometry, combinatorial optimization, and complexity theory.
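A standard fact behind the piecewise linear picture (editor's illustration, not specific to the talk's results): a tiny ReLU network computes the maximum of two numbers exactly, via the identity max(a, b) = a + relu(b - a).

```python
# A hand-built ReLU network computing max(a, b) exactly, illustrating
# that ReLU networks represent continuous piecewise linear functions.

def relu(x):
    return max(0.0, x)

def relu_max(a, b):
    # one hidden ReLU unit with input weights (-1, 1), plus a skip
    # connection carrying a to the output: a + relu(b - a) = max(a, b)
    return a + relu(-1.0 * a + 1.0 * b)

print(relu_max(2.0, 5.0), relu_max(5.0, 2.0))  # 5.0 5.0
```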
Speaker: Bento Natura
We present novel algorithms to solve various subclasses of linear programs, with a particular focus on strongly polynomial algorithms: those that perform a number of elementary operations polynomial in the problem's dimension, independent of the numerical size of the input. Although subclasses like bipartite matching and maximum flow are known to be solvable in strongly polynomial time, the existence of such algorithms for general linear programs remains one of the most challenging open questions in discrete mathematics and beyond. While not achieving a solution for general linear programs, we introduce a new algorithm that recovers strong polynomiality for many subclasses and resolves the long-standing open problem of whether linear programs with at most two variables per row or column can be solved in strongly polynomial time. This algorithm is noteworthy for being competitive with any algorithm in the extensive theory of path-following methods in linear programming.
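For readers who want the basic object in hand (a back-to-basics sketch by the editor, unrelated to the talk's algorithm): a two-variable LP max c·x subject to Ax ≤ b can be solved by enumerating vertices, i.e. feasible intersections of constraint pairs.

```python
# Solving a tiny two-variable LP  max c.x  s.t.  Ax <= b  by vertex
# enumeration: intersect every pair of constraint lines, keep feasible
# points, and return the best objective value. Toy instance below.

from itertools import combinations

A = [(1, 0), (0, 1), (1, 1)]   # x <= 4, y <= 3, x + y <= 5
b = [4, 3, 5]
c = (2, 3)                     # maximize 2x + 3y

def solve_lp(A, b, c):
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if det == 0:
            continue           # parallel constraints, no vertex
        x = (b[i] * a4 - a2 * b[j]) / det    # Cramer's rule
        y = (a1 * b[j] - b[i] * a3) / det
        if all(ai * x + aj * y <= bi + 1e-9 for (ai, aj), bi in zip(A, b)):
            value = c[0] * x + c[1] * y
            if best is None or value > best[0]:
                best = (value, (x, y))
    return best

opt = solve_lp(A, b, c)
print(opt)  # (13.0, (2.0, 3.0))
```

This exhaustive approach is exponential in general; the point of the research above is algorithms whose operation count is polynomial in the dimension.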
Speaker: Meike Neuwohner
The weighted $k$-Set Packing problem is defined as follows: As input, we are given a collection $\mathcal{S}$ of sets, each of cardinality at most $k$, and a positive weight for each set. The task is to compute a subcollection $A$ of $\mathcal{S}$ such that the sets in $A$ are pairwise disjoint, and the total weight of $A$ is maximum. The case $k=2$ is equivalent to the Maximum Weight Matching problem. For $k\geq 3$, already the special case of unit weights, the unweighted $k$-Set Packing problem, is NP-hard. The technique that has proven most successful in designing approximation algorithms for both the unweighted and the weighted $k$-Set Packing problem is local search. For the unweighted problem, the state-of-the-art is a polynomial-time $\frac{k+1}{3}+\varepsilon$-approximation by Fürer and Yu that takes into account certain local improvements of up to logarithmic size. For general weights, we recently showed that by considering a special class of local improvements of logarithmically bounded size, we can obtain a polynomial-time $\frac{k+\varepsilon_k}{2}$-approximation, where $\lim_{k\rightarrow\infty} \varepsilon_k=0$. We further proved that this result is asymptotically best possible: For general weights, one cannot achieve a better guarantee than $\frac{k}{2}$ by only considering local improvements of logarithmically bounded size. At first glance, this seems to conclude the story of local improvement algorithms for the weighted $k$-Set Packing problem. However, in this talk we show how to employ a black box algorithm for the unweighted $k$-Set Packing problem to generate local improvements of super-logarithmic size and obtain a polynomial-time $(0.4986\cdot k+0.5194)$-approximation algorithm for weighted $k$-Set Packing. Our techniques further allow us to establish a general connection between the approximation guarantees achievable for unit and general weights.
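To make the local search paradigm concrete (a toy editor's sketch, far weaker than the logarithmic-size improvements discussed in the talk): starting from a maximal packing and repeatedly swapping one chosen set for two new disjoint sets already improves on plain greedy for unweighted instances.

```python
# A toy 1-out, 2-in local search for unweighted k-Set Packing: start
# from a maximal packing, then repeatedly replace one chosen set by
# two disjoint new sets whenever that grows the packing.

from itertools import combinations

def local_search_packing(sets):
    packing = []
    for s in sets:                       # start with any maximal packing
        if all(s.isdisjoint(t) for t in packing):
            packing.append(s)
    improved = True
    while improved:                      # each round grows the packing,
        improved = False                 # so this terminates
        for out in list(packing):
            rest = [t for t in packing if t is not out]
            candidates = [s for s in sets if s not in rest
                          and all(s.isdisjoint(t) for t in rest)]
            for s1, s2 in combinations(candidates, 2):
                if s1.isdisjoint(s2):    # 1-out, 2-in improvement
                    packing = rest + [s1, s2]
                    improved = True
                    break
            if improved:
                break
    return packing

sets = [frozenset({1, 2, 3}), frozenset({1, 4}),
        frozenset({2, 5}), frozenset({3, 6})]
print(len(local_search_packing(sets)))  # 3: greedy alone finds only 1
```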