**Monday, February 11, 2013**

Technische Universität Berlin

Institut für Mathematik

Str. des 17. Juni 136

10623 Berlin

room MA 041

**Lecture 1 - 14:15**

*Abstract:*

In this talk we survey a generalized notion of convexity that we have
introduced and studied: a (fully d-dimensional) set in d-dimensional
space is k-convex when every line intersects its interior in at most k
connected components. The usual notion of convexity coincides with
1-convexity under this definition.
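To illustrate the definition in the plane, here is a minimal sketch (not from the talk) that counts the connected components in which a given line meets the interior of a simple polygon. It assumes general position, i.e. the line avoids all polygon vertices; each consecutive pair of proper boundary crossings along the line bounds one interior component.

```python
def line_components(poly, p, d):
    """Number of connected components in which the line through point p
    with direction d intersects the interior of the simple polygon poly
    (a list of (x, y) vertices). Assumes the line avoids all vertices."""
    def side(q):
        # Sign of the cross product d x (q - p): which side of the line q lies on.
        return d[0] * (q[1] - p[1]) - d[1] * (q[0] - p[0])

    crossings = 0
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        if side(a) * side(b) < 0:  # edge properly crosses the line
            crossings += 1
    # A simple closed boundary crosses the line an even number of times;
    # each consecutive pair of crossings bounds one interior component.
    return crossings // 2

square = [(0, 0), (2, 0), (2, 2), (0, 2)]            # convex, i.e. 1-convex
u_shape = [(0, 0), (3, 0), (3, 3), (2, 3),
           (2, 1), (1, 1), (1, 3), (0, 3)]           # a "U": 2-convex

print(line_components(square, (-1, 1), (1, 0)))      # 1
print(line_components(u_shape, (-1, 2), (1, 0)))     # 2
```

A polygon is k-convex exactly when this count never exceeds k over all lines; the sketch only evaluates one line at a time, whereas the recognition results in the talk concern the maximum over all lines.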

Our interest lies especially in low values of k, focusing on dimension
2, and mostly on polygons in the plane. Polygons that are k-convex can
be triangulated with fast yet simple algorithms. However, recognizing
them in general is a 3SUM-hard problem.
We give an alternative characterization of 2-convex polygons, a
particularly interesting class, and show how to recognize them in
O(n log n) time, where n is the number of vertices. A description of
their shape is given as well, which leads to Erdős-Szekeres type
results regarding subsets of their vertex sets. We also introduce the
concept of generalized geometric permutations, and show that their
number can be exponential in the number of 2-convex objects considered.

We also generalize the notion of k-convexity to finite point sets:
A set of n points is considered to be k-convex if there exists a
spanning polygonization such that the corresponding polygon is
k-convex.
As the main combinatorial result, we show that every point set of size
n always contains a subset of Omega(log^2 n) points that are in
2-convex position. This bound is tight. From an algorithmic point of
view, we show that 2-convexity of a point set can be decided in
polynomial time, while the corresponding problem for larger degrees
of convexity is NP-complete.

The results we overview have been developed in works with several
coauthors: O. Aichholzer, F. Aurenhammer, E.D. Demaine, T. Hackl,
A. Pilz, P.A. Ramos, J. Urrutia, P. Valtr, and B. Vogtenhuber.

**Lecture 2 - 16:00**

*Abstract:*

The complexity of a polynomial is defined as the minimal number of
arithmetic operations required to build it up from the variables and constants.
Proving lower bounds on the complexity of specific polynomials is very challenging:
the most famous such question is the permanent versus determinant problem.
It turns out that, for its study, one may focus on arithmetic circuits of depth four.
In the talk we shall survey two recent approaches to this problem that lead to new and
fascinating mathematical questions.

For univariate polynomials, depth four circuits express a polynomial as a sum of
products of sparse polynomials of small size. How many real zeros may such
a polynomial have? Pascal Koiran discovered a surprising connection:
if the number of real zeros is necessarily small (at most polynomial in the circuit size),
then the permanent is hard (its complexity is superpolynomial in the number of variables).
The corresponding question of real algebra is wide open.
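For a single sparse polynomial, a classical bound of this kind is Descartes' rule of signs: a real polynomial with t nonzero terms has at most t - 1 positive real zeros. The open question above concerns sums of products of sparse polynomials, where no comparable bound is known. A minimal sketch of the sign-change count, purely as illustration:

```python
def descartes_bound(coeffs):
    """Upper bound on the number of positive real zeros of the polynomial
    with the given coefficients (ascending by degree): the number of sign
    changes in the sequence of nonzero coefficients (Descartes' rule)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^3 - x has coefficients 0, -1, 0, 1: one sign change, so at most one
# positive real zero (it has exactly one, at x = 1).
print(descartes_bound([0, -1, 0, 1]))  # 1
```

Note in particular that the bound depends only on the number of nonzero terms, not on the degree, which is what makes sparsity relevant to the real-zero question.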

In another direction, Gupta, Kamath, Kayal and Saptharishi recently proved
an exponential lower bound on depth four circuits computing the permanent
or determinant. This is based on quantifying the algebraic relations
between the higher-order partial derivatives of the polynomial in question.
While the bound obtained for the determinant seems to be close to optimal,
there seems to be room for improvement for the permanent. For this,
understanding the Hilbert function of the ideal of permanental minors is required.
(One has a good understanding of the Hilbert function of determinantal minors.)

Last updated:
29.01.2013