In conjunction with IPCO XI, there was a summer school on Jun 6-7, 2005, also at TU Berlin. Three leading experts gave extended lectures on recent developments in randomized methods in combinatorics and optimization.
Martin Dyer (U Leeds): Approximate Counting
Daniel Spielman (M.I.T., Yale U): Fast, randomized algorithms for partitioning, sparsification, and the solution of linear systems
Jiri Matousek (Charles U Prague): Generalized linear programming
The program is also available.
Lecture Notes (PDF)
We use techniques developed in the study of random walks and random matrices to partition graphs, sparsify graphs, and solve sparse systems of linear equations of the form Ax=b, where A is symmetric and diagonally dominant. The resulting algorithm for approximately solving such linear equations takes nearly-linear time. In this lecture, we will explain the combinatorial parts of these algorithms.
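To make the problem setting concrete, here is a minimal sketch of solving Ax=b for a symmetric, strictly diagonally dominant matrix using classical Jacobi iteration. This is only an illustration of the class of systems involved, not the nearly-linear-time algorithm of the lecture; the matrix below is a hypothetical example.

```python
# Classical Jacobi iteration: a minimal illustration of solving Ax = b when
# A is symmetric and strictly diagonally dominant. This is NOT the
# nearly-linear-time solver of the lecture, only the problem setting.
def jacobi_solve(A, b, iters=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Each update uses only the previous iterate, so order is irrelevant.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Hypothetical example: a symmetric, diagonally dominant tridiagonal system.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = jacobi_solve(A, b)  # converges to [1.0, 1.0, 1.0]
```

Strict diagonal dominance guarantees convergence of this iteration; the point of the lecture is to achieve comparable accuracy in time nearly linear in the number of nonzeros.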
We begin by defining what it means for one graph to approximate another. To solve our linear systems, we must be able to find, for every graph, a very sparse graph that approximates it well. By a very sparse graph, we mean a graph on n vertices with at most n(1+o(1)) edges.
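One standard way to make "approximates" precise in this spectral setting is via Laplacian quadratic forms: H approximates G well if x^T L_H x is within a small multiplicative factor of x^T L_G x for every vector x. A toy sketch, on a hypothetical 4-cycle whose edge weights are uniformly inflated so the ratio is exact:

```python
import random

def laplacian_quadratic(edges, x):
    """Evaluate x^T L x = sum over weighted edges (u, v, w) of w*(x[u]-x[v])^2."""
    return sum(w * (x[u] - x[v]) ** 2 for u, v, w in edges)

# Hypothetical example: G is a 4-cycle; H is G with every weight scaled by 1.1,
# so H is a 1.1-approximation of G in the quadratic-form sense.
random.seed(0)
G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
H = [(u, v, 1.1 * w) for u, v, w in G]
x = [random.random() for _ in range(4)]
ratio = laplacian_quadratic(H, x) / laplacian_quadratic(G, x)  # exactly 1.1
```

A genuine sparsifier achieves such a bound for all x while keeping only n(1+o(1)) of the edges, which is the hard part.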
Our very sparse approximations of graphs are obtained by combining a modification of a construction of Vaidya with an algorithm for approximating arbitrary graphs by sparse graphs. This latter algorithm exploits random sampling and an algorithm for graph partitioning.
Our analysis of how well a random subgraph of a graph approximates the original applies a generalization of a technique of Furedi and Komlos. We find that random sampling only provides a good approximation when the original graph has high conductance.
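The sampling step itself is simple to sketch: keep each edge independently with some probability p and scale surviving weights by 1/p, so every Laplacian quadratic form is preserved in expectation. As the paragraph above notes, uniform sampling of this kind concentrates well only on high-conductance graphs; the example below is hypothetical and uses the complete graph K6.

```python
import random

def sample_subgraph(edges, p, rng):
    """Keep each edge independently with probability p, scaling its weight
    by 1/p so the Laplacian quadratic form is preserved in expectation."""
    return [(u, v, w / p) for u, v, w in edges if rng.random() < p]

# Hypothetical example: uniform sampling of the complete graph K6.
K6 = [(u, v, 1.0) for u in range(6) for v in range(u + 1, 6)]
H = sample_subgraph(K6, 0.5, random.Random(1))
# Every surviving edge has weight 1.0 / 0.5 = 2.0.
```

Whether the sampled graph is close to the original with high probability, not just in expectation, is exactly where the Furedi-Komlos-style matrix concentration argument and the conductance assumption enter.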
Accordingly, we require an algorithm that can quickly partition a graph into components of high conductance while cutting few edges. We apply a theorem of Lovasz and Simonovits to show how to obtain such an algorithm by simulating truncated random walks.
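The two quantities involved, conductance and the behavior of a lazy random walk, can be sketched on a small hypothetical graph: two triangles joined by a single edge. This illustrates why walks started inside a high-conductance piece stay there (the basis for finding cuts by sweeping over walk probabilities); it is not the Lovasz-Simonovits analysis itself.

```python
def conductance(adj, S):
    """phi(S) = (edges leaving S) / min(vol(S), vol(V - S)) for unweighted adj."""
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_T = sum(len(adj[u]) for u in adj) - vol_S
    return cut / min(vol_S, vol_T)

def lazy_walk(adj, start, steps):
    """Distribution of a lazy random walk (stay put with prob 1/2) after `steps` steps."""
    p = {u: 0.0 for u in adj}
    p[start] = 1.0
    for _ in range(steps):
        q = {u: 0.5 * p[u] for u in adj}
        for u in adj:
            for v in adj[u]:
                q[v] += 0.5 * p[u] / len(adj[u])
        p = q
    return p

# Hypothetical graph: triangles {0,1,2} and {3,4,5} joined by the edge (2,3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
phi = conductance(adj, {0, 1, 2})     # cut = 1, both sides have volume 7
p = lazy_walk(adj, 0, 5)              # most mass remains in the first triangle
```

The low-conductance cut {0,1,2} acts as a bottleneck: after a few steps the walk's probability mass is still concentrated on the side where it started, which is the signal a truncated-walk partitioner exploits.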
This is joint work with Shang-Hua Teng.
Lecture Notes (PDF)
Linear programming is concerned with optimizing linear functions over convex polytopes. Attempts to analyze the running time of the simplex method, as well as other motivations, have led to the notion of abstract objective functions on convex polytopes, which are linear orderings of the vertices that share some simple properties of orderings induced by generic linear functions. Several other axiomatic frameworks generalizing linear programming have been introduced as well. In addition to linear programming, they encompass many other important geometric optimization problems. Some of the algorithms for linear programming can be expressed and analyzed in these frameworks.
The talk is meant as an introduction to the concepts mentioned above, and on the more technical side, it will outline a recent result of Szabo and the speaker on the bad worst-case performance of the simplex method with a certain randomized pivot rule on cubes with abstract objective functions.
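As a small illustration of the setup (not the Szabo-Matousek construction), here is a randomized improving-edge pivot rule on the n-cube, where the objective is any injective labeling of the vertices. The labeling below happens to be induced by a genuine linear function, so every improving move clears one set bit and the walk reaches the unique sink quickly; abstract objective functions are precisely the labelings that keep the unique-sink property on every face without coming from a linear function.

```python
import random

def random_edge(values, dim, start, rng):
    """Random improving-edge pivoting on the dim-cube: from the current
    vertex, move along a uniformly random improving edge until no neighbor
    is better. `values` maps each vertex (a dim-bit integer) to a distinct
    number, playing the role of an abstract objective function."""
    v, steps = start, 0
    while True:
        better = [v ^ (1 << i) for i in range(dim)
                  if values[v ^ (1 << i)] < values[v]]
        if not better:
            return v, steps  # v is a sink: no improving edge remains
        v = rng.choice(better)
        steps += 1

# Hypothetical example on the 3-cube: vertex v gets objective value v itself,
# i.e. the linear function with coefficients 1, 2, 4 on the coordinate bits.
values = {v: v for v in range(8)}
sink, steps = random_edge(values, 3, 0b111, random.Random(0))
# Each improving move clears one set bit, so exactly 3 steps reach sink 0.
```

For this benign linear objective the walk takes dim steps from the all-ones vertex; the point of the worst-case result is that carefully built abstract objective functions force randomized pivot rules of this kind to take far longer.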
Lecture Notes (PowerPoint)
Pages maintained by Marc Pfetsch (email@example.com); last updated: June 14, 2005