**May 25, 2009**

Humboldt-Universität zu Berlin

Institut für Informatik

Johann von Neumann-Haus

Rudower Chaussee 25

Raum 4.112

**Lecture - 14:15**

*Abstract:*

We are often confronted with huge amounts of information generated by
large-scale complex information systems, data that often cannot even be
stored in its entirety. Examples include the World Wide Web as a source
of information, performance measurements in networks, etc. In many of
these applications, polynomial-time algorithms that are efficient on
relatively small inputs may become impractical for input sizes of
several gigabytes.

Managing and analyzing such data sets forces us to revisit the
traditional notions of efficient algorithms. For example, approximation
algorithms for clustering problems in metric spaces with $n$ points
typically have $\Omega(n^2)$ running time, since their input size is
$\Theta(n^2)$. Clearly, such a running time is not feasible for massive
data sets. Constructing a sublinear-time algorithm may seem an
impossible task, since such an algorithm can read only a small fraction
of the input. However, in recent years we have seen the development of
sublinear-time algorithms for optimization problems arising in areas as
diverse as graph theory, geometry, algebraic computations, and computer
graphics.

In this talk, we will present a few examples of sublinear algorithms.
Our main focus will be on techniques used to design sublinear-time
algorithms that estimate basic graph parameters (e.g., the average
degree of a graph or the cost of its minimum spanning tree). We will
also discuss the framework of property testing, an alternative notion of
approximation for decision problems, which has been applied to obtain
sublinear algorithms for a wide variety of problems, with particular
attention to the so-called bounded-degree model of computation.
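To illustrate the flavour of such algorithms, here is a minimal sketch
of estimating the average degree by sampling, assuming only a degree
oracle for the graph (the function `degree_of` and all parameter choices
below are illustrative, not taken from the talk). Naive uniform sampling
already hints at the difficulty: a few very high-degree vertices are
easily missed, which is why the algorithms discussed in the talk use
more refined sampling to obtain provable guarantees with $o(n)$ queries.

```python
import random

def estimate_average_degree(degree_of, n, samples=500, seed=0):
    # Query the degrees of `samples` uniformly random vertices and
    # average the answers. This is a naive sketch, not the refined
    # estimator from the talk: rare high-degree vertices can be
    # missed entirely, biasing the estimate downwards.
    rng = random.Random(seed)
    return sum(degree_of(rng.randrange(n)) for _ in range(samples)) / samples

# On a d-regular graph every sampled vertex reports d, so the
# estimate is exact regardless of which vertices are sampled.
n, d = 10_000, 3
print(estimate_average_degree(lambda v: d, n))  # 3.0
```

Note that the algorithm reads only `samples` degrees, a vanishing
fraction of the $\Theta(n^2)$-size input, which is the defining feature
of a sublinear-time algorithm.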

**Colloquium - 16:00**

*Abstract:*

One of the first theorems in many graph theory courses says that every
simple graph on at least two vertices contains a pair of vertices with
equal degrees. This is no longer true for multigraphs, which may be
completely irregular, i.e., have no two vertices of equal degree. Given
a simple graph, what is the most efficient way to replace some edges by
multi-edges so as to make it an irregular multigraph? More precisely,
what is the smallest $k$ for which there is a weight function $f:
E(G)\to [k]$ such that all the weighted degrees are different? (This is
always possible as long as all components have at least three vertices.)
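For very small graphs this parameter (the irregularity strength) can be
found by exhaustive search. The following sketch, with an illustrative
edge-list representation assumed here, simply tries all weightings with
values in $\{1,\dots,k\}$ for increasing $k$:

```python
from itertools import product

def irregularity_strength(vertices, edges):
    # Brute-force the smallest k for which some weighting
    # f: E -> {1, ..., k} makes all weighted degrees distinct.
    # Exponential in |E|; a toy illustration for tiny graphs only.
    k = 1
    while True:
        for weights in product(range(1, k + 1), repeat=len(edges)):
            wdeg = {v: 0 for v in vertices}
            for (u, v), w in zip(edges, weights):
                wdeg[u] += w
                wdeg[v] += w
            if len(set(wdeg.values())) == len(vertices):
                return k
        k += 1

# Path a-b-c: k=1 forces equal degrees on the two leaves, while
# weights 1 and 2 give weighted degrees 1, 3, 2.
print(irregularity_strength("abc", [("a", "b"), ("b", "c")]))  # 2
```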

One can vary this question and ask only for the weighted degrees of
adjacent vertices to be different. Here, for some graphs, $k=1$ is
enough. It was conjectured by Karo{\'n}ski, {\L}uczak and Thomason that
$k=3$ is sufficient for all graphs without components of size $2$.
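The conjecture can be checked by brute force on small instances. The
sketch below (again with an assumed, illustrative edge-list
representation) searches for a weighting in $\{1,\dots,k\}$ under which
adjacent vertices receive different weighted degrees:

```python
from itertools import product

def neighbour_distinguishing(vertices, edges, k=3):
    # Search for f: E -> {1, ..., k} such that the weighted degrees
    # of adjacent vertices differ. The conjecture asserts that k = 3
    # always suffices when no component is a single edge.
    for weights in product(range(1, k + 1), repeat=len(edges)):
        wdeg = {v: 0 for v in vertices}
        for (u, v), w in zip(edges, weights):
            wdeg[u] += w
            wdeg[v] += w
        if all(wdeg[u] != wdeg[v] for u, v in edges):
            return dict(zip(edges, weights))
    return None

# The triangle genuinely needs three weight values: with weights
# from {1, 2} alone, two of its vertices always tie.
f = neighbour_distinguishing("abc", [("a", "b"), ("b", "c"), ("a", "c")])
```

On the triangle, all vertices are pairwise adjacent, so this coincides
with the full irregularity requirement and a valid weighting such as
$1, 2, 3$ is found.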

In this talk, I will address both questions (and some more related ones)
and show how to get new "records". Most of this is joint work with M.
Karo{\'n}ski and M. Kalkowski.

Last updated:
11.05.2009