How does the algorithm work?

The following explanation gives a glimpse of our disjoint routing algorithm. We place particular emphasis on a mathematically rather involved subroutine that computes lower bounds on the optimal objective function value. For more details, please don't hesitate to contact us directly.

A powerful mathematical technique called Lagrangian Relaxation lies at the heart of our algorithm. We demonstrate this technique with a few screenshots of a simple example taken from our visualization tool. You can click on each thumbnail to get the original-size image.

We consider the instance given in Figure 1. It consists of a transmission network (the underlying graph with the dotted edges) with individual edge costs and a set of colored demands that have to be embedded edge-disjointly into the network. That is, for each demand we want to find a connecting path in the transmission network such that each edge is used by at most one path. We want to minimize the total cost of these paths, i.e., the sum of the costs of all used edges. This problem is NP-hard and therefore notoriously difficult.

[the problem]


Figure 1: The problem.

[first iteration]


Figure 2: First iteration.



[second iteration]


Figure 3: Second iteration.



[solution]


Figure 4: The solution.

The idea of Lagrangian Relaxation is to neglect for a moment the fact that we need disjoint paths. What remains is a multiple-source, multiple-destination shortest-paths problem, which can be solved efficiently. Naturally, computing merely shortest paths for our demands cannot guarantee that they are edge-disjoint. Thus, an iterative process is used to take the disjointness condition into account, which we explain by considering two subsequent iterations (Figures 2 and 3).

Suppose that we just compute these shortest paths. Usually there are conflicts on some edges, meaning that some edges are used by more than one path. In the figures these edges are highlighted by blinking (Fig. 2). We now increase the cost of each conflicting edge by a certain amount and start the next iteration, which computes shortest paths with respect to these new edge costs. Since the former conflict edges are now more expensive, the new shortest paths tend to avoid them. Therefore, the overall routing picture changes from one iteration to the next (Fig. 3). The conflicts occur on different edges in every iteration, and their number tends to decrease, possibly to zero.
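The penalize-and-reroute loop just described can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions, not our production code: the fixed penalty step, the iteration limit, and all function names are made up for this sketch, and the example graph below is hypothetical.

```python
import heapq

def shortest_path(adj, source, target):
    """Dijkstra's algorithm; adj maps node -> list of (neighbor, cost).
    Returns the path as a list of undirected edges (frozensets)."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(frozenset((prev[node], node)))
        node = prev[node]
    return path

def route_disjointly(edges, demands, step=1.0, max_iter=100):
    """Iteratively penalize multiply used edges until the shortest paths
    of all demands are pairwise edge-disjoint (or the iteration limit hits).

    edges:   dict frozenset({u, v}) -> base cost
    demands: list of (source, target) pairs
    """
    penalty = {e: 0.0 for e in edges}
    for _ in range(max_iter):
        # Build adjacency lists with penalized costs c_e + penalty_e.
        adj = {}
        for e, c in edges.items():
            u, v = tuple(e)
            w = c + penalty[e]
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
        paths = [shortest_path(adj, s, t) for s, t in demands]
        # Count how often each edge is used; more than once is a conflict.
        use = {}
        for p in paths:
            for e in p:
                use[e] = use.get(e, 0) + 1
        conflicts = [e for e, k in use.items() if k > 1]
        if not conflicts:
            return paths  # edge-disjoint: feasible routing found
        for e in conflicts:
            penalty[e] += step  # conflict edges become more expensive
    return None
```

On a toy network where two demands initially share one cheap "bridge" edge, a single penalty increase is enough to push one demand onto its slightly more expensive private route, after which the paths are edge-disjoint.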

Although this iterative procedure is easy to describe, a thorough mathematical analysis is needed to ensure both a good lower bound and fast convergence. In fact, it can be shown that in each iteration the sum of the costs of the shortest paths less the sum of all added edge costs is a lower bound on the cost of the optimum disjoint solution.
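The bound stated above amounts to a one-line computation. The numbers below are a hypothetical toy example, not taken from the figures: the function name and inputs are illustrative only.

```python
def lagrangian_lower_bound(penalized_path_costs, added_costs):
    """Lower bound from one iteration, as stated above: the sum of the
    shortest-path costs (computed under the increased edge costs) minus
    the sum of all cost increases.  An optimal disjoint solution pays
    each added cost at most once, so subtracting all of them still
    underestimates its true cost."""
    return sum(penalized_path_costs) - sum(added_costs)

# Toy numbers: two demands whose penalized shortest paths cost 4.0 and
# 3.5 after a total of 1.0 has been added to conflict edges.
bound = lagrangian_lower_bound([4.0, 3.5], [1.0])  # 6.5
```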

In the example, after a few iterations a feasible (i.e., edge-disjoint) solution is found (Fig. 4). If the cost of this solution were equal to the best lower bound found so far, we would know that it is in fact an optimal solution. However, this is not the case here, and we continue with further iterations, hoping to find a better solution. A plot of the obtained lower bounds and the number of multiply used edges is given in Figure 5. Note that the lower bound approaches its maximum value rather quickly. This is the typical behavior of the bound-improvement subalgorithm, and its efficiency contributes largely to the strength of Lagrangian Relaxation. In practice, a run of this bound computation takes on the order of milliseconds, depending on the number of demands.

This bound computation is the "motor" of a Branch-and-Bound algorithm that ultimately solves the given problem instance. The algorithm starts by dispensing with all disjointness constraints. It then generates a series of subproblems that constitute an implicit enumeration of all feasible solutions in the form of a search tree. The branching in this tree, simply put, works as follows. We consider one tree node at a time and route all given demands through the network in that node. If two paths intersect in an edge, two new subproblems are generated: one no longer allows the demand of the first path to use this edge, while the other imposes the same restriction on the demand of the second path. In this way, the network is successively diminished by the conflict edges found. Lower bounds are then computed for both new subproblems.
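The branching step can be sketched as follows. The data structures are hypothetical (in this sketch a subproblem is simply a map from each demand to the set of edges that demand may not use); the point is only that the two children together cover every disjoint solution of the parent.

```python
def branch_on_conflict(forbidden, conflict_edge, demand_a, demand_b):
    """Split one branch-and-bound node into two subproblems.

    forbidden maps each demand to the set of edges it may not use.
    The left child forbids demand_a on the conflict edge, the right
    child forbids demand_b there.  Any edge-disjoint solution uses the
    edge for at most one of the two demands, so it survives in at
    least one child.
    """
    left = {d: set(s) for d, s in forbidden.items()}
    right = {d: set(s) for d, s in forbidden.items()}
    left[demand_a].add(conflict_edge)
    right[demand_b].add(conflict_edge)
    return left, right
```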

Empirically, in most cases a feasible solution is found by the Lagrangian Relaxation subroutine. (Its cost can be interpreted as an upper bound on the optimal solution cost.) A very good lower bound is obtained as well. Maintaining the best available upper and lower bounds helps our Branch-and-Bound algorithm considerably in pruning subtrees of the search tree.
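The pruning test itself is a simple comparison; a minimal sketch with an illustrative function name:

```python
def can_prune(subproblem_lower_bound, incumbent_cost):
    """A subtree whose lower bound already matches or exceeds the cost
    of the best feasible solution found so far (the incumbent) cannot
    contain a strictly better solution, so Branch-and-Bound discards
    it without exploring it."""
    return subproblem_lower_bound >= incumbent_cost
```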

[the plot]


Figure 5: A plot of all iterations.