Lecture 5: Low-Stretch Spanning Trees (and Low-Diameter Decompositions)

07 Sep 2018

Low-Diameter Decomposition Schemes

We did not get around to presenting such a scheme in lecture, but since there are ones that are easy to state and analyze, let me do it here. Recall the definition of a Low-Diameter Decomposition (LDD) scheme: given a graph $G = (V,E)$ and parameters $D, \beta$, you want to find a (random) partition of $V$ into $V_1, V_2, \ldots$, such that

  1. For each $V_j$, the $G$-diameter $\max_{u,v \in V_j} d_G(u,v)$ is at most $D$.
  2. For each pair $u,v$, the probability that $u, v$ lie in different parts is at most $\frac{d_G(u,v)}{D} \beta$.

Observe that we’re asking for $d_G(u,v)$ to be bounded in property (1), so the shortest $u,v$ path may leave $V_j$. (This is often called a weak diameter bound in the jargon.)

Moreover, let us simplify property (2) and ask for it only for pairs $u,v$ that form edges of the graph: this restricted version implies property (2) for all pairs via the triangle inequality. (Make sure you see why.)
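For completeness, here is that one-line calculation (the notation $e_i = (a_i, b_i)$ for the edges of a shortest $u$-$v$ path is mine): if $u$ and $v$ end up in different parts, some edge of the path must have its endpoints in different parts, so by a union bound and property (2) for edges,

$$\Pr[u, v \text{ in different parts}] \;\le\; \sum_{i=1}^{m} \Pr[a_i, b_i \text{ in different parts}] \;\le\; \sum_{i=1}^{m} \frac{d_G(a_i, b_i)}{D}\,\beta \;=\; \frac{d_G(u,v)}{D}\,\beta,$$

where the last equality uses that each edge of a shortest path is itself a shortest path between its endpoints, and these lengths sum to $d_G(u,v)$.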

The Coin-Flipping LDD Scheme

We claimed that for any graph $G$ on $n$ nodes, and any $D$, there exist LDD schemes with $\beta = O(\log n)$. Let’s see one. This is a scheme of Linial and Saks, which also appears in a slightly modified form in a paper of Yair Bartal.

The idea is the first one you may think of: take the graph $G$, pick a node $x$, build a shortest-path tree from $x$, go down some random distance $R$ from $x$ in this tree, and cut all edges that cross at that distance. If $R \leq D/2$, any two nodes $u,v$ in the piece containing $x$ you’ve just carved out have distance at most $d(u,x) + d(x,v) \leq D/2 + D/2 = D$. Call this piece $V_1$, and recurse on the rest of the graph to get $V_2, V_3, \ldots$.

Formally, having chosen some random value $R$, you define $V_1$ to be all $u$ such that $d_G(x,u) \leq R$, and $V' := V \setminus V_1$ to be the rest of the graph. As long as $R \leq D/2$, this construction satisfies property (1) of LDD schemes.

But how do you choose this $R$ to satisfy property (2)? That’s where the fun begins. One natural way is to pick $R$ uniformly at random from $[0,D/2]$, or even from $[D/4,D/2]$, but that does not work. (Can you see an example where it fails?)

A different, and better, way is to define $R$ to be a geometric random variable with bias $p = \frac{4 \log n}{D}$. I.e., repeatedly flip a coin with bias (i.e., probability of getting a heads) equal to $p$ until you see a heads, and let $R$ be the number of flips. So you are growing a ball around the center $x$ of increasing radius: you increase the radius by $1$ when you get a tails, and you cut the current ball out when you get a heads!
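To make this concrete, here is a minimal Python sketch of the scheme for unweighted graphs (the function name `coin_flip_ldd`, the adjacency-dictionary input format, and the use of natural logarithms are my own choices; a weighted version would grow the balls with Dijkstra instead of BFS):

```python
import math
import random
from collections import deque

def coin_flip_ldd(adj, D, seed=None):
    """Coin-flipping ball-carving LDD sketch for an unweighted graph.

    adj: dict mapping each vertex to an iterable of neighbours.
    D:   the target diameter bound.
    Returns a list of parts (sets of vertices) partitioning the vertex set.
    """
    rng = random.Random(seed)
    n = len(adj)
    # Heads probability p = 4 log n / D (capped at 1 so the coin is valid).
    p = min(1.0, 4.0 * math.log(max(n, 2)) / D)

    unassigned = set(adj)
    parts = []
    while unassigned:
        x = next(iter(unassigned))  # an arbitrary centre for the next ball
        # R = number of flips of a bias-p coin until the first heads.
        R = 1
        while rng.random() >= p:
            R += 1
        # Carve out the ball of radius R around x, measured in the graph
        # induced on the still-unassigned vertices ("recurse on the rest").
        part = {x}
        dist = {x: 0}
        queue = deque([x])
        while queue:
            u = queue.popleft()
            if dist[u] == R:
                continue
            for v in adj[u]:
                if v in unassigned and v not in dist:
                    dist[v] = dist[u] + 1
                    part.add(v)
                    queue.append(v)
        unassigned -= part
        parts.append(part)
        # (With small probability R > D/2 and the ball is too large; the
        #  asides below explain how to patch this up by re-running.)
    return parts
```

For example, on the $4$-node path `{0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}` with $D = 2$, the bias $p$ gets capped at $1$, so every ball has radius $1$ and each piece has diameter at most $2$.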

This ball view is very useful in showing the two LDD properties, which we do next:
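Roughly, the calculation goes as follows (this is a sketch of the standard argument; assume for simplicity that all edge lengths are integral, so the ball radius grows in unit steps). For property (1): the ball around a centre $x$ has radius $R$, and

$$\Pr[R > D/2] \;=\; (1-p)^{D/2} \;\le\; e^{-pD/2} \;=\; e^{-2\log n} \;=\; \frac{1}{n^2},$$

so by a union bound over the at most $n$ balls, every piece has radius at most $D/2$ (and hence diameter at most $D$) except with probability at most $1/n$.

For property (2): consider an edge $(u,v)$ of length $\ell$, and look at the ball-growing step whose radius reaches the nearer endpoint, say $u$ (only one step can ever reach $u$, since once it does, $u$ is assigned). From that moment on, the edge is cut only if a heads appears within the next $\ell$ coin flips; if it does not, the ball swallows $v$ as well and the edge sits inside a single piece forever after. Since each flip is heads with probability $p$,

$$\Pr[(u,v) \text{ is cut}] \;\le\; p\,\ell \;=\; \frac{4\log n}{D}\,\ell,$$

which is the per-edge bound needed for $\beta = O(\log n)$.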

This proves the LDD properties we wanted!

Two Asides

We were cheating above: the diameter property (1) could be violated with some tiny probability, right? This is easy to fix: if some cluster has diameter larger than $D$, just repeat the whole process! In expectation you will have to repeat it fewer than $2$ times. And if you union bound the probability that an edge $(u,v)$ is cut over all the repetitions, you get $< d_G(u,v) \cdot \frac{8 \log n}{D}$, which is still fine.
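If you want to see that union bound spelled out (using the bound of $1/n$ on the per-run failure probability from the sketch above): run $t$ happens only if each of the previous $t-1$ runs produced an over-large cluster, so

$$\Pr[(u,v) \text{ is ever cut}] \;\le\; \sum_{t \ge 1} \Big(\frac{1}{n}\Big)^{t-1} \cdot \frac{4 \log n}{D}\, d_G(u,v) \;\le\; \frac{8 \log n}{D}\, d_G(u,v),$$

since $\sum_{t \ge 1} (1/n)^{t-1} = \frac{1}{1 - 1/n} \le 2$ for $n \ge 2$.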

Secondly, in fact this process ensures a stronger diameter property: that the distance $d_{G[V_j]}(u,v) \leq D$, and not just that $d_G(u,v) \leq D$. I.e., the distance between nodes in a part $V_j$ is small, even if you only consider paths that remain within $V_j$ (i.e., in the induced graph $G[V_j]$.) However, the tree-building process that Jason described in lecture need not use this stronger diameter property.

Other LDD Schemes

There are several other LDD schemes: three particularly nice ones are due to Seymour, to Calinescu, Karloff, and Rabani, and to our own Miller, Peng, and Xu. These all have other nice properties, but for a simple first-principles analysis, I find the Bartal LDD scheme very appealing.

Optimality?

Can you get an LDD scheme that achieves $\beta = o(\log n)$ for all graphs and all $D$? The answer is no. Observe that one implication of having an LDD scheme with parameters $(D, \beta)$ is this: given an unweighted graph and $D$, you can partition the vertices into pieces of diameter $D$ such that only a $\beta/D$ fraction of the edges are cut. (Just use property (2) on the edges, and linearity of expectation to observe that the expected number of cut edges is $|E| \cdot \beta/D$. Now there must exist some outcome in which the number of cut edges is at most this expectation.)

Now we use a fact from extremal graph theory: there exists some constant $c$ such that for every $n$ there exist $3$-regular graphs on $n$ nodes in which every simple cycle has length at least $c \log n$. For such a graph $G$, set $D = (c/3) \log n$. Now in any LDD, no $V_j$ can contain a cycle. (Why? Take two nodes just under $(c/2) \log n$ apart along such a cycle: any $G$-path between them shorter than that, together with the cycle arc between them, would close a cycle of length less than $c \log n$; so their $G$-distance exceeds $D$.) So each induced graph $G[V_j]$ is a forest. Hence there must be at least $|E| - (n-1) = (3n/2) - (n-1) > n/2$ cut edges.

Combining with the previous paragraph, $|E| \beta/D > n/2$, so $\beta > D/3 = (c/9) \log n$. Hence, for these graphs you must have $\beta = \Omega(\log n)$.
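For the record, here is the whole calculation in one chain (with $|E| = 3n/2$ and $D = (c/3)\log n$):

$$\frac{n}{2} \;<\; |E| - (n-1) \;\le\; \min_{\text{outcomes}} \#\{\text{cut edges}\} \;\le\; \mathbb{E}\big[\#\{\text{cut edges}\}\big] \;\le\; |E| \cdot \frac{\beta}{D} \;=\; \frac{3n}{2} \cdot \frac{\beta}{D},$$

which rearranges to $\beta > D/3 = (c/9) \log n$.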

LDDs for Special Classes of Graphs

The fact that $\beta = \Omega(\log n)$ for these worst-case graphs does not mean we cannot do better when the graphs are “nice”. Indeed, for the path or the cycle, it is easy to see that you can get $\beta = O(1)$, and the same holds for the $2$-dimensional grid graph. A useful fact is that there exist LDD schemes with $\beta = O(1)$ for all planar graphs (and for even more general classes of graphs). The original result is due to Klein, Plotkin, and Rao, and the current best bounds are due to Abraham, Gavoille, Gupta, Neiman, and Talwar. There is also a simpler proof of the KPR theorem due to James Lee.

Another useful class of graphs: suppose the graph vertices are points in $\mathbb{R}^k$, and the distances $d(u,v)$ are the Euclidean distances between them; then you can achieve $\beta = O(\sqrt{k})$. This is a result of Charikar et al.