Feasible Potentials
Recall the notion of a feasible potential: it is an assignment of values $\phi_u$ to the nodes so that the “reduced” weight $\hat{w}(u,v) = \phi_u + w(u,v) - \phi_v$ is non-negative for every edge. E.g., if the original costs are non-negative, then the all-zeros potential is feasible. Otherwise, we showed how to compute one using a single call to Bellman-Ford. Let’s observe some facts about these potentials.
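For concreteness, here is a sketch of that single Bellman-Ford call (the trick used in Johnson’s algorithm; the function name and graph representation below are just for illustration). It assumes the graph is given as a list of `(u, v, w)` triples and has no negative cycle:

```python
def feasible_potential(nodes, edges):
    """One way to compute a feasible potential with a single Bellman-Ford call.

    `edges` is a list of (u, v, w) triples; assumes there is no negative cycle.
    Conceptually, add a super-source z with a 0-weight edge to every node and
    set phi[u] = d(z, u).  Initializing all labels to 0 plays the role of z,
    and |V| - 1 rounds of relaxation over the original edges then suffice.
    """
    phi = {u: 0 for u in nodes}
    for _ in range(len(nodes) - 1):
        changed = False
        for (u, v, w) in edges:
            if phi[u] + w < phi[v]:   # relax edge (u, v)
                phi[v] = phi[u] + w
                changed = True
        if not changed:
            break
    # Now phi[u] + w - phi[v] >= 0 for every edge (u, v, w).
    return phi
```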
We claim that $\phi_t - \phi_s$ is a lower bound on the shortest-path distance from $s$ to $t$ (with respect to weights $w$). Indeed, consider the shortest path $P$ from $s$ to $t$ w.r.t. $w$: summing the reduced weight over the edges in $P$ gives us $\phi_s + w(P) - \phi_t \geq 0$, or $\phi_t - \phi_s \leq w(P)$.
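Spelling out that sum: writing $P = (s = v_0, v_1, \ldots, v_k = t)$, the potentials telescope:

$$0 \;\leq\; \sum_{i=0}^{k-1} \hat{w}(v_i, v_{i+1}) \;=\; \sum_{i=0}^{k-1} \bigl( \phi_{v_i} + w(v_i, v_{i+1}) - \phi_{v_{i+1}} \bigr) \;=\; \phi_s + w(P) - \phi_t.$$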
Why is this fact interesting? Suppose we set $\phi_s = 0$, and try to maximize $\phi_t$; i.e., we try to get the best lower bound we can via feasible potentials. It turns out that the maximum value of $\phi_t$ is exactly the shortest-path distance from $s$ to $t$. (Indeed, just setting $\phi_u = d_w(s,u)$ gives you such a solution.) Similarly, if you try to maximize $\sum_u \phi_u$ subject to $\phi$ being a feasible potential and $\phi_s = 0$, setting $\phi_u = d_w(s,u)$ gives you the maximum value for this LP as well. So this is a way to solve SSSP via a linear program:
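Written out, one way to state this LP is:

$$\begin{aligned}
\max \quad & \textstyle\sum_{u \in V} \phi_u \\
\text{s.t.} \quad & \phi_v - \phi_u \leq w(u,v) && \text{for every edge } (u,v) \in E, \\
& \phi_s \leq 0.
\end{aligned}$$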
Note that we only constrain $\phi_s \leq 0$; but since $\phi_s$ appears in the objective we are maximizing, the optimal solution will ensure $\phi_s = 0$.
I like to view this LP this way: you have a graph where each edge $(u,v)$ is an inextensible string of length $w_{uv}$. Now you fix the position of node $s$ at zero (i.e., set $\phi_s = 0$) and try to pull the rest of the nodes as far away from zero as you can. Naturally the furthest you can take node $u$ away from the source/origin is its shortest-path distance $d_w(s,u)$. And that is precisely what the LP achieves.
Now suppose we take the dual of this LP. Let the dual variables be $f_{uv}$ for the first set of primal constraints, and $g$ for the last constraint. Since the primal constraints are inequalities, the dual variables are non-negative. And the primal variables are unconstrained, so the dual constraints are equalities.
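Carrying out the dualization (a sketch, based on the primal written above; the exact signs depend on your conventions), the dual looks like:

$$\begin{aligned}
\min \quad & \textstyle\sum_{(u,v) \in E} w(u,v)\, f_{uv} \\
\text{s.t.} \quad & \textstyle\sum_{(u,v) \in E} f_{uv} \;-\; \sum_{(v,x) \in E} f_{vx} \;=\; 1 && \text{for every node } v \neq s, \\
& \textstyle\sum_{(u,s) \in E} f_{us} \;-\; \sum_{(s,x) \in E} f_{sx} \;+\; g \;=\; 1, \\
& f, g \geq 0.
\end{aligned}$$

Interpreting $f_{uv}$ as the amount of flow on edge $(u,v)$, the constraints say each node $v \neq s$ must have net inflow exactly $1$; conservation then forces $s$ to send out $n-1$ units in total.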
This sends flow from $s$ to all nodes in $V \setminus \{s\}$, such that $1$ unit ends at each other node. The cheapest way to send this flow is along shortest paths, so the dual LP computes the SSSP via min-cost flow! This gives two different ways to view the SSSP problem.
By the way, Goran mentioned this paper Negative-Weight Shortest Paths and Unit Capacity Minimum Cost Flow, which gives an improvement over Bellman-Ford. It solves the min-cost flow problem on unit-capacity graphs (and hence the single-source shortest-path problem with negative edge weights) in $\tilde{O}(m^{10/7} \log M)$ time. And indeed, it’s an improvement (for graphs that are sparse enough) over Goldberg’s $O(m\sqrt{n} \log M)$-time algorithm I’d mentioned in class, but it does not improve on Bellman-Ford when the edge weights are not small integers.
Finding the Paths, not just Lengths
In lecture we only discussed computing the lengths of shortest paths, not finding the paths themselves. For all the classical algorithms, this is easy to implement. (Please make sure you see how to find the shortest paths.)
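For example, one standard approach (not tied to any particular algorithm from lecture) is to record a predecessor pointer whenever a relaxation succeeds, and then walk the pointers back from $t$; the helper below is just an illustration of that last step:

```python
def recover_path(pred, s, t):
    """Rebuild an s-t shortest path from predecessor pointers.

    `pred` maps each reachable node v to the node u whose relaxation
    d[v] = d[u] + w(u, v) last succeeded, with pred[s] = None (as maintained
    by Dijkstra or Bellman-Ford).  Returns [s, ..., t], or None if t is
    unreachable.
    """
    if t not in pred:
        return None
    path = []
    v = t
    while v is not None:            # walk back until we pass s (pred[s] is None)
        path.append(v)
        v = pred[v]
    path.reverse()
    return path if path[0] == s else None
```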
For Seidel’s algorithm, it’s a little trickier. Indeed, since the runtime of Seidel’s algorithm is strictly sub-cubic, a first question is: can we even write down the shortest paths in $\approx n^{\omega}$ time, given that the total length of all these paths may be $\Omega(n^3)$? The insight is that we don’t need to write down the paths explicitly: we just need to write down the successor pointers – i.e., for each pair $i,j$, define $S_j(i)$ to be the second node on a shortest $i$-$j$ path (the first node being $i$, and the last being $j$). Then to get the entire $i$-$j$ shortest path, we just follow these pointers: $i, S_j(i), S_j(S_j(i)), \ldots, j$. So there is a representation of all shortest paths that uses at most $O(n^2 \log n)$ bits.
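In code, following the pointers is just a short loop; here is a sketch, assuming the successor matrix is stored as a 2-D array with `S[i][j]` being the second node on some shortest $i$-$j$ path:

```python
def path_from_successors(S, i, j):
    """Recover an i-j shortest path from a successor matrix.

    S[i][j] is the second node on some shortest i-j path; following the
    successors from i yields the whole path.  Assumes j is reachable from i.
    """
    path = [i]
    while path[-1] != j:
        path.append(S[path[-1]][j])
    return path
```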
The main idea for computing the successor matrix in Seidel’s algorithm is to solve the Boolean Product Matrix Witness problem: given $n \times n$ Boolean matrices $A, B$, compute an $n \times n$ matrix $W$ such that $W_{ij}$ is some index $k$ with $A_{ik} = B_{kj} = 1$ (a “witness”), and $W_{ij} = 0$ if no such $k$ exists. We will hopefully see (and solve) this problem in Homework 2.
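Just to make the problem statement concrete, here is the trivial $O(n^3)$ brute force (the point of the homework problem, of course, is to do much better); witnesses are reported 1-indexed below so that $0$ can mean “no witness”:

```python
def bpm_witness_bruteforce(A, B):
    """O(n^3) reference solution for the Boolean Product Matrix Witness problem.

    A, B are n x n 0/1 matrices (lists of lists).  Returns W with
    W[i][j] = some k (1-indexed) such that A[i][k-1] = B[k-1][j] = 1,
    or 0 if no such k exists.
    """
    n = len(A)
    W = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] == 1 and B[k][j] == 1:
                    W[i][j] = k + 1   # report a witness (any one will do)
                    break
    return W
```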
Fast Matrix Multiplication
If you’ve not seen Strassen’s algorithm before, it is an algorithm for multiplying $n \times n$ matrices in time $n^{\log_2 7} \approx n^{2.81}$. It’s quite simple to state, and one can think of it as a 2-dimensional version of Karatsuba’s algorithm for multiplying two numbers. Mike Paterson has a very beautiful geometric interpretation of the sub-problems Strassen comes up with, and how they relate to Karatsuba.
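For reference, here is a sketch of Strassen’s recursion (assuming for simplicity that $n$ is a power of two, and using numpy just for the block additions; the cutoff is an arbitrary choice to stop recursing on small blocks):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices A, B (n a power of two) using Strassen's
    7-multiplication recursion; fall back to numpy below the cutoff."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products (instead of the naive eight):
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the product.
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
```

The recurrence $T(n) = 7\,T(n/2) + O(n^2)$ gives the stated $n^{\log_2 7}$ bound.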
The time for matrix multiplication was later improved by several people, culminating in the Coppersmith-Winograd algorithm, which achieves $\omega \leq 2.376$. The C-W algorithm is similar to Strassen’s in that it breaks the matrices into smaller blocks and recurses on them, but the recursion is quite mysterious, at least to me. Smaller improvements were recently given by Andrew Stothers and (our own) Virginia Vassilevska-Williams. Here’s a survey Virginia wrote giving an overview of these developments.
In 2005, Cohn, Kleinberg, Szegedy, and Umans gave approaches that combine some of the C-W ideas with group-theoretic ones, matching the C-W bound. They also gave conjectures that would lead to $\omega = 2$. However, two years back, in 2016, Blasiak et al. showed hurdles to proving these conjectures.
Even more recently, Alman and Vassilevska-Williams (in this ITCS 2018 paper and an upcoming FOCS 2018 paper) have shown the limitations of all known approaches. Here’s a blog post giving an overview of their results.