Unsurprisingly, the conference program features many sets of talks that interest me but that, thanks to parallel scheduling, I can't attend without a time machine, as well as plenty of blocks of time in which none of the parallel talks is close enough to my own interests to draw my attention. There must be some interesting optimization problems to be solved here: collecting data from prospective attendees about which talks they're likely to want to attend, grouping the talks into tuples to be offered in parallel so as to minimize conflicts, and then grouping the parallel tuples of talks into sessions so as to minimize the number of times everyone has to change sessions. Of course these problems are all likely to be NP-hard, but maybe some approximation is possible.
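A minimal sketch of the conflict-minimization half of that problem, assuming we only pair talks two at a time and only count how many prospective attendees want both talks of a pair (the talk names and preference data here are made up for illustration):

```python
from itertools import combinations

def greedy_parallel_pairs(interest):
    """Greedily pair talks so that each parallel pair is wanted by as few
    common attendees as possible.  `interest` maps each talk name to the
    set of attendees who said they want to see it."""
    talks = set(interest)
    pairs = []
    while len(talks) >= 2:
        # pick the pair of remaining talks with the smallest audience overlap
        a, b = min(combinations(sorted(talks), 2),
                   key=lambda p: len(interest[p[0]] & interest[p[1]]))
        pairs.append((a, b))
        talks -= {a, b}
    return pairs

prefs = {
    "graphs": {"alice", "bob"},
    "geometry": {"alice", "carol"},
    "complexity": {"bob", "dave"},
    "strings": {"carol", "dave"},
}
print(greedy_parallel_pairs(prefs))
```

A greedy heuristic like this gives no approximation guarantee, of course; it is only meant to make the objective concrete.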

Antoine Vigneron spoke on some nice results concerning very general algorithms for optimizing geometric quantities described as sums of algebraic functions over low-dimensional Euclidean spaces. The idea was quite simple: for each of the functions in the sum, draw a family of level sets for an exponentially spaced sequence of values, partition the space into cells by overlaying all the level sets, and try a candidate point within each cell. For instance, he mentioned the problem of finding a circle that best fits a set of n points in the plane, where the quality of a fit is measured as the sum of the Euclidean distances of the points to the circle. This quality measure can be described as a sum of algebraic functions in a three-dimensional space (coordinatized by the center and radius of the circle), and he gets a (1+ε)-approximation algorithm that (while exhibiting worse dependence on n) improves the dependence on ε compared to a previous algorithm of Har-Peled, from (1/ε)^{60} to (1/ε)^{3}.
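To make the "one candidate per cell" idea concrete, here is a deliberately naive stand-in for the circle-fitting example: instead of overlaying level sets of each distance term as Vigneron's algorithm does, it simply samples candidate (center, radius) triples on a coarse grid of the three-dimensional parameter space and keeps the cheapest. The grid resolution and test points are my own illustrative choices, not anything from the paper:

```python
import math
from itertools import product

def circle_fit_cost(points, cx, cy, r):
    # sum of distances from each point to the circle with center (cx, cy), radius r
    return sum(abs(math.hypot(x - cx, y - cy) - r) for x, y in points)

def approx_best_circle(points, grid=20):
    """Brute-force sketch of trying one candidate per cell of the
    3-d (cx, cy, r) parameter space.  The real algorithm instead
    overlays exponentially spaced level sets of each algebraic term
    to get a (1+eps)-approximation guarantee; this has none."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)
    span = max(hi_x - lo_x, hi_y - lo_y) or 1.0
    best = None
    for i, j, k in product(range(grid + 1), repeat=3):
        cx = lo_x + (hi_x - lo_x) * i / grid
        cy = lo_y + (hi_y - lo_y) * j / grid
        r = span * (k + 1) / grid
        cost = circle_fit_cost(points, cx, cy, r)
        if best is None or cost < best[0]:
            best = (cost, cx, cy, r)
    return best

pts = [(math.cos(t), math.sin(t)) for t in (0, 1.5, 3.0, 4.5)]  # points on the unit circle
cost, cx, cy, r = approx_best_circle(pts)
print(round(cost, 3), round(r, 2))
```

For points sampled exactly on the unit circle the recovered radius comes out near 1, as one would hope.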

Next after Vigneron, and one of the most egregious examples of strange scheduling, was Petr Hliněný's talk on crossing numbers, scheduled opposite a session that was largely about planar graphs. Hliněný's result is that, for certain bounded genus graphs, a very simple algorithm (that cuts open handles one at a time along dual cycles and then, once all the handles have been cut, reroutes the cut edges through the resulting planar graph) provides a constant factor approximation to the minimum possible number of crossings in a planar drawing. A key tool is a new parameter of embedded graphs that he calls stretch: the minimum product of the lengths of any two cycles that have only one crossing point (such cycles must necessarily be nonseparating). It was also an amusing surprise to see one of my drawings of the Nauru graph used as the illustration for the concept of graphs embedded on surfaces.

Back in the planar session, the talks described above were bookended by talks by Christian Wulff-Nilsen and Jeff Erickson that used very similar techniques to very different ends. Christian's algorithm was for finding the set of shortest paths in all the different graphs formed by deleting one edge from a given planar digraph, and Jeff's was on solving planar max-flow by reformulating it as the same sort of parametric negative cycle-detection problem that I used in my WADS paper with Kevin Wortman on minimum dilation stars. But they both perform a sort of network simplex algorithm for finding shortest paths in which a shortest path tree from a problem with slightly different edge lengths is rearranged by swapping edges in and out, they both use dynamic tree data structures on the complementary dual tree to find which swaps to perform, and in both cases a bound of O(n) on the number of swaps leads to an overall O(n log n) time bound.
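For contrast with those O(n log n) planar bounds, here is the obvious baseline for (a single-source, single-sink version of) the problem Christian's algorithm solves: rerun Dijkstra once per deleted edge. The graph and the s-t framing are my own simplifications for illustration; the clever part of the planar algorithms, reusing the shortest path tree across deletions via edge swaps and dual dynamic trees, is exactly what this sketch does not do:

```python
import heapq

def dijkstra(adj, s):
    """Standard Dijkstra; adj maps u -> list of (v, weight) pairs."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def replacement_distances(adj, s, t):
    """Naive baseline: the s-t distance in each graph formed by deleting
    one edge, computed by rerunning Dijkstra per deletion.  This costs a
    full shortest-path computation per edge, versus O(n log n) total for
    the planar swap-based algorithm."""
    out = {}
    for u in list(adj):
        for i, (v, w) in enumerate(adj[u]):
            adj[u][i] = (v, float("inf"))      # "delete" the edge u -> v
            out[(u, v)] = dijkstra(adj, s).get(t, float("inf"))
            adj[u][i] = (v, w)                 # restore it
    return out

g = {"s": [("a", 1), ("b", 4)], "a": [("t", 1)], "b": [("t", 1)]}
print(replacement_distances(g, "s", "t"))
```

Deleting either edge of the cheap s-a-t path forces the distance up from 2 to 5, while deleting an edge of the expensive path changes nothing.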

I also liked the talk by Ryan Williams on his results with Pătraşcu on the relation between lower bounds on exponential time and polynomial time algorithms. The so-called "exponential time hypothesis" is that 3SAT, clique finding, and many related problems require time 2^{Ω(n)} to solve exactly; it's consistent with the algorithms we know and, if any one of these problems could be solved more quickly, they all could. But as Williams and Pătraşcu show, the ETH can be translated downwards to imply that k-SUM problems require time n^{Ω(k)}, the assumption that CNF-SAT can't be solved in time (2-ε)^{n} can be translated downwards to a lower bound of n^{k} on finding k-dominating sets, etc. I have some vague memory of a talk by Gerhard Woeginger at IWPEC 2004 on similar subjects (in Woeginger's case, an upward translation from algorithms for k-SUM to algorithms for subset sum, which could equally well be viewed as a downward translation from hardness of subset sum to hardness of k-SUM). Williams and Pătraşcu cite Woeginger's paper but don't really mention this connection; on the other hand, subset sum isn't one of the problems covered by the ETH.
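To see what the n^{Ω(k)} lower bound is measured against, here is the brute-force k-SUM algorithm, which takes O(n^k) time (meet-in-the-middle improves this to roughly n^{⌈k/2⌉}, but under the translated ETH bound the exponent must still grow linearly in k):

```python
from itertools import combinations

def k_sum_naive(nums, k, target=0):
    """Brute-force k-SUM: does some k-element subset of nums sum to target?
    Checking all C(n, k) subsets takes O(n^k) time; the Patrascu-Williams
    translation from ETH says no algorithm can beat n^{Omega(k)}."""
    return any(sum(c) == target for c in combinations(nums, k))

print(k_sum_naive([-7, 1, 2, 4, 3], 3))  # True: -7 + 4 + 3 == 0
print(k_sum_naive([1, 2, 4, 8], 2))      # False: all pair sums are positive
```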

Talk slides for my shameless attempt to cash in on the irrational enthusiasm for approximation algorithms, in the same session as Williams' talk, are here.
