I slept in (sorry, early morning speakers) and started my day with Yuval Shavitt's talk on counting subgraphs, because of its close connection to some of my own recent research. Specifically, Shavitt and his co-authors want to approximate the number of K_{1,s} subgraphs (stars with s leaves) in a given graph. If one knows the entire degree distribution of the graph this is easy, but they show how to do it while randomly sampling only a small fraction of the graph. By grouping nodes with similar degrees into buckets and sampling, one gets a rough idea of the degree distribution, enough to find most of the stars, but there could still be some unsampled buckets containing a very small number of very high degree vertices. To find these, they do some more sampling to estimate the number of adjacencies between buckets with many vertices and buckets with few vertices. They also prove lower bounds showing that no such sublinear sampling method is possible for approximately counting even slightly more complicated subgraphs (triangles and length-3 paths).
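As a rough illustration (my own sketch, not the paper's algorithm): the exact star count is just a sum of binomial coefficients over vertex degrees, and even naive uniform vertex sampling gives an unbiased estimate of it — though such a sample is likely to miss the handful of very-high-degree vertices, which is exactly what the bucket-and-adjacency sampling above is designed to catch.

```python
import random
from math import comb

def exact_star_count(adj, s):
    # Number of K_{1,s} subgraphs: each vertex of degree d is the
    # center of C(d, s) stars with s leaves.
    return sum(comb(len(nbrs), s) for nbrs in adj.values())

def sampled_star_estimate(adj, s, sample_size, rng=random):
    # Naive uniform-vertex estimator: count stars centered in a
    # random sample and scale up to the whole graph.  The paper's
    # method refines this with degree buckets and extra adjacency
    # sampling to handle a few very-high-degree vertices that a
    # uniform sample would probably miss.
    vertices = list(adj)
    sample = rng.sample(vertices, sample_size)
    partial = sum(comb(len(adj[v]), s) for v in sample)
    return partial * len(vertices) / sample_size
```

For example, on a five-leaf star, exact_star_count(adj, 2) counts the C(5,2) = 10 two-leaf substars centered at the hub.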
Session 3 was one of the ones where I could easily have gone to talks in all three parallel sessions, but I ended up in the geometric graph theory session. I was especially impressed with Tasos Sidiropoulos' talk on his work with James Lee, on partitioning bounded-genus graphs. The problem is to find a random distribution over a family of partitions of a given graph, for a given distance threshold R, such that each set in each partition has diameter at most R, but so that a pair of vertices at distance d has probability only βd/R of being separated, for a parameter β that is as small as possible; one can view the distribution as describing something sort of like an embedding of the graph into a metric space (the probability distribution describes an L_1 combination of simpler metrics defined by the individual partitions) in which β measures the distortion of the embedding. For arbitrary graphs the best bound on β is O(log n), but for planar graphs it is O(1), and for K_t-minor-free graphs it is O(t^2); the minor-free bound implies that genus-g graphs have β = O(g), but Lee and Sidiropoulos exponentially improve this to O(log g). The main idea is to combine tools from three previous papers in a clever way: a method of Erickson and Whittlesey for (deterministically) finding a collection of O(g) shortest paths that cut the bounded-genus surface into a disk, the β = O(1) method for the planar part of the input that is at least a random multiple of R away from the shortest paths, and a method of Calinescu, Karloff, and Rabani for achieving the general O(log n) bound by forming radius-O(R) bubbles around a sequence of randomly selected points of the metric space. If one instead forms long thin radius-O(R) bubbles around randomly ordered paths of the cut set, and then slices these bubbles into diameter-O(R) pieces, the O(log n) bound turns into O(log g). Apparently this leads to improvements in a large number of approximation algorithms for bounded-genus graphs.
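To make the ball-carving step concrete, here is a hedged sketch (my own simplification, for unweighted graphs) of one draw from a Calinescu–Karloff–Rabani-style partition: pick a random radius on the order of R and a random vertex order, then carve a ball around each vertex in turn, assigning each not-yet-assigned vertex within the radius to that center. The paper replaces these point-centered bubbles with bubbles around the cutting paths.

```python
import random
from collections import deque

def bfs_distances(adj, source):
    # Unweighted shortest-path distances from source.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def ball_carving_partition(adj, R, rng=random):
    # One sample from a CKR-style random partition: a random radius
    # in [R/4, R/2) bounds every cluster's diameter by R, and over
    # the random choices a pair at distance d ends up separated with
    # probability O(log n) * d / R.
    vertices = list(adj)
    rng.shuffle(vertices)
    radius = rng.uniform(R / 4, R / 2)
    cluster = {}
    for center in vertices:
        dist = bfs_distances(adj, center)
        for v, d in dist.items():
            if v not in cluster and d <= radius:
                cluster[v] = center
    return cluster  # maps each vertex to its cluster's center
```

Every vertex is assigned (at worst it becomes its own center), and each cluster lies in a ball of radius less than R/2, hence has diameter at most R.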
I also enjoyed the talk by Koivisto or Parviainen (I walked in a minute late and didn't catch which of them presented it) on algorithms for problems such as the traveling salesman problem or feedback arc set that can be solved by finding an optimal permutation of the vertices (e.g., for feedback arc set, the cost of a solution is the number of edges oriented from a later vertex in the permutation to an earlier one). I posted a couple of years ago about the (approximately) 2^n time and space dynamic programming algorithm for TSP, at which time Thore Husfeldt and Ryan Williams told me about a different solution which uses only polynomial space but 4^n time: for each partition of the vertices into two equal subsets, recursively find the optimal tour that goes first through all the vertices on one side of the partition, and then through all the vertices on the other side. But 2^n space is too much (the algorithm will run out of space for problem sizes on which it is still reasonably fast) and polynomial space is too little (time becomes the bottleneck for problem sizes where the memory usage is a tiny fraction of what's available). By switching to dynamic programming when the recursion gets down to small enough sets, one can get a tradeoff between time and space, but one in which the product of the two is still 4^n; this paper finds a better tradeoff with a smaller product. The idea is to find a family of partial orders whose linear extensions together cover all permutations, and to loop through these partial orders, for each one doing dynamic programming in the associated distributive lattice of lower sets to find the optimal linear extension. The time is the sum of the sizes of these lattices, and the space is the maximum width of any of them; the hard part (and the part where perhaps there is still more interesting research to be done) is finding the right collection of partial orders (or possibly of more general structures such as antimatroids).
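For readers who haven't seen it, the 2^n-time, 2^n-space baseline is the Held–Karp dynamic program; here's a sketch (my own illustration, not the paper's lattice-based algorithm):

```python
from itertools import combinations

def tsp_dp(dist):
    # Held-Karp dynamic program: roughly 2^n time and space.
    # best[S, v] = cheapest path that starts at vertex 0, visits
    # exactly the vertices in bitmask S, and ends at v.
    n = len(dist)
    best = {(1, 0): 0}  # the path consisting of vertex 0 alone
    for size in range(2, n + 1):
        for rest in combinations(range(1, n), size - 1):
            S = 1 | sum(1 << v for v in rest)
            for v in rest:
                prev = S ^ (1 << v)  # same set without the endpoint v
                best[S, v] = min(best[prev, u] + dist[u][v]
                                 for u in range(n) if (prev, u) in best)
    full = (1 << n) - 1
    # Close the tour by returning to vertex 0.
    return min(best[full, v] + dist[v][0] for v in range(1, n))
```

The 4^n-time polynomial-space alternative instead recomputes subproblems recursively, splitting the vertex set in half at each level rather than storing a table; the tradeoff described above interpolates between these two extremes.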
Quote of the day, from Daniel Lokshtanov in the first of his two (both good) talks on fixed-parameter-tractable algorithms: “Some polynomials are more polynomial than others.” The context was a comparison of time bounds for problems like max cut on graphs of bounded cliquewidth c; a bound of the form f(c)·n^{O(1)} is more polynomial than one of the form n^{O(log c)}, which is more polynomial than n^{O(c)}, etc.
And, as with yesterday's post, to end with something not-quite-on-topic for the conference: texting while your professor is lecturing vs twittering while some researcher is giving his or her SODA talk: is one more inappropriate than the other?