As in previous years, it's time for my report on the cs.DS (data structures and algorithms) section of arXiv.org. The past years' rapid growth seems to have plateaued: In 2013 there were 1065 preprints listed in cs.DS, up only a little from the 940 preprints in 2012.

It's also time for me to pick ten algorithms preprints that I found particularly interesting (not including my own, nor ones I've already written about here). Here they are, in chronological order. It was difficult to get the list down to ten, and I cheated a bit by including two papers in some of the entries.

**On randomized memoryless algorithms for the weighted *k*-server problem**, Ashish Chiplunkar and Sundar Vishwanathan, arXiv:1301.0123 and FOCS 2013. This paper gives a tight doubly-exponential bound on the competitive ratio of memoryless *k*-server algorithms. For the sequence of ratios, see OEIS A065035.

**Nearly maximum flows in nearly linear time**, Jonah Sherman, arXiv:1304.2077 and FOCS 2013, and **An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations**, Jonathan A. Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford, arXiv:1304.2338 and SODA 2014. Here "nearly maximum" and "approximate" mean within a (1 + ε) factor of optimal, and "nearly linear" and "almost linear" mean that the time on an *m*-edge graph is *O*(*m*^{1 + o(1)}). The previous bound, by Christiano and four co-authors in STOC 2011 (which the first paper characterizes as a breakthrough), was slower by a factor of *n*^{1/3}.

**A *O*(*c*^{k}*n*) 5-approximation algorithm for treewidth**, Hans Bodlaender, Pål G. Drange, Markus S. Dregi, Fedor V. Fomin, Daniel Lokshtanov, and Michał Pilipczuk, arXiv:1304.6321 and FOCS 2013. This is the first known algorithm to get a constant-factor approximation to treewidth in an amount of time that's both FPT and singly exponential in the treewidth. It's important for FPT algorithms more generally, because constructing a good tree decomposition is a bottleneck in other computations: one can often then finish the solution in time that's singly exponential in the width of the decomposition.

While I have the podium, let me also remind everyone that it's a bad idea to put formulas into titles: it makes them hard to search for, it makes them hard to format on sites without proper MathJax support, the formatting is strange when the rest of the title is boldfaced, and it makes it likely that indexing sites will format them incorrectly.
I was going to guess that this explains why I can't find the FOCS version of this paper in Google Scholar, but it didn't find Sherman's paper either, so I don't know. (Also, formulas starting with a vowel should use "an", not "a".)

**Combining binary search trees**, Erik D. Demaine, John Iacono, Stefan Langerman, and Özgür Özkan, arXiv:1304.7604 and ICALP 2013, and **In pursuit of the dynamic optimality conjecture**, John Iacono, arXiv:1306.0207 and the Munrofest. These two papers concern the dynamic optimality conjecture, the still-unsolved problem of whether splay trees have a constant competitive ratio among all self-adjusting binary search trees. The first one shows that all known optimality properties can be achieved simultaneously, and the second one (while mostly a survey) also describes a specific binary search tree data structure (not a splay tree) that has a constant competitive ratio if anything does.

**Polynomial bounds for the grid-minor theorem**, Chandra Chekuri and Julia Chuzhoy, arXiv:1305.6577. Every graph of treewidth *t* contains a grid minor of size polynomial in *t*, which can be found in polynomial time, showing that grid minor size and treewidth are polynomially related. Previous bounds were only sublogarithmic, so this is a big step.

**Dynamic data structure for tree-depth decomposition**, Zdeněk Dvořák, Martin Kupec, and Vojtěch Tůma, arXiv:1307.2863. Bounded tree-depth is a pretty strong assumption, but in exchange you get dynamic graph algorithms with constant time per operation, both for maintaining a tree-depth decomposition and for testing any MSO property of these graphs. Although the results are exciting, I think the writeup could be improved: it reads like a conference extended abstract, with all the details missing.

**Finding small patterns in permutations in linear time**, Sylvain Guillemot and Dániel Marx, arXiv:1307.3073 and SODA 2014. A small permutation is a pattern of a larger permutation if it is order-isomorphic to a subsequence of the larger permutation; I've recently been studying connections between permutation patterns and graph drawing.
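To make the pattern-containment definition concrete, here is a brute-force check in Python (my own sketch, not the paper's algorithm — it is exponential in the length of the larger permutation, precisely the cost that Guillemot and Marx avoid):

```python
from itertools import combinations

def contains_pattern(perm, pattern):
    """Brute-force test of whether `pattern` is order-isomorphic to some
    subsequence of `perm`. For illustration only: it tries all subsequences,
    so it runs in exponential time rather than the paper's FPT linear time."""
    k = len(pattern)
    for positions in combinations(range(len(perm)), k):
        subseq = [perm[i] for i in positions]
        # Order-isomorphic: every pair of entries compares the same way
        # in the subsequence as in the pattern.
        if all((subseq[i] < subseq[j]) == (pattern[i] < pattern[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

# 3,1,2 appears in 5,3,4,1,2 (e.g. as the subsequence 5,1,2) ...
print(contains_pattern([5, 3, 4, 1, 2], [3, 1, 2]))  # True
# ... but no increasing subsequence of length 3 exists there.
print(contains_pattern([5, 3, 4, 1, 2], [1, 2, 3]))  # False
```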
It's NP-complete to test whether one permutation is a pattern of another, but this paper shows that the problem is FPT, with the parameter being the size of the smaller permutation and with linear dependence on the size of the larger permutation.

**The Mondshein sequence**, Jens M. Schmidt, arXiv:1311.0750. Canonical orderings of maximal planar graphs are a very important tool in graph drawing; they are orderings of the vertices such that each prefix forms a triangulated disk and each new vertex attaches to a contiguous path on this disk's boundary. They can be generalized to arbitrary directed graphs by a structure known as a non-separating ear decomposition, but so far this has been less useful because it's slower to construct. This paper solves that problem by finding a linear-time algorithm for non-separating ear decomposition. The algorithm is complicated and depends on 3-connectivity analysis, but once the decomposition has been constructed it's possible to use it to test planarity easily.

**Exact algorithms for maximum independent set**, Mingyu Xiao and Hiroshi Nagamochi, arXiv:1312.6260 and ICALP 2013. The first improvement in over 25 years on the worst-case time for finding maximum independent sets in graphs. The algorithm uses only polynomial space, so presumably Robson's idea of memoizing small subgraphs can give a further improvement.

**Faster all-pairs shortest paths via circuit complexity**, Ryan Williams, arXiv:1312.6680. The best algorithms for finding all-pairs shortest paths in dense graphs (in the real RAM model, so without assuming small integer edge lengths) take a bit less than *O*(*n*^{3}) time, but how much less? A hint that it may be a lot less comes from an old decision-tree algorithm of Fredman that solves the problem in *O*(*n*^{2.5}) time, but with a decision tree of exponential size. By applying it to very small subproblems, one can at least get an explicit algorithm that's subcubic by a polylogarithmic factor.
Williams instead shows that it's possible to achieve a bound that's smaller than cubic by an exponential of a polylogarithm. So it's still not an improvement to the actual exponent of the problem, but it's not just an encoding trick, as some logarithmic improvements have been alleged to be. The technique for achieving these bounds involves algebraic connections between tropical matrix multiplication (the (min, +) algebra one needs for shortest paths) and matrix multiplication over finite fields.
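To make the algebra concrete, here is a minimal Python sketch (names and example mine) of the tropical product and the classical reduction from all-pairs shortest paths to repeated tropical squaring; the naive cubic inner loop below is exactly the step that Fredman's decision trees and Williams's circuit-based method speed up:

```python
INF = float('inf')

def min_plus(A, B):
    """Naive (min, +) ("tropical") matrix product:
    C[i][j] = min over k of A[i][k] + B[k][j]. Cubic time."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def all_pairs_shortest_paths(W):
    """APSP by repeated tropical squaring of the weight matrix:
    after ceil(log2 n) squarings, D[i][j] is the shortest-path distance."""
    n = len(W)
    D = [row[:] for row in W]
    length = 1          # D currently accounts for paths of <= `length` edges
    while length < n:
        D = min_plus(D, D)
        length *= 2
    return D

# 4-vertex example: the direct edge 0->3 costs 10,
# but the path 0->1->2->3 costs only 1 + 2 + 3 = 6.
W = [[0,   1,   INF, 10],
     [INF, 0,   2,   INF],
     [INF, INF, 0,   3],
     [INF, INF, INF, 0]]
print(all_pairs_shortest_paths(W)[0][3])  # prints 6
```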