urn:lj:livejournal.com:atom1:110111100xDE0xDE0xDE2015-07-01T18:20:43Zurn:lj:livejournal.com:atom1:11011110:312432Linkage for the end of June2015-07-01T18:20:17Z2015-07-01T18:20:43ZI know, it's already July; I forgot that June only has 30 days.<br /><ul><li><a href="http://puzzlepicnic.com/puzzle?4694">Spiral galaxies</a>, my latest puzzle addiction (<a href="https://plus.google.com/100003628603413742554/posts/RzTcvx57F5y">G+</a>)</li><br /><li><a href="http://gizmodo.com/why-mathematicians-are-hoarding-this-special-type-of-ja-1711008881">An endangered species of mathematical chalk</a> (<a href="https://plus.google.com/100003628603413742554/posts/awYsAnAsK6E">G+</a>)</li><br /><li><a href="http://www.bach-bogen.de/blog/thecelloupgrade/zwischen-e-und-f">Mathematical art/music project in Stuttgart</a> (<a href="https://plus.google.com/100003628603413742554/posts/Dr9k2SXEpQt">G+</a>)</li><br /><li><a href="http://www.slate.com/blogs/the_eye/2015/06/15/joris_laarman_mx3d_the_pedestrian_bridge_will_be_3_d_printed_over_an_amsterdam.html">3d-printed fractal bridge in Amsterdam</a> (<a href="https://plus.google.com/100003628603413742554/posts/LYC2vThcozm">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2015-06-17/In_focus">Freedom to take and use photographs in public places in Europe endangered by newly proposed EU law</a> (<a href="https://plus.google.com/100003628603413742554/posts/G6YQijJjDns">G+</a>)</li><br /><li><a href="http://thecreatorsproject.vice.com/blog/a-paper-origami-sculpture-that-shrinks-from-your-touch">Touch-sensitive kinetic origami</a> (<a href="https://plus.google.com/100003628603413742554/posts/8kQXkYYaEsH">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=2g3sdzgSABM">Video about 3d immersions of the Klein bottle</a> (<a href="https://plus.google.com/100003628603413742554/posts/Lzj2SN95Jee">G+</a>)</li><br /><li><a 
href="http://retractionwatch.com/2015/06/25/one-publisher-appears-to-have-retracted-thousands-of-meeting-abstracts-yes-thousands/">IEEE clears away some of its junk publications</a> (<a href="https://plus.google.com/100003628603413742554/posts/c8fAPuTJzv6">G+</a>)</li><br /><li><a href="https://www.msri.org/system/cms/files/132/files/original/Lander-Case_for_Research.pdf">Why curiosity-driven basic research is important (and should continue to get government funding),</a> by mathematician and biologist Eric Lander (<a href="https://plus.google.com/100003628603413742554/posts/Hc4Ab2nSRwY">G+</a>)</li><br /><li><a href="http://www.improbable.com/2015/06/27/preference-peculiarities-curves-good-or-angles-bad/">Curves good, or angles bad?</a> (<a href="https://plus.google.com/100003628603413742554/posts/Lj78vLR6FKq">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:312298New preprint on track layouts2015-07-01T01:11:29Z2015-07-01T01:14:02ZAlthough it only hints at the connection, one way of interpreting my latest preprint is about higher-dimensional graph drawings. The paper is "<a href="http://arxiv.org/abs/1506.09145">Track Layouts, Layered Path Decompositions, and Leveled Planarity</a>" (with Bannister, Devanny, Dujmović, and Wood, arXiv:1506.09145).<br /><br />The track layouts of the title can be interpreted geometrically as being embeddings of the vertices of a graph on the positive coordinate axes of <i>d</i>-dimensional space, such that each edge forms a curve lying in the quarter-plane between two axes and no two edges cross. 
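The no-crossing condition in this definition is purely combinatorial, so a candidate track layout can be checked without any geometry: give each vertex a track and a position along its track, and require that no track contains both endpoints of an edge and that no two edges between the same pair of tracks interleave. A minimal sketch of such a check (my own illustration, not code from the paper):

```python
from itertools import combinations

def is_track_layout(track, pos, edges):
    """Check a candidate track layout: `track` maps each vertex to its
    track, `pos` gives its position along that track, and no two edges
    between the same pair of tracks may form an X-crossing."""
    # Each track must be an independent set: no edge inside a track.
    if any(track[u] == track[v] for u, v in edges):
        return False
    for (a, b), (c, d) in combinations(edges, 2):
        if {a, b} & {c, d}:
            continue  # edges sharing an endpoint never cross
        # Orient both edges from the lower-numbered track to the higher.
        if track[a] > track[b]:
            a, b = b, a
        if track[c] > track[d]:
            c, d = d, c
        if (track[a], track[b]) == (track[c], track[d]):
            # Two edges between the same pair of tracks cross exactly
            # when their endpoints interleave in the track orderings.
            if (pos[a] < pos[c]) != (pos[b] < pos[d]):
                return False
    return True

# A triangle with one vertex per track is a valid 3-track layout.
print(is_track_layout({1: 0, 2: 1, 3: 2},
                      {1: 0, 2: 0, 3: 0},
                      [(1, 2), (2, 3), (1, 3)]))  # True
```

Two edges between tracks 0 and 1 whose endpoints interleave would make the same function return False.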
For instance, for three tracks, you get a drawing on the three rays and three quarter-planes of an orthant of three-dimensional space, and if you look at that orthant from a point of view somewhere on its symmetry axis, you get a picture looking something like the right side of this figure:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/3-track-spiral.png"></div><br /><br />The left side of the figure shows a different style of graph drawing, a leveled planar drawing in which the vertices are arranged in rows and each edge connects two consecutive rows; the blue shading shows how any leveled planar drawing can be spiraled around between the three rays of the orthant to produce a 3-track drawing. Not every 3-track drawing arises in this way: for instance, you can easily find a 3-track drawing of a triangle, while every leveled planar graph is bipartite. But it turns out that every bipartite graph with a 3-track drawing is also leveled planar. Although this is a nice and non-obvious equivalence between two seemingly different drawing styles, it's a bit unfortunate, because testing whether a graph has a leveled planar drawing was known to be NP-complete and therefore the same is true for 3-track drawing (answering a question posed by Dujmović, Pór, and Wood in 2004).<br /><br />Despite this hardness result, there are several natural graph classes that always have 3-track drawings, including outerplanar graphs, Halin graphs, and squaregraphs. Since outerplanar graphs have treewidth two and Halin graphs have treewidth three, you might think that the series-parallel graphs (also treewidth two but more general than outerplanar) would be sandwiched between them and also have 3-track drawings, but that turns out not to be true. 
Here's a counterexample:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/ApexTree.png"></div><br /><br />The apex vertex connected to everything else would force the rest of the graph to live on the remaining two tracks, but the only two-track graphs are caterpillars, trees in which all vertices are within distance one of a central path. Since the tree formed from this graph by removing the apex is not a caterpillar, the graph itself does not have a 3-track drawing.<br /><br />This paper also introduces the notion of layered pathwidth, but I don't want to take much credit for that part because it came from some earlier not-yet-published work by Dujmović, Wood, and others. The definition is a bit too technical to repeat here (read the paper), but the leveled planar graphs turn out to be exactly the graphs of layered pathwidth one. So the hardness of testing leveled planarity also shows that testing layered pathwidth is hard. The apex-binary tree example above has bounded track-number (at most four) but unbounded layered pathwidth (for sufficiently large binary trees) showing that track-number and layered pathwidth are distinct concepts. But we think that the graphs of track number three (even the non-bipartite ones) should have bounded layered pathwidth, although we haven't yet been able to prove that conjecture.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:312061The white village of Thorn2015-06-30T01:55:34Z2015-06-30T01:55:34ZHere's another town in the Netherlands that I visited just before Computational Geometry Week: <a href="https://en.wikipedia.org/wiki/Thorn,_Netherlands">Thorn</a>, also known as "the white village". The story goes that when Napoleon took over the Netherlands, he instituted a building tax based on how many windows each building had. So the villagers bricked up many of their windows and then, to make the change less obvious, whitewashed the buildings. 
The buildings are still painted white and give the place a distinctive look.<br /><br />It's a small town, so not something that would likely fill a whole day of sightseeing, but very pretty. Behind the church we found an art gallery where an older man had put on a show of his art about trains, including paintings, prints based on old engineering drawings, and a giant model of the bridge over the river Kwai; it was the first day of the show and we were the first to visit.<br /><br /><div align="center"><a href="http://www.ics.uci.edu/~eppstein/pix/thorn/7.html"><img src="http://www.ics.uci.edu/~eppstein/pix/thorn/7-m.jpg" border="2" style="border-color:black;" /></a></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/thorn/index.html">The rest of the photos</a> )</b>urn:lj:livejournal.com:atom1:11011110:311631Delft2015-06-28T21:51:46Z2015-06-28T22:01:51ZI arrived a day early in the Netherlands for Computational Geometry Week, to allow me longer to get used to the nine-hour time change. One of the things I did with the extra time was to visit Delft, one of many pretty Dutch canal cities, which turned out to be holding a fun flea market that day as well as some sort of children's marching band competition. I didn't take any photos of those, but I did get some from a tour of the Royal Delft Museum and Factory there. 
Royal Delft is probably best known for its ornamental blue-and-white painted plates, but I was more interested in their architectural ceramics:<br /><br /><div align="center"><a href="http://www.ics.uci.edu/~eppstein/pix/delft/Columnade.html"><img src="http://www.ics.uci.edu/~eppstein/pix/delft/Columnade-m.jpg" border="2" style="border-color:black;" /></a></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/delft/index.html">The rest of the photos</a> )</b>urn:lj:livejournal.com:atom1:11011110:311383Report from Geometry Week2015-06-27T22:33:59Z2015-07-01T00:14:19ZI just returned from visiting Eindhoven, the Netherlands, for <a href="http://www.win.tue.nl/SoCG2015/">Computational Geometry Week</a>, including the 31st International Symposium on Computational Geometry, the 4th Annual Minisymposium on Computational Topology, the Workshop on Geometric Networks, the Workshop on Stochastic Geometry and Random Generation, the Workshop on Geometric Intersection Graphs, the Young Researchers Forum, and the CG Week Multimedia Exposition, almost all of which I attended pieces of (it was not possible to attend everything because SoCG had two parallel sessions and the workshops were run in parallel to each other).<br /><br />After a welcoming reception the evening before at campus café De Zwarte Doos ("the black box", but I was warned not to search Google for that phrase because some other company is using it for other purposes) the conference itself began Monday morning (June 22) with the best-paper talk from SoCG, "<a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.1">Combinatorial discrepancy for boxes via the γ<sub>2</sub> norm</a>", by Jirka Matoušek (posthumously) and Aleksandar Nikolov. The question they study is, given a set of <i>n</i> points in <i>d</i>-dimensional Euclidean space, how to label the points with +1 and –1 so that the sum of the labels in each axis-aligned box is as small as possible. 
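For very small instances the best labeling can simply be found by brute force, which makes the problem statement concrete; a quick illustrative script (mine, not anything from the paper):

```python
from itertools import product

def box_discrepancy(points, signs):
    """Maximum |signed sum of labels| over all axis-aligned boxes.
    Only boxes whose sides pass through point coordinates matter."""
    xs = sorted({p[0] for p in points})
    ys = sorted({p[1] for p in points})
    worst = 0
    for x1 in xs:
        for x2 in xs:
            for y1 in ys:
                for y2 in ys:
                    s = sum(sg for (x, y), sg in zip(points, signs)
                            if x1 <= x <= x2 and y1 <= y <= y2)
                    worst = max(worst, abs(s))
    return worst

def min_discrepancy(points):
    """Brute force over all +1/-1 labelings (exponential; tiny inputs only)."""
    return min(box_discrepancy(points, signs)
               for signs in product((1, -1), repeat=len(points)))

grid = [(x, y) for x in range(2) for y in range(2)]
print(min_discrepancy(grid))  # 1: a checkerboard labeling zeroes out every larger box
```

Any single point is itself a box, so the discrepancy is always at least 1; the interesting question, answered by the paper, is how fast it must grow with <i>n</i> and <i>d</i>.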
They provide a lower bound on how large these sums must be, of the form (log <i>n</i>)<sup><i>d</i> + O(1)</sup>, nearly tight and roughly squaring the previous bound. The method is simple and elegant and was presented very clearly by Nikolov; it consists of showing that the discrepancy (the quantity in question) is nearly the same as the γ<sub>2</sub> norm of an associated matrix (the product of the maximum row norm and column norm of two matrices that multiply to the given one, chosen to have small rows and columns respectively), that the points can be assumed to form a grid, that the matrix for points in a grid is a Kronecker product of lower-dimensional matrices of the same type, that the one-dimensional matrix (an all-one lower triangular matrix) has logarithmic γ<sub>2</sub> norm, and that the γ<sub>2</sub> norm is multiplicative with respect to the Kronecker product.<br /><br />Next, we saw the three videos and one demo of the Multimedia Exposition. My favorite was "Tilt: the video", on the hardness of puzzles in which you tilt a panel with multiple moving balls and fixed obstacles in order to try to move the balls into given configurations:<br /><br /><div align="center"><lj-embed id="58" /></div><br /><br />Another highlight from Monday was Jeff Erickson's talk at the computational topology workshop. Listed in the program as "????", it turned out to be about <a href="http://jeffe.cs.illinois.edu/pubs/talks/prehistory.pdf">the pre-history of computational topology and computational geometry</a>, in which Jeff informed us that most of the basic concepts and algorithms in Chapter 1 of O'Rourke's computational geometry book have been known for centuries. In particular winding numbers and turning numbers can be traced to Thomas Bradwardine in 1320. 
Algorithms for computing the signed areas of curves and polygons (by summing over trapezoids or triangles determined by pieces of their boundaries) come from Albrecht Meister in 1770, as do some pretty strong hints about the Whitney–Graustein theorem that turning number is a complete topological invariant for regular homotopy. The algorithm for testing whether a point is inside or outside of a polygon by shooting a ray from the point and counting how many times it crosses the boundary comes from Gauss, who also asked which chord diagrams represent the crossing sequences of immersed circles. Even later, Max Dehn gave the first full proof of the Jordan curve theorem for polygons by proving the existence of triangulations, proving the existence of ears in the triangulations (often called Meisters' ear theorem, but this is a different Meisters, later than both Dehn and Meister), and doing the obvious induction.<br /><br />Tuesday started with a talk by Mikkel Abrahamsen on <a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.198">finding bitangents of polygons</a>. The method is very simple, and uses only two pointers into each polygon. Two of these pointers point to vertices from each polygon that define a candidate bitangent. The other two pointers walk around the two polygons in tandem, checking whether everything else is on the correct side of the bitangent. If not, we get a new bitangent to try and we reset the two walks. The technical advantage of this over previous methods (involving computing convex hulls of convex polygons) is that it uses only constant space instead of linear space, but it's the sort of thing I was very happy to see at SoCG and find very difficult to imagine getting into more general theoretical conferences like FOCS and STOC. 
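Incidentally, the ray-crossing test attributed above to Gauss is still the standard point-in-polygon routine; a minimal sketch (mine, purely for illustration) shoots a horizontal ray to the right of the query point and counts boundary crossings:

```python
def point_in_polygon(q, poly):
    """Ray-crossing test: q is inside iff a rightward horizontal ray
    from q crosses the polygon boundary an odd number of times.
    `poly` lists vertices; edges join consecutive vertices, last to first."""
    x, y = q
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # The edge crosses the ray's horizontal line when its endpoints
        # straddle y; then check whether the crossing is to the right of q.
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon((1, 1), square))  # True
print(point_in_polygon((3, 1), square))  # False
```

The strict inequalities make a consistent choice at vertices and horizontal edges; points exactly on the boundary would need separate handling.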
Next, in the same session, Luis Barba presented what turned out to be the winner of the best student presentation award, a talk on <a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.209">finding geodesic centers of polygons</a>, points such that the whole polygon can be reached by paths within the polygon that are as short as possible.<br /><br />Also on Tuesday was the first of two invited talks, by Ben Green on his work with Terry Tao on <a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.405">the number of ordinary lines</a>. If you have a finite set of points in the plane, not necessarily in general position but not all on a line, then there will be at least one "ordinary line" containing exactly two points; this is the <a href="https://en.wikipedia.org/wiki/Sylvester%E2%80%93Gallai_theorem">Sylvester–Gallai theorem</a>. But there will generally be more than one ordinary line; for instance if all but one point is on a line then the number of ordinary lines is still large. What Green and Tao showed is that (for very large point sets) the number of ordinary lines is at least <i>n</i>/2 when <i>n</i> (the number of points) is even, and 3<i>n</i>/4 when it is odd, matching known upper bounds. The method involves taking the projective dual line arrangement, using Euler's formula to show that when the number of ordinary lines (simple crossings in the dual) is small then most crossings involve three lines and most faces in the line arrangement are triangles, and using these properties to show that this implies that the points can be covered by a small number of cubic curves. Then there's a big case analysis involving the various different possible cubic curves; the one that gives the fewest ordinary lines turns out to be a conic plus a line. 
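The near-pencil example (all points but one on a line) is easy to check computationally; here's a small brute-force counter of ordinary lines, just for illustration (exact rational slopes avoid floating-point trouble):

```python
from collections import defaultdict
from fractions import Fraction

def ordinary_lines(points):
    """Count the ordinary lines of a point set: lines through exactly
    two of the points. Lines are keyed by exact slope and intercept,
    or by x-coordinate for vertical lines."""
    on_line = defaultdict(set)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            if x1 == x2:
                key = ('vertical', Fraction(x1))
            else:
                slope = Fraction(y2 - y1, x2 - x1)
                key = ('slope', slope, y1 - slope * x1)
            on_line[key].update((points[i], points[j]))
    return sum(1 for pts in on_line.values() if len(pts) == 2)

near_pencil = [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1)]
print(ordinary_lines(near_pencil))  # 4: every line through the off-line point
```

With <i>n</i> − 1 collinear points plus one more, every line through the extra point is ordinary, giving <i>n</i> − 1 ordinary lines, comfortably above the Green–Tao bounds.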
Towards the end of the talk he reviewed several related conjectures including one that he credited to <a href="http://users.monash.edu.au/~davidwo/papers/KPW-VisCol-DCG05.pdf">Kára, Pór, and Wood</a> on whether every large set of points contains either a large collinear subset or a large pairwise-visible subset; there are some hints that algebraic methods can also be used for this but it looks trickier than for counting ordinary lines.<br /><br />Unfortunately the Young Researchers' Forum papers don't seem to be individually linked from the conference web site, but a <a href="http://www.computational-geometry.org/YRF/cgyrf2015.pdf">book of all two-page abstracts</a> is available as a single large pdf file. Two of the Tuesday afternoon talks caught my attention. Andrew Winslow spoke about <a href="http://arxiv.org/abs/1504.07883">tiling the plane with translates of a polyomino</a>. It was known that this is possible if and only if the boundary of the polyomino can be partitioned into six pieces such that opposite pieces of boundary fit precisely together. Winslow translates this into a string problem in which one describes the polyomino by a string over four characters describing the four directions a boundary segment can be oriented. Then the problem becomes one of finding a cyclic rotation of this string, and a partition of it into six substrings, such that opposite substrings are reverse complements. Some combinatorics on words involving "admissible factors" reduces the problem to something that (like many string problems) can be solved using suffix trees. And Anika Rounds spoke in the same session, showing some hardness results on realizability of linkages (systems of rigid bodies connected by pins). 
Unlike some of the other work on linkages I've reported on here, the bodies are not allowed to overlap, but this constraint makes the realizability problem strongly NP-hard.<br /><br />Also of note Tuesday was the afternoon snack of <a href="https://en.wikipedia.org/wiki/Bossche_bol">Bossche bollen</a>, a Dutch specialty in the form of chocolate-covered whipped cream bombs.<br /><br />One of the Wednesday morning talks had an odd title: "<a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.436">Low-quality dimension reduction</a>". What this turns out to mean is that one has a high-dimensional nearest neighbor query problem and one translates it into a lower-dimensional <i>k</i>-nearest-neighbor problem. The result is an approximate nearest neighbor data structure with truly linear space and sublinear query time, with the query time exponent depending only on the approximation quality and no exponential dependence on dimension. Another of the Wednesday morning talks, by Hsien-Chih Chang, concerned <a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.689">a generalization of Voronoi diagrams to points with vector weights</a>, where one wants to find all points whose vector of weights, augmented with one more coordinate for the distance to a query point, is not dominated by any other point. When the weight coordinates are independent random variables this turns out to have near-linear complexity and near-constant output size per query.<br /><br />After an enthusiastic presentation by Benjamin Burton at the Wednesday afternoon business meeting, we decided to go to Brisbane, Australia, in 2017. The 2016 location has already been set for Boston, co-located with STOC. 
Other business meeting topics included reports from the various program chairs, a discussion of the possibility of setting up a non-profit organization to run the conference (about which we'll probably see more online later this year), a change to the steering committee elections to institute staggered terms, and a report from Jack Snoeyink of the NSF on some new initiatives for international cooperation. The rest of the day was occupied by the excursion (a boat ride and walking tour in nearby Den Bosch) and dinner (in the Orangerie of Den Bosch, a decommissioned Gothic church). At the dinner, Pankaj Agarwal and Vera Sacristán presented moving remembrances of Jirka Matoušek and Ferran Hurtado, respectively, both of whom were important figures in the computational geometry community and both of whom died this past year.<br /><br />In the first session of Thursday morning, the well-synchronized timing of the parallel sessions came in handy. Daniel Dadush described <a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.704">deterministic algorithms for estimating the volume of a convex body</a> by using a symmetric subset of the body to construct a lattice whose set of points within a slightly-expanded version of the body can be easily listed and approximate its volume well. The problem has a randomized approximation scheme and deterministic algorithms must be at least exponential-time, but unlike previous ones this one is only single-exponential. The other parallel session included a nice <a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.754">APX-hardness reduction for <i>k</i>-means clustering</a>, from vertex cover in triangle-free graphs. The dimension is high (each vertex becomes a dimension and each edge becomes a point, the sum of two basis vectors) but this can be reduced using the Johnson–Lindenstrauss lemma. 
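That dimension reduction step is just multiplication by a random matrix; a toy sketch of a Johnson–Lindenstrauss-style Gaussian projection (my illustration, not the construction used in the paper):

```python
import math
import random

def jl_project(points, k, seed=0):
    """Project d-dimensional points to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k). By the Johnson-Lindenstrauss
    lemma, k = O(log n / eps^2) suffices to preserve all pairwise
    distances to within a 1 +/- eps factor with high probability."""
    rng = random.Random(seed)
    d = len(points[0])
    R = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
    s = 1.0 / math.sqrt(k)
    return [[s * sum(row[i] * p[i] for i in range(d)) for row in R]
            for p in points]

pts = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]
low = jl_project(pts, 2)
print(len(low), len(low[0]))  # 3 2
```

The map is linear, so (as needed in reductions like the one above) it preserves the sum-of-basis-vectors structure of the points up to the same distortion as the distances.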
And back to the first session, Timothy Chan greatly simplified an old algorithm of Bernard Chazelle for <a href="http://dx.doi.org/10.4230/LIPIcs.SOCG.2015.733">constructing the intersection of two convex polyhedra in linear time</a>.<br /><br />In the next session we saw what I thought was one of the better student presentations, but one that was ineligible for the best-presentation award because it was scheduled after the voting closed. Its presenter, Arie Bos, is a retiree who has gone back to school, and he told me he thought it would be unfair to compete for the prize because he has so much experience making presentations in his past career. The subject was generalized Hilbert curves, fractal curves that can be formed by taking a Hamiltonian tour of a hypercube and then repeatedly replacing each vertex of the path by a shrunken copy of the same tour. The result of the paper is a new method for generating these tours in such a way that subtours are particularly compact: the bounding box of every subtour is at least 1/4 full, regardless of dimension.<br /><br />Susanne Albers presented the second of two invited talks, on data-dependent algorithm analysis. Her thesis was that, although worst-case analysis has been effective at encouraging the development of efficient algorithms and at distinguishing efficient from inefficient ones, it can be too pessimistic and in some cases unable to distinguish which of two competing algorithms is the better one in practice. The first half of the talk concerned randomized data models. The fully random model describes the input to e.g. quicksort but is not realistic enough for most problems. Better randomized models include smoothed analysis, the planted subgraph model (in which one is supposed to find a non-random solution hidden within a larger random part of the input), and the randomized incremental model seen frequently in computational geometry in which a worst-case set of geometric objects is randomly ordered. 
The second half of the talk concerned some of Albers' own recent work, on deterministic online algorithms, focusing on modeling the locality of reference in online input sequences. For online caching, rather than previous work using access graphs, she instead looks at the vector of distances between requests of the same types, where the distance is measured by the number of distinct other types between two requests of one type. By restricting the definition of the competitive ratio to input sequences with the same vector, she shows that the classical LRU strategy is much more tightly competitive than could be shown by the classical input-length based analysis, and that it is significantly better than other choices. She also performs a similar analysis for list updates, but using a different data model in which one looks at runs and long runs in the subsequence of inputs having two particular data values.<br /><br />After a lunch featuring Hollandse Nieuwe herring (eaten raw in small pieces on toothpicks rather than the traditional method of lowering a whole herring down one's throat like a bird) I spent the remainder of the conference at the Workshop on Geometric Intersection Graphs. There's an <a href="http://cgweek15.tcs.uj.edu.pl/problems.pdf">open problem list</a> from the workshop online; eventually it is supposed to include problems from the open problem session but currently it seems to be only a pre-provided list of problems. I contributed one on the complexity of coloring <a href="http://en.wikipedia.org/wiki/Circle_graph">circle graphs</a>. A 1992 paper of Walter Unger claimed a proof that this is NP-complete for 4-coloring and polynomial for 3-coloring, but I don't believe the 3-coloring part: it was presented too sketchily and there is no full journal version. So I think it should still be considered open whether 3-coloring of circle graphs is easy or hard. 
Both the 3-coloring and 4-coloring questions are also interesting for triangle-free circle graphs, which are always 5-colorable.<br /><br />Overall, I found it to be a well-run, entertaining, interesting, and informative conference. I would have liked to see more application papers both in the main symposium and the satellites, but that's been a perennial issue since the start of SoCG. The new open access publisher and formatting seems to be working well, and I'm looking forward to next year in Boston.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:311043Two new papers2015-06-17T03:30:07Z2015-06-17T03:30:07ZSomehow I seem to have two new papers online that I haven't mentioned here before.<br /><br />First, among the many newly-online papers of the newly-open-source (yay!) <a href="http://drops.dagstuhl.de/portals/extern/index.php?semnr=15005">Proceedings of the 31st International Symposium on Computational Geometry</a>, I have one with Drago Bokal and Sergio Cabello, "<a href="http://drops.dagstuhl.de/opus/volltexte/2015/5113/pdf/30.pdf">Finding All Maximal Subsequences with Hereditary Properties</a>". Despite the abstract name, this is really about a concrete problem: given trajectory data (a sequence of points describing the motion of someone or something), answer questions about the shape of different parts of the trajectory. For instance, in the path below, one part is pretty much straight, a second part is nearly stationary, and the third part is moving in one general direction but not by a straight line. We want to be able to figure that out.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/WindowedFootprints.png"></div><br /><br />More technically, we set up a data structure that can query whether any subsequence of the trajectory has one of these three properties, formalized as having low convex hull area, having low diameter, or having a direction with respect to which it is monotone. 
The data structure itself is very simple — just store for each starting point of a subsequence the farthest ending point that gives a yes answer — but the harder part is building the data structure by finding all of these farthest ending points, efficiently. That's what the "maximal subsequences" in the title means, and where I'll leave you here, to read the paper if you want to find more.<a name='cutid1-end'></a><br /><br />Second, over on the arXiv, I have a new preprint "<a href="http://arxiv.org/abs/1506.04380">Genus, Treewidth, and Local Crossing Number</a>", arXiv:1506.04380, with Vida Dujmović and David Wood. Here local crossing number means the maximum number of crossings per edge, related to but not the same as the global crossing number (proportional to average crossings per edge). For instance, a <a href="https://en.wikipedia.org/wiki/1-planar_graph">1-planar graph</a> is a graph with local crossing number one. It was known that, like planar graphs, the graphs of low crossing number obey a <a href="https://en.wikipedia.org/wiki/Planar_separator_theorem">separator theorem</a> (they can be recursively partitioned into small pieces with few vertices appearing on the boundary between the pieces) but the functional dependence of the separator size on the local crossing number wasn't known before. Now it is, even when we consider embeddings on surfaces of high genus instead of the plane.<br /><br />A second result in the same paper involves finding low-crossing-number embeddings of arbitrary graphs. It was known that any graph with <i>m</i> edges can be embedded onto a surface of your favorite genus <i>g</i> in such a way that the average crossings per edge is nearly (within a polylog factor of) <i>m</i>/<i>g</i>, as good as one could hope to get. We strengthen this to get the same bounds for local crossing number instead of global crossing number. 
Like the previous result for global crossing number, the proof uses a method of Leighton and Rao for routing paths on expanders, but with some additional machinery: a load-balancing preprocessing phase that helps avoid problems with irregularities in the degree distribution of the graph.<a name='cutid2-end'></a>urn:lj:livejournal.com:atom1:11011110:310818Linkage2015-06-16T04:44:37Z2015-06-16T04:45:07Z<ul><li><a href="https://www.youtube.com/watch?v=UxcJfaoK5xg">Marble machine with 11000 marbles</a> (video; <a href="https://plus.google.com/100003628603413742554/posts/L35u5geW4J4">G+</a>)</li><br /><li><a href="http://www.theonion.com/graphic/pros-and-cons-standardized-testing-50388">Pros and cons of standardized testing</a>, only slightly exaggerated (<a href="https://plus.google.com/100003628603413742554/posts/UjvJ7c344wp">G+</a>)</li><br /><li><a href="http://mitpress.mit.edu/sites/default/files/titles/content/alife14/978-0-262-32621-6-ch084.pdf">Conservation of genki</a> in <a href="https://en.wikipedia.org/wiki/Critters_%28block_cellular_automaton%29">Critters</a> (<a href="https://plus.google.com/100003628603413742554/posts/8YQgpBt4y2E">G+</a>)</li><br /><li><a href="http://www.thisiscolossal.com/2015/06/bruce-shapiros-mesmerizing-kinetic-sand-drawing-machines/">Bruce Shapiro's sand drawing machines</a> (<a href="https://plus.google.com/100003628603413742554/posts/JbpKBkAsRyz">G+</a>)</li><br /><li><a href="http://scholarlykitchen.sspnet.org/2014/08/21/the-mystery-of-a-partial-impact-factor/">The mystery of a partial impact factor</a> or, yet another reason not to take these numbers seriously (<a href="https://plus.google.com/100003628603413742554/posts/GFejYnoCGMs">G+</a>)</li><br /><li><a href="http://america.aljazeera.com/opinions/2015/6/killing-tenure-is-academias-point-of-no-return.html">Scott Walker looks to kill tenure in Wisconsin</a> (<a href="https://plus.google.com/100003628603413742554/posts/PsDgwAXUHnC">G+</a>)</li><br /><li><a 
href="http://www.mathematicalgemstones.com/gemstones/can-you-prove-it/">Can you prove it?</a> A cute theorem about a coincidence of line segments connecting centers of tangent circles (<a href="https://plus.google.com/100003628603413742554/posts/5LQ8r4Rsmpx">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=0hlvhQZIOQw">Numberphile video on Ford circles and mediants</a> (<a href="https://plus.google.com/100003628603413742554/posts/ibyCaNmey3s">G+</a>)</li><br /><li><a href="http://www.theguardian.com/education/2015/jun/11/nobel-laureate-sir-tim-hunt-resigns-trouble-with-girls-comments">Even a Nobel prize won't save you if you say something sexist enough</a> (<a href="https://plus.google.com/100003628603413742554/posts/6UrfUrfcFij">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=bXn4_JkVFVo">Video of a cat playing a theremin</a> (<a href="https://plus.google.com/100003628603413742554/posts/2Qxotz8H6V4">G+</a>)</li><br /><li><a href="http://aperiodical.com/2015/06/math-stack-a-really-pretty-deck-of-cards-with-maths-on/">Playing cards decorated with mathematics</a> (<a href="https://plus.google.com/100003628603413742554/posts/NmgoQ8HUSwm">G+</a>)</li><br /><li><a href="http://www.wired.com/2015/05/attack-geosciences-congress/">Congress's attack on the geosciences</a> (<a href="https://plus.google.com/100003628603413742554/posts/6EgHn2tCCjq">G+</a>)</li><br /><li><a href="http://www.researchgate.net/publication/275966051_Manifesto_of_Editorial_Independence_of_Editors_of_Frontiers_Medical_Journals">Manifesto of editorial independence</a> <a href="http://scim.ag/OAconflict">gets editors sacked from for-profit open source publisher</a> (<a href="https://plus.google.com/100003628603413742554/posts/hnUtXSC86BR">G+</a>)</li><br /><li><a href="https://ideas.repec.org/top/top.person.all.html">Economists rank themselves</a> (<a href="https://plus.google.com/100003628603413742554/posts/cEjEdjxutYA">G+</a>)</li><br /><li><a 
href="http://drops.dagstuhl.de/portals/extern/index.php?semnr=15005">The new open-source SoCG proceedings are here</a> (<a href="https://plus.google.com/100003628603413742554/posts/QS2eafsTDkm">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:310630Metric dimension for subdivided graphs2015-06-08T01:11:58Z2015-06-08T01:11:58ZI have another new preprint out this evening: "<a href="http://arxiv.org/abs/1506.01749">Metric Dimension Parameterized by Max Leaf Number</a>", arXiv:1506.01749. The <a href="https://en.wikipedia.org/wiki/Metric_dimension_(graph_theory)#Properties">metric dimension</a> of a graph is the minimum number of vertices you need to choose as landmarks so that all other vertices are uniquely determined by their distances to the landmarks.<br /><br />The result in the new paper is small: it says that if you form big graphs from smaller ones by subdividing their edges into paths, you can solve the problem in a time that depends exponentially on the size of the small graph, but only linearly on the number of added subdivision vertices. So the result seems to apply to only a very restricted class of graphs, but as I argue in the paper there are some natural real-world graphs that have the structure of subdivided smaller graphs: the graphs of public transportation systems tend to have this form, because they consist of long lines or tracks with many stops on them. For instance, here's a graph of the Toronto subway system, from Paulshannon <a href="https://commons.wikimedia.org/wiki/File:TTCsubwayRTmap-2007.svg">on Wikimedia commons</a>:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/TTCsubwayRTmap-2007.svg.png"></div><br /><br />It's also known how to compute the metric dimension efficiently on trees. So putting these two results together, it seems at least plausible that there should be an efficient algorithm on the graphs formed by subdividing smaller graphs and gluing trees onto them. 
That is, I would like a fixed-parameter tractable algorithm parameterized by the <a href="https://en.wikipedia.org/wiki/Circuit_rank">cyclomatic number</a> (or slightly better the almost-tree number), rather than by the max leaf number. But despite some effort I wasn't able to get that.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:310518Linkage2015-06-01T01:59:04Z2015-06-01T01:59:04Z<ul><li><a href="http://nielsenhayden.com/makinglight/archives/016246.html">An interesting new voting system arises from the ashes of a broken science-fiction award process</a> (<a href="https://plus.google.com/100003628603413742554/posts/g2JPvVap3B7">G+</a>)</li><br /><li><a href="https://vimeo.com/2719929">Jorg Meyer, scientific glassblower</a>, <a href="https://en.wikipedia.org/wiki/Jorg_Meyer">whale rider, and falconer</a> (<a href="https://plus.google.com/100003628603413742554/posts/CvqaSqTA4DY">G+</a>)</li><br /><li><a href="https://plus.google.com/+JoergFliege/posts/VWFHj2vYBGG">Oklahoma oilman fails to get a university researcher fired for publishing about ties between fracking and earthquakes</a> (<a href="https://plus.google.com/100003628603413742554/posts/Jxc8nNTnCYs">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1502.07597">Women mathematicians in France in the mid-twentieth century</a> (<a href="https://plus.google.com/100003628603413742554/posts/44e4NEu3k2t">G+</a>, with links to Wikipedia articles on the same people)</li><br /><li><a href="http://www.sciencemag.org/content/348/6234/479">What happens when you mix open-records laws, state university professors' email histories, and the anti-science movement</a> (<a href="https://plus.google.com/100003628603413742554/posts/PWBvQDiDJwn">G+</a>)</li><br /><li><a href="http://blogs.oregonstate.edu/glencora/2015/05/21/with-great-privilege-comes-great-responsibility/">Silent Glen gets tenure</a> (<a href="https://plus.google.com/100003628603413742554/posts/j2bDTNRYzdu">G+</a>)</li><br /><li><a 
href="http://thrilling-tales.webomator.com/derange-o-lab/pulp-o-mizer/pulp-o-mizer.html">Pulp magazine cover generator</a> (<a href="https://plus.google.com/100003628603413742554/posts/3mQrBK8UzVh">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Reuleaux_triangle">Reuleaux triangles</a> (<a href="https://plus.google.com/100003628603413742554/posts/Ls2pYDMkCSy">G+</a>)</li><br /><li><a href="https://plus.google.com/+PeterSuber/posts/Bn6K3ZCGaM4">Do hybrid open access journals double-dip?</a> Answer: sometimes they even triple-dip (<a href="https://plus.google.com/100003628603413742554/posts/9cR9iH9yKxg">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1505.06508">Pattern-avoiding permutation counting functions are nastier than we thought</a> (<a href="https://plus.google.com/100003628603413742554/posts/NdueUTsAC6b">G+</a>)</li><br /><li><a href="http://jeff560.tripod.com/mathsym.html">Earliest uses of mathematical symbols</a> (<a href="https://plus.google.com/100003628603413742554/posts/5vuYmEcpZZA">G+</a>)</li><br /><li><a href="http://www.lindengledhill.com/">Crystal microphotographs by Linden Gledhill</a> (<a href="https://plus.google.com/100003628603413742554/posts/9rdNzj4hYjj">G+</a>)</li><br /><li><a href="http://www.nature.com/news/sleeping-beauty-papers-slumber-for-decades-1.17615">Sleeping beauty papers</a> with many citations after a long sleep (<a href="https://plus.google.com/100003628603413742554/posts/gof4gKPz77V">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/05/29/miniature-origami-robot-self-f.html">Tiny self-folding robot</a> (<a href="https://plus.google.com/100003628603413742554/posts/3xUV12qot32">G+</a>)</li><br /><li><a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2602683">Extending freedom of panorama in Europe</a> (<a href="https://plus.google.com/100003628603413742554/posts/6y4qBN9zUGj">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:310028Graham on Erdős on Egyptian 
fractions2015-05-20T07:37:43Z2015-05-20T07:39:03ZIn a recent paper Ron Graham <a href="http://www.math.ucsd.edu/~ronspubs/13_03_Egyptian.pdf">surveys the work of Paul Erdős on Egyptian fractions</a>. Did you know that Erdős' second paper was on the subject? I didn't. It proved that the sum of a harmonic progression can never form an Egyptian fraction representation of an integer (there is always at least one prime that appears in only one term). Graham himself is also a fan, having studied Egyptian fractions in his Ph.D. thesis.<br /><br />Another of Erdős' papers surveyed by Graham is also somewhat related to the subject of my recent blog posts on sequences of highly composite numbers. This paper (famous for formulating the Erdős–Straus 4/n = 1/x + 1/y + 1/z conjecture) included another conjecture that every rational number x/y (between 0 and 1) has an Egyptian fraction representation with O(log log y) terms. However, the best bound known so far is larger, O(sqrt log y).<br /><br />For any number z, let D(z) be the smallest number with the property that every positive integer less than z can be expressed as a sum of at most D(z) divisors of z (not necessarily distinct). Then a stronger version of Erdős' conjecture (for which the same bounds are known) is that, for every y, there exists a number z larger than y (but not too much larger) with D(z) = O(log log z). With such a z, you can split x/y into floor(xz/y)/z + remainder/yz and then use the sum-of-divisors property of z to split each of these two terms into a small number of unit fractions.<br /><br />Computing D(z) for small values of z is not particularly hard, using a dynamic programming algorithm for the subset sum problem. So, based on the guess that the highly composite numbers would have small values of D(z), I tried looking for the biggest highly composite number with each value. In this way I found that D(24) = 3; D(180) = 4; D(5040) = 5; and D(1081080) = 6. 
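Here's roughly what that dynamic program looks like (a sketch, not the code I actually ran: an unbounded coin-change computation over the divisors of z, since the divisors in a sum need not be distinct):

```python
def D(z):
    """Smallest D such that every positive integer below z is a sum of at
    most D divisors of z, with divisors allowed to repeat."""
    divisors = [d for d in range(1, z) if z % d == 0]
    best = [0] * z  # best[k] = fewest divisors of z summing to exactly k
    for k in range(1, z):
        best[k] = 1 + min(best[k - d] for d in divisors if d <= k)
    return max(best)
```

Since 1 always divides z, every target k is reachable and the minimum in the inner loop is always finite; the whole computation takes time proportional to z times the number of divisors of z.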
That is, every positive integer less than 1081080 can be represented as a sum of at most six divisors of 1081080, and some require exactly six. Based on this, every x/y with y at most 1081080 can be represented as at most a 12-term Egyptian fraction.<br /><br />Each number in the sequence 2, 6, 24, 180, 5040, 1081080, ... is within a small factor of the 1.6 power of the previous number; another way of saying the same thing is that the numbers in this sequence obey an approximate multiplicative Fibonacci recurrence in which each number is approximately the product of the previous two. The next number in the sequence might still be within reach of calculation, using a faster programming language than my Python implementation. If that 1.6-power pattern could be shown to continue forever, then Erdős' log-log conjecture would be true.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:309894Mid-May linkage2015-05-16T05:36:58Z2015-05-16T05:37:38Z<ul><li><a href="http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2560572">An economic analysis of public domain photos on Wikipedia</a> shows that "massive social harm was done by the most recent copyright term extension that has prevented millions of works from falling into the public domain since 1998" (<a href="https://plus.google.com/100003628603413742554/posts/7DaUpswgdY2">G+</a>)</li><br /><li><a href="http://blogs.ams.org/visualinsight/2015/05/01/twin-dodecahedra/">An infinite tree of regular dodecahedra</a> sharing a cube of vertices between each neighboring pair (<a href="https://plus.google.com/100003628603413742554/posts/UpKo5xNmZn9">G+</a>)</li><br /><li><a href="https://igorpak.wordpress.com/2015/05/02/you-should-watch-combinatorics-videos/">Combinatorics videos</a> collected by Igor Pak (<a href="https://plus.google.com/100003628603413742554/posts/KuGUzGoTqZw">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=a3QqKBWHarA">The inspiration for some of Man Ray's art in a collection of 
mathematical models</a> (<a href="https://plus.google.com/100003628603413742554/posts/UhfwavcvYdo">G+</a>)</li><br /><li><a href="https://plus.google.com/+FrancoisDorais/posts/E3Yh9YwQTQN">Did you know you could get bibtex directly from a doi?</a> (<a href="https://plus.google.com/100003628603413742554/posts/Qbx6xEERaup">G+</a>)</li><br /><li><a href="http://blogs.plos.org/everyone/2015/05/01/plos-one-update-peer-review-investigation/">Journal editor canned for using sexist referee report</a> (<a href="https://plus.google.com/100003628603413742554/posts/jmxuXn5GZ1W">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=ploETyBDM7I">Trilingual powers of two</a> in a video on street-vendor cookie-making (<a href="https://plus.google.com/100003628603413742554/posts/5HQMVbkkJ9S">G+</a>)</li><br /><li><a href="http://www.metafilter.com/149171/The-International-Journal-of-Proof-of-Concept-or-Get-The-Fuck-Out">Winner, best name of an actual publication</a> (hacker zine PoC||GTFO; <a href="https://plus.google.com/100003628603413742554/posts/SgfjCrpmKPP">G+</a>)</li><br /><li><a href="https://www.chromeexperiments.com/experiment/100000-stars">3d visualization of nearby stars</a> (<a href="https://plus.google.com/100003628603413742554/posts/XQsfYnUMWsE">G+</a>)</li><br /><li><a href="https://doajournals.wordpress.com/2015/05/11/historical-apc-data-from-before-the-april-upgrade">Over 2/3 of listed open access journals charge no author fees</a> (<a href="https://plus.google.com/100003628603413742554/posts/geLxaXBzBge">G+</a>)</li><br /><li><a href="http://www.theguardian.com/technology/2015/may/14/dear-google-open-letter-from-80-academics-on-right-to-be-forgotten">Open letter to Google by 80 academics</a> asking for greater transparency on "right to be forgotten" (<a href="https://plus.google.com/100003628603413742554/posts/WGj2wwQtU1J">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:309622Parametric knapsacks for number-theoretic 
sequences2015-05-15T20:32:41Z2015-05-16T00:34:59ZOne of the key principles of <a href="http://11011110.livejournal.com/307881.html">parametric optimization</a> is that, when you are faced with optimizing the nonlinear combination of two linear values (sums of element weights, costs, etc.), you should instead look at the set of optima for all possible linear combinations of the same two values. Let's see how this applies to the <a href="http://11011110.livejournal.com/305481.html">number-theoretic knapsack problems</a> I posted about <a href="http://11011110.livejournal.com/309343.html">earlier this week</a>.<br /><br />In the knapsack problem, we are trying to optimize the total profit of a subset of the given elements, subject to the condition that their total size is at most a given threshold. This can be expressed as a nonlinear combination of these two linear values in which the function of profit and size is the identity function on profit when the size is small enough and zero otherwise. This isn't the nice sort of quasiconvex function that parametric methods are best-suited for, but the fractional knapsack problem instead involves a greedy algorithm for maximizing the profit/size ratio, and this sort of ratio is quasiconvex. So in any case, following the parametric approach, let's replace both of these nonlinear combinations by the linear combination profit − λ·size, let the parameter λ vary, and see what solutions we get.<br /><br />For any particular value of λ, the answer is very simple: the optimal solutions are the ones that take all elements for which profit/size > λ (the ones that make a positive contribution to the solution value), and any subset of the elements for which profit/size = λ (the ones whose contribution is zero). The smallest-size optimal solution is the one that takes only the elements for which profit/size > λ.
So the set of all smallest-size optimal solutions is almost exactly the same as the set of solutions generated by the greedy algorithm that adds one element at a time in order by the profit/size ratio. To make this algorithm generate exactly the smallest-size optimal solutions, we need to modify it so that when there are ties in profit/size ratio it adds all tied elements at once rather than adding them one at a time. When the set of profit/size values is discrete (as it is in our problems) this set of solutions also has the property that each solution is the unique optimal solution for a nonempty range of parameter values.<br /><br />Now suppose we go back to the number-theoretic sequences that I started with (the highly abundant numbers and the highly composite numbers), expand out the definition of the profit and size functions in the parametric optimization functions profit − λ·size, and eliminate the logs in these functions by exponentiating. Then the sequences of smallest-size optimal solutions for these objective functions are exactly how the colossally abundant numbers and superior highly composite numbers are defined. That is, it is no coincidence that starting with the knapsack-problem formulations of the highly abundant and highly composite numbers, and then applying the greedy algorithm to the resulting knapsack problems, gave these other two sequences: it falls out directly from the parametric analysis above and the definitions of these sequences.<br /><br />However, OEIS states that <a href="http://oeis.org/A073751">the correctness of the generation algorithm</a> for the successive factors of the colossally abundant numbers is still conjectural rather than proven. How can this be, when we have seen above that the greedy algorithm always works for sequences like this? The part that must still be unknown concerns the possibility of ties: is it ever possible for two or more knapsack elements to have the same profit/size ratio?
If so we must take both or all of them at once rather than letting them be chosen one at a time. And this is problematic from the algorithmic point of view because it involves testing complicated expressions involving logarithms for exact equality.<br /><br />Specifically, in the highly abundant number version of the problem, we need to know whether there can exist two prime powers <i>p<sup>i</sup></i> with the same value of the expression log<sub><i>p</i></sub>(<i>p</i><sup><i>i</i> + 1</sup> − 1)/(<i>p</i><sup><i>i</i></sup> − 1). In the highly composite number version of the problem, we need to know whether there can exist two prime powers with the same value of the expression log<sub><i>p</i></sub>(<i>i</i> + 1)/<i>i</i>. In both cases, it seems unlikely, but obviously that's not a proof. More generally, Alaoglu and Erdős conjectured in 1944 (in connection with this problem) that two expressions log<sub><i>p</i></sub><i>q</i> with different prime bases and rational arguments can only be equal if they're both integers, but (although it is known that there can be no three-way ties) this remains unproven.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:309343Fractional knapsacks and colossal abundance2015-05-14T05:04:42Z2015-05-14T17:59:01ZIn <a href="http://11011110.livejournal.com/305481.html">a recent post</a> I observed that the largest <a href="https://en.wikipedia.org/wiki/Highly_abundant_number#References">highly abundant number</a> below some threshold <i>n</i> could be found as the optimal solution of a certain knapsack problem in which the items to be packed into the knapsack are prime powers <i>p<sup>i</sup></i> with profit log (<i>p</i><sup><i>i</i> + 1</sup> − 1)/(<i>p</i><sup><i>i</i></sup> − 1) and size log <i>p</i> (both independent of <i>n</i>), and with knapsack capacity log <i>n</i>. 
In particular every highly abundant number has a factorization that can be generated as the solution to this knapsack problem with the number itself as the threshold.<br /><br />Unfortunately, the knapsack problem is NP-complete, making its solutions vary in complicated ways, and making it tricky to extract useful information about highly abundant numbers and their factorizations from this formulation. But fortunately, there's a class of knapsack problems that are really easy to solve: the ones where the optimal fractional solution is the same as the optimal integer solution. These are the solutions that you get by a greedy algorithm that at each step chooses the item with maximum profit/size. This greedy strategy is not optimal for all capacities, but it is optimal when the capacity happens to equal the solution size. So which highly abundant numbers have this greedy property?<br /><br />To test this, I wrote <a href="http://www.ics.uci.edu/~eppstein/0xDE/frachab.py">a simple piece of Python code</a> that starts with the number 1, repeatedly chooses a not-already-chosen prime power that maximizes the profit/size ratio defined above, and multiplies the current number by the base of the chosen prime power. I computed the profits and sizes sloppily using floating point numbers and the built-in log function, but that seems to be good enough for small prime powers. Here are the first few results:<pre>
1
2
6
12
60
120
360
2520
5040
55440
720720
1441440
4324320
21621600
367567200
6983776800
160626866400
321253732800
9316358251200
288807105787200
2021649740510400
6064949221531200
224403121196654400
9200527969062830400
395622702669701707200
791245405339403414400
37188534050951960476800
1970992304700453905270400
116288545977326780410953600
581442729886633902054768000
35468006523084668025340848000
2376356437046672757697836816000
168721307030313765796546413936000
12316655413212904903147888217328000
135483209545341953934626770390608000
10703173554082014360835514860858032000
21406347108164028721671029721716064000
1776726809977614383898695466902433312000
5330180429932843151696086400707299936000
474386058264023040500951689662949694304000
46015447651610234928592313897306120347488000
598200819470933054071700080664979564517344000
60418282766564238461241708147162936016251744000
6223083124956116561507895939157782409673929632000
665869894370304472081344865489882717835110470624000
72579818486363187456866590338397216244027041298016000
8201519488959040182625924708238885435575055666675808000</pre>This calculation was essentially instantaneous; I cut it off here because it was a conveniently-sized screenful of numbers rather than out of any difficulty in continuing the sequence for many more terms.<br /><br />When I tried looking this up in OEIS, I had two surprises. First (except for the leading one) this exactly matches all the known terms of the sequence of <a href="http://oeis.org/A004490">colossally abundant numbers</a>, which have quite a different definition from the highly abundant numbers. <s>Why? I don't know. Do this sequence and the sequence of colossally abundant numbers stay equal forever? I also don't know. And second, this calculation goes much farther than the known entries for the colossally abundant numbers in OEIS (about half of the terms shown above). The computation was so quick that I would tag the sequence "easy" if I were adding it as a new one to OEIS, but the colossally abundant numbers aren't tagged easy and have no listed algorithm for generating their sequence. Does this give a new easy way to calculate the colossally abundant numbers?</s> Update: <a href="http://oeis.org/A073751">this sequence of factors</a> looks like it is calculated the same way, so this method does seem to be known, but still somewhat conjectural. It's not clear whether it was obtained using the greedy knapsack idea or through some other reasoning.<br /><br />The same knapsack formulation applies to other sequences of numbers maximizing multiplicative functions, and the same fractional-knapsack greedy trick can be used to find easy-to-compute subsequences of those other sequences.
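The greedy generator is short enough to sketch here (a reconstruction, not my actual frachab.py; it is parameterized by the profit function so that the same code covers these other sequences, and like my actual code it uses sloppy floating-point logs, good enough for small primes):

```python
from math import log

def greedy_knapsack_sequence(profit, num_terms, num_primes=25):
    """Repeatedly pick the not-already-chosen prime power p**i maximizing the
    ratio profit(p, i) / log(p), and multiply the current number by p."""
    primes, q = [], 2  # a fixed pool of small primes is plenty for early terms
    while len(primes) < num_primes:
        if all(q % p for p in primes):
            primes.append(q)
        q += 1
    exponent = dict.fromkeys(primes, 0)
    n, terms = 1, [1]
    while len(terms) < num_terms:
        p = max(primes, key=lambda pr: profit(pr, exponent[pr] + 1) / log(pr))
        exponent[p] += 1  # the item p**exponent[p] enters the knapsack
        n *= p
        terms.append(n)
    return terms

def abundant_profit(p, i):   # log sigma(p**i) - log sigma(p**(i - 1))
    return log((p ** (i + 1) - 1) / (p ** i - 1))

def composite_profit(p, i):  # log d(p**i) - log d(p**(i - 1))
    return log((i + 1) / i)
```

With abundant_profit this reproduces the list above; with composite_profit it begins 1, 2, 6, 12, 60, 120, 360, 2520, 5040, ...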
For instance, the <a href="https://en.wikipedia.org/wiki/Highly_composite_number">highly composite numbers</a> have knapsack problems with profit log (<i>i</i> + 1)/<i>i</i>, and the greedy knapsack method applied to this profit function gives what looks like the sequence of <a href="http://oeis.org/A002201">superior highly composite numbers</a>. Are others as interesting? I also don't know.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:309102Congratulations, Dr. Bannister!2015-05-12T01:08:11Z2015-05-12T01:08:11ZMy student <a href="http://www.ics.uci.edu/~mbannist/">Michael Bannister</a> passed his thesis defense this afternoon. Michael has published nearly a dozen papers on topics involving graph algorithms and computational geometry (see his home page for a complete listing). His thesis research involved lower bounds and fixed-parameter upper bounds for graph drawing: inapproximability of layout compaction, the use of Galois theory to prove the nonexistence of exact algorithms for optimizing the vertex placement in many styles of graph drawing, and parameterized algorithms for one-page and two-page crossing minimization.<br /><br />Michael has also been one of our most popular teaching assistants and has enthusiastically encouraged undergraduates to take part in research projects, leading to a poster at last year's Graph Drawing symposium and an ongoing project that we hope to turn into another publication. Next year he'll be putting those skills to good use as a visiting assistant professor at <a href="https://en.wikipedia.org/wiki/Pomona_College">Pomona College</a>, a highly selective private school also located in Southern California, while his wife (another theoretician, Jenny Lam) finishes her own doctorate.<br /><br />Congratulations, Michael, and congratulations Pomona! 
Our loss is your gain.urn:lj:livejournal.com:atom1:11011110:308857Tallying preference ballots efficiently2015-05-08T06:15:11Z2015-05-08T06:20:39ZThe <a href="https://en.wikipedia.org/wiki/Schulze_method">Schulze method</a> for determining the results of multiway votes has three parts:<br /><br />1. Use the ballots to determine the results (winner and margin of victory) of each possible head-to-head contest.<br />2. Perform an all-pairs <a href="https://en.wikipedia.org/wiki/Widest_path_problem">widest path</a> computation on a directed complete graph weighted by the margins of victory.<br />3. Find the candidate with wider outgoing than incoming paths to all other candidates.<br /><br />The second part can be done in cubic time using the <a href="https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algorithm">Floyd-Warshall algorithm</a> (the choice in practice) or faster using fast matrix multiplication. And the third part is easy. But what about the first part? Here, some Wikipedia editor <a href="https://en.wikipedia.org/w/index.php?title=Schulze_method&diff=next&oldid=428562163">wrote in 2011</a> that the first part, "if implemented in the most straightforward way, takes time proportional to C<sup>2</sup> times the number of voters" (where C is the number of candidates). But then last year some other editor <a href="https://en.wikipedia.org/w/index.php?title=Schulze_method&type=revision&diff=629989153&oldid=628267803">tagged this claim</a> as being original research.<br /><br />This raised some questions for me. Given how straightforward it is, can this really be considered to be original research? Is it possible to find a published source for the time analysis of this step that can be used to untag it? (If you know of one, please tell me or add it to the article.) Is the algorithm with this time bound really "the most straightforward way"? 
And if this is the time bound you get by doing things straightforwardly, can we get a better time bound by trying to be more clever?<br /><br />To begin with, I think the most straightforward way of solving this is the following. I'll assume that each ballot is stored as a sorted array of the candidates, most-preferred first. For each pair of candidates, loop over all ballots, and search the ballot array sequentially to find the first position that has one of the two candidates in the pair; tally that ballot as a win for the candidate that was found. When you've looped through all the ballots, compare the tallies for the two candidates to determine the winner, and subtract the tallies to determine the margin of victory. But this takes time O(C<sup>3</sup>n), not O(C<sup>2</sup>n).<br /><br />The O(C<sup>2</sup>n)-time method that was intended is presumably something like the following one. We will make a matrix M[i,j] that will eventually store the number of voters who prefer candidate i to candidate j, initially all zeros. We then loop through the ballots one at a time. For each ballot B, each i in the range from 1 to C, and each j in the range from i+1 to C, we add one to the count for M[B[i],B[j]]. Finally, after computing this matrix, we can compare M[i,j] to M[j,i] as before to determine each pairwise winner, or subtract these two numbers to determine the margin of victory.<br /><br />But when the number of voters is big (larger than C!) there's a different way to tally the votes that's more efficient. First, sort the ballots, so that all people who voted the same way are collected into the same group. (This can be done by treating each vote as a number in the <a href="https://en.wikipedia.org/wiki/Factorial_number_system">factorial number system</a> and applying <a href="https://en.wikipedia.org/wiki/Counting_sort">counting sort</a> to these numbers). 
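For concreteness, here is a sketch of that tallying loop (my illustration, with 0-indexed candidates and complete ballots assumed):

```python
def pairwise_tallies(ballots, C):
    """M[i][j] counts the voters who prefer candidate i to candidate j.
    Each ballot is a list of the C candidate indices, most preferred first,
    so the whole tally takes O(C**2 * n) time for n ballots."""
    M = [[0] * C for _ in range(C)]
    for B in ballots:
        for a in range(C):
            for b in range(a + 1, C):
                M[B[a]][B[b]] += 1  # B[a] is ranked above B[b] on this ballot
    return M
```

Tallying grouped ballots is the same loop over the distinct ballot types, with the increment replaced by the size of each group.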
Then, apply the O(C<sup>2</sup>n)-time method to the grouped ballots, looping over all groups rather than all individual ballots and changing the part that adds one to M[B[i],B[j]] so that instead it adds the size of a group of ballots. The running time is O(Cn) to number and sort the ballots, plus O(C<sup>2</sup>C!) to tally them. So we've reduced the dependence on n down to linear in C, at the expense of adding another term that is a much larger function of C. For systems like the Oscars or Hugos that have only five candidates and thousands of voters, this could be a win.<br /><br />It's not possible to achieve a time of just O(Cn), without the extra term, because even when n is tiny the output size is C<sup>2</sup>. But it is possible to trade off between the grouped and ungrouped tallying methods, when n is intermediate in size. To do so, group the candidates (arbitrarily) into blocks of B candidates (preferably a power of two; we'll pick the right size for B later). We can partition a voter's preferences into blocks in time O(Cn) by using bucket sort to partition the candidates into blocks in their preference order, and we can determine the voter's preferences between the candidates in the union of two blocks in time O(B) by applying a merge algorithm, comparing candidates using a reverse index of the positions of each candidate in the voter's preference list. There are O((C/B)<sup>2</sup>) pairs of blocks, so combining the times for splitting votes into blocks and for applying the factorial method to each pair of blocks gives a total runtime of O(Cn + C<sup>2</sup>n/B + C<sup>2</sup>(2B)!).
The right choice for B is the one which makes the second and last terms of this runtime approximately equal (B proportional to log n/loglog n) and this logarithmic factor is the amount by which the middle term of the time bound is faster than the "straightforward" O(C<sup>2</sup>n)-time method.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:308604Linkage2015-05-01T05:01:52Z2015-05-01T05:01:52ZSome good discussions this time over on G+, especially for the vote-off-the-island post but also on the golden spiral, P=NP counterexample, and election-system posts.<br /><ul><li><a href="http://www.thisiscolossal.com/2015/04/raw-rendered-experimental-3d-artworks-by-joey-camacho/">3d rendered art by Joey Camacho</a> (<a href="https://plus.google.com/100003628603413742554/posts/VrV9SCVLukg">G+</a>)</li><br /><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2015/mar/14/pi-day-2015-pi-rivers-truth-grime">Rivers don't actually approximate semicircles</a> (<a href="https://plus.google.com/100003628603413742554/posts/c5UuWWziNxM">G+</a>)</li><br /><li><a href="http://www.usatoday.com/story/tech/2015/04/19/chris-roberts-one-world-labs-united-rsa-computer-security-tweets/26036397/">Intimidating researchers from discussing known vulnerabilities in fly-by-wire systems</a> (<a href="https://plus.google.com/100003628603413742554/posts/iTDu67mZXtg">G+</a>)</li><br /><li><a href="http://www.kurims.kyoto-u.ac.jp/icalp2015/accepted-ICALP-A.html">ICALP accepted papers</a> (<a href="https://plus.google.com/100003628603413742554/posts/dc3a2EMZ9NJ">G+</a>)</li><br /><li><a href="http://makezine.com/2015/04/20/understand-1700-mechanical-linkages-helpful-animations/">Animations of mechanical linkages</a> (<a href="https://plus.google.com/100003628603413742554/posts/KMVL1uVadxp">G+</a>)</li><br /><li><a href="https://plus.google.com/+DavidRoberts/posts/5scZ4Hvzh5d">If impact factors are so obviously irrelevant, why do we still use them?</a> (<a 
href="https://plus.google.com/100003628603413742554/posts/bCAYQMsUL4x">G+</a>)</li><br /><li><a href="http://hechingerreport.org/californias-multi-million-dollar-online-education-flop-is-another-blow-for-moocs/">California MOOC boondoggle flops</a> (<a href="https://plus.google.com/100003628603413742554/posts/CLSjy3GgSVD">G+</a>)</li><br /><li><a href="http://chronicle.com/article/Iowa-Legislator-Wants-to-Give/229589/">Enabling students to vote disliked instructors off the island</a> (<a href="https://plus.google.com/100003628603413742554/posts/2ycUKjQuGBi">G+</a>)</li><br /><li><a href="http://shorts2014.quantumlah.org/">Festival of short films on quantum mechanics</a> (<a href="https://plus.google.com/u/0/100003628603413742554/posts/3hbTaLaE7Um">G+</a>)</li><br /><li><a href="https://xkcd.com/spiral/">What do Don Sheehy, a sewing machine, and the golden spiral have to do with each other?</a> (<a href="https://plus.google.com/100003628603413742554/posts/9YqCMRdFW2Y">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1504.06890">Undergraduates publish counterexamples to P=NP proofs</a> as a result of a research seminar at Rochester conducted by Lane Hemaspaandra (<a href="https://plus.google.com/100003628603413742554/posts/gX9ETXHXGod">G+</a>)</li><br /><li><a href="http://www.cut-the-knot.org/Curriculum/SocialScience/%28171%292015.pdf">Deciding elections by who has the best median-voter score</a> (<a href="https://plus.google.com/100003628603413742554/posts/A62YhjhCfEt">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:308431Perturbing weighted elements to make set weights distinct2015-04-21T01:11:54Z2015-04-21T01:11:54ZSuppose you have a polynomial-time algorithm that operates on sets of weighted elements, and involves comparisons of the weights of different sets. (This describes many different algorithms for shortest paths, minimum spanning trees, minimum weight matchings, <a href="http://11011110.livejournal.com/307881.html">closures</a>, etc.) 
But suppose also that your algorithm is only guaranteed to work correctly when different sets always have distinct total weights. When comparisons could come out equal, your algorithm could crash or produce incorrect results. But equal weights are likely to happen when the element weights are small integers, for instance. Is there some semi-automatic way of patching your algorithm to work in this case, without knowing any details about how it works?<br /><br />An obvious thing to try is to add small distinct powers of two to the element weights. If these numbers are small enough they won't affect initially-unequal comparisons. And if they're distinct powers of two then their sums are also distinct, so each two sets get a different perturbation. But this method involves computing with numbers that have an additional <i>n</i> bits of precision (where <i>n</i> is the number of elements in the problem), and a realistic analysis of this method would give it a near-linear slowdown compared to the unperturbed algorithm. Can we do better?<br /><br />Exactly this issue comes up in my latest preprint, "<a href="http://arxiv.org/abs/1504.04931">Rooted Cycle Bases</a>" (with McCarthy and Parrish, arXiv:1504.04931, to appear at WADS). The paper is motivated by some problems concerning <a href="http://11011110.livejournal.com/279049.html">kinematic chains</a>, and studies problems of finding a cycle basis of a given graph in which all basis cycles are constrained to contain a specific edge. When all cycles have distinct weights a simple greedy algorithm can be used to find a minimum-weight basis, but if there are ties then this algorithm can easily go astray. 
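The powers-of-two perturbation described above is easy to demonstrate concretely. Here is a minimal sketch using exact rational arithmetic (the function name and the tiny all-equal example are mine, not from the paper):

```python
from fractions import Fraction
from itertools import combinations

def perturb(weights):
    """Perturb weights by adding distinct tiny powers of two.

    Element i receives an extra 2**i * eps.  With eps < 1/2**n the total
    perturbation of any subset stays below 1, so comparisons that were
    already strict between integer weights are unchanged, while any two
    distinct subsets now have distinct totals.  (As noted above, this
    costs n extra bits of precision per weight.)
    """
    n = len(weights)
    eps = Fraction(1, 2 ** n)
    return [w + (2 ** i) * eps for i, w in enumerate(weights)]

# Four equal integer weights: all 2**4 = 16 subset sums become distinct.
ws = perturb([5, 5, 5, 5])
totals = {sum(c, Fraction(0)) for r in range(5) for c in combinations(ws, r)}
assert len(totals) == 16
```

For example, the four equal weights become 5 + 1/16, 5 + 2/16, 5 + 4/16, and 5 + 8/16, so every subset sum carries a distinct bit pattern in its fractional part. Of course, this is exactly the scheme whose <i>n</i> extra bits of precision make it slow; it is shown only to illustrate why the sums come out distinct.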
The greedy algorithm's analysis is complicated enough that, rather than trying to add special-case tie-breaking rules to the algorithm and proving that they still work correctly, I'd like a general-purpose method for converting algorithms that work for distinct path and cycle weights into algorithms that don't require distinctness.<br /><br />If randomization is allowed, it's not difficult to perturb the weights efficiently, so that additions and comparisons of weights still take constant time. Just let ε be a sufficiently small number (or, by symbolic computation, treat it as an infinitesimal) and perturb each element weight by a randomly chosen integer multiple of ε, where the random integers are chosen to have polynomial magnitude. These integers are small enough that (on a machine capable of addressing its own memory) they fit into a machine word, so adding them and comparing their sums takes constant time per operation. And by choosing the polynomial to be large enough, we can ensure that with high probability each two sets that we compare will have different perturbations. (We don't care about the many other pairs of sets that we don't compare.)<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/set-comparison.png"></div><br /><br />The deterministic case is trickier. To solve it (in an appendix of the preprint) I define a data structure that can build up a persistent collection of sets, by adding one element at a time to a previously-constructed set, and then can answer queries that seek the smallest index of an element that belongs to one set and not another. Essentially, it involves a binary tree structure imposed on the elements, and a recursive representation of each set that follows the tree structure but shares substructures with other sets, so that differing elements can be found by tracing down through the tree looking for non-shared substructures.
The figure above (from the paper) illustrates in a schematic way what it looks like; see the appendix for details. This allows the power-of-two technique to work, by replacing numerical comparisons on high-precision numbers by these set queries. It would also be possible to add element-removal operations, although I didn't need these for the cycle basis problem. But it's a bit cumbersome and slow: comparing two sets with this method takes logarithmic time, and adding an element to a set is slightly slower than that. And the details involve deterministic integer dictionary data structures that are theoretically efficient but for practical problem sizes probably worse than binary search trees. So I think there's definitely scope for coming up with a cleaner and faster solution.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:307976The red door2015-04-17T07:16:44Z2015-04-17T07:16:44ZI couldn't resist photographing this door to a lecture hall in the science sector of the UCI campus. I'm not sure what the pink paint brushmarks are: vandalism? Rustoleum? But they make a nice pattern.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/reddoor/2-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/reddoor/1.html">Another shot of the same door</a> )</b>urn:lj:livejournal.com:atom1:11011110:307881Parametric closures2015-04-17T01:08:46Z2015-04-17T01:08:46ZMy latest arXiv preprint, <a href="http://arxiv.org/abs/1504.04073">The Parametric Closure Problem</a> (arXiv:1504.04073, to appear at WADS) concerns an old optimization problem that can be used, among other applications, in the planning process for open-pit mining.<br /><br />Suppose you have the mining rights to a three-dimensional patch of earth and rock, in which the ore is of a type and depth that make it appropriate to remove the ore by digging down to it from above rather than by tunneling. 
You can make a three-dimensional model of your mining area, in which different three-dimensional blocks of material might represent ore of different values or worthless overburden (the stuff on top of the ore that you have to remove to get to the ore). Each block has its own value: the profit that can be extracted from its ore minus the cost of digging it out and processing it. Additionally, each block has some blocks above it (maybe staggered in a three-dimensional brick-wall pattern) that have to be removed first before you can get to it. Some blocks are worth digging for; others are buried so deeply under other worthless material that it would cost more to dig them out than you would get in profit from them. How should you go about deciding which blocks to excavate and which to leave in place?<br /><br />This can be modeled mathematically by the <a href="https://en.wikipedia.org/wiki/Closure_problem">closure problem</a>, in which you have as input a partially ordered set (the blocks of the mine, ordered by which ones have to be excavated first before you can get to which other ones) with weights on each element (the net profit of excavating each block). The goal is to find a downward-closed subset of the partial order (a set of blocks such that, whenever a block is in the set, so is all of its overburden) with maximum total weight. Alternatively, instead of a partial order, you can think about a directed acyclic graph, in which you have to find a set of vertices with no outgoing edges; the problem is essentially the same. It has long been known that this can be solved in polynomial time using a transformation to the minimum cut problem.<br /><br />Ok, but that assumes that the price of the material you're extracting (gold, say) is fixed. What happens as the price of gold varies? If gold is more expensive, it will be worthwhile to dig deeper for it; if it is cheap enough, you might even prefer to shut down the whole mine. 
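To make the fixed-price version of the problem concrete before the parameter starts varying, here is a brute-force sketch (illustration only: real solvers use the minimum-cut transformation just mentioned, and the tiny five-block mine and all names here are my own invention):

```python
from itertools import combinations

def max_closure(weights, above):
    """Brute-force maximum-weight closure (exponential; illustration only).

    weights: dict mapping each block to its net profit (may be negative).
    above[b]: the set of blocks that must be excavated before b.
    A valid mining plan is downward-closed: whenever it contains a block,
    it also contains all of that block's overburden.
    """
    blocks = list(weights)
    best, best_plan = 0, frozenset()          # doing nothing is always allowed
    for r in range(1, len(blocks) + 1):
        for sub in combinations(blocks, r):
            plan = frozenset(sub)
            if all(above[b] <= plan for b in plan):   # closed under "dig first"
                profit = sum(weights[b] for b in plan)
                if profit > best:
                    best, best_plan = profit, plan
    return best, best_plan

# Ore block 'c' (profit 5) lies under overburden 'a' and 'b' (cost 2 each);
# block 'd' (profit 1) lies under 'e' (cost 3), so 'd' is not worth digging.
weights = {'a': -2, 'b': -2, 'c': 5, 'd': 1, 'e': -3}
above = {'a': frozenset(), 'b': frozenset(), 'c': frozenset('ab'),
         'd': frozenset('e'), 'e': frozenset()}
```

Here `max_closure(weights, above)` selects the plan {'a', 'b', 'c'} with net profit 1, and correctly leaves 'd' buried.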
How many different mining plans do you need for different prices of gold, and how can you compute them all? This is an example of a parametric optimization problem, one in which the weight of each element depends continuously on a parameter rather than being a fixed number.<br /><br />Alternatively, what if you want to optimize a quantity that isn't just a sum of element weights? Suppose, for instance, that it takes a certain up-front cost to extract a block of ore, but that you only get the value of the gold in the ore later. How can you choose a mining plan that maximizes your return-on-investment, the ratio between the profit you expect and the cost you have to pay now? This can also be modeled as a parametric problem, where the weight of a block has the form C × profit − cost for an unknown parameter C. If you can find all the different mining plans that would be obtained by different choices of C, you can then search through them for the plan with the best return-on-investment; the ratio-optimal plan is guaranteed to be among these parametric solutions.<br /><br />My paper defines the parametric (and bicriterion) closure problems, but I was only able to find polynomial-time solutions (and polynomial bounds on the number of different solutions to be found) for some special cases of partial orders, including series-parallel partial orders, semiorders, and orders of bounded width. However, the partial orders arising in the mining problem are unlikely to be any of these, so a lot more remains to be done. In particular I'd like to know whether there can exist a partial order whose parametric closure problem has exponentially many solutions, or whether they all have only a polynomial number of solutions. (Anything in between would also be interesting.)<br /><br />Incidentally, it's tempting to try to generalize closures of partial orders to feasible sets of antimatroids, and ask for an algorithm that can find the maximum weight feasible set.
Unfortunately, this antimatroid closure problem is NP-complete. Consider, for instance, an antimatroid defined from a family of sets <i>S<sub>i</sub></i> in which there is one antimatroid element <i>x<sub>i</sub></i> corresponding to each set <i>S<sub>i</sub></i>, another antimatroid element <i>y<sub>j</sub></i> corresponding to each element of a set, and the feasible sets consist of any subset of the <i>x<sub>i</sub></i>'s together with any of the <i>y<sub>j</sub></i>'s that are covered by sets among the chosen <i>x<sub>i</sub></i>'s. If we give the <i>x<sub>i</sub></i>'s small equal negative weights and the <i>y<sub>j</sub></i>'s big equal positive weights, then the optimal feasible set is given by the optimal solution to a set cover problem. Although this complexity result doesn't prove anything about the number of solutions to the corresponding parametric problem, it makes me think that the parametric antimatroid problem is likely to be exponential.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:307498Linkage for tax day2015-04-16T05:41:47Z2015-04-16T05:41:47Z<ul><li><p><a href="https://www.youtube.com/watch?v=RYH_KXhF1SY">Fractal flat torus flyover video</a> (<a href="https://plus.google.com/100003628603413742554/posts/ESNzvcmNWPL">G+</a>)</p></li>
<li><p><a href="http://retractionwatch.com/2015/04/01/you-cant-make-this-stuff-up-plagiarism-guideline-paper-retracted-for-plagiarism/">The author of the plagiarized article on plagiarism turns out to have himself been a past victim of plagiarism</a> (<a href="https://plus.google.com/100003628603413742554/posts/fuW4FPEHzuC">G+</a>)</p></li>
<li><p><a href="http://arxiv.org/abs/1501.03837">2048 NP-hardness</a> (<a href="https://plus.google.com/100003628603413742554/posts/UXrCSYbdW4c">G+</a>)</p></li>
<li><p><a href="http://mkukla.com/Stone/stone_07_1.html">Michael Kukla's organically-shaped stone sculpture</a> (<a href="https://plus.google.com/100003628603413742554/posts/Yudx1zunsN6">G+</a>)</p></li>
<li><p><a href="http://greenupgrader.com/15763/water-saving-tip-the-shower-bucket/">Save water: shower with a bucket</a> (<a href="https://plus.google.com/100003628603413742554/posts/53q1YH9JpBx">G+</a>)</p></li>
<li><p><a href="http://www.nytimes.com/2015/04/03/opinion/south-koreas-invasion-of-privacy.html">The reach of the all-seeing eye extends to the land of morning calm</a> (<a href="https://plus.google.com/100003628603413742554/posts/gFrJMvfEECz">G+</a>)</p></li>
<li><p><a href="http://research.cs.queensu.ca/cccg2015/">CCCG call for papers</a> (<a href="https://plus.google.com/100003628603413742554/posts/CP2rxPCRTLB">G+</a>)</p></li>
<li><p><a href="http://www.thisiscolossal.com/2015/04/layered-glass-sculptures-niyoko-ikuta/">Layered cut-glass sculpture by Niyoko Ikuta</a> (<a href="https://plus.google.com/100003628603413742554/posts/LexAfZzuV1L">G+</a>)</p></li>
<li><p><a href="http://theconversation.com/using-wikipedia-a-scholar-redraws-academic-lines-by-including-it-in-his-syllabus-39103">Editing Wikipedia as course assignment</a> (<a href="https://plus.google.com/100003628603413742554/posts/8pBR5Vx3pzv">G+</a>)</p></li>
<li><p><a href="https://dl.dropboxusercontent.com/u/73307148/www.wads.org/Home/accepted.html">WADS accepted papers</a> (<a href="https://plus.google.com/100003628603413742554/posts/Wxn6MF5orK7">G+</a>)</p></li>
<li><p><a href="http://www.dataisnature.com/?p=2138">The fractal architecture and algorithmic design of Hindu temples</a> (<a href="https://plus.google.com/100003628603413742554/posts/9FWZLhi5P2c">G+</a>)</p></li>
<li><p><a href="http://m759.net/wordpress/?p=49049">4d hypercube in a 4x4 planar grid</a> (<a href="https://plus.google.com/100003628603413742554/posts/8GTbEjYwvFK">G+</a>)</p></li>
<li><p><a href="http://boingboing.net/2015/04/13/village-has-a-model-village-wh.html">Self-containing model village Droste effect</a> (<a href="https://plus.google.com/100003628603413742554/posts/CZiE7z5P6ch">G+</a>)</p></li>
<li><p><a href="http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-services-uses-formal-methods/fulltext">CACM on formal methods at Amazon</a> (<a href="https://plus.google.com/100003628603413742554/posts/6CrVgj1zCsn">G+</a>)</p></li>
<li><p><a href="https://plus.google.com/117663015413546257905/posts/E4cfuyhawYh">Infinite reflection within a mirrored sphere</a> (<a href="https://plus.google.com/100003628603413742554/posts/YskmJuQUcik">G+</a>)</p></li></ul>urn:lj:livejournal.com:atom1:11011110:307408Linkage2015-04-01T03:56:27Z2015-04-01T03:56:27Z<ul><br /><li><a href="http://www.bbc.com/news/technology-31302312">Non-uniformly-random playlists sound more random than random ones</a> (<a href="https://plus.google.com/100003628603413742554/posts/RsmyXs8GpMZ">G+</a>)</li><br /><li><a href="http://www.metafilter.com/147970/If-you-can-read-this-sentence-you-can-talk-with-a-scientist">Is it a good thing that science is monolingual?</a> Is it even true? (<a href="https://plus.google.com/100003628603413742554/posts/aV9jJwG3ddr">G+</a>)</li><br /><li><a href="http://blog.wikimedia.org/2015/03/17/raspberry-pi-tanzania-school/">Bringing Wikipedia to a school without electricity</a> (<a href="https://plus.google.com/100003628603413742554/posts/eg7upcxdKRC">G+</a>)</li><br /><li><a href="https://plus.google.com/115585433364871264133/posts/7pYEqXYu36G">Erik Demaine presents the MAA Centennial Lecture</a> (<a href="https://plus.google.com/100003628603413742554/posts/Q6PNMKv51db">G+</a>)</li><br /><li><a href="https://quomodocumque.wordpress.com/2015/03/18/math-bracket-2015/">What if we held elimination tournaments based on the strength of math departments?</a> (<a href="https://plus.google.com/100003628603413742554/posts/dyjP8wBTbX8">G+</a>)</li><br /><li><a href="http://bldgblog.blogspot.co.uk/2014/06/mathematical-equations-as-architectonic.html">Mathematical equations as architectonic forms</a> (<a href="https://plus.google.com/100003628603413742554/posts/gujwuuur157">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=AxJubaijQbI">Persi Diaconis on good and bad ways to shuffle cards</a> (<a href="https://plus.google.com/100003628603413742554/posts/jS8u9tWRJV3">G+</a>)</li><br /><li><a 
href="http://www.cems.uvm.edu/~darchdea/problems/problems.html">Open problems in topological graph theory</a> from the late Dan Archdeacon (<a href="https://plus.google.com/100003628603413742554/posts/iRsQaEVpaGP">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=OuF-WB7mD6k">How to fix a wobbly table.</a> But only if the problem is the uneven ground, not the table itself. (<a href="https://plus.google.com/100003628603413742554/posts/VqteLTwupnP">G+</a>)</li><br /><li><a href="http://www.scientificamerican.com/article/new-form-of-ice-forms-in-graphene-sandwich/">Square ice in graphene sandwiches</a> (<a href="https://plus.google.com/100003628603413742554/posts/aW4jWARGbvx">G+</a>)</li><br /><li><a href="http://news.sciencemag.org/scientific-community/2015/03/editor-quits-journal-over-pay-expedited-peer-review-offer">Nature Publishing Group lets authors pay for faster reviews.</a> One editor quits in disgust. (<a href="https://plus.google.com/100003628603413742554/posts/eNcuZdZGEfW">G+</a>)</li><br /><li><a href="https://hbr.org/2015/03/the-5-biases-pushing-women-out-of-stem">Five biases pushing women out of STEM</a> (<a href="https://plus.google.com/100003628603413742554/posts/E5ATuXyMZvP">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Entropy_compression">Entropy compression</a>, proving that randomized algorithms terminate because their past histories have too little information (<a href="https://plus.google.com/100003628603413742554/posts/gKxGtagGxRe">G+</a>)</li><br /><li><a href="https://www.simonsfoundation.org/multimedia/mathematical-impressions-multimedia/mathematical-impressions-the-golden-ratio/">George Hart on why you shouldn't believe many claims about appearances of the golden ratio</a> (<a href="https://plus.google.com/100003628603413742554/posts/JcBqFQGkJyr">G+</a>)</li><br /></ul>urn:lj:livejournal.com:atom1:11011110:307017Clique minors in de Bruijn graphs2015-03-22T05:18:21Z2015-03-22T05:18:21ZIn my new Wikipedia article 
on <a href="https://en.wikipedia.org/wiki/Queue_number">the queue number of graphs</a>, the binary de Bruijn graphs form an important family of examples. These are 4-regular graphs with one vertex for every <i>n</i>-bit binary string, and with an edge from every string of the form 0s or 1s to s0 or s1. <a href="http://11011110.livejournal.com/75392.html">I posted about them</a> here several years ago, with the following drawing, which can be interpreted as a 2-queue drawing with one queue for the edges that wrap around the left side and another for the edges that wrap around the right.<br /><br /><div align="center"><a href="http://en.wikipedia.org/wiki/Image:DeBruijn-3-2.png"><img src="http://www.ics.uci.edu/~eppstein/0xDE/dbg32b.png" border="0"></a></div><br /><br />Graph minors also showed up in the article, and it occurred to me to wonder: do de Bruijn graphs belong to any minor-closed graph families? The answer should be no, because they're too highly connected, but can we quantify this? One way would be to determine the <a href="https://en.wikipedia.org/wiki/Hadwiger_number">Hadwiger number</a> of the de Bruijn graphs, i.e., the size of their largest clique minors. As long as this is not bounded by a constant, the de Bruijn graphs do not belong to any nontrivial minor-closed family. And in fact, that turns out to be true: the Hadwiger number is somewhere near the square root of the number of vertices.<br /><br />One direction is easy: an <i>n</i>-vertex de Bruijn graph has 2<i>n</i> edges, and a <i>k</i>-vertex clique minor needs at least <i>k</i>(<i>k</i> − 1)/2 edges, so <i>k</i> has to be at most approximately 2√<i>n</i>.<br /><br />In the other direction, it's possible to exhibit an explicit clique minor of size nearly the square root of <i>n</i> in any de Bruijn graph. 
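Both halves of the easy direction — the 2<i>n</i> edge count and the resulting bound of roughly 2√<i>n</i> — can be checked with a short computation (a sketch; the helper names are mine):

```python
from itertools import product

def de_bruijn(k):
    """Binary de Bruijn graph of order k: one vertex per k-bit string,
    with edges s -> s[1:] + b for b in {0, 1} (shift left, append a bit)."""
    verts = [''.join(bits) for bits in product('01', repeat=k)]
    edges = [(s, s[1:] + b) for s in verts for b in '01']
    return verts, edges

def clique_minor_bound(n):
    """Largest k with k*(k-1)/2 <= 2*n: a clique minor with more vertices
    than this would need more edges than the host graph has."""
    k = 0
    while (k + 1) * k // 2 <= 2 * n:
        k += 1
    return k

verts, edges = de_bruijn(4)
assert len(edges) == 2 * len(verts)    # 2n edges, as stated
```

For n = 16 this gives `clique_minor_bound(16) == 8`, matching 2√<i>n</i> exactly; note the edge list includes the two self-loops at the all-0 and all-1 strings.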
To exhibit such a minor, I need three ingredients:<br /><br />(1) A representative vertex in the de Bruijn graph for each clique vertex,<br /><br />(2) A path in the de Bruijn graph between any two representative vertices (not necessarily disjoint from the other paths), and<br /><br />(3) A mapping from the vertices within these paths to representative vertices, such that each path can be split into two segments that are mapped to the two endpoints of the path.<br /><br />With these ingredients, the minor itself can be formed by throwing away non-path vertices and contracting path edges between pairs of vertices that are mapped to the same endpoint as each other. (Every clique minor of any graph can be represented in this way.)<br /><br />So here are the representative vertices: for order-<i>k</i> de Bruijn graphs (with <i>n</i> = 2<sup><i>k</i></sup> vertices) they are the binary strings of the form 1<i>x</i>1<i>y</i>1<i>y</i>, where <i>x</i> is a string of about log<sub>2</sub> <i>k</i> consecutive 0's and <i>y</i> is a string of length (<i>k</i> − len(<i>x</i>) − 3)/2 that doesn't contain <i>x</i> as a substring. The <i>y</i> part of this is what distinguishes this representative vertex from all the other ones, and we will look for this string to determine how to map path vertices to representative vertices. The <i>x</i> part of the string carries no useful identifying information, but instead will allow us to find <i>y</i> even when the string has been shifted and mangled in the process of finding a path between two representative vertices. With this choice of the length of <i>x</i>, a constant fraction of the strings that are the right length to be <i>y</i> are valid (don't contain <i>x</i> as a substring). 
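For small parameters the representative vertices can simply be enumerated. This sketch implements just the conditions stated above; the concrete choice of k = 11 with x = 00 is mine:

```python
from itertools import product

def representatives(k, m):
    """Representative vertices 1 x 1 y 1 y of the order-k de Bruijn graph,
    where x = 0^m (m should be about log2(k)) and y ranges over bitstrings
    of length (k - m - 3) // 2 that avoid x as a substring.  Requires that
    k - m - 3 be even so the lengths work out."""
    x = '0' * m
    ylen = (k - m - 3) // 2
    assert k == m + 3 + 2 * ylen, "k - m - 3 must be even"
    reps = []
    for bits in product('01', repeat=ylen):
        y = ''.join(bits)
        if x not in y:                 # y must not contain the marker x
            reps.append('1' + x + '1' + y + '1' + y)
    return reps

reps = representatives(11, 2)          # x = '00', y of length 3 avoiding '00'
```

Here the five valid choices of y (010, 011, 101, 110, 111) yield five 11-bit representative vertices, each beginning with the marker prefix 1001.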
The number of valid <i>y</i>'s, and therefore the size of the clique minor that we find, is proportional to the square root of <i>n</i>/log <i>n</i>.<br /><br />To find a path from one representative vertex to another, we simply follow edges that shift the bitstring left by one position, shifting in the bits of the second representative vertex as we shift out the bits of the first. This actually gives two paths between each two representative vertices (one in each direction) but that isn't a problem; just pick one of the two.<br /><br />In order to define the mapping from path vertices to representative vertices, it's convenient to think of a bitstring (vertex of the de Bruijn graph) as having its left end wrapped around and glued to the right end to form a single cyclic sequence of bits. As we follow the path, the string <i>x</i> of consecutive 0's will rotate from the left side of the string to the right and then back to the left, but will always be uniquely identifiable as the only string of consecutive 0's of the correct length in this cyclic sequence. From the position of <i>x</i> in any path vertex, we can identify two substrings in the cyclic sequence, in the correct positions relative to <i>x</i> to be the <i>y</i>'s of a representative vertex. For the first half of the path, one of these two <i>y</i> substrings will be equal to the <i>y</i> of the starting vertex of the path, and the second will be arbitrary (some mix of the two path endpoints). For the second half of the path, the pattern is reversed: the other one of the two <i>y</i> substrings will be equal to the <i>y</i> of the ending vertex of the path, and the first one will be a mix. But we can tell which of these two situations is the case by looking at the position of the consecutive 0's. 
So we map each path vertex to the representative vertex for one of its two <i>y</i> substrings, the one that isn't mixed up.<br /><br />So which of sqrt(<i>n</i>) (the edge-counting upper bound) and sqrt(<i>n</i>/log <i>n</i>) (the explicit construction of a clique minor) is closer to the truth? I'm not sure. On the one hand, if you have representative vertices <i>k</i> units apart from each other (as seems necessary, up to constant factors) with disjoint paths between them in the clique minor, then comparing the total number of edges in these paths with the total number of edges in the complete minor would show that the sqrt(<i>n</i>/log <i>n</i>) bound is tight. On the other hand, in the construction above, the paths are not disjoint, and they can't be because the representative vertices don't have high enough degree. But I don't know how to define the mapping from path vertices to representative vertices without, seemingly, wasting bits on the <i>x</i> strings, which are used only as markers to determine where in the path each vertex is.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:306907Shattered glass2015-03-21T04:55:15Z2015-03-21T04:55:15ZA broken pane in the main stairwell of my department's building (maybe a bird strike?) 
gave me a chance to play with the geometry of shattered glass.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/brenglass/1-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/brenglass/index.html">The rest of the photos</a> )</b>urn:lj:livejournal.com:atom1:11011110:306573Linkage for the ides of March2015-03-16T01:11:03Z2015-03-16T01:11:03Z<ul><li><a href="http://blogs.ams.org/visualinsight/2015/03/01/schmidt-arrangement/">The Schmidt arrangement</a>, triangular rosettes of circles from number theory (<a href="https://plus.google.com/100003628603413742554/posts/RM8JpoWaoA4">G+</a>)</li><br /><li><a href="https://archive.org/details/vieleckeundvielf00bruoft">Brückner, <i>Vielecke und Vielflache</i> (1900)</a>, with <a href="http://rudygodinez.tumblr.com/post/79054495133/prof-dr-max-bruckner-four-plates-from-the-book-vielecke">a tumblr post of some stellated polyhedra photographed in the book</a> (<a href="https://plus.google.com/100003628603413742554/posts/fZ4Txj3kJ7p">G+</a>)</li><br /><li><a href="http://www.fq.math.ca/Announcements/Riordan6.pdf">$1000 prize for solving open problems in OEIS</a> (<a href="https://plus.google.com/100003628603413742554/posts/Dj3e9dfkDaX">G+</a>)</li><br /><li><a href="http://bit.ly/1aKcndl">MAA celebrates Women's History Month</a> (<a href="https://plus.google.com/100003628603413742554/posts/TZsABVCueCi">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=0eC4A2PXM-U">Tensegrity robot video</a> (<a href="https://plus.google.com/100003628603413742554/posts/c7UwZtBgv52">G+</a>)</li><br /><li><a href="http://theconversation.com/you-probably-havent-heard-of-these-five-amazing-women-scientists-so-pay-attention-38329">Mini-biographies of five women scientists</a> (<a href="https://plus.google.com/100003628603413742554/posts/XvdMgN1vNhW">G+</a>)</li><br /><li><a href="http://www.youtube.com/watch?v=l4bmZ1gRqCc">Numberphile video on the 
diversity of human natural-language number systems</a> (<a href="https://plus.google.com/100003628603413742554/posts/73Njxen87s2">G+</a>)</li><br /><li><a href="http://iacopoapps.appspot.com/hopalongwebgl/">Interactive 3d fractal fly-through</a> (<a href="https://plus.google.com/100003628603413742554/posts/Vem5zGKrBpX">G+</a>)</li><br /><li><a href="http://www.laurenbcollister.com/well-well-look-whos-at-it-again">Yet more Elsevier misbehavior</a> (charging for access to open-access papers; <a href="https://plus.google.com/100003628603413742554/posts/BSXDuPdECwK">G+</a>)</li><br /><li><a href="http://sarielhp.org/blog/?p=8827">Sad news of Jirka Matousek's death</a> (<a href="https://plus.google.com/100003628603413742554/posts/DTwFj8qmvvT">G+</a>)</li><br /><li><a href="http://retractionwatch.com/2015/03/12/yes-we-are-seeing-more-attacks-on-academic-freedom-guest-post-by-historian-of-science-and-medicine/">Increasing attacks on academic freedom</a> (<a href="https://plus.google.com/100003628603413742554/posts/ELfJRoJda4c">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=2E9m6yDEIj8">Vi Hart throws cold water on the whole Pi day thing and how arbitrary it is</a> (<a href="https://plus.google.com/100003628603413742554/posts/6YiHR4YHEj4">G+</a>)</li><br /><li><a href="https://cameroncounts.wordpress.com/2015/03/15/folding-de-bruijn-graphs/">Folding de Bruijn graphs</a> (<a href="https://plus.google.com/100003628603413742554/posts/DUGi3ED1ppW">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:306195Photos from Bellairs2015-03-15T04:31:47Z2015-03-15T04:31:47ZI was in Barbados last week for the <a href="http://cglab.ca/~morin/misc/bb2015/">Third Annual Workshop on Geometry and Graphs</a>. 
This time, unlike <a href="http://11011110.livejournal.com/286162.html">my visit last year</a>, I remembered to bring my camera.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/bellairs15/23-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/bellairs15/index.html">Many more photos, not all by me</a> )</b>