<b>Finding your place on a map as quickly as you could tell someone where you are</b> (2016-02-04)<br /><br />I have a new preprint, "<a href="http://arxiv.org/abs/1602.00767">Distance-Sensitive Planar Point Location</a>" (arXiv:1602.00767, with Aronov, de Berg, Roeloffzen, and Speckmann) whose title and abstract may already be a bit intimidating. So I thought I would at least try to explain what it's about in simpler terms.<br /><br />Point location is something you do every time you orient yourself on a map. You know where you are, in the world, and you try to figure out what that means about where you are on the map. Let's model the map mathematically as a unit square, subdivided into polygons. You already know the coordinates of a point in the square, but now you want to figure out which polygon contains that point. To do so, you might make a sequence of comparisons, for instance checking which side of one of the polygon edges you're on. You want to do as few comparisons as possible before getting the answer.<br /><br />One limit on the number of comparisons you might need is entropy, a fancy word for the number of bits of information you need to send to someone else to tell them where you are. If you can solve point location in a certain number of comparisons, you can also say where you are by communicating the results of those comparisons. So, the entropy is no bigger than the time complexity of point location. We'd like to turn that around, by finding methods for point location that match any method you might come up with for communicating your location. For instance, if there are <i>n</i> different polygons on your map, you could name them by strings of log<sub>2</sub><i>n</i> bits, and communicate your location by saying one of those names. 
And it turns out that there are methods of point location that take only <i>O</i>(log <i>n</i>) comparisons, so we can match this naming scheme by a point location method.<br /><br />But there are other naming schemes that might do even better than that, and we'd like to also match them. For instance, you might be in some places more frequently than others. Most of the time, I'm in California, and when I'm outside California I'm more likely to be in states where I have relatives or that are the frequent sites of computer science conferences (say, Massachusetts) than others (say, Maine). So it would be more efficient on the average for me to tell people where I am using a naming scheme in which California has a very short name and South Dakota has a longer one. Several researchers, including Iacono, Arya, Malamatos, and Mount, have provided point location schemes that can match the complexity of any such naming scheme. But to achieve this, they require all of the polygons to have simple shapes like triangles, not complicated ones like the boundaries of some US states.<br /><br />In our paper, we use a different idea. Instead of considering the popularity of different regions, we consider the distance to the nearest boundary. If you're far from the boundary of a region, it should be easy to tell where you are, and if you're close to the boundary you'll have to look more carefully. If your distance to the nearest boundary is <i>d</i>, then you're inside a circle that doesn't cross any boundary and whose area is proportional to <i>d</i><sup>2</sup>. There can be only <i>O</i>(1/<i>d</i><sup>2</sup>) polygons that contain a circle that big, and you can communicate which one you're in by sending <i>O</i>(log(1/<i>d</i><sup>2</sup>)) bits of information (a name of one of those big polygons). 
A simple point location scheme based on a quadtree data structure turns out to match that bound.<br /><br />That's a warmup to the main results of the paper, which combine popularity with distance to the boundary. If you're well within a polygon, at the center of a circle that's inside the polygon and has area proportional to it, then the time to find out where you are is proportional only to the length of the name of the polygon (in your favorite naming scheme) regardless of how complicated its shape is. But if you're closer to the boundary than that, then the time to locate yourself will be slower, by a number of steps that depends inverse-logarithmically on how close you are.<br /><br />It would be nice to have a method that depends only on the lengths of the names of the polygons, without assuming they're all nicely shaped, and without depending on other quantities like the distance to the boundary. But that's not possible, as an example in our introduction details. For, if there are only two polygons on your map (say polygon 0 and polygon 1), then the lengths of both of their names are constant. 
If we want to match that by the complexity of a point location scheme, then we can only make a constant number of comparisons, and only solve location problems for which the shapes of the polygons are simple.<a name='cutid1-end'></a><b>Linkage</b> (2016-02-01)<br /><br /><ul><li><a href="https://construclonica.wordpress.com/2015/10/16/suelos-de-pentagonos-irregulares/">Cairo tiling Legos</a> (<a href="https://plus.google.com/100003628603413742554/posts/33cgLtaEQrJ">G+</a>)</li><br /><li><a href="http://elevr.com/spherical-video-editing-effects-with-mobius-transformations/">Zooming in spherical videos using Möbius transformations</a>, with bonus spherical Droste effect (<a href="https://plus.google.com/100003628603413742554/posts/BYS1c5geCsJ">G+</a>)</li><br /><li><a href="http://www.vox.com/2016/1/16/10777050/university-of-maryland-chocolate-milk">Corporate-sponsored chocolate-milk press-release-research at the University of Maryland</a> (<a href="https://plus.google.com/100003628603413742554/posts/K2cD6aMhrj6">G+</a>)</li><br /><li><a href="https://brooker.co.za/blog/2012/01/17/two-random.html">The power of two choices works very well for load balancing with stale data</a> (<a href="https://plus.google.com/100003628603413742554/posts/NPbfFzLy5r9">G+</a>)</li><br /><li><a href="http://artery.wbur.org/2014/11/13/harvard-art-museums-forbes-pigment-collection">Harvard's library of colors</a> (<a href="https://plus.google.com/100003628603413742554/posts/RHtenbQno2g">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=2DRJ2oUK4-E">3d color printing with layers of paper</a> (<a href="https://plus.google.com/100003628603413742554/posts/1ME9uv1xySH">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Trillium_theorem">What is the name of the trillium theorem?</a> (<a href="https://plus.google.com/100003628603413742554/posts/THk6Yi3WE2h">G+</a>)</li><br /><li><a 
href="http://www.latimes.com/entertainment/herocomplex/la-et-hc-usc-women-video-game-design-program-20160124-htmlstory.html">Women outnumber men in USC's game design program</a> (<a href="https://plus.google.com/100003628603413742554/posts/cJrb5k2y8GM">G+</a>)</li><br /><li><a href="http://www.nytimes.com/2016/01/26/business/marvin-minsky-pioneer-in-artificial-intelligence-dies-at-88.html?_r=1">New York Times obituary for Marvin Minsky</a> (<a href="https://plus.google.com/u/0/100003628603413742554/posts/PfvuJVsERdZ">G+</a>)</li><br /><li><a href="http://www.wired.com/2016/01/goodbye-applets-another-cruddy-piece-of-web-tech-is-finally-going-away/">Java applets are dead</a>. So what do we use as a replacement for Cinderella? (<a href="https://plus.google.com/100003628603413742554/posts/DtiG3dsEbGY">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=MO5LK1h2eAo">Geometric mean of ranks aggregation in sports competition scoring</a>: unfair because it disobeys <a href="https://en.wikipedia.org/wiki/Independence_of_irrelevant_alternatives">independence of irrelevant alternatives</a>? (<a href="https://plus.google.com/100003628603413742554/posts/MT1moDG9mCE">G+</a>)</li><br /><li><a href="http://www.nature.com/news/arxiv-rejections-lead-to-spat-over-screening-process-1.19267">Physics arXiv crank filter had two false positives</a>. Should we do the same in CS? 
(<a href="https://plus.google.com/100003628603413742554/posts/KiL9mJf2W6B">G+</a>)</li><br /><li><a href="https://plus.google.com/u/0/+DavidRoberts/posts/YtRbDXwMmE5">What started out as a boring spat between two mathematical physicists has turned into an interesting discussion of the role of blogs and other less-formal writings in scientific research and publishing</a> (<a href="https://plus.google.com/u/0/100003628603413742554/posts/bqTWAgXvdKN">G+</a>)</li></ul><b>How to make a bad pseudoline arrangement worse</b> (2016-01-28)<br /><br />The image below is a drawing of (part of) a pseudoline arrangement, a collection of curves in the plane that behave like lines in the sense that each curve partitions the plane into two unbounded regions and each two curves have exactly one point of intersection, where they cross. It's from my latest arXiv preprint, "<a href="http://arxiv.org/abs/1601.06865">Convex-Arc Drawings of Pseudolines</a>" (with Mereke van Garderen, Bettina Speckmann, and Torsten Ueckerdt, arXiv:1601.06865).<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/many-bends.png"></div><br /><br />The repeated yellow squares within the image are themselves smaller pseudoline arrangements, all the same as each other, with nine pseudolines each. I drew them more curvy than they need to be, but the special property of these arrangements is that they need to be at least a little bit curvy: they cannot be drawn with straight lines.<br /><br />The point of this image is that it repeats! The black parts outside the yellow squares show how to connect up the pseudolines so that the repetition still obeys the requirements of a pseudoline arrangement. The nine pseudolines in any one square must never go to the same square as each other again (otherwise they would cross twice, not allowed). 
To prevent this, one of the nine pseudolines in each square (the one that comes in the top and goes out the bottom) connects horizontally to the next square across, another connects one square down on the left and one square up on the right, another connects two squares down and up, etc. In this way, an <i>n</i> by <i>n</i> grid of yellow squares can be glued together using a total of only <i>O</i>(<i>n</i>) pseudolines. They won't all cross each other within the grid, but that can be fixed up by adding extra crossings outside the grid.<br /><br />If you start with a small constant-sized pseudoline arrangement requiring at least one bend, then after performing this expansion you get a big pseudoline arrangement requiring a quadratic number of bends. Here I define a bend to be a vertex of a polygonal chain, but the same construction generalizes to show that if you draw the pseudolines as smooth spline curves you need a quadratic number of knots, or if you draw them as piecewise-circular curves you need a quadratic number of arcs. It's not hard to draw any pseudoline arrangement with only this many bends (just use a <a href="http://11011110.livejournal.com/111308.html">wiring diagram</a>) but one of the results of the paper is that you can still draw the arrangement with quadratically many bends even if you require all of the pseudolines to be convex curves.<br /><br />Another part of the paper concerns "weak pseudoline arrangements", where not every pair of curves is required to cross (but they can still only intersect in a single crossing point). We say that a weak pseudoline arrangement is outerplanar if every crossing point belongs to an unbounded face. Outerplanar arrangements can't always be straightened in the plane; for instance rotating the parabola <i>y</i> = <i>x</i><sup>2</sup> + 1 by right angles around the origin forms a pseudoline arrangement with four pseudolines that cannot be made straight. 
However, we show that this is a special property of the Euclidean plane: in the hyperbolic plane, every outerplanar arrangement can be straightened. This leads to Euclidean drawings with only two bends per pseudoline, but maybe only one bend is possible, I'm not sure.<a name='cutid1-end'></a><b>Face incidence polytopes</b> (2016-01-21)<br /><br />In <a href="http://11011110.livejournal.com/323466.html">my previous post here</a> I remarked that the hypercubes are the face lattices of simplexes. I also drew the face lattice of a square, which you might have noticed forms a planar graph, the graph of another polyhedron (the <a href="https://en.wikipedia.org/wiki/Tetragonal_trapezohedron">tetragonal trapezohedron</a>, an octahedron with eight kite-shaped faces). This turns out not to be a coincidence. For <s>every</s> many (see comments) <i>d</i>-dimensional convex polytopes <i>P</i>, the covering graph of the face lattice of <i>P</i> is the graph of a (<i>d</i> + 1)-dimensional convex polytope, which for lack of a better name I'll call the face incidence polytope, or incidence polytope for short (although that shorter phrase does already have a different meaning).<br /><br />This polytope shares more structure with <i>P</i> than just its vertex and edge set. If <i>P</i> has an <i>i</i>-dimensional face that forms a subset of a <i>j</i>-dimensional face, then this incidence between the two faces of <i>P</i> corresponds to a (<i>j</i> − <i>i</i>)-dimensional face of the incidence polytope, and every face of the incidence polytope comes from an incidence of two faces of <i>P</i> in this way. 
In particular, the vertices of the incidence polytope correspond to the incidences between faces of <i>P</i> and themselves, and the edges of the incidence polytope correspond to incidences between faces that are one dimension apart, which form the edges of the covering graph of the face lattice.<br /><br /><b>Construction</b><br /><br />The construction of the incidence polytope is very simple. Given a polytope <i>P</i>, place <i>P</i> so that the origin of its space is interior to it, and let <i>Q</i> be its <a href="https://en.wikipedia.org/wiki/Dual_polyhedron">polar polytope</a>. Place <i>P</i> and <i>Q</i> on two parallel <i>d</i>-dimensional hyperplanes in (<i>d</i> + 1)-dimensional space. Translation or scaling of these two planes doesn't affect the combinatorial structure of the result, but it is convenient to choose how to place these two hyperplanes so that they are perpendicular to the line connecting their origins; neither polytope should be rotated. Next, construct the convex hull of the union of <i>P</i> and its parallel polar. Then, the incidence polytope is the polar polytope to this convex hull.<br /><br />For instance, if you start with a regular polygon with <i>n</i> sides, then its polar polygon is also regular with <i>n</i> sides, but rotated by an angle of pi/<i>n</i> with respect to the original polygon, so that the vertices of the two polygons interleave each other. Placing these two polygons on two parallel planes and constructing their hull gives an <a href="https://en.wikipedia.org/wiki/Antiprism">antiprism</a>, and then forming the polar polytope of this antiprism gives us a trapezohedron, like the one I drew last time.<br /><br /><b>Correctness</b><br /><br />Why does this work? (When it works at all; see comments.) I think the simplest explanation involves dropping a dimension, to a (<i>d</i> −1)-dimensional (hyper)surface in <i>d</i>-dimensional space, the image of the <a href="https://en.wikipedia.org/wiki/Gauss_map">Gauss map</a>. 
This map transforms points on the surface of <i>P</i> to their (set of) unit normal vectors. The set of all unit vectors is a sphere, and the image of the Gauss map is a subdivision of the sphere according to which point or points of <i>P</i> are most extreme in a given direction. For instance, the Gauss map for a cube subdivides the sphere into eight spherical triangles. Each triangle corresponds to one of the eight vertices of the cube. The unit vectors in this triangle represent directions in which that cube vertex is the most extreme: its dot product with the vector is bigger than the dot products involving the other seven vertices.<br /><br /><div align="center"><a href="https://commons.wikimedia.org/wiki/File:Spherical_square_bipyramid.png"><img src="http://www.ics.uci.edu/~eppstein/0xDE/Spherical-octahedron.png" border="0"></a></div><br /><br />The edges of this subdivision represent directions in which there are two equally extreme points (the endpoints of an edge of the cube) and the vertices of the subdivision represent directions in which there are four equally extreme points (the vertices of one of the cube's faces). In this way, the features of the Gauss map correspond to the features of the cube, but with the dimensions reversed. If instead we take the Gauss map of the polar of a cube (an octahedron), both the polar and Gauss map operations reverse dimensions and we get the cube itself back, but a spherical cube rather than a polyhedral one. 
And if we overlay the Gauss maps for the cube and its polar, we get something like this:<br /><br /><div align="center"><a href="https://commons.wikimedia.org/wiki/File:Spherical_deltoidal_icositetrahedron.png"><img src="http://www.ics.uci.edu/~eppstein/0xDE/Spherical-cube-oct-overlay.png" border="0"></a></div><br /><br />If we have a unit vector <i>v</i> in (<i>d</i> + 1)-dimensional space, then we can use this overlay diagram to determine the extreme points for the hull of the union of <i>P</i> and its offset polar. First, as a special case, if <i>v</i> is parallel to the line between the origins of the two <i>d</i>-dimensional subspaces, we know the extreme points: they are either all of <i>P</i> or all of its polar, depending on the sign of <i>v</i>. Otherwise, we can project <i>v</i> to a nonzero vector in the two subspaces, and look up which cell of the overlay diagram contains it; this will tell us the extreme points for <i>P</i> and its polar in the direction of <i>v</i>. The extreme points for the hull of <i>P</i> and its polar can only be the same as the extreme points for <i>P</i>, for its polar, or both, because a hyperplane perpendicular to <i>v</i> that touched any other points of <i>P</i> or its polar would cut through the hull and separate some extreme points from the rest of the polytope. So this means that the only faces (sets of extreme points) that we can obtain are the ones coming from features of the overlay diagram, which are exactly the ones we want.<br /><br /><b>Applications</b><br /><br />In my previous post we saw that the faces of the hypercube are in one-to-one correspondence with the sets of binary strings that can be described by placing wildcards in some positions of the string, and fixing other positions to 0 and 1. 
In order to make this correspondence work, we needed to include one more set of binary strings (the empty set) that could not be described by wildcards in this way, to correspond to the empty (−1)-dimensional face of the hypercube. The incidence polytope construction shows that these wildcard strings (together with Ø) are also in one-to-one correspondence with the vertices of a polytope, the incidence polytope of a hypercube. The edges of the incidence polytope represent the ways in which you could turn a 0 or 1 into a wildcard character or vice versa.<br /><br />Another polytope whose faces are meaningful is the <a href="https://en.wikipedia.org/wiki/Permutohedron">permutohedron</a>, the convex hull of the permutations of the vector (1, 2, 3, ..., <i>d</i>). It's a (<i>d</i> − 1)-dimensional polytope, because these points all lie in a hyperplane (their sum of coordinates is a triangular number). Its nonempty faces correspond to <a href="https://en.wikipedia.org/wiki/Weak_ordering">weak orderings</a>, orderings that, like the outcome of a horse race, are transitive (if x beats y and y beats z then x beats z) but may have equivalence classes of tied elements: the vertices of a face are the permutations that can be formed from the ordering by breaking all ties in a consistent way. Here for instance are the 13 weak orderings of a three-element set:<br /><br /><div align="center"><a href="https://commons.wikimedia.org/wiki/File:13-Weak-Orders.svg"><img src="http://www.ics.uci.edu/~eppstein/0xDE/13-Weak-Orders.png" border="0"></a></div><br /><br />The 3-element permutohedron is a regular hexagon. The outer six vertices of the diagram above correspond to the hexagon's vertices, the middle six vertices correspond to the hexagon's edges, and the central vertex corresponds to the whole hexagon. So we have here almost the entire face lattice of the hexagon, but we're missing one vertex, corresponding to the empty set. 
If we add back this one vertex (adjacent to the six outer vertices of the diagram) we get the graph of a polyhedron, the hexagonal trapezohedron, the incidence polyhedron of the hexagon. In the same way, if we take the incidence polytope of a permutohedron, we get a <i>d</i>-dimensional polytope whose vertices represent all the weak orderings of a <i>d</i>-element set (with one extra vertex representing the empty set of permutations, or an inconsistent weak ordering). The edges of the incidence polytope represent ways of breaking a tie by splitting one equivalence class into two smaller equivalence classes.<br /><br />In the same way, the nonempty faces of an <a href="https://en.wikipedia.org/wiki/Associahedron">associahedron</a> represent partial parenthesizations of a sequence, or partitions of a regular polygon by sets of diagonals into smaller convex polygons; its vertices represent complete parenthesizations or triangulations of a convex polygon. The vertices of the incidence polytope of the associahedron also represent partial parenthesizations, or partitions into convex polygons, except for one extra vertex representing the empty face of the associahedron. The edges of the incidence polytope represent ways of inserting one more pair of parentheses or one more diagonal.<a name='cutid1-end'></a><b>Bit tricks for wildcard strings and hypercube face lattices</b> (2016-01-20)<br /><br /><p>The binary strings of a given length, like the length-8 string "11011110" in the name of my blog, can be thought of as naming the vertices of a hypercube of the same dimension: each bit is one of the Cartesian coordinates of a vertex. In the same way,
binary strings with wildcard characters, like "11***1*0", can be thought of as naming the nonempty faces of the hypercube; the number of stars gives the dimension of the face, up to the string "********" which represents the whole cube. But there's one more face, the empty set Ø, which cannot be represented in the same way.</p>
<p>As with the collection of faces of any polyhedron, the faces of a hypercube can be partially ordered by inclusion, and this partial order forms a lattice: every family of faces has a unique meet (its greatest lower bound, the intersection of all the faces), and a unique join (its least upper bound, the unique minimal face that contains all of them).
For instance, the meet of two opposite sides of an ordinary 3-dimensional cube (for instance the two sides **0 and **1) is the empty set (that's why Ø needs to be a face) and the join of the same two opposite sides is the whole cube ***. This is the <a href="https://en.wikipedia.org/w/index.php?title=Face_lattice">face lattice</a> of the hypercube. (The hypercube itself can also be viewed as a face lattice of another kind of polyhedron, a simplex.)</p>
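<p>On the wildcard strings themselves, meet and join can be computed symbol by symbol; here is a short Python sketch of mine (not from the original post), using None to stand for the empty face Ø:</p>

```python
def wc_join(a, b):
    """Least upper bound: the smallest face containing both faces."""
    if a is None: return b
    if b is None: return a
    # Positions where the strings agree stay fixed; all others become wildcards.
    return ''.join(x if x == y else '*' for x, y in zip(a, b))

def wc_meet(a, b):
    """Greatest lower bound: the intersection of the two faces (None = empty face)."""
    if a is None or b is None:
        return None
    out = []
    for x, y in zip(a, b):
        if x == y or y == '*':
            out.append(x)
        elif x == '*':
            out.append(y)
        else:            # a 0 meets a 1: the faces are disjoint
            return None
    return ''.join(out)

print(wc_meet('**0', '**1'))   # None: opposite sides of the cube don't intersect
print(wc_join('**0', '**1'))   # '***': their join is the whole cube
```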
<p>Here's an example, for the face lattice of a square (a 2-dimensional cube). The inclusion ordering is shown by the edges, and each lattice element is labeled both by the part of the square it represents and by the corresponding wildcard string.</p>
<p align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/Square-face-lattice.png" border="0"></p>
<p>A 2012 NSDI paper by Kazemian, Varghese, and McKeown, "<a href="https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final8.pdf">Header Space Analysis: Static Checking For Networks</a>", uses some of these operations. It needed them to be fast, so it describes how to implement them using a constant number of bit-manipulation operations. (Actually, it omits the join operation, because what it really wants is the union, but that can't always be described as a single face.) Their basic idea is to expand each symbol of the wildcard face description into two bits: 0 ⇒ 01, 1 ⇒ 10, and * ⇒ 11. Although the empty set Ø could not be written as a wildcard string, it can be represented in this expanded encoding as the all-zeros bit string, the number 0. With this representation, we can perform subset testing, and meet and join operations, using a constant number of bit operations, as follows:</p>
<ul>
<li><p>One hypercube face A is a subset of another face B if and only if A & B == A.</p></li>
<li><p>The minimal face containing both A and B as subsets (their join) is the face A | B, the result of a bitwise Boolean or operation.</p></li>
<li><p>The intersection of faces A and B (their meet) is usually A & B, the result of a bitwise Boolean and. But when the intersection is empty, A & B will not necessarily be zero as it should: it may have 00 as the expansion of only a single position. To test for this possibility, let C = A & B and let M be a bitstring of the form ...01010101. Then if (~C >> 1) & ~C & M is nonzero, there is some position of the wildcard string where both bits of C are zero, and we should return zero as the result of the intersection operation. Otherwise, we can return A & B.</p></li>
</ul>
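<p>In Python (where integers are unbounded, the mask M also conveniently confines the test to the 2n bits of the word), these operations might be transcribed as follows; the function names are my own:</p>

```python
def encode(pattern):
    """Expand a wildcard string two bits per symbol: 0 -> 01, 1 -> 10, * -> 11."""
    bits = {'0': 0b01, '1': 0b10, '*': 0b11}
    word = 0
    for c in pattern:
        word = (word << 2) | bits[c]
    return word

def is_subset(a, b):
    """Face A is a subset of face B exactly when A & B == A."""
    return a & b == a

def join(a, b):
    """Minimal face containing both A and B: bitwise or."""
    return a | b

def meet(a, b, n):
    """Intersection of two faces of the n-cube; 0 encodes the empty face."""
    c = a & b
    m = int('01' * n, 2)            # mask ...01010101, one bit per symbol pair
    if (~c >> 1) & ~c & m != 0:     # some pair of C is 00: the faces are disjoint
        return 0
    return c

assert is_subset(encode('110'), encode('1*0'))      # the vertex 110 lies on the edge 1*0
assert join(encode('100'), encode('110')) == encode('1*0')
assert meet(encode('1*0'), encode('1*1'), 3) == 0   # disjoint faces meet in the empty face
```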
<p>One drawback to this representation is that it's a little tricky to test whether a given (non-wildcard) binary string is a match to a given wildcard string. To do so, we have to somehow expand each of its bits into two bits to put them into the correct position for the bitwise Boolean operations, and this bit permutation operation is not a primitive operation on many computer architectures (nor in many programming languages). So here's a second small trick to make this part easier without making the other parts any harder: simply rearrange the same bits into a more convenient ordering.</p>
<p>Instead of representing a wildcard string as a sequence of pairs of bits drawn from 01, 10, or 11, let's split those pairs out. We'll represent a wildcard string as a sequence of 2n bits in which the ith bit represents either whether the wildcard string can match a 0 in position i (if i < n) or whether it can match a 1 in position i − n (if i ≥ n). That is, we use the same bits as before but we transpose their positions. Then the subset and join operations are exactly the same as before. The intersection (meet) operation only needs a small adjustment and simplification: instead of the test (~C >> 1) & ~C & M we use (~C >> n) & ~C, with no masking needed. And mapping a binary number x to the representation of the wildcard string that matches only x becomes trivial: use the formula (x << n) | ~x, where ~x is the n-bit complement of x. To test whether x matches a wildcard string Y, compute the wildcard string X = (x << n) | ~x and then apply the subset test X & Y == X.</p>
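<p>Here is a Python sketch of the transposed representation (again my own transcription; the "no masking needed" claim holds for fixed-width 2n-bit words with logical shifts, but Python's unbounded integers need explicit masks):</p>

```python
def encode_t(pattern):
    """Low n bits: 'can match 0' flags; high n bits: 'can match 1' flags.
    Bit i corresponds to string position n-1-i, matching binary-numeral order."""
    n = len(pattern)
    lo = hi = 0
    for i, c in enumerate(reversed(pattern)):
        if c in '0*':
            lo |= 1 << i
        if c in '1*':
            hi |= 1 << i
    return (hi << n) | lo

def meet_t(a, b, n):
    """Intersection of two faces; 0 encodes the empty face."""
    c = a & b
    low = (1 << n) - 1               # in a fixed 2n-bit word this mask is unnecessary
    if (~c >> n) & ~c & low != 0:    # some position can match neither 0 nor 1
        return 0
    return c

def singleton(x, n):
    """The wildcard word matching only the n-bit number x: (x << n) | ~x."""
    return (x << n) | (~x & ((1 << n) - 1))   # mask takes the n-bit complement

def matches(x, y, n):
    """Does the binary number x match the wildcard word y? A subset test."""
    s = singleton(x, n)
    return s & y == s

assert matches(0b10, encode_t('1*'), 2)
assert not matches(0b00, encode_t('1*'), 2)
assert meet_t(encode_t('1*'), encode_t('0*'), 2) == 0   # disjoint faces
```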
<p>With this permuted representation, all the lattice operations as well as wildcard membership testing have simple constant-time implementations using only bitwise Boolean operations.</p><a name='cutid1-end'></a><b>Linkage for Wikipedia's 15th birthday</b> (2016-01-16)<br /><br /><ul><li><a href="http://idlewords.com/talks/website_obesity.htm">A great long-form new year's resolution to slim down your website</a> (<a href="http://www.metafilter.com/155972/The-dawn-of-the-Taft-Test">MF</a>; <a href="https://plus.google.com/100003628603413742554/posts/NsgXCP1nHZq">G+</a>)</li><br /><li><a href="https://meta.wikimedia.org/wiki/Research:Online_harassment_resource_guide">A guide to scholarly literature on online harassment and responses to it</a> (<a href="https://plus.google.com/100003628603413742554/posts/V2bdeh35ZER">G+</a>)</li><br /><li><a href="http://blogs.ams.org/visualinsight/2016/01/01/free-modular-lattice-on-3-generators/">The free modular lattice</a> and its resemblance to the <a href="https://en.wikipedia.org/wiki/Free_distributive_lattice">free distributive lattice</a> (<a href="https://plus.google.com/100003628603413742554/posts/cREnvinNtbq">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Stars_%28M._C._Escher%29">Escher's <i>Stars</i></a> (<a href="https://plus.google.com/100003628603413742554/posts/9gra8cv7oY7">G+</a>)</li><br /><li><a href="https://www.eatcs.org/index.php/nerode-prize">Nerode prize in multivariate algorithmics, call for nominations</a> (<a href="https://plus.google.com/u/0/100003628603413742554/posts/9LWufMRtsHG">G+</a>)</li><br /><li><a href="https://www.newscientist.com/article/dn28743-mathematicians-invent-new-way-to-slice-pizza-into-exotic-shapes/">Cutting pizza into congruent pieces that don't all meet at the center</a> (<a href="http://arxiv.org/abs/1512.03794">arXiv</a>; <a href="https://plus.google.com/100003628603413742554/posts/d5GhSXkgR5o">G+</a>)</li><br 
/><li><a href="https://www.youtube.com/watch?v=92WHN-pAFCs">Clear video explanation of the halting problem</a> (<a href="https://plus.google.com/100003628603413742554/posts/Pzs1RSEmu9r">G+</a>)</li><br /><li><a href="http://tex.stackexchange.com/questions/174375/challenge-images-from-hart-segerman">Challenge: draw these exploded polytopes automatically as nicely as Vi Hart did by hand</a> (<a href="https://plus.google.com/100003628603413742554/posts/BGSW3SdP7aT">G+</a>)</li><br /><li><a href="https://www.insidehighered.com/news/2016/01/11/new-analysis-offers-more-evidence-against-student-evaluations-teaching">Student evaluations better at measuring gender bias than teaching effectiveness</a> (<a href="http://www.metafilter.com/156195/More-evidence-that-student-evaluations-of-teaching-evaluate-gender-bias">MF</a>; <a href="https://plus.google.com/100003628603413742554/posts/G3MuBxGEkcp">G+</a>)</li><br /><li><a href="http://www.nytimes.com/2016/01/10/upshot/when-teamwork-doesnt-work-for-women.html">Co-authorship helps male but not female academics get ahead</a> (<a href="https://plus.google.com/100003628603413742554/posts/Vr5FJbrJkJC">G+</a>)</li><br /><li><a href="http://hyrodium.tumblr.com/post/137219704284/two-red-circles-which-are-tangent-are-transformed">Part of a series of nice visualizations of Möbius transformations</a> (<a href="https://plus.google.com/100003628603413742554/posts/NzhiW4TUXqt">G+</a>)</li><br /><li><a href="https://www.washingtonpost.com/news/wonk/wp/2016/01/13/this-is-actually-what-america-would-look-like-without-gerrymandering/">Gerrymandering explained</a> and some computational experiments in using k-medians to avoid it (<a href="https://plus.google.com/100003628603413742554/posts/jQxrHAi9wAw">G+</a>)</li><br /><li><a href="http://fivethirtyeight.com/features/the-most-edited-wikipedia-pages-over-the-last-15-years/">The most-edited Wikipedia pages over the last 15 years</a> (<a 
href="https://plus.google.com/100003628603413742554/posts/h5XhnDJEX3K">G+</a>)</li></ul><b>Report from SODA, ALENEX, and ANALCO</b> (2016-01-13)<br /><br />The <a href="http://www.siam.org/meetings/da16/">27th ACM-SIAM Symposium on Discrete Algorithms</a>, in Arlington, Virginia, just finished, along with its satellite workshops ALENEX (experimental algorithmics, one of the most heavily rejected topics from SODA) and ANALCO (analytic combinatorics). With four parallel sessions going at most times, there's no way to take in everything, so here are my impressions of the small fraction I saw.<br /><br />The first talk I saw, Sunday morning by Rasmus Pagh, concerned <a href="http://arxiv.org/abs/1507.03225">locality-sensitive hashing</a>, a technique for performing approximate nearest-neighbor searches in high dimensions with subquadratic preprocessing and sublinear query time (with exponents depending on approximation quality). We can do this in linear space, or (now with Pagh's paper) without false negatives: every point within a query radius of the given point should be reported, along with some other farther-away points. Pagh asked as an open question whether we can get similar bounds for simultaneously achieving both linear space and no false negatives.<br /><br />In the early afternoon I switched between SODA and ALENEX. Aditya Bhaskara spoke about <a href="http://arxiv.org/abs/1510.07768">distributed peer-to-peer network reorganization</a>: making local changes to network connectivity to improve communications. Random graphs are good in many ways, and one way of obtaining one (subject to fixed vertex degrees) is to replace randomly-chosen pairs of disjoint edges by different edges on the same endpoints, but that doesn't work in a distributed setting because it can disconnect your graph. Instead, it works better to choose a random three-edge path and reconnect its two endpoints to the two middle vertices. 
Both kinds of random change will eventually produce a random graph, but only slowly (in a number of steps that is polynomial but of very high degree). Bhaskara showed that they instead reach an expander (almost as good for most purposes) much more quickly.<br /><br />Back at ALENEX, Markus Blumenstock shaved a log off an <a href="http://epubs.siam.org/doi/abs/10.1137/1.9781611974317.10">algorithm for pseudoarboricity</a> (the maximum average degree of a subgraph) by using an approximation algorithm in place of an exact algorithm in the earlier steps of a binary search. He asked for other situations where the same trick works.<br /><br />Later in the afternoon, Benjamin Raichel spoke on his work with Avrim Blum and Sariel Har-Peled on <a href="http://arxiv.org/abs/1507.02574">approximating high-dimensional convex hulls</a> (in the sense that every input point should be at small distance from the approximation) as the hull of a small sample of points, by repeatedly adding the point farthest from the current hull to the sample. Despite the simplicity of the algorithm, the analysis is complicated and shows that both the number of sample points and the best distance you could achieve with a sample of that size are approximated to within an ε<sup>−2/3</sup> factor, independent of dimension.<br /><br />One of the good things about conferences like this is meeting up with people you otherwise wouldn't likely encounter. For instance, while grabbing coffee at a nearby bagel shop on Monday morning I ran into Brent Heeringa, whom I had previously met on the boat ride excursion at WADS 2011. Long ago I wrote a paper on <a href="https://en.wikipedia.org/wiki/Synchronizing_word">reset sequences (synchronizing words)</a> of finite automata, which included an algorithm for finding such a sequence by repeatedly appending the shortest sequence that merges two states. 
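That greedy merging idea is short enough to sketch. The following is a hypothetical minimal version (not tuned for efficiency), representing the automaton as a dict from states to per-letter transition tables; a breadth-first search over pairs of states finds a shortest merging word, and repeating this shrinks the set of possible current states to a single state:

```python
from collections import deque

def apply_word(delta, state, word):
    """Follow the transitions of word starting from state."""
    for letter in word:
        state = delta[state][letter]
    return state

def shortest_merging_word(delta, p, q):
    """BFS over unordered pairs of states for a shortest word sending
    p and q to a common state; returns None if no such word exists."""
    letters = list(next(iter(delta.values())))
    start = frozenset((p, q))
    word_to = {start: ""}
    queue = deque([start])
    while queue:
        pair = queue.popleft()
        if len(pair) == 1:                    # the two states have merged
            return word_to[pair]
        for letter in letters:
            nxt = frozenset(delta[s][letter] for s in pair)
            if nxt not in word_to:
                word_to[nxt] = word_to[pair] + letter
                queue.append(nxt)
    return None

def reset_word(delta):
    """Greedy synchronization: repeatedly append a shortest word that
    merges some pair of the remaining reachable states."""
    states = set(delta)
    word = ""
    while len(states) > 1:
        p, q = sorted(states)[:2]
        w = shortest_merging_word(delta, p, q)
        if w is None:
            return None                       # automaton is not synchronizing
        word += w
        states = {apply_word(delta, s, w) for s in states}
    return word
```

On the four-state Černý automaton (letter a rotates the states cyclically, letter b sends state 0 to 1 and fixes the rest), this sketch happens to find a synchronizing word of length 9 = (4 − 1)<sup>2</sup>.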
It's not hard to see (although it hadn't occurred to me to mention) that the result is an <i>n</i>-approximation, and Brent pointed me to recent work by Gawrychowski and Straszak at MFCS 2015 <a href="http://arxiv.org/abs/1408.5248">using PCP theory to prove that no significantly-sublinear approximation is possible</a>. This turns out to be an obstacle to proving the Černý conjecture on the existence of quadratic-length reset sequences, since the same linear factor comes up as the ratio between the lower and upper bounds on reset sequence length.<br /><br />We have long known that the power of two choices can be used to prevent conflicts in talk scheduling: after scheduling all the talks logically as usual, schedule an equal number of parallel sessions in which every speaker presents their talk a second time, in a random order. This would act much like a cuckoo hash table and allow anyone who doesn't have too long a list of must-see talks to find a schedule to see everything they want. Ironically, one of this year's scheduling victims was Mr. Power of Two Choices himself, Michael Mitzenmacher, speaking about two data structural applications of this idea in ANALCO directly opposite the data structures session of SODA, where John Iacono and Stefan Langerman presented an entertaining talk on a problem related to the dynamic optimality conjecture for binary search trees. One property you'd like a binary search tree to have is finger searching: if you search for two items near each other in the tree, the cost should be the log of their distance. Another is the working set property: if you're accessing only a subset of the items, the cost per access should be the log of the size of the subset. A third is static optimality: you should do as well as top-down searches in any static tree. 
If you combine working sets and fingers, you get a property equivalent to static finger trees: you should do as well as searches starting from the previously accessed item in any static tree. On a long train ride, Iacono and Langerman tried unsuccessfully to understand Cole's proof of the finger searching property for splay trees; instead, they ended up proving the <a href="http://epubs.siam.org/doi/10.1137/1.9781611974331.ch49">static finger property for greedy ass trees</a>, a different data structure defined using their previous work on the <a href="https://en.wikipedia.org/wiki/Geometry_of_binary_search_trees">geometry of binary search trees</a>.<br /><br />In his invited talk Monday, Sariel Har-Peled surveyed the theory and algorithmics of <a href="https://en.wikipedia.org/wiki/Bounded_expansion">graphs of polynomial expansion</a>, including his work with Quanrud at ESA 2015 on using this theory to get <a href="http://sarielhp.org/papers/14/low_density/">approximate independent sets for intersection graphs of fat objects</a>. There's a trick there: these graphs may be dense, but the ones that you get by overlaying two independent subsets of fat objects have polynomial expansion and therefore a good separator theorem. This property turns out to be enough to make a local optimization algorithm find good independent sets: if your independent set is not close enough to the optimum, you can improve it by exchanging a small piece of it surrounded by a separator for its overlay with the optimum. <a href="http://sarielhp.org/research/talks/16/01_soda/bp.pdf">Sariel has already put his slides online</a> if you want to learn more.<br /><br />My own talk Monday was about <a href="http://arxiv.org/abs/1510.03152">a class of 4-polytopes whose graphs can be recognized in polynomial time</a>, generalizing the 3-dimensional polyhedra formed from the Halin graphs. 
We don't know whether recognizing the graphs of all 4-polytopes is polynomial, although I suspect not (it should be complete for the <a href="https://en.wikipedia.org/wiki/Existential_theory_of_the_reals">existential theory of the reals</a>). My work also connects this problem to another problem with open complexity, <a href="https://en.wikipedia.org/wiki/Clustered_planarity">clustered planarity</a>. And the graphs from my paper turn out to have polynomial expansion. Following Sariel's example, <a href="http://www.ics.uci.edu/~eppstein/pubs/Epp-SODA-16.pdf">here are my talk slides</a>.<br /><br />Later in the afternoon I wanted to see the talk for a paper by Igor Pak and Scott Garrabrant on counting permutation patterns, but nobody showed up to deliver it. Instead I saw two talks about spanners, sparse graphs that approximate the distances of denser graphs (or of finite metric spaces). Arnold Filtser proved that <a href="http://epubs.siam.org/doi/10.1137/1.9781611974331.ch62">any metric space has a spanning tree whose weight is proportional to the minimum spanning tree and whose average distortion (the ratio of tree distance to ambient distance, averaged over all pairs of points) is constant</a>. And, in one of two best-paper winners (I didn't see the other one, by student Mohsen Ghaffari, on <a href="http://arxiv.org/abs/1506.05093">distributed independent sets</a>) Shiri Chechik showed with Christian Wulff-Nilsen that <a href="http://epubs.siam.org/doi/10.1137/1.9781611974331.ch63">every graph has a spanner (with constant distortion 2<i>k</i> for all pairs of points) whose weight and number of edges are both within a factor of <i>n</i><sup>1/<i>k</i></sup> of the minimum spanning tree</a>. 
This is optimal modulo a conjecture of Erdős, according to which graphs of girth 2<i>k</i> + 1 can have as many as Ω(<i>n</i><sup>1 + 1/<i>k</i></sup>) edges; deleting any edge from such a graph would give high distortion for the endpoints of the deleted edge.<br /><br />The final session on Monday was on computational geometry. Kyle Fox showed <a href="http://epubs.siam.org/doi/10.1137/1.9781611974331.ch82">how to find paths approximating the minimum of the integral of inverse local feature size</a> (a measure of how well the path avoids obstacles), complementing a paper by Nayyeri from WADS on approximating the integral of local feature size (a measure of how well the path stays near to a sampled submanifold). Sariel spoke again, on <a href="http://sarielhp.org/p/15/k_level/k_level.pdf">approximate levels in arrangements</a>; I learned from his talk that <a href="https://plus.google.com/101113174615409489753/posts/G38XNLU1G1S">lizards are coming</a>, that the convex hulls of the tiles in a tiling form pseudocircles, and that one can triangulate these pseudocircles to get a tiling again by simpler-shaped pieces within the original ones.<br /><br />Monday evening was the business meeting. Next year will be in Barcelona (according to the presenter at the meeting, home of the world's worst soccer team), but at a hotel 20 minutes from downtown. Phil Klein gets the unenviable task of chairing the program committee.<br /><br />Timothy Chan gave two talks on derandomization: one Monday about low-dimensional linear programming, and a second Tuesday about <a href="http://web.stanford.edu/~rrwill/derand-apsp-ov-cr.pdf">shaving super-polylogarithmic factors from all pairs shortest paths</a> (with Ryan Williams). 
The method also applies to testing whether pairs of vectors are orthogonal, which is almost the same as whether pairs of sets are disjoint, which is almost the same as whether pairs of graph vertices are not in any triangle, problems for which Tsvi Kopelowitz (with Porat and Pettie) gave <a href="http://arxiv.org/abs/1407.6756">3SUM-based lower bounds</a> in the same session. For instance, one can list all triangles in a <i>d</i>-degenerate graph in time <i>O</i>(<i>md</i>) (maybe with a log shaved?) and this bound is tight even for the graphs that have significantly fewer triangles than this.<br /><br />Jakub Łącki closed out the morning's contributed talks with one on <a href="http://arxiv.org/abs/1507.02426">graphs whose degree distribution obeys a power law</a>. He pointed out that for a random graph with this property, one can also use the power law to bound the number of higher-degree neighbors of each vertex (to a function significantly smaller than its degree). Random graphs are unrealistic but real-world graphs seem to obey similar local bounds, and this can be used to develop efficient algorithms. For instance, to find the triangles in such a graph, one can orient every edge from lower to higher degree (giving a DAG) and then search for pairs of outgoing edges.<br /><br />My favorite talk of Tuesday afternoon was the one by Veit Wiechert, on his work with Gwen Joret and Piotr Micek on <a href="http://arxiv.org/abs/1507.01120">sparsity properties of the covering graphs of partially ordered sets</a>. If the covering graph is series-parallel (or simpler) then the underlying partial order must have bounded order-dimension, but there are orders of unbounded dimension whose covering graphs are planar graphs of bounded treewidth. 
Nevertheless one can go much farther in this connection between sparsity and dimension by adding a height constraint: the main result of the paper is that for covering graphs of bounded expansion, the partial orders of bounded height have bounded dimension. One can't go farther, because there are bounded-height orders whose covering graphs are nowhere dense but whose dimension is unbounded. This result implies, for instance, that in <a href="http://arxiv.org/abs/1504.04073">my WADS paper on parametric closure problems</a>, the polynomial bound for incidence posets of bounded-treewidth graphs can immediately be extended to incidence posets of bounded-expansion graphs.<br /><br />Of course, there's much more that I haven't written about (even among the stuff I saw), and I'd be interested in seeing similar reports that any other attendees might write. Please link them in the comments.<a name='cutid1-end'></a><br /><br />2015 in algorithm preprints (2016-01-06)<br /><br /><a href="http://11011110.livejournal.com/302869.html">As in past years</a>, here's a roundup of what's been happening in the data structures and algorithms (cs.DS) section of arXiv.org. Over the last year, there were 1340 new algorithms preprints on arXiv; that's up about 13% from 1182 in 2014. The explosive growth rate of this section had been gradually diminishing over the years, but this year bucked the trend: 13% is higher than 2014's 10% growth rate. <a href="http://arxiv.org/help/stats/2015_by_area/index">Statistics on arXiv submissions more generally</a> are also available.<br /><br />I picked out the following ten papers (listed chronologically) as being particularly interesting to me. I'm not going to claim that they're in any objective sense the best of the year. 
Nevertheless I hope they're interesting to others as well.<ul><li><a href="http://arxiv.org/abs/1502.04588">"A (1+ε)-embedding of low highway dimension graphs into bounded treewidth graphs", arXiv:1502.04588</a>, by Feldmann, Fung, Könemann, and Post, ICALP 2015. The highway dimension of a graph models a natural property of road networks according to which, if you go far enough from some starting point, there are only a few different ways that your initial path can go. For instance, I used to live in Santa Barbara, where there are only three ways out of town: east or west along the coast on Highway 101, or north over the mountains on San Marcos Pass Road. (One rainy year, all three were blocked simultaneously.) The same phenomenon happens both on smaller and larger scales. This paper connects this theory with deep results in metric embedding and graph structure theory, allowing many more graph problems to be approximated efficiently on low-highway-dimension graphs. It appears closely related to Feldmann's second ICALP 2015 paper which used metric embeddings as part of approximation algorithms for graphs of low highway dimension.</li><br /><li><a href="http://arxiv.org/abs/1502.05204">"Clustered integer 3SUM via additive combinatorics", arXiv:1502.05204</a>, by Chan and Lewenstein, STOC 2015. The 3SUM problem is the following: you're given a collection of numbers and you want to test whether some triple of them sums to zero. A naive algorithm would take cubic time but with a little care it can be solved in quadratic time instead; for a long time that was believed to be optimal, and this assumption was used to show lower bounds on many other algorithmic problems. The quadratic time bound was broken recently by Williams at STOC'14, but the improvement was in a lower-order term, not the quadratic main exponent of the time bound. 
This paper gives a bigger break, with exponents bounded below two, although only for some special cases: small integer values, or subsets of a preprocessed set of integers.</li><br /><li><a href="http://arxiv.org/abs/1503.03465">"Faster 64-bit universal hashing using carry-less multiplications", arXiv:1503.03465</a>, by Lemire and Kaser. <a href="https://plus.google.com/100003628603413742554/posts/1X2mUDNsEN1">I already posted briefly about this</a>, but the basic idea is that modern CPUs now include instructions for doing arithmetic in GF2[<i>x</i>] (the ring of polynomials over the binary field), and this can be used to make a high-quality hash function very fast.</li><br /><li><a href="http://arxiv.org/abs/1504.01431">"If the current clique algorithms are optimal, so is Valiant's parser", arXiv:1504.01431</a>, by Abboud, Backurs, and Williams, FOCS 2015. Clique-finding and context-free grammar parsing are both among the problems that can be sped up using fast matrix multiplication: a <i>k</i>-clique in an <i>n</i>-vertex graph can be found in time <i>O</i>(<i>n</i><sup><i>k</i>ω/3</sup>), and a grammar of size <i>g</i> with an <i>n</i>-symbol input string can be parsed in time <i>O</i>(<i>n</i><sup>ω</sup>), where ω is the exponent of fast matrix multiplication. Now this paper shows that the two problems are related: any additional speedup of grammar parsing would also speed up clique-finding, even for grammars of constant size. This is a big improvement on previous lower bounds for context-free parsing which were conditional and required non-constant grammars.</li><br /><li><a href="http://arxiv.org/abs/1507.02318">"A faster pseudopolynomial time algorithm for subset sum", arXiv:1507.02318</a>, by Koiliaris and Xu. The textbook algorithm for subset sum takes time <i>O</i>(<i>nK</i>) where <i>n</i> is the number of input items (assumed to be positive integers) and <i>K</i> is the sum to be achieved. 
This paper reduces the dependence on <i>n</i> to the square root.</li><br /><li><a href="http://arxiv.org/abs/1507.03738">"Tight bounds for subgraph isomorphism and graph homomorphism", arXiv:1507.03738</a>, by Fomin, Golovnev, Kulikov, and Mihajlin, to appear next week at SODA 2016. Subgraph isomorphism is the problem of finding one graph as a subgraph of another. It includes as special cases NP-hard problems such as finding cliques or Hamiltonian cycles, but when the subgraph to be found has a small number <i>k</i> of vertices, it can be solved in time <i>n</i><sup><i>O</i>(<i>k</i>)</sup>. This paper proves a lower bound of the same form, conditional on the <a href="https://en.wikipedia.org/wiki/Exponential_time_hypothesis">exponential time hypothesis</a>, the assumption that there is no subexponential algorithm for Boolean satisfiability.</li><br /><li><a href="http://arxiv.org/abs/1511.00700">"The 4/3 additive spanner exponent is tight", arXiv:1511.00700</a>, by Abboud and Bodwin. This is part of a line of research on approximating distances in arbitrary unweighted graphs by sparse graphs, so accurately that the error is only additive rather than multiplicative. It seemed that there was a tradeoff between the number of edges in the sparse graph and the accuracy of approximation: you could decrease the exponent of the number of edges (relative to the number of vertices) at the expense of a bigger additive error. But the best result of this type known was that with error at most 6 you could get a spanner with only <i>O</i>(<i>n</i><sup>4/3</sup>) edges. Now it seems that the tradeoff stops here: fewer edges will necessarily cause an additive error that grows as a power of <i>n</i> rather than staying constant.</li><br /><li><a href="http://arxiv.org/abs/1511.07070">"Which regular expression patterns are hard to match?", arXiv:1511.07070</a>, by Backurs and Indyk. 
The obvious answer is "none of them" because regular-expression matching has a low polynomial time bound. But it's quadratic (the product of the expression length and the input length), while some special cases such as matching a collection of dictionary words can be solved much more quickly (e.g. by building a DFA and then running it). This paper proves a strong dichotomy (assuming the exponential time hypothesis) between expressions that require near-quadratic time and expressions that take only near-linear time.</li><br /><li><a href="http://arxiv.org/abs/1511.02612">"Optimal dynamic strings", arXiv:1511.02612</a>, by Gawrychowski, Karczmarz, Kociumaka, Łącki, and Sankowski. The usual ways of representing strings take time linear in the string length to form new strings by concatenating or splitting the existing ones, and linear time to compare two strings or find the first position at which they differ. This paper gives a data structure for the same operations that takes logarithmic time per update and constant time per query.</li><br /><li><a href="http://arxiv.org/abs/1512.03547">"Graph isomorphism in quasipolynomial time", arXiv:1512.03547</a>, by Babai. So much has already been written about this one. Need I say more? But there is more: I've seen some statistics on most-downloaded papers on the arXiv (too rough to link to), and this is the cs.DS representative on the list, with tens of thousands of accesses. 
So obviously it's getting read far beyond its own specialized research community.</li></ul><a name='cutid1-end'></a><br /><br />Linkage for the end of the year (2016-01-01)<br /><br /><ul><li><a href="http://www.theage.com.au/comment/why-wikipedia-at-15-is-a-beautiful-exercise-in-scholarly-excellence-20151209-glj79f.html">Why Wikipedia at 15 is a beautiful exercise in scholarly excellence</a> (<a href="https://plus.google.com/100003628603413742554/posts/CeGNjP62EkY">G+</a>)</li><br /><li><a href="http://www.csun.edu/gd2015/">Graph Drawing 2015 proceedings open access through Jan. 18</a> (follow link from conference page; <a href="https://plus.google.com/100003628603413742554/posts/38BHprrWCqP">G+</a>)</li><br /><li><a href="http://bit.ly/1O0PXRc">How circle packings bring together some important ideas in geometry, topology, and analysis</a> (<a href="https://plus.google.com/100003628603413742554/posts/Ntc64jLdC9B">G+</a>)</li><br /><li><a href="http://www.theguardian.com/science/2015/dec/14/many-women-in-stem-fields-expect-to-quit-within-five-years-survey-finds">Many women in STEM fields expect to quit within five years, survey finds</a> (<a href="https://plus.google.com/100003628603413742554/posts/NzP4FuzVfs7">G+</a>)</li><br /><li><a href="https://pbelmans.wordpress.com/2015/12/21/on-towards-mathjax-3-0/">The MathJax project moves away from thinking of itself as a steppingstone to MathML</a> (<a href="https://plus.google.com/100003628603413742554/posts/disnZBsNEKR">G+</a>)</li><br /><li><a href="http://scholarlyoa.com/2015/12/17/instead-of-a-peer-review-reviewer-sends-warning-to-authors/">Continued issues with peer review at MDPI</a> (<a href="https://plus.google.com/100003628603413742554/posts/JYQuzosjqsP">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Two_ears_theorem">Two ears theorem</a> (<a href="https://plus.google.com/100003628603413742554/posts/6KQK3asxPsn">G+</a>)</li><br /><li><a 
href="https://www.youtube.com/watch?v=QAja2jp1VjE">Using dot patterns to visualize Euclidean transformations</a> (<a href="https://plus.google.com/100003628603413742554/posts/RujpJHmVSdS">G+</a>)</li><br /><li><a href="http://retractionwatch.com/2015/12/23/korean-prosecutors-seek-jail-time-for-professors-in-massive-plagiarism-scheme/">Academic plagiarism leads to criminal charges in South Korea</a> (<a href="https://plus.google.com/100003628603413742554/posts/BKrHNXmTRWz">G+</a>)</li><br /><li><a href="http://blogs.scientificamerican.com/roots-of-unity/contrasts-in-number-theory">Evelyn Lamb compares Piper Harron's and Shinichi Mochizuki's approaches to telling the world about their mathematics</a> (<a href="https://plus.google.com/100003628603413742554/posts/SMqXRpC839x">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Binary_logarithm">Binary logarithm</a>, newly listed as a Wikipedia Good Article (<a href="https://plus.google.com/100003628603413742554/posts/7bHRTJDEfQG">G+</a>)</li><br /><li><a href="http://bit.ly/1NXd9QR">Mathematical artwork visualizing conjugacy classes of subgroups of the icosahedral group</a>, with New Year's wishes from the AMS (<a href="https://plus.google.com/100003628603413742554/posts/8cMb5UbP8CU">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Secret_sharing">Secret sharing</a>, and an attempt to make more sense out of a minor technical subplot of <i>Star Wars</i> (<a href="https://plus.google.com/100003628603413742554/posts/11tL6jWf2NU">G+</a>)</li></ul><br /><br />Discrepancy of uniform hypergraphs (2015-12-22)<br /><br />A hypergraph is just another way of talking about a family of sets: one thinks of the elements of the sets as vertices and the sets themselves as being like edges of a graph. Except that, unlike edges, sets in a family of sets can have more than two vertices, so we call them hyperedges rather than edges. 
An <i>r</i>-uniform hypergraph is one in which all the hyperedges have the same number of vertices as each other; for instance, a 2-uniform hypergraph is just an ordinary graph.<br /><br />If the vertices of a hypergraph are given two colors (black and white), the <a href="https://en.wikipedia.org/wiki/Discrepancy_of_hypergraphs">discrepancy</a> of the coloring measures how evenly each hyperedge is colored. In an ordinary graph, a proper 2-coloring has equal numbers of each color in each edge, and we want to get as close to that as we can in a hypergraph as well. So the discrepancy of a hyperedge is defined to be the absolute value of the difference between the numbers of black and white vertices, the discrepancy of a coloring is defined as the maximum of the discrepancies of any of the edges, and the discrepancy of a hypergraph is defined as the minimum discrepancy of any of its 2-colorings. For instance, the 4-uniform hypergraph shown as a Venn diagram below has discrepancy 2: it turns out not to be possible to 2-color the vertices so that all four hyperedges are evenly balanced between black and white. For if it were, the three lower sets (red, blue, and yellow, all sharing the bottom three vertices but each with a different top vertex) would force the three top vertices to have the same color as each other, causing the upper green set to be unbalanced.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/4-unbalanced.png"></div><br /><br />How many hyperedges must an <i>r</i>-uniform hypergraph have in order to force it to be unbalanced? This question generalizes the question of how many edges are needed to make a graph non-bipartite, for which the answer is three: a triangle is non-bipartite, and anything with fewer edges isn't. For <i>r</i> = 4, the example above (and some messy case analysis for fewer sets) shows that the answer is four. 
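These small cases can be checked mechanically: discrepancy as defined above is a minimum over all 2-colorings of a maximum over hyperedges, so for small hypergraphs a brute-force computation suffices. The Python sketch below (my own illustration, with vertices numbered 0 to <i>n</i> − 1 and hyperedges given as tuples) confirms, for instance, that a triangle, viewed as a 2-uniform hypergraph, has discrepancy 2, while any two of its three edges alone can be colored with discrepancy 0:

```python
from itertools import product

def discrepancy(num_vertices, hyperedges):
    """Minimum over all 2-colorings (encoded as +1/-1 per vertex) of the
    maximum imbalance |#black - #white| over the given hyperedges."""
    return min(
        max(abs(sum(coloring[v] for v in edge)) for edge in hyperedges)
        for coloring in product((-1, 1), repeat=num_vertices)
    )
```

Here `discrepancy(3, [(0, 1), (1, 2), (0, 2)])` returns 2 and `discrepancy(3, [(0, 1), (1, 2)])` returns 0, matching the fact that three edges are needed to make a graph non-bipartite. The exhaustive search takes 2<sup><i>n</i></sup> time, so it is only useful for verifying very small examples.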
But for <i>r</i> = 6, it's three again: a 6-uniform hypergraph formed by tripling each vertex of a triangle (without changing the edge intersection pattern) has discrepancy two, because two of the sets of three equivalent vertices must have the same majority color.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/6-unbalanced.png"></div><br /><br />Generalizations of these examples show that three sets can force a nonzero discrepancy whenever <i>r</i> is 2 mod 4, and four can force a nonzero discrepancy whenever <i>r</i> is 4 or 8 mod 12, so the remaining cases are the ones where <i>r</i> is divisible by 12. On the other hand, the best upper bound that I know how to prove for arbitrary <i>r</i> is logarithmic in <i>r</i>. I don't even know whether the number of sets needed to force a nonzero discrepancy is bounded for all <i>r</i>, or whether it can grow without bound as a function of <i>r</i>.<br /><br />One can ask the same sort of question when <i>r</i> is an odd number, of course. To prevent the problem from becoming trivial, we can define a hypergraph to be balanced when its discrepancy is at most one, and ask for the minimum number of hyperedges in an unbalanced <i>r</i>-uniform hypergraph. For this variation of the problem, the number of hyperedges needed to force the hypergraph to be unbalanced has an even weaker upper bound, of the form <i>r</i> + <i>O</i>(log <i>r</i>).<br /><br />This problem comes up in my latest preprint with Dan Hirschberg, "<a href="http://arxiv.org/abs/1512.06488">From Discrepancy to Majority</a>", arXiv:1512.06488, to appear at LATIN 2016. We use the odd-<i>r</i> case as a preliminary step in an algorithm that takes as input a two-colored set of elements, and is allowed to access the coloring only by means of an oracle that returns the discrepancy of an <i>r</i>-subset of the elements. 
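To make the oracle model concrete, here is a toy Python illustration (my own, not the algorithm from the paper) for the easiest case, <i>r</i> = 2: the oracle answers 2 on a same-colored pair and 0 on a differently-colored pair, so <i>n</i> − 1 pair queries against a fixed element recover the two color classes, and the larger class is the majority.

```python
def majority_via_pair_queries(hidden_colors):
    """Recover the majority color class of a hidden 2-coloring using only
    discrepancy queries on 2-subsets; returns the indices of the larger
    class (assumes the two classes have different sizes)."""
    def oracle(i, j):
        # discrepancy of the 2-subset {i, j}: 2 if same color, else 0
        return 2 if hidden_colors[i] == hidden_colors[j] else 0

    n = len(hidden_colors)
    same = {0} | {i for i in range(1, n) if oracle(0, i) == 2}
    other = set(range(n)) - same
    return same if len(same) > len(other) else other
```

For example, with hidden coloring `[1, 1, 0, 1, 0]` this returns `{0, 1, 3}` after four queries. With larger <i>r</i> each query touches more elements at once, which is what allows asymptotically fewer queries.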
This problem was introduced in a paper by De Marco and Kranakis, motivated by fault diagnosis of distributed systems, and we improve their bounds on the number of queries needed by roughly a factor of <i>r</i>/2. In order to get any useful information at all from the oracle, we need to make it give an answer different from "this set has the minimum possible discrepancy", and to do that we need to set up a system of sets where at least one of them does not have the minimum possible discrepancy.<br /><br />But I think the problem of finding minimal unbalanced uniform hypergraphs is interesting on its own, even aside from this application. The bounds we give on it in our paper seem to be far from tight, in part because they give lower-order contributions to the overall query bound for our algorithm so we didn't have a lot of motivation for tightening them. So there should be scope for plenty more exploration of this problem.<a name='cutid1-end'></a><br /><br />Planar split thickness (2015-12-19)<br /><br />I have a few new additions to my unmet-coauthor list in my latest arXiv preprint, "<a href="http://arxiv.org/abs/1512.04839">On the Planar Split Thickness of Graphs</a>", arXiv:1512.04839, to appear at <a href="http://latin2016.natix.org/">LATIN 2016</a>.<br /><br />The paper is on the following kind of graph drawing, where we draw each vertex multiple times (with labels so you can see which vertices are repeated where). To interpret the drawing, take the union of the adjacencies of the copies. For instance, the vertex d, repeated at the top of the drawing and the lower center, is adjacent to 6, 7, 2, 3, 0 (from the top), and also to 1, 4, 5, 8, 9 (from the bottom). 
If you check more carefully you'll see that each letter-digit pair is adjacent, so this is a complete bipartite graph.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/2K6,10.png"></div><br /><br />This drawing style is related to graph minors (where the repeated copies of each vertex need to form a connected subgraph) and graph thickness (where only one copy of each vertex is allowed per connected component of the drawing) but without the constraints of either, allowing a little more flexibility in what can be drawn with a given amount of repetition. For instance, the complete bipartite graphs K<sub>6,10</sub> (shown above), K<sub>5,16</sub>, and K<sub>7,8</sub> can all be drawn this way with only two repetitions per vertex, but cannot be drawn with thickness two.<br /><br />Unfortunately, it's NP-hard to find a drawing of this type with as few repetitions per vertex as possible. But fortunately, there are some important special cases where it's not so hard. In particular, one of our results is to solve the problem on graphs of bounded treewidth. 
The algorithm uses <a href="https://en.wikipedia.org/wiki/Courcelle%27s_theorem">Courcelle's theorem</a>, so there's probably a lot of room to make it more practical...<br /><br />Linkage for mid-December (2015-12-16)<br /><br /><ul><li><a href="http://bit-player.org/2015/ramsey-theory-in-the-dining-room">Ramsey theory in the dining room</a> (<a href="https://plus.google.com/100003628603413742554/posts/DgLy5UqubGc">G+</a>)</li><br /><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2015/dec/02/why-the-history-of-maths-is-also-the-history-of-art">Why the history of maths is also the history of art</a> (<a href="https://plus.google.com/100003628603413742554/posts/ShBgA9U1XTp">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=ZUllziPdRkw">The merger of a bubble and a soap film</a>, from the <a href="http://gfm.aps.org/">gallery of fluid motion</a> (<a href="https://plus.google.com/100003628603413742554/posts/Fu1TzCaBRyw">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=_DnDeBa0KFc">Kepler Orrery IV</a>, a video simulation showing the scales and temperatures of the extrasolar planetary systems discovered so far (<a href="https://plus.google.com/100003628603413742554/posts/b6FR8Tfiyns">G+</a>)</li><br /><li><a href="http://aperiodical.com/2015/12/aperiodvent-day-6-the-panarboreal-formula/">The Chung–Graham universal graphs for trees</a> (<a href="https://plus.google.com/100003628603413742554/posts/KProuT8H74d">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=GolS9Db3BMM">A video walk through fluffy clouds of red string and suspended keys</a> (<a href="https://plus.google.com/100003628603413742554/posts/8o8tuzYRxwf">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/12/08/undercover-greenpeace-activist.html">Undercover Greenpeace activists buy off corrupt academics in a climate science sting</a> (<a 
href="https://plus.google.com/100003628603413742554/posts/9yRRD7d5jdh">G+</a>)</li><br /><li><a href="http://www.worldofcoins.eu/forum/index.php?topic=9903.0">An alphabet of heptagons: Seven-sided coins</a> (via <a href="http://www.metafilter.com/155387/Minter-needed-must-own-straightedge-compass-and-additional-tool">MF</a>; <a href="https://plus.google.com/100003628603413742554/posts/6ng34acQWxB">G+</a>)</li><br /><li><a href="http://www.npr.org/sections/money/2014/10/21/357629765/when-women-stopped-coding">The big mid-1980s drop in women in CS</a> (<a href="https://plus.google.com/100003628603413742554/posts/9cikKQG4skK">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Osgood_curve">Osgood curves</a>, Jordan curves that themselves have nonzero area (<a href="https://plus.google.com/100003628603413742554/posts/Ty4bomWkkT2">G+</a>)</li><br /><li><a href="http://hyperallergic.com/239413/expert-claims-m-c-escher-museum-is-full-of-replicas/">Escher museum displayed fakes without telling the punters</a> (<a href="https://plus.google.com/100003628603413742554/posts/MUc4XgspQtv">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1512.03547">Graph isomorphism in quasipolynomial time</a> (<a href="https://plus.google.com/100003628603413742554/posts/GNfj7VA3pFQ">G+</a>)</li><br /><li><a href="http://www.metafilter.com/155527/WaPo-Drops-the-Mic-on-He-or-She">WaPo gives its blessing to singular they</a> (<a href="https://plus.google.com/100003628603413742554/posts/MXsPar7JdcK">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:321510Enumerating polyhedra with few edge lengths2015-12-08T05:35:26Z2015-12-08T05:35:26ZThis is related to my post yesterday, which demonstrated that simplicial convex polyhedra (all faces are triangles) with integer edge lengths can be forced to have nasty vertex coordinates, without any closed form solution. How big do the edge lengths need to be for this to happen? 
My guess is that lengths 1 and 2 should be enough, but I haven't worked out an explicit example. But length 1 alone is not enough.<br /><br />The simplicial polyhedra with all edge lengths 1 are called <a href="https://en.wikipedia.org/wiki/Deltahedron">deltahedra</a>, and there are only eight of them. They include three of the regular polyhedra, and five others shown below in a <a href="http://brianalexanderhofmeister.com/artwork/3139978-Convex-Deltrahedra-Acid-Magnetism-Winter-Light-and-Love.html">Neo-Platonist artwork by Brian Hofmeister</a> (see his web site for more interesting geometric art):<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/BrianHofmeisterDeltahedra.jpg" alt="Convex Deltrahedra (Acid, Magnetism, Winter, Light and Love) by Brian Hofmeister, http://brianalexanderhofmeister.com/artwork/3139978-Convex-Deltrahedra-Acid-Magnetism-Winter-Light-and-Love.html"></div><br /><br />Most of the irregular deltahedra can be built out of simpler pieces, prisms and pyramids, and have relatively simple vertex coordinates. The <a href="https://en.wikipedia.org/wiki/Snub_disphenoid">snub disphenoid</a>, the light blue one in the middle, has coordinates that are more of a mess, involving the solution to a cubic equation, but still closed form.<br /><br />So next we need to look at polyhedra with edge lengths 1 and 2, but there are many of them and I don't think they have been explicitly enumerated. How many? Finitely many, but I don't know much more than that.<br /><br />More generally, suppose you have finitely many edge lengths to use as the side lengths of your triangles. For instance, in my office I have a polyhedral construction set consisting of triangles (and some other polygons) with sides that are all either 1 or sqrt(2) (in some system of measurement). 
Then there are finitely many triangles you can build from these lengths, and finitely many combinations of triangles you can put together at a single vertex to get a total angle that is strictly less than 2<i>π</i> (else you can't use that combination in a convex polyhedron). Because there are finitely many, there is some nonzero number <i>ε</i> that gives the smallest angular defect (difference from 2<i>π</i>) of any of these combinations of triangles.<br /><br />But every convex polyhedron has total angular defect 4<i>π</i>, by a discrete version of the <a href="https://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet_theorem">Gauss–Bonnet theorem</a>. This means that the total number of vertices can never be more than 4<i>π</i>/<i>ε</i>, a finite number. And with finitely many vertices you can only have finitely many different polyhedra.<br /><br />This argument is not very constructive. As I've written it above, it doesn't tell you how quickly the number of integer-sided simplicial polyhedra grows as a function of the maximum edge length, for instance. And if you try to use this method to prove a bound on the growth rate, you quickly run into tricky problems in diophantine approximation: just how close can you get to zero defect while still having positive defect? I can't even tell from this argument whether the 1—sqrt(2) triangles in my office can make more convex polyhedra than 1—2 triangles. (Note, though, that any bound has to involve the lengths and not just the number of different edge lengths, because a 1—<i>n</i>—<i>n</i> isosceles triangle can make a number of different bipyramids that's linear in <i>n</i>.) 
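To make the finiteness argument concrete, here is a small Python sketch (entirely my own illustration, not from the post): it computes the triangle angles available from a given set of lengths, searches over combinations of angles that fit strictly below 2<i>π</i> at a convex vertex, and reports the smallest positive defect ε together with the 4π/ε vertex bound. With the single length 1 it recovers ε = π/3, giving the bound of 12 vertices that the icosahedron attains.

```python
from itertools import combinations_with_replacement
from math import acos, pi

def triangle_angles(lengths):
    """All distinct angles of nondegenerate triangles with sides from `lengths`."""
    angles = set()
    for a, b, c in combinations_with_replacement(sorted(lengths), 3):
        if a + b > c:  # nondegenerate: c is the largest side
            for x, y, z in [(a, b, c), (b, c, a), (c, a, b)]:
                # law of cosines: angle opposite side z
                angles.add(acos((x * x + y * y - z * z) / (2 * x * y)))
    return sorted(angles)

def min_defect_and_vertex_bound(lengths, tol=1e-9):
    """Smallest positive angular defect eps over convex vertex figures,
    and the resulting Gauss-Bonnet bound 4*pi/eps on the vertex count."""
    angles = triangle_angles(lengths)
    kmax = int(2 * pi / min(angles))  # max triangles fitting around a vertex
    eps = 2 * pi
    for k in range(3, kmax + 1):  # a polyhedron vertex needs at least 3 faces
        for combo in combinations_with_replacement(angles, k):
            total = sum(combo)
            if total < 2 * pi - tol:
                eps = min(eps, 2 * pi - total)
    return eps, 4 * pi / eps

for lengths in ([1], [1, 2]):
    eps, bound = min_defect_and_vertex_bound(lengths)
    print(lengths, eps, bound)
```

As the argument predicts, this only bounds the number of vertices; it says nothing about how many of the resulting vertex figures actually occur in polyhedra.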
So there's a lot of scope for more research in this area.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:321043Polyhedra whose vertex coordinates have no closed form formula2015-12-07T01:20:56Z2015-12-07T16:13:32ZThe cube has the nice symmetric set of Cartesian vertex coordinates <nobr>(±1, ±1, ±1).</nobr> These give it an edge length of 2, but if you want any other number <i>s</i> just scale by <i>s</i>/2. Similarly, the regular octahedron can be given coordinates that are all the possible permutations of <nobr>(0, 0, ±1),</nobr> and the regular tetrahedron can be given coordinates that are a subset of the cube's vertices, the ones with an even number of plus signs; in both cases the scale factor involves the square root of two. The regular dodecahedron can be formed from the eight vertices of a cube by adding twelve more vertices with coordinates formed by cyclically permuting <nobr>(0, ±<i>φ</i>, ±1/<i>φ</i>),</nobr> where <i>φ</i> is the golden ratio; its scaling factor also involves the golden ratio. And the regular icosahedron, with side length 2 (like the cube, and with the same scaling factor), has coordinates given by all cyclic permutations of <nobr>(0, ±1, ±<i>φ</i>).</nobr> The three subsets of four vertices having the same cyclic permutation as each other form three golden rectangles, which interlock in the pattern of the Borromean rings and have the icosahedron as their convex hull:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/Icosahedron-golden-rectangles.png"><br><br /><small>(<a href="https://commons.wikimedia.org/wiki/File:Icosahedron-golden-rectangles.svg">image from Wikimedia commons</a>)</small></div><br /><br />It's also possible to make irregular variants of these polyhedra (or of any other polyhedron), with the same combinatorial structure and integer coordinates. For instance, the cyclic permutations of <nobr>(0, ±1, ±2)</nobr> give an irregular icosahedron.
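As a quick machine check of the icosahedron coordinates above (my own verification code, not part of the post): the twelve cyclic permutations of (0, ±1, ±φ) all lie on a common sphere, and each has exactly five others at distance 2, giving the 30 edges of the icosahedron.

```python
from itertools import product
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2  # golden ratio

# the 12 icosahedron vertices: cyclic permutations of (0, ±1, ±phi)
verts = [perm for s1, s2 in product((1, -1), repeat=2)
         for base in [(0, s1, s2 * phi)]
         for perm in [base, (base[1], base[2], base[0]), (base[2], base[0], base[1])]]

def dist(p, q):
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# all vertices on a common sphere of radius sqrt(1 + phi^2)
assert all(isclose(dist(v, (0, 0, 0)), sqrt(1 + phi * phi)) for v in verts)

# each vertex has exactly 5 neighbors at edge length 2, giving 30 edges
degrees = [sum(1 for u in verts if u is not v and isclose(dist(u, v), 2))
           for v in verts]
print(len(verts), degrees.count(5), sum(degrees) // 2)  # prints: 12 12 30
```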
Finding small integer coordinates for a polyhedron combinatorially equivalent to the regular dodecahedron makes an amusing exercise, and bounding how big the coordinates need to be to realize all polyhedra remains an intriguing open problem.<br /><br />Given all this, you might be forgiven for thinking that every polyhedron has a nice closed-form formula for its vertex coordinates. But when the geometry and not just the combinatorics of the polyhedron is specified, it's not true, even when the polyhedron's faces are particularly simple (say, integer-sided triangles).<br /><br />To begin with, what is the input? One reasonable choice for how to describe a polyhedron is to give a <a href="https://en.wikipedia.org/wiki/Net_(polyhedron)">net</a>, a set of planar polygons (the faces of the polyhedron) connected to each other edge-to-edge, with instructions for how to glue the remaining edges together. It's a useful way of making polyhedral models out of paper, but a bit problematic from the point of view of theoretical algorithms, because we still don't know whether all polyhedra have nets. A bit more abstractly, we could specify the shape of each face, and the gluing pattern of the edges, without requiring that the faces form a connected net in the plane. By <a href="https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(geometry)">Cauchy's theorem</a>, this is enough to determine the whole geometry. But it still has the issue that the shapes won't always glue together the way you think they should. Instead, a standard alternative description of the input to the problem is to describe its <i>development</i>: the metric space you get from shortest paths that stay within its surface. This can again be done by gluing together polygons. 
As long as the resulting complex has the topology of a sphere and each vertex has a total angle strictly less than 2<i>π</i>, it will form a unique convex polyhedron by <a href="https://en.wikipedia.org/wiki/Alexandrov%27s_uniqueness_theorem">Alexandrov's uniqueness theorem</a>, but not necessarily one with the input polygons as its faces. The examples that I'll use to show that closed-form formulas don't exist can be given in any of these three forms (nets, gluing patterns of faces, or developments), and their faces have particularly simple shapes: isosceles triangles with integer edge lengths.<br /><br />A first hint that a closed-form formula might not exist comes from a paper by Maksym Fedorchuk and Igor Pak, "<a href="http://www.math.ucla.edu/~pak/papers/pp16.pdf">Rigidity and polynomial invariants of convex polytopes</a>", <i>Duke Math. J.</i> 2005. They use regular bipyramids (the convex hulls of a convex polygon within a plane, a point above the polygon, and a point below the polygon, but with variable edge lengths) to show that the polynomials describing the coordinates as a function of edge lengths can have exponentially high degree. 
But of course high degree doesn't necessarily imply no closed form solution: regardless of how high <i>d</i> is, the polynomial <nobr><i>x</i><sup><i>d</i></sup> − 2</nobr> has a root with the simple closed form solution <nobr>2<sup>1/<i>d</i></sup>.</nobr><br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/Bipyramid.png"><br><br /><small>(<a href="https://commons.wikimedia.org/wiki/File:Zw%C3%B6lfseitige_Pyramide.png">image from Wikimedia commons</a>)</small></div><br /><br />In part because of this result, a later paper by Kane, Price, and Demaine ("<a href="http://erikdemaine.org/papers/Alexandrov_WADS2009/">A pseudopolynomial algorithm for Alexandrov's theorem</a>", WADS 2009) gave up on trying to find an exact algebraic solution for the vertex coordinates, and instead showed that (for developments as input) an accurate numerical approximation to a polyhedral realization could be found in pseudopolynomial time, using methods involving the solution to differential equations.<br /><br />So we know that the realization for a given input has an algebraic solution, but we don't know of a good closed form solution, and instead practical algorithms use numerical methods that converge reasonably quickly. Those characteristics should sound familiar: they're the same as the background to my paper "<a href="http://dx.doi.org/10.7155/jgaa.00349">The Galois Complexity of Graph Drawing</a>" (with Bannister, Devanny, and Goodrich, JGAA 2015). In that paper, we showed that many graph drawing problems with similar characteristics could not be solved in algebraic computation tree models of computation augmented with a polynomial-root-finding primitive, in two variations: the primitive could either compute <i>d</i>th roots of given numbers, for arbitrarily large <i>d</i>, or it could compute roots of more complicated polynomials but only of bounded degree. 
In both cases, we showed that this computational model is incapable of representing the exact solution to the graph drawing problems we studied, regardless of how much time it spends in searching for the solution.<br /><br />In <a href="http://11011110.livejournal.com/292904.html">the blog post in which I announced the preprint version of the Galois complexity paper</a>, I described similar results for an even simpler geometric problem: given a cycle graph with specified edge lengths, draw it in the plane so that all the vertices are in cycle order around a circle. The special case of this problem with all edge lengths equal to each other and the cycle length derived from a Sophie Germain prime cannot be drawn by a computation tree using bounded-degree-polynomial-root primitives, and an example found by Varfolomeev (2004) shows that it cannot be drawn by a computation tree using arbitrary-degree-<i>d</i>th-root primitives, because this example involves an unsolvable Galois group. In particular, it has no nice closed-form formula.<br /><br />So let's try applying these methods to polyhedral realizations. To do so, we'll transform Varfolomeev's cycle problem into an equivalent polyhedral problem. We'll assume that we have a cycle graph with specified integer edge lengths, but with an additional constraint: that the longest edge of the cycle is no more than a <nobr>2/(2 + <i>π</i>)</nobr> fraction of the total length of the cycle. This constraint doesn't make the cycle problem any easier. However, it ensures that, when we realize this system of lengths as a cyclic polygon, it will contain the center of its circle; we will need this to make our polyhedral realization convex. Now, choose a large integer <i>N</i> (say, equal to the total edge length of the cycle). 
For each edge of the cycle, of length <i>L</i>, form a pair of isosceles triangles with side lengths <i>L</i>, <i>N</i>, and <i>N</i>, glued to each other on the length-<i>L</i> edge and glued to the next and previous pairs of triangles around the cycle on the length-<i>N</i> edges. The result is a structure that appears to be the development of a bipyramid (although we haven't shown that yet) and that can easily be unfolded in the plane by cutting all but one of the cycle edges and one edge incident to each apex of the bipyramid.<br /><br />I claim that this structure can be realized as a polyhedron, and that its unique realization has the given cycle lying on a circle in a plane. To show that such a realization exists, find a cyclic polygon with the given edge lengths, and then place the two apexes of the bipyramid on the line perpendicular to this polygon through the circle center. Because the cycle vertices are on a circle and the apexes are on an axis perpendicular to the circle, all the edges from the cycle vertices to the apexes have the same length, and the position of the apexes on the axis can be adjusted to make this length equal to <i>N</i>. After adjusting in this way, we have realized our input as a bipyramid with a cyclic polygon as its equator. But by Cauchy's or Alexandrov's theorems, there can be only one realization, so this is the one.<br /><br />When my kids were little, they used to have a play structure in the form of a teepee, with a very similar construction: a bundle of equal-length poles connected to each other at their top ends, with an isosceles triangle of fabric between each consecutive pair of poles. The weight of the structure would cause the triangles to spread out tight, and when they did the bottom ends of the poles would be roughly circular.
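The realization step can also be carried out numerically. In the sketch below (my own code; the function names are invented for illustration), the circumradius <i>R</i> of the cyclic polygon is found by bisection, using the fact that the total central angle, the sum of 2 asin(<i>L</i>/2<i>R</i>) over the edge lengths <i>L</i>, decreases monotonically in <i>R</i> when the center lies inside the polygon (as the 2/(2 + <i>π</i>) constraint guarantees), and the apex height is then sqrt(<i>N</i>² − <i>R</i>²):

```python
from math import asin, sqrt, pi

def cyclic_polygon_radius(lengths, iters=100):
    """Circumradius of the cyclic polygon with the given edge lengths,
    assuming the circumcenter lies inside (or on) the polygon."""
    def angle_sum(r):
        # central angle subtended by a chord of length l at radius r
        return sum(2 * asin(l / (2 * r)) for l in lengths)
    lo, hi = max(lengths) / 2, sum(lengths)  # angle_sum(lo) >= 2*pi > angle_sum(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if angle_sum(mid) > 2 * pi:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def bipyramid_apex_height(lengths, n):
    """Height of each apex above the equatorial plane so that every
    apex-to-cycle edge has length exactly n (requires n > circumradius)."""
    r = cyclic_polygon_radius(lengths)
    return sqrt(n * n - r * r)

print(cyclic_polygon_radius([1, 1, 1]))       # 1/sqrt(3), approx 0.57735
print(bipyramid_apex_height([1, 1, 1], n=2))  # sqrt(4 - 1/3), approx 1.91485
```

Of course, this only approximates the realization; the point of the post is that no exact closed-form version of this computation can exist.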
The construction above is like doing the same thing on a mirrored surface, so that you see a second bundle of poles below the circle as well as the one above it.<br /><br /><div align="center"><img src="https://www.ics.uci.edu/~eppstein/pix/tim3bd/MilesTent1-m.jpg"></div><br /><br />If a closed form formula for the realization existed, it could be constructed by our modified computation tree model. And then the model could find the line through the two apexes of the bipyramid and project the realization perpendicularly to this line to construct a realization of the cyclic polygon determined by the given sequence of cycle edge lengths. Based on this, it could construct a closed form formula for the vertices of this polygon. But we already know from Varfolomeev that such a formula does not exist; therefore the bipyramid's vertex coordinates also have no closed form formula.<br /><br />Kane et al. state as an open problem reducing their pseudopolynomial complexity to polynomial. The construction outlined above suggests that strongly polynomial complexity is unlikely, but (as for e.g. circle packing) it might still be possible to find a numerical algorithm that converges to an accurate approximation of the solution in time polynomial in the input size and in the desired number of bits of precision of the output.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:320856Linkage2015-12-01T07:05:28Z2015-12-01T07:05:28Z<ul><li><a href="http://www.ams.org/profession/ams-fellows/new-fellows">This year's AMS fellows</a> including Alon, Moore, Pach, Propp, Shub, and Sipser (<a href="https://plus.google.com/100003628603413742554/posts/gBGpRt3DVaD">G+</a>)</li><br /><li><a href="http://www.metafilter.com/154757/Can-social-networks-substitute-for-peer-review-in-science">A computer science conference decides to replace its peer review process with online discussions on one of the more wretched hives of misogyny and racism on the net.</a> What could possibly go wrong? 
(<a href="https://plus.google.com/100003628603413742554/posts/C6vE58CZ1XF">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1511.02851">Segerman and Nelson visualize hyperbolic honeycombs</a> (<a href="https://plus.google.com/100003628603413742554/posts/5f26VRXqhhf">G+</a>)</li><br /><li><a href="http://aperiodical.com/2015/11/riemann-hypothesis-not-proved-part-2/">Riemann hypothesis still not proved</a> (<a href="https://plus.google.com/100003628603413742554/posts/gDjAcShmnJ5">G+</a>)</li><br /><li>Meet the new Google+. Simpler, faster, better, uglier. (<a href="https://plus.google.com/100003628603413742554/posts/14xWNgLAdwA">G+</a> only, no link)</li><br /><li><a href="http://news.sciencemag.org/scientific-community/2015/11/feature-how-hijack-journal">Pirates who hijack the web sites of scientific journals</a> and then divert article submission/publication fees to themselves (<a href="https://plus.google.com/100003628603413742554/posts/BB8h78dgRvT">G+</a>)</li><br /><li><a href="https://vimeo.com/146334254">Interview with Ken Ribet about how he uses the open-source Sage mathematics system</a> (<a href="https://plus.google.com/100003628603413742554/posts/GpJF1tezzMP">G+</a>)</li><br /><li><a href="https://books.google.com/books?id=ymud91nTc9YC&pg=PA352">Virasena's concept of ardhacheda</a>, said to be an ancient discovery of <a href="https://en.wikipedia.org/wiki/Binary_logarithm">binary logarithms</a>. But is it really, or is it instead the 2-adic order? 
(<a href="https://plus.google.com/100003628603413742554/posts">G+</a>)</li><br /><li><a href="http://bigthink.com/strange-maps/the-flatter-kansas-plan">Kansas made even flatter</a> and how to define the flatness of a region (<a href="https://plus.google.com/100003628603413742554/posts/2JUpwB2Tiyi">G+</a>)</li><br /><li><a href="http://www.thisiscolossal.com/2015/11/willamette-river-history-dan-coe/">The history of the Willamette River</a> made visible from the height of the land it once eroded (<a href="https://plus.google.com/100003628603413742554/posts/Tfcc4ysNTne">G+</a>)</li><br /><li><a href="https://medium.com/backchannel/the-end-of-the-internet-dream-ba060b17da61#.mfn2ckdh2">The end of the internet dream?</a> Or, making the internet "a lot more like TV and a lot less like a global conversation". (<a href="https://plus.google.com/100003628603413742554/posts/Lgps4som7tA">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=rEMP3XEgnws">How does a light-field camera work?</a> (<a href="https://plus.google.com/100003628603413742554/posts/GgpYA3w1uMo">G+</a>)</li><br /><li><a href="http://mathoverflow.net/q/224898/440">MathOverflow question re convergence of the 0-1 law for first-order properties of random graphs</a> (<a href="https://plus.google.com/100003628603413742554/posts/gM6AxjvhKjy">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:320638Congratulations, Dr. Lam!2015-11-24T22:49:26Z2015-11-24T22:49:26ZJenny Lam, my teaching assistant for algorithms this quarter, passed her thesis defense today. She has been working with Sandy Irani, primarily on online algorithms for replacement and memory management strategies in heterogeneous caches (such as web proxies that have to maintain copies of documents of widely varying sizes), and has three papers on that subject, one at ALENEX and two at Middleware. She also has another paper on security-related algorithms in submission, which I'm a co-author on.
My understanding is that she will be teaching next semester at Pomona College on a temporary basis, while she looks for a more permanent position. Congratulations!urn:lj:livejournal.com:atom1:11011110:320476Telephone primes2015-11-23T19:27:50Z2015-11-23T19:29:35ZThis weekend I helped add a new sequence to OEIS, <a href="https://oeis.org/A264737">A264737</a>, of the prime numbers that divide at least one <a href="https://en.wikipedia.org/wiki/Telephone_number_(mathematics)">telephone number</a> (the numbers of matchings in a complete graph etc).<br /><br />The telephone numbers obey a simple recurrence T(n) = T(n-1) + (n-1)T(n-2), and it's easy to test whether a prime number p divides at least one telephone number by running this recurrence modulo p. Whenever n is 1 mod p, the right hand side of the recurrence simplifies to T(n-1) mod p, and we get two consecutive numbers that are equal mod p; after that point, the recurrence continues as it would from its initial conditions (two consecutive ones), multiplied mod p by some unknown factor. Therefore, the recurrence mod p either repeats exactly with period p, or it becomes identically zero (as it does for p=2), or it repeats with a higher period that is a multiple of p and a divisor of p(p–1), where all sub-periods of length p are multiples of each other. In particular, if p divides at least one telephone number, it divides infinitely many of them, whose positions are periodic with period p.<br /><br />All primes divide at least one Fibonacci number (a sequence of numbers with an even simpler recurrence) but that is not true for the telephone numbers. For instance, the telephone numbers mod 3 form the infinite repeating sequence 1,1,2,1,1,2,... with no zeros. So how many of the prime numbers are in the new sequence?
A heuristic estimate suggests that the telephone primes should form a 1–1/e fraction of all primes (around 63.21%): p is a telephone prime when there is a zero in the first p terms of the recurrence sequence mod p, and if we use random numbers instead of the actual recurrence then the probability of not getting a zero is approximately 1/e. With this estimate in mind, I tried some computational experiments and found that among the first 10000 primes, 6295 of them (approximately 63%) are in the sequence. Pretty accurate, I think! But I have no idea how to approach a rigorous proof that this estimate should be correct.<br /><br />Incidentally, while looking up background material for this I ran into a paper by Rote in 1992 that observes a relationship between the telephone numbers and another sequence, <a href="https://oeis.org/A086828">A086828</a>. A086828 counts the number of states in a dynamic programming algorithm for the traveling salesman problem on graphs of bandwidth k, for a parameter k. So its calculation, in the mid-1980s, can be seen as an early example of the parameterized analysis of algorithms. It has the same recurrence relation as the telephone numbers, but with different initial conditions, so we can consider using this sequence instead of the telephone numbers. But the same analysis above showing that all subperiods of length p are similar applies equally well to this sequence, showing that after an initial transient of length p, all subperiods are either identically zero or similar to the corresponding subperiods of the telephone numbers.
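The test described above takes only a few lines. A Python sketch (mine, using the post's criterion that p is a telephone prime exactly when a zero appears among the first p or so terms of the recurrence mod p):

```python
def is_telephone_prime(p):
    """Does the prime p divide at least one telephone number T(n)?
    Runs T(n) = T(n-1) + (n-1)*T(n-2) mod p, looking for a zero among
    the first p or so terms, which suffices by the periodicity argument."""
    a, b = 1, 1  # T(0) and T(1)
    for n in range(2, p + 1):
        a, b = b, (b + (n - 1) * a) % p
        if b == 0:
            return True
    return False

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

ps = primes_up_to(2000)
frac = sum(map(is_telephone_prime, ps)) / len(ps)
print(f"{sum(map(is_telephone_prime, ps))} of {len(ps)} primes; "
      f"fraction {frac:.3f} vs 1 - 1/e = 0.632...")
```

On the primes below 2000 this comes out close to the heuristic 1 − 1/e, consistent with the larger experiment reported above.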
So if we ask which primes divide at least one member of A086828, we get almost the same answer, except possibly for some additional primes that either divide one of the first p numbers of A086828 (and then no other members of A086828 later in the sequence) or that divide all but finitely many members of A086828.urn:lj:livejournal.com:atom1:11011110:320119Linkage2015-11-16T07:15:00Z2015-11-16T08:11:04Z<ul><li><a href="https://en.wikipedia.org/wiki/Reuleaux_triangle">Reuleaux triangle</a>, now a Wikipedia "good article" (<a href="https://plus.google.com/100003628603413742554/posts/Vy5jUbnW1mC">G+</a>)</li><br /><li><a href="http://mathoverflow.net/q/222412/440">Triangle centers from curve shortening</a> (MathOverflow question still missing a complete solution, but a lot of interesting partial results and discussion; <a href="http://mathoverflow.net/questions/222412/triangle-centers-from-curve-shortening">G+</a>)</li><br /><li><a href="https://lucatrevisan.wordpress.com/2015/11/03/laci-babai-and-graph-isomorphism/">Rumors of Babai's graph isomorphism result</a> (<a href="https://plus.google.com/100003628603413742554/posts/CpWcYSWQsNS">G+</a>)</li><br /><li><a href="http://www.wired.com/2015/10/important-rule-science-writing/">Balancing understandability and pedantry in popular scitech writing</a> (<a href="https://plus.google.com/100003628603413742554/posts/6YX1mRXUKFi">G+</a>)</li><br /><li><a href="https://twitter.com/neilhimself/status/662161811601555457">A dubious similarity network of novelists</a>, displaying the phenomenon that nearest-neighbor links are not bidirectional (<a href="https://plus.google.com/100003628603413742554/posts/ENVjFnqr6ox">G+</a>)</li><br /><li><a href="http://www-bcf.usc.edu/~dkempe/SoCalTheoryDay2015/index.html">Southern California Theory Day</a>, now past (<a href="https://plus.google.com/100003628603413742554/posts/Qsww455TKmG">G+</a>)</li><br /><li><a href="http://bit.ly/1IffFBe">Advice from Valerie Barr on being a good feminist
ally</a> (<a href="https://plus.google.com/100003628603413742554/posts/13JyQzNVjip">G+</a>)</li><br /><li><a href="http://3.bp.blogspot.com/-zUQf6AuQadc/Vj-IsekF8II/AAAAAAAAMkg/CDm3JIJBJO0/s1600/show%2Byour%2Bthinking.jpg">Reductio ad absurdum of "show your thinking" style math problems</a> (<a href="https://plus.google.com/100003628603413742554/posts/AA9pN4KsiSk">G+</a>)</li><br /><li><a href="http://www.leru.org/index.php/public/extra/signtheLERUstatement/">Petition to the EU to protest using gold-model open access as an excuse to redirect funding from researchers to publishers</a> (<a href="https://plus.google.com/100003628603413742554/posts/aPawczAM8Zu">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1507.02528">Unification of random-walk and interior-point convex optimization</a> (<a href="https://plus.google.com/u/0/100003628603413742554/posts/j37ny2g8SwD">G+</a>)</li><br /><li><a href="http://www.bemlegaus.com/2013/01/matematica-urbana.html">Mathematical street art</a> (<a href="https://plus.google.com/100003628603413742554/posts/c1d22vB5WRX">G+</a>)</li><br /><li><a href="http://www.gregegan.net/SCIENCE/NoCorkscrews/NoCorkscrews.html">Can planets have corkscrew orbits?</a> Greg Egan takes down an incautious 3-body analysis (<a href="https://plus.google.com/100003628603413742554/posts/NisnQPMbMRB">G+</a>)</li><br /><li><a href="http://www.wired.com/2015/11/argentina-many-female-astronomers/">More women in science doesn't mean less sexism: the example of Argentinian astronomy</a> (<a href="https://plus.google.com/100003628603413742554/posts/ZLhRMyfwqQD">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Planted_clique">The planted clique problem</a>, part of a growing trend of interest in quasipolynomial time algorithmics (<a href="https://plus.google.com/100003628603413742554/posts">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:319787Pan-rep-tiles2015-11-11T08:37:15Z2015-11-16T16:43:15ZA reptile is a creature, but a <a
href="https://en.wikipedia.org/wiki/Rep-tile">rep-tile</a> is a shape that can tile a larger copy of the same shape. If you use that larger copy to tile still-larger copies, and so on, you get a tiling of the plane. It's often an aperiodic tiling, but not always: for instance, the square and the equilateral triangle are rep-tiles but generate periodic tilings when repped.<br /><br />Another of the properties of the square and the equilateral triangle is that they are rep-tiles in many different ways. Any square number of these tiles can be put together to make a larger copy. A rep-tile is said to be rep-<i>k</i> if it uses <i>k</i> tiles to tile its larger self; these shapes are rep-<i>k</i> for all square <i>k</i>. For tiles whose side lengths are all rational multiples of each other, that's the most versatile a rep-tile can be, because the side length of the larger copy is the square root of <i>k</i>. Let's say that a rep-tile is a pan-rep-tile if it has this same property, of being rep-<i>k</i> for all square <i>k</i>. Are there other pan-rep-tiles?<br /><br />Over on the <a href="https://en.wikipedia.org/wiki/Talk:Rep-tile">Wikipedia talk page for the rep-tile article</a>, an anonymous (IP address) editor suggested this property as one that might actually be held by many rep-tiles, and gave some examples of tilings suggesting that the P pentomino and the <a href="https://en.wikipedia.org/wiki/Sphinx_tiling">sphinx</a> might be examples of pan-rep-tiles.
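Before tackling pan-rep-tilings in general, the easiest case for the P pentomino, rep-4, can be confirmed by brute force. The backtracking sketch below (my own code, allowing rotations and reflections of the piece) searches for an exact tiling of the doubled P pentomino by four copies of itself:

```python
from itertools import product

P = {(0, 0), (1, 0), (0, 1), (1, 1), (0, 2)}  # the P pentomino

def normalize(cells):
    mx, my = min(x for x, y in cells), min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def orientations(cells):
    """All rotations and reflections of a polyomino, normalized to the origin."""
    shapes = set()
    for _ in range(4):
        cells = {(y, -x) for x, y in cells}                 # rotate 90 degrees
        shapes.add(normalize(cells))
        shapes.add(normalize({(-x, y) for x, y in cells}))  # reflection
    return shapes

def tile(region, shapes):
    """Backtracking search: can `region` be covered exactly by `shapes`?"""
    if not region:
        return True
    cell = min(region)  # always cover the lexicographically first empty cell
    for shape in shapes:
        for ax, ay in shape:  # which cell of the piece lands on `cell`
            placed = {(cell[0] + x - ax, cell[1] + y - ay) for x, y in shape}
            if placed <= region and tile(region - placed, shapes):
                return True
    return False

# the scale-2 copy of P: each cell blown up into a 2x2 block of cells
region2 = {(2 * x + dx, 2 * y + dy) for x, y in P
           for dx, dy in product((0, 1), repeat=2)}
print(tile(frozenset(region2), orientations(P)))  # rep-4 tiling found: True
```

The same search could in principle be run on larger square scale factors, but the pan-rep-tile property needs a uniform argument, not a finite computation.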
It turns out not to be particularly difficult to show that the P pentomino is, in fact, a pan-rep-tile: see the visual demonstration below.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/panreptile.png"></div><br /><br />Is the same true for the sphinx?<br /><br /><b>ETA:</b> Yoshio Okamoto informs me that his paper with Ryuhei Uehara and Takashi Horiyama at <a href="http://www.kurims.kyoto-u.ac.jp/~takazawa/JCDCGG2015/">JCDCG^2 2015</a>, "Ls in L and Sphinxes in Sphinx", proves that sphinxes are indeed pan-rep-tiles.<br /><br /><b>ETA 2:</b> The results for the P-pentomino, sphinx, and several other rep-tiles are in Niţică, Viorel (2003), Rep-tiles revisited, MASS selecta, pp. 205–217, Amer. Math. Soc. Thanks to Gerhard Woeginger for the reference!<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:319499Halloween linkage2015-11-01T05:55:22Z2015-11-01T05:55:55Z<ul><li><a href="http://scholarlykitchen.sspnet.org/2015/10/14/return-of-the-big-brands/">How big publishers are likely to co-opt the open access movement</a> (<a href="https://plus.google.com/100003628603413742554/posts/EhSpKBzLigA">G+</a>)</li><br /><li><a href="http://slatestarcodex.com/2015/10/15/it-was-you-who-made-my-blue-eyes-blue/">Famous puzzle becomes amusing short story</a> (<a href="https://plus.google.com/100003628603413742554/posts/DfxwhGzMQ5D">G+</a>)</li><br /><li><a href="https://developer.mozilla.org/en-US/docs/Mozilla/MathML_Project/MathML_Torture_Test">Test how not-ready-for-prime-time your browser's MathML implementation is</a> (<a href="https://plus.google.com/100003628603413742554/posts/do3yQ71YYv7">G+</a>)</li><br /><li><a href="http://thinkprogress.org/health/2015/10/19/3713612/men-ignore-hard-evidence-of-gender-bias/">It's hard to convince men of gender bias in STEM</a> (<a href="https://plus.google.com/100003628603413742554/posts/Cw8f7MfkjdU">G+</a>)</li><br /><li><a href="http://www.danielwidrig.com/index.php?page=Work&id=Grid">Daniel 
Widrig's geometric sculpture</a> (<a href="https://plus.google.com/100003628603413742554/posts/SZqUgdpCY9z">G+</a>)</li><br /><li><a href="https://shar.es/1uAWr2">Maria Chudnovsky's search for a combinatorial perfect graph coloring algorithm</a> (<a href="https://plus.google.com/100003628603413742554/posts/7KzX5PGj6Vb">G+</a>)</li><br /><li><a href="http://tex.stackexchange.com/a/19734">Thank you LaTeX stack exchange for having useful answers to everyday questions</a>, this time about a bad interaction between hyperref, natbib, and dois in bibtex data (<a href="https://plus.google.com/100003628603413742554/posts/42rjq852PNM">G+</a>)</li><br /><li><a href="http://nautil.us/issue/29/scaling/how-to-build-a-search-engine-for-mathematics">Another article about Neil Sloane and the OEIS</a> (<a href="https://plus.google.com/100003628603413742554/posts/c4iN6i9hMqf">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2015-10-21/Op-ed">Wikipedia cites open-access journals more frequently than closed-access ones</a> (<a href="https://plus.google.com/100003628603413742554/posts/M9Q98SePkR4">G+</a>)</li><br /><li><a href="http://lemire.me/blog/2015/10/26/crazily-fast-hashing-with-carry-less-multiplications/">GF(2)-polynomial multiplication instructions lead to a fast XOR-universal hash function</a> (<a href="https://plus.google.com/100003628603413742554/posts/1X2mUDNsEN1">G+</a>)</li><br /><li><a href="http://www.vox.com/2015/10/26/9616370/science-committee-worse-benghazi-committee">Three times the House Science Committee abused its subpoena power to hassle scientists it disagreed with</a> (<a href="https://plus.google.com/100003628603413742554/posts/HGqqURXezmU">G+</a>)</li><br /><li><a href="https://gilkalai.wordpress.com/2015/10/28/convex-polytopes-seperation-expansion-chordality-and-approximations-of-smooth-bodies/">Simple 4-polytopes without good separators</a> (<a 
href="https://plus.google.com/100003628603413742554/posts/5ioYLBVvkY6">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1510.06535">Hollow heaps</a>, a supposedly-simpler replacement for Fibonacci heaps (<a href="https://plus.google.com/100003628603413742554/posts/KTPCV917q2a">G+</a>)</li><br /><li><a href="https://plus.google.com/101584805418938307037/posts/KcgPRwPr8Vr">SWAT (the Scandinavian Symposium and Workshops on Algorithm Theory) goes open-access with LIPIcs</a> (<a href="https://plus.google.com/100003628603413742554/posts/WTSF1FA3rVU">G+</a>)</li><br /><li><a href="https://theinnerframe.wordpress.com/2015/10/23/the-gyroids-algorithmic-geometry-iii/">Polyhedral approximations of gyroids and the Laves graph</a> (<a href="https://plus.google.com/100003628603413742554/posts/a967fr9Z9ds">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:319414Solar decathlon2015-10-19T05:00:57Z2015-10-19T05:02:39ZTwo years ago, and again this year, Irvine hosted the <a href="http://www.solardecathlon.gov/">Solar Decathlon</a>, a contest in which groups of university students design and build a small, inexpensive, and self-sufficient solar house and get judged on ten categories for how good their house is. The houses (and various related vendor exhibits) were on show to the public at a public park in Irvine, with students from each team on hand to explain their designs. So yesterday I went to see them, and took a few photos.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/solardecathlon/WestVirginiaTorVergata2-m.jpg" border="2" style="border-color:black;" /></div><br /><br />Unfortunately I didn't keep any shots from my favorite of the houses, an effort by Clemson involving a modular open-source design made out of aluminum composite panels, with amazingly lightweight but strong and attractive snap-together wooden furniture. 
It was also the only one to pack three bedrooms into the 1000 square foot limit for the enclosed part of the house.<br /><br />Instead, <a href="http://www.ics.uci.edu/~eppstein/pix/solardecathlon/">here are some of the other houses, with annotations</a>.urn:lj:livejournal.com:atom1:11011110:319220Linkage2015-10-16T05:16:26Z2015-10-16T05:16:26Z<ul><li><a href="https://en.wikipedia.org/wiki/Reversible_cellular_automaton">Reversible cellular automaton</a>, newly listed as a Wikipedia "Good Article" (<a href="https://plus.google.com/100003628603413742554/posts/U8xogi9usdY">G+</a>)</li><br /><li><a href="https://plus.google.com/112582901549166431017/posts/d6xxf4zH4u6">André Schulz's report from GD 2015</a>, <a href="http://www.csun.edu/gd2015/presentations.htm">all the GD presentation slides</a>, and <a href="https://plus.google.com/100680911101807674881/posts/Uv4PWgcvU4A">Pat Morin's contest-winning freshman research project</a> (<a href="https://plus.google.com/100003628603413742554/posts/To7gg7WcmFP">G+</a>)</li><br /><li><a href="https://liorpachter.wordpress.com/2015/09/20/unsolved-problems-with-the-common-core/">Lior Pachter's unsolved problems supplementing the Common Core</a> (<a href="http://www.metafilter.com/153162/Fun-math-for-kids">MF</a>; <a href="https://plus.google.com/100003628603413742554/posts/UygoeiS65oC">G+</a>)</li><br /><li><a href="http://cosmicdiary.org/lfenton/?p=944">Dune trails on Mars</a> (<a href="https://plus.google.com/100003628603413742554/posts/GMhTbCrHRLV">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/10/06/global-coalition-tells-faceboo.html">Global coalition tells Facebook to kill its Real Names policy</a> (<a href="https://plus.google.com/100003628603413742554/posts/9nfCQwZCQzQ">G+</a>)</li><br /><li><a href="http://www.ics.uci.edu/~eppstein/bibs/sodafixes.sty">SODA formatting fixes</a> (<a href="https://plus.google.com/100003628603413742554/posts/TdnbACqsFqQ">G+</a>)</li><br /><li><a 
href="http://www.nature.com/news/the-biggest-mystery-in-mathematics-shinichi-mochizuki-and-the-impenetrable-proof-1.18509">Nature on Mochizuki's baffling ABC proof</a> (<a href="http://www.math.columbia.edu/~woit/wordpress/?p=8032">Via</a>; <a href="https://plus.google.com/100003628603413742554/posts/fsJvkM7FPG8">G+</a>)</li><br /><li><a href="https://theinnerframe.wordpress.com/2015/09/17/the-120-cell-spheres-xiii/">The 120-cell</a> and its decomposition into 12 rings of 10 dodecahedra (<a href="https://plus.google.com/100003628603413742554/posts/Wa1wWQsVFQA">G+</a>)</li><br /><li><a href="http://www.dailykos.com/story/2015/10/07/1428690/-Verizon-merges-its-cell-phone-tracking-with-its-AOL-ad-tracking-network">How not to be ad-tracked by your Verizon cell-phone</a> (<a href="https://plus.google.com/100003628603413742554/posts/NvrCMLGA9dG">G+</a>)</li><br /><li><a href="http://www.3quarksdaily.com/3quarksdaily/2015/10/popular-media-loves-nothing-better-than-leaning-on-a-tired-trope-when-telling-a-tale-mathematicians-are-always-solitary-geni.html">The collaborative and social nature of mathematics</a> (<a href="https://plus.google.com/100003628603413742554/posts/dKPfN74zRs2">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/WAVL_tree">The WAVL tree data structure</a> (<a href="https://plus.google.com/100003628603413742554/posts/F8apHipqhQu">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?t=5&v=i7zpDGN5B2A">Augmented reality meets food psychology</a> (<a href="https://plus.google.com/100003628603413742554/posts/SfXvNWR3J1c">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/10/14/serial-sexual-harasser-and-ast.html">Serial sexual harasser resigns Berkeley professorship</a> (<a href="https://plus.google.com/100003628603413742554/posts/RJAetLigcBt">G+</a>)</li><br /><li><a href="http://www.nytimes.com/2015/10/12/opinion/the-importance-of-recreational-math.html">The importance of recreational mathematics</a> (<a 
href="https://plus.google.com/100003628603413742554/posts/VzjVyeTG1Mn">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:318761Treetopes2015-10-15T05:24:08Z2015-10-15T05:24:08ZI have another new paper on the arXiv, "<a href="http://arxiv.org/abs/1510.03152">Treetopes and their graphs</a>", arXiv:1510.03152. It's mostly about a class of 4-dimensional polytopes, and their connections to clustered planarity, but it's hard to visualize 4d, especially on a 2d screen. So to explain what's in the paper, let's start by analogy, several dimensions lower.<br /><br />Suppose you have a cycle graph (you can think of this as being one-dimensional: just vertices connected to each other in a single loop). But you also have, separately from the graph itself, some sort of hierarchical clustering on the graph. Some paths within the cycle form clusters, and each pair of clusters of this clustering is either nested or disjoint. Then it will always be possible to represent both the graph and the clustering in a single drawing, something like this:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/ClusteredCycle.png"></div><br /><br />The clusters are drawn as curves (here, circles) surrounding subsets of vertices of the graph. Edges never cross each other, cluster boundaries never cross each other, and edges cross cluster boundaries only when they have to: when one endpoint of the edge is inside the cluster and the other endpoint is outside. A drawing with these properties is called <a href="https://en.wikipedia.org/wiki/Clustered_planarity">clustered planar</a>. Clusters of paths in a cycle always have such a drawing, but more generally it's a big open problem in graph drawing to test whether a given planar graph and clustering can be drawn in this way. 
There are lots of special cases where we know how to find such a drawing, but we don't know of an algorithm that solves all the cases, and we don't know of any hardness result that would prevent such an algorithm from existing.<br /><br />But back to the cycle. Choose a clustered drawing with no two curves that separate the points in the same way: if two clusters are complementary to each other, we only use a single curve for both of them. The cluster boundaries divide the plane (and the cycle vertices) into different regions; let's add a "cluster vertex" for each region, and connect it to the cluster vertices for neighboring regions and to the cycle graph vertices within its own region. The number of regions, and the number of cluster vertices, is one more than the number of curves you had; you can think of this as adding one more cluster to the clustering, containing all of the vertices in the graph. I call the resulting augmented graph a "cluster graph". Here's what we get for the same example:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/ClusteredHalin.png"></div><br /><br />The result is a <a href="https://en.wikipedia.org/wiki/Halin_graph">Halin graph</a>! A Halin graph is usually described as being formed from a tree that has no degree-two vertices and is embedded in the plane, by adding a cycle of edges connecting the leaves of the tree in clockwise order with respect to the embedding. Instead, here, we started with the cycle and then added the tree later, but the result is the same. Any nested clustering of paths on a cycle graph will have a Halin graph as its cluster graph, and all Halin graphs can be formed in this way. Every Halin graph is itself planar. 
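The tree-plus-leaf-cycle construction of a Halin graph is simple enough to carry out directly. Here is a quick Python sketch; the star example and its leaf ordering are my own choices for illustration:

```python
# Build a Halin graph: take a tree with no degree-2 vertices, embedded in
# the plane, and connect its leaves by a cycle in their planar order.

def halin_graph(tree_edges, leaf_cycle):
    """Edge set of the Halin graph: tree edges plus a cycle through
    the leaves in the given cyclic order."""
    edges = {frozenset(e) for e in tree_edges}
    k = len(leaf_cycle)
    for i in range(k):
        edges.add(frozenset((leaf_cycle[i], leaf_cycle[(i + 1) % k])))
    return edges

# Star K_{1,3}: center 0 joined to leaves 1, 2, 3. Adding the triangle
# through the leaves yields K_4, the graph of the tetrahedron.
tree = [(0, 1), (0, 2), (0, 3)]
g = halin_graph(tree, [1, 2, 3])
assert g == {frozenset(p) for p in
             [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (1, 3)]}
```

Larger stars give wheel graphs, the graphs of pyramids, in the same way.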
So we've taken a one-dimensional graph (a cycle graph), added a clustering, and shown how to form a special kind of two-dimensional graph (a planar graph) from it.<br /><br />But Halin graphs are not just planar; they're a special case of the <a href="https://en.wikipedia.org/wiki/Polyhedral_graph">polyhedral graphs</a>, the graphs of convex 3-polytopes, just as cycle graphs are the graphs of convex 2-polytopes (that is, convex polygons). A planar graph is the graph of a polyhedron exactly when it is 3-connected (it can't be broken into two pieces by removing only two vertices) and Halin graphs are always 3-connected; in fact, they were studied by Halin as examples of graphs that are minimally 3-connected (removing any edge from the graph breaks this property). The polyhedra that you get from Halin graphs go under several different names, but they can be defined very simply in terms of their face structure: the Halin graph polyhedron has one special two-dimensional "base" face such that every other two-dimensional face shares an edge with the base. When this is true, the edges that are not part of the base face form a tree, and the base face forms a cycle that passes through the leaves of this tree, just like for the standard graph-theoretical definition of a Halin graph.<br /><br />So return once more to our cycle graph, its clustering, and the Halin graph that we get as its cluster graph. In graph-theoretic terms, we thought the cycle graph was one-dimensional and its planar cluster graph was two-dimensional. But when we look at them geometrically, really it seems that the cycle graph is two-dimensional (it's the graph of a convex polygon) and that its cluster graph is three-dimensional (it's the graph of a convex polyhedron). 
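The "every other face shares an edge with the base" condition is easy to test from a face list. A tiny sketch, using the square pyramid (the Halin polyhedron of a four-leaf star) as a hand-entered example, and part of a cube as a non-example:

```python
# Check the Halin-polyhedron condition in dimension 3: every 2-face
# other than the designated base shares an edge with the base.
# Faces are given as cyclic vertex lists; examples entered by hand.

def face_edges(face):
    """Edges of a face given as a cyclic list of vertices."""
    return {frozenset((face[i], face[(i + 1) % len(face)]))
            for i in range(len(face))}

def is_halin_polyhedron(base, other_faces):
    base_e = face_edges(base)
    return all(face_edges(f) & base_e for f in other_faces)

# Square pyramid: apex 4 over base 0-1-2-3; every side shares a base edge.
base = [0, 1, 2, 3]
sides = [[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]]
assert is_halin_polyhedron(base, sides)

# Non-example: in a cube, the top face shares no edge with the bottom.
cube_base = [0, 1, 2, 3]
cube_top = [4, 5, 6, 7]
assert not is_halin_polyhedron(cube_base, [cube_top])
```

The same test, applied to 2-faces of a higher-dimensional polytope, is the treetope condition discussed next.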
Can we bring this up another level, and turn three-dimensional graphs (the graphs of arbitrary convex polyhedra, not just Halin graphs) plus clusterings on those graphs (the still-mysterious clustered planar drawings) into four-dimensional objects?<br /><br />Yes! (Otherwise, why would I have asked those questions?)<br /><br />The same geometric definition of Halin graph polyhedra works equally well as the definition of a class of higher dimensional polytopes that I call treetopes. These are the polytopes where every 2-dimensional face meets a designated base face in an edge (equivalently, where every face of dimension two or more has more than one vertex in common with the base face). These polytopes have many properties in common with Halin graphs: for instance, the edges that are not part of the base face always form a tree. And in the four-dimensional case, these can be formed from the three-dimensional polyhedral graphs as the cluster graphs of a certain type of clustered planar drawing, and every such graph gives rise to a four-dimensional treetope in this way. Here's an example, of a clustered planar drawing of a polyhedral graph and of the cluster graph (the graph of a 4-polytope) that you get in this way:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/clustergraph.png"></div><br /><br />You can see the three-dimensional faces that turn this into a four-dimensional polytope, if you squint: there's a Halin graph growing above each face of the planar part, rising to a peak at the common ancestor of the vertices in the face, and each of these forms a three-dimensional polyhedron, one of the 3-faces of a 4-treetope. This is true for the external face of the planar part as well as for each of its internal faces. 
Finally, the base face of the 4-treetope is given by the planar graph itself.<br /><br />The proof that these clustered planar graphs are always the same thing as the corresponding 4-treetopes (and vice versa) uses an inductive construction in which one repeatedly collapses clusters down to single vertices or, in the other direction, blows up single vertices into their own little clusters with their own little polyhedral graphs inside the cluster. The same construction also leads to a polynomial time algorithm that can tell whether a given graph comes from a 4-treetope in this way, without having to be told which vertices are the ones on the base face and which of them represent clusters. We don't know of such an algorithm for the graphs of arbitrary 4-polytopes, and it seems that the problem should be hard for the existential theory of the reals, although we also don't know of a proof of such a hardness result.<br /><br />There's also a bit at the end of the new paper about the sparsity properties of these graphs. In particular, by working with the clustered planar view of these graphs instead of the 4-dimensional polytope view, it's possible to show that they have small separators and <a href="http://en.wikipedia.org/wiki/Bounded_expansion">bounded expansion</a>, so many sparse graph algorithms work on these graphs. In contrast, 4-polytopes in general can be very far from sparse (they can be complete graphs).<br /><br />But my proof that 4-treetopes have bounded expansion uses some of the special properties that their corresponding clustered planar drawings have. So this raises another question, that I don't know the answer to: suppose you're given a clustered planar drawing that does not meet the special conditions required for it to correspond to a 4-treetope. Just any clustered planar drawing. And then you construct the cluster graph of this drawing in the same way I described above. 
These graphs have constant edge/vertex ratio, because they are the union of a planar graph and a tree. But are they sparse in any stronger sense? Do they also have bounded expansion? I don't know.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:318523Why shallow minors matter for graph drawing2015-10-03T23:16:51Z2015-10-04T06:34:25Z<p>An ongoing concern in graph drawing research has been curve complexity. If you draw a graph using a certain style, how complicated are you going to have to make the edges? More complicated curves are harder for readers to follow, and therefore they make the graph less readable. But simpler curves (such as line segments) may have their own problems: not fitting the style (which may constrain the edges to certain directions), running through vertices, forming sharp angles with each other, etc. To balance these concerns, a lot of work in graph drawing has allowed edges to be polygonal paths but has tried to prove hard upper bounds on how many bends you need to use. I'm not fond of polylines and bends myself — I prefer smooth curves such as circular arcs meeting at inflection points — but in this case one can measure the curve complexity in terms of the number of arcs you need per edge, and the theory ends up being much the same.</p>
<p>A couple of examples: in the right angle crossing drawing style (<a href="https://en.wikipedia.org/wiki/RAC_drawing">RAC drawing</a>), crossings are allowed, but the crossings have to be at right angles.</p>
<p align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/RAC-K5-K34.png"></p>
<p>The examples above have no bends, but bendless RAC graphs are quite restrictive. In particular they can have at most 4<i>n</i> − 10 edges (as Didimo et al. proved in <a href="https://dx.doi.org/10.1007%2F978-3-642-03367-4_19">their paper introducing RAC drawing</a>). On the other hand, if you allow bends, it's easy to turn any drawing into a RAC drawing: insert new bends near each crossing to allow the edges that cross to form the correct angles with respect to each other. How many bends per edge do you need? Exactly three. The graphs with two-bend-per-edge RAC drawings are still limited to a linear number of edges, but three bends let you draw anything, with a layout like the one below for <i>K</i><sub>8</sub>.</p>
<p align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/K8-RAC-3bend.png"></p>
<p>In another drawing style, three-page <a href="https://en.wikipedia.org/wiki/Book_embedding">topological book embeddings</a>, we wish to draw graphs on three half-planes in space, meeting at a 120 degree angle along a line. The vertices must be on this line, and the edges must be formed from semicircles within the half-planes.</p>
<p align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/3page-K5.png"></p>
<p>The graphs that can be drawn with only a single semicircle per edge form a restricted subclass of all graphs (again, with only a linear number of edges). However, all graphs can be drawn in a three-page book if the edges can be drawn as multiple semicircles that connect to each other across the spine of the drawing. Can we bound the number of spine crossings per edge that we might need?</p>
<p>In general, given some graph drawing style, and an appropriate notion of curve complexity within that style, we can ask similar questions. Does the style allow drawings of all graphs with bounded curve complexity? Or, if you impose a bound on the curve complexity, are you necessarily limiting the graphs you can draw to a restricted subclass of all graphs?</p>
<p>It turns out that questions like this can be formalized and answered using the theory of shallow minors and sparse classes of graphs, ably described by <a href="https://en.wikipedia.org/wiki/Jaroslav_Ne%C5%A1et%C5%99il">Jaroslav Nešetřil</a> and <a href="https://en.wikipedia.org/wiki/Patrice_Ossona_de_Mendez">Patrice Ossona de Mendez</a> in their book <i>Sparsity: Graphs, Structures, and Algorithms</i>.</p>
<p>A <a href="https://en.wikipedia.org/wiki/Shallow_minor">shallow minor</a> of a given graph is a minor (a smaller graph formed by edge contractions, edge deletions, and vertex deletions) such that the subsets of the original graph that are contracted into a single vertex in the minor have small diameter. The vertices of the shallow minor can be represented by a collection of low-diameter vertex-disjoint subtrees of the starting graph, such that two vertices of the minor are adjacent if and only if the two corresponding subtrees are connected by an edge. A shallow topological minor is almost the same but the subtrees can have only one vertex of degree greater than two, so that a subdivision of the minor (with few subdivision points per edge) forms a subgraph of the starting graph.</p>
<p>A family F of graphs is said to be somewhere dense if there exists a diameter bound such that the shallow minors of graphs in F, with that bound, include all graphs (or all complete graphs). Otherwise, it is nowhere dense. You can state the same definitions with shallow topological minors and it ends up being equivalent to the version with shallow minors.</p>
<p>But now, suppose we have a drawing style for graphs with the following properties. First, removing edges or vertices from a drawing should result in another valid drawing. And second, there is a notion of bend points for the drawing, such that the curve complexity of a drawing in this style is the maximum number of bends per edge, and such that replacing a bend point by a degree-two vertex or vice versa also results in a valid drawing. Let F be the family of graphs that can be drawn in this style with no bends. Then if every graph G can be drawn with bounded curve complexity, we can replace the bends in a drawing of G by subdivision points and get a graph in F that has G as a shallow minor, so F is somewhere dense. On the other hand, if F is somewhere dense, we can generate a drawing with bounded curve complexity for any graph G by finding a graph in F that has G as a shallow topological minor, drawing this bigger graph, removing the parts of the drawing that don't correspond to features of G, and turning the subdivision points back into bends. So: the drawing style can represent all graphs, with bounded curve complexity, if and only if its family of bendless graphs is somewhere dense.</p>
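The bends-to-subdivision-points direction of this argument is purely mechanical; here is a minimal sketch over plain edge lists (the example graph, vertex numbering, and function name are my own):

```python
# Replace each edge that has b bends by a path through b new degree-2
# vertices, one per bend. The original graph is then a shallow
# topological minor of the result.

def subdivide(n, edges, bends_per_edge):
    """Subdivide each edge of an n-vertex graph.
    Returns (number of vertices, new edge list)."""
    new_edges = []
    next_v = n  # fresh vertex ids start after the original vertices
    for (u, v), b in zip(edges, bends_per_edge):
        path = [u] + list(range(next_v, next_v + b)) + [v]
        next_v += b
        new_edges.extend(zip(path, path[1:]))
    return next_v, new_edges

# K_4 with three bends on every edge, as in the RAC construction:
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n, e = subdivide(4, k4_edges, [3] * 6)
assert n == 4 + 3 * 6   # one new vertex per bend
assert len(e) == 4 * 6  # each edge becomes a path of four edges
```

The other direction, recovering bends from subdivision vertices, just runs this translation in reverse on a bendless drawing of the subdivided graph.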
<p>Let's look again at the two drawing styles we used as examples. Are the bendless RAC graphs somewhere dense? Yes, because we can draw all graphs in RAC style with only three bends. It's been claimed that RAC graphs are closely related to <a href="https://en.wikipedia.org/wiki/1-planar_graph">1-planar graphs</a>, but this shows a big difference between the two classes of graphs. In particular, unlike 1-planar graphs, the bendless RAC graphs do not have <a href="https://en.wikipedia.org/wiki/Planar_separator_theorem">separator theorems</a>, because if they did they would have <a href="https://en.wikipedia.org/wiki/Bounded_expansion">bounded expansion</a>, something that can only be true of nowhere-dense graph families.</p>
<p>Does every graph have a three-page topological book embedding with a constant number of spine crossings per edge? No, because the graphs with three-page book embeddings (without spine crossings) have bounded expansion, and therefore cannot be somewhere dense. In fact in this case it's known that <a href="https://dx.doi.org/10.1016%2FS0166-218X%2899%2900044-X">logarithmically many crossings per edge might sometimes be needed</a> and that <a href="http://dx.doi.org/10.1137/S0895480195280319">this bound can always be achieved</a>.</p>
<p>There's an odd mental inversion related to this phenomenon. In graph theory, being nowhere dense is thought of as good (it tells you your family has some interesting properties) and somewhere dense is bad (your family contains too many graphs to have any nice structure). But in graph drawing, it's the somewhere dense classes that are better than the nowhere dense ones: they describe the drawing styles that are sufficiently general-purpose to be usable for all graphs, with bounded curve complexity. So maybe it's the somewhere dense drawing styles (like RAC drawing) rather than the nowhere dense ones (like 1-planar drawing) that we should be paying more attention to.</p><a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:318353Linkage2015-10-01T03:51:45Z2015-10-01T03:51:45Z<ul><li><a href="http://www.nytimes.com/interactive/2015/09/17/upshot/top-colleges-doing-the-most-for-low-income-students.html">UCI is #1 for low-income students</a> (<a href="https://plus.google.com/100003628603413742554/posts/YbR9MBcH3GQ">G+</a>)</li><br /><li><a href="https://cp4space.wordpress.com/2015/09/14/coverings-convolutions-and-corbynatorics/">Applications of convolutions in combinatorics</a> (<a href="https://plus.google.com/100003628603413742554/posts/ZvFWaBMZFhA">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2015-09-16/Editorial">Is it good to take advantage of free subscriptions to online information services for Wikipedia editors?</a> (Yes. 
<a href="https://plus.google.com/100003628603413742554/posts/CDshyozc7Au">G+</a>)</li><br /><li><a href="http://www.hs.fi/tiede/a1407209514060">A Helsinki street paved with Penrose tiles</a> (<a href="https://plus.google.com/100003628603413742554/posts/SaKeje5EveE">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=yN9g0XsDu-E">Colbert on diploma mills</a> (<a href="https://plus.google.com/100003628603413742554/posts/Bmu7Lw5AP2Z">G+</a>)</li><br /><li><a href="http://www.csun.edu/gd2015/Realizing_Graphs_as_Polyhedra.pdf">Realizing graphs as polyhedra</a> (talk slides from my talk at the Graph Drawing satellite workshop; <a href="https://plus.google.com/100003628603413742554/posts/2sQL8wPKEDD">G+</a>)</li><br /><li><a href="https://quantixed.wordpress.com/2015/05/05/wrong-number-a-closer-look-at-impact-factors">Impact factors are bad and you should feel bad for using them</a> (<a href="https://plus.google.com/100003628603413742554/posts/4v4Hi5cZfJk">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=ZREp1mAPKTM">Numberphile on the Houdini fold-and-cut trick</a> (<a href="https://plus.google.com/100003628603413742554/posts/6QGpopc9jyS">G+</a>)</li><br /><li><a href="http://doi.org/10.1007/s11192-015-1757-5">Using the h-index to measure research productivity can lock in and amplify pre-existing gender biases</a> (<a href="https://plus.google.com/100003628603413742554/posts/8q1G2gwUyum">G+</a>)</li><br /><li><a href="https://plus.google.com/100680911101807674881/posts/i49RBcwPt4e">Elsevier profiteers off open-access requirements, no longer usable by NSERC-funded researchers unless they pay thousands to get their own papers published</a> (<a href="https://plus.google.com/100003628603413742554/posts/31ZZVFrdBpv">G+</a>)</li><br /><li><a href="http://dumas.io/PML/">Visualizing the space of simple closed curves in hyperbolic manifolds</a> (<a href="https://plus.google.com/100003628603413742554/posts/YAcEgQ7suZk">G+</a>)</li></ul>