urn:lj:livejournal.com:atom1:110111100xDE0xDE0xDE2014-09-16T07:42:20Zurn:lj:livejournal.com:atom1:11011110:296537Linkage2014-09-16T06:20:03Z2014-09-16T07:42:20Z<ul><li><a href="http://www.thisiscolossal.com/2014/08/thomas-herbrich-smoke/">Unexpected shapes in smoke plumes</a>, as photographed by Thomas Herbrich (<a href="https://plus.google.com/100003628603413742554/posts/6CADmTYFxq3">G+</a>)</li><br /><li><a href="http://tcs.postech.ac.kr/isaac2014/accepted_papers.html">ISAAC 2014</a> and <a href="http://theory.utdallas.edu/COCOA2014/accepted-papers.html">COCOA 2014</a> accepted paper lists (<a href="https://plus.google.com/100003628603413742554/posts/BrPabmYReu4">G+</a>)</li><br /><li><a href="http://windowsontheory.org/2014/09/03/focs-2014-program-is-online/">FOCS 2014 program</a> and best paper winners (<a href="https://plus.google.com/100003628603413742554/posts/LnC49jZzLgt">G+</a>)</li><br /><li><a href="http://www.wired.com/2014/08/watch-804-wooden-balls-shape-shift-into-a-perfect-spiral/">Kinetic sculpture</a> made of wooden balls on threads, with some extensive software simulation behind its design (<a href="https://plus.google.com/100003628603413742554/posts/TBadJvQDmeY">G+</a>)</li><br /><li><a href="http://www.wired.com/2014/09/curvature-and-strength-empzeal/">How a 19th century math genius taught us the best way to hold a pizza slice</a>, or, a practical application of the theorem that when a flat surface is embedded in 3d, it remains flat in at least one direction (<a href="https://plus.google.com/100003628603413742554/posts/8MN9K3iXV7Z">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Centered_octahedral_number">Centered octahedral numbers</a> on Wikipedia (<a href="https://plus.google.com/100003628603413742554/posts/VHQFZG3ztR3">G+</a>)</li><br /><li><a href="http://aperiodical.com/2014/09/interesting-esoterica-summation-volume-9/">Interesting esoterica summation</a> (<a 
href="https://plus.google.com/100003628603413742554/posts/8Rx8UcAjksz">G+</a>)</li><br /><li><a href="http://dx.doi.org/10.1007/s00454-014-9627-0">A Möbius-Invariant Power Diagram and Its Applications to Soap Bubbles and Planar Lombardi Drawing</a> (journal version of two of my old conference papers; <a href="https://plus.google.com/100003628603413742554/posts/WniiFV3VREv">G+</a>)</li><br /><li><a href="http://news.sciencemag.org/people-events/2014/09/researcher-loses-job-nsf-after-government-questions-her-role-1980s-activist">Researcher loses job at NSF after government questions her role as 1980s activist</a> (<a href="https://plus.google.com/100003628603413742554/posts/EAxPtzQiusE">G+</a>)</li><br /><li><a href="http://infosthetics.com/archives/2014/09/pi_visualized_as_a_public_urban_art_mural.html">Pi visualized as a public urban art mural</a> (<a href="https://plus.google.com/100003628603413742554/posts/K3jpjjv9ypa">G+</a>)</li><br /><li><a href="http://igorpak.wordpress.com/2014/09/12/how-not-to-reference-papers/">How not to reference papers</a> (a sad story by Igor Pak of academic misattribution; <a href="https://plus.google.com/100003628603413742554/posts/XN5S8t9d8QV">G+</a>)</li><br /><li><a href="http://jocg.org/v5n1p10/">Steinitz Theorems for Simple Orthogonal Polyhedra</a> (journal version of another of my papers; <a href="https://plus.google.com/100003628603413742554/posts/2Nz66ruDFHu">G+</a>)</li><br /><li><a href="http://sbseminar.wordpress.com/2014/09/14/editorial-board-of-journal-of-k-theory-on-strike-demanding-tony-bak-hands-over-the-journal-to-the-k-theory-foundation/">Editorial board of <i>Journal of K-theory</i> goes on strike</a> over publisher profiteering (<a href="https://plus.google.com/100003628603413742554/posts/g1MfZRTPDf2">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:296235Bren Hall, East Stairs2014-09-13T21:57:06Z2014-09-13T21:57:06ZJust some test shots with my new travel lens (Canon's 17-40/F4 L, replacing a mysteriously 
nonfunctional and optically not as good 17-85IS).<br /><br /><div align="center"><table border="0" cellpadding="10">
<tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/brenstairs/1.html"><img src="http://www.ics.uci.edu/~eppstein/pix/brenstairs/1-m.jpg" border="2" width="160" height="240" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/brenstairs/2.html"><img src="http://www.ics.uci.edu/~eppstein/pix/brenstairs/2-m.jpg" border="2" width="240" height="160" style="border-color:black;" /></a></td>
</tr><tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/brenstairs/3.html"><img src="http://www.ics.uci.edu/~eppstein/pix/brenstairs/3-m.jpg" border="2" width="240" height="160" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/brenstairs/4.html"><img src="http://www.ics.uci.edu/~eppstein/pix/brenstairs/4-m.jpg" border="2" width="160" height="240" style="border-color:black;" /></a></td>
</tr></table></div>urn:lj:livejournal.com:atom1:11011110:295979Algorithmic representative democracy2014-09-10T05:15:24Z2014-09-10T07:32:47ZDid you ever wonder why different states of the US have the number of representatives in Congress that they do? It's supposed to be proportional to population, but that's not actually true: for instance, the ratio of representatives to population is about 40% higher in Montana than in California. <a href="https://en.wikipedia.org/wiki/United_States_congressional_apportionment#Apportionment_methods">What formula or algorithm do they use to pick the numbers?</a><br /><br />This has varied over the years but, Wikipedia tells me, currently it's the <a href="https://en.wikipedia.org/wiki/Huntington%E2%80%93Hill_method">Huntington–Hill method</a>. One way of describing this is by a simple but inefficient algorithm, with the following steps:<br /><br />1. Give each state a representative, since they all have to have at least one.<br /><br />2. Repeatedly, until the desired total number of representatives has been assigned, prioritize the states by <img src="http://www.ics.uci.edu/~eppstein/0xDE/HuntingtonHill.gif" alt="P/sqrt(n(n+1))" valign="middle" /> and give one more representative to the state with the biggest priority.<br /><br />The problem of assigning seats to parties after a parliamentary election is very similar, using votes instead of population, but in that case it's ok for some parties to get zero seats. This causes the formulas that are used for prioritizing the parties to use different divisors, typically linear functions of the number of seats already assigned rather than the square root thing used here.
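As a concrete illustration, here is a short Python sketch of the two-step procedure, using a priority queue and the Huntington–Hill divisor √(n(n+1)); the function and variable names are mine, not from any official implementation:

```python
import heapq
from math import sqrt

def huntington_hill(populations, house_size):
    """Apportion house_size seats among states by the Huntington-Hill method.

    populations: dict mapping state name -> population.
    Step 1: every state starts with one seat.
    Step 2: repeatedly give a seat to the state maximizing P / sqrt(n*(n+1)),
    where n is its current seat count.  heapq is a min-heap, so priorities
    are stored negated.
    """
    seats = {state: 1 for state in populations}
    heap = [(-pop / sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(house_size - len(populations)):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        n = seats[state]
        heapq.heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return seats
```

With the priority queue this takes time proportional to the number of seats times the log of the number of states; the point of the preprint discussed below is that one can do better.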
This general type of apportionment method is called a <a href="https://en.wikipedia.org/wiki/Highest_averages_method">highest averages method</a>.<br /><br />The question asked by my most recent preprint (<a href="http://arxiv.org/abs/1409.2603">"Linear-time Algorithms for Proportional Apportionment", arXiv:1409.2603</a>, with Jack Cheng, to appear at ISAAC 2014) is: how quickly can you assign seats using these methods? Using the procedure described above, with a priority queue to do the prioritization, would take an amount of time slightly superlinear in the number of seats. But it turns out we can do quite a bit better: linear in the number of parties or states getting the seats. Probably this doesn't matter for actual elections, the slow part of which is collecting all the votes. But it might be useful if you want to run a lot of simulated elections, or to use apportionment algorithms for problems where the number of things being apportioned is much larger than the number of congressional representatives.<br /><br />There's also a nice way of viewing these problems more abstractly: suppose we have <i>n</i> different infinite arithmetic progressions, and an input parameter <i>k</i>. How quickly can we find the <i>k</i>th smallest value in the disjoint union of the progressions? Answer: O(<i>n</i>) arithmetic operations, independently of <i>k</i>. For the parliamentary apportionment problem, you get these sequences by turning the priorities upside down, with the linear function of the number of representatives as the numerator and the number of votes as the denominator. For the congressional problem, this gives something that is not exactly an arithmetic progression, but it's close enough to one that the same algorithms work with only minor modification.<br /><br />Incidentally, there's a footnote on p. 3 of the preprint about two seemingly very relevant references, in Japanese, whose titles claim that they give linear time algorithms for related problems.
Unfortunately despite attempts to contact both the authors of these references and the reviewer who used them as a reason to downvote our paper, we have been unable to obtain them <s>nor even to verify that they actually exist,</s> let alone to determine which variable their time is linear in and which apportionment methods they apply to. So we don't actually know whether our algorithms or results are really new. If anyone reading this has better access to these sources, we'd appreciate any help you could give us. ETA: I now have a copy of the IEICE Trans. D one, but haven't yet examined it. Apparently part of the difficulty is that there are two different IEICE Trans. D.'s.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:295706Efficiency of Rado graph representations2014-09-06T06:47:45Z2014-09-06T06:47:45ZThe <a href="https://en.wikipedia.org/wiki/Rado_graph">Rado graph</a> has interesting symmetry properties and plays an important role in the <a href="http://11011110.livejournal.com/295010.html">logic of graphs</a>. But it's an infinite graph, so how can we say anything about the complexity of algorithms on it?<br /><br />There are algorithmic problems that involve this graph and are independent of any representation of it, such as checking whether a first-order logic sentence is true of it (PSPACE-complete). But I'm interested here in problems involving the Rado graph where different ways of constructing and representing the graph lead to different algorithmic behavior. For instance: the Rado graph contains all finite graphs as induced subgraphs. How hard is it to find a given finite graph? 
Answer: it depends on how the Rado graph is represented.<br /><br />Historically, the first way of constructing this graph involved the binary representations of the natural numbers: each vertex corresponds to a number, and vertices x and y are adjacent when the smaller of the two numbers is the index of a nonzero bit in the binary representation of the larger of the two numbers. For this representation it's very easy to find a copy of any given graph G as an induced subgraph: just find copies of the vertices of G one at a time. Each of the vertices you've already found, and its adjacency or nonadjacency to the next vertex, tells you one bit of the binary representation of the next number, and you just have to pad those bits with zeros in the remaining places to find the next vertex. The trouble is, these numbers grow very quickly, roughly as a tower of powers of two whose number of levels is the number of vertices in the graph. For instance, if you want to find a 5-vertex complete subgraph of the Rado graph, you can do it with the numbers 0, 1, 3, 11, 2059, but (according to <a href="https://oeis.org/A034797">OEIS A034797</a>) the smallest number you can use to extend this to a 6-vertex clique already has 620 decimal digits. And the one after that has more like 2^{2059} bits in its binary representation, too many to write down even in the biggest computers. So the algorithm for finding a given graph is easy to describe but not very efficient.<br /><br />An alternative construction just chooses randomly, for each pair of vertices, whether they form the endpoints of an edge. With infinitely many vertices, the result of these random choices is almost certainly the Rado graph. That's not a representation that can be used in a computer, but we could imagine an algorithm that had access to it as some sort of oracle.
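Returning to the binary-number construction for a moment: its adjacency test and its greedy clique extension are each a one-liner. Here is a small sketch (function names are mine) that reproduces the numbers 0, 1, 3, 11, 2059 mentioned above:

```python
def rado_adjacent(x, y):
    """BIT-predicate Rado graph: vertices are natural numbers, and the
    smaller number indexes a bit in the binary representation of the
    larger number."""
    a, b = min(x, y), max(x, y)
    return (b >> a) & 1 == 1

def clique_extension(clique):
    """Smallest number adjacent to every vertex of the given clique:
    set exactly the bits indexed by the existing vertices, padding all
    other positions with zeros."""
    return sum(1 << v for v in clique)
```

For example, clique_extension([0, 1, 3, 11]) returns 2059, and one more step produces the 620-digit number mentioned above. The randomly chosen representation is a very different story.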
With this representation, an n-vertex graph should occur much more quickly: if G is such a graph, then the expected number of copies of G among the first N vertices of the Rado graph starts getting large when N is roughly 2^{n/2}. And that's the best you could hope for in any representation, because with fewer vertices there aren't enough n-tuples of vertices to cover all the different induced subgraphs that could exist. But finding a copy of G among these vertices would be difficult. Even for finding a clique, we don't know anything much better than trying all n-tuples of vertices and seeing which ones work. (Finding a clique of size approximately 2 log_2 N in an N-vertex random graph in polynomial time is a well known open problem even though we know that a clique of that size should usually exist.) Yet another construction of the Rado graph, based on the idea of Paley graphs, probably behaves similarly to the random construction but is difficult to prove much about.<br /><br />Here's a construction of an infinite graph in which induced subgraphs of any type are easy to find: instead of using binary numbers, use binary strings, of all possible lengths including zero. For any two strings s and t, connect them by an edge if s is shorter than t and the position of t indexed by the length of s is nonzero (or vice versa). Then you can build an n-vertex graph one vertex at a time, by using one bitstring of each length less than n, with the bits in each string given by the adjacencies to the earlier vertices with no padding. The copy of an n-vertex graph G will be somewhere in the first 2^n vertices (not 2^{n/2}), and the names of these vertices can be calculated and written down by an algorithm in time O(n^2) (matching the description complexity of G in terms of its adjacency matrix). But this is not the Rado graph. 
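The string-based construction is also easy to sketch in code (helper names are mine): vertex i of the target graph becomes the length-i string of its adjacency bits to the earlier vertices, with no padding needed.

```python
def string_adjacent(s, t):
    """Binary strings s, t are adjacent when the strictly shorter one's
    length indexes a '1' in the longer one."""
    if len(s) == len(t):
        return False
    shorter, longer = sorted((s, t), key=len)
    return longer[len(shorter)] == '1'

def embed(adjacency_matrix):
    """Embed a graph given by a 0/1 adjacency matrix: vertex i is the
    string of its adjacency bits to vertices 0..i-1."""
    n = len(adjacency_matrix)
    return [''.join('1' if adjacency_matrix[i][j] else '0' for j in range(i))
            for i in range(n)]
```

For a triangle this produces the strings '', '1', and '11', pairwise adjacent, in time quadratic in the number of vertices. Still, as just noted, this graph is not the Rado graph.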
For instance, for the two binary strings "0" and "1", there is only one vertex in the graph adjacent to one and not the other (the empty string) whereas the Rado graph has infinitely many such vertices. One could construct a copy of the Rado graph by interspersing this construction with a very small number of random vertices, small enough that they don't affect the complexity of this subgraph-finding algorithm, but that seems a bit of a cheat.<br /><br />One way that it's a cheat is that it doesn't use the full power of the Rado graph. The actual defining property of the Rado graph is that if you start building a given induced subgraph, vertex by vertex, you can never make a mistake: it's always possible to add one more vertex. Or, more abstractly, if you have any two sets A and B of vertices in the Rado graph, there's always another vertex v that's adjacent to everything in A and nothing in B. By choosing A to be the set of already-placed vertices that are adjacent to the next vertex, and B to be the set of already-placed vertices that are not adjacent to the next vertex, you can use this property to find each successive vertex in an arbitrary induced subgraph. The graph of binary strings described above does not have this property, because when A={"0"} and B={"1"} there's no vertex v that matches.<br /><br />Is it possible to construct the Rado graph in such a way that the extension property becomes as easy as the subgraph property was for the graph of binary strings? The short answer is that I don't know. One attempt at an answer would be to build it in levels, much like the binary string graph can be divided into levels by the length of its strings. In the kth level, we include a collection of vertices that extends all subsets of k vertices from the previous level. But what is this collection? If there are N vertices in the previous level, then the vertices of the kth level can be described by N-bit bitstrings specifying their adjacencies. 
We want to choose as small as possible a set of bitstrings with the property that all k-tuples of previous vertices can be extended; a more geometric way to describe this is that we want to find a small set D of points in the N-dimensional hypercube that hits every (N-k)-dimensional subcube. Exactly this problem was one of the ones I studied in my recent paper "<a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/v21i3p20">grid minors in damaged grids</a>". But the proof in that paper that D can be relatively small (Theorem 14) uses the probabilistic method, meaning essentially that it chooses a random set of the right number of hypercube points. So as a way of constructing Rado graphs in which the extension property is efficient, it is not an improvement over the method of choosing edges randomly. But maybe this nondeterministic proof that a good set exists can lead the way to a deterministic and efficient construction?<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:295544Linkage2014-09-01T05:59:14Z2014-09-01T05:59:14ZMore Google+ links from the last couple of weeks:<br /><ul><li>An interview with Haida artist <a href="https://en.wikipedia.org/wiki/Jim_Hart_%28artist%29">Jim Hart</a> (<a href="https://plus.google.com/100003628603413742554/posts/dC2GkH8wQVK">G+</a>):<br /><lj-embed id="52" /></li><br /><li><a href="http://chance.amstat.org/2012/11/interview-with-persi-diaconis/">Persi Diaconis discusses mathematics and magic</a> (<a href="https://plus.google.com/100003628603413742554/posts/dXapc9JXucR">G+</a>)</li><br /><li><a href="http://cstheory.stackexchange.com/questions/25509/edit-distance-in-sublinear-space">A still-unsolved question about whether it's possible to compute edit distance in sublinear space and polynomial time</a> (<a href="https://plus.google.com/100003628603413742554/posts/6qtvh6gAJht">G+</a>)</li><br /><li><a href="http://www.nytimes.com/interactive/2014/08/13/us/starbucks-workers-scheduling-hours.html">A New York
Times story about how scheduling software makes part-time workers' lives harder</a>. Or does it? The <a href="http://www.metafilter.com/141920/Working-anything-but-9-to-5">MF discussion of the article</a> makes it clear that managers have been doing the same things with lower tech for a long time. (<a href="https://plus.google.com/100003628603413742554/posts/8PmLWMEpyAM">G+</a>)</li><br /><li><a href="http://makingsocg.wordpress.com/2014/08/22/colocation-with-stoc-2016/">Kerfuffle over SoCG colocation with STOC</a>, later <a href="http://makingsocg.wordpress.com/2014/08/27/mea-culpa-and-good-news/">resolved</a> (<a href="https://plus.google.com/100003628603413742554/posts/VWq1DHWHwW6">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Barrier_resilience">Barrier resilience</a> on Wikipedia (<a href="https://plus.google.com/100003628603413742554/posts/8B1xmfEEzxb">G+</a>)</li><br /><li><a href="http://www.ted.com/talks/robert_lang_folds_way_new_origami">Robert Lang talks</a> about the way mathematics done purely for its aesthetic value (in this case mathematical origami) can turn around and have practical applications. (<a href="https://plus.google.com/100003628603413742554/posts/4iWzqDnwNDm">G+</a>)</li><br /><li><a href="http://www.newyorker.com/magazine/2014/09/01/troll-slayer">The Troll Slayer</a>. New Yorker profile of classics professor Mary Beard, who knows better than most exactly how long men have been silencing women. 
(<a href="https://plus.google.com/100003628603413742554/posts/aqEde6zwK7Z">G+</a>)</li><br /><li><a href="http://www.kokomotribune.com/news/nation_world_news/article_7028faa8-2d42-11e4-b5a1-0019bb2963f4.html">A study on how social media causes us to self-censor our opinions</a> (<a href="https://plus.google.com/100003628603413742554/posts/X1ZH2ZemiiM">G+</a>)</li><br /><li><a href="http://www.ocregister.com/articles/internet-633032-jordan-policy.html">My UCI colleague Scott Jordan takes a position advising the FCC about net neutrality</a> (<a href="https://plus.google.com/100003628603413742554/posts/WDA8kJfvrq4">G+</a>)</li><br /><li><a href="http://zacharyabel.com/sculpture/">Sculpture by Zachary Abel</a>, one of my new co-authors on the <a href="http://11011110.livejournal.com/295261.html">flat-folding paper</a> (<a href="https://plus.google.com/100003628603413742554/posts/LEs7sm7BVcy">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:295261Flattening things that aren't already flat2014-08-29T05:46:29Z2014-08-29T06:04:37ZThe last of my Graph Drawing preprints is also the first of the papers to see daylight from the <a href="http://11011110.livejournal.com/286162.html">workshop last spring in Barbados</a>: <a href="http://arxiv.org/abs/1408.6771">"Flat Foldings of Plane Graphs with Prescribed Angles and Edge Lengths"</a> (arXiv:1408.6771, with Zachary Abel, Erik Demaine, Martin Demaine, Anna Lubiw, and Ryuhei Uehara).<br /><br />It's part of what has become a long line of research on trying to understand which shapes can be <a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/SquashedFlat">squashed flat</a>, starting with <a href="https://en.wikipedia.org/wiki/Maekawa's_theorem">Maekawa's theorem</a> (every vertex of a flat-folded sheet of paper has to have numbers of mountain folds and valley folds that differ by two) and <a href="https://en.wikipedia.org/wiki/Kawasaki's_theorem">Kawasaki's theorem</a> (alternating angles at each vertex must sum to π).
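Kawasaki's condition is simple to check; here is a small sketch (the function name is mine) that takes the cyclic sequence of sector angles between consecutive creases around a vertex:

```python
from math import isclose, pi

def kawasaki_flat_foldable(angles):
    """Kawasaki's condition for a single vertex interior to flat paper:
    there must be an even number of creases, the sector angles must sum
    to 2*pi, and the even-indexed angles must sum to pi (so the
    odd-indexed ones do too, making the alternating sum zero)."""
    if len(angles) % 2 != 0:
        return False
    if not isclose(sum(angles), 2 * pi):
        return False
    return isclose(sum(angles[0::2]), pi)
```

For instance, four right angles (a plus-sign crease pattern) pass the test, while the sequence π/2, π/4, 3π/4, π/2 fails.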
If you have a piece of paper with a desired set of fold lines marked on it, and these fold lines all converge on a single vertex, then it's possible to exactly characterize whether you can fold it flat along those lines. A greedy algorithm can find a way to fold it efficiently, and a version of the <a href="https://en.wikipedia.org/wiki/Carpenter's_rule_problem">Carpenter's rule theorem</a> implies that, when a flat-folded state exists, you can always get to it by a continuous motion. But as Bern and Hayes showed at SODA 1996, the restriction to a single vertex is crucial: the folds at different vertices can interact with each other in complicated ways, making flat foldability of multi-vertex folding patterns NP-complete.<br /><br />The new paper follows an earlier one at ISAAC 2011 and IJCGA 2013, <a href="http://erikdemaine.org/papers/LinearEquilateral_IJCGA/">"Folding equilateral plane graphs"</a>, by an overlapping set of authors, in studying the flat-foldability of objects that are not just flat sheets of paper. But in order to avoid Bern and Hayes' NP-hardness proof, we again restrict our attention only to objects with a single vertex. That is, we have some sort of complex of flat polygons connected by piano hinges at their edges, but all the hinges meet at a point. As in the following illustration:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/two-cycles.png"></div><br /><br />The blue sphere highlights the vertex, but it also indicates a way to think of the problem in only two dimensions instead of three: cut the complex through the surface of the sphere so that each polygon turns into a line segment (or really a segment of a great circle), each hinge turns into a point, and what you have is just a planar graph drawing. The line segments are rigid but the hinges at their endpoints make the whole drawing floppy (within the plane it's defined in). 
The question is whether it is sufficiently floppy that it can be made to lie flat along a line.<br /><br />Our key insight is in some sense the opposite of Bern and Hayes': the folds within different faces of this planar graph cannot interact with each other at all. This is not obvious, or at least it wasn't to us. But with this in hand, the rest is easy: we already know how to flatten things with one face using the results about folding patterns on flat sheets of paper, so to test whether a complex can be flattened we just need to test each of its faces separately. By using dynamic programming within each face and then multiplying the results, we can also count the number of different flat-folded states.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:295010A brief introduction to the logic of graphs2014-08-28T00:43:41Z2014-08-31T20:15:20ZIf you're used to writing mathematics, but haven't paid much attention to model theory, you probably think a fully-quantified mathematical sentence is generally either true or false. Fermat's last theorem, for instance, can be written as the following sentence: For all positive integers <i>a</i>, <i>b</i>, <i>c</i>, and <i>n</i>, if <i>n</i> > 2, then <i>a</i><sup><i>n</i></sup> + <i>b</i><sup><i>n</i></sup> ≠ <i>c</i><sup><i>n</i></sup>. This sentence is fully quantified: the four variables <i>a</i>, <i>b</i>, <i>c</i>, and <i>n</i> are all covered by the quantifier "for all positive integers". It's one of the true ones, if difficult to prove.<br /><br />But when we're working with the logic of graphs, a (fully-quantified) sentence is itself just another mathematical object, and its truth is relative: it might be true for some graphs and false for others. 
Consider, for instance, the following sentence about an undirected graph: "There exist vertices <i>v</i> and <i>w</i> such that <i>v</i> and <i>w</i> are adjacent, and for all vertices <i>u</i>, if <i>u</i> and <i>v</i> are adjacent, then <i>u</i> equals <i>w</i>." It can be satisfied only when <i>v</i> is a vertex whose degree is exactly one, and <i>w</i> is its unique neighbor. We can write this more concisely using a notation in which adjacency is written using a tilde:<br /><br /><div align="center"><img src="http://latex.codecogs.com/gif.latex?\exists&space;v,w:&space;(v\sim&space;w)\wedge&space;(\forall&space;u:&space;(u\sim&space;v)&space;\rightarrow&space;(u&space;=&space;w))" title="\exists v,w: (v\sim w)\wedge (\forall u: (u\sim v) \rightarrow (u = w))" /></div><br /><br />Let's give this sentence a name, <i>D</i><sub>1</sub>. Then <i>D</i><sub>1</sub> is true for a graph that has a degree-one vertex, such as the complete bipartite graph <i>K</i><sub>1,4</sub>. But it's false for a graph that doesn't have such a vertex, such as the complete graph <i>K</i><sub>4</sub>. If a sentence is true for a graph, we say that the graph "models" the sentence, and we can also write that in mathematical notation:<br /><br /><div align="center"><img src="http://latex.codecogs.com/gif.latex?K_{1,4}\models&space;D_{1}" title="K_{1,4}\models D_{1}" /></div><br /><br />This kind of logic, in which the only things we can talk about are vertices and their adjacencies, is called the first order logic of graphs, and it's kind of weak. Each of its sentences is equivalent to an algorithm that can contain nested loops over vertices, if-then-else logic involving adjacency tests and equality, and the ability to return Boolean values, but nothing else. For instance:<br /><br /><pre>def d1(G):
    for v in G:
        for w in G:
            if G.adjacent(v,w):
                for u in G:
                    if G.adjacent(u,v):
                        if u != w:
                            break
                else:
                    return True
    return False</pre><br /><br />This is good enough to recognize some families of graphs (such as the ones with a finite set of forbidden induced subgraphs) but not many others. For instance, I don't know how to describe the <a href="https://en.wikipedia.org/wiki/Distance-hereditary_graph">distance-hereditary graphs</a> in this way. They can be described by forbidden induced subgraphs, but infinitely many of them, and we're not allowed to write infinitely-long sentences.<br /><br />On the other hand, the weakness of first-order logic means that we can prove interesting facts about it. For instance, every first-order sentence defines a family of graphs that can be recognized in polynomial time. Also, we have the 0-1 law: if <i>S</i> is any sentence in first-order logic then the probability that a graph chosen uniformly at random among all <i>n</i>-vertex graphs models <i>S</i> is either zero or one in the limit as <i>n</i> goes to infinity. Using the 0-1 law, even though we can't describe the distance-hereditary graphs precisely in first-order logic, we can get an approximate description that's good enough to prove that almost all random graphs are not distance-hereditary. A distance-hereditary graph either has a degree-one vertex (it models <i>D</i><sub>1</sub>) or it has two twin vertices, vertices whose sets of neighbors (not counting the two twins themselves) are identical.
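To actually run the loop-based version of <i>D</i><sub>1</sub>, one needs a concrete graph object; this self-contained sketch (the tiny Graph class is mine) checks the two examples from the text, <i>K</i><sub>1,4</sub> and <i>K</i><sub>4</sub>:

```python
class Graph:
    """Minimal graph: iterable over vertices, with an adjacency test."""
    def __init__(self, n, edges):
        self.n = n
        self.edges = {frozenset(e) for e in edges}
    def __iter__(self):
        return iter(range(self.n))
    def adjacent(self, v, w):
        return frozenset((v, w)) in self.edges

def d1(G):
    """The sentence D1: some vertex v has exactly one neighbor w."""
    for v in G:
        for w in G:
            if G.adjacent(v, w):
                for u in G:
                    if G.adjacent(u, v) and u != w:
                        break
                else:
                    return True
    return False

K14 = Graph(5, [(0, 1), (0, 2), (0, 3), (0, 4)])  # star K_{1,4}
K4 = Graph(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
```

As the text says, <i>K</i><sub>1,4</sub> models <i>D</i><sub>1</sub> (each leaf has degree one) while <i>K</i><sub>4</sub> does not.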
The existence of twins can also be described by a first-order sentence <i>T</i>:<br /><br /><div align="center"><img src="http://latex.codecogs.com/gif.latex?\exists&space;u,v:&space;(u\ne&space;v)\wedge\forall&space;w:&space;(u=w)\vee(v=w)\vee((u\sim&space;w)\leftrightarrow(v\sim&space;w))" title="\exists u,v: (u\ne v)\wedge\forall w: (u=w)\vee(v=w)\vee((u\sim w)\leftrightarrow(v\sim w))" /></div><br /><br />But for a uniformly random graph, both the expected number of degree-one vertices and the expected number of twin pairs can be calculated directly from these formulas, and are exponentially small in the number <i>n</i> of vertices. So almost all graphs do not model <i>D</i><sub>1</sub>, do not model <i>T</i>, and therefore cannot be distance-hereditary.<br /><br />The name "first order" should be a hint that there's another kind of logic, "second order logic", and there is. In second order logic, the variables can represent complicated structures built out of <i>k</i>-ary relations (for instance, entire graphs), the quantifiers quantify over these structures, and we need more primitives to be able to look at what's inside these structures. The idea of using second order logic seems to be somewhat controversial in mathematics, in part because there's not a uniquely-defined way of assigning meanings to statements in this logic, but there's a restricted subset of the second order logic of graphs, called monadic second order logic, where these problems do not arise. Or actually there are two such subsets: MSO<sub>1</sub> and MSO<sub>2</sub>.<br /><br />MSO<sub>1</sub> is what you get from the first order logic described above when you add another type of variable for sets of vertices (usually written with capital letters) and you allow quantification over sets of vertices. The only other feature beyond first order logic that's necessary to define this logic is the ability to test whether a vertex belongs to a set.
It's convenient to write formulas using more complicated tests such as whether one set is a subset of another, but those can be broken down into membership tests. We can also get the effect of using these sets as variable types that can be quantified over, by instead quantifying over all vertices but then testing whether the results of the quantification belong to the given set. For instance we can write sentences <i>D</i><sub>1</sub>[<i>X</i>] and <i>T</i>[<i>X</i>] that have the same form as <i>D</i><sub>1</sub> and <i>T</i>, but restrict all their variables to be in <i>X</i>. The effect of this restriction would be to test whether the subgraph induced by <i>X</i> has a degree-one vertex or has twins. A distance-hereditary graph is a graph in which every induced subgraph of two or more vertices has a degree-zero vertex, a degree-one vertex or twins, and this logic allows us to express this definition in a sentence <i>DH</i>:<br /><br /><div align="center"><img src="http://latex.codecogs.com/gif.latex?\forall&space;X:&space;(\exists&space;u,v:&space;u\in&space;X\wedge&space;v\in&space;X\wedge&space;u\ne&space;v)\rightarrow(D_0[X]\vee&space;D_1[X]\vee&space;T[X])" title="\forall X: (\exists u,v: u\in X\wedge v\in X\wedge u\ne v)\rightarrow(D_0[X]\vee D_1[X]\vee T[X])" /></div><br /><br />A graph <i>G</i> models DH if and only if <i>G</i> is distance-hereditary. MSO<sub>2</sub> is similar, but allows four types of variables: vertices, edges, and sets of vertices and edges. The ability to represent sets of edges allows it to express some properties (such as the property of having a Hamiltonian cycle) that cannot be expressed in MSO<sub>1</sub>.<br /><br />Unlike first-order logic, we don't necessarily get efficient algorithms out of MSO expressions. Simulating the formula directly would involve an exponential-time search over all possible subsets of vertices or edges in a given graph. But that's not the only way to turn one of these formulas into an algorithm. 
In particular, we can apply <a href="https://en.wikipedia.org/wiki/Courcelle%27s_theorem">Courcelle's theorem</a>, which says that every MSO<sub>2</sub> formula can be translated into a fixed-parameter tractable algorithm on graphs parameterized by their <a href="https://en.wikipedia.org/wiki/Treewidth">treewidth</a>, and that every MSO<sub>1</sub> formula can be translated into an FPT algorithm on graphs parameterized by their <a href="https://en.wikipedia.org/wiki/Clique-width">clique-width</a>. In the example of the distance-hereditary graphs, we also know that all such graphs have bounded clique-width. So applying Courcelle and plugging in the fixed bound on the clique-width of these graphs immediately tells us that there's a polynomial time algorithm for recognizing distance-hereditary graphs.<br /><br />All of this is, I think, completely constructive: it's not just that an algorithm exists, but in principle we could automatically translate the formula into the algorithm. It's also completely useless in practice because the dependence on the parameter is ridiculously high (some kind of tower of powers). When an algorithm is found in this way, additional research is needed to find a more direct algorithm that reduces this dependence to something more reasonable like single-exponential with a small base, or even removes it to get a polynomial time algorithm. In the case of the distance-hereditary graphs, there's an easy polynomial algorithm: look for degree one vertices or twins, and whenever one of these patterns is found use it to reduce the size of the graph by one vertex. 
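That easy reduction algorithm can be sketched directly in code. This is a naive polynomial-time version (names mine), nowhere near the linear-time algorithms, again taking the graph as a dict of neighbor sets:

```python
from itertools import combinations

def is_distance_hereditary(adj):
    """Greedy reduction: while more than one vertex remains, delete a
    vertex of degree at most one, or one vertex of a twin pair.  The
    graph is distance-hereditary exactly when this reduces it to a
    single vertex, because deleting such a vertex leaves an induced
    subgraph, and a stuck graph with two or more vertices witnesses a
    violation of the defining property."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # private copy
    while len(adj) > 1:
        victim = next((v for v in adj if len(adj[v]) <= 1), None)
        if victim is None:
            victim = next((u for u, v in combinations(adj, 2)
                           if adj[u] - {v} == adj[v] - {u}), None)
        if victim is None:
            return False  # no low-degree vertex and no twins: not DH
        for w in adj.pop(victim):
            adj[w].discard(victim)
    return True
```

The twin test `adj[u] - {v} == adj[v] - {u}` covers both true twins (adjacent, same closed neighborhood) and false twins (non-adjacent, same open neighborhood) in one comparison.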
With a little more care one can even achieve linear time for distance-hereditary graph recognition.<br /><br />My latest preprint, <a href="http://arxiv.org/abs/1408.6321">"Crossing minimization for 1-page and 2-page drawings of graphs with bounded treewidth"</a> (arXiv:1408.6321, with Michael Bannister, to appear at Graph Drawing), uses this same logical approach to attack some problems related to <a href="https://en.wikipedia.org/wiki/Book_embedding">book embedding</a>. We had a paper with Joe Simons in GD 2013 that showed that, for graphs formed from trees by adding a very small number of edges, we can find 1-page and 2-page book drawings with a minimum number of crossings in FPT time. In the new paper, we characterize these drawings using second order logic and apply Courcelle's theorem, allowing us to generalize these algorithms to the graphs of low treewidth, a much larger class of inputs. But because we use Courcelle's theorem, our algorithms are completely impractical. More research is needed to find a way to minimize crossings in practice.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:294778Planarization by vertex deletion2014-08-27T05:35:22Z2014-08-27T05:35:22ZAnother of my Graph Drawing papers is online today: <a href="http://arxiv.org/abs/1408.5939">"Planar Induced Subgraphs of Sparse Graphs", arXiv:1408.5939</a>, with Cora Borradaile and her student Pingan Zhu. It's about finding large planar subgraphs in arbitrary graphs; in the version of the problem we study, we want the planar subgraph to be an induced subgraph, so the goal is to find as large a subset of vertices as possible with the property that all edges connecting them can be drawn planarly. Equivalently, we want to delete as few vertices as possible to make the remaining graph planar. 
It's NP-complete, of course, so our goal is to prove good bounds on how many vertices you need to delete rather than computing this number exactly.<br /><br />We were inspired to look at this sort of problem by <a href="http://people.math.gatech.edu/~thomas/PAP/acyc.pdf">a 2001 paper of Alon, Mubayi, and Thomas</a>, who proved among other things that in triangle-free graphs with <i>m</i> edges you can delete <i>m</i>/4 of the vertices and eliminate all the cycles in the graph (so the remaining graph is a forest). We knew also (too easy for a publication) that you can delete <i>m</i>/3 vertices and get a linear forest, delete <i>m</i>/2 vertices and get a matching, or delete <i>m</i>/1 vertices and get an independent set. So we were hoping that this would be part of a hierarchy of graph classes, sort of like the hierarchy coming from the <a href="https://en.wikipedia.org/wiki/Colin_de_Verdi%C3%A8re_graph_invariant">Colin de Verdière graph invariant</a>: delete <i>m</i>/5 vertices and get an outerplanar graph, delete <i>m</i>/6 vertices and get a planar graph, delete <i>m</i>/7 vertices and get...I don't know, some interesting class of nonplanar graphs. It didn't quite work out that way.<br /><br />We did get one result sort of in this vein: you can delete <i>m</i>/5 vertices and get a partial 2-tree. And this is in some sense optimal, because there are graphs (such as <i>K</i><sub>5</sub>) for which that's the biggest partial 2-tree you can get. But then fractional divisors started turning up: you can delete <i>m</i>/4.5 vertices and get a pseudoforest (again optimal). And the best we could get for finding planar subgraphs was even messier (and used linear programming to work out the precise bounds): you can delete 23<i>m</i>/120 (around <i>m</i>/5.22) vertices and get a planar graph. Probably not optimal.<br /><br />Ok, so the idea of getting a hierarchy with integer divisors is dead, but maybe there's still a hierarchy, just with messier numbers. 
Maybe, but if you want minor-closed graph families (as all the above ones are) then the divisors can't get arbitrarily big: if you choose a divisor <i>k</i> that's bigger than six, then no matter how you try to delete <i>m</i>/<i>k</i> vertices, the smallest minor-closed family you can get will be the class of all graphs. The proof involves the existence of 3-regular <a href="http://11011110.livejournal.com/285735.html">cages</a>, sparse graphs without any short cycles: if you start with a cage, and don't delete enough vertices, you'll get a graph that still has high girth and high <a href="https://en.wikipedia.org/wiki/Circuit_rank">cyclomatic number</a>. The high girth can be used to contract a lot of edges without introducing any self-loops or multiple adjacencies, giving a much denser minor of the remaining graph, dense enough that it necessarily has <a href="https://en.wikipedia.org/wiki/Hadwiger_number">large clique minors</a>.<br /><br />Maybe there's some way of rescuing the idea of a hierarchy of subgraph classes by using classes of graphs that aren't closed under minors, but have weaker sparsity properties, such as the <a href="https://en.wikipedia.org/wiki/1-planar_graph">1-planar graphs</a>? I don't know; that's as far as we were able to prove. But it might be an interesting subject for future research.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:294551Circle packings with small area2014-08-22T06:16:34Z2014-08-22T06:17:51ZThe second of my papers for this year's Graph Drawing symposium is now online: <a href="https://arxiv.org/abs/1408.4902">"Balanced Circle Packings for Planar Graphs", arXiv:1408.4902</a>, with Jawaherul Alam, Mike Goodrich, Stephen Kobourov, and Sergey Pupyrev. It's about finding <a href="https://en.wikipedia.org/wiki/Circle_packing_theorem">circle packings</a> (interior-disjoint circles whose tangencies form a given graph) where the radii are all within a polynomial factor of each other. 
Or equivalently, if one normalizes the packing to make the smallest circles have unit radius, then the <a href="https://en.wikipedia.org/wiki/Area_(graph_drawing)">area</a> of the packing should be at most polynomial. We call packings with this property "balanced".<br /><br />The graphs that can be represented by circle packings are all the planar graphs, but not all of these graphs have balanced packings. In particular, for a maximal planar graph, the packing is essentially unique (any two packings can be transformed into each other by a Möbius transformation) and one can easily find graphs for which the circles differ exponentially in their sizes. But if the graph is not maximal planar, then there can be a lot more freedom in choosing a circle packing for it, allowing us to find balanced ones for many subclasses of the planar graphs.<br /><br />Probably the most practical of our results (though not very deep) are that trees, cactus graphs, and bounded-degree outerplanar graphs all have balanced packings. The tree construction is particularly simple: represent the tree by tangent squares (with size proportional to the number of descendants, and with children attached to the bottom side of each square), fill each square by a circle, and then push the circles straight up until they touch.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/tree-balcpack.png"></div><br /><br />The result for bounded-degree outerplanar graphs follows by adding a balanced binary tree to the outside face to make the graph have bounded degree and logarithmic diameter, augmenting it to be maximal planar (still with bounded degree), and then applying the standard circle packing theorem. It is known that for bounded degree graphs one can find circle packings in which adjacent circles have radii within a constant factor of each other. 
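The tree construction just described is simple enough to sketch in code. This toy version (all names mine) sizes each square by its subtree, hangs the children's squares from the parent's bottom edge, and inscribes a circle in each square; the final step of pushing the circles straight up into tangency is omitted, but the radii, which are what balance is about, are already determined at this stage:

```python
def tree_square_circles(children, root):
    """Map each tree node to the inscribed circle (x, y, r) of its
    square, where the square's side is the node's subtree size and the
    child squares sit side by side just below the parent square's
    bottom edge.  Radii range from 1/2 up to n/2 for an n-node tree,
    so the packing is balanced: polynomial area after normalizing."""
    size = {}
    def measure(v):
        size[v] = 1 + sum(measure(c) for c in children.get(v, ()))
        return size[v]
    measure(root)
    circles = {}
    def place(v, x, y):               # (x, y): lower-left corner of v's square
        s = size[v]
        circles[v] = (x + s / 2, y + s / 2, s / 2)
        for c in children.get(v, ()):
            place(c, x, y - size[c])  # child's top edge touches parent's bottom
            x += size[c]
    place(root, 0.0, 0.0)
    return circles
```

Since the children's subtree sizes sum to one less than the parent's, their squares always fit along the parent's bottom edge.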
Because of the binary tree, any two circles are connected by a path of logarithmic length, and multiplying these constant factors along this path shows that the two circles' radii are within a polynomial of each other. With a little more care the same thing works if the starting graph is not outerplanar, but has at most a logarithmic number of layers from the outside face inwards. For two-layer graphs, bounded degree is necessary, but it might be possible that all outerplanar graphs, even the ones with unbounded degree, have balanced circle packings; we couldn't solve that question and left it open.<br /><br />A large fraction of the paper is taken up by what I think may be its deepest result (but certainly not its most useful): the planar graphs of bounded <a href="https://en.wikipedia.org/wiki/Tree-depth">tree-depth</a> such as the <a href="https://en.wikipedia.org/wiki/Book_(graph_theory)">book graphs</a> and <a href="https://en.wikipedia.org/wiki/Friendship_graph">friendship graphs</a> all have balanced circle packings. The result depends on a characterization of planar bounded-tree-depth graphs in terms of their <a href="https://en.wikipedia.org/wiki/SPQR_tree">SPQR trees</a>, and on a generalization of circle packings to non-tangent circles, called "<a href="https://en.wikipedia.org/wiki/Inversive_distance">inversive distance</a> circle packings". 
Not much was known about the existence of inversive-distance packings, but we were able to show that the ones we need for this result do exist and can be glued together using Möbius transformations to give us the balanced packings we want.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:294274Condorcet, Hugo, and sad puppies2014-08-19T00:22:57Z2014-08-19T00:22:57ZYesterday, this year's <a href="https://en.wikipedia.org/wiki/Hugo_Award">Hugo Award</a> winners <a href="http://www.loncon3.org/hugo_ceremony.php">were announced</a>; this is an annual fan popularity contest for the best works in science fiction and fantasy (there is a different set of awards voted on by the writers themselves, the Nebulas). I have a few thoughts on the nominees (like, why wasn't <i>Her</i> among them?) but that's not what I'm writing about. Rather, what interests me in this year's contest is the issue of voting systems and their resistance to manipulation.<br /><br />Some background: this year's award nomination and voting involved a group of fans from one of the subgenres of SF whose two main interests seem to be military jingoism and sexual and racial anti-inclusivity and who have apparently dubbed themselves the "sad puppies". These people pushed a slate of nominees onto the ballot, which then lost fairly decisively in the final voting. There was no unfair vote manipulation going on: everything was aboveboard and according to the rules. But it caused me to wonder: how large would a dedicated faction of the voters have to be to break into the winner's circle, against the will of the remaining voters?<br /><br />To model this we need to know something about how the voting worked and we need to make some assumptions (not visible in the actual <a href="http://www.thehugoawards.org/content/pdf/2014HugoStatistics.pdf">Hugo voting data</a>) about the details of the voter preferences. In the nomination stage, any eligible voter may cast one vote in any category. 
The top five vote winners (after removing ineligible candidates, candidates that received less than 5% of the total nominations, and candidates who decline the nomination) become the short list for the final voting. If there is a tie for fifth place, the tied candidates are all included. In the final voting, each eligible voter ranks their favorite candidates in order: first, second, etc. They are not required to rank all candidates, are allowed to list a special placeholder candidate called "no award", and are also allowed to list other candidates after no award. These preferences are aggregated using an algorithm (described below) to produce a winner for each category of the awards. I'm going to assume (without justification) that the sad puppies prefer their candidates, then no award, then all other candidates, while the other voters have the reverse preferences: all other candidates (in some combination of orders that I am not going to make assumptions about), then no award, then the sad puppy candidates.<br /><br />So, first, how easy is it to get your candidate on the short list of nominees? Pretty easy, it turns out. If the sad puppy faction were only 20% of the electorate, they would be able to guarantee themselves a place on the short list even if the remaining voters somehow conspired to make it as hard as possible for them: shutting out a 20% bloc would require five other candidates to each draw more than 20% of the nominations, but the remaining 80% of the voters can average only 16% across five candidates. The actual cutoff was much lower, ranging from 5.0% in the best short story category to 13.9% in the best long-form dramatic presentation category. The percentages achieved by the sad puppies ranged from 9.5% to a little higher, enough to secure their nominations.<br /><br />Next, a more complicated question: how easy is it for the sad puppies to actually win? The answer turns out to be: much harder. One indication of this is given by something called the <a href="https://en.wikipedia.org/wiki/Condorcet_criterion">Condorcet criterion</a> for preference balloting. 
This states that if one candidate would win every head-to-head contest, then that candidate should win the overall election; a system satisfying this criterion is immune to certain kinds of manipulation by minority factions. Unfortunately, there are two problems with this analysis for the Hugos. First, the assumptions I've made don't imply the existence of a Condorcet winner. There could be a directed cycle of head-to-head winners among the non-sad-puppy candidates, even though each of them individually would win a head-to-head contest against a sad puppy. When this happens, the Condorcet criterion says nothing about who the winner should be. A generalized Condorcet property fixes this: if the candidates can be partitioned into two sets A and B such that every candidate in A wins a head-to-head contest against every candidate in B, then the winner should be from A. A system that satisfies the generalized Condorcet property would never pick a sad puppy unless the sad puppies had an outright majority of the voters.<br /><br />But the second problem with trying to apply the Condorcet analysis to this situation is that the election system used for the Hugos does not obey the Condorcet criterion (in either its basic or generalized form). There are systems that do have the Condorcet property; my favorite is the <a href="https://en.wikipedia.org/wiki/Schulze_method">Schulze method</a>, which involves computing <a href="https://en.wikipedia.org/wiki/Widest_path_problem">widest paths</a> in a complete directed graph describing the head-to-head results between all pairs of candidates. If the Hugo awards used this method, the Condorcet criterion would ensure their immunity to sad puppies. But the method they actually use is <a href="https://en.wikipedia.org/wiki/Instant-runoff_voting">instant-runoff voting</a>. This method proceeds in a sequence of rounds, eliminating one candidate per round until a winner is left. 
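A toy implementation of those rounds (all names mine, with arbitrary-but-deterministic tie handling, which the real rules treat differently, and assuming for simplicity that every ballot ranks every candidate):

```python
from collections import Counter

def instant_runoff(ballots):
    """One elimination per round: each ballot counts for its highest
    remaining candidate, and the candidate with the fewest such votes
    is removed (ties broken by name here, purely for determinism)."""
    remaining = {c for ballot in ballots for c in ballot}
    while len(remaining) > 1:
        tally = Counter({c: 0 for c in remaining})
        for ballot in ballots:
            for c in ballot:
                if c in remaining:
                    tally[c] += 1
                    break        # exhausted ballots are discarded implicitly
        remaining.remove(min(remaining, key=lambda c: (tally[c], c)))
    return remaining.pop()

def condorcet_winner(ballots):
    """Return the candidate who beats every other head-to-head, or None."""
    candidates = {c for ballot in ballots for c in ballot}
    def wins(x, y):
        pref = sum(ballot.index(x) < ballot.index(y) for ballot in ballots)
        return 2 * pref > len(ballots)
    for x in candidates:
        if all(wins(x, y) for y in candidates - {x}):
            return x
    return None

# Q is every voter's second choice: a Condorcet winner with no
# first-place votes, so instant runoff eliminates Q immediately.
ballots = (4 * [["P", "Q", "R", "S"]]
           + 3 * [["R", "Q", "S", "P"]]
           + 2 * [["S", "Q", "P", "R"]])
```

In this election Q wins every pairwise contest, yet the instant runoff goes to P.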
In each round, each voter's vote is given to the remaining candidate that is highest on the voter's list (or the vote is discarded if no listed candidate remains) and the candidate with the fewest votes is kicked off the island. But this is not a Condorcet system; for instance, a candidate that is every voter's second choice would be eliminated in the first round (because that candidate gets no first place votes) but could easily be a Condorcet winner (if no other candidate gets an outright majority of first place votes).<br /><br />Instant-runoff does satisfy a different property, the <a href="https://en.wikipedia.org/wiki/Majority_criterion">majority criterion</a>: anyone who gets an outright majority of first-place votes will necessarily win, because their number of votes can only improve after other candidates are eliminated. This still doesn't help against the sad puppies, because outright majorities are unlikely (in the 2014 Hugos, it happened only for the fan artist category). But we can generalize it just like we can generalize the Condorcet criterion. Call the "generalized majority criterion" the following property: if the candidates can be partitioned into two sets A and B such that a majority of the voters thinks all candidates in A are better than all candidates in B, then the winner should be a candidate in A. Whenever a voting system satisfies the generalized Condorcet criterion it also satisfies the generalized majority criterion. Instant-runoff also satisfies the generalized majority criterion, because once we reach a round in which all but one of the candidates in A has been eliminated, the remaining candidate in A will have an outright majority and will win all remaining rounds. Using any voting system that obeys the generalized majority criterion, and with the assumptions about sad puppy voting patterns made above, the sad puppies can't win without an outright majority of the voters. 
If the sad puppies are not a majority, then a majority of voters agrees with the partition in which B is the sad puppy candidate and A is everybody else. In particular, the sad puppies can't win an instant runoff without a majority. The existence of "no award" doesn't really make much difference to this analysis: it would be valid with or without the ability to list no award.<br /><br />My colleague <a href="https://en.wikipedia.org/wiki/Donald_G._Saari">Don Saari</a> is an expert on voting systems and a strong advocate of a different preference balloting system, the <a href="https://en.wikipedia.org/wiki/Borda_count">Borda count</a>. This system is easier to explain than instant-runoff (and much easier than Schulze): if there are six candidates (counting no award) we give five points for each first place vote, four for each second place vote, etc., down to one point for a second-to-last vote, and give the award to whoever gets the most points. How does this method fare against the sad puppies? Not so well. If the other candidates are close to equal in strength, then their voters will split their votes evenly among 5, 4, 3, and 2 points, with 1 for no award. The average number of points per candidate will be 3.5x where x is the number of non-sad-puppy voters. On the other hand the sad puppies will vote their candidate first, and then (if they're trying to get the strongest result for this candidate) omit listing anybody else to deny them the points they would get for lower finishes. This would give the sad puppies 5y points where y is the number of sad puppy voters. If 5y > 3.5x (that is, if the sad puppies have a bit more than 41% of the electorate) then they have a chance of winning. In the limit of large numbers of candidates, even a 33% minority of sad puppies could be enough to swing an election. 
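Those thresholds can be reproduced with exact arithmetic, under the same bloc-voting assumptions as above (the function name and parameterization are mine):

```python
from fractions import Fraction

def bloc_borda_threshold(num_slots):
    """Fraction of the electorate at which a bloc voting only for its
    own candidate starts to out-score the others on average: num_slots
    ranked positions score num_slots-1 down to 0, bloc ballots are
    truncated after their own candidate (top score, nothing else), and
    the remaining voters split the top positions evenly among the
    num_slots-2 other candidates, ranking 'no award' just above the
    bloc's candidate."""
    top = num_slots - 1                      # score for a first-place vote
    avg = Fraction(sum(range(2, num_slots)), num_slots - 2)
    return avg / (top + avg)                 # solve top*y = avg*x for y/(x+y)
```

With six slots (five finalists plus no award) this gives 7/17, a bit more than 41%, and as the number of slots grows it approaches 1/3, matching the figures above.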
That is, although the Borda count has some robustness against small minority factions, it is more vulnerable to large minority factions than Schulze or instant runoff.<br /><br />As for which election system to choose: it depends on what you want winning to mean. The system you should use for a popularity contest such as this one could well be different than the system you would prefer for a political office. Is it better for the winner to be somewhat liked by most voters or to have strongly enthusiastic support by a smaller number of voters? The instant runoff system used by the Hugos demands a balance of both: a winner needs enough enthusiasm to make it through the early rounds of voting and enough depth of support to make it through the late rounds.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:293984Linkage2014-08-16T04:59:59Z2014-08-16T04:59:59ZSome links I've posted over on Google+ over the last couple of weeks (and reposted here, among other reasons, because I don't trust G+ to give me a usable interface for finding all of my old posts there):
<ul>
<li><p><a href="http://blogs.ams.org/visualinsight/2014/08/01/733-honeycomb/">{7,3,3} Honeycomb</a>, an interesting polyhedral tessellation of hyperbolic space with a <a href="http://blogs.ams.org/visualinsight/2014/08/14/733-honeycomb-meets-plane-at-infinity/">fractal boundary</a> (<a href="https://plus.google.com/100003628603413742554/posts/DuCPX2YVjAD">G+</a>)</p></li>
<li><p><a href="http://www.kansascity.com/news/local/article811395.html">How to distort school rankings in your favor</a> (<a href="https://plus.google.com/100003628603413742554/posts/gosX8zZSAqB">G+</a>)</p></li>
<li><p><a href="http://www.boredpanda.com/iran-mosque-photography-mohammad-domiri/">Some impressive fisheye photography of the heavily patterned interiors of Iranian mosques</a> (<a href="https://plus.google.com/100003628603413742554/posts/f5qR9QRXDPN">G+</a>)</p></li>
<li><p><a href="http://www.smithsonianmag.com/travel/winding-history-maze-180951998/?no-ist">A brief history of mazes and labyrinths</a>, in honor of a new one in place at the Smithsonian (<a href="https://plus.google.com/100003628603413742554/posts/Qyq2iD8Dpns">G+</a>)</p></li>
<li><p><a href="http://www.combinatorics.org/ojs/index.php/eljc/article/view/v21i3p20/pdf">The journal version of my paper "Grid Minors in Damaged Grids"</a> (<a href="https://plus.google.com/100003628603413742554/posts/4D3N9bV6a9R">G+</a>)</p></li>
<li><p><a href="http://crookedtimber.org/2014/08/06/another-anti-zionist-professor-punished-for-his-views/">UIUC rescinds the hire of an outspoken critic of Zionism</a> (<a href="https://plus.google.com/100003628603413742554/posts/H7ARbdSSzKH">G+</a> – as one might guess this led to much discussion)</p></li>
<li><p><a href="http://www.thisiscolossal.com/2014/08/eric-standley-laser-cut-paper-windows/">Stained Glass Windows Made from Stacked Laser-Cut Paper</a> (<a href="https://plus.google.com/100003628603413742554/posts/7gjgi1S7nHu">G+</a>)</p></li>
<li><p><lj-embed id="47" /> (<a href="https://plus.google.com/100003628603413742554/posts/WotzqKwMwxc">G+</a>)</p></li>
<li><p><a href="http://www.newscientist.com/article/dn26041-proof-confirmed-of-400yearold-fruitstacking-problem.html">Hales finally gets computer verification of his sphere-packing proof</a>, with inflammatory statement about journal referees (<a href="https://plus.google.com/100003628603413742554/posts/Kg6gyj984bX">G+</a>)</p></li>
<li><p><a href="http://blogs.lse.ac.uk/impactofsocialsciences/2014/04/23/academic-papers-citation-rates-remler/">Are 90% of academic papers really never cited?</a> For some reason this came up on a Wikipedia deletion discussion: someone wanted to argue that a half-dozen publications with 100+ citations each shouldn't count for much because basically everyone achieved that just by waiting long enough, and used this as evidence. (<a href="https://plus.google.com/100003628603413742554/posts/dCb2wL88Ms8">G+</a>)</p></li>
<li><p><a href="http://www.thisiscolossal.com/2014/08/functional-chocolate-legos-by-akihiro-mizuuchi/">Chocolate LEGOs</a> and <a href="http://boingboing.net/2014/08/15/wall-made-of-jello-bricks.html">other edible construction materials</a> (<a href="https://plus.google.com/100003628603413742554/posts/a8SMoDC9A83">G+</a>)</p></li>
</ul>urn:lj:livejournal.com:atom1:11011110:293869Museum of Anthropology2014-08-15T06:51:45Z2014-08-15T06:51:45ZMy <a href="http://www.ics.uci.edu/~eppstein/pix/moa/index.html">photos from the UBC Museum of Anthropology</a> are online now. Here is a sampling of thumbnails:<br /><br /><div align="center"><table border="0" cellpadding="10">
<tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/moa/ToShareHistory.html"><img src="http://www.ics.uci.edu/~eppstein/pix/moa/ToShareHistory-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/moa/GyaaGang.html"><img src="http://www.ics.uci.edu/~eppstein/pix/moa/GyaaGang-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/moa/TwoPoles.html"><img src="http://www.ics.uci.edu/~eppstein/pix/moa/TwoPoles-s.jpg" border="2" style="border-color:black;" /></a></td>
</tr><tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/moa/Dzunukwa.html"><img src="http://www.ics.uci.edu/~eppstein/pix/moa/Dzunukwa-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/moa/Remember.html"><img src="http://www.ics.uci.edu/~eppstein/pix/moa/Remember-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/moa/AguasDelRio.html"><img src="http://www.ics.uci.edu/~eppstein/pix/moa/AguasDelRio-s.jpg" border="2" style="border-color:black;" /></a></td>
</tr></table></div><br /><br />The museum is mostly devoted to Pacific Northwest First Nations art, but as a living culture rather than as something that happened in the past, so it includes a mix of older cultural artifacts with more modern art. It also has a gallery of Pacific Rim cultures, and a couple of rotating exhibit spaces; for our visit, one of them was on "urban aboriginal youth" and the other was a show of Afro-Cuban art including two sculptures shown above. Definitely worth a visit if you're in the Vancouver area, despite the minor inconvenience of getting there from downtown (we took a city bus).urn:lj:livejournal.com:atom1:11011110:293614Jun Ren, Freezing Water #7, Vanier Park2014-08-11T04:13:40Z2014-08-11T04:13:40ZI just returned from a short vacation in Vancouver (unrelated to SIGGRAPH, also happening there now) and took <a href="http://www.ics.uci.edu/~eppstein/pix/van/index.html">a few snapshots, mostly of boats or totem poles</a>. Another batch of photos from MOA, also with many totem poles, is still to come. But here's one that has neither:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/van/JunRenFreezingWater-m.jpg" border="2" style="border-color:black;" /></div><br /><br />Obviously, I don't understand the rules of photographic composition. By any rational standard, the tree should not be at the center. It's not the subject, it attracts too much attention to itself there, and centering typically makes the image very static. But when I cropped the shot (mostly to put it into this panoramic aspect ratio) the tree insisted that that's where it had to be. I don't understand why. Also, the small bush on the right, that seems like a distraction, is necessary. I had another version of this image from a slightly different perspective that eliminated the bush, and it didn't work as well. 
Again, I can't explain why.<br /><br />Some other Vancouver stuff I enjoyed but didn't photograph: visiting artists' studios, sampling artisanal sake, and learning about summer tree-planting jobs from another sake taster, on Granville Island (do it on a weekday); Shakespeare in the park (The Tempest, but they're also showing Midsummer Night's Dream if you haven't already seen that one more times than you can count); sushi at Miku (across the street from my hotel) and bubble tea next door (we have plenty of that at home but this one had even more variety); Douglas Coupland's big art show at the Vancouver Art Gallery (apparently he's not just a famous author); and waffles and berries for breakfast (there are several cafes that specialize in this — the one we chose was on Pender between Nicola and Broughton).urn:lj:livejournal.com:atom1:11011110:293289Three-colorable circle graphs and three-page book embeddings2014-08-10T02:13:31Z2014-08-10T02:13:31ZThe <a href="http://11011110.livejournal.com/229003.html">GD11 contest graph that I've written about earlier</a> turns out to be a <a href="https://en.wikipedia.org/wiki/Circle_graph">circle graph</a>. Here's a chord diagram representing it:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/contest-arrangement.png"></div><br /><br />I'm looking at this and similar graphs because I'm trying to understand a result claimed in a STACS 1992 paper by Walter Unger, "The Complexity of Colouring Circle Graphs": that testing 3-colorability of circle graphs can be done in polynomial time. (More precisely, Unger claims a time bound of <i>O</i>(<i>n</i> log <i>n</i>) if a chord diagram representing the graph is given.) The problem can equivalently be stated as one of finding a three-page <a href="https://en.wikipedia.org/wiki/Book_embedding">book embedding</a> (if one exists) given a fixed ordering of vertices along the spine of the embedding, or of sorting permutations using three stacks. 
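For concreteness, the circle graph of a chord diagram like the one above comes from the standard interleaving test: two chords cross exactly when their endpoints alternate around the circle. A small sketch, with invented example data (only the interleaving test itself is standard):

```python
def circle_graph(chords):
    """Build the circle graph of a chord diagram.  `chords` maps each
    chord name to a pair of distinct positions around the circle; two
    chords are adjacent exactly when their endpoints interleave."""
    def crossing(p, q):
        a, b = sorted(p)
        c, d = sorted(q)
        return a < c < b < d or c < a < d < b
    return {u: {v for v in chords if v != u and crossing(chords[u], chords[v])}
            for u in chords}
```

For example, chords at positions (0, 2) and (1, 3) cross each other, while a chord at (4, 5) crosses neither.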
The paper describes an algorithm that looks for "important subgraphs" and uses them to formulate a <a href="https://en.wikipedia.org/wiki/2-satisfiability">2-satisfiability</a> instance that is claimed to solve the problem, but many details are missing and there seems to be no journal version filling in the missing details. I think there should be a statute of limitations on these things: a 20-year-old conference announcement that is clearly not a full archival paper and that never was turned into one should be considered a non-result. (The paper does give better details for a different result, that it's NP-complete to test 4-colorability of circle graphs.)<br /><br />The contest-graph example shows that it's pointless to try to decompose circle graphs into 3-connected components (say) before trying to color them: by making multiple copies of each chord you can make the graph as highly connected as you like without changing its colorability.<br /><br />Coincidentally, another conference paper on 3- and 4-page book embeddings is also missing some important details. It's a more famous one, by Yannakakis in STOC 1986, stating that the maximum number of pages needed for book embeddings of planar graphs is exactly four. Yannakakis did publish a journal version of the upper bound (that all planar graphs have four-page book embeddings, in JCSS 1989). But the lower bound construction (the existence of a planar graph requiring four pages), again outlined with many missing details in the conference paper, does not appear in the journal paper. Again, I think the statute of limitations has expired for this claim.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:292904Queen Dido and the carpenter's rule2014-08-08T07:28:08Z2014-08-08T07:28:08ZThe story goes that Dido, a refugee from her home city of Tyre, took refuge in north Africa where king Iarbas granted her and her followers a small amount of land: the amount that she could surround by an oxhide. 
Cleverly, she cut the hide into a cord, which she arranged in a circle around a hill to maximize the area it would surround, and in so doing founded the city of Carthage. But what if Dido had been granted a carpenter's rule instead of an oxhide? Mathematically, the problem is: given a polygon in the plane, formed by rigid line segments connected to each other by flexible hinges at the vertices, adjust the angles of the polygon to surround the maximum area possible. What is the shape of the optimal solution?<br /><br />The answer turns out to be a convex and cyclic polygon, meaning that the endpoints of the line segments all lie on a single circle. It doesn't matter how the lengths of the segments in the polygon are permuted; you always get the same circle and the same area regardless of the permutation. But which circle? How to compute its radius? I think this problem is not hard to solve with an iterative numerical procedure that converges to the optimal circle radius (although I haven't worked out the details). It's also possible to set up a system of polynomial equations that has the optimal circle radius as a root. But it turns out not to be possible to write down an algebraic expression (even one involving fractional powers) for the solution. In algebraic terms, the desired radius might generate an extension of the rationals that has an unsolvable Galois group. (The reference I have for this is Varfolomeev, V.V.: Galois groups of the Heron–Sabitov polynomials for inscribed pentagons. Mat. Sb. 195 (2004) 3–16 Translation in Sb. Math. 
195: 149–162, 2004.)<br /><br />My new arXiv preprint with the long title "<a href="http://arxiv.org/abs/1408.1422">The Galois Complexity of Graph Drawing: Why Numerical Solutions are Ubiquitous for Force-Directed, Spectral, and Circle Packing Drawings</a>" (arXiv:1408.1422, with Bannister, Devanny, and Goodrich, to appear at GD) similarly uses Galois theory to prove the nonexistence of an exact algebraic formula for many important problems in the geometric representation of graphs. These include force-directed graph drawing methods (i.e., pretend your edges are springs pulling together vertices that have repulsive electrical charges on them and see what happens), spectral graph drawing methods (construct a matrix from your graph, find its eigenvectors, and use the top few as coordinates), multidimensional scaling (pretend your graph distances are Euclidean distances, transform the distance matrix to a matrix that in the Euclidean case would be the dot products of coordinate vectors, and use matrix factorization to find the corresponding Euclidean coordinates), and the construction of circle packings that represent planar graphs.<br /><br />Of course, algebraic formulas with fractional powers (also known as nested radicals) aren't the only thing one could use to describe the solutions to these problems with an exact formula. For instance, there's a way to use elliptic functions to solve fifth-degree polynomials, even the ones that have no solution in nested radicals. But we also show that, even if you have a black box for solving fifth-degree polynomials (or polynomials of any bounded degree), and you could use that black box iteratively on the numbers produced in earlier iterations, you still wouldn't be able to solve most of these problems. (For technical reasons this part of our results doesn't work for MDS.)<br /><br />What effect does any of this have on practical graph drawing? Essentially none.
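For concreteness, here is a minimal numerical sketch of one common spectral drawing recipe (using the unnormalized Laplacian; the preprint covers several variants, and this is not claimed to be the paper's exact formulation):

```python
import numpy as np

def spectral_layout(n, edges):
    """One common spectral drawing recipe: use as coordinates the
    eigenvectors for the two smallest nonzero eigenvalues of the
    graph Laplacian."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A        # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return eigvecs[:, 1:3]                # skip the constant eigenvector

# A 6-cycle comes out as a regular hexagon, up to rotation and scale.
pos = spectral_layout(6, [(i, (i + 1) % 6) for i in range(6)])
```

The eigenvector entries here are exactly the quantities that, by the Galois-theoretic results, have no closed algebraic form in general, so a numerical eigensolver is the natural tool.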
These problems are all solved efficiently in practice with numerical methods that converge quickly to the correct answer (except in the force-directed case, where they only converge to a local minimum, but that's usually considered good enough). The point is less to try to change practice and more to understand why approximate numerical methods are the preferred choice for these problems when exact algebraic methods work well for many other algorithmic problems in geometry and when these problems can be formulated algebraically.<br /><br />This project also taught me that Galois theory is far from a completed subject. For instance, suppose you want a sequence of polynomials, one for each degree, whose Galois groups are as nasty as they could be (the full symmetric groups, of degree equal to the polynomial's degree). Some sequences like this are known, for instance certain families of trinomials. But now, suppose you already have a sequence of polynomials, one for each degree, and you suspect their Galois groups are the symmetric groups. How to prove this? All the tools we could find for this sort of problem work by computing the Galois group of individual polynomials, one at a time, so we don't know how to compute Galois groups of infinite sequences of polynomials (even when we're pretty sure what those groups are). This is relevant for the new preprint because it shows nonexistence of two kinds of formulas: ones with fractional powers (of arbitrarily high degree) and ones with black boxes for roots (of bounded degree). What we really want to show is that the problems we consider lead to unsolvable Galois groups of arbitrarily high order. If we could do that, it would imply the nonexistence of formulas that are allowed to include both fractional powers (of unbounded degree) and black boxes for roots (of bounded degree).
But we're prevented from doing that by our inability to compute Galois groups of infinite sequences of polynomials.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:292825A victory in the Nymwars and some Google+ links2014-08-01T04:02:04Z2014-08-01T04:02:04Z<p>The <a href="https://en.wikipedia.org/wiki/Nymwars">nymwars</a> are the struggle to maintain online safe havens for pseudonymous free speech, for people who don't feel safe linking their opinions with their real names (for fear of religious persecution / sexual predation / current or future job prospects / whatever else) in the face of attempts by Facebook, Google, and others to force everyone to cross-link all their personal information. Soon after its launch in 2011, Google+ took a strong stand that only people willing to post under their real names would be welcome on the site, and (as one very minor consequence) I stopped posting there. Now, finally, <a href="https://plus.google.com/101560853443212199687/posts/V5XkYQYYJqy">Google+ has relented</a> and will allow arbitrary user-chosen identities. They <a href="http://infotrope.net/2014/07/16/meanwhile-in-an-alternate-universe/">could have been more apologetic about it</a>, but it's <a href="https://plus.google.com/100003628603413742554/posts/DGS9E3eJBGb">enough for me to return there</a>.</p>
<p>I don't intend to change my Livejournal posting habits much, as I've been using my Google+ account for a somewhat different purpose: posting brief links to web sites that catch my attention and that I think might be of interest to my (mostly technical/academic) circles of contacts there. Here's a roundup of a dozen or so links I've posted so far (skipping the ones where I linked back to my LJ postings).</p>
<ul>
<li><a href="http://www.samefacts.com/2014/06/everything-else/memo-to-academic-journal-reviewers-dont-tell-your-editor-what-a-study-is-not/">Memo to academic journal reviewers: Don’t tell your editor what a study is not</a>. Some advice that would also be useful for CS conference reviewers. (<a href="https://plus.google.com/100003628603413742554/posts/FxXMXXU7aSy">G+</a>)</li>
<li><a href="http://boingboing.net/2014/07/01/3d-weaving-produces-strong-fl.html">3D weaving produces strong, flexible solids</a> (<a href="https://plus.google.com/100003628603413742554/posts/52CSuHdBSkX">G+</a>)</li>
<li><a href="http://www.southernfriedscience.com/?p=17417">These things are related</a>. On several recent instances of misogyny/sexism in science. (<a href="https://plus.google.com/100003628603413742554/posts/e6p7iqMx8CQ">G+</a>)</li>
<li><a href="http://www.thisiscolossal.com/2014/07/wink-space-mirror-tunnel/">Wink Space: An immersive kaleidoscopic mirror tunnel inside a shipping container</a>. Zippered mirrored polyhedra. (<a href="https://plus.google.com/100003628603413742554/posts/grirRV1TQdL">G+</a>)</li>
<li><a href="http://www.metafilter.com/141114/The-Miura-fold-art-and-mathematics-of-origami">The Miura fold: art and mathematics of origami</a>. (<a href="https://plus.google.com/100003628603413742554/posts/hfkbtJmZ6na">G+</a>)</li>
<li><a href="http://graphdrawing.de/contest2014/index.html">Graph Drawing contest</a>, <a href="http://lamut.informatik.uni-wuerzburg.de/gd2014/index.php?id=program">GD 2014 accepted papers</a>, and <a href="http://algo2014.ii.uni.wroc.pl/ipec/accepted.html">IPEC 2014 accepted papers</a>.</li>
<li><a href="https://www.eff.org/deeplinks/2014/07/colombian-student-faces-prison-charges-sharing-academic-article-online">Colombian student faces prison charges for sharing an academic article online</a>. (<a href="https://plus.google.com/100003628603413742554/posts/LGA2hGSvPQU">G+</a>)</li>
<li><a href="http://vimeo.com/channels/bkcfilms/98297985">Journey to the center of a triangle</a>, one of a set of CC-licenced educational films from the 1970s. (<a href="https://plus.google.com/100003628603413742554/posts/FqU6bqvMqAd">G+</a>)</li>
<li><a href="http://www.improbable.com/2014/07/27/he-and-they-approached-hallucinations-mathematically/">Mathematical analysis of geometric hallucinations</a>, helping explain how the human visual system works. (<a href="https://plus.google.com/100003628603413742554/posts/hxVXpcRUHYK">G+</a>)</li>
<li><a href="http://proofmathisbeautiful.tumblr.com/post/93137027229/staceythinx-selections-from-tallmadge-doyles">Selections from Tallmadge Doyle’s ethereal <i>Celestial Mapping Series</i></a>. (<a href="https://plus.google.com/100003628603413742554/posts/1qFzRp9s29p">G+</a>)</li>
<li><a href="http://www.theguardian.com/science/neurophilosophy/2014/jul/26/robo-rehab">Robo rehab</a>. Newspaper coverage of some research by one of my UCI colleagues, David Reinkensmeyer, who's using mechanical-linkage exoskeletons to assist injured and disabled people. (<a href="https://plus.google.com/100003628603413742554/posts/HeT7fok6hXc">G+</a>)</li></ul>urn:lj:livejournal.com:atom1:11011110:292583Montgomery Woods2014-07-27T06:04:34Z2014-07-27T06:04:34ZUntil 2006 (when bigger ones were found elsewhere) this was the home to the tallest known tree in the world. But it's not marked, so you just have to look at them all and guess which one might be the biggest.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/montywoods/3-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/montywoods/index.html">More</a> )</b>urn:lj:livejournal.com:atom1:11011110:292232Comptche2014-07-27T01:57:14Z2014-07-27T06:04:51ZAka that wide spot in the road on the way to <a href="https://en.wikipedia.org/wiki/Montgomery_Woods_State_Natural_Reserve">Montgomery Woods</a><br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/comptche/ComptcheStore-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/comptche/index.html">More</a> )</b>urn:lj:livejournal.com:atom1:11011110:291980Big grids in outerplanar strict confluent graphs2014-07-24T00:50:58Z2014-07-24T00:50:58ZI was wondering whether the <a href="http://arxiv.org/abs/1308.6824">outerplanar strict confluent drawings</a> I studied in a Graph Drawing paper last year had underlying diagrams whose <a href="https://en.wikipedia.org/wiki/Treewidth">treewidth</a> is bounded, similarly to the treewidth bound for the usual <a href="https://en.wikipedia.org/wiki/Outerplanar_graph">outerplanar graphs</a>. 
The confluent graphs themselves can't have low treewidth, because they include large <a href="https://en.wikipedia.org/wiki/Complete_bipartite_graph">complete bipartite graphs</a>, but I was hoping that a treewidth bound for the diagram could be used to prove that the graphs themselves have low <a href="https://en.wikipedia.org/wiki/Clique-width">clique-width</a>. Sadly, it turns out not to be true.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/hyperbolic-outerconfluent.png"></div><br /><br />The drawing above, shown with two levels of cells surrounding the central pentagon, can be extended to any number of levels. Besides the three shapes of faces shown here (with three, four, and five sharp points) there's one more possible shape, with three sharp points that are not consecutive, that would take a couple more levels to complete — the five points on the boundary with three incoming edges would each form the base of one of these shapes — but that's the only other thing that can happen. The drawing is based on the <a href="https://en.wikipedia.org/wiki/Order-5_pentagonal_tiling">order-5 pentagonal tiling</a> of the hyperbolic plane, and consists of five-sided regions with five regions meeting at each confluent junction. By the theory from my GD paper, it's not possible to find a different drawing with the same vertices in the same order in which some junctions have been merged, reducing the treewidth of the drawing. And because it can be extended to arbitrarily large patches of the pentagonal tiling (analogous to arbitrarily large grids in the Euclidean plane) it has unbounded treewidth.<br /><br />It doesn't seem to work to do this directly with the square grid because some confluent junctions will have three connections in one direction and one in the other, allowing them to be merged with a neighboring junction. 
Using a hyperbolic tiling pattern allows all junctions to have at least two connections in each direction.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:291634Using finite automata to draw graphs2014-07-21T23:54:27Z2014-07-22T04:56:08ZThe diagram below describes a finite state machine that takes as input a description of an <a href="https://en.wikipedia.org/wiki/Indifference_graph">indifference graph</a>, and produces as output a <a href="https://en.wikipedia.org/wiki/1-planar_graph">1-planar drawing</a> of it (that is, a drawing with each edge crossed at most once).<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/1planar-unit-interval.png"></div><br /><br />Indifference graphs are the graphs that can be constructed by the following process. Initialize an active set of vertices to be the empty set, and then perform a sequence of steps of two types: either add a new vertex to the active set and make it adjacent to all previous active vertices, or inactivate a vertex (removing it from the active set but not from the graph). Thus, they can be represented by a sequence of binary values that specify whether the next step is to add or inactivate a vertex. These values are the input to the finite state machine.<br /><br />In <a href="http://arxiv.org/abs/1304.5591">a paper at WADS last year with Bannister and Cabello</a>, we showed <a href="http://11011110.livejournal.com/267767.html">how to test 1-planarity for several special classes of graphs</a>, but not for indifference graphs. Some of our algorithms involved proving the existence of a finite set of forbidden configurations, and that idea works here, too: an indifference graph turns out to be 1-planar if and only if, for every K<sub>6</sub> subgraph, the first three vertices of the subgraph (in the activation order) have no later neighbors outside the subgraph, and the last three vertices have no other earlier neighbors. 
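That characterization can be checked by a single left-to-right scan over the construction sequence. The sketch below is my own translation of the condition into code (the function name and bookkeeping are invented here; this is not the automaton from the diagram): an active set of six vertices may only be completed by three consecutive activations, and must then be dismantled by three consecutive deactivations before anything else happens.

```python
def has_one_planar_drawing(steps):
    """Scan an indifference graph's construction sequence (1 = activate
    a new vertex, 0 = deactivate the oldest active vertex) and check a
    translation of the forbidden-configuration condition: a K6 may only
    be completed by three consecutive activations, and after completing
    one, the next three steps must all be deactivations.
    (A sketch of the stated characterization, not the paper's algorithm.)
    """
    active = 0
    forced = 0                 # deactivations still owed after a full K6
    for i, step in enumerate(steps):
        if step == 1:
            if forced or active == 6:      # K7, or a forbidden later neighbor
                return False
            active += 1
            if active == 6 and tuple(steps[i - 2:i]) != (1, 1):
                return False               # K6 not finished by 3 activations
        else:
            if active == 0:
                return False               # malformed sequence
            if active == 6:
                forced = 2                 # two more deactivations owed
            elif forced:
                forced -= 1
            active -= 1
    return True

print(has_one_planar_drawing([1] * 6 + [0] * 3 + [1]))   # True
print(has_one_planar_drawing([1] * 5 + [0, 1, 1]))       # False
```

The second example builds a K6 whose first three vertices have an earlier neighbor outside it, matching the forbidden configuration.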
K<sub>6</sub> is 1-planar, but it has essentially only one drawing (modulo permutation of the vertices), and any example of this configuration would have a seventh vertex connected to four of the K<sub>6</sub> vertices, something that's not possible in a 1-planar drawing.<br /><br />At one level, the state diagram above can be viewed as a diagram for detecting this forbidden configuration. Every right-going transition is one that adds an active vertex, and every left-going transition is one that removes an active vertex. If a transition of either type does not exist in the diagram, it means that a step of that type will lead to an inescapable failure state. But the only missing transitions are the ones that would create a six-vertex active set by a sequence of transitions that does not end in three consecutive right arrows (creating a K<sub>6</sub> in which one of the last three vertices has an earlier neighbor) or the ones that would exit a six-vertex active set by a sequence of transitions that does not begin with three consecutive left arrows (creating a K<sub>6</sub> in which one of the first three vertices has a later neighbor). So, this automaton recognizes only the graphs that have no forbidden configuration.<br /><br />On another level, the drawings within each state of the diagram show how to use this finite state machine to construct a drawing. Each state is labeled with a drawing of its active vertices, possibly with a yellow region that represents earlier inactive parts of the drawing that can no longer be modified. The numbers on the vertices give the order in which they were activated. For each left transition, it is always possible to remove the oldest active vertex from the drawing and replace the parts of the drawing surrounding it by a yellow region to create a drawing that matches the new state. 
Similarly, for each right transition, it is always possible to add one more active vertex to the drawing, connect it to the other active vertices, and then simplify some parts of the drawing to yellow regions, again creating a drawing that matches the new state. Therefore, every graph that can be recognized by this state diagram has a 1-planar drawing.<br /><br />Since the machine described by the diagram finds a drawing for a given indifference graph if and only if the graph has no forbidden configurations, it follows that these forbidden configurations are the only ones we need to describe the 1-planar graphs and that this machine correctly finds a 1-planar drawing for every indifference graph that has one. This same technique doesn't always generalize: a result from my WADS paper that it's NP-complete to test 1-planarity for graphs of bounded bandwidth shows that, even when a class of graphs can be represented by strings of symbols from a finite alphabet, it's not always going to be possible to find a finite state machine to test 1-planarity. But it would be interesting to find more graph classes for which the same simple technique works.<a name='cutid1-end'></a>urn:lj:livejournal.com:atom1:11011110:291361Four preprints2014-07-12T03:25:09Z2014-07-12T06:32:53ZI noticed that there was a higher-than-usual density of arXiv preprints among the web pages I'd been bookmarking lately, so I thought maybe I'd share. The first one, especially, is very timely:<br /><br /><b>From the "Brazuca" ball to Octahedral Fullerenes: Their Construction and Classification</b>, Yuan-Jia Fan, Bih-Yaw Jin, <a href="http://arxiv.org/abs/1406.7058">arXiv:1406.7058</a>, <a href="https://medium.com/the-physics-arxiv-blog/mathematicians-solve-the-topological-mystery-behind-the-brazuca-world-cup-football-2e11ab1f4391">via</a>.
The classical pentagon and hexagon soccer ball pattern (introduced for the 1970 World Cup) later became even more famous as the structure of the <a href="https://en.wikipedia.org/wiki/Buckminsterfullerene">buckminsterfullerene</a> carbon-60 molecule, from which the fullerene graphs (planar graphs in which all faces are pentagons or hexagons) took their name. Another soccer ball pattern, used in the 2006 World Cup, is topologically a truncated octahedron but with distorted face shapes that reduce its symmetry to tetrahedral; there also exist fullerenes with tetrahedral symmetry. And there's a new soccer ball pattern for the 2014 World Cup in Brazil, with the topology of the cube; the "via" article says that it has octahedral symmetry but I'm not convinced, because it doesn't seem to have the reflection symmetries that octahedra should have. Nevertheless, Fan and Jin asked: are there fullerenes with octahedral symmetry? The positive answer comes from a nice construction involving cutting equilateral triangles out of the hexagonal tiling and then gluing them together to make a finite polyhedron.<a name='cutid1-end'></a><br /><br /><b>The Shortest Path to Happiness: Recommending Beautiful, Quiet, and Happy Routes in the City</b>, Daniele Quercia, Rossano Schifanella, Luca Maria Aiello, <a href="http://arxiv.org/abs/1407.1031">arXiv:1407.1031</a>, <a href="http://gizmodo.com/yahoos-developing-a-map-algorithm-to-find-the-most-beau-1602266077">via</a>. Suppose you want an online map service to give you a walking tour of a city. You probably wouldn't want the shortest path from one part to another, but rather the nicest path. Changing how you weight the edges of the underlying graph to prioritize them differently is not so difficult (and I wrote <a href="http://arxiv.org/abs/cs.DS/9907001">a paper</a> long ago on how you might adjust these weights to fit different users' preferences), but the harder part is gathering the data to measure what it means for a route to be nice.
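The reweighting step really is routine; as a sketch (the function name and the particular blending formula are mine, not from the paper), one can fold any per-edge "unpleasantness" score into the length and run an ordinary shortest-path algorithm:

```python
import heapq

def nicest_path(graph, source, target, beta=1.0):
    """Dijkstra on blended weights.  graph maps each node to a list of
    (neighbor, length, ugliness) triples; beta trades distance against
    unpleasantness.  Illustrative only -- the hard part, as noted above,
    is obtaining the ugliness scores.  Assumes target is reachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, float('inf')):
            continue                       # stale heap entry
        for v, length, ugliness in graph.get(u, []):
            nd = d + length * (1.0 + beta * ugliness)
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]
```

With beta large, an ugly but short edge loses to a pleasant detour; with beta = 0 this degenerates to the ordinary shortest path.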
These authors approach the problem with online photo databases, in one case crowdsourcing the problem of quantifying niceness and in another attempting to use the image tags to determine it automatically. The analysis of how well it worked seemed very anecdotal and handwavy to me, but maybe that's the nature of the subject.<a name='cutid2-end'></a><br /><br /><b>The Convex Configurations of "Sei Shonagon Chie no Ita" and Other Dissection Puzzles</b>, Eli Fox-Epstein, Ryuhei Uehara, <a href="http://arxiv.org/abs/1407.1923">arXiv:1407.1923</a>. Given a puzzle like the <a href="https://en.wikipedia.org/wiki/Tangram">tangram</a> (a square subdivided into seven convex pieces), how many ways are there of rearranging it into convex shapes? I asked a similar question (with different shapes) in <a href="http://www.ics.uci.edu/~eppstein/pubs/Epp-COMB-01.pdf">one of my old talks</a> but the tangram question (and its answer) have been known for much longer, <a href="http://www.jstor.org/stable/2303340">since at least 1942</a>. This paper solves the same question for some more complex puzzles of a similar type. Like the 1942 tangram paper, the new preprint uses the observation that there is a smaller tile into which the puzzle pieces can all be subdivided, such that any solution is a subset of a regular tessellation of the plane by this tile. I don't know of a general algorithm for solving this kind of problem efficiently; perhaps there's something interesting to be done in that direction.<a name='cutid3-end'></a><br /><br /><b>Complexity of counting subgraphs: only the boundedness of the vertex-cover number counts</b>, Radu Curticapean, Dániel Marx, <a href="http://arxiv.org/abs/1407.2929">arXiv:1407.2929</a>. OK, unlike the other ones, this is not recreational mathematics. But it still interested me. The problem concerns the complexity of counting copies of subgraphs in larger graphs, something that seems to be quite topical lately.
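The brute-force baseline for this counting problem is easy to state. A sketch (names are mine; this counts vertex k-subsets of the host that contain a copy of the pattern, and is certainly not the paper's algorithm):

```python
from itertools import combinations, permutations

def count_subgraph_copies(n, edges, pattern_k, pattern_edges):
    """Count vertex k-subsets of an n-vertex host graph that contain a
    copy of the pattern graph, by brute force over all k-tuples -- the
    obvious O(n^k)-style baseline (illustrative sketch only)."""
    adj = set()
    for u, v in edges:
        adj.add((u, v))
        adj.add((v, u))
    count = 0
    for verts in combinations(range(n), pattern_k):
        for image in permutations(verts):
            if all((image[a], image[b]) in adj for a, b in pattern_edges):
                count += 1
                break          # count each vertex subset at most once
    return count

# Triangles in K4: every one of the C(4,3) = 4 vertex triples is one.
print(count_subgraph_copies(4, [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)],
                            3, [(0,1),(1,2),(0,2)]))   # 4
```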
If the subgraph has a fixed number <i>k</i> of vertices and the larger graph has <i>n</i> vertices, then there's an obvious <i>O</i>(<i>n</i><sup><i>k</i></sup>) algorithm: just try all <i>k</i>-tuples of vertices, and it's not hard to extend this to the case where the subgraph has a subset of <i>k</i> vertices that together cover all of its edges. As Curticapean and Marx show, these are the only cases in which one gets a polynomial time algorithm (assuming standard complexity-theoretic conjectures): counting subgraphs from a given class does not have a fixed-parameter tractable algorithm unless the vertex cover number is bounded.<a name='cutid4-end'></a>urn:lj:livejournal.com:atom1:11011110:291080Black Phoebe2014-07-06T06:06:57Z2014-07-27T01:57:02ZFor the past month or so, until this weekend when they moved out, we've had some squatters on our front porch: a family of <a href="https://en.wikipedia.org/wiki/Black_phoebe">Black Phoebes</a>. They conveniently set up their nest in clear sight of a window over the front door, through which I could aim the camera. <br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/phoebe/3-m.jpg" border="2" style="border-color:black;" /></div>urn:lj:livejournal.com:atom1:11011110:290949Seth Teller2014-07-02T16:38:04Z2014-07-13T05:25:07ZVia <a href="http://newsoffice.mit.edu/2014/professor-seth-teller-dies">MIT's news office</a> I learn that Seth Teller has died, at the young age of 50. Seth primarily worked in robotics and vision, but was also a regular participant at the Symposium on Computational Geometry. For more about his many accomplishments, read the MIT story.<br /><br />ETA: <a href="http://www.scottaaronson.com/blog/">Scott Aaronson has more</a>urn:lj:livejournal.com:atom1:11011110:290793Book:Graph Drawing2014-06-30T05:21:11Z2014-06-30T05:21:11ZOne of Wikipedia's less well-known features is its Book: namespace. 
The things there are called books, and they could be printed on paper and bound into a book if you're one of those rare Wikipedia users who doesn't use a computer to read things, but really they're curated collections of links to Wikipedia articles. I've made two of them before, <a href="https://en.wikipedia.org/wiki/Book:Graph_Algorithms">Book:Graph Algorithms</a> and <a href="https://en.wikipedia.org/wiki/Book:Fundamental_Data_Structures">Book:Fundamental Data Structures</a>, which I have used for the readings in my graduate classes on those topics because I wasn't satisfied with the textbooks on those subjects. This week I put together a third one, <a href="https://en.wikipedia.org/wiki/Book:Graph_Drawing">Book:Graph Drawing</a>.<br /><br />It's not complete (what on Wikipedia is?), and the writing quality and depth of coverage are as variable as always, but there are about 100 topics there and I hope that collecting them in this way proves useful. I've listed a few more things that I think should be added but don't yet have their own Wikipedia articles on <a href="https://en.wikipedia.org/wiki/Book_talk:Graph_Drawing">the talk page</a>, but if you see something else missing then please let me know or, even better, add it.urn:lj:livejournal.com:atom1:11011110:290437The future of SoCG2014-06-27T21:18:02Z2014-06-27T21:18:02Z<a href="http://makingsocg.wordpress.com/2014/06/24/voting-is-open/">Voting opened this week</a> for members of the compgeom-announce mailing list, on whether the annual Symposium on Computational Geometry should leave ACM, as the <a href="http://computationalcomplexity.org/forum/open-letter/">Conference on Computational Complexity has recently done from IEEE</a>.<br /><br />There's a lot more opinion on both sides of the issue, and arguments both for staying with ACM and for leaving, in a series of postings at <a href="http://makingsocg.wordpress.com/">MakingSoCG</a>. 
If you're a compgeom-announce member, please inform yourself and then make your own opinion known through the vote. Past votes on the same issue had unconvincing results due to low turnout; we don't want the same problem to happen again.