0xDE
http://11011110.livejournal.com/

Linkage for the end of a short month (Sun, 01 Mar 2015)
http://11011110.livejournal.com/305884.html
<ul><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2015/jan/13/golden-ratio-beautiful-new-curve-harriss-spiral">The Harriss spiral</a> (<a href="https://plus.google.com/100003628603413742554/posts/cj7FuVzPcyY">G+</a>)</li><br /><li><a href="http://www.thisiscolossal.com/2015/02/ice-sand-scultpures-lake-michigan/">Wind-carved towers of sand and ice</a> (<a href="https://plus.google.com/100003628603413742554/posts/KAj7MgLygwJ">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/01/28/watch-beachbot-make-large-scal.html">Beachbot</a>, a giant etch-a-sketch for your local beach (<a href="https://plus.google.com/100003628603413742554/posts/N7zNSZubpGG">G+</a>)</li><br /><li><a href="http://www.koutschan.de/data/link/index.html">Linkages that can draw any algebraic curve</a> (<a href="https://plus.google.com/100003628603413742554/posts/AojzKM96uR3">G+</a>)</li><br /><li><a href="https://cp4space.wordpress.com/2015/02/19/proto-penrose-tilings/">Precursors to the Penrose tiling</a> in the works of Kepler and the Islamic architects (<a href="https://plus.google.com/100003628603413742554/posts/h8aVPY67v4v">G+</a>)</li><br /><li><a href="http://fivethirtyeight.com/datalab/academy-awards-best-picture-instant-runoff/">Instant-runoff demo</a> (<a href="https://plus.google.com/100003628603413742554/posts/AQbqNjFsXi6">G+</a>)</li><br /><li><a href="https://3010tangents.wordpress.com/category/women-in-math">Women in mathematics</a> (<a href="https://plus.google.com/100003628603413742554/posts/KkeBR6hDLrD">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=i5oc-70Fby4">Big Bang Theory Eye of the Tiger Scene</a> (<a href="https://plus.google.com/100003628603413742554/posts/dUNx4JEs1n6">G+</a>)</li><br /><li><a href="http://www.scfbm.org/content/8/1/7/">Why using git is good scientific practice</a> (<a href="https://plus.google.com/100003628603413742554/posts/J21fqi9ZUqS">G+</a>)</li><br /><li><a 
href="https://en.wikipedia.org/wiki/Klam_value">Klam values</a> and other colorful neologisms from the parameterized complexity crowd (<a href="https://plus.google.com/100003628603413742554/posts/3aQLAeeKckW">G+</a>)</li><br /><li><a href="http://envisage-project.eu/proving-android-java-and-python-sorting-algorithm-is-broken-and-how-to-fix-it/">Timsort is broken</a> (and has been for the past 12 years) (<a href="https://plus.google.com/100003628603413742554/posts/MHsutRHNrQ1">G+</a>)</li></ul>
Tags: tools, geometry, algorithms

Highly abundant numbers are practical (Thu, 26 Feb 2015)
http://11011110.livejournal.com/305481.html
A <a href="https://en.wikipedia.org/wiki/Highly_abundant_number#References">highly abundant number</a> is a positive integer <i>n</i> whose sum of divisors σ(<i>n</i>) is larger than the sum of divisors of every smaller positive integer. While cleaning up some citations on the Wikipedia article, I ran across an unsolved problem concerning these numbers, posed by Jaycob Coleman and listed on <a href="https://oeis.org/A002093">the OEIS entry for them</a>: are all sufficiently large highly abundant numbers practical?<br /><br />A <a href="https://en.wikipedia.org/wiki/Practical_number">practical number</a> <i>n</i> has the property that all numbers up to <i>n</i> can be expressed as sums of distinct divisors of <i>n</i>. This can be tested by looking at the factorization of <i>n</i>: define the <<i>p</i>-smooth part of <i>n</i> to be the product of the prime powers in the factorization of <i>n</i> whose primes are less than <i>p</i>. Then <i>n</i> is practical if and only if, for each prime factor <i>p</i> of <i>n</i>, <i>p</i> is at most one more than the sum of divisors of the <<i>p</i>-smooth part of <i>n</i>. So, for instance, the highly abundant number 10 is not practical: the <5-smooth part of 10 is 2, and 5 is too big compared to σ(2) = 3. Also, 3 is not practical, as its <3-smooth part is the empty product 1 and 3 > σ(1) + 1 = 2. 
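The factorization test just described is straightforward to put into code. Here is a sketch of my own (the function names are mine, not from any of the linked pages), using trial division and the fact that the sum-of-divisors function σ is multiplicative:

```python
def factorize(n):
    """Prime factorization of n as a list of (prime, exponent) pairs."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def sigma(n):
    """Sum of divisors of n, via sigma(p^e) = (p^(e+1) - 1) / (p - 1)."""
    total = 1
    for p, e in factorize(n):
        total *= (p ** (e + 1) - 1) // (p - 1)
    return total

def is_practical(n):
    """Is every positive integer up to n a sum of distinct divisors of n?"""
    sigma_smooth = 1  # sigma of the <p-smooth part; the empty product is 1
    for p, e in factorize(n):  # prime factors appear in increasing order
        if p > sigma_smooth + 1:
            return False
        sigma_smooth *= (p ** (e + 1) - 1) // (p - 1)
    return True

def is_highly_abundant(n):
    """Does n set a new record for sigma among the integers 1..n?"""
    return all(sigma(m) < sigma(n) for m in range(1, n))
```

On this sketch, the highly abundant numbers below 200 that fail the practicality test are exactly 3 and 10, consistent with the case analysis here.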
Are these the only exceptions?<br /><br />As with other questions involving record-holders for <a href="https://en.wikipedia.org/wiki/Multiplicative_function">multiplicative functions</a>, the highly abundant numbers can be thought of as solutions to special instances of the <a href="https://en.wikipedia.org/wiki/Knapsack_problem">knapsack problem</a>: if we define the size of a prime power <i>p<sup>i</sup></i> to be log <i>p</i>, and we define its profit to be the logarithm of the factor (<i>p</i><sup><i>i</i> + 1</sup> − 1)/(<i>p</i><sup><i>i</i></sup> − 1) by which including <i>p<sup>i</sup></i> as a divisor of <i>n</i> would cause σ to increase (relative to the next lower power of <i>p</i>), then the factorization of <i>n</i> is given by the set of prime powers whose sizes add to at most log <i>n</i> and whose profits add to the largest number possible. I don't know how to use this knapsack view of the problem directly (in part because knapsack is a hard problem) but it is helpful in thinking about showing that certain factors must be present or absent because they would lead to a better knapsack solution.<br /><br />For instance, suppose that <i>n</i> is highly abundant, let <i>p</i> be the smallest prime that does not divide <i>n</i>, and let <i>P</i> be the largest prime factor of <i>n</i>. Then it must be true that <i>P</i> < <i>p</i><sup>2</sup>. For, if not, let <i>q</i> = floor(<i>P</i>/<i>p</i>). We could replace <i>P</i> in the factorization of <i>n</i> by <i>pq</i>, giving a smaller number than <i>n</i> with a bigger contribution to σ: at least (<i>p</i> + 1)<i>q</i>, versus at most <i>P</i> + 1, or smaller if <i>P</i> appears to a higher power than one.<br /><br />Based on this fact, it's straightforward to show that all highly abundant numbers that are divisible by four are practical. More strongly the same is true for other numbers <i>n</i> that are divisible by four and have the same inequality for <i>p</i> and <i>P</i>. 
For, if the first missing prime <i>p</i> is 3, then the sum of divisors of the <<i>p</i>-smooth part is at least 7, big enough to cover any prime factor <i>P</i> that satisfies the inequality. And for each additional prime factor of <i>n</i> smaller than <i>p</i>, the bound on <i>P</i> grows by at most four (by <a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate">Bertrand's postulate</a>) and the sum of divisors of the smooth part grows by at least four, so this sum of divisors always remains large enough to satisfy the condition for being practical.<br /><br /><s>But in their early work on highly abundant numbers, Alaoglu and Erdős observed that 210 is the largest highly abundant number to include only one factor of two in its prime factorization. All larger highly abundant numbers are divisible by four, and by the argument above they are all practical. The remaining cases are small enough to test individually, and they are all practical. So Jaycob Coleman's conjecture is true.</s><br /><br />Update: this claim about 210 is obviously wrong. 630 is highly abundant and is also not divisible by four. So here's a better argument along the same lines. The case <i>p</i> = 2 is easy to handle: <i>P</i> can only be 3, so <i>n</i> is a power of three. If it is not 3 itself, we could replace a factor of 9 in it by a factor of 8, getting a smaller number with a bigger contribution to σ (15 vs 13). So the only odd highly abundant number is 3. Similarly, if the first missing prime is 3, then <i>n</i> must be {2,5,7}-smooth. If it is divisible by 25, we can replace this factor by 24 (with a contribution of at least 32 to σ instead of 31) and if it is divisible by 7, we can replace this factor by 6 (with a contribution greater than 8 to σ instead of 8). 
So the only possible highly abundant numbers that are even but not divisible by 3 are powers of two and their multiples by five, and the only one of those that can be impractical is 10.<br /><br />Next, suppose that the first missing prime is 5, and there is only one factor of two. The <5-smooth part is at least 6 and its sum of divisors is 12, big enough to cover all primes less than 11, and if any of these primes is a factor of <i>n</i> then including it in the smooth part boosts the sum of divisors enough to cover all remaining factors. Similarly, if there is more than one factor of three, then the sum of divisors of the smooth part is at least 39, covering all possible prime factors. So the only possible impractical numbers in this case are not divisible by 5, 7, or 11 but are divisible by exactly one factor of 2 or 3 and may be divisible by 13, 17, 19, or 23. A factor of 13 can be replaced by a factor of 10 (contributing 14 to σ in either case, so giving a smaller number with the same sum of divisors). A factor of 17 can be replaced by a factor of 15 (contributing 19.5 to σ instead of 18). A factor of 19 can be replaced by a factor of 18 (contributing 23.5 instead of 20) and a factor of 23 can be replaced by a factor of 20 (contributing 30 instead of 24). So none of these cases give rise to new exceptions.<br /><br />Finally, if the first missing prime is 7, then the <7-smooth part is at least 30 and its sum of divisors is at least 72, big enough to cover all primes less than 49, and from here we can use the same Bertrand postulate argument.<a name='cutid1-end'></a>
Tags: number theory

Halin graph algorithms made simple (Thu, 19 Feb 2015)
http://11011110.livejournal.com/305358.html
I have a new paper on the arXiv, <a href="http://arxiv.org/abs/1502.05334">D3-reducible graphs</a> (arXiv:1502.05334), but it's a small one that is not related to this week's many conference submission deadlines (ICALP yesterday, COLT tomorrow, WADS Friday). One reason for its existence was that I wanted an implementable algorithm for working with <a href="https://en.wikipedia.org/wiki/Halin_graph">Halin graphs</a> (the graphs that you get by drawing a tree in the plane, with no degree-two vertices, and then connecting the leaves by a cycle surrounding the tree), and the algorithms that I could find for them were based on linear-time planarity testing, something I haven't yet worked up the courage to try implementing. Instead I found that it's possible to recognize Halin graphs, and to solve a wide class of related problems (such as finding their planar embeddings, decomposing them into a tree and a cycle, or finding a Hamiltonian cycle), using a simple reduction-based algorithm that repeatedly finds and simplifies certain local configurations within the graph. The two reductions that I used are shown below; one of them collapses a triangle of degree-three vertices to a point, and the other shortens certain paths of degree-three vertices.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/D3-reductions.png"></div><br /><br />Every Halin graph can be simplified by these reductions to a complete graph on four vertices; in terms of the tree and cycle decomposition of the Halin graph, one of these reductions removes the children from a tree node with two leaf children, and the other removes the middle of three consecutive leaf children. But if you want to use these to recognize Halin graphs only, you need to restrict them a little, because some other graphs can also be simplified to the same four-vertex complete graph, and that's mostly what the paper is about. 
I call these D3-reducible graphs, and they have a lot of properties in common with the Halin graphs: they are planar, minimally 3-vertex-connected, Hamiltonian, of bounded treewidth, etc. One of the smallest examples of a D3-reducible graph that is not a Halin graph is the truncated tetrahedron graph:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/trunctet.png"></div><br /><br />I have updated my <a href="http://www.ics.uci.edu/~eppstein/PADS/">PADS Python algorithm library</a> to include the new Halin graph recognition algorithm, and some related algorithms, as <a href="http://www.ics.uci.edu/~eppstein/PADS/Halin.py">Halin.py</a>. (I also updated the license text for the library, to use the MIT license — you can do almost anything you want but don't hold me responsible for it — rather than trying to claim that the code is public domain, which I'm told is not so meaningful legally.)<a name='cutid1-end'></a>
Tags: graph algorithms, python, papers

Linkage (Mon, 16 Feb 2015)
http://11011110.livejournal.com/304968.html
I don't know what Google+ is doing under the hood (and don't really want to know) but whatever it is seems kind of bloated to me, enough to kill my browser and the responsiveness on my whole machine when I try to open 14 G+ tabs at once. But anyway, here they are:<br /><ul><li><a href="http://www.slate.com/articles/technology/bitwise/2014/12/wikipedia_editing_disputes_the_crowdsourced_encyclopedia_has_become_a_rancorous.single.html">Sexism and bureaucracy at Wikipedia</a> and <a href="http://ergodicity.net/2015/01/23/linkage-55/">an update on the Walter Lewin sexual harassment story</a> (<a href="https://plus.google.com/100003628603413742554/posts/TzcWwiVhKtr">G+</a>)</li><br /><li><a href="http://www.dailykos.com/story/2015/01/28/1360765/-Gov-Scott-Walker-seeks-300-million-in-university-cuts-but-220-million-to-build-Bucks-a-new-arena">Wisconsin gov. Walker seeks major cuts on universities so he can build a sportsball facility;</a> Calif. gov. Brown isn't much better (<a href="https://plus.google.com/100003628603413742554/posts/SKkguKRxmLB">G+</a>)</li><br /><li><a href="http://www.confsearch.org/confsearch/faces/pages/topic.jsp?topic=Theory&sortMode=1&graphicView=1">Conference search: Theory</a> (<a href="https://plus.google.com/100003628603413742554/posts/9My7JoFhSgN">G+</a>)</li><br /><li><a href="https://twitter.com/INTERESTING_JPG/status/562618942217531393">Automated textual image analysis results</a> and <a href="http://deeplearning.cs.toronto.edu/i2t">engine</a> (<a href="https://plus.google.com/100003628603413742554/posts/grLqEiKonkk">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=GznQgTdEdI4">Super eggs</a>: the mathematics behind the shape of, among other things, Azteca Stadium in Mexico City (<a href="https://plus.google.com/100003628603413742554/posts/2QrxUEH2NDx">G+</a>)</li><br /><li><a href="http://libraries.calstate.edu/equitable-access-public-stewardship-and-access-to-scholarly-information">Cal State Univ. 
gives up on Wiley journals after hefty price increases and refusal to unbundle</a> (<a href="https://plus.google.com/100003628603413742554/posts/URkXdWxDzew">G+</a>)</li><br /><li><a href="https://gilkalai.wordpress.com/2015/02/06/from-oberwolfach-the-topological-tverberg-conjecture-is-false">Topological Tverberg counterexample</a>. It is true for all prime-power dimensions but that wasn't good enough to be true for all dimensions. (<a href="https://plus.google.com/100003628603413742554/posts/KRqdQqCt9Gw">G+</a>)</li><br /><li><a href="http://blog.matthen.com/post/97284098616/take-a-rectangle-and-cut-it-along-a-random-line">Randomly cut and flipped rectangles</a> from another Tumblr of interesting mathematical visualizations (<a href="https://plus.google.com/100003628603413742554/posts/4Dw5FthmMjg">G+</a>)</li><br /><li><a href="http://www.maureeneppstein.com/mve_journal/?p=634">1961 interview with F1 racing driver Bruce McLaren's family</a>. From my mother's blog; McLaren was her second cousin. 
(<a href="https://plus.google.com/100003628603413742554/posts/4vd3YZSkdfK">G+</a>)</li><br /><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2015/feb/10/muslim-rule-and-compass-the-magic-of-islamic-geometric-design">Muslim rule and compass: the magic of Islamic geometric design</a> (<a href="https://plus.google.com/100003628603413742554/posts/HorwnpBrtM9">G+</a>)</li><br /><li><a href="http://www.umass.edu/gradschool/sites/default/files/iranian_student_admissions_2_2015.pdf">UMass Amherst bans Iranian STEM grad students</a> (<a href="https://plus.google.com/100003628603413742554/posts/VDYSkY69tGe">G+</a>)</li><br /><li><a href="http://www.metafilter.com/146924/Paper-Engineering-Over-700-years-of-Fold-Pull-Pop-and-Turn">Many links on pop-up books and related paper engineering problems</a> (<a href="https://plus.google.com/100003628603413742554/posts/eijbaYWgV4w">G+</a>)</li><br /><li><a href="http://www.win.tue.nl/SoCG2015/?page_id=601">SoCG accepted papers, with abstracts</a> (<a href="https://plus.google.com/100003628603413742554/posts/LAJJqRDivFX">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/02/14/facebook-tells-native-american.html">The Nymwars continue at Facebook</a> (<a href="https://plus.google.com/100003628603413742554/posts/Wa3EkaMXgog">G+</a>)</li></ul>
Tags: anonymity, combinatorics, wikipedia, academia, conferences, geometry, family

Where do you get your BibTeX data? (Fri, 06 Feb 2015)
http://11011110.livejournal.com/304679.html
Formatting a couple hundred references for a proposal led me to wonder: If you find yourself wanting to look up the BibTeX data for a paper, where do you go? And how much do you have to edit it yourself afterwards?<br /><br />The three most obvious choices for me are <a href="http://www.informatik.uni-trier.de/~ley/db/">DBLP</a>, <a href="http://dl.acm.org/">ACM Digital Library</a>, or <a href="http://www.ams.org/mathscinet/">MathSciNet</a>.<br /><br />There used to be a project to maintain a collective file "geom.bib" with all the references that any computational geometer would ever use. I still have about 18 copies of it on my computer (presumably not all in sync with each other) from various papers that used it, but it became unwieldy (too big to use as one file) and seems to have fallen by the wayside. Additionally, many publishers supply citation files for their own publications, so you could use those, or even take the time to write your own. But my experience is that most of the publishers are not good at generating clean data (e.g. they use hyphens instead of en-dashes for page ranges, or permute conference title words into a different order than what you'd want to use in a citation), although at least they're better at it than Google Scholar.<br /><br />The big three above all have their quirks, but they generate pretty clean data (especially if you tell DBLP not to use crossref). Copying from them can be a lot easier and less error-prone than typing it all in yourself, and picking one source and sticking to it could also help achieve greater consistency. DBLP has the best coverage for computer science, I think. I recently looked at a five-year window of my papers (for the prior work section of that proposal) and it missed only three (two in non-computer-science journals about topology and mathematical psychology, and the third in an edited volume about cellular automata).<br /><br />My own idiosyncratic preference is for MathSciNet, though. 
Their coverage is almost as good for my purposes (sometimes better) but what ends up making the difference for me is their care about the capitalization of title words and formatting of math in titles. DBLP and ACM leave lots of words capitalized and let the BibTeX style lowercase them later, which mostly works, but fails when some words are proper nouns that should stay capitalized. MathSciNet takes care to lowercase everything to how it should appear in a citation (my preference) and to protect the letters that should remain uppercase. And for titles that contain formulas, MathSciNet gets it right and the other two don't.<br /><br />Example: ACM: "The h-Index of a Graph and Its Application to Dynamic Subgraph Statistics".<br />DBLP: "The h-Index of a Graph and Its Application to Dynamic Subgraph Statistics" (journal version); "The \emph{h}-Index of a Graph and Its Application to Dynamic Subgraph Statistics" (conference version).<br />MathSciNet: "The {$h$}-index of a graph and its application to dynamic subgraph statistics". One of these is correct and the others aren't.<br /><br />But maybe there's some new tool or database that beats all of these that I haven't yet found out about. One of my co-authors uses Zotero, but I haven't tried that myself. Are systems like it, based on shared libraries rather than comprehensive databases, still useful?<br /><br />(See also <a href="https://plus.google.com/u/0/100003628603413742554/posts/T7msni7sGmJ">discussion on G+</a> from the same post.)<a name='cutid1-end'></a>
Tags: tools, bibliography

Linkage (Sun, 01 Feb 2015)
http://11011110.livejournal.com/304478.html
Did you know...<ul><li>... that <a href="http://www.imdb.com/title/tt2582802/">Bernard Chazelle's son directed a film that has been nominated for a best-picture Oscar?</a> (<a href="https://plus.google.com/100003628603413742554/posts/FsKPpc8K545">G+</a>)</li><br /><li>... that <a href="http://www.sciencepubs.org/content/347/6217/14.full">the rebellion in Ukraine has caused many scientists and whole universities to move?</a> (<a href="https://plus.google.com/100003628603413742554/posts/fAThtaZX9kT">G+</a>)</li><br /><li>... that <a href="https://adamsheffer.wordpress.com/2015/01/19/a-list-of-recent-papers/">there have been many recent papers on counting geometric incidences?</a> (<a href="https://plus.google.com/100003628603413742554/posts/5AB15iLt8kc">G+</a>)</li><br /><li>... that <a href="http://www.wired.com/2015/01/chocolates-whose-intricate-architecture-designed-tweak-taste-buds/">the shape of a piece of 3d-printed chocolate might influence its flavor?</a> (<a href="https://plus.google.com/100003628603413742554/posts/Ri7GagMRtza">G+</a>)</li><br /><li>... that <a href="http://www.wired.com/2015/01/quanta-curves-from-flatness-kirigami/">placing precise slits in a flat paper surface can cause it to curve in predictable ways?</a> (<a href="https://plus.google.com/100003628603413742554/posts/RxzVP7VWdkJ">G+</a>)</li><br /><li>... that <a href="https://www.youtube.com/watch?v=on3ZLLKQp-4">the waterbear is a new fast knightship in Conway's game of life?</a> (<a href="https://plus.google.com/100003628603413742554/posts/hkGgm2ohJfG">G+</a>)</li><br /><li>... that <a href="http://www.thisiscolossal.com/2015/01/intricate-modular-paper-sculptures-by-richard-sweeney/">Richard Sweeney's paper-folding artworks are inspired by snow and clouds?</a> (<a href="https://plus.google.com/100003628603413742554/posts/e6xLbXJeeJS">G+</a>)</li><br /><li>... 
that <a href="https://www.google.com/webmasters/tools/mobile-friendly/">Google has a service for checking whether your home page is mobile-friendly?</a> (<a href="https://plus.google.com/100003628603413742554/posts/8fdGejK5U1W">G+</a>)</li><br /><li>... that <a href="http://hyrodium.tumblr.com/post/109000595139/i-made-gif-animations-of-sum-of-square-numbers">the sum of the first n squares is n(n+1)(2n+1)/6?</a> (<a href="https://plus.google.com/100003628603413742554/posts/JxFABtKkxj1">G+</a>)</li><br /><li>... that <a href="https://facultystaff.richmond.edu/~ebunn/homocentric/">epicycles can be visualized by spheres spinning inside each other?</a> (<a href="https://plus.google.com/100003628603413742554/posts/P5H2889XWxR">G+</a>)</li><br /><li>... that <a href="https://www.youtube.com/watch?v=74BGYzSkMeU">Paul Erdős traveled to Madras to meet Krishnaswami Alladi when Alladi was only an undergraduate?</a> (<a href="https://plus.google.com/100003628603413742554/posts/MRo43mTmSGN">G+</a>)</li><br /><li>... that <a href="http://aperiodical.com/2015/01/apiological-mathematical-speculations-about-bees-part-1-honeycomb-geometry/">you can persuade bees to make honeycombs in nonstandard tessellations by giving them patterned foundation plates?</a> (<a href="https://plus.google.com/100003628603413742554/posts/7ytwWuMJzsJ">G+</a>)</li><br /><li>... that <a href="http://googlescholar.blogspot.com/2015/01/blast-from-past-reprint-request.html">professors used to send each other postcards requesting printed copies of their recent papers?</a> (<a href="https://plus.google.com/100003628603413742554/posts/UPntqoRxWtk">G+</a>)</li><br /><li>... that <a href="http://boingboing.net/2015/01/22/origami-dollar-bill-koi.html">you can fold a dollar bill into a fish?</a> (<a href="https://plus.google.com/100003628603413742554/posts/RPB3AWW8Dhb">G+</a>)</li><br /><li>... 
that <a href="http://www.jebiga.com/strandbeest-kinetic-animal-sculptures-theo-jansen/">Theo Jansen's autonomous walking creatures have no brains?</a> (<a href="https://plus.google.com/100003628603413742554/posts/G8Cd8U1MEsk">G+</a>)</li></ul>
Tags: tools, cellular automata, academia, number theory, geometry, art, origami

The linear algebra of edge sets of graphs (Thu, 22 Jan 2015)
http://11011110.livejournal.com/304362.html
This quarter, in my advanced algorithms class, I've been going through <a href="http://www.cc.gatech.edu/fac/Vijay.Vazirani/book.pdf">Vazirani's <i>Approximation Algorithms</i> book</a> chapter-by-chapter, and learning lots of interesting material that I didn't already know myself in the process.<br /><br />One of the things I recently learned (in covering chapter 6 on feedback vertex set approximation)<sup>*</sup> is that, although all the students have taken some form of linear algebra, many of them have never seen a vector space in which the base field is not the real numbers or in which the elements of the vector space are not tuples of real coordinates. So instead of discussing the details of that algorithm I ended up spending much of the lecture reviewing the theory of binary vector spaces. These are very important in algebraic graph theory, so I thought it might be helpful to write a very gentle introduction to this material here.<br /> <br />First of all we need the concept of a <a href="https://en.wikipedia.org/wiki/Field_(mathematics)">field</a>. This is just a system of elements in which we can perform the usual arithmetic operations (addition, subtraction, multiplication, and division) and expect them to behave like the familiar real number arithmetic: addition and multiplication are associative and commutative, there are special values 0 and 1 that are the identities for addition and multiplication respectively, subtraction is inverse to addition, division is inverse to multiplication by anything other than zero, and multiplication distributes over addition. The field that's important for this material is a particularly simple one, <a href="https://en.wikipedia.org/wiki/GF(2)">GF(2)</a>, in which the required special values 0 and 1 are the only elements. The arithmetic of these two values can be described as ordinary integer arithmetic mod 2, or equivalently it can be described by saying that addition is Boolean xor and multiplication is Boolean and. 
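Both of those descriptions of GF(2) arithmetic are easy to spot-check exhaustively in Python; the two helper names below are my own:

```python
# GF(2): addition is Boolean xor, multiplication is Boolean and.
def gf2_add(a, b):
    return a ^ b

def gf2_mul(a, b):
    return a & b

# With only two elements, the field axioms can be checked by brute force.
for a in (0, 1):
    assert gf2_add(a, 0) == a  # 0 is the additive identity
    assert gf2_mul(a, 1) == a  # 1 is the multiplicative identity
    assert gf2_add(a, a) == 0  # each element is its own additive inverse
    for b in (0, 1):
        for c in (0, 1):
            # multiplication distributes over addition
            assert gf2_mul(a, gf2_add(b, c)) == gf2_add(gf2_mul(a, b), gf2_mul(a, c))
```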
Subtraction turns out to be the same as addition, and division by 1 (the only value that it's possible to divide by) is just the identity operation. It's not hard to verify that these operations have all the desired properties of a field, and doing so maybe makes a useful exercise (Exercise 1).<br /><br />Next, a <a href="https://en.wikipedia.org/wiki/Vector_space">vector space</a> is a collection of elements that can be added to each other and multiplied by <a href="https://en.wikipedia.org/wiki/Scalar_(mathematics)">scalars</a> from a field. (One can generalize the same concept to other kinds of arithmetic than fields but then one gets modules instead of vector spaces.) The vector addition operation must be commutative and invertible; this implies that it has an identity element, and this element (whatever it happens to be) is called the zero vector. Additionally, scalar-scalar-vector multiplications must be associative, scalar multiplication by the special element 1 of the field must be the identity operation, and scalar multiplication must be distributive over both vector and field addition.<br /><br />One easy way to construct vector spaces over a field <b>F</b> is to make its elements be <i>k</i>-tuples of elements of <b>F</b> with the addition and scalar multiplication operations acting independently on each coordinate, but it's not the only way. For the vector spaces used in this chapter, a different construction is more natural: we let the elements of the vector space be sets in some family of sets, and the vector addition operation be the <a href="https://en.wikipedia.org/wiki/Symmetric_difference">symmetric difference</a> of sets. The symmetric difference <i>S</i> Δ <i>T</i> of two sets <i>S</i> and <i>T</i> is the set of elements that occur in one but not both of <i>S</i> and <i>T</i>. 
This operation is associative, commutative, and invertible, where the inverse of a set is the same set itself: <i>S</i> Δ <i>T</i> Δ <i>T</i> = <i>S</i> regardless of which order you use to perform the symmetric difference operations. If a nonempty family of sets has the property that the symmetric difference of every two sets in the family stays in the family, then these sets can be interpreted as the elements of a vector space over GF(2) in which the vector addition operation is symmetric difference, the zero vector is the empty set (necessarily in the family because it's the symmetric difference of any other set with itself), scalar multiplication by 0 takes every set to the empty set, and scalar multiplication by 1 takes every set to itself. One has to verify that these addition and multiplication operations are distributive, but again this is a not-very-difficult exercise (Exercise 2).<br /><br />As with other kinds of vector spaces, these vector spaces of sets have bases, collections of vectors such that everything in the vector space has a unique representation as a sum of scalar products of basis vectors. Every two bases have the same number of vectors as each other (Exercise 3: prove this), and this number is called the dimension of the vector space. If the dimension is <i>d</i>, the total number of vectors in the vector space is always exactly 2<sup><i>d</i></sup>, because that is the number of different ways that you can choose a scalar multiple (0 or 1) for each basis vector. <br /><br />The families of sets that are needed for this chapter are subsets of edges of a given undirected graph. These can also be interpreted as subgraphs of the graph, but they're not quite the same because the usual definition of a subgraph also allows you to specify a subset of the vertices (as long as all edges in the edge subset have endpoints in the vertex subset), and we won't be doing that. 
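Python's `^` operator on sets computes exactly this symmetric difference, so the vector-space behavior is easy to spot-check; the edge names in this little illustration of mine are arbitrary:

```python
# Edge sets as vectors over GF(2): vector addition is symmetric difference.
S = frozenset([("a", "b"), ("b", "c")])
T = frozenset([("b", "c"), ("c", "a")])

assert S ^ T == frozenset([("a", "b"), ("c", "a")])  # in one set but not both
assert (S ^ T) ^ T == S      # every set is its own additive inverse
assert S ^ S == frozenset()  # the zero vector is the empty set

# Scalar multiplication over GF(2): 1 fixes a set, 0 sends it to the zero vector.
def scale(bit, s):
    return s if bit else frozenset()

assert scale(1, S) == S and scale(0, S) == frozenset()
```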
Every graph has three important vector spaces of this type associated with it, the edge space, the cycle space, and the cut space. The edge space is the family of all subsets of edges (including the set of all edges of the given graph and the empty set). That is, it is the <a href="https://en.wikipedia.org/wiki/Power_set">power set</a> of the set of all edges; it has a natural basis in which the basis vectors are the one-edge sets, and its dimension is the number of edges in the graph.<br /><br />The <a href="https://en.wikipedia.org/wiki/Cycle_space">cycle space</a> is the family of all subsets of edges that have even degree at all of the vertices of the graph (Exercise 4: prove that this family is closed under symmetric difference operations). So it includes the simple cycles of the graph, but it also includes other subgraphs; for instance in the graph of an octahedron (a six-vertex graph with four edges at each vertex) the set of all edges is in the cycle space, as are the sets of edges formed by pairs of triangles that touch each other at a single vertex and the sets complementary to triangles or 4-cycles. It's always possible to find a basis for the cycle space in which the basis elements are themselves simple cycles; such a basis is called a <a href="https://en.wikipedia.org/wiki/Cycle_basis">cycle basis</a>. For instance you can form a "fundamental cycle basis" by choosing a spanning forest of the given graph and then finding all cycles that have one edge <i>e</i> outside this forest and that include also the edges of the unique path in the forest that connects the endpoints of <i>e</i>. Or, for a planar graph, you can form a cycle basis by choosing one cycle for each bounded face of a planar embedding of the graph. 
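The fundamental cycle basis construction also gives a quick way to compute the dimension of the cycle space: grow a spanning forest and count the leftover edges, one basis cycle for each edge outside the forest. A sketch of mine (the function name is made up) using union-find, checked on the octahedron mentioned earlier:

```python
def cycle_space_dimension(vertices, edges):
    """Count the edges outside a spanning forest: one fundamental cycle each."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    non_forest = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            non_forest += 1   # this edge closes a fundamental cycle
        else:
            parent[ru] = rv   # a forest edge: merge the two trees
    return non_forest

# The octahedron: six vertices, all pairs adjacent except three opposite pairs.
octa_edges = [(i, j) for i in range(6) for j in range(i + 1, 6)
              if {i, j} not in ({0, 1}, {2, 3}, {4, 5})]
assert cycle_space_dimension(range(6), octa_edges) == 7  # = 12 - 6 + 1
```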
There are lots of interesting algorithmic problems associated with the cycle space and its cycle bases, but for this chapter the main thing that's needed is to compute its dimension, which has the nice formula |<i>E</i>| − |<i>V</i>| + <i>c</i>, where <i>E</i> is the edge set of the given graph, <i>V</i> is the vertex set, and <i>c</i> is the number of connected components. One name for this dimension is the <a href="https://en.wikipedia.org/wiki/Circuit_rank">cyclomatic number</a> of the graph, and the book chapter denotes it as cyc(<i>G</i>). (It's also possible to interpret it topologically as the first Betti number of the graph but for students who don't already know about binary vector spaces that would probably be more confusing than helpful.)<br /><br />The cut space of the graph doesn't take part in this chapter, but can be defined similarly as the set of all cut-sets of the graph. A <a href="https://en.wikipedia.org/wiki/Cut_(graph_theory)">cut</a> of a graph is a partition of its vertices into two disjoint subsets; in some contexts we require the subsets to both be nonempty but we don't do that here, so the partition into an empty set and the set of all vertices is one of the allowed cuts. The corresponding cut-set is the set of edges that have one endpoint in each of the two subsets. The family of cut-sets is closed under symmetric difference (Exercise 5) so it forms a vector space, the cut space. If the edges are all given positive weights and the graph is connected, then the minimum weight basis of the cut space can be represented by a tree on the vertices of the graph, in which each tree edge determines a cut (the partition of the tree into two subtrees formed by deleting that edge) and has an associated number (the weight of its cut). 
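The dimension formula itself is cheap to compute; a sketch in Python (again mine, not from the chapter), counting connected components with a tiny union-find:

```python
def cyclomatic_number(vertices, edges):
    """cyc(G) = |E| - |V| + c, with c counted by union-find."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    components = len(parent)
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:               # edge merges two components
            parent[ru] = rv
            components -= 1
    return len(edges) - len(parent) + components

# Octahedron (K6 minus the perfect matching 0-5, 1-4, 2-3):
# 6 vertices, 12 edges, connected, so cyc = 12 - 6 + 1 = 7.
octahedron = [(u, v) for u in range(6) for v in range(u + 1, 6)
              if u + v != 5]
assert cyclomatic_number(range(6), octahedron) == 7
```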
This tree is called the <a href="https://en.wikipedia.org/wiki/Gomory%E2%80%93Hu_tree">Gomory–Hu tree</a> of the graph and it came up (stripped of its linear-algebra origin) earlier, in an approximation for <i>k</i>-cuts in chapter 4. I also have a recent preprint on computing this basis and this tree for graphs that can be embedded onto low-genus surfaces: see <a href="http://arxiv.org/abs/1411.7055">arXiv:1411.7055</a>.<br /><br /><small><sup>*</sup>Unrelatedly, in preparing to cover this topic, I was confused for a long time by a typo in this chapter. On page 56 it states that, for a minimal feedback set, "clearly" the sum over feedback vertices of the number of components formed by deleting that one vertex equals the number of feedback vertices plus the number of components that are formed by deleting the whole feedback set but that touch only one vertex in the set. This is not true. What is true, and what is needed for the later argument, is that the left hand side is greater than or equal to the right hand side.</small><a name='cutid1-end'></a>
Linkage (Fri, 16 Jan 2015 04:16:46 GMT)
http://11011110.livejournal.com/304060.html
<ul><li><a href="http://www.thisiscolossal.com/2015/01/pixel-a-mesmerizing-dance-performance-incorporating-digital-projection/">Real-time 3d special effects in modern dance</a> (<a href="https://plus.google.com/100003628603413742554/posts/Hp8vcVRmzHS">G+</a>)</li><br /><li><a href="http://stemfeminist.com/2015/01/05/450/">How not to react to conference talks that happen to be presented by women</a> (<a href="https://plus.google.com/100003628603413742554/posts/KvCqKMhU84U">G+</a>, including also an unrelated report from the SODA business meeting)</li><br /><li><a href="http://www.neatorama.com/2015/01/07/Iced-Intrigue/">Photos of icy landscapes</a> showing how varied the geometry of ice can be (<a href="https://plus.google.com/100003628603413742554/posts/M9v6nj2Kfu2">G+</a>)</li><br /><li><a href="http://awards.acm.org/press_releases/fellows-2014b.pdf">New ACM fellows</a> (<a href="https://plus.google.com/100003628603413742554/posts/8HgRjNyNuQE">G+</a>)</li><br /><li><a href="http://www.maths.manchester.ac.uk/~jm/Choreographies/about.html">n-body choreographies</a> (strange solutions to the n-body problem in which all bodies follow each other along a curve; <a href="http://gminton.org/#choreo">more</a> and <a href="https://en.wikipedia.org/wiki/N-body_choreography">still more</a>; <a href="https://plus.google.com/100003628603413742554/posts/84uAkqPtzrM">G+</a>)</li><br /><li><a href="http://www.washingtonpost.com/news/speaking-of-science/wp/2015/01/08/men-on-the-internet-dont-believe-sexism-is-a-problem-in-science-even-when-they-see-evidence/?Post+generic=?tid%3Dsm_twitter_washingtonpost">Men (on the Internet) don’t believe sexism is a problem in science, even when they see evidence</a> (<a href="https://plus.google.com/100003628603413742554/posts/9kgtv1mh5SR">G+</a>)</li><br /><li><a href="https://plus.google.com/101584889282878921052/posts/VbBk9JrLxqm">The fractional chromatic number of the plane</a> (<a href="https://plus.google.com/100003628603413742554/posts/Ea6VqUWL6XG">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=KboGyIilP6k">Elwyn Berlekamp video on dots-and-boxes strategy</a> (<a href="https://plus.google.com/100003628603413742554/posts/UrgtLhCcEi9">G+</a> <a href="https://plus.google.com/113862074718836293294/posts/aJi4HxTP9Pe">reshare</a>)</li><br /><li><a href="http://richardelwes.co.uk/2015/01/02/the-grothendieck-song/">Richard Elwes sings the Grothendieck Song for us</a> (<a href="https://plus.google.com/100003628603413742554/posts/XDe3WtoERW5">G+</a>)</li><br /><li><a href="http://www.thisiscolossal.com/2015/01/fascinating-3d-printed-fibonacci-zoetrope-sculptures/">Animated shapes from a 3d printed object, a turntable, and a strobe light</a> (<a href="https://plus.google.com/100003628603413742554/posts/Jpk5j2sKQqB">G+</a> <a href="https://plus.google.com/117273001021476361745/posts/QjURgBC7K3j">reshare</a>)</li><br /><li><a href="http://gruze.org/tilings/">Why tilings by regular polygons can't include the pentagon</a> (<a href="https://plus.google.com/100003628603413742554/posts/PZMj7dnC9oC">G+</a> via <a href="http://www.metafilter.com/146120/No-Pentagons">MF</a>)</li></ul>
Report from SODA, ALENEX, and ANALCO (Wed, 07 Jan 2015 07:56:32 GMT)
http://11011110.livejournal.com/303850.html
I just returned from San Diego, where ALENEX, ANALCO, and SODA were held this year. I'm only going to write about a fraction of the things that happened at these conferences, in part because (with four different sessions happening in parallel much of the time) it was only possible for one person to see a fraction of those things. Also I already posted bits about the <a href="http://11011110.livejournal.com/303471.html">ALENEX/ANALCO business meeting</a> and <a href="https://plus.google.com/100003628603413742554/posts/KvCqKMhU84U">SODA business meeting</a> so I won't repeat those here.<br /><br />Sunday's scheduled plenary talk by Peter Winkler on pursuit games was unfortunately cancelled because of illness; instead we got a nice talk on <a href="https://en.wikipedia.org/wiki/Locally_decodable_code">locally decodable codes</a> by Sergey Yekhanin. Unfortunately I missed a couple of Sunday afternoon talks I wanted to see (Timothy Chan on the Four Russians and Julia Chuzhoy on the wall theorem) because they conflicted with another session of interest to me. In it, Michael Walter described how to list all short lattice vectors in exponential time (but a better exponential than before). Friedrich Eisenbrand showed that an old greedy algorithm to approximate the largest simplex within the convex hull of a high-dimensional point set is better than previously thought, by a nice analysis that involves forming an orthogonal basis for the space that has the same volume as the simplex, grouping the basis vectors by their lengths, and approximating the convex hull by a product of balls for each group. 
And Sepideh Mahabadi showed that if you want to construct a data structure for a set of lines that can find the approximate-nearest line to a query point, the cost is only about the same as for the more standard problem of finding the nearest point to a query point.<br /><br />The mystery of the late Sunday session was the meaning of the talk title "The parameterized complexity of K", by Bingkai Lin, the winner of both the best student paper and best overall paper awards. It turned out to be a typo in the program: "K" should have been "k-biclique". The problem Lin studied is finding a complete bipartite subgraph K<sub>k,k</sub> in a given graph; one would expect it to be W[1]-hard, like clique-finding, but this wasn't known. Lin proved the expected hardness result by a reduction from clique-finding in which he took a graph product of a graph possibly containing a large clique with another graph having some sort of Ramsey property, and showed that the resulting product graph contains a large biclique if and only if the original graph contains a large clique. He gave two constructions for the Ramsey graph, a randomized one that only blows up the parameter k (the size of the clique one is trying to find) by a polynomial factor, and a deterministic one that blows it up by a factorial factor. So there is a big gap between deterministic and randomized, but to me that's not surprising when Ramsey theory is involved. The bigger question to me is whether the polynomial blowup of even the randomized reduction can be changed into constant blowup, so that we can extend known results that n<sup>O(k)</sup> is optimal for clique-finding (unless some form of the exponential time hypothesis is false) to similar results for biclique-finding. <br /><br />Monday I ended up spending the whole day at ALENEX. 
UCI student Jenny Lam started the day with online algorithms for a version of caching in which the items to be cached have sizes (like a web cache and unlike a virtual memory system) and must be assigned contiguous locations within the cache memory. A system of breaking the cache memory into blocks, grouping items into FIFO queues of items with similar priority, and representing each queue by a linked sequence of blocks turns out to work well. It gives up a little in performance compared to systems for caching that don't worry about the memory placement part of the problem, but most of what it gives up is because of using FIFO instead of LRU within each queue rather than because of the fixed memory placement. Also Monday morning, Claire Mathieu gave the second of the invited talks, on moving from worst-case inputs to a "noisy input" model in which one assumes that the input comes from some kind of ground truth with a nice structured solution that has been randomly perturbed; one would like to be able to recover the solution (with high probability) to the extent possible. This turns out to be mathematically almost the same as the "planted solution" model of generating test inputs, in which a random input is perturbed by making it have a solution of higher quality than a random input is likely to have by itself, and then one asks whether this planted solution can be found among the randomness. However, the emphasis of why one is doing this is different: not as a test case, but because real-world inputs are often nicer than worst-case inputs and we want to try to capture that in our input model.<br /><br />In the first afternoon talk, Sandor Fekete set up an integer program for finding triangulations that maximize the length of the shortest edge. 
He discussed two lower bounds for the problem, one being the shortest convex hull edge and the second being the shortest interior diagonal that is not crossed by any other diagonal; the second one turns out to be more powerful than the first, and he posed as an open question (or actually a conjecture but with an answer I don't believe) the problem of finding the expected length of this shortest uncrossed diagonal, for random inputs (say uniform in a square). And Maarten Löffler and Irina Kostitsyna gave what I think was the most entertaining talk of the conference, with hidden party hats under the seats of some audience members and gummy bears on the slides, about algorithms for approximating certain geometric probabilities (whether two random points from two given distributions can see each other around given obstacles). The most memorable talk for me from the late afternoon session was the last one, by my academic brother Pino Italiano, on 2-connectivity in directed graphs. In undirected graphs, you can define a 2-connected block to be a maximal subgraph that can't be disconnected by a vertex deletion, or you can define a 2-connected component to be an equivalence class of the pairwise relation of having two vertex-disjoint paths, but these two definitions give you the same thing. In directed graphs, the blocks and components are not the same, and you can construct blocks in linear time but the best algorithms for components are quadratic. In his ALENEX paper (he also had a SODA paper in this area), Pino implemented and tested several algorithms for these problems, with the surprising result that even though the worst case performance of the component algorithms is quadratic the practical performance seems to be linear. So this probably means there are still theoretical improvements to be made.<br /><br />That brings me to today. 
In the morning, Natan Rubin spoke about systems of Jordan curves that all intersect each other (it is conjectured that most of the intersections must consist of more than one point, and he confirmed this for some important special cases) and Andrew Suk spoke about geometric Ramsey theory (for instance if you have n points in general position in the plane you can find a logarithmic subset for which all triples have the same order type, meaning they are in convex position); he significantly increased the size of the subset one can find for several similarly-defined problems. And Avrim Blum gave the third invited talk, on algorithmic problems in machine learning.<br /><br />In the early afternoon, there were again two sessions I wanted to go to in parallel, so I decided to try switching between them. In one of the two, Joshua Wang spoke on his work with Vassilevska-Williams, Williams, and Yu (how likely is VVW to be first alphabetically among so many authors?) on subgraph isomorphism. For finding three-vertex induced subgraphs, the hardest graphs to find are triangles or their complements, which can be found in matrix-multiplication time. The authors of this work showed that the same time bounds can extend to certain four-vertex subgraphs, such as the diamond (K4 minus an edge). A randomized algorithm for finding diamonds turns out to be easy, by observing that a certain matrix product counts diamonds plus six times the number of K4s, and then choosing a random subgraph to make it likely that if a diamond exists, the number of diamonds is nonzero mod six. The harder part of the paper was making all this deterministic. In the other session, Topi Talvitie formulated a natural geometric version of the k-shortest paths problem: find k shortest paths that are locally unimprovable, or equivalently find k shortest homotopy classes of paths. 
It can be solved by a graph k-shortest paths algorithm on the visibility graph of the obstacles, or by a continuous Dijkstra algorithm that he explained with a nice analogy to the levels of a parking garage (see the demo at <a href='http://dy.fi/wsn'>http://dy.fi/wsn</a>). Louis Barbu showed that there are still more tricks to be extracted from Dobkin-Kirkpatrick hierarchies: by using one hierarchy of each type and switching between a polyhedron and its polar as necessary, it is possible to find either a separating plane or an intersection point of two polyhedra in log time. And Jeff Erickson's student Chao Xu spoke on weakly simple polygons; these are polygons whose edges are allowed to overlap each other without crossing, and both defining them properly and recognizing them efficiently turns out to be an interesting problem.<br /><br />In the final session, I learned from Daniel Reichman about contagious sets: sets of vertices in a graph with the property that if you repeatedly augment the set by adding vertices that have two neighbors already in the set, it eventually grows to cover the whole graph. In a d-regular expander, the size of such a set should be n/d<sup>2</sup> (it turns out), and Reichman presented several partial results in this direction. Loukas Georgiadis gave the more theoretical of the two talks on directed 2-connectivity. Jakub Tarnawski showed how to generate uniformly random spanning trees in time O(m<sup>4/3</sup>), the best known for sparse graphs. (The problem can be solved by taking a random walk until the whole graph is covered and then selecting the first edge into each vertex, but that is slower.) 
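As an aside, the contagious-set spreading process that Reichman studied is easy to simulate; here's a minimal Python sketch (my toy code, not from the talk, with the graph as an adjacency dict and the usual threshold of two already-active neighbors):

```python
def closure(adj, seed, threshold=2):
    """Repeatedly activate any vertex with at least `threshold`
    already-active neighbors; return the final active set.  The seed
    is contagious when the closure is the whole vertex set."""
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and \
                    sum(w in active for w in adj[v]) >= threshold:
                active.add(v)
                changed = True
    return active

# In K4 any two vertices form a contagious set, but one alone does not.
K4 = {v: [w for w in range(4) if w != v] for v in range(4)}
assert closure(K4, {0, 1}) == {0, 1, 2, 3}
assert closure(K4, {0}) == {0}
```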
And Hongyang Zhang formulated a notion of connectivity of pairs of vertices in which one chooses a random spanning forest in a graph and then looks at the probability that the pair is part of the same tree; this apparently has connections to liquidity in certain financial webs of trust.<br /><br />The <a href="http://epubs.siam.org/doi/book/10.1137/1.9781611973754">ALENEX</a>, <a href="http://epubs.siam.org/doi/book/10.1137/1.9781611973761">ANALCO</a>, and <a href="http://epubs.siam.org/doi/book/10.1137/1.9781611973730">SODA</a> proceedings are all online, there's plenty more of interest beyond what I've mentioned here, and it all appears to be freely accessible without any need for institutional subscriptions.<a name='cutid1-end'></a>
Circular arc contacts, Miura slides, and ALENEX business (Mon, 05 Jan 2015 06:06:26 GMT)
http://11011110.livejournal.com/303471.html
While I've been at SODA one of my co-authors has been busy preparing what turns out to be my first preprint of the new year: <a href="http://arxiv.org/abs/1501.00318">Contact Representations of Sparse Planar Graphs</a> (arXiv:1501.00318, with Alam, Kaufmann, Kobourov, Pupyrev, Schulz, and Ueckerdt). I think this is one of those cases where a picture can go a lot farther than words in explaining: here's an example of what we're looking at.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/arc-cuboctahedron.png"></div><br /><br />The 12 circular arcs of this diagram correspond to the 12 vertices of a <a href="https://en.wikipedia.org/wiki/Cuboctahedron">cuboctahedron</a>, and the 24 contact points between arcs (the points where one arc ends as it runs into another arc) correspond to the 24 edges of the cuboctahedron. What we want to know is: which other graphs can be represented in this way, as the cuboctahedron can? They have to be planar, and every subgraph has to have at most twice as many edges as vertices (because every set of arcs has twice as many endpoints as arcs) and beyond that it's a little mysterious. But we have some natural subclasses of the planar graphs for which we can prove that such a representation always exists (for instance the 4-regular planar graphs) and some NP-hardness results.<br /><br />Two other uploads of possible interest: <a href="http://www.ics.uci.edu/~eppstein/0xDE/ALENEX15-business-meeting.pdf">my report as co-PC-chair at the ALENEX business meeting</a>, and <a href="http://www.ics.uci.edu/~eppstein/pubs/BalDamEpp-SODA-15.pdf">my talk on Miura folding</a> (both with small corrections to the slides I actually used). I've <a href="http://11011110.livejournal.com/297829.html">posted here before</a> about the Miura folding results. 
For the ALENEX report, besides the usual breakdown of acceptance rates and subtopics, there were two more substantial issues for future planning: should ALENEX move its submission deadline earlier than the SODA notification date (so that the PC has adequate time to review the submissions), and should it accept more papers? The sentiment at the meeting seemed to be in favor of both ideas.
Greetings from San Diego (Sun, 04 Jan 2015 00:53:07 GMT)
http://11011110.livejournal.com/303315.html
I've just arrived in San Diego for the annual <a href="http://www.siam.org/meetings/da15/">Symposium on Discrete Algorithms</a> and its associated satellite workshops ALENEX and ANALCO. That little strip of blue on the left edge of the photo is the harbor; you can also see a little bit of it directly from the hotel, if your window faces in the right direction. If you're also here, greetings!<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/WestinSanDiego/WestinSanDiego-m.jpg" border="2" style="border-color:black;" /></div>
2014 in algorithm preprints (Fri, 02 Jan 2015 05:44:20 GMT)
http://11011110.livejournal.com/302869.html
Happy New Year, everyone! It's time once again to give a status report on the cs.DS (data structures and algorithms) section of the arXiv. The arXiv as a whole just hit a big milestone, one million preprints uploaded. cs.DS forms a small fraction of that, but still, last year there were 1182 new preprints, up a little from <a href="http://11011110.livejournal.com/281196.html">the previous year</a>.<br /><br />There has been some talk recently about possible changes to the system, including replacing author choices of subarea within cs by the results of an automated text classification system (which already exists but is only used for advisory purposes now), and allowing moderators to reject papers that they deem to be unscientific or not of any plausible interest to the readers (as already happens in the physics and math parts of arXiv). I think any actual change is likely to happen only very slowly, but it's possibly worth thinking about the things arXiv does well and the other things that it might be able to do better.<br /><br />With so many preprints, it's hard to choose among them (I guess that's what we have conference program committees for). Still, here's a selection of ten(-ish) I found personally interesting, excluding my own papers and a few <a href="http://11011110.livejournal.com/291361.html">I wrote about earlier</a>. I'm sure I missed some other good ones, so feel free to leave your own favorites in the comments.<br /><ul><li><b>Popular conjectures imply strong lower bounds for dynamic problems</b>, Amir Abboud and Virginia Vassilevska Williams, <a href="http://arxiv.org/abs/1402.0054">arXiv:1402.0054</a> and FOCS 2014. The <a href="http://en.wikipedia.org/wiki/Exponential_time_hypothesis">exponential time hypothesis</a> is the unproven but widely-believed conjecture that certain NP-complete problems require exponential time. 
We already know how to scale it down, showing that (if ETH is true) certain known polynomial-time or fixed-parameter-tractable algorithms for static problems are optimally fast. This paper extends these results to scaled-down dynamic graph algorithms for basic problems such as reachability, showing that ETH explains the fact that we don't have subpolynomial update times for these problems.</li><br /><li><b>Shortest paths in intersection graphs of unit disks</b>, Sergio Cabello and Miha Jejčič, <a href="http://arxiv.org/abs/1402.4855">arXiv:1402.4855</a> and CGTA 2014. <a href="https://en.wikipedia.org/wiki/Unit_disk_graph">Unit disk graphs</a> can have a quadratic number of edges, so algorithms whose running time is subquadratic have to use the geometric structure rather than just constructing the graph and using a general-purpose graph algorithm. This paper shows that unweighted shortest paths can be found in <i>O</i>(<i>n</i> log <i>n</i>) time (essentially optimal) and that the weighted problem can be solved only a little slower.</li><br /><li><b>The complexity of the simplex method</b>, John Fearnley and Rahul Savani, <a href="http://arxiv.org/abs/1404.0605">arXiv:1404.0605</a>. <b>On simplex pivoting rules and complexity theory</b>, Ilan Adler, Christos Papadimitriou, and Aviad Rubinstein, <a href="http://arxiv.org/abs/1404.3320">arXiv:1404.3320</a> and IPCO 2014. A long line of algorithms and discrete geometry research is rooted in the phenomenon that the simplex method can be used to find linear program solutions in practice, but in theory most variants of it can be forced to take an exponential number of steps. 
These two papers look at the solution trajectories found by this method, and show that (as well as being long) they have high computational complexity: it can be PSPACE-complete to tell whether a point is part of the trajectory, or (for degenerate problems) which solution the simplex method will end up at.</li><br /><li><b>Parameterized streaming algorithms for vertex cover</b>, Rajesh Chitnis, Graham Cormode, MohammadTaghi Hajiaghayi, and Morteza Monemizadeh, <a href="http://arxiv.org/abs/1405.0093">arXiv:1405.0093</a> and SODA 2015. <b>Streaming kernelization</b>, Stefan Fafianie and Stefan Kratsch, <a href="http://arxiv.org/abs/1405.1356">arXiv:1405.1356</a> and MFCS 2014. Streaming meets parameterized complexity: many parameterized algorithms take linear time in their input size but exponential or worse time in some other parameter, so it makes sense to ask whether their linear time can scale even to problems too big to fit into main memory.</li><br /><li><b>Flip distance is in FPT time <i>O</i>(<i>n</i> + <i>k</i>⋅<i>c</i><sup><i>k</i></sup>)</b>, Iyad Kanj and Ge Xia, <a href="http://arxiv.org/abs/1407.1525">arXiv:1407.1525</a> and STACS 2015. The version of flip distance considered here is for triangulations of geometric point sets; a flip is a replacement of two triangles that form a convex quadrilateral by the other two triangles for the same quadrilateral, and flip distance is the minimum number of flips needed to change one triangulation to another. It's known to be NP-hard for this variant, and even the special case of convex polygons is interesting (of unknown complexity). You might think that FPT is obvious: just keep the parts of the triangulation that are already correct, and flip the rest. But it's more complicated than that, because sometimes you might want to flip things that are already correct to get them out of the way, and then flip them back again later. 
(This makes a convenient point to repeat my standard warning that formulas in paper titles are a bad idea.)</li><br /><li><b>Generating <i>k</i>-independent variables in constant time</b>, Tobias Christiani and Rasmus Pagh, <a href="http://arxiv.org/abs/1408.2157">arXiv:1408.2157</a> and FOCS 2014. From the title, this looks like it might be about hash functions, but it's not. <i>k</i>-wise independence is an important assumption in that context, allowing practical hashing algorithms to be used with limited randomness, but there's a lower bound showing that computing such hash functions in constant time requires a large amount of memory. On the other hand, in some other algorithms you might just need some pseudorandom values rather than a function that you can call later with the same input and get the same output. So <i>k</i>-wise independent random generation should be easier than <i>k</i>-wise independent hashing, and this paper shows that it actually is. Specifically, they show how to generate pseudorandom values over a finite field in constant time per value with a number of bits that's linear in the independence parameter and logarithmic in everything else.</li><br /><li><b>Dynamic integer sets with optimal rank, select, and predecessor search</b>, Mihai Pătrașcu and Mikkel Thorup, <a href="http://arxiv.org/abs/1408.3045">arXiv:1408.3045</a> and FOCS 2014. Mihai's last paper? This uses word-RAM operations to provide a constant-time data structure for the named operations on sets whose size is polynomial in the word length. This is almost like the atomic sets in fusion trees, but better because it can be updated more quickly.</li><br /><li><b>Computing classic closeness centrality, at scale</b>, Edith Cohen, Daniel Delling, Thomas Pajor, and Renato F. Werneck, <a href="http://arxiv.org/abs/1409.0035">arXiv:1409.0035</a> and COSN 2014. The closeness centrality of a node in a network is, essentially, the average distance to all the other nodes. 
Nodes with smaller average distance are more central; it is in this sense that Paul Erdős is central to the mathematical collaboration network or Kevin Bacon is central to the acting co-star network. I had a paper at SODA 2001 on approximating centrality, but it depended on an assumption that the network has low diameter. This paper makes no such assumption but nevertheless manages to estimate the centrality of all nodes accurately in near-linear time.</li><br /><li><b>Simple PTAS's for families of graphs excluding a minor</b>, Sergio Cabello and David Gajser, <a href="http://arxiv.org/abs/1410.5778">arXiv:1410.5778</a>. Approximation algorithms for planar graphs and their generalizations are nothing new, but they generally involve the algorithm knowing a lot about the graph and its structure (for instance using a separator decomposition). This paper shows that even very simple methods (just a local search) can give good approximations, with all the graph structure showing up in the analysis rather than in the algorithm itself.</li><br /><li><b>Beyond the Euler characteristic: Approximating the genus of general graphs</b>, Ken-ichi Kawarabayashi and Anastasios Sidiropoulos, <a href="http://arxiv.org/abs/1412.1792">arXiv:1412.1792</a>. If a given graph can be embedded into a surface of bounded genus, then it can be embedded into a surface of bounded (but larger) genus in polynomial time (independent of the genus). The dependence of the embedded genus on the optimal genus is only polynomial, and this also leads to approximation algorithms with a sublinear approximation ratio. Previously such an approximation was only known for graphs of bounded degree.</li></ul><a name='cutid1-end'></a>
Linkage for the end of the year (Wed, 31 Dec 2014 23:18:53 GMT)
http://11011110.livejournal.com/302619.html
After <a href="http://11011110.livejournal.com/292825.html">I returned to Google+</a> in mid-year (once Google rescinded their real-name policy) I've been sharing approximately a link a day there, and collecting the links for twice-monthly roundups here (in part to allow me to find my posts again since G+ provides very little organization for old posts). This is the latest batch:<br /><ul><li><a href="http://www.wired.com/2014/12/disqus/">Pseudonyms are used mostly for privacy, not trolling</a> (<a href="https://plus.google.com/100003628603413742554/posts/DiyEZByxKKG">G+</a>)</li><br /><li><a href="http://www.scientificamerican.com/article/for-sale-your-name-here-in-a-prestigious-science-journal/">Attacks on the peer review system</a> including authorship for sale, false refereeing, plagiarism, etc (the <a href="https://plus.google.com/u/0/100003628603413742554/posts/9R2ZRpSqDM9">G+</a> post includes several more links)</li><br /><li><a href="http://www.thisiscolossal.com/2014/12/randomly-generated-polygonal-insects-by-istvan-giordano-for-neonmob/">Randomly generated polygonal insects</a> (<a href="https://plus.google.com/100003628603413742554/posts/VcHk3nJYFLa">G+</a>)</li><br /><li><a href="https://ayvlasov.wordpress.com/2012/07/23/qu-ants/">Qu-ant reversible cellular automata</a> (<a href="https://plus.google.com/117663015413546257905/posts/aTzuQjw3w4c">G+</a> <a href="https://plus.google.com/100003628603413742554/posts/9nwqyTG7MpW">reshare</a>)</li><br /><li><a href="http://yeyorigami.blogspot.com/2014/12/sonobe-unit-polyhedra.html">Sonobe unit polyhedra</a>, modular polyhedra as Christmas decorations (<a href="https://plus.google.com/112844741882313681044/posts/CcjCPUSkMxS">G+</a> <a href="https://plus.google.com/100003628603413742554/posts/4iAu9jyBBRM">reshare</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=y97rBdSYbkg">Domino chain reaction</a> video demonstrating exponential scaling in energy amplification (<a 
href="https://plus.google.com/100003628603413742554/posts/gXNSrTWZx8m">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Langley%E2%80%99s_Adventitious_Angles">The adventitious angle puzzle</a> for the advent season (<a href="https://plus.google.com/100003628603413742554/posts/JYtwdoRXFDM">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=sxnX5_LbBDU">The Gauss Christmath Special</a> video (<a href="https://plus.google.com/u/0/100003628603413742554/posts/XbLjcWcXdEy">G+</a>)</li><br /><li><a href="http://mathoverflow.net/questions/191495/collection-of-conjectures-and-open-problems-in-graph-theory">Collections of graph theory open problems</a> (<a href="https://plus.google.com/100003628603413742554/posts/g4gjcVYRdNQ">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=v678Em6qyzk&feature=autoshare">Knuth's dragon-curve mistake</a> video (<a href="https://plus.google.com/113862074718836293294/posts/F3ft1xuawGJ">G+</a> <a href="https://plus.google.com/100003628603413742554/posts/WswwnKhbSs4">reshare</a>)</li><br /><li><a href="http://www.the-scientist.com/?articles.view/articleNo/41677/title/Q-A--One-Million-Preprints-and-Counting/">One million arXiv preprints</a> (<a href="https://plus.google.com/100003628603413742554/posts/LgijnoYsZGK">G+</a>)</li><br /><li><a href="http://www.extremetech.com/extreme/168288-folded-paper-lithium-ion-battery-increases-energy-density-by-14-times">Origami batteries</a> (<a href="https://plus.google.com/100003628603413742554/posts/QNSqmVNBRvG">G+</a>)</li></ul>

Tags: unsolved, cellular automata, anonymity, academia, geometry, graph theory, origami

Wed, 31 Dec 2014 05:43:30 GMT
Mendocino menagerie
http://11011110.livejournal.com/302520.html
My parents' house in Mendocino is full of books and sculptures of creatures (mostly cats). Here are some of them:<br /><br /><div align="center"><table border="0" cellpadding="10">
<tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/FrontPorchCat.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/FrontPorchCat-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/OaxacanJaguar.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/OaxacanJaguar-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/AwayRoomCat.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/AwayRoomCat-s.jpg" border="2" style="border-color:black;" /></a></td>
</tr><tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/DreamingOfBirds.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/DreamingOfBirds-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/SunnySpot.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/SunnySpot-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/StairSentryCat.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/StairSentryCat-s.jpg" border="2" style="border-color:black;" /></a></td>
</tr></table></div><br /><br /><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/index.html">The rest of the gallery</a>.

Tags: art, mendocino, photography, family

Tue, 30 Dec 2014 05:37:04 GMT
Back from the land of no internet
http://11011110.livejournal.com/302266.html
In the unlikely event that anyone wondered why I didn't post anything either here or over on my Google+ account for the last few days, it's because I unexpectedly found myself incommunicado, visiting relatives for Christmas. Here are two of them, my cousin's daughters.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/zj/ZoyaAndJessa-m.jpg" border="2" style="border-color:black;" /></div><br /><br />It's normal for there to be no cell phone service at my parents' house in Mendocino. Cell phones finally reached downtown Mendocino a couple of years ago over the objections of some protesters who were terrified of being exposed to any form of electromagnetic radiation, but there's a hill between downtown and the house that blocks the signal. There would normally be landline phone service there, but the lines got flooded in the big storm a couple of weeks ago and AT&T hasn't succeeded in drying them out yet. And my parents also have cable internet, but for some other reason that went down too. So we had to resort to old-fashioned behavior like reading books or actually interacting with each other instead of all being absorbed in our own separate electronic devices the way we otherwise would likely have been.

Tags: mendocino, photography, family

Wed, 17 Dec 2014 08:17:49 GMT
Linked polytopes and toric grid tessellations
http://11011110.livejournal.com/301965.html
In my recent posting on <a href="http://11011110.livejournal.com/301197.html">four-dimensional polytopes containing linked or knotted cycles of edges</a>, I showed pictures of linked cycles in three examples, the (3,3)-duopyramid, hypercube, and (in the comments) truncated 5-cell. All three of these have some further special properties: the two linked cycles are induced cycles (there are no edges between two non-consecutive vertices in the same cycle), they include all the vertices in the graph, and their intersection with any two- or three-dimensional face of the polytope forms a connected path.<br /><br />When this happens, we can use it to construct a nice two-dimensional grid representation of the polytope. The set of pairs (<i>x</i>,<i>y</i>) where <i>x</i> is a position on one of the cycles (at a vertex or along an edge) and <i>y</i> is a position on the other cycle forms a two-dimensional space, topologically a torus. We can think of this as a grid with wrap-around boundary conditions, where the grid lines correspond to vertex positions on one or the other cycle. The number of grid lines in each dimension is just the length of the corresponding cycle. Then, each non-cycle edge of the polytope connects one vertex from each cycle, so it can be represented as a grid point on this torus. Each two-dimensional face of the polytope has two non-cycle edges, and can be represented as a line segment connecting the corresponding two grid points (perhaps wrapping around from one side of the grid to the other). And when we draw these grid points and line segments, they divide the grid into cells (again, perhaps wrapping around) that turn out to correspond to the 3-dimensional faces of the polytope. 
So all the features of the polytope that are not part of the two cycles instead show up somewhere on this grid.<br /><br />For instance below, in this two-dimensional grid representation, are the duopyramid (two 3-cycles, so a 3 × 3 grid), hypercube (8 × 8 grid), and truncated 5-cell (10 × 10 grid) again. I've drawn these with the wraparound points halfway along an edge of each cycle in order to avoid placing a grid line on the boundary of the drawing. In the hypercube and truncated 5-cell, the axes are labeled by numberings of the vertices. For the hypercube, the vertices can be numbered by the 16 hexadecimal digits, where two digits are adjacent if they differ in a single bit in their binary representations. The two eight-vertex cycles can be obtained by cycling through the order of which bit changes. For the truncated 5-cell, the vertices can be numbered by ordered pairs of unequal digits from 1 to 5, where the neighbors of each vertex are obtained by changing the second digit or swapping the two digits.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/33dp-q4-t5.png"></div><br /><br />Another way of thinking about this is that, on the three-dimensional surface of a 4d unit sphere, we can draw two linked unit circles, one in the <i>xy</i> plane and the other in the <i>wz</i> plane. The medial axis of these circles (the points on the sphere equally distant from both of them) is a torus, and what we're drawing in this diagram is how a polyhedral version of the same torus slices through the faces of the polytope.<br /><br />You can read off the structure of each 3-dimensional cell in the polytope from the corresponding polygon in the diagram. Recall that these cells are themselves three-dimensional polyhedra whose vertices have been divided into two induced paths. 
So (just as in the four-dimensional case) we can make a grid from the product of these two paths, represent non-path edges as grid points, and represent two-dimensional faces as line segments connecting grid points. Each two-dimensional face has two non-path sides, and a number of path sides given by the difference in coordinates between the corresponding two grid points. So, the total number of sides of the face is just two plus the Manhattan length of the line segment representing the face. For instance, the unit line segments in the duopyramid diagram represent triangles, and the squares formed by four of these segments represent tetrahedra (four triangles). The segments of Manhattan length two in the hypercube diagram represent quadrilaterals, and the hexagons formed by six of these segments represent cubes (six quadrilaterals). In order to represent a polyhedron in this way, a grid polygon has to have vertical edges at its left and right extremes, and horizontal edges at its top and bottom extremes, because otherwise a vertex at the end of one of the two paths would have only two incident edges, impossible in a polyhedron. For the same reason each intermediate grid line must have a grid point (representing a non-path edge of the polyhedron) on it. We can make a dictionary of small polyhedra that can be decomposed into two induced paths, and their associated grid polygons:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/2pathpoly.png"></div><br /><br />Notice that the grid polygons don't have to be strictly convex: the octahedron has eight grid points, four of which are at the corners of a 2 × 2 square but the other four of which are in the middle of the edges of this square. But in order for a collection of polyhedra to meet up to form the faces of a four-dimensional polytope, each grid point needs at least three line segments connecting to it (each polytope edge has to be surrounded by three or more two-dimensional faces). 
This can only happen if each grid polygon has at most one slanted side in each of the four corners of its bounding box. So these polygons are convex except for possibly having vertices on their horizontal and vertical sides. There are also some other constraints on their shape; for instance, a hexagon with two diagonal sides within a 2 × 2 square doesn't correspond to a polyhedron, because it forms a shape that is not 3-vertex-connected.<br /><br />Given this dictionary, we can form new patterns by tessellating a rectangular wrap-around grid by these grid polygons, and then ask: does the tessellation represent a 4-dimensional polytope? We have to be a little careful here, because we cannot place horizontal or vertical sides that are subdivided (e.g. in the octahedron) next to similar-looking sides that are not subdivided (e.g. in the cube). There are infinitely many possibilities, some of which give known polytopes, and some of which are unknown to me. For instance, extending the grid of squares shown for the (3,3)-duopyramid to a grid of squares in a larger rectangle produces the diagram for another kind of duopyramid.<br /><br />In the drawing below, the left grid tessellation represents a linked-cycle decomposition of the <a href="https://en.wikipedia.org/wiki/Rectified_5-cell">hypersimplex</a> with five tetrahedra and five octahedra (one of the polytopes Gil Kalai asked about in the comments on my previous post). It can be formed from the truncated 5-cell by contracting all of the edges that are not part of tetrahedra; because the edges of the linked cycles of the truncated 5-cell alternate between tetrahedral and non-tetrahedral edges, this contraction preserves the cycle decomposition. The right grid tessellation represents the <a href="https://en.wikipedia.org/wiki/Octahedral_prism">octahedral prism</a>, with eight triangular-prism cells and two octahedral cells. Therefore, both of these polytopes are linked. 
In both cases I found the infinite tessellation first, found its smallest period of horizontal and vertical translation, and then was able to identify the corresponding polytope with the help of the low number of cells and high symmetry of these tessellations. But I'm confused by the brick wall in the middle. It has the right number of vertices (10) and the right shape of 3-cells (triangular dipyramids) to be the dual hypersimplex, but the number of bricks is wrong: it should be 10 and is instead 12. (A herringbone pattern of bricks will also tessellate nicely, but then the number of vertices in both cycles would be a multiple of four.) It would be nice to have a theorem characterizing which tessellations give polytopes more generally.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/r5c-op-dhc.png"></div><br /><br />One thing is clear: the polytopes that have linked-cycle decompositions are a very special subclass of the 4-polytopes. For instance, for general 4-polytopes, it remains unknown whether they can have fat <a href="https://en.wikipedia.org/wiki/Face_lattice">face lattices</a>. That is, can a polytope with a small number of vertices and 3-cells have a large number of edges and 2-faces? This can't happen in 3d, by a calculation involving Euler's formula, but the same calculation in 4d doesn't rule out this possibility. But in linked-cycle-decomposable polytopes, the number of cycle edges equals the number of vertices. And because the 3-cells are faces of a torus graph, the number of non-cycle edges (vertices of the torus graph) and 2-faces (edges of the torus graph) are bounded by linear functions of the number of 3-cells. In particular, if there are <i>v</i> vertices and <i>c</i> 3-cells, then there can be at most <i>v</i> + 2<i>c</i> edges and at most 3<i>c</i> 2-cells. 
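As a sanity check on these counts, here is a quick sketch (mine, not from the post) that computes the hypercube's face numbers from the standard formula for faces of an n-cube and verifies both inequalities; for the hypercube they hold with equality.

```python
from math import comb

def cube_faces(n, k):
    """Number of k-dimensional faces of the n-dimensional hypercube."""
    return comb(n, k) * 2 ** (n - k)

# f-vector of the 4-cube: vertices, edges, 2-faces, 3-cells
v, e, f2, c = (cube_faces(4, k) for k in range(4))
assert (v, e, f2, c) == (16, 32, 24, 8)

# The bounds for linked-cycle-decomposable 4-polytopes:
assert e <= v + 2 * c   # at most v + 2c edges
assert f2 <= 3 * c      # at most 3c two-dimensional faces

# For the hypercube both hold with equality.
print(v, e, f2, c)  # → 16 32 24 8
```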
This bound is tight whenever the torus diagram is simple (has exactly three edges at each vertex), as it is in the hypercube, truncated 5-cell, and hypersimplex cases.<a name='cutid1-end'></a>

Tags: geometry

Wed, 17 Dec 2014 04:46:05 GMT
Survey on k-best enumeration algorithms
http://11011110.livejournal.com/301718.html
When I was asked earlier this year to write <a href="http://dx.doi.org/10.1007/978-3-642-27848-8_733-1">a short survey on <i>k</i>-best enumeration algorithms</a> for the Springer <i>Encyclopedia of Algorithms</i>, I wrote a first draft before checking the formatting requirements. It ended up being approximately five pages of text and seven more pages of references, and I knew I would have to cut some of that. But then I did check the format, and saw that it needed to be much shorter, approximately two pages of text and a dozen references. I don't regret doing it this way; I think having a longer version to cut down helped me to organize the results and figure out which parts were important. But then I thought: why not make the long(er) version available too? I added a few more references, so now it's about six pages of text and ten of references, still closer to an annotated bibliography than an in-depth survey. Here it is: <a href="http://arxiv.org/abs/1412.5075">arXiv:1412.5075</a>.

Tags: algorithms, papers

Tue, 16 Dec 2014 06:25:54 GMT
Linkage for mid-December
http://11011110.livejournal.com/301376.html
<ul><li><a href="http://blogs.scientificamerican.com/roots-of-unity/2014/11/30/the-saddest-thing-i-know-about-the-integers/">The number theory behind why you can't have both perfect fifths and perfect octaves on a piano keyboard</a> (with bonus <a href="https://en.wikipedia.org/wiki/Fokker_periodicity_block">lattice quotient music theory</a> link; <a href="https://plus.google.com/100003628603413742554/posts/ZH6ijTsiGDN">G+</a>)</li><br /><li>Sad news of <a href="https://en.wikipedia.org/wiki/Rudolf_Halin">Rudolf Halin</a>'s death (<a href="https://plus.google.com/100003628603413742554/posts/Q53k1pVKEtR">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=R9ogDS-QYT0">Frankenstein vs The Glider Gun</a> video (<a href="https://plus.google.com/100003628603413742554/posts/67p7GUHnumD">G+</a>)</li><br /><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2014/dec/03/durers-polyhedron-5-theories-that-explain-melencolias-crazy-cube">Günter Ziegler on Dürer's solid</a> (<a href="https://en.wikipedia.org/wiki/Truncated_triangular_trapezohedron">WP</a>; <a href="http://www.metafilter.com/145043/Drers-polyhedron-5-theories-that-explain-Melencolias-crazy-cube">MF</a>; <a href="https://plus.google.com/100003628603413742554/posts/PKLcXds94FK">G+</a>)</li><br /><li><a href="http://www.metafilter.com/144972/Nature-will-make-its-articles-back-to-1869-free-to-share-online">Nature will make its articles back to 1869 free to share online</a>, for certain values of "free" that you might or might not agree with (<a href="https://plus.google.com/100003628603413742554/posts/3Foe25Nxc4o">G+</a>)</li><br /><li><a href="http://polyhedron100.wordpress.com/">Albert Carpenter's polyhedron models</a> (<a href="https://plus.google.com/100003628603413742554/posts/FUkqR1h6WjN">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Fermat%27s_right_triangle_theorem">The only complete proof from Fermat</a> and <a 
href="https://en.wikipedia.org/wiki/Congruum">the gaps in arithmetic progressions of squares</a> (<a href="https://plus.google.com/100003628603413742554/posts/6GmBrCwx1tH">G+</a>)</li><br /><li><a href="http://blog.plover.com/2014/12/01/">Mark-Jason Dominus on how and why he negotiated with his book publishers to be able to keep a free online copy of his Perl book</a> (<a href="https://plus.google.com/100003628603413742554/posts/LsWFCCixZLV">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=rtR63-ecUNo">Video on drawing mushrooms with sound waves</a> (<a href="https://plus.google.com/100003628603413742554/posts/eBSPF2o3PDU">G+</a>)</li><br /><li><a href="http://mashable.com/2014/12/10/senate-wikipedia-torture-report/">Senate staffer tries to scrub "torture" reference from Wikipedia's CIA torture article</a> (<a href="https://plus.google.com/100003628603413742554/posts/PwFE9gJL3DE">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=SL2lYcggGpc">Numberphile video on origami angle trisection</a> (<a href="https://plus.google.com/100003628603413742554/posts/DxdUsDmBjuM">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=ZMByI4s-D-Y">Video on the world's roundest object</a> and why it was made (<a href="https://plus.google.com/100003628603413742554/posts/hyiYUfzEJu3">G+</a>)</li><br /><li><a href="http://arxiv.org/abs/1412.2716">How much text re-use is too much?</a> A statistical study of plagiarism on arXiv (<a href="http://www.improbable.com/2014/12/14/lots-and-lots-of-bits-of-copying-in-scientific-literature/">via</a>; <a href="https://plus.google.com/100003628603413742554/posts/PBeiZ56MuVh">G+</a>)</li></ul>

Tags: cellular automata, open access, wikipedia, number theory, geometry, plagiarism, origami

Sat, 13 Dec 2014 23:53:40 GMT
Links and knots in the graphs of four-dimensional polytopes
http://11011110.livejournal.com/301197.html
<p>The surface of a three-dimensional polyhedron is a two-dimensional space that's topologically equivalent to the sphere. By the Jordan curve theorem, every cycle of edges and vertices in this space cuts the surface into two topological disks. But the surface of a four-dimensional polytope is a three-dimensional space that's topologically equivalent to the hypersphere, or to three-dimensional Euclidean space completed by adding one point at infinity. So, just as in conventional Euclidean space, polygonal chains (such as the cycles of edges and vertices of the polytope) can be nontrivially knotted or linked. If so, this can also be seen in three dimensions, as a knot or link in the <a href="https://en.wikipedia.org/wiki/Schlegel_diagram">Schlegel diagram</a> of the polytope (a subdivision of a convex polyhedron into smaller convex polyhedra). Does this happen for actual 4-polytopes? Yes! Actually, it's pretty ubiquitous among them.</p>
<p>The linked 4-polytope with the fewest vertices is a <a href="https://en.wikipedia.org/wiki/Duopyramid">duopyramid</a> formed from the convex hull of two equilateral triangles centered at the origin, one in the <i>xy</i>-plane and the other in the <i>zw</i>-plane. These two triangles are not actually two-dimensional faces of the duopyramid; instead, in the Schlegel diagram, they appear as two linked triangles. This polytope has nine tetrahedral facets; in the Schlegel diagram, they appear as one outer tetrahedron, two more adjacent to the top edge of the top linked triangle, two adjacent to the bottom edge of the bottom linked triangle, and four wrapping around the middle vertical edge connecting the two links.</p>
<p align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/linked-duopyramid.png"></p>
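The count of nine tetrahedral facets can be verified combinatorially: in this join-like construction, each facet is the convex hull of one edge from each triangle. A small sketch (the vertex labels are mine):

```python
from itertools import product

# Vertices of the two triangles lying in the xy- and zw-planes.
A = ["a0", "a1", "a2"]
B = ["b0", "b1", "b2"]

def tri_edges(tri):
    """The three edges of a triangle, as vertex pairs."""
    return [(tri[i], tri[(i + 1) % 3]) for i in range(3)]

# Each facet of the (3,3)-duopyramid is the join of an edge of one
# triangle with an edge of the other: a tetrahedron.
facets = [frozenset(ea + eb) for ea, eb in product(tri_edges(A), tri_edges(B))]

assert len(set(facets)) == 9                      # nine distinct tetrahedra
assert all(len(f) == 4 for f in facets)           # each with four vertices
# each vertex lies in 2 edges of its triangle x 3 edges of the other = 6 facets
assert all(sum(v in f for f in facets) == 6 for v in A + B)
print(len(facets), "tetrahedral facets")  # → 9 tetrahedral facets
```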
<p>Similarly, the graph of a seven-vertex <a href="https://en.wikipedia.org/wiki/Neighborly_polytope">neighborly polytope</a> is the complete graph on seven vertices. <a href="https://en.wikipedia.org/wiki/Linkless_embedding">As with every embedding</a> of the seven-vertex complete graph into space, it contains a knot.</p>
<p>What about <a href="https://en.wikipedia.org/wiki/Simple_polytope">simple 4-polytopes</a>? This means that every vertex has exactly four neighbors. The duopyramid doesn't have this property: its vertices all have five neighbors. The simple 4-polytope with the fewest vertices and facets in which I've found a link is a hypercube, with eight cubical facets.</p>
<p align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/linked-hypercube.png"></p>
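One concrete pair of candidate cycles (not necessarily the pair drawn above) comes from labeling the 16 vertices with 4-bit strings and repeatedly flipping bit positions in round-robin order. The sketch below (my own check, not from the post) verifies only the combinatorial properties: the two 8-cycles are disjoint, cover all vertices, and are induced. That they are geometrically linked is the harder fact discussed here.

```python
def cycle(start):
    """Walk the hypercube graph by flipping bits 0,1,2,3,0,1,2,3 in turn."""
    v, out = start, []
    for step in range(8):
        out.append(v)
        v ^= 1 << (step % 4)
    assert v == start          # each bit flipped twice, so we return home
    return out

c1, c2 = cycle(0b0000), cycle(0b0101)
assert set(c1) | set(c2) == set(range(16))   # together they cover all vertices
assert not set(c1) & set(c2)                 # and they are disjoint

# Induced: within each cycle, the only pairs at Hamming distance 1
# (i.e. hypercube edges) are consecutive cycle vertices.
for c in (c1, c2):
    for i in range(8):
        for j in range(i + 1, 8):
            adjacent = bin(c[i] ^ c[j]).count("1") == 1
            consecutive = (j - i) in (1, 7)
            assert adjacent == consecutive

print("two disjoint induced 8-cycles covering the hypercube")
```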
<p>It is also possible to form a trefoil knot in the <a href="https://en.wikipedia.org/wiki/4-6_duoprism">(4,6)-duoprism</a>, the Cartesian product of a square and a hexagon, and to form a link in the (3,6)-duoprism. These are simple polytopes with ten and nine facets respectively.</p>
<p>There are at least two interesting classes of 4-polytopes that don't have nontrivial knots or links, however. One of these is the class of <a href="https://en.wikipedia.org/wiki/Polyhedral_pyramid">polyhedral pyramids</a>: 4-dimensional pyramids with a 3-dimensional polyhedron base. Their graphs are <a href="https://en.wikipedia.org/wiki/Apex_graph">apex graphs</a>, embedded nicely with no knots; they have Schlegel diagrams in which the base forms the outside face and the apex of the pyramid is the only vertex inside it, connected to all the other vertices. So any system of closed curves must stay on the planar surface of the base with the exception of one pair of edges through the apex; that's not enough to make a knot or link.</p>
<p>The other is the class of stacked polytopes, formed by gluing simplices face-to-face. Their Schlegel diagrams are formed by repeatedly subdividing a tetrahedron into four smaller tetrahedra meeting at an interior point of the larger tetrahedron, and their graphs are the <a href="https://en.wikipedia.org/wiki/K-tree">4-trees</a>. For any collection of vertex-edge cycles in such a polytope, it's possible to undo one of the subdivision steps and either simplify the collection without changing its topological type, by shortcutting the subdivision vertex, or remove a cycle that forms a face of the polytope. So by induction there can be no knots.</p><a name='cutid1-end'></a>

Tags: knot theory, geometry

Fri, 05 Dec 2014 07:27:36 GMT
A strike against ERGMs
http://11011110.livejournal.com/300993.html
The <a href="https://en.wikipedia.org/wiki/Exponential_random_graph_models">exponential random graph model</a> is a system for describing probability distributions on graphs, used to model social networks. One fixes a set of vertices, and determines a collection of "features" among the edges of this fixed set (such as the presence of a particular edge, or of a particular combination of a small number of edges), each with an associated real-valued weight. Then to determine the probability of seeing a particular graph, one simply looks at which features it has; the probability is exp(sum of the weights of those features), divided by a normalizing constant (the "partition function").<br /><br />This is a good model for several reasons: it is powerful enough that (with complicated enough features) it can describe any distribution. With simple enough features (e.g. just individual edges) it degenerates to the standard <a href="https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model">Erdős–Rényi–Gilbert model</a> of random graphs. It's easy to express features that model sociological theories of network formation, such as <a href="https://en.wikipedia.org/wiki/Assortative_mixing">assortative mixing</a> (more similar people are more likely to be friends) or <a href="https://en.wikipedia.org/wiki/Triadic_closure">triadic closure</a> (friends-of-friends are more likely to be friends). And by fitting the weights to actual social networks, one can learn something about the strengths of these effects.<br /><br />But on the other hand, there are some theoretical and practical obstacles to its use. It seems to be difficult to set up features and weights such that, when one generates graphs using the distribution they describe, the results actually look like social networks. 
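To make the definition concrete, here is a brute-force sketch on four vertices; the feature set and weights are illustrative choices of mine (one shared weight for edges, one for closed triangles). Real ERGM software estimates these quantities by MCMC instead, since enumerating all graphs is exponential in the number of vertex pairs.

```python
from itertools import combinations, product
from math import exp

n = 4
slots = list(combinations(range(n), 2))   # the 6 possible edges on 4 vertices
w_edge, w_triangle = -0.5, 1.0            # illustrative feature weights

def weight_sum(edges):
    """Sum of feature weights present in one graph: edges plus closed triangles."""
    s = w_edge * len(edges)
    for tri in combinations(range(n), 3):
        if all(pair in edges for pair in combinations(tri, 2)):
            s += w_triangle
    return s

# Enumerate all 2^6 = 64 labeled graphs on the fixed vertex set.
graphs = [frozenset(e for e, bit in zip(slots, bits) if bit)
          for bits in product((0, 1), repeat=len(slots))]

Z = sum(exp(weight_sum(g)) for g in graphs)           # the partition function
prob = {g: exp(weight_sum(g)) / Z for g in graphs}    # exact probabilities

assert len(graphs) == 64
assert abs(sum(prob.values()) - 1) < 1e-9
print("partition function Z =", round(Z, 3))
```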
If we go even a little bit beyond the Erdős–Rényi–Gilbert model we don't have closed form solutions to anything and have to use MCMC simulations to compute the partition function, fit weights, or generate graphs, and we don't know much about how quickly or slowly these simulations converge.<br /><br />And now, with my latest preprint "<a href="http://arxiv.org/abs/1412.1787">ERGMs are Hard</a>" (arXiv:1412.1787, with Michael Bannister and Will Devanny), the picture gets darker. We prove complexity-theoretic hardness results showing that with completely realistic features (in fact the ones for assortative mixing and triadic closure, but with unrealistic weights) we can't compute the partition function, we can't get anywhere close to approximating the partition function, we can't generate graphs with the right probabilities, and we can't even get anywhere close to the right probability distribution. And trying to escape the hardness by tweaking the features to something a little more complicated doesn't help: the same hardness results continue to be true when the features include induced subgraphs of any fixed type with more than one edge.<br /><br />The short explanation for why is that lurking inside these models are computationally hard combinatorial problems such as (the one we mainly use) finding or counting the largest induced triangle-free subgraphs. It was known that the maximization version of this problem was hard, but the reduction wasn't parsimonious (less technically, this means that the reduction can't be used to prove that the counting version of the problem is hard). So for that part we had to find our own reduction from another hard counting problem, counting perfect matchings in cubic bipartite graphs. Here it is in a picture.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/match2maxtf.png"></div><br /><br />Each vertex of the original graph turns into four triangles after the reduction. 
In this example, counting matchings in a cube turns into counting maximum triangle-free subgraphs of a <a href="https://en.wikipedia.org/wiki/Snub_cube">snub cube</a>. These subgraphs are formed by deleting all the snub cube edges that correspond to unmatched cube edges, and then deleting one more edge inside each four-triangle gadget. When I posted a drawing of <a href="http://11011110.livejournal.com/275610.html">a stereographic projection of a snub cube</a> a bit over a year ago, this is what it was for. Since that time, we've also been using the image of this reduction in the logo of <a href="http://www.ics.uci.edu/~theory/">our local research center</a>.<a name='cutid1-end'></a>

Tags: complexity theory, social networks, papers

Mon, 01 Dec 2014 02:20:01 GMT
Linkage for the end of November
http://11011110.livejournal.com/300600.html
<ul><li><a href="https://www.insidehighered.com/news/2014/11/11/gamergate-supporters-attack-digital-games-research-association">Gamergate's attackers move on from (female) indie game developers to (female) game researchers</a> (<a href="https://plus.google.com/100003628603413742554/posts/K7V5MBJ88UJ">G+</a>)</li><br /><li><a href="http://retractionwatch.com/2014/11/17/fake-citations-plague-some-google-scholar-profiles/">Scammy publisher uses your name as the author of fake papers</a> (<a href="https://plus.google.com/100003628603413742554/posts/N4q6NrHusHq">G+</a>)</li><br /><li><a href="https://plus.google.com/101584889282878921052/posts/HTVRuPCTJXm">Escher-like impossible figures</a> by Regalo Bizzi based on a triangular grid (<a href="https://plus.google.com/100003628603413742554/posts/TpqDDw8oN14">G+</a>)</li><br /><li><a href="https://medium.com/the-open-company/trip-report-vegas-lights-cba073735683">James Turrell installation in Las Vegas</a> (<a href="https://plus.google.com/100003628603413742554/posts/VEtFfrQo4vR">G+</a>)</li><br /><li><a href="http://beesgo.biz/godot.html">Waiting for Godot: The Game</a> (by Zoe Quinn; <a href="https://plus.google.com/100003628603413742554/posts/bmkrFAKsUAA">G+</a>)</li><br /><li><a href="http://www.ams.org/samplings/feature-column/fc-2013-11">Fedorov's Five Parallelohedra</a>, a complete classification of the shapes that can tile space by translation (<a href="https://plus.google.com/100003628603413742554/posts/Gs3MRQePNAd">G+</a>)</li><br /><li><a href="http://www.cjr.org/behind_the_news/journalism_has_a_plagiarism_pr.php?page=all">On the high variance in journalistic standards for plagiarism</a> (<a href="https://plus.google.com/100003628603413742554/posts/d1NvmNbVMDY">G+</a>)</li><br /><li><a href="http://mathoverflow.net/a/187908/440">How many median graphs are there?</a> (<a href="https://plus.google.com/100003628603413742554/posts/TuceX7PT3hL">G+</a>)</li><br /><li><a 
href="http://www.siam.org/meetings/da15/">SODA/ALENEX/ANALCO 2015</a> preregistration closes Monday, Dec. 1 (<a href="https://plus.google.com/100003628603413742554/posts/DWK9RC62Lz1">G+</a>)</li><br /><li><div align="center"><lj-embed id="54" /></div><br />(<a href="https://plus.google.com/100003628603413742554/posts/6cRKBR8PYuU">G+</a>)</li><br /><li><a href="http://erkdemon.blogspot.com/2012/02/hexagonal-diamond-other-form-of-diamond.html">Hexagonal diamond</a>, a crystalline carbon structure even harder than true diamond (<a href="https://plus.google.com/100003628603413742554/posts/briWWcfq6za">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Laves_graph">The Laves graph</a>, an infinite symmetric 3-regular graph that forms yet another possible carbon crystal (<a href="https://plus.google.com/u/0/100003628603413742554/posts/gBzaBivAAbg">G+</a>)</li></ul>

Tags: color, academia, conferences, geometry, graph theory

Thu, 27 Nov 2014 07:25:54 GMT
Trees that represent bandwidth
http://11011110.livejournal.com/300302.html
In my algorithms class today, I covered minimum spanning trees, one property of which is that they (or rather maximum spanning trees) can be used to find the bottleneck in communications bandwidth between any two vertices in a network. Suppose the network edges are labeled by bandwidth, and we compute the maximum spanning tree using these labels. Then between any two vertices the path in this tree has the maximum bandwidth possible, among all paths in the network that connect the same two vertices. (There may also be other equally good paths that aren't part of the tree.) So if you want to send all of your data on a single route in the network, and you're not worried about other people using the same links at the same time, and bandwidth is your main quality issue (a lot of ifs), then that's the best path to take. The bandwidth of the path is controlled by its weakest link, the edge on the path with the smallest bandwidth. If you want to quickly look up the bandwidth between pairs of vertices, you can do it in constant time using the nearest common ancestor in a <a href="https://en.wikipedia.org/wiki/Cartesian_tree">Cartesian tree</a> derived from the maximum spanning tree.<br /><br />OK, but if you're so concerned about bandwidth, then maybe you should use a more clever routing scheme that spreads your messages across multiple paths to get even more bandwidth. This can be modeled as a network flow, and the bottleneck to getting the most bandwidth is no longer a single edge. Instead, the <a href="https://en.wikipedia.org/wiki/Max-flow_min-cut_theorem">max-flow min-cut theorem</a> tells you that the bottleneck takes the form of a cut: a partition of the graph's vertices into two disjoint subsets, whose bandwidth is the sum of the bandwidths of the edges crossing the cut.<br /><br />Despite all this added complexity, it turns out that the bandwidth in this sort of multi-path routing scheme can still be described by a single tree. 
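To make the single-path version concrete, here is a minimal Python sketch (the helper names are invented, and a simple tree search stands in for the constant-time Cartesian-tree queries): Kruskal's algorithm run on edges in decreasing weight order builds a maximum spanning tree, and the smallest weight on a tree path is the best possible single-path bandwidth.

```python
# A small illustration (not from any paper) of the widest-path property:
# in a maximum spanning tree, the smallest edge weight on the tree path
# between two vertices equals the best single-path bandwidth between
# them in the whole network.

def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm, scanning edges by decreasing weight.
    edges: list of (weight, u, v) with vertices numbered 0..n-1."""
    parent = list(range(n))
    def find(x):  # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

def tree_bandwidth(n, tree, s, t):
    """Smallest edge weight on the unique s-t path in the tree, found
    by depth-first search.  (The Cartesian-tree trick mentioned above
    answers these queries in constant time instead.)"""
    adj = [[] for _ in range(n)]
    for w, u, v in tree:
        adj[u].append((v, w))
        adj[v].append((u, w))
    stack, seen = [(s, float('inf'))], {s}
    while stack:
        u, b = stack.pop()
        if u == t:
            return b
        for v, w in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append((v, min(b, w)))
    return None  # s and t are disconnected
```

For instance, on a four-vertex network with bandwidths 0&ndash;1:3, 1&ndash;2:5, 0&ndash;2:4, and 2&ndash;3:2, the tree route from 0 to 1 detours through vertex 2 and achieves bandwidth 4, better than the direct link of bandwidth 3.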
That is, there's a tree whose vertices are the vertices of your graph (but whose edges and edge weights are no longer those of the graph) such that the bandwidth you can get from one vertex to another by routing data along multiple paths in the graph is the same as the bandwidth of the single path between the same two vertices in the tree. Moreover, the edges of the tree can be labeled by cuts in the graph such that the weakest link in the tree path between any two vertices is labeled by the minimum cut that separates those vertices in the original graph. This tree is called the <a href="https://en.wikipedia.org/wiki/Gomory%E2%80%93Hu_tree">Gomory–Hu tree</a> of the given (undirected and edge-weighted) graph. Using the same Cartesian tree ancestor technique, you can look up the bandwidth between any pair of vertices in constant time per query.<br /><br />My latest arXiv preprint, <a href="http://arxiv.org/abs/1411.7055">All-Pairs Minimum Cuts in Near-Linear Time for Surface-Embedded Graphs</a> (arXiv:1411.7055, with Cora Borradaile and new co-authors Amir Nayyeri and Christian Wulff-Nilsen), is on exactly these trees. For arbitrary graphs they can be found in polynomial time, but slowly, because the computation involves multiple flow computations. For planar graphs it was known how to find the Gomory–Hu tree much more quickly, in O(<i>n</i> log<sup>3</sup> <i>n</i>) time; we shave a log off this time bound. Then, we extend this result to a larger class of graphs, the graphs of bounded genus, by showing how to slice the bounded-genus surface up (in a number of ways that's exponential in the genus) into planar pieces such that, for every pair of vertices, one of the pieces contains the minimum cut. 
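As an aside, the mere existence of such a tree has consequences that are easy to check by brute force: among the n(n&minus;1)/2 pairwise minimum cut values of an n-vertex graph, at most n&minus;1 distinct numbers can occur, one per tree edge. Here is an illustrative stdlib-only Python sketch (not the preprint's algorithm; the helper names are made up):

```python
# Brute-force check that the pairwise minimum cut values of a small
# undirected graph take at most n-1 distinct values, as the existence
# of a Gomory-Hu tree implies.
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow on a symmetric capacity matrix
    (which models an undirected graph).  Mutates its argument,
    so callers should pass a copy."""
    n = len(capacity)
    total = 0
    while True:
        parent = [-1] * n  # BFS for an augmenting path
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return total  # no augmenting path left
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, capacity[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:  # push flow along the path
            capacity[parent[v]][v] -= bottleneck
            capacity[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck

def all_pairs_min_cut(capacity):
    """Min cut value for every vertex pair, by max-flow min-cut."""
    n = len(capacity)
    return {(s, t): max_flow([row[:] for row in capacity], s, t)
            for s in range(n) for t in range(s + 1, n)}
```

On a four-cycle with edge weights 1, 2, 3, 4 this produces six pairwise cut values but only three distinct numbers among them.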
That gives us exponentially many Gomory–Hu trees, one for each piece, but it turns out that these can all be combined into a single tree for the whole graph.<br /><br />One curious difference between the planar and higher-genus graphs is that, for planar graphs, the set of cuts given by the Gomory–Hu tree also solves a different problem: it's the minimum-weight <a href="https://en.wikipedia.org/wiki/Cycle_basis">cycle basis</a> of the dual graph. In higher-genus graphs, we don't quite have enough cuts to generate the dual cycle space (we're off by the genus), but, more importantly, some of the optimal cycle basis members might not be cuts. So although the new preprint also improves the time for finding cycle bases in planar graphs, making a similar improvement in the higher-genus case remains open.<a name='cutid1-end'></a>http://11011110.livejournal.com/300302.htmlgraph algorithmspaperspublic0http://11011110.livejournal.com/300115.htmlTue, 25 Nov 2014 08:12:35 GMTLIPIcs formatting tricks
http://11011110.livejournal.com/300115.html
If, like me, you're working on a SoCG submission, and this is the first time you've tried using the LIPIcs format that SoCG is now using, you may run into some minor formatting issues (no worse than the issues with the LNCS or ACM formats, but new and different). Here are the ones I've encountered, with workarounds where I have them:<br /><ul><li>The LIPIcs format automatically includes several standard LaTeX packages including <tt>babel</tt>, <tt>amsmath</tt>, <tt>amsthm</tt>, <tt>amssymb</tt>, and <tt>hyperref</tt>. So there's no point in including them yourself, and doing so may even cause an error (if you specify incompatible options). I haven't needed to change the <tt>hyperref</tt> options, but if you do, see <a href="http://tex.stackexchange.com/questions/75542/incompatibility-between-lipics-and-hyperref">here</a>.</li><br /><li>You may like to use the <tt>lineno</tt> package so that, when reviewers give you comments on your submission, they can tell you more accurately which line they're referring to. If you try this with the LIPIcs format, you will notice that you don't get line numbers in the last paragraph of a proof, nor in a paragraph that contains a displayed equation (even if you correctly delimit the equation with <tt>\[...\]</tt> instead of using the obsolete <tt>$$...$$</tt>, which can also cause problems with <tt>lineno</tt>). The solution to the proof problem is to change the proof termination symbol (a triangle instead of the usual Halmos box) to use an explicit <tt>$...$</tt> instead of <tt>\ensuremath</tt>:<br /><br /><tt>\renewcommand\qedsymbol{\textcolor{darkgray}{$\blacktriangleleft$}}</tt><br /><br />The solution to the displayed equation problem is more complicated, but is given in <a href="http://phaseportrait.blogspot.com/2007/08/lineno-and-amsmath-compatibility.html">this blog post from 2007</a> (the update near the start of that post). 
Why this incompatibility hasn't been fixed in the last seven years is a different question.</li><br /><li>If you use the <tt>numberwithinsect</tt> document class option (pronounced "number with insect"), then the LIPIcs format numbers theorems, lemmas, etc. by what section they are in: Lemma 2.1 for the first lemma in Section 2, etc. But if you also run past the page limit and use appendices, you may notice that the lemmas are being given numbers that duplicate previously used ones, because although the appendices have letters rather than numbers, the lemmas still use numbers: the first lemma in Appendix B is also called Lemma 2.1. The problem is that the LIPIcs style expands the <tt>\thesection</tt> macro (the one that gives the number or letter of the current section) at the time it defines <tt>\thetheorem</tt> (the one that gives the name of the current theorem or lemma). So when you use <tt>\appendix</tt> (or <tt>\begin{appendix}</tt> if you like to pretend that non-environment commands are really environments), <tt>\thesection</tt> gets changed, but it's too late to make a difference to <tt>\thetheorem</tt>. The fix is to add the following lines after <tt>\appendix</tt>:<br /><br /><tt>\makeatletter<br />\edef\thetheorem{\expandafter\noexpand\thesection\@thmcountersep\@thmcounter{theorem}}<br />\makeatother</tt></li><br /><li>I've <a href="https://plus.google.com/100003628603413742554/posts/Fy34Vv4Xk6y">already written</a> about the problems with using the <tt>\autoref</tt> command of the <tt>hyperref</tt> package: because LIPIcs wants to use a shared numeric sequence for theorems, lemmas, etc., <tt>\autoref</tt> thinks they are all theorems. <a href="http://tex.stackexchange.com/questions/213821/using-cleveref-with-lipics-documentclass-fails-for-theorem-environments-sharing">Someone else also recently asked about this problem</a>. 
This is a more general incompatibility between <tt>amsthm</tt> and <tt>hyperref</tt>, but LIPIcs also includes some of its own code for theorem formatting, which seems to be causing the fixes one can find for <tt>amsthm</tt> not to work. The solution is to fall back to the non-<tt>hyperref</tt> way of doing things: <tt>Lemma~\ref{lem:my-lemma-name}</tt> etc.</li><br /><li>Speaking of theorems: if you use <tt>\newtheorem</tt>, you probably want to have a previous line <tt>\theoremstyle{definition}</tt> so that whatever you're defining looks like the other theorems and lemmas.</li><br /><li>On the first page of a LIPIcs paper, the last line of text may be uncomfortably close to the Creative Commons licensing icon. I haven't found a direct workaround for this (although probably it's possible) but you can obtain better spacing by having a footnote (for instance one listing your grant acknowledgements) or a bottom figure on that page.</li><br /><li>If you're used to trying to fit things into a page limit with LNCS format, you may have learned to use <tt>\paragraph</tt> as a trick to save space over subsections. That doesn't work very well in LIPIcs, for two reasons. First, because it will give you an ugly paragraph number like 6.0.0.1 (the first numbered paragraph in an un-numbered subsubsection of an un-numbered subsection of section 6). You can work around this by using <tt>\paragraph*</tt>. But second, because unlike in LNCS the paragraph heading won't be folded into the paragraph that follows it, so you save a lot less space. I don't want to try to work around this one. And fortunately I haven't yet seen my coauthors adding code like <tt>\noindent{\bfseries Paragraph heading.} Paragraph text...</tt> (or worse <tt>\bf ...</tt>). Solution: find different tricks for your space-saving efforts. 
Like maybe write more tersely.</li></ul><br />Anyone else have any timely tips?<a name='cutid1-end'></a>http://11011110.livejournal.com/300115.htmltoolspublic5http://11011110.livejournal.com/299980.htmlTue, 25 Nov 2014 06:00:27 GMTThin folding
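Putting the fragments from this post together, a skeleton submission might look like the following. This is only a sketch, untested against any particular version of the class, and assumes the class is loaded as <tt>lipics</tt>:

```latex
\documentclass[numberwithinsect]{lipics}
% lipics already loads babel, amsmath, amsthm, amssymb, and hyperref;
% don't load them again with conflicting options.
\usepackage{lineno}
\linenumbers

% lineno fix: avoid \ensuremath in the proof-termination symbol
\renewcommand\qedsymbol{\textcolor{darkgray}{$\blacktriangleleft$}}

\begin{document}
% ... main text, with cross-references written as Lemma~\ref{lem:...}
% rather than \autoref{lem:...} ...

\appendix
% appendix fix: rebuild \thetheorem so that the current (lettered)
% section name is looked up lazily
\makeatletter
\edef\thetheorem{\expandafter\noexpand\thesection\@thmcountersep\@thmcounter{theorem}}
\makeatother

% ... appendix text ...
\end{document}
```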
http://11011110.livejournal.com/299980.html
I have another new preprint on arXiv this evening: <a href="http://arxiv.org/abs/1411.6371">Folding a Paper Strip to Minimize Thickness, arXiv:1411.6371</a>, with six other authors (Demaine, Hesterberg, Ito, Lubiw, Uehara, and Uno); it's been accepted at <a href="http://www.buet.ac.bd/cse/walcom2015/">WALCOM</a>.<br /><br />The basic goal of this is to try to understand how to measure the thickness of a piece of paper that has been folded into a shape that lies flat in the plane. For instance, in designing origami pieces, it's undesirable to have too much thickness, both because it wastes paper, causing your piece to be smaller than it needs to be, and because it may be an obstacle to forming features of your piece that are supposed to be thin.<br /><br />The obvious thing to do is to just count the number of layers of paper that cover any point of the plane, but this can be problematic. For instance, if you have two offset accordion folds (drawn sideways below)<br /><pre>/\/\/\/\/\
\/\/\/\/\/</pre>then it's not really accurate to say that the thickness is the same as if you just had one of the two sets of folds: one of the two folds is raised up by the thickness of the other one, so the whole folded piece of paper is more like the sum of the thicknesses of its two halves.<br /><br />In the preprint, we model the thickness by assuming that the flat parts of the paper are completely horizontal, at integer heights, that two overlapping parts of paper have to be at different heights, and that a fold can connect parts of paper that are at any two different heights. But then, it turns out that finding an assignment of heights to the parts of paper that minimizes the maximum height is hard, even for one-dimensional problems where we are given a crease pattern of mountain and valley folds as input, without being told exactly how to arrange those folds. The reason is that there can be ambiguities about how the folded shape can fit into pockets formed by other parts of the fold, and choosing the right pockets is difficult.http://11011110.livejournal.com/299980.htmlorigamipaperspublic0http://11011110.livejournal.com/299547.htmlSun, 16 Nov 2014 06:03:27 GMTLinkage
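The naive layer-counting measure from the folding post above is easy to make concrete for a one-dimensional strip. In this Python sketch (the helper names are invented, and this is only the layer count, not the preprint's height-assignment model), a folding is described by its crease positions; each crease reverses the direction of travel along the line, and the number of layers over a point is the number of folded-segment images covering it:

```python
# Count paper layers for a flat-folded 1D strip: an illustration of
# the naive thickness measure (layers over a point), not of the
# integer-height model from the preprint.

def fold_segments(creases, length):
    """Images of the flat pieces of a strip [0, length] after folding
    at the given crease positions; each crease reverses direction."""
    points = [0.0] + list(creases) + [float(length)]
    x, direction = 0.0, 1
    segments = []
    for a, b in zip(points, points[1:]):
        x2 = x + direction * (b - a)
        segments.append((min(x, x2), max(x, x2)))
        x, direction = x2, -direction
    return segments

def layers_at(segments, x):
    """Number of layers of paper lying over the point x."""
    return sum(1 for a, b in segments if a < x < b)
```

A ten-unit strip accordion-folded at every integer position stacks all ten unit segments over the interval (0, 1), so the layer count at its midpoint is ten; the two-offset-accordion example above is exactly the situation where this count understates the effective thickness.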
http://11011110.livejournal.com/299547.html
<ul><li><a href="http://blog.peerj.com/post/100580518238/whos-afraid-of-open-peer-review">An experiment in allowing journal reviewers to reveal their names</a> (the <a href="https://plus.google.com/100003628603413742554/posts/2dffs1kDkz4">G+</a> post has several additional links on academics including some well known graph theorists taking money to deliberately distort university rankings)</li><br /><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2014/oct/30/pumpkin-geometry-stunning-shadow-sculptures-that-illuminate-an-ancient-mathematical-technique?CMP=share_btn_fb">Pumpkin geometry</a>: stereographic projection of shadows from carved balls (<a href="https://plus.google.com/100003628603413742554/posts/GiB8FLHscM2">G+</a>; no actual pumpkins involved)</li><br /><li><a href="http://www.clintfulkerson.com/">Clint Fulkerson</a>: an abstract artist whose work feels somehow both geometric and organic (<a href="https://plus.google.com/100003628603413742554/posts/P19K5D3qdC5">G+</a>)</li><br /><li><a href="http://www.thisiscolossal.com/2014/11/spectacular-paper-pop-up-sculptures-designed-by-peter-dahmen/">Paper popups by Peter Dahmen</a> (<a href="https://plus.google.com/100003628603413742554/posts/57MJFLk9TK4">G+</a>)</li><br /><li><a href="http://www.planetjune.com/blog/polyhedral-balls-crochet-pattern/">Crochet Platonic polyhedra by June Gilbank</a> (<a href="https://plus.google.com/100003628603413742554/posts/UVXPvGuBJ3M">G+</a>)</li><br /><li><a href="http://tex.stackexchange.com/questions/187388/amsthm-with-shared-counters-messes-up-autoref-references/187395">Advice for combining autoref with shared counters for theorems and lemmas</a> (and in the <a href="https://plus.google.com/100003628603413742554/posts/Fy34Vv4Xk6y">G+</a> post, a plea for something similar that will work with the LIPIcs LaTeX format)</li><br /><li><a href="http://lifehacker.com/5914894/put-a-duvet-cover-on-with-minimum-effort-by-rolling-it-like-a-burrito">A 
topological trick with duvet covers</a> (<a href="https://plus.google.com/100003628603413742554/posts/dmqcGTrUWty">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Harborth%27s_conjecture">Harborth's conjecture</a> on graph drawing with integer edge lengths (<a href="https://plus.google.com/100003628603413742554/posts/GaNexRFKu84">G+</a>)</li><br /><li><a href="http://thorehusfeldt.net/2014/11/09/at-first-eppstein-liked-minimum-spanning-trees-4/">A cryptic crossword by Thore Husfeldt</a> featuring parameterized complexity and my name (<a href="https://plus.google.com/100003628603413742554/posts/bAWeuPdp9iA">G+</a>)</li><br /><li><a href="http://www.wired.com/2014/11/verdict-overturned-italian-geoscientists-convicted-manslaughter/">Giving scientific advice that turns out to be incorrect is not a crime</a> (<a href="https://plus.google.com/100003628603413742554/posts/c2TStXaVR5f">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Chv%C3%A1tal-Sankoff_constants">Chvátal-Sankoff constants</a> (the expected length of a random longest common subsequence; <a href="https://plus.google.com/100003628603413742554/posts/Ry6jQ7Jrwhi">G+</a>)</li><br /><li><a href="https://github.com/lizadaly/nanogenmo2014">Automatic Voynich</a> (<a href="https://plus.google.com/100003628603413742554/posts/7Eiv9TNQiTu">G+</a>)</li><br /><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2014/nov/04/macaus-magic-square-stamps-just-made-philately-even-more-nerdy">Magic squares on stamps</a> (<a href="https://plus.google.com/100003628603413742554/posts/4FRqJvZtCCX">G+</a>)</li><br /><li><a href="http://snappizz.com/holyhedron">Polyhedra in which all faces are holy</a>, and an update on big Life spaceships (<a href="https://plus.google.com/100003628603413742554/posts/PiMnx4ZK2AN">G+</a>)</li><br /><li><a href="http://kevintwomey.com/calculators.html">Photos of mechanical calculators</a> (via <a 
href="http://www.wired.com/2014/11/kevin-twomey-low-tech/">Wired</a> and <a href="http://boingboing.net/2014/11/13/beautiful-detailed-photos-of.html">BB</a>; <a href="https://plus.google.com/100003628603413742554/posts/cFc4LuJg511">G+</a>)</li></ul>http://11011110.livejournal.com/299547.htmltoolswikipediaacademiageometryartpublic0