0xDE
http://11011110.livejournal.com/
0xDE - LiveJournal.com
Tue, 21 Apr 2015 01:11:54 GMT
Perturbing weighted elements to make set weights distinct
http://11011110.livejournal.com/308431.html
Suppose you have a polynomial-time algorithm that operates on sets of weighted elements, and involves comparisons of the weights of different sets. (This describes many different algorithms for shortest paths, minimum spanning trees, minimum weight matchings, <a href="http://11011110.livejournal.com/307881.html">closures</a>, etc.) But suppose also that your algorithm is only guaranteed to work correctly when different sets always have distinct total weights. When comparisons could come out equal, your algorithm could crash or produce incorrect results. But equal weights are likely to happen when the element weights are small integers, for instance. Is there some semi-automatic way of patching your algorithm to work in this case, without knowing any details about how it works?<br /><br />An obvious thing to try is to add small distinct powers of two to the element weights. If these numbers are small enough they won't affect initially-unequal comparisons. And if they're distinct powers of two then their sums are also distinct, so each two sets get a different perturbation. But this method involves computing with numbers that have an additional <i>n</i> bits of precision (where <i>n</i> is the number of elements in the problem), and a realistic analysis of this method would give it a near-linear slowdown compared to the unperturbed algorithm. Can we do better?<br /><br />Exactly this issue comes up in my latest preprint, "<a href="http://arxiv.org/abs/1504.04931">Rooted Cycle Bases</a>" (with McCarthy and Parrish, arXiv:1504.04931, to appear at WADS). The paper is motivated by some problems concerning <a href="http://11011110.livejournal.com/279049.html">kinematic chains</a>, and studies problems of finding a cycle basis of a given graph in which all basis cycles are constrained to contain a specific edge. 
When all cycles have distinct weights a simple greedy algorithm can be used to find a minimum-weight basis, but if there are ties then this algorithm can easily go astray. Its analysis is complicated enough that, rather than trying to add special case tie-breaking rules to the algorithm and proving that they still work correctly, I'd like a general-purpose method for converting algorithms that work for distinct path and cycle weights into algorithms that don't require distinctness.<br /><br />If randomization is allowed, it's not difficult to perturb the weights efficiently, so that additions and comparisons of weights still take constant time. Just let ε be a sufficiently small number (or by symbolic computation treat it as an infinitesimal) and perturb each element weight by a randomly chosen integer multiple of ε where the random integers of this scheme have polynomial magnitude. These integers are small enough that (on a machine capable of addressing its own memory) they fit into a machine word, so adding them and comparing their sums takes constant time per operation. And by choosing the polynomial to be large enough, we can ensure that with high probability each two sets that we compare will have different perturbations. (We don't care about the many other pairs of sets that we don't compare.)<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/set-comparison.png"></div><br /><br />The deterministic case is trickier. To solve it (in an appendix of the preprint) I define a data structure that can build up a persistent collection of sets, by adding one element at a time to a previously-constructed set, and then can answer queries that seek the smallest index of an element that belongs to one set and not another. 
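Here is a minimal sketch (in Python, with made-up names, not code from the paper) of the randomized scheme just described: ε is handled symbolically by carrying the perturbation as a second integer, so weights become pairs compared lexicographically.

```python
import random

def perturb(weights, c=3):
    """Pair each weight with a random integer multiple of epsilon.

    With n elements and perturbations drawn from [0, n**c), any one
    comparison between two distinct sets ties with probability at most
    n**(-c), so a modest c suffices for polynomially many comparisons.
    """
    n = len(weights)
    return [(w, random.randrange(n ** c)) for w in weights]

def set_weight(perturbed, indices):
    """Total weight of a set, as (real part, multiple of epsilon)."""
    return (sum(perturbed[i][0] for i in indices),
            sum(perturbed[i][1] for i in indices))

# Two sets with equal real totals are now almost surely distinguished:
random.seed(1)
p = perturb([5, 3, 2, 4, 1, 5])
a, b = set_weight(p, {0, 4}), set_weight(p, {2, 3})  # both real parts are 6
```

Comparing `a < b` breaks ties by the ε parts; since the random integers fit in a machine word, each addition or comparison is still constant time.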
Essentially, it involves a binary tree structure imposed on the elements, and a recursive representation of each set that follows the tree structure but shares substructures with other sets, so that differing elements can be found by tracing down through the tree looking for non-shared substructures. The figure above (from the paper) illustrates in a schematic way what it looks like; see the appendix for details. This allows the power-of-two technique to work, by replacing numerical comparisons on high-precision numbers with these set queries. It would also be possible to add element-removal operations, although I didn't need these for the cycle basis problem. But it's a bit cumbersome and slow: comparing two sets with this method takes logarithmic time, and adding an element to a set is slightly slower than that. And the details involve deterministic integer dictionary data structures that are theoretically efficient but, for practical problem sizes, probably worse than binary search trees. So I think there's definitely scope for coming up with a cleaner and faster solution.<a name='cutid1-end'></a>

Tags: algorithms, papers

Fri, 17 Apr 2015 07:16:44 GMT
The red door
http://11011110.livejournal.com/307976.html
I couldn't resist photographing this door to a lecture hall in the science sector of the UCI campus. I'm not sure what the pink paint brushmarks are: vandalism? Rustoleum? But they make a nice pattern.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/reddoor/2-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/reddoor/1.html">Another shot of the same door</a> )</b>

Tags: architecture, uci, photography

Fri, 17 Apr 2015 01:08:46 GMT
Parametric closures
http://11011110.livejournal.com/307881.html
My latest arXiv preprint, <a href="http://arxiv.org/abs/1504.04073">The Parametric Closure Problem</a> (arXiv:1504.04073, to appear at WADS) concerns an old optimization problem that can be used, among other applications, in the planning process for open-pit mining.<br /><br />Suppose you have the mining rights to a three-dimensional patch of earth and rock, in which the ore is of a type and depth that make it appropriate to remove the ore by digging down to it from above rather than by tunneling. You can make a three-dimensional model of your mining area, in which different three-dimensional blocks of material might represent ore of different values or worthless overburden (the stuff on top of the ore that you have to remove to get to the ore). Each block has its own value: the profit that can be extracted from its ore minus the cost of digging it out and processing it. Additionally, each block has some blocks above it (maybe staggered in a three-dimensional brick-wall pattern) that have to be removed first before you can get to it. Some blocks are worth digging for; others are buried so deeply under other worthless material that it would cost more to dig them out than you would get in profit from them. How should you go about deciding which blocks to excavate and which to leave in place?<br /><br />This can be modeled mathematically by the <a href="https://en.wikipedia.org/wiki/Closure_problem">closure problem</a>, in which you have as input a partially ordered set (the blocks of the mine, ordered by which ones have to be excavated first before you can get to which other ones) with weights on each element (the net profit of excavating each block). The goal is to find a downward-closed subset of the partial order (a set of blocks such that, whenever a block is in the set, so is all of its overburden) with maximum total weight. 
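To make the problem statement concrete, here is a tiny brute-force sketch (hypothetical names, exponential time — nothing like the efficient min-cut approach) that just enumerates the downward-closed sets:

```python
from itertools import combinations

def max_weight_closure(weight, above):
    """Maximum-weight downward-closed set, by brute force.

    weight: dict mapping each block to its net profit.
    above: dict mapping each block to the set of blocks that must be
        excavated before it (its overburden).
    Exponential in the number of blocks -- for illustration only.
    """
    blocks = list(weight)
    best = (0, frozenset())  # excavating nothing is always allowed
    for r in range(1, len(blocks) + 1):
        for subset in combinations(blocks, r):
            chosen = set(subset)
            if all(above[b] <= chosen for b in chosen):  # downward closed
                total = sum(weight[b] for b in chosen)
                if total > best[0]:
                    best = (total, frozenset(chosen))
    return best

# Block 'ore1' (+10) lies under overburden 'top1' (-4) and is worth digging;
# 'ore2' (+2) lies under 'top2' (-5) and is not.
weight = {'top1': -4, 'ore1': 10, 'top2': -5, 'ore2': 2}
above = {'top1': set(), 'ore1': {'top1'}, 'top2': set(), 'ore2': {'top2'}}
print(max_weight_closure(weight, above))  # net profit 6: dig top1, then ore1
```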
Alternatively, instead of a partial order, you can think about a directed acyclic graph, in which you have to find a set of vertices with no outgoing edges; the problem is essentially the same. It has long been known that this can be solved in polynomial time using a transformation to the minimum cut problem.<br /><br />Ok, but that assumes that the price of the material you're extracting (gold, say) is fixed. What happens as the price of gold varies? If gold is more expensive, it will be worthwhile to dig deeper for it; if it is cheap enough, you might even prefer to shut down the whole mine. How many different mining plans do you need for different prices of gold, and how can you compute them all? This is an example of a parametric optimization problem, one in which the weight of each element depends continuously on a parameter rather than being a fixed number.<br /><br />Alternatively, what if you want to optimize a quantity that isn't just a sum of element weights? Suppose, for instance, that it takes a certain up-front cost to extract a block of ore, but that you only get the value of the gold in the ore later. How can you choose a mining plan that maximizes your return-on-investment, the ratio between the profit you expect and the cost you have to pay now? This can also be modeled as a parametric problem, where the weight of a block has the form C × profit − cost for an unknown parameter C. If you can find all the different mining plans that would be obtained by different choices of C, you can then search through them to choose the plan with the optimal return-on-investment, and this turns out to be optimal.<br /><br />My paper defines the parametric (and bicriterion) closure problems, but I was only able to find polynomial-time solutions (and polynomial bounds on the number of different solutions to be found) for some special cases of partial orders, including series-parallel partial orders, semiorders, and orders of bounded width. 
However, the partial orders arising in the mining problem are unlikely to be any of these, so a lot more remains to be done. In particular I'd like to know whether there can exist a partial order whose parametric closure problem has exponentially many solutions, or whether they all have only a polynomial number of solutions. (Anything in between would also be interesting.)<br /><br />Incidentally, it's tempting to try to generalize closures of partial orders to feasible sets of antimatroids, and ask for an algorithm that can find the maximum weight feasible set. Unfortunately, this antimatroid closure problem is NP-complete. Consider, for instance, an antimatroid defined from a family of sets <i>S<sub>i</sub></i> in which there is one antimatroid element <i>x<sub>i</sub></i> corresponding to each set <i>S<sub>i</sub></i>, another antimatroid element <i>y<sub>j</sub></i> corresponding to each element of the union of the sets, and the feasible sets consist of any subset of the <i>x<sub>i</sub></i>'s together with any of the <i>y<sub>j</sub></i>'s that are covered by sets among the chosen <i>x<sub>i</sub></i>'s. If we give the <i>x<sub>i</sub></i>'s small equal negative weights and the <i>y<sub>j</sub></i>'s big equal positive weights, then the optimal feasible set is given by the optimal solution to a set cover problem. Although this complexity result doesn't prove anything about the number of solutions to the corresponding parametric problem, it makes me think that the parametric antimatroid problem is likely to have exponentially many solutions in the worst case.<a name='cutid1-end'></a>

Tags: antimatroids, algorithms, papers

Thu, 16 Apr 2015 05:41:47 GMT
Linkage for tax day
http://11011110.livejournal.com/307498.html
<ul><li><p><a href="https://www.youtube.com/watch?v=RYH_KXhF1SY">Fractal flat torus flyover video</a> (<a href="https://plus.google.com/100003628603413742554/posts/ESNzvcmNWPL">G+</a>)</p></li>
<li><p><a href="http://retractionwatch.com/2015/04/01/you-cant-make-this-stuff-up-plagiarism-guideline-paper-retracted-for-plagiarism/">The author of the plagiarized article on plagiarism turns out to have himself been a past victim of plagiarism</a> (<a href="https://plus.google.com/100003628603413742554/posts/fuW4FPEHzuC">G+</a>)</p></li>
<li><p><a href="http://arxiv.org/abs/1501.03837">2048 NP-hardness</a> (<a href="https://plus.google.com/100003628603413742554/posts/UXrCSYbdW4c">G+</a>)</p></li>
<li><p><a href="http://mkukla.com/Stone/stone_07_1.html">Michael Kukla's organically-shaped stone sculpture</a> (<a href="https://plus.google.com/100003628603413742554/posts/Yudx1zunsN6">G+</a>)</p></li>
<li><p><a href="http://greenupgrader.com/15763/water-saving-tip-the-shower-bucket/">Save water: shower with a bucket</a> (<a href="https://plus.google.com/100003628603413742554/posts/53q1YH9JpBx">G+</a>)</p></li>
<li><p><a href="http://www.nytimes.com/2015/04/03/opinion/south-koreas-invasion-of-privacy.html">The reach of the all-seeing eye extends to the land of morning calm</a> (<a href="https://plus.google.com/100003628603413742554/posts/gFrJMvfEECz">G+</a>)</p></li>
<li><p><a href="http://research.cs.queensu.ca/cccg2015/">CCCG call for papers</a> (<a href="https://plus.google.com/100003628603413742554/posts/CP2rxPCRTLB">G+</a>)</p></li>
<li><p><a href="http://www.thisiscolossal.com/2015/04/layered-glass-sculptures-niyoko-ikuta/">Layered cut-glass sculpture by Niyoko Ikuta</a> (<a href="https://plus.google.com/100003628603413742554/posts/LexAfZzuV1L">G+</a>)</p></li>
<li><p><a href="http://theconversation.com/using-wikipedia-a-scholar-redraws-academic-lines-by-including-it-in-his-syllabus-39103">Editing Wikipedia as course assignment</a> (<a href="https://plus.google.com/100003628603413742554/posts/8pBR5Vx3pzv">G+</a>)</p></li>
<li><p><a href="https://dl.dropboxusercontent.com/u/73307148/www.wads.org/Home/accepted.html">WADS accepted papers</a> (<a href="https://plus.google.com/100003628603413742554/posts/Wxn6MF5orK7">G+</a>)</p></li>
<li><p><a href="http://www.dataisnature.com/?p=2138">The fractal architecture and algorithmic design of Hindu temples</a> (<a href="https://plus.google.com/100003628603413742554/posts/9FWZLhi5P2c">G+</a>)</p></li>
<li><p><a href="http://m759.net/wordpress/?p=49049">4d hypercube in a 4x4 planar grid</a> (<a href="https://plus.google.com/100003628603413742554/posts/8GTbEjYwvFK">G+</a>)</p></li>
<li><p><a href="http://boingboing.net/2015/04/13/village-has-a-model-village-wh.html">Self-containing model village Droste effect</a> (<a href="https://plus.google.com/100003628603413742554/posts/CZiE7z5P6ch">G+</a>)</p></li>
<li><p><a href="http://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-services-uses-formal-methods/fulltext">CACM on formal methods at Amazon</a> (<a href="https://plus.google.com/100003628603413742554/posts/6CrVgj1zCsn">G+</a>)</p></li>
<li><p><a href="https://plus.google.com/117663015413546257905/posts/E4cfuyhawYh">Infinite reflection within a mirrored sphere</a> (<a href="https://plus.google.com/100003628603413742554/posts/YskmJuQUcik">G+</a>)</p></li></ul>

Tags: fractals, architecture, wikipedia, conferences, security, hypercube, art, plagiarism

Wed, 01 Apr 2015 03:56:27 GMT
Linkage
http://11011110.livejournal.com/307408.html
<ul><br /><li><a href="http://www.bbc.com/news/technology-31302312">Non-uniformly-random playlists sound more random than random ones</a> (<a href="https://plus.google.com/100003628603413742554/posts/RsmyXs8GpMZ">G+</a>)</li><br /><li><a href="http://www.metafilter.com/147970/If-you-can-read-this-sentence-you-can-talk-with-a-scientist">Is it a good thing that science is monolingual?</a> Is it even true? (<a href="https://plus.google.com/100003628603413742554/posts/aV9jJwG3ddr">G+</a>)</li><br /><li><a href="http://blog.wikimedia.org/2015/03/17/raspberry-pi-tanzania-school/">Bringing Wikipedia to a school without electricity</a> (<a href="https://plus.google.com/100003628603413742554/posts/eg7upcxdKRC">G+</a>)</li><br /><li><a href="https://plus.google.com/115585433364871264133/posts/7pYEqXYu36G">Erik Demaine presents the MAA Centennial Lecture</a> (<a href="https://plus.google.com/100003628603413742554/posts/Q6PNMKv51db">G+</a>)</li><br /><li><a href="https://quomodocumque.wordpress.com/2015/03/18/math-bracket-2015/">What if we held elimination tournaments based on the strength of math departments?</a> (<a href="https://plus.google.com/100003628603413742554/posts/dyjP8wBTbX8">G+</a>)</li><br /><li><a href="http://bldgblog.blogspot.co.uk/2014/06/mathematical-equations-as-architectonic.html">Mathematical equations as architectonic forms</a> (<a href="https://plus.google.com/100003628603413742554/posts/gujwuuur157">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=AxJubaijQbI">Persi Diaconis on good and bad ways to shuffle cards</a> (<a href="https://plus.google.com/100003628603413742554/posts/jS8u9tWRJV3">G+</a>)</li><br /><li><a href="http://www.cems.uvm.edu/~darchdea/problems/problems.html">Open problems in topological graph theory</a> from the late Dan Archdeacon (<a href="https://plus.google.com/100003628603413742554/posts/iRsQaEVpaGP">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=OuF-WB7mD6k">How to fix a wobbly table.</a> But only 
if the problem is the uneven ground, not the table itself. (<a href="https://plus.google.com/100003628603413742554/posts/VqteLTwupnP">G+</a>)</li><br /><li><a href="http://www.scientificamerican.com/article/new-form-of-ice-forms-in-graphene-sandwich/">Square ice in graphene sandwiches</a> (<a href="https://plus.google.com/100003628603413742554/posts/aW4jWARGbvx">G+</a>)</li><br /><li><a href="http://news.sciencemag.org/scientific-community/2015/03/editor-quits-journal-over-pay-expedited-peer-review-offer">Nature Publishing Group lets authors pay for faster reviews.</a> One editor quits in disgust. (<a href="https://plus.google.com/100003628603413742554/posts/eNcuZdZGEfW">G+</a>)</li><br /><li><a href="https://hbr.org/2015/03/the-5-biases-pushing-women-out-of-stem">Five biases pushing women out of STEM</a> (<a href="https://plus.google.com/100003628603413742554/posts/E5ATuXyMZvP">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Entropy_compression">Entropy compression</a>, proving that randomized algorithms terminate because their past histories have too little information (<a href="https://plus.google.com/100003628603413742554/posts/gKxGtagGxRe">G+</a>)</li><br /><li><a href="https://www.simonsfoundation.org/multimedia/mathematical-impressions-multimedia/mathematical-impressions-the-golden-ratio/">George Hart on why you shouldn't believe many claims about appearances of the golden ratio</a> (<a href="https://plus.google.com/100003628603413742554/posts/JcBqFQGkJyr">G+</a>)</li><br /></ul>

Tags: feminism, architecture, wikipedia, academia, geometry, graph theory

Sun, 22 Mar 2015 05:18:21 GMT
Clique minors in de Bruijn graphs
http://11011110.livejournal.com/307017.html
In my new Wikipedia article on <a href="https://en.wikipedia.org/wiki/Queue_number">the queue number of graphs</a>, the binary de Bruijn graphs form an important family of examples. These are 4-regular graphs with one vertex for every <i>n</i>-bit binary string, and with an edge from every string of the form 0s or 1s to s0 or s1. <a href="http://11011110.livejournal.com/75392.html">I posted about them</a> here several years ago, with the following drawing, which can be interpreted as a 2-queue drawing with one queue for the edges that wrap around the left side and another for the edges that wrap around the right.<br /><br /><div align="center"><a href="http://en.wikipedia.org/wiki/Image:DeBruijn-3-2.png"><img src="http://www.ics.uci.edu/~eppstein/0xDE/dbg32b.png" border="0"></a></div><br /><br />Graph minors also showed up in the article, and it occurred to me to wonder: do de Bruijn graphs belong to any minor-closed graph families? The answer should be no, because they're too highly connected, but can we quantify this? One way would be to determine the <a href="https://en.wikipedia.org/wiki/Hadwiger_number">Hadwiger number</a> of the de Bruijn graphs, i.e., the size of their largest clique minors. As long as this is not bounded by a constant, the de Bruijn graphs do not belong to any nontrivial minor-closed family. And in fact, that turns out to be true: the Hadwiger number is somewhere near the square root of the number of vertices.<br /><br />One direction is easy: an <i>n</i>-vertex de Bruijn graph has 2<i>n</i> edges, and a <i>k</i>-vertex clique minor needs at least <i>k</i>(<i>k</i> − 1)/2 edges, so <i>k</i> has to be at most approximately 2√<i>n</i>.<br /><br />In the other direction, it's possible to exhibit an explicit clique minor of size nearly the square root of <i>n</i> in any de Bruijn graph. 
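As a quick sanity check on those numbers (a throwaway sketch, not from any paper), one can generate small binary de Bruijn graphs from the shift rule above and confirm the 4-regularity and the 2<i>n</i> edge count behind the upper bound:

```python
from itertools import product
from collections import Counter

def de_bruijn_edges(k):
    """Directed edges of the order-k binary de Bruijn graph: each vertex x
    (a k-bit string) points to x[1:] + '0' and x[1:] + '1'."""
    verts = [''.join(bits) for bits in product('01', repeat=k)]
    return [(x, x[1:] + c) for x in verts for c in '01']

for k in range(1, 9):
    edges = de_bruijn_edges(k)
    n = 2 ** k                     # number of vertices
    assert len(edges) == 2 * n     # the "2n edges" behind k <= ~2*sqrt(n)
    out_deg = Counter(x for x, _ in edges)
    in_deg = Counter(y for _, y in edges)
    assert all(out_deg[v] == 2 and in_deg[v] == 2 for v in out_deg)  # 4-regular
```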
To do so, I need three ingredients:<br /><br />(1) A representative vertex in the de Bruijn graph for each clique vertex,<br /><br />(2) A path in the de Bruijn graph between any two representative vertices (not necessarily disjoint from the other paths), and<br /><br />(3) A mapping from the vertices within these paths to representative vertices, such that each path can be split into two segments that are mapped to the two endpoints of the path.<br /><br />With these ingredients, the minor itself can be formed by throwing away non-path vertices and contracting path edges between pairs of vertices that are mapped to the same endpoint as each other. (Every clique minor of any graph can be represented in this way.)<br /><br />So here are the representative vertices: for order-<i>k</i> de Bruijn graphs (with <i>n</i> = 2<sup><i>k</i></sup> vertices) they are the binary strings of the form 1<i>x</i>1<i>y</i>1<i>y</i>, where <i>x</i> is a string of about log<sub>2</sub> <i>k</i> consecutive 0's and <i>y</i> is a string of length (<i>k</i> − len(<i>x</i>) − 3)/2 that doesn't contain <i>x</i> as a substring. The <i>y</i> part of this is what distinguishes this representative vertex from all the other ones, and we will look for this string to determine how to map path vertices to representative vertices. The <i>x</i> part of the string carries no useful identifying information, but instead will allow us to find <i>y</i> even when the string has been shifted and mangled in the process of finding a path between two representative vertices. With this choice of the length of <i>x</i>, a constant fraction of the strings that are the right length to be <i>y</i> are valid (don't contain <i>x</i> as a substring). 
The number of valid <i>y</i>'s, and therefore the size of the clique minor that we find, is proportional to the square root of <i>n</i>/log <i>n</i>.<br /><br />To find a path from one representative vertex to another, we simply follow edges that shift the bitstring left by one position, shifting in the bits of the second representative vertex as we shift out the bits of the first. This actually gives two paths between each two representative vertices (one in each direction), but that isn't a problem; just pick one of the two.<br /><br />In order to define the mapping from path vertices to representative vertices, it's convenient to think of a bitstring (vertex of the de Bruijn graph) as having its left end wrapped around and glued to the right end to form a single cyclic sequence of bits. As we follow the path, the string <i>x</i> of consecutive 0's will rotate from the left side of the string to the right and then back to the left, but will always be uniquely identifiable as the only string of consecutive 0's of the correct length in this cyclic sequence. From the position of <i>x</i> in any path vertex, we can identify two substrings in the cyclic sequence, in the correct positions relative to <i>x</i> to be the <i>y</i>'s of a representative vertex. For the first half of the path, one of these two <i>y</i> substrings will be equal to the <i>y</i> of the starting vertex of the path, and the second will be arbitrary (some mix of the two path endpoints). For the second half of the path, the pattern is reversed: the other of the two <i>y</i> substrings will be equal to the <i>y</i> of the ending vertex of the path, and the first will be a mix. But we can tell which of these two situations is the case by looking at the position of the consecutive 0's.
So we map each path vertex to the representative vertex for one of its two <i>y</i> substrings, the one that isn't mixed up.<br /><br />So which of sqrt(<i>n</i>) (the edge-counting upper bound) and sqrt(<i>n</i>/log <i>n</i>) (the explicit construction of a clique minor) is closer to the truth? I'm not sure. On the one hand, if you have representative vertices <i>k</i> units apart from each other (as seems necessary, up to constant factors) with disjoint paths between them in the clique minor, then comparing the total number of edges in these paths with the total number of edges in the complete minor would show that the sqrt(<i>n</i>/log <i>n</i>) bound is tight. On the other hand, in the construction above, the paths are not disjoint, and they can't be because the representative vertices don't have high degree. But I don't know how to define the mapping from path vertices to representative vertices without, seemingly, wasting bits on the <i>x</i> strings, which are used only as markers to determine where in the path each vertex is.<a name='cutid1-end'></a>

Tags: de bruijn graph, graph theory

Sat, 21 Mar 2015 04:55:15 GMT
Shattered glass
http://11011110.livejournal.com/306907.html
A broken pane in the main stairwell of my department's building (maybe a bird strike?) gave me a chance to play with the geometry of shattered glass.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/brenglass/1-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/brenglass/index.html">The rest of the photos</a> )</b>

Tags: architecture, photography

Mon, 16 Mar 2015 01:11:03 GMT
Linkage for the ides of March
http://11011110.livejournal.com/306573.html
<ul><li><a href="http://blogs.ams.org/visualinsight/2015/03/01/schmidt-arrangement/">The Schmidt arrangement</a>, triangular rosettes of circles from number theory (<a href="https://plus.google.com/100003628603413742554/posts/RM8JpoWaoA4">G+</a>)</li><br /><li><a href="https://archive.org/details/vieleckeundvielf00bruoft">Brückner, <i>Vielecke und Vielflache</i> (1900)</a>, with <a href="http://rudygodinez.tumblr.com/post/79054495133/prof-dr-max-bruckner-four-plates-from-the-book-vielecke">a tumblr post of some stellated polyhedra photographed in the book</a> (<a href="https://plus.google.com/100003628603413742554/posts/fZ4Txj3kJ7p">G+</a>)</li><br /><li><a href="http://www.fq.math.ca/Announcements/Riordan6.pdf">$1000 prize for solving open problems in OEIS</a> (<a href="https://plus.google.com/100003628603413742554/posts/Dj3e9dfkDaX">G+</a>)</li><br /><li><a href="http://bit.ly/1aKcndl">MAA celebrates Women's History Month</a> (<a href="https://plus.google.com/100003628603413742554/posts/TZsABVCueCi">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=0eC4A2PXM-U">Tensegrity robot video</a> (<a href="https://plus.google.com/100003628603413742554/posts/c7UwZtBgv52">G+</a>)</li><br /><li><a href="http://theconversation.com/you-probably-havent-heard-of-these-five-amazing-women-scientists-so-pay-attention-38329">Mini-biographies of five women scientists</a> (<a href="https://plus.google.com/100003628603413742554/posts/XvdMgN1vNhW">G+</a>)</li><br /><li><a href="http://www.youtube.com/watch?v=l4bmZ1gRqCc">Numberphile video on the diversity of human natural-language number systems</a> (<a href="https://plus.google.com/100003628603413742554/posts/73Njxen87s2">G+</a>)</li><br /><li><a href="http://iacopoapps.appspot.com/hopalongwebgl/">Interactive 3d fractal fly-through</a> (<a href="https://plus.google.com/100003628603413742554/posts/Vem5zGKrBpX">G+</a>)</li><br /><li><a href="http://www.laurenbcollister.com/well-well-look-whos-at-it-again">Yet more Elsevier 
misbehavior</a> (charging for access to open-access papers; <a href="https://plus.google.com/100003628603413742554/posts/BSXDuPdECwK">G+</a>)</li><br /><li><a href="http://sarielhp.org/blog/?p=8827">Sad news of Jirka Matousek's death</a> (<a href="https://plus.google.com/100003628603413742554/posts/DTwFj8qmvvT">G+</a>)</li><br /><li><a href="http://retractionwatch.com/2015/03/12/yes-we-are-seeing-more-attacks-on-academic-freedom-guest-post-by-historian-of-science-and-medicine/">Increasing attacks on academic freedom</a> (<a href="https://plus.google.com/100003628603413742554/posts/ELfJRoJda4c">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=2E9m6yDEIj8">Vi Hart throws cold water on the whole Pi day thing and how arbitrary it is</a> (<a href="https://plus.google.com/100003628603413742554/posts/6YiHR4YHEj4">G+</a>)</li><br /><li><a href="https://cameroncounts.wordpress.com/2015/03/15/folding-de-bruijn-graphs/">Folding de Bruijn graphs</a> (<a href="https://plus.google.com/100003628603413742554/posts/DUGi3ED1ppW">G+</a>)</li></ul>

Tags: feminism, language, free speech, number theory

Sun, 15 Mar 2015 04:31:47 GMT
Photos from Bellairs
http://11011110.livejournal.com/306195.html
I was in Barbados last week for the <a href="http://cglab.ca/~morin/misc/bb2015/">Third Annual Workshop on Geometry and Graphs</a>. This time, unlike <a href="http://11011110.livejournal.com/286162.html">my visit last year</a>, I remembered to bring my camera.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/bellairs15/23-m.jpg" border="2" style="border-color:black;" /></div><br /><br /><b>( <a href="http://www.ics.uci.edu/~eppstein/pix/bellairs15/index.html">Many more photos, not all by me</a> )</b>

Tags: conferences, photography

Fri, 06 Mar 2015 06:46:36 GMT
The nearest neighbor in an antimatroid
http://11011110.livejournal.com/305937.html
Franz Brandenburg, Andreas Gleißner, and Andreas Hofmeier have <a href="http://dx.doi.org/10.1142/S1793830913600033">a 2013 paper</a> that considers the following problem: given a finite partial order P and a permutation π of the same set, find the nearest neighbor to π among the linear extensions of P. Here "nearest" means minimizing the <a href="https://en.wikipedia.org/wiki/Kendall_tau_distance">Kendall tau distance</a> (number of inversions) between π and the chosen linear extension. Or, to put it another way: you are given a directed acyclic graph whose vertices are tagged with distinct numbers, and you want to choose a topological ordering of the graph that minimizes the number of pairs that are out of numerical order.<br />Among other results they showed that this is NP-hard, 2-approximable, and fixed-parameter tractable.<br /><br />An idea I've been pushing (most explicitly in my recent <i>Order</i> paper) is that, when you have a question involving linear extensions of a partial order, you should try to generalize it to the basic words of an <a href="https://en.wikipedia.org/wiki/Antimatroid">antimatroid</a>. So now, let A be an antimatroid and π be a permutation on its elements. What is the nearest neighbor of π among the basic words of A? Can the fixed-parameter algorithm for partial orders be generalized to this problem?<br /><br />Answer: Yes, no, and I don't know. Yes, the problem is still fixed-parameter tractable with a nice dependence on the parameter. No, not all FPT algorithms generalize directly. 
And I don't know, because I don't seem to have subscription access to the journal version of the BGH paper, the <a href="http://www.uni-passau.de/fileadmin/files/forschung/mip-berichte/MIP-1102.pdf">preprint version</a> doesn't include the FPT algorithm, and I don't remember clearly enough what Franz told me about this a month or so ago, so I can't tell which one they're using.<br /><br />But anyway, here's an easy FPT algorithm for the partial order version of the problem (that might or might not be the BGH algorithm). For any element x, we can define a set L of the elements coming before x in the given permutation π, and another set R of the elements coming after x in the permutation; L, x, and R form a three-way partition of the elements. We say that x is "safe" if there exists a linear extension of P that gives the same partition for x. Otherwise, we call x "unsafe". Then in the linear extension nearest to π, every safe element has the same position that it has in π. For, if we had a linear extension σ for which this wasn't true, then the sequence (σ ∩ L),x,(σ ∩ R) would also be a linear extension and would have fewer inversions. On the other hand, every unsafe element participates in at least one inversion, so if the optimal solution value is k then there can be at most 2k unsafe elements. Therefore, we can restrict both π and P to the subset of unsafe elements, solve the problem on the resulting <a href="https://en.wikipedia.org/wiki/Kernelization">linear-sized kernel</a>, and then put back the safe elements in their places, giving an FPT algorithm.<br /><br />You can define safe elements in the same way for antimatroids but unfortunately they don't necessarily go where they should. 
As an extreme example, consider the antimatroid on the symbols abcdefghijklmnopqrstuvwxyz* whose basic words are strings of distinct symbols that are alphabetical up to the star and then arbitrary after it, and the permutation π = zyxwvutsrqponmlkjihgfedcba* that wants the symbols in backwards order but keeps the star at the end. The star is safe, but if we put it in its safe place then the only possible basic word is abcdefghijklmnopqrstuvwxyz* with 325 inversions. Instead, putting it first gives us the basic word *zyxwvutsrqponmlkjihgfedcba with only 26 inversions. So the same kernelization doesn't work. It does work to restrict π and P to the elements whose positions in π are within k steps of an unsafe element, but that gives a bigger kernel (quadratic rather than linear).<br /><br />Instead, let's try choosing the elements of the basic word one at a time. At each step, if the element we choose comes later in π than i other elements that we haven't chosen yet, it will necessarily cause i inversions with those other elements, and the total number of inversions of the word we're finding is just the sum of these numbers i. So when the number of inversions is small, then in most steps we should choose i = 0, and in all steps we should choose small values of i. In fact, whenever it's possible to choose i = 0, it's always necessary to do so, because any basic word consistent with the choices we've already made that doesn't make this choice could be made better by moving the i = 0 element up to the next position.<br /><br />So this leads to the following algorithm for finding a basic word with distance k: at each step where we can choose i = 0, do so. 
And at each step where the antimatroid doesn't allow the i = 0 choice, instead recursively try all possible choices of i from 1 to k that are allowed by the antimatroid, but then subtract the value of i we chose from k because it counts against the number of inversions we have left to find.<br /><br />Each leaf of the recursion takes linear time for all its i = 0 choices, so the main factor in the analysis is how many recursive branches there are. This number is one for k = 0 (because we can never branch), and it's also one for k = 1 (because at a branch point we can only choose i = 1 after which we are in the k = 0 case). For each larger value of k, the first time we branch we will be given a choice of all possible smaller values of k, and the total number of branches in the recursion will be the sum of the numbers of branches for these smaller values. That is, if R(k) denotes the number of recursive branches for parameter k, it obeys the recurrence R(0) = R(1) = 1, R(k) = sum<sub>i<k</sub>R(i), which solves to R(k) = 2<sup>k−1</sup> for every k ≥ 1. So this algorithm is still fixed-parameter tractable, with only single-exponential dependence on k.<br />If we don't know k ahead of time, we can run the whole algorithm for k = 1,2,3,... and the time bound will stay the same.<br /><br />Given the existence of this simple O(2<sup>k</sup>nI) algorithm (where I is the time for testing whether the antimatroid allows an element to be added in the current position), does it make sense to worry about a kernelization, which after all doesn't completely solve the problem, but only reduces it to a smaller one? Yes. The reason is that if you kernelize (using the O(k<sup>2</sup>)-size kernel that restricts to elements that are within k steps of an unsafe element) before recursing, you separate out the exponential and linear parts, and get something more like O(nI + 2<sup>k</sup>k<sup>2</sup>I).
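The branch-counting recurrence above is easy to sanity-check in a few lines of Python (a quick check of mine, not part of the algorithm):

```python
# R(0) = R(1) = 1 and R(k) = sum of R(i) over i < k, as in the analysis above.
def branches(k):
    R = [1, 1]
    for _ in range(2, k + 1):
        R.append(sum(R))  # each new value is the sum of all earlier ones
    return R[k]
```

Tabulating a few values confirms the closed form 2<sup>k−1</sup> for k ≥ 1.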
But the difference between quadratic and linear kernels is swamped by the exponential part of the time bound, so rather than looking for smaller kernels it would be better to look for a more clever recursion with less branching.<br /><br />The same authors also have <a href="http://dx.doi.org/10.1007/s10878-012-9467-x">another paper</a> on <a href="https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient">Spearman footrule distance</a> (how far each element is out of its correct position, summed over all the elements) but the kernelization in this paper looks a little trickier and I haven't thought carefully about whether the same approach might work for the antimatroid version of that problem as well.<a name='cutid1-end'></a>
(Tags: antimatroids)

Linkage for the end of a short month (Sun, 01 Mar 2015 02:09:07 GMT)
http://11011110.livejournal.com/305884.html
<ul><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2015/jan/13/golden-ratio-beautiful-new-curve-harriss-spiral">The Harriss spiral</a> (<a href="https://plus.google.com/100003628603413742554/posts/cj7FuVzPcyY">G+</a>)</li><br /><li><a href="http://www.thisiscolossal.com/2015/02/ice-sand-scultpures-lake-michigan/">Wind-carved towers of sand and ice</a> (<a href="https://plus.google.com/100003628603413742554/posts/KAj7MgLygwJ">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/01/28/watch-beachbot-make-large-scal.html">Beachbot</a>, a giant etch-a-sketch for your local beach (<a href="https://plus.google.com/100003628603413742554/posts/N7zNSZubpGG">G+</a>)</li><br /><li><a href="http://www.koutschan.de/data/link/index.html">Linkages that can draw any algebraic curve</a> (<a href="https://plus.google.com/100003628603413742554/posts/AojzKM96uR3">G+</a>)</li><br /><li><a href="https://cp4space.wordpress.com/2015/02/19/proto-penrose-tilings/">Precursors to the Penrose tiling</a> in the works of Kepler and the Islamic architects (<a href="https://plus.google.com/100003628603413742554/posts/h8aVPY67v4v">G+</a>)</li><br /><li><a href="http://fivethirtyeight.com/datalab/academy-awards-best-picture-instant-runoff/">Instant-runoff demo</a> (<a href="https://plus.google.com/100003628603413742554/posts/AQbqNjFsXi6">G+</a>)</li><br /><li><a href="https://3010tangents.wordpress.com/category/women-in-math">Women in mathematics</a> (<a href="https://plus.google.com/100003628603413742554/posts/KkeBR6hDLrD">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=i5oc-70Fby4">Big Bang Theory Eye of the Tiger Scene</a> (<a href="https://plus.google.com/100003628603413742554/posts/dUNx4JEs1n6">G+</a>)</li><br /><li><a href="http://www.scfbm.org/content/8/1/7/">Why using git is good scientific practice</a> (<a href="https://plus.google.com/100003628603413742554/posts/J21fqi9ZUqS">G+</a>)</li><br /><li><a 
href="https://en.wikipedia.org/wiki/Klam_value">Klam values</a> and other colorful neologisms from the parameterized complexity crowd (<a href="https://plus.google.com/100003628603413742554/posts/3aQLAeeKckW">G+</a>)</li><br /><li><a href="http://envisage-project.eu/proving-android-java-and-python-sorting-algorithm-is-broken-and-how-to-fix-it/">Timsort is broken</a> (and has been for the past 12 years) (<a href="https://plus.google.com/100003628603413742554/posts/MHsutRHNrQ1">G+</a>)</li></ul>
(Tags: tools, geometry, algorithms)

Highly abundant numbers are practical (Thu, 26 Feb 2015 18:49:00 GMT)
http://11011110.livejournal.com/305481.html
A <a href="https://en.wikipedia.org/wiki/Highly_abundant_number#References">highly abundant number</a> is a positive integer <i>n</i> that holds the record (among it and smaller numbers) for the biggest sum of divisors σ(<i>n</i>). While cleaning up some citations on the Wikipedia article, I ran across an unsolved problem concerning these numbers, posed by Jaycob Coleman and listed on <a href="https://oeis.org/A002093">the OEIS entry for them</a>: are all sufficiently large highly abundant numbers practical?<br /><br />A <a href="https://en.wikipedia.org/wiki/Practical_number">practical number</a> <i>n</i> has the property that all numbers up to <i>n</i> can be expressed as sums of distinct divisors of <i>n</i>. This can be tested by looking at the factorization of <i>n</i>: define the <<i>p</i>-smooth part of <i>n</i> to be the product of the prime powers in the factorization of <i>n</i> whose primes are less than <i>p</i> (multiplicity matters here: the <5-smooth part of 20 is 4, not 2). Then <i>n</i> is practical if and only if, for each prime factor <i>p</i> of <i>n</i>, <i>p</i> is at most one more than the sum of divisors of the <<i>p</i>-smooth part of <i>n</i>. So, for instance, the highly abundant number 10 is not practical: the <5-smooth part of 10 is 2, and 5 is too big compared to σ(2) = 3. Also, 3 is not practical as its <3-smooth part is the empty product, 1, and 3 > σ(1) + 1 = 2.
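Both definitions are short enough to try out in code. Here is a Python sketch of mine (the helper names are my own, not from any library) of the practicality criterion, with prime multiplicities included in the smooth parts, plus a scan for highly abundant numbers:

```python
def factorization(n):
    """Prime factorization of n as (prime, exponent) pairs, primes ascending."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            a = 0
            while n % p == 0:
                n //= p
                a += 1
            yield p, a
        p += 1
    if n > 1:
        yield n, 1

def sigma(n):
    """Sum of divisors of n."""
    total = 1
    for p, a in factorization(n):
        total *= (p ** (a + 1) - 1) // (p - 1)
    return total

def is_practical(n):
    """Each prime factor p of n must be at most one more than the sum of
    divisors of the <p-smooth part of n (multiplicities included)."""
    smooth_sigma = 1  # sigma of the <p-smooth part; empty product so far
    for p, a in factorization(n):
        if p > smooth_sigma + 1:
            return False
        smooth_sigma *= (p ** (a + 1) - 1) // (p - 1)
    return True

def highly_abundant(limit):
    """Numbers up to limit that set a new record for sigma."""
    record, records = 0, []
    for n in range(1, limit + 1):
        s = sigma(n)
        if s > record:
            record, records = s, records + [n]
    return records
```

Running the scan for small limits makes the examples above concrete: 3 and 10 come out impractical, while the other small highly abundant numbers test as practical.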
Are these the only exceptions?<br /><br />As with other questions involving record-holders for <a href="https://en.wikipedia.org/wiki/Multiplicative_function">multiplicative functions</a>, the highly abundant numbers can be thought of as solutions to special instances of the <a href="https://en.wikipedia.org/wiki/Knapsack_problem">knapsack problem</a>: if we define the size of a prime power <i>p<sup>i</sup></i> to be log <i>p</i>, and we define its profit to be the logarithm of the factor (<i>p</i><sup><i>i</i> + 1</sup> − 1)/(<i>p</i><sup><i>i</i></sup> − 1) by which including <i>p<sup>i</sup></i> as a divisor of <i>n</i> would cause σ to increase (relative to the next lower power of <i>p</i>), then the factorization of <i>n</i> is given by the set of prime powers whose sizes add to at most log <i>n</i> and whose profits add to the largest number possible. I don't know how to use this knapsack view of the problem directly (in part because knapsack is a hard problem) but it is helpful in thinking about showing that certain factors must be present or absent because they would lead to a better knapsack solution.<br /><br />For instance, suppose that <i>n</i> is highly abundant, let <i>p</i> be the smallest prime that does not divide <i>n</i>, and let <i>P</i> be the largest prime factor of <i>n</i>. Then it must be true that <i>P</i> < <i>p</i><sup>2</sup>. For, if not, let <i>q</i> = floor(<i>P</i>/<i>p</i>). We could replace <i>P</i> in the factorization of <i>n</i> by <i>pq</i>, giving a smaller number than <i>n</i> with a bigger contribution to σ: at least (<i>p</i> + 1)<i>q</i>, versus at most <i>P</i> + 1, or smaller if <i>P</i> appears to a higher power than one.<br /><br />Based on this fact, it's straightforward to show that all highly abundant numbers that are divisible by four are practical. More strongly the same is true for other numbers <i>n</i> that are divisible by four and have the same inequality for <i>p</i> and <i>P</i>. 
For, if the first missing prime <i>p</i> is 3, then the sum of divisors of the <<i>p</i>-smooth part is at least 7, big enough to cover any prime factor <i>P</i> that satisfies the inequality. And for each additional prime factor of <i>n</i> smaller than <i>p</i>, the bound on <i>P</i> grows by a factor of at most four (by <a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate">Bertrand's postulate</a>) and the sum of divisors of the smooth part grows by a factor of at least four, so this sum of divisors always remains large enough to satisfy the condition for being practical.<br /><br /><s>But in their early work on highly abundant numbers, Alaoglu and Erdős observed that 210 is the largest highly abundant number to include only one factor of two in its prime factorization. All larger highly abundant numbers are divisible by four, and by the argument above they are all practical. The remaining cases are small enough to test individually, and they are all practical. So Jaycob Coleman's conjecture is true.</s><br /><br />Update: this claim about 210 is obviously wrong. 630 is highly abundant and is also not divisible by four. So here's a better argument along the same lines. The case <i>p</i> = 2 is easy to handle: <i>P</i> can only be 3, so <i>n</i> is a power of three. If it is not 3 itself, we could replace a factor of 9 in it by a factor of 8, getting a smaller number with a bigger contribution to σ (15 vs 13). So the only odd highly abundant number is 3. Similarly, if the first missing prime is 3, then <i>n</i> must be {2,5,7}-smooth. If it is divisible by 25, we can replace this factor by 24 (with a contribution of at least 32 to σ instead of 31) and if it is divisible by 7, we can replace this factor by 6 (with a contribution greater than 8 to σ instead of 8).
So the only possible highly abundant numbers that are even but not divisible by 3 are powers of two and their multiples by five, and the only one of those that can be impractical is 10.<br /><br />Next, suppose that the first missing prime is 5, and there is only one factor of two. The <5-smooth part is at least 6 and its sum of divisors is 12, big enough to cover all primes less than 11, and if any of these primes is a factor of <i>n</i> then including it in the smooth part boosts the sum of divisors enough to cover all remaining factors. Similarly, if there is more than one factor of three, then the sum of divisors of the smooth part is at least 39, covering all possible prime factors. So the only possible impractical numbers in this case are not divisible by 5, 7, or 11 but are divisible by exactly one factor of 2 or 3 and may be divisible by 13, 17, 19, or 23. A factor of 13 can be replaced by a factor of 10 (contributing 14 to σ in either case, so giving a smaller number with the same sum of divisors). A factor of 17 can be replaced by a factor of 15 (contributing 19.5 to σ instead of 18). A factor of 19 can be replaced by a factor of 18 (contributing 23 1/3 instead of 20) and a factor of 23 can be replaced by a factor of 20 (contributing 30 instead of 24). So none of these cases give rise to new exceptions.<br /><br />Finally, if the first missing prime is 7, then the <7-smooth part is at least 30 and its sum of divisors is at least 72, big enough to cover all primes less than 49, and from here we can use the same Bertrand postulate argument.<a name='cutid1-end'></a>
(Tags: number theory)

Halin graph algorithms made simple (Thu, 19 Feb 2015 01:49:18 GMT)
http://11011110.livejournal.com/305358.html
I have a new paper on the arXiv, <a href="http://arxiv.org/abs/1502.05334">D3-reducible graphs</a> (arXiv:1502.05334), but it's a small one that is not related to this week's many conference submission deadlines (ICALP yesterday, COLT tomorrow, WADS Friday). One reason for its existence was that I wanted an implementable algorithm for working with <a href="https://en.wikipedia.org/wiki/Halin_graph">Halin graphs</a> (the graphs that you get by drawing a tree in the plane, with no degree-two vertices, and then connecting the leaves by a cycle surrounding the tree) and the algorithms that I could find for them were based on linear-time planarity testing, something I haven't yet worked up the courage to try implementing. Instead I found that it's possible to recognize Halin graphs, and to solve a wide class of related problems (such as finding their planar embeddings, decomposing them into a tree and a cycle, or finding a Hamiltonian cycle) using a simple reduction-based algorithm that repeatedly finds and simplifies certain local configurations within the graph. The two reductions that I used are shown below; one of them collapses a triangle of degree-three vertices to a point, and the other shortens certain paths of degree-three vertices.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/D3-reductions.png"></div><br /><br />Every Halin graph can be simplified by these reductions to a complete graph on four vertices; in terms of the tree and cycle decomposition of the Halin graph, one of these reductions removes the children from a tree node with two leaf children, and the other removes the middle of three consecutive leaf children. But if you want to use these to recognize Halin graphs only, you need to restrict them a little, because some other graphs can also be simplified to the same four-vertex complete graph, and that's mostly what the paper is about.
I call these D3-reducible graphs, and they have a lot of properties in common with the Halin graphs: they are planar, minimally 3-vertex-connected, and Hamiltonian, and they have bounded treewidth. One of the smallest examples of a D3-reducible graph that is not a Halin graph is the truncated tetrahedron graph:<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/trunctet.png"></div><br /><br />I have updated my <a href="http://www.ics.uci.edu/~eppstein/PADS/">PADS Python algorithm library</a> to include the new Halin graph recognition algorithm, and some related algorithms, as <a href="http://www.ics.uci.edu/~eppstein/PADS/Halin.py">Halin.py</a>. (I also updated the license text for the library, to use the MIT license — you can do almost anything you want but don't hold me responsible for it — rather than trying to claim that the code is public domain, which I'm told is not so meaningful legally.)<a name='cutid1-end'></a>
(Tags: graph algorithms, python, papers)

Linkage (Mon, 16 Feb 2015 01:32:12 GMT)
http://11011110.livejournal.com/304968.html
I don't know what Google+ is doing under the hood (and don't really want to know) but whatever it is seems kind of bloated to me, enough to kill my browser and the responsiveness on my whole machine when I try to open 14 G+ tabs at once. But anyway, here they are:<br /><ul><li><a href="http://www.slate.com/articles/technology/bitwise/2014/12/wikipedia_editing_disputes_the_crowdsourced_encyclopedia_has_become_a_rancorous.single.html">Sexism and bureaucracy at Wikipedia</a> and <a href="http://ergodicity.net/2015/01/23/linkage-55/">an update on the Walter Lewin sexual harassment story</a> (<a href="https://plus.google.com/100003628603413742554/posts/TzcWwiVhKtr">G+</a>)</li><br /><li><a href="http://www.dailykos.com/story/2015/01/28/1360765/-Gov-Scott-Walker-seeks-300-million-in-university-cuts-but-220-million-to-build-Bucks-a-new-arena">Wisconsin gov. Walker seeks major cuts on universities so he can build a sportsball facility;</a> Calif. gov. Brown isn't much better (<a href="https://plus.google.com/100003628603413742554/posts/SKkguKRxmLB">G+</a>)</li><br /><li><a href="http://www.confsearch.org/confsearch/faces/pages/topic.jsp?topic=Theory&sortMode=1&graphicView=1">Conference search: Theory</a> (<a href="https://plus.google.com/100003628603413742554/posts/9My7JoFhSgN">G+</a>)</li><br /><li><a href="https://twitter.com/INTERESTING_JPG/status/562618942217531393">Automated textual image analysis results</a> and <a href="http://deeplearning.cs.toronto.edu/i2t">engine</a> (<a href="https://plus.google.com/100003628603413742554/posts/grLqEiKonkk">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=GznQgTdEdI4">Super eggs</a>: the mathematics behind the shape of, among other things, Azteca Stadium in Mexico City (<a href="https://plus.google.com/100003628603413742554/posts/2QrxUEH2NDx">G+</a>)</li><br /><li><a href="http://libraries.calstate.edu/equitable-access-public-stewardship-and-access-to-scholarly-information">Cal State Univ. 
gives up on Wiley journals after hefty price increases and refusal to unbundle</a> (<a href="https://plus.google.com/100003628603413742554/posts/URkXdWxDzew">G+</a>)</li><br /><li><a href="https://gilkalai.wordpress.com/2015/02/06/from-oberwolfach-the-topological-tverberg-conjecture-is-false">Topological Tverberg counterexample</a>. It is true for all prime-power dimensions but that wasn't good enough to be true for all dimensions. (<a href="https://plus.google.com/100003628603413742554/posts/KRqdQqCt9Gw">G+</a>)</li><br /><li><a href="http://blog.matthen.com/post/97284098616/take-a-rectangle-and-cut-it-along-a-random-line">Randomly cut and flipped rectangles</a> from another Tumblr of interesting mathematical visualizations (<a href="https://plus.google.com/100003628603413742554/posts/4Dw5FthmMjg">G+</a>)</li><br /><li><a href="http://www.maureeneppstein.com/mve_journal/?p=634">1961 interview with F1 racing driver Bruce McLaren's family</a>. From my mother's blog; McLaren was her second cousin. 
(<a href="https://plus.google.com/100003628603413742554/posts/4vd3YZSkdfK">G+</a>)</li><br /><li><a href="http://www.theguardian.com/science/alexs-adventures-in-numberland/2015/feb/10/muslim-rule-and-compass-the-magic-of-islamic-geometric-design">Muslim rule and compass: the magic of Islamic geometric design</a> (<a href="https://plus.google.com/100003628603413742554/posts/HorwnpBrtM9">G+</a>)</li><br /><li><a href="http://www.umass.edu/gradschool/sites/default/files/iranian_student_admissions_2_2015.pdf">UMass Amherst bans Iranian STEM grad students</a> (<a href="https://plus.google.com/100003628603413742554/posts/VDYSkY69tGe">G+</a>)</li><br /><li><a href="http://www.metafilter.com/146924/Paper-Engineering-Over-700-years-of-Fold-Pull-Pop-and-Turn">Many links on pop-up books and related paper engineering problems</a> (<a href="https://plus.google.com/100003628603413742554/posts/eijbaYWgV4w">G+</a>)</li><br /><li><a href="http://www.win.tue.nl/SoCG2015/?page_id=601">SoCG accepted papers, with abstracts</a> (<a href="https://plus.google.com/100003628603413742554/posts/LAJJqRDivFX">G+</a>)</li><br /><li><a href="http://boingboing.net/2015/02/14/facebook-tells-native-american.html">The Nymwars continue at Facebook</a> (<a href="https://plus.google.com/100003628603413742554/posts/Wa3EkaMXgog">G+</a>)</li></ul>
(Tags: anonymity, combinatorics, wikipedia, academia, conferences, geometry, family)

Where do you get your BibTeX data? (Fri, 06 Feb 2015 06:35:27 GMT)
http://11011110.livejournal.com/304679.html
Formatting a couple hundred references for a proposal led me to wonder: If you find yourself wanting to look up the BibTeX data for a paper, where do you go? And how much do you have to edit it yourself afterwards?<br /><br />The three most obvious choices for me are <a href="http://www.informatik.uni-trier.de/~ley/db/">DBLP</a>, <a href="http://dl.acm.org/">ACM Digital Library</a>, or <a href="http://www.ams.org/mathscinet/">MathSciNet</a>.<br /><br />There used to be a project to maintain a collective file "geom.bib" with all the references that any computational geometer would ever use. I still have about 18 copies of it on my computer (presumably not all in sync with each other) from various papers that used it, but it became unwieldy (too big to use as one file) and seems to have fallen by the wayside. Additionally, many publishers supply citation files for their own publications, so you could use those, or even take the time to write your own. But my experience is that most of the publishers are not good at generating clean data (e.g. they use hyphens instead of en-dashes for page ranges, or permute conference title words into a different order than what you'd want to use in a citation), although at least they're better at it than Google scholar.<br /><br />The big three above all have their quirks, but they generate pretty clean data (especially if you tell DBLP not to use crossref). Copying from them can be a lot easier and less error-prone than typing it all in yourself, and picking one source and sticking to it could also help achieve greater consistency. DBLP has the best coverage for Computer Science, I think. I recently looked at a five-year window of my papers (for the prior work section of that proposal) and it missed only three (two in non-computer science journals about topology and mathematical psychology, and the third in an edited volume about cellular automata).<br /><br />My own idiosyncratic preference is for MathSciNet, though. 
Their coverage is almost as good for my purposes (sometimes better) but what ends up making the difference for me is their care about the capitalization of title words and formatting of math in titles. DBLP and ACM leave lots of words capitalized and let the bibtex style lowercase them later, which mostly works, but fails when some words are proper nouns that should stay capitalized. MathSciNet takes care to lowercase everything to how it should appear in a citation (my preference) and to protect the letters that should remain uppercase. And for titles that contain formulas, MathSciNet gets it right and the other two don't.<br /><br />Example: ACM: "The h-Index of a Graph and Its Application to Dynamic Subgraph Statistics".<br />DBLP: "The h-Index of a Graph and Its Application to Dynamic Subgraph Statistics" (journal version); "The \emph{h}-Index of a Graph and Its Application to Dynamic Subgraph Statistics" (conference version).<br />MathSciNet: "The {$h$}-index of a graph and its application to dynamic subgraph statistics". One of these is correct and the others aren't.<br /><br />But maybe there's some new tool or database that beats all of these that I haven't yet found out about. One of my co-authors uses Zotero, but I haven't tried that myself. Are systems like it, based on shared libraries rather than comprehensive databases, still useful?<br /><br />(See also <a href="https://plus.google.com/u/0/100003628603413742554/posts/T7msni7sGmJ">discussion on G+</a> from the same post.)<a name='cutid1-end'></a>
(Tags: tools, bibliography)

Linkage (Sun, 01 Feb 2015 03:17:46 GMT)
http://11011110.livejournal.com/304478.html
Did you know...<ul><li>... that <a href="http://www.imdb.com/title/tt2582802/">Bernard Chazelle's son directed a film that has been nominated for a best-picture Oscar?</a> (<a href="https://plus.google.com/100003628603413742554/posts/FsKPpc8K545">G+</a>)</li><br /><li>... that <a href="http://www.sciencepubs.org/content/347/6217/14.full">the rebellion in Ukraine has caused many scientists and whole universities to move?</a> (<a href="https://plus.google.com/100003628603413742554/posts/fAThtaZX9kT">G+</a>)</li><br /><li>... that <a href="https://adamsheffer.wordpress.com/2015/01/19/a-list-of-recent-papers/">there have been many recent papers on counting geometric incidences?</a> (<a href="https://plus.google.com/100003628603413742554/posts/5AB15iLt8kc">G+</a>)</li><br /><li>... that <a href="http://www.wired.com/2015/01/chocolates-whose-intricate-architecture-designed-tweak-taste-buds/">the shape of a piece of 3d-printed chocolate might influence its flavor?</a> (<a href="https://plus.google.com/100003628603413742554/posts/Ri7GagMRtza">G+</a>)</li><br /><li>... that <a href="http://www.wired.com/2015/01/quanta-curves-from-flatness-kirigami/">placing precise slits in a flat paper surface can cause it to curve in predictable ways?</a> (<a href="https://plus.google.com/100003628603413742554/posts/RxzVP7VWdkJ">G+</a>)</li><br /><li>... that <a href="https://www.youtube.com/watch?v=on3ZLLKQp-4">the waterbear is a new fast knightship in Conway's game of life?</a> (<a href="https://plus.google.com/100003628603413742554/posts/hkGgm2ohJfG">G+</a>)</li><br /><li>... that <a href="http://www.thisiscolossal.com/2015/01/intricate-modular-paper-sculptures-by-richard-sweeney/">Richard Sweeney's paper-folding artworks are inspired by snow and clouds?</a> (<a href="https://plus.google.com/100003628603413742554/posts/e6xLbXJeeJS">G+</a>)</li><br /><li>... 
that <a href="https://www.google.com/webmasters/tools/mobile-friendly/">Google has a service for checking whether your home page is mobile-friendly?</a> (<a href="https://plus.google.com/100003628603413742554/posts/8fdGejK5U1W">G+</a>)</li><br /><li>... that <a href="http://hyrodium.tumblr.com/post/109000595139/i-made-gif-animations-of-sum-of-square-numbers">the sum of the first n squares is n(n+1)(2n+1)/6?</a> (<a href="https://plus.google.com/100003628603413742554/posts/JxFABtKkxj1">G+</a>)</li><br /><li>... that <a href="https://facultystaff.richmond.edu/~ebunn/homocentric/">epicycles can be visualized by spheres spinning inside each other?</a> (<a href="https://plus.google.com/100003628603413742554/posts/P5H2889XWxR">G+</a>)</li><br /><li>... that <a href="https://www.youtube.com/watch?v=74BGYzSkMeU">Paul Erdős traveled to Madras to meet Krishnaswami Alladi when Alladi was only an undergraduate?</a> (<a href="https://plus.google.com/100003628603413742554/posts/MRo43mTmSGN">G+</a>)</li><br /><li>... that <a href="http://aperiodical.com/2015/01/apiological-mathematical-speculations-about-bees-part-1-honeycomb-geometry/">you can persuade bees to make honeycombs in nonstandard tessellations by giving them patterned foundation plates?</a> (<a href="https://plus.google.com/100003628603413742554/posts/7ytwWuMJzsJ">G+</a>)</li><br /><li>... that <a href="http://googlescholar.blogspot.com/2015/01/blast-from-past-reprint-request.html">professors used to send each other postcards requesting printed copies of their recent papers?</a> (<a href="https://plus.google.com/100003628603413742554/posts/UPntqoRxWtk">G+</a>)</li><br /><li>... that <a href="http://boingboing.net/2015/01/22/origami-dollar-bill-koi.html">you can fold a dollar bill into a fish?</a> (<a href="https://plus.google.com/100003628603413742554/posts/RPB3AWW8Dhb">G+</a>)</li><br /><li>... 
that <a href="http://www.jebiga.com/strandbeest-kinetic-animal-sculptures-theo-jansen/">Theo Jansen's autonomous walking creatures have no brains?</a> (<a href="https://plus.google.com/100003628603413742554/posts/G8Cd8U1MEsk">G+</a>)</li></ul>
(Tags: tools, cellular automata, academia, number theory, geometry, art, origami)

The linear algebra of edge sets of graphs (Thu, 22 Jan 2015 23:29:25 GMT)
http://11011110.livejournal.com/304362.html
This quarter, in my advanced algorithms class, I've been going through <a href="http://www.cc.gatech.edu/fac/Vijay.Vazirani/book.pdf">Vazirani's <i>Approximation Algorithms</i> book</a> chapter-by-chapter, and learning lots of interesting material that I didn't already know myself in the process.<br /><br />One of the things I recently learned (in covering chapter 6 on feedback vertex set approximation)<sup>*</sup> is that, although all the students have taken some form of linear algebra, many of them have never seen a vector space in which the base field is not the real numbers or in which the elements of the vector space are not tuples of real coordinates. So instead of discussing the details of that algorithm I ended up spending much of the lecture reviewing the theory of binary vector spaces. These are very important in algebraic graph theory, so I thought it might be helpful to write a very gentle introduction to this material here.<br /> <br />First of all we need the concept of a <a href="https://en.wikipedia.org/wiki/Field_(mathematics)">field</a>. This is just a system of elements in which we can perform the usual arithmetic operations (addition, subtraction, multiplication, and division) and expect them to behave like the familiar real number arithmetic: addition and multiplication are associative and commutative, there are special values 0 and 1 that are the identities for addition and multiplication respectively, subtraction is inverse to addition, division is inverse to multiplication by anything other than zero, and multiplication distributes over addition. The field that's important for this material is a particularly simple one, <a href="https://en.wikipedia.org/wiki/GF(2)">GF(2)</a>, in which the required special values 0 and 1 are the only elements. The arithmetic of these two values can be described as ordinary integer arithmetic mod 2, or equivalently it can be described by saying that addition is Boolean xor and multiplication is Boolean and. 
Subtraction turns out to be the same as addition, and division by 1 (the only value that it's possible to divide by) is just the identity operation. It's not hard to verify that these operations have all the desired properties of a field, and doing so maybe makes a useful exercise (Exercise 1).<br /><br />Next, a <a href="https://en.wikipedia.org/wiki/Vector_space">vector space</a> is a collection of elements that can be added to each other and multiplied by <a href="https://en.wikipedia.org/wiki/Scalar_(mathematics)">scalars</a> from a field. (One can generalize the same concept to other kinds of arithmetic than fields but then one gets modules instead of vector spaces.) The vector addition operation must be commutative and invertible; this implies that it has an identity element, and this element (whatever it happens to be) is called the zero vector. Additionally, scalar-scalar-vector multiplications must be associative, scalar multiplication by the special element 1 of the field must be the identity operation, and scalar multiplication must be distributive over both vector and field addition.<br /><br />One easy way to construct vector spaces over a field <b>F</b> is to make its elements be <i>k</i>-tuples of elements of <b>F</b> with the addition and scalar multiplication operations acting independently on each coordinate, but it's not the only way. For the vector spaces used in this chapter, a different construction is more natural: we let the elements of the vector space be sets in some family of sets, and the vector addition operation be the <a href="https://en.wikipedia.org/wiki/Symmetric_difference">symmetric difference</a> of sets. The symmetric difference <i>S</i> Δ <i>T</i> of two sets <i>S</i> and <i>T</i> is the set of elements that occur in one but not both of <i>S</i> and <i>T</i>. 
This operation is associative, commutative, and invertible, where the inverse of a set is the same set itself: <i>S</i> Δ <i>T</i> Δ <i>T</i> = <i>S</i> regardless of which order you use to perform the symmetric difference operations. If a nonempty family of sets has the property that the symmetric difference of every two sets in the family stays in the family, then these sets can be interpreted as the elements of a vector space over GF(2) in which the vector addition operation is symmetric difference, the zero vector is the empty set (necessarily in the family because it's the symmetric difference of any other set with itself), scalar multiplication by 0 takes every set to the empty set, and scalar multiplication by 1 takes every set to itself. One has to verify that these addition and multiplication operations are distributive, but again this is a not-very-difficult exercise (Exercise 2).<br /><br />As with other kinds of vector spaces, these vector spaces of sets have bases, collections of vectors such that everything in the vector space has a unique representation as a sum of scalar products of basis vectors. Every two bases have the same number of vectors as each other (Exercise 3: prove this), and this number is called the dimension of the vector space. If the dimension is <i>d</i>, the total number of vectors in the vector space is always exactly 2<sup><i>d</i></sup>, because that is the number of different ways that you can choose a scalar multiple (0 or 1) for each basis vector. <br /><br />The families of sets that are needed for this chapter are subsets of edges of a given undirected graph. These can also be interpreted as subgraphs of the graph, but they're not quite the same because the usual definition of a subgraph also allows you to specify a subset of the vertices (as long as all edges in the edge subset have endpoints in the vertex subset), and we won't be doing that. 
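<br /><br />As a small illustration (hypothetical sets, my own sketch, not from the chapter): Python's frozenset supports symmetric difference through the ^ operator, and running through all 0/1 coefficient choices over <i>d</i> independent basis sets produces exactly 2<sup><i>d</i></sup> distinct vectors:<br /><br />

```python
from itertools import product

# Hypothetical basis of three independent sets; symmetric difference (^) is the
# GF(2) vector addition, and the empty set is the zero vector.
basis = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]

def combine(coeffs, basis):
    """Sum of scalar multiples: coefficient 1 includes a basis set, 0 omits it."""
    v = frozenset()
    for c, b in zip(coeffs, basis):
        if c:
            v = v ^ b   # ^ is symmetric difference on frozensets
    return v

# Every 0/1 coefficient vector gives a different element of the span.
span = {combine(coeffs, basis) for coeffs in product((0, 1), repeat=len(basis))}
print(len(span))   # 2**3 = 8
```

If the sets were dependent (say, if the third set were the symmetric difference of the first two), some coefficient vectors would collide and the span would be smaller than 2<sup><i>d</i></sup>.<br /><br />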
Every graph has three important vector spaces of this type associated with it, the edge space, the cycle space, and the cut space. The edge space is the family of all subsets of edges (including the set of all edges of the given graph and the empty set). That is, it is the <a href="https://en.wikipedia.org/wiki/Power_set">power set</a> of the set of all edges; it has a natural basis in which the basis vectors are the one-edge sets, and its dimension is the number of edges in the graph.<br /><br />The <a href="https://en.wikipedia.org/wiki/Cycle_space">cycle space</a> is the family of all subsets of edges that have even degree at all of the vertices of the graph (Exercise 4: prove that this family is closed under symmetric difference operations). So it includes the simple cycles of the graph, but it also includes other subgraphs; for instance in the graph of an octahedron (a six-vertex graph with four edges at each vertex) the set of all edges is in the cycle space, as are the sets of edges formed by pairs of triangles that touch each other at a single vertex and the sets complementary to triangles or 4-cycles. It's always possible to find a basis for the cycle space in which the basis elements are themselves simple cycles; such a basis is called a <a href="https://en.wikipedia.org/wiki/Cycle_basis">cycle basis</a>. For instance you can form a "fundamental cycle basis" by choosing a spanning forest of the given graph and then finding all cycles that have one edge <i>e</i> outside this forest and that include also the edges of the unique path in the forest that connects the endpoints of <i>e</i>. Or, for a planar graph, you can form a cycle basis by choosing one cycle for each bounded face of a planar embedding of the graph. 
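<br /><br />To make the fundamental cycle basis construction concrete, here is a small Python sketch (my own example on K<sub>4</sub>, not from the chapter); it splits the edges into a spanning tree and non-tree edges, builds one cycle per non-tree edge, and checks the even-degree condition:<br /><br />

```python
from collections import defaultdict

# Hypothetical example: fundamental cycle basis of K4 (vertices 0..3).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n = 4

# Split the edges into a spanning tree and the remaining non-tree edges
# using a simple union-find structure.
parent = list(range(n))
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x

tree, nontree = [], []
for u, v in edges:
    ru, rv = find(u), find(v)
    if ru != rv:
        parent[ru] = rv
        tree.append((u, v))
    else:
        nontree.append((u, v))

adj = defaultdict(list)
for u, v in tree:
    adj[u].append(v)
    adj[v].append(u)

def tree_path(u, v, seen=None):
    """Edges of the unique tree path from u to v (depth-first search)."""
    if u == v:
        return []
    seen = {u} if seen is None else seen
    for w in adj[u]:
        if w not in seen:
            seen.add(w)
            rest = tree_path(w, v, seen)
            if rest is not None:
                return [frozenset((u, w))] + rest
    return None

# Each non-tree edge e determines the fundamental cycle consisting of e
# together with the tree path between its endpoints.
cycles = [frozenset([frozenset((u, v))] + tree_path(u, v))
          for u, v in nontree]

# Membership in the cycle space: every vertex has even degree in each cycle.
for cycle in cycles:
    degree = defaultdict(int)
    for e in cycle:
        for x in e:
            degree[x] += 1
    assert all(d % 2 == 0 for d in degree.values())

print(len(cycles))
```

The count of cycles equals the number of non-tree edges, one basis vector per edge outside the spanning tree.<br /><br />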
There are lots of interesting algorithmic problems associated with the cycle space and its cycle bases, but for this chapter the main thing that's needed is to compute its dimension, which has the nice formula |<i>E</i>| − |<i>V</i>| + <i>c</i>, where <i>E</i> is the edge set of the given graph, <i>V</i> is the vertex set, and <i>c</i> is the number of connected components. One name for this dimension is the <a href="https://en.wikipedia.org/wiki/Circuit_rank">cyclomatic number</a> of the graph, and the book chapter denotes it as cyc(<i>G</i>). (It's also possible to interpret it topologically as the first Betti number of the graph but for students who don't already know about binary vector spaces that would probably be more confusing than helpful.)<br /><br />The cut space of the graph doesn't take part in this chapter, but can be defined similarly as the set of all cut-sets of the graph. A <a href="https://en.wikipedia.org/wiki/Cut_(graph_theory)">cut</a> of a graph is a partition of its vertices into two disjoint subsets; in some contexts we require the subsets to both be nonempty but we don't do that here, so the partition into an empty set and the set of all vertices is one of the allowed cuts. The corresponding cut-set is the set of edges that have one endpoint in each of the two subsets. The family of cut-sets is closed under symmetric difference (Exercise 5) so it forms a vector space, the cut space. If the edges are all given positive weights and the graph is connected, then the minimum weight basis of the cut space can be represented by a tree on the vertices of the graph, in which each tree edge determines a cut (the partition of the tree into two subtrees formed by deleting that edge) and has an associated number (the weight of its cut).
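<br /><br />Here is a hedged sketch of that tree-edge-to-cut correspondence, on a made-up weighted graph and a made-up tree (not an actual Gomory–Hu computation): deleting a tree edge splits the vertices into two sides, and the associated number is the total weight of the graph edges crossing between the sides:<br /><br />

```python
from collections import defaultdict

# Hypothetical weighted graph on vertices 0..3, and a path as the tree.
graph_edges = {(0, 1): 3, (1, 2): 2, (0, 2): 4, (2, 3): 1}
tree_edges = [(0, 1), (1, 2), (2, 3)]

def side(tree_edges, removed, start):
    """Vertices reachable from `start` in the tree after deleting `removed`."""
    adj = defaultdict(set)
    for e in tree_edges:
        if e != removed:
            adj[e[0]].add(e[1])
            adj[e[1]].add(e[0])
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

# For each tree edge, the cut weight is the total weight of graph edges
# with exactly one endpoint on each side of the split.
for te in tree_edges:
    s = side(tree_edges, te, te[0])
    weight = sum(w for (u, v), w in graph_edges.items() if (u in s) != (v in s))
    print(te, weight)
```

Computing the tree that makes these cuts form the minimum weight basis is the hard part; the sketch only evaluates a tree that is already given.<br /><br />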
This tree is called the <a href="https://en.wikipedia.org/wiki/Gomory%E2%80%93Hu_tree">Gomory–Hu tree</a> of the graph and it came up (stripped of its linear-algebra origin) earlier, in an approximation for <i>k</i>-cuts in chapter 4. I also have a recent preprint on computing this basis and this tree for graphs that can be embedded onto low-genus surfaces: see <a href="http://arxiv.org/abs/1411.7055">arXiv:1411.7055</a>.<br /><br /><small><sup>*</sup>Unrelatedly, in preparing to cover this topic, I was confused for a long time by a typo in this chapter. On page 56 it states that, for a minimal feedback set, "clearly" the sum over feedback vertices of the number of components formed by deleting that one vertex equals the number of feedback vertices plus the number of components that are formed by deleting the whole feedback set but that touch only one vertex in the set. This is not true. What is true, and what is needed for the later argument, is that the left hand side is greater than or equal to the right hand side.</small><a name='cutid1-end'></a>
http://11011110.livejournal.com/304362.html (tags: graph theory; public; 7 comments)

Fri, 16 Jan 2015 04:16:46 GMT
Linkage
http://11011110.livejournal.com/304060.html
<ul><li><a href="http://www.thisiscolossal.com/2015/01/pixel-a-mesmerizing-dance-performance-incorporating-digital-projection/">Real-time 3d special effects in modern dance</a> (<a href="https://plus.google.com/100003628603413742554/posts/Hp8vcVRmzHS">G+</a>)</li><br /><li><a href="http://stemfeminist.com/2015/01/05/450/">How not to react to conference talks that happen to be presented by women</a> (<a href="https://plus.google.com/100003628603413742554/posts/KvCqKMhU84U">G+</a>, including also an unrelated report from the SODA business meeting)</li><br /><li><a href="http://www.neatorama.com/2015/01/07/Iced-Intrigue/">Photos of icy landscapes</a> showing how varied the geometry of ice can be (<a href="https://plus.google.com/100003628603413742554/posts/M9v6nj2Kfu2">G+</a>)</li><br /><li><a href="http://awards.acm.org/press_releases/fellows-2014b.pdf">New ACM fellows</a> (<a href="https://plus.google.com/100003628603413742554/posts/8HgRjNyNuQE">G+</a>)</li><br /><li><a href="http://www.maths.manchester.ac.uk/~jm/Choreographies/about.html">n-body choreographies</a> (strange solutions to the n-body problem in which all bodies follow each other along a curve; <a href="http://gminton.org/#choreo">more</a> and <a href="https://en.wikipedia.org/wiki/N-body_choreography">still more</a>; <a href="https://plus.google.com/100003628603413742554/posts/84uAkqPtzrM">G+</a>)</li><br /><li><a href="http://www.washingtonpost.com/news/speaking-of-science/wp/2015/01/08/men-on-the-internet-dont-believe-sexism-is-a-problem-in-science-even-when-they-see-evidence/?Post+generic=?tid%3Dsm_twitter_washingtonpost">Men (on the Internet) don’t believe sexism is a problem in science, even when they see evidence</a> (<a href="https://plus.google.com/100003628603413742554/posts/9kgtv1mh5SR">G+</a>)</li><br /><li><a href="https://plus.google.com/101584889282878921052/posts/VbBk9JrLxqm">The fractional chromatic number of the plane</a> (<a 
href="https://plus.google.com/100003628603413742554/posts/Ea6VqUWL6XG">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=KboGyIilP6k">Elwyn Berlekamp video on dots-and-boxes strategy</a> (<a href="https://plus.google.com/100003628603413742554/posts/UrgtLhCcEi9">G+</a> <a href="https://plus.google.com/113862074718836293294/posts/aJi4HxTP9Pe">reshare</a>)</li><br /><li><a href="http://richardelwes.co.uk/2015/01/02/the-grothendieck-song/">Richard Elwes sings the Grothendieck Song for us</a> (<a href="https://plus.google.com/100003628603413742554/posts/XDe3WtoERW5">G+</a>)</li><br /><li><a href="http://www.thisiscolossal.com/2015/01/fascinating-3d-printed-fibonacci-zoetrope-sculptures/">Animated shapes from a 3d printed object, a turntable, and a strobe light</a> (<a href="https://plus.google.com/100003628603413742554/posts/Jpk5j2sKQqB">G+</a> <a href="https://plus.google.com/117273001021476361745/posts/QjURgBC7K3j">reshare</a>)</li><br /><li><a href="http://gruze.org/tilings/">Why tilings by regular polygons can't include the pentagon</a> (<a href="https://plus.google.com/100003628603413742554/posts/PZMj7dnC9oC">G+</a> via <a href="http://www.metafilter.com/146120/No-Pentagons">MF</a>)</li></ul>
http://11011110.livejournal.com/304060.html (tags: color, game theory, tiling, academia, conferences, geometry, art; public; 2 comments)

Wed, 07 Jan 2015 07:56:32 GMT
Report from SODA, ALENEX, and ANALCO
http://11011110.livejournal.com/303850.html
I just returned from San Diego, where ALENEX, ANALCO, and SODA were held this year. I'm only going to write about a fraction of the things that happened at these conferences, in part because (with four different sessions happening in parallel much of the time) it was only possible for one person to see a fraction of those things. Also I already posted bits about the <a href="http://11011110.livejournal.com/303471.html">ALENEX/ANALCO business meeting</a> and <a href="https://plus.google.com/100003628603413742554/posts/KvCqKMhU84U">SODA business meeting</a> so I won't repeat those here.<br /><br />Sunday's scheduled plenary talk by Peter Winkler on pursuit games was unfortunately cancelled because of illness; instead we got a nice talk on <a href="https://en.wikipedia.org/wiki/Locally_decodable_code">locally decodable codes</a> by Sergey Yekhanin. I also missed a couple of Sunday afternoon talks I wanted to see (Timothy Chan on the Four Russians and Julia Chuzhoy on the wall theorem) because they conflicted with another session of interest to me. In it, Michael Walter described how to list all short lattice vectors in exponential time (but a better exponential than before). Friedrich Eisenbrand showed that an old greedy algorithm to approximate the largest simplex within the convex hull of a high-dimensional point set is better than previously thought, by a nice analysis that involves forming an orthogonal basis for the space that has the same volume as the simplex, grouping the basis vectors by their lengths, and approximating the convex hull by a product of balls for each group. 
And Sepideh Mahabadi showed that if you want to construct a data structure for a set of lines that can find the approximate-nearest line to a query point, the cost is only about the same as for the more standard problem of finding the nearest point to a query point.<br /><br />The mystery of the late Sunday session was the meaning of the talk title "The parameterized complexity of K", by Bingkai Lin, the winner of both the best student paper and best overall paper awards. It turned out to be a typo in the program: "K" should have been "k-biclique". The problem Lin studied is finding a complete bipartite subgraph K<sub>k,k</sub> in a given graph; one would expect it to be W[1]-hard, like clique-finding, but this wasn't known. Lin proved the expected hardness result by a reduction from clique-finding in which he took a graph product of a graph possibly containing a large clique with another graph having some sort of Ramsey property, and showed that the resulting product graph either does or doesn't contain a large biclique. He gave two constructions for the Ramsey graph, a randomized one that only blows up the parameter k (the size of the clique one is trying to find) by a polynomial factor, and a deterministic one that blows it up by a factorial factor. So there is a big gap between deterministic and randomized, but to me that's not surprising when Ramsey theory is involved. The bigger question to me is whether the polynomial blowup of even the randomized reduction can be changed into constant blowup, so that we can extend known results that n<sup>O(k)</sup> is optimal for clique-finding (unless some form of the exponential time hypothesis is false) to similar results for biclique-finding. <br /><br />Monday I ended up spending the whole day at ALENEX. 
UCI student Jenny Lam started the day with online algorithms for a version of caching in which the items to be cached have sizes (like a web cache and unlike a virtual memory system) and must be assigned contiguous locations within the cache memory. A system of breaking the cache memory into blocks, grouping items into FIFO queues of items with similar priority, and representing each queue by a linked sequence of blocks turns out to work well. It gives up a little in performance compared to systems for caching that don't worry about the memory placement part of the problem, but most of what it gives up is because of using FIFO instead of LRU within each queue rather than because of the fixed memory placement. Also Monday morning, Claire Mathieu gave the second of the invited talks, on moving from worst-case inputs to a "noisy input" model in which one assumes that the input comes from some kind of ground truth with a nice structured solution that has been randomly perturbed; one would like to be able to recover the solution (with high probability) to the extent possible. This turns out to be mathematically almost the same as the "planted solution" model of generating test inputs, in which a random input is perturbed by making it have a solution of higher quality than a random input is likely to have by itself, and then one asks whether this planted solution can be found among the randomness. However, the emphasis of why one is doing this is different: not as a test case, but because real-world inputs are often nicer than worst-case inputs and we want to try to capture that in our input model.<br /><br />In the first afternoon talk, Sandor Fekete set up an integer program for finding triangulations that maximize the length of the shortest edge. 
He discussed two lower bounds for the problem, one being the shortest convex hull edge and the second being the shortest interior diagonal that is not crossed by any other diagonal; the second one turns out to be more powerful than the first, and he posed as an open question (or actually a conjecture but with an answer I don't believe) the problem of finding the expected length of this shortest uncrossed diagonal, for random inputs (say uniform in a square). And Maarten Löffler and Irina Kostitsyna gave what I think was the most entertaining talk of the conference, with hidden party hats under the seats of some audience members and gummy bears on the slides, about algorithms for approximating certain geometric probabilities (whether two random points from two given distributions can see each other around given obstacles). The most memorable talk for me from the late afternoon session was the last one, by my academic brother Pino Italiano, on 2-connectivity in directed graphs. In undirected graphs, you can define a 2-connected block to be a maximal subgraph that can't be disconnected by a vertex deletion, or you can define a 2-connected component to be an equivalence class of the pairwise relation of having two vertex-disjoint paths, but these two definitions give you the same thing. In directed graphs, the blocks and components are not the same, and you can construct blocks in linear time but the best algorithms for components are quadratic. In his ALENEX paper (he also had a SODA paper in this area), Pino implemented and tested several algorithms for these problems, with the surprising result that even though the worst case performance of the component algorithms is quadratic the practical performance seems to be linear. So this probably means there are still theoretical improvements to be made.<br /><br />That brings me to today. 
In the morning, Natan Rubin spoke about systems of Jordan curves that all intersect each other (it is conjectured that most of the intersections must consist of more than one point, and he confirmed this for some important special cases) and Andrew Suk spoke about geometric Ramsey theory (for instance if you have n points in general position in the plane you can find a logarithmic subset for which all triples have the same order type, meaning they are in convex position); he significantly increased the size of the subset one can find for several similarly-defined problems. And Avrim Blum gave the third invited talk, on algorithmic problems in machine learning.<br /><br />In the early afternoon, there were again two sessions I wanted to go to in parallel, so I decided to try switching between them. In one of the two, Joshua Wang spoke on his work with Vassilevska-Williams, Williams, and Yu (how likely is VVW to be first alphabetically among so many authors?) on subgraph isomorphism. For finding three-vertex induced subgraphs, the hardest graphs to find are triangles or their complements, which can be found in matrix-multiplication time. The authors of this work showed that the same time bounds can extend to certain four-vertex subgraphs, such as the diamond (K<sub>4</sub> minus an edge). A randomized algorithm for finding diamonds turns out to be easy, by observing that a certain matrix product counts diamonds plus six times the number of K<sub>4</sub>s, and then choosing a random subgraph to make it likely that if a diamond exists, the number of diamonds is nonzero mod six. The harder part of the paper was making all this deterministic. In the other session, Topi Talvitie formulated a natural geometric version of the k-shortest paths problem: find k shortest paths that are locally unimprovable, or equivalently find k shortest homotopy classes of paths. 
It can be solved by a graph k-shortest paths algorithm on the visibility graph of the obstacles, or by a continuous Dijkstra algorithm that he explained with a nice analogy to the levels of a parking garage (see the demo at <a href='http://dy.fi/wsn'>http://dy.fi/wsn</a>). Luis Barba showed that there are still more tricks to be extracted from Dobkin–Kirkpatrick hierarchies: by using one hierarchy of each type and switching between a polyhedron and its polar as necessary, it is possible to find either a separating plane or an intersection point of two polyhedra in log time. And Jeff Erickson's student Chao Xu spoke on weakly simple polygons; these are polygons whose edges are allowed to overlap each other without crossing, and both defining them properly and recognizing them efficiently turn out to be interesting problems.<br /><br />In the final session, I learned from Daniel Reichman about contagious sets: sets of vertices in a graph with the property that if you repeatedly augment the set by adding vertices that have two neighbors already in the set, it eventually grows to cover the whole graph. In a d-regular expander, the size of such a set should be n/d<sup>2</sup> (it turns out), and Reichman presented several partial results in this direction. Loukas Georgiadis gave the more theoretical of the two talks on directed 2-connectivity. Jakub Tarnawski showed how to generate uniformly random spanning trees in time O(m<sup>4/3</sup>), the best known for sparse graphs. (The problem can be solved by taking a random walk until the whole graph is covered and then selecting the first edge into each vertex, but that is slower.) 
And Hongyang Zhang formulated a notion of connectivity of pairs of vertices in which one chooses a random spanning forest in a graph and then looks at the probability that the pair is part of the same tree; this apparently has connections to liquidity in certain financial webs of trust.<br /><br />The <a href="http://epubs.siam.org/doi/book/10.1137/1.9781611973754">ALENEX</a>, <a href="http://epubs.siam.org/doi/book/10.1137/1.9781611973761">ANALCO</a>, and <a href="http://epubs.siam.org/doi/book/10.1137/1.9781611973730">SODA</a> proceedings are all online, there's plenty more of interest beyond what I've mentioned here, and it all appears to be freely accessible without any need for institutional subscriptions.<a name='cutid1-end'></a>
http://11011110.livejournal.com/303850.html (tags: conferences, talks, papers; public; 0 comments)

Mon, 05 Jan 2015 06:06:26 GMT
Circular arc contacts, Miura slides, and ALENEX business
http://11011110.livejournal.com/303471.html
While I've been at SODA one of my co-authors has been busy preparing what turns out to be my first preprint of the new year: <a href="http://arxiv.org/abs/1501.00318">Contact Representations of Sparse Planar Graphs</a> (arXiv:1501.00318, with Alam, Kaufmann, Kobourov, Pupyrev, Schulz, and Ueckerdt). I think this is one of those cases where a picture goes a lot farther than words in explaining: here's an example of what we're looking at.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/0xDE/arc-cuboctahedron.png"></div><br /><br />The 12 circular arcs of this diagram correspond to the 12 vertices of a <a href="https://en.wikipedia.org/wiki/Cuboctahedron">cuboctahedron</a>, and the 24 contact points between arcs (the points where one arc ends as it runs into another arc) correspond to the 24 edges of the cuboctahedron. What we want to know is which other graphs can be represented in this way, like the cuboctahedron. They have to be planar, and every subgraph has to have at most twice as many edges as vertices (because every set of arcs has twice as many endpoints as arcs), but beyond that it's a little mysterious. But we have some natural subclasses of the planar graphs for which we can prove that such a representation always exists (for instance the 4-regular planar graphs) and some NP-hardness results.<br /><br />Two other uploads of possible interest: <a href="http://www.ics.uci.edu/~eppstein/0xDE/ALENEX15-business-meeting.pdf">my report as co-PC-chair at the ALENEX business meeting</a>, and <a href="http://www.ics.uci.edu/~eppstein/pubs/BalDamEpp-SODA-15.pdf">my talk on Miura folding</a> (both with small corrections to the slides I actually used). I've <a href="http://11011110.livejournal.com/297829.html">posted here before</a> about the Miura folding results. 
For the ALENEX report, besides the usual breakdown of acceptance rates and subtopics, there were two more substantial issues for future planning: should ALENEX move its submission deadline earlier than the SODA notification date (so that the PC has adequate time to review the submissions), and should it accept more papers? The sentiment at the meeting seemed to be in favor of both ideas.
http://11011110.livejournal.com/303471.html (tags: graph drawing, conferences, origami, papers; public; 0 comments)

Sun, 04 Jan 2015 00:53:07 GMT
Greetings from San Diego
http://11011110.livejournal.com/303315.html
I've just arrived in San Diego for the annual <a href="http://www.siam.org/meetings/da15/">Symposium on Discrete Algorithms</a> and its associated satellite workshops ALENEX and ANALCO. That little strip of blue on the left edge of the photo is the harbor; you can also see a little bit of it directly from the hotel, if your window faces in the right direction. If you're also here, greetings!<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/WestinSanDiego/WestinSanDiego-m.jpg" border="2" style="border-color:black;" /></div>
http://11011110.livejournal.com/303315.html (tags: architecture, photography; public; 0 comments)

Fri, 02 Jan 2015 05:44:20 GMT
2014 in algorithm preprints
http://11011110.livejournal.com/302869.html
Happy New Year, everyone! It's time once again to give a status report on the cs.DS (data structures and algorithms) section of the arXiv. The arXiv as a whole just hit a big milestone, one million preprints uploaded. cs.DS forms a small fraction of that, but still, last year there were 1182 new preprints, up a little from <a href="http://11011110.livejournal.com/281196.html">the previous year</a>.<br /><br />There has been some talk recently about possible changes to the system, including replacing author choices of subarea within cs by the results of an automated text classification system (which already exists but is only used for advisory purposes now), and allowing moderators to reject papers that they deem to be unscientific or not of any plausible interest to the readers (as already happens in the physics and math parts of arXiv). I think any actual change is likely to happen only very slowly, but it's possibly worth thinking about the things arXiv does well and the other things that it might be able to do better.<br /><br />With so many preprints, it's hard to choose among them (I guess that's what we have conference program committees for). Still, here's a selection of ten(-ish) I found personally interesting, excluding my own papers and a few <a href="http://11011110.livejournal.com/291361.html">I wrote about earlier</a>. I'm sure I missed some other good ones, so feel free to leave your own favorites in the comments.<br /><ul><li><b>Popular conjectures imply strong lower bounds for dynamic problems</b>, Amir Abboud and Virginia Vassilevska Williams, <a href="http://arxiv.org/abs/1402.0054">arXiv:1402.0054</a> and FOCS 2014. The <a href="http://en.wikipedia.org/wiki/Exponential_time_hypothesis">exponential time hypothesis</a> is the unproven but widely-believed conjecture that certain NP-complete problems require exponential time. 
We already know how to scale it down, showing that (if ETH is true) certain known polynomial-time or fixed-parameter-tractable algorithms for static problems are optimally fast. This paper extends these results to scaled-down dynamic graph algorithms for basic problems such as reachability, showing that ETH explains the fact that we don't have subpolynomial update times for these problems.</li><br /><li><b>Shortest paths in intersection graphs of unit disks</b>, Sergio Cabello and Miha Jejčič, <a href="http://arxiv.org/abs/1402.4855">arXiv:1402.4855</a> and CGTA 2014. <a href="https://en.wikipedia.org/wiki/Unit_disk_graph">Unit disk graphs</a> can have a quadratic number of edges, so algorithms whose running time is subquadratic have to use the geometric structure rather than just constructing the graph and using a general-purpose graph algorithm. This paper shows that unweighted shortest paths can be found in <i>O</i>(<i>n</i> log <i>n</i>) time (essentially optimal) and that the weighted problem can be solved only a little slower.</li><br /><li><b>The complexity of the simplex method</b>, John Fearnley and Rahul Savani, <a href="http://arxiv.org/abs/1404.0605">arXiv:1404.0605</a>. <b>On simplex pivoting rules and complexity theory</b>, Ilan Adler, Christos Papadimitriou, and Aviad Rubinstein, <a href="http://arxiv.org/abs/1404.3320">arXiv:1404.3320</a> and IPCO 2014. A long line of algorithms and discrete geometry research is rooted in the phenomenon that the simplex method can be used to find linear program solutions in practice, but in theory most variants of it can be forced to take an exponential number of steps. 
These two papers look at the solution trajectories found by this method, and show that (as well as being long) they have high computational complexity: it can be PSPACE-complete to tell whether a point is part of the trajectory, or (for degenerate problems) which solution the simplex method will end up at.</li><br /><li><b>Parameterized streaming algorithms for vertex cover</b>, Rajesh Chitnis, Graham Cormode, MohammadTaghi Hajiaghayi, and Morteza Monemizadeh, <a href="http://arxiv.org/abs/1405.0093">arXiv:1405.0093</a> and SODA 2015. <b>Streaming kernelization</b>, Stefan Fafianie and Stefan Kratsch, <a href="http://arxiv.org/abs/1405.1356">arXiv:1405.1356</a> and MFCS 2014. Streaming meets parameterized complexity: many parameterized algorithms take linear time in their input size but exponential or worse time in some other parameter, so it makes sense to ask whether their linear time can scale even to problems too big to fit into main memory.</li><br /><li><b>Flip distance is in FPT time <i>O</i>(<i>n</i> + <i>k</i>⋅<i>c</i><sup><i>k</i></sup>)</b>, Iyad Kanj and Ge Xia, <a href="http://arxiv.org/abs/1407.1525">arXiv:1407.1525</a> and STACS 2015. The version of flip distance considered here is for triangulations of geometric point sets; a flip is a replacement of two triangles that form a convex quadrilateral by the other two triangles for the same quadrilateral, and flip distance is the minimum number of flips needed to change one triangulation to another. It's known to be NP-hard for this variant, and even the special case of convex polygons is interesting (of unknown complexity). You might think that FPT is obvious: just keep the parts of the triangulation that are already correct, and flip the rest. But it's more complicated than that, because sometimes you might want to flip things that are already correct to get them out of the way, and then flip them back again later. 
(This makes a convenient point to repeat my standard warning that formulas in paper titles are a bad idea.)</li><br /><li><b>Generating <i>k</i>-independent variables in constant time</b>, Tobias Christiani and Rasmus Pagh, <a href="http://arxiv.org/abs/1408.2157">arXiv:1408.2157</a> and FOCS 2014. From the title, this looks like it might be about hash functions, but it's not. <i>k</i>-wise independence is an important assumption in that context, allowing practical hashing algorithms to be used with limited randomness, but there's a lower bound showing that computing such hash functions in constant time requires a large amount of memory. On the other hand, in some other algorithms you might just need some pseudorandom values rather than a function that you can call later with the same input and get the same output. So <i>k</i>-wise independent random generation should be easier than <i>k</i>-wise independent hashing, and this paper shows that it actually is. Specifically, they show how to generate pseudorandom values over a finite field in constant time per value with a number of bits that's linear in the independence parameter and logarithmic in everything else.</li><br /><li><b>Dynamic integer sets with optimal rank, select, and predecessor search</b>, Mihai Pătrașcu and Mikkel Thorup, <a href="http://arxiv.org/abs/1408.3045">arXiv:1408.3045</a> and FOCS 2014. Mihai's last paper? This uses word-RAM operations to provide a constant-time data structure for the named operations on sets whose size is polynomial in the word length. This is almost like the atomic sets in fusion trees, but better because it can be updated more quickly.</li><br /><li><b>Computing classic closeness centrality, at scale</b>, Edith Cohen, Daniel Delling, Thomas Pajor, and Renato F. Werneck, <a href="http://arxiv.org/abs/1409.0035">arXiv:1409.0035</a> and COSN 2014. The closeness centrality of a node in a network is, essentially, the average distance to all the other nodes. 
Nodes with smaller average distance are more central; it is in this sense that Paul Erdős is central to the mathematical collaboration network or Kevin Bacon is central to the acting co-star network. I had a paper at SODA 2001 on approximating centrality, but it depended on an assumption that the network has low diameter. This paper makes no such assumption but nevertheless manages to estimate the centrality of all nodes accurately in near-linear time.</li><br /><li><b>Simple PTAS's for families of graphs excluding a minor</b>, Sergio Cabello and David Gajser, <a href="http://arxiv.org/abs/1410.5778">arXiv:1410.5778</a>. Approximation algorithms for planar graphs and their generalizations are nothing new, but they generally involve the algorithm knowing a lot about the graph and its structure (for instance using a separator decomposition). This paper shows that even very simple methods (just a local search) can give good approximations, with all the graph structure showing up in the analysis rather than in the algorithm itself.</li><br /><li><b>Beyond the Euler characteristic: Approximating the genus of general graphs</b>, Ken-ichi Kawarabayashi and Anastasios Sidiropoulos, <a href="http://arxiv.org/abs/1412.1792">arXiv:1412.1792</a>. If a given graph can be embedded into a surface of bounded genus, then it can be embedded into a surface of bounded (but larger) genus in polynomial time (independent of the genus). The dependence of the computed genus on the optimal genus is only polynomial, and this also leads to approximation algorithms with a sublinear approximation ratio. Previously such an approximation was only known for graphs of bounded degree.</li></ul><a name='cutid1-end'></a>
http://11011110.livejournal.com/302869.html (tags: algorithms, papers)

Wed, 31 Dec 2014 23:18:53 GMT: Linkage for the end of the year
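An aside on the <i>k</i>-wise independence in the Christiani and Pagh item above: the classic construction evaluates a uniformly random polynomial of degree less than <i>k</i> over a prime field at distinct points, and any <i>k</i> of the resulting values are then mutually independent. Here is a minimal Python sketch of that standard construction, not the constant-time scheme of their paper; it takes O(<i>k</i>) time per value, which is exactly the cost their result avoids, and the function name and parameters are my own:

```python
import random

def k_independent_generator(k, p=2**61 - 1, seed=None):
    """Yield a stream of k-wise independent values over GF(p), p prime.

    Classic construction: choose a uniformly random polynomial of
    degree < k over GF(p); its values at the distinct points
    0, 1, 2, ... are k-wise independent.  O(k) time per value.
    """
    rng = random.Random(seed)
    # k random coefficients c_0 .. c_{k-1} of the polynomial
    coeffs = [rng.randrange(p) for _ in range(k)]
    x = 0
    while True:
        # Horner evaluation of c_0 + c_1*x + ... + c_{k-1}*x^{k-1} mod p
        value = 0
        for c in reversed(coeffs):
            value = (value * x + c) % p
        yield value
        x += 1
```

With a fixed seed the stream is reproducible, so this can substitute for a hash function only when each input is queried once, in order, which is the distinction the paper draws between generation and hashing.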
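Another aside, on the closeness-centrality item above: the definition is easy to state in code. This is a minimal sketch of the exact computation by one BFS per node, the quadratic-time baseline that near-linear-time estimation improves on; the normalization (<i>n</i>-1)/(sum of distances) is one common convention (so that more-central nodes, with smaller average distance, get larger scores), and the function name is my own:

```python
from collections import deque

def closeness(graph):
    """Exact classic closeness centrality of every node, one BFS per node.

    graph: dict mapping each node to a list of its neighbors
    (assumed unweighted, undirected, and connected).
    Returns node -> (n - 1) / (sum of distances to all other nodes).
    """
    n = len(graph)
    result = {}
    for source in graph:
        # breadth-first search from source to get all distances
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total = sum(dist.values())
        result[source] = (n - 1) / total if total else 0.0
    return result
```

On the three-vertex path a&ndash;b&ndash;c this gives the middle vertex b a score of 1.0 and the endpoints 2/3, matching the intuition that b is the most central.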
http://11011110.livejournal.com/302619.html
After <a href="http://11011110.livejournal.com/292825.html">I returned to Google+</a> in mid-year (once Google rescinded their real-name policy) I've been sharing approximately a link a day there, and collecting the links for twice-monthly roundups here (in part to allow me to find my posts again since G+ provides very little organization for old posts). This is the latest batch:<br /><ul><li><a href="http://www.wired.com/2014/12/disqus/">Pseudonyms are used mostly for privacy, not trolling</a> (<a href="https://plus.google.com/100003628603413742554/posts/DiyEZByxKKG">G+</a>)</li><br /><li><a href="http://www.scientificamerican.com/article/for-sale-your-name-here-in-a-prestigious-science-journal/">Attacks on the peer review system</a> including authorship for sale, false refereeing, plagiarism, etc (the <a href="https://plus.google.com/u/0/100003628603413742554/posts/9R2ZRpSqDM9">G+</a> post includes several more links)</li><br /><li><a href="http://www.thisiscolossal.com/2014/12/randomly-generated-polygonal-insects-by-istvan-giordano-for-neonmob/">Randomly generated polygonal insects</a> (<a href="https://plus.google.com/100003628603413742554/posts/VcHk3nJYFLa">G+</a>)</li><br /><li><a href="https://ayvlasov.wordpress.com/2012/07/23/qu-ants/">Qu-ant reversible cellular automata</a> (<a href="https://plus.google.com/117663015413546257905/posts/aTzuQjw3w4c">G+</a> <a href="https://plus.google.com/100003628603413742554/posts/9nwqyTG7MpW">reshare</a>)</li><br /><li><a href="http://yeyorigami.blogspot.com/2014/12/sonobe-unit-polyhedra.html">Sonobe unit polyhedra</a>, modular polyhedra as Christmas decorations (<a href="https://plus.google.com/112844741882313681044/posts/CcjCPUSkMxS">G+</a> <a href="https://plus.google.com/100003628603413742554/posts/4iAu9jyBBRM">reshare</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=y97rBdSYbkg">Domino chain reaction</a> video demonstrating exponential scaling in energy amplification (<a 
href="https://plus.google.com/100003628603413742554/posts/gXNSrTWZx8m">G+</a>)</li><br /><li><a href="https://en.wikipedia.org/wiki/Langley%E2%80%99s_Adventitious_Angles">The adventitious angle puzzle</a> for the advent season (<a href="https://plus.google.com/100003628603413742554/posts/JYtwdoRXFDM">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=sxnX5_LbBDU">The Gauss Christmath Special</a> video (<a href="https://plus.google.com/u/0/100003628603413742554/posts/XbLjcWcXdEy">G+</a>)</li><br /><li><a href="http://mathoverflow.net/questions/191495/collection-of-conjectures-and-open-problems-in-graph-theory">Collections of graph theory open problems</a> (<a href="https://plus.google.com/100003628603413742554/posts/g4gjcVYRdNQ">G+</a>)</li><br /><li><a href="https://www.youtube.com/watch?v=v678Em6qyzk&feature=autoshare">Knuth's dragon-curve mistake</a> video (<a href="https://plus.google.com/113862074718836293294/posts/F3ft1xuawGJ">G+</a> <a href="https://plus.google.com/100003628603413742554/posts/WswwnKhbSs4">reshare</a>)</li><br /><li><a href="http://www.the-scientist.com/?articles.view/articleNo/41677/title/Q-A--One-Million-Preprints-and-Counting/">One million arXiv preprints</a> (<a href="https://plus.google.com/100003628603413742554/posts/LgijnoYsZGK">G+</a>)</li><br /><li><a href="http://www.extremetech.com/extreme/168288-folded-paper-lithium-ion-battery-increases-energy-density-by-14-times">Origami batteries</a> (<a href="https://plus.google.com/100003628603413742554/posts/QNSqmVNBRvG">G+</a>)</li></ul>
http://11011110.livejournal.com/302619.html (tags: unsolved, cellular automata, anonymity, academia, geometry, graph theory, origami)

Wed, 31 Dec 2014 05:43:30 GMT: Mendocino menagerie
http://11011110.livejournal.com/302520.html
My parents' house in Mendocino is full of books and sculptures of creatures (mostly cats). Here are some of them:<br /><br /><div align="center"><table border="0" cellpadding="10">
<tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/FrontPorchCat.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/FrontPorchCat-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/OaxacanJaguar.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/OaxacanJaguar-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/AwayRoomCat.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/AwayRoomCat-s.jpg" border="2" style="border-color:black;" /></a></td>
</tr><tr align="center" valign="middle">
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/DreamingOfBirds.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/DreamingOfBirds-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/SunnySpot.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/SunnySpot-s.jpg" border="2" style="border-color:black;" /></a></td>
<td><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/StairSentryCat.html"><img src="http://www.ics.uci.edu/~eppstein/pix/menagerie/StairSentryCat-s.jpg" border="2" style="border-color:black;" /></a></td>
</tr></table></div><br /><br /><a href="http://www.ics.uci.edu/~eppstein/pix/menagerie/index.html">The rest of the gallery</a>.
http://11011110.livejournal.com/302520.html (tags: art, mendocino, photography, family)

Tue, 30 Dec 2014 05:37:04 GMT: Back from the land of no internet
http://11011110.livejournal.com/302266.html
In the unlikely event that anyone wondered why I didn't post anything either here or over on my Google+ account for the last few days, it's because I unexpectedly found myself incommunicado, visiting relatives for Christmas. Here are two of them, my cousin's daughters.<br /><br /><div align="center"><img src="http://www.ics.uci.edu/~eppstein/pix/zj/ZoyaAndJessa-m.jpg" border="2" style="border-color:black;" /></div><br /><br />It's normal for there to be no cell phone service at my parents' house in Mendocino. Cell phones finally reached downtown Mendocino a couple of years ago over the objections of some protesters who were terrified of being exposed to any form of electromagnetic radiation, but there's a hill between downtown and the house that blocks the signal. There would normally be landline phone service there, but the lines got flooded in the big storm a couple of weeks ago and AT&T hasn't succeeded in drying them out yet. And my parents also have cable internet, but for some other reason that went down too. So we had to resort to old-fashioned behavior like reading books or actually interacting with each other instead of all being absorbed in our own separate electronic devices the way we otherwise would likely have been.
http://11011110.livejournal.com/302266.html (tags: mendocino, photography, family; 2 comments)