I just finished attending the 25th annual Canadian Conference on Computational Geometry, at the University of Waterloo. I haven't attended this conference very regularly, but when I have it's always been fun, with many of the presentations concerning results that are too "small" for the big-name conferences but still elegant, novel, and useful. Over its 25-year history, the acceptance rate has moved steadily downwards, from roughly 100% at the start to this year's low of 61%, so perhaps it's also started becoming a little more respectable.
The whole proceedings are online, but unfortunately I don't know of a good way to link to any individual papers. Here are a few of the highlights for me:
The first contributed talk I saw on Thursday was by Therese Biedl (tbiedl), newly returned to Waterloo from her sabbatical in Salzburg, where she worked on straight skeletons with Martin Held, Stefan Huber, Peter Palfrader and Dominik Kaaser. Straight skeletons can be used to design building roofs, and weighting the skeleton (that is, making edges move inwards with different speeds) leads to roof planes of multiple slopes. Negative weights make sense in this context, modeling overhanging parts of the roof. But there can be serious definitional problems with weighted skeletons: if (in the shrinking process of the input polygon by which the skeleton is defined) two parallel edges of different speeds overtake each other, what happens? And if multiple vertices crash into each other, some with negatively weighted incident edges, does there always exist a consistent continuation? Therese sidestepped these questions by assuming that there is some rule for resolving these situations, but then showed that there are other problems as well, especially when negative weights are allowed: the straight skeleton can cross itself, it can be disconnected, and it can have faces with holes.
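The shrinking process behind all of this can be made concrete with a little linear algebra: edge i lies on a line n·x = c, and with weight w it has moved to n·x = c + wt by time t, so a vertex shared by two edges is the solution of a 2×2 linear system and (between events) moves along a straight line, which is why the skeleton's edges are straight. A minimal sketch, with hypothetical weights of my own choosing:

```python
import numpy as np

def offset_vertex(n1, c1, w1, n2, c2, w2, t):
    """Position at time t of the vertex shared by two weighted edges.

    Edge i lies on the line n_i . x = c_i (n_i its inward unit normal);
    with weight w_i it has moved to n_i . x = c_i + w_i * t by time t.
    The vertex is the intersection of the two offset lines."""
    A = np.array([n1, n2], dtype=float)
    b = np.array([c1 + w1 * t, c2 + w2 * t], dtype=float)
    return np.linalg.solve(A, b)

# A polygon corner at the origin, edges along the axes with inward
# normals (0,1) and (1,0); the weights 1 and 2 are hypothetical.
p0 = offset_vertex((0, 1), 0, 1, (1, 0), 0, 2, 0.0)
p1 = offset_vertex((0, 1), 0, 1, (1, 0), 0, 2, 0.1)
p2 = offset_vertex((0, 1), 0, 1, (1, 0), 0, 2, 0.2)
# p0, p1, p2 are collinear: the vertex traces a straight skeleton edge
# whose direction depends on the two weights.
```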
Thursday afternoon, Jack Snoeyink (with many co-authors) discussed a problem inspired by an origami joke: the one-fold stegosaurus. It forms a shape that cannot be covered by the initial square; on the other hand, no matter how you fold or crumple a circular sheet of origami paper, you can cover the result with an unfolded sheet of the same size. Jack asked which shapes of paper can always cover the results of folding them (it's more than just the circle) and how big a square sheet of paper you would need in order to be able to cover all shapes folded from a smaller square sheet (the stegosaurus turns out not to be the worst case).
The excursion later that afternoon consisted of time traveling to 1914, taking a hike along the river, and then chatting with some of the villagers from the time. I learned that 1914 was muddy, and that modern telecommunications (in the form of catalog clothing sales) were putting local shopkeepers out of business already back then.
Alla Sheffer gave a very nice invited talk Friday morning on what designers mean when they sketch 3d shapes using contour lines, and on what computers can do with the information. The methods she described involve reconstructing surface normal vectors at the crossings of contour lines (using the assumptions that these lines lie on perpendicular planes and cross nearly perpendicularly), then extending the reconstruction to the rest of the contour lines and from there to the interiors of the faces of a drawing. The reconstruction isn't accurate enough to give a full 3d model, but it is good enough to perform shading, turning the sketches into nice colored and shaded drawings. You can see more at crossshade.com. One little tidbit I got from her talk was that in this context Coons patches work much better than minimal surfaces for extending the boundary of a 3d surface patch to the whole patch. Since Coons patches are four-sided, this may help explain the popularity of quadrilateral meshes in applications such as Disney's computer animations: they are a good fit for how the artists think about 3d.
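For reference, a bilinearly blended Coons patch fills a four-sided region from its boundary curves alone: blend the two pairs of opposite boundary curves and subtract the bilinear interpolant of the four corners, which would otherwise be counted twice. A minimal sketch (the saddle-shaped boundary is my own toy example, not from the talk):

```python
import numpy as np

def coons(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch.

    c0(u), c1(u): boundary curves at v = 0 and v = 1;
    d0(v), d1(v): boundary curves at u = 0 and u = 1.
    The four curves must agree at the corners."""
    P00, P10 = c0(0.0), c0(1.0)
    P01, P11 = c1(0.0), c1(1.0)
    ruled_v = (1 - v) * c0(u) + v * c1(u)            # blend v = 0/1 curves
    ruled_u = (1 - u) * d0(v) + u * d1(v)            # blend u = 0/1 curves
    bilin = ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
             + (1 - u) * v * P01 + u * v * P11)      # double-counted corners
    return ruled_v + ruled_u - bilin

# Toy boundary: the four edges of the saddle z = x*y over the unit square.
c0 = lambda u: np.array([u, 0.0, 0.0])    # v = 0 edge
c1 = lambda u: np.array([u, 1.0, u])      # v = 1 edge
d0 = lambda v: np.array([0.0, v, 0.0])    # u = 0 edge
d1 = lambda v: np.array([1.0, v, v])      # u = 1 edge

# The patch interpolates all four boundary curves exactly.
center = coons(c0, c1, d0, d1, 0.5, 0.5)
```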
The early afternoon sessions on Friday featured a talk by UCI student Paweł Pszona on parametric search, a technique used in designing efficient algorithms for problems such as the Theil–Sen method for robust line-fitting. Many of these algorithms (including the one for Theil–Sen) work by simulating a sorting algorithm, but which one you simulate matters: in particular, it should do many comparisons in parallel. The AKS sorting network is the theoretically optimal choice but completely impossible in practice, due to the huge constants in its O-notation (not to mention the difficulty of understanding and implementing it). Van Oostrum and Veltkamp instead suggested quicksort; it has a lot of parallelism, but not quite enough flexibility in the ordering of its comparisons, so it ends up losing a log factor in its worst-case runtime compared to AKS. Instead, Paweł and Mike Goodrich show that another sorting algorithm called boxsort can be used, giving an algorithm that they could implement, that is typically at least as good as the quicksort version in practice (and sometimes better), and that has the good theoretical O-notation of AKS.
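For context, the quantity these algorithms compute is simple to state: the Theil–Sen slope is the median of all O(n²) pairwise slopes, with the intercept taken as the median residual. A brute-force sketch of that definition (the parametric-search algorithms above compute the same median slope without enumerating every pair; the sample data is my own):

```python
import statistics
from itertools import combinations

def theil_sen(points):
    """Brute-force Theil-Sen line fit: slope = median of all pairwise
    slopes, intercept = median of the residuals y - m*x.  This is the
    O(n^2) definition, not the fast parametric-search algorithm."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x2 != x1]                      # skip vertical pairs
    m = statistics.median(slopes)
    b = statistics.median(y - m * x for x, y in points)
    return m, b

# Near-linear data with one gross outlier; the fit barely notices it.
pts = [(0, 0.1), (1, 1.0), (2, 2.1), (3, 2.9), (4, 40.0)]
m, b = theil_sen(pts)
```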
Later on Friday, Don Sheehy spoke about geometric separators. The Miller–Teng–Thurston–Vavasis method for finding planar separators and separators of higher-dimensional well-shaped meshes involves a form of lifting transformation: stereographically projecting a circle packing onto a sphere, Möbius transforming the sphere so that its center coincides with a centerpoint of the circle centers, picking a random great circle on the sphere, and then reversing all of the transformations to figure out what's happening back in the original space. Instead, Don showed that it's much easier to use a different lifting transformation, onto a paraboloid, the same one that works so well for Voronoi diagrams and Delaunay triangulations. The spherical lifting is still needed to explain why the random choice works, but that part is in the analysis and not in the parts you have to implement.
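The paraboloid lift in question is the map (x, y) ↦ (x, y, x² + y²); its key property is that circles in the plane lift to plane sections of the paraboloid, so circle-membership questions become sidedness questions. A small sketch of that property, using the standard incircle determinant (the sample points are my own):

```python
import numpy as np

def lift(p):
    """Lift a planar point onto the paraboloid z = x^2 + y^2."""
    x, y = p
    return np.array([x, y, x * x + y * y])

def in_circle(a, b, c, d):
    """True if d lies inside the circle through a, b, c (given in
    counterclockwise order).  The circle lifts to a plane section of the
    paraboloid, so the test is just: which side of the plane through
    lift(a), lift(b), lift(c) does lift(d) fall on?"""
    m = np.array([lift(p) - lift(d) for p in (a, b, c)])
    return np.linalg.det(m) > 0

# Circle through (0,0), (1,0), (0,1): center (1/2, 1/2), radius sqrt(1/2).
inside = in_circle((0, 0), (1, 0), (0, 1), (0.5, 0.5))
outside = in_circle((0, 0), (1, 0), (0, 1), (2, 2))
```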
On Saturday morning, Vincent Kusters presented an NP-hardness proof for translating multiple sets of points to put their union into convex position, work done with Michael Hoffmann, Günter Rote, Maria Saumell and Rodrigo Silveira. Although it's easy to test whether a given translation works, the problem is not known to be NP-complete, because we don't know that the translation vectors (if they exist) can be represented with low enough numerical precision. I guess in that sense it joins the Euclidean TSP as a problem that might be NP-complete, might be complete for the existential theory of the reals, or might be something else.
One of the last talks of the conference, rescheduled from earlier, was Timothy Sun's on the notorious problem of whether all planar graphs can be drawn with all edges having integer lengths. Several special cases are known, including subquartic graphs (graphs that have maximum degree at most four but are not 4-regular), based on "Berry's theorem": if two points a and b have integer distance, and a, b, and a third point c all have rational coordinates, then the rational points at rational distance from all three are dense in the plane. Sun combined this with some rigidity theory to extend these results to a couple more cases of 4-regular graphs: the graphs that have a pair of separating edges, and the graphs that have two triangles sharing an edge.
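As a small aside on the arithmetic involved: the distance between two points with rational coordinates is rational exactly when the squared distance, put in lowest terms, has a perfect-square numerator and denominator. A quick sketch of that test (the sample points are my own, not from the talk):

```python
from fractions import Fraction
from math import isqrt

def is_square(n):
    """True if the nonnegative integer n is a perfect square."""
    r = isqrt(n)
    return r * r == n

def rational_distance(p, q):
    """True if two points with rational (Fraction) coordinates lie at
    rational distance: Fraction keeps d2 in lowest terms, so its
    square root is rational iff numerator and denominator are squares."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return is_square(d2.numerator) and is_square(d2.denominator)

a = (Fraction(0), Fraction(0))
b = (Fraction(3), Fraction(4))       # distance 5 from a: rational
c = (Fraction(1, 2), Fraction(0))    # distance 1/2 from a: rational
d = (Fraction(1), Fraction(1))       # distance sqrt(2) from a: irrational
```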