I just finished attending the 25th annual Canadian Conference on Computational Geometry, at the University of Waterloo. I haven't attended this conference very regularly, but when I have, it's always been fun, with many of the presentations concerning results that are too "small" for the big-name conferences but still elegant, novel, and useful. Over its 25-year history, the acceptance rate has moved steadily downward, from roughly 100% to this year's low of 61%, so perhaps it's also started becoming a little more respectable.
The whole proceedings are online, but unfortunately I don't know of a good way to link to any individual papers. Here are a few of the highlights for me:
My latest preprint to appear on the arXiv is "Superpatterns and Universal Point Sets" (with Michael Bannister, Jack Cheng, and Will Devanny, to appear at GD 2013, arXiv:1308.0403). The main idea of the paper is to connect two open problems from different areas of research, the size of universal point sets in graph drawing and the size of superpatterns in the theory of permutation patterns, and maybe make progress on both sides.
You can read about the details in the paper, but I wanted to highlight something else, a tool we use in both this and the previous preprint that isn't really about graph drawings or permutations. It's the integer sequence
1, 3, 1, 7, 1, 3, 1, 15, 1, 3, 1, 7, 1, 3, 1, ... (OEIS A038712), where each term is one less than a power of two. The nth term in the sequence can be generated as the exclusive-or of the two consecutive integers n − 1 and n; for instance, the sixth term, 3, is the exclusive-or of 5 and 6 (101 and 110 in binary).
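For concreteness, the exclusive-or description above translates directly into a couple of lines of Python; this small sketch just reproduces the terms shown:

```python
# nth term of OEIS A038712 (1-indexed): a(n) = n XOR (n - 1).
# The result is a run of one bits ending at the lowest set bit of n,
# hence always one less than a power of two.
def a038712(n):
    return n ^ (n - 1)

terms = [a038712(n) for n in range(1, 16)]
print(terms)  # [1, 3, 1, 7, 1, 3, 1, 15, 1, 3, 1, 7, 1, 3, 1]
```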
It's a bit of a departure for me, about computerized tutoring systems for subjects such as high school mathematics. My co-editor, Jean-Claude Falmagne, doesn't think it's appropriate to test students' knowledge of these subjects by percentages of correct answers on multiple choice exams and worksheets. Instead, for many years he's been developing a combinatorial theory of knowledge, in which a student's knowledge is represented as the set of short-answer questions that he or she can consistently answer correctly. Not all sets are possible, because of prerequisite relations between the questions. The "knowledge space" of the title is the collection of states of knowledge that are possible; mathematically, it can be represented as an antimatroid. Using this structure and some Bayesian theory, a computer can ask students a small number of questions and accurately assess not only what they know or don't know, but also what they're ready to learn next, and use that to guide the choice of the next lesson to present.
Jean-Claude's system ALEKS does exactly that. The first half of the book consists of reports of experience with ALEKS and similar systems for different groups of students and different subject areas. The second half (which is where I come in) is more mathematical, and includes several chapters on the theory of these systems. My own two chapters (heavily revised from a preprint I put on the arXiv a few years ago) describe the algorithms you need to implement to make such a system work. The first of the two chapters is primarily about an efficient computer representation of antimatroids in terms of their basic words (in learning terms, different ways to order the lessons of a subject so that, whenever a lesson is presented, students who have learned the previous lessons are ready to learn it), and algorithms that use this representation to generate all of the sets in the antimatroid (a key subroutine for the Bayesian assessment algorithm). My other chapter is about modifying knowledge spaces, for instance by restricting to a subset of the possible lessons, in order to make these spaces small enough for the algorithms to run quickly.
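As a toy illustration of the knowledge-space idea (not the book's algorithms, and with hypothetical item names), the feasible states defined by a prerequisite partial order can be enumerated by brute force: a state is any set of items closed under prerequisites.

```python
from itertools import combinations

# Hypothetical items and prerequisites (item -> set of prerequisite items).
prereqs = {
    "fractions": set(),
    "decimals": {"fractions"},
    "percent": {"decimals"},
    "ratios": {"fractions"},
}

def is_state(items):
    # A feasible knowledge state is closed under prerequisites.
    return all(prereqs[q] <= items for q in items)

universe = list(prereqs)
states = [frozenset(s)
          for r in range(len(universe) + 1)
          for s in combinations(universe, r)
          if is_state(set(s))]
print(len(states))  # 7 feasible states, out of the 16 subsets of 4 items
```

Of course, brute force is exactly what the chapter's basic-word representation avoids: it generates only the feasible states, which can be exponentially fewer than all subsets.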
"Set-Difference Range Queries" (with Mike Goodrich and Joe Simons, arXiv:1306.3482) is about data structures for keeping track of multiple point sets in the same geometric space, and determining how two sets differ within some region of the space. The basic idea is to combine the spatial decompositions commonly used for range searching with sketches of the data associated with each node of the decomposition; one of the sketches we use is the invertible Bloom filter from my work with Mike at WADS'07.
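To give a flavor of the sketching side, here is a much-simplified invertible Bloom filter over integer keys — a sketch of the general technique, not the paper's data structure. Two filters can be subtracted cell by cell, after which a peeling process recovers the symmetric difference of the two key sets, with high probability when that difference is small relative to the table size.

```python
import hashlib

M = 16  # cells per subtable
K = 3   # subtables, one hash function each

def _h(data, salt):
    digest = hashlib.sha256(f"{salt}:{data}".encode()).digest()
    return int.from_bytes(digest[:4], "big")

def cells_for(key):
    # One cell per subtable, so a key's cells are always distinct.
    return [i * M + _h(key, i) % M for i in range(K)]

class IBF:
    def __init__(self):
        self.count = [0] * (K * M)
        self.keysum = [0] * (K * M)
        self.hashsum = [0] * (K * M)

    def insert(self, key, sign=1):
        for c in cells_for(key):
            self.count[c] += sign
            self.keysum[c] ^= key
            self.hashsum[c] ^= _h(key, "chk")

    def subtract(self, other):
        # Cell-by-cell difference; keys common to both sketches cancel.
        diff = IBF()
        diff.count = [x - y for x, y in zip(self.count, other.count)]
        diff.keysum = [x ^ y for x, y in zip(self.keysum, other.keysum)]
        diff.hashsum = [x ^ y for x, y in zip(self.hashsum, other.hashsum)]
        return diff

    def _pure(self, c):
        # A cell holding exactly one key: count is +-1 and the
        # checksum matches the single remaining key.
        return (self.count[c] in (1, -1)
                and self.hashsum[c] == _h(self.keysum[c], "chk"))

    def decode(self):
        """Peel pure cells; return (keys only in self, keys only in other)."""
        only_self, only_other = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(K * M):
                if self._pure(c):
                    key, sign = self.keysum[c], self.count[c]
                    (only_self if sign == 1 else only_other).add(key)
                    self.insert(key, sign=-sign)  # peel the key out
                    progress = True
        return only_self, only_other

a, b = IBF(), IBF()
for k in (1, 2, 3, 100):
    a.insert(k)
for k in (3, 100, 7):
    b.insert(k)
sa, sb = a.subtract(b).decode()
print(sorted(sa), sorted(sb))  # with high probability: [1, 2] [7]
```

Real implementations also have to handle the small chance of decoding getting stuck, and tune the table size and number of hash functions to the expected difference size.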
"Universal Point Sets for Planar Graph Drawings with Circular Arcs" (with Patrizio Angelini, Fabrizio Frati, Michael Kaufmann, Sylvain Lazard, Tamara Mchedlidze, Monique Teillaud, and Alexander Wolff) is about universal point sets, sets of points in the plane that can be used as the vertices for planar drawings of all possible n-vertex planar graphs. For drawings in which the edges are straight line segments, universal point sets generally have to have more than n points, and determining how many points are needed is a big open problem. But one can also consider universal point sets for other drawing styles; for instance, it was known that every strictly convex curve in the plane contains a set of n points that is universal for drawings in which each edge is drawn as a polyline with at most one bend. These polylines could be approximated by hyperbolic arcs, giving a drawing with very simple smooth curves for each edge, but maybe one could use curves that are even more restricted. In this paper we show that this is true for circular arcs: a certain set of n points on a parabola is universal for circular-arc drawings of planar graphs. It's purely a theoretical result, though, as the vertices are too unevenly spaced and the edges too tightly packed into a neighborhood of the parabola for these drawings to be useful as visualizations.
ETA: Sariel Har-Peled has also posted some photos from his trip to the conference.