0xDE
30 June 2015 @ 06:11 pm
Although it only hints at the connection, one way of interpreting my latest preprint is as being about higher-dimensional graph drawings. The paper is "Track Layouts, Layered Path Decompositions, and Leveled Planarity" (with Bannister, Devanny, Dujmović, and Wood, arXiv:1506.09145).

 
 
0xDE
29 June 2015 @ 06:55 pm
Here's another town in the Netherlands that I visited just before Computational Geometry Week: Thorn, also known as "the white village". The story goes that when Napoleon took over the Netherlands, he instituted a building tax based on how many windows each building had. So the villagers bricked up many of their windows and then, to make the change less obvious, whitewashed the buildings. The buildings are still painted white and give the place a distinctive look.

It's a small town, so not something that would likely fill a whole day of sightseeing, but very pretty. Behind the church we found an art gallery where an older man had put on a show of his art about trains, including paintings, prints based on old engineering drawings, and a giant model of the bridge over the river Kwai; it was the first day of the show and we were the first to visit.



 
 
0xDE
28 June 2015 @ 02:51 pm
I arrived a day early in the Netherlands for Computational Geometry Week, to allow me longer to get used to the nine-hour time change. One of the things I did with the extra time was to visit Delft, one of many pretty Dutch canal cities, which turned out to be holding a fun flea market that day as well as some sort of children's marching band competition. I didn't take any photos of those, but I did get some from a tour of the Royal Delft Museum and Factory there. Royal Delft is probably best known for its ornamental blue-and-white painted plates, but I was more interested in their architectural ceramics:



 
 
0xDE
27 June 2015 @ 03:33 pm
I just returned from visiting Eindhoven, the Netherlands, for Computational Geometry Week, including the 31st International Symposium on Computational Geometry, the 4th Annual Minisymposium on Computational Topology, the Workshop on Geometric Networks, the Workshop on Stochastic Geometry and Random Generation, the Workshop on Geometric Intersection Graphs, the Young Researchers Forum, and the CG Week Multimedia Exposition. I attended pieces of almost all of them; it was not possible to attend everything, because SoCG had two parallel sessions and the workshops ran in parallel with each other.

 
 
0xDE
16 June 2015 @ 08:30 pm
Somehow I seem to have two new papers online that I haven't mentioned here before.

Maximal Subsequences

Genus, Treewidth, and Local Crossing Number
 
 
 
0xDE
07 June 2015 @ 06:11 pm
I have another new preprint out this evening: "Metric Dimension Parameterized by Max Leaf Number", arXiv:1506.01749. The metric dimension of a graph is the minimum number of vertices you need to choose as landmarks so that all other vertices are uniquely determined by their distances to the landmarks.
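The definition is easy to check by brute force on small graphs. Here's a minimal sketch of that check (my own illustration, not the parameterized algorithm from the preprint), trying landmark sets in increasing order of size until every vertex gets a distinct vector of distances to the landmarks:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Distances from s in an unweighted graph given as {vertex: [neighbors]}."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """Size of the smallest landmark set giving every vertex a distinct
    distance vector (brute force; assumes the graph is connected)."""
    nodes = list(adj)
    dist = {v: bfs_dist(adj, v) for v in nodes}
    for k in range(1, len(nodes) + 1):
        for landmarks in combinations(nodes, k):
            vectors = {tuple(dist[s][v] for s in landmarks) for v in nodes}
            if len(vectors) == len(nodes):
                return k

def path(n):
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def cycle(n):
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
```

For instance, a path needs only one landmark (an endpoint determines everything), while a cycle needs two.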

 
 
 
0xDE
20 May 2015 @ 12:37 am
In a recent paper Ron Graham surveys the work of Paul Erdős on Egyptian fractions. Did you know that Erdős' second paper was on the subject? I didn't. It proved that the sum of a harmonic progression can never form an Egyptian fraction representation of an integer (there is always at least one prime that appears in only one term). Graham himself is also a fan, having studied Egyptian fractions in his Ph.D. thesis.
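Erdős's theorem is easy to spot-check with exact rational arithmetic. This little snippet (my own toy verification, not anything from Graham's survey) confirms that no short harmonic progression of two or more terms sums to an integer:

```python
from fractions import Fraction

def harmonic_progression_sum(a, d, k):
    """Exact value of 1/a + 1/(a+d) + ... + 1/(a+(k-1)d)."""
    return sum(Fraction(1, a + i * d) for i in range(k))

# Spot-check the theorem on small progressions: a sum of two or more
# reciprocals in arithmetic progression always has denominator > 1.
for a in range(1, 8):
    for d in range(1, 8):
        for k in range(2, 16):
            assert harmonic_progression_sum(a, d, k).denominator != 1
```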

 
 
0xDE
15 May 2015 @ 10:36 pm
 
 
One of the key principles of parametric optimization is that, when you are faced with optimizing a nonlinear combination of two linear values (sums of element weights, costs, etc.), you should instead look at the set of optima of all possible linear combinations of the same two values. Let's see how this applies to the number-theoretic knapsack problems I posted about earlier this week.
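To make the principle concrete, here is a small toy example of my own (not from the posts it applies to): minimizing the product of two weight sums over all k-element subsets. Since the product is quasiconcave on the positive quadrant, its minimum is attained at a vertex of the lower-left convex hull of the achievable (a, b) pairs, i.e. at the optimum of some linear combination λ·a + (1−λ)·b; trying one λ per breakpoint interval finds it.

```python
from fractions import Fraction
from itertools import combinations

def parametric_min_product(items, k):
    """Minimize (sum of first weights) * (sum of second weights) over
    k-element subsets of items, using only optima of linear combinations
    lambda*a + (1-lambda)*b.  Assumes positive weights."""
    # Critical lambdas where the sorted order of combined item weights
    # can change: crossing points of pairs of items.
    crossings = {Fraction(0), Fraction(1)}
    for (a1, b1), (a2, b2) in combinations(items, 2):
        denom = (a1 - a2) + (b2 - b1)
        if denom != 0:
            lam = Fraction(b2 - b1, denom)
            if 0 < lam < 1:
                crossings.add(lam)
    lams = sorted(crossings)
    # Sample each breakpoint and each interval midpoint: within an
    # interval the linear optimum (k smallest combined weights) is fixed.
    samples = lams + [(x + y) / 2 for x, y in zip(lams, lams[1:])]
    best = None
    for lam in samples:
        chosen = sorted(items, key=lambda it: lam * it[0] + (1 - lam) * it[1])[:k]
        a = sum(x for x, _ in chosen)
        b = sum(y for _, y in chosen)
        if best is None or a * b < best:
            best = a * b
    return best
```

On a small instance this matches exhaustive search over all k-element subsets, while examining far fewer candidates.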

 
 
0xDE
In a recent post I observed that the largest highly abundant number below some threshold n can be found as the optimal solution of a certain knapsack problem, in which the items to be packed into the knapsack are prime powers p^i (an item that raises the exponent of p from i − 1 to i) with profit log((p^(i+1) − 1)/(p^i − 1)) and size log p (both independent of n), and with knapsack capacity log n. In particular, every highly abundant number has a factorization that can be generated as the solution to this knapsack problem with the number itself as the threshold.

Unfortunately, the knapsack problem is NP-complete, making its solutions vary in complicated ways and making it tricky to extract useful information about highly abundant numbers and their factorizations from this formulation. But fortunately, there's a class of knapsack instances that are really easy to solve: the ones where the optimal fractional solution coincides with the optimal integer solution. These are the solutions you get from a greedy algorithm that at each step chooses the item with the maximum profit/size ratio. This greedy strategy is not optimal for all capacities, but it is optimal when the capacity happens to equal the size of the greedy solution. So which highly abundant numbers have this greedy property?
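Here's a rough sketch of what the greedy order looks like, using the item formulation above (my own reconstruction; the brute-force σ comparison at the end is just a sanity check, not an efficient test for high abundance). For a fixed prime p the profit/size ratio decreases as the exponent i grows, so a single global sort respects the requirement that exponents be raised one step at a time:

```python
from math import log

def greedy_prefix_products(primes, max_exp, count):
    """Sort knapsack items by profit/size ratio and return the prefix
    products of the greedy order.  The item 'raise the exponent of p
    from i-1 to i' has size log p and profit log((p^(i+1)-1)/(p^i-1)),
    so its ratio is log((p^(i+1)-1)/(p^i-1)) / log p."""
    items = []
    for p in primes:
        for i in range(1, max_exp + 1):
            ratio = log((p**(i + 1) - 1) / (p**i - 1)) / log(p)
            items.append((ratio, p))
    items.sort(reverse=True)           # best ratio first
    out, n = [], 1
    for _, p in items[:count]:
        n *= p                         # each item multiplies n by p
        out.append(n)
    return out
```

The first few prefix products come out as 2, 6, 12, 60, 120, 360, 2520, each of which is indeed highly abundant (its σ beats that of every smaller number).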

 
 
0xDE
11 May 2015 @ 06:08 pm
My student Michael Bannister passed his thesis defense this afternoon. Michael has published nearly a dozen papers on topics involving graph algorithms and computational geometry (see his home page for a complete listing). His thesis research involved lower bounds and fixed-parameter upper bounds for graph drawing: inapproximability of layout compaction, the use of Galois theory to prove the nonexistence of exact algorithms for optimizing the vertex placement in many styles of graph drawing, and parameterized algorithms for one-page and two-page crossing minimization.

Michael has also been one of our most popular teaching assistants and has enthusiastically encouraged undergraduates to take part in research projects, leading to a poster at last year's Graph Drawing symposium and an ongoing project that we hope to turn into another publication. Next year he'll be putting those skills to good use as a visiting assistant professor at Pomona College, a highly selective private school also located in Southern California, while his wife (another theoretician, Jenny Lam) finishes her own doctorate.

Congratulations, Michael, and congratulations Pomona! Our loss is your gain.
 
 
0xDE
07 May 2015 @ 11:15 pm
The Schulze method for determining the results of multiway votes has three parts:

1. Use the ballots to determine the results (winner and margin of victory) of each possible head-to-head contest.
2. Perform an all-pairs widest path computation on a directed complete graph weighted by the margins of victory.
3. Find a candidate whose widest outgoing path to each other candidate is at least as wide as the corresponding incoming path.
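A compact sketch of all three parts (my own illustrative code, not a complete Schulze specification — the ballot format, the use of margins as weights, and the tie handling are all simplifying assumptions):

```python
def schulze_winner(ballots, C):
    """ballots: list of rankings, each a list of candidate indices, best first.
    Returns the index of a Schulze winner, or None if there is none."""
    # Part 1: pairwise tally, the straightforward O(C^2 * V) way.
    d = [[0] * C for _ in range(C)]
    for b in ballots:
        rank = {c: r for r, c in enumerate(b)}
        for i in range(C):
            for j in range(C):
                if i != j and rank[i] < rank[j]:
                    d[i][j] += 1
    # Part 2: widest paths on the margin graph, Floyd-Warshall style
    # with (max, min) in place of (min, +).
    p = [[0] * C for _ in range(C)]
    for i in range(C):
        for j in range(C):
            if i != j and d[i][j] > d[j][i]:
                p[i][j] = d[i][j] - d[j][i]      # margin of victory
    for k in range(C):
        for i in range(C):
            for j in range(C):
                if i != j:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    # Part 3: a winner's outgoing widest path to every rival is at
    # least as wide as the incoming one.
    for i in range(C):
        if all(p[i][j] >= p[j][i] for j in range(C) if j != i):
            return i
    return None
```

Even when the head-to-head results form a cycle, the widest-path comparison can still single out a winner.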

The second part can be done in cubic time using the Floyd-Warshall algorithm (the choice in practice) or faster using fast matrix multiplication. And the third part is easy. But what about the first part? Here, some Wikipedia editor wrote in 2011 that the first part, "if implemented in the most straightforward way, takes time proportional to C² times the number of voters" (where C is the number of candidates). But then last year some other editor tagged this claim as being original research.

This raised some questions for me. Given how straightforward it is, can this really be considered to be original research? Is it possible to find a published source for the time analysis of this step that can be used to untag it? (If you know of one, please tell me or add it to the article.) Is the algorithm with this time bound really "the most straightforward way"? And if this is the time bound you get by doing things straightforwardly, can we get a better time bound by trying to be more clever?

 
 
 
0xDE
Suppose you have a polynomial-time algorithm that operates on sets of weighted elements, and involves comparisons of the weights of different sets. (This describes many different algorithms for shortest paths, minimum spanning trees, minimum weight matchings, closures, etc.) But suppose also that your algorithm is only guaranteed to work correctly when different sets always have distinct total weights. When comparisons could come out equal, your algorithm could crash or produce incorrect results. But equal weights are likely to happen when the element weights are small integers, for instance. Is there some semi-automatic way of patching your algorithm to work in this case, without knowing any details about how it works?

An obvious thing to try is to add small distinct powers of two to the element weights. If these numbers are small enough they won't affect initially-unequal comparisons. And if they're distinct powers of two then their sums are also distinct, so each two sets get a different perturbation. But this method involves computing with numbers that have an additional n bits of precision (where n is the number of elements in the problem), and a realistic analysis of this method would give it a near-linear slowdown compared to the unperturbed algorithm. Can we do better?
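The power-of-two perturbation is easy to sanity-check with exact rational arithmetic. This toy demonstration (mine, not the improved method the question asks for) verifies both properties on a small instance: all subset sums become distinct, and originally-unequal comparisons are unaffected.

```python
from fractions import Fraction
from itertools import chain, combinations

def perturb(weights):
    """Add eps * 2^i to the i-th weight, with eps so small that the
    total perturbation of any subset stays below 1/2 -- less than the
    minimum gap between unequal integer subset sums."""
    n = len(weights)
    eps = Fraction(1, 2 ** (n + 1))
    return [w + eps * 2 ** i for i, w in enumerate(weights)]

weights = [3, 3, 5, 1]                 # deliberately includes a repeat
pert = perturb(weights)
subsets = list(chain.from_iterable(
    combinations(range(len(weights)), k) for k in range(len(weights) + 1)))
orig_sum = {S: sum(weights[i] for i in S) for S in subsets}
pert_sum = {S: sum(pert[i] for i in S) for S in subsets}

# Every pair of distinct subsets now has distinct total weight...
assert len(set(pert_sum.values())) == len(subsets)
# ...and every originally-unequal comparison comes out the same way.
assert all((orig_sum[S] < orig_sum[T]) == (pert_sum[S] < pert_sum[T])
           for S in subsets for T in subsets if orig_sum[S] != orig_sum[T])
```

The catch, as noted above, is that the perturbed weights carry n extra bits of precision, which is exactly the slowdown one would like to avoid.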

 
 
0xDE
17 April 2015 @ 12:16 am
I couldn't resist photographing this door to a lecture hall in the science sector of the UCI campus. I'm not sure what the pink paint brushmarks are: vandalism? Rustoleum? But they make a nice pattern.



 
 
0xDE
16 April 2015 @ 06:08 pm
My latest arXiv preprint, "The Parametric Closure Problem" (arXiv:1504.04073, to appear at WADS), concerns an old optimization problem that can be used, among other applications, in the planning process for open-pit mining.
