
Workshop on Theory and Many-Cores

Longtime algorithms researchers will remember that in the 1980s parallel algorithms were a hot topic, but that interest faded as Moore's law caused new single-processor machines to be faster (and much easier to program) than parallel computers made with many older and cheaper processors. Nowadays, Moore's law itself has faded (or, more accurately, stopped leading to single-processor speedups) and parallel computing has been making a comeback, especially as many researchers have realized that we already have powerful and highly parallel processors cheaply available to us in the graphics coprocessors of our home video game systems. Researchers such as Nodari Sitchinava (a graduating student at UCI) and his advisor Mike Goodrich have been hard at work on developing analysis models and algorithms for these new systems and on figuring out how many of the old PRAM-algorithm techniques can be carried over to them. But there has also been a lot of work in this area in non-theory communities that could benefit from our theoretical expertise. In part, this is one of the themes of the Workshop on Massive Data Algorithmics to be held in conjunction with SoCG in Aarhus this summer, and older parallel algorithms conferences such as SPAA have also adapted to these new directions.
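
(As a concrete, if toy, illustration of what carrying a PRAM technique over to one of these systems looks like, here is a minimal CUDA sketch of the classic logarithmic-depth tree summation; the kernel name, block size, and input are made up for the example, and it is only a sketch of the idea, not anyone's actual research code.)

// Classic PRAM-style pairwise tree reduction, mapped onto a GPU.
// Each block of 256 threads sums 256 elements of `in` into one
// element of `out`, using O(log p) parallel combining rounds.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void block_sum(const int *in, int *out, int n) {
    __shared__ int buf[256];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    buf[tid] = (i < n) ? in[i] : 0;
    __syncthreads();
    // Halve the number of active threads each round, as in the
    // standard EREW PRAM summation algorithm.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) buf[tid] += buf[tid + stride];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = buf[0];
}

int main() {
    const int n = 1 << 20;          // hypothetical input size
    int *in, *out;
    cudaMallocManaged(&in, n * sizeof(int));
    cudaMallocManaged(&out, (n / 256) * sizeof(int));
    for (int i = 0; i < n; ++i) in[i] = 1;
    block_sum<<<n / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    long long total = 0;            // finish the last few terms on the host
    for (int i = 0; i < n / 256; ++i) total += out[i];
    printf("sum = %lld\n", total);  // expect 1048576
    cudaFree(in);
    cudaFree(out);
    return 0;
}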

Uzi Vishkin has asked me to publicize another workshop, even more focused on this subject and held a little closer to (my) home: the Workshop on Theory and Many-Cores, in College Park, Maryland, on May 29, 2009, the day before STOC and in the same state. There's an abstract submission deadline of April 27 (less than two weeks away). As the conference announcement states,

The sudden shift from single-processor computer systems to many-processor parallel computing systems requires reinventing much of Computer Science (CS): how to actually build and program the new parallel systems. Indeed, the programs of many mainstream computer science conferences, such as ASPLOS, DAC, ISCA, PLDI and POPL, are heavily populated with papers on parallel computing and in particular on many-core computing. In contrast, the recent programs of flagship theory conferences, such as FOCS, SODA and STOC, hardly have any such papers. This low level of activity should be a concern to the theory community, for it is not clear, for example, what validity the theory of algorithms will have if the main model of computation supported by the vendors is allowed to evolve away from any studied by the theory. The low level of current activity in the theory community is not compatible with the past involvement of theorists in parallel computing, nor with their representation in the technical discourse. For example, 19 out of 38 participants in a December 1988 NSF-IBM Workshop on Opportunities and Constraints of Parallel Computing at IBM Almaden had theory roots. The lack of involvement of theorists should also concern vendors that build many-core computers: theorists are often the instructors of courses on algorithms and data structures, and without their cooperation it will be difficult to introduce parallelism into the curriculum.

The main objective of the workshop will be to explore opportunities for theoretical computer science research and education in the emerging era of many-core computing, and to develop an understanding of the role that theory should play in it.

There's still time to get something together for this, so get working! And I'm sure Uzi would also appreciate workshop participants who want to learn more about this subject but are not yet ready to speak on it themselves.
Tags: algorithms, conferences