Monday, September 21, 2009

The New vs. the Good

I have wondered for a while whether the computer science community should place more value on journal articles than on conference papers. Journal articles are not just longer versions of conference papers that take much more time to review -- they are meant to represent a capstone on a large piece of work, which is something we often overlook in our field.

Much systems research is driven by a rapid cycle of develop, evaluate, publish (and not always in that order). With a couple of major conference venues every year, and the need to build a strong publication record as a major determinant of one's (perceived) success in the field, there is a high incentive to push out new papers as quickly as possible, irrespective of how half-baked the content might be. Many conference papers do little more than scratch the surface of an idea -- it is hard to do more in only 14 pages. The expected longevity of a paper (even a good one at a top conference) is little more than a year, two tops. And most systems on which the papers are based never see the light of day, apart from perhaps a tarball slapped together and linked on a student's website.

It's a collective form of ADHD -- hack, publish, move on to the next thing. In some sense, it's more important to be the first person to publish in an area than to develop a system to the point where the major problems have actually been solved, and the concept thoroughly vetted. Research fads come and go pretty quickly. (Remember distributed hash tables?) Once the first few papers have been published in an area, people start to get antsy looking for the next big idea.

In other scientific communities, there is a vastly different expectation of the maturity of a piece of work before it can be published, using journal articles as the primary means of dissemination. As much as we scorn journals, they do have the virtue of slowing things down -- requiring more in-depth presentation of the ideas, extensive reviews, and sometimes multiple revisions before the work can be published. (My wife, who is a psychiatrist, reports that several of her articles have been in the review and revision cycle for more than a year and a half. Computer scientists don't have this kind of patience.)

One can argue that the journal editorial cycle is too slow for a fast-moving field like CS. I think that's naive; other scientific disciplines -- molecular biology, particle physics -- are innovating at least as rapidly and manage to do so within the context of a journal article framework. Those communities have the means for getting early results out there -- posters and oral presentations at conferences, online repositories like arXiv -- but there is a much clearer line drawn between the early work and the culmination of a major research effort. In the systems community, we have workshops like HotOS for floating new ideas, but it's not uncommon for a HotOS paper to turn into a major conference publication just a few months later. (One could argue that a project at that point of maturity should not be a candidate for a "hot topics" workshop. CS research seems to exhibit a high degree of entropy: work goes from "hot" to "cold" pretty quickly.)

I wonder what this rapid cycle does to the quality and depth of the work in our field, compared to that in other fields. I like to think that CS has shed the antiquated, lumbering trappings of other academic disciplines, but in our rush to keep the publication cycle going, what are we missing? Does our rapid-fire approach to research cause us to spend too much time playing small-ball, rather than investing in the hard problems that could take years to bear fruit? Does it make sense to place more value on the currency of journal articles in CS?

SenSys 2009 Registration now open

Registration is now open for SenSys 2009, which will be in Berkeley from November 4-6. The early registration deadline is October 12. The conference program is fantastic and we have Bill Weihl from Google lined up to do a keynote.