Wednesday, April 29, 2009

Scaling up program committees

As a follow-up to my earlier post on scaling up the number of papers that conferences accept, I wanted to comment on the reviewing load imposed on program committees. Ken Birman and Fred Schneider have a thought-provoking article on this topic in the May issue of CACM (thanks to Yuriy Brun for the pointer). They touch on many points, but one issue they do not explicitly consider is the possibility of increasing the size of the program committee itself to reduce the workload.

The figure below shows the size of the program committee and the number of submissions for the last few years of SOSP and OSDI (OSDI 2002 is left out since I could not find data on the number of submissions). Note that I am not counting program chairs in the PC size, since presumably they do not shoulder the same burden for paper reviews (indeed, they have a much harder job).

[Figure: program committee size and number of submissions for recent SOSP and OSDI conferences]

I also estimate the number of reviews by each PC member, assuming that -- on average -- every paper gets four reviews. This is a guess and it may in fact be closer to 3 reviews per paper, but many conferences are now doing at least two reviewing rounds, so this seems reasonable. Split the difference if you like. I happened to be on the OSDI 2004 PC when the number of submissions spiked, and indeed I did have to review around 45 papers. (My average review length for that year was 60 lines of ASCII text, or around 3.5 KB per review -- you do the math -- I worked my butt off.)
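
For the curious, the arithmetic behind that per-member estimate is trivial. Here's a throwaway Python snippet; the numbers in the example are made up for illustration and are not the actual OSDI 2004 figures:

    # Rough estimate of the reviewing load on each PC member, assuming
    # every submission gets the same number of reviews.
    def reviews_per_member(num_submissions, pc_size, reviews_per_paper=4):
        return num_submissions * reviews_per_paper / pc_size

    # Illustrative numbers only: 175 submissions, a 15-person PC, and
    # 4 reviews per paper works out to about 47 reviews per member.
    print(reviews_per_member(175, 15))   # 46.66...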

As the figure shows, in the last couple of years, program chairs have caught on that it is time to increase the PC size to compensate for the increased number of submissions. Prior to 2007, the typical PC size was 12 or 13, whereas in the last couple of years it has spiked to 26, 31, and 33 (for SOSP 2009). Some conferences have adopted a "light" and "heavy" PC model in which the "heavy" members get more papers to review and have to attend the PC meeting.

In general I think it is beneficial to increase the program committee size, within reason. The classic model in which the PC is composed mainly of a "wise council of elders" seems too limiting and, as Ken and Fred point out, cannot scale. Looking at the last few OSDI and SOSP PCs, they are fairly diverse, with quite a few names that I haven't conventionally associated with these program committees, whereas prior to about 2002 there was far more homogeneity. This practice widens the scope of the community and gives more people an opportunity to help shape the direction the conferences take. This is a good thing.

Tuesday, April 28, 2009

Scaling up conferences

Many prestigious conferences in systems and networking -- such as SOSP, SenSys, and SIGCOMM -- constrain themselves to a single track over two and a half days. This limits the number of papers that can be presented at the conference to around 25-30 at most, assuming 30-minute talk slots.

The problem is that the field has been growing, but the publication venues have not. This means it is becoming increasingly competitive to get a paper accepted to one of these venues. You can check out the stats for yourself here. Although the stats are not yet on that page, SIGCOMM 2009 accepted only 10% of submitted papers. Conference publications in top venues are now a highly prized commodity, and one that is becoming increasingly valued over time. Unfortunately this leads to spiraling inflation, in the sense that the harder it is to get a paper accepted, the more it is seen as a superhuman achievement ("Wow! You got a paper into SIGCOMM!"), causing more people to submit more papers to the conference: a vicious cycle.

This value inflation is evident in the CVs of the hotshot graduate students on the job market last year. Several of us on the hiring committee were amazed at how many freshly-minted Ph.D.s were coming out with multiple papers in places like SOSP, OSDI, and NSDI. Clearly there is a lot more weight placed on getting those papers accepted than there used to be. When I was a grad student, publications were important, but nobody pushed me to publish relentlessly -- I submitted papers when I had something to submit. (I'll admit this was not very strategic.) Somewhere along the way the ante has been upped considerably.

Of course, this is a "great" thing for those lucky few who are successful at publishing at these venues. (To be fair, I count myself as part of that cohort.) But it does little to foster the community as a whole. Grad students need publications in top conferences to be taken seriously for faculty jobs. Junior faculty need them for tenure. If the conference venues become more and more selective, I don't see how we can sustain growth in the field as a whole. It seems to me that the number of papers accepted at conferences needs to keep pace with the number of new faculty and students entering the field. Either that or we need to reset our expectations of what constitutes making a research contribution.

There is also a potential loss to the diversity of the research community. Getting a paper into SOSP or NSDI takes a substantial investment of money and time. Research groups with more firepower (like, say, Microsoft Research) have a much easier time than those who might have good ideas but fewer resources. I don't have hard data to back this up, but my sense is that it is increasingly rare to see papers from anything other than "top ten" universities and labs in the top venues. One thing that would help would be a quota on the number of papers that a given institution or author could submit to a conference, much as the NSF does for some of its programs. (It seems that everyone I know at MSR is working on at least three papers for every major deadline. This is insane.)

Now, I am not suggesting that conferences lower their standards. But we need to decide what is reasonable and scale up as the community grows. One way to scale up is to create new venues, but this is not very cost-effective: it is expensive and time-consuming to run a conference, and who has time to go to so many events each year? Accepting a few more papers once you already have the whole program committee in the room incurs much less overhead.

This may mean going to multiple tracks, having shorter talks (my preferred solution), or not presenting every paper at the conference orally. As much as people bemoan multi-track conferences, every "single track" conference is really double-track: there is the track of people who stay in the room and listen to the talks, and the track of those who stand in the hallway and chat.

Finally, it's clear that paper quality varies over time, and it seems unlikely that any specific target for acceptance rates (say, 25%) makes sense for a given conference in a given year. But I think we should take a hard look at where we are headed, and ask whether we shouldn't open up the field more by devaluing the currency of the conference paper.

Monday, April 27, 2009

Abolish universities? Not so fast

The NY Times is carrying an editorial today from Mark C. Taylor, the chair of the Religion department at Columbia, saying that we need to rethink the structure of graduate education, and universities as a whole, to make them more relevant in today's world. The article is generally thought-provoking, but dead wrong when it comes to science and engineering. Unfortunately, it does not qualify its claims as applying only to the humanities and social sciences, and some readers might extend this flawed line of thinking to other fields.

I'm surprised the author would be so careless as to say things like:
"Most graduate programs in American universities produce a product for which there is no market..."
and
"Young people enroll in graduate programs ... all because of the illusory promise of faculty appointments." [Emphasis mine.]
What planet is this guy from? What he really means is that in the areas of "religion, politics, history, economics, anthropology, sociology, literature, art, religion and philosophy" (the author's all-encompassing list of the realms of human thought that apparently really matter) it is damned hard to get a decent job after graduate school, and I agree. But this has little to do with the situation in the sciences and engineering, where graduate students go on to a wide range of careers in industry, government, military, and, yes, academia.

I grant that it is possible that we are headed towards a brick wall in these other fields. According to the latest Taulbee survey, Ph.D. production in Computer Science has been skyrocketing and is at its highest level since the survey was started. However, far more students are going into industry than academia. Even students dead set on a faculty position can often get a job straight out of graduate school -- postdocs are still the exception rather than the rule. This situation could change, but I'm not sure it's time to end universities as we know them. Religion departments are maybe another matter.

Friday, April 24, 2009

NSDI 2009, Day Three

Today is the last day of NSDI 2009 here in Boston. The conference was great this year, and the community is clearly going strong. My only regret is that, the conference being in Boston, there was no excuse for me to go out cavorting with my colleagues until the wee hours. (Not that this stopped my grad students...)

My favorite talks from today:

Softspeak: Making VoIP Play Well in Existing 802.11 Deployments
Patrick Verkaik, Yuvraj Agarwal, Rajesh Gupta, and Alex C. Snoeren, University of California, San Diego

This paper is about improving the performance of VoIP flows in wireless networks, which can be badly degraded by bulk TCP and UDP traffic. I liked how this work looks at something other than bulk throughput as the only performance metric for a wireless network: this paper focuses on the MOS scores for the VoIP calls. The basic idea is to allow the VoIP stations to use a shorter contention window and to aggregate downlink traffic across multiple VoIP stations. It's a clever idea and very well evaluated in the paper, although it seems fairly complex and would require substantial changes to the clients and APs to support. I was surprised how many questions were asked after the talk. I asked what happens with multiple bulk TCP flows: given that a single TCP flow is so badly impacted by VoIP, it seems to me that multiple TCP flows would really hammer the VoIP traffic.
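
To make the contention-window point concrete, here is a toy simulation of 802.11-style random backoff. This is my own sketch, not the authors' code, and the window sizes are made up; it just shows why a station drawing its backoff from a smaller window wins the channel far more often than a bulk sender using the standard window:

    import random

    def contend(cw_voip=4, cw_bulk=32, rounds=100000):
        # Toy model of DCF backoff: each round, every station picks a random
        # slot from [0, CW); the smallest slot transmits first. Ties count
        # as collisions and are awarded to nobody.
        voip_wins = bulk_wins = 0
        for _ in range(rounds):
            voip = random.randrange(cw_voip)
            bulk = random.randrange(cw_bulk)
            if voip < bulk:
                voip_wins += 1
            elif bulk < voip:
                bulk_wins += 1
        return voip_wins / rounds, bulk_wins / rounds

    # With a 4-slot window against a 32-slot window, the VoIP station
    # gets to the channel the vast majority of the time.
    print(contend())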

Making Routers Last Longer with ViAggre
Hitesh Ballani, Paul Francis, and Tuan Cao, Cornell University; Jia Wang, AT&T Labs—Research

Cheeky title aside, this paper focuses on the problem of reducing the size of the expensive FIB memory in Internet routers as the routing table sizes increase. The key idea is to partition forwarding responsibility so that each router only maintains routes to a fraction of the IP address space. This can be supported on unmodified routers by using separate "route reflectors" that filter the routing tables on behalf of the routers themselves. Of course, this approach requires encapsulation and tunneling since intermediate routers don't have the complete routing information.
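
Here is a back-of-the-envelope sketch of the partitioning idea (mine, not the authors'). ViAggre actually divides the address space using covering "virtual prefixes"; hashing each prefix to a router is just the simplest way to illustrate the division of labor and the tunneling it requires:

    import hashlib

    ROUTERS = ["r1", "r2", "r3", "r4"]

    def owner(prefix):
        # Map a prefix to the router responsible for keeping its FIB entry.
        h = int(hashlib.sha1(prefix.encode()).hexdigest(), 16)
        return ROUTERS[h % len(ROUTERS)]

    def forward(my_id, prefix, my_fib):
        if owner(prefix) == my_id:
            return my_fib[prefix]               # normal FIB lookup
        return ("tunnel-to", owner(prefix))     # encapsulate toward the owner

    # Each router now holds roughly 1/len(ROUTERS) of the full FIB, at the
    # cost of an extra hop (and a tunnel) for prefixes it doesn't own.
    r1_fib = {p: ("nexthop", p) for p in ["198.51.100.0/24", "203.0.113.0/24"]
              if owner(p) == "r1"}
    print(forward("r1", "203.0.113.0/24", r1_fib))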

Ironically, the talk just before this one (NetReview: Detecting When Interdomain Routing Goes Wrong) dealt with detecting BGP misconfigurations -- the ViAggre approach adds even more complexity and potentially creates a nightmare for someone trying to debug their network (or, at least, for Andreas). That said, they've thought hard about how to make this deployable in practice.


Thursday, April 23, 2009

NSDI 2009, Day Two

NSDI marches on. A couple of my favorite talks from today:

Tolerating Latency in Replicated State Machines Through Client Speculation
Benjamin Wester, University of Michigan; James Cowling, MIT CSAIL; Edmund B. Nightingale, Microsoft Research; Peter M. Chen and Jason Flinn, University of Michigan; Barbara Liskov, MIT CSAIL

In this paper, the authors propose to permit clients to speculate on the result of a replicated operation, assuming the response from the first server is correct. Although this seems like an obvious idea, it's well executed here. Of course, it requires that clients also implement checkpoint and rollback in case the speculation was incorrect. To prevent operations with side effects from firing before the true result of the speculation is known, dependent requests can carry a predicate tied to the result of a previous speculation; this effectively triggers a "collapse of the wave function" (as I think of it), forcing the servers to finalize the agreement.
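
Here is a stripped-down sketch of the client-side speculation loop as I understood it from the talk. All of the names and the majority-vote "agreement" stand-in below are mine, not the system's actual interfaces:

    import copy

    class SpeculativeClient:
        def __init__(self, replicas):
            self.replicas = replicas
            self.state = {}     # whatever application state the client keeps

        def execute(self, request):
            checkpoint = copy.deepcopy(self.state)   # so we can undo a bad guess
            first = self.replicas[0](request)        # speculate on the first reply
            self.state[request] = first

            agreed = self._agreement(request)        # stand-in for the real protocol
            if agreed != first:                      # mis-speculation: roll back, redo
                self.state = checkpoint
                self.state[request] = agreed
            return agreed

        def _agreement(self, request):
            # Majority vote across replicas stands in for the agreement protocol.
            replies = [r(request) for r in self.replicas]
            return max(set(replies), key=replies.count)

    # Three "replicas", the fastest of which happens to be faulty:
    client = SpeculativeClient([lambda x: "garbage",
                                lambda x: x.upper(),
                                lambda x: x.upper()])
    print(client.execute("put k v"))   # speculation was wrong; client rolls back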

Studying Spamming Botnets Using Botlab
John P. John, Alexander Moshchuk, Steven D. Gribble, and Arvind Krishnamurthy, University of Washington

Botnets are a huge problem and are responsible for a great deal of spam and DDoS attacks. This paper focuses on the problem of intercepting malicious binaries and observing a botnet's behavior in a safe environment. (This is not a trivial problem: 10 bot instances running in their sandbox generate 6 million emails a day!) It turns out that many botnets propagate through social engineering rather than by exploiting security holes in software, so the authors collect binaries by intercepting spam. Some clever techniques are involved, such as the use of network connection fingerprints to determine whether two potentially malicious binaries are the same (a simple hash of the binary won't work). They also test whether binaries do VMM detection by running one instance on a real Windows XP machine and another on VMware, and comparing the network signatures they produce.
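
My guess at what the fingerprint comparison might look like (the paper may well do something more sophisticated): record the set of network endpoints a binary contacts while running in the sandbox, and compare those sets rather than the binaries themselves, since the binaries are typically packed or polymorphic. The hostnames below are invented for illustration:

    def network_fingerprint(trace):
        # Reduce a sandbox trace to a set of (host, port, protocol) endpoints.
        return {(c["host"], c["port"], c["proto"]) for c in trace}

    def same_bot(trace_a, trace_b, threshold=0.8):
        # Call two binaries "the same bot" if their endpoint sets mostly
        # overlap (Jaccard similarity). The threshold is arbitrary.
        a, b = network_fingerprint(trace_a), network_fingerprint(trace_b)
        if not a or not b:
            return False
        return len(a & b) / len(a | b) >= threshold

    # Two packed variants of the same spam bot hash differently on disk,
    # but both phone home to the same C&C host and mail relay:
    t1 = [{"host": "cc.example.net", "port": 443, "proto": "tcp"},
          {"host": "relay.example.org", "port": 25, "proto": "tcp"}]
    t2 = [{"host": "cc.example.net", "port": 443, "proto": "tcp"},
          {"host": "relay.example.org", "port": 25, "proto": "tcp"}]
    print(same_bot(t1, t2))   # True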

This was a very cool talk (and I can't wait to read the paper). My only concern is that the authors might find a few tough guys with Slavic features carrying Kalashnikovs outside of their offices one day...

Wednesday, April 22, 2009

NSDI 2009, Day One

NSDI is happening this week here in Boston. This year's conference has 32 papers (selected out of about 160 submissions) and there are more than 240 attendees, which is an NSDI record. The topics this year are pretty diverse, including content distribution (which seems to be a euphemism for "P2P"), software-defined radios, botnets, and of course the mandatory session on BFT.

A couple of highlights from my favorite talks today.

TrInc: Small Trusted Hardware for Large Distributed Systems
Dave Levin, University of Maryland; John R. Douceur, Jacob R. Lorch, and Thomas Moscibroda, Microsoft Research

This paper proposes to add a small trusted hardware component (which they implement as a smart card), incorporating a counter and a key, providing provable attestation for state updates performed by a node participating in a distributed system. This can be used to prevent a malicious or selfish node from "equivocating" by sending different messages to different peers. For example, the authors show how this can prevent attacks on BitTorrent where a node lies about what chunks of a file it has received in order to increase its own download capacity.
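
Roughly, the trusted component works like the sketch below: it holds a secret and a counter that can only move forward, and the only operation it offers is "advance the counter and sign a statement binding a message to that counter interval." Since no interval can be used twice, a node cannot get two conflicting messages attested for the same slot. This is a simplification based on the talk (and it uses an HMAC over a device secret as a stand-in for real asymmetric signing), not the authors' actual interface:

    import hashlib, hmac

    class Trinket:
        def __init__(self, secret):
            self._secret = secret    # known only to the trusted hardware
            self._counter = 0

        def attest(self, new_counter, message):
            # The counter is monotonic: once an interval is consumed, it can
            # never be bound to a different message.
            if new_counter <= self._counter:
                raise ValueError("counter must strictly increase")
            stmt = "%d|%d|%s" % (self._counter, new_counter,
                                 hashlib.sha256(message).hexdigest())
            self._counter = new_counter
            return stmt, hmac.new(self._secret, stmt.encode(), "sha256").hexdigest()

    t = Trinket(b"device-secret")
    print(t.attest(1, b"I have chunks 1-10 of the file"))
    # A conflicting claim can't reuse the same interval:
    # t.attest(1, b"I have chunks 1-50")  would raise ValueError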

HashCache: Cache Storage for the Next Billion
Anirudh Badam, Princeton University; KyoungSoo Park, Princeton University and University of Pittsburgh; Vivek S. Pai and Larry L. Peterson, Princeton University

This paper develops a Web cache that uses far less memory and storage than conventional caches. The idea is to provide an effective caching solution for use in developing countries with poor Internet access and little access to high-end hardware. They employ a bunch of clever techniques to reduce the overheads and trade off storage, memory, and performance. To be honest, I never thought that Web caching could be sexy again, but this paper proved me wrong.

Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults
Allen Clement, Edmund Wong, Lorenzo Alvisi, and Mike Dahlin, The University of Texas at Austin; Mirco Marchetti, The University of Modena and Reggio Emilia

This paper deals with the "elephant in the room" that most BFT systems don't provide acceptable (or any) performance if some nodes are actually faulty. The authors describe Aardvark, which is designed to provide robust BFT rather than just peak performance in the best case. I like the authors' shift in priorities in developing their system to focus on robustness. Kudos for using "Big MAC Attack" as the title of one of the slides.

The poster and demo session was just afterwards, though being completely beat after a long day I decided to head home.

By the way, why don't conference hotels train their staff not to make a huge racket immediately outside of rooms where presentations are happening? The last three conferences I've been to have been disrupted by noisy hotel staff yelling at each other and moving dishes around outside of the conference hall. You would think they would somehow be aware that they are just next door to a roomful of 300 people trying to listen to a talk.


Monday, April 13, 2009

Brown project spamming MediaWiki sites

A few weeks ago I noticed some very strange looking pages showing up on the TinyOS Docs Wiki which I maintain. These pages contained what appeared to be ASCII-encoded binary data of some kind, although the format was not anything I recognized. Cursory searches for what might be causing this turned up nothing, so I ended up spending a couple of hours locking down the site to prevent malicious edits.

Turns out this was (what appears to be) a student project from Brown called Graffiti, which is intended to provide a kind of encrypted, distributed filesystem (I gather, since the paper isn't available) on top of "public" MediaWiki sites. (I should point out that the bogus pages on my site did not have the explanatory message at the top saying that they were related to this project -- I guess this was only added in a later version of their code.)

The authors seem to be reticent about the trouble they have caused, but a comment that previously appeared on the project page suggests that they don't quite get why this is such a problem:
03/09/2009 - Rejection!
Our paper got rejected from IPTPS. One of the main points brought up by the reviewers was that our system was not a true peer-to-peer system. Most reviewers also seemed appalled at the idea of commandeering abandoned websites in order to store illegal content. Nevertheless, we are not deterred and will be searching for the next workshop/conference that is bold enough to take on the ideas of the Graffiti project!
(Seen on this discussion board.)

Now, while the idea of a distributed filesystem riding on top of "open" sites is cool, the way the authors went about this is problematic. Just because some MediaWiki sites are open doesn't make it OK to spam them with bogus pages for the purpose of doing your research -- I am sure this violates the terms of service of both Brown's network and the sites they spammed.

There are better ways to evaluate this system than to hammer on unprotected wiki sites without permission. They could have used PlanetLab and run their own wikis to evaluate the system's scalability and robustness. They could have asked permission from site owners with a promise to clean up after themselves once the experiments were run. I hope the authors are kidding about the "bold enough" comment above. It suggests they underestimated both the legal and ethical issues raised by spamming open sites just to get a paper published and the amount of hassle they have caused the sysadmins of the affected sites. I just hope they learned some kind of lesson from this.

Thursday, April 9, 2009

Rumors of the death of newspapers have been greatly exaggerated

A lot has been said lately about the decline and fall of the newspaper industry. In the last week I've seen at least two TV interviews with newspaper publishers moaning that blogs (ahem) can't provide the same quality of reporting as they can. Yet they give their content away for free online, which seems to me to be a race to the bottom. If everything is free, how are readers supposed to value the reporting provided by newspapers over what they can get from the Huffington Post or (God forbid) Digg?

Look, if newspapers want to stay in business, they have to start charging money for online access. It's as simple as that. The trick is balancing revenue from subscriptions with revenue from online ads driven by "free" access. The NY Times ran an experiment a couple of years ago where they started charging for "prime" content such as the editorial pages. In the end they pulled the plug because they were losing hits. But the question is not how many hits -- it's dollars per hit that matters. With web browsers like Firefox making it trivial to block online ads, a site can't continue to rely on ads alone to keep the business afloat.

This is going to require some creative pricing models. Personally I'd like to see newspapers make the current day's stories free, but require that you be a subscriber to access the archives. If I email a link to a story and the recipient doesn't read it that very day, well, too bad, they need to sign up to get access. A (generous) free trial period will lure people in. This model can work. I pay something like $5 a month for Angie's List yet I access the site only a couple of times a year. Mostly it's because the opportunity cost of not having access to the site is high when I need it (i.e., to find a good plumber when my dishwasher explodes). Same goes for sites like the NY Times. If someone emailed me a link to Bittman's latest recipe for chocolate chip waffles and I couldn't read it, I would just have to subscribe, now wouldn't I?

Another model would be to bundle access to a wide range of online publications together, much like you get when you sign up for cable or satellite TV. The NY Times, Boston Globe, WSJ, Wired, Salon, etc. should form subscription packages where you get access to all of the sites for "one low monthly payment." Just like I can't watch anywhere near all of the 200+ channels I get with my DirecTV subscription, most people can't consume all of this content, so it gives consumers the appearance of getting a lot more than they are paying for.

Newspapers aren't going anywhere fast. But they do need to get with the times and reset readers' expectations that you get what you pay for.

Thursday, April 2, 2009

Don't believe the netbook hype

The NY Times is running an article today on the rise of netbooks, which are roughly defined as cheap laptops using low-power chips, sometimes without a hard drive. Of course, the terminology is fuzzy and confusing. The article claims that netbooks are poised to eat into the conventional laptop and PC market in a big way, mainly because they're cheaper. I don't buy it.

The potential game-changer for netbooks is that companies traditionally associated with the cell phone market are bringing out new processors and other components that bring the cost and power consumption down. The low-cost, low-power ARM chips don't run Windows, so some netbooks run a stripped-down version of Linux (though where did the NY Times get the idea that Linux costs $3 versus $25 for Windows XP?). And many current netbooks are too puny to run "real" applications -- by which I mean things like PowerPoint, games, or playing a DVD.

Now, I'm a big Linux advocate, but I don't buy the idea that just because netbooks are cheaper, they're going to take over a significant chunk of the market. If history has taught us anything, it's that Intel and Microsoft will bring down their prices and power-consumption profiles to compete in this space. At the end of the day, it's software that sells devices, not price or power consumption. It's wrongheaded to take the technology-centric view that because the technology changes, users will follow suit and go along with a stripped-down sub-laptop that can't run real software. We've seen this before (anybody remember WebTV?) and it hasn't worked out. The Nokia N770 "Internet Tablet" is another classic example of a device that never took off in a significant way -- just because you can build something like this doesn't mean that anybody wants it.

So I think the industry should look at where netbooks fit in with user needs. I'm hoping Apple does a netbook (a rumor that keeps circulating) since the iPhone OS is, in many ways, an ideal netbook platform -- and the amazing growth of the iPhone app market says a lot about its capabilities. And, knowing Apple, they'll put the user first, rather than the technology.

Update: David Pogue has a great video on the Great Netbook Compromise.

Startup Life: Three Months In

I've posted a story to Medium on what it's been like to work at a startup, after years at Google. Check it out here.