My research is unapologetically applications-driven: we've deployed sensor networks for monitoring volcanoes, disaster response, and measuring limb movements in patients with Parkinson's Disease. One of the joys of working on sensor networks is that a lot of exciting research derives from close collaborations with domain experts, shedding light on challenges that we wouldn't otherwise be exposed to. It also keeps us in check and ensures we're working on real problems, rather than artificial ones.
At the same time, it's a sad truth that "deployment" or "application" papers often face an uphill battle when it comes to getting published in major conferences. I've seen plenty of (good!) application-focused papers get dinged in program committees for, well, simply not being novel enough. Now, we could have a healthy argument about the inherent novelty of building a real system, getting it to work, deploying it in a challenging field setting, and reporting on the results. But it's true that these papers are pretty different from those about a new protocol, algorithm, or language. I've thought a bit about what makes it harder for application papers to get into these venues and have come up with the following observations.
1) Getting something to work in the real world often involves simplifying it to the point where most of the "sexy" ideas are watered down.
It is very rare for a successful sensor network deployment to involve brand-new, never-before-published techniques; doing so would involve a tremendous amount of risk. Generally it's necessary to use fairly robust code that embodies well-worn ideas, at least for the underpinnings of the system design (MAC, routing, time sync, and so forth). As a result, the components of the system design might end up not being very novel. Also, many application papers combine several "well known" techniques in interesting ways, but when a reviewer picks the paper apart piece by piece, it's hard to identify the individual contributions. The hope is that the whole is greater than the sum of its parts, but this is often difficult to convey.
There is a way to avoid this problem, and that is to write the paper about something other than the "mundane" aspects of the system design itself. For our OSDI paper on the volcano sensor network, we decided to focus on the validation of the network's operation during the deployment, not the individual pieces that made up the system. Although it took a lot of work to take the "well-tested" implementations of major components (such as MultihopLQI) and get them to work robustly in the field, we didn't think the paper could rest on that refinement of previously-published ideas. The Berkeley paper on monitoring redwoods took a similar approach by focusing on the data analysis.
2) Academic research tends to reward those who come up with an idea first, not those who get the idea to work.
There are lots of great ideas in the literature that have only been studied in simulation or small-scale experiments. Almost no credit goes to those who manage to get an idea actually deployed and working under less certain conditions. So even though it might take an incredible amount of sweat to take, say, a routing protocol and get it working on real hardware in a large-scale field deployment, unless you ended up making substantial changes to the protocol, or learned something new about its operation, you're unlikely to get much credit for doing so.
We learned this the hard way with our paper on adapting the ADMR multicast protocol to work on motes, which we needed for the CodeBlue medical monitoring platform. It turns out that taking an existing protocol (which had only been studied using ns-2 with a simplistic radio model, and without consideration for the memory or bandwidth limitations of mote-class devices) and implementing it on real hardware didn't blow away the program committees the way we hoped it would. Eventually, we did publish this work (in the aptly-named REALMAN workshop), but the initial reviews contained things like "everybody knows that MANET protocols won't work on motes!" That was frustrating.
3) Deployments carry a substantial risk that the system won't actually work, making it harder to convince a reviewer that the paper is worth accepting.
Maybe there should be a built-in handicap for real deployment papers. Whereas in the lab, you can just keep tweaking and rerunning experiments until you get the results you want, this isn't possible in the field. On the other hand, it's not clear that we can really hold deployment papers to a different standard; after all, what constitutes a "real" deployment? Is an installation of nodes around an academic office building good enough? (We've seen plenty of those. If the world ever wants to know the average temperature or light level of the offices in a CS department, we are ready!) Or does it have to be in some gritty, untethered locale, like a forest, or a glacier? Does use of machetes and/or pack animals to reach the deployment site count for anything?
Of course, it is possible to get a great paper out of a deployment that goes sideways. The best way is to write the paper as a kind of retrospective, explaining what went wrong, and why. These papers are often entertaining to read, and provide valuable lessons for those attempting future work along the same lines. Also, failures can often take your research into entirely new directions, which I've blogged about before. As an example, we ended up developing Lance specifically to address the data quality challenges that arose in our deployment at Reventador. We would never have stumbled across that problem had our original system worked as planned.
One thing I don't think we should do is sequester deployment and application papers in their own venues, for example, by having a workshop on sensor network applications. I understand the desire to get like-minded people together to share war stories, but I think it's essential that these kinds of papers be given equal billing with papers on more "fundamental" topics. In the best case, they can enrich an otherwise dry technical program, as well as inspire and inform future research. Besides, the folks who would go to such a workshop don't need to be convinced of the merits of application papers.
Personally, I'd like to see a bunch of real deployment papers submitted to Sensys 2009. Jie and I are thinking of ways of getting the program committee to think outside the box when reviewing these papers, and any suggestions as to how we should encourage a more open-minded perspective are most welcome.