My research is unapologetically applications-driven: we've deployed sensor networks for monitoring volcanoes, disaster response, and for measuring limb movements in patients with Parkinson's Disease. One of the joys of working on sensor networks is that a lot of exciting research derives from close collaborations with domain experts, shedding light on challenges that we wouldn't otherwise be exposed to. It also keeps us in check and ensures we're working on real problems, rather than artificial ones.
At the same time, it's a sad truth that "deployment" or "application" papers often face an uphill battle when it comes to getting published in major conferences. I've seen plenty of (good!) application-focused papers get dinged in program committees for, well, simply not being novel enough. Now, we could have a healthy argument about the inherent novelty of building a real system, getting it to work, deploying it in a challenging field setting, and reporting on the results. But it's true that these papers are pretty different than those about a new protocol, algorithm, or language. I've thought a bit about what makes it harder for apps papers to get into these venues and have come up with the following observations.
1) Getting something to work in the real world often involves simplifying it to the point where most of the "sexy" ideas are watered down.
It is very rare for a successful sensor network deployment to involve brand-new, never-before-published techniques; doing so would involve a tremendous amount of risk. Generally it's necessary to use fairly robust code that embodies well-worn ideas, at least for the underpinnings of the system design (MAC, routing, time sync, and so forth). As a result, the components of the system design might end up not being very novel. Also, many application papers combine several "well known" techniques in interesting ways. Still, when a reviewer picks the paper apart piece by piece, it's hard to identify the individual contributions. The hope is that the whole is greater than the sum of the parts, but this is often difficult to convey.
There is a way to avoid this problem, and that is to write the paper about something other than the "mundane" aspects of the system design itself. For our OSDI paper on the volcano sensor network, we decided to focus on the validation of the network's operation during the deployment, not the individual pieces that made up the system. Although it took a lot of work to take the "well-tested" implementations of major components (such as MultihopLQI) and get them to work robustly in the field, we didn't think the paper could rest on that refinement of previously-published ideas. The Berkeley paper on monitoring redwoods took a similar approach by focusing on the data analysis.
2) Academic research tends to reward those who come up with an idea first, not those who get the idea to work.
There are lots of great ideas in the literature that have only been studied in simulation or small-scale experiments. Almost no credit goes to those who manage to get an idea actually deployed and working under less certain conditions. So even though it might take an incredible amount of sweat to take, say, a routing protocol and get it working on real hardware in a large-scale field deployment, unless you ended up making substantial changes to the protocol, or learned something new about its operation, you're unlikely to get much credit for doing so.
We learned this the hard way with our paper on adapting the ADMR multicast protocol to work on motes, which we needed for the CodeBlue medical monitoring platform. It turns out that taking an existing protocol (which had only been studied using ns-2 with a simplistic radio model, and without consideration for memory or bandwidth limitations of mote-class devices), and implementing it on real hardware, didn't blow away the program committees the way we hoped it would. Eventually, we did publish this work (in the aptly-named REALMAN workshop). But the initial reviews contained things like "everybody knows that MANET protocols won't work on motes!" That was frustrating.
3) Deployments carry a substantial risk that the system won't actually work, making it harder to convince a reviewer that the paper is worth accepting.
Maybe there should be a built-in handicap for real deployment papers. Whereas in the lab, you can just keep tweaking and rerunning experiments until you get the results you want, this isn't possible in the field. On the other hand, it's not clear that we can really hold deployment papers to a different standard; after all, what constitutes a "real" deployment? Is an installation of nodes around an academic office building good enough? (We've seen plenty of those. If the world ever wants to know the average temperature or light level of the offices in a CS department, we are ready!) Or does it have to be in some gritty, untethered locale, like a forest, or a glacier? Does use of machetes and/or pack animals to reach the deployment site count for anything?
Of course, it is possible to get a great paper out of a deployment that goes sideways. The best way is to write the paper as a kind of retrospective, explaining what went wrong, and why. These papers are often entertaining to read, and provide valuable lessons for those attempting future work along the same lines. Also, failures can often take your research into entirely new directions, which I've blogged about before. As an example, we ended up developing Lance specifically to address the data quality challenges that arose in our deployment at Reventador. We would have never stumbled across that problem had our original system worked as planned.
One thing I don't think we should do is sequester deployment and application papers in their own venues, for example, by having a workshop on sensor networks applications. I understand the desire to get like-minded people together to share war stories, but I think it's essential that these kinds of papers be given equal billing with papers on more "fundamental" topics. In the best case, they can enrich an otherwise dry technical program, as well as inspire and inform future research. Besides, the folks who would go to such a workshop don't need to be convinced of the merits of application papers.
Personally, I'd like to see a bunch of real deployment papers submitted to Sensys 2009. Jie and I are thinking of ways of getting the program committee to think outside the box when reviewing these papers, and any suggestions as to how we should encourage a more open-minded perspective are most welcome.
Wednesday, February 25, 2009
Requiem for the Compact Disc
It's the end of an era. Today I packed up my entire CD collection (around 1,000 CDs) and stuffed it away in the basement, never to see the light of day again. I had them all on this glorious solid oak rack -- now for sale on Craigslist! -- that I needed to move due to some remodeling work that we're doing in the house. It's too cumbersome to move that many CDs and then put them back again, so I decided, today was the day -- no more CDs. So here is my eulogy for my Compact Disc collection. It has served me exceptionally well over the years, and I think it deserves a proper sendoff before being relegated, forever, to the dankness of my basement and the phantom dipoles of my hard drive.
I'm an avid music collector -- apart from the physical CDs, I probably have another couple of hundred albums' worth of music in digital format alone. Although I have not bought CDs for more than a year (moving completely over to the Amazon MP3 store, and occasionally iTunes), there was always something comforting about having that big rack of CDs there against the wall, some reassurance in the physical presence of the media. Of course, the whole rack now fits entirely on an iPod. But having them all there, bigger than life, an artistic statement, conversation piece at parties, something to thumb through... now it is all just bits, nothing tangible. Of course, we are talking about music here -- which is by nature ephemeral and invisible -- but the physical CDs were like the projection of that music sent to us from a higher dimension into our humble bounded universe.
Of course, I am not getting rid of the CDs just yet -- just boxing them away. I can't bring myself to part with them. There is something deeply ingrained in my identity with those battered and scratched plastic cases, the copies of Gish and Washing Machine and Doolittle that have been played countless times, discs that have followed me from high school to college, Europe, grad school, here to Boston... they are like old friends, and very much a part of who I am.
What I am most saddened by is that I haven't digitized all of them, and my biggest fear is that I will simply forget about some long-lost gem buried deep in a box in the basement... what if I never get a chance to rediscover that wacky Ween album? The Rahsaan Roland Kirk box set? My God, did I really forget to rip the Boo Radleys and the Dead Can Dance? Will these songs simply fade from memory forever? Without the glimpse of the CD cover, will nothing jog my memory back to that night in San Francisco when I first listened to Kid A while driving over the Bay Bridge? It's all too much.
Finally, a few factoids.
First CD: Music for the Masses by Depeche Mode. Probably bought along with Appetite for Destruction by Guns n' Roses (I had a very confused childhood).
Favorite CD: That's easy. Loveless by My Bloody Valentine. A close runner-up used to be the Smashing Pumpkins' Siamese Dream until Billy Corgan's head expansion got in the way of him being able to write decent music.
Last CD: Mail-order copy of Swell's South of the Rain and Snow. My brother gave me a copy of Thom Yorke's Spitting Feathers for Christmas, which is pretty good.
Favorite record store: Tie between Amoeba and Rasputin's in Berkeley. I used to go there every other week or so and buy 10, 12 CDs at a time. Most of them I had never heard (or heard of) before, and many I only listened to once or twice. But not infrequently I would just discover something -- maybe based on the band name, or the album art -- that blew my mind. Interpol's Turn on the Bright Lights is a recent example of something plucked entirely at random from the bin (before they became popular, of course.)
Favorite online record store: Amazon MP3 store. Entirely DRM-free, all MP3 format, and a very good selection. It's rare I can't find what I'm looking for there. There's a free download manager that will automatically install new music into your iTunes library too.
Best music website: Pitchfork Media. Focus on indie music, very much in line with my tastes, with five detailed album reviews every day. Silly scoring system but anything rated 8.5 or higher I usually just buy (without preview) as long as it's not death metal or twee.
Friday, February 20, 2009
How I almost killed Facebook
It is a little-known fact that I nearly killed Facebook before it started.
At least, that's what I like to think. Though, perhaps it was inevitable that Facebook would have happened despite my attempted interference. I should make it clear that I'm a huge fan of Facebook and am, well, kind of addicted to it. But imagine how different the world might have been...
You see, back in 2004, Mark Zuckerberg was a student in my Operating Systems course (the much-feared and oft-maligned CS161, widely known as one of the more challenging courses at Harvard). Zuck was one of those students who pretty much nailed the course material without my having to teach it to him; this trait often goes hand-in-hand with having plenty of, shall we say, extracurricular interests and as a result he was not coming to lectures very regularly. At one point I called him to my office to find out where he had been, since class participation (by which I roughly mean, coming to lecture and not falling asleep) is part of the course grade.
Zuck explained that he was very busy starting a new social networking website called Facebook. Besides, he had also been in some trouble with the University for appropriating the digital database of student photo IDs and putting up a kind of "hot-or-not" site where visitors rated the relative attractiveness of two randomly-selected pictures. (I like to think that I taught Zuck something about how to implement that in an efficient and deadlock-free manner, but that would no doubt be giving myself too much credit.)
Of course, at that time I thought that social networking sites were a complete waste of time -- both for the users and those developing the sites -- so I earnestly tried to talk Mark out of squandering his precious Harvard education on such a frivolous endeavor. "You think you're going to compete against Friendster and Orkut?" was the general outline of my argument. There were already too many social networking sites out there, I claimed, and building yet another one was clearly a waste of time. After all, didn't he want to graduate? And make an A in CS161 while he was at it?
Zuck was -- remarkably! -- unfazed by this line of thinking and went ahead and built Facebook, and just to prove how useless everything we teach about efficiency and careful software design in CS161 really is, he implemented it in PHP. And it worked. Oh yeah, and he managed to grow the site to over 175 million users. So clearly I know pretty much zip when it comes to figuring out what a good startup business strategy is going to be, which explains why I should remain safely ensconced in my office at Harvard.
I want to take a little credit for one aspect of Facebook, though. At a later meeting (after Zuck had left Harvard and gone off to raise millions in capital) we were talking about the future of Facebook and I suggested to him that they really needed to find a way to get people to log in to the site regularly. With most social networking sites, you sign up, add your few dozen friends, and maybe for one or two weeks get a kick out of messaging them as they join your friend list. But after that, there's little or no reason to keep returning to the site -- as a result your profile just stagnates. Well, wouldn't you know it -- a few months later Facebook came out with the News Feed feature, which shows you what all of your friends are up to on an up-to-the-minute basis. Pure genius! Had I only thought to patent the idea before Mark got his grubby hands on it I would be a very rich (and far less grumpy) man right now.
Ah well. It is, after all, my firmly held conviction that the goal of academia is to broadly disseminate good ideas for others to exploit. All I really need to do is keep publishing papers, and I don't care how many millions that News Feed is worth, it ain't good enough to get into SOSP. Right?
One last thought. If, by any chance, you are thinking about forming your own startup, by all means come talk to me first, and do exactly the opposite of what I tell you. Guaranteed success!
(Update 2/26/09: Some of those commenting here seem to have missed the intended irony of this post. In response to one of the comments, yes, I did in fact invent the Internet. As it turns out, I am also responsible for many other features of Facebook, including the ability to link to your friends, set your status, post photos, and of course -- my favorite feature by far -- superpoke. These were all originally my ideas and I hope you have enjoyed using them during your own Facebook experience. Thanks for giving me the opportunity to clarify!)
Wednesday, February 18, 2009
Blogging a research project?
This term I am teaching a graduate seminar on wireless sensor networks. Actually, "teaching" is not quite the right word, as I see my role as mainly that of moderating a discussion between the students in the course, and raising the occasional controversial point as grist for the mill. Normally, in these kinds of courses, the content of the discussion is lost to the ether, so this term I decided to run a blog where the students post a summary of our conversation about the assigned papers. Students are encouraged to put their own editorial slant on the content of the blog posting, and the blogger for each class is responsible for leading the discussion.
So far it's been a lot of fun and provides some permanence to those revelations and insights that, otherwise, would be terribly ephemeral. It also gives the students a chance to write up their ideas a bit more formally, with a broader audience than, say, simply emailing them to me.
This has got me thinking about the potential role of blogging in a research project. Timothy Gowers has started a very interesting blog-based mathematics project in which the blog itself serves as the medium for collaborative discovery. I started wondering whether this model might translate into, say, a computer systems research project. Sometimes this happens through email lists and impromptu collaborations between people who already know each other, but opening up the project on a blog seems to offer orders of magnitude more opportunity for networking and learning from one another in rapid and informal ways.
As an intermediate step in this direction, I am currently writing an NSF proposal for the Cyber-Physical Systems program (as is everyone else I know), and as part of the "broader impact" statement I've decided we're going to blog the research project, if it is funded of course. The idea is simple: Every week or so, one of the students or PIs will post a short article on the progress on the project to date, and invite comments. It's possible this has been done before, but I haven't seen any major systems projects adopt such a model.
I think this could yield a number of interesting results. The only way most systems research projects are presented to the world is through a small number of published papers. Necessarily, those papers capture the "successes" of the project and generally do not dwell upon the many blind alleys and outright failures that led up to the big result. By blogging the process of the project as it unfolds, other researchers, especially students, could learn from the mistakes of our work and also learn something about the trials and tribulations of a typical project. I'd love to tell one of my future grad students to simply read, say, a year's worth of a project blog to understand how much hard work and hacking went into what turned out to be a beautiful paper.
Also, blogging the effort could get potential collaborators and even the public a lot more interested in what we're doing. I love hearing about early prototypes and "conceptual designs" when they leak to blogs like Engadget; though most of these things never see the light of day, they can be pretty inspiring. Of course, there's always that fear that you'll get scooped if you tell the world about your great ideas before they are published in a conventional scientific venue, or that you'll look like an idiot when you blog about how you spent three weeks tracking down a missing minus sign in your code. (I have done this.) On the other hand, opening up the process of doing research seems to me to be the ultimate form of outreach and could offer the next generation of students a much better picture of what really happens in grad school.
Finally, this could potentially lead to a lot of unexpected collaborations getting started. Some of my best collaborations have started through random encounters: a former grad student of Margo Seltzer's introduced me to a geophysicist (Jonathan Lees at UNC) whom I have worked with on sensor networks for volcano monitoring, and someone attending one of my talks turned out to be looking for a wireless sensor solution for clinical assessment of neuromotor diseases (Paolo Bonato at the Spaulding Rehabilitation Hospital). My theory is the greater the surface area you expose, the more connections you're going to make.
The biggest risk, I think, is that of your ideas being stolen. However, I've always felt that the whole point of doing research is for others to take your ideas and run with them. For this reason I've always released the code my group develops under an open source license and simply not worried about who picks it up for what purpose. (But that's a topic for another post!) Realistically, we are always sharing our ideas, when giving talks, writing papers, or bumping into someone at a conference. Could blogging increase the bisection bandwidth?
Update (22 Feb 09) - Speaking of blogging research, here's the blog for the Berkeley Cloud Computing project.
Monday, February 16, 2009
Time management for junior faculty
At SEAS this term we've been holding a professional development seminar for graduate students, basically an informal series of lunchtime talks in which faculty talk about issues like doing a job search, giving a talk, academic writing, and so forth. It's a great idea and touches on a lot of issues that I wish had been brought up in a more formal setting when I was a graduate student. Last week, Vinny Manoharan and I gave a presentation on time management, and this got me thinking about effective techniques for managing one's time, many of which I only learned as a new faculty member.
Of course, I continually struggle with managing my time, especially with distractions like e-mail and reading Digg while trying to get "real work" done. But in the last few years I've developed a few tricks that I find pretty helpful in terms of structuring my day.
The first is to protect my "deep thinking" time, namely, the mornings, when I get my best thinking (and writing) done. I generally disallow any meetings on Monday-Thursday before noon. (Fridays end up being my one back-to-back meeting day.) Not only does this give me a long stretch of uninterrupted time to work on papers or proposals, but it leverages the mental clarity and lexical eloquence that I only seem to manage first thing in the morning. During this time, I focus on writing and thinking, and avoid any kind of administrative distractions, preparing for class, or meetings. Those I save for the afternoon when I am generally mentally fried and the context-switch overhead is less detrimental.
The second is to carve out one full day of the week -- Wednesday in my case -- and have no meetings that day. Though I am only successful in this about 30% of the time, having a whole day with no meetings means I can dig deeply into projects that I have been putting off, such as starting a new paper or -- gasp! -- actually doing some hacking. I really look forward to Wednesdays and they also give me a chance to catch up on things that pile up during the week.
The third big change in my schedule has been to think twice before agreeing to do something, whether it be to join in on a grant proposal with some colleagues, write a book chapter, or give a talk. Individually, these things might be great, but when I find myself with too many balls in the air my life gets really miserable and I end up thrashing just trying to stay afloat. Before I started at Harvard, Margo Seltzer gave me an invaluable piece of advice: learn how to say no. Although it's hard to imagine as a grad student, as faculty everyone wants a piece of you -- whether it be for sitting on university committees, helping to write a grant, serving on a program committee.... the list goes on. I've since learned that saying no is essential to keeping your sanity.
Lately I have been actively cutting back on my list of projects, and generally saying no to new projects (no matter how interesting they might be), unless they directly dovetail with something I'm already working on. I'm also saying no to any PCs at the moment since I'm cochairing Sensys 2009, and expect that to take up a significant chunk of my time from now until July. (Also, I somehow managed to find myself on five separate PCs at once not long ago, and that was a world of hurt.) It is very hard to do, since I don't like letting people down, and I actually really like having lots of varied things to work on. But having less on my plate means I can go more deeply into each project and hopefully keep tabs on everything without getting pulled in too many directions.
Of course, I told myself for a year or so that I wouldn't start blogging because the last thing I needed was another project... well, aren't rules made to be broken?
Saturday, February 14, 2009
Your suggestions for SenSys 2009
This year, Jie Liu and I are program co-chairs for ACM SenSys 2009, the 7th year of this highly successful conference on sensor networks. David Culler is the general chair and is doing a great job making the conference actually happen (by comparison, the job that Jie and I have is pretty easy).
SenSys was started, in part, to provide a venue for true systems papers on sensor nets, as opposed to the large body of theoretical and simulation-based work in the area. Prior to SenSys, there was no single good venue to publish these papers: they did not quite belong in traditional networking or mobile communications conferences; nor did they represent a substantial fraction of the programs at the mainstays of the systems community (such as SOSP and OSDI). I think SenSys has done a great job of establishing itself as the premier conference for sensor network systems research, and it continues to be an extremely vibrant and competitive venue.
Now that SenSys has been around for a while, Jie and I thought it would be worthwhile to reflect on the state of the WSN research community and consider what the conference's role should be going forward. So, we'd like to hear your thoughts on what direction SenSys should take this year. Please feel free to comment on this article, or just email me with any ideas you have.
As you can tell from the call for papers and the excellent program committee we have put together, we are trying something different with SenSys this year. First, we are bringing in a number of PC members who are outside of the "traditional" WSN community, in the hopes of keeping us on our toes, and maintaining a sanity check on the quality of the papers compared to other areas, such as wireless and mobile networking. Second, we are hoping to broaden the scope of the conference to encompass "non-traditional" sensor network systems, such as the use of mobile phones or underwater sensor nets. At the same time, we have to be careful not to water down the core of the conference (no pun intended). Finally, I am hoping to re-calibrate the conference in terms of its expectations of what constitutes a good paper. In my opinion, not enough papers submitted to SenSys (and other WSN venues) constitute really strong systems papers, in terms of technical depth, presentation, and quality of the evaluation. So while we want to become more inclusive it is also important to maintain high standards.
So, we'd like to get your thoughts on the conference, the reviewing process, and your hopes for what direction we might take. As a reminder, the abstract submission deadline is April 1, with full papers due April 8 - see the full call for papers here.
Wednesday, February 11, 2009
The Berkeley Systems Model
One thing we have discussed a bit in my graduate seminar this term is the tension between complexity and simplicity in system designs. As a simple illustrative example, consider B-MAC versus Z-MAC, two MAC protocols for WSNs with very different underlying design philosophies. B-MAC (the B stands for "Berkeley") is a simple, almost minimalist approach to MAC design: a couple of simple primitives (low power listening and ACKs) with a lean API permitting a layer on top to tune the LPL check interval and to turn ACKs on and off. That's it. The paper argues that with this basic set of mechanisms you can build a wide range of policies on top, including more sophisticated protocols using RTS/CTS or TDMA. Of course, that is largely left to the reader.
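To make the contrast concrete, here is a minimal sketch (plain C rather than actual TinyOS/nesC code, with hypothetical names) of the kind of lean control surface the B-MAC paper describes: the layer above gets to tune the low-power-listening check interval and toggle link-layer ACKs, and that's about it.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical B-MAC-style configuration: the only two knobs exposed upward. */
typedef struct {
    uint16_t lpl_check_interval_ms;  /* how often the radio wakes to sample the channel */
    bool     acks_enabled;           /* link-layer acknowledgements on or off */
} mac_config_t;

void mac_set_lpl_interval(mac_config_t *cfg, uint16_t interval_ms) {
    cfg->lpl_check_interval_ms = interval_ms;
}

void mac_set_acks(mac_config_t *cfg, bool enabled) {
    cfg->acks_enabled = enabled;
}

/* A routing or application layer builds its own policy on top: long check
 * intervals for low-duty-cycle monitoring, ACKs on for reliable unicast,
 * ACKs off for broadcast floods, and so on. */

Everything else -- RTS/CTS, scheduling, adapting to contention -- is policy left to whoever sits above this interface.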
Z-MAC, on the other hand, takes quite the opposite approach. It is a hybrid between CSMA and TDMA, includes mechanisms for two-hop neighbor discovery, slot assignment, time synchronization, and adaptivity to low- and high-contention modes using an epidemic ECN protocol. None of these mechanisms are exposed or tunable by the layers above; none of them can be disabled in different situations. Z-MAC is like a rich, sumptuous meal at a two-Michelin-star restaurant (I'll reserve the three stars for later work); whereas B-MAC is like the menu at In-n-Out: "Hamburger. Cheeseburger. Double Double."
Personally, I tend to recoil at designs that pack in so much complexity, especially for something so fundamental as a MAC protocol. (If for no better reason than code footprint -- with Z-MAC, how much memory is left over for the application, anyway?) This leads me to what I like to call the Berkeley Systems Model: a certain approach to doing systems research that strives for elegance and austerity above all. I'm sure it's not unique to Berkeley -- perhaps I should just call it the Harvard Systems Model -- but it seems to be best characterized by work such as Patterson's RISC and RAID, and Culler's Active Messages (by which I mean the original work, not the TinyOS manifestation of the same idea). Compare Active Messages to its main competitor at the time -- distributed shared memory -- and you'll understand immediately what the B.S.M. is all about. The B.S.M. is found in many other places (not all of which are populated with Berkeley alums); and arguably one could claim it actually originated at MIT or UCLA, but let's not split hairs.
Having thought a fair bit about this, I think there are two factors at work in shaping the B.S.M. mindset. The first, frankly, is simply a distaste (or perhaps a fear?) of complexity, irrespective of the merits of whatever system embodies it. Culler is well known for pushing back on work that has too many knobs, bells, or whistles -- he ingrains in his students a deep appreciation for minimalism. If you compare some of the earliest systems work in sensor nets -- the TinyOS model and its own version of Active Messages -- with, say, directed diffusion, it is immediately evident that the whole idea of combining naming, routing, querying, aggregation, and MAC into a single layer is just not written in the Tao of Culler.
But the deeper, and more important (I think) motivation is the desire to obtain clarity in terms of the fundamental underpinnings for a given system design. Much of the B.S.M. is about stripping away the layers, pulling a system apart into its many constituent pieces, reasoning about how they fit together, which ones belong, and which ones don't. To take another Berkeley example, Eric Brewer is fairly adept at this incisive mode of thinking; you can talk to him for 10 minutes about a system he's never heard of and he'll make an observation that forces you to rethink your whole design. So, I think of the B.S.M. as, essentially, about meditating on a system design, focusing on a mantra while opening one's mind to the whole.
All of that said, there are some real merits to the "kitchen sink" approach. The main one being that a heck of a lot of intellectual satisfaction can be derived from reading such a paper. While a paper from the B.S.M. school has a clean, sharp edge and leaves little aftertaste, something like Z-MAC really gives you something to sink your teeth into. There is so much going on in that paper: an intricate object with innumerable facets and hooks to explore. Of course, there are risks on both sides. A B.S.M. paper might feel vacuous, or fluffed-up, if the One Good Idea is just not profound enough. A kitchen sink paper tends to have a lot of territory to defend, especially in the initial reviewing round when PC members are looking for whatever chinks in the armor to shoot the thing down.
Also, it is clear one can take either approach to research and be successful; within a single community or conference there are plenty of examples of both types of work. It would not surprise me to learn that one's school of thought largely shapes how one approaches these papers, in PC meetings for example. This debate does not usually rise to the surface of the discussions we have in PC meetings themselves, but I think it underlies much of the dissonance between scores that some papers receive, touching a nerve in one camp while mollifying another. In co-chairing Sensys 2009 I'll be interested to see how this plays itself out.
Sunday, February 8, 2009
On the serendipity of failure
My post on the relative difficulty of sensor nets research versus "traditional" systems research made an allusion to a comment that David Culler once made about the handicap associated with field work. David notes that in a field deployment, unlike a simulation or a lab experiment, if you get data you don't like, you can't just throw it away and run the experiment again, hoping for a better result. All systems researchers tend to tweak and tune their systems until they get the graphs they expect to get; it's one of the luxuries of running experiments in a controlled setting. With sensor network field deployments, however, there are no "do overs". You get whatever the network gives you, and if the network is broken or producing bogus data, that's what you have to go with when it's time to write it up. One of the most remarkable examples I have seen of this is the Berkeley paper on "A Macroscope in the Redwoods." In that case, the data was actually retrieved from the sensor nodes manually, in part because the multihop routing did not function as well as expected in the field. They got lucky by logging all of the sensor data to flash on each mote, making it possible to write that paper despite the failure.
We learned this lesson the hard way with our deployment at Reventador Volcano in 2005: the FTSP time sync protocol broke (badly), causing much of the data to have bogus timestamps, rendering the signals useless from a geophysical monitoring point of view. Note that we had tested our system considerably in the lab, on much larger networks than the one we deployed in the field, but we never noticed this problem until we got to the volcano. What went wrong? Many possibilities: The sparse nature of the field network; the fact that the nodes were powered by real alkaline batteries and not USB hubs; the fact that the time sync bug only seemed to turn up after the network had been operational for several hours. (We had done many hours of testing in the lab, but never continuously for that period of time.)
In our case, we managed to turn lemons into lemonade by designing a scheme to fix the broken timestamps, and then did a fairly rigorous study of its accuracy. That got us into OSDI! It's possible that if the FTSP protocol had worked perfectly we would have had a harder time getting that paper accepted.
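To give a flavor of what "fixing the broken timestamps" means -- and this is only a toy illustration, not the scheme from the paper -- the core idea is to take a handful of (node-local clock, global time) reference pairs you still believe, fit a line through them, and use that fit to re-timestamp every sample. The real scheme had to identify and discard bad reference points and was validated far more carefully; the numbers below are made up.

```c
/* Toy timestamp rectification: fit global = a * local + b by least squares
 * over trusted reference pairs, then apply the fit to suspect timestamps. */
#include <stdio.h>

typedef struct { double local; double global; } ref_t;

/* Ordinary least-squares fit of global = a * local + b. */
static void fit_clock(const ref_t *refs, int n, double *a, double *b) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx  += refs[i].local;
        sy  += refs[i].global;
        sxx += refs[i].local * refs[i].local;
        sxy += refs[i].local * refs[i].global;
    }
    *a = (n * sxy - sx * sy) / (n * sxx - sx * sx);  /* clock rate (skew) */
    *b = (sy - *a * sx) / n;                         /* clock offset      */
}

int main(void) {
    /* Made-up reference pairs: node-local ticks vs. global (GPS) seconds. */
    ref_t refs[] = {
        { 1000.0, 10.00 }, { 2000.0, 20.02 },
        { 3000.0, 30.01 }, { 4000.0, 40.03 },
    };
    double a, b;
    fit_clock(refs, 4, &a, &b);

    /* Rectify a sample whose original timestamp we no longer trust. */
    double local_sample = 2500.0;
    printf("rectified time: %.3f s (rate=%.5f, offset=%.3f)\n",
           a * local_sample + b, a, b);
    return 0;
}
```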
I often find that the parts of these application papers describing what didn't work as expected are more enlightening than the rest of the paper. Lots of things sound like good ideas on paper; it's often not until you try them in the field that you gain any genuine understanding of the real-world forces at work.
Later on I'll blog about why sensor net application papers face such an uphill battle at most conferences.
Saturday, February 7, 2009
All pain, all gain
Jon Howell from MSR came to visit yesterday and we got into an interesting discussion about the risk versus reward tradeoff for different approaches to research. Two of my Ph.D. students (Bor-rong Chen and Konrad Lorincz) are graduating this year -- you should hire them, of course -- but they are facing a weak job market, and the need to rack up publications is as important as ever. The question is, had we worked on problems in the more traditional systems space, would we have been more successful at cranking out papers?
It has long been my belief that doing research in wireless sensor networks -- especially the applied and experimental variety, where you actually have to build something that works -- involves a (nontrivial) degree of difficulty that is not present in "traditional" systems research. Think of it: Programming motes requires that everything fit into 10KB of RAM. You don't get malloc, threads, or printf. All you get are three LEDs to tell you what's going on. Half the time a mote you have on your desk doesn't program correctly, or is simply fried, requiring that you go hunt around for another one. Scaling up to a full network requires debugging complex interactions between all of these motes, and keep in mind you typically don't get to inspect what each one is doing -- and communication and node failures are rampant.
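For the uninitiated, here is roughly what "debugging with three LEDs" boils down to: encode the node's current phase (or last fatal error) as a 3-bit pattern and hope you see the right blink before the node dies. This sketch just prints the pattern; on a real mote led_set() would drive the LED pins. All of the names here are illustrative, not from any particular codebase.

```c
/* Toy sketch of LED-code debugging on a mote with no console. */
#include <stdio.h>
#include <stdint.h>

enum phase {
    PHASE_BOOT      = 1,  /* 001: node booted             */
    PHASE_TIMESYNC  = 2,  /* 010: waiting for time sync   */
    PHASE_SAMPLING  = 3,  /* 011: sampling sensors        */
    PHASE_ROUTING   = 4,  /* 100: joined the routing tree */
    PHASE_ERR_FLASH = 7,  /* 111: flash write failed      */
};

static void led_set(uint8_t bits) {
    /* On real hardware: drive the three LED pins from bits 0-2. */
    printf("LEDs: %c%c%c\n",
           (bits & 4) ? '*' : '.',
           (bits & 2) ? '*' : '.',
           (bits & 1) ? '*' : '.');
}

int main(void) {
    led_set(PHASE_BOOT);
    led_set(PHASE_TIMESYNC);
    led_set(PHASE_SAMPLING);
    led_set(PHASE_ERR_FLASH);   /* the one you hope never to see */
    return 0;
}
```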
And God forbid you try to run anything in a real field setting (like a redwood forest, or even a volcano) -- then it really, REALLY has to work. As David Culler says, in a field deployment you can't just throw away the data you don't like and rerun the experiment. You have to take what you get. More often than not the network doesn't work as you expect, or seemingly trivial problems (like water getting into the cases) cause nodes to fail.
It's been a while since I focused on conventional distributed systems -- nodes running UNIX, connected to the Internet, that kind of thing. But it seems considerably easier in that environment to build up something complex and debug it to the point where it works. After all, you can ssh into the nodes, dump everything to a log, and use tried-and-true tools like gdb and strace. Of course, there's still plenty of heavy lifting involved. Hakim Weatherspoon's work getting the OceanStore prototype to run on 400+ PlanetLab nodes for several months is no mean feat. But if I took away his ssh connections and replaced them with three LEDs, I wonder what he'd do. (Of course, Hakim would have still rocked it. But that's Hakim.)
This is not to diminish the intellectual contribution of mainstream systems research at all. Indeed, one could argue that the lower barrier to entry has made it possible for those working on conventional systems to innovate more rapidly and produce deeper insights than those of us battling broken motes and crappy radios. So I wonder what advice I should be giving new grad students wading into the field. A lot of the low-hanging fruit in sensor nets has been picked. To make a substantial contribution in the area you need to take things in a different direction than those who came before you. Fortunately, the TinyOS community has been doing a much better job lately of providing standard libraries and protocols to lower the bar, but there's still a lot of pain involved in getting to the research frontier. (In another post, I'll muse on why so many people work on MAC protocols. I suspect it's because doing so requires a lot less reliance on other people's code.)
My group has been doing more work with the iMote2 platform lately, precisely because I think it provides an easier-to-use, more functional vehicle for driving research. Mostly this is because it has a good enough CPU and enough memory to push on some interesting ideas without having to wring your hands over every byte of RAM your code uses. But going forward, I wonder if some of the "gap" that people see in the sensor nets space isn't merely due to the blood, sweat, and tears that go into getting anything complicated to really work. We should think about how to remove some of those obstacles to innovation, not to mention publication.