I was invited to give the keynote speech at the European Wireless Sensor Networks conference in Coimbra, Portugal. This was a fantastic location for a conference -- Coimbra has one of the oldest universities in Europe, over 700 years old. It's a beautiful city. EWSN is the European counterpart to conferences such as SenSys and IPSN, and it draws a very different crowd than typically attends those events. I learned a lot about a couple of the big EU-sponsored sensor network projects, including CoNet and GINSENG. Interestingly, the Contiki OS seems to be pretty popular amongst the European research groups, in contrast to the TinyOS-dominated US landscape.
My keynote was entitled "The Next Decade of Sensor Networking" and I tried to argue that the field is running the risk of becoming stagnant unless we define some big research challenges that will carry us for the next decade. I've blogged about these themes here before. I delivered the talk in "Larry Lessig" style -- having written the "script" as an essay and then making slides to highlight the key points, rather than starting with the slides and ad libbing the voiceover as I usually do. I'll post a video here soon - the slides are more than 50 MB and don't really stand on their own.
A couple of highlights from the conference, though I had to miss the last day.
Jayant Gupchup from Johns Hopkins gave a talk on Phoenix, an approach to reconstructing the timestamps for sensor data after the fact. The idea is not to use a time synchronization protocol, but rather to have nodes log enough information to enable post-hoc time correction. This is an interesting problem, motivated by their experience running sensor networks for more than a year, during which they observed a lot of node reboots (which throw off simple timing approaches) and extended periods when there was no suitable global timebase. The Phoenix approach collects information on nodes' local timestamps and beacons from GPS-enabled nodes at the base station, and applies a time rectification technique similar to the one we developed for correcting our volcano sensor network data. Phoenix achieves timestamp accuracy of around 1 second (which is acceptable for environmental monitoring) even when the GPS clock source is offline for a significant period of time.
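To make the idea concrete, here's a rough sketch of the kind of per-segment fitting I imagine is at the core of this style of post-hoc rectification (this is my own illustration, not the Phoenix code, and the anchor values below are made up): for each stretch of samples between reboots, fit a linear mapping from the node's local clock to GPS time using whatever anchor points were logged, then apply that mapping to the segment's samples.

```python
# Minimal sketch of post-hoc time rectification for one reboot segment
# (illustration only, not the actual Phoenix algorithm).
import numpy as np

def rectify_segment(local_anchor_ts, global_anchor_ts, sample_local_ts):
    """Fit global = alpha * local + beta over one reboot segment.

    local_anchor_ts  -- local clock values at which GPS beacons were logged
    global_anchor_ts -- the corresponding GPS (global) timestamps
    sample_local_ts  -- local clock values of the sensor samples to correct
    """
    alpha, beta = np.polyfit(local_anchor_ts, global_anchor_ts, deg=1)
    return alpha * np.asarray(sample_local_ts) + beta

# Made-up example: two GPS anchors and a local clock that runs slightly fast.
local_anchors = [1_000, 61_000]                       # local clock ticks (ms)
global_anchors = [1_700_000_000.0, 1_700_000_059.9]   # seconds since epoch
samples = [10_000, 20_000, 30_000]
print(rectify_segment(local_anchors, global_anchors, samples))
```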
Raghu Ganti from UIUC gave a talk on "Privacy Preserving Reconstruction of Multidimensional Data Maps in Vehicular Participatory Sensing." The title is a bit unwieldy, but the idea is to reconstruct aggregate statistics from a large number of users reporting individual sensor data, such as their vehicle speed and location. The problem is that users don't want to report their true speed and location, but we still want the ability to generate aggregate statistics such as the mean speed on a given road. Their approach is to add noise to each user's data according to a model that would make it difficult for an attacker to recover the user's original data. They make use of the E-M algorithm to estimate the density distribution of the data in aggregate.
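Here's a toy illustration of the perturb-then-aggregate pipeline (not the paper's actual algorithm -- the noise level, mixture settings, and speed distribution below are arbitrary): each user adds noise to their own speed reading before reporting it, and the aggregator fits a mixture model with EM over the perturbed reports to recover aggregate statistics. The real scheme reconstructs the underlying density while accounting for the known noise model; this sketch just shows the basic flow.

```python
# Toy sketch of privacy-preserving aggregation via local perturbation + EM
# (illustration only, not the paper's reconstruction algorithm).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# True (private) speeds of 1000 users on some road segment, in km/h.
true_speeds = rng.normal(loc=55.0, scale=8.0, size=1000)

# Each user perturbs their own reading before reporting it.
noise_sigma = 20.0
reported = true_speeds + rng.normal(0.0, noise_sigma, size=true_speeds.shape)

# Aggregator: EM-based density estimate over the perturbed reports only.
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(reported.reshape(-1, 1))

print("true mean speed:     ", true_speeds.mean())
print("estimated mean speed:", float(gmm.means_.flatten() @ gmm.weights_))
```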
Although the paper considered a number of attacks against the scheme, I was left wondering about a simple binary revelation of whether a user had recently left their home (similar to PleaseRobMe.com). One solution is to delay the data reporting, although an attacker could still learn the approximate time that an individual was likely to leave home each day. The other approach is to perturb the timing data as well, but this would seem to interfere with the ability to ask questions about, say, traffic levels at certain times of day.
Finally, Christos Koninis from the University of Patras gave a talk on federating sensor network testbeds over the Internet, allowing one to run experiments across multiple testbeds simultaneously, with "virtual" radio links between nodes on different testbeds. So you could combine a run on our MoteLab testbed (around 190 nodes) with the TWIST testbed (220 nodes) to get a virtual testbed of more than 400 nodes. This is a very cool idea and potentially extremely useful for doing larger-scale sensor net experiments. Their approach involves routing data over a node's serial port through a gateway server to the other end, where it is injected into the destination testbed at the appropriate point. They can emulate a given packet loss rate across each virtual link, not unlike Emulab. Unfortunately they did not really consider making the cross-testbed packet transmission timings realistic, so it would be difficult to use this approach to evaluate a MAC protocol or time sync protocol. It also does not properly emulate RF interference, but I think this is still a very interesting and useful idea. Another cool aspect of this project is that they can add virtual simulated nodes to the testbed, allowing one to run mixed-mode experiments.
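As I understand it, the gateway logic for a virtual link boils down to something like the following sketch (my interpretation, not their code; the host name and serial-reader function are placeholders): packets read from a node's serial port on one testbed are forwarded over TCP to the other testbed's gateway, dropping each packet with a configured probability to emulate the loss rate of the virtual radio link.

```python
# Minimal sketch of a cross-testbed gateway forwarding packets over a
# "virtual" radio link with emulated loss (illustration only).
import random
import socket

VIRTUAL_LINK_LOSS = 0.15                             # emulated loss rate
REMOTE_GATEWAY = ("testbed-b.example.org", 9000)     # hypothetical endpoint

def forward_packet(packet: bytes, sock: socket.socket) -> bool:
    """Forward one serial-port packet across the virtual link.

    Returns True if the packet was forwarded, False if it was "lost".
    """
    if random.random() < VIRTUAL_LINK_LOSS:
        return False                      # emulate loss on the virtual link
    sock.sendall(len(packet).to_bytes(2, "big") + packet)  # length-prefixed
    return True

# Usage (assuming the remote gateway is listening):
# with socket.create_connection(REMOTE_GATEWAY) as sock:
#     for packet in read_serial_packets():   # hypothetical serial reader
#         forward_packet(packet, sock)
```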