
Peloton: Coordinated Resource Management for Sensor Networks

I have posted the camera-ready PDF of our forthcoming paper, Peloton: Coordinated Resource Management for Sensor Networks, to be presented at HotOS 2009. The paper describes a distributed operating system for sensor nets intended to provide the right abstractions for making coordinated resource-management decisions across multiple nodes. The key idea is that getting the best energy efficiency often requires multiple nodes to orchestrate their roles.

The canonical example arises in a simple data-collection application, where each node samples data locally, perhaps performs some local processing, and forwards the data to a base station along a multihop spanning tree. To achieve long lifetimes, each node must decide how to mete out its limited energy reserves across sampling, computation, transmission, listening for and forwarding other nodes' packets, and overheads such as maintaining time synchronization. Today this is generally done in an ad hoc manner, using manually tuned, static schedules that fail to react to changes in network load and energy availability over time. As a result, existing systems are typically very energy-inefficient, leaving a lot of room for improvement.
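To make the problem concrete, here is a minimal sketch of the kind of static, hand-tuned energy schedule I mean. This is not from the paper; the budget, task names, and fractions are all made up. The point is simply that the split is fixed at deployment time and never responds to load or battery state:

```c
/* A minimal sketch (hypothetical numbers and task names) of a
 * static, manually tuned energy schedule: a fixed daily budget is
 * split among tasks by hard-coded fractions, regardless of actual
 * network load or remaining battery capacity. */
#include <stdio.h>

#define DAILY_BUDGET_MJ 1000.0  /* hypothetical daily energy budget, millijoules */

struct task_share {
    const char *name;
    double fraction;  /* hand-tuned, fixed at deployment time */
};

int main(void) {
    struct task_share schedule[] = {
        { "sampling",   0.20 },
        { "processing", 0.10 },
        { "transmit",   0.25 },
        { "listen",     0.35 },
        { "timesync",   0.10 },
    };
    size_t n = sizeof schedule / sizeof schedule[0];
    double total = 0.0;
    for (size_t i = 0; i < n; i++) {
        double mj = DAILY_BUDGET_MJ * schedule[i].fraction;
        printf("%-10s %6.1f mJ/day\n", schedule[i].name, mj);
        total += mj;
    }
    printf("total      %6.1f mJ/day\n", total);
    return 0;
}
```

If traffic doubles or a node's battery drains faster than expected, a schedule like this keeps spending energy in the same proportions anyway; that rigidity is exactly what coordinated, adaptive allocation is meant to fix.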

Our Pixie operating system provides node-level abstractions for reasoning about resource availability, giving applications the opportunity to adapt their behavior accordingly. The idea in Peloton is to extend this model across the network. (We named the system after the peloton in a road cycling race, in which cyclists draft one another closely to reduce wind drag and increase efficiency.)
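As a rough illustration of the node-level model, a resource ticket can be thought of as the right to consume some quantity of a resource before a deadline, with the application adapting when a request is denied or only partially granted. This is a simplified sketch with my own names, not the actual Pixie interface:

```c
/* A simplified sketch (not the real Pixie API): a "ticket" grants
 * the right to use some quantity of a resource before an expiration
 * time; the application scales its work to whatever was granted. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum { RES_ENERGY = 1 };  /* hypothetical resource identifier */

struct ticket {
    uint32_t resource;    /* which resource this ticket covers */
    uint32_t amount;      /* granted quantity, resource-specific units */
    uint32_t expires_ms;  /* ticket is void after this deadline */
};

/* Stand-in for the allocator: grant at most half of what was asked,
 * simulating a partially satisfied request. */
static bool request_ticket(uint32_t res, uint32_t amount,
                           uint32_t expires_ms, struct ticket *out) {
    out->resource = res;
    out->amount = amount / 2;
    out->expires_ms = expires_ms;
    return out->amount > 0;
}

int main(void) {
    struct ticket t;
    /* Ask for 100 energy units over the next second; adapt the
     * sampling rate to whatever the ticket actually grants. */
    if (request_ticket(RES_ENERGY, 100, 1000, &t)) {
        uint32_t samples = t.amount / 10;  /* assume 10 units per sample */
        printf("granted %u units, taking %u samples\n", t.amount, samples);
    } else {
        printf("no energy available; sleeping this period\n");
    }
    return 0;
}
```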

There are three basic mechanisms in Peloton. First, nodes share information on their resource state through a global but weakly consistent tuple space, promoting informed decision-making. Second, we extend Pixie's resource ticket abstraction to support distributed resource allocations, called vector tickets. Third, we decouple the allocation of resources from their use through a distributed ticket agent model: nodes can delegate responsibility for resource allocation to a ticket agent operating centrally, remotely, or collectively.
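To give a flavor of how these pieces might fit together, here is a hedged sketch with made-up names, not Peloton's actual interfaces: a vector ticket is pictured as a single allocation naming an energy grant on each node along a routing path, handed out by a ticket agent that would consult the shared resource state in the tuple space:

```c
/* A hedged sketch of a "vector ticket" spanning a routing path: one
 * allocation that names an energy grant on each node between a
 * source and the base station. All names are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define MAX_PATH 8

struct node_grant {
    uint16_t node_id;
    uint32_t energy;  /* energy granted on this node, arbitrary units */
};

struct vector_ticket {
    uint8_t len;                        /* hops covered by the allocation */
    struct node_grant grants[MAX_PATH];
    uint32_t expires_ms;
};

/* A ticket agent (central, remote, or collective) would consult the
 * shared tuple space of per-node resource state; here we simply
 * fabricate a grant per hop to show the shape of the result. */
static void agent_allocate(struct vector_ticket *vt,
                           const uint16_t *path, uint8_t hops) {
    vt->len = hops;
    vt->expires_ms = 5000;
    for (uint8_t i = 0; i < hops; i++) {
        vt->grants[i].node_id = path[i];
        vt->grants[i].energy = 100 - 10 * i;  /* pretend upstream nodes are scarcer */
    }
}

int main(void) {
    uint16_t path[] = { 7, 3, 1 };  /* source -> relay -> base station */
    struct vector_ticket vt;
    agent_allocate(&vt, path, 3);
    for (uint8_t i = 0; i < vt.len; i++)
        printf("node %u: %u units until t=%u ms\n",
               vt.grants[i].node_id, vt.grants[i].energy, vt.expires_ms);
    return 0;
}
```

The appeal of decoupling allocation from use is visible even in this toy: the agent can weigh the whole path's resource state at once, rather than each node guessing in isolation.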

In the paper we describe three application vignettes that would benefit from the abstractions in Peloton. My Ph.D. student Jason Waterman will be giving the talk at HotOS and we hope to have a prototype released this summer.

[Image credit John Spooner.]

Comments

  1. It’s quite appreciable that such information is being shared through a huge network. Keep it up.

  2. Did you read Mads Dydensborg's Ph.D. thesis from DIKU?

    It was on synchronizing CPU wakeups and radio broadcasts between nodes by negotiating "connections" even for datagrams. Quite interesting.

    (You probably did. Just throwing it out there.)


