
NSDI 2009, Day One

NSDI is happening this week here in Boston. This year's conference has 32 papers (selected out of about 160 submissions) and more than 240 attendees, which is an NSDI record. The topics this year are pretty diverse, including content distribution (which seems to be a euphemism for "P2P"), software-defined radios, botnets, and of course the mandatory session on BFT.

A couple of highlights from my favorite talks today.

TrInc: Small Trusted Hardware for Large Distributed Systems
Dave Levin, University of Maryland; John R. Douceur, Jacob R. Lorch, and Thomas Moscibroda, Microsoft Research

This paper proposes to add a small trusted hardware component (which they implement as a smart card), incorporating a counter and a key, providing provable attestation for state updates performed by a node participating in a distributed system. This can be used to prevent a malicious or selfish node from "equivocating" by sending different messages to different peers. For example, the authors show how this can prevent attacks on BitTorrent where a node lies about what chunks of a file it has received in order to increase its own download capacity.
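To make the equivocation-prevention idea concrete, here is a toy sketch of a TrInc-style trusted counter. This is my own illustrative model, not the paper's API: the real TrInc runs on a smart card and uses asymmetric signatures, whereas this sketch uses an HMAC for simplicity. The key point it captures is that the counter only moves forward, so each counter value can vouch for at most one message, and a node cannot attest to two different messages for the same state.

```python
import hmac
import hashlib

class TrIncCounter:
    """Toy model of a trusted monotonic counter (hypothetical API;
    real TrInc is smart-card hardware with asymmetric signatures)."""

    def __init__(self, key: bytes):
        self._key = key        # secret key; in hardware, never leaves the device
        self._counter = 0      # monotonically increasing counter

    def attest(self, new_counter: int, message: bytes) -> bytes:
        # Refuse to go backwards or repeat: each counter value binds at most
        # one message, so a node cannot send different messages to different
        # peers while claiming the same state.
        if new_counter <= self._counter:
            raise ValueError("counter must strictly increase")
        self._counter = new_counter
        payload = new_counter.to_bytes(8, "big") + message
        return hmac.new(self._key, payload, hashlib.sha256).digest()

def verify(key: bytes, counter: int, message: bytes, tag: bytes) -> bool:
    """Check that (counter, message) carries a valid attestation."""
    payload = counter.to_bytes(8, "big") + message
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

In the BitTorrent example, a peer would have to attest each "here are the chunks I have" message under a fresh counter value; claiming two different chunk sets for the same counter would require the trusted hardware to sign twice at one counter value, which it refuses to do.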

HashCache: Cache Storage for the Next Billion
Anirudh Badam, Princeton University; KyoungSoo Park, Princeton University and University of Pittsburgh; Vivek S. Pai and Larry L. Peterson, Princeton University

This paper develops a Web cache that uses far less memory and storage than conventional caches. The idea is to provide an effective caching solution for use in developing countries with poor Internet access and little access to high-end hardware. They employ a bunch of clever techniques to reduce the overheads and trade off storage, memory, and performance. To be honest, I never thought that Web caching could be sexy again, but this paper proved me wrong.

Making Byzantine Fault Tolerant Systems Tolerate Byzantine Faults
Allen Clement, Edmund Wong, Lorenzo Alvisi, and Mike Dahlin, The University of Texas at Austin; Mirco Marchetti, The University of Modena and Reggio Emilia

This paper deals with the "elephant in the room" that most BFT systems don't provide acceptable (or any) performance if some nodes are actually faulty. The authors describe Aardvark, which is designed to provide robust BFT rather than just peak performance in the best case. I like the authors' shift in priorities in developing their system to focus on robustness. Kudos for using "Big MAC Attack" as the title of one of the slides.

The poster and demo session was just afterwards, though, being completely beat after a long day, I decided to head home.

By the way, why don't conference hotels train their staff not to make a huge racket immediately outside of rooms where presentations are happening? The last three conferences I've been to have been disrupted by noisy hotel staff yelling at each other and moving dishes around outside of the conference hall. You would think they would somehow be aware that they are just next door to a roomful of 300 people trying to listen to a talk.

