Michael Kozuch from Intel Research Pittsburgh described an approach to load-balancing computation within a datacenter that involves migrating the running operating system (and the applications running on top of it) from one physical machine to another. One approach is to shut down the OS and reboot it on the new hardware, but Michael is going further by looking at migrating a running OS instance and its device driver state -- even across nodes with different physical hardware. Ballsy.
Don Porter from UT Austin made the claim that operating systems should expose a transactional interface, allowing applications to describe a set of system calls as occurring within a transaction. Although there is a lot of related work in this area, Don's point is that the interface should be simple and general enough to capture essentially any set of system calls within a transaction (rather than being limited to filesystem calls, for example).
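To make the semantics concrete, here's a toy user-level sketch of what "all-or-nothing" means for a group of file operations. The `FileTransaction` class and its method names are entirely my own invention for illustration; a real kernel interface would make the system calls themselves transactional rather than staging things in user space like this:

```python
import os
import tempfile

class FileTransaction:
    """Toy illustration of all-or-nothing file updates. Staged writes
    are invisible until commit(); abort() discards them. (A real
    transactional OS interface would cover arbitrary system calls,
    not just file writes.)"""

    def __init__(self):
        self.staged = {}  # final path -> temp file holding new contents

    def write(self, path, data):
        # Stage the write in a temp file in the same directory,
        # so the eventual rename is atomic.
        dirname = os.path.dirname(os.path.abspath(path))
        fd, tmp = tempfile.mkstemp(dir=dirname)
        with os.fdopen(fd, "w") as f:
            f.write(data)
        self.staged[path] = tmp

    def commit(self):
        # Publish every staged write; os.replace() is atomic on POSIX.
        for path, tmp in self.staged.items():
            os.replace(tmp, path)
        self.staged.clear()

    def abort(self):
        # Throw away all staged writes; nothing ever became visible.
        for tmp in self.staged.values():
            os.unlink(tmp)
        self.staged.clear()
```

Until `commit()` runs, none of the staged writes are visible to other processes, and `abort()` leaves the filesystem exactly as it was.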
Andrew Baumann from ETH Zurich gave perhaps the best and most exciting talk of the workshop (so far) on "Your computer is already a distributed system. Why isn't your OS?" He pointed out that multicore systems already have a wide range of access latencies across processors and caches. Rather than relying on shared memory for communication, why not use asynchronous messaging between cores for everything? The proposed approach is called a multikernel and they are working on a prototype called Barrelfish. One nice aspect of this work is that they are doing a clean-slate design and throwing out support for legacy applications. Right now, the work is very much focused on performance; I'd like to see them look at the reliability and robustness issues that arise when running multiple OS kernels on your machine. (They do make a good argument that it is much easier to reason about a message-passing system than a shared memory system.)
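As a rough illustration of the multikernel idea (a toy Python sketch, with threads and queues standing in for cores and inter-core channels; none of this is Barrelfish code), here is what "messaging instead of shared memory" looks like: each per-core kernel owns its state privately, and other cores can only send it messages:

```python
import queue
import threading

def kernel(inbox, outboxes):
    """One kernel instance per core. All state is private to this
    function; other cores can only interact with it via messages."""
    state = {"counter": 0}
    while True:
        msg = inbox.get()
        if msg[0] == "stop":
            break
        elif msg[0] == "incr":
            state["counter"] += 1
        elif msg[0] == "read":
            # Reply by sending a message back to the asking core.
            outboxes[msg[1]].put(("value", state["counter"]))

# Two "cores", each with its own inbox (think: inter-core channel).
inboxes = [queue.Queue(), queue.Queue()]
core0 = threading.Thread(target=kernel, args=(inboxes[0], inboxes))
core0.start()

# "Core 1" updates core 0's counter purely by messaging -- it never
# touches core 0's state directly.
for _ in range(3):
    inboxes[0].put(("incr",))
inboxes[0].put(("read", 1))
tag, value = inboxes[1].get()
inboxes[0].put(("stop",))
core0.join()
print(value)  # -> 3
```

Note there are no locks anywhere: since only one kernel ever touches `state`, and messages from a single sender arrive in order, reasoning about correctness reduces to reasoning about the message protocol.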
Jeffrey Mogul from HP Labs made the case that we should be using a combination of flash and DRAM (which he calls FLAM) instead of only DRAM for main memory. The idea is to exploit the properties of flash memories in terms of high density and low price (compared to DRAM) to optimize a memory system -- he is not even concerned with the nonvolatile aspect of flash. The idea is to migrate pages between DRAM and flash; I'm not sure why this is so different from having less DRAM and using an SSD as your swap device. One thing you have to worry about is the high latency of flash access and the fact that flash wears out over time.
This year we held a (sober) "Big Ideas" session in addition to the traditional (non-sober) "Outrageous Opinions" session. Some Big Ideas:
- Michael Scott argued that we need to rethink how we teach concurrency to undergraduates, using top-down rather than bottom-up examples.
- John Wilkes and Kim Keeton proposed that "Quality of Information" is at least as important -- if not more important -- than "Quality of Service" in big systems, and that we need explicit metrics to capture the information quality impact of optimizations in a system.
- Geoffrey Werner Challen opened up a wide-ranging discussion on the environmental impact of computing technology.
- Armando Fox argued that e-mail is dead as a communication medium due to the huge volume of spam. He claimed that social networks are far more effective since you cannot even contact someone you are not already connected to. Some folks not in the Facebook Generation bristled at this idea, of course. I don't agree that existing social networks are right for this -- for example, most of them do not allow you to maintain separate groups of contacts (such as "friends", "family", or "colleagues").