Ed Lazowska from UW gave the colloquium at Harvard this week on Computer Science: Past, Present, and Future. This is a talk he has no doubt given many times at many places, but it was the first time I had heard it, and it was fantastic. More than anything else, his talk reminded me of why I am a computer scientist, and of how many great problems we have to work on in this field. I can't imagine wanting to do anything else.
Ed's talk started off by reviewing what computer science has accomplished in the last 40 years, since the ARPAnet came online in 1969. This New York Times story from 2009 reported on the "Top 20 inventions of the last 30 years," and nearly all of them derive from computer science in one fashion or another -- the Internet, PC, mobile phones, and email top the list. Although there's no surprise here at all, it was a great reminder of how much impact computing has had on nearly all aspects of the modern world.
This is why I am a computer scientist: because I want to change the world. Computing is the engine driving innovation in every field of human endeavor, and working on computer science puts you at the center of it. Of course, it's easy to forget that on days when I am beating my head against some obscure Objective-C type mismatch bug, so it's nice to get a reminder of the bigger purpose.
This is why I am a computer scientist today, but it's not how I got started. My first experience with a computer was sitting down at an Apple II+ (I am dating myself) when I was 8 years old. I remember it clearly: The teacher told me to type my name at the prompt and I got:
]MATT
?SYNTAX ERROR

This was probably not the most rewarding first experience with a computer, but it taught me that this box spoke a different language, and I wanted to learn how to communicate with it. It was not long before I was writing BASIC programs to do lo-res graphics. While most kids in the class just manually translated their static image from a sheet of graph paper into a series of PLOT, HLIN, and VLIN commands, I was writing animated scenes (one was the pyramids with an animated sunset behind them) and even a sword-fighting game with a simple AI algorithm so you could play against the computer (keep in mind this was 3rd grade). I was totally hooked.
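The pyramid scene was built out of nothing more than those same commands. It went something like this -- reconstructed from memory, so the line numbers, coordinates, and colors are purely illustrative:

10 GR : COLOR= 12 : REM LO-RES MODE, GREEN PYRAMID
20 FOR I = 0 TO 9 : REM STACK WIDENING HLIN ROWS
30 HLIN 19 - I, 20 + I AT 30 + I
40 NEXT I
50 FOR Y = 5 TO 25 : REM YELLOW "SUN" DROPPING TOWARD THE HORIZON
60 COLOR= 13 : PLOT 33,Y
70 FOR D = 1 TO 200 : NEXT D : REM CRUDE DELAY LOOP
80 COLOR= 0 : PLOT 33,Y : REM ERASE BEFORE THE NEXT FRAME
90 NEXT Y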
At that age, the computer was this amazing magical box that you could control and make do almost anything. The games I was writing back then rivaled what I could buy in the store, but for me it was much more about writing the code than playing the game.
Now that I have a son, I wonder what his experience with computing will be like, and whether he'll be as turned on as I was by the whole mystery of the thing. Perhaps he will just treat the computer like any other appliance, like the dishwasher or telephone -- not something to be taken apart, programmed, explored. One thing we already do is play with the iPad, and even at 9 months old he loves to touch the screen and manipulate objects on it -- there are a couple of good iPad and iPhone programs for babies, and the touch screen interface has tremendous potential there. But will he want to program? And how will he even do it? BASIC and LOGO were great back in the day because you could dive right in. I'm pretty sure I don't want to throw Visual Studio at him -- hopefully there are still good kid-friendly programming environments out there. And of course, whatever programs he'll be able to write won't hold a candle to the 3D immersive video games he'll be playing by then, so it's hard to say whether he'll appreciate the potential for doing it himself.
I am convinced that giving kids this very direct and rewarding experience with computing is important, though. If we turn computers into just another kind of media consumption device (which most already are), then we'll lose the opportunity to hook the next generation.