My love affair with code reviews

One of the most life-altering events in my move from academia to industry was the discovery of code reviews. This is pretty standard fare for developers in the "real world", but I have never heard of an academic research group using them, and had never done code reviews myself before joining Google.

In short: Code reviews are awesome. Everyone should use them. Heck, my dog should use them. You should too.

For those of you not in the academic research community, you have to understand that academics are terrible programmers. (I count myself among this group.) Academics write sloppy code, with no unit tests, no style guidelines, and no documentation. Code is slapped together by grad students, generally under pressure of a paper deadline, mainly to get some graphs to look pretty, without regard for whether anyone will ever run the code again. Before I came to Google, that was what "programming" meant to me: kind of a necessary side effect of doing research, but the result was hardly anything I would be proud to show my mother. (Or my dog, for that matter.) Oh, sure, I released some open source code as an academic, but now I shudder to think of anyone at a place like Google or Microsoft or Facebook actually reading that code (please don't, I'm begging you).

Then I came to Google. Lesson #1: You don't check anything in until it has been reviewed by someone else. This took some getting used to. Even an innocent four-line change to some "throw away" Python script is subject to scrutiny. And of course, most of the people reviewing my code were young enough to be my students -- having considered myself an "expert programmer" (ha!), I found it humbling to have a 23-year-old one year out of college show me how to take my 40 lines of crap and turn them into one beautiful, tight function -- and how to generalize it, make it testable, and document the damn thing for chrissakes.

So there's a bunch of reasons to love code reviews:

Maintain standards. This is pretty obvious but matters tremendously. The way I think of it, imagine you get hit by a truck one day, and 100 years from now somebody who has never heard of your code gets paged at 3 a.m. because something you wrote was suddenly raising exceptions. Not only does your code have to work, but it also needs to make sense. Code reviews force you to write code that fits together, that adheres to the style guide, that is testable.

Catch bugs before you check in. God, I can't count the number of times someone has pointed out an obvious (or extremely subtle) bug in my code during the code review process. Having another pair of eyes (or often several pairs of eyes) looking at your code is the best way to catch flaws early.

Learn from your peers. I have learned more programming techniques and tricks from doing code reviews than I ever did reading O'Reilly books or even other people's code. A couple of guys on my team are friggin' coding ninjas and suggest all kinds of ways of improving my clunky excuse for software. You learn better design patterns, better approaches for testing, better algorithms by getting direct feedback on your code from other developers.

Stay on top of what's going on. Doing code reviews for other people is the best way to understand what's happening in a complex codebase. You get exposed to a lot of different code, different approaches for solving problems, and can chart the evolution of the software over time -- a very different experience than just reading the final product.

I think academic research groups would gain a lot by using code reviews, and of course the things that go with them: good coding practices, a consistent style guide, insistence on unit tests. I'll admit that code quality matters less in a research setting, but it is probably worth the investment to use some kind of process.

The thing to keep in mind is that there is a social aspect to code reviews as well. At Google, you need an LGTM from another developer before you're allowed to submit a patch. It also takes a lot of time to do a good code review, so it's standard practice to break large changes into smaller, more review-friendly pieces. And of course the expectation is you've done your due diligence by testing your code thoroughly before sending it for review.

Don't code reviews slow you down? Somewhat. But if you think of code development as a pipeline, with multiple code reviews in flight at a time, you can still sustain a high rate of submissions, even if each individual patch has higher latency. Generally, developers all understand that being a hardass on you during the review process will come back to bite them some day -- and they understand the tradeoff between the need to move quickly and the need to do things right. I think code reviews can also serve to build stronger teams, since everyone is responsible for doing reviews and ensuring the quality of the shared codebase. So if done right, it's worth it.

Okay, Matt. I'm convinced. How can I too join the code review bandwagon? Glad you asked. The tool we use internally at Google was developed by none other than Guido van Rossum, who has graciously released a similar system called Rietveld as open source. Basically, you install Rietveld on AppEngine, and each developer uses a little Python script to upload patches for review. Reviews are done on the website, and when the review is complete, the developer can submit the patch. Rietveld doesn't care which source control system you use, or where the repository is located -- it just deals with patches. It's pretty slick and I've used it for a couple of projects with success.

Another popular approach is to use GitHub's "pull request" and commenting platform as a code review mechanism. Individual developers clone a master repository and submit pull requests to the owner of that repository for inclusion. GitHub's commenting system lets reviewers annotate a pull request's diff before it is merged, which works nicely for code review.
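
The mechanics of that flow are just ordinary git commands. Here's a minimal local simulation, assuming nothing beyond a git install: a bare repository stands in for the shared master repo that GitHub would host, and all the names (project.git, fix-typo, the commit message) are invented for illustration.

```shell
#!/bin/sh
# Local sketch of the fork-and-pull-request flow. A bare repository
# plays the role of the shared "master" repo; names are hypothetical.
set -e
workdir=$(mktemp -d)
cd "$workdir"

git init --bare --quiet project.git          # the shared repository
git clone --quiet project.git contributor    # a contributor's clone ("fork")

cd contributor
git checkout --quiet -b fix-typo             # topic branch holding the change
echo "corrected text" > README
git add README
git -c user.name=dev -c user.email=dev@example.com \
    commit --quiet -m "Fix typo in README"

git push --quiet origin fix-typo             # publish the branch for review
# On GitHub, the next step is opening a pull request from fix-typo;
# reviewers comment on the diff, and the owner merges when satisfied.
git ls-remote --heads origin                 # the branch now awaits review
```

The key point the paragraph makes is that the review happens on the published branch before it is merged into the master repository, not after.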

I was floored the other day when I met an engineer from a fairly well-known Internet site who said they didn't use code reviews internally -- and complained about how messy the code was and how poorly designed some pieces were. No kidding! Code reviews aren't the ultimate solution to a broken design process, but they are an incredibly useful tool.


  1. ReviewBoard is another excellent system for the same.

  2. What a welcome read! I'm a grad student who's a little hardened by code reviews in the open source world (the ns-3 project for instance, we use Rietveld by the way!), and it's scary to see the severely low standards for the same in academia. Things like: "Q: How did you test your code? A: It works when I run it this way."

    I try my best to keep whatever code I produce (even research code) in the public domain, just so that I force myself into writing better code (if not great). I'm presently working on my master's thesis, and I've promised myself to accompany the evaluation section with pointers to the source code that was used to generate the results, along with how to reproduce them.

  3. I endured some pretty traumatizing code reviews as an undergrad and graduate student in a group studying and hacking on Linux OS internals. People cried, literally. Today, I cannot imagine that anyone who survived their tenure in that research group would ever dismiss code reviews as anything but productive and worthwhile.

  4. Noah - I hope you are kidding. Code reviews should not be "traumatizing". At Google, people use code reviews constructively, not as a way of bashing each other.

    1. I have heard other people complain of traumatic code review experiences; for example, when Mark Chu-Carroll wrote about them.

      I think that this is one of those cases where doing things online instead of in person improves the social interaction rather than diminishes it. You want code comments to be impersonal both for the person making the comment and the person whose code is being commented upon. The stories I've heard of traumatic code reviews usually involve the author presenting his code, talking through it, and everyone kibitzing and critiquing it. (usually while the author is standing there, and the reviewers are sitting)

  5. Could it be because the research (paper), and not the code, is the product for an academic? If code submission (or publishing) replaced or augmented paper submissions, I'm guessing the standards of the code would increase.

  6. Fundou - we don't do code reviews at Google so that people outside of Google can see the code (with several notable exceptions, e.g., Chrome, Android, etc.). Clearly, for academics, the product should be papers, and you shouldn't worry about writing beautiful code for its own sake. Mainly it's to improve the quality of the code for doing your own research, and to improve teamwork in a research group.

  7. In my research with other academics, we usually don't do anything quite so formal as these processes. But generally we make sure two people understand the code, and this is often accomplished by a very code-review-like process. And we've often found important bugs or opportunities for generalization that we would have missed (or found much later). Definitely recommended...

    Here I'm talking about statistical software we write and then analysis code that uses those functions.

  8. I share your enthusiasm for code reviews. Like the academic, I began my career in an arena where code review is rarely practiced: the one-man shop/department. Entering into the larger world of team software development made me a code review convert for all the reasons you mention.

    I'd add one: when you've accumulated some experience and expertise it gives you a structured way to share that with colleagues/collaborators who can benefit from it.

    And heck, I'll add another: code reviews can become like automated tests. At first they seem like extra work, but once you've seen the light you'll feel nervous about shipping code without them.

    I second the recommendation of Review Board as well. I've used it happily with both small and large teams.

  9. Don't worry -- that 23 year old will forget how to write good code by the time he or she is our age. I was one of those kids in code reviews when I was 19 as a coop. I always found one or more major bugs in the code of the full time 40-year old staff people during the code review. But look at me today -- I can't write clean code or learn a new programming language. Just tell the kid -- "When I was your age..."

  10. Matt, you should've hung out more with the software engineering researchers when you were in academia: One of the most well-known findings in software engineering research is that code reviews (or "inspections", in the SE research jargon) are one of the most cost-effective methods of finding bugs. In addition, they tend to find different types of bugs than testing does. There are probably more software engineering experiments on inspections than on any other topic.

  11. Heh. My war story is from a few years ago when I noticed a program seg faulting on a HPC cluster. I notified the researcher that there was probably a bug in his program. He quickly informed me that the program had been used by several people in their dissertations and it couldn't possibly have a bug.

  12. I really wish the code that professors give undergrad students was reviewed by at least one other person. It's so sad to see a professor provide terrible code to undergrads who are trying to learn how to program. I can't even count how many people I know who have dropped a class or switched majors simply because the code shown in class is unclear and buggy.

  13. I take issue with this bit in your opening paragraph:

    «This is pretty standard fare for developers in the "real world",»

    Spoken like someone who went from academia to Google. In between academia and Google, I served seven years in the murky world of programming financial data systems - though my company was not one that ran a trading desk, most of our customers were. (i.e., we'd sell data feeds to Morgan Stanley, or Lehman Brothers, or Merrill Lynch)

    And the software practices in that industry? It's a nightmare. We had three totally disconnected source control repositories (two ClearCase installs, and an SVN install) at that company when we were still under 200 people, before we were acquired by a larger conglomerate. All three repositories were still being actively used when I left. Even getting read-only access to the repository used by another group was a whole huge hassle, full of interrogations about why you'd need to see it. No dedicated staff for the source control system itself, and getting management to pay for an actual Unix sysadmin (instead of just telling us that the same people being hired for Windows desktop support could handle it) was like pulling teeth. (Okay, we had one guy who was an amazing DBA and damn good at administering Clearcase. Naturally he was declared redundant when we were acquired.) And at my company, compared to standard industry practice, we were the lucky ones.

    I remember talking to someone who'd left the company to go work at «large wall street firm you've definitely heard of». He was working on Solaris, maintaining C + Tcl abominations that were still being used daily for ridiculously large amounts of money, and he couldn't get a source control system installed. (Devs don't have root on the servers there, and the sysadmin group didn't want to maintain SVN) He was storing his stuff in RCS, because that's all that was available. It boggles the mind how financial firms can continue to treat their developers.

    Sure, there's the occasional outlier in the industry like Jane Street Capital, but they are in no way standard. Over in the world of Facebook, Google, Amazon, Apple, Microsoft, etc., things might be better and code reviews might count as standard, but for developers as a whole? Not so much.

  14. My only disagreement is that it's sadly not standard in the "real world". This level of code quality is new to me at Google, and I'd hate to go back to the way it used to be after I leave. Granted, it's becoming more popular, but I can assure you that there are still a large number of organizations large enough to support code reviews that don't do it.

