Wednesday, June 12, 2019

Startup Life: Three Months In

I've posted a story to Medium on what it's been like to work at a startup, after years at Google. Check it out here.

Monday, February 11, 2019

Why I'm leaving Google for a startup

After more than eight years at Google, I'm joining XNOR.ai, a small startup developing AI for embedded devices.

Check out my blog post on Medium here.

Wednesday, February 6, 2019

Over-the-Air Arduino firmware updates using Firebase, Part 1

Just a reminder that I'm blogging on Medium these days.

I just posted another article, this time on using Firebase to support over-the-air firmware updates for Arduino projects. Check it out!
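
The basic idea (a simplified sketch here, not the actual code from the article) is to keep a "latest firmware" version number and a download URL in the Firebase Realtime Database; the device compares that version against the one it's running and kicks off an HTTP update when a newer build is available. Assuming an ESP8266 board, the open-source firebase-arduino and ESP8266httpUpdate libraries, and placeholder credentials and database paths, it looks roughly like this:

```cpp
// Hypothetical sketch, not the article's code: check Firebase for a newer
// firmware version and pull it over HTTP if one is available.
// Assumes an ESP8266 board, the open-source firebase-arduino library,
// and a firmware binary hosted at a plain-HTTP URL.
#include <ESP8266WiFi.h>
#include <ESP8266httpUpdate.h>
#include <FirebaseArduino.h>

#define WIFI_SSID     "your-ssid"                     // placeholder
#define WIFI_PASSWORD "your-password"                 // placeholder
#define FIREBASE_HOST "your-project.firebaseio.com"   // placeholder
#define FIREBASE_AUTH "your-database-secret"          // placeholder

const int CURRENT_VERSION = 3;  // version number baked into this build

void setup() {
  Serial.begin(115200);
  WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
  }
  Firebase.begin(FIREBASE_HOST, FIREBASE_AUTH);

  // The database holds the latest version number and a URL for the binary.
  int latestVersion = Firebase.getInt("/firmware/version");
  String binUrl = Firebase.getString("/firmware/url");
  if (Firebase.failed()) {
    Serial.print("Firebase read failed: ");
    Serial.println(Firebase.error());
    return;
  }

  if (latestVersion > CURRENT_VERSION) {
    WiFiClient client;
    // Downloads the binary, flashes it, and reboots on success.
    t_httpUpdate_return ret = ESPhttpUpdate.update(client, binUrl);
    if (ret == HTTP_UPDATE_FAILED) {
      Serial.println("Update failed");
    }
  }
}

void loop() {
  // Normal application logic runs here between update checks.
}
```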

Sunday, February 3, 2019

I'm blogging on Medium!

Hey folks! If you're reading this blog, you may be in the wrong place.

I've decided to try out Medium as a new blogging platform.

Check out my Medium blog here!

So far, I have one article posted: Using Firebase to Control your Arduino Project over the Web. Hopefully more to come soon.
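
To give a flavor of what that article covers (this is a simplified sketch, not the actual code from the article): the device polls a path in the Firebase Realtime Database and drives an output from it, while a web page writes to the same path. Assuming an ESP8266 board, the open-source firebase-arduino library, and placeholder credentials and paths, the device side might look something like this:

```cpp
// Hypothetical sketch, not the article's code: poll a value in the Firebase
// Realtime Database and drive the built-in LED from it, so the LED can be
// toggled from a web page that writes the same database path.
// Assumes an ESP8266 board and the open-source firebase-arduino library.
#include <ESP8266WiFi.h>
#include <FirebaseArduino.h>

#define WIFI_SSID     "your-ssid"                     // placeholder
#define WIFI_PASSWORD "your-password"                 // placeholder
#define FIREBASE_HOST "your-project.firebaseio.com"   // placeholder
#define FIREBASE_AUTH "your-database-secret"          // placeholder

void setup() {
  Serial.begin(115200);
  pinMode(LED_BUILTIN, OUTPUT);

  WiFi.begin(WIFI_SSID, WIFI_PASSWORD);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
  }
  Firebase.begin(FIREBASE_HOST, FIREBASE_AUTH);
}

void loop() {
  // A web front end writes true/false to /led; the device just follows it.
  bool ledOn = Firebase.getBool("/led");
  if (Firebase.failed()) {
    Serial.print("Firebase read failed: ");
    Serial.println(Firebase.error());
  } else {
    // The built-in LED on many ESP8266 boards is active-low.
    digitalWrite(LED_BUILTIN, ledOn ? LOW : HIGH);
  }
  delay(1000);
}
```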


Tuesday, June 7, 2016

Death by peer review

I recently had occasion to give some advice to a friend who was considering making the switch from industry to academia. One of my key pieces of advice was to keep in mind that success (or failure) in academia is largely based on peer review -- by program committees, proposal review panels, and tenure committees. While peer review has many good things going for it, it can also be extremely, dishearteningly random. Being an academic means living your life from one peer-review decision to the next, and in many cases, those decisions are simply not the right ones. After a while, a string of semi-random decisions can be psychologically draining.

From http://www.michaeleisen.org/blog/?p=1778. Yes, I own the t-shirt.

The law of large numbers certainly applies here. Good work eventually gets published and funded, given enough iterations. Good researchers get their papers in, eventually. Peer review feedback can be incredibly helpful for refining a piece of work and improving it over time. But in the vast majority of cases, papers or proposals, whether accepted or rejected in the end, get a wide range of scores -- it is quite rare for even a good paper to get all "accept" reviews. Whether a paper gets accepted often depends on who the reviewers are, whether they've had enough coffee in the PC meeting, whether they are confident enough to stand up for the work, and so forth. Above a certain threshold, the objective merit of the work has little to do with the outcome.

This situation can get really bad. NSF proposal reviews are, historically, quite random, in part because NSF's conflict-of-interest policy prevents anyone who might actually be an expert in the area from reviewing your work (unless they forgot to submit their own proposal). I have submitted substantially the same NSF proposal multiple times and had vastly different scores. My best proposal never got funded; my worst proposal actually did.

To be clear, we also use peer review at Google, in particular for things like promotions. I've served on quite a few promotions committees, and I can tell you it can be just as random as, say, a conference program committee. Get four people into a room and they will have four different opinions about a given candidate. So I don't think this problem is specific to the academic peer review process.

But I want to contrast the peer-review process with the "industry process." At least at Google, I feel that hard work is generally rewarded if it has impact and leads to good products. My expectation is that the same is true at most companies. Rather than the success or failure of a project coming down to the dreaded Reviewer #3, it comes down to the team's ability to execute, target the right market, and attract users.

Of course, many of these factors are just as far outside the control of an engineer on the team as the capriciousness of a program committee. However, I believe the industry process is far less arbitrary. Yes, projects can (and do) get canceled by higher-level management. I've personally canceled projects on my teams, for a range of reasons. But decisions to cancel a project are, in most cases, made after careful deliberation and by people who have a vested interest in the team. Even if the decision ends up being wrong, at least it's a decision that makes sense -- not the crapshoot you face every time you submit a paper or proposal to a random committee.

Do companies make bad decisions? Absolutely. Are decisions made that I personally disagree with? Of course. Do I have to work for a company that continually makes bad decisions that I don't agree with? Hell no. I'd much rather take my chances with a principled leadership organization that I trust and agree with than with an ad hoc collection of anonymous reviewers.

While I have many thoughts on how the process of peer review could be improved, I don't argue that we should dispense with it entirely. I don't know of a better model for most things that academics need to be evaluated on. But aspiring academics should understand how much of their success hinges on the largely stochastic nature of the process. Industry is still a game, but it's a different kind of game, and one that I think tends to be more rational.

Monday, April 25, 2016

Why I gave your paper a Strong Accept

See also: Why I gave your paper a Strong Reject

I know this blog is mostly about me complaining about academics, but there's a reason I stay engaged with the research community: I learn stuff. Broadly speaking, I think it's incredibly important for industry both to stay abreast of what's going on in the academic world and to have some measure of influence on it. For those reasons, I serve on a few program committees a year and do other things like help review proposals for Google's Faculty Research Award program.

Apart from learning new things, there are other reasons to stay engaged. One is that I get a chance to meet and often work with some incredible colleagues, either professors (to collaborate with) or students (to host as interns and, in many cases, hire as full-time employees later on).

I also enjoy serving on program committees more than just going to conferences and reading papers that have already been published. I feel like it's part of my job to give back and contribute my expertise (such as it is) to help guide the work happening in the research community. Way too many papers could use a nudge in the right direction from someone who knows what's happening in the real world -- as a professor and grad student, I gained a great deal from my interactions with colleagues in industry.

Whenever I serve on a program committee, I make it a point to champion at least a couple of papers at the PC meeting. My colleagues can attest to times I've (perhaps literally) pounded my fist on the table and argued that we need to accept some paper. So to go along with my recent post on why I tend to mark papers as reject, here are some of the reasons that make me excited to give out a Strong Accept.

(Disclaimer: This blog represents my personal opinion. My employer and my dog have nothing to do with it. Well, the dog might have swayed me a little.)

The paper is perfect and flawless. Hah! Just kidding! This never happens. No paper is ever perfect -- far from it. Indeed, I often champion papers with significant flaws in the presentation, the ideas, or the evaluation. What I try to do is decide whether the problems can be fixed through shepherding. Not everything can be fixed, mind you. Minor wording changes or a slight shift in focus are fixable. Major new experiments or a total overhaul of the system design are not. When I champion a paper, I only do so if I'm willing to be on the hook to shepherd it, should it come to that at the PC meeting (and it often does).

Somebody needs to stand up for good papers. Arguably, no paper would ever get accepted unless some PC member were willing to go to bat for it. Sadly, it's a lot easier for the PC to find flaws in a paper (hence leading to rejection) than it is to stand up for a paper and argue for acceptance -- despite the paper's flaws. Every PC meeting I go to, someone says, "This is the best paper in my pile, and we should take it -- that's why I gave it a weak accept." Weak accept!?!? WEAK!?! If that's the best you can do, you have no business being on a program committee. Stand up for something.

In an effort to balance this out, I try to take a stand for a couple of papers every time I go to a PC meeting, even though I might not be successful in convincing others that those papers should be accepted. Way better than only giving out milquetoast scores like "weak accept" or -- worse -- the cop-out "borderline".

The paper got me excited. This is probably the #1 reason I give out Strong Accepts. When this happens, I'm usually excited about the rest of the paper by the end of the first page. The problem sounds compelling. The approach is downright sexy. The summary of results sounds pretty sweet. All right, so I'm jazzed about this one. Sometimes it's a big letdown when I get into the meat and find out that the approach ain't all it was cracked up to be in the intro. But when I get turned on by a paper, I'll let the small stuff slide for sure.

It's hard to predict when a paper will get me hot under the collar. Sometimes it's because the problem is close to stuff I work on, and I naturally gravitate to those kinds of papers. Other times it's a problem I really wish I had solved. Much of the time, it's because the intro and motivation are just really eloquent and convincing. The quality of writing matters a lot here.

I learned a lot reading the paper. Ultimately, a paper is all about what the reader takes away from it. A paper on a topic slightly out of my area that does a fine job explaining the problem and the solution is a beautiful thing. Deciding how much "tutorial" material to fit into a paper can be challenging, especially if you're assuming that the reviewers are already experts in the topic at hand. But more often than not, the PC members reading your paper might not know as much about the area as you expect. Good exposition is usually worth the space. The experts will skim it anyway, and you might sell the paper to a non-expert like me.

There's a real-world evaluation. This is not a requirement, and indeed it's somewhat rare, but if a paper evaluates its approach on anything approximating a real-world scale (or dataset), it's winning major brownie points in my book. Purely artificial, lab-based evaluations are more common, and less compelling. If the paper includes a real-life deployment or retrospective on what the authors learned through the experience, even better. Even papers without that many "new ideas" can get accepted if they have a strong and interesting evaluation (cough cough).

The paper looks at a new problem, or has a new take on an old problem. Creativity -- either in terms of the problem you're working on, or how you approach that problem -- counts for a great deal. I care much more about a creative approach to solving a new and interesting (or old and hard-to-crack) problem than a paper that is thoroughly evaluated along every possible axis. Way too many papers are merely incremental deltas on top of previous work. I'm not that interested in reading the Nth paper on time synchronization or multi-hop routing, unless you are doing things really differently from how they've been done before. (If the area is well-trodden, it's also unlikely you'll convince me you have a solution that the hundreds of other papers on the same topic have failed to uncover.) Being bold and striking out in a new research direction might be risky, but it's also more likely to catch my attention after I've reviewed 20 papers on less exciting topics.

