Thursday, July 11, 2013

Does the academic process slow innovation?

I've been wondering recently whether the extended, baroque process of doing research in an academic setting (by which I mean either a university or an "academic style" research lab in industry) is doing more harm than good when it comes to the pace of innovation.

Prior to moving to industry, I spent my whole career as an academic. It took me a while to get used to how fast things happen in industry. My team, which is part of Chrome, does a new major release every six weeks. This is head-spinningly fast compared to academic projects. Important decisions are made on the order of days, not months. Projects are started up and executed an order of magnitude faster than it would take a similarly-sized academic research group to get up to speed.

This is not just about having plenty of funding (although that is part of it). It is also about what happens when you abandon the trappings of the academic process, for which the timelines are glacial:
  • A three month wait (typically) to get a decision on a conference submission, during which time you are not allowed to submit similar work elsewhere.
  • A six month wait on hearing back on a grant proposal submission.
  • A year or more wait for a journal publication, with a similar restriction on parallel submissions.
  • Five plus years to get a PhD.
  • Possibly one or two years as a postdoc.
  • Six to eight years to get tenure.
  • A lifetime of scarring as the result of the above. (Okay, I'm kidding. Sort of.)
This is not a problem unique to computer science of course. In the medical field, the average age at which a PI receives their first NIH R01 grant is 44 years. Think about that for a minute. That's 23-some-odd years after graduation before an investigator is considered an "independent" contributor to the research field. Is this good for innovation?


Part of the problem is that the academic process is full of overheads. Take a typical conference program committee, for example. Let's say the committee has 15 members, each of whom has 30 papers to review (this is pretty average, for good conferences at least). Each paper takes at least an hour to review (often more) -- that's the equivalent of at least 4 work days (that is, assuming academics work only 8 hours a day ... ha ha!). Add on two more full days (minimum) for the program committee meeting and travel, and you're averaging about a full week of work for each PC member. Multiply by 15 -- and double the effort for the two program co-chairs -- and you're talking about around 870 person-hours of combined effort to decide on the 25 or so papers that will appear in the conference. That's roughly 35 person-hours of overhead per paper. This doesn't count any of the overheads associated with actually organizing the conference -- making the budget, choosing the hotel, raising funds, setting up the website, publishing the proceedings, organizing the meals and poster sessions, renting the projectors ... you get my point.
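
As a sanity check, the back-of-envelope arithmetic above can be reproduced in a few lines. Every figure here is an assumption stated in the text (not measured data), and I'm reading "double it for the two program co-chairs" as two co-chairs each doing roughly twice a regular member's work:

```python
# Back-of-envelope reproduction of the PC overhead estimate above.
# All numbers are the post's stated assumptions, not measured data.
members = 15            # regular PC members
papers_each = 30        # reviewing load per member
hours_per_review = 1    # "at least an hour" -- a lower bound
meeting_hours = 2 * 8   # two full days for the PC meeting and travel

per_member = papers_each * hours_per_review + meeting_hours  # 46 hours each
co_chair_hours = 2 * 2 * per_member  # two co-chairs, each at double effort

total_hours = members * per_member + co_chair_hours
accepted_papers = 25

print(total_hours)                             # 874, i.e. "around 870"
print(round(total_hours / accepted_papers))    # 35 person-hours per accepted paper
```

Even with the one-hour-per-review lower bound, the overhead works out to about 35 person-hours of committee effort per accepted paper.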

The question is, does all of this time and effort (a) produce better science, or (b) lead to greater understanding or impact? I want to posit that the answer is no. This process was developed decades ago, in a pre-digital era when we had no other way to disseminate research results. (Hell, it's gotten much easier to run a program committee now that submissions are done via the web -- it used to be that you had to print out 20 copies of your paper and mail them to the program chair, who would mail out large packets to each of the committee members.)

But still, we cling to this process because it's the only way we know how to get PhD students hired as professors and get junior faculty tenured -- any attempt to buck the trend would no doubt jeopardize the career of some young academic. It's sad.

How did we get here?

Why do we have these processes in the first place? The main reason is competition for scarce resources. Put simply: there are too many academics, not enough funding, and not enough paper slots in good conference venues. Much has been said about the sad state of public funding for science research. Too many academics competing for the same pool of money means longer proposal-review processes and more time spent resubmitting proposals when they get rejected.

As far as the limitation on conferences goes, you can't create more conferences out of thin air, because people wouldn't have time to sit on the program committees and travel to all of them (ironic, isn't it?). Whenever someone proposes a new conference venue there are groans of "but how will we schedule it around SOSP and OSDI and NSDI and SIGCOMM?!?" - so forget about that. Actually, I think the best model would be to adopt the practice of some research communities and have one big mongo conference every year that everybody goes to (ideally in Mexico) and have USENIX run it so the scientists can focus on doing science and leave the conference organization to the experts. But I digress.

The industrial research labs don't have the same kind of funding problem, but they still compete for paper-slots. And I believe this inherently slows everything down because you can't do new research when you have to keep backtracking to get that paper you spent so many precious hours on finally published after the third round of rejections with "a strong accept, two weak accepts, and a weak reject" reviews. It sucks.

Innovative != Publishable

My inspiration for writing this post came from the amazing pace at which innovation is happening in industry these days. The most high-profile examples are crazy "moon shot" projects like SpaceX, 23andMe, and Google's high-altitude balloons to deliver Internet access to entire cities. But there are countless other, not-as-sexy innovations happening every day at companies big and small, focused on changing the world rather than writing papers about it.

I want to claim that even with all of their resources, had these projects gone down the conventional academic route -- writing papers and the like -- they would have never happened. No doubt if a university had done the equivalent of, say, Google Glass and submitted a MobiSys paper on it, it would have been rejected as "not novel enough" since Thad Starner has been wearing a computer on his head for 20 years. And high-altitude Internet balloons? What's new about that? It's just a different form of WiFi, essentially. Nothing new there.

We still need to publish research, though; publication is important for driving innovation. But we should shift to an open, online publication model -- like arXiv -- where everything is "accepted" and papers are reviewed and scored informally after the fact. Work can get published much more rapidly, and good work won't be stuck in the endless resubmission cycle. Scientists can stop wasting so much time and energy on program committees and conference organization. (We should still have one big conference every year so people still get to meet and drink and bounce ideas around.) This model is also much more amenable to publications from industry, which currently has little incentive to run the conference submission gauntlet unless publishing papers is part of someone's job description. And academics can still use citation counts or "paper ratings" as the measure by which hiring and promotion decisions are made.


  1. Good points, Matt. Some academics in my field are doing something worthwhile to change the publication model; its first iteration was a great success. For those of us in industry, it's a good model: if you have something you want to publish because it could advance the broader field, you just put it on arXiv and then "submit" to ICLR or similar venues. One downside of these models, though, is that they do not allow double-blind reviewing, and there's some evidence that knowing who the authors are influences reviewer judgments.

  2. These are great points, Matt. Submission to arXiv provides the necessary timestamp and decouples submission from evaluation by program committees. Program committees could very well still exist, with conferences becoming selections, analogous to film festivals: a paper could then carry "Official selection, SIGCOMM 2014" along with other stamps. An alternative is the rolling submission model adopted by VLDB, which removes the once-a-year synchronized submissions by everybody and makes the feedback loop much tighter, since there is a deadline every month. Once a year they still hold the mongo conference, where the papers accepted (and published) each month are presented.

  3. The problem with the open publication model and attribution is that a mostly incomprehensible paper can still be the first to propose an idea. No one wants to read that paper. (I've read them on program committees; it's miserable.) In contrast, while I sometimes find SOSP papers boring, at least by the time they've made it through the review process, I can actually understand what they're getting at.

    While the open publication model might still result in the best papers filtering to the top, I suspect it will also be beset with people complaining that the best papers ripped them off--that they, too, had that same idea far earlier, but that no one recognized their mostly nonsensical paper at the time.

    1. @Sean Rhea, this problem does not arise because of OA; it just becomes more visible when both the obscure and the famous articles are freely available. So OA actually improves the situation. Instead of the old scenario, with people commenting by the coffee machine that "the results of these three recent famous articles are just a rephrasing of a special case from an old memo that appeared in the '70s in French or Russian," you now have the old memo and the three articles on your table, and it's up to you to form your own opinion.

      What's more important is that communication is faster among people with independent, original thinking, and that it favors those with a good nose for detecting valuable research over those who rely on others to tell them which is which.

    2. Part of the problem is that we judge academic success by "who had the idea first?" as opposed to "who made the idea accessible?". A paper that no-one can read might as well not have been written at all.

      Of course, sometimes (or often?) the same people who are capable of producing a certain idea are also completely at a loss when it comes to explaining that idea to the uninitiated. Perhaps we need to place more value on the kind of collaboration where one researcher explains the results of his otherwise incomprehensible colleague.

      (To be sure, there are instances where precisely that has already happened, and where the explanatory paper is the one people remember and cite most of the time - even if the idea did not originate there.)

    3. The "first" vs. "best explained" question has been challenging science for centuries (arguably it was part of the great Newton-Leibniz feud). I find the history of Euler's polyhedron formula (V - E + F = 2) an especially good illustration, in that successive "proofs" kept being wrong, and thus were "poor explanations," yet the search for a real proof was critical to the development of modern topology. (Proofs and Refutations by Imre Lakatos is a wonderful account of how mathematical sausage gets made, from great intuitions badly justified to fully justified but somewhat boring textbooks; for more detail on the math at a bedside-book level, David Richeson's Euler's Gem is outstanding.)

      The late Gian-Carlo Rota had very interesting things to say on the trajectory from "idea" to "explanation" in his delightful Indiscrete Thoughts. Basically, he argued (among many other points) that a result is not really proved until it is obvious -- that is, until the argument has been completed and refactored so that all intermediate definitions and steps follow trivially from previous ones. Settling a territory takes very different skills from exploring it for the first time. We tend to glorify the explorers, but it is the settlers who enable the accumulation of knowledge. Curiously, though, conference reviewing favors settlers, maybe because so many of the super-organized folks who run conferences come from the settler clan.

  4. I don't think most people really understand the benefit that comes from being forced to write down your ideas coherently enough that someone you don't know and have never met will understand and believe them. This kind of clarity is not something that comes naturally to most people, and two aspects of the peer-review process (the pain of having to get the reviewers to agree that you've written something that others should read, and the career/prestige/brownie-point benefits from succeeding at this) can really help -- IF the peer reviewers do their jobs well.

    My sense is that most people really don't understand their own work until they try to explain it to someone else; doing that in writing is harder but better than doing it with a whiteboard or slides.

    A case in point: HP has an annual internal technical conference (creatively named "Tech Con") which requires relatively short submissions, which are then peer-reviewed by other HP engineers. You can't get into the conference unless your paper is accepted, and a LOT of people want to get in. What this has done is to (1) teach people, especially those outside of research, to communicate clearly about what they are doing, (2) force them to be more honest about whether their ideas are good, and (3) create much better lines of communication across groups within the company. And I suspect other companies, besides HP, have similar mechanisms. So publication doesn't necessarily have to be tied up with the publish-or-perish nonsense to be beneficial to both writers and readers.

    Fundamentally, I agree that a paper that nobody can read is a bad paper, and when I review things, I generally don't give good grades to unreadable papers ... although if a paper can probably be fixed via shepherding, I'm more generous. But there's a tension between rewarding people (via paper acceptances) for having good ideas, vs. rewarding them for doing a good job at explaining things. Look at the complex history of the Paxos paper(s), for example.

  5. Also: I'm on the Steering Committee for SIGCOMM, and we're trying to figure out whether/how to expand the opportunities for publishing papers ... I think our goal is to get a "robust discussion" going about the various alternatives. It will be interesting to see how people feel about the inevitable conflicts between competing interests (e.g., "SIGCOMM needs to be very selective so that my tenure case will look good" vs. "SIGCOMM needs to accept more papers so that my tenure case will look good" vs. "I just want to know which papers are worth reading").

  6. Jeff, I couldn't agree more that publishing papers is a really important way of structuring your thinking. The process is at least as important as the result.

    In fact I blogged about this a while back:

  7. I share your opinion. And this applies not only to the way ideas are conceived in academia (papers, reviews, conferences, etc.), but also to the way simple projects are carried out inside universities. Set aside the scientific content of academic projects (which is indeed important) and consider only the deliverable products that some research projects promise, and it becomes clear how slow things really are. Believe me, a small piece of software that a small team of young engineers in industry could develop in 3 months usually takes semesters to reach its goal inside a university. Projects can't count on resources, students come and go depending on scholarships, and the project struggles to still make sense after a number of theses and papers have been written based on partial results of the "science" behind it, which always has higher priority, naturally. To make things worse, some of the scientific contributions of these messy projects don't seem to contribute much to the state of the art. People seem more concerned with keeping the "academia engines" running, with all the conferences, travel, scholarships, and H-index micro-celebrities, than with actually delivering innovative artifacts to society.

  8. You need some kind of feedback. If your product is Google Glass, then your feedback comes from customers. If your products are ideas at the frontier of knowledge, then your customers are the experts in your particular field, who are generally comparatively few. The reviewing process allows experts in your field to become aware of your work and give you feedback. Reviewers act as a kind of "compiler" for our "code" (papers). As far as I know, there is no "automatic" procedure for this yet. It is very hard, especially for young researchers, to turn ideas into readable papers. Reviewer feedback is our guide. People working in companies are evaluated according to sales; by analogy, in academia your customers are the researchers who review and cite your work. Believe me, your "funny" Google Glass / Google Chrome would not exist without the many papers out there that further improve our understanding of nature. I agree that we need more open-source things anyway, but reviewing processes are important in academia. And academia is still the leader in the realm of ideas.

    Best Regards,
    PhD. Perotti, Juan I.

    1. I fully agree in giving feedback, and nothing in my proposal does away with peer review. I'm just arguing that peer review for conferences and journals need not block progress on the research itself.

  9. I like arXiv very much, but there's an issue here: each researcher's time and attention are limited. What conferences and journals do is filter papers for them (leaving, hopefully, what is worth reading; I'm being generous here). Peer review is used not because it's perfect, but because it's the "least bad" method, and supposedly it's fair. If you have a place where everyone publishes whatever they want, who is going to have time to read through it all? Readers will have to make a selection, and chances are they will make it based on *who* wrote the paper and a few keywords in the title or abstract. I'm not sure how that helps. There's too much data/information out there; we need filters. This just seems to move the filter somewhere else.

    1. It is my understanding that some scientific communities (e.g., theoretical physics) have very successfully incorporated arXiv into the publication model. People do spend a lot of time reading early papers "published" there, but they also submit the work to more traditional journals. We can still have a council of elders deciding which papers are worthy of "selection" using a program committee model; I just think it's silly that we withhold publication of results based on a 1950s-era model of peer review.

  10. Matt, your observations are spot-on for a fast-moving field such as computer systems, where, because of issues such as scale, big players like Google have lately been the innovators.

    But in my opinion, your comments only apply to such fast-moving fields (drug- and medicine-related research comes to mind as another example). Most other branches of science and mathematics -- and, for that matter, theoretical computer science -- are extremely slow-moving, mainly because truly novel ideas are produced at a very slow rate. For such fields, the "traditional" publication model seems to be ideal.

    1. Wouldn't it be great if more fields were fast-moving?

    2. Sure, but the point is that it's inherently harder in some fields to innovate than others.

  11. To follow up on my previous anon comment: academic computer science departments have had a great deal of trouble convincing deans that conference publications are what matter, not journals. After many, many years of effort, deans are now somewhat convinced. And that's not even radical! After all, both journals and conferences are "archival" (whatever crap that means; hell, everything is "archival" in this day and age).

    Your suggestion is so radical that deans are bound to go into a hissy fit if such an idea is even proposed by academic computer scientists. And of course, this idea is likely to find strong support only among computer systems researchers, further creating a rift in departments.

    For better or worse, the publication model you propose is never going to be adopted, despite its obvious benefits. Innovation in computer systems will come from industry. Conferences such as SOSP and OSDI, dragged down by inertia from academia, will either (a) become irrelevant or (b) be dominated by industry (aren't they already?). And academia will lose as a result. Add MOOCs into the equation, and the question that comes to mind is: why are we running universities any more?

    MOOCs + Fast-paced, non-paper-oriented, industry-driven research is the way of the future.

    1. Why run universities anymore...? This is a truly ridiculous statement, and so short-sighted that I'm not even going to bother writing you a full reply!

  12. I have also been thinking about these topics over the last 6+ months.
    I really do think that bad and slow paper reviews are slowing human progress.

    That is why I propose a radical model for academic publications: accept all papers!
    There is already so much noise out there -- so many journals, papers, and conferences -- that even the filtered material is too much to read and follow.

    Therefore, since there is already too much noise, we should skip the filtering process altogether: it is error-prone (many good papers get rejected) and causes long delays (often a year nowadays).
    If you are worried about quality, you should not be: good papers and ideas have a way of coming to the surface pretty quickly. See the recent speed of publication and sharing via arXiv and Google+/social networks.
    Also, you will already have an idea of where good papers come from, because you will know a few people and groups and will be monitoring their publications, just as you do with a conventional "high profile" (but boys' club) journal.

    If you are worried about teaching people how to write good papers, you should not be: one can organize informal reviews and pre-reviews with colleagues who are in the same research area and WANT to read the paper. Young people can learn how to write good papers from those reviews and from collaborating with the group.

    Conferences? I would not give them much importance anymore, for the same reason: too many, too much noise.
    One big conference? We already have it: it is the Internet [again!], where people can post talks and share them with the whole world.

    Want to go to a conference to mingle and have fun? Sure, do that; it is usually the best way to find out what does not work, anyway, and you can have a good time with colleagues. Always good for starting a collaboration. But conferences are rarely the place to read and fully understand a piece of work. You can keep doing that through free online papers, talks, seminars, group meetings, journal clubs, and the occasional Skype call with colleagues.

  13. Matt,

    So what happens with physics research, then? Any chance of it getting done in an industrial setting like the above? Nor can I imagine something like the recent developments in machine learning gestating in an industrial setting; only in an academic-style research lab. What you say is perhaps true in areas where systems are to be built, but that alone is not what constitutes "research".

    1. I don't know what you mean. Physics research has long been done in industrial settings: take Bell Labs for example. And Machine Learning is very hot at companies like Microsoft, Google, and many others.

  14. Matt - thanks for sharing!

    It is this field's model of low-acceptance-rate, high-page-count conferences that is to blame for much of the slowdown in progress, in my opinion.

    I wish our publication model were closer to what the vast majority of other fields do. That is: preliminary work is written up quickly in a short conference paper; conferences accept basically all papers that are in scope and understandable; and conference interest and feedback are used to prepare a definitive journal publication. This model works even in fast-moving fields.

    Our model, on the other hand, leaves us without venues for publishing timely work (that can be scooped if not disclosed) that is not 100% complete. It defeats the purpose of having conferences in the first place (what use is conference feedback when the work is already expected to be more or less complete?), and prevents industry from contributing.

    It negatively contributes to the field in many other ways as well. For one, it puts up very high barriers for new entrants to the field. Also, it makes it very difficult to collaborate with people in other fields, who expect conference papers to be short reports on preliminary findings and are taken aback by both the effort our conferences require and the sharply negative feedback that comes with the low acceptance rates.

    1. I don't think there is any evidence to suggest that the old ways of 19th-century science are superior. There is much that needs to be improved in the way we communicate in CS, but it starts with using the latest technological advances to our advantage. Simply adopting methods hand-optimized for the printed-journal world of the last century would be a step backwards.

  15. >> we should shift to an open, online publication model -- like arXiv -- where everything is "accepted" and papers are reviewed and scored informally after the fact.

    Sorry Matt, but that's a horrible idea! Do you really want top CS systems conference proceedings to mimic the Hacker News and Reddit front pages? Clearly you have too much faith in the "wisdom of the crowds"!

    1. Why do you think that moving to this model will turn conferences into Reddit? We can still have program committees reviewing submissions for "presentation" at a conference, and the bar for selecting papers can continue to be high. But that process need not block the initial publication of the work online, nor need it be the only selection mechanism (i.e., a paper might be popular even if it doesn't pass the "SOSP bar").

  16. Nobody is blocking the initial publication of a work. In TCS (theoretical CS), posting to arXiv along with submitting is the norm, not the exception. Even in systems, people often put out tech reports before submitting.

    This is the specific part of your proposal that rang alarm bells for me:
    >>Papers are reviewed and scored informally after the fact
    If reviews are public, and people can see other reviews before writing their own, then the end result will mimic the Hacker News homepage. The most highly rated papers will be on extremely controversial ideas (it's trendy/cool to be a non-conformist), and/or written by very famous people. Maybe I'm misunderstanding your idea (it would be great if you could elaborate on how you think this reviewing should be done).

    When people spend months (or years) of hard work on a paper, they deserve the attention of the PC, and it's wrong to call this overhead.

  17. Hi, very interesting analysis. The title is a rhetorical question, in my view. I've discussed these issues with many people and find it hard to come up with feasible solutions. However, feasibility depends mainly on whoever could provide the technological infrastructure to decouple publishing from the journal. The public domain is flooded by its own bureaucracy, while the current editors are not interested in changing the game as long as it lasts, so they're playing "passive-aggressive," in my opinion. It's ridiculous to see them shaking when something happens outside their control...
    So much innovation in alternative academic publishing is happening every day. I suspect that if some external player joined the game and delivered something really different, it would be a great joy for everybody. I'm really struggling to understand why no big player has yet considered creating a free, collaborative platform to incentivize scientific discovery. Google, for instance, is an advertising company, but I suppose it has the resources to build something like this. The MOOC model relies on ultra-profiled human resources, so why not give people the opportunity to incubate the academic publishing process so that the pace of scientific innovation is not bound to bureaucracies and editorial oligopolies? We should definitely start thinking about something different. Here is something I drafted some months ago:

    However, I don't want to write a whole post here in reply to this one; I have already done so in the first three chapters of my open e-booklet... enjoy it!


  18. Matt,

    You are pointing out two major issues: (1) overhead, and (2) good ideas getting blocked due to slow-moving publication model.

    (1) Overhead: Actually, I am not sure I entirely agree with your calculation of overhead. You calculate the overhead associated with each "accepted" paper. I think -- and you will probably agree with me -- that a better way to judge the inefficiency or overhead of this publication model is to estimate the ratio of "extra" work (hours spent by reviewers) to "actual" work (hours spent toward making the first submission) for any given paper.

    For simplicity, let's assume a paper gets published on its third submission, on average. This assumption directly affects the overhead calculation, so let's offset this "conservative" assumption by assuming that no "actual" work goes into the paper after its first submission. We all know that this is far from the truth, but it will only make the overhead number look "high".

    If the first submission happens after six months of work by two grad students, roughly 50 weeks' worth of time is spent on "actual" work. Three submissions, each receiving five reviews, means a total of 15 reviews. That roughly amounts to one week of work (assuming approx. 2 hours per review). So the overhead is approximately 2-3%.

    Of course, we can always come up with more realistic assumptions to account for this overhead more accurately, but this estimate is not far from the truth.

    I completely agree with you that in absolute numbers the overheads are high; but relative to how much time is spent doing the work, the "feedback" overhead is not that high (as the above calculation shows). Of course, we can be more efficient, but I think you will agree that any kind of feedback -- e.g., code reviews, grading, etc. -- carries a 2-3% overhead.
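
    [Ed.: The commenter's ratio is easy to reproduce. All figures below are the commenter's own assumptions (two students, six months, three submissions, five two-hour reviews each); the 40-hour work week is an added assumption, not stated in the comment.]

```python
# Reproducing the commenter's overhead estimate (their assumptions, not data).
actual_weeks = 2 * 25            # two grad students, ~six months each: "roughly 50 weeks"
reviews = 3 * 5                  # three submissions, five reviews each
review_hours = reviews * 2       # ~2 hours per review -> 30 hours total
review_weeks = review_hours / 40 # 0.75 weeks, i.e. "roughly one week" (assumes 40h weeks)

overhead = review_weeks / actual_weeks
print(review_hours)              # 30
print(f"{overhead:.1%}")         # 1.5% -- the same ballpark as the commenter's 2-3%
```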

    I agree with you that our publication model is broken, but I don't think arXiv + post-publication review can repair it as-is. Our model is broken not because the associated overhead is too high (as shown here), but because our minds are too closed. We review to reject, not to accept. We can always refine the publication model, but first we have to refine our mind-set. It's hard to imagine disagreement here.

    (2) Good ideas getting blocked by a slow-moving publication model: I agree with you here. But, on second thought, an idea as good as Google Glass is not likely to stay hidden behind the anonymous submission process. Academics do give talks about their ongoing but unpublished work. An idea that can be transformed into a product is likely to break free from our "more scientific, less product-oriented" publication model. In fact, I do not know of any "revolutionary product" idea that was delayed because of our publication model. That is primarily because our current publication practices put more thrust on "science and advancing current knowledge" than on "having an impact on the real world".

    So a related thing we need to be more concerned about, IMHO, is how to produce more research ideas that can have real-world impact more quickly. I think this is somewhat related to your second concern. Our model currently favors slightly incremental but less controversial and well-evaluated ideas. We need to foster a culture of accepting more unconventional ideas; the Hot-X workshops do some of that. Also, what can we do to minimize the fraction of research ideas that go unrealized even over long timeframes (say, 40-50 years)?


    1. There are many problems with the current model. This is certainly one: we review to reject, not to accept.

      Another one is that, given the time pressures of program committees, it is far easier to accept an epsilon improvement on a hot topic than to read and ponder a farther-reaching idea whose benefits are not so immediate, precisely because it reaches so far into the future.

      The cascade model is also a tremendous waste of resources. In some sub-disciplines it is becoming the norm to be offered acceptance at a smaller satellite workshop when rejected from a bigger conference. We have accepted some of these offers and declined others, but every such acceptance reduces the load on the system.

  19. No offence, but this reads as yet another typical post from this blog:

    - Google and Google alone can solve the World’s problems!
    - Academia just doesn’t cut it and that's why I left a University environment and now work at Google.
    - Here is a list of things that are better in an industrial environment (Google).
    - Did I mention that I now work at Google and that it's so much better than a University environment?


    P.S. I used to enjoy reading this blog.

    1. Not that we are being defensive or anything, eh?

      Matt was embedded in academia at a really good institution, and now he has an outside perspective that allows him to point out things we could do better. I, for one, welcome Matt's comments and read them with interest whether or not I agree with them, as the case may be.

    2. Sounds like my blog is not for you then, glad you're not reading it anymore!

    3. I greatly enjoy reading these posts, even more so because they come from someone who has been at top-tier computer science places in academia, industrial research, and engineering.

    4. I also enjoy reading these posts. I like the blend of insights from both academia and industry. Many of Matt's posts have been helpful to my own systems research group in the UK. Keep posting :)

  20. This comment has been removed by the author.

  21. Reposted the above comment with a link...

    Interesting aside about the open publication model: this is Google Scholar's attempt at ranking the top computer systems conferences by h5-index:

    The usual big-name, well-established conferences are at the top, but the open publication model (arXiv) also features at number 17.


Startup Life: Three Months In

I've posted a story to Medium on what it's been like to work at a startup, after years at Google. Check it out here.