Comments — Volatile and Decentralized (blog by Matt Welsh)

Artem (2019-02-11):
What position are you taking in XNOR.ai?

Anonymous (2016-06-15):
Industry is vast and has many examples of both more rational and less rational processes.

Zooming in specifically on grant proposals, for example: that could be compared to getting your startup funded by a particular firm. That's just as random, if not more random, than NSF panel recommendations: the outcome of your case will similarly be influenced by whether the reviewers had coffee in the morning, by the order in which they see the proposals, by how combative they are feeling on any particular day, and by two dozen other factors over which you have no control whatsoever. Within an organization, pushing a new project through could be straightforward in some settings, but is extremely difficult in others. In a well-run organization with a healthy appetite for risk you probably know most of the rules, and the approval process appears non-random to you. In many organizations, though, you will have next to no visibility into why your projects were picked up or not. Indeed, most organizations would default to not letting you propose risky projects requiring substantial funding commitments to begin with :).

You make an interesting point that in industry the decisions are made by people who have a vested interest in the work. That's true, but, arguably, NSF's decisions are made by representatives of the NSF, who hopefully have a vested interest in the success of their overall program, rather than by the panelists, who only advise the NSF. In industry, similarly, a decision-maker would have plenty of advisers and would depend on the quality of their advice -- no real procedural difference from the approach taken by the NSF. And there are real issues with advice quality in industry settings as well. The major one: it is often hard, if not impossible, to find people who are both knowledgeable and unbiased. After all, the entire corporate strategy consulting industry exists to solve just this problem: provide senior corporate decision-makers with [most likely] diligent, [hopefully] unbiased, and [at least somewhat] knowledgeable advisers. In general, I do agree that major corporate decisions are treated with somewhat more diligence than NSF funding decisions. I am not sure that this diligence necessarily translates to better decisions. Take the example of the MSR SV lab closure: Microsoft most definitely spent considerable amounts of time and energy investigating whether to do it. Whether in the end the resulting closure was based on realistic projections, biased opinions, or a coin flip, we'll never know.

As an aside, I think that current corporate decision-making processes are where networked-systems folks could really make a difference right now. There are control and communication networks underlying all of these issues that I have not seen anybody study with any kind of rigor (though I may not be fully up to date on all the relevant literature) -- and this is where there is tremendous impact to be had from improving things just a little tiny bit.
Anonymous (2016-06-15):
One problem is that reviewers reject a submission by arguing that some of their favorite papers are not cited. This is so annoying.

Anonymous (2016-06-13):
Another factor is that for really novel work a lot of the mathematics has not been refined yet. It's much easier to get something published that is incremental and has been developed over decades with a few changes. Something completely new will, of course, not be perfect, which the reviewer can use as an excuse.

Anonymous (2016-06-10):
In some ways, the fact (at least on the surface) that many professors delegate their peer reviews to their (usually more idealistic) grad students is a feature of the system, not a bug. This is interesting because it is generally viewed negatively, but in reality it adds a much-needed element of appreciation for newness to the process.

Matt Welsh (2016-06-10):
While you are right in principle, I am not sure how much this conflict of interest plays itself out in practice.
It seems exceedingly unlikely that a PC member could really influence the outcome of other papers in a way that would favor their own (though that might not stop them from trying). Anyway, it's an interesting point, but I just don't have data on how problematic this is.

seen (2016-06-09):
Good post.
In a paper review, reviewers have a conflict of interest unless they never submit to that journal or conference, because everybody is racing for a limited number of spots: SIGCOMM can accept only 20 papers, a journal accepts 15-20, etc. Thus, it is a good strategy to reject the papers of others, and a good number of people do that, unfortunately.
Another corruption is that, in many CS journals, some associate editors always publish their papers there, not in the other journals in the same area. How ethical is this?

The definition of 'conflict of interest' is not working. A person cannot review a paper from his own university, but it is fine to publish in your own journal, or to cherry-pick lenient reviewers for your paper and harsh reviewers for opposing papers. That is a widely accepted practice in today's academia. How sick is this?

In the company case, nobody has a conflict of interest when a project is approved, unless the company has a limited budget for projects. Companies like Google are probably more flexible, since money is not that much of an issue. That is probably why companies, by their nature, are less arbitrary -- not because they are good Samaritans.

Matt Welsh (2016-06-08):
As an expert on randomness, I have nothing but the highest respect for your opinion.
Still, I think there's a vast and measurable difference between the amount of care that goes into writing a paper review and making corporate decisions like shutting down a research lab.

While I can certainly imagine some companies treating major decisions as capriciously as the proverbial Reviewer #3 ("There's nothing novel about this work", "I don't understand why you did X instead of Y", etc.), I think you would agree that such a cavalier approach to corporate decision-making would not, in general, be a recipe for success.

I hope you are not trying to equate Microsoft's decision to shut down its Silicon Valley lab with the kind of bullshit decisions I see being made on papers on *every single PC I have ever served on*. I'm sure that the people affected by that closure were not happy about it, but I am willing to bet good money that the decision was made far more deliberately, and with much greater care, than the sloppy, mostly shoot-from-the-hip approach taken by a large majority of program committee members.

It is true that I'm fortunate to be at a stable company and working on a stable product team. If I were to join an early-stage startup, or even a new project team at Google, then my career would be subject to vastly different pressures in terms of raising funding, hiring good people, settling on the right product direction, and generally just being lucky. In such a case, I'd agree that the degree of entropy would be much higher than in the relatively stable situation I find myself in now. I still claim that those sources of randomness are qualitatively different from the process of academic peer review, but perhaps we can agree that there would be just as much uncertainty.

Michael Mitzenmacher (2016-06-08):
Matt --
There are many things I agree with in this post. Peer review does appear to be a semi-random process, where the bias may be in the right direction (we both appear to think that it is), so over longer time scales the law of large numbers applies, but over shorter time scales the results can be frustrating. (And unfortunately for many graduate students, or even young faculty, the short time scale is rather important.)

Where I think I'd disagree is where you say: "However, I believe the industry process is far less arbitrary." I'm afraid I don't think you really back up that statement with any actual evidence. (As a counterpoint, people at Microsoft Research Silicon Valley found Microsoft's decision to close their lab and fire almost all the people in it rather arbitrary.)

You may be experiencing your own personal selection bias. Google appears (to those of us on the outside) to be a well-run company that has enjoyed a long period of success. But that may not be a permanent condition, and of course many companies have far less pleasant circumstances.

So I'd happily agree that various aspects of peer review are frustrating, and certainly I've had reviews that I thought were not only misguided but ignorant and wrongheaded. Industry is indeed a different kind of game, but I'm unconvinced that we can ascribe to it a higher rationality.

Anonymous (2016-06-07):
Isn't it pretty obvious, at least for the academic case?

I'm sure it is to the audience Matt intended for this particular post.

In particular, pretty much everybody who wants an academic career wants academic freedom.
Unfortunately, the only way to have this on a long-term basis is to get tenure (*), and in order to have the freedom to take on projects bigger than a single person can handle, you need both that and research grants. Grants are made based on direct peer review, and tenure decisions are (purportedly) based on publication record, which is an aggregation of many peer-review decisions. Hence the article.

Again, everything in the preceding paragraph should be pretty obvious, and isn't nearly as clever as that t-shirt, which I am now lusting after.

(*) I'd argue that externally fellowshipped grad students actually have more freedom than any professor does -- but only for a few years.

Anand Gupta (2016-06-07):
> Rather than the success or failure of a project coming down to the dreaded Reviewer #3, it comes down to the team's ability to execute, targeting the right market, and attracting users.

This is broadly true in industry, especially at the macro scale, as business survival depends on it.

That said, "Reviewer #3" does still make an appearance in companies (especially those with siloed orgs); they're just called "Irritating VP of X" or "Obstructionist Manager #555".

The military has a lovely term for these people: "Blue Falcons". The etymology of the term is left as an exercise for the reader.

Klerisson Paixao (2016-06-07):
The NIPS experiment endorses everything you said.
http://blog.mrtz.org/2014/12/15/the-nips-experiment.html

Prabal Dutta (2016-06-07):
Matt, you wrote, "But aspiring academics should [know] how much of your success hinges on the purely stochastic nature of the process." Just curious: what is your definition of academic success (and failure)? And what is your definition of non-academic, professional success (and failure)? It might be useful to define these terms before debating the extent to which "the process" affects the outcomes.

Anonymous (2016-04-28):
"(...) explain what you're doing in clear terms (...)"
That, in my opinion, is a good recipe for mainstream research papers, in which you tell a story about what you've done to incrementally improve some "hot topic". You yourself say as much: "If by simplifying the problem just a little bit, you render your beautiful design unnecessary, it might be time to work on a different problem". However, innovation cannot always be fully explained through the established way of communicating.
My opinion is that the well-written papers you are referring to are somewhat boring papers: their average reading time is driven not by the content but by the acceptance/rejection task, and their value is directly proportional to the quantity of empirical results they are able to show. These papers are mostly muscular, with limited novelty.
Please note that I'm saying they are boring; I'm not saying that they are not important for the community.
Writing papers is a form of liberal art: mainstream research needs disruptive underground research to feed its business model, and underground research needs the mainstream research business model to survive. A perfect parallel can be made with underground music: blues, jazz, rock, metal, hip hop, and electronic were all underground music at their beginnings, and as such they were "strongly rejected" by society (i.e., the community) -- yet all of "current" music is deeply influenced by all of them!
It is also well known that some of the "most seminal" papers were initially rejected, since they did not follow any "good" writing recipe.

Matt Welsh (2016-04-27):
I think I was clear in my article that I try to avoid bias based on whether the authors are native English speakers. I think it's unfair to levy an accusation of cultural bias against me, and especially to do so hiding behind an anonymous comment.

Bad writing is bad writing. I think most people who have served on program committees will agree that when they have a large pile of papers to review, papers that are well written will be treated more kindly.
If you want to argue that papers should be evaluated only on scientific merit and not on the quality of exposition, I think you'd be in the minority.

Anonymous (2016-04-26):
I have always appreciated Matt's posts, because they ask or imply deeper questions about the highly systematized and rarely challenged value hierarchy that pervades academia. But this post is rather disappointing, because it does not seem as if Matt's time outside of academia has provided him much perspective on this issue.

The above content might be acceptable if he were coaching his PhD students at Harvard (and indeed, I'm sure he said much the same to his students; it's good working advice, and in many cases he may have jumped in and written those same introductions himself right before the paper was submitted). Or if it had been entitled "Pro Tips for PhDs and Professors" or "How (Some) Reviewers Think", it would have been great. But instead, it has taken the form of a self-righteous declaration of entitlements as a reviewer.

The essential message of Matt's article is "I reserve, even cherish, the right to reject your paper if I don't think your writing is good enough." In an increasingly international research community, this elitist argument clearly gives the advantage to those with Anglo and Western backgrounds, in the same way that the SAT Verbal section was a set of memorized code words that ever so conveniently happened to be in common use by upper-middle-class families across America and ensured their continued access to top universities. In a community with notoriously low accept rates, placing an emphasis on artful writing clearly favors a select group.
Of course, we can always argue that somebody who really deserves to publish here will learn all of those code words, or the art of writing "eloquent introductions", as Matt calls them. Perhaps it's just me, but don't the program committees of the systems community seem a little bit white compared to most of CS? Maybe I'm wrong.

I believe in strong writing, possibly the product of a liberal arts background that is sufficient to equip me to understand the bias that underlies Matt's argument. It irritates me to no end when I have to suffer through bad writing. But at the end of the day, globalization requires that we acknowledge that science is about advancing human knowledge at the fastest rate possible, not about writing pretty prose. If the cure for Ebola is on page 3, after a crappy introduction, you should accept. Or, for that matter, if they improve data centers by 0.5%, you should also accept, because that could indirectly be on the path to the cure for Ebola.

A more PC version of what Matt could have said would be "Well-written papers are more likely to communicate your ideas and improve your chances of acceptance, and of others citing your work." But I think it's good that Matt wrote what he did, because it gives us reason to reflect.

Nick Feamster (2016-04-26):
Nice post, especially the point about picking some papers to champion. It's far too easy to find reasons to reject papers. As a side note, I think we should be accepting more papers in general, as I think paper-as-final-outcome -- in any venue, regardless of imprimatur -- is a horrible metric for impact, given the randomness of the process.

I still remember a PC I was on where I reviewed a paper on BGP.
In the meeting, my argument for strong accept (which echoes your comment above) was simple: I learned something. The chair was astute enough to observe that if I learned something about BGP from reading the paper, then other people very likely would, too.

Anonymous (2016-04-26):
I would argue that Results is the most meaningless of all sections, and should not even be included in a paper. How can we trust authors' evaluations of their own work? It has been shown that, in a large majority of cases, even work presented at top conferences cannot be reproduced:
http://cacm.acm.org/magazines/2016/3/198873-repeatability-in-computer-systems-research/fulltext

Evaluation should not be done by the authors. You don't just trust a car dealer; you take the car out for a test drive before buying it. I don't see why you'd trust an academic who doesn't even share their code and data.

Anonymous (2016-04-24):
Well, one problem is that the review process serves two independent purposes: (1) choose a set of papers for the conference, and (2) provide feedback to the authors. If the goal is only (1), then skimming a paper that is to be rejected is fine.
If the goal is also (2), then authors are owed some feedback.

And increasingly, I see papers submitted for purpose (2), i.e., with the rough expectation that they will be rejected (though with the hope that they will be accepted, of course), but with the goal of getting some feedback.

Jacob Beal (2016-04-23):
I'll add another, which has started to truly infuriate me: not including any results at all.

I don't care about your ambitions, your desires, and your architectural notions. If you don't have any evidence to back up your statements, it's not science. I don't demand that people do real-world experiments, but at least run some simulations or prove a theorem or *something*. I've gotten a number of these recently, and it's gotten to the point where, after I read the abstract, I flip to the end to see whether there are any results at all before I even start on the introduction.

Anonymous (2016-04-23):
Making unsupported assertions. Don't claim "A is B" as a fundamental motivation for your work unless you can actually support it. In the area of performance, be especially careful not to automatically add "and slow" after "complex" in an assertion -- there's a huge difference between "difficult for you to understand" and "necessarily slow".

Resembling a high school science report. (Or: "I don't want to read your journal of personal discovery.") A good paper describes what you learned, and the experimental path a reader might take to reproduce your results, not the random walk that actually occurred.
(A really good paper can address some of your research missteps, but if you need this advice, then you don't write really good papers.)

Anonymous (2016-04-23):
Hi Matt. I subscribe to each and every word you've written!!!

Matt Welsh (2016-04-22):
This is a pretty good point. Much has been written about how broken the scientific paper review process is (on this blog and elsewhere). Still, I don't feel that I owe authors a substantial review (and a chunk of my time) just because they submitted a paper. If, on skimming a paper, it's clear it's going to get rejected anyway, I'll write a cursory review and move on with other things in my life. While there are things that could make the review load lighter, I don't think they would have a big impact on my disposition toward poorly written papers.

Anonymous (2016-04-22):
I strongly disagree with you. Science is more about being able to communicate one's results than about the results themselves. It's actually my duty as an author to write my paper in a way that intrigues the reader to read on. After all, the point is not merely to have a publication; papers should be read by many more people after they have been published.
This means that I (as a reviewer) wouldn't really do the respective authors a favor if I accepted a poorly written paper because I thought that the idea had some merit. If such a paper gets accepted, it will have no impact at all, since no one besides the original reviewers will read it. And if that is due to the paper's presentation, then it's really a waste that the potential of the ideas was not fully realized.

We don't write papers for ourselves; we write them for our readers. And that also means that we absolutely have to prepare papers in the best possible way for the reader. It's crucial to be direct in a review: if the presentation is really bad, we should say so clearly and give some recommendations for improvement.

Actually, as an author I'm angry with myself if the majority of the reviewers don't point out how well the paper is written. I can live with rejections based on the ideas. But if reviewers don't understand my paper or my ideas, or if a review says that the presentation was poor and that the paper has to be rejected on that basis -- then that was my mistake.

Anonymous (2016-04-22):
Background: I am neither in academia, nor do I publish scientific papers. However, as a research software developer, I read a lot of them.

I understand your pain regarding badly written papers. These usually turn out to be the ones with unbelievable promises and results. (FYI, these come to my desk to be prototyped.)
However, the following statement stuck out to me:

"Keep in mind I've got a pile of 20 or 30 other papers to review, and I'm not going to spend my time picking apart the nuances of your proofs and evaluation if you've bombed the intro."

I think the bigger problem is that each reviewer gets 20 to 30 papers to review.

The next question is: if you had only 2 or 3 papers to review, would you mind spending your time picking apart the nuances of the proofs and evaluation even when the author had bombed the intro?

If your answer is no, then there is nothing else to be done; it's just reviewer bias, and everyone has their own opinion. If the answer is yes, it indicates deeper issues with the reviewing system than with the writing.