Summary: Let's improve the NSF proposal review process by making it function more like conference program committees.
Intellectual merit: The core problem that this proposal addresses is the poor quality of many reviews submitted by NSF panelists. It is not uncommon for a proposal to be rejected with short, content-free reviews, offering little feedback to the authors. In many cases the scoring of a proposal is poorly justified, leaving the author mystified as to why they got such a low (or high) score. Recently, I had a proposal rejected where one of the reviews was essentially a single sentence in length. Not only does this not help the PI improve the work for later submission, but it leaves the impression that the review process is arbitrary.
(I'd like to emphasize that this is a problem that many NSF program managers have called attention to, but they are powerless as individuals to do much about it. So I believe the fault rests with the research community, not with the NSF PMs.)
A key problem with NSF panels is that there is no community standard for what constitutes a good (or even acceptable) proposal review. I am a strong advocate of the approach used by the systems community, where paper submissions are given extremely detailed reviews with constructive feedback. Given that we spend so much effort reviewing papers, couldn't we also give the same effort to NSF proposals, which arguably are more important than a single paper?
It is my impression that NSF program managers also have a hard time pulling panels together, mainly because people are so busy, and don't have the time to travel to DC. Yet many of the potential panelists freely serve on conference program committees with much higher reviewing loads and an expectation of much more detailed reviews. (A typical panelist will review 8-12 proposals, whereas a competitive conference will require TPC members to review 2-3x as many papers.) Why? One reason, perhaps, is that program committees are recognized for their work, and serving on a TPC is an important indication of one's stature in the research community.
These two issues are related. Since serving on an NSF panel is seen as "paying your dues," rather than an activity you take pride in, there is little incentive to write good reviews. However, if you write a bunch of crappy reviews for a TPC, you can earn a reputation as someone who doesn't take the process seriously and might not get invited back in the future. So the public recognition of the TPC and the quality of the reviews go hand in hand.
My proposal: Let's have NSF panels emulate the conference program committee model. First, we should recognize panelists publicly for their work. Being on the "NSF NeTS 2010" panel should be as prestigious as serving on SIGCOMM or SenSys. The NSF should create a web page for each year's competition listing the panelists and the proposals funded through that competition (the latter information is available, but a little hard to dig up), so the community can take pride in the effort and see the outcome of the process more directly.
Second, establish expectations for high-quality proposal reviews. If you are a bad panelist, you won't get invited back in the future, so you won't gain the recognition of being asked to serve. Panelists will be chosen from among the best people in the field, where "best" is defined both by research contribution and service.
Third, hold panels somewhere other than Washington DC. Since I live in Boston, it's easy for me to get down there, but for people on the West Coast it is much harder. If panels are run in different locations around the country, the travel burden can be spread around more evenly.
I will be the first to admit that the conference program committee model is not perfect -- see my related posts here and here for thoughts on that. But in my experience it is better than the (typical) NSF panel.
Of course, NSF's conflict-of-interest guidelines will have to be tweaked. Currently, you can't serve on an NSF panel for a program for which you have submitted a proposal. (The upshot is that panels tend to consist of people who didn't get their act together to submit a proposal, which may not be the best group of scholars to evaluate the work.) Recently NSF has separated larger programs into multiple panels and simply ensured that someone can't serve on the specific panel for which their own proposal is under consideration.
Broader impacts: Improving the proposal review process and providing more constructive feedback to PIs will result in better science.
In the comments, please indicate your rating for this proposal:
Excellent
Very Good
Good
Fair
Poor
Serving on a theory PC is also considered prestigious, but it does not prevent papers from receiving content-free reviews. Nearly every paper I have sent to a top theory conference has had at least one review that is less than three sentences long (and if the reviewer bothers to state any facts about the paper, they get them wrong). So if you think that the systems community is doing something to encourage detailed and useful reviews, it may be partly due to associating prestige with serving on a PC, but it must also be partly something else that you are doing better than the theory community.
You talk about the PC reviews as if the PC members themselves write them (or did I misunderstand?). In theory PCs, I think it is common practice to farm out the work to sub-reviewers, who are then not held to account at all for a terrible review. Perhaps requiring the reviews to come directly from the PC member would help, but this is already done with NSF proposals, right? So that can't be the whole explanation either.
A lot of people would raise hell about how busy they are and can't do this; to them I would say, well then don't serve on the PC. Prestige doesn't come free. I think that in general, science would vastly improve if everyone thought hard about what they have time to do before committing to doing it, and then really did it right, instead of playing this ridiculous game of collecting vast bullet lists of CV "accomplishments" like 8 PC's per year and 30 papers per year, which no human could possibly really do properly without a 90-hour day.
I'm not too bent out of shape visiting D.C. for panels. Given that the NSF PMs have to crank through a huge number of these proposals, it makes some modest sense to keep them put and have everybody else come to them. And now you can use your own laptop without having to get it checked out by the NSF anti-virus security gestapo.
Another issue here is the NSF's conflict-of-interest system. For big programs, like Expeditions in Computing, it's acutely problematic. My latest proposal in that program had 291 listed conflicts of interest. Multiply that by our competitors and it's amazing they could find reviewers at all.
Ultimately, the NSF is the flip side of the argument for double-blind conference submission. We use double-blind reviewing at conferences because we feel reviewers cannot otherwise give a paper unbiased consideration of its merits. (I'm not sure I agree that this is necessary, but let's assume it is for the sake of discussion.) In grant review, you're explicitly allowed to take into account the reputation of the PIs. Yes, these PIs can pull off their proposed work. Those ones, I'm less sure. In a world where you're explicitly allowed to make such judgements, the need to avoid COIs is clearly a pressing concern. Should we have double-blind grant submissions? Could that even work? Somehow I doubt it.
And, if you think the NSF gets it wrong, the State of Texas has an on-again-off-again grant program that we all apply to. Naturally, every academic in the state is conflicted, so they get unknown out-of-state reviewers to do the reviews, and we get back maybe one or two sentences per review. One of my colleagues refers to it as "the lottery." You're happy if you win, but you treat it as a game of chance.
I like the high-level idea, but I think its implementation may be problematic, mostly due to the issue of anonymity. I'm probably one of the minority who does enjoy the annual trips to DC, but I'm repeatedly reminded that it is "very bad" to let on which general program you reviewed proposals for. This seems to be directly in conflict with trying to imbue NSF panels with that TPC prestige. Any ideas on how to avoid this tension?
Anon - In good systems conferences the PC members are supposed to write the reviews themselves. For whatever reason we have been brainwashed into believing that such things matter. I guess theory people know better :-)
Dan - I don't want to wade in on the single blind vs. double blind debate (again!). Suffice it to say that many conferences with a single-blind system operate reasonably well, or at least as well as your average NSF panel, so this doesn't concern me much.
Ben - my idea is to simply have the panel membership made public. Jeanette Wing made it clear to me that this is an NSF policy that is unlikely to change :-)
Let me review your proposal: the broader impacts are an afterthought, without even the typical mention of special attention towards women and minorities. That's going to hurt you; broader impacts should be integrated into the proposal, not tacked on as an afterthought. Rated: Fair.
The NSF review process is actually stricter than most PCs: each proposal has four reviewers, with more available from the panel if there is a disagreement or a point that needs more expertise, and there's a second day, which gives time to revisit problematic proposals.
NSF panels are also held outside of DC: I have been to some in Chicago and LA.
Reviewers who write single-sentence reviews are not invited back, and would be shamed at the table during the proposal discussion. Or it may be that a longer review was expunged after the discussion.
The real problem with NSF panels is that NSF program directors change every other year, which means that the history about the bad reviewers is lost when they go back to their universities (and the newcomer has to build a portfolio of trustworthy reviewers from scratch). So if your proposal addressed that point, maybe I'd rate it Good.
Sorry to hear you've had such a negative experience.
The panels I've seen have been very helpful. Reviewers have to talk to each other and reach a consensus. That means someone can spot a flaw or a benefit that the other reviewers missed, which then gets incorporated into the summary. It also means that panel reviewers update their reviews. That helps, because it makes the panel review system seem far less random and arbitrary.
One of the major problems, in my opinion, is that grant reviews are considered final. With papers, if a reviewer states incorrect facts, the author can demonstrate their incorrectness. The one-shot yearly approach taken by the NSF, with referees who can make up facts (especially when they are competitors), is severely flawed.
I find all your suggestions quite interesting, because these are some of the things that NASA uses to ensure that they have a successful review process.
(1) Planetary scientists add their panel participation to their resumes and it is generally considered to be an important community service on par with being a reviewer for papers or organizing a conference.
(2) Proposals are reviewed like papers, but are also given rankings based on the feasibility of their costs and work plans.
(3) Panels meet in different locations, including the midwest and DC area.
So, basically what I'm saying is that if one government agency can do it, another can. I think a lot of this depends on the program manager designing a good template for success. It also depends on good lead panelists who organize reliable panelists and reviewers with the expectation of a high-quality product.
The importance of participating in proposal review panels is regularly trumpeted at several of our planetary science conferences. Graduate students and postdocs are also urged to participate in such panels to learn how to write proposals, since it's a great way to see the differences between good and bad ones. We even have a yearly proposal-writing workshop at our main planetary science conference, led by one of the program officers at NASA Headquarters. This workshop certainly serves as a source of new panelists who have a good idea of how a proposal should be written, in addition to helping increase the quality of the actual proposals submitted to NASA.