|
Post by Aureliano Buendia on Jun 12, 2010 18:17:33 GMT -5
In the US Senate, one anonymous Senator can block important work for any reason.
NSF reviews are sort of like this. If most of the reviewers really like a proposal, but one doesn't, the project goes nowhere.
Any experiences to this, or other, effect out there?
|
|
|
Post by retired observer on Jun 14, 2010 11:59:40 GMT -5
The scenario suggested by the Administrator, which I assume was thrown out to readers to elicit responses, may be true or false depending on the program to which the proposal was sent, the program director responsible for making the funding recommendation, or the culture of the division in which the program resides. In most programs, and for most program directors, a single negative review counts disproportionately heavily only if -- big IF -- the review raises a critical scientific issue or flaw that the PD determines to be VALID and that is not adequately addressed by the positive reviewers. Sometimes only one reviewer out of a group will pick up a "fatal flaw." It is incumbent upon the PD to determine whether the "fatal flaw" described by the negative reviewer is in fact valid. Most good PDs will do their homework and make that determination.
MORE: If the program director is a GOOD program director (and most at NSF are, but not all), then a "bimodal" review outcome (some strongly favorable reviews and some strongly negative reviews) means that the PD has to do more work! The scientific validity of the comments in the reviews needs to be checked; any potentially biasing relationships between the reviewer(s) and the proposer need to be double-checked; etc. AND, most importantly, the reasons for the disparate reviewer opinions need to be evaluated.
For example: suppose some reviewers think the project is imaginative and creative, that it would shed light on an important scientific issue or change the way scientists in the field think about things, and that the proposer has the wherewithal to pull it off if it can be pulled off at all -- and therefore they love it -- while other reviewers think it simply can't be done because it hasn't been tried before, or think the hypothesis is wrong to begin with, and therefore hate it. Then the PD would realize that this is a "high risk, high reward" project and would probably (not necessarily, because funding is always tight and other considerations have to be taken into account too) recommend funding for it. On the other hand, if all the reviewers who loved it turn out to be good buddies or friends of the proposer, while all the reviewers who found significant problems turn out not to know the proposer personally, and if the problems cited by the negative reviewers are demonstrably valid while the praises of the positive reviews are either vague or unsupportable by the proposal itself, then a good PD would decline that proposal in a heartbeat.
Good science is declined at NSF all the time. The reason is simply that more good science is being proposed than NSF can afford to support. I would estimate that, in the Directorate for Biological Sciences, about 65% of the proposals received are really, really, really good.
However, depending on the division, the program, and the fiscal year, funding constraints allow only about 7-25% of the proposals to be supported (yes, seven percent, in some cases). It is heartbreaking. Once the "good science" is identified, PDs need to consider many other factors in order to reach funding recommendations. Obviously, proposers whose projects are really, really good but who don't get funded are angry. Of course they are! It isn't fair. It isn't. How can it be? But here's a word of caution: imagine if NSF suddenly had four times as much money to spend on science. What would happen? Would all the really good science be funded and everyone be happy? Of course not! Because really good scientists are full of really good ideas, and they would send in at least four times as many projects, or bigger, more complex projects that cost four times as much! And scientists who maybe aren't all that creative or productive, and who generally don't even submit to NSF, would start to submit because the likelihood of funding seems to have gone up. That's human nature.
And no, I don't have a solution to the problem, other than to keep making sure that most, if not all, of the program directors and other scientific staff at NSF are top-notch and doing their best to make good funding recommendations that will result in great new scientific breakthroughs.
|
|
|
Post by blanca on Jun 15, 2010 10:20:25 GMT -5
I have sat on many NSF panels, and it really just fuels my frustration with the NSF review process. I think it's ironic that I am continually asked to review proposals and sit on panels when my own funding rate is miserable. One thing in recent years that bothers me a lot is how panels currently operate. A few years ago, the whole panel would sit and listen to the discussion of any one proposal. Even panelists who hadn't read or ranked it could offer an opinion and question the alpha male who was hijacking the discussion, or maybe back up the viewpoint of a subordinate panelist. These days, everyone who was not assigned the proposal pays no attention to the discussion. They have their heads buried in their laptops, busily knocking out their panel summaries so they can leave at the earliest possible moment. With only three people, and possibly the program officer, participating in these critical exchanges, it is too often the case that the opinion of one (the loudest) carries the day.
|
|
|
Post by Anon on Jun 15, 2010 13:43:35 GMT -5
This is exactly what I experienced the first time I participated in a panel, and I came out feeling completely disillusioned with the process. Two or three randomly chosen people are evaluating the proposal. The program officers are also sometimes not paying attention. Everyone else is typing away or checking their personal email. The panelist I was sitting next to was always on email. So I am not sure how this process works, who those are that get funded, and how objective the evaluation is. I have been given plenty of advice and suggestions, but after my participation, I just felt it was akin to a 'being at the right place at the right time' situation. And yes, some here and elsewhere say that those who have not received funding are just showing sour grapes, but then why not? Those who get funded have no complaints - duh, they just got funded.
|
|
|
Post by anonomy on Jun 22, 2010 13:04:46 GMT -5
"2/3 randomly chosen people are evaluating the proposal." I'm confused by this statement. I am guessing it is a blanket statement that does not reflect the differences within the system. Different directorates, divisions, and even programs set up reviews in slightly different ways - the only hard rules that really come to mind are: at least 3 reviews of each proposal (though "retired" could correct me), and no involvement of reviewers/PDs with conflicts of interest.
Now to the next part of this. I am insulted to hear someone claim that the panel is 2/3 random. The PDs at NSF put an extraordinary amount of work into getting reviewers for the panels and obtaining ad hoc reviews. You're right in recognizing that the panel is not made up entirely of people you know in your specific sub-field/line of inquiry, but that's because the proposals submitted to a program are never that narrow. So, for a panel, a PD has to find a sufficient number of reviewers with experience covering the range of proposals submitted, who are willing to descend upon Arlington at the same time, are not in conflict with the proposals submitted, and maybe even represent diversity (though this is opt-in for reviewers and a required report for NSF, so it always looks bad unless everyone willingly identifies themselves). That is a tall order indeed, so it is not surprising that panels are not solely constituted of perfect matches to each and every proposal; that's why PDs use ad hoc reviews to get input from specialists who couldn't or wouldn't otherwise be part of the panel. If these panels are insufficient, then I propose the following: everyone promptly accept a panel invitation or send a review every time you are asked - that way you are more likely to get the most relevant reviewers participating. AND/OR everyone have a "pay to play" service requirement - if you aren't contributing to the process, then you cannot get funded.
That would save the time spent writing grants by those who feel entitled to money without giving back to the community, and it would lessen the proposal load, leading to higher success rates. Thus we could ensure that the right number of appropriate reviewers will always be available.
|
|
|
Post by Anon on Jun 23, 2010 11:26:54 GMT -5
"...Now to the next part of this. I am insulted to hear someone claim that the panel is 2/3 random. The PDs at NSF put an extraordinary amount of work into getting reviewers for the panels and to complete ad hoc reviews.... "
I realize that must be insulting, especially when you have put in so much to construct this 'panel of experts'. I have been in this 'game' of academia in the US for over a decade now, and have been submitting grants and publishing papers, but I was never called to be on a panel despite sending my CV and such to POs several times. However, I see the same few elite get picked each time. And then this one time when I did get to be there (not even on the core panel, mind you), I heard panelists begin with "I had no idea about this system or organism or the research going on in that field" - and to think this person is going to lead the discussion and be the one deciding the fate of the PI's grant - I was appalled. And then another panelist says, "This particular species of plant is close to my heart, as I worked on it during my PhD, and I would like to encourage the PI despite the fact that there is no preliminary data, since I know how difficult it is to get data on this species." Now you tell me: for a person who is serving on a panel for the first time, has never been funded, has received sarcastic reviews, and is witnessing such things - am I supposed to be encouraged by these proceedings? No way! I was not. It may be a sample size of 1 for being a panelist, but the rejections have been many more.
The NSF would like to believe that they are doing everything possible to keep the procedure fair, but it is not fair, because there is a lot of backscratching. I like your second option for reviewers better, but I am not sure it can be implemented at all.
|
|