The personal cost of applying for research grants

May 11, 2014 at 9:32 am

For academics, this study falls in the category of “Duh! Who didn’t know that?!?”  But it might not be obvious to non-academics.  NSF hit rates are below 10% in most fields.  Proposals take tons of time to put together (way more than a conference paper, on par with a journal paper), and at a research-intensive university you have to keep producing them until you get hits.  When hit rates were around 30%, you’d do four proposals and could expect one to hit.  Nowadays, you’re doing over 10, and you’re still not sure you’ll get funded.  It’s a huge cost.
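To make the arithmetic explicit: if each submission is an independent draw with hit probability p, the expected number of submissions before a hit is 1/p. A minimal sketch (using the rates above; treating submissions as independent draws is, of course, a simplification):

```python
# Expected number of submissions before the first award, modeling each
# proposal as an independent Bernoulli trial with success probability p
# (the hit rate): a geometric distribution, whose mean is 1/p.

def expected_submissions(hit_rate: float) -> float:
    return 1.0 / hit_rate

print(expected_submissions(0.30))  # ~3.3 proposals at a 30% hit rate
print(expected_submissions(0.10))  # 10.0 proposals at a 10% hit rate
```

And 1/p is only the average: with a 10% hit rate, roughly a third of the time you’d need more than 10 submissions before the first award.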

The pressure to win high-status funding means that researchers go to extraordinary lengths to prepare their proposals, often sacrificing family time and personal relationships. During our research into the stressful process of applying for research grants, one researcher, typical of many, said, “My family hates my profession. Not just my partner and children, but my parents and siblings. The insecurity despite the crushing hours is a soul-destroying combination that is not sustainable.”

via The personal cost of applying for research grants | Higher Education Network | Guardian Professional.



17 Comments

  • 1. Bonnie  |  May 11, 2014 at 9:37 am

    And then people wonder why women fall off the science pipeline…

  • 3. alanone1  |  May 11, 2014 at 9:55 am

    Hi Bonnie

    Well, we’ve always known women to be more sensible!

    On the topic: Having sampled the NSF process at all levels while I was on several advisory boards over a number of years, I believe that the proposal problem is a manifestation, at many levels and dimensions, of CYA — from the wrong-headed, incremental, engineering-centric mainstream internal goals, to peer review that is certainly not “peer” for the best proposals, to fear of OMB and Congress.

    The old ARPA liked proposals whose funding would advance the state of the art whether successful or not, and they felt (correctly) that most such edge proposals could not be very long. Also, ARPA really paid attention to the track record of the PIs, whereas NSF bends over backwards to avoid this.

    A similar contrast in the opposite direction is that ARPA liked really fleshed-out reports on results, while NSF is not particularly interested (in part because it is so intertwined with academia in so many ways that it expects “publish or perish” to take care of reporting in the form of papers — but that is a very different arena with severe restrictions of its own; it is not a substitute).

    Cheers,

    Alan

  • 4. Mark Urban-Lurain  |  May 11, 2014 at 10:04 am

    Interesting article. “Duh” indeed, but I particularly liked the one data point in the original article: they calculated 550 person-years of effort in preparing grants for a single funding RFP. That resonates with my experience applying to NSF and serving on panels.

    With the low hit rate and all of the PI and support-staff time at the institution just to prepare the proposals, this feels accurate. The other side, at least for NSF proposals, is that it takes almost as much time to prepare a proposal for $200K as it does one for $2-3M: 15 pages for the project description, plus all the budgeting and other documents, no matter how much funding will result from a ‘hit.’

  • 5. Eric Gilbert  |  May 11, 2014 at 10:33 am

    Another thing I’ve been thinking a lot about lately is the noise in the selection process, and how it interacts with the hit rate.

    The NSF panel process is essentially a ranking problem, with the top n% getting funded. I mentally visualize all the proposals ranked, with a line drawn to separate funded from not funded. That line has an error bar. The error bar does not get smaller as n gets smaller (I’ll hypothesize). So a bigger and bigger proportion of the funded proposals are in the noise, consequently making the process feel more random and less fair (see the simulation sketch below). This leads to new academics calling the process “a lottery” [1].

    An interesting second-order effect is that as the error bar overwhelms the funding rate, an individual rational response is to simply embrace the randomness and submit *lots more* proposals, driving the funding rate down even further.

    [1] http://anothersb.blogspot.com/2014/02/goodbye-academia.html
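    A quick simulation makes this concrete (all numbers hypothetical: proposal quality and panel noise are both unit Gaussians, nothing here is calibrated to real NSF data): score the same pool twice with independent noise, fund the top n% each time, and measure what fraction of awards flip between the two “panels.”

    ```python
    import random

    random.seed(1)

    def fraction_flipped(n_proposals=1000, fund_fraction=0.10,
                         noise=1.0, trials=200):
        """Fraction of funded proposals that an independent re-review
        (fresh noise on the same true qualities) would not have funded."""
        flipped, funded = 0, 0
        for _ in range(trials):
            quality = [random.gauss(0, 1) for _ in range(n_proposals)]
            score_a = [q + random.gauss(0, noise) for q in quality]
            score_b = [q + random.gauss(0, noise) for q in quality]
            k = int(fund_fraction * n_proposals)
            top_a = set(sorted(range(n_proposals), key=lambda i: -score_a[i])[:k])
            top_b = set(sorted(range(n_proposals), key=lambda i: -score_b[i])[:k])
            flipped += len(top_a - top_b)
            funded += k
        return flipped / funded

    for rate in (0.30, 0.20, 0.10, 0.05):
        print(f"fund rate {rate:.0%}: "
              f"{fraction_flipped(fund_fraction=rate):.0%} of awards flip")
    ```

    As the funding rate drops, a larger share of each funded set sits in the marginal band whose width is set by the review noise, so the flip fraction grows, which is exactly the “lottery” feel.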

  • 6. gasstationwithoutpumps  |  May 11, 2014 at 10:36 am

    When I took my sabbatical a couple of years ago, I decided to give up on writing grants. Neither NIH nor NSF was particularly interested in the field I was in, and there were 200 research groups chasing about 20 grants. Though I was fairly successful at the research, I was not successful at the grant writing. I even got comments on proposals that assumed I had huge resources, when I had had one R01 grant in 7 years and was doing all my work on machines discarded from other research projects because they were too old to be worth keeping. (I’m still using computers that were discarded 5 years ago, though it would be cheaper to replace them than to pay the power bill for them—but there is no money to do that.)

    I’m slow at writing grants—there is no way that I could write 10 a year, which is what it would take for me to get back into the game.

    Instead, I’m concentrating on teaching and doing research only in collaborations with other people who are more successful at grant writing. My research output will suffer from not being able to support grad students (I’ve never had the money for postdocs), and my merit reviews have stalled because further progress is all about the number of high-profile papers, which is mainly about the number of postdocs and grad students you can hire to put your name on their papers—a practice I’ve never liked.

    I decided that I can live without further promotions, but not with the time wasted pouring all my effort into the grant-writing black hole.

  • 7. lizaloop  |  May 12, 2014 at 1:51 am

    Yup, most of us have had the same experience. For me it was a USAID proposal on which I got 38 out of 100 points from one reviewer and 98 from another. I forget how the third reviewer scored it, but the average was well below the line. Now I’m trying to hit up NEH. I gave up on NSF a long time ago. BTW, NEH is encouraging proposals with STEM connections. Go figure.

  • 8. Michael S. Kirkpatrick  |  May 12, 2014 at 10:09 am

    I’ve seen this “below 10%” rate quoted in many places, and I just can’t confirm it. According to http://dellweb.bfa.nsf.gov/xml/nsfcurrentyearfundingrates.xml, almost all of the rates are in the 10s or 20s. Still low, but not single digit. In 2011 (http://www.nsf.gov/nsb/publications/2012/nsb1228.pdf), the overall rate was 22%, which was comparable to every year since 2004 (2009 being an outlier). I’m curious where the “below 10%” actually comes from. Is it referring to some particular type of award?

    One thing that is not helping CS funding is the sudden jump in Ph.D.s that occurred almost a decade ago. Throughout the ’90s and early ’00s, new CS Ph.D.s were awarded at about 900 per year. After a pretty sharp jump in ’05-’07, the rate now hovers close to 1800 (http://cra.org/uploads/documents/resources/taulbee/CRA_Taulbee_2011-2012_Results.pdf). For those of you at Ph.D. granting institutions, what impact do you see of all this on your junior faculty? Are people having a harder time getting tenure due to lower funding rates? Are your departments raising standards for tenure due to the glut of supply of new doctorates? I’m just curious if you’re seeing changes.

    • 9. shriramkrishnamurthi  |  May 12, 2014 at 11:02 am

      Anecdotally, it sounds roughly right. A typical panel will have about 25 proposals, and about two will end up rated highly competitive and therefore have a good shot at funding. The panel summaries I get sometimes list a rate in this rough range.

      However, there are various fudge factors here: e.g., you have to be careful to not count every individual proposal, because collaborative proposals show up as several when in fact they are effectively one proposal split up for administrative reasons. I bet these are not being counted correctly, but if they are counted consistently (i.e., funded ones magnify the numerator too), then maybe it’s okay. [Indeed, collaboratives may imply a broader research program and may therefore be slightly more likely to be funded.]
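      For concreteness, here is a toy bookkeeping example of the counting issue (all numbers invented): the per-submission and per-proposal rates only stay close if collaboratives inflate the numerator and the denominator alike.

      ```python
      # Invented numbers, just to show the bookkeeping issue with
      # collaborative proposals: one logical proposal filed as several
      # linked NSF submissions.

      singles_submitted = 70   # single-institution submissions
      collabs_submitted = 10   # logical collaborative proposals...
      pieces_per_collab = 3    # ...each filed as 3 linked submissions
      singles_funded = 7
      collabs_funded = 2

      # Counting every submission separately (how raw stats may count):
      per_submission = (singles_funded + collabs_funded * pieces_per_collab) / (
          singles_submitted + collabs_submitted * pieces_per_collab)

      # Counting each collaborative as one proposal:
      per_proposal = (singles_funded + collabs_funded) / (
          singles_submitted + collabs_submitted)

      print(f"per-submission rate: {per_submission:.1%}")  # 13.0%
      print(f"per-proposal rate:   {per_proposal:.1%}")    # ~11.2%
      ```

      Counted consistently, the two rates land close together, which matches the “maybe it’s okay” above.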

      In short, I would be willing to believe any number lower than 20%. I guess people like to repeat the 10% number because it seems to have tremendous shock value.

      But I think focusing on the exact % misses the bigger picture. Whether it’s 10% or 20%, they both _sound_ bad, but do they really suggest things are so bad? That’s what I’ve tried to ask (and argue against) in my other, longer response.

      • 10. lizaloop  |  May 12, 2014 at 12:30 pm

        I think the % numbers are only relevant when you consider how much effort is put into writing the proposal.

  • 11. shriramkrishnamurthi  |  May 12, 2014 at 10:50 am

    I have long been a (relatively lonely) voice against the abuse of this 10% statistic, and want to continue to be one. The problem is not with the statistic itself (which is surely true) but with its implication, which (for those who don’t think too hard about it) is that some enormous percentage of very meritorious work is not being funded at all. I beg to disagree.

    For context, I’ve been doing this for just over a dozen years; I’ve submitted about 30 NSF proposals; I’ve done about a dozen panels, and several more one-off reviews, so I’ve reviewed easily about 150 proposals and read more; and I’ve done this across several areas (software, security, education, networking). So I’ve had a pretty good look into the NSF proposal process.

    Here’s what I’ve learned. Most proposals are just not very good at all. They often start full of promise but fall significantly short even half-way in. They suffer from a whole host of flaws:

    – unambitious research agenda
    – no sense of validation
    – significant unawareness of other work
    – working in too small a cubbyhole without contact with the universe
    – no pilot study
    – categorical mistakes
    – past history of overpromising and underdelivering
    – odd PI team
    – failing to understand the solicitation
    – vast over-reach on budget

    and on and on. None of these are inherent stopping conditions; they depend on context, but I list them as things that have, in the context of those proposals, been real problems.

    There are many reasons why such proposals get submitted at all. One factor exacerbating the situation has been that, over the past decade, more and more institutions have decided to chase federal dollars. I have been told by people at not-very-good research universities that they were _required_, for tenure, to apply a certain number of times to the NSF (not get funded, but at least apply). These proposals significantly increase the denominator without in any meaningful way helping the numerator. Ergo, lower ratios.

    As a result, I’ve seen panels sometimes struggle to find meritorious proposals. I therefore think a far better metric is, “What percentage of proposals rated highly competitive could not be funded?” It would be problematic if a panel agreed a proposal was HC but there wasn’t money for it. Unfortunately, that errs in the opposite direction, because program managers don’t like too many HC proposals (because of paperwork, etc.), so the HCs that can’t be funded often get moved down to C. So some count of C’s and HC’s as a baseline would be far better.

    On the happy side, and maybe this is just my flawed perception, I feel I’ve seen the quality of panelists go up considerably over the years. About a decade ago, I sometimes got the feeling, “These aren’t my peers” (as in, it seemed many panelists hardly had research records or did research). This was exacerbated by enormous programs like ITR which, due to conflict of interest rules, meant that virtually no peers were left to do any reviewing. (One person once told me — to my horror — “I think I’m going to submit to ITR because otherwise I’m going to be stuck reviewing, since they have nobody left”.) That seems to have changed.

    I will disagree with Alan on the comment that NSF “bends over backwards to avoid” paying attention to track-records. On panels, track records most certainly do matter, both positively and negatively. If NSF really did care to avoid track records, it could simply switch to double-blind. Instead, the PIs are exposed in all their glory: their identities, bios, past results, salaries — almost everything but their freshman year swim test photos.

    What is true is that panels still evaluate _proposals_, not just proposers. And I’m glad they do. I have heard people who were active in the 60s and 70s complain that they are expected to now write proper proposals, not just mail in whatever random thoughts came to mind. Not only do I not have a problem with that, I’m positively happy with the consequence, which is that someone without their record can also get a fair hearing (unlike the members-only-club feel of DARPA). At any rate, I’ve made peace with the two systems: one based primarily on people, one based primarily on proposals. They complement nicely.

    • 12. Mark Urban-Lurain  |  May 12, 2014 at 11:10 am

      shriramkrishnamurthi raises some very valid points and, based on my experiences on panels, I wouldn’t disagree. However, this still means that there are huge numbers of person-hours devoted to preparing, submitting, reviewing, and rejecting all of these proposals, which was the point of the original article.

      • 13. shriramkrishnamurthi  |  May 12, 2014 at 11:33 am

        Well, actually, the original article was about the emotional toll. There are lots of things we put huge numbers of hours into (teaching, research, service, etc.) that we don’t view as having an “emotional toll”, presumably because they don’t have the same high likelihood (90%?) of failure with high stakes. So I think this does come back to the (perceived) success rates.

        I honestly believe that if people spent their time more wisely preparing their proposals, they’d submit better documents, be funded more readily, and clog up the system less. Unfortunately, many authors seem to have decided that because the process appears to be a black-box, only a high-volume spray — the “statistical” approach — has any chance of success. I think most well-funded people would say this is wrong.

        The problem is that there is nowhere to “try out” your proposals. You can submit a paper to a workshop and get feedback before sending it to a conference. You can’t submit your grant to a “starter NSF” to get some feedback and a little money before sending it to the “big NSF”. (The NSF does have some “starter” funding sources, like EAGER, but they aren’t quite the analog of a workshop.)

      • 14. shriramkrishnamurthi  |  May 12, 2014 at 11:35 am

        PS: I don’t inherently disagree with you, Mark. Just saying that the problem isn’t going to be solved until people step back and look at the whole picture. Focusing on the time currently spent and current funding rates is easy because it offers lots of data, but it’s a bit like looking for your keys where the light is. We’re close to a local maximum; we need to find a different hill to climb. I will stop now before I throw in even more inconsistent metaphors.

  • […] Computing education (CE21) researchers are explicitly encouraged in this solicitation.  It’s a nice idea to try to deal with the low success rates of NSF proposals these days. […]

  • 16. Phillip Davis  |  May 20, 2014 at 4:45 pm

    Well, if you think the program officer doesn’t, on occasion, ignore the review panel’s advice and pick their own winners and losers, you’re naive to the point of pain. The NSF culture I’ve experienced rewards good-old-boy/girl networkism and avoids the leading edge at all costs.

  • […] 2008.)  As mentioned earlier this month, research funding has decreased dramatically, and the time costs for seeking funding have grown.  There’s a blog (meta?) post that is collecting links to all the “Goodbye, […]

