I compiled my promotion and tenure package to associate professor at Michigan State last fall. A major component of this package is a list of all the grants I applied for during my first five years as a faculty member. Luckily, I started this process early, as it took a long time and a lot of effort to compile the list. So, since starting at Michigan State University as an assistant professor in August 2008, I have *drum roll please* submitted 65 different grant applications. Yes, 65 times I have gone to some agency and asked for money to do research. Some of these grant applications were small, involving a two-page research description, but others were quite large, integrating collaborators from two or three other universities. 22 of these applications were ultimately funded, which doesn’t sound too bad, right? But realize this number is skewed toward smaller grants that often provide funds for only a year, meaning that, guess what, I need to start writing new grants while the current ones are still in effect.
Is this really the best use of my time? Granted (pun intended), there is value in writing a grant: formalizing one’s thoughts, brushing up on the relevant literature, and generating a logical, persuasive argument to convince someone to give you money. But what if I had spent this time working in the lab, being a better mentor to my students, teaching my classes more effectively, or staying better up to date with the literature?
Another aspect of the P&T application is scientific output. In this regard, my lab has done well, with 23 publications in good journals and 2 patent applications to our credit since August 2008. So the struggle to obtain long-term funding from the federal agencies is not due to a lack of productivity.
Funding projects, not people…
In pondering this situation, I have come to the conclusion that the way we fund grants at the Federal level is backwards. The NIH (and NSF) funds projects, not people. For those of you who aren’t part of this process, let me explain how it works:
- The investigator proposes a grant on a topic of interest (investigator initiated) or responds to a specific request for applications (RFA)
- A study section of 15-20 peers reviews all grants and judges them based primarily on the scientific topic proposed (is it innovative? significant?) and the feasibility/applicability of the experiments
- The study section discusses the top half of grant applications and debates what score they should receive
- A small percentage of grants are funded
OK, what is wrong with this model? Only the top grants should get funded, correct? Well, there are a number of problems, in my opinion. First, study section members have the unenviable job of reading many different complicated grant applications in great detail. This takes lots and lots of their time, and it is difficult to read them all in proper detail without making mistakes. Therefore, topics or techniques that are familiar will be favored over those that don’t resonate with this small collection of peers. Second, many of the reviewers’ comments are opinions. For example, determining the applicability of experiments is quite subjective. In fact, the reality of the situation is that these approaches often evolve or change once the science actually gets done. The grant is just a starting point. Although I believe the study section does the best it can to limit bias, when a small group of people makes these decisions there will always be bias. This can range from “I think this is a cool topic” to “I personally know this investigator and he/she does good work so we should give them money” to, worse, “This person is my scientific competitor and I don’t want to see them get funding” (yeah, it shouldn’t happen, but it does). As much as study sections do not like to admit it, the process is full of bias. When we do science, we do our best to eliminate bias, right? Why can’t we do this during the grant application process?
So what is the solution?
Fund people, not projects…
The solution is simple: investigators should be judged based on their past record of productivity, not the research they propose. The publishing process already provides constant peer review. Publishing a manuscript requires the approval of 2-3 peers (who are experts in your field!) and a journal editor. If someone is productively publishing papers in quality journals, and those papers are being cited by the field, then their science IS significant, innovative, and of good quality. No grant or study section is needed to determine this. Therefore, productivity over the recent past, say five years, is a good measure of whether or not a researcher is making significant contributions to the field.
I argue that funding agencies could use past productivity as a metric to determine future funding. Past performance is the best indicator of future success, right? If someone has excelled, they should be given more money to keep doing the great science that they have been doing WITHOUT PROPOSING A SPECIFIC RESEARCH GRANT. Great scientists will do great science. If, on the other hand, a researcher has not been productive, then they should receive less money. Plain and simple.
How would this work? The NIH and leading scientists in the field could establish a formula that scores applicants based on past productivity using quantitative metrics such as publications, intellectual property, research presentations given, and number of times cited. Think “BCS” for scientists. Everyone would then be given a “productivity score” that would determine their funding level. This productivity score could be normalized against all federal research dollars available to the applicant, including grants and predoctoral and postdoctoral fellowships. Those with the highest normalized productivity score would receive larger grants, those in the middle average grants, and those near the bottom smaller grants. Grants could be awarded on a five-year cycle. Sure, there would be debate about how best to define “productivity,” but the important point is that all applicants would be subject to the same guidelines, drastically reducing bias. Young faculty could automatically be awarded small grants to establish a track record.
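To make the idea concrete, here is a toy sketch of what such a normalized productivity score might look like. The metrics come from the ones named above (publications, citations, patents, presentations), but the weights and the per-dollar normalization are purely hypothetical illustrations, not anything the NIH or this proposal has actually defined:

```python
def productivity_score(publications, citations, patents, presentations,
                       federal_dollars_received):
    """Toy productivity score, normalized by federal funding received.

    All weights below are hypothetical -- in the proposal, the actual
    formula would be set by the NIH and leading scientists in the field.
    """
    raw = (10 * publications     # papers in quality journals
           + 1 * citations       # times cited by the field
           + 15 * patents        # intellectual property
           + 2 * presentations)  # research presentations given
    # Normalize per million federal dollars available to the applicant,
    # so labs are judged on output relative to the funding they had.
    # The floor of 1.0 keeps minimally funded young faculty comparable.
    return raw / max(federal_dollars_received / 1_000_000, 1.0)

# Example: 23 publications, 400 citations, 2 patents, 30 talks,
# supported by $2M in federal funding over the five-year window.
score = productivity_score(23, 400, 2, 30, 2_000_000)
```

Whatever the real weights turned out to be, the key property of the proposal holds: every applicant is run through the same formula, so the subjective judgments of a small study section are replaced by one transparent rule.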
This approach would spread the research dollars of the NIH (and NSF) across many more labs and help ensure that the majority of labs have at least some level of funding, if warranted, to maintain their research programs and, importantly, continue to train young scientists. Scientists would not have to write grant after grant to the NIH, but rather apply once every five years, simply detailing the past metrics that determine impact. So much time would be saved on writing grants, and researchers could actually devote this time to doing research (or teaching). All of the effort (and money) spent on peer review of grants would be gone, freeing up time for these important leaders of the field.
What would grants look like in this system? The NIH devoted about $16 billion to extramural research grants last year. It received about 63,524 new applications for funding. Now, it is true that the NIH would have to fund not only new applicants but also existing grants, so it is difficult to know how many applicants there would be. However, given that many of the new applications are from people just trying to get a grant or applying for multiple grants, I think 63,524 might be a fair estimate of the researchers that would be funded by this system (please give me a more accurate number if you have it). Dividing $16 billion by 63,524 yields an annual average funding level of ~$250,000 (total, including direct and indirect costs). Again, this is the average funding. Those with high impact would receive significantly more, perhaps $500,000, while those with poor impact would receive significantly less, perhaps $50,000. But virtually all applicants could get some level of funding. In addition, the NIH could compromise and use a portion of this $16 billion for specific RFAs of interest that would go through the normal grant review process, thereby reducing the average amount given to applicants but allowing the agency to direct some of its funding to specific topics of interest.
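The back-of-envelope division above checks out; here it is as a few lines of Python, using the budget and applicant figures from the text (the $500,000/$50,000 tier amounts are the illustrative guesses given above, not official numbers):

```python
total_budget = 16_000_000_000   # ~$16B NIH extramural research budget
applicants = 63_524             # new applications received that year

# Average annual award if the whole pot were split evenly.
average_award = total_budget / applicants   # ~ $252,000

# Illustrative tiers from the text: high-, average-, and low-impact labs.
high_tier, low_tier = 500_000, 50_000
```

Note that $500,000 and $50,000 average out to $275,000, a bit above the mean, so in practice most labs would sit near the ~$250,000 middle tier with only a minority at the extremes for the budget to balance.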
“Support the person, not the project” systems are already in place at a number of elite funding agencies, including the Howard Hughes Medical Institute. They understand that great scientists will continue to do great science, and if given funding will use it wisely to produce impactful results. Gone would be the days of the 20-postdoc superlabs, but in their place we would have a much fairer, more evenly distributed, more efficient, and simpler system that would improve the funding climate in this country.