This month, the NSF GRFP awardees were announced. The National Science Foundation (NSF) Graduate Research Fellowship Program (GRFP) is an annual competition that supports around 2,000 promising scholars with a five-year fellowship, including three years of financial support, for their graduate studies. The fellowship includes an annual stipend of $37,000, which is higher than the average graduate stipend in the USA. The fellowship is restricted to certain fields and to U.S. citizens, nationals, or permanent residents.
A lot has been written about the NSF GRFP and similar awards (such as those from the DOE, NOAA, and many others). There have been problems with the reviewing process, including inappropriate reviews. More broadly, though, many have questioned (see here and here) whether the program is meeting its objective.
The application evaluation involves the review and rating of applications by disciplinary and interdisciplinary scientists and engineers, and other professional graduate education experts. Applicants are reviewed on their demonstrated potential to advance knowledge and to make significant research achievements and contributions to their fields throughout their careers. Reviewers are asked to assess applications using a holistic, comprehensive approach, giving balanced consideration to all components of the application, including the educational and research record, leadership, outreach, service activities, and future plans, as well as individual competencies, experiences, and other attributes. The aim is to recruit and retain a diverse cohort of early-career individuals with high potential for future achievements, contributions, and broader impacts in STEM and STEM education.
Receiving the NSF GRFP award during my undergraduate years was a pivotal moment that altered my trajectory. I was initially rejected from three of the four schools I applied to because of my low GRE scores, but the award gave me the academic freedom to pursue my intellectual passions. I have since served as a GRFP panelist, taught workshops on the application process, and successfully mentored students applying for the award.
What is the key to success? In my experience, the key is to write a good application and get lucky. There are usually well over 10,000 applications per year for the fellowship, with only 2,000 awardees. Each application is reviewed by three volunteer scientists, who assess both its intellectual merit and its broader impacts. If all reviewers agree on an application, there is no further discussion; if there is disagreement, reviewers have the opportunity to discuss. Reviewers score the applications, and then NSF makes the final determinations on awardees. There is luck in which three reviewers happen to be assigned to your application. Reviewers are supposed to apply a set of criteria and provide a holistic review, but this doesn't always happen, and reviewers bring their personal preferences as well. Thus, I think it is important to write a good application so that a bit of luck has the chance to help you secure an award.
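To make the role of luck concrete, here is a minimal, purely illustrative simulation (my own toy model, not NSF's actual scoring): a borderline application is scored by three randomly drawn reviewers, each of whom adds a personal bias and some rating noise, and whether it clears a hypothetical award cutoff depends entirely on which panel it happens to draw.

```python
import random

random.seed(1)

def panel_average(true_quality, n_reviewers=3, bias_sd=0.5, noise_sd=0.3):
    """Average score from one randomly drawn panel: each reviewer adds a
    personal bias and some rating noise to the application's true quality."""
    scores = []
    for _ in range(n_reviewers):
        reviewer_bias = random.gauss(0, bias_sd)  # personal preferences
        rating_noise = random.gauss(0, noise_sd)  # day-to-day variability
        scores.append(true_quality + reviewer_bias + rating_noise)
    return sum(scores) / len(scores)

# A borderline application sitting exactly at a hypothetical award cutoff.
cutoff, quality = 3.0, 3.0
trials = 10_000
funded = sum(panel_average(quality) >= cutoff for _ in range(trials))
print(f"Funded in {funded / trials:.0%} of simulated panels")  # roughly half
```

In this toy model, the same borderline application is funded in about half of the panels it could have drawn, which is exactly the flavor of luck I have in mind.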
What happened in the 2024 competition? Data is available on research.gov to help answer this question. A total of 2,037 awards were offered this year. The total is lower than the 2,500 that NSF had planned (I’m not sure why this happened). There were 253 different fields of study represented across the awards with Engineering - Biomedical Engineering (n=121), Life Sciences - Ecology (n=115), Engineering - Mechanical Engineering (n=80), Engineering - Chemical Engineering (n=69), Engineering - Bioengineering (n=64), Life Sciences - Neurosciences (n=63), and Physics and Astronomy - Astronomy and Astrophysics (n=55) representing the top seven.
A total of 453 different institutions (where the applicant completed their undergraduate studies) were represented. There was a steep drop-off in the number of awards after UC Berkeley, MIT, and the other elite schools at the top. The top 40 schools combined had the same number of awardees as the remaining 413 schools (many schools in the country had zero awardees and don't appear in the data below).
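For anyone who wants to reproduce these tallies, here is a minimal sketch of the kind of summary I ran, assuming the research.gov offer list has been exported to a CSV. The file name and column names (`field_of_study`, `baccalaureate_institution`) are placeholders; the actual export format may differ.

```python
import pandas as pd

# Placeholder file and column names; the actual research.gov export may differ.
awards = pd.read_csv("nsf_grfp_2024_offers.csv")

# Awards by field of study (e.g., Engineering - Biomedical Engineering).
print(awards["field_of_study"].value_counts().head(7))

# Awards by the institution where the applicant did their undergraduate work.
inst_counts = awards["baccalaureate_institution"].value_counts()
print(f"{len(inst_counts)} institutions with at least one award")

# Share of awards going to the top 40 institutions vs. everyone else.
top40, rest = inst_counts.head(40).sum(), inst_counts.iloc[40:].sum()
print(f"Top 40 schools: {top40} awards; remaining {len(inst_counts) - 40} schools: {rest}")
```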
A comically long table of awards by institution:
And here is the distribution of awards among the top 40 institutions.
None of these results are dramatically different from previous years. There was some movement at the top: the top 10 schools represented only 21% of awards, compared to 31% in 2019. I was also happy to see 26 awardees from the University of Puerto Rico. Overall, these are the kinds of numbers that have given academics reason for concern (see here and here for examples). It's evident that a significant portion of awards gravitates toward institutions already endowed with substantial resources. This raises a critical question: do these awards genuinely contribute to diversifying and advancing excellence in the scientific community, or are they merely funneling resources to individuals or institutions that may not necessarily require them?
The above question is difficult to answer without more data. Instead, I want to address a related question: how do we allocate scarce resources? If you have been involved in the GRFP reviewing process, you'll know that there are far more than 2,000 applicants each year who could do excellent graduate work.
There is existing literature on how to allocate scarce resources in this type of situation. The literature suggests that a mixed-method approach to allocation usually outperforms any single strategy. For example, if you want to allocate COVID-19 treatments, a weighted lottery, in which individuals from highly disadvantaged neighborhoods are given double the odds of receiving treatment, promotes a more equitable distribution of the treatment. Similar approaches have been developed for other medical treatments, access to education, and the distribution of natural resources.
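As a concrete illustration of that kind of weighted lottery, here is a minimal sketch with entirely made-up applicants, where individuals from highly disadvantaged neighborhoods are given double the odds of being drawn.

```python
import random

random.seed(7)

# Hypothetical applicant pool: (id, lives_in_highly_disadvantaged_neighborhood).
applicants = [(f"applicant_{i}", i % 4 == 0) for i in range(1000)]

# Double the odds for applicants from highly disadvantaged neighborhoods.
weights = [2.0 if disadvantaged else 1.0 for _, disadvantaged in applicants]

# Draw 100 recipients without replacement, respecting the weights.
recipients = []
pool, pool_weights = list(applicants), list(weights)
for _ in range(100):
    pick = random.choices(range(len(pool)), weights=pool_weights, k=1)[0]
    recipients.append(pool.pop(pick))
    pool_weights.pop(pick)

share = sum(disadvantaged for _, disadvantaged in recipients) / len(recipients)
print(f"Share of recipients from disadvantaged neighborhoods: {share:.0%}")
```

With 25% of the pool flagged as disadvantaged and double weights, that group ends up with roughly 40% of the draws, rather than the 25% a plain lottery would give.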
I think a potential future option for allocating NSF GRFP awards would be a distribution as follows:
- 50% (~1,000 awards) would go to the top two acceptable candidates at each institution,
- 25% (~500 awards) would go to top candidates who are not in the top two at their home institution, and
- 25% (~500 awards) would be distributed randomly among the remaining acceptable candidates.
In all scenarios outlined, awards would only be granted to candidates meeting a predefined standard of acceptability, likely corresponding to a "good" or higher rating on the existing GRFP scale. Thus, applicants must still submit compelling applications to be considered. I firmly believe that if a student submits the top application from their institution, they've maximized their potential given their circumstances. Moreover, the proposed system still allows highly qualified candidates to receive awards, even at institutions with intense internal competition. Lastly, acknowledging the limitations of the process, the final bullet recognizes that not all deserving applicants can be accommodated.
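To make the proposal concrete, here is a minimal sketch of the three-tier allocation under strong simplifying assumptions: synthetic applicants, a single numeric score standing in for the holistic rating, and "acceptable" defined as a score at or above a fixed threshold. It is not based on NSF's internal data or process.

```python
import random
from collections import defaultdict

random.seed(42)

TOTAL = 2000
TIER1, TIER2 = TOTAL // 2, TOTAL // 4  # 1000 / 500 / 500 split

# Synthetic pool: (applicant_id, institution, score); "acceptable" = score >= 3.0.
applicants = [(i, f"inst_{random.randrange(450)}", random.uniform(1, 5))
              for i in range(13_000)]
acceptable = [a for a in applicants if a[2] >= 3.0]

# Group acceptable applicants by institution, best scores first.
by_inst = defaultdict(list)
for a in acceptable:
    by_inst[a[1]].append(a)
for inst in by_inst:
    by_inst[inst].sort(key=lambda a: a[2], reverse=True)

# Tier 1: up to two awards to the top acceptable candidates at each institution.
tier1 = [a for group in by_inst.values() for a in group[:2]][:TIER1]
tier1_ids = {a[0] for a in tier1}

# Tier 2: highest-scoring candidates not already funded in tier 1.
remaining = [a for a in acceptable if a[0] not in tier1_ids]
tier2 = sorted(remaining, key=lambda a: a[2], reverse=True)[:TIER2]
tier2_ids = {a[0] for a in tier2}

# Tier 3: random lottery among the remaining acceptable candidates.
leftover = [a for a in remaining if a[0] not in tier2_ids]
tier3 = random.sample(leftover, min(TIER2, len(leftover)))

print(f"tier 1: {len(tier1)}, tier 2: {len(tier2)}, tier 3: {len(tier3)}")
print(f"institutions represented: {len({a[1] for a in tier1 + tier2 + tier3})}")
```

The interesting exercise, which NSF could run on real applications, is comparing the institutional spread of this allocation against the purely score-ranked status quo.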
I believe the above system would increase diversity and produce a more equitable process. Similar mixed-model resource allocation strategies are already in place within many US higher education institutions. For example, in Texas (and at least 12 other states), the top 10% of graduates from each high school qualify for automatic admission to the state's public universities. There have also been experiments with lottery systems for college admissions.
In my view, a mixed-model system avoids the negative aspects of any single system. NSF already has the data internally to simulate the outcomes of such a system. I think such a system would have knock-on effects as well. For instance, applicants from smaller and under-resourced institutions might be more likely to apply, knowing they have a realistic chance of an award.
What are your thoughts on this proposed system? Do you believe it would improve or hinder the selection process?