
"The distance between a world-changing innovation and its funding often comes down to four minutes-the average time a human reviewer tends to spend on an initial grant application. In those four minutes, reviewers must assess alignment, eligibility, innovation potential, and team capacity, all while maintaining consistency across thousands of applications. It's an impossible ask that leads to an impossible choice: either slow down and review fewer ideas or speed up and risk missing transformative ones. At MIT Solve, we've spent a year exploring a third option: teaching AI to handle the repetitive parts of review so humans can invest real time where judgment matters most."
"In 2025, Solve received nearly 3,000 applications to our Global Challenges. Even a cursory four-minute review per application would add up to 25 full working days. Like many mission-driven organizations, we don't want to trade rigor for speed. We want both. That led us to a core question many funders are now asking: "How can AI help us evaluate more opportunities, more fairly and more efficiently, without compromising judgment or values?" To answer this question, we partnered with researchers from Harvard Business School, the University of Washington, and ESSEC Business School to study how AI could support early-stage grant review, one of the most time-intensive and high-volume stages of the funding lifecycle."
"The research team developed an AI system (based on GPT-4o mini) to support application screening and tested it across reviewers with varying levels of experience. The goal was to understand where AI adds value and where it doesn't. Three insights stood out: 1. AI performs best on objective criteria. The system reliably assessed baseline eligibility and alignment with funding priorities, identifying whether applications met requirements or fit clearly defined geographic or programmatic focus areas."
MIT Solve faced nearly 3,000 Global Challenges applications in 2025, a reviewing burden of roughly 25 working days even at four minutes per application. The organization explored teaching AI to handle repetitive screening tasks so human reviewers can focus on judgment-intensive evaluation. Researchers developed a GPT-4o mini–based system to assess baseline eligibility, alignment with funding priorities, and other objective criteria, and tested it across reviewers with varying experience. Results showed AI excelled on objective checks, offered greater benefit to less experienced reviewers, and enabled reviewers to spend more time on nuanced judgment, potentially increasing efficiency and fairness without reducing rigor.
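The article does not describe how the researchers' screening system was actually built. As a rough illustration of the pattern it describes, an LLM checking objective eligibility and alignment criteria before human review, here is a minimal sketch using the OpenAI Python client with GPT-4o mini. The prompt, criteria, function names, and output schema are all hypothetical, not MIT Solve's rubric or code.

```python
# Hypothetical sketch only: the article does not disclose the researchers' implementation.
# Illustrates LLM-assisted screening of objective criteria (eligibility, alignment).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SCREENING_PROMPT = """You are screening grant applications for baseline eligibility.
Criteria (illustrative, not MIT Solve's actual rubric):
1. The applicant is a registered organization or an eligible individual.
2. The proposed solution targets one of the listed challenge regions.
3. The application is complete (all required fields answered).

Return JSON: {"eligible": true/false, "aligned": true/false, "notes": "<one sentence>"}"""

def screen_application(application_text: str) -> dict:
    """Ask the model for an objective eligibility/alignment check on one application."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SCREENING_PROMPT},
            {"role": "user", "content": application_text},
        ],
        response_format={"type": "json_object"},  # constrain output to parseable JSON
        temperature=0,  # favor consistent, repeatable screening decisions
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    example = "We are a registered nonprofit in Kenya building solar-powered cold storage..."
    print(screen_application(example))
```

In practice, a setup like this would route the model's objective checks to a queue for human confirmation, leaving nuanced judgments (innovation potential, team capacity) entirely to reviewers, which is the division of labor the study describes.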
Read at Fast Company