Why universities need to radically rethink exams in the age of AI
Briefly

"AI use among students is now the norm. In February, a survey of more than 1,000 full-time UK undergraduates found that 92% use AI in some form, up from 66% in 2024. And 88% of students reported relying on generative AI (a form of AI that can create text, images and code from vast data sets) to support their academic coursework, compared with 53% in 2024."
"Universities have responded by using tools to try to detect student use of generative AI. But these have proven to be unreliable. This has led to short-term fixes such as 'stress-testing' written assessments and replacing them with oral examinations, handwritten tests or reflective formats (portfolios and journals; see go.nature.com/43btcxf), as well as clearer guidelines on when AI can and cannot be used. Although these measures help, their effectiveness is limited."
Student use of AI has surged to near-universal levels, with large proportions relying on generative AI to support coursework. Generative models now outperform humans on tasks such as reading comprehension and programming, undermining the value of conventional essays and written assessments. Over-reliance on chatbots risks superficial learning, fewer opportunities for reflection, and reduced student agency. Detection tools for AI use are unreliable, prompting short-term responses such as oral exams, handwritten tests, reflective portfolios, and clearer AI-use guidelines, but these measures have proved only partially effective. A fundamental rethink of learning and assessment is necessary, including adapting conversation-based and other approaches to ensure genuine intellectual development and accurate evaluation of students' abilities.
Read at Nature