ChatGPT outperforms undergrads in intro-level courses, falls short later
Briefly

Since the rise of large language models like ChatGPT, there have been reports of students passing off AI-generated work as their own exam answers. Scarfe's team demonstrated how real this risk is by submitting AI-generated answers that, in controlled experiments, outperformed human students.
The team submitted AI-generated work in five modules spanning all three years of a psychology degree, and the majority of submissions went undetected. The answers were constrained only by word count limits, letting the team gauge ChatGPT's ability to produce relevant content without any human editing.
Read at Ars Technica