
"Even after July 1, however, professors will not interfere directly with attempts to cheat. Instead, they will observe and take notes, serving as "an additional witness in the room" who can testify in cases later brought before the Honor Court."
"AI has quickly upended education, pushing many teachers to back off on written assignments and take-home tests in favor of in-class or even oral exams. As Princeton's example shows, though, not even this is enough; plenty of students, given the chance, will just as happily use AI to cheat while in a classroom surrounded by their peers if they can get away with it."
"Such widespread outsourcing of thought and memory is deeply depressing to many educators. This includes our own Scott Johnson, who recently penned a piece for Ars about what it feels like to grade so many responses generated by machines rather than by humans. (Hint: It does not feel good.)"
"I haven't encountered any students who think they're learning when they let LLMs do their work, despite the face that college administrators and LLM advertising try to put on this. It's just workload management to them."
Professors will observe and record suspected cheating rather than intervening directly, acting as an additional witness in later Honor Court cases. AI has disrupted education by pushing teachers away from written and take-home assessments toward in-class or oral exams, yet even then some students will use AI to cheat when they believe they can avoid detection. This widespread outsourcing of thought and memory demoralizes educators, especially those grading responses generated by machines rather than by humans. Students who use LLMs generally do not believe they are learning; for them it is workload management under high pressure, made easy by cheap, readily available tools. On balance, AI is not enhancing learning, and it is making long-standing educational practices harder to sustain.
Read at Ars Technica