
"You leave those meetings with the same uneasy feeling: We are treating generative AI as an academic integrity problem, when it's actually something more destabilizing. It is a measurement problem. And it's breaking the measurement system higher education has relied on for a century: the assumption that a student's submitted work-an essay, a problem set, a lab report, a take-home exam-can serve as a credible proxy for what the student can actually do."
"Higher education runs on artifacts. We assess students by what they hand in because artifacts are manageable. They fit into learning management systems, rubrics, grade books and accreditation reports. They can be stored, sampled, audited and compared across cohorts. They are, in the bureaucratic sense, legible. For decades, this artifact economy worked reasonably well-because artifacts were costly to produce. Writing a coherent essay required effort and attention."
Generative AI undermines traditional artifact-based assessment by making submitted work an unreliable proxy for student ability. Artifact-based systems were dependable because producing essays, problem sets, and reports required significant effort; that cost enforced authenticity. By making polished artifacts cheap to produce, AI collapses that reliability, turning an apparent academic integrity concern into a deeper measurement failure. Initial responses emphasize restrictions, detection, and a return to supervised, in-person assessment. Emerging responses experiment with continuous, multimodal evidence streams (ongoing artifacts, process records, and performance traces) that more accurately capture student capabilities and offer better guidance for learning and accreditation.
Read at Inside Higher Ed