Student cheating dominates talk of generative AI in higher ed, but universities and tech companies face ethical issues too
Briefly

"As a sociologist who teaches about AI and studies the impact of this technology on work, I am well acquainted with research on the rise of AI and its social consequences. And when one looks at ethical questions from multiple perspectives - those of students, higher education institutions and technology companies - it is clear that the burden of responsible AI use should not fall entirely on students' shoulders."
"Let's start where some colleges and universities did: banning generative AI products, such as ChatGPT, partly over student academic integrity concerns. While there is evidence that students inappropriately use this technology, banning generative AI ignores research indicating it can improve college students' academic achievement. Studies have also shown generative AI may have other educational benefits, such as for students with disabilities. Furthermore, higher education institutions have a responsibility to make students ready for AI-infused workplaces."
Debates about generative AI on college campuses have centered on student cheating, but larger ethical concerns include copyrighted training data and student privacy. The burden of responsible AI use should begin with the companies that build these systems and be shared by higher education institutions rather than placed solely on students. Banning generative AI tools ignores evidence that they can improve academic achievement and support students with disabilities, yet integrating them into curricula and providing access carries its own risks, including exacerbating educational inequalities when not all students have equal access to the technology.
Read at The Conversation