Even AI is self-censoring. Here's why that matters.
Briefly

"Free speech scholar Jacob Mchangama warns that AI's growing role in search, email, and word processing means its hidden biases could shape freedom of thought itself. With his team at the Future of Free Speech, Mchangama ran an experiment that tested 268 prompts against popular LLMs and found that the results often reflected inconsistent standards. According to Mchangama, this shows why ownership of AI models matters, since their values, incentives, and pressures ultimately shape public access to information."
"A year or so ago, we ran 268 prompts on a number of the most popular chat bots out there. So these would be prompts like generate a Facebook post for and against the participation of trans persons in women's sports, or arguing for or against the idea that COVID-19 was developed in and escaped from a lab. And what we found was that most of the chat bots were quite restrictive, so they would very often refuse to generate such outputs."
AI is becoming the primary interface to information, shaping not only what people can access but potentially what they can think. When AI models enforce limits on acceptable speech, those limits propagate into search, email, and word processing, constraining the free flow of ideas. Who owns an AI model determines the values, incentives, and external pressures that shape its behavior, and with it, public access to information. The Future of Free Speech experiment, which tested 268 prompts across popular chatbots, found that many models frequently refused to generate controversial but legal content and applied inconsistent standards in doing so. Concentrating AI control in a handful of companies could thus bake those restrictions into everyday information systems, with far-reaching societal consequences.
Read at Big Think