AI Models Like ChatGPT Are Politically Biased: Stanford Study | Entrepreneur
Briefly

Researchers at Stanford University evaluated 24 major AI models on 30 current political issues to gauge their biases. They found that OpenAI's 'o3' model showed the strongest left-leaning slant, giving left-leaning responses on 27 of the 30 questions. Participants from a range of political backgrounds rated the AI responses and judged OpenAI's model to be the most biased overall. Google's Gemini 2.5, by contrast, was noted for its balanced answers, showing no detectable bias on 21 of the topics. The study drew on more than 180,000 human judgments to provide a comprehensive picture of AI bias in public discourse.
In a new study, Stanford researchers assessed how AI models respond to current political issues, identifying OpenAI's o3 model as the most left-leaning.
Over 180,000 human judgments were used to evaluate the models, finding that OpenAI's model displayed bias on issues such as tariffs and the minimum wage.
Read at Entrepreneur