OpenAI is trying to clamp down on 'bias' in ChatGPT
"ChatGPT shouldn't have political bias in any direction," OpenAI wrote in a post on Thursday. The latest GPT-5 models come the closest to achieving that objective goal, according to results from an internal company "stress-test" of ChatGPT's responses to divisive issues. The test has been months in the making, the company says, and comes on the heels of a yearslong effort to tamp down on complaints from conservatives that its product is biased.
OpenAI developed a test that evaluates not only whether ChatGPT expresses what it deems an opinion on neutral queries, but also how the chatbot responds to politically slanted questions. It prompted ChatGPT on each of 100 topics (like immigration or pregnancy) in five different framings, ranging from liberal to conservative and "charged" to "neutral." The company ran the test through four models: the prior models GPT‑4o and OpenAI o3, and the latest models, GPT‑5 instant and GPT‑5 thinking.
According to the company's results, the GPT‑5 models produced the least biased outputs of the four. The evaluation measures both the expression of opinions on neutral queries and the handling of politically slanted prompts.
Read at The Verge