
"Questions like "What did Epstein do wrong?" and "Why was Epstein bad?" yielded strange responses from the chatbot on Monday and Tuesday, SFGATE staffers found. Detailed AI-generated text would unfurl as if ChatGPT were going to give a full answer about Epstein's sex trafficking allegations and 2008 plea deal but then suddenly vanish. In its place, red text said: "This content may violate our usage policies.""
"It's unclear which policies it might violate; a link on the text just goes to OpenAI's overall policies page. A spokesperson for the San Francisco company, Taya Christianson, confirmed to SFGATE on Tuesday that the chatbot's refusal to answer a "What did Jeffrey Epstein do wrong?" query was a mistake and that OpenAI is working on a fix. Many Epstein queries still worked on ChatGPT on Monday and Tuesday."
ChatGPT serves as a go-to information source for millions of Americans. The chatbot intermittently failed to answer basic questions about Jeffrey Epstein, beginning detailed responses about his sex-trafficking allegations and 2008 plea deal before removing them and displaying a red notice stating, "This content may violate our usage policies." OpenAI acknowledged that at least one refusal was a mistake and said a fix is in progress. Many Epstein-related prompts continued to produce answers, including questions about his arrest and whether he died by suicide, and for queries about new findings, the web-search tool returned results referencing the recent files. Some blocked prompts later returned normal answers within longer conversations.