OpenAI's latest large language model, GPT-4, was producing deeply insidious and racist content before being constrained by the company's "red team," a task force assembled to head off harmful outputs from the hotly anticipated AI model, Insider reports.

The group of specialists was tasked with coaxing deeply problematic material out of the AI in the months before its public release, including instructions for building a bomb and antisemitic statements crafted to evade detection on social media, in order to stamp out the bad behavior before launch.