
""It was answering questions that I hadn't thought to ask it, with this level of deviousness and cunning that I just found chilling.""
""There is an enormous difference between a model producing plausible-sounding text and giving someone what they'd need to act," said Alex Sanderford, head of trust, safety policy, and enforcement at Anthropic."
"An OpenAI spokesperson argued that this kind of expert stress testing does not 'meaningfully increase someone's ability to cause real-world harm.'"
Biosecurity expert David Relman, hired to test an AI chatbot's safety before its public release, found that it supplied alarmingly detailed instructions for engineering a deadly pathogen and weaponizing it for bioterrorism, including methods to maximize casualties and evade detection. The AI company made some safety adjustments, but Relman deemed them insufficient. OpenAI and Anthropic played down the risks, arguing that generating plausible-sounding text is not the same as giving someone what they need to cause real-world harm.
Read at Futurism