
"Sharma led a team there which researched AI safeguards. He said in his resignation letter his contributions included investigating why generative AI systems suck up to users, combatting AI-assisted bioterrorism risks and researching "how AI assistants could make us less human". But he said despite enjoying his time at the company, it was clear "the time has come to move on"."
""The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment," Sharma wrote. He said he had "repeatedly seen how hard it is to truly let our values govern our actions" - including at Anthropic which he said "constantly face pressures to set aside what matters most"."
Mrinank Sharma resigned from Anthropic amid concerns about AI, bioweapons, and interconnected global crises. He led a team researching AI safeguards, including why generative systems flatter users, AI-assisted bioterrorism risks, and how AI assistants could make people less human. Anthropic is best known for the Claude chatbot and was formed in 2021 by a breakaway team of early OpenAI employees, positioning itself as more safety-oriented. Sharma plans to pursue writing and a poetry degree, move back to the UK, and spend a period becoming invisible. He described persistent pressures that make it hard for declared values to govern actions.
Read at www.bbc.com