
"This new AI model was potentially 10 times more advanced than any other of its kind - and it was doing things he had never thought possible for AI. The scaling data revealed in the research suggested there was no sign of it slowing down. Ganguli fast-forwarded five years in his head, running through the kinds of societal implications he spent his time at Stanford anticipating, and the changes he envisioned seemed immeasurable."
"He knew he couldn't sit on the sidelines while the tech rolled out. He wanted to help guide its advancement. His friend Jack Clark had joined a new startup called Anthropic, founded by former OpenAI employees concerned that the AI giant wasn't taking safety seriously enough. Clark had previously been OpenAI's policy director, and he wanted to hire Ganguli at Anthropic for a sweeping mission: ensure AI "interacts positively with people,""
In May 2020, Deep Ganguli reacted with alarm to OpenAI's GPT-3 paper, recognizing that the model's scale and capabilities went beyond what he had thought possible for AI. The scaling data showed no sign of slowing, prompting Ganguli to envision far-reaching societal consequences over a five-year horizon. He resolved not to remain on the sidelines and to help guide the technology's advancement. His friend Jack Clark had joined Anthropic, a startup founded by former OpenAI employees who were worried that OpenAI was not prioritizing safety. Clark, a former OpenAI policy director, sought to hire Ganguli to ensure AI interacts positively with people across many contexts.
Read at The Verge