Artificial intelligence
From Intelligencer, 12 hours ago

The Industry of the Future Is Run by People Who Hate Each Other
AI leaders emphasize the need for collective responsibility in shaping technology to avoid societal harm.
According to the Epstein documents recently released by the Department of Justice, on April 6, 2013, three months before the public filing, Epstein emailed Sinofsky a copy of Sinofsky's own "Resignation Agreement," asking for comments. After some back and forth about the non-disparagement clause, Epstein wrote: "[SEC] disclosure will appear as if they are concerned about what you say. seems very weak. appears they are buying your silence."
I grew up with Scott Adams' work. My dad would read "Dilbert" comic strips to me at night as bedtime stories. Later, I became a devout listener of the "Coffee with Scott Adams" podcast. One theme I heard over and over was that Scott was mesmerized by AI. He said repeatedly that he wanted to give back to the world by becoming AI after he died.
When he talked about the risks of AI, he contorted; his body twisted. He was visibly, emotionally conveying how scared he was. It made an impression on the investor, who spoke on condition of anonymity for fear of harming their business, and who said they believed large language models would never succeed if they weren't trustworthy.
Anthropic, however, is pushing a different idea: that AI agents may matter most in the unglamorous work between discoveries. In exclusive interviews announcing new partnerships with the Allen Institute and the Howard Hughes Medical Institute, Anthropic's head of life sciences, Jonah Cool, and Grace Huynh, executive director of AI applications at the Allen Institute, said the elite science labs are using Claude-powered AI agents to tackle the analysis, annotation, and coordination bottlenecks that can stretch research timelines into years.