Anthropic reports that over 70% of its pull requests are now written by Claude Code, freeing engineers to concentrate on orchestrating the codebase and other strategic work. Co-founder and CEO Dario Amodei discusses the balance between AI development and safety against a backdrop of regulatory challenges. Amodei worries that safety measures could be compromised in the competitive AI landscape, and he argues for a dual focus on innovation and risk management while pushing back on the assumption that more safety testing means slower progress.
"Something like over 70 percent of [Anthropic's] pull requests are now Claude Code-written," Krieger told me. As for what those engineers are doing with the extra time, Krieger said they're orchestrating the Claude codebase and, of course, attending meetings. "It really becomes apparent how much else is in the software engineering role," he noted.
"If you're driving the car, it's one thing to say 'we don't have to drive with the steering wheel now.' It's another thing to say 'we're going to rip out the steering wheel and we can't put it back in for 10 years,'" Amodei said.
"The absolute puzzle of running Anthropic is that we somehow have to find a way to do both," Amodei said, meaning the company has to compete and deploy AI safely.
"You might have heard this stereotype that, 'Oh, the companies that are the safest, they take the longest to do the safety testing. They're the slowest.' That is not what we found at all."