
"Speaking at the Anthropic Futures Forum on Monday, Amodei shed light on the approach his company is taking to develop and deploy safe and effective AI solutions, particularly around large language models and agentic AI tools. He offered examples of use cases for emerging AI capabilities, such as in the medical and scientific research arenas, but also acknowledged the risk potential inherent to advanced AI systems."
"I think chips are the single ingredient where we kind of most have an advantage. The technology stack for building these is very difficult. We're being very consistent when we advocate the same thing be done at the chip layer," Amodei said. "It's not some attempt to manipulate and order the chip market. We're doing this at every layer of stack. We think it's the right thing to do."
""I think it's more the risks where government has a role to play," he said. "This is the biggest threat and the biggest opportunity for national security that we've seen in the last 100 years.""
Policymakers can positively shape the U.S. AI ecosystem through balanced export controls, basic regulatory guardrails, and support for workers displaced by automation. Anthropic emphasized the safe development and deployment of large language models and agentic tools, citing medical and scientific research as promising use cases while acknowledging significant risks. The company has restricted dissemination of its models to Chinese firms and urged similar semiconductor export curbs to prevent misuse by adversaries. Recommended guardrails include transparency around model training and publicly available baseline transparency tests. Policy should remain loose enough to avoid stifling innovation while still addressing national security concerns.
Read at Nextgov.com