Trump jumps from 'anything goes' to 'strict regulation' AI policy
Briefly

"First, he tore up Biden's Executive Order 14110, which had demanded "safe, secure, and trustworthy" AI. He then replaced it with his own "Removing Barriers to American Leadership in Artificial Intelligence" directive, ordering agencies to rescind or dilute rules seen as obstacles to innovation. In short, American AI vendors could do anything they wanted. That was then. This is now."
"While Trump has yet to issue a new AI Executive Order, we know his crew is forming an AI working group of tech execs and government officials to bring oversight to AI. Specifically, they're considering requiring all new "high-risk" AI frontier models to undergo a formal government review before they can be used. That's going to go over well."
"National Economic Council Director Kevin Hassett has said: "We're studying possibly an executive order to give a clear roadmap to everybody about how this is gonna go, and how future AIs that also potentially create vulnerabilities should go through a process so that they're released into the wild after they've been proven safe - just like an FDA drug.""
"The Trump yes-men are framing this shift as a response to escalating cybersecurity and national-security risks rather than as a broader embrace of EU-style AI regulation. Yes, they're looking at Anthropic's Mythos and its potential use by hackers. At the same time, they emphasize that they want to avoid "onerous" controls on everyday AI applications. Frontier models that could supercharge cyberwarfare, bio-"
In brief: Biden's Executive Order 14110 required AI to be safe, secure, and trustworthy. Trump replaced it with a directive ordering agencies to rescind or weaken rules viewed as barriers to innovation, effectively giving American AI vendors broad freedom. Trump has not yet issued a new AI executive order, but his team is forming an AI working group of tech executives and government officials to add oversight. The group is considering requiring formal government review for new high-risk frontier models before they can be used. National Economic Council Director Kevin Hassett described a roadmap under which future AIs would be released only after being proven safe, comparing the process to FDA drug approval. The shift is framed as addressing cybersecurity and national-security risks while avoiding burdensome controls on everyday AI applications.
Read at The Register