US wants to nix the EU AI Act's code of practice, leaving enterprises to develop their own risk standards
Briefly

The code of practice accompanying the European Union's AI Act, still in draft form, is facing criticism from stakeholders in the U.S. and the tech industry. Critics, including U.S. government officials, argue that the proposed code imposes requirements that extend beyond the AI Act itself, complicating compliance for AI providers. The European Commission intends the code to help providers meet the Act's transparency and copyright requirements. Although the code is voluntary, its drafting involves a diverse group of participants and aims to ease adherence to the regulation as AI standards evolve.
Big tech, and now government officials, argue that the draft AI rulebook layers on extra obligations, including third-party model testing and full training-data disclosure, that go beyond the text of the legally binding AI Act.
The European Commission said, 'the code should represent a central tool for providers to demonstrate compliance with the AI Act, incorporating state-of-the-art practices.'
US President Donald Trump is reportedly pressuring European regulators to scrap the rulebook, claiming it stifles innovation, imposes undue burdens, and overreaches the bounds of the AI law.
The compliance onus is shifting from vendor to enterprise, even as a diverse drafting group continues work on the voluntary code intended to aid compliance with the EU AI Act.
Read at Computerworld