The launch of ChatGPT in late 2022 set off a global rush among governments to regulate AI in a way that balances innovation with safety. The EU's Artificial Intelligence Act, whose first provisions took effect in February 2025, categorizes AI systems by risk level and imposes strict rules on 'high-risk' applications, notably in healthcare. AI can greatly enhance medical processes, but its opaque decision-making complicates the accountability and real-time human oversight that the EU rules demand. Further guidance expected in August is meant to clarify these compliance measures while industry standards are still being developed.
By default, compliance will be evaluated using a set of harmonized AI standards, but these are still under development.
High-risk systems must be transparent and designed so that a human overseer can understand their limitations and decide when they should be used.
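The Act does not prescribe how that oversight should be built, but in practice it often maps onto a confidence-gated human-in-the-loop wrapper. The Python sketch below is a minimal illustration of that pattern, not a compliance recipe; the names (`Prediction`, `gated_decision`, `confidence_floor`, `ask_reviewer`) and the threshold value are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Prediction:
    """A model output plus the information an overseer needs to judge it."""
    label: str
    confidence: float   # model's self-reported confidence, 0.0-1.0
    intended_use: str   # e.g. "triage support, not diagnosis"


def gated_decision(
    predict: Callable[[dict], Prediction],
    case: dict,
    confidence_floor: float = 0.85,
    ask_reviewer: Optional[Callable[[dict, Prediction], bool]] = None,
) -> tuple[Prediction, str]:
    """Run the model, then decide whether a human must confirm the result.

    Any prediction below `confidence_floor` is routed to a reviewer; the
    prediction, its confidence, and its stated intended use are all exposed
    so the reviewer can decide whether to rely on the system at all.
    """
    pred = predict(case)
    if pred.confidence >= confidence_floor:
        return pred, "auto-accepted (above confidence floor)"
    if ask_reviewer is None:
        return pred, "held for review (no reviewer configured)"
    approved = ask_reviewer(case, pred)
    return pred, "reviewer approved" if approved else "reviewer overrode the model"
```

The specific threshold and the `intended_use` field are placeholders; the design point is that the system's limitations are surfaced as part of its interface, where the overseer can see them, rather than buried in documentation.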