Australia considering mandatory guardrails for "high-risk" AI | DailyAI
Briefly

The government will consider mandatory safeguards for those who develop or deploy AI systems in legitimate, high-risk settings. This will help ensure AI systems are safe when harms are difficult or impossible to reverse.
The proposed guardrails include digital labels or watermarks to identify AI-generated content, complete traceability of AI data and model development, transparency in AI decision-making processes, and compliance with ethical standards and guidelines.
Read at DailyAI