AI safety tests inadequate, says Ada Lovelace Institute
Briefly

Existing methods for evaluating AI models have significant limitations and can be manipulated by developers, so multiple governance tools are needed to ensure safety.
Because AI evaluations face serious challenges and lack robust standards and practices, policymakers should not rely on them alone for policy decisions and should instead take a broader approach to governing AI.
Read at ITPro