This tool tests AI's resilience to 'poisoned' data
Briefly

The National Institute of Standards and Technology (NIST) is re-releasing Dioptra, a tool for testing AI models' vulnerability to maliciously manipulated ("poisoned") data, in line with President Biden's directives on AI development.
NIST emphasizes the importance of AI safety, warning that malicious data injection could lead to disastrous results, and urges federal agencies to exercise caution when deploying AI across their systems.
Read at ZDNET