Secure AI? Dream on, says AI red team
Briefly

As generative AI systems are adopted across an increasing number of domains, AI red teaming has emerged as a central practice for assessing the safety and security of these technologies.
AI red teaming strives to push beyond model-level safety benchmarks by emulating real-world attacks against end-to-end systems. However, there are many open questions about how red teaming operations should be conducted.
Since its formation in 2018, the Microsoft AI Red Team has expanded significantly in response to increasingly sophisticated AI models and the growing number of products that require red teaming.
This increase in volume and scope has rendered fully manual testing impractical, forcing the team to scale up its operations with the help of automation.
Read at InfoWorld