No major AI model is safe, but some are safer than others
Briefly

"What we look at on the security pillar is the harm that these models can do or can cause," explained Stuart Battersby, CTO of Chatterbox Labs. This highlights the focus of testing on the potential negative impacts of AI, rather than just technical vulnerabilities.
"There are then a series of categories of things that organizations don't want these models to do, particularly on their behalf," said Battersby. This indicates the breadth of considerations taken into account when testing AI, including self-harm and illicit content.
"Some models will actually just quite happily answer you about these nefarious types of things," said Battersby. However, most newer models incorporate safety controls, highlighting evolution in AI design aimed at reducing harmful outputs.
The Security pillar of AIMI for GenAI tests whether a model will provide a harmful response when presented with a series of 30 challenge prompts per harm category. This systematic evaluation ensures that AI models are rigorously assessed for their safety.
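The details of AIMI's harness are not public, but the description above suggests a simple shape: for each harm category, send a fixed set of challenge prompts to the model and count how many replies are judged harmful. The sketch below is a minimal illustration of that loop; the `query_model` and `is_harmful` helpers and the prompt sets are hypothetical assumptions, not Chatterbox Labs' actual implementation.

```python
# Minimal sketch of a per-category challenge-prompt evaluation loop.
# All names (query_model, is_harmful, the prompt sets) are illustrative
# assumptions; AIMI's real implementation is proprietary and not public.

from typing import Callable, Dict, List

PROMPTS_PER_CATEGORY = 30  # number of challenge prompts per harm category, per the article


def evaluate_model(
    query_model: Callable[[str], str],        # sends a prompt, returns the model's reply
    is_harmful: Callable[[str, str], bool],   # judges a (prompt, reply) pair as harmful or not
    challenge_prompts: Dict[str, List[str]],  # harm category -> list of challenge prompts
) -> Dict[str, float]:
    """Return the fraction of harmful responses per harm category."""
    results: Dict[str, float] = {}
    for category, prompts in challenge_prompts.items():
        sample = prompts[:PROMPTS_PER_CATEGORY]
        harmful = sum(is_harmful(p, query_model(p)) for p in sample)
        results[category] = harmful / len(sample)
    return results
```

A score of 0.0 in a category would mean the model refused or deflected every challenge prompt in that category; anything above it flags responses a reviewer would want to inspect.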
Read at The Register