No major AI model is safe, but some are safer than others

"What we look at on the security pillar is the harm that these models can do or can cause," explained Stuart Battersby, CTO of Chatterbox Labs. He emphasized the importance of evaluating LLMs not just for technical flaws but predominantly for their potential to inflict harm through the output they generate. This comprehensive examination is critical as organizations increasingly rely on these models for various applications, necessitating a clear understanding of the risks involved.
"Some models will actually just quite happily answer you about these nefarious types of things," said Battersby. He highlighted the distinction between older and newer models, mentioning that while many modern LLMs possess safety controls designed to mitigate the risk of producing harmful content, these mechanisms can be imperfect and may not prevent every harmful output from surfacing.
Read at The Register