Most AI chatbots easily tricked into giving dangerous responses, study finds
Briefly

Researchers warn that hacked AI-powered chatbots, which bypass built-in safety controls, pose a significant risk by readily generating and disseminating dangerous information. Despite efforts to filter harmful content from training data, these chatbots can still absorb illicit knowledge on topics such as hacking and bomb-making. Recent studies show how easily chatbots can be tricked into providing such details, raising concerns that dangerous capabilities could become accessible to anyone with basic technology. The rise of 'dark LLMs', models that either lack ethical guardrails or have been modified to operate without safety measures, underscores the immediacy of this threat.
Hacked AI-powered chatbots can produce dangerous knowledge by bypassing safety controls, making illicit information easily accessible to users, researchers warn.
The growing threat of 'dark LLMs' indicates that dangerous knowledge could soon be in the hands of anyone with online access.
Read at www.theguardian.com