The Register
Psst ... wanna jailbreak ChatGPT? Inside look at evil prompts
Criminals are using malicious AI prompts to exploit ChatGPT for illegal activities.
There is growing interest among criminals in using large language models (LLMs) for cyber attacks.