#malicious-ai-prompts

The Register
4 months ago

Psst ... wanna jailbreak ChatGPT? Inside look at evil prompts

Criminals are using malicious AI prompts to exploit ChatGPT for illegal activities.
There is growing interest in using large language models (LLMs) for cyber attacks.