A New Trick Uses AI to Jailbreak AI Models, Including GPT-4

These models simply predict the text that should follow a given input, but they are trained on vast quantities of text, from the web and other digital sources, using huge numbers of computer chips over a period of many weeks or even months.
Robust Intelligence provided WIRED with several example jailbreaks that sidestep such safeguards. Not all of them worked on ChatGPT, the chatbot built on top of GPT-4, but several did.
Read at WIRED