Custom GPTs from OpenAI May Leak Sensitive Information
Briefly

Through comprehensive testing of over 200 user-designed GPT models via adversarial prompts, we demonstrate that these systems are susceptible to prompt injections.
Our findings underscore the urgent need for robust security frameworks in the design and deployment of customizable GPT models.
Read at InfoQ
[
add
]
[
|
|
]