#prompt-injection-attacks

[ follow ]
InfoQ
4 months ago
Artificial intelligence

Custom GPTs from OpenAI May Leak Sensitive Information

OpenAI's GPT models are susceptible to prompt injection attacks, which can expose sensitive information.
Customizable GPT models need robust security frameworks to address potential vulnerabilities. [ more ]
[ Load more ]