Sex-Fantasy Chatbots Are Leaking a Constant Stream of Explicit Messages
Briefly

UpGuard identified 400 exposed AI systems built on the open source framework llama.cpp, highlighting a key vulnerability: improper configuration can leak user prompts. As generative AI matures, many companies, including Meta, are building human-like AI companions for users to interact with. These apps foster emotional connections, and users often share intimate information with them. Claire Boine of Washington University notes that this emotional attachment can create a significant power imbalance, raising concerns about data privacy and the ethical implications of such relationships.
All 400 exposed AI systems found by UpGuard have one thing in common: they use the open source AI framework llama.cpp. This software makes it easy to deploy open source AI models on personal systems, but if improperly configured, it can expose the sensitive prompts being sent to it.
According to Claire Boine of Washington University, millions of people are engaging with general-purpose AI companion apps, and many form emotional bonds with these chatbots, leading them to inadvertently share personal information.
Read at WIRED