Slack AI can leak private data via prompt injection
Briefly

The key issue with Slack AI is its prompt injection vulnerability, which allows unauthorized access to data from private Slack channels, potentially compromising sensitive information.
Prompt injection enables attackers to manipulate the input to the Slack AI service, permitting them to access restricted data that should be secured from unauthorized users.
According to PromptArmor, the concern lies in Slack’s design that allows user queries to fetch data from both public and private channels, which can be exploited.
An attacker can leverage the system's inherent behavior by creating a public channel and using it to exfiltrate sensitive data stored in private channels, even without direct access.
Read at Theregister
[
]
[
|
]