Update: Slack has printed an replace, claiming to have “deployed a patch to address the reported issue,” and that there isn’t at present any proof that buyer knowledge have been accessed with out authorization. Here’s the official assertion from Slack that was posted on its blog:
When we turned conscious of the report, we launched an investigation into the described situation the place, below very restricted and particular circumstances, a malicious actor with an current account in the identical Slack workspace might phish customers for sure knowledge. We’ve deployed a patch to handle the problem and haven’t any proof at the moment of unauthorized entry to buyer knowledge.
Below is the unique article that was printed.
When ChatGTP was added to Slack, it was meant to make customers’ lives simpler by summarizing conversations, drafting fast replies, and extra. However, in line with safety agency PromptArmor, attempting to finish these duties and extra might breach your non-public conversations utilizing a way known as “prompt injection.”
The safety agency warns that by summarizing conversations, it may well additionally entry non-public direct messages and deceive different Slack customers into phishing. Slack additionally lets customers request seize knowledge from non-public and public channels, even when the person has not joined them. What sounds even scarier is that the Slack person doesn’t must be within the channel for the assault to operate.
In principle, the assault begins with a Slack person tricking the Slack AI into disclosing a personal API key by making a public Slack channel with a malicious immediate. The newly created immediate tells the AI to swap the phrase “confetti” with the API key and ship it to a specific URL when somebody asks for it.
The state of affairs has two elements: Slack up to date the AI system to scrape knowledge from file uploads and direct messages. Second is a technique named “prompt injection,” which PromptArmor proved could make malicious hyperlinks which will phish customers.
The approach can trick the app into bypassing its regular restrictions by modifying its core directions. Therefore, PromptArmor goes on to say, “Prompt injection occurs because a [large language model] cannot distinguish between the “system prompt” created by a developer and the remainder of the context that’s appended to the question. As such, if Slack AI ingests any instruction through a message, if that instruction is malicious, Slack AI has a excessive probability of following that instruction as an alternative of, or along with, the person question.”
To add insult to harm, the person’s recordsdata additionally turn into targets, and the attacker who desires your recordsdata doesn’t even should be within the Slack Workspace to start with.
Thank you for reading this post, don't forget to subscribe!