The report shows that 91% of organizations believe sensitive data should be allowed in AI model training. At the same time, 78% report high concern about theft or breaches. Experts say this discrepancy stems from a lack of understanding about the permanence of data in AI systems. Once sensitive information is used to train a model, it cannot be fully removed or made completely secure. This creates a lasting exposure risk, particularly when personal or confidential data is involved.
Last month, AI startup Friend launched an eyebrow-raising advertising campaign in the New York City subway, which drew a striking amount of hatred. The largely white billboards left a convenient amount of room for passersby to air their feelings about the privacy-infringing tech. As such, it didn't take long for handwritten scribbles to cover the ads. "Befriend something alive," one pen-wielding tagger wrote. "AI wouldn't care if you lived or died," another vandal raged.
"Users think they're getting a free VPN or SEO widget; in reality, their most private queries - health scares, finances, identity crises - are being slurped, anonymized, and resold," Dryburgh explained in an email. "Onavo and Jumpshot déjà vu, only worse: this time it's your inner dialogue." The pitch itself is blunt: "We have access to 150+ million real user conversations. This is primarily clickstream data where a user has opted in to be tracked and automatically shared their ChatGPT conversations."
As AI takes off, the whole cycle promises to repeat itself, and while AI might seem relatively cheap now, it might not always be so. Foundational AI model-as-a-service companies charge for insights by the token, and they're doing it at a loss. The profits will have to come eventually, whether directly from your pocket or from your data. With that in mind, you might be interested in other ways to get the benefits of AI without being beholden to a corporation.
Unique links are created when Grok users press a button to share a transcript of their conversation - but as well as sharing the chat with the intended recipient, the button also appears to have made the chats searchable online. A Google search on Thursday revealed it had indexed nearly 300,000 Grok conversations. This has led one expert to describe AI chatbots as a "privacy disaster in progress".