While AI agents show promise in taking AI assistance to the next level by carrying out tasks for users, that autonomy also unleashes a whole new set of risks. Cybersecurity company Radware, as reported by The Verge, decided to test OpenAI's Deep Research agent for those risks -- and the results were alarming.
ChatGPT's research assistant sprang a leak - since patched - that let attackers steal Gmail secrets with a single carefully crafted email. Deep Research, a tool unveiled by OpenAI in February, enables users to ask ChatGPT to browse the internet or their personal email inbox and generate a detailed report on its findings. The tool can be integrated with apps like Gmail and GitHub, allowing people to do deep dives into their own documents and messages without ever leaving the chat window.
"APT28 is abusing Outlook as a covert channel through a VBA macro backdoor named NotDoor," Jason Soroko, Senior Fellow at Sectigo, explains. "Delivery uses DLL sideloading of a malicious SSPICLI.dll by the signed OneDrive.exe to disable macro protections and stage commands. The macro watches inbound mail for a trigger word and can exfiltrate data upload files and run commands. This blends with trusted binaries and normal mail flow and can slip past perimeter tools and basic detections."