
"We identify persistent limitations in reporting around ecosystemic and safety-related features of agentic systems. The biggest revelation of the report is just how hard it is to identify all the things that could go wrong with agentic AI. That is principally the result of a lack of disclosure by developers."
"The OpenClaw software attracted heavy attention last month not only for its enabling of wild capabilities -- agents that can, for example, send and receive email on your behalf -- but also for its dramatic security flaws, including the ability to completely hijack your personal computer."
"Agentic AI is something of a security nightmare at the moment, a discipline marked by lack of disclosure, lack of transparency, and a striking lack of basic protocols about how agents should operate."
Agentic AI technology is rapidly entering the mainstream, exemplified by OpenAI's hiring of OpenClaw's creator despite significant security concerns. OpenClaw demonstrated both powerful capabilities, such as sending and receiving email on users' behalf, and critical vulnerabilities, including the potential for complete hijacking of a user's personal computer. MIT researchers and collaborators surveyed 30 common agentic AI systems and found a field marked by insufficient disclosure, gaps in transparency, and an absence of standardized protocols for how agents should operate. Because developers consistently fail to report ecosystemic and safety-related features adequately, it remains difficult to identify the potential failures and risks these systems pose.
#agentic-ai-security #ai-transparency-and-disclosure #ai-risk-management #openclaw-vulnerabilities #ai-developer-responsibility
Read at ZDNET