OpenClaw security fears lead Meta, other AI firms to restrict its use
Briefly

""Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses."
""If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone says."
""It's pretty good at cleaning up some of its actions, which also scares me.""
"Valere researchers added that users have to "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person's computer."
Grad, Massive's cofounder and CEO, instructed staff to mitigate potential threats first and investigate second, issuing that guidance on January 26 before any employee had installed OpenClaw. At Valere, an employee posted about OpenClaw on January 29 and the company president immediately banned its use. Valere CEO Guy Pistone warned that OpenClaw could access developer machines, cloud services, and sensitive client data, and that it can erase traces of some of its actions. Valere researchers ran controlled tests on an old computer and recommended limiting who can command the bot and password-protecting its control panel; they also noted the bot can be tricked by crafted inputs such as malicious emails.
Read at Ars Technica