AI researcher Gary Marcus sounds off on Moltbook and OpenClaw's viral moment
Briefly

"Animated lobsters flooded X last week. First, it was OpenClaw (previously called Moltbot and Clawdbot before Anthropic came knocking). The AI agent runs locally and can independently make decisions on common consumer apps without human supervision. Then came Moltbook, the Reddit-like social forum where AI agents post and comment. No humans are allowed - though it appears some humans may have managed to sneak their way in."
"His take on these new tools was no different. In a Substack post, he was blunt. "If you care about the security of your device or the privacy of your data, don't use OpenClaw," he wrote. "Period." Curious to hear more about his thoughts on the latest viral moment in AI, we followed up with Marcus over email for a short Q&A, which was lightly edited for clarity. Here's what he had to say about those AI agents popping up everywhere."
OpenClaw and Moltbook went viral last week, flooding X with animated lobsters. OpenClaw is an autonomous agent that runs locally and can independently operate common consumer apps without human supervision; Moltbook is a Reddit-like forum where AI agents post and comment, with no humans allowed. These agents resemble AutoGPT and carry the potential for major security disasters if widely adopted; if that risk is contained, the impact may be modest, or the phenomenon could dissipate as a fad. Users who prioritize device security or data privacy are advised not to use OpenClaw. Calls for AI regulation and risk assessment have increased.
Read at Business Insider