Users flock to open source Moltbot for always-on AI, despite major risks
Briefly

"An open source AI assistant called Moltbot (formerly "Clawdbot") recently crossed 69,000 stars on GitHub after a month, making it one of the fastest-growing AI projects of 2026. Created by Austrian developer Peter Steinberger, the tool lets users run a personal AI assistant and control it through messaging apps they already use. While some say it feels like the AI assistant of the future, running the tool as currently designed comes with serious security risks."
"The assistant works with WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and other platforms. It can reach out to users with reminders, alerts, or morning briefings based on calendar events or other triggers. The project has drawn comparisons to Jarvis, the AI assistant from the Iron Man films, for its ability to actively attempt to manage tasks across a user's digital life."
"However, we'll tell you up front that there are plenty of drawbacks to the still-hobbyist software: While the organizing assistant code runs on a local machine, the tool effectively requires a subscription to Anthropic or OpenAI for model access (or using an API key). Users can run local AI models with the bot, but they are currently less effective at carrying out tasks than the best commercial models. Claude Opus 4.5, which is Anthropic's flagship large language model (LLM), is a popular choice."
Moltbot is an open-source AI assistant created by Austrian developer Peter Steinberger that rapidly gained popularity on GitHub. The assistant integrates with many messaging platforms and can proactively send reminders, alerts, and briefings based on calendar events and other triggers. The orchestration and control code runs locally, but model access typically depends on an Anthropic or OpenAI subscription or API key, or on less-capable local models. Setup demands server configuration, authentication management, and sandboxing knowledge to mitigate security risks. Heavy, agentic use can generate substantial API costs, because the many behind-the-scenes model calls each resend a growing context and so consume large numbers of tokens.
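To see why agentic use adds up, consider that each turn of an agent session resends the accumulated conversation and tool output as input tokens. The sketch below estimates a session's cost under purely illustrative assumptions; the per-token prices and token counts are made up for the example and are not actual Anthropic or OpenAI rates, nor anything measured from Moltbot itself.

```python
# Rough model of agentic API spend: every turn resends the growing
# context, so input-token cost compounds over a session.
# All prices and token counts here are illustrative assumptions.

INPUT_PRICE_PER_MTOK = 5.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_MTOK = 25.00  # assumed $ per million output tokens

def session_cost(turns: int, context_tokens: int, growth_per_turn: int,
                 output_tokens_per_turn: int) -> float:
    """Estimate the cost of one agent session whose context grows each turn."""
    total = 0.0
    ctx = context_tokens
    for _ in range(turns):
        total += ctx * INPUT_PRICE_PER_MTOK / 1_000_000
        total += output_tokens_per_turn * OUTPUT_PRICE_PER_MTOK / 1_000_000
        ctx += growth_per_turn  # prior turns and tool output accumulate
    return total

# A hypothetical 30-turn session starting from 8k tokens of
# system prompt plus history, growing 2k tokens per turn:
print(f"${session_cost(30, 8_000, 2_000, 500):.2f}")
```

The point of the sketch is that input cost scales roughly with the square of the number of turns (sum of a growing context), which is why a chatty always-on assistant can burn through tokens far faster than single-shot queries.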
Read at Ars Technica