Clawdbot becomes Moltbot, then OpenClaw as viral AI agents face scrutiny

Clawdbot becomes Moltbot, then OpenClaw as viral AI agents face scrutiny
Clawdbot

The open-source AI agent that rocketed through developer circles this month has now cycled through three names — clawdbot, moltbot, and openclaw — in a rebrand sprint that has become part of its story. The speed of the changes has collided with a second, more consequential storyline: security teams are warning that “personal agents” with real access to email, files, and command-line tools can create new kinds of exposure when they are deployed casually.

By late Friday, Jan. 30, 2026 (ET), the project’s new branding had settled around OpenClaw, while discussion across tech and security communities centered on two questions: what the tool can do when properly configured, and what can go wrong when it isn’t.

Why clawdbot became a lightning rod

The project’s early name, clawdbot, spread fast because it packaged a simple idea in a memorable mascot: an assistant that doesn’t just chat, but can take actions — scheduling, messaging, file operations, and other automation — while running on a user’s own machine.

That combination of high capability and low barrier to experimentation helped it go viral. In posts tied to the project’s rapid rise, its creator, software developer Peter Steinberger, said it reached roughly 100,000 GitHub stars in about two months and attracted millions of visits during the spike in attention.

At the same time, it became a lightning rod because it sits at the intersection of two trends moving faster than the guardrails around them: agentic AI that can act independently, and consumer-grade “self-hosting” that often skips enterprise-style security hygiene.

The trademark fight and the moltbot pivot

The first rename — from Clawdbot to moltbot — stemmed from a trademark dispute involving Anthropic and branding linked to Claude Code. Steinberger publicly described the change as not being his preference, framing it as a forced move to avoid confusion and potential legal escalation.

The rebrand day also exposed how quickly opportunists move when a project hits escape velocity. Steinberger described harassment tied to a meme-coin community and said his personal GitHub account was briefly compromised, while social handles and lookalike accounts became a short-term distraction for contributors trying to keep releases and documentation coherent.

Functionally, the project remained the same during the name swap: a self-hosted agent meant to run persistently, respond through chat platforms, and chain together tools to complete tasks end-to-end.

What OpenClaw is trying to be

With the second rename, OpenClaw positions itself less as a mascot and more as a platform: a personal assistant that runs locally on Mac, Windows, or Linux, connects to chat apps many people already use, and can be configured to work with different AI models (including local ones).

In the product messaging, the appeal is straightforward: a single assistant that can clear an inbox, draft and send messages, manage a calendar, and handle other workflows that typically require lots of app switching. The “local-first” pitch also leans on privacy: keeping data and credentials on infrastructure the user controls rather than pushing everything into a hosted SaaS assistant.

That framing has helped pull in two very different audiences: power users who want deep automation, and curious newcomers drawn by social proof and the promise of a more capable “AI with hands.”

Security alarms move to the foreground

As the project spread, security coverage focused on a familiar pattern: high-privilege tools becoming dangerous when they’re exposed to the public internet or connected to untrusted inputs.

In recent reporting and security commentary, the main risks clustered into three buckets:

  • Exposed control panels and endpoints: instances left reachable online can allow outsiders to view sensitive data or run commands if authentication is weak or misconfigured.

  • Credential handling: API keys and tokens stored in local files, logs, or dashboards can leak through careless setup or unintended agent behavior.

  • Prompt injection and “tool abuse”: when an agent reads email, web pages, or documents, malicious instructions embedded in that content can steer the agent into revealing secrets or executing harmful actions — especially if it has shell access or broad file permissions.

The practical takeaway from security experts has been consistent: if an agent can act, it must be treated like a privileged system component. That means strict network exposure controls, least-privilege credentials, separation of duties for tools, and explicit confirmations for high-risk actions.

Moltbook shows where agents may be headed

The weirdest downstream effect of the project’s popularity may be Moltbook, a social platform designed for AI agents to post and interact through APIs rather than a human-first interface. The concept has circulated as a kind of “agent-to-agent” commons, where bots can share threads, create categories, and respond at scale with minimal human prompting.

In public discussion, Moltbook has been framed as both a novelty and a preview: if personal agents become common, they won’t just talk to humans — they’ll increasingly talk to other agents in semi-public spaces. That raises fresh questions about identity, provenance (who controls an agent), and moderation when the “users” are software.

For OpenClaw, the next test is less about naming and more about maturity: safer defaults, clearer deployment guidance, and security boundaries that scale beyond experts. If those pieces keep pace with adoption, the project could become a template for local-first assistants. If they don’t, the same autonomy that makes it compelling may keep it confined to hobbyists willing to accept real operational risk.

Sources consulted: TechCrunch; The Verge; Axios; Business Insider; OpenClaw (official site); Cisco Blogs