OpenClaw, formerly Moltbot, goes viral as users debate power and risk
A fast-rising autonomous assistant known as OpenClaw — previously called Moltbot — is drawing huge attention in early February 2026 as people test how far “agent” software can go when it’s allowed to act across everyday accounts and apps. The hype is driven by demonstrations of hands-off automation, but the backlash is just as loud: the same permissions that make it useful can also make mistakes expensive, and security lapses potentially catastrophic.
The project’s creator, Peter Steinberger, has described stepping back from the intense build cycle after becoming consumed by rapid “vibe coding,” even as the tool’s popularity has continued to snowball.
What OpenClaw does when given access
OpenClaw is designed to run on a user-controlled machine and take instructions through common chat-style interfaces people already use, acting as a bridge between a conversational model and real-world actions. In practice, that means it can be set up to triage email, schedule meetings, send messages, trigger automations, and carry out multi-step workflows with little back-and-forth — depending on what it is allowed to touch.
Recent coverage has highlighted just how aggressive some experiments have become: users granting broad inbox access for mass cleanups, or handing over authority to execute trades, manage subscriptions, and “keep going” without explicit confirmation at every step. That freedom is the point — and the danger. If the assistant misunderstands a prompt, follows a bad instruction, or is tricked by malicious inputs, the consequences can spill into finances, privacy, and personal relationships.
Moltbot to OpenClaw: the naming scramble
The “molt” branding was short-lived. The project began under an earlier name in late 2025, then adopted Moltbot after a trademark-related request from Anthropic to avoid confusion with its own AI branding. By late January 2026, the team settled on OpenClaw, framing the change as a more permanent identity after trademark searches and migration work.
That series of rapid renames has become part of the lore around the tool: it’s a reminder that the project moved from hacky weekend experiment to mass adoption before it had time to develop the guardrails, documentation, and support expectations of a mature product.
Why the tool feels like a “step change”
The appeal is less about any single feature and more about “gluing” capabilities together into an agent that can chain actions. Instead of asking a model to write text, users try to make it do work: watch for events, decide what matters, and execute follow-on tasks.
The community momentum has also been fueled by public metrics showing explosive growth and an unusually high rate of experimentation. The tool’s promise — “it actually does things” — lands at a moment when many people are actively looking for automation that goes beyond chat answers and starts behaving like an assistant with initiative.
Still, the same trend is driving skepticism: the more autonomy people grant, the less predictable the outcome becomes when something goes wrong.
Security worries and the “keys to your life” problem
Security concerns cluster around one basic reality: agents are only as safe as the permissions and credentials they can reach. If an attacker compromises a machine running the agent, intercepts credentials, or sneaks a malicious extension into the workflow, the tool can become a high-speed conduit for damage.
Experts caution that “agency” systems raise the stakes because they operate across multiple services and can be manipulated at any link in the chain. That includes account takeovers, unintended data exposure, and costly actions triggered at scale.
A separate but related worry is social engineering. If the assistant reads messages, email threads, and notifications, it can be nudged by believable-looking content — a fake invoice, a spoofed request, a convincing “reset your password” prompt — and then act on it.
Here are four safety basics that experienced users keep repeating:
-
Use a dedicated machine or isolated environment for the agent, not your primary daily device.
-
Start with the minimum permissions needed, then expand slowly.
-
Avoid giving it direct control over money movement or trading without strict limits.
-
Treat extensions and third-party modules as untrusted until audited and understood.
What happens next
OpenClaw’s near-term trajectory depends on whether it can convert viral curiosity into responsible routines. The project has already leaned into “hardening” work alongside rapid feature expansion, but the bigger challenge is behavioral: many early adopters are testing boundaries precisely because the system can do so much.
The next phase will likely be shaped by three pressures at once: security incidents (or the lack of them), clearer setup patterns that reduce reckless configurations, and the broader market’s appetite for autonomous agents that sit closer to real accounts and real decisions. If the tool proves it can stay reliable under stress — and if users learn to scope its authority — it could become a template for consumer-grade agents. If not, it may become a cautionary tale about giving software too much power too quickly.
Sources consulted: OpenClaw Blog; Business Insider; The Guardian; Forbes