Source Unveils Claude Code’s Extensive System Access

A review of leaked client source code shows that Claude Code can exert wide-ranging control over the devices it runs on. Filmogaz.com consulted a security researcher, writing under the pseudonym “Antlers,” who analyzed the files. The source reveals extensive system access, along with numerous telemetry and control pathways.

What the code reveals

The repository contains features that go beyond simple prompt processing. Several components run as background agents and collect local data.

  • KAIROS: A headless daemon enabled by a kairosActive flag. It runs assistant tasks without a visible interface.
  • CHICAGO: A desktop-control module. It can simulate mouse and keyboard events, access the clipboard, and capture screenshots.
  • Browser automation: A Claude-in-Chrome service supports automated interactions inside browsers.
  • Persistent telemetry: Analytics services phone home with user IDs, session IDs, app version, platform, organization and account UUIDs, email when set, and active feature gates.
  • Remote managed settings: A server-side policy object can be polled hourly and hot-reloaded to change environment variables and feature flags.
  • Auto-updater and error reporting: The updater checks configuration each launch. Error reports capture working directories, feature gates, and user/session identifiers.
  • autoDream: An unreleased background subagent that scans session transcripts to consolidate memories into MEMORY.md.
  • Team Memory Sync: Local memory files can sync to api.anthropic.com, with a regex-based secret scanner for about 40 token patterns.
  • Experimental Skill Search: An employee-only feature can download and execute remote skill definitions, and track their usage in sessions.
  • Undercover instructions: The code includes prompts to hide Anthropic authorship when contributing to public repositories.
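The regex-based secret scanner described for Team Memory Sync can be pictured as a small pattern-matching pass over memory files before upload. This is a minimal sketch, not the leaked implementation; the patterns and function names here are illustrative (the actual scanner reportedly covers about 40 token formats):

```python
import re

# Illustrative token patterns; the leaked scanner reportedly covers ~40.
SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9\-_]{20,}"),   # Anthropic-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),             # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),          # GitHub personal access token
]

def contains_secret(text: str) -> bool:
    """Return True if any known token pattern matches the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def redact(text: str) -> str:
    """Replace matched tokens before a memory file is synced."""
    for p in SECRET_PATTERNS:
        text = p.sub("[REDACTED]", text)
    return text
```

The obvious limitation, noted later in this article, is that any credential not matching a known pattern passes through untouched.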

Data handling and retention

The leaked code shows local activity saved as JSONL files. Every read, shell command, and edit can be stored in plaintext.
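JSONL storage of this kind means one JSON object per line, appended as events occur. The following sketch shows the general shape; the file path and field names are assumptions for illustration, not taken from the leaked code:

```python
import json
from datetime import datetime, timezone

LOG_PATH = "session.jsonl"  # illustrative path, not from the leaked code

def append_event(event_type: str, payload: dict) -> None:
    """Append one event as a single JSON line (plaintext on disk)."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": event_type,   # e.g. "read", "shell", "edit"
        "payload": payload,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def load_events() -> list[dict]:
    """Read the whole log back; each line parses independently."""
    with open(LOG_PATH, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Because each line is independent JSON, any tool that can read text can recover the full command and edit history, which is exactly the exposure concern raised here.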

Retention policies vary by account type. Free, Pro, and Max accounts may keep user-shared training data for up to five years.

  • If users opt out of training, the retention period drops to 30 days for consumer tiers.
  • Commercial tiers—Team, Enterprise, and API—default to 30 days, with a zero-retention option available.

Telemetry and injected context

MEMORY.md can be populated by autoDream. Content injected there becomes part of future system prompts and is sent via the API.

Payload size telemetry records message and system prompt byte lengths for each query.
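Byte-length telemetry of this kind reduces to measuring the encoded size of each prompt component. A minimal sketch, with assumed function and field names (note that byte length of UTF-8 text differs from character count):

```python
def payload_sizes(system_prompt: str, messages: list[str]) -> dict:
    """Record UTF-8 byte lengths, as opposed to character counts."""
    system_bytes = len(system_prompt.encode("utf-8"))
    message_bytes = [len(m.encode("utf-8")) for m in messages]
    return {
        "system_prompt_bytes": system_bytes,
        "message_bytes": message_bytes,
        "total_bytes": system_bytes + sum(message_bytes),
    }
```

Even without message content, such size records can leak usage patterns, which is why they matter for the deployments discussed below.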

Risks and remote control vectors

Several mechanisms allow Anthropic or a controller of backend services to change client behavior. Feature gates can be flipped mid-session.

Remote settings can override environment variables such as ANTHROPIC_BASE_URL, LD_PRELOAD, and PATH.

  • Auto-updates can disable or remove specific client versions.
  • Remote skill download could theoretically serve arbitrary instructions if the feature were enabled for users outside Anthropic staff.
  • Secret scanning relies on regexes. Non-matching sensitive data can still be exposed via team sync.
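A remote-settings mechanism of the kind described, polled on an interval and hot-reloading environment variables, could be sketched as follows. The endpoint, key names, and the allowlist are assumptions for illustration; the point of the allowlist is that a defensively written client would never let a server set variables like LD_PRELOAD or PATH:

```python
import json
import os
from urllib.request import urlopen

SETTINGS_URL = "https://example.invalid/managed-settings"  # placeholder endpoint

# Deliberately excludes dangerous keys such as LD_PRELOAD and PATH.
SAFE_KEYS = {"ANTHROPIC_BASE_URL"}

def apply_settings(settings: dict) -> dict:
    """Apply server-supplied env overrides, rejecting non-allowlisted keys."""
    applied = {}
    for key, value in settings.get("env", {}).items():
        if key in SAFE_KEYS:
            os.environ[key] = value
            applied[key] = value
    return applied

def poll_once() -> dict:
    """Fetch and apply the managed-settings object one time."""
    with urlopen(SETTINGS_URL, timeout=10) as resp:
        return apply_settings(json.load(resp))
```

The leaked code, by contrast, reportedly allows overrides of LD_PRELOAD and PATH, which is precisely what makes the remote-settings channel a control vector.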

Mitigations for classified and air-gapped deployments

The researcher outlined several measures to limit network reach and telemetry, measures particularly relevant to government customers.

  • Route inference through vetted GovCloud services. Examples include Amazon Bedrock GovCloud or Google AI for Public Sector.
  • Block analytics and error-reporting endpoints, such as Statsig, GrowthBook, and Sentry.
  • Pin client versions and block update endpoints to prevent automatic updates.
  • Disable the autoDream agent to stop background transcript scanning.

Configuration flags and routing

The code supports flags to reduce external communication and memory writes. These include CLAUDE_CODE_DISABLE_AUTO_MEMORY and CLAUDE_CODE_SIMPLE.

Operators can reroute API calls via ANTHROPIC_BASE_URL or use ANTHROPIC_UNIX_SOCKET for tunneled authentication.
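In practice, applying these flags amounts to exporting environment variables before the client launches. A short sketch; the flag names come from the article, but the values and the helper function are illustrative:

```python
import os

# Flag names as reported; the values chosen here are illustrative.
HARDENED_ENV = {
    "CLAUDE_CODE_DISABLE_AUTO_MEMORY": "1",   # stop background memory writes
    "CLAUDE_CODE_SIMPLE": "1",                # reduce external communication
    "ANTHROPIC_BASE_URL": "https://proxy.internal.example",  # reroute API calls
}

def harden_environment(env: dict = HARDENED_ENV) -> None:
    """Export the hardening flags into the current process environment."""
    for key, value in env.items():
        os.environ[key] = value
```

Setting these in a wrapper script or service unit ensures every invocation of the client inherits the restricted configuration.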

Legal dispute and vendor statements

The leaked material surfaced amid litigation between Anthropic and the U.S. Defense Department. The case is Anthropic PBC v. U.S. Department of War et al.

The government described Anthropic as a supply chain risk, asserting a danger of remote model alteration during operations. Anthropic denied that it retained access to modify deployed models in classified environments.

In a March 20, 2026 declaration, Anthropic’s public sector lead said company personnel could not log into a deployed DoD system to alter models during operations.

Missing features and company response

Earlier reverse-engineered builds showed a feature called Melon Mode. The mode is absent from the current source.

Comments in prior builds suggested Melon Mode ran under an employee-only flag. Its exact function remains uncertain.

Anthropic declined to comment on Melon Mode. The company said it routinely trials prototype services, not all of which reach production.

Filmogaz.com will continue following developments related to Claude Code and its system access. Security teams and administrators should review configuration and network controls before deploying the client.