Anthropic's Claude Code Source Code Leaked Through npm Registry Misconfiguration
Take action: If you develop or publish npm packages, audit your build pipelines now: run `npm pack --dry-run` before every publish to verify that no source maps, debug files, or internal assets are included in your distribution. If you integrate Claude Code into your workflows, monitor Anthropic's official advisories for any remediation steps related to this tool.
Learn More
On March 31, 2026, security researcher Chaofan Shou publicly disclosed that the complete source code of Anthropic's Claude Code CLI, the company's flagship AI-powered terminal coding agent, had been inadvertently exposed through a source map (`.map`) file included in the published `@anthropic-ai/claude-code` npm package.
The `.map` file referenced the full, unminified TypeScript source, which was directly downloadable as a ZIP archive from Anthropic's own R2 cloud storage bucket.
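For context on how a single stray file can expose this much: a v3 source map is a small JSON document whose `sources` entries name (or link to) the original files, and whose optional `sourcesContent` array can embed them verbatim. Either field is enough to recover unminified code. The values below are illustrative, not taken from the leaked package:

```json
{
  "version": 3,
  "file": "cli.js",
  "sources": ["../src/index.ts"],
  "sourcesContent": ["// original, unminified TypeScript would appear here"],
  "mappings": ";;AAAA"
}
```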
Within hours of the disclosure, multiple mirror repositories appeared on GitHub, rapidly accumulating over 1,100 stars and 1,900 forks.
The root cause of the leak was a build configuration oversight. Source map files are debugging tools that map minified or bundled code back to the original source, and are meant for development environments only. A misconfigured `.npmignore` or `files` field in `package.json` failed to exclude the `.map` files from the npm package distribution.
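One common safeguard against this class of mistake is an explicit `files` allowlist in `package.json`, so that only named build artifacts ship and `.map` files are excluded by omission. A minimal sketch, with a hypothetical package name and directory layout:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "dist/**/*.d.ts"
  ]
}
```

A `.npmignore` entry such as `*.map` would also work as a blocklist, but allowlists fail safer: any new file type the bundler starts emitting is excluded by default rather than published by default.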
With Bun's bundler, which Claude Code uses, source maps are generated by default unless explicitly disabled. The exposed data includes the following proprietary elements:
- complete tool system architecture (~40 permission-gated tools spanning 29,000 lines of code),
- the core Query Engine module (~46,000 lines handling all LLM API calls, streaming, and orchestration),
- multi-agent orchestration and coordinator mode for spawning parallel worker agents,
- IDE bridge systems for VS Code and JetBrains integration,
- full permission and approval flows, system prompts governing safety, and telemetry hooks,
- internal feature flags revealing unreleased capabilities such as "KAIROS" (persistent always-on assistant mode), "ULTRAPLAN" (remote planning sessions), "BUDDY" (a Tamagotchi-style AI companion pet), and voice interaction,
- "Undercover Mode" — a subsystem designed to prevent Anthropic's AI from revealing internal codenames when employees use Claude Code on open-source repositories.
This marks Anthropic's second major leak in just five days. On March 26, a CMS configuration error exposed details about the unreleased "Claude Mythos" model, draft blog posts, and approximately 3,000 unpublished assets.
Regarding that earlier incident, an Anthropic spokesperson attributed the issue to human error in the CMS configuration. Community analysis also revealed that a similar source map exposure had occurred with an earlier version of the Claude Code package in February 2025, raising questions about recurring gaps in Anthropic's release pipeline security. As of this writing, Anthropic has not issued an official statement specifically addressing the March 31 source code leak.
The security implications extend beyond proprietary code exposure: the leaked code gives malicious actors a detailed map of internal API structures and permission and approval flows that could be studied for bypass opportunities.
Organizations integrating Claude Code into their development workflows are urged to monitor Anthropic's official security advisories and review the npm registry for patched releases. Developers in general are advised to audit their own build pipelines, use `npm pack --dry-run` before every publish to verify what gets distributed, and ensure that `.map` files are explicitly excluded from production packages.
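The pre-publish audit described above can be scripted as a CI gate. A minimal sketch follows; the `dist/` layout and file names are hypothetical, and in a real package directory the core check is simply piping `npm pack --dry-run` output through `grep`:

```shell
# In a real package directory, the one-liner would be:
#   npm pack --dry-run 2>&1 | grep '\.map$' && echo "source maps would ship!"
#
# Self-contained demonstration against a stand-in build directory:
mkdir -p /tmp/pkg-demo/dist
touch /tmp/pkg-demo/dist/cli.js /tmp/pkg-demo/dist/cli.js.map

# Scan the build output for stray source maps.
maps=$(find /tmp/pkg-demo/dist -name '*.map')
if [ -n "$maps" ]; then
  echo "WARNING: source maps found in build output:"
  echo "$maps"
fi
```

Wiring this into a `prepublishOnly` script (with a non-zero exit on a match) would block the publish outright rather than merely warning.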