Anthropic Accidentally Leaks Claude Code's Full Source Code via npm — Capybara Models and BUDDY Pet Feature Exposed (March 2026)
A 57 MB source map file accidentally bundled into Claude Code v2.1.88 on npm exposed 512,000 lines of TypeScript, revealing unreleased Capybara models, a hidden Tamagotchi-style AI pet called BUDDY, an 'undercover mode' that masks AI contributions, and profanity-detecting telemetry. Anthropic has not commented publicly.
Anthropic accidentally published the complete source code of its Claude Code CLI to the public npm registry. A 57 MB JavaScript source map file (.map) included in version 2.1.88 of the @anthropic-ai/claude-code package exposed approximately 512,000 lines of production TypeScript — 1,900 files covering everything from agent logic to internal model codenames.
What Happened
Source map files are debugging artifacts that map compiled JavaScript back to its original source. They are never meant to be shipped in public packages. In version 2.1.88, Anthropic's build pipeline inadvertently included these files, making the entire TypeScript codebase trivially extractable by anyone who downloaded the package from npm. The mistake was first spotted at approximately 4:23 AM ET by Chaofan Shou (@Fried_rice), an intern at Solayer Labs, who announced it on X. Within hours, developers had mirrored the full codebase to GitHub, where a repository archiving the leaked code climbed past 5,000 stars in under 30 minutes. This is not the first such incident — a similar source map leak occurred in early 2025.
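The kind of release check that catches this class of mistake can be small. The sketch below is an added example, not code from Anthropic or from the leaked package: it audits the file set that would go into an npm tarball and flags shipped .map files, maps whose sourcesContent field (the part of the source map v3 format that embeds original source text) is populated, and bundles carrying a sourceMappingURL comment. The function names, severity labels and sample files are assumptions made for illustration.

```javascript
// Added example (not Anthropic's tooling): flag source maps before publishing.
// `files` is [{ path, content }] for everything that would go into the tarball.
const MAP_REF = /\/[\/*][#@]\s*sourceMappingURL=(\S+)/;

function auditPackageFiles(files) {
  const findings = [];
  for (const { path, content } of files) {
    if (path.endsWith('.map')) {
      let sources = 0, embedded = 0;
      try {
        const map = JSON.parse(content);
        if (Array.isArray(map.sources)) sources = map.sources.length;
        // sourcesContent (source map v3) carries the original files verbatim.
        if (Array.isArray(map.sourcesContent)) {
          embedded = map.sourcesContent.filter((s) => typeof s === 'string' && s).length;
        }
      } catch { /* unparsable .map: still reported below */ }
      // Severity labels are this example's own policy.
      findings.push({ path, kind: 'map-file', sources, embedded,
        severity: embedded > 0 ? 'high' : 'medium' });
    } else if (/\.[cm]?js$/.test(path)) {
      const m = MAP_REF.exec(content);
      if (!m) continue;
      const inline = m[1].startsWith('data:'); // whole map embedded in the bundle
      findings.push({ path, kind: inline ? 'inline-map' : 'map-reference',
        target: inline ? 'data: URI' : m[1], severity: inline ? 'high' : 'low' });
    }
  }
  return findings;
}

// A dangling reference alone leaks nothing; a shipped or inlined map does.
function shouldBlockPublish(findings) {
  return findings.some((f) => f.severity !== 'low');
}

// Constructed sample file set (hypothetical package, not the real one).
const sample = [
  { path: 'package.json', content: '{"name":"example-cli","version":"0.0.0"}' },
  { path: 'cli.js', content: 'console.log("hi");\n//# sourceMappingURL=cli.js.map\n' },
  { path: 'cli.js.map', content: JSON.stringify({ version: 3, file: 'cli.js',
      sources: ['../src/main.ts', '../src/agent.ts'],
      sourcesContent: ['export const a = 1;', 'export const b = 2;'], mappings: 'AAAA' }) },
  { path: 'vendor.js.map', content: JSON.stringify({ version: 3, sources: ['x.ts'], mappings: '' }) },
];

console.log(auditPackageFiles(sample), shouldBlockPublish(auditPackageFiles(sample)));
```

In a pipeline, the input would be the files reported by `npm pack --dry-run`, and a blocking result would fail the publish job.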
Key Details
- Package affected: @anthropic-ai/claude-code v2.1.88 on the npm registry
- Scale of exposure: ~1,900 TypeScript files, 512,000+ lines of code, ~40 built-in tools, ~50 slash commands
- Architecture revealed: Built on the Bun runtime (not Node.js), uses React with Ink for terminal UI, features multi-agent "swarm" parallelism, and includes an IDE bridge with JWT authentication for VS Code and JetBrains
- Capybara models: Internal references confirm "capybara," "capybara-fast," and a variant marked "[1m]" — believed to be Claude 4.6 variants. A related codename "Fennec" maps to Opus 4.6. Internal comments note that Capybara v8 shows a 29–30% false claims rate — a regression from the 16.7% seen in v4
- BUDDY feature: A /buddy command for a Tamagotchi-style AI companion with 18 species (duck, dragon, axolotl, capybara, mushroom, ghost, and more), rarity tiers from common to 1% legendary, and five stats: DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK
- Undercover mode: Code reveals a feature that instructs Claude to avoid identifying itself as an AI. One comment reads: "NEVER include... that you are an AI." The feature's internal codename contains a "Claude Capybara" easter egg
- Telemetry: A regex-based sentiment detector flags user frustration signals, including profanity, and logs patterns like repeated "continue" prompts
What Developers and Users Are Saying
The reaction across Hacker News (thread: ycombinator.com/item?id=47584540) and Reddit has been a mix of amusement and genuine concern. The BUDDY Tamagotchi feature prompted immediate jokes — one developer wrote "this is the most important AI research of 2026" — while others raised substantive concerns about the undercover mode's potential to deceive end users. The telemetry revelation sparked debate: critics questioned why an AI company would use regex pattern-matching to detect profanity rather than an LLM-based approach, with defenders arguing the choice was pragmatic for cost at scale. Several developers described the exposed codebase as "production-grade" and well-architected, noting that the query engine module alone runs to 46,000 lines. The repository mirroring the leak crossed 1,100 stars and 1,900 forks within hours of the story breaking.
What This Means for Developers
For developers actively using Claude Code, the incident raises two immediate questions. First, does the leaked code represent a security risk? Security researchers note that client-side CLI source code does not expose model weights, training data, or server-side infrastructure — the leak is embarrassing but unlikely to enable attacks on Anthropic's core AI systems. Second, the undercover mode revelation is more consequential: teams using Claude Code in contexts where AI transparency is required (regulated industries, academic settings, contracts with AI disclosure clauses) should review whether this feature is active by default and how to disable it. No action is required to update or patch Claude Code itself — this was a build artifact leak, not a vulnerability in the running software.
What's Next
Anthropic had not issued a public statement at the time of publication. The affected version (2.1.88) remains on npm, though the source map files themselves may be removed in a patch release. Developers should watch github.com/anthropics/claude-code and the official npm registry for an updated release. The Capybara model family, now confirmed to exist, has no official announced release date — but the internal comments suggest the team is iterating actively on accuracy improvements.
Sources
- Tech Startups — First detailed breakdown of the leak contents
- VentureBeat — Coverage of the leak discovery and developer impact
- Hacker News discussion — Developer reactions and technical analysis
- DEV Community — Deep-dive on architecture findings from the leaked source
- GitHub mirror (Kuberwastaken) — Archived copy of the leaked source files
- Cyber Kendra — Security perspective on the exposure