On March 31st, a security researcher named Chaofan Shou woke up early, found something he wasn't supposed to find, and posted a link on X at 4:23am UTC.

The link pointed to a zip file sitting on Anthropic's own cloud storage: 512,000 lines of source code for Claude Code, their fastest-growing commercial product. It wasn't stolen. It was shipped there by accident, through one misconfigured line in a build file that nobody caught before it went live.

Within hours, it had been mirrored across GitHub thousands of times. Anthropic filed copyright takedown notices and had more than 8,000 copies removed. New mirrors kept appearing anyway.

What Changed — And How

Claude Code's build process uses a tool called Bun. Bun generates source map files by default — debugging files that let you reconstruct the original readable source code from compiled output. Someone forgot to exclude that file from the publish configuration. Version 2.1.88 shipped to the public npm registry with a 59.8 MB source map attached.

That's the whole story, operationally. One missing line in .npmignore. Anthropic confirmed it was human error — no customer data exposed, no model weights, not a breach in the traditional sense.
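A mistake like this is also easy to catch before it ships. As a minimal sketch (not Anthropic's actual tooling, and the `dist/` path is an assumption for illustration), a pre-publish guard can simply refuse to release if any source map files are still sitting in the directory about to be published:

```python
import sys
from pathlib import Path


def find_sourcemaps(publish_dir: str) -> list[str]:
    """Return every .map file under publish_dir that would ship with the package."""
    return sorted(str(p) for p in Path(publish_dir).rglob("*.map"))


if __name__ == "__main__":
    # Hypothetical CI step: fail the release if source maps would be published.
    leaks = find_sourcemaps(sys.argv[1] if len(sys.argv) > 1 else "dist")
    if leaks:
        print("Refusing to publish; source maps present:")
        for path in leaks:
            print(f"  {path}")
        sys.exit(1)
```

Running `npm pack --dry-run` serves a similar purpose on the npm side: it lists exactly which files a publish would include, which makes a stray 59.8 MB `.map` file hard to miss.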

But the code was out. And what developers found inside it is the more interesting story.

What the Code Actually Revealed

The competitive blueprint. Claude Code isn't a thin wrapper around a language model. The leaked source exposed a 46,000-line query engine, a plugin architecture with 40+ tools, multi-agent orchestration logic, and 2,500 lines of bash security validation running 23 sequential checks on every shell command. Every competitor building in this space now has a detailed reference implementation. The R&D Anthropic spent years building just became freely available to anyone who grabbed a mirror before the takedowns.

Undercover Mode. Inside a file called undercover.ts, developers found a system prompt instructing Claude Code to strip all AI attribution from commits when contributing to public or open-source repositories. No "Co-Authored-By: Claude." No internal model names. The exact wording: "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Do not blow your cover." Anthropic's likely explanation is that this is designed for internal developers — some open-source projects ban AI-generated contributions, and this manages that quietly. The feature exists either way. The blueprint is now public.

44 unreleased features. The source contained compile-time feature flags for capabilities fully built but not yet shipped to any user. The most significant: KAIROS, referenced over 150 times in the source, is an always-on background daemon mode — Claude Code that keeps running and monitoring your project after you close the terminal. Its companion, autoDream, performs what the code calls "memory consolidation" during idle time, merging observations and converting tentative notes into confirmed facts. The code comments describe this as the model "dreaming." That's the actual internal term. Other unreleased features include ULTRAPLAN (30-minute cloud planning sessions accessible from your phone), COORDINATOR MODE (spawning parallel worker agents), and BUDDY — a Tamagotchi-style terminal pet with 18 species and a stat called SNARK.

The model roadmap. The Undercover Mode code contained a list of internal names it was explicitly told never to reveal in public commits. That list confirmed: Capybara is a Claude 4.6 variant (the same model leaked days earlier in a draft blog post as "Mythos"), Fennec maps to Opus 4.6, and Numbat is an unreleased model still in testing. References to Opus 4.7 and Sonnet 4.8 also appeared — strings the system was told to hide from public view.

An internal benchmark worth noting. Analyses of the leaked source found internal benchmark data showing the latest Capybara build with a false claims rate of approximately 29–30%, up from 16.7% in an earlier version. This is a development regression, not a final ship number — model benchmarks fluctuate. But it's the kind of data that doesn't usually leave the lab.

The Part Nobody Is Covering Correctly

Even if Anthropic's legal team pursues every mirror, their copyright position on this code has a significant complication. Gartner flagged it in a same-day advisory: Claude Code is 90% AI-generated, per Anthropic's own disclosures. Under current U.S. copyright law, protection requires human authorship. The Supreme Court declined to revisit that standard in March 2026.

What that means practically: the legal teeth behind Anthropic's takedown campaign are weaker than they'd be for traditionally authored software. Clean-room rewrite projects — where developers build new implementations inspired by what they read — almost certainly fall outside DMCA reach entirely.

One More Thing — A Separate Security Issue

This is unrelated to the leak itself but happened in the same window: a supply chain attack hit the axios npm package on March 31 between 00:21 and 03:29 UTC, injecting a Remote Access Trojan into versions 1.14.1 and 0.30.4. Claude Code depends on axios.

If you updated Claude Code via npm during that window, check your lockfile for those specific axios versions. If they're present, treat the machine as compromised and rotate all credentials. Anthropic's recommended path going forward is their native installer, not npm.
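If you'd rather not eyeball the lockfile by hand, a few lines of scripting can do the check. This is a hedged sketch, not an official tool: it reads an npm-format package-lock.json (lockfileVersion 2 or 3, which keys entries under "packages") and reports whether either compromised axios version is pinned anywhere in the dependency tree:

```python
import json

# Versions named in the reported axios supply chain attack.
COMPROMISED_AXIOS = {"1.14.1", "0.30.4"}


def pinned_axios_versions(lockfile_text: str) -> set[str]:
    """Collect every axios version pinned in a lockfileVersion 2/3 package-lock.json."""
    lock = json.loads(lockfile_text)
    return {
        meta.get("version", "")
        for path, meta in lock.get("packages", {}).items()
        if path.endswith("node_modules/axios")
    }


def lockfile_is_compromised(lockfile_text: str) -> bool:
    return bool(pinned_axios_versions(lockfile_text) & COMPROMISED_AXIOS)
```

A lockfile pinning axios 1.14.1 anywhere in the tree, including nested under another dependency, would be flagged. Running `npm ls axios` gives you the same picture interactively.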

The Bigger Picture

This is Anthropic's second major exposure in a single week. Days before, Fortune reported that nearly 3,000 internal files — including a draft blog post about an unreleased model — had been sitting in a publicly accessible cloud cache.

Two separate incidents. One week. At the company that has built its entire brand identity around being the careful, safety-focused AI lab.

The engineering inside the leaked code is genuinely impressive — the architecture is mature, thoughtful, and clearly built by people who know what they're doing. The deployment process that let it escape is a different story. As one engineer put it in a public analysis: "A single misconfigured field in package.json can expose an entire proprietary codebase."

What the leak ultimately revealed isn't just what Anthropic built. It's what they're building toward. KAIROS and autoDream aren't prototypes; they're finished features waiting for a green light. Claude Code isn't being positioned as a coding assistant. It's being positioned as a persistent development agent: one that works in the background, consolidates memory across sessions, and eventually operates without waiting to be asked.

That's a meaningfully different product than what most users think they have installed right now.

Trish @ StackDrift

Drift Intel tracks vendor policy changes, pricing shifts, and fine print that affects how you build. If this was useful, forward it to someone who uses Claude Code.

Want to stay in the loop? Check out our YouTube channel or subscribe to Drift Intel for weekly deep dives.
