On Monday, Anthropic shipped version 2.1.88 of Claude Code to npm. Buried inside was a source map file that contained the tool's entire codebase — 512,000 lines of TypeScript across nearly 2,000 files. Within hours, the code had been forked over 80,000 times on GitHub. Anthropic confirmed it was human error, not a breach. But the damage was structural: competitors, researchers, and attackers now have a full blueprint of how the most popular AI coding agent works internally — its tool architecture, permission gates, multi-agent orchestration, and a pile of unreleased features sitting behind compile-time flags.

By itself, that's a bad week. But it wasn't by itself.

What Else Happened

Days earlier, Fortune reported that a different Anthropic leak — a draft blog post left in an unsecured data cache — revealed the existence of "Claude Mythos," a next-generation model the company described internally as "far ahead of any other AI model in cyber capabilities." Anthropic has been privately warning government officials that Mythos makes large-scale autonomous cyberattacks significantly more likely this year.

This isn't hypothetical hand-wringing. Back in September 2025, Anthropic discovered a Chinese state-sponsored group using Claude Code to autonomously infiltrate roughly 30 organizations — tech companies, financial institutions, government agencies. The AI handled 80–90% of tactical operations independently. Anthropic caught it, shut it down, and published the findings. The next generation of models will make that kind of operation easier to run and harder to detect.

Meanwhile, at RSA Conference 2026 in San Francisco, the dominant theme wasn't ransomware or phishing. It was agent security. Cisco presented survey data showing 85% of enterprise customers are experimenting with AI agents — but only 5% have moved them into production. The barrier isn't budget or ambition. It's trust. A separate Gravitee survey found that 88% of organizations have already had confirmed or suspected agent-related security incidents. Only about 22% treat their agents as independent identities with proper access controls.

The Bigger Picture

There's a term forming in the industry for this: the agent control plane. Every major security vendor at RSA launched something targeting it — Cisco extended zero trust to agents, SentinelOne announced agent governance tooling, Palo Alto dropped Prisma AIRS 3.0 for agent runtime security, and Accenture partnered with Anthropic to launch "Cyber.AI" for AI-driven SOC operations.

The pattern is clear: agents are being treated as a new class of workforce that needs identity, access controls, kill switches, and audit trails — the same infrastructure humans operate under. Okta is pushing agent revocation. Microsoft launched Agent 365 as a control plane. The phrase "shadow agents" is now circulating the way "shadow IT" did a decade ago.
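To make that "agents as a workforce" pattern concrete, here's a minimal sketch of what identity, access control, a kill switch, and an audit trail look like when bolted onto an agent. Everything here — the type names, the scope strings, the shape of the log — is illustrative, not any vendor's actual API:

```typescript
// Hypothetical sketch of the agent-identity pattern: scoped credentials,
// a revocation flag (the kill switch), and an append-only audit trail.
// None of this is a real vendor API; names and shapes are made up.

type AgentIdentity = {
  id: string;
  scopes: Set<string>; // what this agent is allowed to do
  revoked: boolean;    // the kill switch
};

type AuditEvent = { agentId: string; action: string; allowed: boolean; at: Date };

const auditLog: AuditEvent[] = [];

// Every action passes through one gate, and every decision is logged —
// including the denials, which is where incident response starts.
function authorize(agent: AgentIdentity, action: string): boolean {
  const allowed = !agent.revoked && agent.scopes.has(action);
  auditLog.push({ agentId: agent.id, action, allowed, at: new Date() });
  return allowed;
}

// Usage: a deploy agent that can read repos and push to staging, nothing else.
const deployBot: AgentIdentity = {
  id: "deploy-bot-01",
  scopes: new Set(["repo:read", "deploy:staging"]),
  revoked: false,
};

console.log(authorize(deployBot, "deploy:staging")); // true
console.log(authorize(deployBot, "secrets:rotate")); // false — out of scope

deployBot.revoked = true; // someone flips the kill switch
console.log(authorize(deployBot, "deploy:staging")); // false — revoked
```

That's the whole idea in thirty lines. What vendors are actually selling is this gate at scale: centrally issued identities, scopes enforced at the network layer, and logs someone actually reads.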

But the gap between the tooling being announced and the reality on the ground is enormous. Most agents deployed today run without security oversight or logging. Most organizations haven't even inventoried which agents are active.

What This Means for You

If you're a solo founder or indie dev running agents in your workflow — coding agents, deployment agents, monitoring agents — this week is a signal to audit what you've got running. Not because the sky is falling, but because the pattern is set: agent supply chains are vulnerable (the npm leak proved it), agent credentials are poorly managed (Moltbook proved it months ago), and the security industry is scrambling to catch up with what's already deployed.

The irony of this particular week is hard to miss. The company building what it says is the most capable AI agent in the world couldn't keep its own source code off a public registry. The gap between capability and control isn't closing — it's widening.

Trish @ StackDrift

Want vendor changes flagged before they hit the news? Subscribe to Drift Intel for weekly deep dives.
