The TL;DR - RoundUp

April 3, 2026

Welcome back. We scanned the internet's fine print so you didn't have to. Here is the drift that matters to your MRR this week.

The Big Drift: Anthropic

Change Type: Security / Source Code Exposure
Link to Source: Read the full post

The News: Anthropic accidentally shipped 512,000 lines of Claude Code source code to the public npm registry — including unreleased features, internal model names, and an "Undercover Mode" that strips AI attribution from public commits.

The StackDrift Analysis:

  • What actually changed: A missing line in .npmignore caused a source map file to ship with Claude Code version 2.1.88. The file contained the entire readable codebase — tool architecture, multi-agent orchestration, bash security gates, and 44 unreleased features behind compile-time flags. Most notable: KAIROS (a persistent background daemon mode), autoDream (memory consolidation during idle time), and internal model codenames including Opus 4.7 and Sonnet 4.8. Anthropic filed 8,000+ DMCA takedowns. The mirrors stayed up.

  • The Impact: Every competitor building AI coding agents now has a detailed reference implementation of how the market leader works internally. The copyright enforceability is also in question — Claude Code is 90% AI-generated by Anthropic's own disclosure, and U.S. copyright law still requires human authorship. If you depend on Claude Code in your workflow, the axios supply chain attack that hit during the same window (March 31, versions 1.14.1 and 0.30.4) is the more immediate concern — check your lockfile.

  • The Action: If you updated Claude Code via npm between March 31 00:21–03:29 UTC, check for compromised axios versions and rotate credentials. Going forward, use Anthropic's native installer instead of npm. If you're evaluating Claude Code for production workflows, the architectural details in the leak are actually useful context for understanding what you're building on — but the operational security pattern here is a red flag worth weighing.
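For the lockfile check, here is a minimal sketch in Python. It assumes an npm v7+ package-lock.json layout (a flat "packages" map keyed by install path); the flagged versions are the ones named above, and the sample lockfile fragment is hypothetical:

```python
import json

# Versions named in the axios supply chain report above
COMPROMISED = {"1.14.1", "0.30.4"}

def flag_compromised(lockfile_text: str) -> list[str]:
    """Return install paths whose resolved axios version is on the list."""
    lock = json.loads(lockfile_text)
    hits = []
    # npm v7+ lockfiles keep a flat "packages" map keyed by install path
    for path, meta in lock.get("packages", {}).items():
        if path.endswith("node_modules/axios") and meta.get("version") in COMPROMISED:
            hits.append(f"{path}@{meta['version']}")
    return hits

# Hypothetical lockfile fragment for demonstration
sample = json.dumps({
    "packages": {
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/safe-dep": {"version": "2.0.0"},
    }
})
print(flag_compromised(sample))  # flags the compromised install path
```

Separately, `npm pack --dry-run` previews exactly which files a publish would ship, which is the cheap guard against the kind of .npmignore miss described above.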

The Weekly Scan (Quick Hits)

  • Anthropic + Agent Security (The Bigger Picture): The Claude Code leak landed in the same week as RSA Conference 2026, where agent security was the dominant theme. 85% of enterprises are experimenting with AI agents, but only 5% have moved them to production. 88% of organizations report confirmed or suspected agent-related security incidents. Every major security vendor launched agent governance tooling. The phrase "shadow agents" is now in active circulation. | Severity:

  • Replicate (API Billing / Dispute Terms): Replicate quietly updated their API terms — failed and canceled runs are now explicitly billable for any compute time consumed, and billing disputes must be raised within 30 days. The kicker: continued use of the platform after a billing issue counts as acceptance of the charge. If you're running automated pipelines or multi-model chains, unhandled errors just became a direct cost. | Severity:
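If every failed run is billable, retry logic needs a hard ceiling. A minimal sketch of that pattern in Python; `run` here is a hypothetical stand-in for whatever call triggers billable compute, not Replicate's actual client API:

```python
import time

MAX_ATTEMPTS = 2  # every failed run still bills for the compute it consumed

def run_with_cost_cap(run, *, max_attempts=MAX_ATTEMPTS, base_delay=1.0):
    """Call a billable model-run function with a hard retry cap.

    Bounding retries means an unhandled failure loop cannot accrue
    open-ended charges on a platform that bills failed runs.
    """
    last_err = None
    for attempt in range(max_attempts):
        try:
            return run()
        except Exception as err:  # real code would catch the client's error type
            last_err = err
            time.sleep(base_delay * (2 ** attempt))  # back off before paying again
    raise RuntimeError(f"gave up after {max_attempts} billable attempts") from last_err

# Hypothetical always-failing run, to show the cap holding
attempts = []
def flaky():
    attempts.append(1)
    raise ValueError("model run failed")

try:
    run_with_cost_cap(flaky, max_attempts=2, base_delay=0)
except RuntimeError as e:
    print(e, "after", len(attempts), "calls")
```

The same ceiling doubles as a dispute aid: you know exactly how many billable attempts any pipeline stage could have made inside the 30-day window.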

  • Microsoft MAI Launch (Three Models, One Day): Microsoft shipped MAI-Transcribe-1, MAI-Voice-1, and MAI-Image-2 simultaneously through Foundry — all built by Mustafa Suleyman's MAI Superintelligence team. Transcribe-1 claims to beat Whisper and Gemini Flash on speech-to-text at $0.36/hr. Voice-1 generates 60 seconds of audio in one second. Image-2 sits at #3 on the Arena.ai leaderboard. This is Microsoft building its own model stack after OpenAI signed a $50B deal with Amazon and ended Azure exclusivity. | Severity:

From the Lab: The SaaS Safety Audit

If you're new here, don't let your stack drift away from you. Run our 5-Minute SaaS Safety Audit to find hidden leaks in your billing and data privacy.

Trish @ StackDrift

Built by humans at Drift Intel. Yes, actual humans who read legal docs, so you don't have to. If you found this helpful, forward it to a founder friend. If you didn't, maybe next time. Have a great weekend!

Want to stay in the loop? Check out our YouTube channel or subscribe to Drift Intel for weekly deep dives.
