Claude Code Source Code Leaked: 5 Engineering Decisions That Reveal Where All AI Tools Are Heading
On March 31, 2026, a single misconfigured .map file in Anthropic’s npm package exposed nearly 2,000 TypeScript files containing over 512,000 lines of code — complete internal prompts, tool definitions, 44 hidden feature flags, and roughly 50 unreleased commands. The repository hit 9,000 GitHub stars in under two hours. Anthropic issued DMCA takedowns affecting 8,100+ forks within days.
Most coverage has focused on the headline: what was leaked and which fun features were found. But the more valuable story is what 512,000 lines of production Anthropic code reveals about how the company thinks, what it prioritizes, and where every AI tool on the market is headed next. This is that analysis.
How the Leak Happened — and What Was Actually Exposed
The culprit was a source map file — a debugging artifact — accidentally included in the Claude Code npm package (version 2.1.88). Source maps contain a sourcesContent array that embeds the complete original source code as strings. In other words: the full internal TypeScript codebase, including comments, was sitting in the npm registry for anyone to extract.
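To make the mechanism concrete, here is a sketch of how anyone could recover the embedded sources. This is illustrative code, not Anthropic's: the Source Map v3 format carries an optional `sourcesContent` array holding the complete original files as strings, so a holder of the `.map` file can simply write the original tree back to disk.

```typescript
import * as fs from "fs";
import * as path from "path";

// Minimal slice of the Source Map v3 format we need here.
interface SourceMapV3 {
  sources: string[];
  sourcesContent?: (string | null)[];
}

// Write every embedded source file back out; returns how many were recovered.
function extractSources(mapPath: string, outDir: string): number {
  const map: SourceMapV3 = JSON.parse(fs.readFileSync(mapPath, "utf8"));
  let written = 0;
  map.sources.forEach((src, i) => {
    const content = map.sourcesContent?.[i];
    if (content == null) return; // no embedded source for this entry
    // Strip leading "../" segments so writes stay inside outDir.
    const safe = src.replace(/^(\.\.\/)+/, "");
    const outPath = path.join(outDir, safe);
    fs.mkdirSync(path.dirname(outPath), { recursive: true });
    fs.writeFileSync(outPath, content);
    written++;
  });
  return written;
}
```

A few dozen lines are enough to turn one stray `.map` file into a browsable source tree, which is why the repository could appear on GitHub within hours.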
The fix is trivial — exclude *.map from production builds or add them to .npmignore. The fact that Anthropic missed it twice (a prior version leaked in February 2025) suggests the operational complexity of shipping a tool at this scale has outpaced their DevOps discipline.
The scale of what was exposed: 1,900 files, 512,000+ lines of code, 40+ built-in tools, 50 slash commands, 44 feature flags hiding unreleased capabilities. Developer comments in the code also exposed operational data: 1,279 sessions experienced 50+ consecutive failures daily, burning approximately 250,000 wasted API calls per day globally. That’s a stark reminder that building reliable agentic systems at scale is genuinely hard.
What the Architecture Actually Reveals About Anthropic’s Engineering Philosophy
The most important technical takeaway from the leak is simple: the moat in AI coding tools is not the model. It is the harness. Anthropic’s team isn’t winning because their model is untouchable — they’re winning because of the system they’ve built around it. Here’s what that system looks like.
The Runtime Choice: Bun Over Node.js
Claude Code runs on Bun, not Node.js. This is a performance-first decision — Bun starts faster and executes JavaScript/TypeScript more efficiently, which matters when your tool is spawning agents, managing file watchers, and orchestrating long-running processes in a terminal. It’s not a dramatic architectural statement, but it’s a signal: Anthropic cared enough about Claude Code’s responsiveness to switch runtimes.
The Terminal UI: React with Ink
The CLI interface is built with React and Ink — React rendering in a terminal environment. This means Anthropic’s frontend engineers could use the same component mental model they already knew rather than learning a new terminal UI framework. It’s a pragmatic choice that enabled faster iteration.
The Tool System: 29,000 Lines Just to Define the Base
Claude Code’s tool registry contains 40+ tools, filtered dynamically by feature gates, user type, and environment flags. Tool schemas are cached for prompt efficiency. The base tool definition alone accounts for 29,000 lines of code — more than many complete npm packages. This reflects a reality the industry doesn’t talk about enough: building reliable tool-calling agents means building a permission layer, a schema validator, a feature gate system, and a tool discovery mechanism. The AI model is the easy part.
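The pattern above can be sketched in miniature. The interfaces and names below are my own, not the leaked ones: each tool declares the feature gates it requires, the registry exposes only tools whose gates are enabled, and serialized schemas are cached per gate set so the prompt doesn't pay the serialization cost on every turn.

```typescript
interface Tool {
  name: string;
  description: string;
  requiredGates?: string[];             // feature flags that must be on
  inputSchema: Record<string, unknown>; // JSON Schema for arguments
}

class ToolRegistry {
  private tools = new Map<string, Tool>();
  private schemaCache = new Map<string, string>(); // keyed by gate set

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
    this.schemaCache.clear(); // invalidate cached schemas on change
  }

  // Only tools whose required gates are all enabled reach the model.
  available(enabledGates: Set<string>): Tool[] {
    return [...this.tools.values()].filter(t =>
      (t.requiredGates ?? []).every(g => enabledGates.has(g))
    );
  }

  // Cache the serialized schema list for each distinct gate combination.
  serializedSchemas(enabledGates: Set<string>): string {
    const key = [...enabledGates].sort().join(",");
    let cached = this.schemaCache.get(key);
    if (cached === undefined) {
      cached = JSON.stringify(
        this.available(enabledGates).map(t => ({
          name: t.name,
          description: t.description,
          input_schema: t.inputSchema,
        }))
      );
      this.schemaCache.set(key, cached);
    }
    return cached;
  }
}
```

The real system layers permissions, validation, and discovery on top of this; even the toy version shows why the surrounding machinery, not the model call, is where the lines of code accumulate.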
Multi-Agent Architecture: Coordinator Mode
Claude Code can spawn parallel worker agents managed by a coordinator. The workflow runs through distinct phases: Research (workers gather information), Synthesis (coordinator consolidates findings), Implementation (workers execute), and Verification (workers confirm correctness). Agents communicate via XML-formatted <task-notification> messages and share a scratchpad directory for cross-agent knowledge. This pattern — coordinator, workers, shared scratchpad, XML messages — is exactly what developers building multi-agent systems today are trying to implement. The source code gives a production-grade reference implementation.
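A worker-to-coordinator message in this scheme might look like the following. The `<task-notification>` tag comes from the leak; the fields and function names are my guesses, not the leaked definitions. XML envelopes give the coordinator, itself an LLM, reliably parseable structure inside worker output.

```typescript
type Phase = "research" | "synthesis" | "implementation" | "verification";

interface TaskNotification {
  from: string;        // worker agent id
  phase: Phase;
  summary: string;     // what the worker found or did
  artifacts: string[]; // files written into the shared scratchpad
}

function escapeXml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Render a worker's report as the XML envelope the coordinator parses.
function formatTaskNotification(n: TaskNotification): string {
  const artifacts = n.artifacts
    .map(a => `  <artifact>${escapeXml(a)}</artifact>`)
    .join("\n");
  return [
    `<task-notification from="${escapeXml(n.from)}" phase="${n.phase}">`,
    `  <summary>${escapeXml(n.summary)}</summary>`,
    artifacts,
    `</task-notification>`,
  ].join("\n");
}
```

The shared scratchpad directory plays the complementary role: large findings go to disk, and only a short summary plus artifact paths travel through the message itself, keeping each agent's context window lean.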
The YOLO Permission System: ML-Based Auto-Approval
Claude Code’s permission classifier has four modes: default (interactive approval), auto (ML-based risk assessment), bypass (assume approved), and yolo (fully autonomous). The most interesting is YOLO — it uses an ML classifier trained on transcript patterns to automatically approve operations it classifies as low-risk. This is a production example of using a small, fast model to gate a larger, more expensive one — a cost optimization pattern that many teams haven’t formalized yet.
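The four modes reduce to a small decision function. In the sketch below, `riskScore` is a crude stand-in for the transcript-trained ML classifier, and everything here is illustrative rather than Anthropic's code; the point is the shape of the gate, not the scoring.

```typescript
type PermissionMode = "default" | "auto" | "bypass" | "yolo";
type Decision = "approve" | "ask-user" | "deny";

interface Operation {
  tool: string;
  command: string;
}

// Stand-in for the ML classifier: returns an estimated risk in [0, 1].
function riskScore(op: Operation): number {
  const risky = [/rm\s+-rf/, /\bsudo\b/, /curl .*\|\s*sh/];
  return risky.some(r => r.test(op.command)) ? 0.9 : 0.1;
}

function decide(mode: PermissionMode, op: Operation): Decision {
  switch (mode) {
    case "bypass":
      return "approve"; // assume approved
    case "default":
      return "ask-user"; // interactive approval for everything
    case "auto":
      // ML-assisted: low-risk ops auto-approve, the rest go to a human.
      return riskScore(op) < 0.5 ? "approve" : "ask-user";
    case "yolo":
      // Fully autonomous: the classifier is the only gate.
      return riskScore(op) < 0.5 ? "approve" : "deny";
  }
}
```

Note the asymmetry between `auto` and `yolo`: both lean on the cheap classifier, but only `yolo` removes the human fallback entirely, which is what makes it the interesting (and riskiest) mode.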
Unreleased Features: A Roadmap Revealed
Behind 44 feature flags, the codebase revealed several capabilities Anthropic hadn’t shipped. Three stand out as particularly significant for where AI tools are going.
BUDDY: A Gamified Agent Companion
BUDDY is a Tamagotchi-style virtual pet system built into Claude Code. It features 18 species across five rarity tiers (Common 60%, Uncommon 25%, Rare 10%, Epic 4%, Legendary 1%), with an independent 1% shiny chance per species. Stats are procedurally generated: Debugging Skill, Patience, Chaos, Wisdom, and Snark. ASCII sprites with animation frames render the companions in the terminal. The system uses the Mulberry32 deterministic PRNG to generate consistent pet stats.
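Mulberry32 is a well-known public-domain 32-bit PRNG, and determinism is exactly why it fits here: the same seed always yields the same pet. The rarity roll below uses the leaked tier probabilities; the function names and the roll logic are my own reconstruction.

```typescript
// Standard Mulberry32: seeded, deterministic, fast enough for anything.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Tier probabilities as reported from the leak.
const RARITY_TIERS: Array<[string, number]> = [
  ["Common", 0.6],
  ["Uncommon", 0.25],
  ["Rare", 0.1],
  ["Epic", 0.04],
  ["Legendary", 0.01],
];

// Walk the cumulative distribution to pick a tier.
function rollRarity(rand: () => number): string {
  let r = rand();
  for (const [tier, p] of RARITY_TIERS) {
    if (r < p) return tier;
    r -= p;
  }
  return "Legendary"; // floating-point guard
}
```

Seeding from something stable (say, a hash of the session or user id) is what makes stats "procedurally generated" yet consistent across runs: no pet state needs to be stored to redraw the same companion.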
Beneath the novelty, BUDDY is a testbed: it exercises Claude Code’s session persistence, companion personality modeling, and ASCII rendering — all capabilities Anthropic is building for more serious use cases. It’s not a joke feature. It’s a vehicle for building the infrastructure that will power future agent memory and personality systems.
KAIROS: The Always-On Background Agent
KAIROS is an always-on autonomous background assistant. It maintains append-only daily log files, watches for relevant events, and acts proactively — with an exclusive 15-second blocking budget to avoid disrupting active workflows. Its exclusive tools include SendUserFile, PushNotification, and SubscribePR.
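The 15-second figure is from the leak; the helper below is my own sketch of how such a budget might be enforced. A background task gets `budgetMs` of foreground time, after which control returns through a fallback, such as deferring the notification until the user is idle.

```typescript
// Race a background task against its blocking budget.
async function withBlockingBudget<T>(
  budgetMs: number,
  task: () => Promise<T>,
  onBudgetExceeded: () => T
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<T>(resolve => {
    timer = setTimeout(() => resolve(onBudgetExceeded()), budgetMs);
  });
  try {
    // Whichever settles first wins; the task may keep running in the
    // background after the deadline fires.
    return await Promise.race([task(), deadline]);
  } finally {
    clearTimeout(timer);
  }
}
```

The design point is that proactivity is bounded: an always-on agent that can block your terminal indefinitely is a bug, not a feature, so every foreground interruption carries an explicit time ceiling.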
KAIROS is the most significant architectural reveal in the entire leak. It represents the clearest signal available about where AI assistants are heading: from reactive tools that wait for commands to persistent background companions that monitor and act on your behalf. This is not just a Claude Code feature. It is a preview of the next generation of AI assistants.
ULTRAPLAN: Remote Planning with Extended Thinking
ULTRAPLAN offloads complex planning tasks to a remote Cloud Container Runtime using Anthropic’s Opus 4.6 model with 30-minute think time — far beyond what any interactive session allows. A browser-based approval UI lets users review and approve the generated plan. Results are transferred via a special __ULTRAPLAN_TELEPORT_LOCAL__ sentinel.
ULTRAPLAN demonstrates Anthropic’s architectural thinking: separate the computationally expensive planning phase from the interactive session, run it asynchronously with maximum model time, then surface results for human review. It’s a pattern that will become standard in enterprise AI tools handling complex workflows.
The Anti-Distillation System: A Fascinating Contradiction
The source code revealed an “anti-distillation” system — a mechanism designed to inject fake tool definitions into Claude Code’s outputs to poison AI training data scraped from API traffic. The comment in the code even admits this measure is now “useless” since the leak exposed its existence.
This is the most intellectually interesting artifact in the entire codebase, and almost no coverage has explained why. Here’s the contradiction: the system was designed to make Claude Code’s API behavior unusable as training data. But the leak made the system’s existence and design public — eliminating any value it had as a deterrent. Anthropic spent engineering resources on a mechanism whose entire power depended on staying secret. The moment the source code was public, the mechanism was defeated.
This reveals something important about the economics of AI tooling: many security and anti-abuse mechanisms in AI products depend entirely on obscurity, not technical robustness. Once the code is visible, the tricks become useless. The same applies to hidden feature flags, internal codenames, and internal roadmap references — the entire security model was “if nobody sees the code, nobody can replicate it.” That assumption is now broken.
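To make the poisoning idea concrete, here is a toy illustration — invented for this article, not the leaked mechanism: decoy tool definitions are shuffled in among real ones, so transcripts scraped for distillation teach a student model tools that don't exist.

```typescript
interface ToolDef {
  name: string;
  description: string;
}

// Mix decoy tool definitions into the real list at pseudo-random
// positions; `rand` is injectable so placement is deterministic in tests.
function injectDecoys(
  real: ToolDef[],
  decoys: ToolDef[],
  rand: () => number
): ToolDef[] {
  const out = [...real]; // leave the caller's array untouched
  for (const d of decoys) {
    const i = Math.floor(rand() * (out.length + 1));
    out.splice(i, 0, d);
  }
  return out;
}
```

The scheme only works while a scraper cannot tell decoys from real tools, which is precisely why publishing the source neutralized it — the decoy list itself became public.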
Undercover Mode: What Anthropic Didn’t Want You to Know
The “Undercover Mode” feature instructs Claude Code to never reveal its own identity in commits — no “Claude Code” mentions, no Co-Authored-By attribution lines, no references to AI involvement. The goal: write commit messages and code as a human developer would. The internal codename “Tengu” was also confirmed as Claude Code’s project codename.
Undercover Mode’s existence is itself the story. It tells you that Anthropic’s own team recognized that the tool’s AI fingerprints are visible to experienced developers — and that hiding those fingerprints matters enough to build a dedicated feature. It also reveals the ongoing tension in AI tooling: the more powerful and agentic the tool, the more its outputs begin to look non-human. And that gap needs to be actively managed.
What This Leak Means for the AI Ecosystem — and for You
Here is the comparison almost nobody is drawing: while Anthropic scrambled to issue DMCA takedowns affecting 8,100+ repository forks, OpenAI had already open-sourced its entire Codex CLI under Apache 2.0 in April 2025. Today, Codex has over 60,000 GitHub stars and 363 contributors. The open-vs-closed debate in AI models has now fully transferred to AI tools — and this leak is the sharpest illustration of the trade-offs yet.
Here is what is true regardless of your position on open source: the competitive moat in AI coding tools is not the model. It is the orchestration harness, the permission system, the context management layer, the IDE integration, and the developer experience built around the model. These patterns are now visible in production form in Claude Code. The teams that can execute on these architectural patterns most effectively — whether open or closed — will win.
For businesses and developers working with AI tools today, the implications are concrete: AI coding assistants are becoming comprehensive development environments. The context window is expanding toward 1 million tokens (confirmed in source code as context-1m-2025-08-07). Multi-agent architectures are in production. Always-on background companions like KAIROS are coming. The era of reactive AI tools is ending.
The npm source map exposed 512,000 lines of Anthropic’s thinking. The takeaway isn’t that Claude Code’s secrets are now worthless. It’s that the next generation of AI tools is already built — just not shipped yet. And now you know exactly what it looks like.
Frequently Asked Questions
What exactly happened with the Claude Code source code leak?
A source map file (.map) was accidentally included in the Claude Code npm package. Source maps contain a sourcesContent array that embeds the complete original TypeScript source code. This exposed 1,900 files and over 512,000 lines of code to the public. It was the second such incident, following a similar leak in February 2025.
Does the leak pose any security risk to Claude Code users?
No direct security risk to end users. The source code does not expose API keys, user data, or infrastructure credentials. However, the leak reveals internal architecture decisions, unreleased features, and operational data that competitors can learn from. It also raises legal questions about the scope of Anthropic’s DMCA takedowns against 8,100+ repository forks.
What are the most significant unreleased features found in the code?
Three features stand out: KAIROS, an always-on background agent that proactively watches and acts on the user’s behalf; ULTRAPLAN, a system that offloads complex planning to a remote container running Anthropic’s Opus 4.6 with 30-minute extended thinking; and BUDDY, a Tamagotchi-style companion pet system with procedurally generated stats and ASCII rendering. All three represent architectural patterns that will define the next generation of AI assistants.
What does the anti-distillation system in Claude Code mean?
Anthropic built a system to inject fake tool definitions into Claude Code’s outputs to poison AI training data scraped from API traffic. The source code itself reveals this system is now “useless” — because the leak exposed its existence and design. This demonstrates that many anti-abuse mechanisms in AI products depend on secrecy rather than technical robustness. Once the code is public, the tricks no longer work.
How does Claude Code compare to OpenAI’s Codex?
OpenAI open-sourced Codex CLI under Apache 2.0 in April 2025, gaining 60,000+ GitHub stars and 363 contributors. Anthropic’s Claude Code remained closed, and this leak represents an accidental exposure rather than an intentional open strategy. The architectural patterns in Claude Code — multi-agent orchestration, always-on background agents, ML-based permission systems — represent a different design philosophy that prioritizes depth and capability over transparency.
What was the Claude Code internal codename?
The internal project codename for Claude Code is “Tengu,” confirmed in the Undercover Mode feature code. “Tengu” is a reference to a mythical figure in Japanese folklore — fitting Anthropic’s pattern of using mythology-adjacent names for AI concepts.