On March 31, 2026, Anthropic's popular AI coding assistant — Claude Code — had its entire source code accidentally exposed to the world. Not through a sophisticated hack, not through a rogue insider, but through a simple human error in npm packaging. The fallout has been massive, and the implications are still unfolding.
Let's break down exactly what happened, what was found inside the code, and why this matters for every developer using AI coding tools.
The Leak: How It Happened
Anthropic released version 2.1.88 of the Claude Code npm package — a routine update, by all appearances. But this time, the release included something it shouldn't have: a source map file. Source maps are typically used in development to debug minified or compiled code, mapping it back to the original source. They are never supposed to ship in production packages.
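This kind of mistake is straightforward to guard against. As a rough illustration (this is a generic pre-publish check, not Anthropic's actual tooling), a release pipeline can scan the package contents and refuse to publish anything that includes a map file or a bundle that references one:

```python
from pathlib import Path

def find_source_maps(package_dir: str) -> list[str]:
    """Return paths of source map artifacts that would ship with the package.

    Catches standalone .map files and JS bundles whose tail contains a
    sourceMappingURL comment pointing at a map.
    """
    leaks = []
    for path in Path(package_dir).rglob("*"):
        if path.suffix == ".map":
            leaks.append(str(path))
        elif path.suffix == ".js":
            # sourceMappingURL comments conventionally sit at the end of the bundle
            tail = path.read_text(errors="ignore")[-2048:]
            if "sourceMappingURL=" in tail:
                leaks.append(f"{path} (references a source map)")
    return leaks
```

Wiring a check like this into CI, so a nonempty result fails the publish step, turns an easy-to-miss packaging slip into a hard gate.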
Within hours, sharp-eyed developers noticed. Security researcher Chaofan Shou was the first to publicly flag it on X (formerly Twitter), posting that the Claude Code source code had been exposed through a map file in the npm registry. The post went viral, racking up over 28.8 million views.
The leaked codebase was enormous — nearly 2,000 TypeScript files and more than 512,000 lines of code. Version 2.1.88 has since been pulled from npm, but the damage was already done. The code was quickly mirrored to a public GitHub repository (instructkr/claw-code), which within just 2 hours had amassed 50,000+ stars, eventually surpassing 84,000 stars and 82,000 forks — making it one of the fastest-growing repositories in GitHub history.
Anthropic confirmed the incident, stating it was a packaging issue caused by human error, not a security breach. They emphasized that no sensitive customer data or credentials were involved or exposed, and that measures were being rolled out to prevent a recurrence.
What's Inside: The Architecture of Claude Code
Once the source code was out in the open, developers and researchers wasted no time digging through it. What they found revealed a remarkably sophisticated AI agent system. Here are the most notable discoveries:
1. Self-Healing Memory Architecture
One of the most discussed findings was Claude Code's approach to overcoming the fixed context window limitation that plagues all large language models. The system implements a self-healing memory architecture — a mechanism that allows the tool to intelligently manage, compress, and reconstruct context across long coding sessions. Rather than simply truncating old context, the system uses a four-stage context management pipeline that compacts and preserves the most relevant information.
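The general shape of such a pipeline can be sketched without reproducing any leaked code. In the toy version below, the four stage names (partition, score, summarize, reassemble) are assumptions of mine, since the reporting describes the pipeline only at a high level; it shows how pinned context plus a synthetic summary can keep a long session under a fixed budget:

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    text: str
    pinned: bool = False  # e.g. system prompt or open-file state that must survive

def compact_context(history: list[Message], budget: int) -> list[Message]:
    """Illustrative four-stage compaction (stage names are guesses):
    1. Partition: set aside pinned messages that must always survive.
    2. Score: rank the rest by recency (a real system would also weigh relevance).
    3. Summarize: fold messages that no longer fit into one synthetic summary turn.
    4. Reassemble: pinned + summary + the most recent messages, within budget.
    """
    pinned = [m for m in history if m.pinned]
    rest = [m for m in history if not m.pinned]
    used = sum(len(m.text) for m in pinned)
    kept: list[Message] = []
    for m in reversed(rest):  # walk newest-first
        if used + len(m.text) <= budget:
            kept.insert(0, m)
            used += len(m.text)
        else:
            dropped = len(rest) - len(kept)
            summary = Message("system", f"[summary of {dropped} earlier messages]")
            return pinned + [summary] + kept
    return pinned + kept
```

The key property is that nothing is silently truncated: older turns degrade into a summary rather than vanishing, which is what lets context "heal" across long sessions.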
2. Multi-Agent Orchestration
Claude Code doesn't just run as a single agent. The source revealed a sophisticated multi-agent orchestration system capable of spawning "sub-agents" or swarms to carry out complex, multi-step tasks. This is what allows Claude Code to handle large refactors, multi-file edits, and complex debugging sessions that would overwhelm a single-pass approach.
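The fan-out/merge pattern behind sub-agent orchestration can be sketched in a few lines. This is a hypothetical model, not the leaked implementation: a lead agent delegates independent subtasks to workers in parallel, then synthesizes the results:

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(task: str) -> str:
    """Stand-in for a spawned sub-agent; a real one would run its own LLM loop
    with its own context window and tool access."""
    return f"done: {task}"

def orchestrate(goal: str, subtasks: list[str]) -> str:
    """Lead agent: fan independent subtasks out to sub-agents, then merge.

    Parallel delegation is what keeps large refactors from blowing a single
    agent's context; each worker sees only its slice of the problem.
    """
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(sub_agent, subtasks))  # map preserves task order
    return f"{goal}: " + "; ".join(results)
```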
3. Tool System & Query Engine
At the core of the architecture is a tools system that facilitates various capabilities — from reading files and executing bash commands to interacting with APIs. Alongside it, a query engine handles LLM API calls and orchestration, routing requests to the right model with the right context.
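Conceptually, a tool system is a registry mapping names to handlers, with the query engine routing model-issued calls to the right one. A toy version, with names invented for illustration (none of these identifiers come from the leaked code):

```python
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a capability under a name the model can call."""
    def wrap(fn):
        TOOLS[name] = fn
        return fn
    return wrap

@tool("read_file")
def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

@tool("echo")
def echo(text: str) -> str:
    return text

def dispatch(call: dict) -> str:
    """Route a model-issued call like {'name': 'echo', 'args': {...}}
    to its registered handler."""
    return TOOLS[call["name"]](**call["args"])
```

The registry pattern is what makes the capability set extensible: adding a tool is one decorated function, and the dispatch path never changes.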
4. Bidirectional Communication Layer
The code includes a communication layer that connects IDE extensions (like VS Code) to the Claude Code CLI. This bidirectional bridge is what makes the seamless editor integration possible — allowing Claude Code to read your editor state, suggest changes in context, and apply them directly.
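The shape of such a bridge can be modeled with two message channels standing in for the editor-to-CLI and CLI-to-editor directions. A real integration would run over a local socket or stdio with a JSON protocol; this sketch captures only the two-way flow:

```python
import json
import queue

class Bridge:
    """Toy bidirectional channel between an editor extension and a CLI.

    Each side serializes messages as JSON and reads from its own inbound queue,
    so either side can initiate: the editor pushes state, the CLI pushes edits.
    """
    def __init__(self):
        self.to_cli = queue.Queue()
        self.to_editor = queue.Queue()

    def editor_send(self, msg: dict):
        self.to_cli.put(json.dumps(msg))

    def cli_recv(self) -> dict:
        return json.loads(self.to_cli.get())

    def cli_send(self, msg: dict):
        self.to_editor.put(json.dumps(msg))

    def editor_recv(self) -> dict:
        return json.loads(self.to_editor.get())
```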
5. KAIROS: The Background Agent
Perhaps the most futuristic discovery was a feature called KAIROS — a system that allows Claude Code to operate as a persistent background agent. KAIROS can periodically fix errors, run tasks, and even send push notifications to users, all without waiting for human input. It essentially transforms Claude Code from a reactive assistant into a proactive coding partner.
6. "Dream" Mode
Complementing KAIROS is an experimental "Dream" mode that allows Claude to continuously think in the background — developing ideas, iterating on existing solutions, and preparing suggestions before the developer even asks. Think of it as Claude Code brainstorming while you sleep.
7. Undercover Mode
One of the most eyebrow-raising discoveries was an "Undercover Mode" designed for making stealth contributions to open-source repositories. The system prompt for this mode instructs the agent to avoid revealing any Anthropic-internal information in commit messages, PR titles, or PR bodies when operating in public or open-source repos.
8. Anti-Distillation Defenses
The source code also revealed that Anthropic has been actively fighting model distillation attacks — where competitors scrape outputs to train their own models. The system includes controls that inject fake tool definitions into API requests to poison training data if scraping attempts are detected. This is a direct response to reported incidents of Chinese AI firms using model distillation against Claude's outputs.
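The poisoning idea itself is simple to illustrate. In this sketch the decoy names and the detection flag are hypothetical; the real controls are presumably far more sophisticated:

```python
import random

# Hypothetical decoy tool schemas; a distilled model that learns these
# has been trained on poisoned data, since real runs never emit them.
DECOYS = [
    {"name": "fetch_internal_config", "description": "decoy tool"},
    {"name": "sync_shadow_cache", "description": "decoy tool"},
]

def prepare_tools(real_tools: list[dict], scraping_suspected: bool) -> list[dict]:
    """When request patterns look like bulk scraping, mix fake tool
    definitions into the schema sent back to the caller."""
    if not scraping_suspected:
        return list(real_tools)
    tools = real_tools + DECOYS
    random.shuffle(tools)  # avoid a fixed position that scrapers could filter on
    return tools
```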
The GitHub Repository: More Than Just a Code Dump
The mirrored repository at instructkr/claw-code quickly evolved beyond a simple archive of leaked files. The maintainers turned it into a full harness engineering project — a clean-room Python and Rust port of the original Claude Code agent runtime. The goals of this project include:
Reimplementing the agent architecture through functional Python and Rust ports, capturing workflows, command handling, and tool orchestration
Enabling safe experimentation for researchers and developers to study and extend agent systems
Verifying parity with the original system through automated tests and audits
The repository structure includes a Python workspace (src/) with modules for port manifests, data models, commands, tools, and a query engine, plus a Rust workspace (rust/) with crates for the API client, runtime, tools, commands, plugins, a CLI, and a compatibility harness for editor integration.
The maintainers have been transparent that this is focused on open-source harness engineering research and is not affiliated with Anthropic.
The Security Fallout
The leak isn't just an intellectual property problem — it's a security concern. With Claude Code's internals now exposed, attackers have a detailed map of how the system processes data, manages context, and executes tools.
AI security company Straiker warned that instead of brute-forcing jailbreaks and prompt injections, attackers can now study exactly how data flows through the system's context management pipeline and craft payloads designed to survive compaction — effectively persisting a backdoor across an entire session.
Supply Chain Attack: Trojanized Axios
The timing of the leak coincided with a separate but related threat. Users who installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC may have pulled a trojanized version of the Axios HTTP client — a supply chain attack that bundled a cross-platform remote access trojan. Anyone who installed during that window is advised to downgrade to a known-safe version immediately and to rotate all secrets and credentials.
Typosquatting Attacks
Attackers also moved quickly to exploit the leak through typosquatting — registering npm packages with names similar to internal Claude Code dependencies. A user named "pacifier136" published several empty stub packages designed to target developers trying to compile the leaked source code. The packages included names like audio-capture-napi, color-diff-napi, image-processor-napi, modifiers-napi, and url-handler-napi. These start as empty modules but can be weaponized with malicious updates later through dependency confusion attacks.
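Checking a project against the reported stub names is easy to script. A minimal sketch that scans the dependencies map from a package.json:

```python
# Stub package names reported in the typosquatting campaign.
SUSPICIOUS = {
    "audio-capture-napi", "color-diff-napi", "image-processor-napi",
    "modifiers-napi", "url-handler-napi",
}

def flag_typosquats(dependencies: dict[str, str]) -> list[str]:
    """Return dependency names matching the reported stub packages.

    `dependencies` is the name -> version-range map from a package.json;
    in practice you would also scan devDependencies and the lockfile.
    """
    return sorted(name for name in dependencies if name in SUSPICIOUS)
```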
This Wasn't Anthropic's Only Leak That Week
Remarkably, the Claude Code source leak was the second major incident for Anthropic within a single week. Just days earlier, details about the company's upcoming AI model — along with other internal data — were found accessible through an unsecured content management system. Anthropic later acknowledged it had been testing this model (reportedly codenamed "Mythos") with early access customers, calling it the most capable model they've built to date.
Two significant data exposures in one week raise serious questions about Anthropic's internal security practices, especially for a company that positions itself as the safety-focused AI lab.
What Developers Should Do Right Now
If you're a Claude Code user, here's what you need to know:
Check your install date. If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, treat your installation as potentially compromised. Downgrade to a known safe version immediately.
Rotate your secrets. If you were in the affected window, rotate all API keys, tokens, SSH keys, and credentials that Claude Code may have had access to.
Watch for typosquat packages. If you've been experimenting with the leaked source code, double-check every npm dependency you've installed. Look out for suspicious packages that mimic internal Claude Code module names.
Stay updated. Follow Anthropic's official channels for further guidance and patches.
The Bigger Picture
This incident is a wake-up call for the entire AI tools ecosystem. As AI coding assistants become deeply embedded in developer workflows — with access to codebases, secrets, terminals, and APIs — the security stakes are far higher than for traditional software tools.
A source map that should have been excluded from a production build ended up exposing the inner workings of one of the most popular AI coding tools in the world. It gave competitors a blueprint, gave attackers a roadmap, and gave the open-source community an unprecedented look into how modern AI agent systems are actually built.
The lesson is clear: in the age of AI-powered development tools, your CI/CD pipeline's packaging step is a security boundary — and it needs to be treated as one.
Sources: The Hacker News, Medium (Data Science in Your Pocket), CNBC, Straiker AI Security Blog
