Claude Code Leak Round 30 — Native Binary Shift, Desktop Multi-Agent, Opus 4.7 Tokenizer Economics (2026-04-21)
- Source ID: src-20260423-22e662f6932a
- Kind: analysis
- Scope: shared
- Origin: community-analysis/round-30
- Raw path: sources/raw/claude-code-leak-round-30-native-binary-shift-desktop-multi-agent-opus-4-7-token__src-20260423-22e662f6932a.md
- Status: active
Tags
community-analysis leak-round round-30
Content
Claude Code Leak — Round 30: Native Binary Shift, Desktop Multi-Agent Rebuild, Opus 4.7 Tokenizer Economics, and Post-Leak Culture Analyses (April 21, 2026)
Executive Summary
Round 30 captures the first wave of post-Opus 4.7 and post-Epitaxy-era analysis layered on top of the leaked Claude Code internals.
Four threads matter most:
- The native-binary architecture shift in v2.1.113 — the CLI no longer runs the bundled JavaScript directly; it now spawns a per-platform native binary, tightening the attestation story and aligning the public architecture with what the leak already hinted at (Zig-based client attestation below the JS layer).[^1][^2]
- The desktop multi-agent rebuild — Anthropic has effectively shipped a first-class multi-agent IDE: parallel sessions in a sidebar, integrated terminal and editor, three view modes, side chats, and Routines to run Claude Code automations without an active session.[^3][^4]
- Opus 4.7’s tokenizer and `xhigh` effort-level economics — the new tokenizer yields 1.0–1.35× more tokens for the same text, which interacts with prompt caching, effort levels, and leak-documented architectural quirks in non-obvious ways; 4.7 is “one step better” on SWE-style benchmarks, but it can be more expensive in practice if you do not adjust context strategy.[^5][^6][^7]
- Culture and practice analyses inspired by the leak — multiple essays and posts now treat the leaked source as a case study in AI-native engineering culture: AI-written code at scale, massive god functions, feature gates overrun by complexity, and the tension between “coding is solved” rhetoric and a brittle build pipeline.[^2][^8]
The sections below synthesize what changed technically since Round 29 and what the broader community is concluding about the codebase and organization.
Part 1: v2.1.113 and v2.1.114 — Native Binary Spawn and Sandbox Hardening
v2.1.113 — Native Binary Spawn and Network Sandbox Rules
v2.1.113 (April 17) is the most structurally important CLI release since the leak because it changes what actually runs when you type `claude`.[^9][^1]
Key changes:
- CLI now spawns a native Claude Code binary via a per-platform optional dependency instead of running the bundled JavaScript directly.[^1] The JS entrypoint becomes a launcher.
- This aligns the public distribution with the architecture the leak already revealed: a Zig-based attestation client below the JS layer that cryptographically proves the request comes from a genuine Claude Code binary.[^2]
- Security implication: it becomes harder to shim or repack the CLI in ways that bypass safety checks or distort telemetry without breaking attestation.
- `sandbox.network.deniedDomains` setting added — allows blocking specific domains even when a broader `allowedDomains` wildcard would otherwise permit them.[^1]
  - This is directly responsive to supply-chain scenarios discovered post-leak (e.g., typosquatted registries, internal artifact mirrors) where a wildcard like `*.npmjs.org` is too broad.
- `/loop` wakeup semantics clarified: pressing Esc now cancels pending wakeups, and wakeups display as “Claude resuming /loop wakeup,” making it explicit when the agent woke itself vs. when the user resumed.[^1]
- Remote Control improvements:
  - `/extra-usage` now works from Remote Control (mobile/web).[^1]
  - Remote clients can query `@`-file autocomplete suggestions.
  - Remote Control sessions now stream sub-agent transcripts and get archived correctly when Claude Code exits.[^1][^10]
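The deny-over-allow precedence described above can be sketched in a few lines. This is an illustrative model, not the leaked implementation; the matching semantics (fnmatch-style wildcards, deny rules checked first) are assumptions:

```python
from fnmatch import fnmatch

def is_domain_allowed(domain: str, allowed: list[str], denied: list[str]) -> bool:
    """Deny rules win over allow rules; both lists use shell-style wildcards.

    Mirrors the documented behavior: deniedDomains blocks a domain even when
    a broader allowedDomains wildcard matches it. The real matcher in
    Claude Code may differ in detail.
    """
    # Explicit denials are checked first and always win.
    if any(fnmatch(domain, pattern) for pattern in denied):
        return False
    # Otherwise the domain must match at least one allow rule.
    return any(fnmatch(domain, pattern) for pattern in allowed)

# A wildcard allow with a targeted deny, as in the typosquatting scenario:
allowed = ["*.npmjs.org"]
denied = ["evil-mirror.npmjs.org"]
```

With this configuration, `registry.npmjs.org` passes the wildcard allow, while the typosquatted mirror is rejected despite also matching it.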
A cluster of bug fixes further tightens behavior:
- Fixed Claude calling a non-existent `commit` skill and showing “Unknown skill: commit” when users did not define a custom `/commit` command; this had been a recurrent pain point in agentic git workflows.[^1]
- Fixed Bedrock/Vertex/Foundry 429 errors incorrectly referencing status.claude.com, which only covers Anthropic-operated providers.[^1]
- Improved plan mode to hide “Refine with Ultraplan” when the user’s org or auth cannot reach Claude Code on the web, avoiding broken flows.[^1]
v2.1.114 — Small but Telling: Agent Teams Permission Crash
v2.1.114 (also April 17) contains a single changelog line, but it speaks to the growing surface area of multi-user and team features:[^11]
- Fixed a crash in the permission dialog when an agent teams teammate requested tool permission.
The agent teams feature — already visible in earlier leaked flags and CLI behavior — is being hardened for real-world use, where teammates can prompt tools that need permission in shared sessions.
Part 2: Desktop Multi-Agent Rebuild — Epitaxy in All But Name
The TestingCatalog leaks named it Epitaxy; MacRumors and the Anthropic ecosystem have now documented what actually shipped in the Claude desktop app mid-April.
Parallel Sessions and Sidebar Architecture
Anthropic has rebuilt the Claude Code desktop app around parallel sessions:[^3][^12]
- New sidebar lists every active and recent session, filterable by status, project, or environment, with grouping by project.
- Multiple agents can run from a single window — each session is effectively a first-class agent; this closely matches the multi-tab multi-agent setups power users had been hand-rolling.
- Side chats (Command+;) let you branch off questions from a running task into a separate thread without feeding the extra context back into the main thread.[^3][^4]
This design mirrors the multi-session, branch-friendly structure seen in internal KAIROS documentation and leaked coordination code — but now with first-class UI support instead of manual `claude --resume` and `claude --continue` juggling.
Integrated Terminal, Editor, Diff, and Preview
The desktop app now “drops more of the developer workflow into the app itself”:[^3]
- Integrated terminal: run tests and builds from inside Claude desktop.
- In-app file editor: apply spot edits without leaving the app.
- Rebuilt diff viewer: optimized for large changesets, like multi-file refactors.
- Expanded preview pane: handles HTML files, PDFs, and local app servers.
All panes are drag-and-drop; layout is user-configurable. There are three view modes — Verbose, Normal, Summary — that correspond to different levels of tool-call transparency.[^3][^4]
This is effectively an IDE shell around the agentic engine described in the leak: terminal, editor, and diff are now panes inside the orchestrator rather than external tools.
Routines: Headless Claude Code Automations
Anthropic also introduced Routines — a way to run Claude Code automations without an active session.[^3]
- A Routine bundles a prompt, repo, and connectors (e.g., GitHub, Jira) into a configuration.
- It can run on a schedule, in response to API calls, or on a GitHub event (e.g., new PR).
From the leaked-source perspective, this essentially exposes parts of the KAIROS dream loop and scheduler to end users — the “autoDream” and scheduled agent runs that were previously internal now have a user-facing abstraction.
Relation to Epitaxy Leaks
TestingCatalog’s earlier reporting described:[^13][^14]
- Plan/Tasks/Diff panels (now visible in the rebuilt diff and layout system)
- Multi-repo support (not fully shipped yet but hinted at by session grouping and remote project handling)
- Coordinator Mode for parallel sub-agents (paralleled in the multi-session sidebar and side chat pattern)
The desktop update is not branded as Epitaxy, but it implements the multi-agent, multi-pane, coordinator-oriented design the leak and testers had described.
Part 3: Opus 4.7 — Tokenizer, Effort, and Economics
Claude Opus 4.7, launched April 15, is largely a model-side story, but it interacts with Claude Code via context windows, effort levels, and billing behavior that the cache-leak work has already problematized.
What’s New in Opus 4.7
Anthropic’s official “What’s new in Opus 4.7” doc and multiple community deep dives agree on the key changes:[^5][^7][^15]
- New tokenizer: produces ~1.0–1.35× more tokens for the same text, depending on language and code mix.
- Better coding benchmarks: small but consistent improvements over Opus 4.6 on SWE-bench, HumanEval, and custom coding tasks.
- Improved long-context retention: better performance in 200K+ token ranges.
- Visible thinking: extended thinking is more accurate and consistent.
The LLM Stats benchmark comparison shows Opus 4.7 beating 4.6 on coding-relevant tasks but not by dramatic margins — the phrase “literally one step better” from Latent Space captures the consensus.[^16]
Tokenizer Impact and Cost
The most practically important detail for Claude Code users: the tokenizer change interacts with prompt caching and context in ways that can raise or lower effective cost, depending on usage.
Community observations summarized by TechScan AI and r/ClaudeCode:[^6][^17]
- For English-heavy prose, Opus 4.7 produces ~1.1× tokens vs. 4.6.
- For code-heavy prompts and responses, it can produce up to 1.35× tokens — particularly with long identifiers, generics, and nested constructs.
- Per-token pricing is unchanged, so worst-case effective cost per kilobyte of text increases by ~35%.
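The cost arithmetic follows directly from unchanged per-token pricing. A minimal sketch, using the community-reported multipliers and the $5/M input-token rate cited in the pricing analyses; the 100k-token prompt size is illustrative:

```python
def effective_cost(base_tokens: int, tokenizer_multiplier: float,
                   price_per_mtok: float) -> float:
    """Dollar cost of one request, given a tokenizer inflation factor."""
    return base_tokens * tokenizer_multiplier * price_per_mtok / 1_000_000

PRICE_INPUT = 5.0  # $/M input tokens, unchanged between 4.6 and 4.7 (per [^21])

cost_46 = effective_cost(100_000, 1.00, PRICE_INPUT)  # same prompt on Opus 4.6
cost_47 = effective_cost(100_000, 1.35, PRICE_INPUT)  # worst-case code-heavy 4.7

# Same $/token, 1.35x tokens -> 35% higher effective cost for the same text.
```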
Interacting with the cache TTL issues from Round 29:[^18][^19]
- If your workflow is cache-friendly (reusing static prefixes and stable prompts), Opus 4.7’s extra tokens can be partially offset by more effective caching and fewer re-reads thanks to better long-context handling.
- If your workflow is cache-hostile (changing CLAUDE.md, tools, or system instructions frequently), the extra tokens just raise cost.
- For long sessions that already hit the autocompact threshold, the new tokenizer means you hit compaction earlier unless you adjust context strategy.[^5]
xhigh Effort Level and Claude Code
Opus 4.7 introduces an `xhigh` effort level that sits above `high` in the effort hierarchy.[^20]
- `xhigh` allocates a larger thinking budget (more internal tokens) and is tuned for longer, more globally coherent reasoning.
- In Claude Code, `/effort` now opens an interactive slider when called without arguments; Opus 4.7 `xhigh` is available to Max subscribers in auto mode.[^11][^1]
The Finout and Vellum analyses stress that effort and tokenizer both multiply cost:[^21][^22]
- A typical coding turn at `high` might consume 5–10k tokens; `xhigh` can double that.
- With the 1.35× tokenizer overhead, a heavy auto-mode session with Opus 4.7 `xhigh` and poor caching can easily be 2–3× more expensive than the same workflow on 4.6 `high` with stable prompts.
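How the two multipliers compound can be made concrete. The split between thinking tokens and input/output tokens below is an assumption for illustration (the analyses only report total turn sizes), as is the model that effort scales thinking spend while the tokenizer inflates everything uniformly:

```python
def turn_tokens(base_thinking: int, base_io: int, effort_mult: float,
                tokenizer_mult: float) -> float:
    """Estimated billable tokens for one turn.

    Assumed model (not from the leak): the effort level scales thinking-token
    spend, while the new tokenizer inflates all token counts uniformly.
    """
    return (base_thinking * effort_mult + base_io) * tokenizer_mult

# Hypothetical 4.6 `high` baseline: ~5k thinking + ~2.5k input/output tokens.
baseline = turn_tokens(5_000, 2_500, effort_mult=1.0, tokenizer_mult=1.00)
# 4.7 `xhigh` on code-heavy content: doubled thinking, 1.35x tokenizer.
heavy = turn_tokens(5_000, 2_500, effort_mult=2.0, tokenizer_mult=1.35)

ratio = heavy / baseline  # 2.25x under these assumptions
```

Under these assumptions the compounded ratio lands at 2.25×, consistent with the 2–3× range Finout and Vellum report for poorly cached sessions.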
This is where the leak’s insights into prompt structure, compression stages, and `SYSTEM_PROMPT_DYNAMIC_BOUNDARY` become operational: knowing exactly which parts of the prompt are cacheable and how compaction works is now a material cost-control advantage.
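A minimal sketch of why the boundary matters for caching, assuming (as the leak analyses describe) that everything above a `SYSTEM_PROMPT_DYNAMIC_BOUNDARY` marker is the cache-stable prefix; the marker handling and example prompt contents here are illustrative, not the leaked code:

```python
BOUNDARY = "SYSTEM_PROMPT_DYNAMIC_BOUNDARY"

def split_prompt(prompt: str) -> tuple[str, str]:
    """Split a system prompt into (cacheable static prefix, dynamic suffix).

    Prompt caching only pays off if the static prefix is byte-identical
    between requests, so any edit above the boundary invalidates the cache.
    """
    head, sep, tail = prompt.partition(BOUNDARY)
    return (head, tail) if sep else (prompt, "")

stable = "You are Claude Code...\nTool definitions...\n"  # hypothetical prefix
prompt_a = stable + BOUNDARY + "\ncwd=/repo-a"
prompt_b = stable + BOUNDARY + "\ncwd=/repo-b"

prefix_a, _ = split_prompt(prompt_a)
prefix_b, _ = split_prompt(prompt_b)
# Identical prefixes -> a cache hit despite different dynamic suffixes.
```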
Part 4: System Prompt and Culture Analyses — Post-Leak Reflections
cchistory and Piebald: Tracking System Prompt Evolution
Mario Zechner’s cchistory tool and the Piebald-AI claude-code-system-prompts repository together provide a time-series view of Claude Code’s system prompt and tool definitions.[^23][^24]
Key observations relevant post-leak:
- The “avoid creating new files” instruction has been progressively softened — from a hard rule (“don’t create new files unless explicitly asked”) to a preference (“prefer editing existing files over creating new ones”).[^23]
- Safety instructions around malware and vulnerability exploitation have been strengthened after Glasswing and Mythos disclosures, with more explicit prohibitions and more frequent reminders.
- The Output Efficiency section for Anthropic employees vs. “IMPORTANT: Go straight to the point” for external users (originally surfaced by dbreunig and Harrison) remains different in spirit — internal prompts emphasize understandability; external prompts emphasize brevity.[^25][^26]
Recent HN and Reddit threads argue this reinforces a cultural gap: the prompts Anthropic engineers use internally optimize for code quality and clarity; the ones paying users see optimize for speed and concision, which was a direct factor in the thinking-depth regression documented in earlier rounds.[^27][^28]
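The prompt-evolution diffing that tools like cchistory perform can be approximated with the standard library. The snapshot strings below paraphrase the file-creation wording change described above, and the version labels are hypothetical:

```python
import difflib

old_snapshot = "Do not create new files unless explicitly asked.\n"
new_snapshot = "Prefer editing existing files over creating new ones.\n"

# Unified diff between two captured system-prompt snapshots.
diff = list(difflib.unified_diff(
    old_snapshot.splitlines(keepends=True),
    new_snapshot.splitlines(keepends=True),
    fromfile="system-prompt@v2.0.x",  # hypothetical version label
    tofile="system-prompt@v2.1.x",
))
print("".join(diff))
```

Run across every published CLI version, this kind of diff is what turns anecdotes about “softened” instructions into a verifiable time series.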
“Don’t Use the Default System Prompt” — Community Counter-Prompts
A widely shared r/ClaudeCode post on April 14 argues that the default system prompt is now actively misaligned with complex engineering work.[^29]
The poster’s workflow:
- Replaces the default with a shorter, explicit prompt that emphasizes:
- reading code before editing
- planning before acting
- avoiding file creation unless necessary
- honest reporting of what was executed
- Layers a minimal CLAUDE.md that encodes repo-specific constraints, not general rules.
They report that this restores much of the lost depth without relying on `high` or `xhigh` effort levels. The key insight: the model is capable; the system prompt is the bottleneck. The leak gives users enough visibility into the true system prompt structure to design better counter-prompts and CLAUDE.md hierarchies.[^26][^29]
Culture Essays: “What the Source Revealed About AI Engineering”
Several essays now treat the leaked codebase as a case study in the emerging AI-native engineering culture:[^2][^8][^30]
Common themes:
- AI-written code at scale: Many files, especially in the CLI and tool layers, show signs of AI authorship — repetitive patterns, overlong functions, comments that read like explanations to a colleague rather than maintainers.
- God functions vs. composability: The 3,167-line function in `print.ts` and the 1,421-line `while(true)` loop in `query.ts` are used as examples of what happens when a tool-augmented team chooses speed and co-location of logic over modular design.[^31][^32]
- Feature flag sprawl: 79 `tengu_*` gates, 200+ env vars, and multiple unlaunched subsystems (Buddy, KAIROS, Undercover, Epitaxy) create an environment where product velocity is high but reasoning about system behavior is difficult.
- “Coding is solved” rhetoric vs. brittle infrastructure: Essays contrast Cherny’s “coding is practically solved” talk with the multi-leak history (three distinct leaks over 14 months), cache regressions, and CVE chains uncovered after the leak.[^2][^30]
The high-level conclusion across these analyses: Claude Code is both a marvel of AI-native engineering and a warning about deferred discipline — the leak shows how much can be built quickly with AI assistance, and the subsequent security and reliability issues show the long tail of that speed.
Part 5: Security Analyses Building on the Leak — ShadowPrompt & Cloudy Day
While not new in the past few days, two security research threads are being reinterpreted in light of the leak.
ShadowPrompt — Zero-Click XSS in the Chrome Extension
The Peneto Labs and Koi Security work on ShadowPrompt uncovered a zero-click XSS prompt injection path in the Claude Chrome extension.[^33][^34]
- A malicious website could inject JavaScript into the Claude sidebar via unescaped DOM insertion.
- That script could then feed prompts directly into Claude and read responses, without the user typing anything.
- Combined with the leaked knowledge of Claude Code’s tool-call and compaction behavior, attackers could craft payloads that survive context compaction and persist across sessions.
TechRadar’s follow-up explicitly connects ShadowPrompt to the leak: knowing the 4-stage context management pipeline and how Auto Compact preserves certain message types makes it easier to design durable injected instructions.[^35][^36]
Cloudy Day — Full Attack Chain on Claude.ai
OASIS Security’s Cloudy Day report describes a three-vulnerability chain that enables data exfiltration from Claude.ai:[^37][^38]
- Prompt injection: trick Claude into running arbitrary instructions in a chat.
- Files API abuse: leverage the Files API to read previously uploaded content.
- Open redirect: send exfiltrated data to attacker-controlled URLs.
While this chain targeted Claude.ai, not Claude Code, the leaked code shows analogous risks:
- Claude Code’s tool layer can read any file the user has given it permission to read.
- MCP servers can bridge to external systems.
- The absence of strict “data-flow least privilege” means a compromised tool can potentially exfiltrate far more than the user realized.
The leak therefore strengthens the case for strict `sandbox.fs` and `sandbox.network` configuration, as well as external SIEM rules (like the 16-rule Sigma pack from Round 28) to detect anomalous behavior.[^39][^1]
Part 6: How Practitioners Are Adapting — Post-Leak Playbook
Based on community posts since April 15, a de facto post-leak Claude Code playbook is emerging:[^29][^40][^41]
- Run via CLI + native binary
  - Accept the v2.1.113 change; avoid repacked binaries that break attestation.
  - Prefer `npx @anthropic-ai/claude-code` in CI/CD to avoid native installer issues.
- Design your own system layer
  - Use a short, explicit system prompt that encodes your values (read before edit, plan before act).
  - Keep CLAUDE.md repo-specific; avoid global CLAUDE.md rules that fight the system prompt.
- Optimize for caching and context
  - Treat `SYSTEM_PROMPT_DYNAMIC_BOUNDARY` and static prefixes as sacred — don’t modify above the boundary.
  - Use stable CLAUDE.md and tool sets where possible.
  - Avoid frequent model or effort changes mid-session.
- Pick effort and model deliberately
  - Use Opus 4.7 `high` for most coding; reserve `xhigh` for truly hard reasoning tasks.
  - Consider Sonnet or cheaper models for read-heavy exploration; switch to Opus 4.7 for committed changes.
- Lock down sandbox and network
  - Use `sandbox.network.deniedDomains` to block anything that shouldn’t be reachable.[^1]
  - Be explicit with `sandbox.fs`; avoid `allowAll`-style configurations.
- Exploit desktop multi-agent capabilities
  - Use side chats for experimental ideas to avoid contaminating main threads.
  - Group sessions by project; use parallel sessions for sub-agents instead of forcing everything through one long thread.
- Instrument your own usage
  - Run `/insights` and custom JSONL scanners (like Tsai’s TTL script) to understand your actual cost and cache behavior.[^41][^18]
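A minimal JSONL scanner in the spirit of Tsai’s TTL script can be sketched as follows. The usage field names (`cache_read_input_tokens`, `cache_creation_input_tokens`) follow the shape of the Anthropic API’s usage object, but your local log schema may differ, and the sample records are synthetic:

```python
import json
from collections import Counter

def scan_usage(lines):
    """Tally token-usage fields from newline-delimited JSON log records."""
    totals = Counter()
    for line in lines:
        record = json.loads(line)
        usage = record.get("usage", {})
        for field in ("input_tokens", "output_tokens",
                      "cache_read_input_tokens", "cache_creation_input_tokens"):
            totals[field] += usage.get(field, 0)
    return totals

# Synthetic records shaped like API usage blocks:
log = [
    '{"usage": {"input_tokens": 1200, "cache_read_input_tokens": 8000}}',
    '{"usage": {"input_tokens": 300, "cache_creation_input_tokens": 9000}}',
]
totals = scan_usage(log)

# Fraction of input served from cache -- the number that tells you whether
# your prompts are cache-friendly or cache-hostile.
hit_rate = totals["cache_read_input_tokens"] / (
    totals["cache_read_input_tokens"] + totals["input_tokens"])
```

A hit rate that drops over a session is the signature of cache-hostile behavior (frequently edited CLAUDE.md, shifting tool sets) described in the caching guidance above.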
Key Metrics: Round 30
| Metric | Value | Source |
|---|---|---|
| Latest Claude Code version (as of Apr 18) | v2.1.114 | [^9][^11] |
| v2.1.113 release date | April 17, 2026 | [^9][^10] |
| Desktop multi-agent update release window | April 14–15, 2026 | [^3][^4][^12] |
| Opus 4.7 tokenizer overhead | ~1.0–1.35× tokens vs. 4.6 | [^6][^7][^5] |
| Cache write premiums | 1h = +100%; 5m = +25% over base | [^19][^42][^43] |
| Routines triggers | Cron, API call, GitHub events | [^3] |
| System prompt evolution trackers | cchistory, Piebald system prompts | [^23][^24] |
| Number of leaked TS files | ~1,900 | [^44][^2][^45] |
| Lines of leaked code | 500k–512k+ | [^44][^36][^2] |
| Native binary spawn platforms | Per-platform optional dependency (Windows/macOS/Linux) | [^1][^10] |
References
[^1]: claude-code/CHANGELOG.md at main - GitHub - Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and he...
[^2]: Claude Code Source Leak: A Timeline - by Darko - Kilo Blog - A factual roundup of the incident.
[^3]: Anthropic Rebuilds Claude Code Desktop App Around Parallel ... - Anthropic has released a redesigned Claude Code experience for its Claude desktop app, bringing in a...
[^4]: Claude Code Update: Parallel Sessions and More | Jeff J Hunter ... - Claude just dropped a massive update to Claude Code on desktop. Parallel agentic sessions. Multiple ...
[^5]: What's new in Claude Opus 4.7 - Overview of new features, breaking changes, and behavior changes in Claude Opus 4.7.
[^6]: Opus 4.7 has a new tokenizer: same $/token, but ~1-1.35x ... - Reddit - So the exact same prompt will consume up to ~35% more input tokens on 4.7 than 4.6, even though the ...
[^7]: Claude Opus 4.7: Complete Guide to Features, Benchmarks ... - Claude Opus 4.7 is here — same $5/$25 pricing, 70% CursorBench (+12pp), 98.5% vision accuracy, 3x im...
[^8]: Leaked Claude Code Exposes Poor Development Practices - LinkedIn - Perhaps the least surprising thing in 2026 is that the Claude code was leaked and that it was bad. B...
[^9]: Claude Code Releases & Changelog | Version History - Code Guide - Complete version history, changelog, and breaking changes for Claude Code. Track every release from ...
[^10]: anthropics/claude-code v2.1.113 on GitHub - NewReleases.io - New release anthropics/claude-code version v2.1.113 on GitHub.
[^11]: Claude Code Changelog & Release Notes | Havoptic - Latest: v2.1.112 · Apr 16, 2026. 259 releases tracked. Every Claude Code update, feature, and versio...
[^12]: Claude Code on desktop, redesigned for parallel agentic work. - 364 votes, 119 comments. New sidebar for parallel sessions. Drag-and-drop layout. Integrated termina...
[^13]: Anthropic tests Claude Code upgrade to rival Codex Superapp - Anthropic is overhauling Claude Code’s desktop app with project Epitaxy, introducing new panels, mul...
[^14]: Both Claude and ChatGPT prepping major interface updates - AI Weekly Update - April 13, 2026
[^15]: Introducing Claude Opus 4.7 - Anthropic - On our 93-task coding benchmark, Claude Opus 4.7 lifted resolution by 13% over Opus 4.6, including f...
[^16]: [AINews] Anthropic Claude Opus 4.7 - literally one step better than ... - While Anthropic says the new tokenizer (new pretrain?) can cause up to 35% more token usage, the ove...
[^17]: Why Claude Opus 4.7 Uses More Tokens — and What Developers ...
[^18]: I Scanned 95 Days of My Claude Code Logs and Found Anthropic's ... - The community is angry about Anthropic's March 6 silent TTL change, but billing statements aren't ...
[^19]: Anthropic: Claude quota drain not caused by cache tweaks - Dev reports suggest long sessions now burn through usage much faster
[^20]: Detailed explanation of Claude Opus 4.7 xhigh mode - According to official internal Agentic Coding benchmark curves, Opus 4.7 scores approximately 71% at...
[^21]: Claude Opus 4.7 Pricing: The Real Cost Story Behind the ... - Finout - Claude Opus 4.7 keeps Anthropic's $5/$25 per million token pricing, but a new tokenizer can raise ef...
[^22]: Claude Opus 4.7 Benchmarks Explained - Vellum AI - A 10-point improvement from 53.4% to 64.3% puts Opus 4.7 meaningfully ahead of every currently avail...
[^23]: cchistory: Tracking Claude Code System Prompt and Tool Changes - Exploring how to track and analyze changes in Claude Code's system prompts and tools to understand A...
[^24]: Piebald-AI/claude-code-system-prompts - GitHub - All parts of Claude Code's system prompt, 24 builtin tool descriptions, sub agent prompts (Plan/Expl...
[^25]: How Claude Code Builds a System Prompt - With the accidental leak of Claude Code's source code last week, we can see for the first time how C...
[^26]: Claude Code System Prompt: Custom Instructions, Settings & Rules ... - Configure Claude Code with system prompts, custom instructions, and project rules. Complete guide to...
[^27]: For me definitely the worst regression was the system prompt telling ... - For me definitely the worst regression was the system prompt telling claude to analyze file to check...
[^28]: I tested whether a custom system prompt for Claude Code makes a ... - after the Claude Code source leak, the community noticed that the default system prompt could be imp...
[^29]: Don't use Claude Code's Default System Prompt - Reddit - If you're getting frustrated with Claude Code, stop using the default Claude Code's system prompt. I...
[^30]: The Great Claude Code Leak of 2026: Accident, Incompetence, or ... - TL;DR: On March 31, 2026, Anthropic accidentally shipped the entire source code of Claude Code to th...
[^31]: Claude Code's source code has been leaked via a map file in their ... - Hacker News discussion thread.
[^32]: Claude Code Deep Dive Part 2: The 1,421-Line While Loop ... - This is the engine that processes every keystroke, every tool call, every error recovery, every cont...
[^33]: Claude Extension Flaw Enabled Zero-Click XSS Prompt Injection via ... - Claude extension flaw enabled silent prompt injection via XSS and weak allowlist, risking data theft...
[^34]: Critical Claude Chrome Extension Vulnerability Exposed - LinkedIn - How the 'Zero‑Click' Attack Worked The exploit combined two flaws ... This incident shows: AI extens...
[^35]: Anthropic confirms it leaked Claude Code source code - TechRadar - Claude Code's entire source code has been leaked and the internet is up in arms.
[^36]: Claude Code Source Leaked via npm Packaging Error, Anthropic ... - Claude Code 2.1.88 leak exposed 512,000 lines via npm error, fueling supply chain risks and typosqua...
[^37]: Claude.ai Prompt Injection Vulnerability - OASIS Security - Three Claude.ai vulnerabilities chained into a full attack: prompt injection to silent data exfiltra...
[^38]: Claude.ai Exploit Chain: Full Technical Report | Oasis Security - How invisible prompt injection, Files API abuse, and a Google Ads open redirect combined into a work...
[^39]: First analysis & detection pack for the Claude Code source leak - The leak exposed undocumented features (KAIROS daemon, autoDream memory persistence, Undercover Mode...
[^40]: How I Orchestrated a Product Migration with Claude Code - When implementation changes a design decision or uncovers a requirement gap, Claude updates Confluen...
[^41]: The Complete Guide to Every Claude Update in Q1 2026 (Tested by ... - Uses parallel sub-agents to do the research and rewriting simultaneously. ... decisions, code style ...
[^42]: Anthropic clarifies Claude quota drain causes - Let's Data Science - Jarred Sumner endorsed the community detective work but argued the five-minute TTL can be cheaper fo...
[^43]: Claude Opus 4.7 Price: 2026 API Rates & Subscription - GlobalGPT - Claude 4.7 same price? Discover the 35% tokenizer trap. Compare 2026 API rates & SWE-bench gains. Sl...
[^44]: Claude Code Source Code Leak: The Full Story 2026 - Anthropic accidentally leaked 512,000 lines of Claude Code source on March 31, 2026. Here's exactly ...
[^45]: Anthropic's Claude Code Source Leak: What Happened ... - LinkedIn - Who should read this and why: If you use Claude, build with AI tools, or simply follow the AI indust...