@MODEL_LAUNCH Annotation
- Entity ID: ent-model-launch-annotation
- Type: concept
- Scope: shared
What it is
A comment convention in constants/prompts.ts that tags system prompt instructions as temporary, model-specific behavioral workarounds. Each annotation documents a specific model generation's failure mode (e.g., over-commenting, premature abstraction, false claims) and its prompt-based counterweight. The pattern makes stale workarounds easy to identify and remove when models are upgraded.
Community consensus: the single most transferable takeaway from the entire Claude Code leak for anyone building on LLMs.
Why it exists
LLM system prompts accumulate behavioral workarounds over time. Each model generation has different failure modes — Capybara v8 over-comments, produces premature abstractions, and has a 29-30% false claims rate. Without explicit tagging, the system prompt becomes an undocumented archaeology of past model failures. @MODEL_LAUNCH makes these workarounds visible, documented, and disposable.
The approach is simpler than retraining or RLHF: behavioral nudges are prompt-based because modern models are good at instruction following. When a new model ships, engineers can search for @MODEL_LAUNCH annotations and evaluate which workarounds are still needed.
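The convention can be pictured as a tagged comment sitting directly above the prompt section it justifies. The sketch below is a hypothetical reconstruction — the annotation syntax, constant name, and wording are assumptions, not verbatim from the leaked source:

```typescript
// Hypothetical reconstruction — identifiers and wording are assumptions,
// not taken verbatim from the leaked source.

// @MODEL_LAUNCH(capybara-v8): over-commenting counterweight.
// Remove this section when we launch numbat.
export const OVER_COMMENTING_COUNTERWEIGHT = `
Only add code comments when the user asks for them or the logic is
genuinely non-obvious. Prefer clear names over explanatory comments.
`.trim();
```

The tag carries three things at the point of use: which model the workaround targets, which failure mode it counters, and when it should be deleted.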
What depends on it
- System prompt assembly — @MODEL_LAUNCH annotations are part of the system prompt in constants/prompts.ts
- Model upgrade process — annotations serve as a removal checklist when models change
- Numbat model launch — source comment reads "Remove this section when we launch numbat", indicating Numbat may resolve current workarounds
- Community CLAUDE.md templates — the Prompt Coach template (3,000+ stars) reconstructs these counterweights as project-level instructions for any user
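The removal-checklist step above amounts to a search over the prompt source. A minimal sketch, assuming a line-oriented `@MODEL_LAUNCH` comment format (the actual format in the leaked source may differ):

```typescript
// Collect every @MODEL_LAUNCH annotation in a prompt source file so each
// workaround can be re-evaluated against the new model. The annotation
// format is an assumption, not taken from the leaked source.
export function findModelLaunchAnnotations(source: string): string[] {
  return source
    .split("\n")
    .map((line, i) => ({ text: line.trim(), n: i + 1 }))
    .filter(({ text }) => text.includes("@MODEL_LAUNCH"))
    .map(({ text, n }) => `L${n}: ${text}`);
}
```

At upgrade time, running this over the contents of constants/prompts.ts yields the list of workarounds to re-test against the new model.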
Known annotations
At least four documented in the leaked source:
1. Over-commenting — counterweight against excessive code comments
2. Premature abstraction — counterweight against over-engineering
3. False claims (29-30% rate) — three-layer verification gate (Ant-only)
4. Assertiveness — counterweight against volunteering unrequested observations (explains the behavior change users noticed between v1.x and v2.x)
Trade-offs and limitations
- Prompt-based, not deterministic — effectiveness depends on model instruction-following quality, which varies across model versions
- Creates prompt bloat — accumulating workarounds consumes context window tokens
- Manual discipline required — engineers must remember to annotate workarounds and review them on model changes
- Ant-only verification — the three-layer verification gate for false claims is gated to USER_TYPE === 'ant', meaning external users bear the full false claims rate without mitigation
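The Ant-only gate described above is a simple conditional on user type. A hypothetical sketch — the type and function names are assumptions, not from the leaked source:

```typescript
// Hypothetical reconstruction of the USER_TYPE gate; names are assumptions.
type UserType = "ant" | "external";

// Only internal ("ant") users receive the three-layer verification prompt
// section; external users get the base prompt without that mitigation.
export function includeVerificationGate(userType: UserType): boolean {
  return userType === "ant";
}
```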
Key claims
- Community consensus: most transferable engineering lesson from the entire leak
- Each annotation names the specific model and specific failure mode being counterweighted
- The assertiveness counterweight explains the v1.x → v2.x behavior change users noticed
- Capybara v8 false claims rate: 29-30% (vs v4's 16.7%)
Relationships
- depends_on: system-prompt-assembly, constants/prompts.ts
- related_to: model-codenames, three-layer-verification, capybara-false-claims
- used_by: prompt-coach-template (community)
Evidence
src-20260409-9ae8df121bc8: Round 11 — The @MODEL_LAUNCH Pattern