@MODEL_LAUNCH Annotation

What it is

A comment convention in constants/prompts.ts that tags system prompt instructions as temporary model-specific behavioral workarounds. Each annotation documents a specific model generation failure mode (e.g., over-commenting, premature abstraction, false claims) and its prompt-based counterweight. The pattern enables clear identification and removal of stale workarounds when upgrading models.
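The convention can be pictured with a short, hypothetical TypeScript sketch. The tag name matches the document, but the comment wording, constant names, and prompt text are illustrative assumptions, not the leaked source:

```typescript
// Hypothetical sketch of an @MODEL_LAUNCH annotation in a file like
// constants/prompts.ts. All names and wording here are assumptions.

// @MODEL_LAUNCH: counterweight for Capybara v8 over-commenting.
// Search for this tag and re-evaluate when the next model ships.
const OVER_COMMENTING_NUDGE =
  "Only add code comments where the logic is non-obvious. " +
  "Do not narrate straightforward code.";

// The workaround is appended to the base prompt, so deleting the tagged
// constant cleanly removes the stale behavioral nudge.
const SYSTEM_PROMPT = [
  "You are a coding assistant.",
  OVER_COMMENTING_NUDGE,
].join("\n");
```

Because each workaround lives in its own tagged constant, removing it later is a one-line deletion rather than surgery on a monolithic prompt string.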

Community consensus: the single most transferable takeaway from the entire Claude Code leak for anyone building on LLMs.

Why it exists

LLM system prompts accumulate behavioral workarounds over time. Each model generation has different failure modes — Capybara v8 over-comments, produces premature abstractions, and has a 29-30% false claims rate. Without explicit tagging, the system prompt becomes an undocumented archaeology of past model failures. @MODEL_LAUNCH makes these workarounds visible, documented, and disposable.

The approach is simpler than retraining or RLHF: behavioral nudges are prompt-based because modern models are good at instruction following. When a new model ships, engineers can search for @MODEL_LAUNCH annotations and evaluate which workarounds are still needed.
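That audit step can be sketched as a small scanner over prompt source text. The annotation grammar assumed here (`@MODEL_LAUNCH: note` in a line comment) and the helper name are assumptions for illustration:

```typescript
// Hypothetical scanner for @MODEL_LAUNCH annotations, for auditing which
// workarounds remain when a new model ships. The grammar is an assumption.

interface Annotation {
  line: number; // 1-indexed line number of the annotation
  note: string; // free-text description after the tag
}

function findModelLaunchAnnotations(source: string): Annotation[] {
  const results: Annotation[] = [];
  source.split("\n").forEach((text, i) => {
    // Capture everything after the tag (and an optional colon) as the note.
    const match = text.match(/@MODEL_LAUNCH\b:?\s*(.*)/);
    if (match) {
      results.push({ line: i + 1, note: match[1].trim() });
    }
  });
  return results;
}

// Example input: two tagged workarounds in a prompt constants file.
const promptSource = [
  "// @MODEL_LAUNCH: counterweight for over-commenting",
  "const A = '...';",
  "// @MODEL_LAUNCH: counterweight for premature abstraction",
  "const B = '...';",
].join("\n");

const found = findModelLaunchAnnotations(promptSource);
```

In practice a plain `grep @MODEL_LAUNCH` achieves the same thing; the point is that a single greppable token makes the workaround inventory mechanical.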


Known annotations

At least four are documented in the leaked source:

1. Over-commenting — counterweight against excessive code comments
2. Premature abstraction — counterweight against over-engineering
3. False claims (29-30% rate) — a three-layer verification gate (Ant-only)
4. Assertiveness — counterweight against volunteering unrequested observations (this explains the behavior change users noticed between v1.x and v2.x)
