120K Token-Stop Configuration

Description

The most-upvoted configuration tip on r/ClaudeAI after the source leak: cap the context window at 120K tokens instead of the full 200K, so a session never reaches the compaction trigger at 83% of 200K (roughly 166K tokens). When compaction fires, it issues a hidden full-context API call on top of the user's turn; staying below the trigger avoids that doubled billing event. Derived directly from the thresholds in the leaked autoCompact.ts.
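A minimal TypeScript sketch of the arithmetic behind the tip, assuming the trigger is computed against the full 200K window rather than the configured cap. The constant names and both helper functions are hypothetical illustrations, not identifiers from the leaked autoCompact.ts; only the 83% ratio and the 200K window come from the description above.

```typescript
// Hypothetical names; only the 0.83 ratio and the 200K window are from the note.
const FULL_WINDOW = 200_000;                 // model's full context window, in tokens
const TRIGGER_RATIO = 0.83;                  // compaction fires at 83% of the window
const TRIGGER = FULL_WINDOW * TRIGGER_RATIO; // 166,000 tokens

// A session capped at `capTokens` can only hit the trigger if the cap
// reaches it; a 120K cap stays 46K tokens short of 166K.
function canTriggerCompaction(capTokens: number): boolean {
  return capTokens >= TRIGGER;
}

// Extra input tokens billed if compaction does fire: the hidden call
// re-sends the whole accumulated context once more, on top of the user's turn.
function hiddenCompactionTokens(contextTokens: number): number {
  return contextTokens;
}

console.log(canTriggerCompaction(200_000));   // true  -> cascade possible at default cap
console.log(canTriggerCompaction(120_000));   // false -> 120K never reaches 166K
console.log(hiddenCompactionTokens(166_000)); // 166000 extra input tokens billed
```

Note the assumption this rests on: if the trigger ratio were applied to the configured cap rather than anchored to the fixed 200K window, lowering the cap would not avoid compaction. The tip only works because, per the description, the trigger is computed from the full window.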

Key claims

- Auto-compaction fires at 83% of the 200K context window, around 166K tokens.
- The compaction pass issues a hidden full-context API call on top of the user's turn, doubling the billed input for that event.
- Capping context at 120K keeps sessions below the 166K trigger, so the hidden call never fires.

Relations

Sources

src-20260409-1ef27ff0b214