- For an absolute token count, use non-cached input + output tokens.
- For estimating what percentage of the model's context window is used, we need to account for reasoning output tokens from prior turns being dropped from the context window. We approximate this here by subtracting reasoning output tokens from the total. This will be off for the current turn and pending function calls; we can improve it later.
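The approximation above can be sketched as follows. This is a minimal illustration, not the actual implementation; the function name and parameters are assumptions introduced here:

```python
def estimate_context_pct(input_tokens: int,
                         output_tokens: int,
                         reasoning_output_tokens: int,
                         context_window: int) -> float:
    """Approximate the percentage of the context window in use.

    Reasoning output tokens from prior turns are dropped from the
    context window, so we subtract them from the total. This is
    inexact for the current turn and for pending function calls.
    (Hypothetical helper for illustration only.)
    """
    used = input_tokens + output_tokens - reasoning_output_tokens
    return 100.0 * used / context_window


# Example: 800 input + 300 output - 100 reasoning = 1000 tokens
# of a 2000-token window, i.e. 50% used.
print(estimate_context_pct(800, 300, 100, 2000))  # → 50.0
```

For an absolute count (the first bullet), one would instead report `input_tokens + output_tokens` directly, using non-cached input.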