I had a recent conversation where the one-liner showed 11M tokens used, but 10M of those were cached. Looking into it, I think we had a regression here.

- Use blended total tokens for the chat composer usage display
- Compute remaining context using the `tokens_in_context_window` helper

------
https://chatgpt.com/codex/tasks/task_i_68981a16c0a4832cbf416017390930e5
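
For context, here is a minimal sketch of the distinction the two fixes rely on. The `TokenUsage` struct and its fields are illustrative assumptions, not the repo's actual types; only the `tokens_in_context_window` name comes from the change list above.

```rust
/// Illustrative per-turn usage; the real codex `TokenUsage` may differ.
struct TokenUsage {
    input_tokens: u64,        // prompt tokens, cached ones included
    cached_input_tokens: u64, // subset of the prompt served from cache
    output_tokens: u64,
}

impl TokenUsage {
    /// Blended total for the composer usage display: cached input is
    /// nearly free, so it shouldn't inflate the "tokens used" count
    /// (the 11M-shown / 10M-cached conversation would display ~1M).
    fn blended_total(&self) -> u64 {
        self.input_tokens.saturating_sub(self.cached_input_tokens) + self.output_tokens
    }

    /// Tokens that actually occupy the context window. Cached prompt
    /// tokens are cheap, but they still sit in the window, so the
    /// remaining-context math must count them.
    fn tokens_in_context_window(&self) -> u64 {
        self.input_tokens + self.output_tokens
    }
}

fn main() {
    let usage = TokenUsage {
        input_tokens: 110_000,
        cached_input_tokens: 100_000,
        output_tokens: 2_000,
    };
    let window: u64 = 272_000; // hypothetical context-window size

    println!("display (blended): {}", usage.blended_total()); // 12000
    println!(
        "remaining context: {}",
        window.saturating_sub(usage.tokens_in_context_window()) // 160000
    );
}
```

The reason for two separate helpers is that the two numbers legitimately diverge: cached input barely counts toward usage, but it still occupies the context window.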