Show context window usage while tasks run (#4536)

## Summary
- show the remaining context window percentage in `/status` alongside
existing token usage details
- replace the composer shortcut prompt with the context window
percentage (or an unavailable message) while a task is running
- update TUI snapshots to reflect the new context window line
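The "percentage left" shown in the snapshots is presumably derived from tokens used so far versus the model's context window size. A minimal sketch of that arithmetic, assuming nearest-integer rounding (the function name and rounding rule are illustrative assumptions, not the PR's actual code):

```rust
/// Hypothetical helper: percent of the context window still free,
/// rounded to the nearest whole percent. Illustrative only; not
/// taken from the codex-tui source.
fn percent_of_context_left(tokens_used: u64, window: u64) -> u8 {
    if window == 0 {
        return 0; // no window information available
    }
    let used = tokens_used.min(window);
    let left = window - used;
    // Round to nearest: 270.8K left of 272K displays as "100% left".
    ((left * 100 + window / 2) / window) as u8
}

fn main() {
    // Matches the snapshot line "Context window: 100% left (1.2K / 272K)".
    println!("{}% left", percent_of_context_left(1_200, 272_000));
}
```

Rounding to the nearest percent (rather than truncating) explains why 1.2K tokens used out of a 272K window still reads as "100% left" in the snapshots below.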

## Testing
- `cargo test -p codex-tui`

------
https://chatgpt.com/codex/tasks/task_i_68dc6e7397ac8321909d7daff25a396c
Author: Ahmed Ibrahim
Date: 2025-10-01 11:03:05 -07:00
Committed by: GitHub
Parent: 751b3b50ac
Commit: 2f370e946d
13 changed files with 236 additions and 78 deletions


@@ -4,15 +4,16 @@ expression: sanitized
---
/status
╭───────────────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                                  │
│                                                                           │
│ Model: gpt-5-codex (reasoning none, summaries auto)                       │
│ Directory: [[workspace]]                                                  │
│ Approval: on-request                                                      │
│ Sandbox: read-only                                                        │
│ Agents.md: <none>                                                         │
│                                                                           │
│ Token usage: 1.2K total (800 input + 400 output)                          │
│ Monthly limit: [██░░░░░░░░░░░░░░░░░░] 12% used (resets 07:08 on 7 May)    │
╰───────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                                  │
│                                                                           │
│ Model: gpt-5-codex (reasoning none, summaries auto)                       │
│ Directory: [[workspace]]                                                  │
│ Approval: on-request                                                      │
│ Sandbox: read-only                                                        │
│ Agents.md: <none>                                                         │
│                                                                           │
│ Token usage: 1.2K total (800 input + 400 output)                          │
│ Context window: 100% left (1.2K / 272K)                                   │
│ Monthly limit: [██░░░░░░░░░░░░░░░░░░] 12% used (resets 07:08 on 7 May)    │
╰───────────────────────────────────────────────────────────────────────────╯
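The snapshots abbreviate token counts ("750", "1K", "1.2K", "272K"). A hedged sketch of that compact formatting, inferred purely from the values visible above (helper name and cutoff rules are assumptions, not the PR's code):

```rust
/// Hypothetical sketch of the compact token counts seen in the
/// snapshots ("750", "1K", "1.2K", "272K"); illustrative only.
fn format_token_count(n: u64) -> String {
    if n < 1_000 {
        n.to_string() // small counts shown verbatim
    } else if n % 1_000 == 0 {
        format!("{}K", n / 1_000) // whole thousands drop the decimal
    } else {
        format!("{:.1}K", n as f64 / 1_000.0) // one decimal place otherwise
    }
}

fn main() {
    // Mirrors "Context window: 100% left (1.2K / 272K)".
    println!(
        "({} / {})",
        format_token_count(1_200),
        format_token_count(272_000)
    );
}
```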


@@ -4,16 +4,17 @@ expression: sanitized
---
/status
╭─────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                        │
│                                                                 │
│ Model: gpt-5-codex (reasoning high, summaries detailed)         │
│ Directory: [[workspace]]                                        │
│ Approval: on-request                                            │
│ Sandbox: workspace-write                                        │
│ Agents.md: <none>                                               │
│                                                                 │
│ Token usage: 1.9K total (1K input + 900 output)                 │
│ 5h limit: [███████████████░░░░░] 72% used (resets 03:14)        │
│ Weekly limit: [█████████░░░░░░░░░░░] 45% used (resets 03:24)    │
╰─────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                        │
│                                                                 │
│ Model: gpt-5-codex (reasoning high, summaries detailed)         │
│ Directory: [[workspace]]                                        │
│ Approval: on-request                                            │
│ Sandbox: workspace-write                                        │
│ Agents.md: <none>                                               │
│                                                                 │
│ Token usage: 1.9K total (1K input + 900 output)                 │
│ Context window: 100% left (2.1K / 272K)                         │
│ 5h limit: [███████████████░░░░░] 72% used (resets 03:14)        │
│ Weekly limit: [█████████░░░░░░░░░░░] 45% used (resets 03:24)    │
╰─────────────────────────────────────────────────────────────────╯


@@ -4,15 +4,16 @@ expression: sanitized
---
/status
╭────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                   │
│                                                            │
│ Model: gpt-5-codex (reasoning none, summaries auto)        │
│ Directory: [[workspace]]                                   │
│ Approval: on-request                                       │
│ Sandbox: read-only                                         │
│ Agents.md: <none>                                          │
│                                                            │
│ Token usage: 750 total (500 input + 250 output)            │
│ Limits: data not available yet                             │
╰────────────────────────────────────────────────────────────╯
╭────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                   │
│                                                            │
│ Model: gpt-5-codex (reasoning none, summaries auto)        │
│ Directory: [[workspace]]                                   │
│ Approval: on-request                                       │
│ Sandbox: read-only                                         │
│ Agents.md: <none>                                          │
│                                                            │
│ Token usage: 750 total (500 input + 250 output)            │
│ Context window: 100% left (750 / 272K)                     │
│ Limits: data not available yet                             │
╰────────────────────────────────────────────────────────────╯


@@ -4,15 +4,16 @@ expression: sanitized
---
/status
╭────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                   │
│                                                            │
│ Model: gpt-5-codex (reasoning none, summaries auto)        │
│ Directory: [[workspace]]                                   │
│ Approval: on-request                                       │
│ Sandbox: read-only                                         │
│ Agents.md: <none>                                          │
│                                                            │
│ Token usage: 750 total (500 input + 250 output)            │
│ Limits: send a message to load usage data                  │
╰────────────────────────────────────────────────────────────╯
╭────────────────────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                                   │
│                                                            │
│ Model: gpt-5-codex (reasoning none, summaries auto)        │
│ Directory: [[workspace]]                                   │
│ Approval: on-request                                       │
│ Sandbox: read-only                                         │
│ Agents.md: <none>                                          │
│                                                            │
│ Token usage: 750 total (500 input + 250 output)            │
│ Context window: 100% left (750 / 272K)                     │
│ Limits: send a message to load usage data                  │
╰────────────────────────────────────────────────────────────╯


@@ -7,13 +7,14 @@ expression: sanitized
╭────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                   │
│                                            │
│ Model: gpt-5-codex (reasoning hig          │
│ Directory: [[workspace]]                   │
│ Approval: on-request                       │
│ Sandbox: read-only                         │
│ Agents.md: <none>                          │
│                                            │
│ Token usage: 1.9K total (1K input + 90     │
│ 5h limit: [███████████████░░░░░] 72%       │
│            (resets 03:14)                  │
╰────────────────────────────────────────────╯
╭────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                   │
│                                            │
│ Model: gpt-5-codex (reasoning              │
│ Directory: [[workspace]]                   │
│ Approval: on-request                       │
│ Sandbox: read-only                         │
│ Agents.md: <none>                          │
│                                            │
│ Token usage: 1.9K total (1K input +        │
│ Context window: 100% left (2.1K / 272K)    │
│ 5h limit: [███████████████░░░░░]           │
│            (resets 03:14)                  │
╰────────────────────────────────────────────╯