## Summary
- show the remaining context window percentage in `/status` alongside
existing token usage details (percentage math sketched below)
- replace the composer shortcut prompt with the context window
percentage (or an unavailable message) while a task is running
- update TUI snapshots to reflect the new context window line
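A minimal sketch of the percentage math behind the first bullet; the function and field names here are invented, not the actual codex-tui API:
```rust
// Hypothetical helper: percent of the context window still available.
// Returns None when the window size is unknown, matching the
// "unavailable" message mentioned above.
fn percent_remaining(tokens_used: u64, context_window: u64) -> Option<u8> {
    if context_window == 0 {
        return None;
    }
    let remaining = context_window.saturating_sub(tokens_used);
    Some(((remaining * 100) / context_window) as u8)
}
```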
## Testing
- `cargo test -p codex-tui`
------
https://chatgpt.com/codex/tasks/task_i_68dc6e7397ac8321909d7daff25a396c
Instead of overwriting the contents of the composer when pressing
<kbd>Esc</kbd> while there's a queued message, prepend the queued
message(s) to the composer draft.
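A rough sketch of that behavior, treating the draft and queued messages as plain strings (names are illustrative):
```rust
// Prepend queued messages to the existing draft instead of replacing it.
fn restore_queued_messages(queued: &[String], draft: &mut String) {
    if queued.is_empty() {
        return;
    }
    let mut text = queued.join("\n");
    if !draft.is_empty() {
        text.push('\n');
        text.push_str(draft);
    }
    *draft = text;
}
```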
Adds a "View Stack" to the bottom pane to allow for pushing/popping
bottom panels.
`esc` will go back instead of dismissing.
Benefit: We retain the "selection state" of a parent panel (e.g. the
review panel).
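A minimal sketch of the view-stack idea, assuming boxed trait objects for the panels (this is illustrative, not the actual codex-tui API):
```rust
// A bottom-pane view that can be stacked (render details elided).
trait BottomPaneView {
    fn render(&self);
}

struct BottomPane {
    views: Vec<Box<dyn BottomPaneView>>,
}

impl BottomPane {
    // Push a child panel (e.g. a commit picker) on top of its parent.
    fn push_view(&mut self, view: Box<dyn BottomPaneView>) {
        self.views.push(view);
    }

    // Esc pops back to the parent view instead of dismissing everything,
    // so the parent keeps its selection state.
    fn pop_view(&mut self) {
        self.views.pop();
    }

    // Only the top of the stack is rendered and receives input.
    fn active_view(&self) -> Option<&dyn BottomPaneView> {
        self.views.last().map(|v| v.as_ref())
    }
}
```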
Adds the following options:
1. Review current changes
2. Review a specific commit
3. Review against a base branch (PR style)
4. Custom instructions
<img width="487" height="330" alt="Screenshot 2025-09-20 at 2 11 36 PM"
src="https://github.com/user-attachments/assets/edb0aaa5-5747-47fa-881f-cc4c4f7fe8bc"
/>
---
Adds the following UI helpers:
1. Makes list selection searchable
2. Adds navigation to the bottom pane, so you could add a stack of
popups
3. Basic custom prompt view
Created this PR by:
- adding `redundant_clone` to `[workspace.lints.clippy]` in
`codex-rs/Cargo.toml`
- running `cargo clippy --tests --fix`
- running `just fmt`
Though I had to clean up one instance of the following that resulted:
```rust
let codex = codex;
```
This updates the ctrl + c behavior: if there is text in the composer,
pressing ctrl + c now clears the current prompt.
I also updated the ctrl + c hint text to show `^c to interrupt` instead
of `^c to quit` when there is an active conversation.
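A small sketch of the decision logic; the enum and function names are invented for illustration:
```rust
enum CtrlCAction {
    ClearPrompt,
    Interrupt,
    Quit,
}

// Hypothetical dispatch for ctrl + c, mirroring the behavior above.
fn on_ctrl_c(composer_has_text: bool, conversation_active: bool) -> CtrlCAction {
    if composer_has_text {
        // Clear the current prompt instead of quitting.
        CtrlCAction::ClearPrompt
    } else if conversation_active {
        // Hint shows "^c to interrupt" in this state.
        CtrlCAction::Interrupt
    } else {
        // Hint shows "^c to quit".
        CtrlCAction::Quit
    }
}
```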
Two things I don't love:
1. You can currently interrupt a conversation with escape or ctrl + c
(not related to this PR and maybe fine)
2. The bottom row hint text always says `^c to quit` but this PR doesn't
really make that worse.
https://github.com/user-attachments/assets/6eddadec-0d84-4fa7-abcb-d6f5a04e5748
Fixes https://github.com/openai/codex/issues/3126
Summary:
- pause the status timer while waiting on approval modals
- expose deterministic pause/resume helpers to avoid sleep-based tests
(sketched below)
- simplify bottom pane timer handling now that the widget owns the clock
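A minimal sketch of a pausable timer with deterministic pause/resume helpers, assuming the widget owns the clock as described (illustrative, not the actual implementation):
```rust
use std::time::{Duration, Instant};

struct StatusTimer {
    accumulated: Duration,
    running_since: Option<Instant>,
}

impl StatusTimer {
    fn new() -> Self {
        Self {
            accumulated: Duration::ZERO,
            running_since: Some(Instant::now()),
        }
    }

    // Called when an approval modal opens.
    fn pause(&mut self) {
        if let Some(start) = self.running_since.take() {
            self.accumulated += start.elapsed();
        }
    }

    // Called when the modal is dismissed.
    fn resume(&mut self) {
        if self.running_since.is_none() {
            self.running_since = Some(Instant::now());
        }
    }

    // Tests can call pause() and assert on elapsed() without sleeping.
    fn elapsed(&self) -> Duration {
        self.accumulated
            + self
                .running_since
                .map_or(Duration::ZERO, |start| start.elapsed())
    }
}
```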
Adds custom prompts as slash commands, loaded from
`~/.codex/prompts/<command>.md`.
<img width="239" height="107" alt="Screenshot 2025-08-25 at 6 22 42 PM"
src="https://github.com/user-attachments/assets/fe6ebbaa-1bf6-49d3-95f9-fdc53b752679"
/>
---
Details:
1. Adds `Op::ListCustomPrompts` to core.
2. Returns `ListCustomPromptsResponse` with list of `CustomPrompt`
(name, content).
3. TUI calls the operation on load and populates the custom prompts
(excluding prompts that collide with builtins); a discovery sketch
follows this list.
4. Selecting the custom prompt automatically sends the prompt to the
agent.
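A hypothetical sketch of step 3's discovery logic, assuming the `~/.codex/prompts/<command>.md` layout above; `CustomPrompt` mirrors the (name, content) pair and everything else is invented:
```rust
use std::fs;
use std::path::Path;

struct CustomPrompt {
    name: String,
    content: String,
}

fn list_custom_prompts(dir: &Path, builtins: &[&str]) -> Vec<CustomPrompt> {
    let mut prompts = Vec::new();
    let Ok(entries) = fs::read_dir(dir) else {
        return prompts;
    };
    for entry in entries.flatten() {
        let path = entry.path();
        // Only consider markdown files.
        if path.extension().map_or(true, |ext| ext != "md") {
            continue;
        }
        let Some(name) = path.file_stem().and_then(|s| s.to_str()) else {
            continue;
        };
        // Exclude prompts that collide with builtin commands.
        if builtins.contains(&name) {
            continue;
        }
        if let Ok(content) = fs::read_to_string(&path) {
            prompts.push(CustomPrompt {
                name: name.to_string(),
                content,
            });
        }
    }
    prompts
}
```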
This PR fixes two edge cases in handling burst paste (mainly on
PowerShell).
Bugs:
- Needed a key event after paste to render the pasted items.
  > `ChatComposer::flush_paste_burst_if_due()` flushes on timeout. Called:
  > - pre-render in `App` on `TuiEvent::Draw`
  > - via a delayed frame:
  > `BottomPane::request_redraw_in(ChatComposer::recommended_paste_flush_delay())`
- Parsed two key events separately before burst-paste buffering kicked in
  (a buffering sketch follows this list).
  > When the threshold is crossed, pull the preceding burst chars out of
  > the textarea and prepend them to `paste_burst_buffer`, then keep
  > buffering.
- Integrates with #2567 to bring image pasting to Windows.
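A rough sketch of the buffering approach; the thresholds and names are invented, and the real logic lives in `ChatComposer`:
```rust
use std::time::{Duration, Instant};

// Illustrative values only.
const BURST_THRESHOLD: Duration = Duration::from_millis(10);
const FLUSH_DELAY: Duration = Duration::from_millis(25);

#[derive(Default)]
struct PasteBurst {
    buffer: String,
    last_key: Option<Instant>,
}

impl PasteBurst {
    // Returns true when the char was buffered as part of a burst; the
    // caller inserts it into the textarea as usual otherwise. (The real
    // fix also pulls the chars that preceded the threshold crossing back
    // out of the textarea and prepends them to the buffer.)
    fn on_char(&mut self, ch: char, now: Instant) -> bool {
        let in_burst = self
            .last_key
            .is_some_and(|t| now.duration_since(t) < BURST_THRESHOLD);
        self.last_key = Some(now);
        if in_burst {
            self.buffer.push(ch);
        }
        in_burst
    }

    // Flush on timeout; called pre-render on draw and via a delayed
    // redraw, like flush_paste_burst_if_due() above.
    fn flush_if_due(&mut self, now: Instant) -> Option<String> {
        let idle = self
            .last_key
            .is_some_and(|t| now.duration_since(t) >= FLUSH_DELAY);
        if idle && !self.buffer.is_empty() {
            Some(std::mem::take(&mut self.buffer))
        } else {
            None
        }
    }
}
```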
Esc should retain its other functions when it's not used in a
backtracking situation, e.g. cancelling the popup menu when selecting
model/approvals, or interrupting an active turn.
before:
```
$ time cargo test -p codex-tui -q
[...]
cargo test -p codex-tui -q 39.89s user 10.77s system 98% cpu 51.328 total
```
after:
```
$ time cargo test -p codex-tui -q
[...]
cargo test -p codex-tui -q 1.37s user 0.64s system 29% cpu 6.699 total
```
The major offenders were the textarea fuzz test and the `custom_terminal`
doctests. (I think the doctests were being recompiled every time, which
made them extra slow?)
Allow ctrl+v in the TUI for images; in addition, `@file` mentions that
point at images are appended as raw files (and read by the model) rather
than pasted as a path that the model cannot read.
Re-uses the components and the same interface we're using for copying
pasted content in
72504f1d9c.
@aibrahim-oai as you've implemented this, mind having a look at this
one?
https://github.com/user-attachments/assets/c6c1153b-6b32-4558-b9a2-f8c57d2be710
---------
Co-authored-by: easong-openai <easong@openai.com>
Co-authored-by: Daniel Edrisian <dedrisian@openai.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
This is in preparation for adding more separate "modes" to the TUI, in
particular a "transcript mode" to view a full history once #2316 lands.
1. Split apart "tui events" from "app events" (sketched below).
2. Remove onboarding-related events from `AppEvent`.
3. Move several general drawing tools out of `App` and into a new `Tui`
type.
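A sketch of the split; the variant names are invented for illustration:
```rust
// Low-level terminal input and drawing stay in TuiEvent...
enum TuiEvent {
    Key(char),
    Paste(String),
    Draw,
}

// ...while AppEvent carries application-level messages only.
enum AppEvent {
    CodexEvent(String),
    RequestRedraw,
    ExitRequest,
}
```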
## Summary
- Show a temporary "Working on diff" state in the bottom pane
- Add a `DiffResult` app event and dispatch git diff asynchronously
(sketched below)
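A minimal sketch of the async dispatch, assuming an mpsc channel delivers app events back to the UI loop (the plumbing here is invented):
```rust
use std::process::Command;
use std::sync::mpsc::Sender;
use std::thread;

enum AppEvent {
    DiffResult(String),
}

// Run `git diff` off the UI thread and report the result as an event.
fn spawn_diff_task(tx: Sender<AppEvent>) {
    thread::spawn(move || {
        let text = match Command::new("git").arg("diff").output() {
            Ok(out) => String::from_utf8_lossy(&out.stdout).into_owned(),
            Err(err) => format!("failed to run git diff: {err}"),
        };
        // The UI leaves the "Working on diff" state when this arrives.
        let _ = tx.send(AppEvent::DiffResult(text));
    });
}
```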
## Testing
- `just fmt`
- `just fix` *(fails: `let` expressions in this position are unstable)*
- `cargo test --all-features` *(fails: `let` expressions in this
position are unstable)*
------
https://chatgpt.com/codex/tasks/task_i_689a839f32b88321840a893551d5fbef
Wait for newlines, then render markdown on a line-by-line basis: word-wrap it for the current terminal size and then spit it out line by line into the UI. Also adds tests and fixes some UI regressions.
We wait until we have an entire line, then format it with markdown and stream it into the UI. This delays the first render until a full line arrives, but it is the right thing to do with our current rendering model IMO. It also lets us add word wrapping!
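A sketch of the commit-on-newline buffering; `render_markdown` is a stand-in, and using the `textwrap` crate is an assumption:
```rust
#[derive(Default)]
struct LineStream {
    partial: String,
}

impl LineStream {
    // Returns fully rendered display lines; anything after the last
    // newline stays buffered until more deltas arrive.
    fn push_delta(&mut self, delta: &str, width: usize) -> Vec<String> {
        self.partial.push_str(delta);
        let mut out = Vec::new();
        while let Some(idx) = self.partial.find('\n') {
            let line: String = self.partial.drain(..=idx).collect();
            out.extend(render_markdown(line.trim_end(), width));
        }
        out
    }
}

// Placeholder: the real pipeline applies markdown styling before
// word-wrapping to the terminal width.
fn render_markdown(line: &str, width: usize) -> Vec<String> {
    textwrap::wrap(line, width)
        .into_iter()
        .map(|cow| cow.into_owned())
        .collect()
}
```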
https://github.com/openai/codex/pull/1835 has some messed up history.
This adds support for streaming chat completions, which is useful for Ollama. We should probably cast a very skeptical eye on the code introduced in this PR.
---------
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
Stream the model's thoughts and responses instead of waiting for the whole
thing to come through. Very rough right now, but I'm making the risk call to push through.
This replaces tui-textarea with a custom textarea component.
Key differences:
1. wrapped lines
2. better unicode handling (width math sketched below)
3. uses the native terminal cursor
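A tiny illustration of why unicode handling matters here, using the `unicode-width` crate (an assumption about the approach, not a claim about this component's internals):
```rust
use unicode_width::UnicodeWidthChar;

// The cursor's display column is the total display width of the chars
// before it, not the char count: wide CJK chars count as 2 columns.
fn display_col(line: &str, cursor_char_idx: usize) -> usize {
    line.chars()
        .take(cursor_char_idx)
        .map(|c| c.width().unwrap_or(0))
        .sum()
}
```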
This should perhaps be spun out into its own separate crate at some
point, but for now it's convenient to have it in-tree.