allow ctrl+v in the TUI for images, and make `@file` references that are
images attach the raw image file (which the model can read) rather than
pasting a path that the model cannot read.
Re-uses the components and the same interface we're already using for
copying pasted content in
72504f1d9c.
@aibrahim-oai as you've implemented this, mind having a look at this
one?
https://github.com/user-attachments/assets/c6c1153b-6b32-4558-b9a2-f8c57d2be710
---------
Co-authored-by: easong-openai <easong@openai.com>
Co-authored-by: Daniel Edrisian <dedrisian@openai.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
this is in preparation for adding more separate "modes" to the tui, in
particular, a "transcript mode" to view a full history once #2316 lands.
1. split apart "tui events" from "app events".
2. remove onboarding-related events from AppEvent.
3. move several general drawing tools out of App and into a new Tui
type (a rough sketch of the split follows this list).
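Roughly, the split looks like this; a sketch with illustrative variant names, not the real enums:

```rust
/// Terminal-level events owned by the new Tui type
/// (key input, paste, redraw scheduling).
#[derive(Debug)]
pub enum TuiEvent {
    Key(char), // stand-in for a full key-event type
    Paste(String),
    Draw,
}

/// Application-level events; onboarding-related variants
/// no longer live here.
#[derive(Debug)]
pub enum AppEvent {
    AgentMessage(String), // stand-in for a protocol event
    ExitRequest,
}

fn main() {
    let event = TuiEvent::Paste("hello".into());
    println!("{event:?}");
}
```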
- For selectable options, use sentences starting in lowercase and not
ending with periods. To be honest I don't love this style, but better to
be consistent for now.
- Tweak some other strings.
- Put in more compelling suggestions on launch. Excited to put `/mcp` in
there next.
## Summary
Adds a `/mcp` command to list active tools. We can extend this command
to allow configuration of MCP tools, but for now a simple list command
will help you debug whether your config.toml and your tools are working
as expected.
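A minimal sketch of what the listing could look like, assuming the tool names were already discovered from config.toml at startup; `render_mcp_tools` and its input shape are hypothetical:

```rust
use std::collections::BTreeMap;

/// Map of MCP server name -> tool names (hypothetical shape).
fn render_mcp_tools(tools: &BTreeMap<String, Vec<String>>) -> Vec<String> {
    let mut lines = vec!["Active MCP tools:".to_string()];
    if tools.is_empty() {
        lines.push("  (none; check the mcp_servers section of config.toml)".into());
    }
    for (server, names) in tools {
        for name in names {
            lines.push(format!("  {server}: {name}"));
        }
    }
    lines
}

fn main() {
    let mut tools = BTreeMap::new();
    tools.insert("docs".to_string(), vec!["search".to_string()]);
    for line in render_mcp_tools(&tools) {
        println!("{line}");
    }
}
```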
Introduces `EventMsg::TurnAborted` that should be sent in response to
`Op::Interrupt`.
In the MCP server, updates the handling of a
`ClientRequest::InterruptConversation` request such that it sends the
`Op::Interrupt` but does not respond to the request until it sees an
`EventMsg::TurnAborted`.
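A stubbed sketch of that flow; the types here are stand-ins, and the real server routes this through its event loop rather than blocking in place:

```rust
enum Op { Interrupt }

enum EventMsg { TaskStarted, TurnAborted }

struct Conversation { pending: Vec<EventMsg> }

impl Conversation {
    fn submit(&mut self, _op: Op) {}
    fn next_event(&mut self) -> EventMsg {
        self.pending.pop().unwrap_or(EventMsg::TurnAborted)
    }
}

/// Send Op::Interrupt, but only respond to the client request
/// once the agent has actually aborted the turn.
fn interrupt_and_wait(conv: &mut Conversation) {
    conv.submit(Op::Interrupt);
    loop {
        if let EventMsg::TurnAborted = conv.next_event() {
            break; // now it is safe to answer InterruptConversation
        }
        // unrelated events keep flowing until the abort lands
    }
}

fn main() {
    let mut conv = Conversation { pending: vec![EventMsg::TaskStarted] };
    interrupt_and_wait(&mut conv);
    println!("interrupt acknowledged");
}
```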
## Summary
- Show a temporary "Working on diff" state in the bottom pane
- Add a `DiffResult` app event and dispatch `git diff` asynchronously
(sketched below)
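A sketch of the dispatch, assuming an `AppEvent::DiffResult` variant and an app-event channel; the real code runs inside the TUI's async runtime rather than a raw thread:

```rust
use std::sync::mpsc::{channel, Sender};

enum AppEvent { DiffResult(String) }

fn dispatch_git_diff(tx: Sender<AppEvent>) {
    std::thread::spawn(move || {
        let output = std::process::Command::new("git").arg("diff").output();
        let text = match output {
            Ok(o) => String::from_utf8_lossy(&o.stdout).into_owned(),
            Err(e) => format!("failed to run git diff: {e}"),
        };
        // The bottom pane shows "Working on diff" until this lands.
        let _ = tx.send(AppEvent::DiffResult(text));
    });
}

fn main() {
    let (tx, rx) = channel();
    dispatch_git_diff(tx);
    if let Ok(AppEvent::DiffResult(diff)) = rx.recv() {
        println!("{diff}");
    }
}
```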
## Testing
- `just fmt`
- `just fix` *(fails: `let` expressions in this position are unstable)*
- `cargo test --all-features` *(fails: `let` expressions in this
position are unstable)*
------
https://chatgpt.com/codex/tasks/task_i_689a839f32b88321840a893551d5fbef
we have a very unclear lifecycle for the ChatWidget; this should only
have to be added in one place! But this fixes the "hanging commands"
issue where the active_exec_cell wasn't correctly cleared when commands
finished.
To repro w/o this PR:
1. prompt "run sleep 10"
2. once the command starts running, press <kbd>Esc</kbd>
3. prompt "run echo hi"
Expected:
```
✓ Completed
└ ⌨️ echo hi
codex
hi
```
Actual:
```
⚙︎ Working
└ ⌨️ echo hi
▌ Ask Codex to do anything
```
i.e. the "Working" never changes to "Completed".
The bug is fixed with this PR.
refactors HistoryCell to be a trait instead of an enum. Also collapses
the many "degenerate" HistoryCell variants, which were just a store of
lines, into a single PlainHistoryCell type.
The goal here is to allow more ways of rendering history cells (e.g.
expanded/collapsed/"live"), and I expect we will return to more varied
types of HistoryCell as we develop this area.
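A sketch of the refactor's shape; the real trait renders ratatui lines rather than plain strings:

```rust
trait HistoryCell {
    fn display_lines(&self) -> Vec<String>;
}

/// The single type replacing the many variants that were
/// just a stored list of lines.
struct PlainHistoryCell { lines: Vec<String> }

impl HistoryCell for PlainHistoryCell {
    fn display_lines(&self) -> Vec<String> { self.lines.clone() }
}

/// Richer cells can now render themselves however they like
/// (collapsed, expanded, live-updating, ...).
struct ExecHistoryCell { command: String, exit_code: i32 }

impl HistoryCell for ExecHistoryCell {
    fn display_lines(&self) -> Vec<String> {
        let marker = if self.exit_code == 0 { "✓" } else { "✗" };
        vec![format!("{marker} {}", self.command)]
    }
}

fn main() {
    let cells: Vec<Box<dyn HistoryCell>> = vec![
        Box::new(PlainHistoryCell { lines: vec!["hello".into()] }),
        Box::new(ExecHistoryCell { command: "echo hi".into(), exit_code: 0 }),
    ];
    for cell in &cells {
        for line in cell.display_lines() {
            println!("{line}");
        }
    }
}
```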
Sometimes CoT is returned as text content instead of `ReasoningText`. We
should parse it but not serialize it back on requests.
---------
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
This PR does two things, because after I got deep into the first one I
started pulling on the thread that led to the second:
- Makes `ConversationManager` the place where all in-memory
conversations are created and stored. Previously, `MessageProcessor` in
the `codex-mcp-server` crate was doing this via its `session_map`, but
this is something that should be done in `codex-core`.
- It unwinds the `ctrl_c: tokio::sync::Notify` that was threaded
throughout our code. I think this made sense at one time, but now that
we handle Ctrl-C within the TUI and have a proper `Op::Interrupt` event,
I don't think this was quite right, so I removed it. For `codex exec`
and `codex proto`, we now use `tokio::signal::ctrl_c()` directly, but we
no longer make `Notify` a field of `Codex` or `CodexConversation`.
Changes of note:
- Adds the files `conversation_manager.rs` and `codex_conversation.rs`
to `codex-core`.
- `Codex` and `CodexSpawnOk` are no longer exported from `codex-core`:
other crates must use `CodexConversation` instead (which is created via
`ConversationManager`).
- `core/src/codex_wrapper.rs` has been deleted in favor of
`ConversationManager`.
- `ConversationManager::new_conversation()` returns `NewConversation`,
which is in line with the `new_conversation` tool we want to add to the
MCP server (see the sketch after this list). Note `NewConversation`
includes `SessionConfiguredEvent`, so we eliminate checks in cases like
`codex-rs/core/tests/client.rs` that verify `SessionConfiguredEvent` is
the first event, because that is now internal to `ConversationManager`.
- Quite a bit of code was deleted from
`codex-rs/mcp-server/src/message_processor.rs` since it no longer has to
manage multiple conversations itself: it goes through
`ConversationManager` instead.
- `core/tests/live_agent.rs` has been deleted: I had to update a bunch
of tests, every test in that file was ignored, and I don't think anyone
ever ran them, so at this point it was just technical debt.
- Removed `notify_on_sigint()` from `util.rs` (and in a follow-up, I
hope to refactor the blandly-named `util.rs` into more descriptive
files).
- In general, I started renaming local variables named `codex` to
`conversation` where appropriate, though admittedly I didn't do it
throughout the integration tests because that would have added a lot of
noise to this PR.
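A stubbed sketch of the ownership model described above; every type here is a stand-in for the real ones in `conversation_manager.rs`:

```rust
use std::collections::HashMap;
use std::sync::Arc;

struct CodexConversation; // stand-in
struct SessionConfiguredEvent { session_id: u64 } // stand-in

/// What `new_conversation()` hands back: the conversation plus the
/// already-consumed SessionConfiguredEvent, so callers never have to
/// check that it was the first event.
struct NewConversation {
    conversation: Arc<CodexConversation>,
    session_configured: SessionConfiguredEvent,
}

#[derive(Default)]
struct ConversationManager {
    conversations: HashMap<u64, Arc<CodexConversation>>,
    next_id: u64,
}

impl ConversationManager {
    /// All in-memory conversations are created and stored here.
    fn new_conversation(&mut self) -> NewConversation {
        let id = self.next_id;
        self.next_id += 1;
        let conversation = Arc::new(CodexConversation);
        self.conversations.insert(id, Arc::clone(&conversation));
        NewConversation {
            conversation,
            session_configured: SessionConfiguredEvent { session_id: id },
        }
    }
}

fn main() {
    let mut manager = ConversationManager::default();
    let new = manager.new_conversation();
    let _conversation = new.conversation;
    println!("session {}", new.session_configured.session_id);
}
```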
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/2240).
* #2264
* #2263
* __->__ #2240
Wait for newlines, then render markdown on a line-by-line basis. Word-wrap it for the current terminal size and then emit it line by line into the UI. Also adds tests and fixes some UI regressions.
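A minimal sketch of the buffering idea, with a naive greedy wrap standing in for the real styled-span wrapping (and with the markdown step elided):

```rust
struct LineStream { buf: String, width: usize }

impl LineStream {
    /// Accumulate deltas; emit wrapped rows only for completed lines.
    fn push_delta(&mut self, delta: &str) -> Vec<String> {
        self.buf.push_str(delta);
        let mut out = Vec::new();
        while let Some(idx) = self.buf.find('\n') {
            let line: String = self.buf.drain(..=idx).collect();
            // The real code renders markdown here before wrapping.
            out.extend(wrap(line.trim_end(), self.width));
        }
        out
    }
}

/// Naive greedy word wrap to the current terminal width.
fn wrap(line: &str, width: usize) -> Vec<String> {
    let mut rows = vec![String::new()];
    for word in line.split_whitespace() {
        let row = rows.last_mut().unwrap();
        if !row.is_empty() && row.len() + 1 + word.len() > width {
            rows.push(word.to_string());
        } else {
            if !row.is_empty() { row.push(' '); }
            row.push_str(word);
        }
    }
    rows
}

fn main() {
    let mut stream = LineStream { buf: String::new(), width: 20 };
    for delta in ["Hello world, this ", "is a long line\n"] {
        for row in stream.push_delta(delta) {
            println!("{row}");
        }
    }
}
```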
Right now, every time an exec ends, we emit it to history, which makes
it immutable. In order to be able to update or merge successive tool
calls (which will be useful after
https://github.com/openai/codex/pull/2095), we need to retain it as the
active cell.
This also changes the cell to contain the metadata necessary to render
it so it can be updated rather than baking in the final text lines when
the cell is created.
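A sketch of the shape, with hypothetical fields; the point is that the cell stores what happened and renders at draw time, so it can keep changing until it is committed to history:

```rust
struct ActiveExecCell {
    commands: Vec<String>,
    finished: bool,
}

impl ActiveExecCell {
    /// Successive tool calls can be merged in before the cell
    /// becomes immutable history.
    fn merge(&mut self, command: &str) {
        self.commands.push(command.to_string());
    }

    /// Lines are derived from metadata at render time, not baked
    /// in when the cell is created.
    fn render(&self) -> Vec<String> {
        let header = if self.finished { "✓ Completed" } else { "⚙ Working" };
        let mut lines = vec![header.to_string()];
        for command in &self.commands {
            lines.push(format!("  └ {command}"));
        }
        lines
    }
}

fn main() {
    let mut cell = ActiveExecCell { commands: vec![], finished: false };
    cell.merge("cat foo.rs");
    cell.merge("cat bar.rs");
    cell.finished = true;
    for line in cell.render() {
        println!("{line}");
    }
}
```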
Part 1: https://github.com/openai/codex/pull/2095
Part 3: https://github.com/openai/codex/pull/2110
# Note for reviewers
The bulk of this PR is in the new file, `parse_command.rs`. This file
was written test-first (TDD) and implemented with Codex. Do not worry
about reviewing the code, just review the unit tests (if you want). If
any cases are missing, we'll add more tests and have Codex fix them.
I think the best approach will be to land and iterate. I have some
follow-ups I want to do after this lands. The next PR after this will
let us merge (and dedupe) multiple sequential cells of the same type,
such as multiple read commands. The deduping will also be important
because the model often reads the same file multiple times in a row in
chunks.
===
This PR formats common commands like reading, formatting, testing,
etc. more nicely:
It tries to extract things like file names and test names, and falls
back to the raw command if it can't. It also only shows stdout/stderr
if the command failed.
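A toy sketch of the fallback behavior; the real parser in `parse_command.rs` covers far more cases:

```rust
/// Try to pull a friendly summary out of a command; fall back to
/// the raw command when nothing matches.
fn summarize(cmd: &[&str]) -> String {
    match cmd {
        ["cat", file] | ["bat", file] => format!("Read {file}"),
        ["sed", "-n", range, file] => format!("Read {file} (lines {range})"),
        ["cargo", "test", ..] => "Run tests".to_string(),
        _ => cmd.join(" "),
    }
}

fn main() {
    println!("{}", summarize(&["cat", "src/main.rs"])); // Read src/main.rs
    println!("{}", summarize(&["rg", "foo"]));          // rg foo (fallback)
}
```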
<img width="770" height="238" alt="CleanShot 2025-08-09 at 16 05 15"
src="https://github.com/user-attachments/assets/0ead179a-8910-486b-aa3d-7d26264d751e"
/>
<img width="348" height="158" alt="CleanShot 2025-08-09 at 16 05 32"
src="https://github.com/user-attachments/assets/4302681b-5e87-4ff3-85b4-0252c6c485a9"
/>
<img width="834" height="324" alt="CleanShot 2025-08-09 at 16 05 56 2"
src="https://github.com/user-attachments/assets/09fb3517-7bd6-40f6-a126-4172106b700f"
/>
Part 2: https://github.com/openai/codex/pull/2097
Part 3: https://github.com/openai/codex/pull/2110
This PR updates ChatWidget to ensure that when AgentMessage,
AgentReasoning, or AgentReasoningRawContent events arrive without any
streamed deltas, the final text from the event is rendered before the
stream is finalized. Previously, these handlers ignored the event text
in such cases, relying solely on prior deltas.
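A sketch of the fix, with a hypothetical `StreamState` standing in for the widget's stream controller:

```rust
struct StreamState {
    received_delta: bool,
    lines: Vec<String>,
}

impl StreamState {
    fn on_delta(&mut self, delta: &str) {
        self.received_delta = true;
        self.lines.push(delta.to_string());
    }

    /// Called for the final AgentMessage-style event: if nothing was
    /// streamed, render the event's full text instead of dropping it.
    fn finalize(&mut self, full_text: &str) -> Vec<String> {
        if !self.received_delta {
            self.lines.push(full_text.to_string());
        }
        std::mem::take(&mut self.lines)
    }
}

fn main() {
    let mut state = StreamState { received_delta: false, lines: vec![] };
    let rendered = state.finalize("final answer with no deltas");
    assert_eq!(rendered, vec!["final answer with no deltas"]);
    println!("{rendered:?}");
}
```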
<img width="603" height="189" alt="image"
src="https://github.com/user-attachments/assets/868516f2-7963-4603-9af4-adb1b1eda61e"
/>
## Summary
- allow Esc to interrupt the current session when a task is running
(see the sketch after this list)
- document Esc as an interrupt key in the status indicator
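A sketch of the key handling, with stub types; the real widget uses crossterm key events and its own `Op` submission channel:

```rust
enum KeyCode { Esc, Char(char) }

#[derive(Debug)]
enum Op { Interrupt }

/// Esc only interrupts while a task is running; otherwise it falls
/// through to the usual composer handling.
fn handle_key(key: KeyCode, task_running: bool) -> Option<Op> {
    match key {
        KeyCode::Esc if task_running => Some(Op::Interrupt),
        _ => None,
    }
}

fn main() {
    assert!(matches!(handle_key(KeyCode::Esc, true), Some(Op::Interrupt)));
    assert!(handle_key(KeyCode::Esc, false).is_none());
    println!("Esc interrupts only while a task is running");
}
```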
## Testing
- `just fmt`
- `just fix` *(fails: E0658 `let` expressions in this position are
unstable)*
- `cargo test --all-features` *(fails: E0658 `let` expressions in this
position are unstable)*
------
https://chatgpt.com/codex/tasks/task_i_689698cf605883208f57b0317ff6a303
We wait until we have an entire newline, then format it with markdown and stream it into the UI. This adds a little latency before the first visible token, but it's the right thing to do with our current rendering model IMO. It also lets us add word wrapping!
- Arguably a bugfix as previously CTRL-Z didn't do anything.
- Only in TUI mode for now. This may make sense in other modes... to be
researched.
- The TUI runs the terminal in raw mode, so CTRL-Z arrives as a key
event rather than a signal; we handle it just like CTRL-C (see the
sketch after this list).
- Not adding UI for it as a composer redesign is coming, and we can just
add it then.
- We should follow up by making a second CTRL-Z trigger the native
terminal action.
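A sketch of the suspend path, assuming the crossterm and libc crates; the real code also has to tear down and restore the alternate screen around the suspend:

```rust
use crossterm::terminal::{disable_raw_mode, enable_raw_mode};

fn suspend() -> std::io::Result<()> {
    // Leave raw mode so the shell gets a sane terminal back.
    disable_raw_mode()?;
    // In raw mode CTRL-Z arrived as a key event, not a signal, so we
    // deliver SIGTSTP to ourselves explicitly.
    unsafe { libc::raise(libc::SIGTSTP); }
    // Execution resumes here after `fg`; re-enter raw mode.
    enable_raw_mode()
}

fn main() -> std::io::Result<()> {
    enable_raw_mode()?;
    suspend()?; // the process stops here until `fg` in the shell
    disable_raw_mode()
}
```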
- Added a `/status` command, which will be useful when we update the
home screen to print less status.
- Moved `create_config_summary_entries` to common since it's used in a
few places.
- Noticed we inconsistently had periods in slash command descriptions
and just removed them everywhere.
- Noticed the diff description was overflowing so made it shorter.
https://github.com/openai/codex/pull/1868 is a related fix that was in
flight simultaneously, but after talking to @easong-openai, this:
- logs instead of renders for `BackgroundEvent`
- logs for `TurnDiff`
- renders for `PatchApplyEnd`
https://github.com/openai/codex/pull/1835 has some messed up history.
This adds support for streaming chat completions, which is useful for Ollama. We should probably cast a very skeptical eye over the code introduced in this PR.
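A sketch of the delta extraction, assuming the standard chat-completions SSE shape (`data: {...}` lines carrying `choices[0].delta.content`); transport and retry handling omitted:

```rust
use serde_json::Value;

fn delta_from_sse_line(line: &str) -> Option<String> {
    let payload = line.strip_prefix("data: ")?;
    if payload == "[DONE]" {
        return None; // end-of-stream marker
    }
    let v: Value = serde_json::from_str(payload).ok()?;
    v["choices"][0]["delta"]["content"]
        .as_str()
        .map(str::to_string)
}

fn main() {
    let line = r#"data: {"choices":[{"delta":{"content":"Hel"}}]}"#;
    assert_eq!(delta_from_sse_line(line).as_deref(), Some("Hel"));
    assert_eq!(delta_from_sse_line("data: [DONE]"), None);
    println!("parsed one streaming delta");
}
```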
---------
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
Stream the model's thoughts and responses instead of waiting for the
whole thing to come through. Very rough right now, but I'm making the risk call to push through.