Created this PR by:
- adding `redundant_clone` to `[workspace.lints.clippy]` in
`codex-rs/Cargo.toml`
- running `cargo clippy --tests --fix`
- running `just fmt`
Though I had to clean up one instance of the following that resulted:
```rust
let codex = codex;
```
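For context, a hedged illustration (not from this codebase) of the pattern `redundant_clone` flags: a clone whose original value is never used again, so the value could simply be moved.

```rust
// Illustrative only: `name` is not used after the clone, so clippy's
// redundant_clone lint suggests removing the `.clone()` call.
fn greeting() -> String {
    let name = String::from("codex");
    let owned = name.clone(); // flagged: `name` could be moved instead
    format!("hello, {owned}")
}
```

Removing the `.clone()` via `--fix` is presumably how the self-assignment above was left behind.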
This PR does the following:
* Adds the ability to paste or type an API key.
* Removes the `preferred_auth_method` config option. The last login
method is always persisted in auth.json, so this isn't needed.
* If the `OPENAI_API_KEY` env variable is defined, its value is used to
prepopulate the new UI (sketched below). The env variable is otherwise
ignored by the CLI.
* Adds a new MCP server entry point "login_api_key" so we can implement
this same API key behavior for the VS Code extension.
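A minimal sketch of the env-var prepopulation, assuming only the standard library (the real UI wiring differs):

```rust
use std::env;

// Read OPENAI_API_KEY purely to seed the API key input field; the CLI
// otherwise ignores the variable.
fn prepopulated_api_key() -> Option<String> {
    env::var("OPENAI_API_KEY")
        .ok()
        .filter(|key| !key.trim().is_empty())
}
```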
<img width="473" height="140" alt="Screenshot 2025-09-04 at 3 51 04 PM"
src="https://github.com/user-attachments/assets/c11bbd5b-8a4d-4d71-90fd-34130460f9d9"
/>
<img width="726" height="254" alt="Screenshot 2025-09-04 at 3 51 32 PM"
src="https://github.com/user-attachments/assets/6cc76b34-309a-4387-acbc-15ee5c756db9"
/>
This PR changes the get-history op to a get-path op; forking will then use a
path. This gives us one unified codepath for resuming/forking conversations
and helps keep rollout history in order. It also fixes a bug where you
wouldn't see the UI when resuming after forking.
Also, simplify the streaming behavior.
This fixes a number of display issues with streaming markdown, and paves
the way for better markdown features (e.g. customizable styles, syntax
highlighting, markdown-aware wrapping).
Not currently supported:
- footnotes
- tables
- reference-style links
This PR adds an `images` field to the existing `UserMessageEvent` so we
can encode zero or more images associated with a user message. This
allows images to be restored when conversations are restored.
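A hedged sketch of the event shape after this change; everything other than the new `images` field is an assumption about the existing struct:

```rust
use serde::{Deserialize, Serialize};
use std::path::PathBuf;

#[derive(Debug, Serialize, Deserialize)]
pub struct UserMessageEvent {
    pub message: String,
    /// Zero or more images attached to the user message, so they can be
    /// restored along with the conversation.
    #[serde(default)]
    pub images: Vec<PathBuf>,
}
```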
The previous config approach had a few issues:
1. It was part of the config but not designed to be used externally
2. It had to be wired through many places (look at the +/- on this PR)
3. It wasn't guaranteed to be set consistently everywhere because we don't
have a well-defined way for configs to stack. For example, the extension
would configure it during newConversation, but anything that happened outside
of that (like login) wouldn't get it.
This env var approach is cleaner and also creates one less thing we have
to deal with when coming up with a better holistic story around configs.
One downside is that I removed the unit test for the override because I
didn't want to deal with setting the global env or spawning child processes
and figuring out how to introspect their originator header. The new code is
simple enough, and I tested it e2e, so I feel this is still worth it.
Adding the `rollout_path` to the `NewConversationResponse` makes it so a
client can perform subsequent operations on a `(ConversationId,
PathBuf)` pair. #3353 will introduce support for `ArchiveConversation`.
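A hedged sketch of the response shape with the new field; the other field and the `ConversationId` stand-in are assumptions for illustration:

```rust
use serde::{Deserialize, Serialize};
use std::path::PathBuf;

// Stand-in for the codebase's real conversation id type.
type ConversationId = String;

#[derive(Debug, Serialize, Deserialize)]
pub struct NewConversationResponse {
    pub conversation_id: ConversationId,
    /// Rollout file backing this conversation; lets a client key subsequent
    /// operations on a (ConversationId, PathBuf) pair.
    pub rollout_path: PathBuf,
}
```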
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/3352).
* #3353
* __->__ #3352
This PR does multiple things that are necessary for conversation resume
to work from the extension. I wanted to make sure everything worked so
these changes wound up in one PR:
1. Generate more ts types
2. Resume rollout history files rather than creating a new one on every
resume, so you don't see a duplicate conversation in history each time.
Chatted with @aibrahim-oai to verify this.
3. Return conversation_id in conversation summaries
4. [Cleanup] Use serde and strong types for a lot of the rollout file
parsing
- In the bottom line of the TUI, print the number of tokens to 3 sigfigs
with an SI suffix, e.g. "1.23K".
- Elsewhere, when we print a number, I figure it's worthwhile to print the
exact value, because e.g. it's a summary of your session. There we print the
numbers comma-separated.
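A rough sketch of the 3-significant-figure SI formatting described in the first bullet; the real TUI helper's name, suffixes, and rounding at unit boundaries may differ:

```rust
// Format a count to ~3 significant figures with an SI suffix, e.g. 1_234 -> "1.23K".
fn format_si(n: u64) -> String {
    const UNITS: [(f64, &str); 4] = [(1e12, "T"), (1e9, "G"), (1e6, "M"), (1e3, "K")];
    for (scale, suffix) in UNITS {
        if (n as f64) >= scale {
            let v = n as f64 / scale;
            // Vary decimal places with magnitude to keep three significant figures.
            let s = if v >= 100.0 {
                format!("{v:.0}")
            } else if v >= 10.0 {
                format!("{v:.1}")
            } else {
                format!("{v:.2}")
            };
            return format!("{s}{suffix}");
        }
    }
    n.to_string()
}
```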
#### Summary
- highlight proposed command previews with the shared bash syntax
highlighter
- keep the Proposed Command section consistent with other execution
renderings
Bumps [image](https://github.com/image-rs/image) from 0.25.6 to 0.25.8.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/image-rs/image/blob/v0.25.8/CHANGES.md">image's
changelog</a>.</em></p>
<blockquote>
<h3>Version 0.25.8</h3>
<p>Re-release of <code>0.25.7</code></p>
<p>Fixes:</p>
<ul>
<li>Reverted a signature change to <code>load_from_memory</code> that
led to large-scale
type inference breakage despite being technically compatible.</li>
<li>Color conversion Luma to Rgb used incorrect coefficients instead of
broadcasting.</li>
</ul>
<h3>Version 0.25.7 (yanked)</h3>
<p>Features:</p>
<ul>
<li>Added an API for external image format implementations to register
themselves as decoders for a specific format in <code>image</code> (<a
href="https://redirect.github.com/image-rs/image/issues/2372">#2372</a>)</li>
<li>Added <a
href="https://www.color.org/iccmax/download/CICP_tag_and_type_amendment.pdf">CICP</a>
awareness via <a href="https://crates.io/crates/moxcms">moxcms</a> to
support color spaces (<a
href="https://redirect.github.com/image-rs/image/issues/2531">#2531</a>).
The support for transforming is limited for now and will be gradually
expanded.</li>
<li>You can now embed Exif metadata when writing JPEG, PNG and WebP
images (<a
href="https://redirect.github.com/image-rs/image/issues/2537">#2537</a>,
<a
href="https://redirect.github.com/image-rs/image/issues/2539">#2539</a>)</li>
<li>Added functions to extract orientation from Exif metadata and
optionally clear it in the Exif chunk (<a
href="https://redirect.github.com/image-rs/image/issues/2484">#2484</a>)</li>
<li>Serde support for more types (<a
href="https://redirect.github.com/image-rs/image/issues/2445">#2445</a>)</li>
<li>PNM encoder now supports writing 16-bit images (<a
href="https://redirect.github.com/image-rs/image/issues/2431">#2431</a>)</li>
</ul>
<p>API improvements:</p>
<ul>
<li><code>save</code>, <code>save_with_format</code>,
<code>write_to</code> and <code>write_with_encoder</code> methods on
<code>DynamicImage</code> now automatically convert the pixel format
when necessary instead of returning an error (<a
href="https://redirect.github.com/image-rs/image/issues/2501">#2501</a>)</li>
<li>Added <code>DynamicImage::has_alpha()</code> convenience method</li>
<li>Implemented <code>TryFrom<ExtendedColorType></code> for
<code>ColorType</code> (<a
href="https://redirect.github.com/image-rs/image/issues/2444">#2444</a>)</li>
<li>Added <code>const HAS_ALPHA</code> to trait <code>Pixel</code></li>
<li>Unified the error for unsupported encoder colors (<a
href="https://redirect.github.com/image-rs/image/issues/2543">#2543</a>)</li>
<li>Added a <code>hooks</code> module to customize builtin behavior,
<code>register_format_detection_hook</code> and
<code>register_decoding_hook</code> for the determining format of a file
and selecting an <code>ImageDecoder</code> implementation respectively.
(<a
href="https://redirect.github.com/image-rs/image/issues/2372">#2372</a>)</li>
</ul>
<p>Performance improvements:</p>
<ul>
<li>Gaussian blur (<a
href="https://redirect.github.com/image-rs/image/issues/2496">#2496</a>)
and box blur (<a
href="https://redirect.github.com/image-rs/image/issues/2515">#2515</a>)
are now faster</li>
<li>Improve compilation times by avoiding unnecessary instantiation of
generic functions (<a
href="https://redirect.github.com/image-rs/image/issues/2468">#2468</a>,
<a
href="https://redirect.github.com/image-rs/image/issues/2470">#2470</a>)</li>
</ul>
<p>Bug fixes:</p>
<ul>
<li>Many improvements to image format decoding: TIFF, WebP, AVIF, PNG,
GIF, BMP, TGA</li>
<li>Fixed <code>GifEncoder::encode()</code> ignoring the speed parameter
and always using the slowest speed (<a
href="https://redirect.github.com/image-rs/image/issues/2504">#2504</a>)</li>
<li><code>.pnm</code> is now recognized as a file extension for the PNM
format (<a
href="https://redirect.github.com/image-rs/image/issues/2559">#2559</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="98b001da0d"><code>98b001d</code></a>
Merge pull request <a
href="https://redirect.github.com/image-rs/image/issues/2592">#2592</a>
from image-rs/release-0.25.8</li>
<li><a
href="f86232081c"><code>f862320</code></a>
Metadata and changelog for a 0.25.8</li>
<li><a
href="3b1c1db11d"><code>3b1c1db</code></a>
Merge pull request <a
href="https://redirect.github.com/image-rs/image/issues/2593">#2593</a>
from image-rs/luma-to-rgb-transform-is-broadcast</li>
<li><a
href="1f574d3d1e"><code>1f574d3</code></a>
Replace manual rounding code with f32::round</li>
<li><a
href="545cb3788b"><code>545cb37</code></a>
Color tests in the middle of dynamic range</li>
<li><a
href="9882fa9fe0"><code>9882fa9</code></a>
Remove coefficients from luma_expand</li>
<li><a
href="70b9aa3ef1"><code>70b9aa3</code></a>
Revert "Make load_from_memory generic"</li>
<li><a
href="b94c33379f"><code>b94c333</code></a>
Enable CI for backport branch</li>
<li><a
href="a24556bc87"><code>a24556b</code></a>
Merge pull request <a
href="https://redirect.github.com/image-rs/image/issues/2581">#2581</a>
from image-rs/release-0.25.7</li>
<li><a
href="9175dbc70e"><code>9175dbc</code></a>
Fix readme typo (<a
href="https://redirect.github.com/image-rs/image/issues/2580">#2580</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/image-rs/image/compare/v0.25.6...v0.25.8">compare
view</a></li>
</ul>
</details>
<br />
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits
that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after
your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge
and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating
it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all
of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop
Dependabot creating any more for this major version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop
Dependabot creating any more for this minor version (unless you reopen
the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop
Dependabot creating any more for this dependency (unless you reopen the
PR or upgrade to it yourself)
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
We're trying to migrate from `session_id: Uuid` to `conversation_id:
ConversationId`. Not only does this give us more type safety, it also
unifies our terminology across Codex; with the implementation of session
resuming, a conversation (which can span multiple sessions) is the more
appropriate concept.
I started this impl on https://github.com/openai/codex/pull/3219 as part
of getting resume working in the extension but it's big enough that it
should be broken out.
This updates the ctrl + c behavior: pressing ctrl + c now clears the current
prompt if it contains text.
I also updated the ctrl + c hint text to show `^c to interrupt` instead
of `^c to quit` if there is an active conversation.
Two things I don't love:
1. You can currently interrupt a conversation with escape or ctrl + c
(not related to this PR and maybe fine)
2. The bottom row hint text always says `^c to quit` but this PR doesn't
really make that worse.
https://github.com/user-attachments/assets/6eddadec-0d84-4fa7-abcb-d6f5a04e5748
Fixes https://github.com/openai/codex/issues/3126
What
- Show compact elapsed time in the TUI status indicator: Xs, MmSSs,
HhMMmSSs.
- Add private helper fmt_elapsed_compact with a unit test.
Why
- Seconds‑only becomes hard to read during longer runs; minutes/hours
improve clarity without extra noise.
How
- Implemented in codex-rs/tui/src/status_indicator_widget.rs only.
- The helper is used when rendering the existing “Working/Thinking”
timer.
- No changes to codex-common::elapsed::format_duration or other crates.
Scope/Impact
- TUI‑only; no public API changes; minimal risk.
- Snapshot tests should remain unchanged (most show “0s”).
Before/After
- Working (65s • Esc to interrupt) → Working (1m05s • Esc to interrupt)
- Working (3723s • …) → Working (1h02m03s • …)
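A self-contained sketch matching the examples above (the real helper in `status_indicator_widget.rs` may differ in details):

```rust
// Compact elapsed formatting: Xs, MmSSs, HhMMmSSs.
fn fmt_elapsed_compact(total_secs: u64) -> String {
    let (h, m, s) = (total_secs / 3600, (total_secs % 3600) / 60, total_secs % 60);
    if h > 0 {
        format!("{h}h{m:02}m{s:02}s")
    } else if m > 0 {
        format!("{m}m{s:02}s")
    } else {
        format!("{s}s")
    }
}
// fmt_elapsed_compact(65)   -> "1m05s"
// fmt_elapsed_compact(3723) -> "1h02m03s"
```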
Tests
- Unit: fmt_elapsed_compact_formats_seconds_minutes_hours.
- Local checks: cargo fmt --all, cargo clippy -p codex-tui -- -D
warnings, cargo test -p codex-tui.
Notes
- Open to adjusting the exact format or moving the helper if maintainers
prefer a shared location.
Signed-off-by: Enrique Moreno Tent <enriquemorenotent@gmail.com>
When item ids are sent to the Responses API, it loads them from the database
and ignores the provided values. This adds extra latency.
Removing the mode that stores requests also allows us to simplify the code.
## Breaking change
The `disable_response_storage` configuration option is removed.
I'm not yet convinced I have the best heuristics for what to highlight,
but this feels like a useful step towards something a bit easier to
read, especially when the model is producing large commands.
<img width="669" height="589" alt="Screenshot 2025-09-03 at 8 21 56 PM"
src="https://github.com/user-attachments/assets/b9cbcc43-80e8-4d41-93c8-daa74b84b331"
/>
This also includes a fairly significant refactor of our line-wrapping logic.
TuiEvent is supposed to be purely events that come from the "driver",
i.e. events from the terminal. Everything app-specific should be an
AppEvent. In this case, it didn't need to be an event at all.
#### Summary
- Emit a “Proposed Command” history cell when an ExecApprovalRequest
arrives (parity with proposed patches).
- Simplify the approval dialog: show only the reason/instructions; move
the command preview into history.
- Make approval/abort decision history concise:
- Single line snippet; if multiline, show first line + " ...".
- Truncate to 80 graphemes with ellipsis for very long commands.
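A hedged sketch of those snippet rules, using the `unicode-segmentation` crate for grapheme counting (the real helper may differ):

```rust
use unicode_segmentation::UnicodeSegmentation;

// First line only (plus " ..." if the command was multiline), truncated to
// 80 graphemes with a trailing ellipsis for very long commands.
fn decision_snippet(command: &str) -> String {
    let mut snippet = command.lines().next().unwrap_or("").to_string();
    if command.lines().count() > 1 {
        snippet.push_str(" ...");
    }
    let graphemes: Vec<&str> = snippet.graphemes(true).collect();
    if graphemes.len() > 80 {
        format!("{}…", graphemes[..80].concat())
    } else {
        snippet
    }
}
```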
#### Details
- History
- Add `new_proposed_command` to render a header and indented command
preview.
- Use shared `prefix_lines` helper for first/subsequent line prefixes.
- Approval UI
- `UserApprovalWidget` no longer renders the command in the modal; shows
optional `reason` text only.
- Decision history renders an inline, dimmed snippet per rules above.
- Tests (snapshot-based)
- Proposed/decision flow for short command.
- Proposed multi-line + aborted decision snippet with “ ...”.
- Very long one-line command -> truncated snippet with “…”.
- Updated existing exec approval snapshots and test reasons.
<img width="1053" height="704" alt="Screenshot 2025-09-03 at 11 57
35 AM"
src="https://github.com/user-attachments/assets/9ed4c316-9daf-4ac1-80ff-7ae1f481dd10"
/>
after approving:
<img width="1053" height="704" alt="Screenshot 2025-09-03 at 11 58
18 AM"
src="https://github.com/user-attachments/assets/a44e243f-eb9d-42ea-87f4-171b3fb481e7"
/>
rejection:
<img width="1053" height="207" alt="Screenshot 2025-09-03 at 11 58
45 AM"
src="https://github.com/user-attachments/assets/a022664b-ae0e-4b70-a388-509208707934"
/>
big command:
https://github.com/user-attachments/assets/2dd976e5-799f-4af7-9682-a046e66cc494
We had multiple issues with context size calculation:
1. The `initial_prompt_tokens` calculation based on cache size is not
reliable; cache misses might set it to a much higher value. For now it is
hardcoded to a safer constant.
2. The input context size for GPT-5 is 272k (that's where the 33% figure
came from).
This PR fixes both issues.
Summary:
- pause the status timer while waiting on approval modals
- expose deterministic pause/resume helpers to avoid sleep-based tests
- simplify bottom pane timer handling now that the widget owns the clock
When the pager is scrolled to the bottom of the buffer, keep it there.
This should make transcript mode feel a bit more "alive". I've also seen
some confusion about what transcript mode does and doesn't show that I think
has been related to it not pinning scroll.
#### Summary
- render the edit queued message shortcut with the ⌥ modifier on macOS
builds
- add a helper for status indicator snapshot suffixes
- record macOS-specific snapshots for the status indicator widget
#### Summary
Avoid a potential panic when rendering the active execution cell when
the allocated area has zero height.
#### Changes
- Guard rendering with `active_cell_area.height > 0` and presence of
`active_exec_cell`.
- Use `saturating_add(1)` for the Y offset to avoid overflow.
- Render via `active_exec_cell.as_ref().unwrap().render_ref(...)` after
the explicit `is_some` check.
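A self-contained sketch of the guard pattern; `ExecCell` and `render_at` are stand-ins for the real ratatui widget types:

```rust
// Stand-in for the active execution cell widget.
struct ExecCell;

impl ExecCell {
    fn render_at(&self, _y: u16) {
        // Actual drawing into the terminal buffer elided.
    }
}

fn render_active_cell(active_exec_cell: Option<&ExecCell>, area_y: u16, area_height: u16) {
    // Skip rendering when the allocated area has zero height or there is no
    // active cell, avoiding the panic described above.
    if area_height > 0 {
        if let Some(cell) = active_exec_cell {
            // saturating_add avoids overflow when computing the Y offset.
            cell.render_at(area_y.saturating_add(1));
        }
    }
}
```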
Adds a TUI resume flow with an interactive picker and quick resume.
- CLI:
- --resume / -r: open picker to resume a prior session
- --continue / -l: resume the most recent session (no picker)
- Behavior on resume: initial history is replayed, welcome banner
hidden, and the first redraw is suppressed to avoid flicker.
- Implementation:
- New tui/src/resume_picker.rs (paginated listing via
RolloutRecorder::list_conversations)
- App::run accepts ResumeSelection; resumes from disk when requested
- ChatWidget refactor with ChatWidgetInit and new_from_existing; replays
initial messages
- Tests: cover picker sorting/preview extraction and resumed-history
rendering.
- Docs: getting-started updated with flags and picker usage.
https://github.com/user-attachments/assets/1bb6469b-e5d1-42f6-bec6-b1ae6debda3b
This PR does the following:
- Divides user msgs into 3 categories: plain, user instructions, and
environment context
- Centralizes adding user instructions and environment context to a degree
- Improves the integration testing
Building on top of #3123, specifically this
[comment](https://github.com/openai/codex/pull/3123#discussion_r2319885089).
We need to send the user message while ignoring the User Instructions
and Environment Context we attach.
### Overview
This PR introduces the following changes:
1. Adds a unified mechanism to convert ResponseItem into EventMsg.
2. Ensures that when a session is initialized with initial history, a
vector of EventMsg is sent along with the session configuration. This
allows clients to re-render the UI accordingly.
3. Adds integration testing.
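A hedged sketch of the idea in (1) and (2); `ResponseItem` and `EventMsg` here are simplified stand-ins for the real codex-rs types, and the mapping rules are illustrative:

```rust
// Simplified stand-ins; the real enums carry many more variants.
enum ResponseItem {
    Message { role: String, text: String },
    Other, // tool calls, apply-patch calls, etc., handled elsewhere
}

enum EventMsg {
    UserMessage(String),
    AgentMessage(String),
}

// Unified conversion: items with no direct UI event yield None.
fn event_msg_from_response_item(item: &ResponseItem) -> Option<EventMsg> {
    match item {
        ResponseItem::Message { role, text } if role.as_str() == "user" => {
            Some(EventMsg::UserMessage(text.clone()))
        }
        ResponseItem::Message { text, .. } => Some(EventMsg::AgentMessage(text.clone())),
        ResponseItem::Other => None,
    }
}

// On session init with history, this vector is sent alongside the session
// configuration so clients can re-render the UI.
fn initial_event_msgs(history: &[ResponseItem]) -> Vec<EventMsg> {
    history.iter().filter_map(event_msg_from_response_item).collect()
}
```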
### Caveats
This implementation does not send every EventMsg that was previously
dispatched to clients. The excluded events fall into two categories:
- **“Arguably” rolled-out events.** Examples include tool calls and
apply-patch calls. While these events are conceptually rolled out, we
currently only roll out ResponseItems. These events are already being handled
elsewhere and transformed into EventMsg before being sent.
- **Non-rolled-out events.** Certain events such as TurnDiff, Error, and
TokenCount are not rolled out at all.
### Future Directions
At present, resuming a session involves maintaining two states:
- **UI State.** Clients can replay most of the important UI from the provided
EventMsg history.
- **Model State.** The model receives the complete session history to
reconstruct its internal state.
This design provides a solid foundation. If, in the future, more precise
UI reconstruction is needed, we have two potential paths:
1. Introduce a third data structure that allows us to derive both
ResponseItems and EventMsgs.
2. Clearly divide responsibilities: the core system ensures the
integrity of the model state, while clients are responsible for
reconstructing the UI.
## Summary
This PR enables Codex to build and run on Android/Termux environments by
conditionally gating the arboard clipboard dependency for Android
targets.
## Key Changes
- **Android Compatibility**: Gate arboard dependency for Android targets
where clipboard access may be restricted
- **Build Fixes**: Add missing tempfile::Builder import for image
clipboard operations
- **Code Cleanup**: Remove unnecessary parentheses to resolve formatting
warnings
## Technical Details
### Clipboard Dependency Gating
- Uses conditional compilation to exclude arboard on Android targets
- Maintains full clipboard functionality on other platforms
- Prevents build failures on Android/Termux where system clipboard
access is limited
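A minimal sketch of the gating pattern, assuming arboard's standard `Clipboard` API (the real module layout and call sites differ); on the Cargo side the dependency would live in a target-specific dependencies table:

```rust
// Clipboard support is compiled only on non-Android targets.
#[cfg(not(target_os = "android"))]
fn read_clipboard_text() -> Option<String> {
    let mut clipboard = arboard::Clipboard::new().ok()?;
    clipboard.get_text().ok()
}

// On Android/Termux, system clipboard access is limited, so this is a no-op.
#[cfg(target_os = "android")]
fn read_clipboard_text() -> Option<String> {
    None
}
```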
### Import Fixes
- Adds missing tempfile::Builder import that was causing compilation
errors
- Ensures image clipboard operations work correctly when clipboard is
available
## Platform Support
- ✅ **Linux/macOS/Windows**: Full clipboard functionality maintained
- ✅ **Android/Termux**: Builds successfully without clipboard dependency
- ✅ **Other Unix platforms**: Unchanged behavior
## Testing
- ✅ Builds successfully on Android/Termux
- ✅ Maintains clipboard functionality on supported platforms
- ✅ No regression in existing functionality
This addresses the Android/Termux compatibility issues while keeping
clipboard functionality intact for platforms that support it.
- Summary:
  - Updated the hardcoded hyperlink shown when no MCP servers are configured
    to point at the canonical docs section:
    - From: codex-rs/config.md#mcp_servers (moved/obsolete)
    - To: docs/config.md#mcp_servers (correct GitHub path)
- Rationale:
  - The TUI link was pointing to a file that only redirects; this makes the
    link accurate and reduces user confusion.
- Validation:
  - Verified that the target anchor exists at:
    https://github.com/openai/codex/blob/main/docs/config.md#mcp_servers
  - UI behavior unchanged otherwise (rendering of link text remains “MCP
    docs”).
- Impact:
  - One-line change in TUI display logic; no functional behavior change.
Co-authored-by: Michael Bolin <mbolin@openai.com>
## Summary
- allow selection popups to specify their empty state message
- show a "loading..." placeholder in the file search popup while matches
are pending
- update other popup call sites to continue using a "no matches" message
## Testing
- just fmt
- just fix -p codex-tui
- cargo test -p codex-tui
------
https://chatgpt.com/codex/tasks/task_i_68b73e956e90832caf4d04a75fcc9c46