This PR is a follow-up to #5591. It allows users to choose which auth
storage mode they want by using the new
`cli_auth_credentials_store_mode` config.
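For illustration, here is a minimal sketch of the kind of enum that config value could map onto; the variant names and serde attributes below are assumptions, not the shipped definitions.

```rust
use serde::Deserialize;

/// Hypothetical shape of the value accepted by `cli_auth_credentials_store_mode`.
/// Variant names are illustrative; see the actual config docs for the supported values.
#[derive(Debug, Clone, Copy, Default, Deserialize, PartialEq, Eq)]
#[serde(rename_all = "kebab-case")]
pub enum AuthCredentialsStoreMode {
    /// Persist credentials to auth.json on disk (previous behavior).
    #[default]
    File,
    /// Persist credentials in the OS keyring only.
    Keyring,
    /// Prefer the keyring, but fall back to the file when it is unavailable.
    Auto,
}
```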
## Summary
- Coerce Windows `workspace-write` configs back to read-only, surface
the forced downgrade in the approvals popup,
and funnel users toward WSL or Full Access.
- Add WSL installation instructions to the Auto preset on Windows while
keeping the preset available for other
platforms.
- Skip the trust-on-first-run prompt on native Windows so new folders
remain read-only without additional
confirmation.
- Expose a structured sandbox policy resolution from config to flag Windows downgrades (see the sketch after this list) and adjust tests (core, exec, TUI) to reflect the new behavior; provide a Windows-only approvals snapshot.
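A minimal sketch of what the structured resolution mentioned in the last bullet could look like; the type and function names here are assumptions, not the actual API.

```rust
/// Illustrative only: a resolved sandbox policy paired with a flag noting
/// whether the configured mode was downgraded on native Windows.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SandboxMode {
    ReadOnly,
    WorkspaceWrite,
    DangerFullAccess,
}

#[derive(Debug, Clone)]
pub struct SandboxPolicyResolution {
    pub effective: SandboxMode,
    /// True when a `workspace-write` config was coerced back to read-only
    /// because the platform is native Windows (not WSL).
    pub forced_read_only_on_windows: bool,
}

pub fn resolve_sandbox_mode(configured: SandboxMode) -> SandboxPolicyResolution {
    let windows_downgrade = cfg!(windows) && configured == SandboxMode::WorkspaceWrite;
    SandboxPolicyResolution {
        effective: if windows_downgrade {
            SandboxMode::ReadOnly
        } else {
            configured
        },
        forced_read_only_on_windows: windows_downgrade,
    }
}
```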
## Testing
- `cargo fmt`
- `cargo test -p codex-core config::tests::add_dir_override_extends_workspace_writable_roots`
- `cargo test -p codex-exec suite::resume::exec_resume_preserves_cli_configuration_overrides`
- `cargo test -p codex-tui chatwidget::tests::approvals_selection_popup_snapshot`
- `cargo test -p codex-tui approvals_popup_includes_wsl_note_for_auto_mode`
- `cargo test -p codex-tui windows_skips_trust_prompt`
- `just fix -p codex-core`
- `just fix -p codex-tui`
It's pretty amazing we have gotten here without the ability for the model to see image content from MCP tool calls.
This PR builds off of #4391 and fixes #4819. I would like @KKcorps to get adequate credit here, but I also want to get this fix in ASAP; I gave him a week to update it and haven't gotten a response, so I'm going to take it across the finish line.
This test highlights how absurd the current situation is. I asked the model to read this image using the Chrome MCP:
<img width="2378" height="674" alt="image"
src="https://github.com/user-attachments/assets/9ef52608-72a2-4423-9f5e-7ae36b2b56e0"
/>
After this change, it correctly outputs:
> Captured the page: image shows a dark terminal-style UI labeled
`OpenAI Codex (v0.0.0)` with prompt `model: gpt-5-codex medium` and
working directory `/codex/codex-rs`
(and more)
Before this change, it said:
> Took the full-page screenshot you asked for. It shows a long,
horizontally repeating pattern of stylized people in orange, light-blue,
and mustard clothing, holding hands in alternating poses against a white
background. No text or other graphics, just rows of flat illustration
stretching off to the right.
Without this change, the Figma, Playwright, Chrome, and other visual MCP
servers are pretty much entirely useless.
I tested this change with the OpenAI Responses API as well as a third-party Completions API.
Move the truncation logic into conversation history so it can be applied to any tool output. This will help us avoid edge cases when truncating tool calls and MCP calls.
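A rough sketch of the idea, with an assumed byte limit and method names that are illustrative rather than the crate's real API:

```rust
/// Illustrative sketch: one truncation path for every kind of tool output.
const MAX_TOOL_OUTPUT_BYTES: usize = 16 * 1024;

pub struct ConversationHistory {
    items: Vec<String>,
}

impl ConversationHistory {
    /// Record a tool/MCP output, truncating it in one place instead of at
    /// each call site.
    pub fn record_tool_output(&mut self, output: &str) {
        self.items.push(Self::truncate(output, MAX_TOOL_OUTPUT_BYTES));
    }

    fn truncate(s: &str, max_bytes: usize) -> String {
        if s.len() <= max_bytes {
            return s.to_string();
        }
        // Cut on a char boundary so we never split a UTF-8 sequence.
        let mut end = max_bytes;
        while !s.is_char_boundary(end) {
            end -= 1;
        }
        format!("{}… [truncated {} bytes]", &s[..end], s.len() - end)
    }
}
```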
Follow-up PR to #5569. Adds keyring support for auth storage in the Codex CLI, as well as a hybrid mode (defaults to persisting in the keychain but falls back to the file when the keychain is unavailable).
It also refactors the keyring store implementation out of rmcp-client [here](https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/oauth.rs) into a new keyring-store crate.
There will be a follow-up that picks the right credential mode depending
on the config, instead of hardcoding `AuthCredentialsStoreMode::File`.
This PR introduces a new `Auth Storage` abstraction layer that handles reading, writing, and loading auth tokens based on the `AuthCredentialsStoreMode`. It is similar to how we handle MCP client OAuth [here](https://github.com/openai/codex/blob/main/codex-rs/rmcp-client/src/oauth.rs). Instead of reading and writing auth tokens directly from disk, Codex CLI workflows should now go through this auth storage via its public helper functions.
This PR is just a refactor of the current code so the behavior stays the
same. We will add support for keyring and hybrid mode in follow-up PRs.
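As an illustration only, here is a minimal sketch of what such an abstraction could look like; the trait, struct, and method names are assumptions, and the real code is reached through the crate's public helper functions.

```rust
use std::io;
use std::path::PathBuf;

/// Illustrative token payload; the real struct carries the CLI's auth tokens.
#[derive(Debug, Clone)]
pub struct AuthDotJson {
    pub tokens: Option<String>,
}

/// Hypothetical abstraction: callers read and write credentials through this
/// trait instead of touching auth.json on disk directly.
pub trait AuthStorage {
    fn load(&self) -> io::Result<Option<AuthDotJson>>;
    fn save(&self, auth: &AuthDotJson) -> io::Result<()>;
    fn delete(&self) -> io::Result<()>;
}

/// File-backed store, roughly matching the previous direct-to-disk behavior.
/// Keyring and hybrid stores from the follow-up PRs would be further impls.
pub struct FileAuthStorage {
    path: PathBuf,
}

impl FileAuthStorage {
    pub fn new(path: PathBuf) -> Self {
        Self { path }
    }
}

impl AuthStorage for FileAuthStorage {
    fn load(&self) -> io::Result<Option<AuthDotJson>> {
        match std::fs::read_to_string(&self.path) {
            Ok(contents) => Ok(Some(AuthDotJson { tokens: Some(contents) })),
            Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(None),
            Err(e) => Err(e),
        }
    }

    fn save(&self, auth: &AuthDotJson) -> io::Result<()> {
        std::fs::write(&self.path, auth.tokens.clone().unwrap_or_default())
    }

    fn delete(&self) -> io::Result<()> {
        std::fs::remove_file(&self.path)
    }
}
```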
This PR does the following:
1. Changes `try_refresh_token` to handle the case where the endpoint
returns a response without an `id_token`. The OpenID spec indicates that
this field is optional and clients should not assume it's present.
2. Changes `attempt_stream_responses` to propagate token refresh errors rather than silently ignoring them.
3. Fixes a typo in a couple of error messages (unrelated to the above, but something I noticed in passing): "reconnect" should be spelled without a hyphen.
This PR does not implement the additional suggestion from @pakrym-oai
that we should sign out when receiving `refresh_token_expired` from the
refresh endpoint. Leaving this as a follow-on because I'm undecided on
whether this should be implemented in `try_refresh_token` or its
callers.
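A hedged sketch of change (1): model `id_token` as optional so a refresh response without it still deserializes, and keep the previous value when the endpoint omits a field. Change (2) is then a matter of returning the refresh `Result` to the caller instead of discarding the error. The struct and field handling below are illustrative, not the real implementation.

```rust
use serde::Deserialize;

/// Token refresh responses per OpenID Connect: `id_token` is optional, so the
/// field is modeled as `Option` instead of failing deserialization.
#[derive(Debug, Deserialize)]
struct RefreshResponse {
    access_token: String,
    refresh_token: Option<String>,
    id_token: Option<String>,
}

#[derive(Debug, Clone)]
struct Tokens {
    access_token: String,
    refresh_token: String,
    id_token: Option<String>,
}

fn apply_refresh(current: &Tokens, resp: RefreshResponse) -> Tokens {
    Tokens {
        access_token: resp.access_token,
        // Keep the previously stored refresh/id tokens if the endpoint omitted them.
        refresh_token: resp
            .refresh_token
            .unwrap_or_else(|| current.refresh_token.clone()),
        id_token: resp.id_token.or_else(|| current.id_token.clone()),
    }
}
```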
This adds an RPC to the app server to get the `ConversationSummary` via
a rollout path. Now that the VS Code extension supports showing the
Codex UI in an editor panel where the URI of the panel maps to the
rollout file, we need to be able to get the `ConversationSummary` from
the rollout file directly.
Because conversations that use the Responses API can have encrypted
reasoning messages, trying to resume a conversation with a different
provider could lead to confusing "failed to decrypt" errors. (This is
reproducible by starting a conversation using ChatGPT login and resuming
it as a conversation that uses OpenAI models via Azure.)
This changes `ListConversationsParams` to take a `model_providers:
Option<Vec<String>>` and adds `model_provider` on each
`ConversationSummary` it returns so these cases can be disambiguated.
Note this ended up making changes to
`codex-rs/core/src/rollout/tests.rs` because it had a number of cases
where it expected `Some` for the value of `next_cursor`, but the list of
rollouts was complete, so according to this docstring:
bcd64c7e72/codex-rs/app-server-protocol/src/protocol.rs (L334-L337)
If there are no more items to return, then `next_cursor` should be
`None`. This PR updates that logic.
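For reference, a sketch of the extended shapes and the `next_cursor` rule described above; these are illustrative definitions, and the authoritative ones live in `app-server-protocol`.

```rust
use serde::{Deserialize, Serialize};

/// Illustrative shapes only; see codex-rs/app-server-protocol/src/protocol.rs
/// for the real definitions.
#[derive(Debug, Serialize, Deserialize)]
pub struct ListConversationsParams {
    pub page_size: Option<usize>,
    pub cursor: Option<String>,
    /// New: restrict results to conversations created with these providers.
    pub model_providers: Option<Vec<String>>,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct ConversationSummary {
    pub rollout_path: String,
    pub preview: String,
    /// New: lets clients avoid resuming e.g. a ChatGPT-login conversation
    /// against a provider that cannot decrypt its reasoning items.
    pub model_provider: String,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct ListConversationsResponse {
    pub items: Vec<ConversationSummary>,
    /// `None` when the listing is complete; `Some(cursor)` only when more
    /// pages remain (the behavior the updated tests now expect).
    pub next_cursor: Option<String>,
}
```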
Currently, `approval_policy` is supported in profiles, but
`sandbox_mode` is not. This PR adds support for `sandbox_mode`.
Note: a fix for this was submitted in [this
PR](https://github.com/openai/codex/pull/2397), but the underlying code
has changed significantly since then.
This addresses issue #3034.
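A small sketch of the intended behavior, assuming profile values override the top-level config the same way `approval_policy` already does; the field names here are illustrative.

```rust
use serde::Deserialize;

/// Illustrative profile shape: `approval_policy` was already supported per
/// profile; this change lets `sandbox_mode` be set the same way.
#[derive(Debug, Default, Deserialize)]
pub struct ConfigProfile {
    pub model: Option<String>,
    pub approval_policy: Option<String>,
    pub sandbox_mode: Option<String>,
}

/// Assumed precedence for this sketch: a profile value wins over the
/// top-level config value when both are present.
pub fn effective_sandbox_mode(
    top_level: Option<String>,
    profile: &ConfigProfile,
) -> Option<String> {
    profile.sandbox_mode.clone().or(top_level)
}
```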
This PR adds support for a model-based summary and risk assessment for
commands that violate the sandbox policy and require user approval. This
aids the user in evaluating whether the command should be approved.
The feature works by taking a failed command, passing it back to the model, and asking it to summarize the command and assign a risk level (low, medium, high) and a risk category (e.g. "data deletion" or "data exfiltration"). It uses a new conversation thread so the context in the existing thread doesn't influence the answer. If the call to the model fails or takes longer than 5 seconds, it falls back to the current behavior.
For now, this is an experimental feature and is gated by a config key
`experimental_sandbox_command_assessment`.
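A rough sketch of that flow, assuming a hypothetical helper name: run the assessment in a fresh conversation, bound it to five seconds, and fall back to today's prompt on error or timeout.

```rust
use std::time::Duration;
use tokio::time::timeout;

/// Illustrative result shape for the model-based assessment.
#[derive(Debug, Clone)]
pub struct CommandRiskAssessment {
    pub summary: String,
    pub risk_level: String,    // "low" | "medium" | "high"
    pub risk_category: String, // e.g. "data deletion", "data exfiltration"
}

/// Hypothetical helper: ask the model (in a fresh conversation, so existing
/// context cannot bias the answer) to assess a command that hit the sandbox.
/// On error or after 5 seconds, return `None` and keep today's behavior.
pub async fn assess_command<F, Fut>(ask_model: F) -> Option<CommandRiskAssessment>
where
    F: FnOnce() -> Fut,
    Fut: std::future::Future<Output = Result<CommandRiskAssessment, String>>,
{
    match timeout(Duration::from_secs(5), ask_model()).await {
        Ok(Ok(assessment)) => Some(assessment),
        _ => None, // model error or timeout: fall back to the plain prompt
    }
}
```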
Here is a screenshot of the approval prompt showing the risk assessment
and summary.
<img width="723" height="282" alt="image"
src="https://github.com/user-attachments/assets/4597dd7c-d5a0-4e9f-9d13-414bd082fd6b"
/>
## Summary
- wrap the default `reqwest::Client` inside a new `CodexHttpClient`/`CodexRequestBuilder` pair and log the HTTP method, URL, and status for each request (a sketch follows this list)
- update the auth/model/provider plumbing to use the new builder helpers
so headers and bearer auth continue to be applied consistently
- add the shared `http` dependency that backs the header conversion
helpers
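A minimal sketch of the wrapper idea from the first bullet; the real `CodexHttpClient` exposes a fuller builder surface, so treat the names and signatures below as assumptions.

```rust
use reqwest::{Client, Method, Response};

/// Illustrative wrapper: the real code also exposes a CodexRequestBuilder so
/// headers and bearer auth keep flowing through the same helpers.
#[derive(Clone)]
pub struct CodexHttpClient {
    inner: Client,
}

impl CodexHttpClient {
    pub fn new(inner: Client) -> Self {
        Self { inner }
    }

    pub async fn execute(&self, method: Method, url: &str) -> reqwest::Result<Response> {
        let resp = self.inner.request(method.clone(), url).send().await?;
        // One log line per request: method, URL, and resulting status.
        tracing::debug!(%method, url, status = %resp.status(), "http request");
        Ok(resp)
    }
}
```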
## Testing
- `CODEX_SANDBOX=seatbelt CODEX_SANDBOX_NETWORK_DISABLED=1 cargo test -p
codex-core`
- `CODEX_SANDBOX=seatbelt CODEX_SANDBOX_NETWORK_DISABLED=1 cargo test -p
codex-chatgpt`
- `CODEX_SANDBOX=seatbelt CODEX_SANDBOX_NETWORK_DISABLED=1 cargo test -p
codex-tui`
------
https://chatgpt.com/codex/tasks/task_i_68fa5038c17483208b1148661c5873be
1. I have seen too many reports of people hitting startup timeout errors
and thinking Codex is broken. Hopefully this will help people
self-serve. We may also want to consider raising the timeout to ~15s.
2. Make it clearer what a PAT (personal access token) is in the GitHub error message.
<img width="2378" height="674" alt="CleanShot 2025-10-23 at 22 05 06"
src="https://github.com/user-attachments/assets/d148ce1d-ade3-4511-84a4-c164aefdb5c5"
/>
I want to centralize input processing and management in `ConversationHistory`. This would need `ConversationHistory` to have access to `token_info` (i.e. so it can prevent adding an oversized input to the history). Besides, it makes more sense to have this on `ConversationHistory` than on `state`.
Walk the sessions tree instead of using file_search so sessions can be resumed when the CODEX_HOME directory is gitignored. Add a regression test that covers a .gitignore'd sessions directory.
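A minimal sketch of the approach, assuming a hypothetical helper name: recurse through the sessions directory directly rather than going through `file_search`, which honors `.gitignore`.

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Illustrative: list rollout files by walking CODEX_HOME/sessions directly,
/// so a .gitignore covering CODEX_HOME can no longer hide resumable sessions.
pub fn collect_rollouts(dir: &Path, out: &mut Vec<PathBuf>) -> io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_dir() {
            collect_rollouts(&path, out)?;
        } else if path.extension().is_some_and(|ext| ext == "jsonl") {
            out.push(path);
        }
    }
    Ok(())
}
```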
Fixes #5247. Fixes #5412.
---------
Co-authored-by: Owen Lin <owen@openai.com>
Currently we collect all turn items in a vector, then add them to the history on success. This results in losing those items on errors, including aborting with `ctrl+c`.
This PR:
- Adds the ability for tool calls to handle cancellation
- Bubbles the turn items up to where we record this info
Admittedly, this is ad-hoc logic that doesn't handle a lot of error edge cases. The right thing to do is to record to the history on the spot as `items`/tool call outputs arrive. However, this isn't possible because different `task_kind`s have different `conversation_histories`, and `try_run_turn` has no idea which thread we are using. We also cannot pass an `Arc` to the `conversation_histories` because it's a private element of `state`.
That said, `abort` is the most common case and we should cover it until we remove `task_kind`.
We are doing some ad-hoc logic while dealing with conversation history. Ideally, we shouldn't mutate `Vec<ResponseItem>` manually at all and should depend on `ConversationHistory` for those changes.
Those changes are:
- Adding input to the history
- Removing items from the history
- Correcting history
I am also adding some `error` logs for cases we shouldn't ideally face. For example, we shouldn't be missing `tool calls` or `outputs`, and we shouldn't hit `ContextWindowExceeded` while performing `compact`.
This refactor will give us granular control over our context management.
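To illustrate the direction (the names and item shapes below are assumptions, not the real `ResponseItem` types), the three mutations listed above become methods on `ConversationHistory`, and impossible states get `error` logs instead of silent acceptance.

```rust
/// Illustrative item type; the real history stores ResponseItem values.
#[derive(Debug, Clone, PartialEq)]
pub enum HistoryItem {
    UserInput(String),
    ToolCall { call_id: String },
    ToolOutput { call_id: String },
}

#[derive(Default)]
pub struct ConversationHistory {
    items: Vec<HistoryItem>,
}

impl ConversationHistory {
    /// (1) Adding input goes through the history, not manual Vec pushes.
    pub fn record_input(&mut self, text: String) {
        self.items.push(HistoryItem::UserInput(text));
    }

    /// (2) Removing items is an operation the history owns.
    pub fn remove_last(&mut self, n: usize) {
        let keep = self.items.len().saturating_sub(n);
        self.items.truncate(keep);
    }

    /// (3) "Correcting" the history: drop tool calls that never got an output,
    /// logging because this is a state we should never be in.
    pub fn correct(&mut self) {
        let outputs: Vec<String> = self
            .items
            .iter()
            .filter_map(|i| match i {
                HistoryItem::ToolOutput { call_id } => Some(call_id.clone()),
                _ => None,
            })
            .collect();
        self.items.retain(|i| match i {
            HistoryItem::ToolCall { call_id } if !outputs.contains(call_id) => {
                tracing::error!(%call_id, "tool call without matching output");
                false
            }
            _ => true,
        });
    }
}
```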
I haven't heard of any issues with the stdio rmcp client, so let's remove the legacy one and default to the new one.
The only code changes move code from the adapter inline; there should be no meaningful functionality changes.
1. Adds `AgentMessage`, `Reasoning`, and `WebSearch` items.
2. Switches the `ResponseItem` parsing to use the new items and then also emit them.
3. Removes the user-item kind and filters out "special" (environment) user items when returning them to clients.
While we do not want to encourage users to hardcode secrets in their
`config.toml` file, it should be possible to pass an API key
programmatically. For example, when using `codex app-server`, it is
possible to pass a "bag of configuration" as part of the
`NewConversationParams`:
682d05512f/codex-rs/app-server-protocol/src/protocol.rs (L248-L251)
When using `codex app-server`, it's not practical to change env vars of
the `codex app-server` process on the fly (which is how we usually read
API key values), so this helps with that.
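As an illustration, here is a sketch of how a caller might pass the key through that configuration bag; the field and override key names are hypothetical, so check the protocol definition for the real ones.

```rust
use std::collections::HashMap;

use serde_json::{json, Value};

/// Illustrative subset of NewConversationParams: the "bag of configuration"
/// is modeled here as a map of config overrides (field name is an assumption).
#[derive(Debug, Default)]
pub struct NewConversationParams {
    pub model: Option<String>,
    pub config: Option<HashMap<String, Value>>,
}

fn main() {
    // Pass the API key programmatically instead of via an env var on the
    // long-running `codex app-server` process. The override key name below
    // is hypothetical; consult the protocol definition for the real one.
    let mut overrides = HashMap::new();
    overrides.insert("model_provider_api_key".to_string(), json!("sk-example"));

    let params = NewConversationParams {
        model: Some("gpt-5-codex".to_string()),
        config: Some(overrides),
    };
    println!("{params:?}");
}
```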