It turns out that we want slightly different behavior for the
`SetDefaultModel` RPC because some models do not work with reasoning
(like GPT-4.1), so we should be able to explicitly clear this value.
Verified in `codex-rs/mcp-server/tests/suite/set_default_model.rs`.
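A minimal sketch of the intended semantics (types and names here are
illustrative, not the actual codex-rs definitions):

```rust
/// Illustrative request/defaults shapes; the real types live in the
/// codex-rs MCP server crates.
struct SetDefaultModelParams {
    model: Option<String>,
    reasoning_effort: Option<String>,
}

struct UserDefaults {
    model: Option<String>,
    reasoning_effort: Option<String>,
}

fn apply(params: SetDefaultModelParams, defaults: &mut UserDefaults) {
    if let Some(model) = params.model {
        defaults.model = Some(model);
    }
    // Copied through unconditionally: omitting `reasoning_effort`
    // explicitly clears the persisted value, which is what models that
    // do not support reasoning (like GPT-4.1) need.
    defaults.reasoning_effort = params.reasoning_effort;
}
```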
## Summary
Standardizes the shell description across sandbox_types, since we cover
this in the prompt and have moved the necessary details (like
network_access and writable workspace roots) into EnvironmentContext
messages.
## Test Plan
- [x] updated unit tests
This adds `SetDefaultModel`, which takes `model` and `reasoning_effort`
as optional fields. If set, the field will overwrite what is in the
user's `config.toml`.
This reuses logic that was added to support the `/model` command in the
TUI: https://github.com/openai/codex/pull/2799.
`ClientRequest::NewConversation` picks up the reasoning level from the user's defaults in `config.toml`, so it should be reported in `NewConversationResponse`.
Adds further information on how to get started with `codex mcp`:
- Tool details and parameter references
- Quickstart with an example using the MCP Inspector.
## Summary
Handle timeouts the same way, regardless of approval mode. There's more
to do here, but this is simple and should be zero-regret.
## Testing
- [x] existing tests pass
- [x] test locally and verify rollout
Created this PR by:
- adding `redundant_clone` to `[workspace.lints.clippy]` in
`codex-rs/Cargo.toml`
- running `cargo clippy --tests --fix`
- running `just fmt`
Though I had to clean up one instance of the following that resulted:
```rust
let codex = codex;
```
`-F` is the correct flag here: unlike `-f`, which always sends the value
as a string, `-F` type-converts it, so `force=true` goes over the wire
as a JSON boolean rather than the string `"true"`. This matches the code
sample on
https://docs.github.com/en/rest/git/refs?apiVersion=2022-11-28#update-a-reference
```shell
gh api \
--method PATCH \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
/repos/OWNER/REPO/git/refs/REF \
-f 'sha=aa218f56b14c9653891f9e74264a383fa43fefbd' -F "force=true"
```
Also, I ran the following locally and verified it worked:
```shell
export GITHUB_REPOSITORY=openai/codex
export GITHUB_SHA=305252b2fb2d57bb40a9e4bad269db9a761f7099
gh api \
repos/${GITHUB_REPOSITORY}/git/refs/heads/latest-alpha-cli \
-X PATCH \
-f sha="${GITHUB_SHA}" \
-F force=true
```
`$GITHUB_REPOSITORY` and `$GITHUB_SHA` should already be available as
environment variables for the `run` step without having to be redeclared
in the `env` section.
This PR does the following:
* Adds the ability to paste or type an API key.
* Removes the `preferred_auth_method` config option. The last login
method is always persisted in auth.json, so this isn't needed.
* If the `OPENAI_API_KEY` env variable is defined, its value is used to
prepopulate the new UI. The env variable is otherwise ignored by the
CLI.
* Adds a new MCP server entry point `login_api_key` so we can implement
this same API key behavior for the VS Code extension.
<img width="473" height="140" alt="Screenshot 2025-09-04 at 3 51 04 PM"
src="https://github.com/user-attachments/assets/c11bbd5b-8a4d-4d71-90fd-34130460f9d9"
/>
<img width="726" height="254" alt="Screenshot 2025-09-04 at 3 51 32 PM"
src="https://github.com/user-attachments/assets/6cc76b34-309a-4387-acbc-15ee5c756db9"
/>
- Ensure replacements are applied in index order for determinism.
- Add tests for addition chunk followed by removal and worktree-aware
helper.
This fixes a panic I observed.
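For the index-order point above, here is a minimal sketch (types and
names are illustrative, not the actual codex-rs implementation) of
applying replacements in ascending index order:

```rust
/// Illustrative only: a text edit spanning byte offsets [start, end).
struct Replacement {
    start: usize,
    end: usize,
    text: String,
}

/// Apply edits in ascending start order so the output is deterministic.
/// Assumes non-overlapping edits at valid byte offsets.
fn apply_replacements(source: &str, mut edits: Vec<Replacement>) -> String {
    edits.sort_by_key(|e| e.start);
    let mut out = String::new();
    let mut cursor = 0;
    for e in edits {
        out.push_str(&source[cursor..e.start]);
        out.push_str(&e.text);
        cursor = e.end;
    }
    out.push_str(&source[cursor..]);
    out
}
```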
Co-authored-by: Codex <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>
This updates `rust-release.yml` so that the last step of creating a
release entails updating the `latest-alpha-cli` branch to point to the
tag used to create the latest release. This will facilitate building
automation to identify the most recent alpha release of Codex CLI
(though note this branch could also point to an official release, as it
is implemented today).
This introduces a new job, `update-branch`, which depends on the
`release` job. I made it separate from the `release` job because
`update-branch` needs the `contents: write` permission, so this limits
the amount of work we do with that permission.
Note I also created a branch protection rule for `latest-alpha-cli`
that:
- specifies repository admins as the only members of the bypass list
- restricts creating, updating, or deleting the branch to those with
bypass permissions
- requires a linear history (though note that force pushes _are_
allowed)
This is the first step in fixing
https://github.com/openai/codex/issues/3098.
As a follow-up to https://github.com/openai/codex/pull/3439, this adds a
CI job to ensure that `codex-rs/mcp-types/src/lib.rs` cannot be changed
without also updating the codegen script.
This PR changes the get-history op to a get-path op, so forking uses a
path. This gives us one unified codepath for resuming and forking
conversations, and also helps keep rollout history in order. It also
fixes a bug where the UI would not appear when resuming after a fork.
## Unified PTY-Based Exec Tool
Note: this requires the following flag in the config:
`use_experimental_unified_exec_tool=true`
- Adds a PTY-backed interactive exec feature ("unified_exec") with
session reuse via `session_id`, bounded output (128 KiB), and timeout
clamping (≤ 60 s).
- Protocol: introduces `ResponseItem::UnifiedExec { session_id,
arguments, timeout_ms }` (see the sketch after this list).
- Tools: exposes `unified_exec` as a function tool (Responses API);
excluded from the Chat Completions payload while still supported in tool
lists.
- Path handling: resolves commands via PATH (or explicit paths), with
UTF-8/newline-aware truncation (`truncate_middle`).
- Tests: cover command parsing, path resolution, session
persistence/cleanup, multi-session isolation, timeouts, and truncation
behavior.
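A sketch of the new protocol variant, with field types assumed (the real
definition lives in the codex-rs protocol crate):

```rust
/// Illustrative only: other variants elided, field types assumed.
pub enum ResponseItem {
    UnifiedExec {
        /// Reuses an existing PTY session when present.
        session_id: Option<String>,
        /// The command and its arguments to run in the PTY.
        arguments: Vec<String>,
        /// Requested timeout; the handler clamps this to at most 60 s.
        timeout_ms: Option<u64>,
    },
}
```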
This adds a simple endpoint that provides the email address encoded in
`$CODEX_HOME/auth.json`.
As noted, for now, we do not hit the server to verify this is the user's
true email address.
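A rough sketch of the lookup, assuming for illustration a top-level
`email` field (in reality the address may be embedded in a token payload
inside auth.json):

```rust
use serde::Deserialize;
use std::path::Path;

/// Illustrative only: the real auth.json schema may nest the email
/// inside a token payload rather than exposing it as a top-level field.
#[derive(Deserialize)]
struct AuthDotJson {
    email: Option<String>,
}

/// Read the email from `$CODEX_HOME/auth.json` without verifying it
/// against the server.
fn read_email(codex_home: &Path) -> Option<String> {
    let contents = std::fs::read_to_string(codex_home.join("auth.json")).ok()?;
    serde_json::from_str::<AuthDotJson>(&contents).ok()?.email
}
```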
https://github.com/openai/codex/pull/3395 updated `mcp-types/src/lib.rs`
by hand, but that file is generated code that is produced by
`mcp-types/generate_mcp_types.py`. Unfortunately, we do not have
anything in CI to verify this right now, but I will address that in a
subsequent PR.
#3395 ended up introducing a change that added a required field when
deserializing `InitializeResult`, breaking Codex when used as an MCP
client, so the quick fix in #3436 was to make the new field an `Option`
with `skip_serializing_if = "Option::is_none"`, but that did not address
the problem that `mcp-types/generate_mcp_types.py` and
`mcp-types/src/lib.rs` are out of sync.
This PR gets things back in sync. It removes the custom
`mcp_types::McpClientInfo` type that was added to `mcp-types/src/lib.rs`
and forces us to use the generated `mcp_types::Implementation` type. It
also updates `generate_mcp_types.py` to generate the additional
`user_agent: Option<String>` field on `Implementation` so that we can
continue to specify it when Codex operates as an MCP server.
However, this also requires us to specify `user_agent: None` when Codex
operates as an MCP client.
We may want to introduce our own `InitializeResult` type specific to
running as a server to avoid this in the future, but the immediate goal
is just to get things back in sync.
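For reference, a sketch of what the generated type looks like after this
change (derives and exact shape come from `generate_mcp_types.py`, so
treat this as illustrative):

```rust
use serde::{Deserialize, Serialize};

/// Illustrative shape of the generated type; the real definition is
/// emitted by generate_mcp_types.py.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Implementation {
    pub name: String,
    pub version: String,
    /// Optional so that an `InitializeResult` from a server that does
    /// not send it (i.e. when Codex is the MCP client) still
    /// deserializes.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub user_agent: Option<String>,
}
```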
Currently, MCP servers fail to start with:
```
🖐 MCP client for `<CLIENT>` failed to start: missing field `user_agent`
```
It isn't clear to me yet why this is happening. My understanding is that
this struct is simply added as a new field to the response, but this
should fix it until I figure out the full story here.
<img width="714" height="262" alt="CleanShot 2025-09-10 at 13 58 59"
src="https://github.com/user-attachments/assets/946b1313-5c1c-43d3-8ae8-ecc3de3406fc"
/>
Also, simplify the streaming behavior.
This fixes a number of display issues with streaming markdown, and paves
the way for better markdown features (e.g. customizable styles, syntax
highlighting, markdown-aware wrapping).
Not currently supported:
- footnotes
- tables
- reference-style links
This PR improves two existing auth-related tests. They were failing when
run in an environment where an `OPENAI_API_KEY` env variable was
defined. The change makes them more resilient.
This PR adds an `images` field to the existing `UserMessageEvent` so we
can encode zero or more images associated with a user message. This
allows images to be restored when conversations are restored.
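A sketch of the event shape (illustrative field types; the real
definition lives in the codex-rs protocol crate):

```rust
use serde::{Deserialize, Serialize};

/// Illustrative only: the image payload type is an assumption here
/// (it could be a URL, a local path, or base64 data in the real event).
#[derive(Debug, Serialize, Deserialize)]
pub struct UserMessageEvent {
    pub message: String,
    /// Zero or more images attached to the user message, persisted so
    /// they can be restored when the conversation is restored.
    #[serde(default, skip_serializing_if = "Vec::is_empty")]
    pub images: Vec<String>,
}
```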
Model providers like Groq, OpenRouter, AWS Bedrock, Vertex AI, and
others typically prefix the name of gpt-oss models with `openai`, e.g.
`openai/gpt-oss-120b`.
This PR matches the model name slug using `contains` instead of
`starts_with`, to ensure that the `apply_patch` tool is included in the
tools for model names like `openai/gpt-oss-120b`.
Without this, the gpt-oss models will often try to call the
`apply_patch` tool directly instead of via the `shell` command, leading
to validation errors.
I have run all the local checks.
Note: the gpt-oss models from non-Ollama providers are typically run via
a profile with a different `base_url` (instead of with the `--oss` flag).
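A minimal sketch of the matching change (the function name here is
illustrative; the real check lives in the codex-rs tool-selection code):

```rust
/// `contains` also matches provider-prefixed slugs like
/// "openai/gpt-oss-120b", where `starts_with` would not.
fn includes_apply_patch_tool(model_slug: &str) -> bool {
    model_slug.contains("gpt-oss")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn matches_provider_prefixed_slug() {
        assert!(includes_apply_patch_tool("openai/gpt-oss-120b"));
        assert!(includes_apply_patch_tool("gpt-oss-20b"));
        assert!(!includes_apply_patch_tool("gpt-4.1"));
    }
}
```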
---------
Co-authored-by: Andrew Tan <andrewtan@Andrews-Mac.local>
The previous config approach had a few issues:
1. It is part of the config but not designed to be used externally.
2. It had to be wired through many places (look at the +/- on this PR).
3. It wasn't guaranteed to be set consistently everywhere because we
don't have a well-defined way that configs stack. For example, the
extension would configure it during `newConversation`, but anything that
happened outside of that (like login) wouldn't get it.
This env var approach is cleaner and also creates one less thing we have
to deal with when coming up with a better holistic story around configs.
One downside is that I removed the unit test for the override, because I
don't want to deal with setting the global env or spawning child
processes and figuring out how to introspect their originator header.
The new code is sufficiently simple, and I tested it end to end, so I
feel this is still worth it.
It was hard for me to read the expected lines as a `["one", "two",
"three"]` array. It's maybe not so hard for the model, but probably not
having to un-escape them in its head will help it out :)
Co-authored-by: Codex <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>