- added `uninlined_format_args` to `[workspace.lints.clippy]` in the
`Cargo.toml` for the workspace
- ran `cargo clippy --tests --fix`
- ran `just fmt`
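For reference, the lint nudges format strings toward inlined argument captures; a minimal before/after illustration (not taken from the diff):

```rust
fn main() {
    let name = "codex";
    // Before the fix pass this would have been written as
    // `println!("hello, {}", name);`, which `uninlined_format_args` flags.
    println!("hello, {name}");
}
```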
This is a stopgap solution, but today, we are seeing the client get
flooded with events. Since we already truncate the output we send to the
model, it feels reasonable to limit how many deltas we send to the
client.
### What this PR does
This PR introduces a new public method,
remove_conversation(conversation_id: Uuid), to the ConversationManager.
This allows consumers of the codex-core library to manually remove a
conversation from the manager's in-memory storage.
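A minimal sketch of the shape of the new API, assuming the manager keeps its conversations in a mutex-guarded map (field and return types here are illustrative, not the exact codex-core definitions):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use uuid::Uuid;

// Hypothetical stand-in for the conversation type held by the manager.
struct CodexConversation;

struct ConversationManager {
    conversations: Mutex<HashMap<Uuid, Arc<CodexConversation>>>,
}

impl ConversationManager {
    /// Removes the conversation from the in-memory map, returning it if it
    /// was present so the caller can finish any teardown it needs.
    pub fn remove_conversation(&self, conversation_id: Uuid) -> Option<Arc<CodexConversation>> {
        self.conversations.lock().unwrap().remove(&conversation_id)
    }
}
```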
### Why this change is needed
I am currently adapting the Codex client to run as a long-lived server
application. In this server environment, ConversationManager instances
persist for extended periods, and new conversations are created for each
incoming user request.
The current implementation of ConversationManager stores all created
conversations in a HashMap indefinitely, with no mechanism for removal.
This leads to unbounded memory growth in a server context, as every new
conversation permanently occupies memory.
While an automatic TTL-based cleanup mechanism could be one solution, a
simpler, more direct remove_conversation method provides the necessary
control for my use case. It allows my server application to explicitly
manage the lifecycle of conversations, such as cleaning them up after a
request is fully processed or after a period of inactivity is detected
at the application level.
This change provides a minimal, non-intrusive way to address the memory
management issue for server-like applications built on top of
codex-core, giving developers the flexibility to implement their own
cleanup logic.
Signed-off-by: M4n5ter <m4n5terrr@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
The CLI supports config settings `stream_max_retries` and
`request_max_retries` that allow users to override the default retry
counts (4 and 5, respectively). However, there's currently no cap placed
on these values. In theory, a user could configure an effectively
infinite retry count which could hammer the server. This PR adds a
reasonable cap (currently 100) to both of these values.
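A minimal sketch of the cap (the helper name and constant are illustrative; only the ceiling of 100 comes from this PR):

```rust
/// Clamp a user-configured retry count to a sane ceiling so a misconfigured
/// value cannot translate into effectively infinite retries.
fn clamp_retry_count(configured: u64) -> u64 {
    const MAX_RETRIES_CAP: u64 = 100;
    configured.min(MAX_RETRIES_CAP)
}

fn main() {
    assert_eq!(clamp_retry_count(4), 4);
    assert_eq!(clamp_retry_count(1_000_000), 100);
}
```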
This PR improves the error message presented to the user when logged in
with ChatGPT and a rate-limit error occurs. In particular, it provides
the user with information about when the rate limit will be reset. It
removes older code that attempted to do the same but relied on parsing
of error messages that are not generated by the ChatGPT endpoint. The
new code uses newly-added error fields.
This PR fixes a bug in the token refresh logic. Token refresh is
performed in a retry loop: if we receive a 401 error, we refresh the
token, go around the loop again, and reissue the fetch with a fresh
token. The bug is that we're not using the updated token on the second
and subsequent times through the loop. The result is that we try to
refresh the token a few more times until we hit the retry limit (default
of 4), and the 401 error is then passed back up to the caller.
Subsequent calls will use the refreshed token, so the problem clears
itself up.
The fix is straightforward — make sure we use the updated auth
information each time through the retry loop.
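A sketch of the fixed loop with hypothetical types; the point is that the token is re-read from the auth source on every pass rather than captured once before the loop:

```rust
struct Auth;

impl Auth {
    fn bearer_token(&self) -> String {
        "fresh-token".to_string()
    }
    fn refresh(&self) {
        // Refresh the stored token; subsequent bearer_token() calls see it.
    }
}

fn fetch_with_retries(auth: &Auth, max_retries: u32) -> Result<(), String> {
    for _attempt in 0..=max_retries {
        // Re-read the (possibly refreshed) token each time through the loop.
        let token = auth.bearer_token();
        match do_fetch(&token) {
            Ok(()) => return Ok(()),
            Err(401) => auth.refresh(),
            Err(status) => return Err(format!("request failed with status {status}")),
        }
    }
    Err("retry limit exceeded".to_string())
}

// Stand-in for the actual HTTP request.
fn do_fetch(_bearer_token: &str) -> Result<(), u16> {
    Ok(())
}

fn main() {
    assert!(fetch_with_retries(&Auth, 4).is_ok());
}
```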
**Context**
When running `/compact`, `drain_to_completed` would throw an error if
`token_usage` was `None` in `ResponseEvent::Completed`. This made the
command fail even though everything else had succeeded.
**What changed**
- Instead of erroring, we now just check `if let Some(token_usage)`
before sending the event.
- If it’s missing, we skip it and move on.
**Why**
This makes `AgentTask::compact()` behave in the same way as
`AgentTask::spawn()`, which also doesn’t error out when `token_usage`
isn’t available. Keeps things consistent and avoids unnecessary
failures.
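A minimal sketch of the new behavior (names are illustrative):

```rust
struct TokenUsage;

// Only forward the token-usage event when the Completed response actually
// carries one; a missing value is simply skipped instead of being an error.
fn maybe_send_token_count(token_usage: Option<TokenUsage>) {
    if let Some(token_usage) = token_usage {
        send_event(token_usage);
    }
}

fn send_event(_usage: TokenUsage) {}

fn main() {
    maybe_send_token_count(Some(TokenUsage));
    maybe_send_token_count(None); // previously this path produced an error
}
```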
**Fixes**
Closes #2417
---------
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
The `SessionManager` in `exec_command` owns a number of
`ExecCommandSession` objects where `ExecCommandSession` has a
non-trivial implementation of `Drop`, so we want to be able to drop an
individual `SessionManager` to help ensure things get cleaned up in a
timely fashion. To that end, we should have one `SessionManager` per
session rather than one global one for the lifetime of the CLI process.
`ToolsConfig::new()` taking a large number of boolean params was hard to
manage and it finally bit us (see
https://github.com/openai/codex/pull/2660). This changes
`ToolsConfig::new()` so that it takes a struct (and also reduces the
visibility of some members, where possible).
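A sketch of the change in shape (field names are hypothetical): positional booleans become a named params struct, so call sites are self-describing and new options can be added without breaking every caller.

```rust
struct ToolsConfigParams {
    include_plan_tool: bool,
    include_apply_patch_tool: bool,
    include_web_search: bool,
}

struct ToolsConfig {
    plan_tool: bool,
    apply_patch_tool: bool,
    web_search: bool,
}

impl ToolsConfig {
    fn new(params: &ToolsConfigParams) -> Self {
        Self {
            plan_tool: params.include_plan_tool,
            apply_patch_tool: params.include_apply_patch_tool,
            web_search: params.include_web_search,
        }
    }
}

fn main() {
    let config = ToolsConfig::new(&ToolsConfigParams {
        include_plan_tool: true,
        include_apply_patch_tool: false,
        include_web_search: true,
    });
    assert!(config.plan_tool && !config.apply_patch_tool && config.web_search);
}
```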
Fixes https://github.com/openai/codex/issues/2610
This PR sorts the tools in `get_openai_tools` by name to ensure a
consistent MCP tool order.
Currently, MCP servers are stored in a HashMap, which does not guarantee
ordering. As a result, the tool order changes across turns, effectively
breaking prompt caching in multi-turn sessions.
An alternative solution would be to replace the HashMap with an ordered
structure, but that would require a much larger code change. Given that
it is unrealistic to have so many MCP tools that sorting would cause
performance issues, this lightweight fix is chosen instead.
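A sketch of the lightweight fix (`OpenAiTool` stands in for the real type): sort the flattened tool list by name before it goes into the request, so the order is stable across turns.

```rust
struct OpenAiTool {
    name: String,
}

fn sort_tools_for_deterministic_order(tools: &mut [OpenAiTool]) {
    tools.sort_by(|a, b| a.name.cmp(&b.name));
}

fn main() {
    let mut tools = vec![
        OpenAiTool { name: "web_search".to_string() },
        OpenAiTool { name: "apply_patch".to_string() },
    ];
    sort_tools_for_deterministic_order(&mut tools);
    assert_eq!(tools[0].name, "apply_patch");
}
```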
By ensuring deterministic tool order, this change should significantly
improve cache hit rates and prevent users from hitting usage limits too
quickly. (For reference, my own sessions last week reached the limit
unusually fast, with cache hit rates falling below 1%.)
## Result
After this fix, sessions with MCP servers now show caching behavior
almost identical to sessions without MCP servers.
Without MCP | With MCP
:---:|:---:
<img width="1368" height="1634" alt="image" src="https://github.com/user-attachments/assets/26edab45-7be8-4d6a-b471-558016615fc8" /> | <img width="1356" height="1632" alt="image" src="https://github.com/user-attachments/assets/5f3634e0-3888-420b-9aaf-deefd9397b40" />
## Summary
We're seeing some issues in the freeform tool, so let's disable it by
default until it stabilizes.
## Testing
- [x] Ran locally, confirmed codex-cli could make edits
Adds a web_search tool, enabling the model to use the Responses API
web_search tool.
- Disabled by default, enabled by --search flag
- When --search is passed, exposes web_search_request function tool to
the model, which triggers user approval. When approved, the model can
use the web_search tool for the remainder of the turn
<img width="1033" height="294" alt="image"
src="https://github.com/user-attachments/assets/62ac6563-b946-465c-ba5d-9325af28b28f"
/>
---------
Co-authored-by: easong-openai <easong@openai.com>
We want to send an aggregated output of stdout and stderr so we don't
have to combine stderr and stdout ourselves, since ordering is sometimes
lost when they are aggregated after the fact.
---------
Co-authored-by: Gabriel Peal <gpeal@users.noreply.github.com>
This can be the underlying logic for starting a conversation from a
previous message. It will need some love in the UI.
Base for building this: #2588
Prior to this change, when we got a `CallToolResult` from an MCP server,
we JSON-serialized its `content` field and sent that back to the model
as the `content` of the function call output. This meant that we were
dropping the `structuredContent` on the floor.
However, reading
https://modelcontextprotocol.io/specification/2025-06-18/schema#tool, it
appears that if `outputSchema` is specified, then `structuredContent`
should be set, which seems to be the "higher-fidelity" response to the
function call. This PR updates our handling of `CallToolResult` to
prefer the JSON serialization of `structuredContent`, if present,
falling back to `content` otherwise.
Also, it appears that the sense of `success` was inverted prior to this
PR!
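A sketch of the new handling, using loose stand-ins for the MCP types: serialize `structuredContent` when the server provides it, fall back to `content` otherwise, and derive `success` from `isError` with the correct (non-inverted) sense.

```rust
use serde_json::Value;

struct CallToolResult {
    content: Vec<Value>,
    structured_content: Option<Value>,
    is_error: Option<bool>,
}

fn function_call_output(result: &CallToolResult) -> (String, bool) {
    // Prefer the higher-fidelity structured payload when present.
    let body = match &result.structured_content {
        Some(structured) => structured.to_string(),
        None => serde_json::to_string(&result.content).unwrap_or_default(),
    };
    // success is true unless the server explicitly flagged an error.
    let success = !result.is_error.unwrap_or(false);
    (body, success)
}

fn main() {
    let result = CallToolResult {
        content: vec![Value::String("fallback text".to_string())],
        structured_content: Some(serde_json::json!({ "ok": true })),
        is_error: Some(false),
    };
    let (body, success) = function_call_output(&result);
    assert!(success);
    assert!(body.contains("ok"));
}
```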
## Summary
When resolving our current directory as a project, we want to be a
little bit more clever:
1. If we're in a sub-directory of a git repo, resolve our project
against the root of the git repo
2. If we're in a git worktree, resolve the project against the root of
the git repo
## Testing
- [x] Added unit tests
- [x] Confirmed locally with a git worktree (the one I was using for
this feature)
## Summary
GPT-5 introduced the concept of [custom
tools](https://platform.openai.com/docs/guides/function-calling#custom-tools),
which allow the model to send a raw string result back, sidestepping
JSON-escaping issues. We are migrating gpt-5 to use this by default.
However, gpt-oss models do not support custom tools, only normal
functions. So we keep both tool definitions, and provide whichever one
the model family supports.
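A sketch of the selection (names are illustrative): both definitions stay available, and the model family decides which one is sent.

```rust
enum ToolDefinition {
    // GPT-5 style freeform/custom tool that accepts a raw string payload.
    Custom { name: String },
    // Classic JSON function tool for models such as gpt-oss.
    Function { name: String },
}

struct ModelFamily {
    supports_custom_tools: bool,
}

fn apply_patch_tool_for(family: &ModelFamily) -> ToolDefinition {
    if family.supports_custom_tools {
        ToolDefinition::Custom { name: "apply_patch".to_string() }
    } else {
        ToolDefinition::Function { name: "apply_patch".to_string() }
    }
}

fn main() {
    let gpt5 = ModelFamily { supports_custom_tools: true };
    assert!(matches!(apply_patch_tool_for(&gpt5), ToolDefinition::Custom { .. }));
}
```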
## Testing
- [x] Tested locally with various models
- [x] Unit tests pass
This PR adds a central `AuthManager` struct that manages the auth
information used across conversations and the MCP server. Prior to this,
each conversation and the MCP server got their own private snapshots of
the auth information, and changes to one (such as a logout or token
refresh) were not seen by others.
This is especially problematic when multiple instances of the CLI are
run. For example, consider the case where you start CLI 1 and log in to
ChatGPT account X and then start CLI 2 and log out and then log in to
ChatGPT account Y. The conversation in CLI 1 is still using account X,
but if you create a new conversation, it will suddenly (and
unexpectedly) switch to account Y.
With the `AuthManager`, auth information is read from disk at the time
the `ConversationManager` is constructed, and it is cached in memory.
All new conversations use this same auth information, as do any token
refreshes.
The `AuthManager` is also used by the MCP server's GetAuthStatus
command, which now returns the auth method currently used by the MCP
server.
This PR also includes an enhancement to the GetAuthStatus command. It
now accepts two new (optional) input parameters: `include_token` and
`refresh_token`. Callers can use this to request the in-use auth token
and can optionally request to refresh the token.
The PR also adds tests for the login and auth APIs that I recently added
to the MCP server.
**Summary**
- Adds `model_verbosity` config (values: low, medium, high).
- Sends `text.verbosity` only for GPT‑5 family models via the Responses
API.
- Updates docs and adds serialization tests.
**Motivation**
- GPT‑5 introduces a verbosity control to steer output length/detail
without prompt surgery.
- Exposing it as a config knob keeps prompts stable and makes behavior
explicit and repeatable.
**Changes**
- Config:
- Added `Verbosity` enum (low|medium|high).
- Added optional `model_verbosity` to `ConfigToml`, `Config`, and
`ConfigProfile`.
- Request wiring:
- Extended `ResponsesApiRequest` with optional `text` object.
- Populates `text.verbosity` only when model family is `gpt-5`; omitted
otherwise.
- Tests:
- Verifies `text.verbosity` serializes when set and is omitted when not
set.
- Docs:
- Added “GPT‑5 Verbosity” section in `codex-rs/README.md`.
- Added `model_verbosity` section to `codex-rs/config.md`.
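A minimal sketch of the wiring described above (struct and field names are illustrative, not the exact codex-rs definitions): the optional `text` object is only populated for the gpt-5 family, so other models see no `text` key at all.

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "lowercase")]
enum Verbosity {
    Low,
    Medium,
    High,
}

#[derive(Serialize)]
struct TextControls {
    verbosity: Verbosity,
}

#[derive(Serialize)]
struct ResponsesApiRequest {
    model: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    text: Option<TextControls>,
}

fn main() {
    let request = ResponsesApiRequest {
        model: "gpt-5".to_string(),
        text: Some(TextControls { verbosity: Verbosity::Low }),
    };
    // Prints {"model":"gpt-5","text":{"verbosity":"low"}}; with `text: None`
    // the "text" key is omitted entirely.
    println!("{}", serde_json::to_string(&request).unwrap());
}
```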
**Usage**
- In `~/.codex/config.toml`:
- `model = "gpt-5"`
- `model_verbosity = "low"` (or `"high"`; the default is `"medium"`)
- CLI override example:
- `codex -c model="gpt-5" -c model_verbosity="high"`
**API Impact**
- Requests to GPT‑5 via Responses API include: `text: { verbosity:
"low|medium|high" }` when configured.
- For legacy models or Chat Completions providers, `text` is omitted.
**Backward Compatibility**
- Default behavior unchanged when `model_verbosity` is not set (server
default “medium”).
**Testing**
- Added unit tests for serialization/omission of `text.verbosity`.
- Ran `cargo fmt` and `cargo test --all-features` (all green).
**Docs**
- `README.md`: new “GPT‑5 Verbosity” note under Config with example.
- `config.md`: new `model_verbosity` section.
**Out of Scope**
- No changes to temperature/top_p or other GPT‑5 parameters.
- No changes to Chat Completions wiring.
**Risks / Notes**
- If OpenAI changes the wire shape for verbosity, we may need to update
`ResponsesApiRequest`.
- Behavior gated to `gpt-5` model family to avoid unexpected effects
elsewhere.
**Checklist**
- [x] Code gated to GPT‑5 family only
- [x] Docs updated (`README.md`, `config.md`)
- [x] Tests added and passing
- [x] Formatting applied
Release note: Add `model_verbosity` config to control GPT‑5 output verbosity via the Responses API (low|medium|high).
## Summary
We've experienced a bit of drift in system prompting for `apply_patch`:
- As pointed out in #2030 , our prettier formatting started altering
prompt.md in a few ways
- We introduced a separate markdown file for apply_patch instructions in
#993, but currently duplicate them in the prompt.md file
- We added a first-class apply_patch tool in #2303, which has yet
another definition
This PR starts to consolidate our logic in a few ways:
- We now only use
[`apply_patch_tool_instructions.md`](https://github.com/openai/codex/compare/dh--apply-patch-tool-definition?expand=1#diff-d4fffee5f85cb1975d3f66143a379e6c329de40c83ed5bf03ffd3829df985bea)
for system instructions
- We no longer include apply_patch system instructions if the tool is
specified
I'm leaving the definition in openai_tools.rs as duplicated text for now
because we're going to be iterating on the first-class tool soon.
## Testing
- [x] Added integration tests to verify prompt stability
- [x] Tested locally with several different models (gpt-5, gpt-oss,
o4-mini)
## Summary
Small update to hopefully improve some shell edge cases and make it
clearer to the model what the function is doing. Keeping `timeout` as an
alias means that calls using the previous parameter name will still work.
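A sketch of how the old name keeps working, assuming the parameter was renamed to `timeout_ms` (the new field name is an assumption; the serde alias is the mechanism):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ShellToolCallParams {
    command: Vec<String>,
    // Calls that still send the previous `timeout` name deserialize into the
    // renamed field unchanged.
    #[serde(default, alias = "timeout")]
    timeout_ms: Option<u64>,
}

fn main() {
    let old_style = r#"{"command": ["ls"], "timeout": 5000}"#;
    let params: ShellToolCallParams = serde_json::from_str(old_style).unwrap();
    assert_eq!(params.timeout_ms, Some(5000));
    assert_eq!(params.command, vec!["ls".to_string()]);
}
```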
## Test Plan
- [x] Tested locally, model still works
## Summary
Before we land #2243, let's start printing environment_context in our
preferred format. This struct will evolve over time with new
information; XML gives us a good balance of being human readable without
too much parsing, LLM readable, and extensible.
Also moves us over to an Option-based struct, so we can easily provide
diffs to the model.
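A sketch of what the Option-based struct enables, with hypothetical field and tag names: unchanged fields stay `None` and are simply omitted from the serialized block, which is how a diff can be sent later.

```rust
struct EnvironmentContext {
    cwd: Option<std::path::PathBuf>,
    approval_policy: Option<String>,
    sandbox_mode: Option<String>,
}

fn to_xml(ctx: &EnvironmentContext) -> String {
    let mut out = String::from("<environment_context>\n");
    if let Some(cwd) = &ctx.cwd {
        out.push_str(&format!("  <cwd>{}</cwd>\n", cwd.display()));
    }
    if let Some(policy) = &ctx.approval_policy {
        out.push_str(&format!("  <approval_policy>{policy}</approval_policy>\n"));
    }
    if let Some(mode) = &ctx.sandbox_mode {
        out.push_str(&format!("  <sandbox_mode>{mode}</sandbox_mode>\n"));
    }
    out.push_str("</environment_context>");
    out
}

fn main() {
    let diff = EnvironmentContext {
        cwd: None,
        approval_policy: Some("on-request".to_string()),
        sandbox_mode: None,
    };
    // Only the field that changed shows up in the emitted block.
    println!("{}", to_xml(&diff));
}
```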
## Testing
- [x] Updated tests to reflect new format
## What? Why? How?
- When running on Windows, codex often tries to invoke bash commands,
which commonly fail (unless WSL is installed)
- Fix: Detect whether powershell is available and, if so, route commands
to it (a minimal detection sketch follows this list)
- Also add a shell_name property to the environment context so codex
defaults to powershell commands when running in that environment
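A minimal sketch of the detection step (the probe is an assumption, not the exact implementation): if a PowerShell binary responds, prefer it for running model-generated commands on Windows instead of assuming bash exists.

```rust
use std::process::Command;

fn powershell_executable() -> Option<&'static str> {
    for candidate in ["pwsh.exe", "powershell.exe"] {
        // If the binary exists and runs, route commands through it.
        let available = Command::new(candidate)
            .args(["-NoProfile", "-Command", "exit 0"])
            .output()
            .is_ok();
        if available {
            return Some(candidate);
        }
    }
    None
}

fn main() {
    match powershell_executable() {
        Some(shell) => println!("routing commands through {shell}"),
        None => println!("powershell not found; using the default shell"),
    }
}
```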
## Testing
- Tested within WSL and powershell (e.g. get top 5 largest files within
a folder and validated that commands generated were powershell commands)
- Tested within Zsh
- Updated unit tests
---------
Co-authored-by: Eddy Escardo <eddy@openai.com>
This PR:
- fixes behavior for internal employees, because we currently want to
prefer SIWC for them.
- fixes retrying forever on unauthorized access; we need to break
eventually on max retries.
ChatGPT tokens live for only 1 hour. If the session runs longer, we don't
refresh the token. We should get the expiry timestamp and attempt to
refresh before it passes.
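A sketch of the proactive check (the 5-minute margin is an assumption): compare the stored expiry timestamp against the current time and refresh slightly early, so a long-lived session never sends an expired token.

```rust
use std::time::{Duration, SystemTime};

fn needs_refresh(expires_at: SystemTime) -> bool {
    let margin = Duration::from_secs(5 * 60);
    match expires_at.duration_since(SystemTime::now()) {
        // Refresh when the token is inside the safety margin...
        Ok(remaining) => remaining <= margin,
        // ...or already past its expiry timestamp.
        Err(_) => true,
    }
}

fn main() {
    let expires_soon = SystemTime::now() + Duration::from_secs(60);
    assert!(needs_refresh(expires_soon));
}
```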
Codex created this PR from the following prompt:
> upgrade this entire repo to Rust 1.89. Note that this requires
updating codex-rs/rust-toolchain.toml as well as the workflows in
.github/. Make sure that things are "clippy clean" as this change will
likely uncover new Clippy errors. `just fmt` and `cargo clippy --tests`
are sufficient to check for correctness
Note this modifies a lot of lines because it folds nested `if`
statements using `&&`.
## Summary
Adds a `/mcp` command to list active tools. We can extend this command
to allow configuration of MCP tools, but for now a simple list command
will help debug if your config.toml and your tools are working as
expected.
Motivation: we have users who use their API key even though they want to
use their ChatGPT account. We want to give them the chance to always log
in with their account.
This PR displays login options when the user is not signed in with
ChatGPT. Even if you have set an OpenAI API key as an environment
variable, you will still be prompted to log in with ChatGPT.
We’ve also added a new flag, `always_use_api_key_signing` (false by
default), which ensures you are never asked to log in with ChatGPT and
always defaults to using your API key.
https://github.com/user-attachments/assets/b61ebfa9-3c5e-4ab7-bf94-395c23a0e0af
After ChatGPT sign in:
https://github.com/user-attachments/assets/d58b366b-c46a-428f-a22f-2ac230f991c0