Commit Graph

406 Commits

dedrisian-oai
b2f6fc3b9a Fix flaky windows test (#3564)
There are exactly four flaky-test failure modes on Windows x86 right now:

1. `review_input_isolated_from_parent_history` => Times out waiting for
closing events
2. `review_does_not_emit_agent_message_on_structured_output` => Times
out waiting for closing events
3. `auto_compact_runs_after_token_limit_hit` => Times out waiting for
closing events
4. `auto_compact_runs_after_token_limit_hit` => Also has a problem where
auto compact should add a third request, but receives 4 requests.

1, 2, and 3 seem to be solved by increasing the thread count on the
Windows runner from 2 to 4.

We don't yet know why #4 is happening, but it is probably also caused by
WireMock issues on Windows producing races.
2025-09-14 23:20:25 +00:00
pakrym-oai
51f88fd04a Fix swiftfox model selector (#3598)
The model shouldn't be saved with a suffix. The effort is a separate
field.
2025-09-14 23:12:21 +00:00
pakrym-oai
916fdc2a37 Add per-model-family prompts (#3597)
Allows more flexibility in defining prompts.
2025-09-14 22:45:15 +00:00
pakrym-oai
863d9c237e Include command output when sending timeout to model (#3576)
Being able to see the output helps the model decide how to handle the
timeout.
2025-09-14 14:38:26 -07:00
Ahmed Ibrahim
bbea6bbf7e Handle resuming/forking after compact (#3533)
We need to construct the history differently when a compact happens. For
this, we only consider the history after the compact and convert the
compact itself into a response item.

This will need to change to use `build_compact_history` once #3446 is
merged.
2025-09-14 13:23:31 +00:00
Jeremy Rose
4891ee29c5 refactor transcript view to handle HistoryCells (#3538)
No (intended) functional change.

This refactors the transcript view to hold a list of HistoryCells
instead of a list of Lines. This simplifies much of the logic and makes
it more robust, and it lays the groundwork for future changes, e.g.
live-updating history cells in the transcript.

Similar to #2879 in goal. Fixes #2755.
2025-09-13 19:23:14 -07:00
Thibault Sottiaux
bac8a427f3 chore: default swiftfox models to experimental reasoning summaries (#3560) 2025-09-13 23:40:54 +00:00
Thibault Sottiaux
14ab1063a7 chore: rename 2025-09-12 23:17:41 -07:00
Thibault Sottiaux
19b4ed3c96 w 2025-09-12 22:44:05 -07:00
pakrym-oai
3d4acbaea0 Preserve IDs for more item types in azure (#3542)
https://github.com/openai/codex/issues/3509
2025-09-13 01:09:56 +00:00
pakrym-oai
414b8be8b6 Always request encrypted cot (#3539)
Otherwise future requests will fail with a 500.
2025-09-12 23:51:30 +00:00
dedrisian-oai
90a0fd342f Review Mode (Core) (#3401)
## 📝 Review Mode -- Core

This PR introduces the Core implementation for Review mode:

- New op `Op::Review { prompt: String }`: spawns a child review task
with isolated context, a review-specific system prompt, and a
`Config.review_model`.
- `EnteredReviewMode`: emitted when the child review session starts.
Every event from this point onwards reflects the review session.
- `ExitedReviewMode(Option<ReviewOutputEvent>)`: emitted when the review
finishes or is interrupted, with optional structured findings:

```json
{
  "findings": [
    {
      "title": "<≤ 80 chars, imperative>",
      "body": "<valid Markdown explaining *why* this is a problem; cite files/lines/functions>",
      "confidence_score": <float 0.0-1.0>,
      "priority": <int 0-3>,
      "code_location": {
        "absolute_file_path": "<file path>",
        "line_range": {"start": <int>, "end": <int>}
      }
    }
  ],
  "overall_correctness": "patch is correct" | "patch is incorrect",
  "overall_explanation": "<1-3 sentence explanation justifying the overall_correctness verdict>",
  "overall_confidence_score": <float 0.0-1.0>
}
```
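
For illustration, the structured output above maps naturally onto serde-derived Rust types along these lines (a sketch inferred from the JSON shape; the field and type names are assumptions, not necessarily the actual `ReviewOutputEvent` definition in core):

```rust
use serde::{Deserialize, Serialize};

/// Sketch of the structured review output; names inferred from the JSON above.
#[derive(Debug, Serialize, Deserialize)]
pub struct ReviewOutputEvent {
    pub findings: Vec<ReviewFinding>,
    /// "patch is correct" | "patch is incorrect"
    pub overall_correctness: String,
    pub overall_explanation: String,
    pub overall_confidence_score: f32,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct ReviewFinding {
    pub title: String,
    pub body: String,
    pub confidence_score: f32,
    pub priority: u8,
    pub code_location: CodeLocation,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct CodeLocation {
    pub absolute_file_path: String,
    pub line_range: LineRange,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct LineRange {
    pub start: u32,
    pub end: u32,
}
```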

## Questions

### Why separate out its own message history?

We want the review thread to match the training of our review models as
closely as possible -- that means using a custom prompt, removing user
instructions, and starting with a clean chat history.

We also want to make sure the review thread doesn't leak into the parent
thread.

### Why do this as a mode, vs. sub-agents?

1. We want review to be a synchronous task, so it's fine for now to do a
bespoke implementation.
2. We're still unclear about the final structure for sub-agents. We'd
prefer to land this quickly and then refactor into sub-agents without
rushing that implementation.
2025-09-12 23:25:10 +00:00
jif-oai
8d56d2f655 fix: NIT None reasoning effort (#3536)
Fix the reasoning effort not being set to None in the UI
2025-09-12 21:17:49 +00:00
Jeremy Rose
b8ccfe9b65 core: expand default sandbox (#3483)
This adds some more capabilities to the default sandbox which I feel are
safe. Most are in the
[renderer.sb](https://source.chromium.org/chromium/chromium/src/+/main:sandbox/policy/mac/renderer.sb)
sandbox for Chrome renderers, which I feel is fair game for Codex
commands.

Specific changes:

1. Allow processes in the sandbox to send signals to any other process
in the same sandbox (e.g. child processes or daemonized processes),
instead of just themselves.
2. Allow user-preference-read
3. Allow process-info* to anything in the same sandbox. This is a bit
wider than Chromium allows, but it seems OK to me to allow anything in
the sandbox to get details about other processes in the same sandbox.
Bazel uses these to e.g. wait for another process to exit.
4. Allow all CPU feature detection; this seems harmless to me. It's
wider than Chromium, but Chromium is concerned about fingerprinting and
tightly controls which CPU features it actually cares about, and we have
neither that restriction nor that advantage.
5. Allow new sysctl-reads:
   ```
     (sysctl-name "vm.loadavg")
     (sysctl-name-prefix "kern.proc.pgrp.")
     (sysctl-name-prefix "kern.proc.pid.")
     (sysctl-name-prefix "net.routetable.")
   ```
Bazel needs these for waiting on child processes and for communicating
with its local build server, I believe. I wonder if we should just allow
all (sysctl-read), since reading arbitrary info about the system seems
fine to me.
6. Allow iokit-open on RootDomainUserClient. This has to do with power
management, I believe, and Chromium allows renderers to do this, so it
seems okay. Bazel needs it to boot successfully, possibly for sleep/wake
callbacks?
7. Mach lookup to `com.apple.system.opendirectoryd.libinfo`, which has
to do with user data, and which Chrome allows.
8. Mach lookup to `com.apple.PowerManagement.control`. Chromium allows
its GPU process to do this, but not its renderers. Bazel needs this to
boot, probably also related to sleep/wake handling.
2025-09-12 14:03:02 -07:00
pakrym-oai
e3c6903199 Add Azure Responses API workaround (#3528)
The Azure Responses API doesn't work well with `store: false` and response
items:

- If `store = false` and an ID is sent, an error is thrown saying the ID is not found.
- If `store = false` and no ID is sent, an error is thrown saying an ID is required.

Add detection for Azure URLs and a workaround that preserves reasoning
item IDs and sends `store: true`.
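
A minimal sketch of the kind of detection described, assuming a simple substring check on the endpoint host (the helper name and matching rule are illustrative, not the actual implementation):

```rust
/// Hypothetical helper: decide whether the configured Responses endpoint is an
/// Azure one, in which case we preserve reasoning item IDs and send `store: true`.
fn is_azure_responses_endpoint(base_url: &str) -> bool {
    // Assumption: Azure OpenAI endpoints live under *.azure.com or
    // *.azure-api.net; the real detection logic may differ.
    let url = base_url.to_ascii_lowercase();
    url.contains(".azure.com") || url.contains(".azure-api.net")
}

fn main() {
    let url = "https://my-resource.openai.azure.com/openai/responses";
    // In this sketch, `store` follows directly from the detection result.
    let store = is_azure_responses_endpoint(url);
    println!("store = {store}");
}
```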
2025-09-12 13:52:15 -07:00
jif-oai
ea225df22e feat: context compaction (#3446)
## Compact feature:
1. Stops the model when the context window becomes too large
2. Adds a user turn asking the model to summarize
3. Builds a bridge that contains all the previous user messages plus the
summary, rendered from a template (see the sketch below)
4. Starts sampling again from a clean conversation with only that bridge
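
Roughly, step 3 amounts to collapsing the prior turns into one synthetic message; a simplified sketch of the idea (the function name and string rendering are illustrative, the real code renders from a template):

```rust
/// Illustrative only: collapse the previous user messages plus the model's
/// summary into a single "bridge" message that seeds the fresh conversation.
fn build_bridge(previous_user_messages: &[String], summary: &str) -> String {
    let mut bridge = String::from("Earlier in this session the user asked:\n");
    for msg in previous_user_messages {
        bridge.push_str("- ");
        bridge.push_str(msg);
        bridge.push('\n');
    }
    bridge.push_str("\nSummary of the work so far:\n");
    bridge.push_str(summary);
    bridge
}
```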
2025-09-12 13:07:10 -07:00
jif-oai
c6fd056aa6 feat: reasoning effort as optional (#3527)
Allow the reasoning effort to be optional
2025-09-12 12:06:33 -07:00
Michael Bolin
abdcb40f4c feat: change the behavior of SetDefaultModel RPC so None clears the value. (#3529)
It turns out that we want slightly different behavior for the
`SetDefaultModel` RPC because some models do not work with reasoning
(like GPT-4.1), so we should be able to explicitly clear this value.

Verified in `codex-rs/mcp-server/tests/suite/set_default_model.rs`.
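
In effect, the RPC now writes whatever it receives straight through, so `None` clears a stored value rather than meaning "leave unchanged"; a minimal sketch of that semantics with hypothetical type and field names:

```rust
#[derive(Default, Debug)]
struct ModelDefaults {
    model: Option<String>,
    reasoning_effort: Option<String>,
}

/// Illustrative semantics only: the incoming values overwrite the stored ones
/// verbatim, so passing `None` explicitly clears a previously configured value.
fn apply_set_default_model(
    defaults: &mut ModelDefaults,
    model: Option<String>,
    reasoning_effort: Option<String>,
) {
    defaults.model = model;
    defaults.reasoning_effort = reasoning_effort;
}

fn main() {
    let mut defaults = ModelDefaults::default();
    apply_set_default_model(&mut defaults, Some("gpt-4.1".into()), None);
    // reasoning_effort is now cleared rather than left as-is.
    println!("{defaults:?}");
}
```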
2025-09-12 11:35:51 -07:00
Dylan
4ae6b9787a standardize shell description (#3514)
## Summary
Standardizes the shell description across sandbox_types, since we cover
this in the prompt, and have moved necessary details (like
network_access and writeable workspace roots) to EnvironmentContext
messages.

## Test Plan
- [x] updated unit tests
2025-09-12 14:24:09 -04:00
jif-oai
bba567cee9 bug: fix model save (#3525)
Fixes these two behaviors:
1. The model does not get saved unless you press Ctrl+S
2. The reasoning effort gets saved
2025-09-12 10:38:12 -07:00
Michael Bolin
c172e8e997 feat: added SetDefaultModel to JSON-RPC server (#3512)
This adds `SetDefaultModel`, which takes `model` and `reasoning_effort`
as optional fields. If set, the field will overwrite what is in the
user's `config.toml`.

This reuses logic that was added to support the `/model` command in the
TUI: https://github.com/openai/codex/pull/2799.
2025-09-11 23:44:17 -07:00
Michael Bolin
9bbeb75361 feat: include reasoning_effort in NewConversationResponse (#3506)
`ClientRequest::NewConversation` picks up the reasoning level from the user's defaults in `config.toml`, so it should be reported in `NewConversationResponse`.
2025-09-11 21:04:40 -07:00
pakrym-oai
3b5a5412bb Log cf-ray header in client traces (#3488)
## Summary
- log the `cf-ray` header when tracing HTTP responses in the Codex
client
- keep existing response status logging unchanged

## Testing
- `just fmt`
- `just fix -p codex-core`
- `cargo test -p codex-core` *(fails:
suite::client::azure_overrides_assign_properties_used_for_responses_url,
suite::client::env_var_overrides_loaded_auth)*

------
https://chatgpt.com/codex/tasks/task_i_68c31640dacc83209be131baf91611cd
2025-09-11 17:42:44 -07:00
jif-oai
44bb53df1e bug: default to image (#3501)
Default the MIME type to image
2025-09-11 23:10:24 +00:00
jif-oai
8453915e02 feat: TUI onboarding (#3398)
Example of what onboarding could look like
2025-09-11 15:04:29 -07:00
Ahmed Ibrahim
44587c2443 Use PlanType enum when formatting usage-limit CTA (#3495)
- Started using the PlanType enum
- Added a CTA for team/business
- Refactored a bit to unify the logic
2025-09-11 22:01:25 +00:00
Dylan
027944c64e fix: improve handle_sandbox_error timeouts (#3435)
## Summary
Handle timeouts the same way, regardless of approval mode. There's more
to do here, but this is simple and should be zero-regret

## Testing
- [x] existing tests pass
- [x] test locally and verify rollout
2025-09-11 12:09:20 -07:00
Michael Bolin
bec51f6c05 chore: enable clippy::redundant_clone (#3489)
Created this PR by:

- adding `redundant_clone` to `[workspace.lints.clippy]` in
`codex-rs/Cargo.toml`
- running `cargo clippy --tests --fix`
- running `just fmt`

Though I had to clean up one instance of the following that resulted:

```rust
let codex = codex;
```
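
For context, `clippy::redundant_clone` flags clones of values that are never used again afterwards; a minimal example of the pattern the `--fix` pass rewrites (which is also how leftovers like the self-assignment above can appear):

```rust
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let name = String::from("codex");
    // Before the fix this was `consume(name.clone())`; the clone is redundant
    // because `name` is never used again, so it can simply be moved.
    let len = consume(name);
    println!("{len}");
}
```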
2025-09-11 11:59:37 -07:00
pakrym-oai
66967500bb Assign the entire gpt-5 model family the same characteristics (#3490)
So the context size indicator is displayed.
2025-09-11 18:56:49 +00:00
Ahmed Ibrahim
674e3d3c90 Add Compact and Turn Context to the rollout items (#3444)
Adding compact and turn context to the rollout items

based on #3440
2025-09-11 18:08:51 +00:00
jif-oai
114ce9ff4d NIT unified exec (#3479)
Fix the default value of the experimental flag of unified_exec
2025-09-11 16:19:12 +00:00
Eric Traut
e13b35ecb0 Simplify auth flow and reconcile differences between ChatGPT and API Key auth (#3189)
This PR does the following:
* Adds the ability to paste or type an API key.
* Removes the `preferred_auth_method` config option. The last login
method is always persisted in auth.json, so this isn't needed.
* If the OPENAI_API_KEY env variable is defined, its value is used to
prepopulate the new UI. The env variable is otherwise ignored by the
CLI.
* Adds a new MCP server entry point "login_api_key" so we can implement
this same API key behavior for the VS Code extension.
<img width="473" height="140" alt="Screenshot 2025-09-04 at 3 51 04 PM"
src="https://github.com/user-attachments/assets/c11bbd5b-8a4d-4d71-90fd-34130460f9d9"
/>
<img width="726" height="254" alt="Screenshot 2025-09-04 at 3 51 32 PM"
src="https://github.com/user-attachments/assets/6cc76b34-309a-4387-acbc-15ee5c756db9"
/>
2025-09-11 09:16:34 -07:00
Ahmed Ibrahim
162e1235a8 Change forking to read the rollout from file (#3440)
This PR changes the get-history op to a get-path op, so forking uses a
path. This gives us one unified codepath for resuming/forking
conversations and helps keep the rollout history in order. It also fixes
a bug where you wouldn't see the UI when resuming after forking.
2025-09-10 17:42:54 -07:00
jif-oai
c09ed74a16 Unified execution (#3288)
## Unified PTY-Based Exec Tool

Note: this requires the following flag in the config:
`use_experimental_unified_exec_tool=true`

- Adds a PTY-backed interactive exec feature (“unified_exec”) with
session reuse via session_id, bounded output (128 KiB), and timeout
clamping (≤ 60 s).
- Protocol: introduces `ResponseItem::UnifiedExec { session_id,
arguments, timeout_ms }` (sketched below).
- Tools: exposes unified_exec as a function tool (Responses API);
excluded from the Chat Completions payload while still supported in tool
lists.
- Path handling: resolves commands via PATH (or explicit paths), with
UTF‑8/newline‑aware truncation (truncate_middle).
- Tests: cover command parsing, path resolution, session
persistence/cleanup, multi‑session isolation, timeouts, and truncation
behavior.
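
Based on the shape described above, the protocol variant presumably looks roughly like this (a sketch; the field types are assumptions, and the real `ResponseItem` enum has many more variants and serde attributes):

```rust
/// Sketch of the new variant described above, not the full protocol enum.
pub enum ResponseItem {
    UnifiedExec {
        /// Reuse an existing PTY session when present; otherwise start a new one.
        session_id: Option<String>,
        /// Command and arguments to run in the PTY session.
        arguments: Vec<String>,
        /// Clamped to at most 60 seconds per the description above.
        timeout_ms: Option<u64>,
    },
}
```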
2025-09-10 17:38:11 -07:00
Michael Bolin
44262d8fd8 fix: ensure output of codex-rs/mcp-types/generate_mcp_types.py matches codex-rs/mcp-types/src/lib.rs (#3439)
https://github.com/openai/codex/pull/3395 updated `mcp-types/src/lib.rs`
by hand, but that file is generated code that is produced by
`mcp-types/generate_mcp_types.py`. Unfortunately, we do not have
anything in CI to verify this right now, but I will address that in a
subsequent PR.

#3395 ended up introducing a change that added a required field when
deserializing `InitializeResult`, breaking Codex when used as an MCP
client, so the quick fix in #3436 was to make the new field an `Option`
with `skip_serializing_if = "Option::is_none"`, but that did not address
the problem that `mcp-types/generate_mcp_types.py` and
`mcp-types/src/lib.rs` are out of sync.

This PR gets things back to where they are in sync. It removes the
custom `mcp_types::McpClientInfo` type that was added to
`mcp-types/src/lib.rs` and forces us to use the generated
`mcp_types::Implementation` type. Though this PR also updates
`generate_mcp_types.py` to generate the additional `user_agent:
Option<String>` field on `Implementation` so that we can continue to
specify it when Codex operates as an MCP server.

However, this also requires us to specify `user_agent: None` when Codex
operates as an MCP client.

We may want to introduce our own `InitializeResult` type that is
specific to when we run as a server to avoid this in the future, but my
immediate goal is just to get things back in sync.
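
For reference, the regenerated `Implementation` type presumably ends up along these lines (a sketch consistent with the description above; the actual generated code may differ):

```rust
use serde::{Deserialize, Serialize};

/// Sketch of the generated `mcp_types::Implementation` with the added field.
#[derive(Debug, Serialize, Deserialize)]
pub struct Implementation {
    pub name: String,
    pub version: String,
    /// Added by generate_mcp_types.py; omitted from the wire format when `None`,
    /// and set to `None` when Codex operates as an MCP client.
    #[serde(skip_serializing_if = "Option::is_none")]
    pub user_agent: Option<String>,
}
```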
2025-09-10 16:14:41 -07:00
Jeremy Rose
95a9938d3a fix trampling projects table when accepting trusted dirs (#3434)
Co-authored-by: Codex <199175422+chatgpt-codex-connector[bot]@users.noreply.github.com>
2025-09-10 23:01:31 +00:00
Jeremy Rose
f69f07b028 put workspace roots in the environment context (#3375)
to keep the tool description constant when the writable roots change.
2025-09-10 15:10:52 -07:00
dedrisian-oai
87654ec0b7 Persist model & reasoning changes (#2799)
Persists `/model` changes across both general and profile-specific
sessions.
2025-09-10 20:53:46 +00:00
Michael Bolin
51d9e05de7 Back out "feat: POSIX unification and snapshot sessions (#3179)" (#3430)
This reverts https://github.com/openai/codex/pull/3179.

#3179 appears to introduce a regression where sourcing dotfiles causes a
bunch of activity in the title bar (and potentially slows things down?)


https://github.com/user-attachments/assets/a68f7fb3-0749-4e0e-a321-2aa6993e01da

Verified this no longer happens after backing out #3179.

Original commit changeset: 62bd0e3d9d
2025-09-10 12:40:24 -07:00
Eric Traut
39db113cc9 Added images to UserMessageEvent (#3400)
This PR adds an `images` field to the existing `UserMessageEvent` so we
can encode zero or more images associated with a user message. This
allows images to be restored when conversations are restored.
2025-09-10 10:18:43 -07:00
Ahmed Ibrahim
45bd5ca4b9 Move initial history to protocol (#3422)
To fix an edge case of forking then resuming

#3419
2025-09-10 10:17:24 -07:00
Michael Bolin
c13c3dadbf fix: remove unnecessary #[allow(dead_code)] annotation (#3357) 2025-09-10 08:19:05 -07:00
Gabriel Peal
8636bff46d Set a user agent suffix when used as a mcp server (#3395)
This automatically adds a user agent suffix whenever the CLI is used as
an MCP server.
2025-09-10 02:32:57 +00:00
Ahmed Ibrahim
43809a454e Introduce rollout items (#3380)
This PR introduces rollout items. This enables us to include event msgs
and session meta in the rollout.

This is mostly #3214, rebased on main.
2025-09-09 23:52:33 +00:00
Andrew Tan
de6559f2ab Include apply_patch tool for oss models from gpt-oss providers with different naming convention (e.g. openai/gpt-oss-*) (#2811)
Model providers like Groq, Openrouter, AWS Bedrock, VertexAI and others
typically prefix the name of gpt-oss models with `openai`, e.g.
`openai/gpt-oss-120b`.

This PR matches the model name slug using `contains` instead of
`starts_with` to ensure that the `apply_patch` tool is included in the
tools for model names like `openai/gpt-oss-120b`.
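
The matching change is essentially the following (an illustrative helper; the real check lives in the tool-selection logic):

```rust
/// Illustrative: decide whether a model slug refers to a gpt-oss model, so the
/// `apply_patch` tool is also included for provider-prefixed slugs.
fn is_gpt_oss_model(slug: &str) -> bool {
    // Previously `slug.starts_with("gpt-oss")`, which misses names like
    // `openai/gpt-oss-120b`.
    slug.contains("gpt-oss")
}

fn main() {
    assert!(is_gpt_oss_model("gpt-oss-20b"));
    assert!(is_gpt_oss_model("openai/gpt-oss-120b"));
    println!("both slugs match");
}
```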

Without this, the gpt-oss models will often try to call the
`apply_patch` tool directly instead of via the `shell` command, leading
to validation errors.

I have run all the local checks.

Note: The gpt-oss models from non-Ollama providers are typically run via
a profile with a different base_url (instead of with the `--oss` flag)

---------

Co-authored-by: Andrew Tan <andrewtan@Andrews-Mac.local>
2025-09-09 15:02:02 -07:00
Gabriel Peal
5eab4c7ab4 Replace config.responses_originator_header_internal_override with CODEX_INTERNAL_ORIGINATOR_OVERRIDE_ENV_VAR (#3388)
The previous config approach had a few issues:
1. It is part of the config but not designed to be used externally.
2. It had to be wired through many places (look at the +/- on this PR).
3. It wasn't guaranteed to be set consistently everywhere because we
don't have a super well-defined way that configs stack. For example, the
extension would configure it during newConversation, but anything that
happened outside of that (like login) wouldn't get it.

This env var approach is cleaner and also creates one less thing we have
to deal with when coming up with a better holistic story around configs.

One downside is that I removed the unit test for the override because I
don't want to deal with setting the global env or spawning child
processes and figuring out how to introspect their originator header.
The new code is sufficiently simple, and I tested it e2e, so I feel this
is still worth it.
2025-09-09 17:23:23 -04:00
Wang
ac8a3155d6 feat(core): re-export InitialHistory from conversation_manager (#3270)
This commit adds a re-export for InitialHistory from the internal
conversation_manager module in codex-core's lib.rs.

The `RolloutRecorder::get_rollout_history` method (exposed via `pub use
rollout::RolloutRecorder;`, already present in lib.rs) returns an
`InitialHistory` type, which is defined in the private
conversation_manager module. Without this re-export, consumers of the
public RolloutRecorder API would not be able to directly use the return
type, as they cannot access the private module. This would result in an
inconvenient experience where the method's return value cannot be
handled without additional, non-obvious imports.

By adding `pub use conversation_manager::InitialHistory;`, we make
InitialHistory available as `codex_core::InitialHistory`, improving API
ergonomics for users of the rollout functionality while keeping the
conversation_manager module internal.

No functional changes are made; this is a pure re-export for better
usability.

Signed-off-by: M4n5ter <m4n5terrr@gmail.com>
2025-09-09 10:37:08 -07:00
Michael Bolin
ace14e8d36 feat: add ArchiveConversation to ClientRequest (#3353)
Adds support for `ArchiveConversation` in the JSON-RPC server that takes
a `(ConversationId, PathBuf)` pair and:

- verifies the `ConversationId` corresponds to the rollout id at the
`PathBuf`
- if so, invokes
`ConversationManager.remove_conversation(ConversationId)`
- if the `CodexConversation` was in memory, send `Shutdown` and wait for
`ShutdownComplete` with a timeout
- moves the `.jsonl` file to `$CODEX_HOME/archived_sessions`
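
A simplified sketch of the file-move part of that flow, assuming a hypothetical helper (the real handler is async, verifies the rollout ID inside the file, and waits for `ShutdownComplete` with a timeout first):

```rust
use std::path::{Path, PathBuf};

/// Illustrative only: move an archived conversation's rollout file into
/// `$CODEX_HOME/archived_sessions` after the conversation has shut down.
fn move_rollout_to_archive(codex_home: &Path, rollout_path: &Path) -> std::io::Result<PathBuf> {
    let archived_dir = codex_home.join("archived_sessions");
    std::fs::create_dir_all(&archived_dir)?;
    let file_name = rollout_path
        .file_name()
        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::InvalidInput, "not a file"))?;
    let destination = archived_dir.join(file_name);
    std::fs::rename(rollout_path, &destination)?;
    Ok(destination)
}
```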

---------

Co-authored-by: Gabriel Peal <gabriel@openai.com>
2025-09-09 11:39:00 -04:00
Michael Bolin
2a76a08a9e fix: include rollout_path in NewConversationResponse (#3352)
Adding the `rollout_path` to the `NewConversationResponse` makes it so a
client can perform subsequent operations on a `(ConversationId,
PathBuf)` pair. #3353 will introduce support for `ArchiveConversation`.

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/3352).
* #3353
* __->__ #3352
2025-09-09 00:11:48 -07:00
jif-oai
62bd0e3d9d feat: POSIX unification and snapshot sessions (#3179)
## Session snapshot
For the POSIX shell, the goal is to take a snapshot of the interactive
shell environment, store it in a session file located in `.codex/`, and
source only this file for every command that is run. As a result, if a
snapshot file exists, `bash -lc <CALL>` gets replaced by `bash -c <CALL>`
(see the sketch below).

This also fixes the issue that `bash -lc` does not source `.bashrc`,
resulting in missing env variables and aliases in the Codex session.
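
In effect, command construction changes roughly like this once a snapshot exists (a sketch of the idea with an assumed helper, not the actual implementation):

```rust
/// Illustrative: wrap a command for the POSIX shell, sourcing the session
/// snapshot instead of relying on `bash -lc` login-shell startup files.
fn wrap_command(call: &str, snapshot_path: Option<&str>) -> Vec<String> {
    match snapshot_path {
        // Snapshot available: source it, then run the call in a non-login shell.
        Some(path) => vec![
            "bash".to_string(),
            "-c".to_string(),
            format!(". '{path}' && {call}"),
        ],
        // No snapshot yet: fall back to the previous login-shell invocation.
        None => vec!["bash".to_string(), "-lc".to_string(), call.to_string()],
    }
}
```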
## POSIX unification
Unify the `bash` and `zsh` shells into a POSIX shell. The rationale is
that the tool will not use any `zsh`-specific capabilities.

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
2025-09-08 18:09:45 -07:00