llmx/codex-rs/core/src/codex.rs

use std::borrow::Cow;
use std::collections::HashMap;
use std::collections::HashSet;
use std::path::PathBuf;
use std::sync::Arc;
use std::sync::Mutex;
use std::sync::MutexGuard;
use std::sync::atomic::AtomicU64;
use std::time::Duration;
use crate::AuthManager;
use crate::config_edit::CONFIG_KEY_EFFORT;
use crate::config_edit::CONFIG_KEY_MODEL;
use crate::config_edit::persist_non_null_overrides;
use crate::event_mapping::map_response_item_to_event_messages;
use async_channel::Receiver;
use async_channel::Sender;
use codex_apply_patch::ApplyPatchAction;
use codex_apply_patch::MaybeApplyPatchVerified;
use codex_apply_patch::maybe_parse_apply_patch_verified;
use codex_protocol::mcp_protocol::ConversationId;
use codex_protocol::protocol::CompactedItem;
use codex_protocol::protocol::ConversationPathResponseEvent;
use codex_protocol::protocol::RolloutItem;
use codex_protocol::protocol::TaskStartedEvent;
use codex_protocol::protocol::TurnAbortReason;
use codex_protocol::protocol::TurnAbortedEvent;
use codex_protocol::protocol::TurnContextItem;
use futures::prelude::*;
use mcp_types::CallToolResult;
use serde::Deserialize;
use serde::Serialize;
use serde_json;
use tokio::sync::oneshot;
use tokio::task::AbortHandle;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing::trace;
use tracing::warn;
use crate::ModelProviderInfo;
use crate::apply_patch;
use crate::apply_patch::ApplyPatchExec;
use crate::apply_patch::CODEX_APPLY_PATCH_ARG1;
use crate::apply_patch::InternalApplyPatchInvocation;
use crate::apply_patch::convert_apply_patch_to_protocol;
use crate::client::ModelClient;
use crate::client_common::Prompt;
use crate::client_common::ResponseEvent;
use crate::config::Config;
use crate::config_types::ShellEnvironmentPolicy;
use crate::conversation_history::ConversationHistory;
use crate::environment_context::EnvironmentContext;
use crate::error::CodexErr;
use crate::error::Result as CodexResult;
use crate::error::SandboxErr;
use crate::error::get_error_message_ui;
use crate::exec::ExecParams;
use crate::exec::ExecToolCallOutput;
use crate::exec::SandboxType;
use crate::exec::StdoutStream;
use crate::exec::StreamOutput;
use crate::exec::process_exec_tool_call;
use crate::exec_command::EXEC_COMMAND_TOOL_NAME;
use crate::exec_command::ExecCommandParams;
use crate::exec_command::ExecSessionManager;
use crate::exec_command::WRITE_STDIN_TOOL_NAME;
use crate::exec_command::WriteStdinParams;
use crate::exec_env::create_env;
use crate::mcp_connection_manager::McpConnectionManager;
use crate::mcp_tool_call::handle_mcp_tool_call;
use crate::model_family::find_family_for_model;
use crate::openai_model_info::get_model_info;
use crate::openai_tools::ApplyPatchToolArgs;
use crate::openai_tools::ToolsConfig;
use crate::openai_tools::ToolsConfigParams;
use crate::openai_tools::get_openai_tools;
use crate::parse_command::parse_command;
use crate::plan_tool::handle_update_plan;
use crate::project_doc::get_user_instructions;
use crate::protocol::AgentMessageDeltaEvent;
use crate::protocol::AgentMessageEvent;
use crate::protocol::AgentReasoningDeltaEvent;
use crate::protocol::AgentReasoningRawContentDeltaEvent;
use crate::protocol::AgentReasoningSectionBreakEvent;
use crate::protocol::ApplyPatchApprovalRequestEvent;
use crate::protocol::AskForApproval;
use crate::protocol::BackgroundEventEvent;
use crate::protocol::ErrorEvent;
use crate::protocol::Event;
use crate::protocol::EventMsg;
use crate::protocol::ExecApprovalRequestEvent;
use crate::protocol::ExecCommandBeginEvent;
use crate::protocol::ExecCommandEndEvent;
use crate::protocol::FileChange;
use crate::protocol::InputItem;
use crate::protocol::ListCustomPromptsResponseEvent;
use crate::protocol::Op;
use crate::protocol::PatchApplyBeginEvent;
use crate::protocol::PatchApplyEndEvent;
use crate::protocol::ReviewDecision;
use crate::protocol::SandboxPolicy;
use crate::protocol::SessionConfiguredEvent;
use crate::protocol::StreamErrorEvent;
use crate::protocol::Submission;
use crate::protocol::TaskCompleteEvent;
use crate::protocol::TokenUsageInfo;
use crate::protocol::TurnDiffEvent;
use crate::protocol::WebSearchBeginEvent;
use crate::rollout::RolloutRecorder;
use crate::rollout::RolloutRecorderParams;
use crate::safety::SafetyCheck;
use crate::safety::assess_command_safety;
use crate::safety::assess_safety_for_untrusted_command;
use crate::shell;
use crate::turn_diff_tracker::TurnDiffTracker;
use crate::unified_exec::UnifiedExecSessionManager;
use crate::user_instructions::UserInstructions;
use crate::user_notification::UserNotification;
use crate::util::backoff;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::config_types::ReasoningSummary as ReasoningSummaryConfig;
use codex_protocol::custom_prompts::CustomPrompt;
use codex_protocol::models::ContentItem;
use codex_protocol::models::FunctionCallOutputPayload;
use codex_protocol::models::LocalShellAction;
use codex_protocol::models::ResponseInputItem;
use codex_protocol::models::ResponseItem;
use codex_protocol::models::ShellToolCallParams;
use codex_protocol::protocol::InitialHistory;
// A convenience extension trait for acquiring mutex locks where poisoning is
// unrecoverable and should abort the program. This avoids scattered `.unwrap()`
// calls on `lock()` while still surfacing a clear panic message when a lock is
// poisoned.
trait MutexExt<T> {
    fn lock_unchecked(&self) -> MutexGuard<'_, T>;
}

impl<T> MutexExt<T> for Mutex<T> {
    fn lock_unchecked(&self) -> MutexGuard<'_, T> {
        #[expect(clippy::expect_used)]
        self.lock().expect("poisoned lock")
    }
}
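
// A minimal usage sketch of `lock_unchecked`, assuming nothing beyond the
// trait above; the module and test names are illustrative only.
#[cfg(test)]
mod mutex_ext_example {
    use super::MutexExt;
    use std::sync::Mutex;

    #[test]
    fn lock_unchecked_matches_lock_unwrap() {
        let counter = Mutex::new(0u64);
        // Equivalent to `*counter.lock().unwrap() += 1`, but panics with the
        // clearer message "poisoned lock" if the mutex was poisoned.
        *counter.lock_unchecked() += 1;
        assert_eq!(*counter.lock_unchecked(), 1);
    }
}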
/// The high-level interface to the Codex system.
/// It operates as a queue pair where you send submissions and receive events.
pub struct Codex {
    next_id: AtomicU64,
    tx_sub: Sender<Submission>,
    rx_event: Receiver<Event>,
}
/// Wrapper returned by [`Codex::spawn`] containing the spawned [`Codex`] and
/// the unique conversation id for the session.
pub struct CodexSpawnOk {
    pub codex: Codex,
    pub conversation_id: ConversationId,
}
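
// A hedged usage sketch for `Codex::spawn` (assumes a tokio context and an
// `InitialHistory::New` variant for a fresh session; both are assumptions,
// not guarantees made by this file):
//
//     let CodexSpawnOk { codex, conversation_id } =
//         Codex::spawn(config, auth_manager, InitialHistory::New).await?;
//     // `codex` then acts as a queue pair: submissions in, events out.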
pub(crate) const INITIAL_SUBMIT_ID: &str = "";
pub(crate) const SUBMISSION_CHANNEL_CAPACITY: usize = 64;
// Model-formatting limits: clients get full streams; only content sent to the model is truncated.
pub(crate) const MODEL_FORMAT_MAX_BYTES: usize = 10 * 1024; // 10 KiB
pub(crate) const MODEL_FORMAT_MAX_LINES: usize = 256; // lines
pub(crate) const MODEL_FORMAT_HEAD_LINES: usize = MODEL_FORMAT_MAX_LINES / 2;
pub(crate) const MODEL_FORMAT_TAIL_LINES: usize = MODEL_FORMAT_MAX_LINES - MODEL_FORMAT_HEAD_LINES; // 128
pub(crate) const MODEL_FORMAT_HEAD_BYTES: usize = MODEL_FORMAT_MAX_BYTES / 2;
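
// A hedged sketch of how the MODEL_FORMAT_* limits can be applied: keep the
// head and tail of an oversized output and elide the middle before sending it
// to the model. The helper and test below are hypothetical illustrations, not
// APIs defined in this file.
#[cfg(test)]
mod model_format_limits_example {
    use super::*;

    fn truncate_lines_for_model(output: &str) -> String {
        let lines: Vec<&str> = output.lines().collect();
        if lines.len() <= MODEL_FORMAT_MAX_LINES {
            return output.to_string();
        }
        // MODEL_FORMAT_HEAD_LINES + MODEL_FORMAT_TAIL_LINES equals
        // MODEL_FORMAT_MAX_LINES, so the two slices never overlap here.
        let head = lines[..MODEL_FORMAT_HEAD_LINES].join("\n");
        let tail = lines[lines.len() - MODEL_FORMAT_TAIL_LINES..].join("\n");
        let elided = lines.len() - MODEL_FORMAT_MAX_LINES;
        format!("{head}\n[... {elided} lines elided ...]\n{tail}")
    }

    #[test]
    fn keeps_head_and_tail_of_long_output() {
        let input = (0..300).map(|i| i.to_string()).collect::<Vec<_>>().join("\n");
        let out = truncate_lines_for_model(&input);
        assert!(out.starts_with("0\n1\n"));
        assert!(out.ends_with("298\n299"));
        assert!(out.contains("[... 44 lines elided ...]"));
    }
}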
impl Codex {
    /// Spawn a new [`Codex`] and initialize the session.
    pub async fn spawn(
        config: Config,
        auth_manager: Arc<AuthManager>,
        conversation_history: InitialHistory,
    ) -> CodexResult<CodexSpawnOk> {
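        // Submissions flow through a bounded channel (applying back-pressure
        // to callers), while events use an unbounded channel so emitting an
        // event never blocks.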
        let (tx_sub, rx_sub) = async_channel::bounded(SUBMISSION_CHANNEL_CAPACITY);
        let (tx_event, rx_event) = async_channel::unbounded();
        let user_instructions = get_user_instructions(&config).await;
let config = Arc::new(config);
let configure_session = ConfigureSession {
provider: config.model_provider.clone(),
model: config.model.clone(),
model_reasoning_effort: config.model_reasoning_effort,
model_reasoning_summary: config.model_reasoning_summary,
user_instructions,
base_instructions: config.base_instructions.clone(),
approval_policy: config.approval_policy,
sandbox_policy: config.sandbox_policy.clone(),
notify: config.notify.clone(),
cwd: config.cwd.clone(),
};
// Create the Session and the initial TurnContext; a unique conversation ID is
// generated here (or recovered when resuming) for the lifetime of this Codex session.
let (session, turn_context) = Session::new(
configure_session,
config.clone(),
auth_manager.clone(),
tx_event.clone(),
conversation_history.clone(),
)
.await
.map_err(|e| {
error!("Failed to create session: {e:#}");
CodexErr::InternalAgentDied
})?;
let conversation_id = session.conversation_id;
// This task will run until Op::Shutdown is received.
tokio::spawn(submission_loop(session, turn_context, config, rx_sub));
let codex = Codex {
next_id: AtomicU64::new(0),
tx_sub,
rx_event,
};
Ok(CodexSpawnOk {
codex,
conversation_id,
})
}
/// Submit the `op` wrapped in a `Submission` with a unique ID.
pub async fn submit(&self, op: Op) -> CodexResult<String> {
let id = self
.next_id
.fetch_add(1, std::sync::atomic::Ordering::SeqCst)
.to_string();
let sub = Submission { id: id.clone(), op };
self.submit_with_id(sub).await?;
Ok(id)
}
/// Use sparingly: prefer `submit()` so Codex is responsible for generating
/// unique IDs for each submission.
pub async fn submit_with_id(&self, sub: Submission) -> CodexResult<()> {
self.tx_sub
.send(sub)
.await
.map_err(|_| CodexErr::InternalAgentDied)?;
Ok(())
}
pub async fn next_event(&self) -> CodexResult<Event> {
let event = self
.rx_event
.recv()
.await
.map_err(|_| CodexErr::InternalAgentDied)?;
Ok(event)
}
}
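// A minimal usage sketch (hypothetical caller: `config` and `auth_manager` are
// assumed to be constructed elsewhere, and `Op::UserInput` is shown as one
// typical first submission):
//
//     let CodexSpawnOk { codex, conversation_id } =
//         Codex::spawn(config, auth_manager, InitialHistory::New).await?;
//     let _sub_id = codex.submit(Op::UserInput { items }).await?;
//     while let Ok(event) = codex.next_event().await {
//         // Inspect `event.msg` (an `EventMsg`) and stop once the task is
//         // done or `Op::Shutdown` has been acknowledged.
//     }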
/// Mutable state of the agent.
#[derive(Default)]
struct State {
/// Commands the user has already approved for reuse in this session.
approved_commands: HashSet<Vec<String>>,
/// The currently running task, if any; a session runs at most one at a time.
current_task: Option<AgentTask>,
/// In-flight approval requests awaiting a user decision, keyed by submission id.
pending_approvals: HashMap<String, oneshot::Sender<ReviewDecision>>,
/// User input submitted while a task is running, to be injected into the next
/// request to the model.
pending_input: Vec<ResponseInputItem>,
/// Transcript of the conversation so far.
history: ConversationHistory,
/// Most recent token usage information, if any.
token_info: Option<TokenUsageInfo>,
}
/// Context for an initialized model agent
///
/// A session has at most one running task at a time and can be interrupted by user input.
pub(crate) struct Session {
conversation_id: ConversationId,
tx_event: Sender<Event>,
/// Manager for external MCP servers/tools.
mcp_connection_manager: McpConnectionManager,
session_manager: ExecSessionManager,
unified_exec_manager: UnifiedExecSessionManager,
/// External notifier command (will be passed as args to `exec()`). When
/// `None`, this feature is disabled.
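///
/// Example from the #793 notifications change: `notify = ["python3",
/// "/Users/mbolin/.codex/notify.py"]` in `config.toml` arrives here as
/// `Some(vec![...])`; the `UserNotification` payload is serialized to JSON
/// and handed to the program as an extra argument.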
notify: Option<Vec<String>>,
/// Optional rollout recorder for persisting the conversation transcript so
/// sessions can be replayed or inspected later.
rollout: Mutex<Option<RolloutRecorder>>,
state: Mutex<State>,
codex_linux_sandbox_exe: Option<PathBuf>,
user_shell: shell::Shell,
show_raw_agent_reasoning: bool,
}
/// The context needed for a single turn of the conversation.
#[derive(Debug)]
pub(crate) struct TurnContext {
pub(crate) client: ModelClient,
/// The session's current working directory. All relative paths provided by
/// the model as well as sandbox policies are resolved against this path
/// instead of `std::env::current_dir()`.
pub(crate) cwd: PathBuf,
pub(crate) base_instructions: Option<String>,
pub(crate) user_instructions: Option<String>,
pub(crate) approval_policy: AskForApproval,
pub(crate) sandbox_policy: SandboxPolicy,
pub(crate) shell_environment_policy: ShellEnvironmentPolicy,
pub(crate) tools_config: ToolsConfig,
}
impl TurnContext {
fn resolve_path(&self, path: Option<String>) -> PathBuf {
path.as_ref()
.map(PathBuf::from)
.map_or_else(|| self.cwd.clone(), |p| self.cwd.join(p))
}
}
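// A sketch of `resolve_path` semantics (paths hypothetical). `PathBuf::join`
// replaces the base when its argument is absolute, so absolute paths pass
// through unchanged:
//
//     // cwd = "/workspace/repo"
//     // resolve_path(None)                      => "/workspace/repo"
//     // resolve_path(Some("src/lib.rs".into())) => "/workspace/repo/src/lib.rs"
//     // resolve_path(Some("/etc/hosts".into())) => "/etc/hosts"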
/// Parameters used to configure the model session.
struct ConfigureSession {
/// Provider identifier ("openai", "openrouter", ...).
provider: ModelProviderInfo,
/// If not specified, the server will use its default model.
model: String,
model_reasoning_effort: ReasoningEffortConfig,
model_reasoning_summary: ReasoningSummaryConfig,
/// Model instructions that are appended to the base instructions.
user_instructions: Option<String>,
/// Base instructions override.
base_instructions: Option<String>,
/// When to escalate to the user for approval before executing commands.
approval_policy: AskForApproval,
/// How to sandbox commands executed on the system.
sandbox_policy: SandboxPolicy,
/// Optional external notifier command tokens. Present only when the
/// client wants the agent to spawn a program after each completed
/// turn.
notify: Option<Vec<String>>,
/// Working directory that should be treated as the *root* of the
/// session. All relative paths supplied by the model as well as the
/// execution sandbox are resolved against this directory **instead**
/// of the process-wide current working directory. CLI front-ends are
/// expected to expand this to an absolute path before constructing
/// `ConfigureSession` so that the business-logic layer can
/// operate deterministically.
cwd: PathBuf,
}
impl Session {
async fn new(
configure_session: ConfigureSession,
config: Arc<Config>,
auth_manager: Arc<AuthManager>,
tx_event: Sender<Event>,
initial_history: InitialHistory,
) -> anyhow::Result<(Arc<Self>, TurnContext)> {
let ConfigureSession {
provider,
model,
model_reasoning_effort,
model_reasoning_summary,
user_instructions,
base_instructions,
approval_policy,
sandbox_policy,
notify,
cwd,
} = configure_session;
debug!("Configuring session: model={model}; provider={provider:?}");
if !cwd.is_absolute() {
return Err(anyhow::anyhow!("cwd is not absolute: {cwd:?}"));
}
let (conversation_id, rollout_params) = match &initial_history {
InitialHistory::New | InitialHistory::Forked(_) => {
let conversation_id = ConversationId::default();
(
conversation_id,
RolloutRecorderParams::new(conversation_id, user_instructions.clone()),
)
}
InitialHistory::Resumed(resumed_history) => (
resumed_history.conversation_id,
RolloutRecorderParams::resume(resumed_history.rollout_path.clone()),
),
};
// Error messages to dispatch after SessionConfigured is sent.
let mut post_session_configured_error_events = Vec::<Event>::new();
// Kick off independent async setup tasks in parallel to reduce startup latency.
//
// - initialize RolloutRecorder with new or resumed session info
// - spin up MCP connection manager
// - perform default shell discovery
// - load history metadata
let rollout_fut = RolloutRecorder::new(&config, rollout_params);
let mcp_fut = McpConnectionManager::new(config.mcp_servers.clone());
let default_shell_fut = shell::default_user_shell();
let history_meta_fut = crate::message_history::history_metadata(&config);
// Join all independent futures.
let (rollout_recorder, mcp_res, default_shell, (history_log_id, history_entry_count)) =
tokio::join!(rollout_fut, mcp_fut, default_shell_fut, history_meta_fut);
let rollout_recorder = rollout_recorder.map_err(|e| {
error!("failed to initialize rollout recorder: {e:#}");
anyhow::anyhow!("failed to initialize rollout recorder: {e:#}")
})?;
let rollout_path = rollout_recorder.rollout_path.clone();
// Create the mutable state for the Session.
let state = State {
history: ConversationHistory::new(),
..Default::default()
};
// Handle MCP manager result and record any startup failures.
let (mcp_connection_manager, failed_clients) = match mcp_res {
Ok((mgr, failures)) => (mgr, failures),
Err(e) => {
let message = format!("Failed to create MCP connection manager: {e:#}");
error!("{message}");
post_session_configured_error_events.push(Event {
id: INITIAL_SUBMIT_ID.to_owned(),
msg: EventMsg::Error(ErrorEvent { message }),
});
(McpConnectionManager::default(), Default::default())
}
};
// Surface individual client start-up failures to the user.
for (server_name, err) in failed_clients {
let message = format!("MCP client for `{server_name}` failed to start: {err:#}");
error!("{message}");
post_session_configured_error_events.push(Event {
id: INITIAL_SUBMIT_ID.to_owned(),
msg: EventMsg::Error(ErrorEvent { message }),
});
}
// Now that the conversation id is final (may have been updated by resume),
// construct the model client.
let client = ModelClient::new(
config.clone(),
Some(auth_manager.clone()),
provider.clone(),
model_reasoning_effort,
model_reasoning_summary,
conversation_id,
);
let turn_context = TurnContext {
client,
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &config.model_family,
approval_policy,
sandbox_policy: sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
}),
user_instructions,
base_instructions,
approval_policy,
sandbox_policy,
shell_environment_policy: config.shell_environment_policy.clone(),
cwd,
};
let sess = Arc::new(Session {
conversation_id,
tx_event: tx_event.clone(),
mcp_connection_manager,
session_manager: ExecSessionManager::default(),
unified_exec_manager: UnifiedExecSessionManager::default(),
notify,
state: Mutex::new(state),
rollout: Mutex::new(Some(rollout_recorder)),
codex_linux_sandbox_exe: config.codex_linux_sandbox_exe.clone(),
user_shell: default_shell,
show_raw_agent_reasoning: config.show_raw_agent_reasoning,
});
// Dispatch the SessionConfiguredEvent first and then report any errors.
// If resuming, include converted initial messages in the payload so UIs can render them immediately.
let initial_messages = initial_history.get_event_msgs();
sess.record_initial_history(&turn_context, initial_history)
.await;
let events = std::iter::once(Event {
id: INITIAL_SUBMIT_ID.to_owned(),
msg: EventMsg::SessionConfigured(SessionConfiguredEvent {
session_id: conversation_id,
model,
history_log_id,
history_entry_count,
initial_messages,
rollout_path,
}),
})
.chain(post_session_configured_error_events.into_iter());
for event in events {
sess.send_event(event).await;
}
Ok((sess, turn_context))
}
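    // Illustrative ordering check (hypothetical client code, not part of this
    // file): a consumer draining the event stream should observe
    // `SessionConfigured` before any of the MCP initialization errors queued
    // in `post_session_configured_error_events`:
    //
    //     let Event { msg, .. } = codex.next_event().await?;
    //     assert!(matches!(msg, EventMsg::SessionConfigured(_)));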
pub fn set_task(&self, task: AgentTask) {
let mut state = self.state.lock_unchecked();
if let Some(current_task) = state.current_task.take() {
current_task.abort(TurnAbortReason::Replaced);
}
state.current_task = Some(task);
}
pub fn remove_task(&self, sub_id: &str) {
let mut state = self.state.lock_unchecked();
if let Some(task) = &state.current_task
&& task.sub_id == sub_id
{
state.current_task.take();
}
}
async fn record_initial_history(
&self,
turn_context: &TurnContext,
conversation_history: InitialHistory,
) {
match conversation_history {
InitialHistory::New => {
// Build and record initial items (user instructions + environment context)
let items = self.build_initial_context(turn_context);
self.record_conversation_items(&items).await;
}
InitialHistory::Resumed(_) | InitialHistory::Forked(_) => {
let rollout_items = conversation_history.get_rollout_items();
let persist = matches!(conversation_history, InitialHistory::Forked(_));
// Always add response items to conversation history
let response_items = conversation_history.get_response_items();
if !response_items.is_empty() {
self.record_into_history(&response_items);
}
// If persisting, persist all rollout items as-is (recorder filters)
if persist && !rollout_items.is_empty() {
self.persist_rollout_items(&rollout_items).await;
}
}
}
}
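    // Behavior summary, derived from the match above:
    //
    //     InitialHistory::New     -> build and record the initial context
    //                                (user instructions + environment context)
    //     InitialHistory::Resumed -> re-record response items in memory only;
    //                                the rollout file already contains them
    //     InitialHistory::Forked  -> re-record response items in memory and
    //                                persist the rollout items into the new
    //                                rollout file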
/// Persist the event to rollout and send it to clients.
pub(crate) async fn send_event(&self, event: Event) {
// Persist the event into rollout (recorder filters as needed)
let rollout_items = vec![RolloutItem::EventMsg(event.msg.clone())];
self.persist_rollout_items(&rollout_items).await;
if let Err(e) = self.tx_event.send(event).await {
error!("failed to send tool call event: {e}");
}
}
pub async fn request_command_approval(
&self,
sub_id: String,
call_id: String,
command: Vec<String>,
cwd: PathBuf,
reason: Option<String>,
) -> oneshot::Receiver<ReviewDecision> {
// Add the tx_approve callback to the map before sending the request.
let (tx_approve, rx_approve) = oneshot::channel();
let event_id = sub_id.clone();
let prev_entry = {
let mut state = self.state.lock_unchecked();
state.pending_approvals.insert(sub_id, tx_approve)
};
if prev_entry.is_some() {
warn!("Overwriting existing pending approval for sub_id: {event_id}");
}
let event = Event {
id: event_id,
msg: EventMsg::ExecApprovalRequest(ExecApprovalRequestEvent {
call_id,
command,
cwd,
reason,
}),
};
self.send_event(event).await;
rx_approve
}
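    // Illustrative round trip (hypothetical caller, not part of this file):
    //
    //     let rx = sess
    //         .request_command_approval(sub_id.clone(), call_id, command, cwd, None)
    //         .await;
    //     // ...the client later answers via `notify_approval(&sub_id, decision)`...
    //     let decision = rx.await.unwrap_or(ReviewDecision::Denied);
    //
    // Registering the callback in `pending_approvals` before `send_event` is
    // what closes the race where a fast client could respond before the
    // callback existed.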
pub async fn request_patch_approval(
&self,
sub_id: String,
call_id: String,
action: &ApplyPatchAction,
reason: Option<String>,
grant_root: Option<PathBuf>,
) -> oneshot::Receiver<ReviewDecision> {
// Add the tx_approve callback to the map before sending the request.
let (tx_approve, rx_approve) = oneshot::channel();
let event_id = sub_id.clone();
let prev_entry = {
let mut state = self.state.lock_unchecked();
state.pending_approvals.insert(sub_id, tx_approve)
};
if prev_entry.is_some() {
warn!("Overwriting existing pending approval for sub_id: {event_id}");
}
let event = Event {
id: event_id,
msg: EventMsg::ApplyPatchApprovalRequest(ApplyPatchApprovalRequestEvent {
call_id,
changes: convert_apply_patch_to_protocol(action),
reason,
grant_root,
}),
};
self.send_event(event).await;
rx_approve
}
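    // Mirrors `request_command_approval` above: register the callback first,
    // then emit `ApplyPatchApprovalRequest`. `grant_root`, when present,
    // proposes a root directory the user may grant write access to.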
pub fn notify_approval(&self, sub_id: &str, decision: ReviewDecision) {
let entry = {
let mut state = self.state.lock_unchecked();
state.pending_approvals.remove(sub_id)
};
match entry {
Some(tx_approve) => {
tx_approve.send(decision).ok();
}
None => {
warn!("No pending approval found for sub_id: {sub_id}");
}
}
}
pub fn add_approved_command(&self, cmd: Vec<String>) {
let mut state = self.state.lock_unchecked();
state.approved_commands.insert(cmd);
}
/// Records input items: always append to conversation history and
/// persist these response items to rollout.
async fn record_conversation_items(&self, items: &[ResponseItem]) {
self.record_into_history(items);
self.persist_rollout_response_items(items).await;
}
/// Append ResponseItems to the in-memory conversation history only.
fn record_into_history(&self, items: &[ResponseItem]) {
self.state
.lock_unchecked()
.history
.record_items(items.iter());
}
async fn persist_rollout_response_items(&self, items: &[ResponseItem]) {
let rollout_items: Vec<RolloutItem> = items
.iter()
.cloned()
.map(RolloutItem::ResponseItem)
.collect();
self.persist_rollout_items(&rollout_items).await;
}
fn build_initial_context(&self, turn_context: &TurnContext) -> Vec<ResponseItem> {
let mut items = Vec::<ResponseItem>::with_capacity(2);
if let Some(user_instructions) = turn_context.user_instructions.as_deref() {
items.push(UserInstructions::new(user_instructions.to_string()).into());
}
items.push(ResponseItem::from(EnvironmentContext::new(
Some(turn_context.cwd.clone()),
Some(turn_context.approval_policy),
Some(turn_context.sandbox_policy.clone()),
Some(self.user_shell.clone()),
)));
items
}
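    // Resulting shape (sketch): at most two items, in order:
    //
    //     [ UserInstructions(...),  // only when user instructions are set
    //       EnvironmentContext { cwd, approval_policy, sandbox_policy, shell } ]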
async fn persist_rollout_items(&self, items: &[RolloutItem]) {
let recorder = {
let guard = self.rollout.lock_unchecked();
guard.as_ref().cloned()
};
if let Some(rec) = recorder
&& let Err(e) = rec.record_items(items).await
{
error!("failed to record rollout items: {e:#}");
}
}
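    // Note: the recorder is cloned out of the mutex above so that
    // `record_items(..)` is awaited without holding the lock across the
    // await point.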
/// Record a user input item to conversation history and also persist a
/// corresponding UserMessage EventMsg to rollout.
async fn record_input_and_rollout_usermsg(&self, response_input: &ResponseInputItem) {
let response_item: ResponseItem = response_input.clone().into();
// Add to conversation history and persist response item to rollout
self.record_conversation_items(std::slice::from_ref(&response_item))
.await;
// Derive user message events and persist only UserMessage to rollout
let msgs =
map_response_item_to_event_messages(&response_item, self.show_raw_agent_reasoning);
let user_msgs: Vec<RolloutItem> = msgs
.into_iter()
.filter_map(|m| match m {
EventMsg::UserMessage(ev) => Some(RolloutItem::EventMsg(EventMsg::UserMessage(ev))),
_ => None,
})
.collect();
if !user_msgs.is_empty() {
self.persist_rollout_items(&user_msgs).await;
}
}
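    // Sketch of the filtering above: only the `EventMsg::UserMessage` variants
    // derived from this response item are persisted as rollout events; any
    // other derived events are intentionally dropped here.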
async fn on_exec_command_begin(
&self,
turn_diff_tracker: &mut TurnDiffTracker,
exec_command_context: ExecCommandContext,
) {
let ExecCommandContext {
sub_id,
call_id,
command_for_display,
cwd,
apply_patch,
} = exec_command_context;
let msg = match apply_patch {
Some(ApplyPatchCommandContext {
user_explicitly_approved_this_action,
changes,
}) => {
turn_diff_tracker.on_patch_begin(&changes);
EventMsg::PatchApplyBegin(PatchApplyBeginEvent {
call_id,
auto_approved: !user_explicitly_approved_this_action,
changes,
})
}
None => EventMsg::ExecCommandBegin(ExecCommandBeginEvent {
call_id,
command: command_for_display.clone(),
cwd,
parsed_cmd: parse_command(&command_for_display)
.into_iter()
.map(Into::into)
.collect(),
}),
};
let event = Event {
id: sub_id.to_string(),
msg,
};
self.send_event(event).await;
}
async fn on_exec_command_end(
&self,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: &str,
call_id: &str,
output: &ExecToolCallOutput,
is_apply_patch: bool,
) {
let ExecToolCallOutput {
stdout,
stderr,
aggregated_output,
duration,
exit_code,
} = output;
// Send full stdout/stderr to clients; do not truncate.
let stdout = stdout.text.clone();
let stderr = stderr.text.clone();
let formatted_output = format_exec_output_str(output);
let aggregated_output: String = aggregated_output.text.clone();
let msg = if is_apply_patch {
EventMsg::PatchApplyEnd(PatchApplyEndEvent {
call_id: call_id.to_string(),
stdout,
stderr,
success: *exit_code == 0,
})
} else {
EventMsg::ExecCommandEnd(ExecCommandEndEvent {
call_id: call_id.to_string(),
stdout,
stderr,
aggregated_output,
exit_code: *exit_code,
duration: *duration,
formatted_output,
})
};
let event = Event {
id: sub_id.to_string(),
msg,
};
self.send_event(event).await;
// If this is an apply_patch, follow the PatchApplyEnd event with a second
// event carrying the full unified diff for the turn, if there is one.
if is_apply_patch {
let unified_diff = turn_diff_tracker.get_unified_diff();
if let Ok(Some(unified_diff)) = unified_diff {
let msg = EventMsg::TurnDiff(TurnDiffEvent { unified_diff });
let event = Event {
id: sub_id.into(),
msg,
};
self.send_event(event).await;
}
}
}
/// Runs the exec tool call and emits events for the begin and end of the
/// command even on error.
///
/// Returns the output of the exec tool call.
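///
/// Event ordering (a sketch of the behavior below): the Begin event
/// (`ExecCommandBegin` or `PatchApplyBegin`) is emitted before the command
/// runs; the matching End event is emitted even if `process_exec_tool_call`
/// fails, in which case a synthetic `ExecToolCallOutput` with
/// `exit_code: -1` and the error message as stderr stands in for the real
/// output.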
async fn run_exec_with_events<'a>(
&self,
turn_diff_tracker: &mut TurnDiffTracker,
begin_ctx: ExecCommandContext,
exec_args: ExecInvokeArgs<'a>,
) -> crate::error::Result<ExecToolCallOutput> {
let is_apply_patch = begin_ctx.apply_patch.is_some();
let sub_id = begin_ctx.sub_id.clone();
let call_id = begin_ctx.call_id.clone();
self.on_exec_command_begin(turn_diff_tracker, begin_ctx.clone())
.await;
let result = process_exec_tool_call(
exec_args.params,
exec_args.sandbox_type,
exec_args.sandbox_policy,
exec_args.codex_linux_sandbox_exe,
exec_args.stdout_stream,
)
.await;
let output_stderr;
let borrowed: &ExecToolCallOutput = match &result {
Ok(output) => output,
Err(e) => {
output_stderr = ExecToolCallOutput {
exit_code: -1,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(get_error_message_ui(e)),
aggregated_output: StreamOutput::new(get_error_message_ui(e)),
duration: Duration::default(),
};
&output_stderr
}
};
self.on_exec_command_end(
turn_diff_tracker,
&sub_id,
&call_id,
borrowed,
is_apply_patch,
)
.await;
result
}
/// Helper that emits a BackgroundEvent with the given message. This keeps
/// the callsites terse so adding more diagnostics does not clutter the
/// core agent logic.
async fn notify_background_event(&self, sub_id: &str, message: impl Into<String>) {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::BackgroundEvent(BackgroundEventEvent {
message: message.into(),
}),
};
self.send_event(event).await;
}
async fn notify_stream_error(&self, sub_id: &str, message: impl Into<String>) {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::StreamError(StreamErrorEvent {
message: message.into(),
}),
};
self.send_event(event).await;
}
/// Build the full turn input by concatenating the current conversation
/// history with additional items for this turn.
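///
/// For example, `turn_input_with_history(vec![new_user_message])` yields
/// `[history..., new_user_message]`: history first, this turn's items last.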
pub fn turn_input_with_history(&self, extra: Vec<ResponseItem>) -> Vec<ResponseItem> {
[self.state.lock_unchecked().history.contents(), extra].concat()
}
/// Attempts to inject input into the current task; if no task is running,
/// returns the input back to the caller as `Err`.
pub fn inject_input(&self, input: Vec<InputItem>) -> Result<(), Vec<InputItem>> {
let mut state = self.state.lock_unchecked();
if state.current_task.is_some() {
state.pending_input.push(input.into());
Ok(())
} else {
Err(input)
}
}
pub fn get_pending_input(&self) -> Vec<ResponseInputItem> {
    let mut state = self.state.lock_unchecked();
    // `mem::take` drains the queue, leaving an empty Vec in its place.
    std::mem::take(&mut state.pending_input)
}
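// Illustrative flow for queued input (a sketch, not a quoted callsite):
//
//     if let Err(items) = sess.inject_input(items) {
//         // No task is running: the caller spawns a fresh AgentTask with
//         // `items` instead of queuing them.
//     }
//     // ...while a task runs, it periodically drains the queue:
//     let queued: Vec<ResponseInputItem> = sess.get_pending_input();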
pub async fn call_tool(
&self,
server: &str,
tool: &str,
arguments: Option<serde_json::Value>,
timeout: Option<Duration>,
) -> anyhow::Result<CallToolResult> {
self.mcp_connection_manager
.call_tool(server, tool, arguments, timeout)
.await
}
fn interrupt_task(&self) {
info!("interrupt received: abort current task, if any");
let mut state = self.state.lock_unchecked();
state.pending_approvals.clear();
state.pending_input.clear();
if let Some(task) = state.current_task.take() {
task.abort(TurnAbortReason::Interrupted);
}
}
/// Spawn the configured notifier (if any) with the given JSON payload as
/// the last argument. Failures are logged but otherwise ignored so that
/// notification issues do not interfere with the main workflow.
fn maybe_notify(&self, notification: UserNotification) {
let Some(notify_command) = &self.notify else {
return;
};
if notify_command.is_empty() {
return;
}
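// Payload shape sketch (assuming the kebab-case `AgentTurnComplete`
// variant of `UserNotification`):
//
//     {"type":"agent-turn-complete","turn-id":"...",
//      "input-messages":["..."],"last-assistant-message":"..."}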
let Ok(json) = serde_json::to_string(&notification) else {
error!("failed to serialise notification payload");
return;
};
let mut command = std::process::Command::new(&notify_command[0]);
if notify_command.len() > 1 {
command.args(&notify_command[1..]);
}
command.arg(json);
// Fire-and-forget: we do not wait for the notifier to complete.
if let Err(e) = command.spawn() {
warn!("failed to spawn notifier '{}': {e}", notify_command[0]);
}
}
}
impl Drop for Session {
fn drop(&mut self) {
self.interrupt_task();
}
}
#[derive(Clone, Debug)]
pub(crate) struct ExecCommandContext {
pub(crate) sub_id: String,
pub(crate) call_id: String,
pub(crate) command_for_display: Vec<String>,
pub(crate) cwd: PathBuf,
pub(crate) apply_patch: Option<ApplyPatchCommandContext>,
}
#[derive(Clone, Debug)]
pub(crate) struct ApplyPatchCommandContext {
pub(crate) user_explicitly_approved_this_action: bool,
pub(crate) changes: HashMap<PathBuf, FileChange>,
}
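// Note: on the unified exec path, when `user_explicitly_approved_this_action`
// is true the `apply_patch` invocation is run with `SandboxType::None`, since
// the user already signed off on this exact change set.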
/// A series of Turns in response to user input.
pub(crate) struct AgentTask {
sess: Arc<Session>,
sub_id: String,
handle: AbortHandle,
}
impl AgentTask {
fn spawn(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
) -> Self {
let handle = {
let sess = sess.clone();
let sub_id = sub_id.clone();
let tc = Arc::clone(&turn_context);
tokio::spawn(async move { run_task(sess, tc.as_ref(), sub_id, input).await })
.abort_handle()
};
Self {
sess,
sub_id,
handle,
}
}
fn compact(
sess: Arc<Session>,
turn_context: Arc<TurnContext>,
sub_id: String,
input: Vec<InputItem>,
compact_instructions: String,
) -> Self {
let handle = {
let sess = sess.clone();
let sub_id = sub_id.clone();
let tc = Arc::clone(&turn_context);
tokio::spawn(async move {
run_compact_task(sess, tc.as_ref(), sub_id, input, compact_instructions).await
})
.abort_handle()
};
Self {
sess,
sub_id,
handle,
}
}
fn abort(self, reason: TurnAbortReason) {
// TOCTOU: the task may finish between `is_finished()` and `abort()`.
// That race is benign: aborting an already-finished handle is a no-op,
// though a TurnAborted event is still emitted in that case.
if !self.handle.is_finished() {
self.handle.abort();
let event = Event {
id: self.sub_id,
msg: EventMsg::TurnAborted(TurnAbortedEvent { reason }),
};
let sess = self.sess;
tokio::spawn(async move {
sess.send_event(event).await;
});
}
}
}
async fn submission_loop(
sess: Arc<Session>,
turn_context: TurnContext,
config: Arc<Config>,
rx_sub: Receiver<Submission>,
) {
// Wrap once to avoid cloning TurnContext for each task.
let mut turn_context = Arc::new(turn_context);
// To break out of this loop, send Op::Shutdown.
while let Ok(sub) = rx_sub.recv().await {
debug!(?sub, "Submission");
match sub.op {
Op::Interrupt => {
sess.interrupt_task();
}
Op::OverrideTurnContext {
cwd,
approval_policy,
sandbox_policy,
model,
effort,
summary,
} => {
// Recalculate the persistent turn context with provided overrides.
let prev = Arc::clone(&turn_context);
let provider = prev.client.get_provider();
// Effective model + family
let (effective_model, effective_family) = if let Some(ref m) = model {
let fam =
find_family_for_model(m).unwrap_or_else(|| config.model_family.clone());
(m.clone(), fam)
} else {
(prev.client.get_model(), prev.client.get_model_family())
};
// Effective reasoning settings
let effective_effort = effort.unwrap_or(prev.client.get_reasoning_effort());
let effective_summary = summary.unwrap_or(prev.client.get_reasoning_summary());
let auth_manager = prev.client.get_auth_manager();
// Build updated config for the client
let mut updated_config = (*config).clone();
updated_config.model = effective_model.clone();
updated_config.model_family = effective_family.clone();
if let Some(model_info) = get_model_info(&effective_family) {
updated_config.model_context_window = Some(model_info.context_window);
}
let client = ModelClient::new(
Arc::new(updated_config),
auth_manager,
provider,
effective_effort,
effective_summary,
sess.conversation_id,
);
let new_approval_policy = approval_policy.unwrap_or(prev.approval_policy);
let new_sandbox_policy = sandbox_policy
.clone()
.unwrap_or(prev.sandbox_policy.clone());
let new_cwd = cwd.clone().unwrap_or_else(|| prev.cwd.clone());
let tools_config = ToolsConfig::new(&ToolsConfigParams {
model_family: &effective_family,
approval_policy: new_approval_policy,
sandbox_policy: new_sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config.use_experimental_unified_exec_tool,
});
let new_turn_context = TurnContext {
client,
tools_config,
user_instructions: prev.user_instructions.clone(),
base_instructions: prev.base_instructions.clone(),
approval_policy: new_approval_policy,
sandbox_policy: new_sandbox_policy.clone(),
shell_environment_policy: prev.shell_environment_policy.clone(),
cwd: new_cwd.clone(),
};
// Install the new persistent context for subsequent tasks/turns.
turn_context = Arc::new(new_turn_context);
// Optionally persist changes to model / effort
let effort_str = effort.map(|_| effective_effort.to_string());
if let Err(e) = persist_non_null_overrides(
&config.codex_home,
config.active_profile.as_deref(),
&[
(&[CONFIG_KEY_MODEL], model.as_deref()),
(&[CONFIG_KEY_EFFORT], effort_str.as_deref()),
],
)
.await
{
warn!("failed to persist overrides: {e:#}");
}
if cwd.is_some() || approval_policy.is_some() || sandbox_policy.is_some() {
sess.record_conversation_items(&[ResponseItem::from(EnvironmentContext::new(
cwd,
approval_policy,
sandbox_policy,
// Shell is not configurable from turn to turn
None,
))])
.await;
}
}
Op::UserInput { items } => {
// attempt to inject input into current task
if let Err(items) = sess.inject_input(items) {
// no current task, spawn a new one
let task =
AgentTask::spawn(sess.clone(), Arc::clone(&turn_context), sub.id, items);
sess.set_task(task);
}
}
Op::UserTurn {
items,
cwd,
approval_policy,
sandbox_policy,
model,
effort,
summary,
} => {
// attempt to inject input into current task
if let Err(items) = sess.inject_input(items) {
// Derive a fresh TurnContext for this turn using the provided overrides.
let provider = turn_context.client.get_provider();
let auth_manager = turn_context.client.get_auth_manager();
// Derive a model family for the requested model; fall back to the session's.
let model_family = find_family_for_model(&model)
.unwrap_or_else(|| config.model_family.clone());
// Create a per-turn Config clone with the requested model/family.
let mut per_turn_config = (*config).clone();
per_turn_config.model = model.clone();
per_turn_config.model_family = model_family.clone();
if let Some(model_info) = get_model_info(&model_family) {
per_turn_config.model_context_window = Some(model_info.context_window);
}
// Build a new client with per-turn reasoning settings.
// Reuse the same provider and session id; auth defaults to env/API key.
let client = ModelClient::new(
Arc::new(per_turn_config),
auth_manager,
provider,
effort,
summary,
sess.conversation_id,
);
let fresh_turn_context = TurnContext {
client,
tools_config: ToolsConfig::new(&ToolsConfigParams {
model_family: &model_family,
approval_policy,
sandbox_policy: sandbox_policy.clone(),
include_plan_tool: config.include_plan_tool,
include_apply_patch_tool: config.include_apply_patch_tool,
include_web_search_request: config.tools_web_search_request,
use_streamable_shell_tool: config
.use_experimental_streamable_shell_tool,
include_view_image_tool: config.include_view_image_tool,
experimental_unified_exec_tool: config
.use_experimental_unified_exec_tool,
}),
user_instructions: turn_context.user_instructions.clone(),
base_instructions: turn_context.base_instructions.clone(),
approval_policy,
sandbox_policy,
shell_environment_policy: turn_context.shell_environment_policy.clone(),
cwd,
};
// TODO: record the new environment context in the conversation history
// No current task, so spawn a new one with the per-turn context.
let task =
AgentTask::spawn(sess.clone(), Arc::new(fresh_turn_context), sub.id, items);
sess.set_task(task);
}
}
Op::ExecApproval { id, decision } => match decision {
ReviewDecision::Abort => {
sess.interrupt_task();
}
other => sess.notify_approval(&id, other),
},
Op::PatchApproval { id, decision } => match decision {
ReviewDecision::Abort => {
sess.interrupt_task();
}
other => sess.notify_approval(&id, other),
},
Op::AddToHistory { text } => {
let id = sess.conversation_id;
let config = config.clone();
tokio::spawn(async move {
if let Err(e) = crate::message_history::append_entry(&text, &id, &config).await
{
warn!("failed to append to message history: {e}");
}
});
}
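// Each line of `$CODEX_HOME/history.jsonl` is a standalone JSON object.
// Illustrative shape only (the authoritative schema lives in
// message_history.rs): {"session_id": "...", "ts": ..., "text": "..."}.
// Users can opt out of persistence entirely with `persistence = "none"`
// under `[history]` in config.toml.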
Op::GetHistoryEntryRequest { offset, log_id } => {
let config = config.clone();
let sess_clone = sess.clone();
let sub_id = sub.id.clone();
tokio::spawn(async move {
// Run the lookup on a blocking thread because it does file I/O and locking.
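// A panic inside the blocking task surfaces as a JoinError, which
// `unwrap_or(None)` below folds into "entry not found" rather than
// crashing the submission loop.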
let entry_opt = tokio::task::spawn_blocking(move || {
crate::message_history::lookup(log_id, offset, &config)
})
.await
.unwrap_or(None);
let event = Event {
id: sub_id,
msg: EventMsg::GetHistoryEntryResponse(
crate::protocol::GetHistoryEntryResponseEvent {
offset,
log_id,
entry: entry_opt.map(|e| {
codex_protocol::message_history::HistoryEntry {
conversation_id: e.session_id,
ts: e.ts,
text: e.text,
}
}),
},
),
};
sess_clone.send_event(event).await;
});
}
Op::ListMcpTools => {
let sub_id = sub.id.clone();
// This is a cheap lookup from the connection manager's cache.
let tools = sess.mcp_connection_manager.list_all_tools();
let event = Event {
id: sub_id,
msg: EventMsg::McpListToolsResponse(
crate::protocol::McpListToolsResponseEvent { tools },
),
};
sess.send_event(event).await;
}
Op::ListCustomPrompts => {
let sub_id = sub.id.clone();
let custom_prompts: Vec<CustomPrompt> =
if let Some(dir) = crate::custom_prompts::default_prompts_dir() {
crate::custom_prompts::discover_prompts_in(&dir).await
} else {
Vec::new()
};
let event = Event {
id: sub_id,
msg: EventMsg::ListCustomPromptsResponse(ListCustomPromptsResponseEvent {
custom_prompts,
}),
};
sess.send_event(event).await;
}
Op::Compact => {
// Create a summarization request as user input
const SUMMARIZATION_PROMPT: &str = include_str!("prompt_for_compact_command.md");
// Attempt to inject the input into the current task.
if let Err(items) = sess.inject_input(vec![InputItem::Text {
text: "Start Summarization".to_string(),
}]) {
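// `inject_input` hands the items back as `Err` when no task is
// currently running; in that case, spawn a dedicated compact task to
// drive the summarization turn itself.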
let task = AgentTask::compact(
sess.clone(),
Arc::clone(&turn_context),
sub.id,
items,
SUMMARIZATION_PROMPT.to_string(),
);
sess.set_task(task);
}
}
Op::Shutdown => {
info!("Shutting down Codex instance");
// Gracefully flush and shut down the rollout recorder on session end so
// tests that inspect the rollout file do not race with the background writer.
let recorder_opt = sess.rollout.lock_unchecked().take();
if let Some(rec) = recorder_opt
&& let Err(e) = rec.shutdown().await
{
warn!("failed to shutdown rollout recorder: {e}");
let event = Event {
id: sub.id.clone(),
msg: EventMsg::Error(ErrorEvent {
message: "Failed to shutdown rollout recorder".to_string(),
}),
};
sess.send_event(event).await;
}
let event = Event {
id: sub.id.clone(),
msg: EventMsg::ShutdownComplete,
};
sess.send_event(event).await;
break;
}
Op::GetPath => {
let sub_id = sub.id.clone();
// Flush rollout writes before returning the path so readers observe a consistent file.
let (path, rec_opt) = {
let guard = sess.rollout.lock_unchecked();
match guard.as_ref() {
Some(rec) => (rec.get_rollout_path(), Some(rec.clone())),
None => {
error!("rollout recorder not found");
continue;
}
}
};
if let Some(rec) = rec_opt
&& let Err(e) = rec.flush().await
{
warn!("failed to flush rollout recorder before GetHistory: {e}");
}
let event = Event {
id: sub_id.clone(),
msg: EventMsg::ConversationPath(ConversationPathResponseEvent {
conversation_id: sess.conversation_id,
path,
}),
};
sess.send_event(event).await;
}
_ => {
// Ignore unknown ops; enum is non_exhaustive to allow extensions.
}
}
}
debug!("Agent loop exited");
}
/// Takes a user message as input and runs a loop where, at each turn, the model
/// replies with either:
///
/// - requested function calls
/// - an assistant message
///
/// While it is possible for the model to return multiple of these items in a
/// single turn, in practice we generally see one item per turn:
///
/// - If the model requests a function call, we execute it and send the output
/// back to the model in the next turn.
/// - If the model sends only an assistant message, we record it in the
/// conversation history and consider the task complete.
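///
/// Schematically (a rough sketch, not an exhaustive state machine):
/// user input -> model -> function call -> tool output -> model -> ...
/// ... -> assistant message -> record in history, task complete.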
async fn run_task(
sess: Arc<Session>,
turn_context: &TurnContext,
sub_id: String,
input: Vec<InputItem>,
) {
if input.is_empty() {
return;
}
let event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskStarted(TaskStartedEvent {
model_context_window: turn_context.client.get_model_context_window(),
}),
};
sess.send_event(event).await;
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
sess.record_input_and_rollout_usermsg(&initial_input_for_turn)
.await;
let mut last_agent_message: Option<String> = None;
// Although from the perspective of codex.rs a Task spans many turns, the
// user experiences the whole Task as a single turn, so TurnDiffTracker
// lives for the lifetime of the Task.
let mut turn_diff_tracker = TurnDiffTracker::new();
loop {
// Note that pending_input is input the user submitted through the UI
// while the model was running. The UI may support this even though the
// model has no notion of mid-turn input, so it is folded into the next
// request to the model.
let pending_input = sess
.get_pending_input()
.into_iter()
.map(ResponseItem::from)
.collect::<Vec<ResponseItem>>();
sess.record_conversation_items(&pending_input).await;
// Construct the input that we will send to the model. When using the
// Chat completions API (or ZDR clients), the model needs the full
// conversation history on each turn. The rollout file, however, should
// only record the new items that originated in this turn so that it
// represents an append-only log without duplicates.
let turn_input: Vec<ResponseItem> = sess.turn_input_with_history(pending_input);
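// Collected for the `AgentTurnComplete` notification at the end of the
// task: only `OutputText` content is extracted from messages; all other
// content items are skipped.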
let turn_input_messages: Vec<String> = turn_input
.iter()
.filter_map(|item| match item {
ResponseItem::Message { content, .. } => Some(content),
_ => None,
})
.flat_map(|content| {
content.iter().filter_map(|item| match item {
ContentItem::OutputText { text } => Some(text.clone()),
_ => None,
})
})
.collect();
match run_turn(
&sess,
turn_context,
&mut turn_diff_tracker,
sub_id.clone(),
turn_input,
)
.await
{
Ok(turn_output) => {
let mut items_to_record_in_conversation_history = Vec::<ResponseItem>::new();
let mut responses = Vec::<ResponseInputItem>::new();
for processed_response_item in turn_output {
let ProcessedResponseItem { item, response } = processed_response_item;
match (&item, &response) {
(ResponseItem::Message { role, .. }, None) if role == "assistant" => {
// If the model returned a message, we need to record it.
items_to_record_in_conversation_history.push(item);
}
(
ResponseItem::LocalShellCall { .. },
Some(ResponseInputItem::FunctionCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: output.clone(),
},
);
}
(
ResponseItem::FunctionCall { .. },
Some(ResponseInputItem::FunctionCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output: output.clone(),
},
);
}
(
ResponseItem::CustomToolCall { .. },
Some(ResponseInputItem::CustomToolCallOutput { call_id, output }),
) => {
items_to_record_in_conversation_history.push(item);
items_to_record_in_conversation_history.push(
ResponseItem::CustomToolCallOutput {
call_id: call_id.clone(),
output: output.clone(),
},
);
}
(
ResponseItem::FunctionCall { .. },
Some(ResponseInputItem::McpToolCallOutput { call_id, result }),
) => {
items_to_record_in_conversation_history.push(item);
let output = match result {
Ok(call_tool_result) => {
convert_call_tool_result_to_function_call_output_payload(
call_tool_result,
)
}
Err(err) => FunctionCallOutputPayload {
content: err.clone(),
success: Some(false),
},
};
items_to_record_in_conversation_history.push(
ResponseItem::FunctionCallOutput {
call_id: call_id.clone(),
output,
},
);
}
(
ResponseItem::Reasoning {
id,
summary,
content,
encrypted_content,
},
None,
) => {
items_to_record_in_conversation_history.push(ResponseItem::Reasoning {
id: id.clone(),
summary: summary.clone(),
content: content.clone(),
encrypted_content: encrypted_content.clone(),
});
}
_ => {
warn!("Unexpected response item: {item:?} with response: {response:?}");
}
};
if let Some(response) = response {
responses.push(response);
}
}
// Only attempt to take the lock if there is something to record.
if !items_to_record_in_conversation_history.is_empty() {
sess.record_conversation_items(&items_to_record_in_conversation_history)
.await;
}
if responses.is_empty() {
debug!("Turn completed");
last_agent_message = get_last_assistant_message_from_turn(
&items_to_record_in_conversation_history,
);
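// maybe_notify invokes the user-configured `notify` program, if any,
// passing this payload serialized as JSON. Illustrative shape only
// (field names are governed by UserNotification's serde attributes):
// {"type":"agent-turn-complete","turn-id":"...","input-messages":[...],
//  "last-assistant-message":"..."}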
sess.maybe_notify(UserNotification::AgentTurnComplete {
turn_id: sub_id.clone(),
input_messages: turn_input_messages,
last_assistant_message: last_agent_message.clone(),
});
break;
}
}
Err(e) => {
info!("Turn error: {e:#}");
let event = Event {
id: sub_id.clone(),
msg: EventMsg::Error(ErrorEvent {
message: e.to_string(),
}),
};
sess.send_event(event).await;
// let the user continue the conversation
break;
}
}
}
sess.remove_task(&sub_id);
let event = Event {
id: sub_id,
msg: EventMsg::TaskComplete(TaskCompleteEvent { last_agent_message }),
};
sess.send_event(event).await;
}
async fn run_turn(
sess: &Session,
turn_context: &TurnContext,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: String,
input: Vec<ResponseItem>,
) -> CodexResult<Vec<ProcessedResponseItem>> {
let tools = get_openai_tools(
&turn_context.tools_config,
Some(sess.mcp_connection_manager.list_all_tools()),
);
let prompt = Prompt {
input,
tools,
base_instructions_override: turn_context.base_instructions.clone(),
};
let mut retries = 0;
loop {
match try_run_turn(sess, turn_context, turn_diff_tracker, &sub_id, &prompt).await {
Ok(output) => return Ok(output),
Err(CodexErr::Interrupted) => return Err(CodexErr::Interrupted),
Err(CodexErr::EnvVar(var)) => return Err(CodexErr::EnvVar(var)),
Err(e @ (CodexErr::UsageLimitReached(_) | CodexErr::UsageNotIncluded)) => {
return Err(e);
}
Err(e) => {
// Use the configured provider-specific stream retry budget.
let max_retries = turn_context.client.get_provider().stream_max_retries();
if retries < max_retries {
retries += 1;
let delay = match e {
CodexErr::Stream(_, Some(delay)) => delay,
_ => backoff(retries),
};
warn!(
"stream disconnected - retrying turn ({retries}/{max_retries} in {delay:?})...",
);
// Surface retry information to any UI/frontend so the
// user understands what is happening instead of staring
// at a seemingly frozen screen.
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}…"
),
)
.await;
tokio::time::sleep(delay).await;
} else {
return Err(e);
}
}
}
}
}
/// When the model is prompted, it returns a stream of events. Some of these
/// events map to a `ResponseItem`. A `ResponseItem` may need to be
/// "handled" such that it produces a `ResponseInputItem` to send back to
/// the model on the next turn.
#[derive(Debug)]
struct ProcessedResponseItem {
item: ResponseItem,
response: Option<ResponseInputItem>,
}
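/// Sends one streaming request to the model and drains it to completion,
/// handling each `ResponseItem` as it arrives. Before sending, tool calls
/// left dangling by an earlier interruption are closed out with synthetic
/// "aborted" outputs so that every call id in the prompt has a response.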
async fn try_run_turn(
sess: &Session,
turn_context: &TurnContext,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: &str,
prompt: &Prompt,
) -> CodexResult<Vec<ProcessedResponseItem>> {
// call_ids for tool calls whose outputs are already present in this prompt's input.
let completed_call_ids = prompt
.input
.iter()
.filter_map(|ri| match ri {
ResponseItem::FunctionCallOutput { call_id, .. } => Some(call_id),
ResponseItem::LocalShellCall {
call_id: Some(call_id),
..
} => Some(call_id),
ResponseItem::CustomToolCallOutput { call_id, .. } => Some(call_id),
_ => None,
})
.collect::<Vec<_>>();
// call_ids for tool calls in the input that still lack outputs. This usually
// happens because the user interrupted the model before we responded to one
// of its tool calls and then sent a follow-up message.
let missing_calls = {
prompt
.input
.iter()
.filter_map(|ri| match ri {
ResponseItem::FunctionCall { call_id, .. } => Some(call_id),
ResponseItem::LocalShellCall {
call_id: Some(call_id),
..
} => Some(call_id),
ResponseItem::CustomToolCall { call_id, .. } => Some(call_id),
_ => None,
})
.filter_map(|call_id| {
if completed_call_ids.contains(&call_id) {
None
} else {
Some(call_id.clone())
}
})
.map(|call_id| ResponseItem::CustomToolCallOutput {
call_id,
output: "aborted".to_string(),
})
.collect::<Vec<_>>()
};
let prompt: Cow<Prompt> = if missing_calls.is_empty() {
Cow::Borrowed(prompt)
} else {
// Prepend synthetic "aborted" outputs for the missing calls so that every call id in the input has a response.
let input = [missing_calls, prompt.input.clone()].concat();
Cow::Owned(Prompt {
input,
..prompt.clone()
})
};
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let mut stream = turn_context.client.clone().stream(&prompt).await?;
let mut output = Vec::new();
loop {
// Poll the next item from the model stream. We must inspect *both* Ok and Err
// cases so that transient stream failures (e.g., dropped SSE connection before
// `response.completed`) bubble up and trigger the caller's retry logic.
let event = stream.next().await;
let Some(event) = event else {
// Channel closed without yielding a final Completed event or explicit error.
// Treat as a disconnected stream so the caller can retry.
return Err(CodexErr::Stream(
"stream closed before response.completed".into(),
None,
));
};
let event = match event {
Ok(ev) => ev,
Err(e) => {
// Propagate the underlying stream error to the caller (run_turn), which
// will apply the configured `stream_max_retries` policy.
return Err(e);
}
};
match event {
ResponseEvent::Created => {}
ResponseEvent::OutputItemDone(item) => {
let response = handle_response_item(
sess,
turn_context,
turn_diff_tracker,
sub_id,
item.clone(),
)
.await?;
output.push(ProcessedResponseItem { item, response });
}
ResponseEvent::WebSearchCallBegin { call_id } => {
let _ = sess
.tx_event
.send(Event {
id: sub_id.to_string(),
msg: EventMsg::WebSearchBegin(WebSearchBeginEvent { call_id }),
})
.await;
}
ResponseEvent::Completed {
response_id: _,
token_usage,
} => {
let info = {
let mut st = sess.state.lock_unchecked();
let info = TokenUsageInfo::new_or_append(
&st.token_info,
&token_usage,
turn_context.client.get_model_context_window(),
);
st.token_info = info.clone();
info
};
let _ = sess
.send_event(Event {
id: sub_id.to_string(),
msg: EventMsg::TokenCount(crate::protocol::TokenCountEvent { info }),
})
.await;
let unified_diff = turn_diff_tracker.get_unified_diff();
if let Ok(Some(unified_diff)) = unified_diff {
let msg = EventMsg::TurnDiff(TurnDiffEvent { unified_diff });
let event = Event {
id: sub_id.to_string(),
msg,
};
sess.send_event(event).await;
}
return Ok(output);
}
ResponseEvent::OutputTextDelta(delta) => {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::AgentMessageDelta(AgentMessageDeltaEvent { delta }),
};
sess.send_event(event).await;
}
ResponseEvent::ReasoningSummaryDelta(delta) => {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::AgentReasoningDelta(AgentReasoningDeltaEvent { delta }),
};
sess.send_event(event).await;
}
ResponseEvent::ReasoningSummaryPartAdded => {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::AgentReasoningSectionBreak(AgentReasoningSectionBreakEvent {}),
};
sess.send_event(event).await;
}
ResponseEvent::ReasoningContentDelta(delta) => {
if sess.show_raw_agent_reasoning {
let event = Event {
id: sub_id.to_string(),
msg: EventMsg::AgentReasoningRawContentDelta(
AgentReasoningRawContentDeltaEvent { delta },
),
};
sess.send_event(event).await;
}
}
}
}
}
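/// Runs a history-compaction task: prompts the model with
/// `compact_instructions`, trims the session history down to its last
/// message, and records the compacted summary as a rollout item.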
async fn run_compact_task(
sess: Arc<Session>,
turn_context: &TurnContext,
sub_id: String,
input: Vec<InputItem>,
compact_instructions: String,
) {
let model_context_window = turn_context.client.get_model_context_window();
let start_event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskStarted(TaskStartedEvent {
model_context_window,
}),
};
sess.send_event(start_event).await;
let initial_input_for_turn: ResponseInputItem = ResponseInputItem::from(input);
let turn_input: Vec<ResponseItem> =
sess.turn_input_with_history(vec![initial_input_for_turn.clone().into()]);
let prompt = Prompt {
input: turn_input,
tools: Vec::new(),
base_instructions_override: Some(compact_instructions.clone()),
};
let max_retries = turn_context.client.get_provider().stream_max_retries();
let mut retries = 0;
loop {
let attempt_result = drain_to_completed(&sess, turn_context, &sub_id, &prompt).await;
match attempt_result {
Ok(()) => break,
Err(CodexErr::Interrupted) => return,
Err(e) => {
if retries < max_retries {
retries += 1;
let delay = backoff(retries);
sess.notify_stream_error(
&sub_id,
format!(
"stream error: {e}; retrying {retries}/{max_retries} in {delay:?}…"
),
)
.await;
tokio::time::sleep(delay).await;
continue;
} else {
let event = Event {
id: sub_id.clone(),
msg: EventMsg::Error(ErrorEvent {
message: e.to_string(),
}),
};
sess.send_event(event).await;
return;
}
}
}
}
sess.remove_task(&sub_id);
let rollout_item = {
let mut state = sess.state.lock_unchecked();
state.history.keep_last_messages(1);
RolloutItem::Compacted(CompactedItem {
message: state.history.last_agent_message(),
})
};
sess.persist_rollout_items(&[rollout_item]).await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::AgentMessage(AgentMessageEvent {
message: "Compact task completed".to_string(),
}),
};
sess.send_event(event).await;
let event = Event {
id: sub_id.clone(),
msg: EventMsg::TaskComplete(TaskCompleteEvent {
last_agent_message: None,
}),
};
sess.send_event(event).await;
}
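/// Dispatches a single `ResponseItem` from the model stream. Tool calls
/// produce a `ResponseInputItem` to send back to the model on the next
/// turn; message, reasoning, and web-search items are forwarded to the UI
/// as events and produce no response.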
async fn handle_response_item(
sess: &Session,
turn_context: &TurnContext,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: &str,
item: ResponseItem,
) -> CodexResult<Option<ResponseInputItem>> {
debug!(?item, "Output item");
let output = match item {
ResponseItem::FunctionCall {
name,
arguments,
call_id,
..
} => {
info!("FunctionCall: {name}({arguments})");
Some(
handle_function_call(
sess,
turn_context,
turn_diff_tracker,
sub_id.to_string(),
name,
arguments,
call_id,
)
.await,
)
}
ResponseItem::LocalShellCall {
id,
call_id,
status: _,
action,
} => {
let LocalShellAction::Exec(action) = action;
tracing::info!("LocalShellCall: {action:?}");
let params = ShellToolCallParams {
command: action.command,
workdir: action.working_directory,
timeout_ms: action.timeout_ms,
with_escalated_permissions: None,
justification: None,
};
let effective_call_id = match (call_id, id) {
(Some(call_id), _) => call_id,
(None, Some(id)) => id,
(None, None) => {
error!("LocalShellCall without call_id or id");
return Ok(Some(ResponseInputItem::FunctionCallOutput {
call_id: "".to_string(),
output: FunctionCallOutputPayload {
content: "LocalShellCall without call_id or id".to_string(),
success: None,
},
}));
}
};
let exec_params = to_exec_params(params, turn_context);
Some(
handle_container_exec_with_params(
exec_params,
sess,
turn_context,
turn_diff_tracker,
sub_id.to_string(),
effective_call_id,
)
.await,
)
}
ResponseItem::CustomToolCall {
id: _,
call_id,
name,
input,
status: _,
} => Some(
handle_custom_tool_call(
sess,
turn_context,
turn_diff_tracker,
sub_id.to_string(),
name,
input,
call_id,
)
.await,
),
ResponseItem::FunctionCallOutput { .. } => {
debug!("unexpected FunctionCallOutput from stream");
None
}
ResponseItem::CustomToolCallOutput { .. } => {
debug!("unexpected CustomToolCallOutput from stream");
None
}
ResponseItem::Message { .. }
| ResponseItem::Reasoning { .. }
| ResponseItem::WebSearchCall { .. } => {
let msgs = map_response_item_to_event_messages(&item, sess.show_raw_agent_reasoning);
for msg in msgs {
let event = Event {
id: sub_id.to_string(),
msg,
};
sess.send_event(event).await;
}
None
}
ResponseItem::Other => None,
};
Ok(output)
}
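/// Handles a `unified_exec` tool call. A numeric `session_id`, when
/// supplied, targets an existing interactive session. Successful results
/// are serialized as JSON, e.g. `{"session_id":"3","output":"..."}`
/// (illustrative values).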
async fn handle_unified_exec_tool_call(
sess: &Session,
call_id: String,
session_id: Option<String>,
arguments: Vec<String>,
timeout_ms: Option<u64>,
) -> ResponseInputItem {
let parsed_session_id = if let Some(session_id) = session_id {
match session_id.parse::<i32>() {
Ok(parsed) => Some(parsed),
Err(err) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("invalid session_id: {session_id} due to error {err}"),
success: Some(false),
},
};
}
}
} else {
None
};
let request = crate::unified_exec::UnifiedExecRequest {
session_id: parsed_session_id,
input_chunks: &arguments,
timeout_ms,
};
let result = sess.unified_exec_manager.handle_request(request).await;
let output_payload = match result {
Ok(value) => {
#[derive(Serialize)]
struct SerializedUnifiedExecResult<'a> {
session_id: Option<String>,
output: &'a str,
}
match serde_json::to_string(&SerializedUnifiedExecResult {
session_id: value.session_id.map(|id| id.to_string()),
output: &value.output,
}) {
Ok(serialized) => FunctionCallOutputPayload {
content: serialized,
success: Some(true),
},
Err(err) => FunctionCallOutputPayload {
content: format!("failed to serialize unified exec output: {err}"),
success: Some(false),
},
}
}
Err(err) => FunctionCallOutputPayload {
content: format!("unified exec failed: {err}"),
success: Some(false),
},
};
ResponseInputItem::FunctionCallOutput {
call_id,
output: output_payload,
}
}
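/// Routes a `FunctionCall` from the model to the matching built-in tool
/// handler. Unrecognized names fall through to the MCP connection manager;
/// anything still unknown is answered with a structured "unsupported call"
/// failure so the model can adapt.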
async fn handle_function_call(
sess: &Session,
turn_context: &TurnContext,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: String,
name: String,
arguments: String,
call_id: String,
) -> ResponseInputItem {
match name.as_str() {
"container.exec" | "shell" => {
let params = match parse_container_exec_arguments(arguments, turn_context, &call_id) {
Ok(params) => params,
Err(output) => {
return *output;
}
};
handle_container_exec_with_params(
params,
sess,
turn_context,
turn_diff_tracker,
sub_id,
call_id,
)
.await
}
"unified_exec" => {
#[derive(Deserialize)]
struct UnifiedExecArgs {
input: Vec<String>,
#[serde(default)]
session_id: Option<String>,
#[serde(default)]
timeout_ms: Option<u64>,
}
let args = match serde_json::from_str::<UnifiedExecArgs>(&arguments) {
Ok(args) => args,
Err(err) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {err}"),
success: Some(false),
},
};
}
};
handle_unified_exec_tool_call(
sess,
call_id,
args.session_id,
args.input,
args.timeout_ms,
)
.await
}
"view_image" => {
#[derive(serde::Deserialize)]
struct SeeImageArgs {
path: String,
}
let args = match serde_json::from_str::<SeeImageArgs>(&arguments) {
Ok(a) => a,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: Some(false),
},
};
}
};
let abs = turn_context.resolve_path(Some(args.path));
let output = match sess.inject_input(vec![InputItem::LocalImage { path: abs }]) {
Ok(()) => FunctionCallOutputPayload {
content: "attached local image path".to_string(),
success: Some(true),
},
Err(_) => FunctionCallOutputPayload {
content: "unable to attach image (no active task)".to_string(),
success: Some(false),
},
};
ResponseInputItem::FunctionCallOutput { call_id, output }
}
"apply_patch" => {
let args = match serde_json::from_str::<ApplyPatchToolArgs>(&arguments) {
Ok(a) => a,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: None,
},
};
}
};
let exec_params = ExecParams {
command: vec!["apply_patch".to_string(), args.input.clone()],
cwd: turn_context.cwd.clone(),
timeout_ms: None,
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
handle_container_exec_with_params(
exec_params,
sess,
turn_context,
turn_diff_tracker,
sub_id,
call_id,
)
.await
}
"update_plan" => handle_update_plan(sess, arguments, sub_id, call_id).await,
EXEC_COMMAND_TOOL_NAME => {
// TODO(mbolin): Sandbox check.
let exec_params = match serde_json::from_str::<ExecCommandParams>(&arguments) {
Ok(params) => params,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: Some(false),
},
};
}
};
let result = sess
.session_manager
.handle_exec_command_request(exec_params)
.await;
let function_call_output = crate::exec_command::result_into_payload(result);
ResponseInputItem::FunctionCallOutput {
call_id,
output: function_call_output,
}
}
WRITE_STDIN_TOOL_NAME => {
let write_stdin_params = match serde_json::from_str::<WriteStdinParams>(&arguments) {
Ok(params) => params,
Err(e) => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: Some(false),
},
};
}
};
let result = sess
.session_manager
.handle_write_stdin_request(write_stdin_params)
.await;
let function_call_output: FunctionCallOutputPayload =
crate::exec_command::result_into_payload(result);
ResponseInputItem::FunctionCallOutput {
call_id,
output: function_call_output,
}
}
_ => {
match sess.mcp_connection_manager.parse_tool_name(&name) {
Some((server, tool_name)) => {
// TODO(mbolin): Determine appropriate timeout for tool call.
let timeout = None;
handle_mcp_tool_call(
sess, &sub_id, call_id, server, tool_name, arguments, timeout,
)
.await
}
None => {
// Unknown function: reply with structured failure so the model can adapt.
ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("unsupported call: {name}"),
success: None,
},
}
}
}
}
}
}
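/// Handles a `CustomToolCall`. Only `apply_patch` is currently supported;
/// its function-call style output is converted back into a
/// `CustomToolCallOutput` so the call id is answered in kind.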
async fn handle_custom_tool_call(
sess: &Session,
turn_context: &TurnContext,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: String,
name: String,
input: String,
call_id: String,
) -> ResponseInputItem {
info!("CustomToolCall: {name} {input}");
match name.as_str() {
"apply_patch" => {
let exec_params = ExecParams {
command: vec!["apply_patch".to_string(), input.clone()],
cwd: turn_context.cwd.clone(),
timeout_ms: None,
env: HashMap::new(),
with_escalated_permissions: None,
justification: None,
};
let resp = handle_container_exec_with_params(
exec_params,
sess,
turn_context,
turn_diff_tracker,
sub_id,
call_id,
)
.await;
// Convert function-call style output into a custom tool call output
match resp {
ResponseInputItem::FunctionCallOutput { call_id, output } => {
ResponseInputItem::CustomToolCallOutput {
call_id,
output: output.content,
}
}
// Pass through if already a custom tool output or other variant
other => other,
}
}
_ => {
debug!("unexpected CustomToolCall from stream");
ResponseInputItem::CustomToolCallOutput {
call_id,
output: format!("unsupported custom tool call: {name}"),
}
}
}
}
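/// Converts shell tool-call parameters into `ExecParams`, resolving the
/// working directory against the turn's cwd and deriving the environment
/// from the session's shell environment policy.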
fn to_exec_params(params: ShellToolCallParams, turn_context: &TurnContext) -> ExecParams {
ExecParams {
command: params.command,
cwd: turn_context.resolve_path(params.workdir.clone()),
timeout_ms: params.timeout_ms,
env: create_env(&turn_context.shell_environment_policy),
with_escalated_permissions: params.with_escalated_permissions,
justification: params.justification,
}
}
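/// Parses the JSON arguments of a `shell`/`container.exec` call. On parse
/// failure, returns a `FunctionCallOutput` describing the error so the
/// model can re-sample.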
fn parse_container_exec_arguments(
arguments: String,
turn_context: &TurnContext,
call_id: &str,
) -> Result<ExecParams, Box<ResponseInputItem>> {
// parse command
match serde_json::from_str::<ShellToolCallParams>(&arguments) {
Ok(shell_tool_call_params) => Ok(to_exec_params(shell_tool_call_params, turn_context)),
Err(e) => {
// allow model to re-sample
let output = ResponseInputItem::FunctionCallOutput {
call_id: call_id.to_string(),
output: FunctionCallOutputPayload {
content: format!("failed to parse function arguments: {e}"),
success: None,
},
};
Err(Box::new(output))
}
}
}
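/// Arguments required to invoke a command under the configured sandbox.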
pub struct ExecInvokeArgs<'a> {
pub params: ExecParams,
pub sandbox_type: SandboxType,
pub sandbox_policy: &'a SandboxPolicy,
pub codex_linux_sandbox_exe: &'a Option<PathBuf>,
pub stdout_stream: Option<StdoutStream>,
}
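/// Reformats the command through the user's default shell invocation when
/// the user shell is PowerShell or shell profiles are enabled; otherwise
/// the params are returned unchanged.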
fn maybe_translate_shell_command(
params: ExecParams,
sess: &Session,
turn_context: &TurnContext,
) -> ExecParams {
let should_translate = matches!(sess.user_shell, crate::shell::Shell::PowerShell(_))
|| turn_context.shell_environment_policy.use_profile;
if should_translate
&& let Some(command) = sess
.user_shell
.format_default_shell_invocation(params.command.clone())
{
return ExecParams { command, ..params };
}
params
}
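/// Executes a shell/container tool call. Commands recognized as
/// `apply_patch` invocations are re-routed through the codex executable so
/// the patch is applied under the same sandboxing rules as any other exec.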
async fn handle_container_exec_with_params(
params: ExecParams,
sess: &Session,
turn_context: &TurnContext,
turn_diff_tracker: &mut TurnDiffTracker,
sub_id: String,
call_id: String,
) -> ResponseInputItem {
// Check whether this command is an `apply_patch` invocation; if so, prepare to run it through exec under the appropriate sandbox.
let apply_patch_exec = match maybe_parse_apply_patch_verified(&params.command, &params.cwd) {
MaybeApplyPatchVerified::Body(changes) => {
match apply_patch::apply_patch(sess, turn_context, &sub_id, &call_id, changes).await {
InternalApplyPatchInvocation::Output(item) => return item,
InternalApplyPatchInvocation::DelegateToExec(apply_patch_exec) => {
Some(apply_patch_exec)
}
}
}
MaybeApplyPatchVerified::CorrectnessError(parse_error) => {
// It looks like an invocation of `apply_patch`, but we
// could not resolve it into a patch that would apply
// cleanly. Return to model for resample.
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("error: {parse_error:#}"),
success: None,
},
};
}
MaybeApplyPatchVerified::ShellParseError(error) => {
trace!("Failed to parse shell command, {error:?}");
None
}
MaybeApplyPatchVerified::NotApplyPatch => None,
};
let (params, safety, command_for_display) = match &apply_patch_exec {
Some(ApplyPatchExec {
action: ApplyPatchAction { patch, cwd, .. },
user_explicitly_approved_this_action,
}) => {
let path_to_codex = std::env::current_exe()
.ok()
.map(|p| p.to_string_lossy().to_string());
let Some(path_to_codex) = path_to_codex else {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: "failed to determine path to codex executable".to_string(),
success: None,
},
};
};
let params = ExecParams {
command: vec![
path_to_codex,
CODEX_APPLY_PATCH_ARG1.to_string(),
patch.clone(),
],
cwd: cwd.clone(),
timeout_ms: params.timeout_ms,
env: HashMap::new(),
with_escalated_permissions: params.with_escalated_permissions,
justification: params.justification.clone(),
};
let safety = if *user_explicitly_approved_this_action {
SafetyCheck::AutoApprove {
sandbox_type: SandboxType::None,
}
} else {
assess_safety_for_untrusted_command(
turn_context.approval_policy,
&turn_context.sandbox_policy,
params.with_escalated_permissions.unwrap_or(false),
)
};
(
params,
safety,
vec!["apply_patch".to_string(), patch.clone()],
)
}
None => {
let safety = {
let state = sess.state.lock_unchecked();
assess_command_safety(
&params.command,
turn_context.approval_policy,
&turn_context.sandbox_policy,
&state.approved_commands,
params.with_escalated_permissions.unwrap_or(false),
)
};
let command_for_display = params.command.clone();
(params, safety, command_for_display)
}
};
let sandbox_type = match safety {
SafetyCheck::AutoApprove { sandbox_type } => sandbox_type,
SafetyCheck::AskUser => {
let rx_approve = sess
.request_command_approval(
sub_id.clone(),
call_id.clone(),
params.command.clone(),
params.cwd.clone(),
params.justification.clone(),
)
.await;
match rx_approve.await.unwrap_or_default() {
ReviewDecision::Approved => (),
ReviewDecision::ApprovedForSession => {
sess.add_approved_command(params.command.clone());
}
ReviewDecision::Denied | ReviewDecision::Abort => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: "exec command rejected by user".to_string(),
success: None,
},
};
}
}
// No sandboxing is applied because the user has given
// explicit approval. Often, we end up in this case because
// the command cannot be run in a sandbox, such as
// installing a new dependency that requires network access.
SandboxType::None
}
SafetyCheck::Reject { reason } => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!("exec command rejected: {reason}"),
success: None,
},
};
}
};
let exec_command_context = ExecCommandContext {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
command_for_display: command_for_display.clone(),
cwd: params.cwd.clone(),
apply_patch: apply_patch_exec.map(
|ApplyPatchExec {
action,
user_explicitly_approved_this_action,
}| ApplyPatchCommandContext {
user_explicitly_approved_this_action,
changes: convert_apply_patch_to_protocol(&action),
},
),
};
let params = maybe_translate_shell_command(params, sess, turn_context);
let output_result = sess
.run_exec_with_events(
turn_diff_tracker,
exec_command_context.clone(),
ExecInvokeArgs {
params: params.clone(),
sandbox_type,
sandbox_policy: &turn_context.sandbox_policy,
codex_linux_sandbox_exe: &sess.codex_linux_sandbox_exe,
stdout_stream: if exec_command_context.apply_patch.is_some() {
None
} else {
Some(StdoutStream {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
tx_event: sess.tx_event.clone(),
})
},
},
)
.await;
match output_result {
Ok(output) => {
let ExecToolCallOutput { exit_code, .. } = &output;
let is_success = *exit_code == 0;
let content = format_exec_output(&output);
ResponseInputItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content,
success: Some(is_success),
},
}
}
Err(CodexErr::Sandbox(error)) => {
handle_sandbox_error(
turn_diff_tracker,
params,
exec_command_context,
error,
sandbox_type,
sess,
turn_context,
)
.await
}
Err(e) => ResponseInputItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: format!("execution error: {e}"),
success: None,
},
},
}
}
async fn handle_sandbox_error(
turn_diff_tracker: &mut TurnDiffTracker,
params: ExecParams,
exec_command_context: ExecCommandContext,
error: SandboxErr,
sandbox_type: SandboxType,
sess: &Session,
turn_context: &TurnContext,
) -> ResponseInputItem {
let call_id = exec_command_context.call_id.clone();
let sub_id = exec_command_context.sub_id.clone();
let cwd = exec_command_context.cwd.clone();
// if the command timed out, we can simply return this failure to the model
if matches!(error, SandboxErr::Timeout) {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!(
"command timed out after {} milliseconds",
params.timeout_duration().as_millis()
),
success: Some(false),
},
};
}
// Early out if either the user never wants to be asked for approval, or
// we're letting the model manage escalation requests. Otherwise, continue
// below and ask the user whether to retry without the sandbox.
match turn_context.approval_policy {
AskForApproval::Never | AskForApproval::OnRequest => {
return ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: format!(
"failed in sandbox {sandbox_type:?} with execution error: {error}"
),
success: Some(false),
},
};
}
AskForApproval::UnlessTrusted | AskForApproval::OnFailure => (),
}
// Note that when `error` is `SandboxErr::Denied`, it could be a false
// positive. That is, it may have exited with a non-zero exit code, not
// because the sandbox denied it, but because that is its expected behavior,
// i.e., a grep command that did not match anything. Ideally we would
// include additional metadata on the command to indicate whether non-zero
// exit codes merit a retry.
// For now, we categorically ask the user to retry without sandbox and
// emit the raw error as a background event.
sess.notify_background_event(&sub_id, format!("Execution failed: {error}"))
.await;
let rx_approve = sess
.request_command_approval(
sub_id.clone(),
call_id.clone(),
params.command.clone(),
cwd.clone(),
Some("command failed; retry without sandbox?".to_string()),
)
.await;
match rx_approve.await.unwrap_or_default() {
ReviewDecision::Approved | ReviewDecision::ApprovedForSession => {
// Persist this command as preapproved for the
// remainder of the session so future
// executions skip the sandbox directly.
// TODO(ragona): Isn't this a bug? It always saves the command in an | fork?
sess.add_approved_command(params.command.clone());
// Inform UI we are retrying without sandbox.
sess.notify_background_event(&sub_id, "retrying command without sandbox")
.await;
// This is an escalated retry; the policy will not be
// examined and the sandbox has been set to `None`.
let retry_output_result = sess
.run_exec_with_events(
turn_diff_tracker,
exec_command_context.clone(),
ExecInvokeArgs {
params,
sandbox_type: SandboxType::None,
sandbox_policy: &turn_context.sandbox_policy,
codex_linux_sandbox_exe: &sess.codex_linux_sandbox_exe,
stdout_stream: if exec_command_context.apply_patch.is_some() {
None
} else {
Some(StdoutStream {
sub_id: sub_id.clone(),
call_id: call_id.clone(),
tx_event: sess.tx_event.clone(),
})
},
},
)
.await;
match retry_output_result {
Ok(retry_output) => {
let ExecToolCallOutput { exit_code, .. } = &retry_output;
let is_success = *exit_code == 0;
let content = format_exec_output(&retry_output);
ResponseInputItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content,
success: Some(is_success),
},
}
}
Err(e) => ResponseInputItem::FunctionCallOutput {
call_id: call_id.clone(),
output: FunctionCallOutputPayload {
content: format!("retry failed: {e}"),
success: None,
},
},
}
}
ReviewDecision::Denied | ReviewDecision::Abort => {
// Fall through to original failure handling.
ResponseInputItem::FunctionCallOutput {
call_id,
output: FunctionCallOutputPayload {
content: "exec command rejected by user".to_string(),
success: None,
},
}
}
}
}
fn format_exec_output_str(exec_output: &ExecToolCallOutput) -> String {
let ExecToolCallOutput {
aggregated_output, ..
} = exec_output;
// Head+tail truncation for the model: show the beginning and end with an elision.
// Clients still receive full streams; only this formatted summary is capped.
let s = aggregated_output.text.as_str();
let total_lines = s.lines().count();
if s.len() <= MODEL_FORMAT_MAX_BYTES && total_lines <= MODEL_FORMAT_MAX_LINES {
return s.to_string();
}
let lines: Vec<&str> = s.lines().collect();
let head_take = MODEL_FORMAT_HEAD_LINES.min(lines.len());
let tail_take = MODEL_FORMAT_TAIL_LINES.min(lines.len().saturating_sub(head_take));
let omitted = lines.len().saturating_sub(head_take + tail_take);
// Join head and tail blocks (lines() strips newlines; reinsert them)
let head_block = lines
.iter()
.take(head_take)
.cloned()
.collect::<Vec<_>>()
.join("\n");
let tail_block = if tail_take > 0 {
lines[lines.len() - tail_take..].join("\n")
} else {
String::new()
};
let marker = format!("\n[... omitted {omitted} of {total_lines} lines ...]\n\n");
// Byte budgets for head/tail around the marker
let mut head_budget = MODEL_FORMAT_HEAD_BYTES.min(MODEL_FORMAT_MAX_BYTES);
let tail_budget = MODEL_FORMAT_MAX_BYTES.saturating_sub(head_budget + marker.len());
if tail_budget == 0 && marker.len() >= MODEL_FORMAT_MAX_BYTES {
// Degenerate case: marker alone exceeds budget; return a clipped marker
return take_bytes_at_char_boundary(&marker, MODEL_FORMAT_MAX_BYTES).to_string();
}
if tail_budget == 0 {
// Make room for the marker by shrinking head
head_budget = MODEL_FORMAT_MAX_BYTES.saturating_sub(marker.len());
}
// Head and tail blocks are already line-capped above; build the final
// string while respecting the byte budgets.
let head_part = take_bytes_at_char_boundary(&head_block, head_budget);
let mut result = String::with_capacity(MODEL_FORMAT_MAX_BYTES.min(s.len()));
result.push_str(head_part);
result.push_str(&marker);
let tail_budget_final = MODEL_FORMAT_MAX_BYTES.saturating_sub(result.len());
let tail_part = take_last_bytes_at_char_boundary(&tail_block, tail_budget_final);
result.push_str(tail_part);
result
}
// Truncate a &str to a byte budget at a char boundary (prefix)
#[inline]
fn take_bytes_at_char_boundary(s: &str, maxb: usize) -> &str {
if s.len() <= maxb {
return s;
}
let mut last_ok = 0;
for (i, ch) in s.char_indices() {
let nb = i + ch.len_utf8();
if nb > maxb {
break;
}
last_ok = nb;
}
&s[..last_ok]
}
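// e.g. take_bytes_at_char_boundary("héllo", 4) == "hél": the 2-byte 'é' is
// kept whole and the cut lands on the char boundary at byte 4.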
// Take a suffix of a &str within a byte budget at a char boundary
#[inline]
fn take_last_bytes_at_char_boundary(s: &str, maxb: usize) -> &str {
if s.len() <= maxb {
return s;
}
let mut start = s.len();
let mut used = 0usize;
for (i, ch) in s.char_indices().rev() {
let nb = ch.len_utf8();
if used + nb > maxb {
break;
}
start = i;
used += nb;
if start == 0 {
break;
}
}
&s[start..]
}
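// e.g. take_last_bytes_at_char_boundary("héllo", 4) == "llo": including the
// 2-byte 'é' would exceed the budget, so the suffix starts after it.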
/// Format exec output as a pre-serialized JSON payload to send back to the model.
fn format_exec_output(exec_output: &ExecToolCallOutput) -> String {
let ExecToolCallOutput {
exit_code,
duration,
..
} = exec_output;
#[derive(Serialize)]
struct ExecMetadata {
exit_code: i32,
duration_seconds: f32,
}
#[derive(Serialize)]
struct ExecOutput<'a> {
output: &'a str,
metadata: ExecMetadata,
}
// round to 1 decimal place
let duration_seconds = ((duration.as_secs_f32()) * 10.0).round() / 10.0;
let formatted_output = format_exec_output_str(exec_output);
let payload = ExecOutput {
output: &formatted_output,
metadata: ExecMetadata {
exit_code: *exit_code,
duration_seconds,
},
};
#[expect(clippy::expect_used)]
serde_json::to_string(&payload).expect("serialize ExecOutput")
}
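// For example, a successful two-second run might serialize as (sketch):
//   {"output":"...","metadata":{"exit_code":0,"duration_seconds":2.0}}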
fn get_last_assistant_message_from_turn(responses: &[ResponseItem]) -> Option<String> {
responses.iter().rev().find_map(|item| {
if let ResponseItem::Message { role, content, .. } = item {
if role == "assistant" {
content.iter().rev().find_map(|ci| {
if let ContentItem::OutputText { text } = ci {
Some(text.clone())
} else {
None
}
})
} else {
None
}
} else {
None
}
})
}
async fn drain_to_completed(
sess: &Session,
turn_context: &TurnContext,
sub_id: &str,
prompt: &Prompt,
) -> CodexResult<()> {
let rollout_item = RolloutItem::TurnContext(TurnContextItem {
cwd: turn_context.cwd.clone(),
approval_policy: turn_context.approval_policy,
sandbox_policy: turn_context.sandbox_policy.clone(),
model: turn_context.client.get_model(),
effort: turn_context.client.get_reasoning_effort(),
summary: turn_context.client.get_reasoning_summary(),
});
sess.persist_rollout_items(&[rollout_item]).await;
let mut stream = turn_context.client.clone().stream(prompt).await?;
loop {
let maybe_event = stream.next().await;
let Some(event) = maybe_event else {
return Err(CodexErr::Stream(
"stream closed before response.completed".into(),
None,
));
};
match event {
Ok(ResponseEvent::OutputItemDone(item)) => {
// Record only to in-memory conversation history; avoid state snapshot.
let mut state = sess.state.lock_unchecked();
state.history.record_items(std::slice::from_ref(&item));
}
Ok(ResponseEvent::Completed {
response_id: _,
token_usage,
}) => {
let info = {
let mut st = sess.state.lock_unchecked();
let info = TokenUsageInfo::new_or_append(
&st.token_info,
&token_usage,
turn_context.client.get_model_context_window(),
);
st.token_info = info.clone();
info
};
sess.tx_event
.send(Event {
id: sub_id.to_string(),
msg: EventMsg::TokenCount(crate::protocol::TokenCountEvent { info }),
})
.await
.ok();
return Ok(());
}
Ok(_) => continue,
Err(e) => return Err(e),
}
}
}
fn convert_call_tool_result_to_function_call_output_payload(
call_tool_result: &CallToolResult,
) -> FunctionCallOutputPayload {
let CallToolResult {
content,
is_error,
structured_content,
} = call_tool_result;
// In terms of what to send back to the model, we prefer structured_content
// when available and fall back to content otherwise.
let mut is_success = is_error != &Some(true);
let content = if let Some(structured_content) = structured_content
&& structured_content != &serde_json::Value::Null
&& let Ok(serialized_structured_content) = serde_json::to_string(&structured_content)
{
serialized_structured_content
} else {
match serde_json::to_string(&content) {
Ok(serialized_content) => serialized_content,
Err(err) => {
// If we could not serialize either content or structured_content to
// JSON, flag this as an error.
is_success = false;
err.to_string()
}
}
};
FunctionCallOutputPayload {
content,
success: Some(is_success),
}
}
#[cfg(test)]
mod tests {
use super::*;
use mcp_types::ContentBlock;
use mcp_types::TextContent;
use pretty_assertions::assert_eq;
use serde_json::json;
use std::time::Duration as StdDuration;
fn text_block(s: &str) -> ContentBlock {
ContentBlock::TextContent(TextContent {
annotations: None,
text: s.to_string(),
r#type: "text".to_string(),
})
}
#[test]
fn prefers_structured_content_when_present() {
let ctr = CallToolResult {
// Content present but should be ignored because structured_content is set.
content: vec![text_block("ignored")],
is_error: None,
structured_content: Some(json!({
"ok": true,
"value": 42
})),
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&json!({
"ok": true,
"value": 42
}))
.unwrap(),
success: Some(true),
};
assert_eq!(expected, got);
}
#[test]
fn model_truncation_head_tail_by_lines() {
// Build 400 short lines so line-count limit, not byte budget, triggers truncation
let lines: Vec<String> = (1..=400).map(|i| format!("line{i}")).collect();
let full = lines.join("\n");
let exec = ExecToolCallOutput {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full),
duration: StdDuration::from_secs(1),
};
let out = format_exec_output_str(&exec);
// Expect elision marker with correct counts
let omitted = 400 - MODEL_FORMAT_MAX_LINES; // 144
let marker = format!("\n[... omitted {omitted} of 400 lines ...]\n\n");
assert!(out.contains(&marker), "missing marker: {out}");
// Validate head and tail
let parts: Vec<&str> = out.split(&marker).collect();
assert_eq!(parts.len(), 2, "expected one marker split");
let head = parts[0];
let tail = parts[1];
let expected_head: String = (1..=MODEL_FORMAT_HEAD_LINES)
.map(|i| format!("line{i}"))
.collect::<Vec<_>>()
.join("\n");
assert!(head.starts_with(&expected_head), "head mismatch");
let expected_tail: String = ((400 - MODEL_FORMAT_TAIL_LINES + 1)..=400)
.map(|i| format!("line{i}"))
.collect::<Vec<_>>()
.join("\n");
assert!(tail.ends_with(&expected_tail), "tail mismatch");
}
#[test]
fn model_truncation_respects_byte_budget() {
// Construct a large output (about 100kB) so byte budget dominates
let big_line = "x".repeat(100);
let full = std::iter::repeat_n(big_line, 1000)
.collect::<Vec<_>>()
.join("\n");
let exec = ExecToolCallOutput {
exit_code: 0,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new(full.clone()),
duration: StdDuration::from_secs(1),
};
let out = format_exec_output_str(&exec);
assert!(out.len() <= MODEL_FORMAT_MAX_BYTES, "exceeds byte budget");
assert!(out.contains("omitted"), "should contain elision marker");
// Ensure head and tail are drawn from the original
assert!(full.starts_with(out.chars().take(8).collect::<String>().as_str()));
assert!(
full.ends_with(
out.chars()
.rev()
.take(8)
.collect::<String>()
.chars()
.rev()
.collect::<String>()
.as_str()
)
);
}
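// Added sketch (not part of the original suite): verifies that the
// byte-budget helpers cut on char boundaries instead of splitting a
// multibyte character.
#[test]
fn char_boundary_helpers_never_split_multibyte_chars() {
// "héllo" is 6 bytes: h=1, é=2, l=1, l=1, o=1.
assert_eq!(take_bytes_at_char_boundary("héllo", 4), "hél");
// The suffix variant drops 'é' entirely rather than splitting it.
assert_eq!(take_last_bytes_at_char_boundary("héllo", 4), "llo");
// Budgets at or above the string length return the input unchanged.
assert_eq!(take_bytes_at_char_boundary("héllo", 6), "héllo");
}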
#[test]
fn falls_back_to_content_when_structured_is_null() {
let ctr = CallToolResult {
content: vec![text_block("hello"), text_block("world")],
is_error: None,
structured_content: Some(serde_json::Value::Null),
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&vec![text_block("hello"), text_block("world")])
.unwrap(),
success: Some(true),
};
assert_eq!(expected, got);
}
#[test]
fn success_flag_reflects_is_error_true() {
let ctr = CallToolResult {
content: vec![text_block("unused")],
is_error: Some(true),
structured_content: Some(json!({ "message": "bad" })),
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&json!({ "message": "bad" })).unwrap(),
success: Some(false),
};
assert_eq!(expected, got);
}
#[test]
fn success_flag_true_with_no_error_and_content_used() {
let ctr = CallToolResult {
content: vec![text_block("alpha")],
is_error: Some(false),
structured_content: None,
};
let got = convert_call_tool_result_to_function_call_output_payload(&ctr);
let expected = FunctionCallOutputPayload {
content: serde_json::to_string(&vec![text_block("alpha")]).unwrap(),
success: Some(true),
};
assert_eq!(expected, got);
}
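// Added sketch (not part of the original suite): spot-checks the JSON shape
// produced by `format_exec_output`; field names follow the `ExecOutput` and
// `ExecMetadata` structs defined above.
#[test]
fn exec_output_payload_includes_output_and_metadata() {
let exec = ExecToolCallOutput {
exit_code: 3,
stdout: StreamOutput::new(String::new()),
stderr: StreamOutput::new(String::new()),
aggregated_output: StreamOutput::new("hello".to_string()),
duration: StdDuration::from_secs(2),
};
let payload = format_exec_output(&exec);
let value: serde_json::Value = serde_json::from_str(&payload).unwrap();
assert_eq!(value["output"], "hello");
assert_eq!(value["metadata"]["exit_code"], 3);
assert_eq!(value["metadata"]["duration_seconds"], 2.0);
}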
}