I discovered that `cargo build` worked for the entire workspace, but
failed when building the `mcp-client` or `core` crates individually.
* `mcp-client` failed to build because it underspecified the set of
features it needed from `tokio`.
* `core` failed to build because it was using a "feature" of its own
crate in the default, no-feature version.
This PR fixes the builds and adds a check in CI to defend against this
sort of thing going forward.
Cleans up the signature for `new_stdio_client()` to more closely mirror
how MCP servers are declared in config files (`command`, `args`, `env`).
Also takes a cue from Claude Code: by default, the MCP server is
launched with a restricted `env` that includes only "safe" variables
such as `USER` and `PATH` (see the `create_env_for_mcp_server()`
function introduced in this PR for details). Developers commonly have
sensitive API keys in their environment, and those should only be
forwarded to the MCP server when the user has explicitly configured it
to do so.
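For reference, the restricted environment amounts to something like the
following (a hedged sketch; the exact whitelist and signature in the PR
may differ):

```rust
use std::collections::HashMap;

// Sketch: only a small set of "safe" variables is forwarded by default.
const DEFAULT_ENV_VARS: &[&str] = &["PATH", "USER" /* plus a few other low-risk vars */];

fn create_env_for_mcp_server(extra_env: &HashMap<String, String>) -> HashMap<String, String> {
    let mut env: HashMap<String, String> = std::env::vars()
        .filter(|(name, _)| DEFAULT_ENV_VARS.contains(&name.as_str()))
        .collect();
    // Anything the user explicitly configured for this server is layered on top.
    env.extend(extra_env.clone());
    env
}
```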
This PR introduces an initial `McpClient` that we will use to give Codex
itself programmatic access to foreign MCPs. This does not wire it up in
Codex itself yet, but the new `mcp-client` crate includes a `main.rs`
for basic testing for now.
Manually tested by sending a `tools/list` request to Codex's own MCP
server:
```
codex-rs$ cargo build
codex-rs$ cargo run --bin codex-mcp-client ./target/debug/codex-mcp-server
{
  "tools": [
    {
      "description": "Run a Codex session. Accepts configuration parameters matching the Codex Config struct.",
      "inputSchema": {
        "properties": {
          "approval-policy": {
            "description": "Execution approval policy expressed as the kebab-case variant name (`unless-allow-listed`, `auto-edit`, `on-failure`, `never`).",
            "enum": [
              "auto-edit",
              "unless-allow-listed",
              "on-failure",
              "never"
            ],
            "type": "string"
          },
          "cwd": {
            "description": "Working directory for the session. If relative, it is resolved against the server process's current working directory.",
            "type": "string"
          },
          "disable-response-storage": {
            "description": "Disable server-side response storage.",
            "type": "boolean"
          },
          "model": {
            "description": "Optional override for the model name (e.g. \"o3\", \"o4-mini\")",
            "type": "string"
          },
          "prompt": {
            "description": "The *initial user prompt* to start the Codex conversation.",
            "type": "string"
          },
          "sandbox-permissions": {
            "description": "Sandbox permissions using the same string values accepted by the CLI (e.g. \"disk-write-cwd\", \"network-full-access\").",
            "items": {
              "enum": [
                "disk-full-read-access",
                "disk-write-cwd",
                "disk-write-platform-user-temp-folder",
                "disk-write-platform-global-temp-folder",
                "disk-full-write-access",
                "network-full-access"
              ],
              "type": "string"
            },
            "type": "array"
          }
        },
        "required": [
          "prompt"
        ],
        "type": "object"
      },
      "name": "codex"
    }
  ]
}
```
This PR replaces the placeholder `"echo"` tool call in the MCP server
with a `"codex"` tool that calls Codex. Events such as
`ExecApprovalRequest` and `ApplyPatchApprovalRequest` are not handled
properly yet, but I have `approval_policy = "never"` set in my
`~/.codex/config.toml` such that those codepaths are not exercised.
The schema for this MCP tool is defined by a new `CodexToolCallParam`
struct introduced in this PR. It is fairly similar to `ConfigOverrides`,
as the param is used to help create the `Config` used to start the Codex
session, though it also includes the `prompt` used to kick off the
session.
This PR also introduces the use of the third-party `schemars` crate to
generate the JSON schema, which is verified in the
`verify_codex_tool_json_schema()` unit test.
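For illustration, deriving a schema with `schemars` looks roughly like
this (a trimmed sketch; the real `CodexToolCallParam` has more fields):

```rust
use schemars::{schema_for, JsonSchema};
use serde::Deserialize;

// Trimmed sketch of the param struct; the real one mirrors ConfigOverrides.
#[derive(Deserialize, JsonSchema)]
#[serde(rename_all = "kebab-case")]
pub struct CodexToolCallParam {
    /// The *initial user prompt* to start the Codex conversation.
    pub prompt: String,
    /// Optional override for the model name (e.g. "o3", "o4-mini").
    pub model: Option<String>,
}

fn main() {
    // The verify_codex_tool_json_schema() test presumably checks output like this.
    let schema = schema_for!(CodexToolCallParam);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
```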
Events that are dispatched during the Codex session are sent back to the
MCP client as MCP notifications. This gives the client a way to monitor
progress as the tool call itself may take minutes to complete depending
on the complexity of the task requested by the user.
In the video below, I launched the server via:
```shell
mcp-server$ RUST_LOG=debug npx @modelcontextprotocol/inspector cargo run --
```
In the video, you can see the flow of:
* requesting the list of tools
* choosing the **codex** tool
* entering a value for **prompt** and then making the tool call
Note that I left the other fields blank because when unspecified, the
values in my `~/.codex/config.toml` were used:
https://github.com/user-attachments/assets/1975058c-b004-43ef-8c8d-800a953b8192
Note that while using the inspector, I did run into
https://github.com/modelcontextprotocol/inspector/issues/293, though the
tip about ensuring I had only one instance of the **MCP Inspector** tab
open in my browser seemed to fix things.
https://github.com/openai/codex/pull/800 kicked off some work to be more
disciplined about honoring the `cwd` param passed in rather than
assuming `std::env::current_dir()` as the `cwd`. As part of this, we
need to ensure `apply_patch` calls honor the appropriate `cwd` as well,
which is significant if the paths in the `apply_patch` arg are not
absolute paths themselves. When they are not (see the sketch after this
list):
- The `apply_patch` function call can contain an optional `workdir`
  param, so:
  - If specified and it is an absolute path, it should be used to
    resolve relative paths.
  - If specified and it is a relative path, it should first be resolved
    against `Config.cwd`, and then any relative paths should be resolved
    against the result.
  - If `workdir` is not specified on the function call, relative paths
    should be resolved against `Config.cwd`.
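A sketch of those rules (hypothetical helper; the real resolution lives
in the `apply_patch` handling and may be structured differently):

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper illustrating the rules above. Note that joining an
// absolute `workdir` onto `config_cwd` simply replaces it, so both the
// absolute and relative `workdir` cases fall out of the same expression.
fn resolve_patch_path(path: &Path, workdir: Option<&Path>, config_cwd: &Path) -> PathBuf {
    if path.is_absolute() {
        return path.to_path_buf();
    }
    match workdir {
        Some(workdir) => config_cwd.join(workdir).join(path),
        None => config_cwd.join(path),
    }
}
```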
Note that we had a similar issue in the TypeScript CLI that was fixed in
https://github.com/openai/codex/pull/556.
As part of the fix, this PR introduces `ApplyPatchAction` so clients can
deal with that instead of the raw `HashMap<PathBuf,
ApplyPatchFileChange>`. This enables us to enforce, by construction,
that all paths contained in the `ApplyPatchAction` are absolute paths.
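Roughly, "absolute by construction" can look like this (hedged sketch;
the real type's API may differ):

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Stand-in for the crate's existing per-file change type; details elided.
pub struct ApplyPatchFileChange;

pub struct ApplyPatchAction {
    changes: HashMap<PathBuf, ApplyPatchFileChange>,
}

impl ApplyPatchAction {
    // Constructing the action is the only way to get one, and it rejects relative
    // paths, so every consumer can rely on the paths being absolute.
    pub fn new(changes: HashMap<PathBuf, ApplyPatchFileChange>) -> Result<Self, String> {
        if let Some(path) = changes.keys().find(|p| !p.is_absolute()) {
            return Err(format!("apply_patch path is not absolute: {}", path.display()));
        }
        Ok(Self { changes })
    }
}
```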
https://github.com/openai/codex/pull/800 made `cwd` a property of
`Config` and made it so the `cwd` is not necessarily
`std::env::current_dir()`. As such, `is_inside_git_repo()` should check
`Config.cwd` rather than `std::env::current_dir()`.
This PR updates `is_inside_git_repo()` to take `Config` instead of an
arbitrary `PathBuf` to force the check to operate on a `Config` where
`cwd` has been resolved to what the user specified.
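The shape of the change is roughly (the walk-up logic here is
illustrative, not a copy of the real implementation):

```rust
use crate::config::Config; // hypothetical module path for Config

// Takes the whole Config so the check always starts from the resolved `cwd`.
pub fn is_inside_git_repo(config: &Config) -> bool {
    let mut dir = config.cwd.clone();
    loop {
        if dir.join(".git").exists() {
            return true;
        }
        if !dir.pop() {
            return false;
        }
    }
}
```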
In order to expose Codex via an MCP server, I realized that we should be
taking `cwd` as a parameter rather than assuming
`std::env::current_dir()` as the `cwd`. Specifically, the user may want
to start a session in a directory other than the one where the MCP
server has been started.
This PR makes `cwd: PathBuf` a required field of `Session` and threads
it all the way through, though I think there is still an issue with not
honoring `workdir` for `apply_patch`, which is something we also had to
fix in the TypeScript version: https://github.com/openai/codex/pull/556.
This also adds `-C`/`--cd` to change the cwd via the command line.
To test, I ran:
```
cargo run --bin codex -- exec -C /tmp 'show the output of ls'
```
and verified it showed the contents of my `/tmp` folder instead of
`$PWD`.
https://github.com/openai/codex/pull/793 had important information on
the `notify` config option that seemed worth memorializing, so this PR
updates the documentation about all of the configurable options in
`~/.codex/config.toml`.
With this change, you can specify a program that will be executed to get
notified about events generated by Codex. The notification info will be
packaged as a JSON object. The supported notification types are defined
by the `UserNotification` enum introduced in this PR. Initially, it
contains only one variant, `AgentTurnComplete`:
```rust
pub(crate) enum UserNotification {
    #[serde(rename_all = "kebab-case")]
    AgentTurnComplete {
        turn_id: String,
        /// Messages that the user sent to the agent to initiate the turn.
        input_messages: Vec<String>,
        /// The last message sent by the assistant in the turn.
        last_assistant_message: Option<String>,
    },
}
```
This is intended to support the common case when a "turn" ends, which
often means it is now your chance to give Codex further instructions.
For example, I have the following in my `~/.codex/config.toml`:
```toml
notify = ["python3", "/Users/mbolin/.codex/notify.py"]
```
I created my own custom notifier script that calls out to
[terminal-notifier](https://github.com/julienXX/terminal-notifier) to
show a desktop push notification on macOS. Contents of `notify.py`:
```python
#!/usr/bin/env python3
import json
import subprocess
import sys


def main() -> int:
    if len(sys.argv) != 2:
        print("Usage: notify.py <NOTIFICATION_JSON>")
        return 1

    try:
        notification = json.loads(sys.argv[1])
    except json.JSONDecodeError:
        return 1

    match notification_type := notification.get("type"):
        case "agent-turn-complete":
            assistant_message = notification.get("last-assistant-message")
            if assistant_message:
                title = f"Codex: {assistant_message}"
            else:
                title = "Codex: Turn Complete!"
            # Field names arrive in kebab-case (see the serde attribute on the enum above).
            input_messages = notification.get("input-messages", [])
            message = " ".join(input_messages)
            title += message
        case _:
            print(f"not sending a push notification for: {notification_type}")
            return 0

    subprocess.check_output(
        [
            "terminal-notifier",
            "-title",
            title,
            "-message",
            message,
            "-group",
            "codex",
            "-ignoreDnD",
            "-activate",
            "com.googlecode.iterm2",
        ]
    )

    return 0


if __name__ == "__main__":
    sys.exit(main())
```
For reference, here are related PRs that tried to add this functionality
to the TypeScript version of the Codex CLI:
* https://github.com/openai/codex/pull/160
* https://github.com/openai/codex/pull/498
While creating a basic MCP server in
https://github.com/openai/codex/pull/792, I discovered a number of bugs
with the initial `mcp-types` crate that I needed to fix in order to
implement the server.
For example, I discovered that when serializing a message, `"jsonrpc":
"2.0"` was not being included.
I changed the codegen so that the field is added as:
```rust
#[serde(rename = "jsonrpc", default = "default_jsonrpc")]
pub jsonrpc: String,
```
This ensures that the field is serialized as `"2.0"`, though the field
still has to be assigned, which is tedious. I may experiment with
`Default` or something else in the future. (I also considered creating a
custom serializer, but I'm not sure it's worth the trouble.)
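For context, the `default = "default_jsonrpc"` attribute refers to a
small helper along these lines (my sketch of the generated code):

```rust
// Supplies the JSON-RPC version string when the field is not present.
fn default_jsonrpc() -> String {
    "2.0".to_string()
}
```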
While here, I also added `MCP_SCHEMA_VERSION` and `JSONRPC_VERSION` as
`pub const`s for the crate.
I also discovered that MCP rejects sending `null` for optional fields,
so I had to add `#[serde(skip_serializing_if = "Option::is_none")]` on
`Option` fields.
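For example, mirroring the snippet above (the field name here is
hypothetical):

```rust
#[serde(skip_serializing_if = "Option::is_none")]
pub next_cursor: Option<String>,
```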
This adds our own `mcp-types` crate to our Cargo workspace. We vendor in
the
[`2025-03-26/schema.json`](05f2045136/schema/2025-03-26/schema.json)
from the MCP repo and introduce a `generate_mcp_types.py` script to
codegen the `lib.rs` from the JSON schema.
Test coverage is currently light, but I plan to refine things as we
start making use of this crate.
And yes, I am aware that
https://github.com/modelcontextprotocol/rust-sdk exists, though the
published https://crates.io/crates/rmcp appears to be a competing
effort. While things are up in the air, it seems better for us to
control our own version of this code.
Incidentally, Codex did a lot of the work for this PR. I told it to
never edit `lib.rs` directly and instead to update
`generate_mcp_types.py` and then re-run it to update `lib.rs`. It
followed these instructions and once things were working end-to-end, I
iteratively asked for changes to the tests until the API looked
reasonable (and the code worked). Codex was responsible for figuring out
what to do to `generate_mcp_types.py` to achieve the requested test/API
changes.
For now, keep things simple such that we never update the `version` in
the `Cargo.toml` for the workspace root on the `main` branch. Instead,
create a new branch for a release, push one commit that updates the
`version`, and then tag that branch to kick off a release.
To test, I ran this script and created this release job:
https://github.com/openai/codex/actions/runs/14762580641
The generated DotSlash file has URLs that refer to
`https://github.com/openai/codex/releases/`, so let's set
`prerelease: false` (but keep `draft: true` for now) so those URLs should
work.
Also updated `version` in Cargo workspace so I will kick off a build
once this lands.
@oai-ragona and I discussed it, and we feel the REPL crate has served
its purpose, so we're going to delete the code and future archaeologists
can find it in Git history.
Apparently I made two key mistakes in
https://github.com/openai/codex/pull/740 (fixed in this PR):
* I forgot to redefine `$dest` in the `Stage Linux-only artifacts` step
* I did not define the `if` check correctly in the `Stage Linux-only
artifacts` step
This fixes both of those issues and bumps the workspace version to
`0.0.2504292006` in preparation for another release attempt.
This introduces a standalone executable that runs the equivalent of the
`codex debug landlock` subcommand and updates `rust-release.yml` to
include it in the release.
The idea is that we will include this small binary with the TypeScript
CLI to provide support for Linux sandboxing.
Taking a pass at building artifacts per platform so we can consider
different distribution strategies that don't require users to install
the full `cargo` toolchain.
Right now this grabs just the `codex-repl` and `codex-tui` bins for 5
different targets and bundles them into a draft release. I think a
clearly marked pre-release set of artifacts will unblock the next step
of testing.
Previous to this PR, `SandboxPolicy` was a bit difficult to work with:
237f8a11e1/codex-rs/core/src/protocol.rs (L98-L108)
Specifically:
* It was an `enum` and therefore options were mutually exclusive as
opposed to additive.
* It defined things in terms of what the agent _could not_ do as opposed
to what they _could_ do. This made things hard to support because we
would prefer to build up a sandbox config by starting with something
extremely restrictive and only granting permissions for things the user
has explicitly allowed.
This PR changes things substantially by redefining the policy in terms
of two concepts:
* A `SandboxPermission` enum that defines permissions that can be
granted to the agent/sandbox.
* A `SandboxPolicy` that internally stores a `Vec<SandboxPermission>`,
but externally exposes a simpler API that can be used to configure
Seatbelt/Landlock.
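A hedged sketch of that shape, with variant names matching the
kebab-case permission strings used by the CLI (the real API surface may
differ):

```rust
use serde::Deserialize;

#[derive(Clone, Copy, Debug, Deserialize, PartialEq)]
#[serde(rename_all = "kebab-case")]
pub enum SandboxPermission {
    DiskFullReadAccess,
    DiskWriteCwd,
    DiskWritePlatformUserTempFolder,
    DiskWritePlatformGlobalTempFolder,
    DiskFullWriteAccess,
    NetworkFullAccess,
}

#[derive(Clone, Debug, Deserialize)]
pub struct SandboxPolicy {
    permissions: Vec<SandboxPermission>,
}

impl SandboxPolicy {
    // Simple questions that the Seatbelt/Landlock configuration is built from.
    pub fn has_full_disk_read_access(&self) -> bool {
        self.permissions.contains(&SandboxPermission::DiskFullReadAccess)
    }

    pub fn has_full_network_access(&self) -> bool {
        self.permissions.contains(&SandboxPermission::NetworkFullAccess)
    }
}
```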
Previous to this PR, we supported a `--sandbox` flag that effectively
mapped to an enum value in `SandboxPolicy`. Though now that
`SandboxPolicy` is a wrapper around `Vec<SandboxPermission>`, the single
`--sandbox` flag no longer makes sense. While I could have turned it
into a flag that the user can specify multiple times, I think the
current values to use with such a flag are long and potentially messy,
so for the moment, I have dropped support for `--sandbox` altogether and
we can bring it back once we have figured out the naming thing.
Since `--sandbox` is gone, users now have to specify `--full-auto` to
get a sandbox that allows writes in `cwd`. Admittedly, there is no clean
way to specify the equivalent of `--full-auto` in your `config.toml`
right now, so we will have to revisit that, as well.
Because `Config` presents a `SandboxPolicy` field and `SandboxPolicy`
changed considerably, I had to overhaul how config loading works, as
well. There are now two distinct concepts, `ConfigToml` and `Config`:
* `ConfigToml` is the deserialization of `~/.codex/config.toml`. As one
might expect, every field is an `Option` and it is `#[derive(Deserialize,
Default)]`. Consistent use of `Option` makes it clear what the user
has specified explicitly.
* `Config` is the "normalized config" and is produced by merging
`ConfigToml` with `ConfigOverrides`. Where `ConfigToml` contains a raw
`Option<Vec<SandboxPermission>>`, `Config` presents only the final
`SandboxPolicy`.
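Condensed, the split looks something like this (fields heavily
abbreviated, reusing the `SandboxPermission`/`SandboxPolicy` sketch
above):

```rust
use serde::Deserialize;

// Raw deserialization of ~/.codex/config.toml: every field is an Option.
#[derive(Deserialize, Default)]
pub struct ConfigToml {
    pub model: Option<String>,
    pub sandbox_permissions: Option<Vec<SandboxPermission>>,
}

// Normalized config produced by merging ConfigToml with ConfigOverrides:
// defaults have been applied and only the final SandboxPolicy is exposed.
pub struct Config {
    pub model: String,
    pub sandbox_policy: SandboxPolicy,
}
```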
The changes to `core/src/exec.rs` and `core/src/linux.rs` merit extra
special attention to ensure we are faithfully mapping the
`SandboxPolicy` to the Seatbelt and Landlock configs, respectively.
Also, take note that `core/src/seatbelt_readonly_policy.sbpl` has been
renamed to `codex-rs/core/src/seatbelt_base_policy.sbpl` and that
`(allow file-read*)` has been removed from the `.sbpl` file as now this
is added to the policy in `core/src/exec.rs` when
`sandbox_policy.has_full_disk_read_access()` is `true`.
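Concretely, the read-access part amounts to something like this when the
Seatbelt policy text is assembled (hedged sketch reusing the
`SandboxPolicy` sketch above; variable names are illustrative):

```rust
// Start from the base .sbpl policy and append the read rule only when it has been granted.
fn build_seatbelt_policy(base_policy: &str, sandbox_policy: &SandboxPolicy) -> String {
    let mut policy = base_policy.to_string();
    if sandbox_policy.has_full_disk_read_access() {
        policy.push_str("\n(allow file-read*)");
    }
    policy
}
```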
When processing an `apply_patch` tool call, we were already computing
the new file content in order to compute the unified diff. Before this
PR, we were shelling out to `patch(1)` to apply the unified diff once
the user accepted the change, but this updates the code to just retain
the new file content and use it to write the file when the user accepts.
This simplifies deployment because it no longer assumes `patch(1)` is on
the host.
Note this change is internal to the Codex agent and does not affect
`protocol.rs`.
This PR adds a `debug landlock` subcommand to the Codex CLI for testing
how Codex would execute a command using the specified sandbox policy.
Built and ran this code in the `rust:latest` Docker container. In the
container, hitting the network with vanilla `curl` succeeds:
```
$ curl google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
```
whereas this fails, as expected:
```
$ cargo run -- debug landlock -s network-restricted -- curl google.com
curl: (6) getaddrinfo() thread failed to start
```
https://github.com/openai/codex/pull/642 introduced support for the
`--disable-response-storage` flag, but if you are a ZDR customer, it is
tedious to set this every time, so this PR makes it possible to set this
once in `config.toml` and be done with it.
Incidentally, this tidies things up such that now `init_codex()` takes
only one parameter: `Config`.
Originally, the `interactive` crate was going to be a placeholder for
building out a UX that was comparable to that of the existing TypeScript
CLI. Though after researching how Ratatui works, that seems difficult to
do because it is designed around the idea that it will redraw the full
screen buffer each time (and so any scrolling should be "internal" to
your Ratatui app) whereas the TypeScript CLI expects to render the full
history of the conversation every time(*) (which is why you can use your
terminal scrollbar to scroll it).
While it is possible to use Ratatui in a way that acts more like what
the TypeScript CLI is doing, it is awkward and seemingly results in
tedious code, so I think we should abandon that approach. As such, this
PR deletes the `interactive/` folder and the code that depended on it.
Further, since we added support for mousewheel scrolling in the TUI in
https://github.com/openai/codex/pull/641, it certainly feels much better
and the need for scroll support via the terminal scrollbar is greatly
diminished. This is now a more appropriate default UX for the
"multitool" CLI.
(*) Incidentally, I haven't verified this, but I think this results in
O(N^2) work in rendering, which seems potentially problematic for long
conversations.
* In both TypeScript and Rust, we now invoke `/usr/bin/sandbox-exec`
explicitly rather than whatever `sandbox-exec` happens to be on the
`PATH`.
* Changed `isSandboxExecAvailable` to use `access()` rather than
`command -v` so that:
  * We only do the check once over the lifetime of the Codex process.
  * The check is specific to `/usr/bin/sandbox-exec`.
  * We now do a syscall rather than incur the overhead of spawning a
    process, dealing with timeouts, etc.
I think there is still room for improvement here where we should move
the `isSandboxExecAvailable` check earlier in the CLI, ideally right
after we do arg parsing to verify that we can provide the Seatbelt
sandbox if that is what the user has requested.
Although we made some promising fixes in
https://github.com/openai/codex/pull/662, we are still seeing some
flakiness in `test_writable_root()`. If this continues to flake with the
more generous timeout, we should try something other than simply
increasing the timeout.
The existing `b` and `space` bindings are sufficient, and in `less`,
`d` and `u` scroll by half a page, so the way we supported `d` and `u`
wasn't faithful to `less` anyway:
https://man7.org/linux/man-pages/man1/less.1.html
If we decide to bring `d` and `u` back, they should probably match
`less`?
This changes how instantiating `Config` works and also adds
`approval_policy` and `sandbox_policy` as fields. The idea is:
* All fields of `Config` have appropriate default values.
* `Config` is initially loaded from `~/.codex/config.toml`, so values in
`config.toml` will override those defaults.
* Clients must instantiate `Config` via
`Config::load_with_overrides(ConfigOverrides)` where `ConfigOverrides`
has optional overrides that are expected to be settable based on CLI
flags.
The `Config` should be defined early in the program and then passed
down. Now functions like `init_codex()` take fewer individual parameters
because they can just take a `Config`.
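The intended call pattern is roughly the following (hedged sketch,
assuming a clap-style `Cli` struct and that `ConfigOverrides` implements
`Default`):

```rust
// Hypothetical CLI wiring: flag values win over config.toml, which wins over defaults.
async fn run(cli: Cli) -> anyhow::Result<()> {
    let overrides = ConfigOverrides {
        model: cli.model,                     // e.g. from --model
        approval_policy: cli.approval_policy, // e.g. from --approval-policy
        ..Default::default()
    };
    // A malformed ~/.codex/config.toml now surfaces as an Err instead of being silently ignored.
    let config = Config::load_with_overrides(overrides)?;
    init_codex(config).await?;
    Ok(())
}
```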
Also, `Config::load()` used to fail silently if `~/.codex/config.toml`
had a parse error and fell back to the default config. This seemed
really bad because it wasn't clear why the values in my `config.toml`
weren't getting picked up. I changed things so that
`load_with_overrides()` returns `Result<Config>` and verified that the
various CLIs print a reasonable error if `config.toml` is malformed.
Finally, I also updated the TUI to show which **sandbox** value is being
used, as we do for other key values like **model** and **approval**.
This was also a reminder that the various values of `--sandbox` are
honored on Linux but not macOS today, so I added some TODOs about fixing
that.
Previously, the Rust TUI was writing log files to `/tmp`, which is
world-readable and not available on Windows, so that isn't great.
This PR tries to clean things up by adding a function that provides the
path to the "Codex config dir," e.g., `~/.codex` (though I suppose we
could support `$CODEX_HOME` to override this?) and then defines other
paths in terms of the result of `codex_dir()`.
For example, `log_dir()` returns the folder where log files should be
written which is defined in terms of `codex_dir()`. I updated the TUI to
use this function. On UNIX, we even go so far as to `chmod 600` the log
file by default, though as noted in a comment, it's a bit tedious to do
the equivalent on Windows, so we just let that go for now.
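The helpers are roughly shaped like this (names from the description
above; error handling and the exact subdirectory layout may differ):

```rust
use std::path::PathBuf;

pub fn codex_dir() -> std::io::Result<PathBuf> {
    // The PR text floats $CODEX_HOME as a possible future override for this location.
    let home = std::env::var_os("HOME")
        .map(PathBuf::from)
        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::NotFound, "could not find home directory"))?;
    Ok(home.join(".codex"))
}

pub fn log_dir() -> std::io::Result<PathBuf> {
    // Defined in terms of codex_dir(); the subfolder name here is a guess.
    Ok(codex_dir()?.join("log"))
}
```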
This also changes the default logging level to `info` for `codex_core`
and `codex_tui` when `RUST_LOG` is not specified. I'm not really sure if
we should use a more verbose default (it may be helpful when debugging
user issues), though if so, we should probably also set up log rotation?
Small fixes required:
* `ExitStatusExt` differs because UNIX represents the exit code as an
`i32` whereas Windows uses a `u32`
* Marking a file "executable only by owner" is a bit more involved on
Windows. We just do something approximate for now (and add a TODO) to
get things compiling.
I created this PR on my personal Windows machine and `cargo test` and
`cargo clippy` succeed. Once this is in, I'll rebase
https://github.com/openai/codex/pull/665 on top so Windows stays fixed!
In putting up https://github.com/openai/codex/pull/665, I discovered
that the `expanduser` crate does not compile on Windows. Looking into
it, we do not seem to need it because we were only using it with a value
that was passed in via a command-line flag, so the shell expands `~` for
us before we see it, anyway. (I changed the type in `Cli` from `String`
to `PathBuf`, to boot.)
If we do need this sort of functionality in the future,
https://docs.rs/shellexpand/latest/shellexpand/fn.tilde.html seems
promising.
I got the sense of this wrong in
https://github.com/openai/codex/pull/642. In that PR, I made
`--disable-response-storage` work, but broke the default case.
With this fix, both cases work and I think the code is a bit cleaner.
This adds support for the `--disable-response-storage` flag across our
multiple Rust CLIs to support customers who have opted into Zero-Data
Retention (ZDR). The analogous changes to the TypeScript CLI were:
* https://github.com/openai/codex/pull/481
* https://github.com/openai/codex/pull/543
For a client using ZDR, `previous_response_id` will never be available,
so the `input` field of an API request must include the full transcript
of the conversation thus far. As such, this PR changes the type of
`Prompt.input` from `Vec<ResponseInputItem>` to `Vec<ResponseItem>`.
Practically speaking, `ResponseItem` was effectively a "superset" of
`ResponseInputItem` already. The main difference for us is that
`ResponseItem` includes the `FunctionCall` variant that we have to
include as part of the conversation history in the ZDR case.
Another key change in this PR is modifying `try_run_turn()` so that it
returns the `Vec<ResponseItem>` for the turn in addition to the
`Vec<ResponseInputItem>` it already produced. This is because
the caller of `run_turn()` needs to record the `Vec<ResponseItem>` when
ZDR is enabled.
To that end, this PR introduces `ZdrTranscript` (and adds
`zdr_transcript: Option<ZdrTranscript>` to `struct State` in `codex.rs`)
to take responsibility for maintaining the conversation transcript in
the ZDR case.
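A hedged sketch of the bookkeeping (reusing the crate's `ResponseItem`
and assuming it implements `Clone`):

```rust
// Accumulates the full conversation so each request can resend it when ZDR is enabled.
#[derive(Default)]
pub(crate) struct ZdrTranscript {
    items: Vec<ResponseItem>,
}

impl ZdrTranscript {
    /// Record the items produced during a turn (including FunctionCall items).
    pub fn record_items(&mut self, items: &[ResponseItem]) {
        self.items.extend_from_slice(items);
    }

    /// Everything so far; used to populate `Prompt.input` for the next turn.
    pub fn contents(&self) -> Vec<ResponseItem> {
        self.items.clone()
    }
}
```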
It is intuitive to try to scroll the conversation history using the
mouse in the TUI, but prior to this change, we only supported scrolling
via keyboard events.
This PR enables mouse capture upon initialization (and disables it on
exit) such that we get `ScrollUp` and `ScrollDown` events in
`codex-rs/tui/src/app.rs`. I initially mapped each event to scrolling by
one line, but that felt sluggish. I decided to introduce
`ScrollEventHelper` so we could debounce scroll events and measure the
number of scroll events in a 100ms window to determine the "magnitude"
of the scroll event. I put in a basic heuristic to start, but perhaps
someone more motivated can play with it over time.
`ScrollEventHelper` takes care of handling the atomic fields and thread
management to ensure an `AppEvent::Scroll` event is pumped back through
the event loop at the appropriate time with the accumulated delta.
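The debounce idea, sketched (the real helper's fields and event type
differ; a plain `Sender<i32>` stands in for the `AppEvent::Scroll`
channel here):

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::mpsc::Sender;
use std::sync::Arc;
use std::thread;
use std::time::Duration;

pub struct ScrollEventHelper {
    delta: Arc<AtomicI32>,
    tx: Sender<i32>,
}

impl ScrollEventHelper {
    pub fn scroll_up(&self) {
        self.bump(-1);
    }

    pub fn scroll_down(&self) {
        self.bump(1);
    }

    fn bump(&self, step: i32) {
        // The first event of a burst schedules a flush ~100ms later; subsequent events
        // just accumulate into the delta, which determines the scroll magnitude.
        if self.delta.fetch_add(step, Ordering::SeqCst) == 0 {
            let delta = Arc::clone(&self.delta);
            let tx = self.tx.clone();
            thread::spawn(move || {
                thread::sleep(Duration::from_millis(100));
                let accumulated = delta.swap(0, Ordering::SeqCst);
                let _ = tx.send(accumulated);
            });
        }
    }
}
```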
We currently see a behavior that looks like this:
```
2025-04-25T16:52:24.552789Z WARN codex_core::codex: stream disconnected - retrying turn (1/10 in 232ms)...
codex> event: BackgroundEvent { message: "stream error: stream disconnected before completion: Transport error: error decoding response body; retrying 1/10 in 232ms…" }
2025-04-25T16:52:54.789885Z WARN codex_core::codex: stream disconnected - retrying turn (2/10 in 418ms)...
codex> event: BackgroundEvent { message: "stream error: stream disconnected before completion: Transport error: error decoding response body; retrying 2/10 in 418ms…" }
```
This PR contains a few different fixes that attempt to resolve/improve
this:
1. **Remove overall client timeout.** I think
[this](https://github.com/openai/codex/pull/658/files#diff-c39945d3c42f29b506ff54b7fa2be0795b06d7ad97f1bf33956f60e3c6f19c19L173)
is perhaps the big fix -- it looks to me like this was actually timing
out even if events were still coming through, and that was causing a
disconnect right in the middle of a healthy stream.
2. **Cap response sizes.** We were frequently sending MUCH larger
responses than the upstream typescript `codex`, and that was definitely
not helping. [Fix
here](https://github.com/openai/codex/pull/658/files#diff-d792bef59aa3ee8cb0cbad8b176dbfefe451c227ac89919da7c3e536a9d6cdc0R21-R26)
for that one.
3. **Much higher idle timeout.** Our idle timeout value was much lower
than typescript.
4. **Sub-linear backoff.** We were much too aggressively backing off,
[this](https://github.com/openai/codex/pull/658/files#diff-5d5959b95c6239e6188516da5c6b7eb78154cd9cfedfb9f753d30a7b6d6b8b06R30-R33)
makes it sub-exponential but maintains the jitter and such.
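For a sense of what "gentler than exponential, with jitter" can look
like (constants and the exponent here are made up for illustration, not
the PR's actual values):

```rust
use rand::Rng;
use std::time::Duration;

// Polynomial growth instead of exponential, with roughly ±10% jitter.
fn backoff(attempt: u64) -> Duration {
    let base_ms = 200.0 * (attempt as f64).powf(1.5);
    let jitter = rand::thread_rng().gen_range(0.9..1.1);
    Duration::from_millis((base_ms * jitter) as u64)
}
```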
I was seeing that `stream error: stream disconnected` behavior
constantly, and anecdotally I can no longer reproduce. It feels much
snappier.
As described in detail in `codex-rs/execpolicy/README.md` introduced in
this PR, `execpolicy` is a tool that lets you define a set of _patterns_
used to match [`execv(3)`](https://linux.die.net/man/3/execv)
invocations. When a pattern is matched, `execpolicy` returns the parsed
version in a structured form that is amenable to static analysis.
The primary use case is to define patterns that match commands that
should be auto-approved by a tool such as Codex. This supports a richer
pattern-matching mechanism than the sort of prefix matching we have done
to date, e.g.:
5e40d9d221/codex-cli/src/approvals.ts (L333-L354)
Note we are still playing with the API and the `system_path` option in
particular still needs some work.
##### What/Why
This PR makes it so that in Linux we actually respect the different
types of `--sandbox` flag, such that users can apply network and
filesystem restrictions in combination (currently the only supported
behavior), or just pick one or the other.
We should add similar support for OSX in a future PR.
##### Testing
From Linux devbox, updated tests to use more specific flags:
```
test linux::tests_linux::sandbox_blocks_ping ... ok
test linux::tests_linux::sandbox_blocks_getent ... ok
test linux::tests_linux::test_root_read ... ok
test linux::tests_linux::test_dev_null_write ... ok
test linux::tests_linux::sandbox_blocks_dev_tcp_redirection ... ok
test linux::tests_linux::sandbox_blocks_ssh ... ok
test linux::tests_linux::test_writable_root ... ok
test linux::tests_linux::sandbox_blocks_curl ... ok
test linux::tests_linux::sandbox_blocks_wget ... ok
test linux::tests_linux::sandbox_blocks_nc ... ok
test linux::tests_linux::test_root_write - should panic ... ok
```
##### Todo
- [ ] Add negative tests (e.g. confirm you can hit the network if you
configure filesystem-only restrictions)
As stated in `codex-rs/README.md`:
Today, Codex CLI is written in TypeScript and requires Node.js 22+ to
run it. For a number of users, this runtime requirement inhibits
adoption: they would be better served by a standalone executable. As
maintainers, we want Codex to run efficiently in a wide range of
environments with minimal overhead. We also want to take advantage of
operating system-specific APIs to provide better sandboxing, where
possible.
To that end, we are moving forward with a Rust implementation of Codex
CLI contained in this folder, which has the following benefits:
- The CLI compiles to small, standalone, platform-specific binaries.
- Can make direct, native calls to
[seccomp](https://man7.org/linux/man-pages/man2/seccomp.2.html) and
[landlock](https://man7.org/linux/man-pages/man7/landlock.7.html) in
order to support sandboxing on Linux.
- No runtime garbage collection, resulting in lower memory consumption
and better, more predictable performance.
Currently, the Rust implementation is materially behind the TypeScript
implementation in functionality, so continue to use the TypeScript
implementation for the time being. We will publish native executables via
GitHub Releases as soon as we feel the Rust version is usable.