feat: initial import of Rust implementation of Codex CLI in codex-rs/ (#629)
As stated in `codex-rs/README.md`:
Today, Codex CLI is written in TypeScript and requires Node.js 22+ to
run it. For a number of users, this runtime requirement inhibits
adoption: they would be better served by a standalone executable. As
maintainers, we want Codex to run efficiently in a wide range of
environments with minimal overhead. We also want to take advantage of
operating system-specific APIs to provide better sandboxing, where
possible.
To that end, we are moving forward with a Rust implementation of Codex
CLI contained in this folder, which has the following benefits:
- The CLI compiles to small, standalone, platform-specific binaries.
- Can make direct, native calls to
[seccomp](https://man7.org/linux/man-pages/man2/seccomp.2.html) and
[landlock](https://man7.org/linux/man-pages/man7/landlock.7.html) in
order to support sandboxing on Linux.
- No runtime garbage collection, resulting in lower memory consumption
and better, more predictable performance.
Currently, the Rust implementation is materially behind the TypeScript
implementation in functionality, so continue to use the TypeScript
implementation for the time being. We will publish native executables via
GitHub Releases as soon as we feel the Rust version is usable.
2025-04-24 13:31:40 -07:00
[workspace]
members = [
    "ansi-escape",
    "apply-patch",
    "arg0",
    "cli",
    "common",
    "core",
    "exec",
    "execpolicy",
    "file-search",
    "git-tooling",
fix: overhaul how we spawn commands under seccomp/landlock on Linux (#1086)
Historically, we spawned the Seatbelt and Landlock sandboxes in
substantially different ways:
For **Seatbelt**, we would run `/usr/bin/sandbox-exec` with our policy
specified as an arg followed by the original command:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/core/src/exec.rs#L147-L219
For **Landlock/Seccomp**, we would do
`tokio::runtime::Builder::new_current_thread()`, _invoke
Landlock/Seccomp APIs to modify the permissions of that new thread_, and
then spawn the command:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/core/src/exec_linux.rs#L28-L49
While it is neat that Landlock/Seccomp supports applying a policy to
only one thread without having to apply it to the entire process, it
requires us to maintain two different codepaths and is a bit harder to
reason about. The tipping point was
https://github.com/openai/codex/pull/1061, in which we had to start
building up the `env` in an unexpected way for the existing
Landlock/Seccomp approach to continue to work.
This PR overhauls things so that we do similar things for Mac and Linux.
It turned out that we were already building our own "helper binary"
comparable to Mac's `sandbox-exec` as part of the `cli` crate:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/cli/Cargo.toml#L10-L12
We originally created this to build a small binary to include with the
Node.js version of the Codex CLI to provide support for Linux
sandboxing.
The sticking point is that, at this point, we still want to deploy
the Rust version of Codex as a single, standalone binary rather than a
CLI and a supporting sandboxing binary. To satisfy this goal, we use
"the arg0 trick," in which we:
* use `std::env::current_exe()` to get the path to the CLI that is
currently running
* use the CLI as the `program` for the `Command`
* set `"codex-linux-sandbox"` as arg0 for the `Command`
A CLI that supports sandboxing should check arg0 at the start of the
program. If it is `"codex-linux-sandbox"`, it must invoke
`codex_linux_sandbox::run_main()`, which runs the CLI as if it were
`codex-linux-sandbox`. When acting as `codex-linux-sandbox`, we make the
appropriate Landlock/Seccomp API calls and then use `execvp(3)` to spawn
the original command, so we _replace_ the process rather than spawning a
subprocess. Incidentally, we do this before starting the Tokio runtime,
so the process should only have one thread when `execvp(3)` is called.
Because the `core` crate that needs to spawn the Linux sandboxing is not
a CLI in its own right, this means that every CLI that includes `core`
and relies on this behavior has to (1) implement it and (2) provide the
path to the sandboxing executable. While the path is almost always
`std::env::current_exe()`, we needed to make this configurable for
integration tests, so `Config` now has a `codex_linux_sandbox_exe:
Option<PathBuf>` property to facilitate threading this through,
introduced in https://github.com/openai/codex/pull/1089.
This common pattern is now captured in
`codex_linux_sandbox::run_with_sandbox()` and all of the `main.rs`
functions that should use it have been updated as part of this PR.
The `codex-linux-sandbox` crate added to the Cargo workspace as part of
this PR now has the bulk of the Landlock/Seccomp logic, which makes
`core` a bit simpler. Indeed, `core/src/exec_linux.rs` and
`core/src/landlock.rs` were removed/ported as part of this PR. I also
moved the unit tests for this code into an integration test,
`linux-sandbox/tests/landlock.rs`, in which I use
`env!("CARGO_BIN_EXE_codex-linux-sandbox")` as the value for
`codex_linux_sandbox_exe` since `std::env::current_exe()` is not
appropriate in that case.
2025-05-23 11:37:07 -07:00
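The arg0 dispatch described above can be sketched roughly as follows (a stdlib-only sketch; the helper function names are illustrative, and the real entry point is `codex_linux_sandbox::run_main()`):

```rust
use std::ffi::OsStr;
use std::path::Path;

const LINUX_SANDBOX_ARG0: &str = "codex-linux-sandbox";

/// Returns true if arg0 indicates this process should act as the sandbox
/// helper instead of the normal CLI. Only the file name is compared, so
/// an absolute path to the executable also matches.
fn is_sandbox_arg0(arg0: &str) -> bool {
    Path::new(arg0)
        .file_name()
        .map(|name| name == OsStr::new(LINUX_SANDBOX_ARG0))
        .unwrap_or(false)
}

/// Re-invoke the current executable with the sandbox arg0, so the child
/// applies Landlock/Seccomp and then execvp(3)s `cmd` (Unix only).
#[cfg(unix)]
fn spawn_under_sandbox(cmd: &[&str]) -> std::io::Result<std::process::Child> {
    use std::os::unix::process::CommandExt;
    let exe = std::env::current_exe()?;
    std::process::Command::new(exe)
        .arg0(LINUX_SANDBOX_ARG0)
        .args(cmd)
        .spawn()
}
```

A `main()` would call `is_sandbox_arg0` on `std::env::args().next()` before starting the Tokio runtime, so the process still has only one thread if it ends up on the sandbox path.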
    "linux-sandbox",
feat: add support for login with ChatGPT (#1212)
This does not implement the full Login with ChatGPT experience, but it
should unblock people.
**What works**
* The `codex` multitool now has a `login` subcommand, so you can run
`codex login`, which should write `CODEX_HOME/auth.json` if you complete
the flow successfully. The TUI will now read the `OPENAI_API_KEY` from
`auth.json`.
* The TUI should refresh the token if it has expired and the necessary
information is in `auth.json`.
* There is a `LoginScreen` in the TUI that tells you to run `codex
login` if both (1) your model provider expects to use `OPENAI_API_KEY`
as its env var, and (2) `OPENAI_API_KEY` is not set.
**What does not work**
* The `LoginScreen` does not support the login flow from within the TUI.
Instead, it tells you to quit, run `codex login`, and then run `codex`
again.
* `codex exec` does not read from `auth.json` yet, nor does it direct the
user to go through the login flow if `OPENAI_API_KEY` is not found.
* The `maybeRedeemCredits()` function from `get-api-key.tsx` has not
been ported from TypeScript to `login_with_chatgpt.py` yet:
https://github.com/openai/codex/blob/a67a67f3258fc21e147b6786a143fe3e15e6d5ba/codex-cli/src/utils/get-api-key.tsx#L84-L89
**Implementation**
Currently, the OAuth flow requires running a local webserver on
`127.0.0.1:1455`. It seemed wasteful to incur the additional binary cost
of a webserver dependency in the Rust CLI just to support login, so
instead we implement this logic in Python, as Python has a `http.server`
module as part of its standard library. Specifically, we bundle the
contents of a single Python file as a string in the Rust CLI and then
use it to spawn a subprocess as `python3 -c
{{SOURCE_FOR_PYTHON_SERVER}}`.
As such, the most significant files in this PR are:
```
codex-rs/login/src/login_with_chatgpt.py
codex-rs/login/src/lib.rs
```
Now that the CLI may load `OPENAI_API_KEY` from the environment _or_
`CODEX_HOME/auth.json`, we need a new abstraction for reading/writing
this variable, so we introduce:
```
codex-rs/core/src/openai_api_key.rs
```
Note that `std::env::set_var()` is [rightfully] `unsafe` in Rust 2024,
so we use a `LazyLock<RwLock<Option<String>>>` to store `OPENAI_API_KEY`
so it is read in a thread-safe manner.
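The shape of that abstraction is roughly the following stdlib-only sketch (function names are illustrative; the real code lives in `codex-rs/core/src/openai_api_key.rs`):

```rust
use std::sync::{LazyLock, RwLock};

/// Process-wide OPENAI_API_KEY: seeded from the environment on first
/// access, then updated (e.g. after reading auth.json) without ever
/// calling the now-unsafe std::env::set_var().
static OPENAI_API_KEY: LazyLock<RwLock<Option<String>>> =
    LazyLock::new(|| RwLock::new(std::env::var("OPENAI_API_KEY").ok()));

pub fn get_openai_api_key() -> Option<String> {
    // Treat a poisoned lock as "no key" rather than panicking.
    OPENAI_API_KEY.read().ok().and_then(|guard| guard.clone())
}

pub fn set_openai_api_key(value: String) {
    if let Ok(mut guard) = OPENAI_API_KEY.write() {
        *guard = Some(value);
    }
}
```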
Ultimately, it should be possible to go through the entire login flow
from the TUI. This PR introduces a placeholder `LoginScreen` UI for that
right now, though the new `codex login` subcommand introduced in this PR
should be a viable workaround until the UI is ready.
**Testing**
Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:
```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```
For reference:
* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
2025-06-04 08:44:17 -07:00
    "login",
feat: initial McpClient for Rust (#822)
This PR introduces an initial `McpClient` that we will use to give Codex
itself programmatic access to foreign MCPs. This does not wire it up in
Codex itself yet, but the new `mcp-client` crate includes a `main.rs`
for basic testing for now.
Manually tested by sending a `tools/list` request to Codex's own MCP
server:
```
codex-rs$ cargo build
codex-rs$ cargo run --bin codex-mcp-client ./target/debug/codex-mcp-server
{
  "tools": [
    {
      "description": "Run a Codex session. Accepts configuration parameters matching the Codex Config struct.",
      "inputSchema": {
        "properties": {
          "approval-policy": {
            "description": "Execution approval policy expressed as the kebab-case variant name (`unless-allow-listed`, `auto-edit`, `on-failure`, `never`).",
            "enum": [
              "auto-edit",
              "unless-allow-listed",
              "on-failure",
              "never"
            ],
            "type": "string"
          },
          "cwd": {
            "description": "Working directory for the session. If relative, it is resolved against the server process's current working directory.",
            "type": "string"
          },
          "disable-response-storage": {
            "description": "Disable server-side response storage.",
            "type": "boolean"
          },
          "model": {
            "description": "Optional override for the model name (e.g. \"o3\", \"o4-mini\")",
            "type": "string"
          },
          "prompt": {
            "description": "The *initial user prompt* to start the Codex conversation.",
            "type": "string"
          },
          "sandbox-permissions": {
            "description": "Sandbox permissions using the same string values accepted by the CLI (e.g. \"disk-write-cwd\", \"network-full-access\").",
            "items": {
              "enum": [
                "disk-full-read-access",
                "disk-write-cwd",
                "disk-write-platform-user-temp-folder",
                "disk-write-platform-global-temp-folder",
                "disk-full-write-access",
                "network-full-access"
              ],
              "type": "string"
            },
            "type": "array"
          }
        },
        "required": [
          "prompt"
        ],
        "type": "object"
      },
      "name": "codex"
    }
  ]
}
```
2025-05-05 12:52:55 -07:00
    "mcp-client",
    "mcp-server",
    "mcp-types",
    "ollama",
    "protocol",
    "protocol-ts",
    "rmcp-client",
feat: introduce responses-api-proxy (#4246)
Details are in `responses-api-proxy/README.md`, but the key contribution
of this PR is a new subcommand, `codex responses-api-proxy`, which reads
the auth token for use with the OpenAI Responses API from `stdin` at
startup and then proxies `POST` requests to `/v1/responses` over to
`https://api.openai.com/v1/responses`, injecting the auth token as part
of the `Authorization` header.
The expectation is that `codex responses-api-proxy` is launched by a
privileged user who has access to the auth token so that it can be used
by unprivileged users of the Codex CLI on the same host.
If the client only has one user account with `sudo`, one option is to:
- run `sudo codex responses-api-proxy --http-shutdown --server-info
/tmp/server-info.json` to start the server
- record the port written to `/tmp/server-info.json`
- relinquish their `sudo` privileges (which is irreversible!) like so:
```
sudo deluser $USER sudo || sudo gpasswd -d $USER sudo || true
```
- use `codex` with the proxy (see `README.md`)
- when done, make a `GET` request to the server using the `PORT` from
`server-info.json` to shut it down:
```shell
curl --fail --silent --show-error "http://127.0.0.1:$PORT/shutdown"
```
To protect the auth token, we:
- allocate a 1024 byte buffer on the stack and write `"Bearer "` into it
to start
- we then read from `stdin`, copying the contents into the buffer
after the prefix
- after verifying the input looks good, we create a `String` from that
buffer (so the data is now on the heap)
- we zero out the stack-allocated buffer using
https://crates.io/crates/zeroize so it is not optimized away by the
compiler
- we invoke `.leak()` on the `String` so we can treat its contents as a
`&'static str`, as it will live for the rest of the process
- on UNIX, we `mlock(2)` the memory backing the `&'static str`
- when using the `&'static str` when building an HTTP request, we use
`HeaderValue::from_static()` to avoid copying the `&str`
- we also invoke `.set_sensitive(true)` on the `HeaderValue`, which in
theory indicates to other parts of the HTTP stack that the header should
be treated with "special care" to avoid leakage:
https://github.com/hyperium/http/blob/439d1c50d71e3be3204b6c4a1bf2255ed78e1f93/src/header/value.rs#L346-L376
2025-09-26 08:19:00 -07:00
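The buffer handling above can be sketched with the standard library alone (illustrative only: the real implementation zeroes the buffer with the `zeroize` crate, which guarantees the writes survive optimization, and additionally `mlock(2)`s the leaked memory on UNIX):

```rust
use std::io::Read;

const PREFIX: &str = "Bearer ";

/// Read the token from `reader`, build the full header value on the heap,
/// zero the stack buffer, and leak the String to obtain a &'static str.
fn read_auth_header(mut reader: impl Read) -> std::io::Result<&'static str> {
    // Stack buffer, pre-populated with the "Bearer " prefix.
    let mut buf = [0u8; 1024];
    buf[..PREFIX.len()].copy_from_slice(PREFIX.as_bytes());

    // Read the token after the prefix (a real implementation would read
    // to EOF and validate the input).
    let n = reader.read(&mut buf[PREFIX.len()..])?;
    let end = PREFIX.len() + n;

    // Copy to the heap, trimming any trailing newline.
    let value = String::from_utf8_lossy(&buf[..end]).trim_end().to_string();

    // Zero the stack buffer via volatile writes (stand-in for zeroize).
    for byte in buf.iter_mut() {
        unsafe { std::ptr::write_volatile(byte, 0) };
    }

    // Leak the String so it lives for the rest of the process; the result
    // can feed HeaderValue::from_static() without another copy.
    Ok(value.leak())
}
```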
    "responses-api-proxy",
    "tui",
    "utils/readiness",
]
resolver = "2"

[workspace.package]
version = "0.0.0"
# Track the edition for all workspace crates in one place. Individual
# crates can still override this value, but keeping it here means new
# crates created with `cargo new -w ...` automatically inherit the 2024
# edition.
edition = "2024"
[workspace.dependencies]
# Internal
codex-ansi-escape = { path = "ansi-escape" }
codex-apply-patch = { path = "apply-patch" }
codex-arg0 = { path = "arg0" }
codex-chatgpt = { path = "chatgpt" }
codex-common = { path = "common" }
codex-core = { path = "core" }
codex-exec = { path = "exec" }
codex-file-search = { path = "file-search" }
codex-git-tooling = { path = "git-tooling" }
codex-linux-sandbox = { path = "linux-sandbox" }
codex-login = { path = "login" }
codex-mcp-client = { path = "mcp-client" }
codex-mcp-server = { path = "mcp-server" }
codex-ollama = { path = "ollama" }
codex-protocol = { path = "protocol" }
codex-rmcp-client = { path = "rmcp-client" }
codex-protocol-ts = { path = "protocol-ts" }
codex-responses-api-proxy = { path = "responses-api-proxy" }
codex-tui = { path = "tui" }
codex-utils-readiness = { path = "utils/readiness" }
core_test_support = { path = "core/tests/common" }
mcp-types = { path = "mcp-types" }
mcp_test_support = { path = "mcp-server/tests/common" }

# External
allocative = "0.3.3"
ansi-to-tui = "7.0.0"
anyhow = "1"
arboard = "3"
askama = "0.12"
assert_cmd = "2"
async-channel = "2.3.1"
async-stream = "0.3.6"
async-trait = "0.1.89"
base64 = "0.22.1"
bytes = "1.10.1"
chrono = "0.4.42"
clap = "4"
clap_complete = "4"
color-eyre = "0.6.3"
crossterm = "0.28.1"
ctor = "0.5.0"
derive_more = "2"
diffy = "0.4.2"
dirs = "6"
dotenvy = "0.15.7"
env-flags = "0.1.1"
env_logger = "0.11.5"
eventsource-stream = "0.2.3"
escargot = "0.5"
futures = "0.3"
icu_decimal = "2.0.0"
icu_locale_core = "2.0.0"
ignore = "0.4.23"
image = { version = "^0.25.8", default-features = false }
indexmap = "2.6.0"
insta = "1.43.2"
itertools = "0.14.0"
landlock = "0.4.1"
lazy_static = "1"
libc = "0.2.175"
log = "0.4"
maplit = "1.0.2"
mime_guess = "2.0.5"
multimap = "0.10.0"
nucleo-matcher = "0.3.1"
openssl-sys = "*"
os_info = "3.12.0"
owo-colors = "4.2.0"
path-absolutize = "3.1.1"
path-clean = "1.0.1"
pathdiff = "0.2"
portable-pty = "0.9.0"
predicates = "3"
pretty_assertions = "1.4.1"
pulldown-cmark = "0.10"
rand = "0.9"
ratatui = "0.29.0"
regex-lite = "0.1.7"
reqwest = "0.12"
schemars = "0.8.22"
seccompiler = "0.5.0"
serde = "1"
serde_json = "1"
serde_with = "3.14"
sha1 = "0.10.6"
sha2 = "0.10"
shlex = "1.3.0"
similar = "2.7.0"
starlark = "0.13.0"
strum = "0.27.2"
strum_macros = "0.27.2"
supports-color = "3.0.2"
sys-locale = "0.3.2"
tempfile = "3.23.0"
textwrap = "0.16.2"
thiserror = "2.0.16"
time = "0.3"
tiny_http = "0.12"
tokio = "1"
tokio-stream = "0.1.17"
tokio-test = "0.4"
tokio-util = "0.7.16"
toml = "0.9.5"
toml_edit = "0.23.4"
tracing = "0.1.41"
tracing-appender = "0.2.3"
tracing-subscriber = "0.3.20"
tree-sitter = "0.25.9"
tree-sitter-bash = "0.25.0"
ts-rs = "11"
unicode-segmentation = "1.12.0"
unicode-width = "0.2"
url = "2"
urlencoding = "2.1"
uuid = "1"
vt100 = "0.16.2"
walkdir = "2.5.0"
webbrowser = "1.0"
which = "6"
wildmatch = "2.5.0"
wiremock = "0.6"
zeroize = "1.8.1"

[workspace.lints]
rust = {}
[workspace.lints.clippy]
expect_used = "deny"
identity_op = "deny"
manual_clamp = "deny"
manual_filter = "deny"
manual_find = "deny"
manual_flatten = "deny"
manual_map = "deny"
manual_memcpy = "deny"
manual_non_exhaustive = "deny"
manual_ok_or = "deny"
manual_range_contains = "deny"
manual_retain = "deny"
manual_strip = "deny"
manual_try_fold = "deny"
manual_unwrap_or = "deny"
needless_borrow = "deny"
needless_borrowed_reference = "deny"
needless_collect = "deny"
needless_late_init = "deny"
needless_option_as_deref = "deny"
needless_question_mark = "deny"
needless_update = "deny"
redundant_clone = "deny"
redundant_closure = "deny"
redundant_closure_for_method_calls = "deny"
redundant_static_lifetimes = "deny"
trivially_copy_pass_by_ref = "deny"
uninlined_format_args = "deny"
unnecessary_filter_map = "deny"
unnecessary_lazy_evaluations = "deny"
unnecessary_sort_by = "deny"
unnecessary_to_owned = "deny"
unwrap_used = "deny"

# cargo-shear cannot see the platform-specific openssl-sys usage, so we
# silence the false positive here instead of deleting a real dependency.
[workspace.metadata.cargo-shear]
ignored = ["openssl-sys", "codex-utils-readiness"]

[profile.release]
lto = "fat"
# Because we bundle some of these executables with the TypeScript CLI, we
# remove everything to make the binary as small as possible.
strip = "symbols"

# See https://github.com/openai/codex/issues/1411 for details.
codegen-units = 1

[patch.crates-io]
# ratatui = { path = "../../ratatui" }
ratatui = { git = "https://github.com/nornagon/ratatui", branch = "nornagon-v0.29.0-patch" }