2025-07-08 21:43:27 -07:00
|
|
|
|
use clap::CommandFactory;
|
feat: initial import of Rust implementation of Codex CLI in codex-rs/ (#629)
As stated in `codex-rs/README.md`:
Today, Codex CLI is written in TypeScript and requires Node.js 22+ to
run it. For a number of users, this runtime requirement inhibits
adoption: they would be better served by a standalone executable. As
maintainers, we want Codex to run efficiently in a wide range of
environments with minimal overhead. We also want to take advantage of
operating system-specific APIs to provide better sandboxing, where
possible.
To that end, we are moving forward with a Rust implementation of Codex
CLI contained in this folder, which has the following benefits:
- The CLI compiles to small, standalone, platform-specific binaries.
- Can make direct, native calls to
[seccomp](https://man7.org/linux/man-pages/man2/seccomp.2.html) and
[landlock](https://man7.org/linux/man-pages/man7/landlock.7.html) in
order to support sandboxing on Linux.
- No runtime garbage collection, resulting in lower memory consumption
and better, more predictable performance.
Currently, the Rust implementation is materially behind the TypeScript
implementation in functionality, so continue to use the TypeScript
implementation for the time being. We will publish native executables via
GitHub Releases as soon as we feel the Rust version is usable.
2025-04-24 13:31:40 -07:00
|
|
|
|
use clap::Parser;
|
2025-07-08 21:43:27 -07:00
|
|
|
|
use clap_complete::Shell;
|
|
|
|
|
|
use clap_complete::generate;
|
2025-07-28 08:31:24 -07:00
|
|
|
|
use codex_arg0::arg0_dispatch_or_else;
|
2025-07-11 13:30:11 -04:00
|
|
|
|
use codex_chatgpt::apply_command::ApplyCommand;
|
|
|
|
|
|
use codex_chatgpt::apply_command::run_apply_command;
|
2025-05-07 08:37:48 -07:00
|
|
|
|
use codex_cli::LandlockCommand;
|
|
|
|
|
|
use codex_cli::SeatbeltCommand;
|
Windows Sandbox - Alpha version (#4905)
- Added the new codex-windows-sandbox crate that builds both a library
entry point (run_windows_sandbox_capture) and a CLI executable to launch
commands inside a Windows restricted-token sandbox, including ACL
management, capability SID provisioning, network lockdown, and output
capture
(windows-sandbox-rs/src/lib.rs:167, windows-sandbox-rs/src/main.rs:54).
- Introduced the experimental WindowsSandbox feature flag and wiring so
Windows builds can opt into the sandbox:
SandboxType::WindowsRestrictedToken, the in-process execution path, and
platform sandbox selection now honor the flag (core/src/features.rs:47,
core/src/config.rs:1224, core/src/safety.rs:19,
core/src/sandboxing/mod.rs:69, core/src/exec.rs:79,
core/src/exec.rs:172).
- Updated workspace metadata to include the new crate and its
Windows-specific dependencies so the core crate can link against it
(codex-rs/Cargo.toml:91, core/Cargo.toml:86).
- Added a PowerShell bootstrap script that installs the Windows
toolchain, required CLI utilities, and builds the workspace to ease
development
on the platform (scripts/setup-windows.ps1:1).
- Landed a Python smoke-test suite that exercises
read-only/workspace-write policies, ACL behavior, and network denial for
the Windows sandbox
binary (windows-sandbox-rs/sandbox_smoketests.py:1).
2025-10-30 15:51:57 -07:00
|
|
|
|
use codex_cli::WindowsCommand;
|
2025-10-02 23:17:31 -07:00
|
|
|
|
use codex_cli::login::read_api_key_from_stdin;
|
2025-07-30 14:09:26 -07:00
|
|
|
|
use codex_cli::login::run_login_status;
|
2025-07-31 10:48:49 -07:00
|
|
|
|
use codex_cli::login::run_login_with_api_key;
|
feat: add support for login with ChatGPT (#1212)
This does not implement the full Login with ChatGPT experience, but it
should unblock people.
**What works**
* The `codex` multitool now has a `login` subcommand, so you can run
`codex login`, which should write `CODEX_HOME/auth.json` if you complete
the flow successfully. The TUI will now read the `OPENAI_API_KEY` from
`auth.json`.
* The TUI should refresh the token if it has expired and the necessary
information is in `auth.json`.
* There is a `LoginScreen` in the TUI that tells you to run `codex
login` if both (1) your model provider expects to use `OPENAI_API_KEY`
as its env var, and (2) `OPENAI_API_KEY` is not set.
**What does not work**
* The `LoginScreen` does not support the login flow from within the TUI.
Instead, it tells you to quit, run `codex login`, and then run `codex`
again.
* `codex exec` does not read from `auth.json` yet, nor does it direct the
user to go through the login flow if `OPENAI_API_KEY` is not found.
* The `maybeRedeemCredits()` function from `get-api-key.tsx` has not
been ported from TypeScript to `login_with_chatgpt.py` yet:
https://github.com/openai/codex/blob/a67a67f3258fc21e147b6786a143fe3e15e6d5ba/codex-cli/src/utils/get-api-key.tsx#L84-L89
**Implementation**
Currently, the OAuth flow requires running a local webserver on
`127.0.0.1:1455`. It seemed wasteful to incur the additional binary cost
of a webserver dependency in the Rust CLI just to support login, so
instead we implement this logic in Python, as Python has a `http.server`
module as part of its standard library. Specifically, we bundle the
contents of a single Python file as a string in the Rust CLI and then
use it to spawn a subprocess as `python3 -c
{{SOURCE_FOR_PYTHON_SERVER}}`.
As such, the most significant files in this PR are:
```
codex-rs/login/src/login_with_chatgpt.py
codex-rs/login/src/lib.rs
```
Now that the CLI may load `OPENAI_API_KEY` from the environment _or_
`CODEX_HOME/auth.json`, we need a new abstraction for reading/writing
this variable, so we introduce:
```
codex-rs/core/src/openai_api_key.rs
```
Note that `std::env::set_var()` is [rightfully] `unsafe` in Rust 2024,
so we use a `LazyLock<RwLock<Option<String>>>` to store `OPENAI_API_KEY`
so it is read in a thread-safe manner.
Ultimately, it should be possible to go through the entire login flow
from the TUI. This PR introduces a placeholder `LoginScreen` UI for that
right now, though the new `codex login` subcommand introduced in this PR
should be a viable workaround until the UI is ready.
**Testing**
Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:
```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```
For reference:
* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
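The `LazyLock<RwLock<Option<String>>>` pattern described above can be sketched as follows. This is a minimal illustration, not the actual `openai_api_key.rs` API; the function names are hypothetical:
```rust
use std::sync::{LazyLock, RwLock};

// Process-wide, thread-safe cache for the API key. Initialized lazily from
// the environment; avoids std::env::set_var(), which is unsafe in Rust 2024.
static OPENAI_API_KEY: LazyLock<RwLock<Option<String>>> =
    LazyLock::new(|| RwLock::new(std::env::var("OPENAI_API_KEY").ok()));

/// Read the current key (the environment value or whatever was last stored).
fn get_openai_api_key() -> Option<String> {
    OPENAI_API_KEY.read().expect("lock poisoned").clone()
}

/// Store a key loaded from `auth.json` (or a freshly refreshed token).
fn set_openai_api_key(value: String) {
    *OPENAI_API_KEY.write().expect("lock poisoned") = Some(value);
}
```
Readers in the TUI call the getter; the login/refresh paths call the setter, so the key is never written back into the process environment.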
2025-06-04 08:44:17 -07:00
|
|
|
|
use codex_cli::login::run_login_with_chatgpt;
|
2025-09-29 19:34:57 -07:00
|
|
|
|
use codex_cli::login::run_login_with_device_code;
|
2025-08-07 01:17:33 -07:00
|
|
|
|
use codex_cli::login::run_logout;
|
2025-09-30 03:10:33 -07:00
|
|
|
|
use codex_cloud_tasks::Cli as CloudTasksCli;
|
feat: add support for -c/--config to override individual config items (#1137)
This PR introduces support for `-c`/`--config` so users can override
individual config values on the command line using `--config
name=value`. Example:
```
codex --config model=o4-mini
```
Making it possible to set arbitrary config values on the command line
results in a more flexible configuration scheme and makes it easier to
provide single-line examples that can be copy-pasted from documentation.
Effectively, it means there are four levels of configuration for some
values:
- Default value (e.g., `model` currently defaults to `o4-mini`)
- Value in `config.toml` (e.g., user could override the default to be
`model = "o3"` in their `config.toml`)
- Specifying `-c` or `--config` to override `model` (e.g., user can
include `-c model=o3` in their list of args to Codex)
- If available, a config-specific flag can be used, which takes
precedence over `-c` (e.g., user can specify `--model o3` in their list
of args to Codex)
Now that it is possible to specify anything that could be configured in
`config.toml` on the command line using `-c`, we do not need to have a
custom flag for every possible config option (which can clutter the
output of `--help`). To that end, as part of this PR, we drop support
for the `--disable-response-storage` flag, as users can now specify `-c
disable_response_storage=true` to get the equivalent functionality.
Under the hood, this works by loading the `config.toml` into a
`toml::Value`. Then for each `key=value`, we create a small synthetic
TOML file with `value` so that we can run the TOML parser to get the
equivalent `toml::Value`. We then parse `key` to determine the point in
the original `toml::Value` to do the insert/replace. Once all of the
overrides from `-c` args have been applied, the `toml::Value` is
deserialized into a `ConfigToml` and then the `ConfigOverrides` are
applied, as before.
2025-05-27 23:11:44 -07:00
|
|
|
|
use codex_common::CliConfigOverrides;
|
2025-04-24 13:31:40 -07:00
|
|
|
|
use codex_exec::Cli as ExecCli;
|
2025-09-30 03:10:33 -07:00
|
|
|
|
use codex_responses_api_proxy::Args as ResponsesApiProxyArgs;
|
2025-09-22 15:24:31 -07:00
|
|
|
|
use codex_tui::AppExitInfo;
|
2025-04-24 13:31:40 -07:00
|
|
|
|
use codex_tui::Cli as TuiCli;
|
2025-10-20 14:40:14 -07:00
|
|
|
|
use codex_tui::updates::UpdateAction;
|
2025-09-22 15:24:31 -07:00
|
|
|
|
use owo_colors::OwoColorize;
|
fix: overhaul how we spawn commands under seccomp/landlock on Linux (#1086)
Historically, we spawned the Seatbelt and Landlock sandboxes in
substantially different ways:
For **Seatbelt**, we would run `/usr/bin/sandbox-exec` with our policy
specified as an arg followed by the original command:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/core/src/exec.rs#L147-L219
For **Landlock/Seccomp**, we would do
`tokio::runtime::Builder::new_current_thread()`, _invoke
Landlock/Seccomp APIs to modify the permissions of that new thread_, and
then spawn the command:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/core/src/exec_linux.rs#L28-L49
While it is neat that Landlock/Seccomp supports applying a policy to
only one thread without having to apply it to the entire process, it
requires us to maintain two different codepaths and is a bit harder to
reason about. The tipping point was
https://github.com/openai/codex/pull/1061, in which we had to start
building up the `env` in an unexpected way for the existing
Landlock/Seccomp approach to continue to work.
This PR overhauls things so that we do similar things for Mac and Linux.
It turned out that we were already building our own "helper binary"
comparable to Mac's `sandbox-exec` as part of the `cli` crate:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/cli/Cargo.toml#L10-L12
We originally created this to build a small binary to include with the
Node.js version of the Codex CLI to provide support for Linux
sandboxing.
Though the sticky bit is that, at this point, we still want to deploy
the Rust version of Codex as a single, standalone binary rather than a
CLI and a supporting sandboxing binary. To satisfy this goal, we use
"the arg0 trick," in which we:
* use `std::env::current_exe()` to get the path to the CLI that is
currently running
* use the CLI as the `program` for the `Command`
* set `"codex-linux-sandbox"` as arg0 for the `Command`
A CLI that supports sandboxing should check arg0 at the start of the
program. If it is `"codex-linux-sandbox"`, it must invoke
`codex_linux_sandbox::run_main()`, which runs the CLI as if it were
`codex-linux-sandbox`. When acting as `codex-linux-sandbox`, we make the
appropriate Landlock/Seccomp API calls and then use `execvp(3)` to spawn
the original command, so we _replace_ the process rather than spawning a
subprocess. Incidentally, we do this before starting the Tokio runtime,
so the process should only have one thread when `execvp(3)` is called.
Because the `core` crate that needs to spawn the Linux sandboxing is not
a CLI in its own right, this means that every CLI that includes `core`
and relies on this behavior has to (1) implement it and (2) provide the
path to the sandboxing executable. While the path is almost always
`std::env::current_exe()`, we needed to make this configurable for
integration tests, so `Config` now has a `codex_linux_sandbox_exe:
Option<PathBuf>` property to facilitate threading this through,
introduced in https://github.com/openai/codex/pull/1089.
This common pattern is now captured in
`codex_linux_sandbox::run_with_sandbox()` and all of the `main.rs`
functions that should use it have been updated as part of this PR.
The `codex-linux-sandbox` crate added to the Cargo workspace as part of
this PR now has the bulk of the Landlock/Seccomp logic, which makes
`core` a bit simpler. Indeed, `core/src/exec_linux.rs` and
`core/src/landlock.rs` were removed/ported as part of this PR. I also
moved the unit tests for this code into an integration test,
`linux-sandbox/tests/landlock.rs`, in which I use
`env!("CARGO_BIN_EXE_codex-linux-sandbox")` as the value for
`codex_linux_sandbox_exe` since `std::env::current_exe()` is not
appropriate in that case.
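The arg0 trick can be sketched as below. The function names are illustrative; the real dispatch lives in the `codex-linux-sandbox`/`codex-arg0` crates:
```rust
use std::process::{Child, Command};

/// True if this process was re-invoked under the sandbox name via arg0.
/// Checked at the top of main(), before starting the Tokio runtime, so the
/// process still has a single thread if it goes on to call execvp(3).
fn arg0_is_sandbox() -> bool {
    std::env::args()
        .next()
        .map(|arg0| arg0.ends_with("codex-linux-sandbox"))
        .unwrap_or(false)
}

/// Re-exec the current binary with arg0 set to "codex-linux-sandbox" so the
/// child applies the Landlock/Seccomp policy to itself before running
/// `command`.
fn spawn_under_linux_sandbox(command: &[String]) -> std::io::Result<Child> {
    let self_exe = std::env::current_exe()?;
    let mut cmd = Command::new(self_exe);
    #[cfg(unix)]
    {
        use std::os::unix::process::CommandExt;
        cmd.arg0("codex-linux-sandbox");
    }
    cmd.args(command).spawn()
}
```
This keeps the deployment a single standalone binary: the same executable acts as the CLI or as the sandbox helper depending on the name it was invoked under.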
2025-05-23 11:37:07 -07:00
|
|
|
|
use std::path::PathBuf;
|
2025-09-22 15:24:31 -07:00
|
|
|
|
use supports_color::Stream;
|
2025-04-24 13:31:40 -07:00
|
|
|
|
|
2025-09-14 21:30:56 -07:00
|
|
|
|
mod mcp_cmd;
|
2025-11-07 23:49:17 +01:00
|
|
|
|
mod wsl_paths;
|
2025-09-14 21:30:56 -07:00
|
|
|
|
|
|
|
|
|
|
use crate::mcp_cmd::McpCli;
|
2025-11-07 23:49:17 +01:00
|
|
|
|
use crate::wsl_paths::normalize_for_wsl;
|
2025-10-14 18:50:00 +01:00
|
|
|
|
use codex_core::config::Config;
|
|
|
|
|
|
use codex_core::config::ConfigOverrides;
|
2025-10-27 16:53:00 +00:00
|
|
|
|
use codex_core::features::is_known_feature_key;
|
2025-04-24 13:31:40 -07:00
|
|
|
|
|
|
|
|
|
|
/// Codex CLI
|
|
|
|
|
|
///
|
|
|
|
|
|
/// If no subcommand is specified, options will be forwarded to the interactive CLI.
|
|
|
|
|
|
#[derive(Debug, Parser)]
|
|
|
|
|
|
#[clap(
|
|
|
|
|
|
author,
|
|
|
|
|
|
version,
|
|
|
|
|
|
// If a sub‑command is given, ignore requirements of the default args.
|
2025-08-13 09:39:11 -07:00
|
|
|
|
subcommand_negates_reqs = true,
|
|
|
|
|
|
// The executable is sometimes invoked via a platform‑specific name like
|
|
|
|
|
|
// `codex-x86_64-unknown-linux-musl`, but the help output should always use
|
|
|
|
|
|
// the generic `codex` command name that users run.
|
2025-10-19 16:17:51 -07:00
|
|
|
|
bin_name = "codex",
|
|
|
|
|
|
override_usage = "codex [OPTIONS] [PROMPT]\n codex [OPTIONS] <COMMAND> [ARGS]"
|
2025-04-24 13:31:40 -07:00
|
|
|
|
)]
|
|
|
|
|
|
struct MultitoolCli {
|
2025-05-27 23:11:44 -07:00
|
|
|
|
#[clap(flatten)]
|
|
|
|
|
|
pub config_overrides: CliConfigOverrides,
|
|
|
|
|
|
|
2025-10-14 18:50:00 +01:00
|
|
|
|
#[clap(flatten)]
|
|
|
|
|
|
pub feature_toggles: FeatureToggles,
|
|
|
|
|
|
|
2025-04-24 13:31:40 -07:00
|
|
|
|
#[clap(flatten)]
|
fix: make the TUI the default/"interactive" CLI in Rust (#711)
Originally, the `interactive` crate was going to be a placeholder for
building out a UX that was comparable to that of the existing TypeScript
CLI. Though after researching how Ratatui works, that seems difficult to
do because it is designed around the idea that it will redraw the full
screen buffer each time (and so any scrolling should be "internal" to
your Ratatui app) whereas the TypeScript CLI expects to render the full
history of the conversation every time(*) (which is why you can use your
terminal scrollbar to scroll it).
While it is possible to use Ratatui in a way that acts more like what
the TypeScript CLI is doing, it is awkward and seemingly results in
tedious code, so I think we should abandon that approach. As such, this
PR deletes the `interactive/` folder and the code that depended on it.
Further, since we added support for mousewheel scrolling in the TUI in
https://github.com/openai/codex/pull/641, it certainly feels much better
and the need for scroll support via the terminal scrollbar is greatly
diminished. This is now a more appropriate default UX for the
"multitool" CLI.
(*) Incidentally, I haven't verified this, but I think this results in
O(N^2) work in rendering, which seems potentially problematic for long
conversations.
2025-04-28 13:46:22 -07:00
|
|
|
|
interactive: TuiCli,
|
2025-04-24 13:31:40 -07:00
|
|
|
|
|
|
|
|
|
|
#[clap(subcommand)]
|
|
|
|
|
|
subcommand: Option<Subcommand>,
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
#[derive(Debug, clap::Subcommand)]
|
|
|
|
|
|
enum Subcommand {
|
|
|
|
|
|
/// Run Codex non-interactively.
|
|
|
|
|
|
#[clap(visible_alias = "e")]
|
|
|
|
|
|
Exec(ExecCli),
|
|
|
|
|
|
|
2025-07-30 14:09:26 -07:00
|
|
|
|
/// Manage login.
|
2025-06-04 08:44:17 -07:00
|
|
|
|
Login(LoginCommand),
|
|
|
|
|
|
|
2025-08-07 01:17:33 -07:00
|
|
|
|
/// Remove stored authentication credentials.
|
|
|
|
|
|
Logout(LogoutCommand),
|
|
|
|
|
|
|
2025-09-14 21:30:56 -07:00
|
|
|
|
/// [experimental] Run Codex as an MCP server and manage MCP servers.
|
|
|
|
|
|
Mcp(McpCli),
|
2025-05-14 13:15:41 -07:00
|
|
|
|
|
fix: separate `codex mcp` into `codex mcp-server` and `codex app-server` (#4471)
This is a very large PR with some non-backwards-compatible changes.
Historically, `codex mcp` (or `codex mcp serve`) started a JSON-RPC-ish
server that had two overlapping responsibilities:
- Running an MCP server, providing some basic tool calls.
- Running the app server used to power experiences such as the VS Code
extension.
This PR aims to separate these into distinct concepts:
- `codex mcp-server` for the MCP server
- `codex app-server` for the "application server"
Note `codex mcp` still exists because it already has its own subcommands
for MCP management (`list`, `add`, etc.)
The MCP logic continues to live in `codex-rs/mcp-server` whereas the
refactored app server logic is in the new `codex-rs/app-server` folder.
Note that most of the existing integration tests in
`codex-rs/mcp-server/tests/suite` were actually for the app server, so
all the tests have been moved with the exception of
`codex-rs/mcp-server/tests/suite/mod.rs`.
Because this is already a large diff, I tried not to change more than I
had to, so `codex-rs/app-server/tests/common/mcp_process.rs` still uses
the name `McpProcess` for now, but I will do some mechanical renamings
to things like `AppServer` in subsequent PRs.
While `mcp-server` and `app-server` share some overlapping functionality
(like reading streams of JSONL and dispatching based on message types)
and some differences (completely different message types), I ended up
doing a bit of copypasta between the two crates, as both have somewhat
similar `message_processor.rs` and `outgoing_message.rs` files for now,
though I expect them to diverge more in the near future.
One material change is that of the initialize handshake for `codex
app-server`, as we no longer use the MCP types for that handshake.
Instead, we update `codex-rs/protocol/src/mcp_protocol.rs` to add an
`Initialize` variant to `ClientRequest`, which takes the `ClientInfo`
object we need to update the `USER_AGENT_SUFFIX` in
`codex-rs/app-server/src/message_processor.rs`.
One other material change is in
`codex-rs/app-server/src/codex_message_processor.rs` where I eliminated
a use of the `send_event_as_notification()` method I am generally trying
to deprecate (because it blindly maps an `EventMsg` into a
`JSONNotification`) in favor of `send_server_notification()`, which
takes a `ServerNotification`, as that is intended to be a custom enum of
all notification types supported by the app server. So to make this
update, I had to introduce a new variant of `ServerNotification`,
`SessionConfigured`, which is a non-backwards compatible change with the
old `codex mcp`, and clients will have to be updated after the next
release that contains this PR. Note that
`codex-rs/app-server/tests/suite/list_resume.rs` also had to be updated
to reflect this change.
I introduced `codex-rs/utils/json-to-toml/src/lib.rs` as a small utility
crate to avoid some of the copying between `mcp-server` and
`app-server`.
2025-09-30 00:06:18 -07:00

    /// [experimental] Run the Codex MCP server (stdio transport).
    McpServer,

    /// [experimental] Run the app server.
    AppServer,

    /// Generate shell completion scripts.
    Completion(CompletionCommand),

    /// Run commands within a Codex-provided sandbox.
    #[clap(visible_alias = "debug")]
    Sandbox(SandboxArgs),

    /// Apply the latest diff produced by Codex agent as a `git apply` to your local working tree.
    #[clap(visible_alias = "a")]
    Apply(ApplyCommand),

    /// Resume a previous interactive session (picker by default; use --last to continue the most recent).
    Resume(ResumeCommand),

    /// Internal: generate TypeScript protocol bindings.
    #[clap(hide = true)]
    GenerateTs(GenerateTsCommand),

    /// [EXPERIMENTAL] Browse tasks from Codex Cloud and apply changes locally.
    #[clap(name = "cloud", alias = "cloud-tasks")]
    Cloud(CloudTasksCli),

    /// Internal: run the responses API proxy.
    #[clap(hide = true)]
    ResponsesApiProxy(ResponsesApiProxyArgs),

    /// Internal: relay stdio to a Unix domain socket.
    #[clap(hide = true, name = "stdio-to-uds")]
    StdioToUds(StdioToUdsCommand),

    /// Inspect feature flags.
    Features(FeaturesCli),
}

#[derive(Debug, Parser)]
struct CompletionCommand {
    /// Shell to generate completions for
    #[clap(value_enum, default_value_t = Shell::Bash)]
    shell: Shell,
}

#[derive(Debug, Parser)]
struct ResumeCommand {
    /// Conversation/session id (UUID). When provided, resumes this session.
    /// If omitted, use --last to pick the most recent recorded session.
    #[arg(value_name = "SESSION_ID")]
    session_id: Option<String>,

    /// Continue the most recent session without showing the picker.
    #[arg(long = "last", default_value_t = false, conflicts_with = "session_id")]
    last: bool,

    #[clap(flatten)]
    config_overrides: TuiCli,
}

#[derive(Debug, Parser)]
struct SandboxArgs {
    #[command(subcommand)]
    cmd: SandboxCommand,
}

#[derive(Debug, clap::Subcommand)]
enum SandboxCommand {
    /// Run a command under Seatbelt (macOS only).
    #[clap(visible_alias = "seatbelt")]
    Macos(SeatbeltCommand),

    /// Run a command under Landlock+seccomp (Linux only).
    #[clap(visible_alias = "landlock")]
    Linux(LandlockCommand),
Windows Sandbox - Alpha version (#4905)
- Added the new codex-windows-sandbox crate that builds both a library
entry point (run_windows_sandbox_capture) and a CLI executable to launch
commands inside a Windows restricted-token sandbox, including ACL
management, capability SID provisioning, network lockdown, and output
capture
(windows-sandbox-rs/src/lib.rs:167, windows-sandbox-rs/src/main.rs:54).
- Introduced the experimental WindowsSandbox feature flag and wiring so
Windows builds can opt into the sandbox:
SandboxType::WindowsRestrictedToken, the in-process execution path, and
platform sandbox selection now honor the flag (core/src/features.rs:47,
core/src/config.rs:1224, core/src/safety.rs:19,
core/src/sandboxing/mod.rs:69, core/src/exec.rs:79,
core/src/exec.rs:172).
- Updated workspace metadata to include the new crate and its
Windows-specific dependencies so the core crate can link against it
(codex-rs/Cargo.toml:91, core/Cargo.toml:86).
- Added a PowerShell bootstrap script that installs the Windows
toolchain, required CLI utilities, and builds the workspace to ease
development
on the platform (scripts/setup-windows.ps1:1).
- Landed a Python smoke-test suite that exercises
read-only/workspace-write policies, ACL behavior, and network denial for
the Windows sandbox
binary (windows-sandbox-rs/sandbox_smoketests.py:1).
2025-10-30 15:51:57 -07:00

    /// Run a command under Windows restricted token (Windows only).
    Windows(WindowsCommand),
}

#[derive(Debug, Parser)]
struct LoginCommand {
    #[clap(skip)]
    config_overrides: CliConfigOverrides,

    #[arg(
        long = "with-api-key",
        help = "Read the API key from stdin (e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`)"
    )]
    with_api_key: bool,

    #[arg(
        long = "api-key",
        value_name = "API_KEY",
        help = "(deprecated) Previously accepted the API key directly; now exits with guidance to use --with-api-key",
        hide = true
    )]
    api_key: Option<String>,

    #[arg(long = "device-auth")]
    use_device_code: bool,

    /// EXPERIMENTAL: Use custom OAuth issuer base URL (advanced)
    /// Override the OAuth issuer base URL (advanced)
    #[arg(long = "experimental_issuer", value_name = "URL", hide = true)]
    issuer_base_url: Option<String>,

    /// EXPERIMENTAL: Use custom OAuth client ID (advanced)
    #[arg(long = "experimental_client-id", value_name = "CLIENT_ID", hide = true)]
    client_id: Option<String>,

    #[command(subcommand)]
    action: Option<LoginSubcommand>,
}

#[derive(Debug, clap::Subcommand)]
enum LoginSubcommand {
    /// Show login status.
    Status,
}

#[derive(Debug, Parser)]
struct LogoutCommand {
    #[clap(skip)]
    config_overrides: CliConfigOverrides,
}

#[derive(Debug, Parser)]
struct GenerateTsCommand {
    /// Output directory where .ts files will be written
    #[arg(short = 'o', long = "out", value_name = "DIR")]
    out_dir: PathBuf,

    /// Optional path to the Prettier executable to format generated files
    #[arg(short = 'p', long = "prettier", value_name = "PRETTIER_BIN")]
    prettier: Option<PathBuf>,
}

#[derive(Debug, Parser)]
struct StdioToUdsCommand {
    /// Path to the Unix domain socket to connect to.
    #[arg(value_name = "SOCKET_PATH")]
    socket_path: PathBuf,
}

fn format_exit_messages(exit_info: AppExitInfo, color_enabled: bool) -> Vec<String> {
    let AppExitInfo {
        token_usage,
        conversation_id,
        ..
    } = exit_info;

    if token_usage.is_zero() {
        return Vec::new();
    }

    let mut lines = vec![format!(
        "{}",
        codex_core::protocol::FinalOutput::from(token_usage)
    )];

    if let Some(session_id) = conversation_id {
        let resume_cmd = format!("codex resume {session_id}");
        let command = if color_enabled {
            resume_cmd.cyan().to_string()
        } else {
            resume_cmd
        };
        lines.push(format!("To continue this session, run {command}"));
    }

    lines
}
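A std-only sketch of the same exit-message shape, with stand-ins for the `codex_core` types (`AppExitInfo`, `FinalOutput`) so the control flow is easy to test in isolation; the struct and field names here are illustrative:

```rust
// Stand-in for AppExitInfo: just enough state to drive the logic.
struct ExitInfo {
    total_tokens: u64,
    conversation_id: Option<String>,
}

fn format_exit_lines(exit_info: ExitInfo) -> Vec<String> {
    // An empty session prints nothing, mirroring the is_zero() early return.
    if exit_info.total_tokens == 0 {
        return Vec::new();
    }
    let mut lines = vec![format!("Token usage: {}", exit_info.total_tokens)];
    if let Some(session_id) = exit_info.conversation_id {
        lines.push(format!("To continue this session, run codex resume {session_id}"));
    }
    lines
}

fn main() {
    let info = ExitInfo {
        total_tokens: 1234,
        conversation_id: Some("0192-abcd".to_string()),
    };
    for line in format_exit_lines(info) {
        println!("{line}");
    }
}
```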

/// Handle the app exit and print the results. Optionally run the update action.
fn handle_app_exit(exit_info: AppExitInfo) -> anyhow::Result<()> {
    let update_action = exit_info.update_action;
    let color_enabled = supports_color::on(Stream::Stdout).is_some();
    for line in format_exit_messages(exit_info, color_enabled) {
        println!("{line}");
    }
    if let Some(action) = update_action {
        run_update_action(action)?;
    }
    Ok(())
}

/// Run the update action and print the result.
fn run_update_action(action: UpdateAction) -> anyhow::Result<()> {
    println!();
    let (cmd, args) = action.command_args();
    let cmd_str = action.command_str();
    println!("Updating Codex via `{cmd_str}`...");
    let command_path = normalize_for_wsl(cmd);
    let normalized_args: Vec<String> = args.iter().map(normalize_for_wsl).collect();
    let status = std::process::Command::new(&command_path)
        .args(&normalized_args)
        .status()?;
    if !status.success() {
        anyhow::bail!("`{cmd_str}` failed with status {status}");
    }
    println!();
    println!("🎉 Update ran successfully! Please restart Codex.");
    Ok(())
}

#[derive(Debug, Default, Parser, Clone)]
struct FeatureToggles {
    /// Enable a feature (repeatable). Equivalent to `-c features.<name>=true`.
    #[arg(long = "enable", value_name = "FEATURE", action = clap::ArgAction::Append, global = true)]
    enable: Vec<String>,

    /// Disable a feature (repeatable). Equivalent to `-c features.<name>=false`.
    #[arg(long = "disable", value_name = "FEATURE", action = clap::ArgAction::Append, global = true)]
    disable: Vec<String>,
}

impl FeatureToggles {
    fn to_overrides(&self) -> anyhow::Result<Vec<String>> {
        let mut v = Vec::new();
        for feature in &self.enable {
            Self::validate_feature(feature)?;
            v.push(format!("features.{feature}=true"));
        }
        for feature in &self.disable {
            Self::validate_feature(feature)?;
            v.push(format!("features.{feature}=false"));
        }
        Ok(v)
    }

    fn validate_feature(feature: &str) -> anyhow::Result<()> {
        if is_known_feature_key(feature) {
            Ok(())
        } else {
            anyhow::bail!("Unknown feature flag: {feature}")
        }
    }
}
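The `--enable`/`--disable` mapping can be demonstrated with a std-only sketch; the known-feature set below is a stand-in for the real feature registry behind `is_known_feature_key()`, and the feature names are hypothetical:

```rust
use std::collections::HashSet;

// Translate repeatable --enable/--disable flags into `-c`-style config
// overrides, rejecting names not in the known set (a registry stand-in).
fn to_overrides(
    known: &HashSet<&str>,
    enable: &[&str],
    disable: &[&str],
) -> Result<Vec<String>, String> {
    let mut v = Vec::new();
    for (features, value) in [(enable, true), (disable, false)] {
        for f in features.iter() {
            if !known.contains(f) {
                return Err(format!("Unknown feature flag: {f}"));
            }
            v.push(format!("features.{f}={value}"));
        }
    }
    Ok(v)
}

fn main() {
    let known: HashSet<&str> = ["alpha", "beta_ui"].into_iter().collect();
    let overrides = to_overrides(&known, &["alpha"], &["beta_ui"]).unwrap();
    // Each entry reads like a `-c features.<name>=<bool>` argument.
    println!("{overrides:?}");
}
```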

#[derive(Debug, Parser)]
struct FeaturesCli {
    #[command(subcommand)]
    sub: FeaturesSubcommand,
}

#[derive(Debug, Parser)]
enum FeaturesSubcommand {
    /// List known features with their stage and effective state.
    List,
}

fn stage_str(stage: codex_core::features::Stage) -> &'static str {
    use codex_core::features::Stage;
    match stage {
        Stage::Experimental => "experimental",
        Stage::Beta => "beta",
        Stage::Stable => "stable",
        Stage::Deprecated => "deprecated",
        Stage::Removed => "removed",
    }
}

/// As early as possible in the process lifecycle, apply hardening measures. We
/// skip this in debug builds to avoid interfering with debugging.
#[ctor::ctor]
#[cfg(not(debug_assertions))]
fn pre_main_hardening() {
    codex_process_hardening::pre_main_hardening();
}
fix: overhaul how we spawn commands under seccomp/landlock on Linux (#1086)
Historically, we spawned the Seatbelt and Landlock sandboxes in
substantially different ways:
For **Seatbelt**, we would run `/usr/bin/sandbox-exec` with our policy
specified as an arg followed by the original command:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/core/src/exec.rs#L147-L219
For **Landlock/Seccomp**, we would do
`tokio::runtime::Builder::new_current_thread()`, _invoke
Landlock/Seccomp APIs to modify the permissions of that new thread_, and
then spawn the command:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/core/src/exec_linux.rs#L28-L49
While it is neat that Landlock/Seccomp supports applying a policy to
only one thread without having to apply it to the entire process, it
requires us to maintain two different codepaths and is a bit harder to
reason about. The tipping point was
https://github.com/openai/codex/pull/1061, in which we had to start
building up the `env` in an unexpected way for the existing
Landlock/Seccomp approach to continue to work.
This PR overhauls things so that we do similar things for Mac and Linux.
It turned out that we were already building our own "helper binary"
comparable to Mac's `sandbox-exec` as part of the `cli` crate:
https://github.com/openai/codex/blob/d1de7bb383552e8fadd94be79d65d188e00fd562/codex-rs/cli/Cargo.toml#L10-L12
We originally created this to build a small binary to include with the
Node.js version of the Codex CLI to provide support for Linux
sandboxing.
Though the sticky bit is that, at this point, we still want to deploy
the Rust version of Codex as a single, standalone binary rather than a
CLI and a supporting sandboxing binary. To satisfy this goal, we use
"the arg0 trick," in which we:
* use `std::env::current_exe()` to get the path to the CLI that is
currently running
* use the CLI as the `program` for the `Command`
* set `"codex-linux-sandbox"` as arg0 for the `Command`
A CLI that supports sandboxing should check arg0 at the start of the
program. If it is `"codex-linux-sandbox"`, it must invoke
`codex_linux_sandbox::run_main()`, which runs the CLI as if it were
`codex-linux-sandbox`. When acting as `codex-linux-sandbox`, we make the
appropriate Landlock/Seccomp API calls and then use `execvp(3)` to spawn
the original command, so do _replace_ the process rather than spawn a
subprocess. Incidentally, we do this before starting the Tokio runtime,
so the process should only have one thread when `execvp(3)` is called.
Because the `core` crate that needs to spawn the Linux sandboxing is not
a CLI in its own right, this means that every CLI that includes `core`
and relies on this behavior has to (1) implement it and (2) provide the
path to the sandboxing executable. While the path is almost always
`std::env::current_exe()`, we needed to make this configurable for
integration tests, so `Config` now has a `codex_linux_sandbox_exe:
Option<PathBuf>` property to facilitate threading this through,
introduced in https://github.com/openai/codex/pull/1089.
This common pattern is now captured in
`codex_linux_sandbox::run_with_sandbox()` and all of the `main.rs`
functions that should use it have been updated as part of this PR.
The `codex-linux-sandbox` crate added to the Cargo workspace as part of
this PR now has the bulk of the Landlock/Seccomp logic, which makes
`core` a bit simpler. Indeed, `core/src/exec_linux.rs` and
`core/src/landlock.rs` were removed/ported as part of this PR. I also
moved the unit tests for this code into an integration test,
`linux-sandbox/tests/landlock.rs`, in which I use
`env!("CARGO_BIN_EXE_codex-linux-sandbox")` as the value for
`codex_linux_sandbox_exe` since `std::env::current_exe()` is not
appropriate in that case.
2025-05-23 11:37:07 -07:00
|
|
|
|

fn main() -> anyhow::Result<()> {
    arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
        cli_main(codex_linux_sandbox_exe).await?;
        Ok(())
    })
}

async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
    let MultitoolCli {
        config_overrides: mut root_config_overrides,
        feature_toggles,
        mut interactive,
        subcommand,
    } = MultitoolCli::parse();

    // Fold --enable/--disable into config overrides so they flow to all subcommands.
    let toggle_overrides = feature_toggles.to_overrides()?;
    root_config_overrides.raw_overrides.extend(toggle_overrides);

    match subcommand {
        None => {
            prepend_config_flags(
                &mut interactive.config_overrides,
                root_config_overrides.clone(),
            );
            let exit_info = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
            handle_app_exit(exit_info)?;
        }
        Some(Subcommand::Exec(mut exec_cli)) => {
            prepend_config_flags(
                &mut exec_cli.config_overrides,
                root_config_overrides.clone(),
            );
            codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
        }
        Some(Subcommand::McpServer) => {
            codex_mcp_server::run_main(codex_linux_sandbox_exe, root_config_overrides).await?;
        }
        Some(Subcommand::Mcp(mut mcp_cli)) => {
            // Propagate any root-level config overrides (e.g. `-c key=value`).
            prepend_config_flags(&mut mcp_cli.config_overrides, root_config_overrides.clone());
            mcp_cli.run().await?;
        }
        Some(Subcommand::AppServer) => {
            codex_app_server::run_main(codex_linux_sandbox_exe, root_config_overrides).await?;
        }
        Some(Subcommand::Resume(ResumeCommand {
            session_id,
            last,
            config_overrides,
        })) => {
            interactive = finalize_resume_interactive(
                interactive,
                root_config_overrides.clone(),
                session_id,
                last,
                config_overrides,
            );
            let exit_info = codex_tui::run_main(interactive, codex_linux_sandbox_exe).await?;
            handle_app_exit(exit_info)?;
        }
        Some(Subcommand::Login(mut login_cli)) => {
            prepend_config_flags(
                &mut login_cli.config_overrides,
                root_config_overrides.clone(),
            );
            match login_cli.action {
                Some(LoginSubcommand::Status) => {
                    run_login_status(login_cli.config_overrides).await;
                }
                None => {
                    if login_cli.use_device_code {
                        run_login_with_device_code(
                            login_cli.config_overrides,
                            login_cli.issuer_base_url,
                            login_cli.client_id,
                        )
                        .await;
                    } else if login_cli.api_key.is_some() {
                        eprintln!(
                            "The --api-key flag is no longer supported. Pipe the key instead, e.g. `printenv OPENAI_API_KEY | codex login --with-api-key`."
                        );
                        std::process::exit(1);
                    } else if login_cli.with_api_key {
                        let api_key = read_api_key_from_stdin();
                        run_login_with_api_key(login_cli.config_overrides, api_key).await;
                    } else {
                        run_login_with_chatgpt(login_cli.config_overrides).await;
                    }
                }
            }
should be a viable workaround until the UI is ready.
**Testing**
Because the login flow is currently implemented in a standalone Python
file, you can test it without building any Rust code as follows:
```
rm -rf /tmp/codex_home && mkdir /tmp/codex_home
CODEX_HOME=/tmp/codex_home python3 codex-rs/login/src/login_with_chatgpt.py
```
For reference:
* the original TypeScript implementation was introduced in
https://github.com/openai/codex/pull/963
* support for redeeming credits was later added in
https://github.com/openai/codex/pull/974
2025-06-04 08:44:17 -07:00
|
|
|
|
}
|
2025-08-07 01:17:33 -07:00
|
|
|
|
        Some(Subcommand::Logout(mut logout_cli)) => {
            prepend_config_flags(
                &mut logout_cli.config_overrides,
                root_config_overrides.clone(),
            );
            run_logout(logout_cli.config_overrides).await;
        }
        Some(Subcommand::Completion(completion_cli)) => {
            print_completion(completion_cli);
        }
        Some(Subcommand::Cloud(mut cloud_cli)) => {
            prepend_config_flags(
                &mut cloud_cli.config_overrides,
                root_config_overrides.clone(),
            );
            codex_cloud_tasks::run_main(cloud_cli, codex_linux_sandbox_exe).await?;
        }
        Some(Subcommand::Sandbox(sandbox_args)) => match sandbox_args.cmd {
            SandboxCommand::Macos(mut seatbelt_cli) => {
                prepend_config_flags(
                    &mut seatbelt_cli.config_overrides,
                    root_config_overrides.clone(),
                );
                codex_cli::debug_sandbox::run_command_under_seatbelt(
                    seatbelt_cli,
                    codex_linux_sandbox_exe,
                )
                .await?;
            }
            SandboxCommand::Linux(mut landlock_cli) => {
                prepend_config_flags(
                    &mut landlock_cli.config_overrides,
                    root_config_overrides.clone(),
                );
                codex_cli::debug_sandbox::run_command_under_landlock(
                    landlock_cli,
                    codex_linux_sandbox_exe,
                )
                .await?;
            }
            SandboxCommand::Windows(mut windows_cli) => {
                prepend_config_flags(
                    &mut windows_cli.config_overrides,
                    root_config_overrides.clone(),
                );
                codex_cli::debug_sandbox::run_command_under_windows(
                    windows_cli,
                    codex_linux_sandbox_exe,
                )
                .await?;
            }
        },
        Some(Subcommand::Apply(mut apply_cli)) => {
            prepend_config_flags(
                &mut apply_cli.config_overrides,
                root_config_overrides.clone(),
            );
            run_apply_command(apply_cli, None).await?;
        }
        Some(Subcommand::ResponsesApiProxy(args)) => {
            tokio::task::spawn_blocking(move || codex_responses_api_proxy::run_main(args))
                .await??;
        }
        Some(Subcommand::StdioToUds(cmd)) => {
            let socket_path = cmd.socket_path;
            tokio::task::spawn_blocking(move || codex_stdio_to_uds::run(socket_path.as_path()))
                .await??;
        }
        Some(Subcommand::GenerateTs(gen_cli)) => {
            codex_protocol_ts::generate_ts(&gen_cli.out_dir, gen_cli.prettier.as_deref())?;
        }
        Some(Subcommand::Features(FeaturesCli { sub })) => match sub {
            FeaturesSubcommand::List => {
                // Respect root-level `-c` overrides plus top-level flags like `--profile`.
                let mut cli_kv_overrides = root_config_overrides
                    .parse_overrides()
                    .map_err(anyhow::Error::msg)?;

                // Honor `--search` via the new feature toggle.
                if interactive.web_search {
                    cli_kv_overrides.push((
                        "features.web_search_request".to_string(),
                        toml::Value::Boolean(true),
                    ));
                }

                // Thread through relevant top-level flags (at minimum, `--profile`).
                let overrides = ConfigOverrides {
                    config_profile: interactive.config_profile.clone(),
                    ..Default::default()
                };

                let config = Config::load_with_cli_overrides(cli_kv_overrides, overrides).await?;
                for def in codex_core::features::FEATURES.iter() {
                    let name = def.key;
                    let stage = stage_str(def.stage);
                    let enabled = config.features.enabled(def.id);
                    println!("{name}\t{stage}\t{enabled}");
                }
            }
        },
    }

    Ok(())
}
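For reference, the `FeaturesSubcommand::List` arm prints one feature per line as tab-separated name, stage, and enabled columns. A standalone sketch of that output shape, with made-up feature names and stages:

```rust
// Illustrative sketch of the tab-separated listing printed by
// `codex features list`; the feature names and stages below are made up.
fn main() {
    let features = [
        ("web_search_request", "stable", true),
        ("unified_exec", "experimental", false),
    ];
    for (name, stage, enabled) in features {
        println!("{name}\t{stage}\t{enabled}");
    }
}
```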
/// Prepend root-level overrides so they have lower precedence than
/// CLI-specific ones specified after the subcommand (if any).
fn prepend_config_flags(
    subcommand_config_overrides: &mut CliConfigOverrides,
    cli_config_overrides: CliConfigOverrides,
) {
    subcommand_config_overrides
        .raw_overrides
        .splice(0..0, cli_config_overrides.raw_overrides);
}

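A minimal sketch of the precedence rule behind `prepend_config_flags`: root-level `-c` overrides are spliced in front, so under last-wins application the subcommand-scoped entries take precedence. Plain `String`s stand in here for the raw `key=value` entries of `CliConfigOverrides`, and `prepend` is an illustrative name:

```rust
// Root-level overrides go first; subcommand-level ones stay last and win
// when the list is applied left to right.
fn prepend(subcommand: &mut Vec<String>, root: Vec<String>) {
    subcommand.splice(0..0, root);
}

fn main() {
    let mut subcommand = vec!["model=o3".to_string()];
    let root = vec!["model=o4-mini".to_string()];
    prepend(&mut subcommand, root);
    // Root entry first; the subcommand entry stays last and wins.
    assert_eq!(subcommand, ["model=o4-mini", "model=o3"]);
}
```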
/// Build the final `TuiCli` for a `codex resume` invocation.
fn finalize_resume_interactive(
    mut interactive: TuiCli,
    root_config_overrides: CliConfigOverrides,
    session_id: Option<String>,
    last: bool,
    resume_cli: TuiCli,
) -> TuiCli {
    // Start with the parsed interactive CLI so resume shares the same
    // configuration surface area as `codex` without additional flags.
    let resume_session_id = session_id;
    interactive.resume_picker = resume_session_id.is_none() && !last;
    interactive.resume_last = last;
    interactive.resume_session_id = resume_session_id;

    // Merge resume-scoped flags and overrides with highest precedence.
    merge_resume_cli_flags(&mut interactive, resume_cli);

    // Propagate any root-level config overrides (e.g. `-c key=value`).
    prepend_config_flags(&mut interactive.config_overrides, root_config_overrides);

    interactive
}

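The picker decision in `finalize_resume_interactive` reduces to a single predicate: the session picker is shown only when neither an explicit session id nor `--last` was given. A standalone sketch of just that rule (`resume_picker` is an illustrative helper name):

```rust
// The picker appears only when there is nothing telling us which
// session to resume: no explicit id and no `--last` flag.
fn resume_picker(session_id: Option<&str>, last: bool) -> bool {
    session_id.is_none() && !last
}

fn main() {
    assert!(resume_picker(None, false)); // `codex resume`
    assert!(!resume_picker(None, true)); // `codex resume --last`
    assert!(!resume_picker(Some("sid"), false)); // `codex resume sid`
}
```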
/// Merge flags provided to `codex resume` so they take precedence over any
/// root-level flags. Only overrides fields explicitly set on the resume-scoped
/// CLI. Also appends `-c key=value` overrides with highest precedence.
fn merge_resume_cli_flags(interactive: &mut TuiCli, resume_cli: TuiCli) {
    if let Some(model) = resume_cli.model {
        interactive.model = Some(model);
    }
    if resume_cli.oss {
        interactive.oss = true;
    }
    if let Some(profile) = resume_cli.config_profile {
        interactive.config_profile = Some(profile);
    }
    if let Some(sandbox) = resume_cli.sandbox_mode {
        interactive.sandbox_mode = Some(sandbox);
    }
    if let Some(approval) = resume_cli.approval_policy {
        interactive.approval_policy = Some(approval);
    }
    if resume_cli.full_auto {
        interactive.full_auto = true;
    }
    if resume_cli.dangerously_bypass_approvals_and_sandbox {
        interactive.dangerously_bypass_approvals_and_sandbox = true;
    }
    if let Some(cwd) = resume_cli.cwd {
        interactive.cwd = Some(cwd);
    }
    if resume_cli.web_search {
        interactive.web_search = true;
    }
    if !resume_cli.images.is_empty() {
        interactive.images = resume_cli.images;
    }
    if !resume_cli.add_dir.is_empty() {
        interactive.add_dir.extend(resume_cli.add_dir);
    }
    if let Some(prompt) = resume_cli.prompt {
        interactive.prompt = Some(prompt);
    }

    interactive
        .config_overrides
        .raw_overrides
        .extend(resume_cli.config_overrides.raw_overrides);
}

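The final `extend` in `merge_resume_cli_flags` is the counterpart of the splice in `prepend_config_flags`: resume-scoped `-c` entries are appended after the existing ones, so with last-wins application the resume-scoped value prevails. A minimal sketch, again using plain `String`s in place of the raw `key=value` overrides:

```rust
// Resume-scoped `-c` overrides are appended last, so they win under
// last-wins application of the override list.
fn main() {
    let mut raw_overrides = vec!["model=o4-mini".to_string()]; // root level
    let resume_overrides = vec!["model=o3".to_string()]; // `codex resume -c ...`
    raw_overrides.extend(resume_overrides);
    assert_eq!(raw_overrides.last().map(String::as_str), Some("model=o3"));
}
```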
fn print_completion(cmd: CompletionCommand) {
    let mut app = MultitoolCli::command();
    let name = "codex";
    generate(cmd.shell, &mut app, name, &mut std::io::stdout());
}

#[cfg(test)]
mod tests {
    use super::*;
    use assert_matches::assert_matches;
    use codex_core::protocol::TokenUsage;
    use codex_protocol::ConversationId;
    use pretty_assertions::assert_eq;

    fn finalize_from_args(args: &[&str]) -> TuiCli {
        let cli = MultitoolCli::try_parse_from(args).expect("parse");
        let MultitoolCli {
            interactive,
            config_overrides: root_overrides,
            subcommand,
            feature_toggles: _,
        } = cli;

        let Subcommand::Resume(ResumeCommand {
            session_id,
            last,
            config_overrides: resume_cli,
        }) = subcommand.expect("resume present")
        else {
            unreachable!()
        };

        finalize_resume_interactive(interactive, root_overrides, session_id, last, resume_cli)
    }

    fn sample_exit_info(conversation: Option<&str>) -> AppExitInfo {
        let token_usage = TokenUsage {
            output_tokens: 2,
            total_tokens: 2,
            ..Default::default()
        };
        AppExitInfo {
            token_usage,
            conversation_id: conversation
                .map(ConversationId::from_string)
                .map(Result::unwrap),
            update_action: None,
        }
    }

    #[test]
    fn format_exit_messages_skips_zero_usage() {
        let exit_info = AppExitInfo {
            token_usage: TokenUsage::default(),
            conversation_id: None,
            update_action: None,
        };
        let lines = format_exit_messages(exit_info, false);
        assert!(lines.is_empty());
    }

    #[test]
    fn format_exit_messages_includes_resume_hint_without_color() {
        let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"));
        let lines = format_exit_messages(exit_info, false);
        assert_eq!(
            lines,
            vec![
                "Token usage: total=2 input=0 output=2".to_string(),
                "To continue this session, run codex resume 123e4567-e89b-12d3-a456-426614174000"
                    .to_string(),
            ]
        );
    }

    #[test]
    fn format_exit_messages_applies_color_when_enabled() {
        let exit_info = sample_exit_info(Some("123e4567-e89b-12d3-a456-426614174000"));
        let lines = format_exit_messages(exit_info, true);
        assert_eq!(lines.len(), 2);
        assert!(lines[1].contains("\u{1b}[36m"));
    }

    #[test]
    fn resume_model_flag_applies_when_no_root_flags() {
        let interactive = finalize_from_args(["codex", "resume", "-m", "gpt-5-test"].as_ref());

        assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
        assert!(interactive.resume_picker);
        assert!(!interactive.resume_last);
        assert_eq!(interactive.resume_session_id, None);
    }

    #[test]
    fn resume_picker_logic_none_and_not_last() {
        let interactive = finalize_from_args(["codex", "resume"].as_ref());
        assert!(interactive.resume_picker);
        assert!(!interactive.resume_last);
        assert_eq!(interactive.resume_session_id, None);
    }

    #[test]
    fn resume_picker_logic_last() {
        let interactive = finalize_from_args(["codex", "resume", "--last"].as_ref());
        assert!(!interactive.resume_picker);
        assert!(interactive.resume_last);
        assert_eq!(interactive.resume_session_id, None);
    }

    #[test]
    fn resume_picker_logic_with_session_id() {
        let interactive = finalize_from_args(["codex", "resume", "1234"].as_ref());
        assert!(!interactive.resume_picker);
        assert!(!interactive.resume_last);
        assert_eq!(interactive.resume_session_id.as_deref(), Some("1234"));
    }

    #[test]
    fn resume_merges_option_flags_and_full_auto() {
        let interactive = finalize_from_args(
            [
                "codex",
                "resume",
                "sid",
                "--oss",
                "--full-auto",
                "--search",
                "--sandbox",
                "workspace-write",
                "--ask-for-approval",
                "on-request",
                "-m",
                "gpt-5-test",
                "-p",
                "my-profile",
                "-C",
                "/tmp",
                "-i",
                "/tmp/a.png,/tmp/b.png",
            ]
            .as_ref(),
        );

        assert_eq!(interactive.model.as_deref(), Some("gpt-5-test"));
        assert!(interactive.oss);
        assert_eq!(interactive.config_profile.as_deref(), Some("my-profile"));
        assert_matches!(
            interactive.sandbox_mode,
            Some(codex_common::SandboxModeCliArg::WorkspaceWrite)
        );
        assert_matches!(
            interactive.approval_policy,
            Some(codex_common::ApprovalModeCliArg::OnRequest)
        );
        assert!(interactive.full_auto);
        assert_eq!(
            interactive.cwd.as_deref(),
            Some(std::path::Path::new("/tmp"))
        );
        assert!(interactive.web_search);
        let has_a = interactive
            .images
            .iter()
            .any(|p| p == std::path::Path::new("/tmp/a.png"));
        let has_b = interactive
            .images
            .iter()
            .any(|p| p == std::path::Path::new("/tmp/b.png"));
        assert!(has_a && has_b);
        assert!(!interactive.resume_picker);
        assert!(!interactive.resume_last);
        assert_eq!(interactive.resume_session_id.as_deref(), Some("sid"));
    }

    #[test]
    fn resume_merges_dangerously_bypass_flag() {
        let interactive = finalize_from_args(
            [
                "codex",
                "resume",
                "--dangerously-bypass-approvals-and-sandbox",
            ]
            .as_ref(),
        );
        assert!(interactive.dangerously_bypass_approvals_and_sandbox);
        assert!(interactive.resume_picker);
        assert!(!interactive.resume_last);
        assert_eq!(interactive.resume_session_id, None);
    }

    #[test]
    fn feature_toggles_known_features_generate_overrides() {
        let toggles = FeatureToggles {
            enable: vec!["web_search_request".to_string()],
            disable: vec!["unified_exec".to_string()],
        };
        let overrides = toggles.to_overrides().expect("valid features");
        assert_eq!(
            overrides,
            vec![
                "features.web_search_request=true".to_string(),
                "features.unified_exec=false".to_string(),
            ]
        );
    }

    #[test]
    fn feature_toggles_unknown_feature_errors() {
        let toggles = FeatureToggles {
            enable: vec!["does_not_exist".to_string()],
            disable: Vec::new(),
        };
        let err = toggles
            .to_overrides()
            .expect_err("feature should be rejected");
        assert_eq!(err.to_string(), "Unknown feature flag: does_not_exist");
    }
}