llmx/codex-rs/cli/src/main.rs

use clap::CommandFactory;
use clap::Parser;
use clap_complete::Shell;
use clap_complete::generate;
use codex_arg0::arg0_dispatch_or_else;
use codex_chatgpt::apply_command::ApplyCommand;
use codex_chatgpt::apply_command::run_apply_command;
use codex_cli::LandlockCommand;
use codex_cli::SeatbeltCommand;
use codex_cli::login::run_login_status;
use codex_cli::login::run_login_with_api_key;
use codex_cli::login::run_login_with_chatgpt;
use codex_cli::login::run_logout;
use codex_cli::proto;
use codex_common::CliConfigOverrides;
use codex_exec::Cli as ExecCli;
use codex_tui::Cli as TuiCli;
use std::path::PathBuf;
use crate::proto::ProtoCli;
/// Codex CLI
///
/// If no subcommand is specified, options will be forwarded to the interactive CLI.
#[derive(Debug, Parser)]
#[clap(
author,
version,
// If a subcommand is given, ignore requirements of the default args.
subcommand_negates_reqs = true,
// The executable is sometimes invoked via a platform-specific name like
// `codex-x86_64-unknown-linux-musl`, but the help output should always use
// the generic `codex` command name that users run.
bin_name = "codex"
)]
struct MultitoolCli {
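/// Root-level `-c key=value` overrides; these are forwarded to whichever
/// subcommand runs and have lower precedence than overrides given after it.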
#[clap(flatten)]
pub config_overrides: CliConfigOverrides,
#[clap(flatten)]
interactive: TuiCli,
#[clap(subcommand)]
subcommand: Option<Subcommand>,
}
#[derive(Debug, clap::Subcommand)]
enum Subcommand {
/// Run Codex non-interactively.
#[clap(visible_alias = "e")]
Exec(ExecCli),
/// Manage login.
Login(LoginCommand),
/// Remove stored authentication credentials.
Logout(LogoutCommand),
/// Experimental: run Codex as an MCP server.
Mcp,
/// Run the Protocol stream via stdin/stdout.
#[clap(visible_alias = "p")]
Proto(ProtoCli),
/// Generate shell completion scripts.
Completion(CompletionCommand),
/// Internal debugging commands.
Debug(DebugArgs),
/// Apply the latest diff produced by the Codex agent as a `git apply` to your local working tree.
#[clap(visible_alias = "a")]
Apply(ApplyCommand),
/// Internal: generate TypeScript protocol bindings.
#[clap(hide = true)]
GenerateTs(GenerateTsCommand),
}
#[derive(Debug, Parser)]
struct CompletionCommand {
/// Shell to generate completions for
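/// Defaults to `bash`; the generated script is written to stdout.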
#[clap(value_enum, default_value_t = Shell::Bash)]
shell: Shell,
}
#[derive(Debug, Parser)]
struct DebugArgs {
#[command(subcommand)]
cmd: DebugCommand,
}
#[derive(Debug, clap::Subcommand)]
enum DebugCommand {
/// Run a command under Seatbelt (macOS only).
Seatbelt(SeatbeltCommand),
/// Run a command under Landlock+seccomp (Linux only).
Landlock(LandlockCommand),
}
#[derive(Debug, Parser)]
struct LoginCommand {
#[clap(skip)]
config_overrides: CliConfigOverrides,
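/// Log in non-interactively with the given API key instead of running the
/// ChatGPT OAuth flow.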
#[arg(long = "api-key", value_name = "API_KEY")]
api_key: Option<String>,
#[command(subcommand)]
action: Option<LoginSubcommand>,
}
#[derive(Debug, clap::Subcommand)]
enum LoginSubcommand {
/// Show login status.
Status,
}
#[derive(Debug, Parser)]
struct LogoutCommand {
#[clap(skip)]
config_overrides: CliConfigOverrides,
}
#[derive(Debug, Parser)]
struct GenerateTsCommand {
/// Output directory where .ts files will be written
#[arg(short = 'o', long = "out", value_name = "DIR")]
out_dir: PathBuf,
/// Optional path to the Prettier executable to format generated files
#[arg(short = 'p', long = "prettier", value_name = "PRETTIER_BIN")]
prettier: Option<PathBuf>,
}
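/// The "arg0 trick" keeps Codex a single standalone binary: before the CLI
/// proper starts, `arg0_dispatch_or_else` checks whether this process was
/// invoked as `codex-linux-sandbox`. If so, it applies the Landlock/seccomp
/// sandbox and `execvp(3)`s the requested command in place of this process;
/// otherwise it calls the closure with the path to use as the Linux sandbox
/// executable (normally the current executable).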
fn main() -> anyhow::Result<()> {
arg0_dispatch_or_else(|codex_linux_sandbox_exe| async move {
cli_main(codex_linux_sandbox_exe).await?;
Ok(())
})
}
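/// Parse the multitool CLI and dispatch on the subcommand; when none is
/// given, forward the remaining options to the interactive TUI.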
async fn cli_main(codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
let cli = MultitoolCli::parse();
match cli.subcommand {
None => {
let mut tui_cli = cli.interactive;
prepend_config_flags(&mut tui_cli.config_overrides, cli.config_overrides);
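// When the TUI exits, print a final usage summary unless nothing was recorded.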
let usage = codex_tui::run_main(tui_cli, codex_linux_sandbox_exe).await?;
if !usage.is_zero() {
println!("{}", codex_core::protocol::FinalOutput::from(usage));
}
}
Some(Subcommand::Exec(mut exec_cli)) => {
prepend_config_flags(&mut exec_cli.config_overrides, cli.config_overrides);
codex_exec::run_main(exec_cli, codex_linux_sandbox_exe).await?;
}
Some(Subcommand::Mcp) => {
codex_mcp_server::run_main(codex_linux_sandbox_exe, cli.config_overrides).await?;
}
Some(Subcommand::Login(mut login_cli)) => {
prepend_config_flags(&mut login_cli.config_overrides, cli.config_overrides);
match login_cli.action {
Some(LoginSubcommand::Status) => {
run_login_status(login_cli.config_overrides).await;
}
None => {
if let Some(api_key) = login_cli.api_key {
run_login_with_api_key(login_cli.config_overrides, api_key).await;
} else {
run_login_with_chatgpt(login_cli.config_overrides).await;
}
}
}
}
Some(Subcommand::Logout(mut logout_cli)) => {
prepend_config_flags(&mut logout_cli.config_overrides, cli.config_overrides);
run_logout(logout_cli.config_overrides).await;
}
Some(Subcommand::Proto(mut proto_cli)) => {
prepend_config_flags(&mut proto_cli.config_overrides, cli.config_overrides);
proto::run_main(proto_cli).await?;
}
Some(Subcommand::Completion(completion_cli)) => {
print_completion(completion_cli);
}
Some(Subcommand::Debug(debug_args)) => match debug_args.cmd {
DebugCommand::Seatbelt(mut seatbelt_cli) => {
prepend_config_flags(&mut seatbelt_cli.config_overrides, cli.config_overrides);
codex_cli::debug_sandbox::run_command_under_seatbelt(
seatbelt_cli,
codex_linux_sandbox_exe,
)
.await?;
}
DebugCommand::Landlock(mut landlock_cli) => {
prepend_config_flags(&mut landlock_cli.config_overrides, cli.config_overrides);
codex_cli::debug_sandbox::run_command_under_landlock(
landlock_cli,
codex_linux_sandbox_exe,
)
.await?;
}
},
Some(Subcommand::Apply(mut apply_cli)) => {
prepend_config_flags(&mut apply_cli.config_overrides, cli.config_overrides);
run_apply_command(apply_cli, None).await?;
}
Some(Subcommand::GenerateTs(gen_cli)) => {
codex_protocol_ts::generate_ts(&gen_cli.out_dir, gen_cli.prettier.as_deref())?;
}
}
Ok(())
}
/// Prepend root-level overrides so they have lower precedence than
/// CLI-specific ones specified after the subcommand (if any).
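///
/// For example, `codex -c model=o3 exec -c model=o4-mini` leaves
/// `raw_overrides` as `["model=o3", "model=o4-mini"]`; later entries win when
/// the overrides are applied, so the subcommand-level value prevails.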
fn prepend_config_flags(
subcommand_config_overrides: &mut CliConfigOverrides,
cli_config_overrides: CliConfigOverrides,
) {
subcommand_config_overrides
.raw_overrides
.splice(0..0, cli_config_overrides.raw_overrides);
}
fn print_completion(cmd: CompletionCommand) {
let mut app = MultitoolCli::command();
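// Use the generic `codex` name so the script matches `bin_name` above,
// however the binary was actually invoked.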
let name = "codex";
generate(cmd.shell, &mut app, name, &mut std::io::stdout());
}